WorldWideScience

Sample records for facial features representative

  1. Enhancing facial features by using clear facial features

    Science.gov (United States)

    Rofoo, Fanar Fareed Hanna

    2017-09-01

    The similarity of features between individuals of the same ethnicity motivated this project. The idea is to extract features from a clear facial image and impose them on a blurred facial image of the same ethnic origin, as an approach to enhancing the blurred image. A database of clear images was assembled containing 30 individuals, equally divided across five ethnicities: Arab, African, Chinese, European and Indian. Software was built to perform pre-processing on the images in order to align the features of the clear and blurred images. Features of a clear facial image, or of a template built from clear facial images, were extracted using the wavelet transform and imposed on the blurred image using the inverse wavelet transform. The results of this approach were not good, because the features did not all align together: in most cases the eyes were aligned but the nose or mouth was not. In a second approach we therefore dealt with the features separately, but in some cases this produced a blocky effect on the features, owing to the lack of closely matching features. In general, the small database available did not allow the goal results to be achieved because of the limited number of individuals. Colour information and feature similarity could be investigated further to achieve better results, with a larger database providing closer matches within each ethnicity and thereby improving the enhancement process.
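The enhancement idea in this abstract (keep the blurred image's coarse structure, take its high-frequency detail from an aligned clear image) can be sketched with a single-level Haar transform. This is an illustrative NumPy sketch, not the authors' pipeline; their wavelet family and alignment steps are unspecified:

```python
import numpy as np

def haar2d(img):
    """Single-level 2D Haar decomposition into (approx, horiz, vert, diag)."""
    a = (img[0::2, 0::2] + img[1::2, 0::2] + img[0::2, 1::2] + img[1::2, 1::2]) / 4
    h = (img[0::2, 0::2] - img[1::2, 0::2] + img[0::2, 1::2] - img[1::2, 1::2]) / 4
    v = (img[0::2, 0::2] + img[1::2, 0::2] - img[0::2, 1::2] - img[1::2, 1::2]) / 4
    d = (img[0::2, 0::2] - img[1::2, 0::2] - img[0::2, 1::2] + img[1::2, 1::2]) / 4
    return a, h, v, d

def ihaar2d(a, h, v, d):
    """Exact inverse of haar2d."""
    out = np.empty((a.shape[0] * 2, a.shape[1] * 2))
    out[0::2, 0::2] = a + h + v + d
    out[1::2, 0::2] = a - h + v - d
    out[0::2, 1::2] = a + h - v - d
    out[1::2, 1::2] = a - h - v + d
    return out

def impose_clear_details(blurred, clear):
    """Keep the blurred image's low-frequency approximation band but
    replace its detail (high-frequency) bands with those of the aligned
    clear image, then reconstruct with the inverse transform."""
    a_b, _, _, _ = haar2d(blurred)
    _, h_c, v_c, d_c = haar2d(clear)
    return ihaar2d(a_b, h_c, v_c, d_c)
```

In practice the decomposition would be applied over several levels and per feature region, which is where the alignment problems described above arise.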

  2. Learning representative features for facial images based on a modified principal component analysis

    Science.gov (United States)

    Averkin, Anton; Potapov, Alexey

    2013-05-01

    The paper is devoted to facial image analysis and particularly deals with the problem of automatic evaluation of the attractiveness of human faces. We propose a new approach for the automatic construction of a feature space based on a modified principal component analysis. Input data for the algorithm are learning sets of facial images rated by one person. The proposed approach allows one to extract features of an individual's subjective perception of face beauty and to predict attractiveness values for new facial images that were not included in the learning set. The Pearson correlation coefficient between the values predicted by our method for new facial images and the personal attractiveness ratings is 0.89. This means that the proposed approach is promising and can be used for predicting subjective face attractiveness values in real facial image analysis systems.
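As a rough illustration of such a pipeline (PCA via SVD, a least-squares map from PCA features to one rater's scores, and a Pearson correlation on held-in predictions), here is a sketch on synthetic stand-in data; the abstract does not describe the specific PCA modification, so ordinary PCA is used:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 60 "face images" flattened to 100-dim vectors.
# A few dimensions carry the signal that drives the rater's scores.
X = rng.normal(size=(60, 100))
X[:, :3] *= 5.0                                   # high-variance signal dims
ratings = X[:, :3] @ np.array([1.0, -0.5, 0.3]) + 0.1 * rng.normal(size=60)

# Plain PCA by SVD on mean-centred data.
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
k = 10
features = (X - mu) @ Vt[:k].T                    # project onto top-k components

# Least-squares map from PCA features to the rater's scores.
w, *_ = np.linalg.lstsq(features, ratings, rcond=None)
pred = features @ w

# Pearson correlation between predicted and actual ratings.
r = np.corrcoef(pred, ratings)[0, 1]
```

On real data the evaluation would of course use images held out of the learning set, as the paper does.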

  3. Extraction and representation of common feature from uncertain facial expressions with cloud model.

    Science.gov (United States)

    Wang, Shuliang; Chi, Hehua; Yuan, Hanning; Geng, Jing

    2017-12-01

    Human facial expressions are a key means of conveying an individual's innate emotions in communication. However, the variation of facial expressions affects the reliable identification of human emotions. In this paper, we present a cloud model to extract facial features for representing human emotion. First, the uncertainties in facial expression are analyzed in the context of the cloud model. The feature extraction and representation algorithm is established using cloud generators. With the forward cloud generator, facial expression images can be re-generated in any number to visually represent the three extracted features, each of which plays a different role. The effectiveness of the computing model is tested on the Japanese Female Facial Expression database. Three common features are extracted from seven facial expression images. Finally, the paper concludes with remarks.

  4. Feature selection from a facial image for distinction of sasang constitution.

    Science.gov (United States)

    Koo, Imhoi; Kim, Jong Yeol; Kim, Myoung Geun; Kim, Keun Ho

    2009-09-01

    Recently, oriental medicine has received attention for providing personalized medicine through consideration of the unique nature and constitution of individual patients. With the eventual goal of globalization, the current trend in oriental medicine research is standardization through the adoption of western scientific methods, which could represent a scientific revolution. The purpose of this study is to establish methods for finding statistically significant features in a facial image with respect to distinguishing constitution, and to show the meaning of those features. From facial photographs, facial elements are analyzed in terms of distances, angles and distance ratios, for which there are 1225, 61 250 and 749 700 features, respectively. Owing to this very large number of facial features, it is quite difficult to determine which are truly meaningful. We suggest a process for the efficient analysis of facial features, including the removal of outliers and the control of missing data to guarantee data confidence, with statistical significance calculated by applying ANOVA. We show the statistical properties of the selected features according to the different constitutions, using the 9 distance, 10 angle and 10 distance-ratio features that are finally established. Additionally, the Sasang constitutional meaning of the selected features is shown here.

  5. Feature Selection from a Facial Image for Distinction of Sasang Constitution

    Directory of Open Access Journals (Sweden)

    Imhoi Koo

    2009-01-01

    Recently, oriental medicine has received attention for providing personalized medicine through consideration of the unique nature and constitution of individual patients. With the eventual goal of globalization, the current trend in oriental medicine research is standardization through the adoption of western scientific methods, which could represent a scientific revolution. The purpose of this study is to establish methods for finding statistically significant features in a facial image with respect to distinguishing constitution, and to show the meaning of those features. From facial photographs, facial elements are analyzed in terms of distances, angles and distance ratios, for which there are 1225, 61 250 and 749 700 features, respectively. Owing to this very large number of facial features, it is quite difficult to determine which are truly meaningful. We suggest a process for the efficient analysis of facial features, including the removal of outliers and the control of missing data to guarantee data confidence, with statistical significance calculated by applying ANOVA. We show the statistical properties of the selected features according to the different constitutions, using the 9 distance, 10 angle and 10 distance-ratio features that are finally established. Additionally, the Sasang constitutional meaning of the selected features is shown here.

  6. Feature Selection from a Facial Image for Distinction of Sasang Constitution

    Science.gov (United States)

    Koo, Imhoi; Kim, Jong Yeol; Kim, Myoung Geun

    2009-01-01

    Recently, oriental medicine has received attention for providing personalized medicine through consideration of the unique nature and constitution of individual patients. With the eventual goal of globalization, the current trend in oriental medicine research is standardization through the adoption of western scientific methods, which could represent a scientific revolution. The purpose of this study is to establish methods for finding statistically significant features in a facial image with respect to distinguishing constitution, and to show the meaning of those features. From facial photographs, facial elements are analyzed in terms of distances, angles and distance ratios, for which there are 1225, 61 250 and 749 700 features, respectively. Owing to this very large number of facial features, it is quite difficult to determine which are truly meaningful. We suggest a process for the efficient analysis of facial features, including the removal of outliers and the control of missing data to guarantee data confidence, with statistical significance calculated by applying ANOVA. We show the statistical properties of the selected features according to the different constitutions, using the 9 distance, 10 angle and 10 distance-ratio features that are finally established. Additionally, the Sasang constitutional meaning of the selected features is shown here. PMID:19745013

  7. Fusing Facial Features for Face Recognition

    Directory of Open Access Journals (Sweden)

    Jamal Ahmad Dargham

    2012-06-01

    Face recognition is an important biometric method because of its potential applications in many fields, such as access control, surveillance, and human-computer interaction. In this paper, a face recognition system that fuses the outputs of three face recognition systems based on Gabor jets is presented. The first system uses the magnitude, the second the phase, and the third the phase-weighted magnitude of the jets. The jets are generated from facial landmarks selected using three selection methods. It was found that fusing the facial features gives a better recognition rate than any single feature used individually, regardless of the landmark selection method.

  8. Facial expression identification using 3D geometric features from Microsoft Kinect device

    Science.gov (United States)

    Han, Dongxu; Al Jawad, Naseer; Du, Hongbo

    2016-05-01

    Facial expression identification is an important part of face recognition and is closely related to emotion detection from face images. Various solutions have been proposed in the past using different types of cameras and features. The Microsoft Kinect device has been widely used for multimedia interactions, and more recently it has been increasingly deployed to support scientific investigations. This paper explores the effectiveness of using the device to identify emotional facial expressions such as surprise, smile, sadness, etc., and evaluates the usefulness of the 3D data points of a face mesh structure obtained from the Kinect device. We present a distance-based geometric feature component that is derived from the distances between points on the face mesh and selected reference points in a single frame. The feature components extracted across a sequence of frames starting and ending with a neutral expression represent a whole expression. The feature vector eliminates the need for complex face orientation correction, simplifying the feature extraction process and making it more efficient. We applied a kNN classifier that exploits a feature-component-based similarity measure, following the principle of dynamic time warping, to determine the closest neighbours. Preliminary tests on a small-scale database of different facial expressions show the promise of the newly developed features and the usefulness of the Kinect device in facial expression identification.
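A minimal sketch of the two ingredients named here, a per-frame distance feature and a kNN classifier under a dynamic-time-warping similarity, might look as follows; the mesh points, reference points and data are hypothetical stand-ins, not the Kinect mesh:

```python
import numpy as np

def frame_features(points, refs):
    """Distance-based feature component for one frame: Euclidean distances
    from each tracked mesh point to each chosen reference point."""
    return np.array([np.linalg.norm(p - r) for p in points for r in refs])

def dtw(seq_a, seq_b):
    """Dynamic-time-warping cost between two sequences of feature vectors."""
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def knn_label(query_seq, train_seqs, train_labels, k=1):
    """k-nearest-neighbour vote under the DTW distance."""
    dists = [dtw(query_seq, s) for s in train_seqs]
    order = np.argsort(dists)[:k]
    labels = [train_labels[i] for i in order]
    return max(set(labels), key=labels.count)
```

Sequences of different lengths are handled naturally by the warping, which is why DTW suits expression sequences that start and end at neutral.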

  9. Frame-Based Facial Expression Recognition Using Geometrical Features

    Directory of Open Access Journals (Sweden)

    Anwar Saeed

    2014-01-01

    To improve human-computer interaction (HCI) to the level of human-human interaction, an efficient approach for human emotion recognition is required. These emotions could be fused from several modalities such as facial expression, hand gesture, acoustic data, and biophysiological data. In this paper, we address the frame-based perception of the universal human facial expressions (happiness, surprise, anger, disgust, fear, and sadness) with the help of several geometrical features. Unlike many other geometry-based approaches, the frame-based method does not rely on prior knowledge of a person-specific neutral expression, knowledge that is gained through human intervention and is not available in real scenarios. Additionally, we provide a method to investigate the performance of geometry-based approaches under various facial point localization errors. From an evaluation on two public benchmark datasets, we find that using eight facial points we can achieve the state-of-the-art recognition rate. However, the previous state-of-the-art geometry-based approach exploits features derived from 68 facial points and requires prior knowledge of the person-specific neutral expression. The expression recognition rate using geometrical features is adversely affected by errors in facial point localization, especially for expressions with subtle facial deformations.

  10. Facial expression recognition in the wild based on multimodal texture features

    Science.gov (United States)

    Sun, Bo; Li, Liandong; Zhou, Guoyan; He, Jun

    2016-11-01

    Facial expression recognition in the wild is a very challenging task. We describe our work in static and continuous facial expression recognition in the wild. We evaluate the recognition results of gray deep features and color deep features, and explore the fusion of multimodal texture features. For continuous facial expression recognition, we design two temporal-spatial dense scale-invariant feature transform (SIFT) features and combine multimodal features to recognize expressions from image sequences. For static facial expression recognition based on video frames, we extract dense SIFT and several deep convolutional neural network (CNN) features, including our proposed CNN architecture. We train linear support vector machine and partial least squares classifiers for these kinds of features on the static facial expression in the wild (SFEW) and acted facial expression in the wild (AFEW) datasets, and we propose a fusion network to combine all the extracted features at decision level. Our final accuracies are 56.32% on the SFEW testing set and 50.67% on the AFEW validation set, which are much better than the baseline recognition rates of 35.96% and 36.08%.
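The paper's fusion network combines classifier outputs at decision level. A much simpler stand-in for decision-level fusion, shown purely for illustration, is a weighted average of per-classifier class-score vectors (the actual fusion network is learned, not a fixed average):

```python
import numpy as np

def fuse_decisions(score_list, weights=None):
    """Decision-level fusion: combine per-model class-score vectors by a
    (weighted) average and pick the arg-max class."""
    scores = np.stack(score_list)            # shape: (n_models, n_classes)
    if weights is None:
        weights = np.ones(len(score_list))
    weights = np.asarray(weights, dtype=float)
    fused = (weights[:, None] * scores).sum(axis=0) / weights.sum()
    return fused, int(np.argmax(fused))
```

With weights fitted on a validation set, this reduces to the common "late fusion" baseline against which learned fusion networks are compared.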

  11. Dynamic facial expression recognition based on geometric and texture features

    Science.gov (United States)

    Li, Ming; Wang, Zengfu

    2018-04-01

    Recently, dynamic facial expression recognition in videos has attracted growing attention. In this paper, we propose a novel dynamic facial expression recognition method by using geometric and texture features. In our system, the facial landmark movements and texture variations upon pairwise images are used to perform the dynamic facial expression recognition tasks. For one facial expression sequence, pairwise images are created between the first frame and each of its subsequent frames. Integration of both geometric and texture features further enhances the representation of the facial expressions. Finally, Support Vector Machine is used for facial expression recognition. Experiments conducted on the extended Cohn-Kanade database show that our proposed method can achieve a competitive performance with other methods.

  12. Facial feature tracking: a psychophysiological measure to assess exercise intensity?

    Science.gov (United States)

    Miles, Kathleen H; Clark, Bradley; Périard, Julien D; Goecke, Roland; Thompson, Kevin G

    2018-04-01

    The primary aim of this study was to determine whether facial feature tracking reliably measures changes in facial movement across varying exercise intensities. Fifteen cyclists completed three incremental-intensity cycling trials to exhaustion while their faces were recorded with video cameras. Facial feature tracking was found to be a moderately reliable measure of facial movement during incremental-intensity cycling (intra-class correlation coefficient = 0.65-0.68). Facial movement (whole face (WF), upper face (UF), lower face (LF) and head movement (HM)) increased with exercise intensity from lactate threshold one (LT1) until attainment of maximal aerobic power (MAP) (WF 3464 ± 3364 mm). UF movement was greater than LF movement across exercise intensities (UF minus LF at LT1, 1048 ± 383 mm; LT2, 1208 ± 611 mm; MAP, 1401 ± 712 mm). These findings suggest that facial feature tracking may provide a psychophysiological measure of exercise intensity.

  13. Interpretation of appearance: the effect of facial features on first impressions and personality

    DEFF Research Database (Denmark)

    Wolffhechel, Karin Marie Brandt; Fagertun, Jens; Jacobsen, Ulrik Plesner

    2014-01-01

    Appearance is known to influence social interactions, which in turn could potentially influence personality development. In this study we focus on discovering the relationship between self-reported personality traits, first impressions and facial characteristics. The results reveal that several personality traits can be read above chance from a face, and that facial features influence first impressions. Despite the former, our prediction model fails to reliably infer personality traits from either facial features or first impressions. First impressions, however, could be inferred more reliably from facial features. We have generated artificial, extreme faces visualising the characteristics having an effect on first impressions for several traits. Conclusively, we find a relationship between first impressions, some personality traits and facial features, and consolidate that people on average assess…

  14. iFER: facial expression recognition using automatically selected geometric eye and eyebrow features

    Science.gov (United States)

    Oztel, Ismail; Yolcu, Gozde; Oz, Cemil; Kazan, Serap; Bunyak, Filiz

    2018-03-01

    Facial expressions have an important role in interpersonal communications and in the estimation of emotional states or intentions. Automatic recognition of facial expressions has led to many practical applications and has become one of the important topics in computer vision. We present a facial expression recognition system that relies on geometry-based features extracted from the eye and eyebrow regions of the face. The proposed system detects keypoints on frontal face images and forms a feature set using geometric relationships among groups of detected keypoints. The obtained feature set is refined and reduced using the sequential forward selection (SFS) algorithm and fed to a support vector machine classifier to recognize five facial expression classes. The proposed system, iFER (eye-eyebrow only facial expression recognition), is robust to lower-face occlusions that may be caused by beards, mustaches, scarves, etc., and to lower-face motion during speech production. Preliminary experiments on benchmark datasets produced promising results, outperforming previous facial expression recognition studies using partial face features and giving comparable results to studies using whole-face information, only slightly lower (by ~2.5%) than the best whole-face system while using only ~1/3 of the facial region.
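Sequential forward selection as used here can be sketched generically: grow the selected subset one feature at a time, always adding the feature whose inclusion maximises a wrapper score. The scoring function and data below are hypothetical placeholders, not the iFER feature set:

```python
import numpy as np

def sfs(X, y, evaluate, n_select):
    """Sequential forward selection: greedily grow the feature subset,
    at each step adding the feature whose inclusion maximises the score
    returned by evaluate(X_subset, y)."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < n_select:
        best_f, best_score = None, -np.inf
        for f in remaining:
            score = evaluate(X[:, selected + [f]], y)
            if score > best_score:
                best_f, best_score = f, score
        selected.append(best_f)
        remaining.remove(best_f)
    return selected

# Hypothetical demo: the target depends only on features 2 and 5.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))
y = X[:, 2] + 2.0 * X[:, 5]

def neg_sse(Xs, y):
    """Score = negative sum of squared residuals of a least-squares fit
    (a simple stand-in for the paper's SVM-based wrapper criterion)."""
    w, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    return -np.sum((Xs @ w - y) ** 2)

selected = sfs(X, y, neg_sse, n_select=2)
```

In the paper the wrapper criterion is classification accuracy of the SVM, and the search refines the geometric eye/eyebrow feature set.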

  15. Towards the automation of forensic facial individualisation: Comparing forensic to non forensic eyebrow features

    NARCIS (Netherlands)

    Zeinstra, Christopher Gerard; Veldhuis, Raymond N.J.; Spreeuwers, Lieuwe Jan

    2014-01-01

    The Facial Identification Scientific Working Group (FISWG) publishes recommendations regarding one-to-one facial comparisons. At this moment a draft version of a facial image comparison feature list for morphological analysis has been published. This feature list is based on casework experience by…

  16. Orientations for the successful categorization of facial expressions and their link with facial features.

    Science.gov (United States)

    Duncan, Justin; Gosselin, Frédéric; Cobarro, Charlène; Dugas, Gabrielle; Blais, Caroline; Fiset, Daniel

    2017-12-01

    Horizontal information was recently suggested to be crucial for face identification. In the present paper, we expand on this finding and investigate the role of orientations for all the basic facial expressions and neutrality. To this end, we developed orientation bubbles to quantify utilization of the orientation spectrum by the visual system in a facial expression categorization task. We first validated the procedure in Experiment 1 with a simple plaid-detection task. In Experiment 2, we used orientation bubbles to reveal the diagnostic (i.e., task-relevant) orientations for the basic facial expressions and neutrality. Overall, we found that horizontal information was highly diagnostic for expressions (surprise excepted). We also found that utilization of horizontal information strongly predicted performance level in this task. Despite the recent surge of research on horizontals, the link with local features remains unexplored. We were thus also interested in investigating this link. In Experiment 3, location bubbles were used to reveal the diagnostic features for the basic facial expressions. Crucially, Experiments 2 and 3 were run in parallel on the same participants, in an interleaved fashion. This way, we were able to correlate individual orientation and local diagnostic profiles. Our results indicate that individual differences in horizontal tuning are best predicted by utilization of the eyes.

  17. Seven Non-melanoma Features to Rule Out Facial Melanoma

    Directory of Open Access Journals (Sweden)

    Philipp Tschandl

    2017-08-01

    Facial melanoma is difficult to diagnose and dermatoscopic features are often subtle. Dermatoscopic non-melanoma patterns may have a comparable diagnostic value. In this pilot study, facial lesions were collected retrospectively, resulting in a case set of 339 melanomas and 308 non-melanomas. Lesions were evaluated for the prevalence (> 50% of lesional surface) of 7 dermatoscopic non-melanoma features: scales, white follicles, erythema/reticular vessels, reticular and/or curved lines/fingerprints, structureless brown colour, sharp demarcation, and classic criteria of seborrhoeic keratosis. Melanomas had a lower number of non-melanoma patterns (p < 0.001). Scoring a lesion as suspicious when no prevalent non-melanoma pattern is found resulted in a sensitivity of 88.5% and a specificity of 66.9% for the diagnosis of melanoma. Specificity was higher for solar lentigo (78.8%) and seborrhoeic keratosis (74.3%) and lower for actinic keratosis (61.4%) and lichenoid keratosis (25.6%). Evaluation of prevalent non-melanoma patterns can provide slightly lower sensitivity and higher specificity in detecting facial melanoma compared with already known malignant features.
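The decision rule and its evaluation can be expressed compactly; this sketch only illustrates how a sensitivity/specificity pair of the kind reported is computed, the data being placeholders:

```python
def rule_suspicious(prevalent_nonmelanoma_patterns):
    """The rule described above: a lesion is scored suspicious for melanoma
    when no non-melanoma pattern covers more than 50% of its surface."""
    return len(prevalent_nonmelanoma_patterns) == 0

def sensitivity_specificity(y_true, y_pred):
    """y_true: 1 = melanoma, 0 = non-melanoma; y_pred: 1 = flagged suspicious."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)
```

Applied to the study's case set, this computation yields the reported 88.5% sensitivity and 66.9% specificity.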

  18. Facial Feature Extraction Using Frequency Map Series in PCNN

    Directory of Open Access Journals (Sweden)

    Rencan Nie

    2016-01-01

    Pulse coupled neural networks (PCNN) have been widely used in image processing. The 3D binary map series (BMS) generated by a PCNN effectively describes image feature information such as edges and regional distribution, so the BMS can be treated as the basis for extracting a 1D oscillation time series (OTS) for an image. However, traditional methods using BMS considered neither the correlation of the binary sequences in BMS nor the spatial structure of each map. By further processing the BMS, a novel facial feature extraction method is proposed. Firstly, considering the correlation among the maps in BMS, a method is put forward to transform BMS into a frequency map series (FMS); this lessens the influence of non-continuous feature regions in the binary images on OTS-BMS. Then, by computing the 2D entropy of every map in FMS, the 3D FMS is transformed into a 1D OTS (OTS-FMS), which has good geometric invariance for the facial image and contains the spatial structure information of the image. Finally, in analyzing the OTS-FMS, the standard Euclidean distance is used to measure distances between OTS-FMS. Experimental results verify the effectiveness of OTS-FMS in facial recognition, and it shows better recognition performance than other feature extraction methods.

  19. Accurate landmarking of three-dimensional facial data in the presence of facial expressions and occlusions using a three-dimensional statistical facial feature model.

    Science.gov (United States)

    Zhao, Xi; Dellandréa, Emmanuel; Chen, Liming; Kakadiaris, Ioannis A

    2011-10-01

    Three-dimensional face landmarking aims at automatically localizing facial landmarks and has a wide range of applications (e.g., face recognition, face tracking, and facial expression analysis). Existing methods assume neutral facial expressions and unoccluded faces. In this paper, we propose a general learning-based framework for reliable landmark localization on 3-D facial data under challenging conditions (i.e., facial expressions and occlusions). Our approach relies on a statistical model, called 3-D statistical facial feature model, which learns both the global variations in configurational relationships between landmarks and the local variations of texture and geometry around each landmark. Based on this model, we further propose an occlusion classifier and a fitting algorithm. Results from experiments on three publicly available 3-D face databases (FRGC, BU-3-DFE, and Bosphorus) demonstrate the effectiveness of our approach, in terms of landmarking accuracy and robustness, in the presence of expressions and occlusions.

  20. Assessing the accuracy of perceptions of intelligence based on heritable facial features

    OpenAIRE

    Lee, Anthony J.; Hibbs, Courtney; Wright, Margaret J.; Martin, Nicholas G.; Keller, Matthew C.; Zietsch, Brendan P.

    2017-01-01

    Perceptions of intelligence based on facial features can have a profound impact on many social situations, but findings have been mixed as to whether these judgements are accurate. Even if such perceptions were accurate, the underlying mechanism is unclear. Several possibilities have been proposed, including evolutionary explanations where certain morphological facial features are associated with fitness-related traits (including cognitive development), or that intelligence judgements are ove...

  1. Nine-year-old children use norm-based coding to visually represent facial expression.

    Science.gov (United States)

    Burton, Nichola; Jeffery, Linda; Skinner, Andrew L; Benton, Christopher P; Rhodes, Gillian

    2013-10-01

    Children are less skilled than adults at making judgments about facial expression. This could be because they have not yet developed adult-like mechanisms for visually representing faces. Adults are thought to represent faces in a multidimensional face-space, and have been shown to code the expression of a face relative to the norm, or average, face in face-space. Norm-based coding is economical and adaptive, and may be what makes adults more sensitive to facial expression than children. This study investigated the coding system that children use to represent facial expression. An adaptation aftereffect paradigm was used to test 24 adults and 18 children (9 years 2 months to 9 years 11 months old). Participants adapted to weak and strong antiexpressions. They then judged the expression of an average face. Adaptation created aftereffects that made the test face look like the expression opposite that of the adaptor. Consistent with the predictions of norm-based but not exemplar-based coding, aftereffects were larger for strong than weak adaptors for both age groups. Results indicate that, like adults, children's coding of facial expressions is norm-based. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  2. An Improved AAM Method for Extracting Human Facial Features

    Directory of Open Access Journals (Sweden)

    Tao Zhou

    2012-01-01

    The active appearance model (AAM) is a statistical parametric model widely used for facial feature extraction and recognition. However, the intensity values used in the original AAM cannot provide enough information about image texture, which leads to large errors or fitting failures. To overcome these defects and improve the fitting performance of the AAM, an improved texture representation is proposed in this paper. Firstly, a translation-invariant wavelet transform is performed on the face images; the image structure is then represented using a measure obtained by fusing the low-frequency coefficients with edge intensity. Experimental results show that the improved algorithm increases the accuracy of AAM fitting and expresses more information about edge and texture structure.

  3. Contributions of feature shapes and surface cues to the recognition of facial expressions.

    Science.gov (United States)

    Sormaz, Mladen; Young, Andrew W; Andrews, Timothy J

    2016-10-01

    Theoretical accounts of face processing often emphasise feature shapes as the primary visual cue to the recognition of facial expressions. However, changes in facial expression also affect the surface properties of the face. In this study, we investigated whether this surface information can also be used in the recognition of facial expression. First, participants identified facial expressions (fear, anger, disgust, sadness, happiness) from images that were manipulated such that they varied mainly in shape or mainly in surface properties. We found that the categorization of facial expression is possible in either type of image, but that different expressions are relatively dependent on surface or shape properties. Next, we investigated the relative contributions of shape and surface information to the categorization of facial expressions. This employed a complementary method that involved combining the surface properties of one expression with the shape properties from a different expression. Our results showed that the categorization of facial expressions in these hybrid images was equally dependent on the surface and shape properties of the image. Together, these findings provide a direct demonstration that both feature shape and surface information make significant contributions to the recognition of facial expressions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Joint Facial Action Unit Detection and Feature Fusion: A Multi-conditional Learning Approach.

    Science.gov (United States)

    Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja

    2016-10-05

    Automated analysis of facial expressions can benefit many domains, from marketing to clinical diagnosis of neurodevelopmental disorders. Facial expressions are typically encoded as a combination of facial muscle activations, i.e., action units. Depending on context, these action units co-occur in specific patterns, and rarely in isolation. Yet, most existing methods for automatic action unit detection fail to exploit dependencies among them, and the corresponding facial features. To address this, we propose a novel multi-conditional latent variable model for simultaneous fusion of facial features and joint action unit detection. Specifically, the proposed model performs feature fusion in a generative fashion via a low-dimensional shared subspace, while simultaneously performing action unit detection using a discriminative classification approach. We show that by combining the merits of both approaches, the proposed methodology outperforms existing purely discriminative/generative methods for the target task. To reduce the number of parameters, and avoid overfitting, a novel Bayesian learning approach based on Monte Carlo sampling is proposed, to integrate out the shared subspace. We validate the proposed method on posed and spontaneous data from three publicly available datasets (CK+, DISFA and Shoulder-pain), and show that both feature fusion and joint learning of action units leads to improved performance compared to the state-of-the-art methods for the task.

  5. Extracted facial feature of racial closely related faces

    Science.gov (United States)

    Liewchavalit, Chalothorn; Akiba, Masakazu; Kanno, Tsuneo; Nagao, Tomoharu

    2010-02-01

    Human faces contain a great deal of demographic information such as identity, gender, age, race and emotion. Human beings can perceive these pieces of information and use them as important clues in social interaction with other people. Race perception is considered one of the most delicate and sensitive aspects of face perception. There is much research concerning image-based race recognition, but most of it focuses on major race groups such as Caucasoid, Negroid and Mongoloid. This paper focuses on how people classify the races of racially closely related groups. As a sample of a racially closely related group, we chose Japanese and Thai faces to represent the difference between Northern and Southern Mongoloid. Three psychological experiments were performed to study the strategies of face perception in race classification. The results of the psychological experiments suggest that race perception is an ability that can be learned. The eyes and eyebrows attract the most attention, and the eyes are a significant factor in race perception. Principal component analysis (PCA) was performed to extract the facial features of the sample race groups. The extracted race features of texture and shape were used to synthesize faces. The results suggest that racial features rely on detailed texture rather than on shape. This research is fundamental to the study of race perception, which is essential for the establishment of a human-like race recognition system.

  6. Face detection and facial feature localization using notch based templates

    International Nuclear Information System (INIS)

    Qayyum, U.

    2007-01-01

    We present real-time detection of faces from video with facial feature localization, together with an algorithm capable of differentiating between face and non-face patterns. The need for face detection and facial feature localization arises in various computer vision applications, so a great deal of research has been dedicated to real-time solutions. The algorithm should remain simple enough to run in real time without compromising on the challenges encountered during the detection and localization phases, i.e., it should be invariant to scale, translation, and (+-45 degree) rotation transformations. The proposed system has two parts: visual guidance and face/non-face classification. The visual guidance phase fuses motion and color cues to classify skin color, and morphological operations with a union-structure component-labeling algorithm extract contiguous regions. Scale normalization by nearest-neighbor interpolation removes the effect of differing scales. Using the aspect ratio of width to height, a region of interest (ROI) is obtained and passed to the face/non-face classifier, where notch (Gaussian) templates/filters find circular darker regions. The classified face region is handed over to the facial feature localization phase, which uses a YCbCr eyes/lips mask. Empirical results show an accuracy of 90% on five different videos with 1000 face/non-face patterns, at a processing rate of 15 frames/sec. (author)
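    The paper's visual-guidance phase fuses motion and colour cues for skin classification. As a simple illustration of colour-based skin classification in YCbCr space, a per-pixel threshold test can be sketched; the Cb/Cr ranges below are commonly cited illustrative values, not the paper's.

```python
# Minimal sketch of YCbCr skin-color thresholding (illustrative Cb/Cr
# ranges, not the paper's actual values).

def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 full-range RGB -> YCbCr conversion."""
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin(r, g, b, cb_range=(77, 127), cr_range=(133, 173)):
    """Classify a pixel as skin if Cb and Cr fall in the given ranges."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return cb_range[0] <= cb <= cb_range[1] and cr_range[0] <= cr <= cr_range[1]

# A warm skin-like tone passes; a saturated green pixel does not.
print(is_skin(200, 140, 110))  # True
print(is_skin(0, 255, 0))      # False
```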

  7. An Algorithm Based on the Self-Organized Maps for the Classification of Facial Features

    Directory of Open Access Journals (Sweden)

    Gheorghe Gîlcă

    2015-12-01

    Full Text Available This paper deals with an algorithm based on Self-Organized Map (SOM) networks which classifies facial features. The proposed algorithm categorizes the facial features defined by the input variables (eyebrows, mouth, eyelids) into a map of their groupings. The grouping is based on calculating the distance between each input vector and each neuron of the output layer, the neuron with the minimum distance being declared the winner. The network structure consists of two levels: the first level contains three input vectors, each with forty-one values, while the second level contains the SOM competitive network of 100 neurons. The proposed system can classify facial features quickly and easily using the proposed SOM-based algorithm.
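    The winner-neuron rule above can be sketched as follows; the toy 2-D inputs and four-neuron map are illustrative stand-ins for the paper's 41-value feature vectors and 100-neuron SOM.

```python
import random

# Toy sketch of SOM-style competitive learning: each input is assigned to
# the output neuron at minimum Euclidean distance (the "winner"), whose
# weights then move toward the input. Dimensions are toy values, not the
# 41-value feature vectors described in the paper.

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def train_som(inputs, n_neurons=4, epochs=50, lr=0.3, seed=0):
    rng = random.Random(seed)
    weights = [[rng.random() for _ in inputs[0]] for _ in range(n_neurons)]
    for _ in range(epochs):
        for x in inputs:
            winner = min(range(n_neurons), key=lambda i: dist2(weights[i], x))
            weights[winner] = [w + lr * (xi - w)
                               for w, xi in zip(weights[winner], x)]
    return weights

def classify(weights, x):
    return min(range(len(weights)), key=lambda i: dist2(weights[i], x))

# Two well-separated clusters end up mapped to different winner neurons.
data = [[0.1, 0.1], [0.15, 0.05], [0.9, 0.9], [0.85, 0.95]]
w = train_som(data)
print(classify(w, [0.1, 0.1]) != classify(w, [0.9, 0.9]))  # True
```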

  8. Effects of Bariatric Surgery on Facial Features

    Directory of Open Access Journals (Sweden)

    Vardan Papoian

    2015-09-01

    Full Text Available Background: The number of bariatric surgeries performed in the USA has increased twelve-fold in the past two decades, yet the effects of rapid weight loss on facial features have not been previously studied. We hypothesized that bariatric surgery would mimic the effects of aging, giving the patient an older and less attractive appearance. Methods: Consecutive patients were enrolled from the bariatric surgical clinic at our institution. Pre- and post-weight-loss photographs were taken and used to generate two surveys, distributed through social media, assessing the difference between the preoperative and postoperative facial photos in terms of patients' perceived age and overall attractiveness. 102 respondents completed the first survey and 95 respondents completed the second. Results: Of the 14 patients, five showed a statistically significant change in perceived age (three more likely to be perceived as older and two less likely). The patients were assessed as more attractive postoperatively, a statistically significant difference. Conclusions: Weight loss does affect facial aesthetics. Mild weight loss is perceived by survey respondents to give the appearance of a younger but less attractive patient, while substantial weight loss is perceived to give the appearance of an older but more attractive patient.

  9. Binary pattern flavored feature extractors for Facial Expression Recognition: An overview

    DEFF Research Database (Denmark)

    Kristensen, Rasmus Lyngby; Tan, Zheng-Hua; Ma, Zhanyu

    2015-01-01

    This paper conducts a survey of modern binary pattern flavored feature extractors applied to the Facial Expression Recognition (FER) problem. In total, 26 different feature extractors are included, of which six are selected for in-depth description. In addition, the paper unifies important FER...

  10. Local binary pattern variants-based adaptive texture features analysis for posed and nonposed facial expression recognition

    Science.gov (United States)

    Sultana, Maryam; Bhatti, Naeem; Javed, Sajid; Jung, Soon Ki

    2017-09-01

    Facial expression recognition (FER) is an important task for various computer vision applications. The task becomes challenging when it requires the detection and encoding of macro- and micropatterns of facial expressions. We present a two-stage texture feature extraction framework based on the local binary pattern (LBP) variants and evaluate its significance in recognizing posed and nonposed facial expressions. We focus on the parametric limitations of the LBP variants and investigate their effects for optimal FER. The size of the local neighborhood is an important parameter of the LBP technique for its extraction in images. To make the LBP adaptive, we exploit the granulometric information of the facial images to find the local neighborhood size for the extraction of center-symmetric LBP (CS-LBP) features. Our two-stage texture representations consist of an LBP variant and the adaptive CS-LBP features. Among the presented two-stage texture feature extractions, the binarized statistical image features and adaptive CS-LBP features were found showing high FER rates. Evaluation of the adaptive texture features shows competitive and higher performance than the nonadaptive features and other state-of-the-art approaches, respectively.
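    The CS-LBP code for a single pixel can be sketched directly: the eight neighbours are compared in four centre-symmetric pairs, yielding a 4-bit code (16 bins) rather than classic LBP's 8-bit code. The threshold and patch below are illustrative choices, and the adaptive neighbourhood-size selection described above is not reproduced.

```python
# Sketch of the center-symmetric LBP (CS-LBP) code for one pixel: the 8
# neighbours of a 3x3 patch are compared in 4 opposite pairs, giving a
# 4-bit code per pixel. Threshold t is an illustrative small constant.

def cs_lbp(patch, t=0.01):
    """patch: 3x3 list of grey values; returns the 4-bit CS-LBP code."""
    # 8 neighbours in circular order around the centre pixel
    n = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
         patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for i in range(4):  # compare each neighbour with its opposite
        if n[i] - n[i + 4] > t:
            code |= 1 << i
    return code

# A patch brighter on its top edge than its bottom edge sets the bits of
# the two diagonal pairs and the vertical pair.
patch = [[9, 9, 9],
         [1, 5, 1],
         [1, 1, 1]]
print(cs_lbp(patch))  # → 7 (bits 0, 1 and 2 set)
```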

  11. Representing affective facial expressions for robots and embodied conversational agents by facial landmarks

    NARCIS (Netherlands)

    Liu, C.; Ham, J.R.C.; Postma, E.O.; Midden, C.J.H.; Joosten, B.; Goudbeek, M.

    2013-01-01

    Affective robots and embodied conversational agents require convincing facial expressions to make them socially acceptable. To be able to virtually generate facial expressions, we need to investigate the relationship between technology and human perception of affective and social signals. Facial

  12. The extraction and use of facial features in low bit-rate visual communication.

    Science.gov (United States)

    Pearson, D

    1992-01-29

    A review is given of experimental investigations by the author and his collaborators into methods of extracting binary features from images of the face and hands. The aim of the research has been to enable deaf people to communicate by sign language over the telephone network. Other applications include model-based image coding and facial-recognition systems. The paper deals with the theoretical postulates underlying the successful experimental extraction of facial features. The basic philosophy has been to treat the face as an illuminated three-dimensional object and to identify features from characteristics of their Gaussian maps. It can be shown that in general a composite image operator linked to a directional-illumination estimator is required to accomplish this, although the latter can often be omitted in practice.

  13. Feature Fusion Algorithm for Multimodal Emotion Recognition from Speech and Facial Expression Signal

    Directory of Open Access Journals (Sweden)

    Han Zhiyan

    2016-01-01

    Full Text Available In order to overcome the limitations of single-mode emotion recognition, this paper describes a novel multimodal emotion recognition algorithm that takes speech and facial expression signals as its research subjects. First, the speech and facial expression features are fused and sample sets are obtained by sampling with replacement; classifiers are then trained with a BP neural network (BPNN). Second, the difference between pairs of classifiers is measured by a double-error-difference selection strategy. Finally, the recognition result is obtained by the majority voting rule. Experiments show the method improves the accuracy of emotion recognition by exploiting the advantages of both decision-level and feature-level fusion, bringing the whole fusion process closer to human emotion recognition, with a recognition rate of 90.4%.
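    The final decision-level step above can be sketched as a plain majority vote over the classifiers' outputs; the labels below are made-up stand-ins for real BPNN predictions.

```python
from collections import Counter

# Minimal sketch of majority-rule decision fusion: several classifiers each
# emit an emotion label and the most common label wins. The votes here are
# illustrative, not outputs of the paper's BPNN ensemble.

def majority_vote(labels):
    """Return the most common label among the classifier decisions."""
    return Counter(labels).most_common(1)[0][0]

votes = ["happy", "angry", "happy"]  # three classifiers' decisions
print(majority_vote(votes))  # happy
```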

  14. Is the emotion recognition deficit associated with frontotemporal dementia caused by selective inattention to diagnostic facial features?

    Science.gov (United States)

    Oliver, Lindsay D; Virani, Karim; Finger, Elizabeth C; Mitchell, Derek G V

    2014-07-01

    Frontotemporal dementia (FTD) is a debilitating neurodegenerative disorder characterized by severely impaired social and emotional behaviour, including emotion recognition deficits. Though fear recognition impairments seen in particular neurological and developmental disorders can be ameliorated by reallocating attention to critical facial features, the possibility that similar benefits can be conferred to patients with FTD has yet to be explored. In the current study, we examined the impact of presenting distinct regions of the face (whole face, eyes-only, and eyes-removed) on the ability to recognize expressions of anger, fear, disgust, and happiness in 24 patients with FTD and 24 healthy controls. A recognition deficit was demonstrated across emotions by patients with FTD relative to controls. Crucially, removal of diagnostic facial features resulted in an appropriate decline in performance for both groups; furthermore, patients with FTD demonstrated a lack of disproportionate improvement in emotion recognition accuracy as a result of isolating critical facial features relative to controls. Thus, unlike some neurological and developmental disorders featuring amygdala dysfunction, the emotion recognition deficit observed in FTD is not likely driven by selective inattention to critical facial features. Patients with FTD also mislabelled negative facial expressions as happy more often than controls, providing further evidence for abnormalities in the representation of positive affect in FTD. This work suggests that the emotional expression recognition deficit associated with FTD is unlikely to be rectified by adjusting selective attention to diagnostic features, as has proven useful in other select disorders. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. Recovering Faces from Memory: The Distracting Influence of External Facial Features

    Science.gov (United States)

    Frowd, Charlie D.; Skelton, Faye; Atherton, Chris; Pitchford, Melanie; Hepton, Gemma; Holden, Laura; McIntyre, Alex H.; Hancock, Peter J. B.

    2012-01-01

    Recognition memory for unfamiliar faces is facilitated when contextual cues (e.g., head pose, background environment, hair and clothing) are consistent between study and test. By contrast, inconsistencies in external features, especially hair, promote errors in unfamiliar face-matching tasks. For the construction of facial composites, as carried…

  16. Alagille syndrome in a Vietnamese cohort: mutation analysis and assessment of facial features.

    Science.gov (United States)

    Lin, Henry C; Le Hoang, Phuc; Hutchinson, Anne; Chao, Grace; Gerfen, Jennifer; Loomes, Kathleen M; Krantz, Ian; Kamath, Binita M; Spinner, Nancy B

    2012-05-01

    Alagille syndrome (ALGS, OMIM #118450) is an autosomal dominant disorder that affects multiple organ systems including the liver, heart, eyes, vertebrae, and face. ALGS is caused by mutations in one of two genes in the Notch Signaling Pathway, Jagged1 (JAG1) or NOTCH2. In this study, analysis of 21 Vietnamese ALGS individuals led to the identification of 19 different mutations (18 JAG1 and 1 NOTCH2), 17 of which are novel, including the third reported NOTCH2 mutation in Alagille Syndrome. The spectrum of JAG1 mutations in the Vietnamese patients is similar to that previously reported, including nine frameshift, three missense, two splice site, one nonsense, two whole gene, and one partial gene deletion. The missense mutations are all likely to be disease causing, as two are loss of cysteines (C22R and C78G) and the third creates a cryptic splice site in exon 9 (G386R). No correlation between genotype and phenotype was observed. Assessment of clinical phenotype revealed that skeletal manifestations occur with a higher frequency than in previously reported Alagille cohorts. Facial features were difficult to assess and a Vietnamese pediatric gastroenterologist was only able to identify the facial phenotype in 61% of the cohort. To assess the agreement among North American dysmorphologists at detecting the presence of ALGS facial features in the Vietnamese patients, 37 clinical dysmorphologists evaluated a photographic panel of 20 Vietnamese children with and without ALGS. The dysmorphologists were unable to identify the individuals with ALGS in the majority of cases, suggesting that evaluation of facial features should not be used in the diagnosis of ALGS in this population. This is the first report of mutations and phenotypic spectrum of ALGS in a Vietnamese population. Copyright © 2012 Wiley Periodicals, Inc.

  17. Facial and Ocular Features of Marfan Syndrome

    Directory of Open Access Journals (Sweden)

    Juan C. Leoni

    2014-10-01

    Full Text Available Marfan syndrome is the most common inherited disorder of connective tissue affecting multiple organ systems. Identification of the facial, ocular and skeletal features should prompt referral for aortic imaging since sudden death by aortic dissection and rupture remains a major cause of death in patients with unrecognized Marfan syndrome. Echocardiography is recommended as the initial imaging test, and once a dilated aortic root is identified magnetic resonance or computed tomography should be done to assess the entire aorta. Prophylactic aortic root replacement is safe and has been demonstrated to improve life expectancy in patients with Marfan syndrome. Medical therapy for Marfan syndrome includes the use of beta blockers in older children and adults with an enlarged aorta. Addition of angiotensin receptor antagonists has been shown to slow the progression of aortic root dilation compared to beta blockers alone. Lifelong and regular follow up in a center for specialized care is important for patients with Marfan syndrome. We present a case of a patient with clinical features of Marfan syndrome and discuss possible therapeutic interventions for her dilated aorta.

  18. Hair length, facial attractiveness, personality attribution: A multiple fitness model of hairdressing

    OpenAIRE

    Bereczkei, Tamas; Mesko, Norbert

    2007-01-01

    Multiple Fitness Model states that attractiveness varies across multiple dimensions, with each feature representing a different aspect of mate value. In the present study, male raters judged the attractiveness of young females with neotenous and mature facial features, with various hair lengths. Results revealed that the physical appearance of long-haired women was rated high, regardless of their facial attractiveness being valued high or low. Women rated as most attractive were those whose f...

  19. Facial Expression Recognition of Various Internal States via Manifold Learning

    Institute of Scientific and Technical Information of China (English)

    Young-Suk Shin

    2009-01-01

    Emotions are becoming increasingly important in human-centered interaction architectures, and recognition of facial expressions, which are central to human-computer interaction, seems natural and desirable. However, facial expressions convey mixed emotions that are continuous rather than discrete and vary from moment to moment. This paper presents a novel method of recognizing facial expressions of various internal states via manifold learning, in the service of human-centered interaction studies. A critical review of widely used emotion models is given; facial expression features of various internal states are then extracted via locally linear embedding (LLE). Recognition of facial expressions is performed along the pleasure-displeasure and arousal-sleep dimensions of a two-dimensional model of emotion. The recognition results for various internal-state expressions mapped to the embedding space via the LLE algorithm effectively represent the structural nature of the two-dimensional model of emotion. Our research thus establishes that the relationships among facial expressions of various internal states can be elaborated in the two-dimensional model of emotion via the locally linear embedding algorithm.

  20. Millennial Filipino Student Engagement Analyzer Using Facial Feature Classification

    Science.gov (United States)

    Manseras, R.; Eugenio, F.; Palaoag, T.

    2018-03-01

    Millennials are on everyone's lips and are a target market for many companies nowadays. In the Philippines they comprise one third of the total population, and most are still in school. A good education system is important to prepare this generation for better careers, and a good education system means quality instruction as one of its input indicators. In a classroom environment, teachers use facial cues to gauge the affect state of the class. Emerging technologies such as affective computing are among today's trends for improving the delivery of instruction; together with computer vision, they can be used to analyze students' affect states. This paper proposes a system for classifying student engagement using facial features. Identifying affect state, specifically Millennial Filipino student engagement, is a main priority of every educator, and this led the authors to develop a tool to assess engagement percentage. A multiple-face-detection framework using Face API was employed to detect as many student faces as possible and gauge the current engagement percentage of the whole class. A binary classifier based on a Support Vector Machine (SVM) was set in the conceptual framework of this study. To find the best-performing model, the SVM was compared with two of the most widely used binary classifiers. Results show that the SVM bested the Random Forest and naive Bayes algorithms in most of the experiments across the different test datasets.

  1. Facial expression recognition under partial occlusion based on fusion of global and local features

    Science.gov (United States)

    Wang, Xiaohua; Xia, Chen; Hu, Min; Ren, Fuji

    2018-04-01

    Facial expression recognition under partial occlusion is a challenging research problem. This paper proposes a novel framework for facial expression recognition under occlusion by fusing global and local features. For the global features, information entropy is first employed to locate the occluded region; Principal Component Analysis (PCA) is then adopted to reconstruct that region, using a replacement strategy that substitutes the occluded region with the corresponding region of the best-matched image in the training set, after which the Pyramid Weber Local Descriptor (PWLD) feature is extracted. The outputs of an SVM are then fitted to the probabilities of the target classes using a sigmoid function. For the local features, an overlapping block-based method extracts WLD features, with each block weighted adaptively by information entropy; chi-square distance and similar-block summation methods then yield the probability of each emotion. Finally, decision-level fusion of the global and local features is performed using the Dempster-Shafer theory of evidence. Experimental results on the Cohn-Kanade and JAFFE databases demonstrate the effectiveness and fault tolerance of this method.
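    The decision-level fusion step can be sketched with Dempster's rule of combination restricted to singleton hypotheses (one mass value per emotion); the mass values below are illustrative, not outputs of the described global and local classifiers.

```python
# Sketch of Dempster's rule of combination for decision-level fusion,
# restricted to singleton hypotheses. The masses below are illustrative.

def dempster_combine(m1, m2):
    """Combine two mass functions over the same singleton hypotheses."""
    hyps = m1.keys()
    unnorm = {h: m1[h] * m2[h] for h in hyps}
    k = 1.0 - sum(unnorm.values())        # total conflicting mass
    if k >= 1.0:
        raise ValueError("total conflict: masses cannot be combined")
    return {h: v / (1.0 - k) for h, v in unnorm.items()}

global_m = {"happy": 0.6, "sad": 0.3, "angry": 0.1}   # global-feature branch
local_m  = {"happy": 0.5, "sad": 0.2, "angry": 0.3}   # local-feature branch
fused = dempster_combine(global_m, local_m)
print(max(fused, key=fused.get))  # happy
```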

  2. Impaired recognition of facial emotions from low-spatial frequencies in Asperger syndrome.

    Science.gov (United States)

    Kätsyri, Jari; Saalasti, Satu; Tiippana, Kaisa; von Wendt, Lennart; Sams, Mikko

    2008-01-01

    The theory of 'weak central coherence' [Happe, F., & Frith, U. (2006). The weak coherence account: Detail-focused cognitive style in autism spectrum disorders. Journal of Autism and Developmental Disorders, 36(1), 5-25] implies that persons with autism spectrum disorders (ASDs) have a perceptual bias for local but not for global stimulus features. The recognition of emotional facial expressions representing various different levels of detail has not been studied previously in ASDs. We analyzed the recognition of four basic emotional facial expressions (anger, disgust, fear and happiness) from low-spatial frequencies (overall global shapes without local features) in adults with an ASD. A group of 20 participants with Asperger syndrome (AS) was compared to a group of non-autistic age- and sex-matched controls. Emotion recognition was tested from static and dynamic facial expressions whose spatial frequency contents had been manipulated by low-pass filtering at two levels. The two groups recognized emotions similarly from non-filtered faces and from dynamic vs. static facial expressions. In contrast, the participants with AS were less accurate than controls in recognizing facial emotions from very low-spatial frequencies. The results suggest intact recognition of basic facial emotions and dynamic facial information, but impaired visual processing of global features in ASDs.
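    The idea of removing local features while preserving global shape can be illustrated crudely with a repeated box blur standing in for the study's controlled low-pass filtering at fixed cutoff frequencies.

```python
# Crude illustration of low-pass filtering: a repeated 3x3 mean filter
# attenuates fine (high-spatial-frequency) detail while keeping the overall
# global structure. (The study used proper low-pass filtering at two
# controlled cutoff levels.)

def mean_filter(img):
    """One pass of a 3x3 box blur over a 2D list (border left unchanged)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(img[y + dy][x + dx]
                            for dy in (-1, 0, 1)
                            for dx in (-1, 0, 1)) / 9.0
    return out

# A single bright pixel (fine detail) is flattened by repeated filtering.
img = [[0.0] * 5 for _ in range(5)]
img[2][2] = 9.0
low = mean_filter(mean_filter(img))
print(low[2][2] < img[2][2])  # True: the high-frequency spike is attenuated
```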

  3. A Robust Shape Reconstruction Method for Facial Feature Point Detection

    Directory of Open Access Journals (Sweden)

    Shuqiu Tan

    2017-01-01

    Full Text Available Facial feature point detection has seen great research advances in recent years, and numerous methods have been developed and applied in practical face analysis systems. However, it remains a challenging task because of the large variability in expressions and gestures and the presence of occlusions in real-world photographs. In this paper, we present a robust sparse reconstruction method for face alignment problems. Instead of a direct regression between the feature space and the shape space, the concept of shape increment reconstruction is introduced. Moreover, a set of coupled overcomplete dictionaries, termed the shape increment dictionary and the local appearance dictionary, are learned in a regressive manner to select robust features and fit shape increments. Additionally, to make the learned model more general, we select the best-matched parameter set through extensive validation tests. Experimental results on three public datasets demonstrate that the proposed method achieves better robustness than state-of-the-art methods.

  4. A Classification Method of Normal and Overweight Females Based on Facial Features for Automated Medical Applications

    Directory of Open Access Journals (Sweden)

    Bum Ju Lee

    2012-01-01

    Full Text Available Obesity and overweight have become serious public health problems worldwide, and obesity and abdominal obesity are associated with type 2 diabetes, cardiovascular disease, and metabolic syndrome. In this paper, we suggest a method of predicting normal and overweight status in females, according to body mass index (BMI), based on facial features. A total of 688 subjects participated in this study. We obtained an area under the ROC curve (AUC) of 0.861 and a kappa of 0.521 in the Female 21–40 group (females aged 21–40 years), and an AUC of 0.76 and a kappa of 0.401 in the Female 41–60 group (females aged 41–60 years). In both groups, we found many features showing statistically significant differences between normal and overweight subjects using an independent two-sample t-test. We demonstrated that it is possible to predict BMI status using facial characteristics. Our results provide useful information for studies of obesity and facial characteristics, and may provide useful clues in the development of applications for alternative diagnosis of obesity in remote healthcare.
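    An AUC value such as the reported 0.861 can be read as the probability that a randomly chosen overweight subject receives a higher classifier score than a randomly chosen normal-BMI subject. The rank-based sketch below computes this on made-up scores.

```python
# Rank-based AUC sketch: the fraction of positive/negative score pairs
# ranked correctly (ties count half). The scores below are illustrative.

def auc(pos_scores, neg_scores):
    """AUC as the probability a positive outscores a negative."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

overweight = [0.9, 0.8, 0.6]  # classifier scores for overweight subjects
normal     = [0.7, 0.4, 0.2]  # scores for normal-BMI subjects
print(round(auc(overweight, normal), 3))  # 0.889
```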

  5. Facial Expression Recognition

    NARCIS (Netherlands)

    Pantic, Maja; Li, S.; Jain, A.

    2009-01-01

    Facial expression recognition is a process performed by humans or computers, which consists of: 1. Locating faces in the scene (e.g., in an image; this step is also referred to as face detection), 2. Extracting facial features from the detected face region (e.g., detecting the shape of facial

  6. The look of fear and anger: facial maturity modulates recognition of fearful and angry expressions.

    Science.gov (United States)

    Sacco, Donald F; Hugenberg, Kurt

    2009-02-01

    The current series of studies provide converging evidence that facial expressions of fear and anger may have co-evolved to mimic mature and babyish faces in order to enhance their communicative signal. In Studies 1 and 2, fearful and angry facial expressions were manipulated to have enhanced babyish features (larger eyes) or enhanced mature features (smaller eyes) and in the context of a speeded categorization task in Study 1 and a visual noise paradigm in Study 2, results indicated that larger eyes facilitated the recognition of fearful facial expressions, while smaller eyes facilitated the recognition of angry facial expressions. Study 3 manipulated facial roundness, a stable structure that does not vary systematically with expressions, and found that congruency between maturity and expression (narrow face-anger; round face-fear) facilitated expression recognition accuracy. Results are discussed as representing a broad co-evolutionary relationship between facial maturity and fearful and angry facial expressions. (c) 2009 APA, all rights reserved

  7. A newly recognized syndrome of severe growth deficiency, microcephaly, intellectual disability, and characteristic facial features.

    Science.gov (United States)

    Vinkler, Chana; Leshinsky-Silver, Esther; Michelson, Marina; Haas, Dorothea; Lerman-Sagie, Tally; Lev, Dorit

    2014-01-01

    Genetic syndromes with proportionate severe short stature are rare. We describe two sisters born to nonconsanguineous parents with severe linear growth retardation, poor weight gain, microcephaly, characteristic facial features, cutaneous syndactyly of the toes, high myopia, and severe intellectual disability. During infancy and early childhood, the girls had transient hepatosplenomegaly and low blood cholesterol levels that normalized later. A thorough evaluation including metabolic studies, radiological, and genetic investigations were all normal. Cholesterol metabolism and transport were studied and no definitive abnormality was found. No clinical deterioration was observed and no metabolic crises were reported. After due consideration of other known hereditary causes of post-natal severe linear growth retardation, microcephaly, and intellectual disability, we propose that this condition represents a newly recognized autosomal recessive multiple congenital anomaly-intellectual disability syndrome. Copyright © 2014 Elsevier Masson SAS. All rights reserved.

  8. CREATING OF BARCODES FOR FACIAL IMAGES BASED ON INTENSITY GRADIENTS

    Directory of Open Access Journals (Sweden)

    G. A. Kukharev

    2014-05-01

    Full Text Available The paper analyses existing approaches to barcode generation and describes the structure of a system for generating barcodes from facial images. A method for generating standard linear barcodes from facial images is proposed. The method is based on differences of intensity gradients, which represent the images as initial features. These features are then averaged over a limited number of intervals, the results are quantized into decimal digits from 0 to 9, and a table conversion into the standard barcode is performed. Testing on the Face94 database and a database of composite faces of different ages showed that the proposed method ensures the stability of the generated barcodes under changes of scale, pose and mirroring of the facial images, as well as changes of facial expression and shadows on faces from local lighting. The proposed solutions are computationally inexpensive and do not require specialized image-processing software, enabling the generation of facial barcodes in real-time systems.
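    The gradient-averaging and quantization pipeline described above can be sketched on a toy 1-D intensity scanline; the interval count and data are illustrative assumptions, not the paper's parameters.

```python
# Sketch of the described pipeline: intensity-gradient features are
# averaged over a fixed number of intervals and quantized to the decimal
# digits 0-9, which would then index a standard linear barcode. The 1-D
# "scanline" and interval count are toy assumptions.

def gradient_features(scanline):
    """Absolute intensity differences between neighbouring pixels."""
    return [abs(b - a) for a, b in zip(scanline, scanline[1:])]

def to_barcode_digits(features, n_intervals=6):
    """Average features into intervals, then quantize each mean to 0..9."""
    step = len(features) / n_intervals
    means = []
    for i in range(n_intervals):
        chunk = features[int(i * step):int((i + 1) * step)]
        means.append(sum(chunk) / len(chunk))
    top = max(means) or 1.0
    return "".join(str(min(9, int(9 * m / top))) for m in means)

scanline = [10, 12, 40, 41, 90, 95, 94, 60, 20, 18, 18, 17]
digits = to_barcode_digits(gradient_features(scanline))
print(digits)  # a 6-digit string encoding the gradient profile
```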

  9. The Association of Quantitative Facial Color Features with Cold Pattern in Traditional East Asian Medicine

    Directory of Open Access Journals (Sweden)

    Sujeong Mun

    2017-01-01

    Full Text Available Introduction. Facial diagnosis is a major component of the diagnostic method in traditional East Asian medicine. We investigated the association of quantitative facial color features with cold pattern using a fully automated facial color parameterization system. Methods. The facial color parameters of 64 participants were obtained from digital photographs using an automatic color correction and color parameter calculation system. Cold pattern severity was evaluated using a questionnaire. Results. The a* values of the whole face, lower cheek, and chin were negatively associated with cold pattern score (CPS) (whole face: B=-1.048, P=0.021; lower cheek: B=-0.494, P=0.007; chin: B=-0.640, P=0.031), while the b* value of the lower cheek was positively associated with CPS (B=0.234, P=0.019). The a* values of the whole face were significantly correlated with specific cold pattern symptoms including cold abdomen (partial ρ=-0.354, P<0.01) and cold sensation in the body (partial ρ=-0.255, P<0.05). Conclusions. a* values of the whole face were negatively associated with CPS, indicating that individuals with increased levels of cold pattern had paler faces. These findings suggest that objective facial diagnosis has utility for pattern identification.
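    The a* (green-red) and b* (blue-yellow) parameters analysed above come from the CIELAB colour space. A single-pixel sRGB-to-Lab conversion under a D65 white point can be sketched as below; a higher a* corresponds to a redder (less pale) colour. The example pixels are illustrative, not the study's data.

```python
# Sketch of the CIELAB conversion behind the a*/b* facial color parameters:
# sRGB -> linear RGB -> XYZ (D65) -> L*, a*, b*. Example pixels are made up.

def srgb_to_lab(r, g, b):
    """Convert 8-bit sRGB to CIELAB (L*, a*, b*), D65 white point."""
    def linearize(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    rl, gl, bl = linearize(r), linearize(g), linearize(b)
    # linear RGB -> XYZ (sRGB matrix)
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl

    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116

    # normalize by the D65 reference white
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

_, a_red, _  = srgb_to_lab(200, 80, 80)    # strongly reddish pixel
_, a_pale, _ = srgb_to_lab(230, 210, 200)  # pale, near-grey pixel
print(a_red > a_pale)  # True: redder colours have higher a*
```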

  10. Mirror on the wall: a study of women's perception of facial features as they age.

    Science.gov (United States)

    Sezgin, Billur; Findikcioglu, Kemal; Kaya, Basar; Sibar, Serhat; Yavuzer, Reha

    2012-05-01

    Facial aesthetic treatments are among the most popular cosmetic procedures worldwide, but the factors that motivate women to change their facial appearance are not fully understood. The authors examine the relationships among the facial areas on which women focus most as they age, women's general self-perception, and the effect of their personal focus on "beauty points" on their perception of other women's faces. In this prospective study, 200 women who presented to a cosmetic surgery outpatient clinic for consultation between December 2009 and February 2010 completed a questionnaire. The 200 participants were grouped by age: 20-29 years, 30-39, 40-49, and 50 or older (50 women in each group). They were asked which part of their face they focus on most when looking in the mirror, which part they notice most in other women (of different age groups), what they like/dislike most about their own face, and whether they wished to change any facial feature. A positive correlation was found between women's focal points and the areas they dislike or desire to change. Younger women focused mainly on their nose and skin, while older women focused on their periorbital area and jawline. Women focus on their personal focal points when looking at other women in their 20s and 30s, but not when looking at older women. Women presenting for cosmetic surgery consultation focus on the areas that they dislike most, which leads to a desire to change those features. The plastic surgeon must fully understand patients' expectations to select appropriate candidates and maximize satisfaction with the outcomes.

  11. [Facial palsy].

    Science.gov (United States)

    Cavoy, R

    2013-09-01

    Facial palsy is a daily challenge for clinicians. Determining whether facial nerve palsy is peripheral or central is a key step in the diagnosis; central lesions produce a facial palsy that can usually be readily differentiated from a peripheral one. The next question is whether the peripheral facial palsy is idiopathic or symptomatic, and a good knowledge of the anatomy of the facial nerve is helpful here. A structured approach is given to identify additional features that distinguish symptomatic facial palsy from idiopathic palsy. The main cause of peripheral facial palsy is the idiopathic form, or Bell's palsy, which remains a diagnosis of exclusion. The most common cause of symptomatic peripheral facial palsy is Ramsay Hunt syndrome. Early identification of symptomatic facial palsy is important because of its often worse outcome and different management. The prognosis of Bell's palsy is on the whole favorable and is improved with a prompt tapering course of prednisone. In Ramsay Hunt syndrome, antiviral therapy is added to prednisone. We also discuss current treatment recommendations and review the short- and long-term complications of peripheral facial palsy.

  12. Morphological integration of soft-tissue facial morphology in Down Syndrome and siblings.

    Science.gov (United States)

    Starbuck, John; Reeves, Roger H; Richtsmeier, Joan

    2011-12-01

    Down syndrome (DS), resulting from trisomy of chromosome 21, is the most common live-born human aneuploidy. The phenotypic expression of trisomy 21 produces variable, though characteristic, facial morphology. Although certain facial features have been documented quantitatively and qualitatively as characteristic of DS (e.g., epicanthic folds, macroglossia, and hypertelorism), all of these traits occur in other craniofacial conditions with an underlying genetic cause. We hypothesize that the typical DS face is integrated differently than the face of non-DS siblings, and that the pattern of morphological integration unique to individuals with DS will yield information about underlying developmental associations between facial regions. We statistically compared morphological integration patterns of immature DS faces (N = 53) with those of non-DS siblings (N = 54), aged 6-12 years, using 31 distances estimated from 3D coordinate data representing 17 anthropometric landmarks recorded on 3D digital photographic images. Facial features are affected differentially in DS, as evidenced by statistically significant differences in integration both within and between facial regions. Our results suggest a differential effect of trisomy on facial prominences during craniofacial development. © 2011 Wiley Periodicals, Inc.
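The raw material of such an integration analysis is a set of inter-landmark distances computed from 3D coordinates. A minimal sketch with synthetic landmarks (the coordinates and pairs are placeholders; the study used 31 distances among 17 landmarks):

```python
import numpy as np

rng = np.random.default_rng(5)
# Synthetic 3D coordinates for 17 anthropometric landmarks (stand-ins
# for landmarks digitized from 3D photographic images), in mm.
landmarks = rng.random((17, 3)) * 100.0

def interlandmark_distances(coords, pairs):
    """Euclidean distances for selected landmark pairs: the raw
    variables entering a morphological-integration comparison."""
    a = coords[[i for i, _ in pairs]]
    b = coords[[j for _, j in pairs]]
    return np.linalg.norm(a - b, axis=1)

# A few illustrative pairs; the study used 31 such distances.
pairs = [(0, 1), (1, 2), (0, 16), (5, 9)]
d = interlandmark_distances(landmarks, pairs)
print(d.shape)  # (4,)
```

Integration patterns are then compared via the correlation structure of these distance variables across the DS and sibling samples.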

  13. Sensorineural Deafness, Distinctive Facial Features and Abnormal Cranial Bones

    Science.gov (United States)

    Gad, Alona; Laurino, Mercy; Maravilla, Kenneth R.; Matsushita, Mark; Raskind, Wendy H.

    2008-01-01

    The Waardenburg syndromes (WS) account for approximately 2% of congenital sensorineural deafness. This heterogeneous group of diseases currently can be categorized into four major subtypes (WS types 1-4) on the basis of characteristic clinical features. Multiple genes have been implicated in WS, and mutations in some genes can cause more than one WS subtype. In addition to eye, hair and skin pigmentary abnormalities, dystopia canthorum and broad nasal bridge are seen in WS type 1. Mutations in the PAX3 gene are responsible for the condition in the majority of these patients. In addition, mutations in PAX3 have been found in WS type 3, which is distinguished by musculoskeletal abnormalities, and in a family with a rare subtype of WS, craniofacial-deafness-hand syndrome (CDHS), characterized by dysmorphic facial features, hand abnormalities, and absent or hypoplastic nasal and wrist bones. Here we describe a woman who shares some, but not all, features of WS type 3 and CDHS, and who also has abnormal cranial bones. All sinuses were hypoplastic, and the cochleae were small. No sequence alteration in PAX3 was found. These observations broaden the clinical range of WS and suggest there may be genetic heterogeneity even within the CDHS subtype. PMID:18553554

  14. A Diagnosis to Consider in an Adult Patient with Facial Features and Intellectual Disability: Williams Syndrome.

    Science.gov (United States)

    Doğan, Özlem Akgün; Şimşek Kiper, Pelin Özlem; Utine, Gülen Eda; Alikaşifoğlu, Mehmet; Boduroğlu, Koray

    2017-03-01

    Williams syndrome (OMIM #194050) is a rare, well-recognized, multisystemic genetic condition affecting approximately 1/7,500 individuals. There are no marked regional differences in the incidence of Williams syndrome. The syndrome is caused by a hemizygous deletion of approximately 28 genes, including ELN on chromosome 7q11.2. Prenatal-onset growth retardation, distinct facial appearance, cardiovascular abnormalities, and unique hypersocial behavior are among the most common clinical features. Here, we report the case of a patient referred to us with distinct facial features and intellectual disability, who was diagnosed with Williams syndrome at the age of 37 years. Our aim is to increase awareness regarding the diagnostic features and complications of this recognizable syndrome among adult health care providers. Williams syndrome is usually diagnosed during infancy or childhood, but in the absence of classical findings, such as cardiovascular anomalies, hypercalcemia, and cognitive impairment, the diagnosis could be delayed. Due to the multisystemic and progressive nature of the syndrome, accurate diagnosis is critical for appropriate care and screening for the associated morbidities that may affect the patient's health and well-being.

  15. Persistent facial pain conditions

    DEFF Research Database (Denmark)

    Forssell, Heli; Alstergren, Per; Bakke, Merete

    2016-01-01

    Persistent facial pains, especially temporomandibular disorders (TMD), are common conditions. As dentists are responsible for the treatment of most of these disorders, up-to-date knowledge on the latest advances in the field is essential for successful diagnosis and management. The review covers...... TMD, and different neuropathic or putative neuropathic facial pains such as persistent idiopathic facial pain and atypical odontalgia, trigeminal neuralgia and painful posttraumatic trigeminal neuropathy. The article presents an overview of TMD pain as a biopsychosocial condition, its prevalence......, clinical features, consequences, central and peripheral mechanisms, diagnostic criteria (DC/TMD), and principles of management. For each of the neuropathic facial pain entities, the definitions, prevalence, clinical features, and diagnostics are described. The current understanding of the pathophysiology...

  16. An adaptation study of internal and external features in facial representations.

    Science.gov (United States)

    Hills, Charlotte; Romano, Kali; Davies-Thompson, Jodie; Barton, Jason J S

    2014-07-01

    Prior work suggests that internal features contribute more than external features to face processing. Whether this asymmetry is also true of the mental representations of faces is not known. We used face adaptation to determine whether the internal and external features of faces contribute differently to the representation of facial identity, whether this was affected by familiarity, and whether the results differed if the features were presented in isolation or as part of a whole face. In a first experiment, subjects performed a study of identity adaptation for famous and novel faces, in which the adapting stimuli were whole faces, the internal features alone, or the external features alone. In a second experiment, the same faces were used, but the adapting internal and external features were superimposed on whole faces that were ambiguous in identity. The first experiment showed larger aftereffects for unfamiliar faces, and greater aftereffects from internal than from external features; the latter was true for both familiar and unfamiliar faces. When internal and external features were presented in a whole-face context in the second experiment, aftereffects from either internal or external features were smaller than those from the whole face, and did not differ from each other. While we reproduce the greater importance of internal features when presented in isolation, we find this is equally true for familiar and unfamiliar faces. The dominant influence of internal features is reduced when integrated into a whole-face context, suggesting another facet of expert face processing. Copyright © 2014 Elsevier B.V. All rights reserved.

  17. Facial Nerve Schwannoma: A Case Report, Radiological Features and Literature Review.

    Science.gov (United States)

    Pilloni, Giulia; Mico, Barbara Massa; Altieri, Roberto; Zenga, Francesco; Ducati, Alessandro; Garbossa, Diego; Tartara, Fulvio

    2017-12-22

    Facial nerve schwannoma localized in the middle fossa is a rare lesion. We report a case of a facial nerve schwannoma in a 30-year-old male presenting with facial nerve palsy. Magnetic resonance imaging (MRI) showed a 3 cm diameter tumor of the right middle fossa. The tumor was removed using a sub-temporal approach. Intraoperative monitoring allowed for identification of the facial nerve, so it was not damaged during the surgical excision. Neurological clinical examination at discharge demonstrated moderate facial nerve improvement (Grade III House-Brackmann).

  18. The review and results of different methods for facial recognition

    Science.gov (United States)

    Le, Yifan

    2017-09-01

    In recent years, facial recognition has drawn much attention due to its wide potential applications. As a unique technology in biometric identification, facial recognition represents a significant advance since it can operate without the cooperation of the people under detection. Hence, facial recognition is being taken up in defense systems, medical detection, human behavior understanding, etc. Several theories and methods have been established to make progress in facial recognition: (1) a novel two-stage facial landmark localization method with more accurate localization on a specific database; (2) a statistical face frontalization method that outperforms state-of-the-art methods for face landmark localization; (3) a general facial landmark detection algorithm that handles images with severe occlusion and large head poses; (4) three methods for face alignment, including a shape-augmented regression method, a pose-indexed multi-view method, and a learning-based method that regresses local binary features. The aim of this paper is to analyze previous work on different aspects of facial recognition, focusing on concrete methods and performance on various databases. In addition, some improvement measures and suggestions for potential applications are put forward.

  19. The face is not an empty canvas: how facial expressions interact with facial appearance.

    Science.gov (United States)

    Hess, Ursula; Adams, Reginald B; Kleck, Robert E

    2009-12-12

    Faces are not simply blank canvases upon which facial expressions write their emotional messages. In fact, facial appearance and facial movement are both important social signalling systems in their own right. We here provide multiple lines of evidence for the notion that the social signals derived from facial appearance on the one hand and facial movement on the other interact in a complex manner, sometimes reinforcing and sometimes contradicting one another. Faces provide information on who a person is. Sex, age, ethnicity, personality and other characteristics that can define a person and the social group the person belongs to can all be derived from the face alone. The present article argues that faces interact with the perception of emotion expressions because this information informs a decoder's expectations regarding an expresser's probable emotional reactions. Facial appearance also interacts more directly with the interpretation of facial movement because some of the features that are used to derive personality or sex information are also features that closely resemble certain emotional expressions, thereby enhancing or diluting the perceived strength of particular expressions.

  20. Ring 2 chromosome associated with failure to thrive, microcephaly and dysmorphic facial features.

    Science.gov (United States)

    López-Uriarte, Arelí; Quintero-Rivera, Fabiola; de la Fuente Cortez, Beatriz; Puente, Viviana Gómez; Campos, María Del Roble Velazco; de Villarreal, Laura E Martínez

    2013-10-15

    We report here a child with a ring chromosome 2 [r(2)] associated with failure to thrive, microcephaly and dysmorphic features. The chromosomal aberration was defined by chromosome microarray analysis, revealing two small deletions of 2p25.3 (139 kb) and 2q37.3 (147 kb). We show the clinical phenotype of the patient, using a conventional approach and the molecular cytogenetics of a male with a history of prenatal intrauterine growth restriction (IUGR), failure to thrive, microcephaly and dysmorphic facial features. The phenotype is very similar to that reported in other clinical cases with ring chromosome 2. © 2013 Elsevier B.V. All rights reserved.

  1. Facial Emotions Recognition using Gabor Transform and Facial Animation Parameters with Neural Networks

    Science.gov (United States)

    Harit, Aditya; Joshi, J. C., Col; Gupta, K. K.

    2018-03-01

    This paper proposes an automatic facial emotion recognition algorithm comprising two main components: feature extraction and expression recognition. The algorithm applies a Gabor filter bank at fiducial points to extract facial expression features. The resulting magnitudes of the Gabor transforms, along with 14 chosen FAPs (Facial Animation Parameters), compose the feature space. There are two stages: a training phase and a recognition phase. First, for the 6 emotions considered, the system classifies all training expressions into 6 classes (one for each emotion) during the training stage. In the recognition phase, it recognizes the emotion by applying the Gabor bank to a face image, locating the fiducial points, and feeding the resulting features to the trained neural architecture.
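The feature-extraction step can be sketched as evaluating a Gabor kernel at each fiducial point and keeping the response magnitude. A minimal numpy sketch, with a random image and made-up points standing in for detected fiducial points (all parameters are assumptions, not the paper's):

```python
import numpy as np

def gabor_kernel(ksize=15, theta=0.0, lam=6.0, sigma=3.0):
    """Complex Gabor kernel; its response magnitude mimics the
    Gabor-transform magnitudes used as features."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.exp(1j * 2 * np.pi * xr / lam)

def gabor_magnitude_at(img, points, thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """Magnitude of Gabor responses at fiducial points, one value per
    (point, orientation) pair: a stand-in for the feature vector,
    to which the 14 FAP values would be appended."""
    feats = []
    for (r, c) in points:
        for th in thetas:
            k = gabor_kernel(theta=th)
            h = k.shape[0] // 2
            patch = img[r - h:r + h + 1, c - h:c + h + 1]
            feats.append(np.abs(np.sum(patch * k)))
    return np.array(feats)

# Toy image and two "fiducial points" (a real system would detect
# eye corners, mouth corners, etc.).
img = np.random.default_rng(1).random((64, 64))
features = gabor_magnitude_at(img, [(20, 20), (40, 32)])
print(features.shape)  # (8,) = 2 points x 4 orientations
```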

  2. Hierarchical Spatio-Temporal Probabilistic Graphical Model with Multiple Feature Fusion for Binary Facial Attribute Classification in Real-World Face Videos.

    Science.gov (United States)

    Demirkus, Meltem; Precup, Doina; Clark, James J; Arbel, Tal

    2016-06-01

    Recent literature shows that facial attributes, i.e., contextual facial information, can be beneficial for improving the performance of real-world applications, such as face verification, face recognition, and image search. Examples of face attributes include gender, skin color, facial hair, etc. How to robustly obtain these facial attributes (traits) is still an open problem, especially in the presence of the challenges of real-world environments: non-uniform illumination conditions, arbitrary occlusions, motion blur and background clutter. What makes this problem even more difficult is the enormous variability presented by the same subject, due to arbitrary face scales, head poses, and facial expressions. In this paper, we focus on the problem of facial trait classification in real-world face videos. We have developed a fully automatic hierarchical and probabilistic framework that models the collective set of frame class distributions and feature spatial information over a video sequence. The experiments are conducted on a large real-world face video database that we have collected, labelled and made publicly available. The proposed method is flexible enough to be applied to any facial classification problem. Experiments on the large, real-world McGillFaces video database [1] of 18,000 video frames reveal that the proposed framework outperforms alternative approaches by up to 16.96% and 10.13% for the facial attributes of gender and facial hair, respectively.
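While the paper's hierarchical probabilistic graphical model is far richer, the core idea of pooling frame-level attribute evidence over a video sequence can be illustrated with a simple log-opinion pool over hypothetical per-frame posteriors:

```python
import numpy as np

# Hypothetical per-frame posteriors P(attribute | frame) for a binary
# facial attribute (e.g. gender) over a short video. These numbers are
# made up; the point is only how frame evidence is combined.
frame_posteriors = np.array([
    [0.6, 0.4],
    [0.7, 0.3],
    [0.2, 0.8],   # one occluded/blurry frame disagrees
    [0.8, 0.2],
])

# Log-opinion pooling: sum log-probabilities, then renormalize.
log_scores = np.log(frame_posteriors).sum(axis=0)
video_posterior = np.exp(log_scores - log_scores.max())
video_posterior /= video_posterior.sum()
print(video_posterior)  # class 0 wins despite the outlier frame
```

A full model would additionally weight frames by quality (pose, blur, occlusion), which is one motivation for the hierarchical framework.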

  3. Recognizing Facial Slivers.

    Science.gov (United States)

    Gilad-Gutnick, Sharon; Harmatz, Elia Samuel; Tsourides, Kleovoulos; Yovel, Galit; Sinha, Pawan

    2018-07-01

    We report here an unexpectedly robust ability of healthy human individuals (n = 40) to recognize extremely distorted needle-like facial images, challenging the well-entrenched notion that veridical spatial configuration is necessary for extracting facial identity. In face identification tasks of parametrically compressed internal and external features, we found that the sum of performances on each cue falls significantly short of performance on full faces, despite the equal visual information available from both measures (with full faces essentially being a superposition of internal and external features). We hypothesize that this large deficit stems from the use of positional information about how the internal features are positioned relative to the external features. To test this, we systematically changed the relations between internal and external features and found preferential encoding of vertical but not horizontal spatial relationships in facial representations (n = 20). Finally, we employ magnetoencephalography imaging (n = 20) to demonstrate a close mapping between the behavioral psychometric curve and the amplitude of the M250 face-familiarity, but not the M170 face-sensitive, evoked response field component, providing evidence that the M250 can be modulated by faces that are perceptually identifiable, irrespective of extreme distortions to the face's veridical configuration. We theorize that the tolerance to compressive distortions has evolved from the need to recognize faces across varying viewpoints. Our findings help clarify the important, but poorly defined, concept of facial configuration and also enable an association between behavioral performance and previously reported neural correlates of face perception.

  4. Chondromyxoid fibroma of the mastoid facial nerve canal mimicking a facial nerve schwannoma.

    Science.gov (United States)

    Thompson, Andrew L; Bharatha, Aditya; Aviv, Richard I; Nedzelski, Julian; Chen, Joseph; Bilbao, Juan M; Wong, John; Saad, Reda; Symons, Sean P

    2009-07-01

    Chondromyxoid fibroma of the skull base is a rare entity. Involvement of the temporal bone is particularly rare. We present an unusual case of progressive facial nerve paralysis with imaging and clinical findings most suggestive of a facial nerve schwannoma. The lesion was tubular in appearance, expanded the mastoid facial nerve canal, protruded out of the stylomastoid foramen, and enhanced homogeneously. The only unusual imaging feature was minor calcification within the tumor. Surgery revealed an irregular, cystic lesion. Pathology diagnosed a chondromyxoid fibroma involving the mastoid portion of the facial nerve canal, destroying the facial nerve.

  5. Facial Expression Recognition Through Machine Learning

    Directory of Open Access Journals (Sweden)

    Nazia Perveen

    2015-08-01

    Facial expressions communicate non-verbal cues which play an important role in interpersonal relations. Automatic recognition of facial expressions can be an important component of natural human-machine interfaces, and it might likewise be used in behavioral science and in clinical practice. Although people perceive facial expressions virtually instantly, reliable expression recognition by machine is still a challenge. From the point of view of automatic recognition, a facial expression can be considered to consist of deformations of the facial parts and their spatial relations, or of changes in the face's pigmentation. Research into automatic recognition of facial expressions addresses the issues surrounding the representation and classification of the static or dynamic qualities of these deformations or of face pigmentation. We obtained results using CVIPtools. We took a training data set of six facial expressions from three persons, with 90 border-mask samples for training and 30 border-mask samples for testing, used RST-invariant features and texture features for feature analysis, and then classified them using the k-Nearest Neighbor classification algorithm. The maximum accuracy achieved is 90%.
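The classification step named in the abstract, k-Nearest Neighbor voting, can be sketched as follows; the 4-D features here are synthetic stand-ins for the RST-invariant and texture features computed with CVIPtools:

```python
import numpy as np

def knn_predict(train_X, train_y, query, k=3):
    """Plain k-nearest-neighbour majority vote on Euclidean distance."""
    d = np.linalg.norm(train_X - query, axis=1)
    nearest = train_y[np.argsort(d)[:k]]
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]

rng = np.random.default_rng(2)
# Two toy expression classes, well separated in a 4-D feature space.
class0 = rng.normal(0.0, 0.3, size=(15, 4))
class1 = rng.normal(2.0, 0.3, size=(15, 4))
train_X = np.vstack([class0, class1])
train_y = np.array([0] * 15 + [1] * 15)

pred = knn_predict(train_X, train_y, np.full(4, 1.9), k=3)
print(pred)  # 1: the query lies near the class-1 cluster
```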

  6. Artificial Neural Networks and Gene Expression Programing based age estimation using facial features

    Directory of Open Access Journals (Sweden)

    Baddrud Z. Laskar

    2015-10-01

    This work is about estimating human age automatically through analysis of facial images, which has many real-world applications. Due to rapid advances in the fields of machine vision, facial image processing, and computer graphics, automatic age estimation from faces is one of the dominant topics these days, with widespread applications in biometrics, security, surveillance, control, forensic art, entertainment, online customer management and support, and cosmetology. As it is difficult to estimate an exact age, this system estimates a range of ages: four classes are used to assign a person's data to one of the different age groups. What is unique about this study is the use of two technologies, Artificial Neural Networks (ANN) and Gene Expression Programming (GEP), to estimate the age and then compare the results. New methodologies like GEP have been explored here and significant results were found. The dataset has been prepared with superior preprocessing methods to provide more reliable results. The proposed approach has been developed, trained, and tested using both methods, and the public FG-NET data set was used to test the system. The quality of the proposed system for age estimation using facial features is shown by broad experiments on the available FG-NET database.
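As a rough illustration of the ANN half of the comparison (GEP is omitted), here is a one-hidden-layer network trained with plain gradient descent to assign synthetic feature vectors to four age groups; the data, architecture, and hyperparameters are assumptions, not those of the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "facial features" for four age groups; real inputs would
# come from preprocessed FG-NET face images.
n_per, dim, n_cls = 40, 6, 4
X = np.vstack([rng.normal(c, 0.5, size=(n_per, dim)) for c in range(n_cls)])
y = np.repeat(np.arange(n_cls), n_per)
Y = np.eye(n_cls)[y]                      # one-hot targets

# One-hidden-layer ANN with tanh units and a softmax output.
W1 = rng.normal(0, 0.1, (dim, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, (16, n_cls)); b2 = np.zeros(n_cls)
lr = 0.5
for _ in range(1000):
    H = np.tanh(X @ W1 + b1)
    logits = H @ W2 + b2
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)
    G = (P - Y) / len(X)                  # softmax cross-entropy gradient
    GH = (G @ W2.T) * (1 - H**2)          # backprop through tanh
    W2 -= lr * (H.T @ G); b2 -= lr * G.sum(axis=0)
    W1 -= lr * (X.T @ GH); b1 -= lr * GH.sum(axis=0)

acc = (P.argmax(axis=1) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

The four output units correspond to the four age-group classes; a held-out split of FG-NET would be needed to measure generalization rather than training accuracy.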

  7. Genetics Home Reference: oral-facial-digital syndrome

    Science.gov (United States)

    ... related conditions that affect the development of the oral cavity (the mouth and teeth), facial features, and digits ( ... this disorder involve problems with development of the oral cavity , facial features, and digits. Most forms are also ...

  8. 3D Facial Pattern Analysis for Autism

    Science.gov (United States)

    2010-07-01

    et al. (2001) proposed a two-level Gabor wavelet network (GWN) to detect eight facial features. In Bhuiyan et al. (2003) six facial features are... Toyama, K., Krüger, V., 2001. Hierarchical Wavelet Networks for Facial Feature Localization. ICCV'01 Workshop on Recognition, Analysis and... [Figure: (a) pathological (red) and normal structure (blue); (b) signed distance map (negative distance indicates the pathological shape is inside); (c) raw...]

  9. Dysmorphic Facial Features and Other Clinical Characteristics in Two Patients with PEX1 Gene Mutations

    Science.gov (United States)

    Gunduz, Mehmet

    2016-01-01

    Peroxisomal disorders are a group of genetically heterogeneous metabolic diseases related to dysfunction of peroxisomes. Dysmorphic features, neurological abnormalities, and hepatic dysfunction can be presenting signs of peroxisomal disorders. Here we present the dysmorphic facial features and other clinical characteristics of two patients with PEX1 gene mutations. Follow-up periods were 3.5 years and 1 year. Case I was a one-year-old girl who presented with neurodevelopmental delay, hepatomegaly, bilateral hearing loss, and visual problems. Ophthalmologic examination suggested septooptic dysplasia. Cranial magnetic resonance imaging (MRI) showed nonspecific gliosis in the subcortical and periventricular deep white matter. Case II was a 2.5-year-old girl referred for investigation of global developmental delay and elevated liver enzymes. Ophthalmologic examination findings were consistent with bilateral nystagmus and retinitis pigmentosa. Cranial MRI was normal. Dysmorphic facial features, including broad nasal root, low-set ears, downward-slanting eyes, downward-slanting eyebrows, and epicanthal folds, were common findings in the two patients. Molecular genetic analysis indicated a homozygous novel IVS1-2A>G mutation in Case I and a homozygous p.G843D (c.2528G>A) mutation in Case II in the PEX1 gene. Clinical findings and developmental prognosis vary in PEX1 gene mutation. A Kabuki-like phenotype associated with liver pathology may indicate Zellweger spectrum disorders (ZSD). PMID:27882258

  10. Fixation to features and neural processing of facial expressions in a gender discrimination task.

    Science.gov (United States)

    Neath, Karly N; Itier, Roxane J

    2015-10-01

    Early face encoding, as reflected by the N170 ERP component, is sensitive to fixation to the eyes. We investigated whether this sensitivity varies with facial expressions of emotion and whether it can also be seen on other ERP components such as the P1 and EPN. Using eye-tracking to manipulate fixation on facial features, we found the N170 to be the only eye-sensitive component, and this was true for fearful, happy and neutral faces. A different effect of fixation to features was seen for the earlier P1 that likely reflected general sensitivity to face position. An early effect of emotion (∼120 ms) for happy faces was seen at occipital sites and was sustained until ∼350 ms post-stimulus. For fearful faces, an early effect was seen around 80 ms, followed by a later effect appearing at ∼150 ms until ∼300 ms at lateral posterior sites. Results suggest that in this emotion-irrelevant gender discrimination task, processing of fearful and happy expressions occurred early and largely independently of the eye sensitivity indexed by the N170. Processing of the two emotions involved different underlying brain networks active at different times. Copyright © 2015 Elsevier Inc. All rights reserved.
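ERP components such as the N170 are obtained by averaging many stimulus-locked EEG epochs and then measuring amplitude or latency within a component window. A toy sketch with a synthetic N170-like deflection (all values are illustrative, not from this study):

```python
import numpy as np

rng = np.random.default_rng(4)
fs = 500                                  # sampling rate, Hz
t = np.arange(-0.1, 0.4, 1 / fs)          # epoch: -100 to 400 ms

# Synthetic single trials: an N170-like negative deflection peaking
# near 170 ms, buried in trial-to-trial noise.
n170 = -4e-6 * np.exp(-((t - 0.170) ** 2) / (2 * 0.015 ** 2))
trials = n170 + rng.normal(0, 5e-6, size=(80, t.size))

erp = trials.mean(axis=0)                 # averaging cancels the noise
win = (t >= 0.13) & (t <= 0.2)            # N170 search window
peak_latency = t[win][np.argmin(erp[win])]
print(f"N170 peak at {peak_latency * 1000:.0f} ms")
```

Condition effects (e.g. fixation location or expression) are then tested on such per-condition peak amplitudes or mean amplitudes across participants.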

  11. Incongruence Between Observers’ and Observed Facial Muscle Activation Reduces Recognition of Emotional Facial Expressions From Video Stimuli

    Directory of Open Access Journals (Sweden)

    Tanja S. H. Wingenbach

    2018-06-01

    According to embodied cognition accounts, viewing others' facial emotion can elicit the respective emotion representation in observers which entails simulations of sensory, motor, and contextual experiences. In line with that, published research found viewing others' facial emotion to elicit automatic matched facial muscle activation, which was further found to facilitate emotion recognition. Perhaps making congruent facial muscle activity explicit produces an even greater recognition advantage. If there is conflicting sensory information, i.e., incongruent facial muscle activity, this might impede recognition. The effects of actively manipulating facial muscle activity on facial emotion recognition from videos were investigated across three experimental conditions: (a) explicit imitation of viewed facial emotional expressions (stimulus-congruent condition), (b) pen-holding with the lips (stimulus-incongruent condition), and (c) passive viewing (control condition). It was hypothesised that (1) experimental condition (a) and (b) result in greater facial muscle activity than (c), (2) experimental condition (a) increases emotion recognition accuracy from others' faces compared to (c), (3) experimental condition (b) lowers recognition accuracy for expressions with a salient facial feature in the lower, but not the upper face area, compared to (c). Participants (42 males, 42 females) underwent a facial emotion recognition experiment (ADFES-BIV) while electromyography (EMG) was recorded from five facial muscle sites. The experimental conditions' order was counter-balanced. Pen-holding caused stimulus-incongruent facial muscle activity for expressions with facial feature saliency in the lower face region, which reduced recognition of lower face region emotions. Explicit imitation caused stimulus-congruent facial muscle activity without modulating recognition. Methodological implications are discussed.

  14. Facial anatomy.

    Science.gov (United States)

    Marur, Tania; Tuna, Yakup; Demirci, Selman

    2014-01-01

    Dermatologic problems of the face affect both function and aesthetics, which are based on complex anatomical features. Treating dermatologic problems while preserving the aesthetics and functions of the face requires knowledge of normal anatomy. Performing invasive procedures of the face successfully requires an understanding of its underlying topographic anatomy. This chapter presents the anatomy of the facial musculature and neurovascular structures in a systematic way with some clinically important aspects. We describe the attachments of the mimetic and masticatory muscles and emphasize their functions and nerve supply. We highlight clinically relevant facial topographic anatomy by explaining the course and location of the sensory and motor nerves of the face and facial vasculature with their relations. Additionally, this chapter reviews the recent nomenclature of the branching pattern of the facial artery. © 2013 Elsevier Inc. All rights reserved.

  15. Odor valence linearly modulates attractiveness, but not age assessment, of invariant facial features in a memory-based rating task.

    Science.gov (United States)

    Seubert, Janina; Gregory, Kristen M; Chamberland, Jessica; Dessirier, Jean-Marc; Lundström, Johan N

    2014-01-01

    Scented cosmetic products are used across cultures as a way to favorably influence one's appearance. While crossmodal effects of odor valence on perceived attractiveness of facial features have been demonstrated experimentally, it is unknown whether they represent a phenomenon specific to affective processing. In this experiment, we presented odors in the context of a face battery with systematic feature manipulations during a speeded response task. Modulatory effects of linear increases of odor valence were investigated by juxtaposing two subsequent memory-based rating tasks--one predominantly affective (attractiveness) and one cognitive (age). The linear modulation pattern observed for attractiveness was consistent with additive effects of face and odor appraisal. Effects of odor valence on age perception were not linearly modulated and may be the result of cognitive interference. Affective and cognitive processing of faces thus appear to differ in their susceptibility to modulation by odors, likely as a result of privileged access of olfactory stimuli to affective brain networks. These results are critically discussed with respect to potential biases introduced by the preceding speeded response task.

  16. Heartbeat Rate Measurement from Facial Video

    DEFF Research Database (Denmark)

    Haque, Mohammad Ahsanul; Irani, Ramin; Nasrollahi, Kamal

    2016-01-01

    Heartbeat Rate (HR) reveals a person’s health condition. This paper presents an effective system for measuring HR from facial videos acquired in a more realistic environment than the testing environment of current systems. The proposed method utilizes a facial feature point tracking method...... by combining a ‘Good feature to track’ and a ‘Supervised descent method’ in order to overcome the limitations of currently available facial video based HR measuring systems. Such limitations include, e.g., unrealistic restriction of the subject’s movement and artificial lighting during data capture. A face...
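    As a hypothetical illustration of the spectral-analysis stage such systems typically end with, the sketch below recovers a heart rate from the periodic motion of a single tracked feature point. It uses plain NumPy on synthetic data; `estimate_hr_bpm` and all parameters are invented for illustration and are not taken from the cited paper.

```python
import numpy as np

def estimate_hr_bpm(trajectory, fps):
    """Estimate heart rate (BPM) from the trajectory of one tracked facial
    feature point: remove the mean, then take the dominant spectral peak in
    the physiologically plausible 0.75-4 Hz band (45-240 BPM)."""
    sig = np.asarray(trajectory, dtype=float)
    sig = sig - sig.mean()                       # remove DC offset
    spectrum = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(sig.size, d=1.0 / fps)
    band = (freqs >= 0.75) & (freqs <= 4.0)
    return 60.0 * freqs[band][spectrum[band].argmax()]

# Synthetic vertical head micro-motion: 72 BPM (1.2 Hz) plus noise, 30 fps
rng = np.random.default_rng(0)
t = np.arange(0, 20, 1.0 / 30)
traj = 0.2 * np.sin(2 * np.pi * 1.2 * t) + 0.01 * rng.standard_normal(t.size)
print(round(estimate_hr_bpm(traj, 30)))  # 72
```

    Real systems precede this step with face detection, feature tracking, and motion-artifact filtering; only the final frequency estimate is sketched here.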

  17. Nablus mask-like facial syndrome

    DEFF Research Database (Denmark)

    Allanson, Judith; Smith, Amanda; Hare, Heather

    2012-01-01

    Nablus mask-like facial syndrome (NMLFS) has many distinctive phenotypic features, particularly tight glistening skin with reduced facial expression, blepharophimosis, telecanthus, bulky nasal tip, abnormal external ear architecture, upswept frontal hairline, and sparse eyebrows. Over the last few...

  18. A genome-wide association study identifies five loci influencing facial morphology in Europeans.

    Directory of Open Access Journals (Sweden)

    Fan Liu

    2012-09-01

    Inter-individual variation in facial shape is one of the most noticeable phenotypes in humans, and it is clearly under genetic regulation; however, almost nothing is known about the genetic basis of normal human facial morphology. We therefore conducted a genome-wide association study for facial shape phenotypes in multiple discovery and replication cohorts, considering almost ten thousand individuals of European descent from several countries. Phenotyping of facial shape features was based on landmark data obtained from three-dimensional head magnetic resonance images (MRIs) and two-dimensional portrait images. We identified five independent genetic loci associated with different facial phenotypes, suggesting the involvement of five candidate genes--PRDM16, PAX3, TP63, C5orf50, and COL17A1--in the determination of the human face. Three of them have been implicated previously in vertebrate craniofacial development and disease, and the remaining two genes potentially represent novel players in the molecular networks governing facial development. Our finding at PAX3 influencing the position of the nasion replicates a recent GWAS of facial features. In addition to the reported GWA findings, we established links between common DNA variants previously associated with NSCL/P at 2p21, 8q24, 13q31, and 17q22 and normal facial-shape variations based on a candidate gene approach. Overall our study implies that DNA variants in genes essential for craniofacial development contribute with relatively small effect size to the spectrum of normal variation in human facial morphology. This observation has important consequences for future studies aiming to identify more genes involved in the human facial morphology, as well as for potential applications of DNA prediction of facial shape such as in future forensic applications.

  19. A Genome-Wide Association Study Identifies Five Loci Influencing Facial Morphology in Europeans

    Science.gov (United States)

    Liu, Fan; van der Lijn, Fedde; Schurmann, Claudia; Zhu, Gu; Chakravarty, M. Mallar; Hysi, Pirro G.; Wollstein, Andreas; Lao, Oscar; de Bruijne, Marleen; Ikram, M. Arfan; van der Lugt, Aad; Rivadeneira, Fernando; Uitterlinden, André G.; Hofman, Albert; Niessen, Wiro J.; Homuth, Georg; de Zubicaray, Greig; McMahon, Katie L.; Thompson, Paul M.; Daboul, Amro; Puls, Ralf; Hegenscheid, Katrin; Bevan, Liisa; Pausova, Zdenka; Medland, Sarah E.; Montgomery, Grant W.; Wright, Margaret J.; Wicking, Carol; Boehringer, Stefan; Spector, Timothy D.; Paus, Tomáš; Martin, Nicholas G.; Biffar, Reiner; Kayser, Manfred

    2012-01-01

    Inter-individual variation in facial shape is one of the most noticeable phenotypes in humans, and it is clearly under genetic regulation; however, almost nothing is known about the genetic basis of normal human facial morphology. We therefore conducted a genome-wide association study for facial shape phenotypes in multiple discovery and replication cohorts, considering almost ten thousand individuals of European descent from several countries. Phenotyping of facial shape features was based on landmark data obtained from three-dimensional head magnetic resonance images (MRIs) and two-dimensional portrait images. We identified five independent genetic loci associated with different facial phenotypes, suggesting the involvement of five candidate genes—PRDM16, PAX3, TP63, C5orf50, and COL17A1—in the determination of the human face. Three of them have been implicated previously in vertebrate craniofacial development and disease, and the remaining two genes potentially represent novel players in the molecular networks governing facial development. Our finding at PAX3 influencing the position of the nasion replicates a recent GWAS of facial features. In addition to the reported GWA findings, we established links between common DNA variants previously associated with NSCL/P at 2p21, 8q24, 13q31, and 17q22 and normal facial-shape variations based on a candidate gene approach. Overall our study implies that DNA variants in genes essential for craniofacial development contribute with relatively small effect size to the spectrum of normal variation in human facial morphology. This observation has important consequences for future studies aiming to identify more genes involved in the human facial morphology, as well as for potential applications of DNA prediction of facial shape such as in future forensic applications. PMID:23028347

  20. Tensor Rank Preserving Discriminant Analysis for Facial Recognition.

    Science.gov (United States)

    Tao, Dapeng; Guo, Yanan; Li, Yaotang; Gao, Xinbo

    2017-10-12

    Facial recognition, one of the basic topics in computer vision and pattern recognition, has received substantial attention in recent years. However, in traditional facial recognition algorithms, the facial images are reshaped into a long vector, thereby losing part of the original spatial constraints of each pixel. In this paper, a new tensor-based feature extraction algorithm termed tensor rank preserving discriminant analysis (TRPDA) for facial image recognition is proposed. The proposed method involves two stages: in the first stage, the low-dimensional tensor subspace of the original input tensor samples is obtained; in the second stage, discriminative locality alignment is utilized to obtain the ultimate vector feature representation for subsequent facial recognition. On the one hand, the proposed TRPDA algorithm fully utilizes the natural structure of the input samples, and it applies an optimization criterion that can directly handle the tensor spectral analysis problem, thereby decreasing the computation cost compared with traditional tensor-based feature selection algorithms. On the other hand, the proposed TRPDA algorithm extracts features by finding a tensor subspace that preserves most of the rank-order information of the intra-class input samples. Experiments on three facial databases are performed to determine the effectiveness of the proposed TRPDA algorithm.
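    TRPDA itself is not reproduced here, but the core idea of operating on image matrices rather than flattened vectors can be sketched with a simple two-sided projection (a generic 2D-PCA/GLRAM-style construction, not the authors' algorithm; all names and sizes are illustrative):

```python
import numpy as np

def two_sided_projection(images, k):
    """Tensor-style feature extraction: keep each image as a 2-D array and
    project it on both sides by the leading eigenvectors of the row- and
    column-mode covariance matrices, instead of flattening to a long vector."""
    X = np.stack(images).astype(float)
    Xc = X - X.mean(axis=0)
    Gc = sum(x.T @ x for x in Xc) / len(Xc)   # column-mode covariance
    Gr = sum(x @ x.T for x in Xc) / len(Xc)   # row-mode covariance
    Uc = np.linalg.eigh(Gc)[1][:, -k:]        # top-k column directions
    Ur = np.linalg.eigh(Gr)[1][:, -k:]        # top-k row directions
    return [Ur.T @ x @ Uc for x in X]         # k x k feature matrices

faces = [np.random.rand(32, 32) for _ in range(10)]  # synthetic "face" images
feats = two_sided_projection(faces, 4)
print(feats[0].shape)  # (4, 4)
```

    The point of contrast with vector methods: a 32x32 image reduced this way keeps a 4x4 spatial layout rather than becoming an unstructured 16-dimensional vector.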

  1. Magnetoencephalographic study on facial movements

    Directory of Open Access Journals (Sweden)

    Kensaku eMiki

    2014-07-01

    In this review, we introduce our three studies that focused on facial movements. In the first study, we examined the temporal characteristics of neural responses elicited by viewing mouth movements, and assessed differences between the responses to mouth opening and closing movements and an eye-averting condition. Our results showed that the occipitotemporal area, the human MT/V5 homologue, was active in the perception of both mouth and eye motions. Viewing mouth and eye movements did not elicit significantly different activity in the occipitotemporal area, which indicated that perception of the movement of facial parts may be processed in the same manner, and this is different from motion in general. In the second study, we investigated whether early activity in the occipitotemporal region evoked by eye movements was influenced by a face contour and/or features such as the mouth. Our results revealed specific information processing for eye movements in the occipitotemporal region, and this activity was significantly influenced by whether the movements appeared with the facial contour and/or features, in other words, whether the eyes moved, even if the movement itself was the same. In the third study, we examined the effects of inverting the facial contour (hair and chin) and features (eyes, nose, and mouth) on processing for static and dynamic face perception. Our results showed the following: (1) in static face perception, activity in the right fusiform area was affected more by the inversion of features, while that in the left fusiform area was affected more by a disruption in the spatial relationship between the contour and features; and (2) in dynamic face perception, activity in the right occipitotemporal area was affected by the inversion of the facial contour.

  2. Recurrent unilateral facial nerve palsy in a child with dehiscent facial nerve canal

    Directory of Open Access Journals (Sweden)

    Christopher Liu

    2016-12-01

    Objective: The dehiscent facial nerve canal has been well documented in histopathological studies of temporal bones as well as in the clinical setting. We describe the clinical and radiologic features of a child with recurrent facial nerve (FN) palsy and a dehiscent facial nerve canal. Methods: Retrospective chart review. Results: A 5-year-old male was referred to the otolaryngology clinic for evaluation of recurrent acute otitis media and hearing loss. He also developed recurrent left peripheral FN palsy associated with episodes of bilateral acute otitis media. High-resolution computed tomography of the temporal bones revealed incomplete bony coverage of the tympanic segment of the left facial nerve. Conclusions: Recurrent peripheral FN palsy may occur in children with recurrent acute otitis media in the presence of a dehiscent facial nerve canal. Facial nerve canal dehiscence should be considered in the differential diagnosis of children with recurrent peripheral FN palsy.

  3. Quantitative anatomical analysis of facial expression using a 3D motion capture system: Application to cosmetic surgery and facial recognition technology.

    Science.gov (United States)

    Lee, Jae-Gi; Jung, Su-Jin; Lee, Hyung-Jin; Seo, Jung-Hyuk; Choi, You-Jin; Bae, Hyun-Sook; Park, Jong-Tae; Kim, Hee-Jin

    2015-09-01

    The topography of the facial muscles differs between males and females and among individuals of the same gender. To explain the unique expressions that people can make, it is important to define the shapes of the muscle, their associations with the skin, and their relative functions. Three-dimensional (3D) motion-capture analysis, often used to study facial expression, was used in this study to identify characteristic skin movements in males and females when they made six representative basic expressions. The movements of 44 reflective markers (RMs) positioned on anatomical landmarks were measured. Their mean displacement was large in males [ranging from 14.31 mm (fear) to 41.15 mm (anger)], and 3.35-4.76 mm smaller in females [ranging from 9.55 mm (fear) to 37.80 mm (anger)]. The percentages of RMs involved in the ten highest mean maximum displacement values in making at least one expression were 47.6% in males and 61.9% in females. The movements of the RMs were larger in males than females but were more limited. Expanding our understanding of facial expression requires morphological studies of facial muscles and studies of related complex functionality. Conducting these together with quantitative analyses, as in the present study, will yield data valuable for medicine, dentistry, and engineering, for example, for surgical operations on facial regions, software for predicting changes in facial features and expressions after corrective surgery, and the development of face-mimicking robots. © 2015 Wiley Periodicals, Inc.
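    The displacement statistic reported above (per-marker maximum displacement over an expression, relative to a neutral frame) is straightforward to compute; a minimal NumPy sketch on hypothetical motion-capture data follows. The marker count (44) matches the study, but the trajectories here are synthetic.

```python
import numpy as np

def max_marker_displacement(frames):
    """Per-marker maximum displacement across an expression, measured from
    the first (neutral) frame. `frames` has shape (n_frames, n_markers, 3),
    coordinates in the capture system's units (e.g. mm)."""
    neutral = frames[0]
    disp = np.linalg.norm(frames - neutral, axis=2)  # (n_frames, n_markers)
    return disp.max(axis=0)                          # (n_markers,)

# Hypothetical capture: 100 frames of 44 reflective markers drifting smoothly
rng = np.random.default_rng(1)
frames = np.cumsum(rng.normal(0, 0.1, (100, 44, 3)), axis=0)
per_marker = max_marker_displacement(frames)
print(per_marker.shape)  # (44,)
```

    Averaging `per_marker` over markers, per subject and expression, yields the group means the abstract reports.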

  4. Influence of gravity upon some facial signs.

    Science.gov (United States)

    Flament, F; Bazin, R; Piot, B

    2015-06-01

    Facial clinical signs and their integration are the basis of the perception that others have of us, notably the age they imagine we are. Objective measurement of facial modifications in motion, before and after application of a skin regimen, is essential for extending our capacity to describe efficacy in facial dynamics. Quantifying facial modifications vis à vis gravity will allow us to address how facial shape is 'controlled' in daily activities. Standardized photographs of the faces of 30 Caucasian female subjects of various ages (24-73 years) were successively taken in upright and supine positions within a short time interval. All these pictures were then reframed - avoiding any bias due to facial features when evaluating one single sign - for clinical rating of several facial signs by trained experts against published standardized photographic scales. For all subjects, the supine position increased facial width but not height, giving a fuller appearance to the face. More importantly, the supine position changed the severity of facial ageing features (e.g. wrinkles) compared to an upright position, and whether these features were attenuated or exacerbated depended on their facial location. The supine position mostly modifies signs of the lower half of the face, whereas those of the upper half appear unchanged or slightly accentuated. These changes appear much more marked in the older groups, where some deep labial folds almost vanish. These alterations decreased the perceived ages of the subjects by an average of 3.8 years. Although preliminary, this study suggests that a 90° rotation of the facial skin vis à vis gravity induces rapid rearrangements, among which changes in tensional forces within and across the face, motility of interstitial free water in underlying skin tissue, and/or alterations of facial Langer lines likely play a significant role. © 2015 Society of Cosmetic Scientists and the Société Fran

  5. Assessment of the facial features and chin development of fetuses with use of serial three-dimensional sonography and the mandibular size monogram in a Chinese population.

    Science.gov (United States)

    Tsai, Meng-Yin; Lan, Kuo-Chung; Ou, Chia-Yo; Chen, Jen-Huang; Chang, Shiuh-Young; Hsu, Te-Yao

    2004-02-01

    Our purpose was to evaluate whether the application of serial three-dimensional (3D) sonography and the mandibular size monogram can allow observation of dynamic changes in facial features, as well as chin development in utero. The mandibular size monogram was established through a cross-sectional study involving 183 fetal images. The serial changes of facial features and chin development were assessed in a cohort study involving 40 patients. The monogram reveals that the biparietal distance (BPD)/mandibular body length (MBL) ratio gradually decreases with advancing gestational age. The cohort study conducted with serial 3D sonography shows the same tendency. Both the images and the results of the paired-samples t test, together with the monogram, display disproportionate growth of the fetal head and chin that leads to changes in facial features in late gestation. This fact must be considered when we evaluate fetuses at risk for development of micrognathia.

  6. An Efficient Multimodal 2D + 3D Feature-based Approach to Automatic Facial Expression Recognition

    KAUST Repository

    Li, Huibin

    2015-07-29

    We present a fully automatic multimodal 2D + 3D feature-based facial expression recognition approach and demonstrate its performance on the BU-3DFE database. Our approach combines multi-order gradient-based local texture and shape descriptors in order to achieve efficiency and robustness. First, a large set of fiducial facial landmarks of 2D face images along with their 3D face scans are localized using a novel algorithm namely incremental Parallel Cascade of Linear Regression (iPar-CLR). Then, a novel Histogram of Second Order Gradients (HSOG) based local image descriptor in conjunction with the widely used first-order gradient based SIFT descriptor are used to describe the local texture around each 2D landmark. Similarly, the local geometry around each 3D landmark is described by two novel local shape descriptors constructed using the first-order and the second-order surface differential geometry quantities, i.e., Histogram of mesh Gradients (meshHOG) and Histogram of mesh Shape index (curvature quantization, meshHOS). Finally, the Support Vector Machine (SVM) based recognition results of all 2D and 3D descriptors are fused at both feature-level and score-level to further improve the accuracy. Comprehensive experimental results demonstrate that there exist impressive complementary characteristics between the 2D and 3D descriptors. We use the BU-3DFE benchmark to compare our approach to the state-of-the-art ones. Our multimodal feature-based approach outperforms the others by achieving an average recognition accuracy of 86.32%. Moreover, a good generalization ability is shown on the Bosphorus database.
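    The paper's feature-level and score-level fusion of per-modality classifiers can be illustrated with a toy example. Here a nearest-centroid classifier stands in for the SVMs used in the paper, and all data, names, and dimensions are synthetic:

```python
import numpy as np

def centroid_scores(Xtr, ytr, Xte):
    """Toy per-modality classifier: class scores are negative distances to
    class centroids (a stand-in for the paper's SVM scores)."""
    classes = np.unique(ytr)
    cents = np.stack([Xtr[ytr == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(Xte[:, None, :] - cents[None, :, :], axis=2)
    return -d  # higher score = closer to that class

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)                           # 2 expression classes
X2d = rng.normal(0, 1, (200, 20)) + y[:, None] * 0.8  # "texture" descriptors
X3d = rng.normal(0, 1, (200, 15)) + y[:, None] * 0.8  # "shape" descriptors
tr, te = slice(0, 150), slice(150, 200)

# Feature-level fusion: concatenate the descriptors before classification
Xcat = np.hstack([X2d, X3d])
s_feat = centroid_scores(Xcat[tr], y[tr], Xcat[te])

# Score-level fusion: average the per-modality class scores
s_fused = (centroid_scores(X2d[tr], y[tr], X2d[te]) +
           centroid_scores(X3d[tr], y[tr], X3d[te])) / 2

for name, s in [("feature-level", s_feat), ("score-level", s_fused)]:
    print(name, (s.argmax(axis=1) == y[te]).mean())  # held-out accuracy
```

    In the paper, both fusion levels are combined; the sketch shows only the mechanics of each.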

  7. Tracking subtle stereotypes of children with trisomy 21: from facial-feature-based to implicit stereotyping.

    Directory of Open Access Journals (Sweden)

    Claire Enea-Drapeau

    BACKGROUND: Stigmatization is one of the greatest obstacles to the successful integration of people with Trisomy 21 (T21, or Down syndrome), the most frequent genetic disorder associated with intellectual disability. Research on attitudes and stereotypes toward these people still focuses on explicit measures subject to social-desirability biases, and neglects how variability in facial stigmata influences attitudes and stereotyping. METHODOLOGY/PRINCIPAL FINDINGS: The participants were 165 adults including 55 young adult students, 55 non-student adults, and 55 professional caregivers working with intellectually disabled persons. They completed implicit association tests (IAT), a well-known technique whereby response latency is used to capture the relative strength with which some groups of people--here, photographed faces of typically developing children and children with T21--are automatically (without conscious awareness) associated with positive versus negative attributes in memory. Each participant also rated the same photographed faces (consciously accessible evaluations). We provide the first evidence that the positive bias typically found in explicit judgments of children with T21 is smaller for those whose facial features are highly characteristic of this disorder, compared to their counterparts with less distinctive features and to typically developing children. We also show that this bias can coexist with negative evaluations at the implicit level (with large effect sizes), even among professional caregivers. CONCLUSION: These findings support recent models of feature-based stereotyping, and more importantly show how crucial it is to go beyond explicit evaluations to estimate the true extent of stigmatization of intellectually disabled people.
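    The IAT's latency-based measure is commonly summarized as a D score; the sketch below is a simplified version of that computation (not the exact scoring algorithm used in the study, which involves additional trimming rules; the latencies are hypothetical):

```python
import statistics

def iat_d_score(congruent_rts, incongruent_rts):
    """Simplified Greenwald-style D score: the difference between mean
    response latencies in the incongruent and congruent blocks, divided by
    the pooled standard deviation of all latencies. Positive values indicate
    slower responses when the pairing conflicts with the implicit
    association."""
    all_rts = list(congruent_rts) + list(incongruent_rts)
    pooled_sd = statistics.stdev(all_rts)
    return (statistics.mean(incongruent_rts) -
            statistics.mean(congruent_rts)) / pooled_sd

# Hypothetical response latencies in ms for one participant
congruent = [620, 650, 700, 640, 610, 660]
incongruent = [780, 820, 760, 800, 790, 830]
print(iat_d_score(congruent, incongruent))  # positive: implicit bias present
```

    Comparing D scores across stimulus subsets (e.g. faces with more vs. less distinctive features) is how feature-based implicit effects like those above are quantified.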

  8. An Efficient Multimodal 2D + 3D Feature-based Approach to Automatic Facial Expression Recognition

    KAUST Repository

    Li, Huibin; Ding, Huaxiong; Huang, Di; Wang, Yunhong; Zhao, Xi; Morvan, Jean-Marie; Chen, Liming

    2015-01-01

    We present a fully automatic multimodal 2D + 3D feature-based facial expression recognition approach and demonstrate its performance on the BU-3DFE database. Our approach combines multi-order gradient-based local texture and shape descriptors in order to achieve efficiency and robustness. First, a large set of fiducial facial landmarks of 2D face images along with their 3D face scans are localized using a novel algorithm namely incremental Parallel Cascade of Linear Regression (iPar-CLR). Then, a novel Histogram of Second Order Gradients (HSOG) based local image descriptor in conjunction with the widely used first-order gradient based SIFT descriptor are used to describe the local texture around each 2D landmark. Similarly, the local geometry around each 3D landmark is described by two novel local shape descriptors constructed using the first-order and the second-order surface differential geometry quantities, i.e., Histogram of mesh Gradients (meshHOG) and Histogram of mesh Shape index (curvature quantization, meshHOS). Finally, the Support Vector Machine (SVM) based recognition results of all 2D and 3D descriptors are fused at both feature-level and score-level to further improve the accuracy. Comprehensive experimental results demonstrate that there exist impressive complementary characteristics between the 2D and 3D descriptors. We use the BU-3DFE benchmark to compare our approach to the state-of-the-art ones. Our multimodal feature-based approach outperforms the others by achieving an average recognition accuracy of 86.32%. Moreover, a good generalization ability is shown on the Bosphorus database.

  9. Automatic facial animation parameters extraction in MPEG-4 visual communication

    Science.gov (United States)

    Yang, Chenggen; Gong, Wanwei; Yu, Lu

    2002-01-01

    Facial Animation Parameters (FAPs) are defined in MPEG-4 to animate a facial object. The algorithm proposed in this paper to extract these FAPs is applied to very low bit-rate video communication, in which the scene is composed of a head-and-shoulder object with a complex background. This paper addresses an algorithm to automatically extract all FAPs needed to animate a generic facial model and to estimate the 3D motion of the head from point correspondences. The proposed algorithm extracts the human facial region by color segmentation and intra-frame and inter-frame edge detection. Facial structure and the edge distribution of facial features, such as vertical and horizontal gradient histograms, are used to locate the facial feature regions. Parabola and circle deformable templates are employed to fit facial features and extract a part of the FAPs. A special data structure is proposed to describe the deformable templates, reducing the time consumed in computing energy functions. The remaining FAPs, the 3D rigid head motion vectors, are estimated by the corresponding-points method. A 3D head wire-frame model provides facial semantic information for the selection of proper corresponding points, which helps to increase the accuracy of 3D rigid object motion estimation.
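    One common way to implement the color-segmentation step mentioned above is chrominance thresholding in YCbCr space. The sketch below uses the BT.601 RGB-to-YCbCr conversion with one widely cited skin-tone box; the thresholds are illustrative assumptions, not the paper's values.

```python
import numpy as np

def skin_mask(rgb):
    """Crude facial-region extraction by colour segmentation: convert RGB to
    YCbCr (BT.601) and threshold the chrominance channels against a common
    skin-tone box (Cb in (77, 127), Cr in (133, 173))."""
    rgb = rgb.astype(float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (cb > 77) & (cb < 127) & (cr > 133) & (cr < 173)

# Hypothetical 2x2 image: one skin-like pixel, three background pixels
img = np.array([[[200, 140, 120], [10, 10, 10]],
                [[0, 255, 0], [255, 255, 255]]], dtype=np.uint8)
print(skin_mask(img))
# [[ True False]
#  [False False]]
```

    In a full pipeline this mask would feed the edge-detection and template-fitting stages described above.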

  10. Joint Facial Action Unit Detection and Feature Fusion: A Multi-Conditional Learning Approach

    NARCIS (Netherlands)

    Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja

    2016-01-01

    Automated analysis of facial expressions can benefit many domains, from marketing to clinical diagnosis of neurodevelopmental disorders. Facial expressions are typically encoded as a combination of facial muscle activations, i.e., action units. Depending on context, these action units co-occur in

  11. A statistical method for 2D facial landmarking

    NARCIS (Netherlands)

    Dibeklioğlu, H.; Salah, A.A.; Gevers, T.

    2012-01-01

    Many facial-analysis approaches rely on robust and accurate automatic facial landmarking to correctly function. In this paper, we describe a statistical method for automatic facial-landmark localization. Our landmarking relies on a parsimonious mixture model of Gabor wavelet features, computed in
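    The Gabor wavelet features underlying such landmarking models are responses to kernels like the following; this is a generic real-valued Gabor kernel, with sizes and parameters chosen for illustration rather than taken from the paper.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """2-D Gabor wavelet (real part): a sinusoidal carrier at orientation
    `theta` under an isotropic Gaussian envelope of width `sigma`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return (np.exp(-(x**2 + y**2) / (2 * sigma**2)) *
            np.cos(2 * np.pi * xr / wavelength))

# A small bank of orientations, as typically used for landmark features
bank = [gabor_kernel(15, wavelength=6, theta=t, sigma=3)
        for t in np.linspace(0, np.pi, 4, endpoint=False)]
print(len(bank), bank[0].shape)  # 4 (15, 15)
```

    Convolving an image patch with such a bank yields the feature vector on which a mixture model like the one described above can be fitted.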

  12. Factors contributing to the adaptation aftereffects of facial expression.

    Science.gov (United States)

    Butler, Andrea; Oruc, Ipek; Fox, Christopher J; Barton, Jason J S

    2008-01-29

    Previous studies have demonstrated the existence of adaptation aftereffects for facial expressions. Here we investigated which aspects of facial stimuli contribute to these aftereffects. In Experiment 1, we examined the role of local adaptation to image elements such as curvature, shape and orientation, independent of expression, by using hybrid faces constructed from either the same or opposing expressions. While hybrid faces made with consistent expressions generated aftereffects as large as those with normal faces, there were no aftereffects from hybrid faces made from different expressions, despite the fact that these contained the same local image elements. In Experiment 2, we examined the role of facial features independent of the normal face configuration by contrasting adaptation with whole faces to adaptation with scrambled faces. We found that scrambled faces also generated significant aftereffects, indicating that expressive features without a normal facial configuration could generate expression aftereffects. In Experiment 3, we examined the role of facial configuration by using schematic faces made from line elements that in isolation do not carry expression-related information (e.g. curved segments and straight lines) but that convey an expression when arranged in a normal facial configuration. We obtained a significant aftereffect for facial configurations but not scrambled configurations of these line elements. We conclude that facial expression aftereffects are not due to local adaptation to image elements but due to high-level adaptation of neural representations that involve both facial features and facial configuration.

  13. Neural processing of fearful and happy facial expressions during emotion-relevant and emotion-irrelevant tasks: a fixation-to-feature approach

    Science.gov (United States)

    Neath-Tavares, Karly N.; Itier, Roxane J.

    2017-01-01

    Research suggests an important role of the eyes and mouth for discriminating facial expressions of emotion. A gaze-contingent procedure was used to test the impact of fixation to facial features on the neural response to fearful, happy and neutral facial expressions in an emotion discrimination (Exp.1) and an oddball detection (Exp.2) task. The N170 was the only eye-sensitive ERP component, and this sensitivity did not vary across facial expressions. In both tasks, compared to neutral faces, responses to happy expressions were seen as early as 100–120ms occipitally, while responses to fearful expressions started around 150ms, on or after the N170, at both occipital and lateral-posterior sites. Analyses of scalp topographies revealed different distributions of these two emotion effects across most of the epoch. Emotion processing interacted with fixation location at different times between tasks. Results suggest a role of both the eyes and mouth in the neural processing of fearful expressions and of the mouth in the processing of happy expressions, before 350ms. PMID:27430934

  14. Cosmetics as a Feature of the Extended Human Phenotype: Modulation of the Perception of Biologically Important Facial Signals

    Science.gov (United States)

    Etcoff, Nancy L.; Stock, Shannon; Haley, Lauren E.; Vickery, Sarah A.; House, David M.

    2011-01-01

    Research on the perception of faces has focused on the size, shape, and configuration of inherited features or the biological phenotype, and largely ignored the effects of adornment, or the extended phenotype. Research on the evolution of signaling has shown that animals frequently alter visual features, including color cues, to attract, intimidate or protect themselves from conspecifics. Humans engage in conscious manipulation of visual signals using cultural tools in real time rather than genetic changes over evolutionary time. Here, we investigate one tool, the use of color cosmetics. In two studies, we asked viewers to rate the same female faces with or without color cosmetics, and we varied the style of makeup from minimal (natural), to moderate (professional), to dramatic (glamorous). Each look provided increasing luminance contrast between the facial features and surrounding skin. Faces were shown for 250 ms or for unlimited inspection time, and subjects rated them for attractiveness, competence, likeability and trustworthiness. At 250 ms, cosmetics had significant positive effects on all outcomes. Length of inspection time did not change the effect for competence or attractiveness. However, with longer inspection time, the effect of cosmetics on likability and trust varied by specific makeup looks, indicating that cosmetics could impact automatic and deliberative judgments differently. The results suggest that cosmetics can create supernormal facial stimuli, and that one way they may do so is by exaggerating cues to sexual dimorphism. Our results provide evidence that judgments of facial trustworthiness and attractiveness are at least partially separable, that beauty has a significant positive effect on judgment of competence, a universal dimension of social cognition, but has a more nuanced effect on the other universal dimension of social warmth, and that the extended phenotype significantly influences perception of biologically important signals at first
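    The luminance contrast manipulated across the three makeup looks can be quantified, for example, as Michelson contrast between feature and skin luminance; the function and values below are illustrative, not the studies' actual measure.

```python
def luminance_contrast(feature_lum, skin_lum):
    """Michelson contrast between the mean luminance of a facial feature
    (e.g. lips or eyes) and the surrounding skin, both on a 0-1 scale."""
    return abs(skin_lum - feature_lum) / (skin_lum + feature_lum)

# Hypothetical mean luminances: lips darken as makeup intensity increases
print(round(luminance_contrast(0.30, 0.60), 3))  # 0.333 (natural look)
print(round(luminance_contrast(0.15, 0.60), 3))  # 0.6 (dramatic look)
```

    The monotone increase from the natural to the dramatic look mirrors the "increasing luminance contrast" described in the abstract.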

  15. Cosmetics as a feature of the extended human phenotype: modulation of the perception of biologically important facial signals.

    Directory of Open Access Journals (Sweden)

    Nancy L Etcoff

Full Text Available Research on the perception of faces has focused on the size, shape, and configuration of inherited features or the biological phenotype, and largely ignored the effects of adornment, or the extended phenotype. Research on the evolution of signaling has shown that animals frequently alter visual features, including color cues, to attract, intimidate or protect themselves from conspecifics. Humans engage in conscious manipulation of visual signals using cultural tools in real time rather than genetic changes over evolutionary time. Here, we investigate one tool, the use of color cosmetics. In two studies, we asked viewers to rate the same female faces with or without color cosmetics, and we varied the style of makeup from minimal (natural), to moderate (professional), to dramatic (glamorous). Each look provided increasing luminance contrast between the facial features and surrounding skin. Faces were shown for 250 ms or for unlimited inspection time, and subjects rated them for attractiveness, competence, likeability and trustworthiness. At 250 ms, cosmetics had significant positive effects on all outcomes. Length of inspection time did not change the effect for competence or attractiveness. However, with longer inspection time, the effect of cosmetics on likability and trust varied by specific makeup looks, indicating that cosmetics could impact automatic and deliberative judgments differently. The results suggest that cosmetics can create supernormal facial stimuli, and that one way they may do so is by exaggerating cues to sexual dimorphism. 
Our results provide evidence that judgments of facial trustworthiness and attractiveness are at least partially separable, that beauty has a significant positive effect on judgment of competence, a universal dimension of social cognition, but has a more nuanced effect on the other universal dimension of social warmth, and that the extended phenotype significantly influences perception of biologically important

  16. Cosmetics as a feature of the extended human phenotype: modulation of the perception of biologically important facial signals.

    Science.gov (United States)

    Etcoff, Nancy L; Stock, Shannon; Haley, Lauren E; Vickery, Sarah A; House, David M

    2011-01-01

    Research on the perception of faces has focused on the size, shape, and configuration of inherited features or the biological phenotype, and largely ignored the effects of adornment, or the extended phenotype. Research on the evolution of signaling has shown that animals frequently alter visual features, including color cues, to attract, intimidate or protect themselves from conspecifics. Humans engage in conscious manipulation of visual signals using cultural tools in real time rather than genetic changes over evolutionary time. Here, we investigate one tool, the use of color cosmetics. In two studies, we asked viewers to rate the same female faces with or without color cosmetics, and we varied the style of makeup from minimal (natural), to moderate (professional), to dramatic (glamorous). Each look provided increasing luminance contrast between the facial features and surrounding skin. Faces were shown for 250 ms or for unlimited inspection time, and subjects rated them for attractiveness, competence, likeability and trustworthiness. At 250 ms, cosmetics had significant positive effects on all outcomes. Length of inspection time did not change the effect for competence or attractiveness. However, with longer inspection time, the effect of cosmetics on likability and trust varied by specific makeup looks, indicating that cosmetics could impact automatic and deliberative judgments differently. The results suggest that cosmetics can create supernormal facial stimuli, and that one way they may do so is by exaggerating cues to sexual dimorphism. 
Our results provide evidence that judgments of facial trustworthiness and attractiveness are at least partially separable, that beauty has a significant positive effect on judgment of competence, a universal dimension of social cognition, but has a more nuanced effect on the other universal dimension of social warmth, and that the extended phenotype significantly influences perception of biologically important signals at first

  17. [Neural representations of facial identity and its associative meaning].

    Science.gov (United States)

    Eifuku, Satoshi

    2012-07-01

Since the discovery of "face cells" in the early 1980s, single-cell recording experiments in non-human primates have made significant contributions toward elucidating the neural mechanisms underlying face perception and recognition. In this paper, we review recent progress in face cell studies, including the remarkable discovery of face patches scattered around the anterior temporal cortical areas of monkeys. In particular, we focus on the neural representations of facial identity within these areas. The identification of faces requires both discrimination of facial identities and generalization across facial views. Several laboratories have shown that the population of face cells found in the anterior ventral inferior temporal cortex of monkeys represents facial identity in a facial view-invariant manner. These findings suggest a relatively distributed representation that operates for facial identification. It has also been shown that certain individual neurons in the medial temporal lobe of humans represent view-invariant facial identity. This finding suggests a relatively sparse representation that may be employed for memory formation. Finally, we summarize our recent study showing that the population of face cells in the anterior ventral inferior temporal cortex of monkeys that represents view-invariant facial identity can also represent learned paired associations between an abstract picture and a particular facial identity, extending our understanding of the function of the anterior ventral inferior temporal cortex in the recognition of associative meanings of faces.

  18. Variable developmental delays and characteristic facial features-A novel 7p22.3p22.2 microdeletion syndrome?

    Science.gov (United States)

    Yu, Andrea C; Zambrano, Regina M; Cristian, Ingrid; Price, Sue; Bernhard, Birgitta; Zucker, Marc; Venkateswaran, Sunita; McGowan-Jordan, Jean; Armour, Christine M

    2017-06-01

Isolated 7p22.3p22.2 deletions are rarely described, with only two reports in the literature. Most other reported cases either involve a much larger region of the 7p arm or have an additional copy number variation. Here, we report five patients with overlapping microdeletions at 7p22.3p22.2. The patients presented with variable developmental delays, exhibiting relative weaknesses in expressive language skills and relative strengths in gross and fine motor skills. The most consistent facial features seen in these patients included a broad nasal root, a prominent forehead, a prominent glabella, and arched eyebrows. Additional variable features amongst the patients included microcephaly, metopic ridging or craniosynostosis, cleft palate, cardiac defects, and mild hypotonia. Although the patients' deletions varied in size, there was a 0.47 Mb region of overlap which contained 7 OMIM genes: EIF3B, CHST12, LFNG, BRAT1, TTYH3, AMZ1, and GNA12. We propose that monosomy of this region represents a novel microdeletion syndrome. We recommend that individuals with 7p22.3p22.2 deletions should receive a developmental assessment and a thorough cardiac exam, with consideration of an echocardiogram, as part of their initial evaluation. © 2017 Wiley Periodicals, Inc.

  19. Cranio-facial clefts in pre-hispanic America.

    Science.gov (United States)

    Marius-Nunez, A L; Wasiak, D T

    2015-10-01

Among the representations of congenital malformations in Moche ceramic art, cranio-facial clefts have been portrayed in pottery found in Moche burials. These pottery vessels were used as domestic items during lifetime and funerary offerings upon death. The aim of this study was to examine archeological evidence for representations of cranio-facial cleft malformations in Moche vessels. Pottery depicting malformations of the midface in Moche collections in Lima, Peru, was studied. The malformations portrayed on pottery were analyzed using the Tessier classification. Photographs were authorized by the Larco Museo. Three vessels were observed to have median cranio-facial dysraphia in association with midline cleft of the lower lip with cleft of the mandible. ML001489 portrays a median cranio-facial dysraphia with an orbital cleft and a midline cleft of the lower lip extending to the mandible. ML001514 represents a median facial dysraphia in association with an orbital facial cleft and a vertical orbital dystopia. ML001491 illustrates a median facial cleft with a soft tissue cleft. Three cases of midline, orbital and lateral facial clefts have been portrayed in Moche full-figure portrait vessels. They represent the earliest registries of congenital cranio-facial malformations in ancient Peru. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  20. Replicating distinctive facial features in lineups: identification performance in young versus older adults.

    Science.gov (United States)

    Badham, Stephen P; Wade, Kimberley A; Watts, Hannah J E; Woods, Natalie G; Maylor, Elizabeth A

    2013-04-01

    Criminal suspects with distinctive facial features, such as tattoos or bruising, may stand out in a police lineup. To prevent suspects from being unfairly identified on the basis of their distinctive feature, the police often manipulate lineup images to ensure that all of the members appear similar. Recent research shows that replicating a distinctive feature across lineup members enhances eyewitness identification performance, relative to removing that feature on the target. In line with this finding, the present study demonstrated that with young adults (n = 60; mean age = 20), replication resulted in more target identifications than did removal in target-present lineups and that replication did not impair performance, relative to removal, in target-absent lineups. Older adults (n = 90; mean age = 74) performed significantly worse than young adults, identifying fewer targets and more foils; moreover, older adults showed a minimal benefit from replication over removal. This pattern is consistent with the associative deficit hypothesis of aging, such that older adults form weaker links between faces and their distinctive features. Although replication did not produce much benefit over removal for older adults, it was not detrimental to their performance. Therefore, the results suggest that replication may not be as beneficial to older adults as it is to young adults and demonstrate a new practical implication of age-related associative deficits in memory.

  1. Human age estimation framework using different facial parts

    Directory of Open Access Journals (Sweden)

    Mohamed Y. El Dib

    2011-03-01

Full Text Available Human age estimation from facial images has a wide range of real-world applications in human computer interaction (HCI). In this paper, we use bio-inspired features (BIF) to analyze different facial parts: (a) eye wrinkles, (b) the whole internal face (without forehead area) and (c) the whole face (with forehead area), using different feature shape points. The analysis shows that eye wrinkles, which cover 30% of the facial area, contain the most important aging features compared to the internal face and the whole face. Furthermore, more extensive experiments are made on the FG-NET database by compensating for the missing pictures in older age groups with images from the MORPH database to enhance the results.
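The abstract's quantitative claim that the eye-wrinkle region covers about 30% of the facial area can be illustrated with a minimal sketch. The fractional crop proportions below are illustrative assumptions for this example, not the paper's actual feature-shape-point regions.

```python
# Illustrative sketch: derive the three analysis regions from a face bounding
# box. The fractions used here are assumed for illustration only; the paper's
# regions come from detected feature shape points, not fixed proportions.

def face_regions(x, y, w, h):
    """Return (x, y, w, h) crops for the three analysis regions."""
    return {
        # horizontal strip around the eyes (assumed ~30% of face height)
        "eye_wrinkles": (x, y + int(0.25 * h), w, int(0.30 * h)),
        # face without the forehead (assumed top ~25% removed)
        "internal_face": (x, y + int(0.25 * h), w, int(0.75 * h)),
        # whole face including forehead
        "whole_face": (x, y, w, h),
    }

def area_fraction(region, whole):
    """Fraction of the whole-face area covered by a region."""
    _, _, rw, rh = region
    _, _, ww, wh = whole
    return (rw * rh) / (ww * wh)

regions = face_regions(0, 0, 100, 100)
print(round(area_fraction(regions["eye_wrinkles"], regions["whole_face"]), 2))
```

Under these assumed proportions the eye-wrinkle strip indeed accounts for 30% of the face area, matching the figure cited in the abstract.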

  2. Human Amygdala Tracks a Feature-Based Valence Signal Embedded within the Facial Expression of Surprise.

    Science.gov (United States)

    Kim, M Justin; Mattek, Alison M; Bennett, Randi H; Solomon, Kimberly M; Shin, Jin; Whalen, Paul J

    2017-09-27

    Human amygdala function has been traditionally associated with processing the affective valence (negative vs positive) of an emotionally charged event, especially those that signal fear or threat. However, this account of human amygdala function can be explained by alternative views, which posit that the amygdala might be tuned to either (1) general emotional arousal (activation vs deactivation) or (2) specific emotion categories (fear vs happy). Delineating the pure effects of valence independent of arousal or emotion category is a challenging task, given that these variables naturally covary under many circumstances. To circumvent this issue and test the sensitivity of the human amygdala to valence values specifically, we measured the dimension of valence within the single facial expression category of surprise. Given the inherent valence ambiguity of this category, we show that surprised expression exemplars are attributed valence and arousal values that are uniquely and naturally uncorrelated. We then present fMRI data from both sexes, showing that the amygdala tracks these consensus valence values. Finally, we provide evidence that these valence values are linked to specific visual features of the mouth region, isolating the signal by which the amygdala detects this valence information. SIGNIFICANCE STATEMENT There is an open question as to whether human amygdala function tracks the valence value of cues in the environment, as opposed to either a more general emotional arousal value or a more specific emotion category distinction. Here, we demonstrate the utility of surprised facial expressions because exemplars within this emotion category take on valence values spanning the dimension of bipolar valence (positive to negative) at a consistent level of emotional arousal. Functional neuroimaging data showed that amygdala responses tracked the valence of surprised facial expressions, unconfounded by arousal. 
Furthermore, a machine learning classifier identified

  3. Colesteatoma causando paralisia facial Cholesteatoma causing facial paralysis

    Directory of Open Access Journals (Sweden)

    José Ricardo Gurgel Testa

    2003-10-01

blood supply or production of neurotoxic substances secreted from either the cholesteatoma matrix or bacteria enclosed in the tumor. AIM: To evaluate the incidence, clinical features and treatment of facial palsy due to cholesteatoma. STUDY DESIGN: Clinical retrospective. MATERIAL AND METHOD: Retrospective study of 10 cases of facial paralysis due to cholesteatoma, selected through a survey of 206 facial nerve decompressions performed for various aetiologies over the last 10 years at UNIFESP-EPM. RESULTS: The incidence of facial paralysis due to cholesteatoma in this study was 4.85%, with female predominance (60%). The average age of the patients was 39 years. The duration and severity of the facial palsy, together with the extent of the lesion, were important for the functional recovery of the facial nerve. CONCLUSION: An early surgical approach is necessary in these cases to achieve more adequate recovery of nerve function. When disruption or intense fibrous replacement of the facial nerve occurs, nerve grafting (greater auricular/sural nerves) and/or hypoglossal-facial anastomosis may be suggested.

  4. Asians' Facial Responsiveness to Basic Tastes by Automated Facial Expression Analysis System.

    Science.gov (United States)

    Zhi, Ruicong; Cao, Lianyu; Cao, Gang

    2017-03-01

Growing evidence shows that consumer choices in real life are mostly driven by unconscious rather than conscious mechanisms, and this unconscious process can be measured behaviorally. This study aims to apply automatic facial expression analysis to represent consumers' emotions, and to explore the relationships between sensory perception and facial responses. Basic taste solutions (sourness, sweetness, bitterness, umami, and saltiness) at six levels plus water were used, covering most of the tastes found in food and drink. A further contribution of this study is to analyze the characteristics of facial expressions and the correlation between facial expressions and perceived hedonic liking for Asian consumers. Until now, facial expression studies have been reported only for Western consumers, and few have investigated facial responses during food consumption in Asian consumers. Experimental results indicated that facial expressions could identify different stimuli with various concentrations and different hedonic levels. Perceived liking increased at lower concentrations and decreased at higher concentrations, with samples at medium concentrations perceived as the most pleasant, except for sweetness and bitterness. High correlations were found between the perceived intensities of bitterness, umami, and saltiness and the facial reactions of disgust and fear. The facial expressions disgust and anger could characterize the emotion "dislike," happiness could characterize the emotion "like," and neutral could represent "neither like nor dislike." The identified facial expressions agree with the perceived sensory emotions elicited by basic taste solutions. The correlations between hedonic levels and facial expression intensities obtained in this study are in accordance with those reported for Western consumers. © 2017 Institute of Food Technologists®.
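The correlation analysis described above is, at its core, a Pearson correlation between per-sample facial expression intensities and perceived taste intensities or hedonic ratings. A minimal sketch, using invented illustrative numbers rather than the study's measurements:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: mean "disgust" expression intensity at six bitterness
# concentration levels (values invented purely for illustration).
bitterness = [1, 2, 3, 4, 5, 6]
disgust = [0.05, 0.11, 0.20, 0.31, 0.38, 0.50]
print(round(pearson(bitterness, disgust), 3))
```

A coefficient near 1 here would correspond to the "high correlation between perceived bitterness and disgust" reported in the abstract.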

  5. Research on facial expression simulation based on depth image

    Science.gov (United States)

    Ding, Sha-sha; Duan, Jin; Zhao, Yi-wu; Xiao, Bo; Wang, Hao

    2017-11-01

Nowadays, facial expression simulation is widely used in film and television special effects, human-computer interaction and many other fields. Facial expressions are captured with a Kinect camera. An AAM algorithm based on statistical information is employed to detect and track faces, and a 2D regression algorithm is applied to align the feature points. Facial feature points are detected automatically, while the feature points of the 3D cartoon model are marked manually. The aligned feature points are mapped using keyframe techniques. To improve the animation effect, non-feature points are interpolated based on empirical models, and the mapping and interpolation are completed under the constraint of Bézier curves. The feature points on the cartoon face model can thus be driven as the facial expression varies, achieving real-time cartoon facial expression simulation. The experimental results show that the proposed method can accurately simulate facial expressions. Finally, our method is compared with the previous method; the data show that it greatly improves implementation efficiency.
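The Bézier-constrained interpolation step can be sketched as follows: the displacement of a non-feature vertex between the neutral and target keyframes is eased along a cubic Bézier curve. The easing control values below are illustrative assumptions, not the paper's fitted parameters.

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bézier curve at parameter t in [0, 1]."""
    u = 1.0 - t
    return (u ** 3) * p0 + 3 * (u ** 2) * t * p1 + 3 * u * (t ** 2) * p2 + (t ** 3) * p3

def interpolate_vertex(neutral, target, t, ease=(0.0, 0.2, 0.8, 1.0)):
    """Blend a non-feature vertex between keyframes with Bézier easing.

    `neutral` and `target` are (x, y) positions; `ease` holds four scalar
    control values for the easing curve (assumed here for illustration).
    """
    w = cubic_bezier(*ease, t)  # easing weight in [0, 1]
    return tuple(n + w * (g - n) for n, g in zip(neutral, target))

# At t=0 the vertex sits at the neutral pose; at t=1 it reaches the target.
print(interpolate_vertex((0.0, 0.0), (10.0, 4.0), 0.0))
print(interpolate_vertex((0.0, 0.0), (10.0, 4.0), 1.0))
```

Because the easing curve starts at 0 and ends at 1, the interpolated vertex interpolates exactly between the two keyframe poses while moving smoothly in between.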

  6. Estimador de calidad en sistemas de reconocimiento facial

    OpenAIRE

    Espejo Caballero, Daniel

    2015-01-01

The aim of this project is to obtain an estimate of the quality of a facial image, based on the study and extraction of features obtained from facial images.

  7. Does Facial Amimia Impact the Recognition of Facial Emotions? An EMG Study in Parkinson’s Disease

    Science.gov (United States)

    Argaud, Soizic; Delplanque, Sylvain; Houvenaghel, Jean-François; Auffret, Manon; Duprez, Joan; Vérin, Marc; Grandjean, Didier; Sauleau, Paul

    2016-01-01

According to embodied simulation theory, understanding other people's emotions is fostered by facial mimicry. However, studies assessing the effect of facial mimicry on the recognition of emotion remain controversial. In Parkinson's disease (PD), one of the most distinctive clinical features is facial amimia, a reduction in facial expressiveness, but patients also show emotional disturbances. The present study used the pathological model of PD to examine the role of facial mimicry in emotion recognition by investigating EMG responses in PD patients during a facial emotion recognition task (anger, joy, neutral). Our results showed a significant decrease in facial mimicry for joy in PD, essentially linked to the absence of reaction of the zygomaticus major and the orbicularis oculi muscles in response to happy avatars, whereas facial mimicry for expressions of anger was relatively preserved. We also confirmed that PD patients were less accurate in recognizing positive and neutral facial expressions and highlighted a beneficial effect of facial mimicry on the recognition of emotion. We thus provide additional arguments for embodied simulation theory, suggesting that facial mimicry is a potential lever for therapeutic action in PD even if it does not appear to be strictly required for emotion recognition as such. PMID:27467393

  8. Sad Facial Expressions Increase Choice Blindness

    Directory of Open Access Journals (Sweden)

    Yajie Wang

    2018-01-01

Full Text Available Previous studies have discovered a fascinating phenomenon known as choice blindness—individuals fail to detect mismatches between the face they choose and the face replaced by the experimenter. Although previous studies have reported a couple of factors that can modulate the magnitude of choice blindness, the potential effect of facial expression on choice blindness has not yet been explored. Using faces with sad and neutral expressions (Experiment 1) and faces with happy and neutral expressions (Experiment 2) in the classic choice blindness paradigm, the present study investigated the effects of facial expressions on choice blindness. The results showed that the detection rate was significantly lower on sad faces than neutral faces, whereas no significant difference was observed between happy faces and neutral faces. The exploratory analysis of verbal reports found that participants who reported less facial features for sad (as compared to neutral) expressions also tended to show a lower detection rate of sad (as compared to neutral) faces. These findings indicated that sad facial expressions increased choice blindness, which might have resulted from inhibition of further processing of the detailed facial features by the less attractive sad expressions (as compared to neutral expressions).

  9. Sad Facial Expressions Increase Choice Blindness.

    Science.gov (United States)

    Wang, Yajie; Zhao, Song; Zhang, Zhijie; Feng, Wenfeng

    2017-01-01

    Previous studies have discovered a fascinating phenomenon known as choice blindness-individuals fail to detect mismatches between the face they choose and the face replaced by the experimenter. Although previous studies have reported a couple of factors that can modulate the magnitude of choice blindness, the potential effect of facial expression on choice blindness has not yet been explored. Using faces with sad and neutral expressions (Experiment 1) and faces with happy and neutral expressions (Experiment 2) in the classic choice blindness paradigm, the present study investigated the effects of facial expressions on choice blindness. The results showed that the detection rate was significantly lower on sad faces than neutral faces, whereas no significant difference was observed between happy faces and neutral faces. The exploratory analysis of verbal reports found that participants who reported less facial features for sad (as compared to neutral) expressions also tended to show a lower detection rate of sad (as compared to neutral) faces. These findings indicated that sad facial expressions increased choice blindness, which might have resulted from inhibition of further processing of the detailed facial features by the less attractive sad expressions (as compared to neutral expressions).

  10. Contactless measurement of muscles fatigue by tracking facial feature points in a video

    DEFF Research Database (Denmark)

    Irani, Ramin; Nasrollahi, Kamal; Moeslund, Thomas B.

    2014-01-01

their exercises when the level of the fatigue might be dangerous for the patients. The current technology for measuring tiredness, like Electromyography (EMG), requires installing sensors on the body. In some applications, like remote patient monitoring, this might not be possible. To deal with such cases, in this paper we present a contactless method based on computer vision techniques to measure tiredness by detecting, tracking, and analyzing some facial feature points during the exercise. Experimental results on several test subjects, compared against ground truth data, show that the proposed system can properly find the temporal point of tiredness of the muscles when the test subjects are doing physical exercises.
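The analysis step can be sketched as follows: track the vertical position of a facial feature point across frames, and flag the onset of fatigue when short-window motion energy (the sum of squared frame-to-frame displacements, a proxy for facial shaking) exceeds a threshold. The window size and threshold below are illustrative assumptions, not the paper's calibrated values.

```python
def motion_energy(ys, i, win):
    """Sum of squared frame-to-frame displacements in the window ending at frame i."""
    start = max(1, i - win + 1)
    return sum((ys[j] - ys[j - 1]) ** 2 for j in range(start, i + 1))

def fatigue_onset(ys, win=5, thresh=1.0):
    """Return the first frame whose windowed motion energy exceeds thresh."""
    for i in range(1, len(ys)):
        if motion_energy(ys, i, win) > thresh:
            return i
    return None

# Synthetic trajectory: a steady position followed by oscillation (shaking).
trajectory = [0.0] * 20 + [0.5 * (-1) ** k for k in range(20)]
print(fatigue_onset(trajectory))  # onset is detected early in the oscillation
```

In a real pipeline the `ys` sequence would come from a facial landmark tracker running on the video; here it is synthesized so the sketch stays self-contained.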

  11. Facial attractiveness, symmetry and cues of good genes.

    Science.gov (United States)

    Scheib, J E; Gangestad, S W; Thornhill, R

    1999-09-22

    Cues of phenotypic condition should be among those used by women in their choice of mates. One marker of better phenotypic condition is thought to be symmetrical bilateral body and facial features. However, it is not clear whether women use symmetry as the primary cue in assessing the phenotypic quality of potential mates or whether symmetry is correlated with other facial markers affecting physical attractiveness. Using photographs of men's faces, for which facial symmetry had been measured, we found a relationship between women's attractiveness ratings of these faces and symmetry, but the subjects could not rate facial symmetry accurately. Moreover, the relationship between facial attractiveness and symmetry was still observed, even when symmetry cues were removed by presenting only the left or right half of faces. These results suggest that attractive features other than symmetry can be used to assess phenotypic condition. We identified one such cue, facial masculinity (cheek-bone prominence and a relatively longer lower face), which was related to both symmetry and full- and half-face attractiveness.

  12. Facial expression recognition based on improved deep belief networks

    Science.gov (United States)

    Wu, Yao; Qiu, Weigen

    2017-08-01

In order to improve the robustness of facial expression recognition, a method based on Local Binary Patterns (LBP) combined with improved deep belief networks (DBNs) is proposed. The method uses LBP to extract features, and then uses the improved deep belief networks as the detector and classifier of those LBP features, realizing the combination of LBP and improved DBNs for facial expression recognition. On the JAFFE (Japanese Female Facial Expression) database, the recognition rate is significantly improved.
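The LBP feature extraction step is standard and can be sketched in a few lines: each pixel's 8 neighbors are thresholded against the center pixel to form an 8-bit code, and the histogram of codes over a region serves as the feature vector fed to the classifier. (The improved-DBN classifier itself is omitted here.)

```python
def lbp_code(img, r, c):
    """8-bit Local Binary Pattern code for the pixel at (r, c)."""
    center = img[r][c]
    # Neighbors in clockwise order starting at the top-left.
    neighbors = [img[r - 1][c - 1], img[r - 1][c], img[r - 1][c + 1],
                 img[r][c + 1], img[r + 1][c + 1], img[r + 1][c],
                 img[r + 1][c - 1], img[r][c - 1]]
    code = 0
    for bit, value in enumerate(neighbors):
        if value >= center:  # threshold each neighbor against the center
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """256-bin histogram of LBP codes over the interior pixels of a region."""
    hist = [0] * 256
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            hist[lbp_code(img, r, c)] += 1
    return hist

# A flat 4x4 patch: every neighbor equals the center, so every code is 255.
flat = [[7] * 4 for _ in range(4)]
print(lbp_histogram(flat)[255])  # → 4 (the four interior pixels)
```

In practice the face image is divided into cells and the per-cell histograms are concatenated, which preserves coarse spatial layout along with the local texture codes.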

  13. Facial orientation and facial shape in extant great apes: a geometric morphometric analysis of covariation.

    Science.gov (United States)

    Neaux, Dimitri; Guy, Franck; Gilissen, Emmanuel; Coudyzer, Walter; Vignaud, Patrick; Ducrocq, Stéphane

    2013-01-01

The organization of the bony face is complex, its morphology being influenced in part by the rest of the cranium. Characterizing the facial morphological variation and craniofacial covariation patterns in extant hominids is fundamental to the understanding of their evolutionary history. Numerous studies on hominid facial shape have proposed hypotheses concerning the relationship between the anterior facial shape, facial block orientation and basicranial flexion. In this study we test these hypotheses in a sample of adult specimens belonging to three extant hominid genera (Homo, Pan and Gorilla). Intraspecific variation and covariation patterns are analyzed using geometric morphometric methods and multivariate statistics, such as partial least squares on three-dimensional landmark coordinates. Our results indicate significant intraspecific covariation between facial shape, facial block orientation and basicranial flexion. Hominids share similar characteristics in the relationship between anterior facial shape and facial block orientation. Modern humans exhibit a specific pattern in the covariation between anterior facial shape and basicranial flexion. This peculiar feature underscores the role of modern humans' highly-flexed basicranium in the overall integration of the cranium. Furthermore, our results are consistent with the hypothesis of a relationship between the reduction of the value of the cranial base angle and a downward rotation of the facial block in modern humans, and to a lesser extent in chimpanzees.
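The partial least squares step pairs two blocks of centered shape variables and finds the axes that maximize their covariance, i.e. the leading singular vectors of the cross-covariance matrix. A minimal sketch using power iteration, on toy data rather than landmark coordinates:

```python
import math

def center(X):
    """Column-center a list-of-rows matrix."""
    n, p = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(p)]
    return [[row[j] - means[j] for j in range(p)] for row in X]

def cross_covariance(X, Y):
    """p x q cross-covariance matrix between two blocks of variables."""
    Xc, Yc = center(X), center(Y)
    n = len(X)
    return [[sum(Xc[i][a] * Yc[i][b] for i in range(n)) / (n - 1)
             for b in range(len(Y[0]))] for a in range(len(X[0]))]

def first_left_singular_vector(R, iters=200):
    """Dominant left singular vector of R via power iteration on R R^T."""
    p, q = len(R), len(R[0])
    u = [1.0] * p
    for _ in range(iters):
        w = [sum(R[a][b] * u[a] for a in range(p)) for b in range(q)]  # R^T u
        u = [sum(R[a][b] * w[b] for b in range(q)) for a in range(p)]  # R w
        norm = math.sqrt(sum(v * v for v in u))
        u = [v / norm for v in u]
    return u

# Toy blocks: Y covaries only with the first X column, so the dominant
# covariation axis should load entirely on that column.
X = [[1.0, 0.0], [2.0, 0.0], [3.0, 0.0], [4.0, 0.0]]
Y = [[2.0], [4.0], [6.0], [8.0]]
u = first_left_singular_vector(cross_covariance(X, Y))
print([round(abs(v), 3) for v in u])
```

In the geometric morphometric setting, X and Y would hold the Procrustes-aligned landmark coordinates of the two anatomical blocks (e.g. face and basicranium), and the recovered vectors describe their pattern of covariation.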

  14. Spectrum of mucocutaneous, ocular and facial features and delineation of novel presentations in 62 classical Ehlers-Danlos syndrome patients.

    Science.gov (United States)

    Colombi, M; Dordoni, C; Venturini, M; Ciaccio, C; Morlino, S; Chiarelli, N; Zanca, A; Calzavara-Pinton, P; Zoppi, N; Castori, M; Ritelli, M

    2017-12-01

    Classical Ehlers-Danlos syndrome (cEDS) is characterized by marked cutaneous involvement, according to the Villefranche nosology and its 2017 revision. However, the diagnostic flow-chart that prompts molecular testing is still based on experts' opinion rather than systematic published data. Here we report on 62 molecularly characterized cEDS patients with focus on skin, mucosal, facial, and articular manifestations. The major and minor Villefranche criteria, additional 11 mucocutaneous signs and 15 facial dysmorphic traits were ascertained and feature rates compared by sex and age. In our cohort, we did not observe any mandatory clinical sign. Skin hyperextensibility plus atrophic scars was the most frequent combination, whereas generalized joint hypermobility according to the Beighton score decreased with age. Skin was more commonly hyperextensible on elbows, neck, and knees. The sites more frequently affected by abnormal atrophic scarring were knees, face (especially forehead), pretibial area, and elbows. Facial dysmorphism commonly affected midface/orbital areas with epicanthal folds and infraorbital creases more commonly observed in young patients. Our findings suggest that the combination of ≥1 eye dysmorphism and facial/forehead scars may support the diagnosis in children. Minor acquired traits, such as molluscoid pseudotumors, subcutaneous spheroids, and signs of premature skin aging are equally useful in adults. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  15. Geographic variation in chin shape challenges the universal facial attractiveness hypothesis.

    Directory of Open Access Journals (Sweden)

    Zaneta M Thayer

Full Text Available The universal facial attractiveness (UFA) hypothesis proposes that some facial features are universally preferred because they are reliable signals of mate quality. The primary evidence for this hypothesis comes from cross-cultural studies of perceived attractiveness. However, these studies do not directly address patterns of morphological variation at the population level. An unanswered question is therefore: Are universally preferred facial phenotypes geographically invariant, as the UFA hypothesis implies? The purpose of our study is to evaluate this often overlooked aspect of the UFA hypothesis by examining patterns of geographic variation in chin shape. We collected symphyseal outlines from 180 recent human mandibles (90 male, 90 female) representing nine geographic regions. Elliptical Fourier functions analysis was used to quantify chin shape, and principal components analysis was used to compute shape descriptors. In contrast to the expectations of the UFA hypothesis, we found significant geographic differences in male and female chin shape. These findings are consistent with region-specific sexual selection and/or random genetic drift, but not universal sexual selection. We recommend that future studies of facial attractiveness take into consideration patterns of morphological variation within and between diverse human populations.
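The outline-quantification step can be illustrated with a simplified relative of elliptical Fourier analysis: treat the closed outline as a sequence of complex numbers and take the magnitudes of its Fourier harmonics, normalized by the first harmonic for size invariance. (The study uses the full Kuhl-Giardina elliptical Fourier formulation; this complex-DFT sketch only conveys the idea.)

```python
import cmath
import math

def harmonic_amplitudes(points, n_harmonics):
    """Size-normalized Fourier harmonic amplitudes of a closed 2-D outline."""
    n = len(points)
    z = [complex(x, y) for x, y in points]  # outline as complex samples
    amps = []
    for h in range(1, n_harmonics + 1):
        coeff = sum(z[k] * cmath.exp(-2j * math.pi * h * k / n)
                    for k in range(n)) / n
        amps.append(abs(coeff))
    return [a / amps[0] for a in amps]  # divide out size (first harmonic)

# Sanity check: a circle is captured entirely by its first harmonic, so the
# higher normalized amplitudes should vanish.
circle = [(2 * math.cos(2 * math.pi * k / 64), 2 * math.sin(2 * math.pi * k / 64))
          for k in range(64)]
print([round(a, 6) for a in harmonic_amplitudes(circle, 3)])
```

For real chin outlines, the vector of harmonic amplitudes (or the underlying coefficients) becomes the shape descriptor that a principal components analysis then summarizes.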

  16. Contralateral botulinum toxin injection to improve facial asymmetry after acute facial paralysis.

    Science.gov (United States)

    Kim, Jin

    2013-02-01

    The application of botulinum toxin to the healthy side of the face in patients with long-standing facial paralysis has been shown to be a minimally invasive technique that improves facial symmetry at rest and during facial motion. Our experience using botulinum toxin therapy for facial sequelae prompted the idea that botulinum toxin might also be useful in acute cases of facial paralysis to improve facial asymmetry. In cases in which medical or surgical treatment options are limited because of existing medical problems or advanced age, most patients with acute facial palsy are advised to await spontaneous recovery or are informed that no effective intervention exists. The purpose of this study was to evaluate the effect of botulinum toxin treatment for facial asymmetry in 18 patients after acute facial palsy who could not be optimally treated by medical or surgical management because of severe medical or other problems. From 2009 to 2011, 9 patients with Bell's palsy, 5 with herpes zoster oticus, and 4 with traumatic facial palsy (10 men and 8 women; age range, 22-82 yr; mean, 50.8 yr) participated in this study. Botulinum toxin A (Botox; Allergan Incorporated, Irvine, CA, USA) was injected using a tuberculin syringe with a 27-gauge needle. The amount injected per site varied from 2.5 to 3 U, and the total dose used per patient was 32 to 68 U (mean, 47.5 +/- 8.4 U). After administration of a single dose of botulinum toxin A on the nonparalyzed side of 18 patients with acute facial paralysis, marked relief of facial asymmetry was observed in 8 patients within 1 month of injection. Decreased facial asymmetry and strengthened facial function on the paralyzed side led to an increased House-Brackmann (HB) and Sunnybrook (SB) grade within 6 months after injection. Use of botulinum toxin in acute facial palsy is of great value. Such therapy decreases the relative hyperkinesis contralateral to the paralysis, leading to more symmetric function. Especially in patients with medical

  17. Adaptive weighted local textural features for illumination, expression, and occlusion invariant face recognition

    Science.gov (United States)

    Cui, Chen; Asari, Vijayan K.

    2014-03-01

    Biometric features such as fingerprints, iris patterns, and face features help to identify people and restrict access to secure areas by performing advanced pattern analysis and matching. Face recognition is one of the most promising biometric methodologies for human identification in a non-cooperative security environment. However, the recognition results obtained by face recognition systems are affected by several variations that may happen to the patterns in an unrestricted environment. As a result, several algorithms have been developed for extracting different facial features for face recognition. Due to the various possible challenges of data captured under different lighting conditions, viewing angles, facial expressions, and partial occlusions in natural environmental conditions, automatic facial recognition still remains a difficult issue that needs to be resolved. In this paper, we propose a novel approach to tackling some of these issues by analyzing the local textural descriptions for facial feature representation. The textural information is extracted by an enhanced local binary pattern (ELBP) description of all the local regions of the face. The relationship of each pixel with respect to its neighborhood is extracted and employed to calculate the new representation. ELBP reconstructs a much better textural feature extraction vector from an original gray level image in different lighting conditions. The dimensionality of the texture image is reduced by principal component analysis performed on each local face region. Each low dimensional vector representing a local region is then weighted based on the significance of the sub-region. The weight of each sub-region is determined by employing the local variance estimate of the respective region, which represents the significance of the region. The final facial textural feature vector is obtained by concatenating the reduced dimensional weight sets of all the modules (sub-regions) of the face image
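    A minimal sketch of the regional pipeline described above, per-region LBP histograms weighted by a local variance estimate, might look as follows. This is an assumption-laden approximation, not the authors' implementation: a plain 8-neighbour LBP stands in for the enhanced ELBP operator, and the per-region PCA (which requires a training set) is omitted.

```python
import numpy as np

def lbp_image(gray):
    """Plain 8-neighbour LBP codes (a simplification of the paper's ELBP)."""
    c = gray[1:-1, 1:-1]
    codes = np.zeros(c.shape, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = gray[1 + dy:gray.shape[0] - 1 + dy, 1 + dx:gray.shape[1] - 1 + dx]
        codes |= (neigh >= c).astype(np.uint8) << bit
    return codes

def region_features(gray, grid=(4, 4)):
    """Per-region LBP histograms plus local-variance significance weights."""
    codes = lbp_image(gray)
    inner = gray[1:-1, 1:-1]              # same support as the code map
    H, W = codes.shape
    gh, gw = grid
    hists, weights = [], []
    for i in range(gh):
        for j in range(gw):
            ys = slice(i * H // gh, (i + 1) * H // gh)
            xs = slice(j * W // gw, (j + 1) * W // gw)
            h = np.bincount(codes[ys, xs].ravel(), minlength=256).astype(float)
            hists.append(h / h.sum())
            weights.append(inner[ys, xs].astype(float).var())  # significance
    return np.array(hists), np.array(weights)

def face_feature(gray, grid=(4, 4)):
    """Concatenate variance-weighted regional histograms into one vector."""
    hists, w = region_features(gray, grid)
    w = w / (w.sum() + 1e-12)
    return (hists * w[:, None]).ravel()
```

    In the paper each regional vector is additionally PCA-reduced over a training population before weighting; here the raw 256-bin histograms are weighted directly to keep the sketch self-contained.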

  18. Regression-based Multi-View Facial Expression Recognition

    NARCIS (Netherlands)

    Rudovic, Ognjen; Patras, Ioannis; Pantic, Maja

    2010-01-01

    We present a regression-based scheme for multi-view facial expression recognition based on 2D geometric features. We address the problem by mapping facial points (e.g. mouth corners) from non-frontal to frontal view where further recognition of the expressions can be performed using a

  19. Local intensity area descriptor for facial recognition in ideal and noise conditions

    Science.gov (United States)

    Tran, Chi-Kien; Tseng, Chin-Dar; Chao, Pei-Ju; Ting, Hui-Min; Chang, Liyun; Huang, Yu-Jie; Lee, Tsair-Fwu

    2017-03-01

    We propose a local texture descriptor, local intensity area descriptor (LIAD), which is applied for human facial recognition in ideal and noisy conditions. Each facial image is divided into small regions from which LIAD histograms are extracted and concatenated into a single feature vector to represent the facial image. The recognition is performed using a nearest neighbor classifier with histogram intersection and chi-square statistics as dissimilarity measures. Experiments were conducted with LIAD using the ORL database of faces (Olivetti Research Laboratory, Cambridge), the Face94 face database, the Georgia Tech face database, and the FERET database. The results demonstrated the improvement in accuracy of our proposed descriptor compared to conventional descriptors [local binary pattern (LBP), uniform LBP, local ternary pattern, histogram of oriented gradients, and local directional pattern]. Moreover, the proposed descriptor was less sensitive to noise and had low histogram dimensionality. Thus, it is expected to be a powerful texture descriptor that can be used for various computer vision problems.
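    The matching stage described above, a nearest-neighbour classifier with histogram intersection and chi-square statistics as dissimilarity measures, is straightforward to sketch. The LIAD histograms themselves are taken as given; the helper names below are ours.

```python
import numpy as np

def chi_square(h1, h2, eps=1e-10):
    """Chi-square dissimilarity between two normalized histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def intersection_distance(h1, h2):
    """1 minus histogram intersection; 0 for identical normalized histograms."""
    return 1.0 - np.minimum(h1, h2).sum()

def nn_classify(query, gallery, labels, measure=chi_square):
    """Nearest-neighbour matching with a chosen dissimilarity measure."""
    d = [measure(query, g) for g in gallery]
    return labels[int(np.argmin(d))]
```

    Both measures return 0 for a self-match, so either can be plugged into the same nearest-neighbour loop.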

  20. Spoofing detection on facial images recognition using LBP and GLCM combination

    Science.gov (United States)

    Sthevanie, F.; Ramadhani, K. N.

    2018-03-01

    The challenge for facial-image-based security systems is how to detect facial image falsification such as facial image spoofing. Spoofing occurs when someone tries to pretend to be a registered user in order to obtain illegal access to, and gain advantage from, the protected system. This research implements a facial image spoofing detection method based on analyzing image texture. The proposed method for texture analysis combines the Local Binary Pattern (LBP) and Gray Level Co-occurrence Matrix (GLCM) methods. The experimental results show that spoofing detection using the LBP and GLCM combination achieves a higher detection rate than using either the LBP feature or the GLCM feature alone.
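    A rough sketch of the texture-fusion idea, assuming grayscale input and a precomputed LBP histogram. The GLCM here is computed for a single non-negative pixel offset, and the three summary statistics are common Haralick-style properties, not necessarily the exact feature set used in the paper.

```python
import numpy as np

def glcm(gray, dx=1, dy=0, levels=8):
    """Normalized gray-level co-occurrence matrix for one non-negative offset."""
    g = np.asarray(gray)
    q = (g.astype(float) * levels / (g.max() + 1.0)).astype(int)  # quantize
    H, W = q.shape
    a = q[:H - dy, :W - dx].ravel()        # reference pixels
    b = q[dy:, dx:].ravel()                # neighbours at offset (dy, dx)
    m = np.zeros((levels, levels))
    np.add.at(m, (a, b), 1.0)
    return m / m.sum()

def glcm_props(m):
    """Contrast, energy, and homogeneity of a normalized GLCM."""
    i, j = np.indices(m.shape)
    return np.array([
        np.sum(m * (i - j) ** 2),            # contrast
        np.sum(m ** 2),                      # energy
        np.sum(m / (1.0 + np.abs(i - j))),   # homogeneity
    ])

def spoof_feature(gray, lbp_hist):
    """The fusion idea: concatenate an LBP histogram with GLCM statistics."""
    return np.concatenate([np.asarray(lbp_hist, float), glcm_props(glcm(gray))])
```

    A flat (texture-free) image yields zero contrast and maximal energy and homogeneity, which is the kind of statistic that separates printed or replayed faces from live skin texture.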

  1. Delayed speech development, facial asymmetry, strabismus, and transverse ear lobe creases: a new syndrome?

    OpenAIRE

    Méhes, K

    1993-01-01

    A 4 year 9 month old boy and his 3 year 5 month old sister presented with delayed speech development, facial asymmetry, strabismus, and transverse ear lobe creases. The same features were found in their mother, but the father had no such anomalies. To our knowledge this familial association has not been described before and may represent an autosomal dominant syndrome.

  2. Delineation of facial archetypes by 3d averaging.

    Science.gov (United States)

    Shaweesh, Ashraf I; Thomas, C David L; Bankier, Agnes; Clement, John G

    2004-10-01

    The objective of this study was to investigate the feasibility of creating archetypal 3D faces through computerized 3D facial averaging. A Fiore 3D surface scanner and its software were used to acquire the 3D scans of the faces, while 3D Rugle3 and locally developed software generated the holistic facial averages. 3D facial averages were created from two ethnic groups, European and Japanese, and from children with three genetic disorders (Williams syndrome, achondroplasia and Sotos syndrome) as well as a normal control group. The method involved averaging the corresponding depth (z) coordinates of the 3D facial scans. Compared with other face-averaging techniques, there was no warping or filling-in of spaces by interpolation; however, the facial average lacked colour information. The results showed that as few as 14 faces were sufficient to create an archetypal facial average. In turn this would make it practical to use face averaging as an identification tool in cases where it would be difficult to recruit a larger number of participants. In generating the average, correcting for size differences among faces was shown to adjust the average outlines of the facial features. It is assumed that 3D facial averaging would help in the identification of the ethnic status of persons whose identity may not be known with certainty. In clinical medicine, it would have great potential for the diagnosis of syndromes with distinctive facial features. The system would also assist in the education of clinicians in the recognition and identification of such syndromes.
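    The core averaging step, taking the mean of corresponding depth (z) coordinates across scans, reduces to a one-line operation once the scans share a grid. The sketch below assumes alignment and size correction, which the paper performs beforehand, are already done.

```python
import numpy as np

def average_face(depth_maps):
    """Holistic 3D facial average: mean of corresponding depth (z) values
    across pre-registered scans on a common (x, y) grid. No warping or
    interpolation is performed, matching the paper's averaging step."""
    stack = np.stack([np.asarray(m, dtype=float) for m in depth_maps])
    return stack.mean(axis=0)
```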

  3. Facial Asymmetry-Based Age Group Estimation: Role in Recognizing Age-Separated Face Images.

    Science.gov (United States)

    Sajid, Muhammad; Taj, Imtiaz Ahmad; Bajwa, Usama Ijaz; Ratyal, Naeem Iqbal

    2018-04-23

    Face recognition aims to establish the identity of a person based on facial characteristics. On the other hand, age group estimation is the automatic calculation of an individual's age range based on facial features. Recognizing age-separated face images is still a challenging research problem due to complex aging processes involving different types of facial tissues, skin, fat, muscles, and bones. Certain holistic and local facial features are used to recognize age-separated face images. However, most of the existing methods recognize face images without incorporating the knowledge learned from age group estimation. In this paper, we propose an age-assisted face recognition approach to handle aging variations. Inspired by the observation that facial asymmetry is an age-dependent intrinsic facial feature, we first use asymmetric facial dimensions to estimate the age group of a given face image. Deeply learned asymmetric facial features are then extracted for face recognition using a deep convolutional neural network (dCNN). Finally, we integrate the knowledge learned from the age group estimation into the face recognition algorithm using the same dCNN. This integration results in a significant improvement in the overall performance compared to using the face recognition algorithm alone. The experimental results on two large facial aging datasets, the MORPH and FERET sets, show that the proposed age group estimation based on the face recognition approach yields superior performance compared to some existing state-of-the-art methods. © 2018 American Academy of Forensic Sciences.

  4. Impaired social brain network for processing dynamic facial expressions in autism spectrum disorders

    Directory of Open Access Journals (Sweden)

    Sato Wataru

    2012-08-01

    Background: Impairment of social interaction via facial expressions represents a core clinical feature of autism spectrum disorders (ASD). However, the neural correlates of this dysfunction remain unidentified. Because this dysfunction is manifested in real-life situations, we hypothesized that the observation of dynamic, compared with static, facial expressions would reveal abnormal brain functioning in individuals with ASD. We presented dynamic and static facial expressions of fear and happiness to individuals with high-functioning ASD and to age- and sex-matched typically developing controls and recorded their brain activities using functional magnetic resonance imaging (fMRI). Results: Regional analysis revealed reduced activation of several brain regions in the ASD group compared with controls in response to dynamic versus static facial expressions, including the middle temporal gyrus (MTG), fusiform gyrus, amygdala, medial prefrontal cortex, and inferior frontal gyrus (IFG). Dynamic causal modeling analyses revealed that bi-directional effective connectivity involving the primary visual cortex–MTG–IFG circuit was enhanced in response to dynamic as compared with static facial expressions in the control group. Group comparisons revealed that all these modulatory effects were weaker in the ASD group than in the control group. Conclusions: These results suggest that weak activity and connectivity of the social brain network underlie the impairment in social interaction involving dynamic facial expressions in individuals with ASD.

  5. [Infantile facial paralysis: diagnostic and therapeutic features].

    Science.gov (United States)

    Montalt, J; Barona, R; Comeche, C; Basterra, J

    2000-01-01

    This paper deals with a series of 11 cases of peripheral unilateral facial paralysis affecting children under 15 years. The following parameters are reviewed: age, sex, affected side, origin, morbid antecedents, clinical and neurophysiological explorations (electroneurography with magnetic stimulation), and the clinical course of the cases. These items are assembled in three charts in the article. Clinical assessment of facial mobility is more difficult the younger the patient; nevertheless, electroneurography was possible in the whole group. Clinical restoration was complete, except in one patient with a complicated cholesteatoma. Some aspects concerning the etiology, diagnostic explorations and management of each pediatric case are discussed.

  6. Facial emotion perception impairments in schizophrenia patients with comorbid antisocial personality disorder.

    Science.gov (United States)

    Tang, Dorothy Y Y; Liu, Amy C Y; Lui, Simon S Y; Lam, Bess Y H; Siu, Bonnie W M; Lee, Tatia M C; Cheung, Eric F C

    2016-02-28

    Impairment in facial emotion perception is believed to be associated with aggression. Schizophrenia patients with antisocial features are more impaired in facial emotion perception than their counterparts without these features. However, previous studies did not define the comorbidity of antisocial personality disorder (ASPD) using stringent criteria. We recruited 30 participants with dual diagnoses of ASPD and schizophrenia, 30 participants with schizophrenia and 30 controls. We employed the Facial Emotional Recognition paradigm to measure facial emotion perception, and administered a battery of neurocognitive tests. The Life History of Aggression scale was used. ANOVAs and ANCOVAs were conducted to examine group differences in facial emotion perception, and control for the effect of other neurocognitive dysfunctions on facial emotion perception. Correlational analyses were conducted to examine the association between facial emotion perception and aggression. Patients with dual diagnoses performed worst in facial emotion perception among the three groups. The group differences in facial emotion perception remained significant, even after other neurocognitive impairments were controlled for. Severity of aggression was correlated with impairment in perceiving negative-valenced facial emotions in patients with dual diagnoses. Our findings support the presence of facial emotion perception impairment and its association with aggression in schizophrenia patients with comorbid ASPD. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  7. Details from dignity to decay: facial expression lines in visual arts.

    Science.gov (United States)

    Heckmann, Marc

    2003-10-01

    A number of dermatologic procedures are intended to reduce facial wrinkles. This article is about wrinkles as a statement of art: it explores how frown lines and other facial wrinkles are used in visual art to feature personal peculiarities and accentuate specific feelings or moods. Facial lines as an artistic element emerged with the advanced painting techniques that evolved during the Renaissance and subsequent periods. The skill to paint fine details, the use of light and shadow, and the understanding of space that allowed for a three-dimensional presentation of the human face were essential prerequisites. Painters used facial lines to emphasize respected values such as dignity, determination, diligence, and experience. Facial lines, however, were often accentuated to portray negative features such as anger, fear, aggression, sadness, exhaustion, and decay. This has reinforced a cultural stigma of facial wrinkles expressing not only age but also misfortune, dismay, or even tragedy. Removing wrinkles by dermatologic procedures may therefore aim not only to make people look younger but also to liberate them from unwelcome negative connotations. On the other hand, consideration and care must be taken, especially when interfering with facial muscles, to preserve a natural balance of emotional facial expressions.

  8. Genetics Home Reference: branchio-oculo-facial syndrome

    Science.gov (United States)

    ... face and neck. Its characteristic features include skin anomalies on the neck, malformations of the eyes and ears, and distinctive facial features. "Branchio-" refers to the branchial arches, which are structures in the developing embryo ...

  9. Facial Affect Recognition Using Regularized Discriminant Analysis-Based Algorithms

    Directory of Open Access Journals (Sweden)

    Cheng-Yuan Shih

    2010-01-01

    This paper presents a novel and effective method for facial expression recognition, covering happiness, disgust, fear, anger, sadness, surprise, and the neutral state. The proposed method utilizes a regularized discriminant analysis-based boosting algorithm (RDAB) with effective Gabor features to recognize the facial expressions. An entropy criterion is applied to select the effective Gabor features, a subset of informative and nonredundant Gabor features. The proposed RDAB algorithm uses RDA as a learner in the boosting algorithm. RDA combines the strengths of linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA): it solves the small-sample-size and ill-posed problems suffered by QDA and LDA through a regularization technique. Additionally, this study uses the particle swarm optimization (PSO) algorithm to estimate optimal parameters in RDA. Experimental results demonstrate that our approach can accurately and robustly recognize facial expressions.
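    RDA's regularization can be illustrated with Friedman's classic formulation, which blends each class covariance with the pooled covariance and then shrinks toward a scaled identity. This is a generic sketch with parameter names of our choosing, not the paper's PSO-tuned implementation.

```python
import numpy as np

def rda_covariances(X, y, lam=0.5, gamma=0.1):
    """Friedman-style regularized class covariances.

    lam blends each class covariance with the pooled covariance (lam=1 is
    LDA-like pooling, lam=0 is QDA); gamma shrinks toward a scaled identity,
    fixing ill-posed small-sample estimates."""
    X, y = np.asarray(X, float), np.asarray(y)
    d = X.shape[1]
    pooled = np.cov(X.T, bias=True)
    covs = {}
    for c in np.unique(y):
        Sc = np.cov(X[y == c].T, bias=True)
        S = (1 - lam) * Sc + lam * pooled
        covs[c] = (1 - gamma) * S + gamma * (np.trace(S) / d) * np.eye(d)
    return covs

def rda_predict(x, X, y, lam=0.5, gamma=0.1):
    """Gaussian discriminant score under the regularized covariances."""
    X, y = np.asarray(X, float), np.asarray(y)
    covs = rda_covariances(X, y, lam, gamma)
    best, best_score = None, -np.inf
    for c, S in covs.items():
        mu = X[y == c].mean(axis=0)
        diff = np.asarray(x, float) - mu
        sign, logdet = np.linalg.slogdet(S)
        score = -0.5 * (logdet + diff @ np.linalg.solve(S, diff))
        if score > best_score:
            best, best_score = c, score
    return best
```

    In the paper this learner is embedded in a boosting loop over Gabor features; here it is shown stand-alone on raw vectors.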

  10. Discrimination of gender using facial image with expression change

    Science.gov (United States)

    Kuniyada, Jun; Fukuda, Takahiro; Terada, Kenji

    2005-12-01

    By carrying out marketing research, managers of large department stores or small convenience stores obtain information such as the ratio of male to female visitors and their age groups, and use it to improve their management plans. However, this work is carried out manually and is a heavy burden on small stores. In this paper, the authors propose a method for discriminating gender by extracting differences in facial expression change from color facial images. Many methods already exist in the image-processing field for automatic recognition of individuals using moving or still facial images. However, it is very difficult to discriminate gender under the influence of hairstyle, clothes, and similar factors. Therefore, we propose a method that is unaffected by individual characteristics, such as the size and position of facial parts, by paying attention to expression change. The method requires two facial images: one with an expression and one expressionless. First, the facial surface region and the regions of facial parts such as the eyes, nose, and mouth are extracted from the facial image using hue and saturation information in the HSV color system and emphasized edge information. Next, features are extracted by calculating the rate of change of each facial part generated by the expression change. Finally, the feature values are compared between the input data and the database, and the gender is discriminated. In this paper, experiments were performed on laughing and smiling expressions, and good gender-discrimination results were obtained.

  11. Facial nerve paralysis in children

    Science.gov (United States)

    Ciorba, Andrea; Corazzi, Virginia; Conz, Veronica; Bianchini, Chiara; Aimoni, Claudia

    2015-01-01

    Facial nerve palsy is a condition with several implications, particularly when it occurs in childhood. It represents a serious clinical problem, causing significant concern for clinicians because of its etiology, treatment options and outcome, as well as for young patients and their parents because of the functional and aesthetic consequences. There are several described causes of facial nerve paralysis in children: it can be congenital (due to birth trauma and genetic or malformative diseases) or acquired (due to infective, inflammatory, neoplastic, traumatic or iatrogenic causes). Nonetheless, in approximately 40%-75% of cases, the cause of unilateral facial paralysis remains idiopathic. A careful diagnostic workup and differential diagnosis are particularly recommended in cases of pediatric facial nerve palsy in order to establish the most appropriate treatment, as the therapeutic approach differs in relation to the etiology. PMID:26677445

  12. Analysis of differences between Western and East-Asian faces based on facial region segmentation and PCA for facial expression recognition

    Science.gov (United States)

    Benitez-Garcia, Gibran; Nakamura, Tomoaki; Kaneko, Masahide

    2017-01-01

    Darwin was the first to assert that facial expressions are innate and universal, recognized across all cultures. However, some recent cross-cultural studies have questioned this assumed universality. Therefore, this paper presents an analysis of the differences between Western and East-Asian faces for the six basic expressions (anger, disgust, fear, happiness, sadness and surprise), focused on three individual facial regions: eyes-eyebrows, nose and mouth. The analysis is conducted by applying PCA to two feature extraction methods: appearance-based, using the pixel intensities of facial parts, and geometric-based, handling 125 feature points from the face. Both methods are evaluated using four standard databases for both racial groups, and the results are compared with a cross-cultural human study involving 20 participants. Our analysis reveals that differences between Westerners and East-Asians exist mainly in the eyes-eyebrows and mouth regions for the expressions of fear and disgust, respectively. This work presents important findings for a better design of automatic facial expression recognition systems based on the differences between the two racial groups.

  13. Judgment of Nasolabial Esthetics in Cleft Lip and Palate Is Not Influenced by Overall Facial Attractiveness.

    Science.gov (United States)

    Kocher, Katharina; Kowalski, Piotr; Kolokitha, Olga-Elpis; Katsaros, Christos; Fudalej, Piotr S

    2016-05-01

    To determine whether judgment of nasolabial esthetics in cleft lip and palate (CLP) is influenced by overall facial attractiveness. Experimental study. University of Bern, Switzerland. Seventy-two fused images (36 of boys, 36 of girls) were constructed. Each image comprised (1) the nasolabial region of a treated child with complete unilateral CLP (UCLP) and (2) the external facial features, i.e., the face with masked nasolabial region, of a noncleft child. Photographs of the nasolabial region of six boys and six girls with UCLP representing a wide range of esthetic outcomes, i.e., from very good to very poor appearance, were randomly chosen from a sample of 60 consecutively treated patients in whom nasolabial esthetics had been rated in a previous study. Photographs of external facial features of six boys and six girls without UCLP with various esthetics were randomly selected from patients' files. Eight lay raters evaluated the fused images using a 100-mm visual analogue scale. Method reliability was assessed by reevaluation of fused images after >1 month. A regression model was used to analyze which elements of facial esthetics influenced the perception of nasolabial appearance. Method reliability was good. A regression analysis demonstrated that only the appearance of the nasolabial area affected the esthetic scores of fused images (coefficient = -11.44; P esthetic evaluation can be performed on images of full faces.

  14. Quality of life assessment in facial palsy: validation of the Dutch Facial Clinimetric Evaluation Scale.

    Science.gov (United States)

    Kleiss, Ingrid J; Beurskens, Carien H G; Stalmeier, Peep F M; Ingels, Koen J A O; Marres, Henri A M

    2015-08-01

    This study aimed to validate an existing health-related quality of life questionnaire for patients with facial palsy for implementation in the Dutch language and culture. The Facial Clinimetric Evaluation (FaCE) Scale was translated into Dutch using a forward-backward translation method. A pilot test with the translated questionnaire was performed in 10 patients with facial palsy and 10 normal subjects. Finally, cross-cultural adaptation was accomplished at our outpatient clinic for facial palsy. Analyses of internal consistency, test-retest reliability, construct validity and responsiveness were performed. Ninety-three patients completed the Dutch Facial Clinimetric Evaluation Scale, the Dutch Facial Disability Index, and the Dutch Short Form (36) Health Survey (SF-36). Cronbach's α, representing internal consistency, was 0.800. Test-retest reliability was shown by an intraclass correlation coefficient of 0.737. Correlations with the House-Brackmann score, Sunnybrook score, and the Facial Disability Index physical function and social/well-being function were -0.292, 0.570, 0.713, and 0.575, respectively. The SF-36 domains correlate best with the FaCE social function domain, with the strongest correlation between the two social function domains (r = 0.576). The FaCE score increased statistically significantly in 35 patients receiving botulinum toxin type A (P = 0.042, Student t-test). The domains 'facial comfort' and 'social function' also improved statistically significantly (P = 0.022 and P = 0.046, respectively, Student t-test). The Dutch Facial Clinimetric Evaluation Scale shows good psychometric values and can be implemented in the management of Dutch-speaking patients with facial palsy in the Netherlands. Translation of the instrument into other languages may lead to widespread use, making evaluation and comparison possible among different providers.
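    Cronbach's α, the internal-consistency statistic reported above, has a compact closed form: α = k/(k-1) · (1 - Σ item variances / variance of the total score). A minimal sketch, not tied to the FaCE data:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_subjects, k_items) score matrix."""
    S = np.asarray(scores, dtype=float)
    k = S.shape[1]
    item_var = S.var(axis=0, ddof=1).sum()       # sum of per-item variances
    total_var = S.sum(axis=1).var(ddof=1)        # variance of total scores
    return k / (k - 1) * (1.0 - item_var / total_var)
```

    Perfectly correlated items give α = 1; weakly related items pull α toward 0, which is why values around 0.8, as reported here, are read as good internal consistency.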

  15. Greater perceptual sensitivity to happy facial expression.

    Science.gov (United States)

    Maher, Stephen; Ekstrom, Tor; Chen, Yue

    2014-01-01

    Perception of subtle facial expressions is essential for social functioning; yet it is unclear whether human perceptual sensitivities differ in detecting varying types of facial emotions. Evidence diverges as to whether salient negative versus positive emotions (such as sadness versus happiness) are preferentially processed. Here, we measured perceptual thresholds for the detection of four types of emotion in faces (happiness, fear, anger, and sadness) using psychophysical methods. We also evaluated the association of the perceptual performances with facial morphological changes between neutral and the respective emotion types. Human observers were highly sensitive to happiness compared with the other emotional expressions. Further, this heightened perceptual sensitivity to happy expressions can be attributed largely to the emotion-induced morphological change of a particular facial feature (end-lip raise).

  16. Integration of internal and external facial features in 8- to 10-year-old children and adults.

    Science.gov (United States)

    Meinhardt-Injac, Bozana; Persike, Malte; Meinhardt, Günter

    2014-06-01

    Investigation of whole-part and composite effects in 4- to 6-year-old children gave rise to claims that face perception is fully mature within the first decade of life (Crookes & McKone, 2009). However, only internal features were tested, and the role of external features was not addressed, although external features are highly relevant for holistic face perception (Sinha & Poggio, 1996; Axelrod & Yovel, 2010, 2011). In this study, 8- to 10-year-old children and adults performed a same-different matching task with faces and watches. In this task participants attended to either internal or external features. Holistic face perception was tested using a congruency paradigm, in which face and non-face stimuli either agreed or disagreed in both features (congruent contexts) or just in the attended ones (incongruent contexts). In both age groups, pronounced context congruency and inversion effects were found for faces, but not for watches. These findings indicate holistic feature integration for faces. While inversion effects were highly similar in both age groups, context congruency effects were stronger for children. Moreover, children's face matching performance was generally better when attending to external compared to internal features. Adults tended to perform better when attending to internal features. Our results indicate that both adults and 8- to 10-year-old children integrate external and internal facial features into holistic face representations. However, in children's face representations external features are much more relevant. These findings suggest that face perception is holistic but still not adult-like at the end of the first decade of life. Copyright © 2014 Elsevier B.V. All rights reserved.

  17. Active AU Based Patch Weighting for Facial Expression Recognition

    Directory of Open Access Journals (Sweden)

    Weicheng Xie

    2017-01-01

    Facial expression recognition has many applications in human-computer interaction. Although feature extraction and selection have been well studied, the specificity of each expression variation is not fully explored in state-of-the-art works. In this work, the problem of multiclass expression recognition is converted into triplet-wise expression recognition. For each expression triplet, a new feature optimization model based on action unit (AU) weighting and patch weight optimization is proposed to represent the specificity of the expression triplet. A sparse representation-based approach is then proposed to detect the active AUs of the testing sample for better generalization. The algorithm achieved competitive accuracies of 89.67% and 94.09% on the JAFFE and Cohn–Kanade (CK+) databases, respectively. Better cross-database performance has also been observed.

  18. Facial Nerve Schwannoma of the Cerebellopontine Angle: A Diagnostic Challenge

    OpenAIRE

    Lassaletta, Luis; Roda, José María; Frutos, Remedios; Patrón, Mercedes; Gavilán, Javier

    2002-01-01

    Facial nerve schwannomas are rare lesions that may involve any segment of the facial nerve. Because of their rarity and the lack of a consistent clinical and radiological pattern, facial nerve schwannomas located at the cerebellopontine angle (CPA) and internal auditory canal (IAC) represent a diagnostic and therapeutic challenge for clinicians. In this report, a case of a CPA/IAC facial nerve schwannoma is presented. Contemporary diagnosis and management of this rare lesion are analyzed.

  19. 5-HTTLPR modulates the recognition accuracy and exploration of emotional facial expressions

    Directory of Open Access Journals (Sweden)

    Sabrina eBoll

    2014-07-01

Full Text Available Individual genetic differences in the serotonin transporter-linked polymorphic region (5-HTTLPR) have been associated with variations in the sensitivity to social and emotional cues as well as altered amygdala reactivity to facial expressions of emotion. Amygdala activation has further been shown to trigger gaze changes towards diagnostically relevant facial features. The current study examined whether altered socio-emotional reactivity in variants of the 5-HTTLPR promoter polymorphism reflects individual differences in attending to diagnostic features of facial expressions. For this purpose, visual exploration of emotional facial expressions was compared between a low (n = 39) and a high (n = 40) 5-HTT expressing group of healthy human volunteers in an eye tracking paradigm. Emotional faces were presented while manipulating the initial fixation such that saccadic changes towards the eyes and towards the mouth could be identified. We found that the low 5-HTT group demonstrated greater emotion classification accuracy than the high 5-HTT group, particularly when faces were presented for a longer duration. No group differences in gaze orientation towards diagnostic facial features could be observed. However, participants in the low 5-HTT group exhibited more and faster fixation changes for certain emotions when faces were presented for a longer duration, and overall face fixation times were reduced for this genotype group. These results suggest that the 5-HTT gene influences social perception by modulating the general vigilance to social cues rather than selectively affecting the pre-attentive detection of diagnostic facial features.

  20. Heartbeat Signal from Facial Video for Biometric Recognition

    DEFF Research Database (Denmark)

    Haque, Mohammad Ahsanul; Nasrollahi, Kamal; Moeslund, Thomas B.

    2015-01-01

Different biometric traits such as face appearance and heartbeat signal from Electrocardiogram (ECG)/Phonocardiogram (PCG) are widely used in human identity recognition. Recent advances in facial video based measurement of cardio-physiological parameters such as heartbeat rate, respiratory rate, and blood volume pressure provide the possibility of extracting the heartbeat signal from facial video instead of using obtrusive ECG or PCG sensors on the body. This paper proposes the Heartbeat Signal from Facial Video (HSFV) as a new biometric trait for human identity recognition, for the first time to the best of our knowledge. Feature extraction from the HSFV is accomplished by employing the Radon transform on a waterfall model of the replicated HSFV. The pairwise Minkowski distances obtained from the Radon image are used as the features. The authentication is accomplished by a decision tree based supervised classifier.
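The feature-extraction pipeline (waterfall model of the replicated signal, Radon transform, pairwise Minkowski distances) can be sketched as below. A synthetic waveform stands in for the heartbeat signal, and a minimal Radon transform is implemented via image rotation; the actual HSFV extraction from facial video is not shown.

```python
import numpy as np
from scipy.ndimage import rotate
from scipy.spatial.distance import minkowski

# Toy stand-in for the heartbeat signal extracted from facial video.
t = np.linspace(0, 4 * np.pi, 64)
signal = np.sin(t) + 0.3 * np.sin(3 * t)

# "Waterfall" model: the signal replicated row by row into an image.
waterfall = np.tile(signal, (64, 1))

# Minimal Radon transform: each projection is the column sum of the
# image rotated by theta degrees.
def radon(image, thetas):
    return np.stack([rotate(image, th, reshape=False).sum(axis=0)
                     for th in thetas])

sinogram = radon(waterfall, thetas=range(0, 180, 30))

# Pairwise Minkowski distances between projections serve as features.
features = [minkowski(sinogram[i], sinogram[j], p=3)
            for i in range(len(sinogram))
            for j in range(i + 1, len(sinogram))]
print(len(features))  # 15 pairwise distances from 6 projections
```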

  1. Non-odontogenic tumors of the facial bones in children and adolescents: role of multiparametric imaging

    International Nuclear Information System (INIS)

    Becker, Minerva; Stefanelli, Salvatore; Poletti, Pierre Alexandre; Merlini, Laura; Rougemont, Anne-Laure

    2017-01-01

Tumors of the pediatric facial skeleton represent a major challenge in clinical practice because they can lead to functional impairment, facial deformation, and long-term disfigurement. Their treatment often requires a multidisciplinary approach, and radiologists play a pivotal role in the diagnosis and management of these lesions. Although rare, pediatric tumors arising in the facial bones comprise a wide spectrum of benign and malignant lesions of osteogenic, fibrogenic, hematopoietic, neurogenic, or epithelial origin. The more common lesions include Langerhans cell histiocytosis and osteoma, while rare lesions include inflammatory myofibroblastic and desmoid tumors; juvenile ossifying fibroma; primary intraosseous lymphoma; Ewing sarcoma; and metastases to the facial bones from neuroblastoma, Ewing sarcoma, or retinoblastoma. This article provides a comprehensive approach for the evaluation of children with non-odontogenic tumors of the facial skeleton. Typical findings are discussed with emphasis on the added value of multimodality multiparametric imaging with computed tomography (CT), magnetic resonance imaging (MRI) with diffusion-weighted imaging (DWI), positron emission tomography CT (PET CT), and PET MRI. Key imaging findings and characteristic histologic features of benign and malignant lesions are reviewed, as is the respective role of each modality for pretherapeutic assessment and post-treatment follow-up. Pitfalls of image interpretation, and how to avoid them, are also addressed. (orig.)

  2. Non-odontogenic tumors of the facial bones in children and adolescents: role of multiparametric imaging

    Energy Technology Data Exchange (ETDEWEB)

    Becker, Minerva; Stefanelli, Salvatore; Poletti, Pierre Alexandre; Merlini, Laura [University of Geneva, Division of Radiology, Department of Imaging and Medical Informatics, Geneva University Hospital, Geneva (Switzerland); Rougemont, Anne-Laure [University of Geneva, Division of Clinical Pathology, Department of Genetic and Laboratory Medicine, Geneva University Hospital, Geneva (Switzerland)

    2017-04-15

Tumors of the pediatric facial skeleton represent a major challenge in clinical practice because they can lead to functional impairment, facial deformation, and long-term disfigurement. Their treatment often requires a multidisciplinary approach, and radiologists play a pivotal role in the diagnosis and management of these lesions. Although rare, pediatric tumors arising in the facial bones comprise a wide spectrum of benign and malignant lesions of osteogenic, fibrogenic, hematopoietic, neurogenic, or epithelial origin. The more common lesions include Langerhans cell histiocytosis and osteoma, while rare lesions include inflammatory myofibroblastic and desmoid tumors; juvenile ossifying fibroma; primary intraosseous lymphoma; Ewing sarcoma; and metastases to the facial bones from neuroblastoma, Ewing sarcoma, or retinoblastoma. This article provides a comprehensive approach for the evaluation of children with non-odontogenic tumors of the facial skeleton. Typical findings are discussed with emphasis on the added value of multimodality multiparametric imaging with computed tomography (CT), magnetic resonance imaging (MRI) with diffusion-weighted imaging (DWI), positron emission tomography CT (PET CT), and PET MRI. Key imaging findings and characteristic histologic features of benign and malignant lesions are reviewed, as is the respective role of each modality for pretherapeutic assessment and post-treatment follow-up. Pitfalls of image interpretation, and how to avoid them, are also addressed. (orig.)

  3. Sensorineural deafness, distinctive facial features, and abnormal cranial bones: a new variant of Waardenburg syndrome?

    Science.gov (United States)

    Gad, Alona; Laurino, Mercy; Maravilla, Kenneth R; Matsushita, Mark; Raskind, Wendy H

    2008-07-15

The Waardenburg syndromes (WS) account for approximately 2% of congenital sensorineural deafness. This heterogeneous group of diseases currently can be categorized into four major subtypes (WS types 1-4) on the basis of characteristic clinical features. Multiple genes have been implicated in WS, and mutations in some genes can cause more than one WS subtype. In addition to eye, hair, and skin pigmentary abnormalities, dystopia canthorum and broad nasal bridge are seen in WS type 1. Mutations in the PAX3 gene are responsible for the condition in the majority of these patients. In addition, mutations in PAX3 have been found in WS type 3 that is distinguished by musculoskeletal abnormalities, and in a family with a rare subtype of WS, craniofacial-deafness-hand syndrome (CDHS), characterized by dysmorphic facial features, hand abnormalities, and absent or hypoplastic nasal and wrist bones. Here we describe a woman who shares some, but not all, features of WS type 3 and CDHS, and who also has abnormal cranial bones. All sinuses were hypoplastic, and the cochleae were small. No sequence alteration in PAX3 was found. These observations broaden the clinical range of WS and suggest there may be genetic heterogeneity even within the CDHS subtype. Copyright 2008 Wiley-Liss, Inc.

  4. Facial expression recognition based on improved local ternary pattern and stacked auto-encoder

    Science.gov (United States)

    Wu, Yao; Qiu, Weigen

    2017-08-01

In order to enhance the robustness of facial expression recognition, we propose a method of facial expression recognition based on an improved Local Ternary Pattern (LTP) combined with a Stacked Auto-Encoder (SAE). The method extracts features with the improved LTP and then uses an improved deep belief network as the detector and classifier of the LTP features. The combination of the improved LTP and the improved deep belief network is thereby realized for facial expression recognition. The recognition rate on the CK+ database is improved significantly.
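A minimal sketch of the LTP encoding the method builds on; the paper's "improved" variant is not specified in the abstract, so this is the standard local ternary pattern, split into the usual upper and lower binary codes.

```python
import numpy as np

def ltp_codes(img, t=5):
    """Local Ternary Pattern: neighbors within +/-t of the center map to 0,
    brighter neighbors to +1, darker to -1; the ternary code is then split
    into 'upper' and 'lower' binary patterns (the standard LTP trick)."""
    img = img.astype(np.int32)
    c = img[1:-1, 1:-1]                       # interior centers
    # 8 neighbors, clockwise from top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    upper = np.zeros_like(c)
    lower = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        upper += (n > c + t).astype(np.int32) << bit
        lower += (n < c - t).astype(np.int32) << bit
    return upper, lower

# One bright neighbor (50) to the right of the center (10): only the
# upper pattern's bit 3 fires.
img = np.array([[10, 10, 10],
                [10, 10, 50],
                [10, 10, 10]], dtype=np.uint8)
u, l = ltp_codes(img)
print(u[0, 0], l[0, 0])  # 8 0
```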

  5. Automated Analysis of Facial Cues from Videos as a Potential Method for Differentiating Stress and Boredom of Players in Games

    Directory of Open Access Journals (Sweden)

    Fernando Bevilacqua

    2018-01-01

Full Text Available Facial analysis is a promising approach to detect emotions of players unobtrusively; however, existing approaches are commonly evaluated in contexts not related to games, or derive facial cues from models not designed for the analysis of emotions during interactions with games. We present a method for automated analysis of facial cues from videos as a potential tool for detecting stress and boredom of players behaving naturally while playing games. Computer vision is used to automatically and unobtrusively extract 7 facial features aimed at detecting the activity of a set of facial muscles. Features are mainly based on the Euclidean distance of facial landmarks and do not rely on predefined facial expressions, training of a model, or the use of facial standards. An empirical evaluation was conducted on video recordings of an experiment involving games as emotion elicitation sources. Results show statistically significant differences in the values of facial features during boring and stressful periods of gameplay for 5 of the 7 features. We believe our approach is more user-tailored, convenient, and better suited for contexts involving games.
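The distance-based features can be illustrated as follows. The landmark names and coordinates are hypothetical, and normalizing by face width is one plausible way to obtain the scale-invariant, training-free cues the method aims for.

```python
import numpy as np

# Hypothetical subset of 2D facial landmarks (x, y), e.g. from a
# face-alignment library; the names and values are illustrative only.
landmarks = {
    "mouth_top":    np.array([50.0, 80.0]),
    "mouth_bottom": np.array([50.0, 95.0]),
    "brow_left":    np.array([35.0, 40.0]),
    "eye_left":     np.array([35.0, 52.0]),
    "face_left":    np.array([10.0, 60.0]),
    "face_right":   np.array([90.0, 60.0]),
}

def dist(a, b):
    """Euclidean distance between two named landmarks."""
    return float(np.linalg.norm(landmarks[a] - landmarks[b]))

# Dividing by face width makes the cues independent of image scale,
# so no per-user training or expression templates are needed.
face_width = dist("face_left", "face_right")
mouth_opening = dist("mouth_top", "mouth_bottom") / face_width
brow_raise = dist("brow_left", "eye_left") / face_width
print(round(mouth_opening, 3), round(brow_raise, 3))  # 0.188 0.15
```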

  6. Eagle's syndrome with facial palsy

    Directory of Open Access Journals (Sweden)

    Mohammed Al-Hashim

    2017-01-01

Full Text Available Eagle's syndrome (ES) is a rare disease in which the styloid process is elongated and compresses adjacent structures. We describe a rare presentation of ES in which the patient presented with facial palsy. Facial palsy as a presentation of ES is very rare; a review of the English literature revealed only one previously reported case. Our case is a 39-year-old male who presented with left facial palsy. He also reported a 9-year history of the classical symptoms of ES. A computed tomography scan with three-dimensional reconstruction confirmed the diagnosis. He was started on conservative management but without significant improvement. Surgical intervention was offered, but the patient refused. It is important for otolaryngologists, dentists, and other specialists who deal with head and neck problems to be able to recognize ES despite its rarity. Although the patient was treated similarly to Bell's palsy, the clinical features and imaging indicate that ES was most likely the cause of his facial palsy.

  7. Facial Expression Recognition using Multiclass Ensemble Least-Square Support Vector Machine

    Science.gov (United States)

    Lawi, Armin; Sya'Rani Machrizzandi, M.

    2018-03-01

Facial expression is one of the behavioral characteristics of human beings. The use of a biometrics technology system with facial expression characteristics makes it possible to recognize a person's mood or emotion. The basic components of a facial expression analysis system are face detection, face image extraction, facial classification, and facial expression recognition. This paper uses the Principal Component Analysis (PCA) algorithm to extract facial features with expression parameters, i.e., happy, sad, neutral, angry, fear, and disgusted. Then Multiclass Ensemble Least-Squares Support Vector Machine (MELS-SVM) is used for the classification process of facial expression. The MELS-SVM model, evaluated on our 185 different expression images of 10 persons, achieved a high accuracy of 99.998% using the RBF kernel.
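A hedged sketch of the PCA-plus-SVM pipeline on synthetic stand-in data; a single RBF-kernel SVM replaces the MELS-SVM ensemble, whose construction the abstract does not detail.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
# Stand-in data: 60 "face images" (flattened 16x16) in 6 expression
# classes; real inputs would be aligned face crops.
classes = ["happy", "sad", "neutral", "angry", "fear", "disgust"]
X, y = [], []
for label in classes:
    proto = rng.normal(size=256)          # class prototype image
    for _ in range(10):
        X.append(proto + 0.1 * rng.normal(size=256))
        y.append(label)
X, y = np.array(X), np.array(y)

# PCA compresses the images to a few expression parameters; an
# RBF-kernel SVM then classifies them.
model = make_pipeline(PCA(n_components=10), SVC(kernel="rbf", gamma="scale"))
model.fit(X, y)
print(model.score(X, y))
```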

  8. Automated facial recognition of manually generated clay facial approximations: Potential application in unidentified persons data repositories.

    Science.gov (United States)

    Parks, Connie L; Monson, Keith L

    2018-01-01

This research examined how accurately 2D images (i.e., photographs) of 3D clay facial approximations were matched to corresponding photographs of the approximated individuals using an objective automated facial recognition system. Irrespective of search filter (i.e., blind, sex, or ancestry) or rank class (R1, R10, R25, and R50) employed, few operationally informative results were observed. In only a single instance of 48 potential match opportunities was a clay approximation matched to a corresponding life photograph within the top 50 images (R50) of a candidate list, even with relatively small gallery sizes created from the application of search filters (e.g., sex or ancestry search restrictions). Increasing the candidate lists to include the top 100 images (R100) resulted in only two additional instances of correct match. Although other untested variables (e.g., approximation method, 2D photographic process, and practitioner skill level) may have impacted the observed results, this study suggests that 2D images of manually generated clay approximations are not readily matched to life photos by automated facial recognition systems. Further investigation is necessary in order to identify the underlying cause(s), if any, of the poor recognition results observed in this study (e.g., potential inferior facial feature detection and extraction). Additional inquiry exploring prospective remedial measures (e.g., stronger feature differentiation) is also warranted, particularly given the prominent use of clay approximations in unidentified persons casework. Copyright © 2017. Published by Elsevier B.V.

  9. Facial Contrast Is a Cross-Cultural Cue for Perceiving Age.

    Science.gov (United States)

    Porcheron, Aurélie; Mauger, Emmanuelle; Soppelsa, Frédérique; Liu, Yuli; Ge, Liezhong; Pascalis, Olivier; Russell, Richard; Morizot, Frédérique

    2017-01-01

    Age is a fundamental social dimension and a youthful appearance is of importance for many individuals, perhaps because it is a relevant predictor of aspects of health, facial attractiveness and general well-being. We recently showed that facial contrast-the color and luminance difference between facial features and the surrounding skin-is age-related and a cue to age perception of Caucasian women. Specifically, aspects of facial contrast decrease with age in Caucasian women, and Caucasian female faces with higher contrast look younger (Porcheron et al., 2013). Here we investigated faces of other ethnic groups and raters of other cultures to see whether facial contrast is a cross-cultural youth-related attribute. Using large sets of full face color photographs of Chinese, Latin American and black South African women aged 20-80, we measured the luminance and color contrast between the facial features (the eyes, the lips, and the brows) and the surrounding skin. Most aspects of facial contrast that were previously found to decrease with age in Caucasian women were also found to decrease with age in the other ethnic groups. Though the overall pattern of changes with age was common to all women, there were also some differences between the groups. In a separate study, individual faces of the 4 ethnic groups were perceived younger by French and Chinese participants when the aspects of facial contrast that vary with age in the majority of faces were artificially increased, but older when they were artificially decreased. Altogether these findings indicate that facial contrast is a cross-cultural cue to youthfulness. Because cosmetics were shown to enhance facial contrast, this work provides some support for the notion that a universal function of cosmetics is to make female faces look younger.

  10. Facial Contrast Is a Cross-Cultural Cue for Perceiving Age

    Directory of Open Access Journals (Sweden)

    Aurélie Porcheron

    2017-07-01

Full Text Available Age is a fundamental social dimension and a youthful appearance is of importance for many individuals, perhaps because it is a relevant predictor of aspects of health, facial attractiveness and general well-being. We recently showed that facial contrast—the color and luminance difference between facial features and the surrounding skin—is age-related and a cue to age perception of Caucasian women. Specifically, aspects of facial contrast decrease with age in Caucasian women, and Caucasian female faces with higher contrast look younger (Porcheron et al., 2013). Here we investigated faces of other ethnic groups and raters of other cultures to see whether facial contrast is a cross-cultural youth-related attribute. Using large sets of full face color photographs of Chinese, Latin American and black South African women aged 20–80, we measured the luminance and color contrast between the facial features (the eyes, the lips, and the brows) and the surrounding skin. Most aspects of facial contrast that were previously found to decrease with age in Caucasian women were also found to decrease with age in the other ethnic groups. Though the overall pattern of changes with age was common to all women, there were also some differences between the groups. In a separate study, individual faces of the 4 ethnic groups were perceived younger by French and Chinese participants when the aspects of facial contrast that vary with age in the majority of faces were artificially increased, but older when they were artificially decreased. Altogether these findings indicate that facial contrast is a cross-cultural cue to youthfulness. Because cosmetics were shown to enhance facial contrast, this work provides some support for the notion that a universal function of cosmetics is to make female faces look younger.
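The facial-contrast measurement reduces to a luminance (or color-channel) difference between a feature region and its surrounding skin. A minimal sketch with hypothetical luminance values; the paper's exact contrast formula may differ from the Michelson-style definition used here.

```python
# Hypothetical mean luminances (0-1) sampled from a photograph:
# one value inside a facial feature (the lips), one from the
# surrounding skin. Real values would be averaged over annotated regions.
lip_luminance = 0.35
skin_luminance = 0.62

# Michelson-style luminance contrast between feature and skin;
# higher values are associated with younger-looking faces.
contrast = (skin_luminance - lip_luminance) / (skin_luminance + lip_luminance)
print(round(contrast, 3))  # 0.278
```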

  11. Facial expression influences face identity recognition during the attentional blink.

    Science.gov (United States)

    Bach, Dominik R; Schmidt-Daffy, Martin; Dolan, Raymond J

    2014-12-01

Emotional stimuli (e.g., negative facial expressions) enjoy prioritized memory access when task relevant, consistent with their ability to capture attention. Whether emotional expression also impacts on memory access when task-irrelevant is important for arbitrating between feature-based and object-based attentional capture. Here, the authors address this question in 3 experiments using an attentional blink task with face photographs as first and second target (T1, T2). They demonstrate reduced neutral T2 identity recognition after angry or happy T1 expression, compared to neutral T1, and this supports attentional capture by a task-irrelevant feature. Crucially, after neutral T1, T2 identity recognition was enhanced and not suppressed when T2 was angry, suggesting that attentional capture by this task-irrelevant feature may be object-based and not feature-based. As an unexpected finding, both angry and happy facial expressions suppressed memory access for competing objects, but only angry facial expressions enjoyed privileged memory access. This could imply that these 2 processes are relatively independent from one another.

  12. Facial Structure Predicts Sexual Orientation in Both Men and Women.

    Science.gov (United States)

    Skorska, Malvina N; Geniole, Shawn N; Vrysen, Brandon M; McCormick, Cheryl M; Bogaert, Anthony F

    2015-07-01

Biological models have typically framed sexual orientation in terms of effects of variation in fetal androgen signaling on sexual differentiation, although other biological models exist. Despite marked sex differences in facial structure, the relationship between sexual orientation and facial structure is understudied. A total of 52 lesbian women, 134 heterosexual women, 77 gay men, and 127 heterosexual men were recruited at a Canadian campus and various Canadian Pride and sexuality events. We found that facial structure differed depending on sexual orientation; substantial variation in sexual orientation was predicted using facial metrics computed by a facial modelling program from photographs of White faces. At the univariate level, lesbian and heterosexual women differed in 17 facial features (out of 63) and four were unique multivariate predictors in logistic regression. Gay and heterosexual men differed in 11 facial features at the univariate level, of which three were unique multivariate predictors. Some, but not all, of the facial metrics differed between the sexes. Lesbian women had noses that were more turned up (also more turned up in heterosexual men), mouths that were more puckered, smaller foreheads, and marginally more masculine face shapes (also in heterosexual men) than heterosexual women. Gay men had more convex cheeks, shorter noses (also in heterosexual women), and foreheads that were more tilted back relative to heterosexual men. Principal components analysis and discriminant function analysis generally corroborated these results. The mechanisms underlying variation in craniofacial structure, both related and unrelated to sexual differentiation, may thus be important in understanding the development of sexual orientation.

  13. Three-Dimensional Facial Adaptation for MPEG-4 Talking Heads

    Directory of Open Access Journals (Sweden)

    Nikos Grammalidis

    2002-10-01

Full Text Available This paper studies a new method for three-dimensional (3D) facial model adaptation and its integration into a text-to-speech (TTS) system. The 3D facial adaptation requires a set of two orthogonal views of the user's face with a number of feature points located on both views. Based on the correspondences of the feature points' positions, a generic face model is deformed nonrigidly, treating every facial part as a separate entity. A cylindrical texture map is then built from the two image views. The generated head models are compared to corresponding models obtained by the commonly used adaptation method that utilizes 3D radial basis functions. The generated 3D models are integrated into a talking head system, which consists of two distinct parts: a multilingual text-to-speech sub-system and an MPEG-4 compliant facial animation sub-system. Support for the Greek language has been added, while preserving lip and speech synchronization.
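The radial-basis-function adaptation used as the comparison baseline can be sketched with SciPy: displacements measured at the feature points are interpolated over all mesh vertices. The points and coordinates below are illustrative; the paper's part-by-part nonrigid deformation is not reproduced.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Feature points located on the generic model, and their measured
# positions on the user's face (values are illustrative).
generic_pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                        [0.0, 1.0, 0.0], [0.0, 0.0, 1.0],
                        [1.0, 1.0, 1.0]])
user_pts = generic_pts * 1.2 + np.array([0.05, 0.0, -0.02])

# RBF interpolation of the displacement field: the thin-plate-spline
# warp reproduces the displacements exactly at the feature points and
# extends them smoothly to the rest of the mesh.
warp = RBFInterpolator(generic_pts, user_pts - generic_pts,
                       kernel="thin_plate_spline")

vertices = np.array([[0.5, 0.5, 0.5], [0.2, 0.8, 0.1]])  # any mesh vertices
adapted = vertices + warp(vertices)
print(adapted.shape)
```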

  14. A facial expression image database and norm for Asian population: a preliminary report

    Science.gov (United States)

    Chen, Chien-Chung; Cho, Shu-ling; Horszowska, Katarzyna; Chen, Mei-Yen; Wu, Chia-Ching; Chen, Hsueh-Chih; Yeh, Yi-Yu; Cheng, Chao-Min

    2009-01-01

We collected 6604 images of 30 models in eight types of facial expression: happiness, anger, sadness, disgust, fear, surprise, contempt, and neutral. Among them, the 406 most representative images from 12 models were rated by more than 200 human raters for perceived emotion category and intensity. Such a large number of emotion categories, models, and raters is sufficient for most serious expression recognition research both in psychology and in computer science. All the models and raters are of Asian background; hence, this database can also be used when cultural background is a concern. In addition, 43 landmarks on each of the 291 rated frontal-view images were identified and recorded. This information should facilitate feature-based research of facial expression. Overall, the diversity in images and richness in information should make our database and norm useful for a wide range of research.

  15. An Improved Surface Simplification Method for Facial Expression Animation Based on Homogeneous Coordinate Transformation Matrix and Maximum Shape Operator

    Directory of Open Access Journals (Sweden)

    Juin-Ling Tseng

    2016-01-01

Full Text Available Facial animation is one of the most popular 3D animation topics researched in recent years. However, when using facial animation, a 3D facial animation model has to be stored. This 3D facial animation model requires many triangles to accurately describe and demonstrate facial expression animation because the face often presents a number of different expressions. Consequently, the costs associated with facial animation have increased rapidly. In an effort to reduce storage costs, researchers have sought to simplify 3D animation models using techniques such as Deformation Sensitive Decimation and Feature Edge Quadric. The studies conducted have examined the problems in the homogeneity of the local coordinate system between different expression models and in the retention of simplified model characteristics. This paper proposes a method that applies Homogeneous Coordinate Transformation Matrix to solve the problem of homogeneity of the local coordinate system and Maximum Shape Operator to detect shape changes in facial animation so as to properly preserve the features of facial expressions. Further, root mean square error and perceived quality error are used to compare the errors generated by different simplification methods in experiments. Experimental results show that, compared with Deformation Sensitive Decimation and Feature Edge Quadric, our method can not only reduce the errors caused by simplification of facial animation, but also retain more facial features.
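A minimal example of the homogeneous-coordinate machinery involved: a single 4x4 matrix carries both rotation and translation, so vertices of different expression models can be expressed in one common frame before simplification. The transform below is illustrative, not the paper's alignment procedure.

```python
import numpy as np

def homogeneous_transform(rotation_deg, translation):
    """4x4 homogeneous matrix: rotation about the z axis, then translation."""
    th = np.radians(rotation_deg)
    T = np.eye(4)
    T[:2, :2] = [[np.cos(th), -np.sin(th)],
                 [np.sin(th),  np.cos(th)]]
    T[:3, 3] = translation
    return T

# A vertex in homogeneous coordinates (x, y, z, 1): one matrix-vector
# product applies rotation and translation at once.
T = homogeneous_transform(90, [1.0, 2.0, 3.0])
vertex = np.array([1.0, 0.0, 0.0, 1.0])
print(np.round(T @ vertex, 6))  # [1. 3. 3. 1.]
```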

  16. Four siblings with distal renal tubular acidosis and nephrocalcinosis, neurobehavioral impairment, short stature, and distinctive facial appearance: a possible new autosomal recessive syndrome.

    Science.gov (United States)

    Faqeih, Eissa; Al-Akash, Samhar I; Sakati, Nadia; Teebi, Prof Ahmad S

    2007-09-01

We report on four siblings (three males, one female) born to first cousin Arab parents with the constellation of distal renal tubular acidosis (RTA), small kidneys, nephrocalcinosis, neurobehavioral impairment, short stature, and distinctive facial features. They presented with early developmental delay with subsequent severe mental, behavioral and social impairment and autistic-like features. Their facial features are unique with prominent cheeks, well-defined philtrum, large bulbous nose, V-shaped upper lip border, full lower lip, open mouth with protruded tongue, and pits on the ear lobule. All had proteinuria, hypercalciuria, hypercalcemia, and normal anion-gap metabolic acidosis. Renal ultrasound examinations revealed small kidneys, with varying degrees of hyperechogenicity and nephrocalcinosis. Additional findings included dilated ventricles and cerebral demyelination on brain imaging studies. Other than distal RTA, common causes of nephrocalcinosis were excluded. The constellation of features in this family likely represents a new autosomal recessive syndrome, providing further evidence of heterogeneity of nephrocalcinosis syndromes. Copyright 2007 Wiley-Liss, Inc.

  17. Support vector machine-based facial-expression recognition method combining shape and appearance

    Science.gov (United States)

    Han, Eun Jung; Kang, Byung Jun; Park, Kang Ryoung; Lee, Sangyoun

    2010-11-01

Facial expression recognition can be widely used for various applications, such as emotion-based human-machine interaction, intelligent robot interfaces, face recognition robust to expression variation, etc. Previous studies have been classified as either shape- or appearance-based recognition. The shape-based method has the disadvantage that the individual variance of facial feature points exists irrespective of similar expressions, which can cause a reduction of the recognition accuracy. The appearance-based method has a limitation in that the textural information of the face is very sensitive to variations in illumination. To overcome these problems, a new facial-expression recognition method is proposed, which combines both shape and appearance information, based on the support vector machine (SVM). This research is novel in the following three ways as compared to previous works. First, the facial feature points are automatically detected by using an active appearance model. From these, the shape-based recognition is performed by using the ratios between the facial feature points based on the facial action coding system. Second, the SVM, which is trained to recognize the same and different expression classes, is proposed to combine two matching scores obtained from the shape- and appearance-based recognitions. Finally, a single SVM is trained to discriminate four different expressions, such as neutral, a smile, anger, and a scream. By determining the expression of the input facial image whose SVM output is at a minimum, the accuracy of the expression recognition is much enhanced. The experimental results showed that the recognition accuracy of the proposed method was better than that of previous research and other fusion methods.
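The score-fusion idea can be sketched as follows: each face pair yields a shape-based and an appearance-based matching score, and an SVM learns the same/different decision in that 2D score space. The score distributions below are synthetic stand-ins, and the paper's two-stage SVM design is simplified to a single classifier.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
# Hypothetical matching scores for face pairs: column 0 from the
# shape-based matcher, column 1 from the appearance-based matcher.
same = rng.normal([0.8, 0.7], 0.1, size=(40, 2))   # same-expression pairs
diff = rng.normal([0.3, 0.4], 0.1, size=(40, 2))   # different-expression pairs
X = np.vstack([same, diff])
y = np.array([1] * 40 + [0] * 40)

# The SVM fuses the two matchers by learning a decision boundary
# in score space instead of thresholding each score separately.
fusion = SVC(kernel="rbf").fit(X, y)
print(fusion.predict([[0.75, 0.72], [0.28, 0.35]]))
```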

  18. FaceWarehouse: a 3D facial expression database for visual computing.

    Science.gov (United States)

    Cao, Chen; Weng, Yanlin; Zhou, Shun; Tong, Yiying; Zhou, Kun

    2014-03-01

We present FaceWarehouse, a database of 3D facial expressions for visual computing applications. We use Kinect, an off-the-shelf RGBD camera, to capture 150 individuals aged 7-80 from various ethnic backgrounds. For each person, we captured the RGBD data of their different expressions, including the neutral expression and 19 other expressions such as mouth-opening, smile, kiss, etc. For every RGBD raw data record, a set of facial feature points on the color image such as eye corners, mouth contour, and the nose tip are automatically localized, and manually adjusted if better accuracy is required. We then deform a template facial mesh to fit the depth data as closely as possible while matching the feature points on the color image to their corresponding points on the mesh. Starting from these fitted face meshes, we construct a set of individual-specific expression blendshapes for each person. These meshes with consistent topology are assembled as a rank-3 tensor to build a bilinear face model with two attributes: identity and expression. Compared with previous 3D facial databases, for every person in our database, there is a much richer matching collection of expressions, enabling depiction of most human facial actions. We demonstrate the potential of FaceWarehouse for visual computing with four applications: facial image manipulation, face component transfer, real-time performance-based facial image animation, and facial animation retargeting from video to image.
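The bilinear model can be sketched as two mode products of a rank-3 data tensor with an identity weight vector and an expression weight vector. The tensor below is random stand-in data, not FaceWarehouse itself.

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy rank-3 data tensor: (vertex coordinates) x identities x expressions.
n_verts, n_ids, n_exprs = 30, 4, 5
core = rng.normal(size=(n_verts, n_ids, n_exprs))

# Bilinear model: a face mesh is generated by contracting the tensor
# with an identity weight vector and an expression weight vector.
w_id = np.array([0.7, 0.1, 0.1, 0.1])          # mostly identity 0
w_expr = np.array([0.0, 1.0, 0.0, 0.0, 0.0])   # select expression 1

mesh = np.einsum("vie,i,e->v", core, w_id, w_expr)
print(mesh.shape)  # (30,)
```

Varying `w_expr` while holding `w_id` fixed animates one person's expressions; varying `w_id` transfers an expression across identities.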

  19. Body size and allometric variation in facial shape in children.

    Science.gov (United States)

    Larson, Jacinda R; Manyama, Mange F; Cole, Joanne B; Gonzalez, Paula N; Percival, Christopher J; Liberton, Denise K; Ferrara, Tracey M; Riccardi, Sheri L; Kimwaga, Emmanuel A; Mathayo, Joshua; Spitzmacher, Jared A; Rolian, Campbell; Jamniczky, Heather A; Weinberg, Seth M; Roseman, Charles C; Klein, Ophir; Lukowiak, Ken; Spritz, Richard A; Hallgrimsson, Benedikt

    2018-02-01

Morphological integration, or the tendency for covariation, is commonly seen in complex traits such as the human face. The effects of growth on shape, or allometry, represent a ubiquitous but poorly understood axis of integration. We address the extent to which age and measures of size converge on a single pattern of allometry for human facial shape. Our study is based on two large cross-sectional cohorts of children, one from Tanzania and the other from the United States (N = 7,173). We employ 3D facial imaging and geometric morphometrics to relate facial shape to age and anthropometric measures. The two populations differ significantly in facial shape, but the magnitude of this difference is small relative to the variation within each group. Allometric variation for facial shape is similar in both populations, representing a small but significant proportion of total variation in facial shape. Different measures of size are associated with overlapping but statistically distinct aspects of shape variation. Only half of the size-related variation in facial shape can be explained by the first principal component of four size measures and age, while the remainder associates distinctly with individual measures. Allometric variation in the human face is complex and should not be regarded as a singular effect. This finding has important implications for how size is treated in studies of human facial shape and for the developmental basis of allometric variation more generally. © 2017 Wiley Periodicals, Inc.
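A toy illustration of why a single size axis explains only part of size-related shape variation: four correlated size measures share one growth factor, but a shape variable that also loads on an individual measure is only partly captured by their first principal component. All data below are synthetic; the measures and loadings are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500
# Hypothetical size measures (e.g. height, weight, head circumference,
# centroid size) driven by one latent growth factor plus independent noise.
growth = rng.normal(size=n)
sizes = np.outer(growth, [1.0, 0.9, 0.8, 0.85]) + 0.3 * rng.normal(size=(n, 4))

# First principal component of the standardized size measures.
Z = (sizes - sizes.mean(0)) / sizes.std(0)
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
pc1 = Z @ Vt[0]

# A toy "shape variable" loading on PC1 and on one individual measure,
# mimicking the paper's finding that PC1 captures only part of allometry.
shape = 0.7 * pc1 + 0.7 * Z[:, 2] + rng.normal(size=n)

r2 = np.corrcoef(pc1, shape)[0, 1] ** 2
print(round(r2, 2))
```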

  20. Facial Expression Generation from Speaker's Emotional States in Daily Conversation

    Science.gov (United States)

    Mori, Hiroki; Ohshima, Koh

    A framework for generating facial expressions from emotional states in daily conversation is described. It provides a mapping between emotional states and facial expressions, where the former are represented by vectors with psychologically defined abstract dimensions and the latter are coded by the Facial Action Coding System. To obtain the mapping, parallel data with rated emotional states and facial expressions were collected for utterances of a female speaker, and a neural network was trained on these data. The effectiveness of the proposed method was verified by a subjective evaluation test: the Mean Opinion Score for the suitability of the generated facial expressions was 3.86 for the speaker, close to that of hand-made facial expressions.
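The mapping this record describes is, at its core, a small neural network from a low-dimensional emotion vector to action-unit intensities. The sketch below uses random (untrained) weights and assumed dimensionalities purely for illustration; in the paper the weights are learned from the rated parallel data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 3 emotion dimensions (e.g. pleasantness, arousal,
# dominance) mapped to intensities of 5 assumed action units.
W1 = rng.normal(scale=0.5, size=(3, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 5))   # hidden -> AU outputs
b2 = np.zeros(5)

def emotion_to_aus(e):
    """Forward pass of a small MLP: emotion vector -> AU intensities in (0, 1)."""
    h = np.tanh(e @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid keeps outputs bounded

aus = emotion_to_aus(np.array([0.8, 0.3, -0.2]))
print(np.round(aus, 3))
```

Training would replace the random weights by fitting the network to (emotion rating, FACS coding) pairs.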

  1. Stability of Facial Affective Expressions in Schizophrenia

    Directory of Open Access Journals (Sweden)

    H. Fatouros-Bergman

    2012-01-01

    Thirty-two video-recorded interviews were conducted by two interviewers with eight patients diagnosed with schizophrenia. Each patient was interviewed four times: three weekly interviews by the first interviewer and one additional interview by the second interviewer. Sixty-four selected sequences in which the patients were speaking about psychotic experiences were scored for facial affective behaviour with the Emotion Facial Action Coding System (EMFACS). In accordance with previous research, the results show that patients diagnosed with schizophrenia express negative facial affectivity. Facial affective behaviour seems not to be dependent on temporality, since within-subjects ANOVA revealed no substantial changes in the amount of affect displayed across the weekly interview occasions. Whereas previous studies found contempt to be the most frequent affect in patients, in the present material disgust was as common, but depended on the interviewer. The results suggest that facial affectivity in these patients is dominated by the negative emotions of disgust and, to a lesser extent, contempt, and that this is a fairly stable feature.

  2. Binary pattern analysis for 3D facial action unit detection

    NARCIS (Netherlands)

    Sandbach, Georgia; Zafeiriou, Stefanos; Pantic, Maja

    2012-01-01

    In this paper we propose new binary pattern features for use in the problem of 3D facial action unit (AU) detection. Two representations of 3D facial geometries are employed, the depth map and the Azimuthal Projection Distance Image (APDI). To these the traditional Local Binary Pattern is applied,

  3. Towards Real-Time Facial Landmark Detection in Depth Data Using Auxiliary Information

    Directory of Open Access Journals (Sweden)

    Connah Kendrick

    2018-06-01

    Modern facial motion capture systems employ a two-pronged approach for capturing and rendering facial motion. Visual (2D) data are used for tracking facial features and predicting facial expression, whereas depth (3D) data are used to build a series of expressions on 3D face models. An issue with modern research approaches is the use of a single data stream that provides little indication of the 3D facial structure. We compare and analyse the performance of Convolutional Neural Networks (CNNs) using visual, depth and merged data to identify facial features in real time using a depth sensor. First, we review facial landmarking algorithms and their datasets for depth data. We address the limitations of the current datasets by introducing the Kinect One Expression Dataset (KOED). We then propose the use of CNNs on single and merged data streams for facial landmark detection. We contribute to existing work by performing a full evaluation of which streams are the most effective for facial landmarking. Furthermore, we improve upon existing work by extending the networks to predict 3D landmarks in real time, with additional observations on the impact of using 2D landmarks as auxiliary information. We evaluate performance using the Mean Square Error (MSE) and Mean Average Error (MAE). We observe that the single data stream predicts accurate facial landmarks on depth data when auxiliary information is used to train the network. The code and dataset used in this paper will be made available.
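The MSE and MAE metrics this record uses for landmark evaluation reduce to simple expressions over predicted and ground-truth coordinates, e.g.:

```python
import numpy as np

def landmark_errors(pred, true):
    """MSE and MAE between predicted and ground-truth 3D landmarks.

    pred, true: arrays of shape (n_landmarks, 3).
    """
    diff = pred - true
    mse = np.mean(diff ** 2)        # mean squared coordinate error
    mae = np.mean(np.abs(diff))     # mean absolute coordinate error
    return mse, mae

# Two toy landmarks, slightly perturbed predictions.
true = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
pred = np.array([[0.1, 0.0, 0.0], [1.0, 0.8, 1.0]])
mse, mae = landmark_errors(pred, true)
print(f"MSE={mse:.4f}  MAE={mae:.4f}")   # → MSE=0.0083  MAE=0.0500
```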

  4. Facial Onset Sensory and Motor Neuronopathy: Further Evidence for a TDP-43 Proteinopathy

    Directory of Open Access Journals (Sweden)

    Besa Ziso

    2015-04-01

    Three patients with the clinical and investigative features of facial onset sensory and motor neuronopathy (FOSMN) syndrome are presented, one of whom underwent post-mortem examination. This showed TDP-43-positive inclusions in the bulbar and spinal motor neurones as well as in the trigeminal nerve nuclei, consistent with a neurodegenerative pathogenesis. These data support the idea that at least some FOSMN cases fall within the spectrum of the TDP-43 proteinopathies and represent a focal form of this pathology.

  5. Facial Expression Recognition via Non-Negative Least-Squares Sparse Coding

    Directory of Open Access Journals (Sweden)

    Ying Chen

    2014-05-01

    Sparse coding is an active research subject in signal processing, computer vision, and pattern recognition. This paper presents a novel method of facial expression recognition via non-negative least-squares (NNLS) sparse coding, which is used to form a facial expression classifier. To test the performance of the presented method, local binary patterns (LBP) and raw pixels are extracted for facial feature representation. Facial expression recognition experiments are conducted on the Japanese Female Facial Expression (JAFFE) database. Compared with other widely used methods such as linear support vector machines (SVM), the sparse representation-based classifier (SRC), the nearest subspace classifier (NSC), K-nearest neighbor (KNN) and radial basis function neural networks (RBFNN), the experimental results indicate that the presented NNLS method performs better on facial expression recognition tasks.
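An NNLS sparse-coding classifier of the kind this record describes codes a test sample over a dictionary of training features under a non-negativity constraint and assigns the class whose atoms give the smallest reconstruction residual. The sketch below substitutes a simple projected-gradient solver for a dedicated NNLS routine and uses a toy two-class dictionary, not JAFFE features:

```python
import numpy as np

def nnls_code(D, y, iters=500, lr=None):
    """Non-negative least-squares coding of y over dictionary D
    via projected gradient descent (a stand-in for a dedicated NNLS solver)."""
    if lr is None:
        lr = 1.0 / np.linalg.norm(D.T @ D, 2)   # step from the Lipschitz constant
    x = np.zeros(D.shape[1])
    for _ in range(iters):
        x -= lr * (D.T @ (D @ x - y))            # gradient step on ||Dx - y||^2
        x = np.maximum(x, 0.0)                   # project onto the non-negative orthant
    return x

def classify(D, labels, y):
    """Assign y to the class whose atoms yield the smallest residual."""
    x = nnls_code(D, y)
    best, best_res = None, np.inf
    for c in set(labels):
        mask = np.array([l == c for l in labels])
        res = np.linalg.norm(y - D[:, mask] @ x[mask])
        if res < best_res:
            best, best_res = c, res
    return best

# Toy dictionary: two atoms per hypothetical "expression" class.
D = np.array([[1.0, 0.9, 0.0, 0.1],
              [0.0, 0.1, 1.0, 0.9]])
labels = ["happy", "happy", "sad", "sad"]
print(classify(D, labels, np.array([0.95, 0.05])))   # → happy
```

In practice the dictionary columns would be LBP or raw-pixel feature vectors of the training images.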

  6. Cues of fatigue: effects of sleep deprivation on facial appearance.

    Science.gov (United States)

    Sundelin, Tina; Lekander, Mats; Kecklund, Göran; Van Someren, Eus J W; Olsson, Andreas; Axelsson, John

    2013-09-01

    To investigate the facial cues by which one recognizes that someone is sleep deprived versus not sleep deprived. Experimental laboratory study. Karolinska Institutet, Stockholm, Sweden. Forty observers (20 women, mean age 25 ± 5 y) rated 20 facial photographs with respect to fatigue, 10 facial cues, and sadness. The stimulus material consisted of 10 individuals (five women) photographed at 14:30 after normal sleep and after 31 h of sleep deprivation following a night with 5 h of sleep. Ratings of fatigue, fatigue-related cues, and sadness in facial photographs. The faces of sleep-deprived individuals were perceived as having more hanging eyelids, redder eyes, more swollen eyes, darker circles under the eyes, paler skin, more wrinkles/fine lines, and more droopy corners of the mouth (effects ranging from b = +3 ± 1 to b = +15 ± 1 mm on 100-mm visual analog scales); other cues were neither affected by sleep deprivation nor associated with judgements of fatigue. In addition, sleep-deprived individuals looked sadder than after normal sleep, and sadness was related to looking fatigued. The results show that sleep deprivation affects features relating to the eyes, mouth, and skin, and that these features function as cues of sleep loss to other people. Because these facial regions are important in the communication between humans, facial cues of sleep deprivation and fatigue may carry social consequences for the sleep-deprived individual in everyday life.

  7. Toward a universal, automated facial measurement tool in facial reanimation.

    Science.gov (United States)

    Hadlock, Tessa A; Urban, Luke S

    2012-01-01

    To describe a highly quantitative facial function-measuring tool that yields accurate, objective measures of facial position in significantly less time than existing methods. Facial Assessment by Computer Evaluation (FACE) software was designed for facial analysis. Its outputs report the static facial landmark positions and dynamic facial movements relevant in facial reanimation. Fifty individuals underwent facial movement analysis using Photoshop-based measurements and the new software; comparisons of agreement and efficiency were made. Comparisons were also made between individuals with normal facial animation and patients with paralysis to gauge sensitivity to abnormal movements. Facial measurements made with FACE software matched Photoshop-based measures at rest and during expressions, and the automated assessments required significantly less time. FACE measurements easily revealed differences between individuals with normal facial animation and patients with facial paralysis. FACE software produces accurate measurements of facial landmarks and facial movements and is sensitive to paralysis. Given its efficiency, it serves as a useful tool in the clinical setting for zonal facial movement analysis in comprehensive facial nerve rehabilitation programs.

  8. Perception of facial expressions produced by configural relations

    Directory of Open Access Journals (Sweden)

    V A Barabanschikov

    2010-06-01

    The authors discuss the problem of perception of facial expressions produced by configural features. Experimentally identified configural features were found to influence the perception of emotional expression in a subjectively emotionless face. Classical results by E. Brunswik on the perception of schematic faces are partly confirmed.

  9. The fate of facial asymmetry after surgery for "muscular torticollis" in early childhood

    Directory of Open Access Journals (Sweden)

    Dinesh Kittur

    2016-01-01

    Aims and Objectives: To study whether the facial features return to normal after surgery for muscular torticollis performed in early childhood. Materials and Methods: This is a long-term study of the fate of facial asymmetry in four children who underwent operation for muscular torticollis in early childhood. All the patients presented late, i.e., after the age of 4 years, with a scarred sternomastoid and plagiocephaly, so conservative management with physiotherapy was not considered. All the patients had an x-ray of the cervical spine and eye and dental checkups before a diagnosis of muscular torticollis was made. A preoperative photograph of the patient's face was taken to counsel the parents about the secondary effect of the short sternomastoid on facial features and the need for surgery. After division of the sternomastoid muscle and release of the cervical fascia when indicated, the head was maintained in a hyperextended position supported by sand bags for three days. Gradual physiotherapy was then started, followed by wearing of a Minerva collar for the maximum possible period of time in each 24 h. Physiotherapy was continued three times a day until the range of movements of the head returned to normal. During follow-up, serial photographs were taken to note the changes in the facial features. Results: In all four patients, the asymmetry of the face was corrected and the facial features returned to normal. Conclusion: Most of the facial asymmetry is corrected in the first two years after surgery. By adolescence, the face returns to normal.

  10. Novel Noninvasive Brain Disease Detection System Using a Facial Image Sensor

    Directory of Open Access Journals (Sweden)

    Ting Shu

    2017-12-01

    Brain disease, including any condition or disability that affects the brain, is fast becoming a leading cause of death. Traditional diagnostic methods for brain disease are time-consuming, inconvenient and not patient-friendly. As more and more individuals undergo examinations to determine whether they suffer from any form of brain disease, developing noninvasive, efficient, and patient-friendly detection systems will be beneficial. Therefore, in this paper, we propose a novel noninvasive brain disease detection system based on the analysis of facial colors. The system consists of four components. A facial image is first captured through a specialized sensor, and four facial key blocks are then located automatically in the various facial regions. Color features are extracted from each block to form a feature vector for classification via the Probabilistic Collaborative-based Classifier. To thoroughly test the system and its performance, seven facial key block combinations were evaluated. The best result was achieved using the second facial key block, for which the Probabilistic Collaborative-based Classifier proved the most suitable. The overall performance of the proposed system achieves an accuracy of 95%, a sensitivity of 94.33%, a specificity of 95.67%, and an average processing time (for one sample) of <1 min at brain disease detection.
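The reported accuracy, sensitivity and specificity follow the standard confusion-matrix definitions. The counts below are hypothetical, chosen only so the resulting rates reproduce the figures quoted in the abstract:

```python
def detection_metrics(tp, fn, tn, fp):
    """Accuracy, sensitivity and specificity from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)               # true-positive rate
    specificity = tn / (tn + fp)               # true-negative rate
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return accuracy, sensitivity, specificity

# Hypothetical counts (300 positive, 300 negative samples assumed):
acc, sens, spec = detection_metrics(tp=283, fn=17, tn=287, fp=13)
print(f"accuracy={acc:.2%} sensitivity={sens:.2%} specificity={spec:.2%}")
# → accuracy=95.00% sensitivity=94.33% specificity=95.67%
```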

  11. Idiopathic ophthalmodynia and idiopathic rhinalgia: two topographic facial pain syndromes.

    Science.gov (United States)

    Pareja, Juan A; Cuadrado, María L; Porta-Etessam, Jesús; Fernández-de-las-Peñas, César; Gili, Pablo; Caminero, Ana B; Cebrián, José L

    2010-09-01

    To describe 2 topographic facial pain conditions with the pain clearly localized in the eye (idiopathic ophthalmodynia) or in the nose (idiopathic rhinalgia), and to propose their distinction from persistent idiopathic facial pain. Persistent idiopathic facial pain, burning mouth syndrome, atypical odontalgia, and facial arthromyalgia are idiopathic facial pain syndromes that have been separated according to topographical criteria. Still, some other facial pain syndromes might have been veiled under the broad term of persistent idiopathic facial pain. Over a 10-year period we studied all patients referred to our neurological clinic because of facial pain of unknown etiology that deviated from all well-characterized facial pain syndromes. In a group of patients we identified 2 consistent clinical pictures with pain precisely located either in the eye (n=11) or in the nose (n=7). Clinical features resembled those of other localized idiopathic facial syndromes, the key difference lying in the topographic distribution of the pain. Both idiopathic ophthalmodynia and idiopathic rhinalgia seem to be specific pain syndromes with a distinctive location, and may deserve nosologic status just as other focal pain syndromes of the face do. Whether all such focal syndromes are topographic variants of persistent idiopathic facial pain or independent disorders remains a controversial issue.

  12. The Emotional Modulation of Facial Mimicry: A Kinematic Study

    Directory of Open Access Journals (Sweden)

    Antonella Tramacere

    2018-01-01

    It is well established that the observation of emotional facial expressions induces facial mimicry responses in the observer. However, how the interaction between the emotional and motor components of facial expressions modulates the motor behavior of the perceiver is still unknown. We developed a kinematic experiment to evaluate the effect of different oro-facial expressions on the perceiver's face movements. Participants were asked to perform two movements, i.e., lip stretching and lip protrusion, in response to the observation of four meaningful (i.e., smile, angry-mouth, kiss, and spit) and two meaningless mouth gestures. All the stimuli were characterized by different motor patterns (mouth aperture or mouth closure). Response times and kinematic parameters of the movements (amplitude, duration, and mean velocity) were recorded and analyzed. The results evidenced a dissociated effect on reaction times and movement kinematics. We found shorter reaction times when a mouth movement was preceded by the observation of a meaningful and motorically congruent oro-facial gesture, in line with the facial mimicry effect. On the contrary, during execution, the perception of a smile was associated with facilitation, in terms of shorter duration and higher velocity, of the incongruent movement, i.e., lip protrusion. The same effect resulted in response to kiss and spit, which significantly facilitated the execution of lip stretching. We call this phenomenon the facial mimicry reversal effect, intended as the overturning of the effect normally observed during facial mimicry. In general, the findings show that both the motor features and the types of emotional oro-facial gestures (conveying positive or negative valence) affect the kinematics of subsequent mouth movements at different levels: while congruent motor features facilitate a general motor response, motor execution can be sped up by gestures that are motorically incongruent with the observed one. Moreover, valence

  13. The Emotional Modulation of Facial Mimicry: A Kinematic Study.

    Science.gov (United States)

    Tramacere, Antonella; Ferrari, Pier F; Gentilucci, Maurizio; Giuffrida, Valeria; De Marco, Doriana

    2017-01-01

    It is well-established that the observation of emotional facial expression induces facial mimicry responses in the observers. However, how the interaction between emotional and motor components of facial expressions can modulate the motor behavior of the perceiver is still unknown. We have developed a kinematic experiment to evaluate the effect of different oro-facial expressions on perceiver's face movements. Participants were asked to perform two movements, i.e., lip stretching and lip protrusion, in response to the observation of four meaningful (i.e., smile, angry-mouth, kiss, and spit) and two meaningless mouth gestures. All the stimuli were characterized by different motor patterns (mouth aperture or mouth closure). Response Times and kinematics parameters of the movements (amplitude, duration, and mean velocity) were recorded and analyzed. Results evidenced a dissociated effect on reaction times and movement kinematics. We found shorter reaction time when a mouth movement was preceded by the observation of a meaningful and motorically congruent oro-facial gesture, in line with facial mimicry effect. On the contrary, during execution, the perception of smile was associated with the facilitation, in terms of shorter duration and higher velocity of the incongruent movement, i.e., lip protrusion. The same effect resulted in response to kiss and spit that significantly facilitated the execution of lip stretching. We called this phenomenon facial mimicry reversal effect, intended as the overturning of the effect normally observed during facial mimicry. In general, the findings show that both motor features and types of emotional oro-facial gestures (conveying positive or negative valence) affect the kinematics of subsequent mouth movements at different levels: while congruent motor features facilitate a general motor response, motor execution could be speeded by gestures that are motorically incongruent with the observed one. Moreover, valence effect depends on

  14. Aetiological Profile of Facial Nerve Palsy

    African Journals Online (AJOL)


    Background: Facial nerve abnormalities represent a broad spectrum of lesions commonly seen by the otolaryngologist. The aim of this paper is to highlight the aetiological profile of facial nerve palsy. Methods: A retrospective study of patients with facial nerve palsy seen in the Ear, Nose and Throat clinic over 5 years.

  15. Optimal Geometrical Set for Automated Marker Placement to Virtualized Real-Time Facial Emotions.

    Directory of Open Access Journals (Sweden)

    Vasanthan Maruthapillai

    In recent years, real-time face recognition has been a major topic of interest in developing intelligent human-machine interaction systems. Over the past several decades, researchers have proposed different algorithms for facial expression recognition, but there has been little focus on detection in real-time scenarios. The present work proposes a new algorithmic method of automated marker placement used to classify six facial expressions: happiness, sadness, anger, fear, disgust, and surprise. Emotional facial expressions were captured using a webcam, while the proposed algorithm placed a set of eight virtual markers on each subject's face. Facial feature extraction methods, including marker distance (distance from each marker to the center of the face) and change in marker distance (change in distance between the original and new marker positions), were used to extract three statistical features (mean, variance, and root mean square) from the real-time video sequence. The initial position of each marker was subjected to the optical flow algorithm for marker tracking with each emotional facial expression. Finally, the extracted statistical features were mapped into corresponding emotional facial expressions using two simple non-linear classifiers, K-nearest neighbor and probabilistic neural network. The results indicate that the proposed automated marker placement algorithm effectively placed eight virtual markers on each subject's face and gave a maximum mean emotion classification rate of 96.94% using the probabilistic neural network.

  16. Optimal Geometrical Set for Automated Marker Placement to Virtualized Real-Time Facial Emotions.

    Science.gov (United States)

    Maruthapillai, Vasanthan; Murugappan, Murugappan

    2016-01-01

    In recent years, real-time face recognition has been a major topic of interest in developing intelligent human-machine interaction systems. Over the past several decades, researchers have proposed different algorithms for facial expression recognition, but there has been little focus on detection in real-time scenarios. The present work proposes a new algorithmic method of automated marker placement used to classify six facial expressions: happiness, sadness, anger, fear, disgust, and surprise. Emotional facial expressions were captured using a webcam, while the proposed algorithm placed a set of eight virtual markers on each subject's face. Facial feature extraction methods, including marker distance (distance from each marker to the center of the face) and change in marker distance (change in distance between the original and new marker positions), were used to extract three statistical features (mean, variance, and root mean square) from the real-time video sequence. The initial position of each marker was subjected to the optical flow algorithm for marker tracking with each emotional facial expression. Finally, the extracted statistical features were mapped into corresponding emotional facial expressions using two simple non-linear classifiers, K-nearest neighbor and probabilistic neural network. The results indicate that the proposed automated marker placement algorithm effectively placed eight virtual markers on each subject's face and gave a maximum mean emotion classification rate of 96.94% using the probabilistic neural network.
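The marker-distance features described in both versions of this record (mean, variance and root mean square of each marker's distance to the face centre over a video sequence) can be sketched as follows, with synthetic marker trajectories standing in for webcam tracking:

```python
import numpy as np

def marker_features(frames, center):
    """Per-marker statistical features over a video sequence.

    frames: array (n_frames, n_markers, 2) of tracked marker positions.
    center: (2,) face-centre coordinate.
    Returns mean, variance and RMS of each marker's distance to the centre.
    """
    d = np.linalg.norm(frames - center, axis=2)       # (n_frames, n_markers)
    return d.mean(0), d.var(0), np.sqrt((d ** 2).mean(0))

rng = np.random.default_rng(2)
# Two hypothetical markers jittering around fixed positions over 30 frames.
frames = rng.normal([[50.0, 40.0], [60.0, 45.0]], 1.0, size=(30, 2, 2))
mean, var, rms = marker_features(frames, center=np.array([55.0, 50.0]))
print(mean.shape, var.shape, rms.shape)   # one feature triple per marker
```

In the paper eight markers are used and the triples feed a KNN or probabilistic neural network classifier.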

  17. Deep learning the dynamic appearance and shape of facial action units

    OpenAIRE

    Jaiswal, Shashank; Valstar, Michel F.

    2016-01-01

    Spontaneous facial expression recognition under uncontrolled conditions is a hard task. It depends on multiple factors including shape, appearance and dynamics of the facial features, all of which are adversely affected by environmental noise and the low-intensity signals typical of such conditions. In this work, we present a novel approach to Facial Action Unit detection using a combination of Convolutional and Bi-directional Long Short-Term Memory Neural Networks (CNN-BLSTM), which jointly learns...

  18. Automated facial acne assessment from smartphone images

    Science.gov (United States)

    Amini, Mohammad; Vasefi, Fartash; Valdebran, Manuel; Huang, Kevin; Zhang, Haomiao; Kemp, William; MacKinnon, Nicholas

    2018-02-01

    A smartphone mobile medical application is presented that provides analysis of the health of facial skin using a smartphone image and cloud-based image processing techniques. The mobile application uses the camera to capture a front face image of a subject, after which the captured image is spatially calibrated based on fiducial points such as the position of the iris of the eye. A facial recognition algorithm is used to identify features of the human face image, to normalize the image, and to define facial regions of interest (ROI) for acne assessment. We identify acne lesions and classify them into two categories: papules and pustules. The automated facial acne assessment was validated by performing tests on images of 60 digital human models and 10 real human face images. The application was able to identify 92% of acne lesions within five facial ROIs. The classification accuracy for separating papules from pustules was 98%. Combined with in-app documentation of treatment, lifestyle factors, and automated facial acne assessment, the app can be used in both cosmetic and clinical dermatology. It allows users to quantitatively self-measure acne severity and treatment efficacy on an ongoing basis to help them manage their chronic facial acne.

  19. Facial Image Compression Based on Structured Codebooks in Overcomplete Domain

    Directory of Open Access Journals (Sweden)

    Vila-Forcén JE

    2006-01-01

    We advocate a facial image compression technique within the distributed source coding framework. The novelty of the proposed approach is twofold: image compression is considered from the position of source coding with side information, and, contrary to existing scenarios where the side information is given explicitly, the side information is created based on a deterministic approximation of the local image features. We consider an image in the overcomplete transform domain as a realization of a random source with a structured codebook of symbols, where each symbol represents a particular edge shape. Due to the partial availability of the side information at both encoder and decoder, we treat our problem as a modification of the Berger-Flynn-Gray problem and investigate the possible gain over solutions in which side information is either unavailable or available only at the decoder. Finally, the paper presents a practical image compression algorithm for facial images based on this concept that demonstrates superior performance in the very-low-bit-rate regime.

  20. Penetrating gunshot wound to the head: transotic approach to remove the bullet and masseteric-facial nerve anastomosis for early facial reanimation.

    Science.gov (United States)

    Donnarumma, Pasquale; Tarantino, Roberto; Gennaro, Paolo; Mitro, Valeria; Valentini, Valentino; Magliulo, Giuseppe; Delfini, Roberto

    2014-01-01

    Gunshot wounds to the head (GSWH) account for the majority of penetrating brain injuries and are the most lethal. Since they are rare in Europe, the number of neurosurgeons who have experience with this type of traumatic injury is decreasing, and fewer cases are reported in the literature. We describe a case of a gunshot to the temporal bone in which the bullet penetrated the skull, resulting in facial nerve paralysis. It was excised via the transotic approach, and microsurgical anastomosis between the masseteric nerve and the facial nerve was performed. GSWH are often devastating: in-hospital mortality for civilians with penetrating craniocerebral injury is very high, and survivors often have a high rate of complications. When facial paralysis is present, direct masseteric-facial neurorrhaphy represents a good treatment.

  1. Kernel-based discriminant feature extraction using a representative dataset

    Science.gov (United States)

    Li, Honglin; Sancho Gomez, Jose-Luis; Ahalt, Stanley C.

    2002-07-01

    Discriminant Feature Extraction (DFE) is widely recognized as an important pre-processing step in classification applications. Most DFE algorithms are linear and thus can only explore the linear discriminant information among the different classes. Recently, there have been several promising attempts to develop nonlinear DFE algorithms, among which is Kernel-based Feature Extraction (KFE). The efficacy of KFE has been experimentally verified on both synthetic data and real problems. However, KFE has some known limitations. First, KFE does not work well for strongly overlapped data. Second, KFE employs all of the training set samples during the feature extraction phase, which can result in significant computation when applied to very large datasets. Finally, KFE can result in overfitting. In this paper, we propose a substantial improvement to KFE that overcomes the above limitations by using a representative dataset, which consists of critical points generated by data-editing techniques and centroid points determined by the Frequency Sensitive Competitive Learning (FSCL) algorithm. Experiments show that this new KFE algorithm performs well on significantly overlapped datasets and reduces computational complexity. Further, by controlling the number of centroids, the overfitting problem can be effectively alleviated.
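The core idea of a representative dataset is that kernel features are computed against a small set of representatives rather than the full training set, so cost scales with the number of representatives. The sketch below stands in plain Lloyd-style centroids for the paper's FSCL centroids and data-editing critical points:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between row-vector sets A and B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 2))                  # full training set

# Representative dataset: k centroids found by a few Lloyd iterations
# (a simple stand-in for the FSCL centroids described in the abstract).
k = 20
reps = X[rng.choice(len(X), k, replace=False)].copy()
for _ in range(10):
    assign = ((X[:, None] - reps[None]) ** 2).sum(-1).argmin(1)
    for j in range(k):
        if (assign == j).any():
            reps[j] = X[assign == j].mean(0)

# Features are kernel evaluations against the representatives only,
# so the cost per sample is O(k) rather than O(n_train).
features = rbf_kernel(X, reps)
print(features.shape)                          # (500, 20)
```

A linear discriminant trained on these features then plays the role of the nonlinear DFE.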

  2. Sound-induced facial synkinesis following facial nerve paralysis

    NARCIS (Netherlands)

    Ma, Ming-San; van der Hoeven, Johannes H.; Nicolai, Jean-Philippe A.; Meek, Marcel F.

    Facial synkinesis (or synkinesia) (FS) occurs frequently after paresis or paralysis of the facial nerve and is in most cases due to aberrant regeneration of (branches of) the facial nerve. Patients suffer from inappropriate and involuntary synchronous facial muscle contractions. Here we describe two

  3. Role of temporal processing stages by inferior temporal neurons in facial recognition

    Directory of Open Access Journals (Sweden)

    Yasuko eSugase-Miyamoto

    2011-06-01

    In this review, we focus on the role of the temporal stages of encoded facial information in the visual system, which might enable the efficient determination of species, identity, and expression. Facial recognition is an important function of our brain and is known to be processed in the ventral visual pathway, where visual signals are processed through areas V1, V2, V4, and the inferior temporal (IT) cortex. In the IT cortex, neurons show selective responses to complex visual images such as faces, and at each stage along the pathway the stimulus selectivity of the neural responses becomes sharper, particularly in the later portion of the responses. In the IT cortex of the monkey, facial information is represented by different temporal stages of neural responses, as shown in our previous study: the initial transient response of face-responsive neurons represents information about global categories, i.e., human vs. monkey vs. simple shapes, whilst the later portion of these responses represents information about detailed facial categories, i.e., expression and/or identity. This suggests that the temporal stages of the neuronal firing pattern play an important role in the coding of visual stimuli, including faces. This type of coding may be a plausible mechanism underlying the temporal dynamics of recognition, including the process of detection/categorization followed by the identification of objects. Recent single-unit studies in monkeys have also provided evidence consistent with the important role of the temporal stages of encoded facial information. For example, view-invariant facial identity information is represented in the response at a later period within a region of face-selective neurons. Consistent with these findings, temporally modulated neural activity has also been observed in human studies. These results suggest a close correlation between the temporal processing stages of facial information by IT neurons and the temporal dynamics of

  4. A small-world network model of facial emotion recognition.

    Science.gov (United States)

    Takehara, Takuma; Ochiai, Fumio; Suzuki, Naoto

    2016-01-01

    Various models have been proposed to increase understanding of the cognitive basis of facial emotions. Despite those efforts, interactions between facial emotions have received minimal attention. If collective behaviours relating to each facial emotion in the comprehensive cognitive system could be assumed, specific facial emotion relationship patterns might emerge. In this study, we demonstrate that the frameworks of complex networks can effectively capture those patterns. We generate 81 facial emotion images (6 prototypes and 75 morphs) and then ask participants to rate degrees of similarity in 3240 facial emotion pairs in a paired comparison task. A facial emotion network constructed on the basis of similarity clearly forms a small-world network, which features an extremely short average network distance and close connectivity. Further, even if two facial emotions have opposing valences, they are connected within only two steps. In addition, we show that intermediary morphs are crucial for maintaining full network integration, whereas prototypes are not at all important. These results suggest the existence of collective behaviours in the cognitive systems of facial emotions and also describe why people can efficiently recognize facial emotions in terms of information transmission and propagation. For comparison, we construct three simulated networks--one based on the categorical model, one based on the dimensional model, and one random network. The results reveal that small-world connectivity in facial emotion networks is apparently different from those networks, suggesting that a small-world network is the most suitable model for capturing the cognitive basis of facial emotions.
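    The small-world claim above rests on two graph statistics: a short average shortest-path length and high clustering. A minimal sketch of how these can be computed, using only the standard library and a synthetic ring-plus-shortcuts graph standing in for the 81-node facial emotion network (the study's actual similarity data are not reproduced in the abstract):

```python
from collections import deque
from itertools import combinations

def clustering(adj, v):
    """Fraction of v's neighbour pairs that are themselves connected."""
    nbrs = list(adj[v])
    if len(nbrs) < 2:
        return 0.0
    links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
    return 2.0 * links / (len(nbrs) * (len(nbrs) - 1))

def avg_path_length(adj):
    """Mean shortest-path distance over all ordered node pairs, via BFS."""
    nodes = list(adj)
    total = pairs = 0
    for src in nodes:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        total += sum(dist[v] for v in nodes if v != src)
        pairs += len(nodes) - 1
    return total / pairs

# Toy stand-in for the facial emotion network: a ring lattice of "morphs"
# (each node linked to its two nearest neighbours on each side) plus a few
# shortcut edges, the classic Watts-Strogatz small-world recipe.
adj = {i: {(i + d) % 12 for d in (-2, -1, 1, 2)} for i in range(12)}
for a, b in [(0, 6), (3, 9)]:
    adj[a].add(b)
    adj[b].add(a)

L = avg_path_length(adj)                            # short for a small world
C = sum(clustering(adj, v) for v in adj) / len(adj)  # high for a small world
```

    On real data, the adjacency structure would be derived by thresholding the pairwise similarity ratings rather than constructed by hand.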

  5. Sound-induced facial synkinesis following facial nerve paralysis.

    Science.gov (United States)

    Ma, Ming-San; van der Hoeven, Johannes H; Nicolai, Jean-Philippe A; Meek, Marcel F

    2009-08-01

    Facial synkinesis (or synkinesia) (FS) occurs frequently after paresis or paralysis of the facial nerve and is in most cases due to aberrant regeneration of (branches of) the facial nerve. Patients suffer from inappropriate and involuntary synchronous facial muscle contractions. Here we describe two cases of sound-induced facial synkinesis (SFS) after facial nerve injury. As far as we know, this phenomenon has not been described in the English literature before. Patient A presented with right hemifacial palsy after lesion of the facial nerve due to skull base fracture. He reported involuntary muscle activity at the right corner of the mouth, specifically on hearing ringing keys. Patient B suffered from left hemifacial palsy following otitis media and developed involuntary muscle contraction in the facial musculature specifically on hearing clapping hands or a trumpet sound. Both patients were evaluated by means of video, audio and EMG analysis. Possible mechanisms in the pathophysiology of SFS are postulated and therapeutic options are discussed.

  6. Facial anthropometric differences among gender, ethnicity, and age groups.

    Science.gov (United States)

    Zhuang, Ziqing; Landsittel, Douglas; Benson, Stacey; Roberge, Raymond; Shaffer, Ronald

    2010-06-01

    The impact of race/ethnicity on facial anthropometric data in the US workforce, and hence on the development of personal protective equipment, has not been investigated to any significant degree. The proliferation of minority populations in the US workforce has increased the need to investigate differences in facial dimensions among these workers. The objective of this study was to determine the face shape and size differences among race and age groups from the National Institute for Occupational Safety and Health survey of 3997 US civilian workers. Survey participants were divided into two gender groups, four racial/ethnic groups, and three age groups. Measurements of height, weight, neck circumference, and 18 facial dimensions were collected using traditional anthropometric techniques. A multivariate analysis of the data was performed using Principal Component Analysis. An exploratory analysis of the effect that different demographic factors had on anthropometric features was carried out via a linear model. The 21 anthropometric measurements, body mass index, and the first and second principal component scores were dependent variables, while gender, ethnicity, age, occupation, weight, and height served as independent variables. Gender significantly contributes to size for 19 of 24 dependent variables. African-Americans have statistically shorter, wider, and shallower noses than Caucasians. Hispanic workers have 14 facial features that are significantly larger than those of Caucasians, while their nose protrusion, height, and head length are significantly shorter. The other ethnic group was composed primarily of Asian subjects and has statistically different dimensions from Caucasians for 16 anthropometric values. Nineteen anthropometric values for subjects at least 45 years of age are statistically different from those measured for subjects between 18 and 29 years of age. Workers employed in manufacturing, fire fighting, healthcare, law enforcement, and other occupational
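    The multivariate step described above, Principal Component Analysis over the anthropometric table, can be sketched as follows. The data here are synthetic stand-ins driven by two latent factors, since the NIOSH survey measurements themselves are not reproduced in the abstract:

```python
import numpy as np

# Synthetic stand-in for the anthropometric table: rows are workers, columns
# are facial dimensions driven by two latent factors ("overall size" and
# "long vs. wide"), mimicking the structure PCA is meant to recover.
rng = np.random.default_rng(42)
n = 200
size = rng.normal(size=n)
shape = rng.normal(size=n)
X = np.column_stack([
    10 + 2.0 * size + 0.1 * rng.normal(size=n),                  # face length
    9 + 1.8 * size - 1.0 * shape + 0.1 * rng.normal(size=n),     # face width
    5 + 1.0 * size + 1.2 * shape + 0.1 * rng.normal(size=n),     # nose height
])

def pca_scores(X, k=2):
    """First k principal-component scores via SVD of the centred data."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

scores = pca_scores(X)   # analogous to the survey's first two PC scores
```

    In the study, scores like these served as dependent variables in the follow-up linear model alongside the raw measurements and body mass index.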

  7. The Associations between Visual Attention and Facial Expression Identification in Patients with Schizophrenia.

    Science.gov (United States)

    Lin, I-Mei; Fan, Sheng-Yu; Huang, Tiao-Lai; Wu, Wan-Ting; Li, Shi-Ming

    2013-12-01

    Visual search is an important attention process that precedes information processing. Visual search also mediates the relationship between cognitive function (attention) and social cognition (such as facial expression identification). However, the association between visual attention and social cognition in patients with schizophrenia remains unknown. The purposes of this study were to examine the differences in visual search performance and facial expression identification between patients with schizophrenia and normal controls, and to explore the relationship between visual search performance and facial expression identification in patients with schizophrenia. Fourteen patients with schizophrenia (mean age=46.36±6.74) and 15 normal controls (mean age=40.87±9.33) participated in this study. The visual search task, including feature search and conjunction search, and the Japanese and Caucasian Facial Expressions of Emotion were administered. Patients with schizophrenia had worse visual search performance in both feature search and conjunction search than normal controls, and also performed worse at facial expression identification, especially for surprise and sadness. In addition, there were negative associations between visual search performance and facial expression identification in patients with schizophrenia, especially for surprise and sadness; this relationship was not observed in normal controls. Patients with schizophrenia who had visual search deficits showed impaired facial expression identification. Improving visual search and facial expression identification abilities may improve their social function and interpersonal relationships.

  8. Facial dynamics and emotional expressions in facial aging treatments.

    Science.gov (United States)

    Michaud, Thierry; Gassia, Véronique; Belhaouari, Lakhdar

    2015-03-01

    Facial expressions convey emotions that form the foundation of interpersonal relationships, and many of these emotions promote and regulate our social linkages. Hence, the symptomatological analysis of facial aging and the treatment plan must necessarily include knowledge of the facial dynamics and emotional expressions of the face. This approach aims to meet patients' expectations of natural-looking results more closely, by correcting age-related negative expressions while respecting the emotional language of the face. This article will successively describe patients' expectations, the role of facial expressions in relational dynamics, the relationship between facial structures and facial expressions, and the way facial aging mimics negative expressions. Finally, therapeutic implications for facial aging treatment will be addressed. © 2015 Wiley Periodicals, Inc.

  9. Automatic recognition of emotions from facial expressions

    Science.gov (United States)

    Xue, Henry; Gertner, Izidor

    2014-06-01

    In the human-computer interaction (HCI) process it is desirable to have an artificially intelligent (AI) system that can identify and categorize human emotions from facial expressions. Such systems can be used in security, in the entertainment industry, and also to study visual perception, social interactions and disorders (e.g. schizophrenia and autism). In this work we survey and compare the performance of different feature extraction algorithms and classification schemes. We introduce a faster feature extraction method that resizes and applies a set of filters to the data images without sacrificing accuracy. In addition, we have extended SVM to multiple dimensions while retaining its high accuracy rate. The algorithms were tested using the Japanese Female Facial Expression (JAFFE) Database and the Database of Faces (AT&T Faces).
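    The pipeline described above (resize the image, apply a bank of filters, classify) can be sketched as follows. This is a generic illustration on synthetic images, not the authors' implementation: the block-averaging resize and difference filters are assumptions, and a nearest-centroid classifier stands in for the paper's enhanced SVM:

```python
import numpy as np

def extract_features(img, out=8):
    """Resize by block-averaging, then apply simple difference filters;
    the concatenated responses form the feature vector."""
    h, w = img.shape
    bh, bw = h // out, w // out
    small = img[:bh * out, :bw * out].reshape(out, bh, out, bw).mean(axis=(1, 3))
    dx = np.diff(small, axis=1)   # vertical-edge response
    dy = np.diff(small, axis=0)   # horizontal-edge response
    return np.concatenate([small.ravel(), dx.ravel(), dy.ravel()])

def fit_centroids(feats, labels):
    """Per-class mean feature vector (a stand-in for training an SVM)."""
    return {c: feats[labels == c].mean(axis=0) for c in np.unique(labels)}

def predict(centroids, f):
    return min(centroids, key=lambda c: np.linalg.norm(f - centroids[c]))

# Tiny synthetic "faces": a smile puts energy near the mouth region, a
# frown near the brows. Real systems would train on JAFFE/AT&T images.
rng = np.random.default_rng(1)
def fake_face(kind):
    img = rng.uniform(0, 0.1, (64, 64))
    rows = slice(48, 56) if kind == "smile" else slice(8, 16)
    img[rows, 20:44] += 1.0
    return img

labels = np.array(["smile"] * 10 + ["frown"] * 10)
feats = np.array([extract_features(fake_face(k)) for k in labels])
model = fit_centroids(feats, labels)
pred = predict(model, extract_features(fake_face("smile")))
```

    The resize-then-filter order is what makes this kind of extraction fast: the filters run on an 8×8 thumbnail rather than the full image.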

  10. Effects of facial emotion recognition remediation on visual scanning of novel face stimuli.

    Science.gov (United States)

    Marsh, Pamela J; Luckett, Gemma; Russell, Tamara; Coltheart, Max; Green, Melissa J

    2012-11-01

    Previous research shows that emotion recognition in schizophrenia can be improved with targeted remediation that draws attention to important facial features (eyes, nose, mouth). Moreover, the effects of training have been shown to last for up to one month after training. The aim of this study was to investigate whether improved emotion recognition of novel faces is associated with concomitant changes in visual scanning of these same novel facial expressions. Thirty-nine participants with schizophrenia received emotion recognition training using Ekman's Micro-Expression Training Tool (METT), with emotion recognition and visual scanpath (VSP) recordings to face stimuli collected simultaneously. Baseline ratings of interpersonal and cognitive functioning were also collected from all participants. Post-METT training, participants showed changes in foveal attention to the features of facial expressions of emotion not used in METT training, which were generally consistent with the information about important features from the METT. In particular, there were changes in how participants looked at the features of the facial expressions of surprise, disgust, fear, happiness, and neutral, demonstrating that improved emotion recognition is paralleled by changes in the way participants with schizophrenia viewed novel facial expressions of emotion. However, there were overall decreases in foveal attention to sad and neutral faces, indicating that more intensive instruction might be needed for these faces during training. Most importantly, the evidence shows that participant gender may affect training outcomes. Copyright © 2012 Elsevier B.V. All rights reserved.

  11. Soft-tissue facial characteristics of attractive Chinese men compared to normal men.

    Science.gov (United States)

    Wu, Feng; Li, Junfang; He, Hong; Huang, Na; Tang, Youchao; Wang, Yuanqing

    2015-01-01

    To compare the facial characteristics of attractive Chinese men with those of reference men. The three-dimensional coordinates of 50 facial landmarks were collected in 40 healthy reference men and in 40 "attractive" men; soft tissue facial angles, distances, areas, and volumes were computed and compared using analysis of variance. When compared with reference men, attractive men shared several similar facial characteristics: a relatively large forehead, reduced mandible, and rounded face. They had a more acute soft tissue profile, an increased upper facial width and middle facial depth, a larger mouth, and more voluminous lips than reference men. Attractive men had several facial characteristics suggesting babyness. Nonetheless, each group of men was characterized by a different development of these features. Esthetic reference values can be a useful tool for clinicians, but should always take the characteristics of individual faces into account.

  12. Multiracial Facial Golden Ratio and Evaluation of Facial Appearance.

    Directory of Open Access Journals (Sweden)

    Mohammad Khursheed Alam

    Full Text Available This study aimed to investigate the association of facial proportion and its relation to the golden ratio with the evaluation of facial appearance among the Malaysian population. This was a cross-sectional study with 286 subjects randomly selected from Universiti Sains Malaysia (USM) Health Campus students (150 females and 136 males; 100 Malaysian Chinese, 100 Malaysian Malay and 86 Malaysian Indian), with a mean age of 21.54 ± 1.56 (age range, 18-25). Facial indices obtained from direct facial measurements were used for the classification of facial shape into short, ideal and long. A validated structured questionnaire was used to assess subjects' evaluation of their own facial appearance. The mean facial indices of Malaysian Indian (MI), Malaysian Chinese (MC) and Malaysian Malay (MM) subjects were 1.59 ± 0.19, 1.57 ± 0.25 and 1.54 ± 0.23 respectively. Only MC showed significant sexual dimorphism in facial index (P = 0.047; P < 0.05), but no significant difference was found between races. Out of the 286 subjects, 49 (17.1%) were of ideal facial shape, 156 (54.5%) short and 81 (28.3%) long. The facial evaluation questionnaire showed that MC had the lowest satisfaction, with mean scores of 2.18 ± 0.97 for overall impression and 2.15 ± 1.04 for facial parts, compared to MM and MI, with mean scores of 1.80 ± 0.97 and 1.64 ± 0.74 respectively for overall impression, and 1.75 ± 0.95 and 1.70 ± 0.83 respectively for facial parts. In conclusion: 1) only 17.1% of Malaysian facial proportions conformed to the golden ratio, with the majority of the population having a short face (54.5%); 2) facial index did not depend significantly on race; 3) significant sexual dimorphism was shown among Malaysian Chinese; 4) all three races were generally satisfied with their own facial appearance; 5) no significant association was found between the golden ratio and facial evaluation score in the Malaysian population.
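    The short/ideal/long classification used above compares the facial index (face height over face width) against the golden ratio. A hypothetical helper illustrating the idea; the 5% tolerance band is an invented cut-off, as the abstract does not give the study's actual thresholds:

```python
# Classify facial shape from the facial index (height / width) relative to
# the golden ratio. The tolerance is an illustrative assumption only.
GOLDEN_RATIO = 1.618

def classify_face(height_mm, width_mm, tol=0.05):
    index = height_mm / width_mm
    if index < GOLDEN_RATIO * (1 - tol):
        return "short"
    if index > GOLDEN_RATIO * (1 + tol):
        return "long"
    return "ideal"
```

    For example, under these assumptions a 194 mm by 120 mm face has an index of about 1.617 and falls in the "ideal" band, while a 180 mm by 120 mm face (index 1.50) is classified as short.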

  13. Ethnic differences in the structural properties of facial skin.

    Science.gov (United States)

    Sugiyama-Nakagiri, Yoriko; Sugata, Keiichi; Hachiya, Akira; Osanai, Osamu; Ohuchi, Atsushi; Kitahara, Takashi

    2009-02-01

    Conspicuous facial pores are a serious aesthetic concern for many women. However, the mechanisms that underlie the conspicuousness of facial pores remain unclear. We previously characterized the epidermal architecture around facial pores that correlates with the appearance of those pores. A survey was carried out to elucidate ethnicity-dependent differences in facial pore size and in epidermal architecture. The subjects included 80 healthy women (aged 30-39: Caucasians, Asians, Hispanics and African Americans) living in Dallas in the USA. First, surface replicas were collected to compare pore sizes of cheek skin. Second, horizontal cross-sectioned images from cheek skin were obtained non-invasively from the same subjects using in vivo confocal laser scanning microscopy (CLSM), and the severity of impairment of the epidermal architecture around facial pores was determined. Finally, to compare racial differences in the architecture of the interfollicular epidermis of facial cheek skin, horizontal cross-sectioned images were obtained and the numbers of dermal papillae were counted. Asians had the smallest pore areas of all the racial groups. Regarding the epidermal architecture around facial pores, all ethnic groups observed in this study had similar morphological features, and African Americans showed substantially more severe impairment of the architecture around facial pores than any other racial group. In addition, significant differences were observed in the architecture of the interfollicular epidermis between ethnic groups. These results suggest that facial pore size, the epidermal architecture around facial pores and the architecture of the interfollicular epidermis differ between ethnic groups. This might affect the appearance of facial pores.

  14. Facial Prototype Formation in Children.

    Science.gov (United States)

    Inn, Donald; And Others

    This study examined memory representation as it is exhibited in young children's formation of facial prototypes. In the first part of the study, researchers constructed images of faces using an Identikit that provided the features of hair, eyes, mouth, nose, and chin. Images were varied systematically. A series of these images, called exemplar…

  15. Anaplastology in times of facial transplantation: Still a reasonable treatment option?

    Science.gov (United States)

    Toso, Sabine Maria; Menzel, Kerstin; Motzkus, Yvonne; Klein, Martin; Menneking, Horst; Raguse, Jan-Dirk; Nahles, Susanne; Hoffmeister, Bodo; Adolphs, Nicolai

    2015-09-01

    Optimum functional and aesthetic facial reconstruction is still a challenge in patients who suffer from inborn or acquired facial deformity. It is known that functional and aesthetic impairment can result in significant psychosocial strain, leading to the social isolation of patients who are affected by major facial deformities. Microvascular techniques and increasing experience in facial transplantation certainly contribute to better restorative outcomes. However, these technologies also have some drawbacks, limitations and unsolved problems. Extensive facial defects which include several aesthetic units and dentition can be restored by combining dental prostheses and anaplastology, thus providing an adequate functional and aesthetic outcome in selected patients without the drawbacks of major surgical procedures. Referring to some representative patient cases, it is shown how extreme facial disfigurement after oncological surgery can be palliated by combining intraoral dentures with extraoral facial prostheses using individualized treatment and without the need for major reconstructive surgery. Copyright © 2015 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  16. People with chronic facial pain perform worse than controls at a facial emotion recognition task, but it is not all about the emotion.

    Science.gov (United States)

    von Piekartz, H; Wallwork, S B; Mohr, G; Butler, D S; Moseley, G L

    2015-04-01

    Alexithymia, or a lack of emotional awareness, is prevalent in some chronic pain conditions and has been linked to poor recognition of others' emotions. Recognising others' emotions from their facial expression involves both emotional and motor processing, but the possible contribution of motor disruption has not been considered. It is possible that poor performance on emotional recognition tasks could reflect problems with emotional processing, motor processing or both. We hypothesised that people with chronic facial pain would be less accurate in recognising others' emotions from facial expressions, would be less accurate in a motor imagery task involving the face, and that performance on both tasks would be positively related. A convenience sample of 19 people (15 females) with chronic facial pain and 19 gender-matched controls participated. They undertook two tasks; in the first task, they identified the facial emotion presented in a photograph. In the second, they identified whether the person in the image had a facial feature pointed towards their left or right side, a well-recognised paradigm to induce implicit motor imagery. People with chronic facial pain performed worse than controls at both the Facially Expressed Emotion Labelling (FEEL) emotion recognition task and the left/right facial expression task, and performance covaried within participants. We propose that disrupted motor processing may underpin or at least contribute to the difficulty that facial pain patients have in emotion recognition and that further research that tests this proposal is warranted. © 2014 John Wiley & Sons Ltd.

  17. Multiracial Facial Golden Ratio and Evaluation of Facial Appearance.

    Science.gov (United States)

    Alam, Mohammad Khursheed; Mohd Noor, Nor Farid; Basri, Rehana; Yew, Tan Fo; Wen, Tay Hui

    2015-01-01

    This study aimed to investigate the association of facial proportion and its relation to the golden ratio with the evaluation of facial appearance among the Malaysian population. This was a cross-sectional study with 286 subjects randomly selected from Universiti Sains Malaysia (USM) Health Campus students (150 females and 136 males; 100 Malaysian Chinese, 100 Malaysian Malay and 86 Malaysian Indian), with a mean age of 21.54 ± 1.56 (age range, 18-25). Facial indices obtained from direct facial measurements were used for the classification of facial shape into short, ideal and long. A validated structured questionnaire was used to assess subjects' evaluation of their own facial appearance. The mean facial indices of Malaysian Indian (MI), Malaysian Chinese (MC) and Malaysian Malay (MM) subjects were 1.59 ± 0.19, 1.57 ± 0.25 and 1.54 ± 0.23 respectively. Only MC showed significant sexual dimorphism in facial index (P = 0.047; P < 0.05), but no significant difference was found between races. The facial evaluation questionnaire showed that MC had the lowest satisfaction, with mean scores of 2.18 ± 0.97 for overall impression and 2.15 ± 1.04 for facial parts, compared to MM and MI, with mean scores of 1.80 ± 0.97 and 1.64 ± 0.74 respectively for overall impression, and 1.75 ± 0.95 and 1.70 ± 0.83 respectively for facial parts. 1) Only 17.1% of Malaysian facial proportions conformed to the golden ratio, with the majority of the population having a short face (54.5%); 2) facial index did not depend significantly on race; 3) significant sexual dimorphism was shown among Malaysian Chinese; 4) all three races were generally satisfied with their own facial appearance; 5) no significant association was found between the golden ratio and facial evaluation score in the Malaysian population.

  18. Coding and quantification of a facial expression for pain in lambs.

    Science.gov (United States)

    Guesgen, M J; Beausoleil, N J; Leach, M; Minot, E O; Stewart, M; Stafford, K J

    2016-11-01

    Facial expressions are routinely used to assess pain in humans, particularly those who are non-verbal. Recently, there has been an interest in developing coding systems for facial grimacing in non-human animals, such as rodents, rabbits, horses and sheep. The aims of this preliminary study were to: 1. Qualitatively identify facial feature changes in lambs experiencing pain as a result of tail-docking and compile these changes to create a Lamb Grimace Scale (LGS); 2. Determine whether human observers can use the LGS to differentiate tail-docked lambs from control lambs and differentiate lambs before and after docking; 3. Determine whether changes in facial action units of the LGS can be objectively quantified in lambs before and after docking; 4. Evaluate effects of restraint of lambs on observers' perceptions of pain using the LGS and on quantitative measures of facial action units. By comparing images of lambs before (no pain) and after (pain) tail-docking, the LGS was devised in consultation with scientists experienced in assessing facial expression in other species. The LGS consists of five facial action units: Orbital Tightening, Mouth Features, Nose Features, Cheek Flattening and Ear Posture. The aims of the study were addressed in two experiments. In Experiment I, still images of the faces of restrained lambs were taken from video footage before and after tail-docking (n=4) or sham tail-docking (n=3). These images were scored by a group of five naïve human observers using the LGS. Because lambs were restrained for the duration of the experiment, Ear Posture was not scored. The scores for the images were averaged to provide one value per feature per period and then the scores for the four LGS action units were averaged to give one LGS score per lamb per period. In Experiment II, still images of the faces of nine lambs were taken before and after tail-docking. Stills were taken while lambs were restrained and unrestrained in each period. A different group of five

  19. Traumatic facial nerve neuroma with facial palsy presenting in infancy.

    Science.gov (United States)

    Clark, James H; Burger, Peter C; Boahene, Derek Kofi; Niparko, John K

    2010-07-01

    To describe the management of a traumatic neuroma of the facial nerve in a child, with a review of the literature. Sixteen-month-old male subject. Radiological imaging and surgery. Facial nerve function. The patient presented at 16 months with a right facial palsy and was found to have a right facial nerve traumatic neuroma. A transmastoid, middle fossa resection of the right facial nerve lesion was undertaken with a successful facial nerve-to-hypoglossal nerve anastomosis. The facial palsy improved postoperatively. A traumatic neuroma should be considered in an infant who presents with facial palsy, even in the absence of an obvious history of trauma. The treatment of such a lesion is complex in any age group, but especially in young children. Symptoms, age, lesion size, growth rate, and facial nerve function determine the appropriate management.

  20. Improving the Quality of Facial Composites Using a Holistic Cognitive Interview

    Science.gov (United States)

    Frowd, Charlie D.; Bruce, Vicki; Smith, Ashley J.; Hancock, Peter J. B.

    2008-01-01

    Witnesses to and victims of serious crime are normally asked to describe the appearance of a criminal suspect, using a Cognitive Interview (CI), and to construct a facial composite, a visual representation of the face. Research suggests that focusing on the global aspects of a face, as opposed to its facial features, facilitates recognition and…

  1. Evidence from facial morphology for similarity of Asian and African representatives of Homo erectus.

    Science.gov (United States)

    Rightmire, G P

    1998-05-01

    It has been argued that Homo erectus is a species confined to Asia. Specialized characters displayed by the Indonesian and Chinese skulls are said to be absent in material from eastern Africa, and individuals from Koobi Fora and Nariokotome are now referred by some workers to H. ergaster. This second species is held to be the ancestor from which later human populations are derived. The claim for two taxa is evaluated here with special reference to the facial skeleton. Asian fossils examined include Sangiran 4 and Sangiran 17, several of the Ngandong crania, Gongwangling, and of course the material from Zhoukoudian described by Weidenreich ([1943] Palaeontol. Sin. [New Ser. D] 10:1-484). African specimens compared are KNM-ER 3733 and KNM-ER 3883 from Koobi Fora and KNM-WT 15000 from Nariokotome. Hominid 9 from Olduvai is useful only insofar as the brows and interorbital pillar are preserved. Neither detailed anatomical comparisons nor measurements bring to light any consistent patterns in facial morphology which set the African hominids apart from Asian H. erectus. Faces of the African individuals do tend to be high and less broad across the orbits. Both of the Koobi Fora crania but not KNM-WT 15000 have nasal bones that are narrow superiorly, while the piriform aperture is relatively wide. In many other characters, including contour of the supraorbital torus, glabellar prominence, nasal bridge dimensions, internasal keeling, anatomy of the nasal sill and floor, development of the canine jugum, orientation of the zygomaticoalveolar pillar, rounding of the anterolateral surface of the cheek, formation of a malar tubercle, and palatal rugosity, there is variation among individuals from localities within the major geographic provinces. Here it is not possible to identify features that are unique to either the Asian or African assemblages. 
Additional traits such as a forward sloping "crista nasalis," presence of a "sulcus maxillaris," a high (and massive) cheek coupled

  2. Facial expressions of emotion are not culturally universal.

    Science.gov (United States)

    Jack, Rachael E; Garrod, Oliver G B; Yu, Hui; Caldara, Roberto; Schyns, Philippe G

    2012-05-08

    Since Darwin's seminal works, the universality of facial expressions of emotion has remained one of the longest standing debates in the biological and social sciences. Briefly stated, the universality hypothesis claims that all humans communicate six basic internal emotional states (happy, surprise, fear, disgust, anger, and sad) using the same facial movements by virtue of their biological and evolutionary origins [Susskind JM, et al. (2008) Nat Neurosci 11:843-850]. Here, we refute this assumed universality. Using a unique computer graphics platform that combines generative grammars [Chomsky N (1965) MIT Press, Cambridge, MA] with visual perception, we accessed the mind's eye of 30 Western and Eastern culture individuals and reconstructed their mental representations of the six basic facial expressions of emotion. Cross-cultural comparisons of the mental representations challenge universality on two separate counts. First, whereas Westerners represent each of the six basic emotions with a distinct set of facial movements common to the group, Easterners do not. Second, Easterners represent emotional intensity with distinctive dynamic eye activity. By refuting the long-standing universality hypothesis, our data highlight the powerful influence of culture on shaping basic behaviors once considered biologically hardwired. Consequently, our data open a unique nature-nurture debate across broad fields from evolutionary psychology and social neuroscience to social networking via digital avatars.

  3. A Brief Review of Facial Emotion Recognition Based on Visual Information.

    Science.gov (United States)

    Ko, Byoung Chul

    2018-01-30

    Facial emotion recognition (FER) is an important topic in the fields of computer vision and artificial intelligence owing to its significant academic and commercial potential. Although FER can be conducted using multiple sensors, this review focuses on studies that exclusively use facial images, because visual expressions are one of the main information channels in interpersonal communication. This paper provides a brief review of research in the field of FER conducted over the past decades. First, conventional FER approaches are described along with a summary of the representative categories of FER systems and their main algorithms. Deep-learning-based FER approaches using deep networks enabling "end-to-end" learning are then presented. This review also focuses on an up-to-date hybrid deep-learning approach combining a convolutional neural network (CNN) for the spatial features of an individual frame and long short-term memory (LSTM) for temporal features of consecutive frames. In the later part of this paper, a brief review of publicly available evaluation metrics is given, and a comparison with benchmark results, which are a standard for a quantitative comparison of FER research, is described. This review can serve as a brief guidebook to newcomers in the field of FER, providing basic knowledge and a general understanding of the latest state-of-the-art studies, as well as to experienced researchers looking for productive directions for future work.

  4. Dissociable roles of internal feelings and face recognition ability in facial expression decoding.

    Science.gov (United States)

    Zhang, Lin; Song, Yiying; Liu, Ling; Liu, Jia

    2016-05-15

    The problem of emotion recognition has been tackled by researchers in both affective computing and cognitive neuroscience. While affective computing relies on analyzing visual features from facial expressions, it has been proposed that humans recognize emotions by internally simulating the emotional states conveyed by others' expressions, in addition to perceptual analysis of facial features. Here we investigated whether and how our internal feelings contributed to the ability to decode facial expressions. In two independent large samples of participants, we observed that individuals who generally experienced richer internal feelings exhibited a higher ability to decode facial expressions, and the contribution of internal feelings was independent of face recognition ability. Further, using voxel-based morphometry, we found that the gray matter volume (GMV) of bilateral superior temporal sulcus (STS) and the right inferior parietal lobule was associated with facial expression decoding through the mediating effect of internal feelings, while the GMV of bilateral STS, precuneus, and the right central opercular cortex contributed to facial expression decoding through the mediating effect of face recognition ability. In addition, the clusters in bilateral STS involved in the two components were neighboring yet separate. Our results may provide clues about the mechanism by which internal feelings, in addition to face recognition ability, serve as an important instrument for humans in facial expression decoding. Copyright © 2016 Elsevier Inc. All rights reserved.
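    The mediation logic described above (regional gray matter volume → internal feelings → expression decoding ability) is commonly estimated as a product of regression coefficients. A minimal sketch on synthetic data with invented effect sizes; the study itself used voxel-based morphometry and its own mediation models, not this toy example:

```python
import numpy as np

# Synthetic stand-in data: x ~ regional gray matter volume, m ~ richness of
# internal feelings (the mediator), y ~ expression decoding ability. All
# effect sizes below are invented for illustration.
rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
m = 0.6 * x + rng.normal(scale=0.5, size=n)
y = 0.7 * m + 0.2 * x + rng.normal(scale=0.5, size=n)

def coef(design_cols, resp, which):
    """OLS coefficient for one predictor (an intercept is added)."""
    A = np.column_stack([np.ones(len(resp))] + list(design_cols))
    beta, *_ = np.linalg.lstsq(A, resp, rcond=None)
    return beta[which + 1]

a = coef([x], m, 0)       # path a: predictor -> mediator
b = coef([m, x], y, 0)    # path b: mediator -> outcome, controlling for x
indirect = a * b          # the mediated (indirect) effect
```

    A nonzero indirect effect a·b is what "contributed through the mediating effect of internal feelings" cashes out to in this framework; in practice its significance would be assessed with a bootstrap or Sobel-type test.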

  5. Intact Rapid Facial Mimicry as well as Generally Reduced Mimic Responses in Stable Schizophrenia Patients

    Science.gov (United States)

    Chechko, Natalya; Pagel, Alena; Otte, Ellen; Koch, Iring; Habel, Ute

    2016-01-01

    Spontaneous emotional expressions (rapid facial mimicry) perform both emotional and social functions. In the current study, we sought to test whether there were deficits in automatic mimic responses to emotional facial expressions in 15 patients with stable schizophrenia compared with 15 controls. In a perception-action interference paradigm (the Simon task; first experiment), and in the context of a dual-task paradigm (second experiment), the task-relevant stimulus feature was the gender of a face, which, however, displayed a smiling or frowning expression (task-irrelevant stimulus feature). We measured the electromyographical activity in the corrugator supercilii and zygomaticus major muscle regions in response to either compatible or incompatible stimuli (i.e., when the required response did or did not correspond to the depicted facial expression). The compatibility effect based on interactions between the implicit processing of a task-irrelevant emotional facial expression and the conscious production of an emotional facial expression did not differ between the groups. In stable patients (in spite of a reduced mimic reaction), we observed an intact capacity to respond spontaneously to facial emotional stimuli. PMID:27303335

  6. Real Time Facial Expression Recognition Using Webcam and SDK Affectiva

    Directory of Open Access Journals (Sweden)

    Martin Magdin

    2018-06-01

    Full Text Available Facial expression is an essential part of communication. For this reason, the issue of evaluating human emotions with a computer is a very interesting topic that has gained increasing attention in recent years. It is mainly related to the possibility of applying facial expression recognition in many fields such as HCI, video games, virtual reality, and analysing customer satisfaction. The emotion determination (recognition) process is often performed in three basic phases: face detection, facial feature extraction, and, in the last stage, expression classification. Most often one meets the so-called Ekman classification of six emotional expressions (or seven, including the neutral expression), as well as other types of classification such as the Russell circumplex model, which contains up to 24 emotions, or Plutchik's Wheel of Emotions. The methods used in the three phases of the recognition process have not only improved over the last 60 years; new methods and algorithms have also emerged, such as the Viola-Jones detector, offering greater accuracy and lower computational demands. Therefore, various solutions are currently available in the form of a Software Development Kit (SDK). In this publication, we describe the design and creation of our system for real-time emotion classification. Our intention was to create a system that would use all three phases of the recognition process and work quickly and stably in real time. That is why we decided to take advantage of the existing Affectiva SDK. Using a classic webcam, we detect facial landmarks in the image automatically with the Affectiva SDK. A geometric feature-based approach is used for feature extraction: the distances between landmarks serve as features, and the brute-force method is used to select an optimal feature set. The proposed system uses a neural network algorithm for classification and recognizes six (or seven) facial expressions.
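The geometric feature extraction and brute-force selection described here can be sketched as follows; the 5-point landmark set, the toy labels, and the Fisher-style scoring function are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)

def pairwise_distances(landmarks):
    """All inter-landmark Euclidean distances -> geometric feature vector."""
    return np.array([np.linalg.norm(landmarks[i] - landmarks[j])
                     for i, j in combinations(range(len(landmarks)), 2)])

# hypothetical 5-point landmark set (eyes, nose tip, mouth corners)
landmarks = rng.uniform(0, 100, size=(5, 2))
features = pairwise_distances(landmarks)
print(len(features))                  # 5*4/2 = 10 distances

def brute_force_select(X, y, k, score):
    """Exhaustively score every k-subset of feature columns, keep the best."""
    best, best_s = None, -np.inf
    for subset in combinations(range(X.shape[1]), k):
        s = score(X[:, subset], y)
        if s > best_s:
            best, best_s = subset, s
    return best

# toy data: the two classes differ only in feature column 3
X = rng.normal(size=(40, 10))
y = np.repeat([0, 1], 20)
X[y == 1, 3] += 5.0
fisher = lambda Xs, yy: float(np.sum((Xs[yy == 0].mean(0) - Xs[yy == 1].mean(0))**2))
chosen = brute_force_select(X, y, 2, fisher)
print(chosen)                         # contains the discriminative column 3
```

Brute-force search is only feasible because the pairwise-distance feature set is small; for n landmarks there are n(n-1)/2 candidate distances.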

  7. Estimation of human emotions using thermal facial information

    Science.gov (United States)

    Nguyen, Hung; Kotani, Kazunori; Chen, Fan; Le, Bac

    2014-01-01

    In recent years, research on human emotion estimation using thermal infrared (IR) imagery has appealed to many researchers owing to its invariance to visible illumination changes. Although infrared imagery is superior to visible imagery in its invariance to illumination changes and appearance differences, it has difficulty handling eyeglasses, which are opaque in the thermal infrared spectrum. As a result, when infrared imagery is used for the analysis of human facial information, the regions covered by eyeglasses appear dark and the eyes' thermal information is lost. We propose a temperature space method to correct the effect of eyeglasses using the thermal facial information in the neighboring facial regions, and then use Principal Component Analysis (PCA), the Eigen-space Method based on class-features (EMC), and a combined PCA-EMC method to classify human emotions from the corrected thermal images. We collected the Kotani Thermal Facial Emotion (KTFE) database and performed experiments, which show an improved accuracy rate in estimating human emotions.
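The PCA step used to derive low-dimensional features from the corrected thermal images can be sketched minimally in numpy; the sample count and pixel dimensions below are illustrative assumptions, not the KTFE database's actual sizes.

```python
import numpy as np

rng = np.random.default_rng(2)

def pca_fit(X, k):
    """Return the top-k principal axes and the data mean."""
    mu = X.mean(axis=0)
    Xc = X - mu
    cov = (Xc.T @ Xc) / (len(X) - 1)
    vals, vecs = np.linalg.eigh(cov)          # eigh returns ascending eigenvalues
    order = np.argsort(vals)[::-1][:k]
    return vecs[:, order], mu

# stand-in for flattened corrected thermal face images: 100 samples, 64 pixels
X = rng.normal(size=(100, 64)) @ rng.normal(size=(64, 64))
axes, mu = pca_fit(X, k=10)
Z = (X - mu) @ axes                           # low-dimensional thermal features
C = np.cov(Z, rowvar=False)                   # components are mutually uncorrelated
print(Z.shape)                                # (100, 10)
```

A class-aware method such as EMC would replace the covariance above with class-structured scatter matrices, but the projection machinery is the same.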

  8. Dogs Evaluate Threatening Facial Expressions by Their Biological Validity--Evidence from Gazing Patterns.

    Directory of Open Access Journals (Sweden)

    Sanni Somppi

    Full Text Available Appropriate response to companions' emotional signals is important for all social creatures. The emotional expressions of humans and non-human animals have analogies in their form and function, suggesting shared evolutionary roots, but very little is known about how animals other than primates view and process facial expressions. In primates, threat-related facial expressions evoke exceptional viewing patterns compared with neutral or positive stimuli. Here, we explore whether domestic dogs (Canis familiaris) have such an attentional bias toward threatening social stimuli and whether observed emotional expressions affect dogs' gaze fixation distribution among the facial features (eyes, midface and mouth). We recorded the voluntary eye gaze of 31 domestic dogs during viewing of facial photographs of humans and dogs with three emotional expressions (threatening, pleasant and neutral). We found that dogs' gaze fixations spread systematically among facial features. The distribution of fixations was altered by the seen expression, but the eyes were the most probable targets of the first fixations and gathered longer looking durations than the mouth regardless of the viewed expression. The examination of the inner facial features as a whole revealed more pronounced scanning differences among expressions. This suggests that dogs do not base their perception of facial expressions on the viewing of single structures, but on the interpretation of the composition formed by the eyes, midface and mouth. Dogs evaluated social threat rapidly, and this evaluation led to an attentional bias that was dependent on the depicted species: threatening conspecifics' faces evoked heightened attention, whereas threatening human faces instead evoked an avoidance response. We propose that threatening signals carrying differential biological validity are processed via distinctive neurocognitive pathways. Both of these mechanisms may have an adaptive significance for domestic dogs. 
The findings provide a novel

  9. Facial Expression Recognition By Using Fisherface Method With Backpropagation Neural Network

    Directory of Open Access Journals (Sweden)

    Zaenal Abidin

    2011-01-01

    Full Text Available Abstract— In daily life, especially in interpersonal communication, the face is often used for expression. Facial expressions give information about the emotional state of the person. A facial expression is one of the behavioral characteristics. The components of a basic facial expression analysis system are face detection, face data extraction, and facial expression recognition. The fisherface method with a backpropagation artificial neural network approach can be used for facial expression recognition. This method consists of a two-stage process, namely PCA and LDA. PCA is used to reduce the dimension, while LDA is used for feature extraction of facial expressions. The system was tested with two databases, namely the JAFFE database and the MUG database. The system correctly classified the expressions with an accuracy of 86.85% and 25 false positives for image type I of JAFFE; 89.20% and 15 false positives for image type II of JAFFE; and 87.79% and 16 false positives for type III of JAFFE. For the MUG images, accuracy was 98.09% with 5 false positives. Keywords— facial expression, fisherface method, PCA, LDA, backpropagation neural network.
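The two-stage fisherface pipeline (PCA for dimensionality reduction, then LDA for expression-feature extraction) can be sketched in numpy as follows; the toy data, dimensions, and class offsets are illustrative assumptions, and the backpropagation classification stage is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

def pca(X, k):
    """Top-k principal axes via SVD of the centered data."""
    mu = X.mean(0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k].T                       # (D, k) projection

def lda(X, y):
    """Fisher discriminant directions from within/between-class scatter."""
    classes = np.unique(y)
    mu = X.mean(0)
    Sw = np.zeros((X.shape[1], X.shape[1]))
    Sb = np.zeros_like(Sw)
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(0)
        Sw += (Xc - mc).T @ (Xc - mc)
        Sb += len(Xc) * np.outer(mc - mu, mc - mu)
    vals, vecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(vals.real)[::-1][:len(classes) - 1]
    return vecs[:, order].real                # at most C-1 discriminants

# toy "expression" data: 3 classes in a 50-dimensional image space
X = rng.normal(size=(90, 50))
y = np.repeat([0, 1, 2], 30)
X[y == 1, 0] += 6
X[y == 2, 1] += 6                             # class-dependent offsets

mu, P = pca(X, k=20)                          # stage 1: dimensionality reduction
Xp = (X - mu) @ P
W = lda(Xp, y)                                # stage 2: expression-feature extraction
Z = Xp @ W
print(Z.shape)                                # (90, 2): C-1 fisherface features
```

The PCA stage keeps the within-class scatter matrix well-conditioned before LDA, which is the standard motivation for the two-stage design.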

  10. Novel dynamic Bayesian networks for facial action element recognition and understanding

    Science.gov (United States)

    Zhao, Wei; Park, Jeong-Seon; Choi, Dong-You; Lee, Sang-Woong

    2011-12-01

    In daily life, language is an important tool of communication between people. Besides language, facial action can also provide a great amount of information. Therefore, facial action recognition has become a popular research topic in the field of human-computer interaction (HCI). However, facial action recognition is quite a challenging task due to its complexity. In a literal sense, there are thousands of facial muscular movements, many of which have very subtle differences. Moreover, muscular movements always occur simultaneously when the pose is changed. To address this problem, we first build a fully automatic facial points detection system based on a local Gabor filter bank and principal component analysis. Then, novel dynamic Bayesian networks are proposed to perform facial action recognition using the junction tree algorithm over a limited number of feature points. In order to evaluate the proposed method, we have used the Korean face database for model training. For testing, we used the CUbiC FacePix, facial expressions and emotion database, Japanese female facial expression database, and our own database. Our experimental results clearly demonstrate the feasibility of the proposed approach.
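The local Gabor filter bank used for point detection in this record can be sketched as follows; the kernel size, orientations, and wavelengths are illustrative assumptions rather than the paper's actual parameters.

```python
import numpy as np

def gabor_kernel(sigma, theta, lam, size=21):
    """Real part of a 2-D Gabor: Gaussian envelope times an oriented cosine carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

# small bank: 4 orientations x 2 wavelengths, as used for local texture features
bank = [gabor_kernel(sigma=4.0, theta=t, lam=l)
        for t in np.linspace(0, np.pi, 4, endpoint=False)
        for l in (6.0, 10.0)]
print(len(bank), bank[0].shape)              # 8 (21, 21)

# responses at one candidate point: correlate a patch with every kernel
rng = np.random.default_rng(4)
patch = rng.normal(size=(21, 21))
features = np.array([(patch * k).sum() for k in bank])
print(features.shape)                        # (8,)
```

The per-point response vector would then be reduced with PCA, as in the record, before feeding the dynamic Bayesian network.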

  11. Ellis-van Creveld syndrome with facial hemiatrophy

    Directory of Open Access Journals (Sweden)

    Bhat Yasmeen

    2010-01-01

    Full Text Available Ellis-van Creveld (EVC) syndrome is a rare autosomal recessive congenital disorder characterized by chondrodysplasia and polydactyly, ectodermal dysplasia and congenital defects of the heart. We present here the case of a 16-year-old short-limbed dwarf with skeletal deformities and bilateral postaxial polydactyly, dysplastic nails and teeth, who also had left-sided facial hemiatrophy. The diagnosis of EVC syndrome was made on the basis of clinical and radiological features. To the best of our knowledge, this is the first report of EVC syndrome with facial hemiatrophy in the medical literature from India.

  12. Automatic Facial Expression Recognition and Operator Functional State

    Science.gov (United States)

    Blanson, Nina

    2012-01-01

    The prevalence of human error in safety-critical occupations remains a major challenge to mission success despite increasing automation in control processes. Although various methods have been proposed to prevent incidences of human error, none of these have been developed to employ the detection and regulation of Operator Functional State (OFS), or the optimal condition of the operator while performing a task, in work environments due to drawbacks such as obtrusiveness and impracticality. A video-based system with the ability to infer an individual's emotional state from facial feature patterning mitigates some of the problems associated with other methods of detecting OFS, like obtrusiveness and impracticality in integration with the mission environment. This paper explores the utility of facial expression recognition as a technology for inferring OFS by first expounding on the intricacies of OFS and the scientific background behind emotion and its relationship with an individual's state. Then, descriptions of the feedback loop and the emotion protocols proposed for the facial recognition program are explained. A basic version of the facial expression recognition program uses Haar classifiers and OpenCV libraries to automatically locate key facial landmarks during a live video stream. Various methods of creating facial expression recognition software are reviewed to guide future extensions of the program. The paper concludes with an examination of the steps necessary in the research of emotion and recommendations for the creation of an automatic facial expression recognition program for use in real-time, safety-critical missions.
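Haar classifiers of the kind OpenCV provides owe their real-time speed to integral images, which make any rectangular sum an O(1) lookup. A minimal numpy sketch, independent of OpenCV; the 24x24 window size and the rectangle coordinates are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

def integral_image(img):
    """Summed-area table with a zero top row/left column for easy indexing."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1))
    ii[1:, 1:] = img.cumsum(0).cumsum(1)
    return ii

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) from the integral image."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

img = rng.uniform(size=(24, 24))             # a typical Haar detection window
ii = integral_image(img)

# a two-rectangle Haar-like feature: left half-region minus right half-region
left = box_sum(ii, 4, 4, 12, 8)
right = box_sum(ii, 4, 8, 12, 12)
feature = left - right
print(np.isclose(left, img[4:12, 4:8].sum()))   # True
```

A cascade evaluates thousands of such features per window, which is only tractable because each one costs a handful of array lookups.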

  13. Automatic Facial Expression Recognition and Operator Functional State

    Science.gov (United States)

    Blanson, Nina

    2011-01-01

    The prevalence of human error in safety-critical occupations remains a major challenge to mission success despite increasing automation in control processes. Although various methods have been proposed to prevent incidences of human error, none of these have been developed to employ the detection and regulation of Operator Functional State (OFS), or the optimal condition of the operator while performing a task, in work environments due to drawbacks such as obtrusiveness and impracticality. A video-based system with the ability to infer an individual's emotional state from facial feature patterning mitigates some of the problems associated with other methods of detecting OFS, like obtrusiveness and impracticality in integration with the mission environment. This paper explores the utility of facial expression recognition as a technology for inferring OFS by first expounding on the intricacies of OFS and the scientific background behind emotion and its relationship with an individual's state. Then, descriptions of the feedback loop and the emotion protocols proposed for the facial recognition program are explained. A basic version of the facial expression recognition program uses Haar classifiers and OpenCV libraries to automatically locate key facial landmarks during a live video stream. Various methods of creating facial expression recognition software are reviewed to guide future extensions of the program. The paper concludes with an examination of the steps necessary in the research of emotion and recommendations for the creation of an automatic facial expression recognition program for use in real-time, safety-critical missions.

  14. The role of great auricular-facial nerve neurorrhaphy in facial nerve damage.

    Science.gov (United States)

    Sun, Yan; Liu, Limei; Han, Yuechen; Xu, Lei; Zhang, Daogong; Wang, Haibo

    2015-01-01

    The facial nerve is easily damaged, and there are many methods of facial nerve reconstruction, such as facial nerve end-to-end anastomosis, the great auricular nerve graft, the sural nerve graft, and hypoglossal-facial nerve anastomosis. However, there has been little study of great auricular-facial nerve neurorrhaphy. The aim of the present study was to identify the role of great auricular-facial nerve neurorrhaphy and its mechanism. Rat models of facial nerve cut (FC), facial nerve end-to-end anastomosis (FF), facial-great auricular neurorrhaphy (FG), and control (Ctrl) were established. Apex nasi amesiality observation, electrophysiology, and immunofluorescence assays were employed to investigate the function and mechanism. On apex nasi amesiality observation, it was found that the apex nasi amesiality of the FG group was partly recovered. Additionally, electrophysiology and immunofluorescence assays revealed that facial-great auricular neurorrhaphy could transfer nerve impulses and express AChR, better than facial nerve cut but worse than facial nerve end-to-end anastomosis. The present study indicated that great auricular-facial nerve neurorrhaphy is a viable solution for facial lesion repair, as it efficiently prevents facial muscle atrophy by generating a neurotransmitter such as ACh.

  15. A Real-Time Interactive System for Facial Makeup of Peking Opera

    Science.gov (United States)

    Cai, Feilong; Yu, Jinhui

    In this paper we present a real-time interactive system for making the facial makeup of Peking Opera. First, we analyze the process of drawing facial makeup and the characteristics of the patterns used in it, and then construct an SVG pattern bank based on local features such as the eyes, nose and mouth. Next, we pick some SVG patterns from the pattern bank and compose them to make a new facial makeup. We offer a vector-based free-form deformation (FFD) tool to edit patterns and, based on the editing, our system automatically creates texture maps for a template head model. Finally, the facial makeup is rendered on the 3D head model in real time. Our system offers flexibility in designing and synthesizing various 3D facial makeups. Potential applications of the system include decoration design, digital museum exhibition and education about Peking Opera.

  16. Facial expressions : What the mirror neuron system can and cannot tell us

    NARCIS (Netherlands)

    van der Gaag, Christiaan; Minderaa, Ruud B.; Keysers, Christian

    2007-01-01

    Facial expressions contain both motor and emotional components. The inferior frontal gyrus (IFG) and posterior parietal cortex have been considered to compose a mirror neuron system (MNS) for the motor components of facial expressions, while the amygdala and insula may represent an "additional" MNS

  17. Event-related theta synchronization predicts deficit in facial affect recognition in schizophrenia.

    Science.gov (United States)

    Csukly, Gábor; Stefanics, Gábor; Komlósi, Sarolta; Czigler, István; Czobor, Pál

    2014-02-01

    Growing evidence suggests that abnormalities in the synchronized oscillatory activity of neurons in schizophrenia may lead to impaired neural activation and temporal coding and thus lead to neurocognitive dysfunctions, such as deficits in facial affect recognition. To gain an insight into the neurobiological processes linked to facial affect recognition, we investigated both induced and evoked oscillatory activity by calculating the Event Related Spectral Perturbation (ERSP) and the Inter Trial Coherence (ITC) during facial affect recognition. Fearful and neutral faces as well as nonface patches were presented to 24 patients with schizophrenia and 24 matched healthy controls while EEG was recorded. The participants' task was to recognize facial expressions. Because previous findings with healthy controls showed that facial feature decoding was associated primarily with oscillatory activity in the theta band, we analyzed ERSP and ITC in this frequency band in the time interval of 140-200 ms, which corresponds to the N170 component. Event-related theta activity and phase-locking to facial expressions, but not to nonface patches, predicted emotion recognition performance in both controls and patients. Event-related changes in theta amplitude and phase-locking were found to be significantly weaker in patients compared with healthy controls, which is in line with previous investigations showing decreased neural synchronization in the low frequency bands in patients with schizophrenia. Neural synchrony is thought to underlie distributed information processing. Our results indicate a less effective functioning in the recognition process of facial features, which may contribute to a less effective social cognition in schizophrenia. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  18. Botulinum toxin treatment for facial palsy: A systematic review.

    Science.gov (United States)

    Cooper, Lilli; Lui, Michael; Nduka, Charles

    2017-06-01

    Facial palsy may be complicated by ipsilateral synkinesis or contralateral hyperkinesis. Botulinum toxin is increasingly used in the management of facial palsy; however, the optimum dose, treatment interval, adjunct therapy and performance as compared with alternative treatments have not been well established. This study aimed to systematically review the evidence for the use of botulinum toxin in facial palsy. The Cochrane central register of controlled trials (CENTRAL), MEDLINE(R) (1946 to September 2015) and Embase Classic + Embase (1947 to September 2015) were searched for randomised studies using botulinum toxin in facial palsy. Forty-seven studies were identified, and three included. Their physical and patient-reported outcomes are described, and observations and cautions are discussed. Facial asymmetry has a strong correlation to subjective domains such as impairment in social interaction and perception of self-image and appearance. Botulinum toxin injections represent a minimally invasive technique that is helpful in restoring facial symmetry at rest and during movement in chronic, and potentially acute, facial palsy. Botulinum toxin in combination with physical therapy may be particularly helpful. Currently, there is a paucity of data; areas for further research are suggested. A strong body of evidence may allow botulinum toxin treatment to be nationally standardised and recommended in the management of facial palsy. Copyright © 2017 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  19. MRI of the facial nerve in idiopathic facial palsy

    International Nuclear Information System (INIS)

    Saatci, I.; Sahintuerk, F.; Sennaroglu, L.; Boyvat, F.; Guersel, B.; Besim, A.

    1996-01-01

    The purpose of this prospective study was to define the enhancement pattern of the facial nerve in idiopathic facial paralysis (Bell's palsy) on magnetic resonance (MR) imaging with routine doses of gadolinium-DTPA (0.1 mmol/kg). Using a 0.5 T imager, 24 patients were examined with a mean interval of 13.7 days between the onset of symptoms and the MR examination. Contralateral asymptomatic facial nerves constituted the control group, and five of the normal facial nerves (20.8%) showed enhancement confined to the geniculate ganglion. Hence, contrast enhancement limited to the geniculate ganglion in the abnormal facial nerve (3 of 24) was regarded as equivocal. Enhancement of other segments, alone or associated with geniculate ganglion enhancement, was not encountered in any of the normal facial nerves; it was considered abnormal and was noted in 70.8% of the symptomatic facial nerves. The most frequently enhancing segments were the geniculate ganglion and the distal intracanalicular segment. (orig.)

  20. MRI of the facial nerve in idiopathic facial palsy

    Energy Technology Data Exchange (ETDEWEB)

    Saatci, I. [Dept. of Radiology, Hacettepe Univ., Hospital Sihhiye, Ankara (Turkey); Sahintuerk, F. [Dept. of Radiology, Hacettepe Univ., Hospital Sihhiye, Ankara (Turkey); Sennaroglu, L. [Dept. of Otolaryngology, Head and Neck Surgery, Hacettepe Univ., Hospital Sihhiye, Ankara (Turkey); Boyvat, F. [Dept. of Radiology, Hacettepe Univ., Hospital Sihhiye, Ankara (Turkey); Guersel, B. [Dept. of Otolaryngology, Head and Neck Surgery, Hacettepe Univ., Hospital Sihhiye, Ankara (Turkey); Besim, A. [Dept. of Radiology, Hacettepe Univ., Hospital Sihhiye, Ankara (Turkey)

    1996-10-01

    The purpose of this prospective study was to define the enhancement pattern of the facial nerve in idiopathic facial paralysis (Bell's palsy) on magnetic resonance (MR) imaging with routine doses of gadolinium-DTPA (0.1 mmol/kg). Using a 0.5 T imager, 24 patients were examined with a mean interval of 13.7 days between the onset of symptoms and the MR examination. Contralateral asymptomatic facial nerves constituted the control group, and five of the normal facial nerves (20.8%) showed enhancement confined to the geniculate ganglion. Hence, contrast enhancement limited to the geniculate ganglion in the abnormal facial nerve (3 of 24) was regarded as equivocal. Enhancement of other segments, alone or associated with geniculate ganglion enhancement, was not encountered in any of the normal facial nerves; it was considered abnormal and was noted in 70.8% of the symptomatic facial nerves. The most frequently enhancing segments were the geniculate ganglion and the distal intracanalicular segment. (orig.)

  1. Children's understanding of facial expression of emotion: II. Drawing of emotion-faces.

    Science.gov (United States)

    Missaghi-Lakshman, M; Whissell, C

    1991-06-01

    67 children from Grades 2, 4, and 7 drew faces representing the emotional expressions of fear, anger, surprise, disgust, happiness, and sadness. The children themselves and 29 adults later decoded the drawings in an emotion-recognition task. Children were the more accurate decoders, and their accuracy and the accuracy of adults increased significantly for judgments of 7th-grade drawings. The emotions happy and sad were most accurately decoded. There were no significant differences associated with sex. In their drawings, children utilized a symbol system that seems to be based on a highlighting or exaggeration of features of the innately governed facial expression of emotion.

  2. Facial paralysis

    Science.gov (United States)

    ... otherwise healthy, facial paralysis is often due to Bell palsy. This is a condition in which the facial ... speech, or occupational therapist. If facial paralysis from Bell palsy lasts for more than 6 to 12 months, ...

  3. Dissociation between recognition and detection advantage for facial expressions: a meta-analysis.

    Science.gov (United States)

    Nummenmaa, Lauri; Calvo, Manuel G

    2015-04-01

    Happy facial expressions are recognized faster and more accurately than other expressions in categorization tasks, whereas detection in visual search tasks is widely believed to be faster for angry than happy faces. We used meta-analytic techniques for resolving this categorization versus detection advantage discrepancy for positive versus negative facial expressions. Effect sizes were computed on the basis of the r statistic for a total of 34 recognition studies with 3,561 participants and 37 visual search studies with 2,455 participants, yielding a total of 41 effect sizes for recognition accuracy, 25 for recognition speed, and 125 for visual search speed. Random effects meta-analysis was conducted to estimate effect sizes at population level. For recognition tasks, an advantage in recognition accuracy and speed for happy expressions was found for all stimulus types. In contrast, for visual search tasks, moderator analysis revealed that a happy face detection advantage was restricted to photographic faces, whereas a clear angry face advantage was found for schematic and "smiley" faces. A robust detection advantage for nonhappy faces was observed even when stimulus emotionality was distorted by inversion or rearrangement of the facial features, suggesting that visual features primarily drive the search. We conclude that the recognition advantage for happy faces is a genuine phenomenon related to processing of facial expression category and affective valence. In contrast, detection advantages toward either happy (photographic stimuli) or nonhappy (schematic) faces are contingent on visual stimulus features rather than facial expression, and may not involve categorical or affective processing. (c) 2015 APA, all rights reserved.
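Random-effects pooling of r-based effect sizes, as used in this meta-analysis, is conventionally done via the Fisher z-transform with the DerSimonian-Laird between-study variance estimator; a minimal sketch with hypothetical study values, not the meta-analysis's actual data:

```python
import numpy as np

def pool_correlations(rs, ns):
    """DerSimonian-Laird random-effects pooling of correlations via Fisher's z."""
    rs, ns = np.asarray(rs, float), np.asarray(ns, float)
    z = np.arctanh(rs)                        # Fisher z-transform
    v = 1.0 / (ns - 3.0)                      # within-study variance of z
    w = 1.0 / v
    z_fixed = np.sum(w * z) / np.sum(w)
    Q = np.sum(w * (z - z_fixed) ** 2)        # heterogeneity statistic
    C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(rs) - 1)) / C)  # between-study variance
    w_re = 1.0 / (v + tau2)                   # random-effects weights
    z_re = np.sum(w_re * z) / np.sum(w_re)
    return float(np.tanh(z_re)), tau2         # back-transform to r

# hypothetical per-study effect sizes (r) and sample sizes
rs = [0.30, 0.45, 0.25, 0.50, 0.35]
ns = [80, 120, 60, 150, 100]
r_pooled, tau2 = pool_correlations(rs, ns)
print(round(r_pooled, 3), tau2 >= 0.0)
```

The random-effects weights shrink toward equality as tau2 grows, which is what lets heterogeneous study populations be pooled at all.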

  4. Internal representations reveal cultural diversity in expectations of facial expressions of emotion.

    Science.gov (United States)

    Jack, Rachael E; Caldara, Roberto; Schyns, Philippe G

    2012-02-01

    Facial expressions have long been considered the "universal language of emotion." Yet consistent cultural differences in the recognition of facial expressions contradict such notions (e.g., R. E. Jack, C. Blais, C. Scheepers, P. G. Schyns, & R. Caldara, 2009). Rather, culture--as an intricate system of social concepts and beliefs--could generate different expectations (i.e., internal representations) of facial expression signals. To investigate, the authors used a powerful psychophysical technique (reverse correlation) to estimate the observer-specific internal representations of the 6 basic facial expressions of emotion (i.e., happy, surprise, fear, disgust, anger, and sad) in two culturally distinct groups (i.e., Western Caucasian [WC] and East Asian [EA]). Using complementary statistical image analyses, cultural specificity was directly revealed in these representations. Specifically, whereas WC internal representations predominantly featured the eyebrows and mouth, EA internal representations showed a preference for expressive information in the eye region. Closer inspection of the EA observer preference revealed a surprising feature: changes of gaze direction, shown primarily among the EA group. For the first time, it is revealed directly that culture can finely shape the internal representations of common facial expressions of emotion, challenging notions of a biologically hardwired "universal language of emotion."

  5. Walk and Learn: Facial Attribute Representation Learning from Egocentric Video and Contextual Data

    OpenAIRE

    Wang, Jing; Cheng, Yu; Feris, Rogerio Schmidt

    2016-01-01

    The way people look in terms of facial attributes (ethnicity, hair color, facial hair, etc.) and the clothes or accessories they wear (sunglasses, hat, hoodies, etc.) is highly dependent on geo-location and weather condition, respectively. This work explores, for the first time, the use of this contextual information, as people with wearable cameras walk across different neighborhoods of a city, in order to learn a rich feature representation for facial attribute classification, without the c...

  6. Automatic facial pore analysis system using multi-scale pore detection.

    Science.gov (United States)

    Sun, J Y; Kim, S W; Lee, S H; Choi, J E; Ko, S J

    2017-08-01

    As facial pore widening and its treatments have become common concerns in the beauty care field, the necessity for an objective pore-analyzing system has increased. Conventional apparatuses lack usability, requiring strong light sources and a cumbersome photographing process, and they often yield unsatisfactory analysis results. This study was conducted to develop an image processing technique for automatic facial pore analysis. The proposed method detects facial pores using multi-scale detection and an optimal scale selection scheme and then extracts pore-related features such as total area, average size, depth, and the number of pores. Facial photographs of 50 subjects were graded by two expert dermatologists, and correlation analyses between the features and clinical grading were conducted. We also compared our analysis result with those of conventional pore-analyzing devices. The number of large pores and the average pore size were highly correlated with the severity of pore enlargement. In comparison with the conventional devices, the proposed analysis system achieved better performance, showing stronger correlation with the clinical grading. The proposed system is highly accurate and reliable for measuring the severity of skin pore enlargement. It can be suitably used for objective assessment of pore tightening treatments. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
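Multi-scale detection with optimal scale selection, of the kind described above, is commonly implemented with a scale-normalized Laplacian-of-Gaussian; a minimal sketch on a synthetic blob, where the "pore" width and the scale grid are illustrative assumptions rather than the paper's method:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

# synthetic "pore": a dark Gaussian blob of known width on flat skin
true_sigma = 4.0
y, x = np.mgrid[-32:32, -32:32]
img = -np.exp(-(x**2 + y**2) / (2 * true_sigma**2))

# multi-scale detection: scale-normalized LoG response at the blob center,
# evaluated over a grid of scales; the best scale maximizes the response
sigmas = np.linspace(2.0, 8.0, 25)
responses = [abs(s**2 * gaussian_laplace(img, s)[32, 32]) for s in sigmas]
best = float(sigmas[int(np.argmax(responses))])
print(round(best, 2))
```

For a Gaussian blob of standard deviation s, the scale-normalized response peaks at sigma = s, so the selected scale doubles as a size estimate, which is how per-pore size features can fall out of the detection step.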

  7. Facial Sports Injuries

    Science.gov (United States)

    ... should receive immediate medical attention. Prevention of Facial Sports Injuries: the best way to treat facial sports ...

  8. Facial Cosmetic Surgery

    Science.gov (United States)

    ... to find out more. Facial Cosmetic Surgery: Extensive education and training in surgical procedures ...

  9. Oro-facial-digital syndrome Type 1: A case report

    Directory of Open Access Journals (Sweden)

    Kanika Singh Dhull

    2014-01-01

    Full Text Available Oro-Facial-Digital Syndrome (OFDS) is a generic term for a group of apparently distinctive genetic diseases that affect the development of the oral cavity, facial features, and digits. One of these is OFDS type I (OFDS-I), which has rarely been reported in Asian countries. This is the case report of a 13-year-old patient with OFDS type I who reported to the Department of Pedodontics and Preventive Dentistry with the complaint of discolored upper front teeth.

  10. Performance-driven facial animation: basic research on human judgments of emotional state in facial avatars.

    Science.gov (United States)

    Rizzo, A A; Neumann, U; Enciso, R; Fidaleo, D; Noh, J Y

    2001-08-01

    three-dimensional avatar using a performance-driven facial animation (PDFA) system developed at the University of Southern California Integrated Media Systems Center. PDFA offers a means for creating high-fidelity visual representations of human faces and bodies. This effort explores the feasibility of sensing and reproducing a range of facial expressions with a PDFA system. In order to test the concordance of human ratings of emotional expression between video and avatar facial delivery, we first had facial model subjects observe stimuli designed to elicit naturalistic facial expressions. The emotional stimulus induction involved presenting text-based, still image, and video clips previously rated to induce facial expressions for the six universals of facial expression (happy, sad, fear, anger, disgust, and surprise), in addition to attentiveness, puzzlement, and frustration. Videotapes of the induced facial expressions that best represented prototypic examples of the above emotional states, and three-dimensional avatar animations of the same facial expressions, were randomly presented to 38 human raters. The raters used open-ended, forced-choice, and seven-point Likert-type scales to rate each expression in terms of identification. The forced-choice and seven-point ratings provided the most usable data to determine video/animation concordance, and these data are presented. To support a clear understanding of these data, a website has been set up that allows readers to view the video and facial animation clips to illustrate the assets and limitations of these types of facial expression-rendering methods (www.USCAvatars.com/MMVR). This methodological first step in our research program has served to provide valuable human user-centered feedback to support the iterative design and development of facial avatar characteristics for the expression of emotional communication.

  11. Facial soft tissue thickness in North Indian adult population

    Directory of Open Access Journals (Sweden)

    Tanushri Saxena

    2012-01-01

    Full Text Available Objectives: Forensic facial reconstruction is an attempt to reproduce a likeness of the facial features of an individual, based on characteristics of the skull, for the purpose of individual identification. The aim of this study was to determine the soft tissue thickness values of individuals of the Bareilly population, Uttar Pradesh, India, and to evaluate whether these values can help in forensic identification. Study design: A total of 40 individuals (19 males, 21 females) were evaluated using spiral computed tomographic (CT) scans with 2 mm slice thickness in axial sections, and soft tissue thicknesses were measured at seven midfacial anthropological landmarks. Results: Facial soft tissue thickness values decreased with age. Soft tissue thickness values were lower in females than in males, except at the ramus region. Differences between the left and right values in individuals were not significant. Conclusion: Soft tissue thickness values are an important factor in facial reconstruction and also help in forensic identification of an individual. CT scanning gives a good representation of these values and is hence considered an important tool in facial reconstruction. This study was conducted in a North Indian population, and further studies with larger sample sizes can add to the data regarding soft tissue thicknesses.

  12. Effect of a Facial Muscle Exercise Device on Facial Rejuvenation.

    Science.gov (United States)

    Hwang, Ui-Jae; Kwon, Oh-Yun; Jung, Sung-Hoon; Ahn, Sun-Hee; Gwak, Gyeong-Tae

    2018-01-20

    The efficacy of facial muscle exercises (FMEs) for facial rejuvenation is controversial. In the majority of previous studies, nonquantitative assessment tools were used to assess the benefits of FMEs. This study examined the effectiveness of FMEs using a Pao (MTG, Nagoya, Japan) device to quantify facial rejuvenation. Fifty females were asked to perform FMEs using a Pao device for 30 seconds twice a day for 8 weeks. Facial muscle thickness and cross-sectional area were measured sonographically. Facial surface distance, surface area, and volumes were determined using a laser scanning system before and after FME. Facial muscle thickness, cross-sectional area, midfacial surface distances, jawline surface distance, and lower facial surface area and volume were compared bilaterally before and after FME using a paired Student t test. The cross-sectional areas of the zygomaticus major and digastric muscles increased significantly, and the jawline surface distances (right: P = 0.004, left: P = 0.003) decreased significantly after FME using the Pao device. The lower facial surface areas (right: P = 0.005, left: P = 0.006) and volumes (right: P = 0.001, left: P = 0.002) were also significantly reduced after FME using the Pao device. FME using the Pao device can increase facial muscle thickness and cross-sectional area, thus contributing to facial rejuvenation. © 2018 The American Society for Aesthetic Plastic Surgery, Inc.
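
    The paired Student t test used above to compare pre- and post-FME measurements reduces to a simple statistic on the per-subject differences; a minimal sketch with hypothetical muscle thickness values, not the study's data:

```python
import numpy as np

def paired_t(before, after):
    """Paired Student t statistic for before/after measurements on the
    same subjects: mean difference over its standard error."""
    d = np.asarray(after, float) - np.asarray(before, float)
    n = d.size
    return float(d.mean() / (d.std(ddof=1) / np.sqrt(n)))

# Hypothetical muscle thickness (mm) before and after 8 weeks of FME.
before = [7.1, 6.8, 7.4, 6.9, 7.2]
after = [7.6, 7.2, 7.9, 7.3, 7.8]
t = paired_t(before, after)
```

    A consistent per-subject increase, as in this toy data, produces a large positive t; the corresponding P value would be read from the t distribution with n - 1 degrees of freedom.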

  13. In-the-wild facial expression recognition in extreme poses

    Science.gov (United States)

    Yang, Fei; Zhang, Qian; Zheng, Chi; Qiu, Guoping

    2018-04-01

    Facial expression recognition is an active research problem in computer vision. In recent years, research has moved from the lab environment to in-the-wild circumstances, which is challenging, especially under extreme head poses. Current expression detection systems typically try to avoid pose effects in order to remain generally applicable. In this work, we approach the problem from the opposite direction: we consider the head pose and detect expressions within specific head poses. Our work includes two parts: detecting the head pose and grouping it into one of several pre-defined head pose classes, and then recognizing the facial expression within each pose class. Our experiments show that recognition results with pose class grouping are much better than those of direct recognition without considering poses. We combine hand-crafted features (SIFT, LBP, and geometric features) with deep learning features as the representation of the expressions; the hand-crafted features are added into the deep learning framework alongside the high-level deep learning features. For comparison, we implement SVM and random forest classifiers as the prediction models. To train and test our methodology, we labeled a face dataset with the 6 basic expressions.
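
    Of the hand-crafted descriptors mentioned (SIFT, LBP, geometric), LBP is the simplest to illustrate. A minimal basic-LBP histogram in NumPy, a sketch of the standard operator rather than the authors' exact implementation:

```python
import numpy as np

def lbp_histogram(img):
    """Basic 3x3 LBP: threshold the 8 neighbours against the centre pixel,
    pack the comparison bits into a code, and return a normalized
    256-bin histogram over the image."""
    img = np.asarray(img, float)
    H, W = img.shape
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=int)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:H - 1 + dy, 1 + dx:W - 1 + dx]
        code |= (nb >= c).astype(int) << bit
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

# Demo: a flat patch produces a single LBP code (255), since every
# neighbour compares >= the centre.
h = lbp_histogram(np.ones((5, 5)))
```

    Such a histogram would then be concatenated with the deep features before being fed to the SVM or random forest classifier.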

  14. An Automatic Diagnosis Method of Facial Acne Vulgaris Based on Convolutional Neural Network.

    Science.gov (United States)

    Shen, Xiaolei; Zhang, Jiachi; Yan, Chenjun; Zhou, Hong

    2018-04-11

    In this paper, we present a new automatic diagnosis method for facial acne vulgaris based on convolutional neural networks (CNNs), designed to overcome the shortcoming of previous methods, namely their inability to classify enough types of acne vulgaris. The core of our method is to extract image features with CNNs and achieve classification with a subsequent classifier. A binary skin/non-skin classifier is used to detect the skin area, and a seven-class classifier performs the classification of facial acne vulgaris types and healthy skin. In the experiments, we compare the effectiveness of our own CNN and a VGG16 network pre-trained on the ImageNet data set. We use a ROC curve to evaluate the performance of the binary classifier and a normalized confusion matrix to evaluate the performance of the seven-class classifier. The results of our experiments show that the pre-trained VGG16 network is effective in extracting features from facial acne vulgaris images, and that these features are very useful for the follow-up classifiers. Finally, we apply both classifiers based on the pre-trained VGG16 network to assist doctors in facial acne vulgaris diagnosis.
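
    A normalized confusion matrix, as used here to evaluate the multi-class classifier, is straightforward to compute; a sketch with a hypothetical three-class example (the study itself uses seven classes):

```python
import numpy as np

def normalized_confusion(y_true, y_pred, n_classes):
    """Row-normalized confusion matrix: entry [i, j] is the fraction of
    class-i samples that were predicted as class j."""
    cm = np.zeros((n_classes, n_classes), float)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    rows = cm.sum(axis=1, keepdims=True)
    rows[rows == 0] = 1          # avoid division by zero for empty classes
    return cm / rows

# Hypothetical labels, e.g. healthy skin vs. two acne severities.
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]
cm = normalized_confusion(y_true, y_pred, 3)
```

    The diagonal then gives per-class recall, which is why the row-normalized form is preferred when classes are imbalanced.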

  15. Facial Fractures.

    Science.gov (United States)

    Ricketts, Sophie; Gill, Hameet S; Fialkov, Jeffery A; Matic, Damir B; Antonyshyn, Oleh M

    2016-02-01

    After reading this article, the participant should be able to: 1. Demonstrate an understanding of some of the changes in aspects of facial fracture management. 2. Assess a patient presenting with facial fractures. 3. Understand indications and timing of surgery. 4. Recognize exposures of the craniomaxillofacial skeleton. 5. Identify methods for repair of typical facial fracture patterns. 6. Discuss the common complications seen with facial fractures. Restoration of the facial skeleton and associated soft tissues after trauma involves accurate clinical and radiologic assessment to effectively plan a management approach for these injuries. When surgical intervention is necessary, timing, exposure, sequencing, and execution of repair are all integral to achieving the best long-term outcomes for these patients.

  16. Contributions of feature shapes and surface cues to the recognition and neural representation of facial identity.

    Science.gov (United States)

    Andrews, Timothy J; Baseler, Heidi; Jenkins, Rob; Burton, A Mike; Young, Andrew W

    2016-10-01

    A full understanding of face recognition will involve identifying the visual information that is used to discriminate different identities and how this is represented in the brain. The aim of this study was to explore the importance of shape and surface properties in the recognition and neural representation of familiar faces. We used image morphing techniques to generate hybrid faces that mixed shape properties (more specifically, second order spatial configural information as defined by feature positions in the 2D-image) from one identity and surface properties from a different identity. Behavioural responses showed that recognition and matching of these hybrid faces was primarily based on their surface properties. These behavioural findings contrasted with neural responses recorded using a block design fMRI adaptation paradigm to test the sensitivity of Haxby et al.'s (2000) core face-selective regions in the human brain to the shape or surface properties of the face. The fusiform face area (FFA) and occipital face area (OFA) showed a lower response (adaptation) to repeated images of the same face (same shape, same surface) compared to different faces (different shapes, different surfaces). From the behavioural data indicating the critical contribution of surface properties to the recognition of identity, we predicted that brain regions responsible for familiar face recognition should continue to adapt to faces that vary in shape but not surface properties, but show a release from adaptation to faces that vary in surface properties but not shape. However, we found that the FFA and OFA showed an equivalent release from adaptation to changes in both shape and surface properties. The dissociation between the neural and perceptual responses suggests that, although they may play a role in the process, these core face regions are not solely responsible for the recognition of facial identity. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Complex Odontome Causing Facial Asymmetry

    Directory of Open Access Journals (Sweden)

    Karthikeya Patil

    2006-01-01

    Full Text Available Odontomas are the most common non-cystic odontogenic lesions representing 70% of all odontogenic tumors. Often small and asymptomatic, they are detected on routine radiographs. Occasionally they become large and produce expansion of bone with consequent facial asymmetry. We report a case of such a lesion causing expansion of the mandible in an otherwise asymptomatic patient.

  18. A Brief Review of Facial Emotion Recognition Based on Visual Information

    Science.gov (United States)

    2018-01-01

    Facial emotion recognition (FER) is an important topic in the fields of computer vision and artificial intelligence owing to its significant academic and commercial potential. Although FER can be conducted using multiple sensors, this review focuses on studies that exclusively use facial images, because visual expressions are one of the main information channels in interpersonal communication. This paper provides a brief review of researches in the field of FER conducted over the past decades. First, conventional FER approaches are described along with a summary of the representative categories of FER systems and their main algorithms. Deep-learning-based FER approaches using deep networks enabling “end-to-end” learning are then presented. This review also focuses on an up-to-date hybrid deep-learning approach combining a convolutional neural network (CNN) for the spatial features of an individual frame and long short-term memory (LSTM) for temporal features of consecutive frames. In the later part of this paper, a brief review of publicly available evaluation metrics is given, and a comparison with benchmark results, which are a standard for a quantitative comparison of FER researches, is described. This review can serve as a brief guidebook to newcomers in the field of FER, providing basic knowledge and a general understanding of the latest state-of-the-art studies, as well as to experienced researchers looking for productive directions for future work. PMID:29385749
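
    The hybrid CNN-LSTM pipeline highlighted in this review feeds per-frame spatial features into a recurrent cell for temporal modeling. A minimal single-cell LSTM step in NumPy; the dimensions, weights, and frame features are illustrative only, not from any cited system:

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step over a per-frame feature vector x.
    W: (4H, D) input weights, U: (4H, H) recurrent weights, b: (4H,) bias,
    with gates ordered [input, forget, cell, output]."""
    H = h.size
    z = W @ x + U @ h + b
    i = 1 / (1 + np.exp(-z[0:H]))            # input gate
    f = 1 / (1 + np.exp(-z[H:2 * H]))        # forget gate
    g = np.tanh(z[2 * H:3 * H])              # candidate cell state
    o = 1 / (1 + np.exp(-z[3 * H:4 * H]))    # output gate
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# Hypothetical: run a short sequence of per-frame CNN features through the cell.
rng = np.random.default_rng(0)
D, H = 8, 4
W = rng.standard_normal((4 * H, D)) * 0.1
U = rng.standard_normal((4 * H, H)) * 0.1
b = np.zeros(4 * H)
h = np.zeros(H)
c = np.zeros(H)
for frame_feat in rng.standard_normal((5, D)):   # 5 consecutive frames
    h, c = lstm_step(frame_feat, h, c, W, U, b)
```

    The final hidden state h summarizes the temporal dynamics of the sequence and would be passed to a classification layer in a full FER system.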

  19. A Brief Review of Facial Emotion Recognition Based on Visual Information

    Directory of Open Access Journals (Sweden)

    Byoung Chul Ko

    2018-01-01

    Full Text Available Facial emotion recognition (FER) is an important topic in the fields of computer vision and artificial intelligence owing to its significant academic and commercial potential. Although FER can be conducted using multiple sensors, this review focuses on studies that exclusively use facial images, because visual expressions are one of the main information channels in interpersonal communication. This paper provides a brief review of researches in the field of FER conducted over the past decades. First, conventional FER approaches are described along with a summary of the representative categories of FER systems and their main algorithms. Deep-learning-based FER approaches using deep networks enabling “end-to-end” learning are then presented. This review also focuses on an up-to-date hybrid deep-learning approach combining a convolutional neural network (CNN) for the spatial features of an individual frame and long short-term memory (LSTM) for temporal features of consecutive frames. In the later part of this paper, a brief review of publicly available evaluation metrics is given, and a comparison with benchmark results, which are a standard for a quantitative comparison of FER researches, is described. This review can serve as a brief guidebook to newcomers in the field of FER, providing basic knowledge and a general understanding of the latest state-of-the-art studies, as well as to experienced researchers looking for productive directions for future work.

  20. The facial nerve: anatomy and associated disorders for oral health professionals.

    Science.gov (United States)

    Takezawa, Kojiro; Townsend, Grant; Ghabriel, Mounir

    2018-04-01

    The facial nerve, the seventh cranial nerve, is of great clinical significance to oral health professionals. Most published literature either addresses the central connections of the nerve or its peripheral distribution but few integrate both of these components and also highlight the main disorders affecting the nerve that have clinical implications in dentistry. The aim of the current study is to provide a comprehensive description of the facial nerve. Multiple aspects of the facial nerve are discussed and integrated, including its neuroanatomy, functional anatomy, gross anatomy, clinical problems that may involve the nerve, and the use of detailed anatomical knowledge in the diagnosis of the site of facial nerve lesion in clinical neurology. Examples are provided of disorders that can affect the facial nerve during its intra-cranial, intra-temporal and extra-cranial pathways, and key aspects of clinical management are discussed. The current study is complemented by original detailed dissections and sketches that highlight key anatomical features and emphasise the extent and nature of anatomical variations displayed by the facial nerve.

  1. The Prevalence of Cosmetic Facial Plastic Procedures among Facial Plastic Surgeons.

    Science.gov (United States)

    Moayer, Roxana; Sand, Jordan P; Han, Albert; Nabili, Vishad; Keller, Gregory S

    2018-04-01

    This is the first study to report on the prevalence of cosmetic facial plastic surgery use among facial plastic surgeons. The aim of this study is to determine the frequency with which facial plastic surgeons have cosmetic procedures themselves. A secondary aim is to determine whether trends in usage of cosmetic facial procedures among facial plastic surgeons are similar to those of nonsurgeons. The study design was an anonymous, five-question, Internet survey distributed via email, set in a single academic institution. Board-certified members of the American Academy of Facial Plastic and Reconstructive Surgery (AAFPRS) were included in this study. Self-reported histories of cosmetic facial plastic surgery or minimally invasive procedures were recorded. The survey also queried participants for demographic data. A total of 216 members of the AAFPRS responded to the questionnaire. Ninety percent of respondents were male (n = 192) and 10.3% were female (n = 22). Thirty-three percent of respondents were aged 31 to 40 years (n = 70), 25% were aged 41 to 50 years (n = 53), 21.4% were aged 51 to 60 years (n = 46), and 20.5% were older than 60 years (n = 44). Thirty-six percent of respondents had undergone a surgical cosmetic facial procedure and 75% had undergone at least one minimally invasive cosmetic facial procedure. Facial plastic surgeons are frequent users of cosmetic facial plastic surgery. This finding may be due to access, knowledge base, values, or attitudes. By better understanding surgeon attitudes toward facial plastic surgery, we can improve communication with patients and delivery of care. This study is a first step in understanding the use of facial plastic procedures among facial plastic surgeons. Thieme Medical Publishers, 333 Seventh Avenue, New York, NY 10001, USA.

  2. Biometric morphing: a novel technique for the analysis of morphologic outcomes after facial surgery.

    Science.gov (United States)

    Pahuta, Markian A; Mainprize, James G; Rohlf, F James; Antonyshyn, Oleh M

    2009-01-01

    The results of facial surgery are intuitively judged in terms of the visible changes in facial features or proportions. However, describing these morphologic outcomes objectively remains a challenge. Biometric morphing addresses this issue by merging statistical shape analysis and image processing. This study describes the implementation of biometric morphing in describing the average morphologic result of facial surgery. The biometric morphing protocol was applied to pre- and postoperative images of the following: (1) 40 dorsal hump reduction rhinoplasties and (2) 20 unilateral enophthalmos repairs. Pre- and postoperative average images (average morphs) were generated. The average morphs provided an objective rendering of nasal and periorbital morphology, which summarized the average features and extent of deformity in a population of patients. Subtle alterations in morphology after surgery, which would otherwise be difficult to identify or demonstrate, were clearly illustrated. Biometric morphing is an effective instrument for describing average facial morphology in a population of patients.

  3. How Do Typically Developing Deaf Children and Deaf Children with Autism Spectrum Disorder Use the Face When Comprehending Emotional Facial Expressions in British Sign Language?

    Science.gov (United States)

    Denmark, Tanya; Atkinson, Joanna; Campbell, Ruth; Swettenham, John

    2014-01-01

    Facial expressions in sign language carry a variety of communicative features. While emotion can modulate a spoken utterance through changes in intonation, duration and intensity, in sign language specific facial expressions presented concurrently with a manual sign perform this function. When deaf adult signers cannot see facial features, their…

  4. Reproducibility of the dynamics of facial expressions in unilateral facial palsy.

    Science.gov (United States)

    Alagha, M A; Ju, X; Morley, S; Ayoub, A

    2018-02-01

    The aim of this study was to assess the reproducibility of non-verbal facial expressions in unilateral facial paralysis using dynamic four-dimensional (4D) imaging. The Di4D system was used to record five facial expressions of 20 adult patients. The system captured 60 three-dimensional (3D) images per second; each facial expression took 3-4 seconds and was recorded in real time. Thus a set of 180 3D facial images was generated for each expression. The procedure was repeated after 30 min to assess the reproducibility of the expressions. A mathematical facial mesh consisting of thousands of quasi-point 'vertices' was conformed to the face in order to determine the morphological characteristics in a comprehensive manner. The vertices were tracked throughout the sequence of the 180 images. Five key 3D facial frames from each sequence of images were analyzed. Comparisons were made between the first and second capture of each facial expression to assess the reproducibility of facial movements. Corresponding images were aligned using partial Procrustes analysis, and the root mean square distance between them was calculated and analyzed statistically (paired Student t-test). Facial expressions of lip purse, cheek puff, and raising of the eyebrows were reproducible; facial expressions of maximum smile and forceful eye closure were not. The limited coordination of various groups of facial muscles contributed to the lack of reproducibility of these facial expressions. 4D imaging is a useful clinical tool for the assessment of facial expressions. Copyright © 2017 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
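
    Partial Procrustes alignment followed by a root mean square distance, as applied above to corresponding captures, can be sketched with the Kabsch algorithm (rotation and translation only, no scaling); the point sets below are synthetic, not facial mesh data:

```python
import numpy as np

def procrustes_rmsd(A, B):
    """Align point set B to A with rotation + translation (partial
    Procrustes, via the Kabsch algorithm) and return the root mean
    square distance between corresponding points after alignment."""
    A = np.asarray(A, float)
    B = np.asarray(B, float)
    Ac, Bc = A - A.mean(0), B - B.mean(0)        # remove translation
    U, _, Vt = np.linalg.svd(Bc.T @ Ac)
    d = np.sign(np.linalg.det(U @ Vt))           # guard against reflections
    R = U @ np.diag([1] * (A.shape[1] - 1) + [d]) @ Vt
    diff = Bc @ R - Ac
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))

# Demo: B is A rotated about the z-axis and translated; after partial
# Procrustes alignment the RMS distance should be ~0.
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 3))
th = np.pi / 6
Rz = np.array([[np.cos(th), -np.sin(th), 0.0],
               [np.sin(th),  np.cos(th), 0.0],
               [0.0,         0.0,        1.0]])
B = A @ Rz + np.array([1.0, 2.0, 3.0])
err = procrustes_rmsd(A, B)
```

    For repeated captures of the same expression, a small residual RMS distance indicates a reproducible movement, which is the quantity compared statistically in the study.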

  5. Automatic Emotional State Detection using Facial Expression Dynamic in Videos

    Directory of Open Access Journals (Sweden)

    Hongying Meng

    2014-11-01

    Full Text Available In this paper, an automatic emotion detection system is built for a computer or machine to detect the emotional state from facial expressions in human-computer communication. First, dynamic motion features are extracted from facial expression videos, and then advanced machine learning methods for classification and regression are used to predict the emotional states. The system is evaluated on two publicly available datasets, i.e. GEMEP_FERA and AVEC2013, and satisfactory performance is achieved in comparison with the provided baseline results. With this emotional state detection capability, a machine can read the facial expression of its user automatically. This technique can be integrated into applications such as smart robots, interactive games, and smart surveillance systems.

  6. Cognitive penetrability and emotion recognition in human facial expressions

    Directory of Open Access Journals (Sweden)

    Francesco eMarchi

    2015-06-01

    Full Text Available Do our background beliefs, desires, and mental images influence our perceptual experience of the emotions of others? In this paper, we will address the possibility of cognitive penetration of perceptual experience in the domain of social cognition. In particular, we focus on emotion recognition based on the visual experience of facial expressions. After introducing the current debate on cognitive penetration, we review examples of perceptual adaptation for facial expressions of emotion. This evidence supports the idea that facial expressions are perceptually processed as wholes. That is, the perceptual system integrates lower-level facial features, such as eyebrow orientation, mouth angle etc., into facial compounds. We then present additional experimental evidence showing that in some cases, emotion recognition on the basis of facial expression is sensitive to and modified by the background knowledge of the subject. We argue that such sensitivity is best explained as a difference in the visual experience of the facial expression, not just as a modification of the judgment based on this experience. The difference in experience is characterized as the result of the interference of background knowledge with the perceptual integration process for faces. Thus, according to the best explanation, we have to accept cognitive penetration in some cases of emotion recognition. Finally, we highlight a recent model of social vision in order to propose a mechanism for cognitive penetration used in the face-based recognition of emotion.

  7. Real Time 3D Facial Movement Tracking Using a Monocular Camera

    Directory of Open Access Journals (Sweden)

    Yanchao Dong

    2016-07-01

    Full Text Available The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid movement and mouth movement. The framework first utilizes the Discriminative Shape Regression method to locate the facial feature points on the 2D image and fuses the 2D data with a 3D face model using an Extended Kalman Filter to yield 3D facial movement information. An alternating optimization strategy is adopted to fit different persons automatically. Experiments show that the proposed framework can track 3D facial movement across various poses and illumination conditions. Given the real face scale, the framework can track the eyelid with an error of 1 mm and the mouth with an error of 2 mm. The tracking result is reliable for expression analysis or mental state inference.
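
    The framework above fuses 2D landmark measurements with a 3D model via an Extended Kalman Filter. The measurement-update step is shown here in its linear form for a toy 2D landmark state; the actual system's state and measurement models are more complex, and all numbers below are illustrative:

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """One Kalman measurement update: fuse observation z into state x
    with state covariance P, measurement matrix H, and noise R."""
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Toy example: state = [u, v] image position of one landmark,
# observed directly with noise.
x = np.array([0.0, 0.0])
P = np.eye(2) * 1.0
H = np.eye(2)
R = np.eye(2) * 0.5
z = np.array([1.0, 2.0])
x, P = kalman_update(x, P, z, H, R)
```

    In the EKF variant used by such trackers, H is the Jacobian of the 3D-to-2D projection evaluated at the current state estimate, but the update equations keep this same form.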

  8. Temporal neural mechanisms underlying conscious access to different levels of facial stimulus contents.

    Science.gov (United States)

    Hsu, Shen-Mou; Yang, Yu-Fang

    2018-04-01

    An important issue facing the empirical study of consciousness concerns how the contents of incoming stimuli gain access to conscious processing. According to classic theories, facial stimuli are processed in a hierarchical manner. However, it remains unclear how the brain determines which level of stimulus content is consciously accessible when facing an incoming facial stimulus. Accordingly, with a magnetoencephalography technique, this study aims to investigate the temporal dynamics of the neural mechanism mediating which level of stimulus content is consciously accessible. Participants were instructed to view masked target faces at threshold so that, according to behavioral responses, their perceptual awareness alternated from consciously accessing facial identity in some trials to being able to consciously access facial configuration features but not facial identity in other trials. Conscious access at these two levels of facial contents were associated with a series of differential neural events. Before target presentation, different patterns of phase angle adjustment were observed between the two types of conscious access. This effect was followed by stronger phase clustering for awareness of facial identity immediately during stimulus presentation. After target onset, conscious access to facial identity, as opposed to facial configural features, was able to elicit more robust late positivity. In conclusion, we suggest that the stages of neural events, ranging from prestimulus to stimulus-related activities, may operate in combination to determine which level of stimulus contents is consciously accessed. Conscious access may thus be better construed as comprising various forms that depend on the level of stimulus contents accessed. NEW & NOTEWORTHY The present study investigates how the brain determines which level of stimulus contents is consciously accessible when facing an incoming facial stimulus. Using magnetoencephalography, we show that prestimulus

  9. The influence of different facial components on facial aesthetics.

    NARCIS (Netherlands)

    Faure, J.C.; Rieffe, C.; Maltha, J.C.

    2002-01-01

    Facial aesthetics have an important influence on social behaviour and perception in our society. The purpose of the present study was to evaluate the effect of facial symmetry and inter-ocular distance on the assessment of facial aesthetics, factors that are often suggested as major contributors to

  10. The Child Affective Facial Expression (CAFE) set: validity and reliability from untrained adults.

    Science.gov (United States)

    LoBue, Vanessa; Thrasher, Cat

    2014-01-01

    Emotional development is one of the largest and most productive areas of psychological research. For decades, researchers have been fascinated by how humans respond to, detect, and interpret emotional facial expressions. Much of the research in this area has relied on controlled stimulus sets of adults posing various facial expressions. Here we introduce a new stimulus set of emotional facial expressions into the domain of research on emotional development: the Child Affective Facial Expression (CAFE) set. The CAFE set features photographs of a racially and ethnically diverse group of 2- to 8-year-old children posing for six emotional facial expressions-angry, fearful, sad, happy, surprised, and disgusted-and a neutral face. In the current work, we describe the set and report validity and reliability data on the set from 100 untrained adult participants.

  11. "Man-some": A Review of Male Facial Aging and Beauty.

    Science.gov (United States)

    Keaney, Terrence Colin

    2017-06-01

    Gender plays a significant role in determining facial anatomy and behavior, both of which are key factors in the aging process. Understanding the pattern of male facial aging is critical when planning aesthetic treatments for men. Men develop more severe rhytides in a unique pattern, show increased periocular aging changes, and are more prone to hair loss. What also needs to be considered when planning a treatment is what makes men beautiful or "man-some". Male beauty strikes a balance between masculine and feminine facial features; a hypermasculine face can have negative associations. Men also exhibit different cosmetic concerns, tending to focus on three areas of the face: the hairline, the periocular area, and the jawline. A comprehensive understanding of the male patient, including anatomy, facial aging, cosmetic concerns, and beauty, is needed for successful cosmetic outcomes. J Drugs Dermatol. 2017;16(6 Suppl):s91-93.

  12. Facial Emotion Recognition Impairment in Patients with Parkinson's Disease and Isolated Apathy

    Directory of Open Access Journals (Sweden)

    Mercè Martínez-Corral

    2010-01-01

    Full Text Available Apathy is a frequent feature of Parkinson's disease (PD), usually related to executive dysfunction. However, in a subgroup of PD patients apathy may represent the only or predominant neuropsychiatric feature. To understand the mechanisms underlying apathy in PD, we investigated emotional processing in PD patients with and without apathy and in healthy controls (HC), assessed by a facial emotion recognition task (FERT). We excluded PD patients with cognitive impairment, depression, other affective disturbances, and previous surgery for PD. PD patients with apathy scored significantly worse on the FERT, performing worse in fear, anger, and sadness recognition. No differences, however, were found between nonapathetic PD patients and HC. These findings suggest the existence of a disruption of emotional-affective processing in cognitively preserved PD patients with apathy. The identification of specific dysfunction of limbic structures in PD patients with isolated apathy may have therapeutic and prognostic implications.

  13. Distinct facial processing in schizophrenia and schizoaffective disorders

    Science.gov (United States)

    Chen, Yue; Cataldo, Andrea; Norton, Daniel J; Ongur, Dost

    2011-01-01

    Although schizophrenia and schizoaffective disorders have both similar and differing clinical features, it is not well understood whether similar or differing pathophysiological processes mediate patients’ cognitive functions. Using psychophysical methods, this study compared the performances of schizophrenia (SZ) patients, patients with schizoaffective disorder (SA), and a healthy control group in two face-related cognitive tasks: emotion discrimination, which tested perception of facial affect, and identity discrimination, which tested perception of non-affective facial features. Compared to healthy controls, SZ patients, but not SA patients, exhibited deficient performance in both fear and happiness discrimination, as well as identity discrimination. SZ patients, but not SA patients, also showed impaired performance in a theory-of-mind task for which emotional expressions are identified based upon the eye regions of face images. This pattern of results suggests distinct processing of face information in schizophrenia and schizoaffective disorders. PMID:21868199

  14. The role of great auricular-facial nerve neurorrhaphy in facial nerve damage

    OpenAIRE

    Sun, Yan; Liu, Limei; Han, Yuechen; Xu, Lei; Zhang, Daogong; Wang, Haibo

    2015-01-01

    Background: The facial nerve is easily damaged, and there are many methods for facial nerve reconstruction, such as facial nerve end-to-end anastomosis, great auricular nerve graft, sural nerve graft, or hypoglossal-facial nerve anastomosis. However, there have been few studies of great auricular-facial nerve neurorrhaphy. The aim of the present study was to identify the role of great auricular-facial nerve neurorrhaphy and its mechanism. Methods: Rat models of facia...

  15. Facial soft tissue analysis among various vertical facial patterns

    International Nuclear Information System (INIS)

    Jeelani, W.; Fida, M.; Shaikh, A.

    2016-01-01

    Background: The emergence of soft tissue paradigm in orthodontics has made various soft tissue parameters an integral part of the orthodontic problem list. The purpose of this study was to determine and compare various facial soft tissue parameters on lateral cephalograms among patients with short, average and long facial patterns. Methods: A cross-sectional study was conducted on the lateral cephalograms of 180 adult subjects divided into three equal groups, i.e., short, average and long face according to the vertical facial pattern. Incisal display at rest, nose height, upper and lower lip lengths, degree of lip procumbency and the nasolabial angle were measured for each individual. The gender differences for these soft tissue parameters were determined using Mann-Whitney U test while the comparison among different facial patterns was performed using Kruskal-Wallis test. Results: Significant differences in the incisal display at rest, total nasal height, lip procumbency, the nasolabial angle and the upper and lower lip lengths were found among the three vertical facial patterns. A significant positive correlation of nose and lip dimensions was found with the underlying skeletal pattern. Similarly, the incisal display at rest, upper and lower lip procumbency and the nasolabial angle were significantly correlated with the lower anterior facial height. Conclusion: Short facial pattern is associated with minimal incisal display, recumbent upper and lower lips and acute nasolabial angle while the long facial pattern is associated with excessive incisal display, procumbent upper and lower lips and obtuse nasolabial angle. (author)

  16. Enhancement of the facial nerve at MR imaging

    International Nuclear Information System (INIS)

    Gebarski, S.S.; Telian, S.; Niparko, J.

    1990-01-01

    In the few cases studied, normal facial nerves are reported to show no MR enhancement. Because this did not fit clinical experience, the authors designed a retrospective imaging review with anatomic correlation. Between June 1989 and June 1990, 175 patients underwent focused temporal bone MR imaging before and after administration of intravenous gadopentetate dimeglumine (0.1 mmol/kg). Exclusion criteria for the study included facial nerve dysfunction (subjective or objective); facial nerve mass; central nervous system infection, inflammation, or trauma; neurofibromatosis; or previous cranial surgery of any type. The following sequences were reviewed: GE 1.5-T axial spin-echo, TR 567 msec, TE 20 msec, 256 x 192 matrix, 2.0 excitations, 20-cm field of view, 3-mm section thickness. Imaging analysis was a side-by-side comparison of the images and region-of-interest quantified signal intensity. Anatomic correlation included a comparison with dissection and axial histologic sections. Ninety-three patients (aged 15-75 years) were available for imaging analysis after the exclusionary criteria were applied. With 46 patients (92 facial nerves) analyzed, they found that 76 nerves (83%) showed easily visible gadopentetate dimeglumine enhancement, especially about the geniculate ganglia. Sixteen (17%) of the 92 nerves did not show visible enhancement, but region-of-interest analysis showed increased intensity after gadopentetate dimeglumine administration. Sixteen patients (42%) showed right-to-left asymmetry in facial nerve enhancement. The facial nerves showed enhancement in the geniculate, tympanic, and fallopian portions; the facial nerve within the IAC showed no enhancement. This corresponded exactly with the topographic features of a circummeural arterial/venous plexus seen on the anatomic preparations.

  17. Relative preservation of the recognition of positive facial expression "happiness" in Alzheimer disease.

    Science.gov (United States)

    Maki, Yohko; Yoshida, Hiroshi; Yamaguchi, Tomoharu; Yamaguchi, Haruyasu

    2013-01-01

    Positivity recognition bias has been reported for facial expression as well as memory and visual stimuli in aged individuals, whereas emotional facial recognition in Alzheimer disease (AD) patients is controversial, with possible involvement of confounding factors such as deficits in spatial processing of non-emotional facial features and in verbal processing to express emotions. Thus, we examined whether recognition of positive facial expressions was preserved in AD patients, by adapting a new method that eliminated the influences of these confounding factors. Sensitivity to six basic facial expressions (happiness, sadness, surprise, anger, disgust, and fear) was evaluated in 12 outpatients with mild AD, 17 aged normal controls (ANC), and 25 young normal controls (YNC). To eliminate the factors related to non-emotional facial features, averaged faces were prepared as stimuli. To eliminate the factors related to verbal processing, the participants were required to match the stimulus and answer images, avoiding the use of verbal labels. In recognition of happiness, there was no difference in sensitivity between YNC and ANC, or between ANC and AD patients. AD patients were less sensitive than ANC in recognition of sadness, surprise, and anger. ANC were less sensitive than YNC in recognition of surprise, anger, and disgust. Within the AD patient group, sensitivity to happiness was significantly higher than that to the other five expressions. In AD patients, recognition of happiness was relatively preserved; it was the most sensitive and was preserved against the influences of age and disease.

  18. A novel human-machine interface based on recognition of multi-channel facial bioelectric signals

    International Nuclear Information System (INIS)

    Razazadeh, Iman Mohammad; Firoozabadi, S. Mohammad; Golpayegani, S.M.R.H.; Hu, H.

    2011-01-01

    Full text: This paper presents a novel human-machine interface for disabled people to interact with assistive systems for a better quality of life. It is based on multichannel forehead bioelectric signals acquired by placing three pairs of electrodes (physical channels) on the Frontalis and Temporalis facial muscles. The acquired signals are passed through a parallel filter bank to explore three different sub-bands related to facial electromyogram, electrooculogram and electroencephalogram. Root mean square (RMS) features of the bioelectric signals were extracted within non-overlapping 256 ms windows. The subtractive fuzzy c-means clustering method (SFCM) was applied to segment the feature space and generate initial fuzzy-based Takagi-Sugeno rules. Then, an adaptive neuro-fuzzy inference system is exploited to tune the premise and consequence parameters of the extracted SFCM rules. The average classifier discrimination ratio for eight different facial gestures (smiling, frowning, pulling up the left/right lip corner, and eye movement to left/right/up/down) is between 93.04% and 96.99% according to different combinations and fusions of features. Experimental results show that the proposed interface has a high degree of accuracy and robustness for discrimination of 8 fundamental facial gestures. Some potential and further capabilities of our approach in human-machine interfaces are also discussed. (author)
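    The windowed RMS feature extraction described in the abstract can be sketched as follows. The sampling rate, test signal, and function name are hypothetical stand-ins; only the non-overlapping 256 ms windows and the RMS statistic come from the text.

    ```python
    import numpy as np

    def rms_features(signal, fs, win_ms=256):
        """RMS features over non-overlapping windows of one bioelectric channel.

        signal : 1-D array, one forehead channel
        fs     : sampling rate in Hz (hypothetical; not stated in the abstract)
        """
        win = int(fs * win_ms / 1000)        # samples per 256 ms window
        n = len(signal) // win               # number of complete windows
        frames = signal[: n * win].reshape(n, win)
        return np.sqrt(np.mean(frames ** 2, axis=1))

    # Example: 1 s of a synthetic 100 Hz sine sampled at 1 kHz
    fs = 1000
    t = np.arange(fs) / fs
    x = np.sin(2 * np.pi * 100 * t)
    feats = rms_features(x, fs)              # 3 complete 256 ms windows
    print(feats.shape)                       # (3,)
    ```

    Each element is the RMS of one window; for a pure sine it stays close to 1/sqrt(2) of the amplitude, which makes the sketch easy to sanity-check.
    
    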

  19. Dynamic Facial Prosthetics for Sufferers of Facial Paralysis

    Directory of Open Access Journals (Sweden)

    Fergal Coulter

    2011-10-01

    Full Text Available Background: This paper discusses the various methods and the materials for the fabrication of active artificial facial muscles. The primary use for these will be the reanimation of paralysed or atrophied muscles in sufferers of non-recoverable unilateral facial paralysis. Method: The prosthetic solution described in this paper is based on sensing muscle motion of the contralateral healthy muscles and replicating that motion across a patient's paralysed side of the face, via solid state and thin film actuators. The development of this facial prosthetic device focused on recreating a varying intensity smile, with emphasis on timing, displacement and the appearance of the wrinkles and folds that commonly appear around the nose and eyes during the expression. An animatronic face was constructed with actuations being made to a silicone representation of the musculature, using multiple shape-memory alloy cascades. Alongside the artificial muscle physical prototype, a facial expression recognition software system was constructed. This forms the basis of an automated calibration and reconfiguration system for the artificial muscles following implantation, so as to suit the implantee's unique physiognomy. Results: An animatronic model face with silicone musculature was designed and built to evaluate the performance of shape-memory alloy artificial muscles, their power control circuitry and software control systems. A dual facial motion sensing system was designed to allow real-time control over the model: a piezoresistive flex sensor to measure physical motion, and a computer vision system to evaluate real to artificial muscle performance. Analysis of various facial expressions in real subjects was made, which gives useful data upon which to base the system's parameter limits. Conclusion: The system performed well, and the various strengths and shortcomings of the materials and methods are reviewed and considered for the next research phase, when new polymer-based artificial muscles are constructed.

  20. Facial Fractures.

    Science.gov (United States)

    Ghosh, Rajarshi; Gopalkrishnan, Kulandaswamy

    2018-06-01

    The aim of this study is to retrospectively analyze the incidence of facial fractures along with age, gender predilection, etiology, commonest site, associated dental injuries, and any complications in patients operated on in the Craniofacial Unit of SDM College of Dental Sciences and Hospital. This retrospective study was conducted at the Department of OMFS, SDM College of Dental Sciences, Dharwad from January 2003 to December 2013. Data were recorded for the cause of injury, age and gender distribution, frequency and type of injury, localization and frequency of soft tissue injuries, dentoalveolar trauma, facial bone fractures, complications, concomitant injuries, and different treatment protocols. All data were analyzed using the chi-squared test. A total of 1146 patients reported to our unit with facial fractures during these 10 years. Males accounted for a higher frequency of facial fractures (88.8%). The mandible was the commonest bone to be fractured among all the facial bones (71.2%). Maxillary central incisors were the most common teeth to be injured (33.8%) and avulsion was the most common type of injury (44.6%). The commonest postoperative complication was plate infection (11%) leading to plate removal. Other injuries associated with facial fractures were rib fractures, head injuries, upper and lower limb fractures, etc.; among these, rib fractures were seen most frequently (21.6%). This study was performed to compare the different etiologic factors leading to diverse facial fracture patterns. By statistical analysis of these records, the authors determined the relationship of facial fractures with gender, age, associated comorbidities, etc.

  1. The Mysterious Noh Mask: Contribution of Multiple Facial Parts to the Recognition of Emotional Expressions

    Science.gov (United States)

    Miyata, Hiromitsu; Nishimura, Ritsuko; Okanoya, Kazuo; Kawai, Nobuyuki

    2012-01-01

    Background: A Noh mask, worn by expert actors when performing in a Japanese traditional Noh drama, is suggested to convey countless different facial expressions according to different angles of head/body orientation. The present study addressed the question of how different facial parts of a Noh mask, including the eyebrows, the eyes, and the mouth, may contribute to different emotional expressions. Both experimental situations of active creation and passive recognition of emotional facial expressions were introduced. Methodology/Principal Findings: In Experiment 1, participants either created happy or sad facial expressions, or imitated a face that looked up or down, by actively changing each facial part of a Noh mask image presented on a computer screen. For an upward tilted mask, the eyebrows and the mouth shared common features with sad expressions, whereas the eyes shared common features with happy expressions. This contingency tended to be reversed for a downward tilted mask. Experiment 2 further examined which facial parts of a Noh mask are crucial in determining emotional expressions. Participants were exposed to synthesized Noh mask images with different facial parts expressing different emotions. Results clearly revealed that participants primarily used the shape of the mouth in judging emotions. The facial images having the mouth of an upward/downward tilted Noh mask strongly tended to be evaluated as sad/happy, respectively. Conclusions/Significance: The results suggest that Noh masks express chimeric emotional patterns, with different facial parts conveying different emotions. This appears consistent with the principles of Noh, which highly appreciate subtle and composite emotional expressions, as well as with the mysterious facial expressions observed in Western art. It was further demonstrated that the mouth serves as a diagnostic feature in characterizing the emotional expressions. This indicates the superiority of biologically-driven factors over the traditionally

  2. Cerebro-facio-thoracic dysplasia (Pascual-Castroviejo syndrome): Identification of a novel mutation, use of facial recognition analysis, and review of the literature.

    Science.gov (United States)

    Tender, Jennifer A F; Ferreira, Carlos R

    2018-04-13

    Cerebro-facio-thoracic dysplasia (CFTD) is a rare, autosomal recessive disorder characterized by facial dysmorphism, cognitive impairment and distinct skeletal anomalies and has been linked to the TMCO1 defect syndrome. To describe two siblings with features consistent with CFTD with a novel homozygous p.Arg114* pathogenic variant in the TMCO1 gene. We conducted a literature review and summarized the clinical features and laboratory results of two siblings with a novel pathogenic variant in the TMCO1 gene. Facial recognition analysis was utilized to assess the specificity of facial traits. The novel homozygous p.Arg114* pathogenic variant in the TMCO1 gene is responsible for the clinical features of CFTD in two siblings. Facial recognition analysis allows unambiguous distinction of this syndrome against controls.

  3. Targeted presurgical decompensation in patients with yaw-dependent facial asymmetry

    OpenAIRE

    Kim, Kyung-A; Lee, Ji-Won; Park, Jeong-Ho; Kim, Byoung-Ho; Ahn, Hyo-Won; Kim, Su-Jung

    2017-01-01

    Facial asymmetry can be classified into the rolling-dominant type (R-type), translation-dominant type (T-type), yawing-dominant type (Y-type), and atypical type (A-type) based on the distorted skeletal components that cause canting, translation, and yawing of the maxilla and/or mandible. Each facial asymmetry type represents dentoalveolar compensations in three dimensions that correspond to the main skeletal discrepancies. To obtain sufficient surgical correction, it is necessary to analyze t...

  4. The Child Affective Facial Expression (CAFE) Set: Validity and Reliability from Untrained Adults

    Directory of Open Access Journals (Sweden)

    Vanessa LoBue

    2015-01-01

    Full Text Available Emotional development is one of the largest and most productive areas of psychological research. For decades, researchers have been fascinated by how humans respond to, detect, and interpret emotional facial expressions. Much of the research in this area has relied on controlled stimulus sets of adults posing various facial expressions. Here we introduce a new stimulus set of emotional facial expressions into the domain of research on emotional development—the Child Affective Facial Expression set (CAFE). The CAFE set features photographs of a racially and ethnically diverse group of 2- to 8-year-old children posing for 6 emotional facial expressions—angry, fearful, sad, happy, surprised, and disgusted—and a neutral face. In the current work, we describe the set and report validity and reliability data on the set from 100 untrained adult participants.

  5. Quantitative Anthropometric Measures of Facial Appearance of Healthy Hispanic/Latino White Children: Establishing Reference Data for Care of Cleft Lip With or Without Cleft Palate

    Science.gov (United States)

    Lee, Juhun; Ku, Brian; Combs, Patrick D.; Da Silveira, Adriana C.; Markey, Mia K.

    2017-06-01

    Cleft lip with or without cleft palate (CL ± P) is one of the most common congenital facial deformities worldwide. To minimize negative social consequences of CL ± P, reconstructive surgery is conducted to modify the face to a more normal appearance. Each race/ethnic group requires its own facial norm data, yet there are no existing facial norm data for Hispanic/Latino White children. The objective of this paper is to identify measures of facial appearance relevant for planning reconstructive surgery for CL ± P of Hispanic/Latino White children. Quantitative analysis was conducted on 3D facial images of 82 (41 girls, 41 boys) healthy Hispanic/Latino White children whose ages ranged from 7 to 12 years. Twenty-eight facial anthropometric features related to CL ± P (mainly in the nasal and mouth area) were measured from 3D facial images. In addition, facial aesthetic ratings were obtained from 16 non-clinical observers for the same 3D facial images using a 7-point Likert scale. Pearson correlation analysis was conducted to find features that were correlated with the panel ratings of observers. Boys with a longer face and nose, or thicker upper and lower lips are considered more attractive than others while girls with a less curved middle face contour are considered more attractive than others. Associated facial landmarks for these features are primary focus areas for reconstructive surgery for CL ± P. This study identified anthropometric measures of facial features of Hispanic/Latino White children that are pertinent to CL ± P and which correlate with the panel attractiveness ratings.
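    The Pearson correlation analysis used to relate anthropometric measures to panel ratings can be sketched as below. The nose-length values, ratings, and function name are invented for illustration, not taken from the study.

    ```python
    import numpy as np

    def pearson_r(x, y):
        """Pearson correlation between a facial measurement and
        per-face mean attractiveness ratings."""
        x = np.asarray(x, float)
        y = np.asarray(y, float)
        xm, ym = x - x.mean(), y - y.mean()
        return float(xm @ ym / np.sqrt((xm @ xm) * (ym @ ym)))

    # Hypothetical toy data: nose length (mm) and mean 7-point panel
    # rating per face (the study used 28 features over 82 children)
    nose_len = [41.0, 43.5, 45.2, 47.8, 50.1]
    rating = [3.2, 3.8, 4.1, 4.9, 5.4]
    r = pearson_r(nose_len, rating)
    ```

    A feature is retained as "correlated with the panel ratings" when `r` is large in magnitude and statistically significant; the significance test is omitted from this sketch.
    
    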

  6. Four not six: Revealing culturally common facial expressions of emotion.

    Science.gov (United States)

    Jack, Rachael E; Sun, Wei; Delis, Ioannis; Garrod, Oliver G B; Schyns, Philippe G

    2016-06-01

    As a highly social species, humans generate complex facial expressions to communicate a diverse range of emotions. Since Darwin's work, identifying among these complex patterns which are common across cultures and which are culture-specific has remained a central question in psychology, anthropology, philosophy, and more recently machine vision and social robotics. Classic approaches to addressing this question typically tested the cross-cultural recognition of theoretically motivated facial expressions representing 6 emotions, and reported universality. Yet, variable recognition accuracy across cultures suggests a narrower cross-cultural communication supported by sets of simpler expressive patterns embedded in more complex facial expressions. We explore this hypothesis by modeling the facial expressions of over 60 emotions across 2 cultures, and segregating out the latent expressive patterns. Using a multidisciplinary approach, we first map the conceptual organization of a broad spectrum of emotion words by building semantic networks in 2 cultures. For each emotion word in each culture, we then model and validate its corresponding dynamic facial expression, producing over 60 culturally valid facial expression models. We then apply to the pooled models a multivariate data reduction technique, revealing 4 latent and culturally common facial expression patterns that each communicates specific combinations of valence, arousal, and dominance. We then reveal the face movements that accentuate each latent expressive pattern to create complex facial expressions. Our data questions the widely held view that 6 facial expression patterns are universal, instead suggesting 4 latent expressive patterns with direct implications for emotion communication, social psychology, cognitive neuroscience, and social robotics. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
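    As a sketch of the multivariate data reduction step, the snippet below applies PCA via SVD to a stand-in matrix of pooled facial expression models and keeps 4 components. The matrix dimensions and random data are hypothetical, and the study's actual reduction technique may differ.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical stand-in: 120 facial expression models (60 emotions x
    # 2 cultures), each described by 42 face-movement features.
    X = rng.random((120, 42))

    # Center and reduce with PCA (via SVD), keeping 4 components as a
    # simple stand-in for the multivariate data reduction in the study.
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    latent_patterns = Vt[:4]          # 4 latent expressive patterns (loadings)
    scores = Xc @ latent_patterns.T   # each model in the 4-pattern space
    explained = (S[:4] ** 2).sum() / (S ** 2).sum()
    ```

    In the study, each latent pattern would then be inspected to see which face movements it accentuates and which valence/arousal/dominance combination it communicates.
    
    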

  7. Signs of Facial Aging in Men in a Diverse, Multinational Study: Timing and Preventive Behaviors.

    Science.gov (United States)

    Rossi, Anthony M; Eviatar, Joseph; Green, Jeremy B; Anolik, Robert; Eidelman, Michael; Keaney, Terrence C; Narurkar, Vic; Jones, Derek; Kolodziejczyk, Julia; Drinkwater, Adrienne; Gallagher, Conor J

    2017-11-01

    Men are a growing patient population in aesthetic medicine and are increasingly seeking minimally invasive cosmetic procedures. To examine differences in the timing of facial aging and in the prevalence of preventive facial aging behaviors in men by race/ethnicity. Men aged 18 to 75 years in the United States, Canada, United Kingdom, and Australia rated their features using photonumeric rating scales for 10 facial aging characteristics. Impact of race/ethnicity (Caucasian, black, Asian, Hispanic) on severity of each feature was assessed. Subjects also reported the frequency of dermatologic facial product use. The study included 819 men. Glabellar lines, crow's feet lines, and nasolabial folds showed the greatest change with age. Caucasian men reported more severe signs of aging and earlier onset, by 10 to 20 years, compared with Asian, Hispanic, and, particularly, black men. In all racial/ethnic groups, most men did not regularly engage in basic, antiaging preventive behaviors, such as use of sunscreen. Findings from this study conducted in a globally diverse sample may guide clinical discussions with men about the prevention and treatment of signs of facial aging, to help men of all races/ethnicities achieve their desired aesthetic outcomes.

  8. Adolescents with HIV and facial lipoatrophy: response to facial stimulation

    Directory of Open Access Journals (Sweden)

    Jesus Claudio Gabana-Silveira

    2014-08-01

    Full Text Available OBJECTIVES: This study evaluated the effects of facial stimulation of the superficial muscles of the face in individuals with facial lipoatrophy associated with human immunodeficiency virus (HIV) and with no indication for treatment with polymethyl methacrylate. METHOD: The study sample comprised four adolescents of both genders, aged 13 to 17 years. To participate in the study, the participants had to score six or fewer points on the Facial Lipoatrophy Index. The facial stimulation program used in our study consisted of 12 weekly 30-minute therapy sessions. The therapy consisted of intra- and extra-oral muscle contraction and stretching maneuvers of the zygomaticus major and minor and the masseter muscles. Pre- and post-treatment results were obtained using anthropometric static measurements of the face and the Facial Lipoatrophy Index. RESULTS: The results suggest that the therapeutic program effectively improved the volume of the buccinator region. No significant differences were observed for the measurements of the medial portion of the face, the lateral portion of the face, the volume of the masseter muscle, or Facial Lipoatrophy Index scores. CONCLUSION: The results of our study suggest that facial maneuvers applied to the superficial muscles of the face of adolescents with facial lipoatrophy associated with HIV improved the facial volume related to the buccinator muscles. We believe that our results will encourage future research with HIV patients, especially patients who do not have the possibility of receiving an alternative aesthetic treatment.

  9. [Measuring impairment of facial affects recognition in schizophrenia. Preliminary study of the facial emotions recognition task (TREF)].

    Science.gov (United States)

    Gaudelus, B; Virgile, J; Peyroux, E; Leleu, A; Baudouin, J-Y; Franck, N

    2015-06-01

    The impairment of social cognition, including facial affect recognition, is a well-established trait in schizophrenia, and specific cognitive remediation programs focusing on facial affect recognition have been developed by different teams worldwide. However, even though social cognitive impairments have been confirmed, previous studies have also shown heterogeneity of results across subjects. Therefore, individual abilities should be assessed before proposing such programs. Most research teams apply tasks based on the facial affect sets of Ekman et al. or Gur et al. However, these tasks are not easily applicable in everyday clinical work. Here, we present the Facial Emotions Recognition Test (TREF), which is designed to identify facial affect recognition impairments in clinical practice. The test is composed of 54 photos and evaluates abilities in the recognition of six universal emotions (joy, anger, sadness, fear, disgust and contempt). Each of these emotions is represented with colored photos of 4 different models (two men and two women) at nine intensity levels from 20 to 100%. Each photo is presented for 10 seconds; no time limit for responding is applied. The present study compared scores on the TREF in a sample of healthy controls (64 subjects) and people with stabilized schizophrenia (45 subjects) according to DSM-IV-TR criteria. We analysed global scores for all emotions, as well as subscores for each emotion, between these two groups, taking gender differences into account. Our results were consistent with previous findings. Applying the TREF, we confirmed an impairment in facial affect recognition in schizophrenia by showing significant differences between the two groups in global results (76.45% for healthy controls versus 61.28% for people with schizophrenia), as well as in subscores for each emotion except joy. Scores for women were significantly higher than those for men in the population.

  10. A facial marker in facial wasting rehabilitation.

    Science.gov (United States)

    Rauso, Raffaele; Tartaro, Gianpaolo; Freda, Nicola; Rusciani, Antonio; Curinga, Giuseppe

    2012-02-01

    Facial lipoatrophy is one of the most distressing manifestations for HIV patients. It can be stigmatizing, severely affecting quality of life and self-esteem, and it may result in reduced antiretroviral adherence. Several filling techniques have been proposed for facial wasting restoration, with different outcomes. The aim of this study is to present a triangular area that is useful to fill in facial wasting rehabilitation. Twenty-eight HIV patients rehabilitated for facial wasting were enrolled in this study. Sixteen were rehabilitated with a non-resorbable filler and twelve with structural fat graft harvested from lipohypertrophied areas. A photographic pre-operative and post-operative evaluation was performed by the patients and by two plastic surgeons who were "blinded." The filled area, in patients rehabilitated with structural fat grafts and in those treated with non-resorbable filler alike, was a triangular area of depression identified between the nasolabial fold, the malar arch, and the line that connects these two anatomical landmarks. The cosmetic result was evaluated three months after the last filling procedure in the non-resorbable filler group and three months post-surgery in the structural fat graft group. The mean patient satisfaction score was 8.7 as assessed with a visual analogue scale. The mean score for blinded evaluators was 7.6. In this study the authors describe a triangular area of the face, between the nasolabial fold, the malar arch, and the line that connects these two anatomical landmarks, where a good aesthetic facial restoration in HIV patients with facial wasting may be achieved regardless of which filling technique is used.

  11. Appearance-based human gesture recognition using multimodal features for human computer interaction

    Science.gov (United States)

    Luo, Dan; Gao, Hua; Ekenel, Hazim Kemal; Ohya, Jun

    2011-03-01

    The use of gesture as a natural interface plays a crucially important role in achieving intelligent Human Computer Interaction (HCI). Human gestures include different components of visual actions, such as motion of the hands, facial expression, and torso, to convey meaning. So far, in the field of gesture recognition, most previous work has focused on the manual component of gestures. In this paper, we present an appearance-based multimodal gesture recognition framework, which combines different groups of features, such as facial expression features and hand motion features, extracted from image frames captured by a single web camera. We consider 12 classes of human gestures with facial expressions conveying neutral, negative and positive meanings from American Sign Language (ASL). We combine the features at two levels by employing two fusion strategies. At the feature level, an early feature combination is performed by concatenating and weighting different feature groups, and LDA is used to choose the most discriminative elements by projecting the features onto a discriminative expression space. The second strategy is applied at the decision level: weighted decisions from single modalities are fused at a later stage. A condensation-based algorithm is adopted for classification. We collected a data set with three to seven recording sessions and conducted experiments with the combination techniques. Experimental results showed that facial analysis improves hand gesture recognition, and that decision-level fusion performs better than feature-level fusion.
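    The two fusion strategies can be illustrated with a minimal sketch. The weights, class scores, and function names are assumptions, and the LDA projection used in the paper is omitted.

    ```python
    import numpy as np

    def feature_level_fusion(face_feats, hand_feats, w_face=0.5, w_hand=0.5):
        """Early fusion: weight and concatenate the two feature groups
        (an LDA projection would normally follow; omitted here)."""
        return np.concatenate([w_face * np.asarray(face_feats),
                               w_hand * np.asarray(hand_feats)])

    def decision_level_fusion(p_face, p_hand, w_face=0.4, w_hand=0.6):
        """Late fusion: weighted combination of per-class scores from
        the two single-modality classifiers."""
        p = w_face * np.asarray(p_face) + w_hand * np.asarray(p_hand)
        return int(np.argmax(p)), p

    # Hypothetical per-class scores over 3 gesture classes from each modality
    cls, fused = decision_level_fusion([0.2, 0.5, 0.3], [0.6, 0.3, 0.1])

    # Early fusion of a 4-dim facial and a 6-dim hand feature vector
    fvec = feature_level_fusion(np.ones(4), np.zeros(6))
    ```

    With these toy scores the hand modality dominates (weight 0.6), so the fused decision follows its top class; in practice the weights would be tuned on validation data.
    
    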

  12. [Evidence of facial palsy and facial malformations in pottery from Peruvian Moche and Lambayeque pre-Columbian cultures].

    Science.gov (United States)

    Carod-Artal, F J; Vázquez Cabrera, C B

    2006-01-01

    Moche (100-700 AD) and Lambayeque-Sicán (750-1100 AD) are pre-Columbian cultures of the Regional States Period that developed in northern Peru. Information about daily life, religion and medicine has been obtained through the study of Moche ceramics found in the tombs of lords and priests, in pyramids and in temples. Our aim was to analyze the archeological evidence of Moche medicine and neurological diseases through ceramics. Representations of diseases in Moche and Lambayeque iconography and the Moche pottery collections exhibited in the Casinelli museum in Trujillo and the Brüning National Archeological museum in Lambayeque, Peru, were studied. The most representative cases were analyzed and photographed, with prior authorization from the authorities and curators of the museums. The following pathologies were observed in the ceramic collections: peripheral facial palsy, facial malformations such as cleft lip, hemifacial spasm, leg and arm amputations, scoliosis, and conjoined twins. Male and female Moche doctors were also depicted in the ceramics, treating patients in ritual ceremonies. The main pathologies observed in Moche and Lambayeque pottery are facial palsy and cleft lip. These are among the earliest records of these pathologies in the pre-Columbian cultures of South America.

  13. When is facial paralysis Bell palsy? Current diagnosis and treatment.

    Science.gov (United States)

    Ahmed, Anwar

    2005-05-01

    Bell palsy is largely a diagnosis of exclusion, but certain features in the history and physical examination help distinguish it from facial paralysis due to other conditions: eg, abrupt onset with complete, unilateral facial weakness at 24 to 72 hours, and, on the affected side, numbness or pain around the ear, a reduction in taste, and hypersensitivity to sounds. Corticosteroids and antivirals given within 10 days of onset have been shown to help. But Bell palsy resolves spontaneously without treatment in most patients within 6 months.

  14. A Report of Two Cases of Solid Facial Edema in Acne

    OpenAIRE

    Kuhn-Régnier, Sarah; Mangana, Joanna; Kerl, Katrin; Kamarachev, Jivko; French, Lars E.; Cozzio, Antonio; Navarini, Alexander A.

    2017-01-01

    Introduction: Solid facial edema (SFE) is a rare complication of acne vulgaris. Our aim was to examine the clinical features of acne patients with solid facial edema, and to give an overview of the outcomes of previous topical and systemic treatments in the cases published so far. Methods: We report two cases from Switzerland, both young men with initially papulopustular acne resistant to topical retinoids. Results: Both cases responded to oral isotretinoin, in one case combined with oral steroids. Our cases...

  15. Gd-DTPA enhancement of the facial nerve in Ramsay Hunt's syndrome

    Energy Technology Data Exchange (ETDEWEB)

    Kato, Tsutomu; Yanagida, Masahiro; Yamauchi, Yasuo [Kansai Medical School, Moriguchi, Osaka (Japan); and others

    1992-10-01

    A total of 21 MR images in 16 patients with Ramsay Hunt's syndrome were evaluated. In all images, the involved side of the peripheral facial nerve showed enhancement after Gd-DTPA administration. However, 2 cases had already recovered from facial palsy when the MR images were taken. Nine of 19 cases with enhancement of the internal auditory canal portion had vertigo or tinnitus. It is therefore suggested that enhancement of the internal auditory canal portion and the clinical features are closely related. (author).

  16. Gd-DTPA enhancement of the facial nerve in Ramsay Hunt's syndrome

    International Nuclear Information System (INIS)

    Kato, Tsutomu; Yanagida, Masahiro; Yamauchi, Yasuo

    1992-01-01

    A total of 21 MR images in 16 patients with Ramsay Hunt's syndrome were evaluated. In all images, the involved side of the peripheral facial nerve showed enhancement after Gd-DTPA administration. However, 2 cases had already recovered from facial palsy when the MR images were taken. Nine of 19 cases with enhancement of the internal auditory canal portion had vertigo or tinnitus. It is therefore suggested that enhancement of the internal auditory canal portion and the clinical features are closely related. (author)

  17. Dermoscopic clues to differentiate facial lentigo maligna from pigmented actinic keratosis.

    Science.gov (United States)

    Lallas, A; Tschandl, P; Kyrgidis, A; Stolz, W; Rabinovitz, H; Cameron, A; Gourhant, J Y; Giacomel, J; Kittler, H; Muir, J; Argenziano, G; Hofmann-Wellenhof, R; Zalaudek, I

    2016-05-01

    Dermoscopy has limited accuracy in differentiating between pigmented lentigo maligna (LM) and pigmented actinic keratosis (PAK). This might be related to the fact that most studies have focused on pigmented criteria only, without considering additional recognizable features. Our aim was to investigate the diagnostic accuracy of established dermoscopic criteria for pigmented LM and PAK, while also including in the evaluation features previously associated with nonpigmented facial actinic keratosis. Retrospectively enrolled cases of histopathologically diagnosed LM, PAK and solar lentigo/early seborrhoeic keratosis (SL/SK) were dermoscopically evaluated for the presence of predefined criteria. Univariate and multivariate regression analyses were performed and receiver operating characteristic curves were used. The study sample consisted of 70 LMs, 56 PAKs and 18 SL/SKs. In the multivariate analysis, the most potent predictors of LM were grey rhomboids (sixfold increased probability of LM), nonevident follicles (fourfold) and intense pigmentation (twofold). In contrast, white circles, scales and red colour were significantly correlated with PAK, conferring a 14-fold, eightfold and fourfold increased probability of PAK, respectively. The absence of evident follicles also represented a frequent LM criterion, characterizing 71% of LMs. In conclusion, white circles, evident follicles, scales and red colour represent significant diagnostic clues for PAK, whereas intense pigmentation and grey rhomboidal lines appear highly suggestive of LM. © 2015 British Association of Dermatologists.
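    The "sixfold", "fourfold" and "14-fold" figures quoted in this record are odds ratios from the multivariate (logistic) regression: a fitted coefficient β for a binary dermoscopic criterion corresponds to an odds ratio of exp(β). The coefficients below are back-computed from the reported ratios purely for illustration; they are not the study's fitted values.

    ```python
    import math

    # Hypothetical log-odds coefficients, back-computed from the reported
    # odds ratios for predicting lentigo maligna (LM) vs. PAK.
    coefs = {
        "grey_rhomboids": math.log(6),        # "sixfold increased probability of LM"
        "nonevident_follicles": math.log(4),  # "fourfold"
        "intense_pigmentation": math.log(2),  # "twofold"
    }
    odds_ratios = {name: math.exp(beta) for name, beta in coefs.items()}
    print(round(odds_ratios["grey_rhomboids"], 6))  # → 6.0
    ```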

  18. Representing Objects using Global 3D Relational Features for Recognition Tasks

    DEFF Research Database (Denmark)

    Mustafa, Wail

    2015-01-01

    representations. For representing objects, we derive global descriptors encoding shape using viewpoint-invariant features obtained from multiple sensors observing the scene. Objects are also described using color independently. This allows for combining color and shape when it is required for the task. For more...... robust color description, color calibration is performed. The framework was used in three recognition tasks: object instance recognition, object category recognition, and object spatial relationship recognition. For the object instance recognition task, we present a system that utilizes color and scale...

  19. [Facial nerve neurinomas].

    Science.gov (United States)

    Sokołowski, Jacek; Bartoszewicz, Robert; Morawski, Krzysztof; Jamróz, Barbara; Niemczyk, Kazimierz

    2013-01-01

    The main purpose of this study was to evaluate the diagnostics, surgical technique and treatment results of facial nerve neurinomas and to compare them with the literature. Seven patients (2005-2011) with facial nerve schwannomas treated in the Department of Otolaryngology, Medical University of Warsaw, were included in a retrospective analysis. All patients were assessed with the history of the disease, physical examination, hearing tests, computed tomography and/or magnetic resonance imaging, and electronystagmography. Cases were monitored for potential complications and recurrences. Neurinomas of the facial nerve occurred in the vertical segment (n=2), the facial nerve geniculum (n=1) and the internal auditory canal (n=4). The symptoms observed in patients were analyzed: facial nerve paresis (n=3), hearing loss (n=2), dizziness (n=1). Magnetic resonance imaging and computed tomography allowed the presence of the tumor to be confirmed and its stage to be assessed. The schwannomas of the facial nerve were surgically removed using the middle fossa approach (n=5) and by antromastoidectomy (n=2). Anatomical continuity of the facial nerve was preserved in 3 cases. Twelve months after surgery, facial nerve paresis was rated at level II-III° HB. There was no recurrence of the tumor on radiological follow-up. Facial nerve neurinoma is a rare tumor. Current surgical techniques allow, in most cases, radical removal of the lesion and reconstruction of VII nerve function. The rate of recurrence is low. A tumor of the facial nerve should be considered in the differential diagnosis of VII nerve paresis. Copyright © 2013 Polish Otorhinolaryngology - Head and Neck Surgery Society. Published by Elsevier Urban & Partner Sp. z.o.o. All rights reserved.

  20. Rejuvenecimiento facial

    Directory of Open Access Journals (Sweden)

    L. Daniel Jacubovsky, Dr.

    2010-01-01

    Full Text Available Facial aging is a process unique and particular to each individual, governed above all by his or her genetic makeup. The facelift is a complex technique, developed in our specialty since the beginning of the last century, to reverse the principal signs of this process. The secondary factors that bear on facial aging are numerous, and the rhytidectomies, or cervicofacial lifts, described have therefore sought to correct the physiognomic changes of aging by working, as described, in all the tissue planes involved. This surgery consequently demands thorough knowledge of the surgical anatomy, together with skill and experience, in order to reduce complications, surgical stigmata and secondary revisions. Facial rhytidectomy has evolved toward a simpler procedure, with shorter incisions and less extensive dissections. Muscle suspensions have varied in their execution, and the vectors of lift and skin resection are crucial to the aesthetic results of cervicofacial surgery. Today these vectors involve more vertical traction. Correction of flaccidity is accompanied by an interest in restoring volume to the surface of the face, especially the middle third. Surgical rejuvenation techniques, especially the facelift, require planning for each patient. Techniques adjunct to the facelift, such as blepharoplasty, mentoplasty, neck liposuction, facial implants and others, have also evolved positively toward reduced risks and better aesthetic success.

  1. A de novo 11q23 deletion in a patient presenting with severe ophthalmologic findings, psychomotor retardation and facial dysmorphism.

    Science.gov (United States)

    Şimşek-Kiper, Pelin Özlem; Bayram, Yavuz; Ütine, Gülen Eda; Alanay, Yasemin; Boduroğlu, Koray

    2014-01-01

    Distal 11q deletion, previously known as Jacobsen syndrome, is caused by segmental aneusomy for the distal end of the long arm of chromosome 11. Typical clinical features include facial dysmorphism, mild-to-moderate psychomotor retardation, trigonocephaly, cardiac defects, and thrombocytopenia. There is a significant variability in the range of clinical features. We report herein a five-year-old girl with severe ophthalmological findings, facial dysmorphism, and psychomotor retardation with normal platelet function, in whom a de novo 11q23 deletion was detected, suggesting that distal 11q monosomy should be kept in mind in patients presenting with dysmorphic facial features and psychomotor retardation even in the absence of hematological findings.

  2. Forensic Facial Reconstruction: Relationship Between the Alar Cartilage and Piriform Aperture.

    Science.gov (United States)

    Strapasson, Raíssa Ananda Paim; Herrera, Lara Maria; Melani, Rodolfo Francisco Haltenhoff

    2017-11-01

    During forensic facial reconstruction, facial features may be predicted based on the parameters of the skull. This study evaluated the relationships between alar cartilage and piriform aperture and nose morphology and facial typology. Ninety-six cone beam computed tomography images of Brazilian subjects (49 males and 47 females) were used in this study. OsiriX software was used to perform the following measurements: nasal width, distance between alar base insertion points, lower width of the piriform aperture, and upper width of the piriform aperture. Nasal width was associated with the lower width of the piriform aperture, sex, skeletal vertical pattern of the face, and age. The current study contributes to the improvement of forensic facial guides by identifying the relationships between the alar cartilages and characteristics of the biological profile of members of a population that has been little studied thus far. © 2017 American Academy of Forensic Sciences.

  3. Overview of pediatric peripheral facial nerve paralysis: analysis of 40 patients.

    Science.gov (United States)

    Özkale, Yasemin; Erol, İlknur; Saygı, Semra; Yılmaz, İsmail

    2015-02-01

    Peripheral facial nerve paralysis in children might be an alarming sign of serious disease such as malignancy, systemic disease, congenital anomalies, trauma, infection, middle ear surgery, and hypertension. The cases of 40 consecutive children and adolescents who were diagnosed with peripheral facial nerve paralysis at Baskent University Adana Hospital Pediatrics and Pediatric Neurology Unit between January 2010 and January 2013 were retrospectively evaluated. We determined that the most common cause was Bell palsy, followed by infection, tumor lesion, and suspected chemotherapy toxicity. We noted that younger patients had generally poorer outcome than older patients regardless of disease etiology. Peripheral facial nerve paralysis has been reported in many countries in America and Europe; however, knowledge about its clinical features, microbiology, neuroimaging, and treatment in Turkey is incomplete. The present study demonstrated that Bell palsy and infection were the most common etiologies of peripheral facial nerve paralysis. © The Author(s) 2014.

  4. Quantitative facial asymmetry: using three-dimensional photogrammetry to measure baseline facial surface symmetry.

    Science.gov (United States)

    Taylor, Helena O; Morrison, Clinton S; Linden, Olivia; Phillips, Benjamin; Chang, Johnny; Byrne, Margaret E; Sullivan, Stephen R; Forrest, Christopher R

    2014-01-01

    Although symmetry is hailed as a fundamental goal of aesthetic and reconstructive surgery, our tools for measuring this outcome have been limited and subjective. With the advent of three-dimensional photogrammetry, surface geometry can be captured, manipulated, and measured quantitatively. Until now, few normative data existed with regard to facial surface symmetry. Here, we present a method for reproducibly calculating overall facial symmetry and present normative data on 100 subjects. We enrolled 100 volunteers who underwent three-dimensional photogrammetry of their faces in repose. We collected demographic data on age, sex, and race and subjectively scored facial symmetry. We calculated the root mean square deviation (RMSD) between the native and reflected faces, reflecting about a plane of maximum symmetry. We analyzed the interobserver reliability of the subjective assessment of facial asymmetry and the quantitative measurements and compared the subjective and objective values. We also classified areas of greatest asymmetry as localized to the upper, middle, or lower facial thirds. This cluster of normative data was compared with a group of patients with subtle but increasing amounts of facial asymmetry. We imaged 100 subjects by three-dimensional photogrammetry. There was a poor interobserver correlation between subjective assessments of asymmetry (r = 0.56). There was a high interobserver reliability for quantitative measurements of facial symmetry RMSD calculations (r = 0.91-0.95). The mean RMSD for this normative population was found to be 0.80 ± 0.24 mm. Areas of greatest asymmetry were distributed as follows: 10% upper facial third, 49% central facial third, and 41% lower facial third. Precise measurement permitted discrimination of subtle facial asymmetry within this normative group and distinguished norms from patients with subtle facial asymmetry, with placement of RMSDs along an asymmetry ruler. Facial surface symmetry, which is poorly assessed
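    The RMSD symmetry measure described in this record can be illustrated with a toy computation: reflect the facial point cloud about a symmetry plane and measure the deviation from the original. The sketch below assumes the plane of maximum symmetry is already known and fixed at x = 0, and matches points by nearest neighbour rather than by true surface registration, so it is only a schematic of the published method, not the authors' pipeline.

    ```python
    import numpy as np

    def symmetry_rmsd(points: np.ndarray) -> float:
        """RMSD (same units as the input, e.g. mm) between a 3-D point cloud
        and its mirror image about the plane x = 0. Each reflected point is
        matched to its nearest original point -- a crude stand-in for the
        surface registration used with real photogrammetry meshes."""
        reflected = points * np.array([-1.0, 1.0, 1.0])  # mirror the x-coordinates
        d2 = ((reflected[:, None, :] - points[None, :, :]) ** 2).sum(axis=-1)
        return float(np.sqrt(d2.min(axis=1).mean()))     # mean over nearest matches

    # A perfectly symmetric point set deviates by 0 from its reflection.
    sym = np.array([[1.0, 0.0, 0.0], [-1.0, 0.0, 0.0], [0.0, 2.0, 1.0]])
    print(symmetry_rmsd(sym))  # → 0.0
    ```

    Any asymmetry makes the score strictly positive, which is how the study places subjects along its "asymmetry ruler".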

  5. Anatomía del Nervio Facial y sus Implicancias en los Procedimientos Quirúrgicos

    OpenAIRE

    Rodrigues, Antonio de Castro; Andreo, Jesus Carlos; Menezes, Laura de Freitas; Chinellato, Tatiana Pimentel; Rosa Júnior, Geraldo Marco

    2009-01-01

    Facial palsy, parotid diseases and other such conditions are relatively common clinical problems with a variety of causes. Irrespective of its etiology, facial palsy always represents a very serious problem for the patient. Parotid gland diseases are also a very common occurrence. In this particular case, knowledge of the surgical anatomy of the facial nerve and its correlations with the parotid gland is very important for adequate preservation of the nerve in surgery for benign and malignant diseases of...

  6. Humanoid Head Face Mechanism with Expandable Facial Expressions

    Directory of Open Access Journals (Sweden)

    Wagshum Techane Asheber

    2016-02-01

    Full Text Available Recently, social robots for daily life activities have become more common, and to this end a humanoid robot with realistic facial expressions is a strong candidate for common chores. In this paper, the development of a humanoid face mechanism with simplified system complexity to generate human-like facial expressions is presented. The distinctive feature of this face robot is its use of significantly fewer actuators: only three servo motors for facial expressions and five for the rest of the head motions. This leads to effectively low energy consumption, making it suitable for applications such as mobile humanoid robots. Moreover, the modular design makes it possible to have as many face appearances as needed on one structure, and the mechanism allows expansion to generate more expressions without the addition or alteration of components. The robot is also equipped with an audio system and a camera inside each eyeball; hearing and vision are thus utilized in localization, communication and the enhancement of expression exposition.

  7. Cerebral Angiographic Findings of Cosmetic Facial Filler-related Ophthalmic and Retinal Artery Occlusion

    OpenAIRE

    Kim, Yong-Kyu; Jung, Cheolkyu; Woo, Se Joon; Park, Kyu Hyung

    2015-01-01

    Cosmetic facial filler-related ophthalmic artery occlusion is a rare but devastating complication, and its exact pathophysiology is still elusive. Cerebral angiography provides more detailed information on blood flow in the ophthalmic artery, as well as in the surrounding orbital area, than can be obtained with fundus fluorescein angiography. This study aimed to evaluate the cerebral angiographic features of patients with cosmetic facial filler-related ophthalmic artery occlusion. We retrospectively reviewed...

  8. Facial responsiveness of psychopaths to the emotional expressions of others.

    Directory of Open Access Journals (Sweden)

    Janina Künecke

    Full Text Available Psychopathic individuals show selfish, manipulative, and antisocial behavior in addition to emotional detachment and reduced empathy. Their empathic deficits are thought to be associated with a reduced responsiveness to emotional stimuli. Immediate facial muscle responses to the emotional expressions of others reflect the expressive part of emotional responsiveness and are positively related to trait empathy. Empirical evidence for reduced facial muscle responses in adult psychopathic individuals to the emotional expressions of others is rare. In the present study, 261 male criminal offenders and non-offenders categorized dynamically presented facial emotion expressions (angry, happy, sad, and neutral) during facial electromyography recording of their corrugator muscle activity. We replicated a measurement model of facial muscle activity, which controls for general facial responsiveness to face stimuli, and modeled three correlated emotion-specific factors (i.e., anger, happiness, and sadness) representing emotion-specific activity. In a multi-group confirmatory factor analysis, we compared the means of the anger, happiness, and sadness latent factors between three groups: (1) non-offenders, (2) low psychopathic offenders, and (3) high psychopathic offenders. There were no significant mean differences between groups. Our results challenge current theories that focus on deficits in emotional responsiveness as leading to the development of psychopathy and encourage further theoretical development on deviant emotional processes in psychopathic individuals.

  9. Women living with facial hair: the psychological and behavioral burden.

    Science.gov (United States)

    Lipton, Michelle G; Sherr, Lorraine; Elford, Jonathan; Rustin, Malcolm H A; Clayton, William J

    2006-08-01

    While unwanted facial hair is clearly distressing for women, relatively little is known about its psychological impact. This study reports on the psychological and behavioral burden of facial hair in women with suspected polycystic ovary syndrome. Eighty-eight women (90% participation rate) completed a self-administered questionnaire concerning hair removal practices; the impact of facial hair on social and emotional domains; relationships and daily life; anxiety and depression (Hospital Anxiety and Depression Scale); self-esteem (Rosenberg Self-esteem Scale); and quality of life (WHOQOL-BREF). Women spent considerable time on the management of their facial hair (mean, 104 min/week). Two thirds (67%) reported continually checking in mirrors and 76% by touch. Forty percent felt uncomfortable in social situations. High levels of emotional distress and psychological morbidity were detected; 30% had levels of depression above the clinical cut off point, while 75% reported clinical levels of anxiety; 29% reported both. Although overall quality of life was good, scores were low in social and relationship domains--reflecting the impact of unwanted facial hair. Unwanted facial hair carries a high psychological burden for women and represents a significant intrusion into their daily lives. Psychological support is a neglected element of care for these women.

  10. Automatic prediction of facial trait judgments: appearance vs. structural models.

    Directory of Open Access Journals (Sweden)

    Mario Rojas

    Full Text Available Evaluating other individuals with respect to personality characteristics plays a crucial role in human relations and it is the focus of attention for research in diverse fields such as psychology and interactive computer systems. In psychology, face perception has been recognized as a key component of this evaluation system. Multiple studies suggest that observers use face information to infer personality characteristics. Interactive computer systems are trying to take advantage of these findings and apply them to increase the natural aspect of interaction and to improve their performance. Here, we experimentally test whether the automatic prediction of facial trait judgments (e.g. dominance) can be made by using the full appearance information of the face and whether a reduced representation of its structure is sufficient. We evaluate two separate approaches: a holistic representation model using the facial appearance information and a structural model constructed from the relations among facial salient points. State-of-the-art machine learning methods are applied to (a) derive a facial trait judgment model from training data and (b) predict a facial trait value for any face. Furthermore, we address the issue of whether there are specific structural relations among facial points that predict perception of facial traits. Experimental results over a set of labeled data (9 different trait evaluations) and classification rules (4 rules) suggest that (a) prediction of perception of facial traits is learnable by both holistic and structural approaches; (b) the most reliable prediction of facial trait judgments is obtained by certain types of holistic descriptions of the face appearance; and (c) for some traits such as attractiveness and extroversion, there are relationships between specific structural features and social perceptions.
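    The "structural model constructed from the relations among facial salient points" in this record can be sketched as a feature vector of pairwise distances between landmarks, which any off-the-shelf regressor could then map to a trait rating. The landmark coordinates below are hypothetical, chosen only to make the example concrete.

    ```python
    import numpy as np
    from itertools import combinations

    def structural_features(landmarks: np.ndarray) -> np.ndarray:
        """All pairwise Euclidean distances among 2-D landmark points,
        flattened into a single feature vector."""
        return np.array([np.linalg.norm(landmarks[i] - landmarks[j])
                         for i, j in combinations(range(len(landmarks)), 2)])

    # 5 toy landmarks (two eyes, nose tip, two mouth corners) -> C(5,2) = 10 distances
    face = np.array([[30.0, 40.0], [70.0, 40.0], [50.0, 60.0],
                     [38.0, 80.0], [62.0, 80.0]])
    feats = structural_features(face)
    print(feats.shape)  # → (10,)
    ```

    A holistic model would instead feed the raw pixel appearance of the face to the learner; the paper compares both representations.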

  11. Face it a visual reference for multi-ethnic facial modeling

    CERN Document Server

    Beckmann Wells, Patricia

    2013-01-01

    Face It presents practical hands-on techniques and 3D modeling and sculpting tools, with Maya and ZBrush production pipelines, uniquely focused on the facial modeling of 7 ethnicity models, featuring over 100 different models ranging in age from newborn to elderly characters. Face It is a resource for academics and professionals alike. Explore the modeling possibilities beyond the digital reference galleries online. No more having to adapt medical anatomy texts to your own models! Explore the finite details of facial anatomy with a focus on skull development, muscle structure, e

  12. Survey of methods of facial palsy documentation in use by members of the Sir Charles Bell Society

    NARCIS (Netherlands)

    Fattah, A.Y.; Gavilan, J.; Hadlock, T.A.; Marcus, J.R.; Marres, H.A.; Nduka, C.; Slattery, W.H.; Snyder-Warwick, A.K.

    2014-01-01

    OBJECTIVES/HYPOTHESIS: Facial palsy manifests a broad array of deficits affecting function, form, and psychological well-being. Assessment scales were introduced to standardize and document the features of facial palsy and to facilitate the exchange of information and comparison of outcomes. The aim

  13. Morphing between expressions dissociates continuous from categorical representations of facial expression in the human brain

    Science.gov (United States)

    Harris, Richard J.; Young, Andrew W.; Andrews, Timothy J.

    2012-01-01

    Whether the brain represents facial expressions as perceptual continua or as emotion categories remains controversial. Here, we measured the neural response to morphed images to directly address how facial expressions of emotion are represented in the brain. We found that face-selective regions in the posterior superior temporal sulcus and the amygdala responded selectively to changes in facial expression, independent of changes in identity. We then asked whether the responses in these regions reflected categorical or continuous neural representations of facial expression. Participants viewed images from continua generated by morphing between faces posing different expressions such that the expression could be the same, could involve a physical change but convey the same emotion, or could differ by the same physical amount but be perceived as two different emotions. We found that the posterior superior temporal sulcus was equally sensitive to all changes in facial expression, consistent with a continuous representation. In contrast, the amygdala was only sensitive to changes in expression that altered the perceived emotion, demonstrating a more categorical representation. These results offer a resolution to the controversy about how facial expression is processed in the brain by showing that both continuous and categorical representations underlie our ability to extract this important social cue. PMID:23213218

  14. When Age Matters: Differences in Facial Mimicry and Autonomic Responses to Peers' Emotions in Teenagers and Adults

    Science.gov (United States)

    Ardizzi, Martina; Sestito, Mariateresa; Martini, Francesca; Umiltà, Maria Alessandra; Ravera, Roberto; Gallese, Vittorio

    2014-01-01

    Age-group membership effects on the explicit recognition of emotional facial expressions have been widely demonstrated. In this study we investigated whether age-group membership could also affect implicit physiological responses, such as facial mimicry and autonomic regulation, to the observation of emotional facial expressions. To this aim, facial electromyography (EMG) and respiratory sinus arrhythmia (RSA) were recorded from teenager and adult participants during the observation of facial expressions performed by teenager and adult models. Results highlighted that teenagers exhibited greater facial EMG responses to peers' facial expressions, whereas adults showed higher RSA responses to adult facial expressions. The different physiological modalities through which the young and adults respond to peers' emotional expressions are likely to reflect two different ways of engaging in social interactions with people of their own age. The findings confirm that age is an important and powerful social feature that modulates interpersonal interactions by influencing low-level physiological responses. PMID:25337916

  15. Combining Facial Dynamics With Appearance for Age Estimation

    NARCIS (Netherlands)

    Dibeklioğlu, H.; Alnajar, F.; Salah, A.A.; Gevers, T.

    2015-01-01

    Estimating the age of a human from the captured images of his/her face is a challenging problem. In general, the existing approaches to this problem use appearance features only. In this paper, we show that in addition to appearance information, facial dynamics can be leveraged in age estimation. We

  16. Facial exercises for facial rejuvenation: a control group study.

    Science.gov (United States)

    De Vos, Marie-Camille; Van den Brande, Helen; Boone, Barbara; Van Borsel, John

    2013-01-01

    Facial exercises are a noninvasive alternative to medical approaches to facial rejuvenation, and logopedists could be involved in providing them. Little research has been conducted, however, on the effectiveness of exercises for facial rejuvenation. This study assessed the effectiveness of 4 exercises that purportedly reduce wrinkles and sagging of the facial skin. A control group study was conducted with 18 participants, 9 of whom (the experimental group) underwent daily training for 7 weeks. Pictures of 5 facial areas (forehead, nasolabial folds, area above the upper lip, jawline and area under the chin), taken before and after the 7 weeks, were evaluated by a panel of laypersons; in addition, the participants in the experimental group evaluated their own pictures. Evaluation included the pairwise presentation of the before and after pictures and the scoring of the same pictures by means of visual analogue scales in a random presentation. Only one significant difference was found between the control and experimental groups: the post-therapy picture of the upper lip area was more frequently chosen by the panel as the younger-looking one. It cannot be concluded that facial exercises are effective; more systematic research is needed. © 2013 S. Karger AG, Basel.

  17. Avaliação comparativa entre agradabilidade facial e análise subjetiva do Padrão Facial Comparative evaluation among facial attractiveness and subjective analysis of Facial Pattern

    Directory of Open Access Journals (Sweden)

    Olívia Morihisa

    2009-12-01

    Full Text Available AIM: To study two subjective facial analyses commonly used in orthodontic diagnosis, the evaluation of facial attractiveness and the definition of Facial Pattern, and to verify the association between them. METHODS: Two hundred and eight standardized facial photographs (104 in lateral view and 104 in frontal view) of 104 randomly chosen individuals were used in the present study. They were classified as "pleasant", "acceptable" or "not pleasant" by two distinct groups of evaluators, "Orthodontists" and "Lay people". The individuals were also classified according to their Facial Pattern by three calibrated examiners, using only the lateral-view images. RESULTS AND CONCLUSION: After statistical analysis, a strongly positive association was found between facial attractiveness and Facial Pattern in lateral view, but not in frontal view, where individuals tended to be classified favorably even in Facial Pattern II.

  18. Relation between facial affect recognition and configural face processing in antipsychotic-free schizophrenia.

    Science.gov (United States)

    Fakra, Eric; Jouve, Elisabeth; Guillaume, Fabrice; Azorin, Jean-Michel; Blin, Olivier

    2015-03-01

    Deficit in facial affect recognition is a well-documented impairment in schizophrenia, closely connected to social outcome. This deficit could be related to psychopathology, but also to a broader dysfunction in processing facial information. In addition, patients with schizophrenia inadequately use configural information-a type of processing that relies on spatial relationships between facial features. To date, no study has specifically examined the link between symptoms and misuse of configural information in the deficit in facial affect recognition. Unmedicated schizophrenia patients (n = 30) and matched healthy controls (n = 30) performed a facial affect recognition task and a face inversion task, which tests aptitude to rely on configural information. In patients, regressions were carried out between facial affect recognition, symptom dimensions and inversion effect. Patients, compared with controls, showed a deficit in facial affect recognition and a lower inversion effect. Negative symptoms and lower inversion effect could account for 41.2% of the variance in facial affect recognition. This study confirms the presence of a deficit in facial affect recognition, and also of dysfunctional manipulation in configural information in antipsychotic-free patients. Negative symptoms and poor processing of configural information explained a substantial part of the deficient recognition of facial affect. We speculate that this deficit may be caused by several factors, among which independently stand psychopathology and failure in correctly manipulating configural information. PsycINFO Database Record (c) 2015 APA, all rights reserved.

  19. Facial and extrafacial eosinophilic pustular folliculitis: a clinical and histopathological comparative study.

    Science.gov (United States)

    Lee, W J; Won, K H; Won, C H; Chang, S E; Choi, J H; Moon, K C; Lee, M W

    2014-05-01

    Although more than 300 cases of eosinophilic pustular folliculitis (EPF) have been reported to date, differences in clinicohistopathological findings among affected sites have not yet been evaluated. To evaluate differences in the clinical and histopathological features of facial and extrafacial EPF. Forty-six patients diagnosed with EPF were classified into those with facial and extrafacial disease according to the affected site. Clinical and histopathological characteristics were retrospectively compared, using all data available in the patient medical records. There were no significant between-group differences in subject ages at presentation, but a male predominance was observed in the extrafacial group. In addition, immunosuppression-associated type EPF was more common in the extrafacial group. Eruptions of plaques with an annular appearance were more common in the facial group. Histologically, perifollicular infiltration of eosinophils occurred more frequently in the facial group, whereas perivascular patterns occurred more frequently in the extrafacial group. Follicular mucinosis and exocytosis of inflammatory cells in the hair follicles were strongly associated with facial EPF. The clinical and histopathological characteristics of patients with facial and extrafacial EPF differ, suggesting the involvement of different pathogenic processes in the development of EPF at different sites. © 2013 British Association of Dermatologists.

  20. Borrowed beauty? Understanding identity in Asian facial cosmetic surgery

    NARCIS (Netherlands)

    Aquino, Y.S.; Steinkamp, N.L.

    2016-01-01

    This review aims to identify (1) sources of knowledge and (2) important themes of the ethical debate related to surgical alteration of facial features in East Asians. This article integrates narrative and systematic review methods. In March 2014, we searched databases including PubMed, Philosopher's

  1. Facial trauma.

    Science.gov (United States)

    Peeters, N; Lemkens, P; Leach, R; Gemels, B; Schepers, S; Lemmens, W

    Patients with facial trauma must be assessed in a systematic way so as to avoid missing any injury. Severe and disfiguring facial injuries can be distracting. However, clinicians must first focus on the basics of trauma care, following the Advanced Trauma Life Support (ATLS) system of care. Maxillofacial trauma occurs in a significant number of severely injured patients. Life- and sight-threatening injuries must be excluded during the primary and secondary surveys. Special attention must be paid to sight-threatening injuries in stabilized patients through early referral to an appropriate specialist or the early initiation of emergency care treatment. The gold standard for the radiographic evaluation of facial injuries is computed tomography (CT) imaging. Nasal fractures are the most frequent isolated facial fractures. Isolated nasal fractures are principally diagnosed through history and clinical examination. Closed reduction is the most frequently performed treatment for isolated nasal fractures, with a fractured nasal septum as a predictor of failure. Ear, nose and throat surgeons, maxillofacial surgeons and ophthalmologists must all develop an adequate treatment plan for patients with complex maxillofacial trauma.

  2. Internal versus external features in triggering the brain waveforms for conjunction and feature faces in recognition.

    Science.gov (United States)

    Nie, Aiqing; Jiang, Jingguo; Fu, Qiao

    2014-08-20

    Previous research has found that conjunction faces (whose internal features, e.g. eyes, nose, and mouth, and external features, e.g. hairstyle and ears, are from separate studied faces) and feature faces (only some of whose features were studied) can produce higher false alarms than both old and new faces (i.e. those that are exactly the same as the studied faces and those that have not been previously presented) in recognition. The event-related potentials (ERPs) related to conjunction and feature faces at recognition, however, have not yet been described; in addition, the contributions of different facial features to these ERPs have not been differentiated. To address these issues, the present study compared the ERPs elicited by old faces, conjunction faces (whose internal and external features were from two studied faces), old internal feature faces (whose internal features were studied), and old external feature faces (whose external features were studied) with those of new faces separately. The results showed that old faces elicited not only an early familiarity-related FN400, but also a more anteriorly distributed late old/new effect reflecting recollection. Conjunction faces evoked late brain waveforms similar to those of old internal feature faces, but not to those of old external feature faces. These results suggest that, at recognition, old faces hold higher familiarity than compound faces in the ERP profiles, and that internal facial features are more crucial than external ones in triggering the brain waveforms that reflect familiarity.

  3. An analysis of facial nerve function in irradiated and unirradiated facial nerve grafts

    International Nuclear Information System (INIS)

    Brown, Paul D.; Eshleman, Jeffrey S.; Foote, Robert L.; Strome, Scott E.

    2000-01-01

    Purpose: The effect of high-dose radiation therapy on facial nerve grafts is controversial. Some authors believe radiotherapy is so detrimental to the outcome of facial nerve graft function that dynamic or static slings should be performed instead of facial nerve grafts in all patients who are to receive postoperative radiation therapy. Unfortunately, the facial function achieved with dynamic and static slings is almost always inferior to that after facial nerve grafts. In this retrospective study, we compared facial nerve function in irradiated and unirradiated nerve grafts. Methods and Materials: The medical records of 818 patients with neoplasms involving the parotid gland who received treatment between 1974 and 1997 were reviewed, of whom 66 underwent facial nerve grafting. Fourteen patients who died or had a recurrence less than a year after their facial nerve graft were excluded. The median follow-up for the remaining 52 patients was 10.6 years. Cable nerve grafts were performed in 50 patients and direct anastomoses of the facial nerve in two. Facial nerve function was scored by means of the House-Brackmann (H-B) facial grading system. Twenty-eight of the 52 patients received postoperative radiotherapy. The median time from nerve grafting to start of radiotherapy was 5.1 weeks. The median and mean doses of radiation were 6000 and 6033 cGy, respectively, for the irradiated grafts. One patient received preoperative radiotherapy to a total dose of 5000 cGy in 25 fractions and underwent surgery 1 month after the completion of radiotherapy. This patient was placed, by convention, in the irradiated facial nerve graft cohort. Results: Potential prognostic factors for facial nerve function such as age, gender, extent of surgery at the time of nerve grafting, preoperative facial nerve palsy, duration of preoperative palsy if present, or number of previous operations in the parotid bed were relatively well balanced between irradiated and unirradiated patients. 
However

  4. Decoding facial expressions based on face-selective and motion-sensitive areas.

    Science.gov (United States)

    Liang, Yin; Liu, Baolin; Xu, Junhai; Zhang, Gaoyan; Li, Xianglin; Wang, Peiyuan; Wang, Bin

    2017-06-01

    Humans can easily recognize others' facial expressions. Among the brain substrates that enable this ability, considerable attention has been paid to face-selective areas; in contrast, whether motion-sensitive areas, which clearly exhibit sensitivity to facial movements, are involved in facial expression recognition remained unclear. The present functional magnetic resonance imaging (fMRI) study used multi-voxel pattern analysis (MVPA) to explore facial expression decoding in both face-selective and motion-sensitive areas. In a block design experiment, participants viewed facial expressions of six basic emotions (anger, disgust, fear, joy, sadness, and surprise) in images, videos, and eyes-obscured videos. Due to the use of multiple stimulus types, the impacts of facial motion and eye-related information on facial expression decoding were also examined. It was found that motion-sensitive areas showed significant responses to emotional expressions and that dynamic expressions could be successfully decoded in both face-selective and motion-sensitive areas. Compared with static stimuli, dynamic expressions elicited consistently higher neural responses and decoding performance in all regions. A significant decrease in both activation and decoding accuracy due to the absence of eye-related information was also observed. Overall, the findings showed that emotional expressions are represented in motion-sensitive areas in addition to conventional face-selective areas, suggesting that motion-sensitive regions may also effectively contribute to facial expression recognition. The results also suggested that facial motion and eye-related information played important roles by carrying considerable expression information that could facilitate facial expression recognition. Hum Brain Mapp 38:3113-3125, 2017. © 2017 Wiley Periodicals, Inc.
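
    The MVPA decoding logic described above can be illustrated with a minimal sketch: a leave-one-trial-out classifier applied to synthetic "voxel patterns". The data dimensions, signal strength, and the nearest-centroid decoder are illustrative stand-ins, not the study's actual fMRI pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for single-trial fMRI patterns: 120 trials x 200
# voxels, 6 expression classes, each with a weak class-specific signal.
n_trials, n_voxels, n_classes = 120, 200, 6
y = np.repeat(np.arange(n_classes), n_trials // n_classes)
signal = rng.normal(size=(n_classes, n_voxels))
X = rng.normal(size=(n_trials, n_voxels)) + 0.4 * signal[y]

def decode_loocv(X, y):
    """Leave-one-trial-out nearest-centroid decoding accuracy."""
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        centroids = np.stack([X[mask & (y == c)].mean(axis=0)
                              for c in np.unique(y)])
        pred = np.argmin(np.linalg.norm(centroids - X[i], axis=1))
        correct += (pred == y[i])
    return correct / len(y)

acc = decode_loocv(X, y)
print(f"decoding accuracy: {acc:.2f} (chance = {1 / n_classes:.2f})")
```

    Above-chance cross-validated accuracy is the criterion for saying a region "carries" expression information; in the study this comparison is run separately for face-selective and motion-sensitive regions.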

  5. Behavioral dissociation between emotional and non-emotional facial expressions in congenital prosopagnosia.

    Science.gov (United States)

    Daini, Roberta; Comparetti, Chiara M; Ricciardelli, Paola

    2014-01-01

    Neuropsychological and neuroimaging studies have shown that facial recognition and emotional expressions are dissociable. However, it is unknown if a single system supports the processing of emotional and non-emotional facial expressions. We aimed to understand if individuals with impairment in face recognition from birth (congenital prosopagnosia, CP) can use non-emotional facial expressions to recognize a face as an already seen one, and thus, process this facial dimension independently from features (which are impaired in CP), and basic emotional expressions. To this end, we carried out a behavioral study in which we compared the performance of 6 CP individuals to that of typically developing individuals, using upright and inverted faces. Four avatar faces with a neutral expression were presented in the initial phase. The target faces presented in the recognition phase, in which a recognition task was requested (2AFC paradigm), could be identical (neutral) to those of the initial phase or present biologically plausible changes to features, non-emotional expressions, or emotional expressions. After this task, a second task was performed, in which the participants had to detect whether or not the recognized face exactly matched the study face or showed any difference. The results confirmed the CPs' impairment in the configural processing of the invariant aspects of the face, but also showed a spared configural processing of non-emotional facial expression (task 1). Interestingly, and unlike the non-emotional expressions, the configural processing of emotional expressions was compromised in CPs and did not improve their change detection ability (task 2). These new results have theoretical implications for face perception models since they suggest that, at least in CPs, non-emotional expressions are processed configurally, can be dissociated from other facial dimensions, and may serve as a compensatory strategy to achieve face recognition.

  6. Facial nerve paralysis associated with temporal bone masses.

    Science.gov (United States)

    Nishijima, Hironobu; Kondo, Kenji; Kagoya, Ryoji; Iwamura, Hitoshi; Yasuhara, Kazuo; Yamasoba, Tatsuya

    2017-10-01

    To investigate the clinical and electrophysiological features of facial nerve paralysis (FNP) due to benign temporal bone masses (TBMs) and elucidate its differences as compared with Bell's palsy. FNP was assessed by the House-Brackmann (HB) grading system and by electroneurography (ENoG), and findings were compared retrospectively. We reviewed 914 patient records and identified 31 patients with FNP due to benign TBMs. Moderate FNP (HB Grades II-IV) was dominant for facial nerve schwannoma (FNS) (n=15), whereas severe FNP (Grades V and VI) was dominant for cholesteatomas (n=8) and hemangiomas (n=3). The average ENoG value was 19.8% for FNS, 15.6% for cholesteatoma, and 0% for hemangioma. Analysis of the correlation between HB grade and ENoG value for FNP due to TBMs and Bell's palsy revealed that given the same ENoG value, the corresponding HB grade was better for FNS, followed by cholesteatoma, and worst in Bell's palsy. Facial nerve damage caused by benign TBMs could depend on the underlying pathology. Facial movement and ENoG values did not correlate when comparing TBMs and Bell's palsy. When the HB grade is found to be unexpectedly better than the ENoG value, TBMs should be included in the differential diagnosis. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Facial skin follicular hyperkeratosis in patients with basal cell carcinoma

    Directory of Open Access Journals (Sweden)

    M. V. Zhuchkov

    2016-01-01

    Full Text Available This article provides a clinical observation of a paraneoplastic syndrome in a patient with basal cell carcinoma of the skin. The authors present the clinical features of paraneoplastic retentional follicular hyperkeratosis of the facial area, described here for the first time.

  8. Polyacrylamide gel for facial wasting rehabilitation: how many milliliters per session?

    Science.gov (United States)

    Rauso, R; Gherardini, G; Parlato, V; Amore, R; Tartaro, G

    2012-02-01

    Facial lipoatrophy is most distressing for HIV patients in pharmacologic treatment. Nonabsorbable fillers are widely used to restore facial features in these patients. We evaluated the safety and aesthetic outcomes of two samples of HIV+ patients affected by facial wasting who received different filling protocols of the nonabsorbable filler Aquamid® to restore facial wasting. Thirty-one HIV+ patients affected by facial wasting received injections of the nonabsorbable filler Aquamid for facial wasting rehabilitation. Patients were randomly divided into two groups: A and B. In group A, the facial defect was corrected by injecting up to 8 ml of product in the first session; patients were retreated after every 8th week with touch-up procedures until full correction was observed. In group B, facial defects were corrected by injecting 2 ml of product per session; patients were retreated after every 8th week until full correction was observed. Patients of group A noted a great improvement after the first filling procedure. Patients in group B noted improvement of their face after four filling procedures on average. Local infection, foreign-body reaction, and migration of the product were not observed in either group during follow-up. The rehabilitation obtained with a megafilling session and further touch-up procedures and that with a gradual build-up of the localized soft-tissue loss seem not to have differences in terms of safety for the patients. However, with a megafilling session satisfaction is achieved earlier and it is possible to reduce hospital costs in terms of gauze, gloves, and other items.

  9. The masculinity paradox: facial masculinity and beardedness interact to determine women's ratings of men's facial attractiveness.

    Science.gov (United States)

    Dixson, B J W; Sulikowski, D; Gouda-Vossos, A; Rantala, M J; Brooks, R C

    2016-11-01

    In many species, male secondary sexual traits have evolved via female choice as they confer indirect (i.e. genetic) benefits or direct benefits such as enhanced fertility or survival. In humans, the role of men's characteristically masculine androgen-dependent facial traits in determining men's attractiveness has presented an enduring paradox in studies of human mate preferences. Male-typical facial features such as a pronounced brow ridge and a more robust jawline may signal underlying health, whereas beards may signal men's age and masculine social dominance. However, masculine faces are judged as more attractive for short-term relationships over less masculine faces, whereas beards are judged as more attractive than clean-shaven faces for long-term relationships. Why such divergent effects occur between preferences for two sexually dimorphic traits remains unresolved. In this study, we used computer graphic manipulation to morph male faces varying in facial hair from clean-shaven, light stubble, heavy stubble and full beards to appear more (+25% and +50%) or less (-25% and -50%) masculine. Women (N = 8520) were assigned to treatments wherein they rated these stimuli for physical attractiveness in general, for a short-term liaison or a long-term relationship. Results showed a significant interaction between beardedness and masculinity on attractiveness ratings. Masculinized and, to an even greater extent, feminized faces were less attractive than unmanipulated faces when all were clean-shaven, and stubble and beards dampened the polarizing effects of extreme masculinity and femininity. Relationship context also had effects on ratings, with facial hair enhancing long-term, and not short-term, attractiveness. Effects of facial masculinization appear to have been due to small differences in the relative attractiveness of each masculinity level under the three treatment conditions and not to any change in the order of their attractiveness. 
Our findings suggest that

  10. 3D Facial Landmarking under Expression, Pose, and Occlusion Variations

    NARCIS (Netherlands)

    H. Dibeklioğlu; A.A. Salah (Albert Ali); L. Akarun

    2008-01-01

    Automatic localization of 3D facial features is important for face recognition, tracking, modeling and expression analysis. Methods developed for 2D images were shown to have problems working across databases acquired with different illumination conditions. Expression variations, pose

  11. Facial recognition and laser surface scan: a pilot study

    DEFF Research Database (Denmark)

    Lynnerup, Niels; Clausen, Maja-Lisa; Kristoffersen, Agnethe May

    2009-01-01

    Surface scanning of the face of a suspect is presented as a way to better match the facial features with those of a perpetrator from CCTV footage. We performed a simple pilot study where we obtained facial surface scans of volunteers and then in blind trials tried to match these scans with 2D...... photographs of the faces of the volunteers. Fifteen male volunteers were surface scanned using a Polhemus FastSCAN Cobra Handheld Laser Scanner. Three photographs were taken of each volunteer's face in full frontal, profile and from above at an angle of 45 degrees and also 45 degrees laterally. Via special...

  12. Slowing down Presentation of Facial Movements and Vocal Sounds Enhances Facial Expression Recognition and Induces Facial-Vocal Imitation in Children with Autism

    Science.gov (United States)

    Tardif, Carole; Laine, France; Rodriguez, Melissa; Gepner, Bruno

    2007-01-01

    This study examined the effects of slowing down presentation of facial expressions and their corresponding vocal sounds on facial expression recognition and facial and/or vocal imitation in children with autism. Twelve autistic children and twenty-four normal control children were presented with emotional and non-emotional facial expressions on…

  13. Cross-Cultural Agreement in Facial Attractiveness Preferences: The Role of Ethnicity and Gender

    Science.gov (United States)

    Coetzee, Vinet; Greeff, Jaco M.; Stephen, Ian D.; Perrett, David I.

    2014-01-01

    Previous work showed high agreement in facial attractiveness preferences within and across cultures. The aims of the current study were twofold. First, we tested cross-cultural agreement in the attractiveness judgements of White Scottish and Black South African students for own- and other-ethnicity faces. Results showed significant agreement between White Scottish and Black South African observers' attractiveness judgements, providing further evidence of strong cross-cultural agreement in facial attractiveness preferences. Second, we tested whether cross-cultural agreement is influenced by the ethnicity and/or the gender of the target group. White Scottish and Black South African observers showed significantly higher agreement for Scottish than for African faces, presumably because both groups are familiar with White European facial features, but the Scottish group are less familiar with Black African facial features. Further work investigating this discordance in cross-cultural attractiveness preferences for African faces shows that Black South African observers rely more heavily on colour cues when judging African female faces for attractiveness, while White Scottish observers rely more heavily on shape cues. Results also show higher cross-cultural agreement for female compared to male faces, albeit not significantly higher. The findings shed new light on the factors that influence cross-cultural agreement in attractiveness preferences. PMID:24988325

  14. Cross-cultural agreement in facial attractiveness preferences: the role of ethnicity and gender.

    Directory of Open Access Journals (Sweden)

    Vinet Coetzee

    Full Text Available Previous work showed high agreement in facial attractiveness preferences within and across cultures. The aims of the current study were twofold. First, we tested cross-cultural agreement in the attractiveness judgements of White Scottish and Black South African students for own- and other-ethnicity faces. Results showed significant agreement between White Scottish and Black South African observers' attractiveness judgements, providing further evidence of strong cross-cultural agreement in facial attractiveness preferences. Second, we tested whether cross-cultural agreement is influenced by the ethnicity and/or the gender of the target group. White Scottish and Black South African observers showed significantly higher agreement for Scottish than for African faces, presumably because both groups are familiar with White European facial features, but the Scottish group are less familiar with Black African facial features. Further work investigating this discordance in cross-cultural attractiveness preferences for African faces shows that Black South African observers rely more heavily on colour cues when judging African female faces for attractiveness, while White Scottish observers rely more heavily on shape cues. Results also show higher cross-cultural agreement for female compared to male faces, albeit not significantly higher. The findings shed new light on the factors that influence cross-cultural agreement in attractiveness preferences.

  15. Multistage feature extraction for accurate face alignment

    NARCIS (Netherlands)

    Zuo, F.; With, de P.H.N.

    2004-01-01

    We propose a novel multistage facial feature extraction approach using a combination of 'global' and 'local' techniques. At the first stage, we use template matching, based on an Edge-Orientation-Map for fast feature position estimation. Using this result, a statistical framework applying the Active
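
    The first-stage idea, fast feature localization by template matching on an edge-orientation map, might be sketched as follows. The gradient operator, the magnitude-weighted orientation-agreement score, and the toy images are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def edge_orientation_map(img):
    """Gradient magnitude and orientation of a grayscale image."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy), np.arctan2(gy, gx)

def match_template_eom(img, template):
    """Slide the template's orientation map over the image; score each
    position by magnitude-weighted orientation agreement."""
    mag_i, ori_i = edge_orientation_map(img)
    mag_t, ori_t = edge_orientation_map(template)
    th, tw = template.shape
    H, W = img.shape
    best, best_pos = -np.inf, (0, 0)
    for r in range(H - th + 1):
        for c in range(W - tw + 1):
            # cos(2*dtheta) makes the score invariant to contrast polarity
            d = ori_i[r:r + th, c:c + tw] - ori_t
            score = np.sum(mag_i[r:r + th, c:c + tw] * mag_t * np.cos(2 * d))
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

# Toy check: locate a bright square "feature" embedded in a larger image.
img = np.zeros((40, 40)); img[20:28, 12:20] = 1.0
tmpl = np.zeros((12, 12)); tmpl[2:10, 2:10] = 1.0
print(match_template_eom(img, tmpl))
```

    In a real pipeline this coarse estimate would seed the second, statistical stage (the model-fitting refinement the abstract refers to), which only needs to search a small neighbourhood around each estimated feature position.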

  16. Weighted Feature Gaussian Kernel SVM for Emotion Recognition.

    Science.gov (United States)

    Wei, Wei; Jia, Qingxuan

    2016-01-01

    Emotion recognition with weighted features based on facial expression is a challenging research topic that has attracted great attention in the past few years. This paper presents a novel method that uses subregion recognition rates to weight the kernel function. First, we divide the facial expression image into uniform subregions and calculate the corresponding recognition rate and weight for each. Then, we obtain a weighted-feature Gaussian kernel function and construct a classifier based on a Support Vector Machine (SVM). Finally, the experimental results suggest that the approach based on the weighted-feature Gaussian kernel function achieves a good correct-classification rate in emotion recognition. Experiments on the extended Cohn-Kanade (CK+) dataset show that our method achieves encouraging recognition results compared to state-of-the-art methods.
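
    A rough sketch of the weighted-feature Gaussian kernel idea: each subregion's squared distance is scaled by a weight derived from its (here assumed known) recognition rate before the exponential. The toy data, the per-subregion rates, the gamma value, and the use of scikit-learn's precomputed-kernel SVC are all illustrative assumptions; the paper's exact weighting scheme may differ.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Toy stand-in: 60 "expression images", each split into 4 subregions of
# 16 features. Only subregion 0 is informative, so its (assumed known)
# recognition rate, and hence its kernel weight, is higher.
n, n_sub, d = 60, 4, 16
y = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, n_sub, d))
X[:, 0, :] += 1.5 * y[:, None]                 # discriminative subregion

sub_rate = np.array([0.9, 0.55, 0.5, 0.52])    # hypothetical subregion rates
w = sub_rate / sub_rate.sum()                  # normalized weights

def weighted_gaussian_kernel(A, B, w, gamma=0.05):
    # K(a, b) = exp(-gamma * sum_r w_r * ||a_r - b_r||^2)
    d2 = ((A[:, None, :, :] - B[None, :, :, :]) ** 2).sum(axis=3)
    return np.exp(-gamma * (d2 * w).sum(axis=2))

K = weighted_gaussian_kernel(X, X, w)
clf = SVC(kernel="precomputed").fit(K, y)
train_acc = clf.score(K, y)
print(train_acc)
```

    Because the weights sum to one, the weighted kernel reduces to the ordinary Gaussian (RBF) kernel when all subregions perform equally; unequal rates simply tilt the metric toward the more reliable regions.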

  17. Facial identification in very low-resolution images simulating prosthetic vision.

    Science.gov (United States)

    Chang, M H; Kim, H S; Shin, J H; Park, K S

    2012-08-01

    Familiar facial identification is important to blind or visually impaired patients and can be achieved using a retinal prosthesis. Nevertheless, there are limitations in delivering the facial images with a resolution sufficient to distinguish facial features, such as eyes and nose, through multichannel electrode arrays used in current visual prostheses. This study verifies the feasibility of familiar facial identification under low-resolution prosthetic vision and proposes an edge-enhancement method to deliver more visual information that is of higher quality. We first generated a contrast-enhanced image and an edge image by applying the Sobel edge detector and blocked each of them by averaging. Then, we subtracted the blocked edge image from the blocked contrast-enhanced image and produced a pixelized image imitating an array of phosphenes. Before subtraction, every gray value of the edge images was weighted as 50% (mode 2), 75% (mode 3) and 100% (mode 4). In mode 1, the facial image was blocked and pixelized with no further processing. The most successful identification was achieved with mode 3 at every resolution in terms of identification index, which covers both accuracy and correct response time. We also found that the subjects recognized a distinctive face especially more accurately and faster than the other given facial images even under low-resolution prosthetic vision. Every subject could identify familiar faces even in very low-resolution images. And the proposed edge-enhancement method seemed to contribute to intermediate-stage visual prostheses.
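
    The described pipeline, a block-averaged contrast image minus a weighted block-averaged Sobel edge image, can be approximated as below. The block size, normalization steps, and the toy input are illustrative assumptions; edge_weight=0.75 corresponds to the study's best-performing mode 3.

```python
import numpy as np

def block_average(img, block):
    """Average over non-overlapping blocks, simulating a phosphene grid."""
    h, w = img.shape
    img = img[:h - h % block, :w - w % block]
    return img.reshape(h // block, block, -1, block).mean(axis=(1, 3))

def sobel_magnitude(img):
    """Sobel gradient magnitude via explicit 3x3 correlation."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    pad = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    gx = sum(kx[i, j] * pad[i:i + h, j:j + w]
             for i in range(3) for j in range(3))
    gy = sum(kx.T[i, j] * pad[i:i + h, j:j + w]
             for i in range(3) for j in range(3))
    return np.hypot(gx, gy)

def pixelize(face, block=8, edge_weight=0.75):
    """Blocked contrast image minus weighted blocked edge image (mode 3)."""
    contrast = (face - face.min()) / (np.ptp(face) + 1e-9)  # contrast stretch
    edges = sobel_magnitude(contrast)
    edges = edges / (edges.max() + 1e-9)
    return block_average(contrast, block) - edge_weight * block_average(edges, block)

face = np.tile(np.linspace(0, 1, 64), (64, 1))  # toy 64x64 gradient "image"
out = pixelize(face)
print(out.shape)
```

    Each cell of the resulting low-resolution array stands in for one phosphene; subtracting the edge term darkens blocks that straddle facial contours, which is what makes features such as eyes and nose outlines survive the drastic downsampling.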

  18. Neurophysiology of spontaneous facial expressions: I. Motor control of the upper and lower face is behaviorally independent in adults.

    Science.gov (United States)

    Ross, Elliott D; Gupta, Smita S; Adnan, Asif M; Holden, Thomas L; Havlicek, Joseph; Radhakrishnan, Sridhar

    2016-03-01

    Facial expressions are described traditionally as monolithic entities. However, humans have the capacity to produce facial blends, in which the upper and lower face simultaneously display different emotional expressions. This, in turn, has led to the Component Theory of facial expressions. Recent neuroanatomical studies in monkeys have demonstrated that there are separate cortical motor areas for controlling the upper and lower face that, presumably, also occur in humans. The lower face is represented on the posterior ventrolateral surface of the frontal lobes in the primary motor and premotor cortices and the upper face is represented on the medial surface of the posterior frontal lobes in the supplementary motor and anterior cingulate cortices. Our laboratory has been engaged in a series of studies exploring the perception and production of facial blends. Using high-speed videography, we began measuring the temporal aspects of facial expressions to develop a more complete understanding of the neurophysiology underlying facial expressions and facial blends. The goal of the research presented here was to determine if spontaneous facial expressions in adults are predominantly monolithic or exhibit independent motor control of the upper and lower face. We found that spontaneous facial expressions are very complex and that the motor control of the upper and lower face is overwhelmingly independent, thus robustly supporting the Component Theory of facial expressions. Seemingly monolithic expressions, be they full facial or facial blends, are most likely the result of a timing coincidence rather than a synchronous coordination between the ventrolateral and medial cortical motor areas responsible for controlling the lower and upper face, respectively.
In addition, we found evidence that the right and left face may also exhibit independent motor control, thus supporting the concept that spontaneous facial expressions are organized predominantly across the horizontal facial

  19. Traumatic facial diplegia: a case report

    Directory of Open Access Journals (Sweden)

    J. Fortes-Rego

    1975-12-01

    Full Text Available A case of bilateral, incomplete traumatic facial paralysis associated with partial hearing loss on the left side following head injury is reported. X-rays showed fractures of the occipital and left temporal bones. A review of traumatic facial paralysis is made, with some considerations attempting to relate these manifestations to fractures of the temporal bone.

  20. The effects of acute alcohol intoxication on the cognitive mechanisms underlying false facial recognition.

    Science.gov (United States)

    Colloff, Melissa F; Flowe, Heather D

    2016-06-01

    False face recognition rates are sometimes higher when faces are learned while under the influence of alcohol. Alcohol myopia theory (AMT) proposes that acute alcohol intoxication during face learning causes people to attend to only the most salient features of a face, impairing the encoding of less salient facial features. Yet, there is currently no direct evidence to support this claim. Our objective was to test whether acute alcohol intoxication impairs face learning by causing subjects to attend to a salient (i.e., distinctive) facial feature over other facial features, as per AMT. We employed a balanced placebo design (N = 100). Subjects in the alcohol group were dosed to achieve a blood alcohol concentration (BAC) of 0.06 %, whereas the no alcohol group consumed tonic water. Alcohol expectancy was controlled. Subjects studied faces with or without a distinctive feature (e.g., scar, piercing). An old-new recognition test followed. Some of the test faces were "old" (i.e., previously studied), and some were "new" (i.e., not previously studied). We varied whether the new test faces had a previously studied distinctive feature versus other familiar characteristics. Intoxicated and sober recognition accuracy was comparable, but subjects in the alcohol group made more positive identifications overall compared to the no alcohol group. The results are not in keeping with AMT. Rather, a more general cognitive mechanism appears to underlie false face recognition in intoxicated subjects. Specifically, acute alcohol intoxication during face learning results in more liberal choosing, perhaps because of an increased reliance on familiarity.
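
    The "more liberal choosing" interpretation is naturally expressed in signal-detection terms: comparable sensitivity (d') with a lower, more liberal criterion (c). A minimal illustration follows, using hypothetical hit and false-alarm rates, not the study's data.

```python
from statistics import NormalDist

def dprime_criterion(hit_rate, fa_rate):
    """Signal-detection sensitivity (d') and criterion (c) from hit and
    false-alarm rates, via the inverse normal CDF (z-transform)."""
    z = NormalDist().inv_cdf
    d = z(hit_rate) - z(fa_rate)
    c = -0.5 * (z(hit_rate) + z(fa_rate))
    return d, c

# Hypothetical pattern matching the abstract: the alcohol group says
# "old" more often (more hits AND more false alarms) while overall
# sensitivity stays roughly the same.
sober = dprime_criterion(0.70, 0.20)
alcohol = dprime_criterion(0.80, 0.32)
print(sober, alcohol)
```

    Under these made-up rates, d' is nearly identical across groups while c drops for the alcohol group, i.e. a shift in response bias rather than in discrimination, which is the pattern the authors report.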

  1. Mapping the impairment in decoding static facial expressions of emotion in prosopagnosia.

    Science.gov (United States)

    Fiset, Daniel; Blais, Caroline; Royer, Jessica; Richoz, Anne-Raphaëlle; Dugas, Gabrielle; Caldara, Roberto

    2017-08-01

    Acquired prosopagnosia is characterized by a deficit in face recognition due to diverse brain lesions, but interestingly most prosopagnosic patients suffering from posterior lesions use the mouth instead of the eyes for face identification. Whether this bias is present for the recognition of facial expressions of emotion has not yet been addressed. We tested PS, a pure case of acquired prosopagnosia with bilateral occipitotemporal lesions anatomically sparing the regions dedicated for facial expression recognition. PS used mostly the mouth to recognize facial expressions even when the eye area was the most diagnostic. Moreover, PS directed most of her fixations towards the mouth. Her impairment was still largely present when she was instructed to look at the eyes, or when she was forced to look at them. Control participants showed a performance comparable to PS when only the lower part of the face was available. These observations suggest that the deficits observed in PS with static images are not solely attentional, but are rooted at the level of facial information use. This study corroborates neuroimaging findings suggesting that the Occipital Face Area might play a critical role in extracting facial features that are integrated for both face identification and facial expression recognition in static images. © The Author (2017). Published by Oxford University Press.

  2. [Peripheral facial nerve lesion induced long-term dendritic retraction in pyramidal cortico-facial neurons].

    Science.gov (United States)

    Urrego, Diana; Múnera, Alejandro; Troncoso, Julieta

    2011-01-01

    Little evidence is available concerning the morphological modifications of motor cortex neurons associated with peripheral nerve injuries, and the consequences of those injuries on post lesion functional recovery. Dendritic branching of cortico-facial neurons was characterized with respect to the effects of irreversible facial nerve injury. Twenty-four adult male rats were distributed into four groups: sham (no lesion surgery), and dendritic assessment at 1, 3 and 5 weeks post surgery. Eighteen lesion animals underwent surgical transection of the mandibular and buccal branches of the facial nerve. Dendritic branching was examined by contralateral primary motor cortex slices stained with the Golgi-Cox technique. Layer V pyramidal (cortico-facial) neurons from sham and injured animals were reconstructed and their dendritic branching was compared using Sholl analysis. Animals with facial nerve lesions displayed persistent vibrissal paralysis throughout the five week observation period. Compared with control animal neurons, cortico-facial pyramidal neurons of surgically injured animals displayed shrinkage of their dendritic branches at statistically significant levels. This shrinkage persisted for at least five weeks after facial nerve injury. Irreversible facial motoneuron axonal damage induced persistent dendritic arborization shrinkage in contralateral cortico-facial neurons. This morphological reorganization may be the physiological basis of functional sequelae observed in peripheral facial palsy patients.

  3. Facial Pain Followed by Unilateral Facial Nerve Palsy: A Case Report with Literature Review

    OpenAIRE

    GV, Sowmya; BS, Manjunatha; Goel, Saurabh; Singh, Mohit Pal; Astekar, Madhusudan

    2014-01-01

    Peripheral facial nerve palsy is the commonest cranial nerve motor neuropathy. The causes range from cerebrovascular accident to iatrogenic damage, but there are few reports of facial nerve paralysis attributable to odontogenic infections. In majority of the cases, recovery of facial muscle function begins within first three weeks after onset. This article reports a unique case of 32-year-old male patient who developed facial pain followed by unilateral facial nerve paralysis due to odontogen...

  4. Steel syndrome: dislocated hips and radial heads, carpal coalition, scoliosis, short stature, and characteristic facial features.

    Science.gov (United States)

    Flynn, John M; Ramirez, Norman; Betz, Randal; Mulcahey, Mary Jane; Pino, Franz; Herrera-Soto, Jose A; Carlo, Simon; Cornier, Alberto S

    2010-01-01

    A syndrome of short stature, bilateral hip dislocations, radial head dislocations, carpal coalitions, scoliosis, and cavus feet in Puerto Rican children was reported by Steel et al in 1993. The syndrome was described as a unique entity with dismal results after conventional treatment of dislocated hips. The purpose of this study is to reevaluate this patient population with a longer follow-up and delineate the clinical and radiologic features, treatment outcomes, and genetic characteristics. This is a retrospective cohort study of 32 patients in whom we evaluated the clinical and imaging data and genetic characteristics. We compare the findings and quality of life in patients with this syndrome who had attempts at reduction of the hips versus those who did not. Congenital hip dislocations were present in 100% of the patients. There was no attempt at reduction in 39% (25/64) of the hips. In the remaining 61% (39/64), the hips were treated with a variety of modalities fraught with complications. Of those treated, 85% (33/39) remain dislocated; the rest of the hips continue subluxated with acetabular dysplasia and pain. The group of hips that were not treated reported fewer complaints and less limitation in daily activities compared with the hips that had attempts at reduction. Steel syndrome is a distinct clinical entity characterized by short stature, bilateral hip and radial head dislocation, carpal coalition, scoliosis, cavus feet, and characteristic facial features, with dismal results for attempts at reduction of the hips. Prognostic Study Level II.

  5. Facial infiltrative lipomatosis

    International Nuclear Information System (INIS)

    Haloi, A.K.; Ditchfield, M.; Pennington, A.; Philips, R.

    2006-01-01

    Although there are multiple case reports and small series concerning facial infiltrative lipomatosis, there is no composite radiological description of the condition. We therefore radiologically evaluated four patients with facial infiltrative lipomatosis using plain film, sonography, CT and MRI. Initial plain radiographs of the face were acquired in all patients. Three children had an initial sonographic examination to evaluate the condition, followed by MRI. One child had a CT and then MRI. One child had abnormalities on plain radiographs. Sonographically, the lesions were seen as ill-defined, heterogeneously hypoechoic areas with indistinct margins. On CT images, the lesions did not have homogeneous fat density but showed some relatively denser areas in their deeper parts. MRI provided better delineation of the exact extent of the process and characterization of facial infiltrative lipomatosis. Facial infiltrative lipomatosis should be considered in the differential diagnosis of vascular or lymphatic malformation when a child presents with unilateral facial swelling. MRI is the most useful single imaging modality to evaluate the condition, as it provides the best delineation of the exact extent of the process. (orig.)

  6. Neural mechanism for judging the appropriateness of facial affect.

    Science.gov (United States)

    Kim, Ji-Woong; Kim, Jae-Jin; Jeong, Bum Seok; Ki, Seon Wan; Im, Dong-Mi; Lee, Soo Jung; Lee, Hong Shick

    2005-12-01

    Questions regarding the appropriateness of facial expressions in particular situations arise ubiquitously in everyday social interactions. To determine the appropriateness of facial affect, first of all, we should represent our own or the other's emotional state as induced by the social situation. Then, based on these representations, we should infer the possible affective response of the other person. In this study, we identified the brain mechanism mediating special types of social evaluative judgments of facial affect in which the internal reference is related to theory of mind (ToM) processing. Many previous ToM studies have used non-emotional stimuli, but, because so much valuable social information is conveyed through nonverbal emotional channels, this investigation used emotionally salient visual materials to tap ToM. Fourteen right-handed healthy subjects volunteered for our study. We used functional magnetic resonance imaging to examine brain activation during the judgmental task for the appropriateness of facial affects as opposed to gender matching tasks. We identified activation of a brain network, which includes both medial frontal cortex, left temporal pole, left inferior frontal gyrus, and left thalamus during the judgmental task for appropriateness of facial affect compared to the gender matching task. The results of this study suggest that the brain system involved in ToM plays a key role in judging the appropriateness of facial affect in an emotionally laden situation. In addition, our result supports that common neural substrates are involved in performing diverse kinds of ToM tasks irrespective of perceptual modalities and the emotional salience of test materials.

  7. Three-dimensional facial analyses of Indian and Malaysian women.

    Science.gov (United States)

    Kusugal, Preethi; Ruttonji, Zarir; Gowda, Roopa; Rajpurohit, Ladusingh; Lad, Pritam; Ritu

    2015-01-01

    Facial measurements serve as a valuable tool in the treatment planning of maxillofacial rehabilitation, orthodontic treatment, and orthognathic surgeries. The esthetic guidelines of face are still based on neoclassical canons, which were used in the ancient art. These canons are considered to be highly subjective, and there is ample evidence in the literature, which raises such questions as whether or not these canons can be applied for the modern population. This study was carried out to analyze the facial features of Indian and Malaysian women by using three-dimensional (3D) scanner and thus determine the prevalence of neoclassical facial esthetic canons in both the groups. The study was carried out on 60 women in the age range of 18-25 years, out of whom 30 were Indian and 30 Malaysian. As many as 16 facial measurements were taken by using a noncontact 3D scanner. Unpaired t-test was used for comparison of facial measurements between Indian and Malaysian females. Two-tailed Fisher exact test was used to determine the prevalence of neoclassical canons. Orbital Canon was prevalent in 80% of Malaysian women; the same was found only in 16% of Indian women (P = 0.00013). About 43% of Malaysian women exhibited orbitonasal canon (P = 0.0470) whereas nasoaural canon was prevalent in 73% of Malaysian and 33% of Indian women (P = 0.0068). Orbital, orbitonasal, and nasoaural canon were more prevalent in Malaysian women. Facial profile canon, nasooral, and nasofacial canons were not seen in either group. Though some canons provide guidelines in esthetic analyses of face, complete reliance on these canons is not justifiable.

  8. Gd-DTPA enhancement of the facial nerve in Ramsay Hunt's syndrome

    Energy Technology Data Exchange (ETDEWEB)

    Kato, Tsutomu; Yanagida, Masahiro; Yamauchi, Yasuo (Kansai Medical School, Moriguchi, Osaka (Japan)) (and others)

    1992-10-01

    A total of 21 MR images from 16 cases of Ramsay Hunt's syndrome were evaluated. In all images, the peripheral facial nerve on the involved side showed increased intensity after Gd-DTPA administration, although in 2 cases the facial palsy had already recovered by the time the MR images were taken. Nine of 19 cases with enhancement of the internal auditory canal portion had vertigo or tinnitus, suggesting that enhancement of this portion and the clinical features are closely related. (author).

  9. Targeted presurgical decompensation in patients with yaw-dependent facial asymmetry.

    Science.gov (United States)

    Kim, Kyung-A; Lee, Ji-Won; Park, Jeong-Ho; Kim, Byoung-Ho; Ahn, Hyo-Won; Kim, Su-Jung

    2017-05-01

    Facial asymmetry can be classified into the rolling-dominant type (R-type), translation-dominant type (T-type), yawing-dominant type (Y-type), and atypical type (A-type) based on the distorted skeletal components that cause canting, translation, and yawing of the maxilla and/or mandible. Each facial asymmetry type represents dentoalveolar compensations in three dimensions that correspond to the main skeletal discrepancies. To obtain sufficient surgical correction, it is necessary to analyze the main skeletal discrepancies contributing to the facial asymmetry and then the skeletal-dental relationships in the maxilla and mandible separately. Particularly in cases of facial asymmetry accompanied by mandibular yawing, it is not simple to establish presurgical goals of tooth movement, since chin deviation and posterior gonial prominence can be either aggravated or compromised according to the direction of mandibular yawing. Thus, strategic dentoalveolar decompensations targeting the real basal skeletal discrepancies should be performed during presurgical orthodontic treatment to allow for sufficient skeletal correction with stability. In this report, we document targeted decompensation in two patients with the more complicated yaw-dependent types: Y-type and A-type. This may suggest a clinical guideline for targeted decompensation in patients with different types of facial asymmetry.

  10. Are event-related potentials to dynamic facial expressions of emotion related to individual differences in the accuracy of processing facial expressions and identity?

    Science.gov (United States)

    Recio, Guillermo; Wilhelm, Oliver; Sommer, Werner; Hildebrandt, Andrea

    2017-04-01

    Despite a wealth of knowledge about the neural mechanisms behind emotional facial expression processing, little is known about how they relate to individual differences in social cognition abilities. We studied individual differences in the event-related potentials (ERPs) elicited by dynamic facial expressions. First, we assessed the latent structure of the ERPs, reflecting structural face processing in the N170, and the allocation of processing resources and reflexive attention to emotionally salient stimuli, in the early posterior negativity (EPN) and the late positive complex (LPC). Then we estimated brain-behavior relationships between the ERP factors and behavioral indicators of facial identity and emotion-processing abilities. Structural models revealed that the participants who formed faster structural representations of neutral faces (i.e., shorter N170 latencies) performed better at face perception (r = -.51) and memory (r = -.42). The N170 amplitude was not related to individual differences in face cognition or emotion processing. The latent EPN factor correlated with emotion perception (r = .47) and memory (r = .32), and also with face perception abilities (r = .41). Interestingly, the latent factor representing the difference in EPN amplitudes between the two neutral control conditions (chewing and blinking movements) also correlated with emotion perception (r = .51), highlighting the importance of tracking facial changes in the perception of emotional facial expressions. The LPC factor for negative expressions correlated with the memory for emotional facial expressions. The links revealed between the latency and strength of activations of brain systems and individual differences in processing socio-emotional information provide new insights into the brain mechanisms involved in social communication.

  11. Facial expression recognition based on weber local descriptor and sparse representation

    Science.gov (United States)

    Ouyang, Yan

    2018-03-01

    Automatic facial expression recognition has been one of the research hotspots in computer vision for nearly ten years. During that decade, many state-of-the-art methods have been proposed that achieve very high accuracy on face images free of interference. Researchers have now begun to tackle the task of classifying facial expression images with corruptions and occlusions, and the Sparse Representation based Classification (SRC) framework has been widely used because it is robust to corruptions and occlusions. This paper therefore proposes a novel facial expression recognition method based on the Weber local descriptor (WLD) and sparse representation. The method consists of three parts: first, the face image is divided into many local patches; then the WLD histogram of each patch is extracted; finally, all the WLD histograms are concatenated into a single feature vector and classified with SRC. Experimental results on the Cohn-Kanade database show that the proposed method is robust to occlusions and corruptions.
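    The three-part pipeline described above (patch-wise WLD histograms, concatenation, residual-based sparse-representation classification) can be sketched as follows. This is a minimal illustration, not the authors' code: it keeps only the differential-excitation channel of WLD (the orientation channel is omitted), and it stands in for true L1 sparse coding with a per-class least-squares residual, which preserves SRC's minimum-residual decision rule but not its sparsity constraint.

```python
import numpy as np

def wld_excitation_hist(img, bins=16):
    """Differential excitation of the Weber Local Descriptor:
    xi = arctan(sum(neighbor - center) / center), histogrammed over the patch."""
    p = np.pad(img.astype(float), 1, mode="edge")
    c = p[1:-1, 1:-1]
    # sum of the 3x3 window minus the center = sum of the 8 neighbors
    win = sum(p[i:i + c.shape[0], j:j + c.shape[1]] for i in (0, 1, 2) for j in (0, 1, 2))
    xi = np.arctan((win - 9 * c) / (c + 1e-6))
    h, _ = np.histogram(xi, bins=bins, range=(-np.pi / 2, np.pi / 2))
    return h / (h.sum() + 1e-6)

def features(img, grid=4):
    """Concatenate per-patch WLD histograms into one feature vector."""
    ph, pw = img.shape[0] // grid, img.shape[1] // grid
    return np.concatenate([wld_excitation_hist(img[r * ph:(r + 1) * ph, c * pw:(c + 1) * pw])
                           for r in range(grid) for c in range(grid)])

def src_classify(x, D, labels):
    """SRC-style decision: reconstruct x from each class's dictionary columns
    (least squares here, as a stand-in for L1 coding) and return the class
    with the smallest reconstruction residual."""
    best, best_res = None, np.inf
    for cls in np.unique(labels):
        Dc = D[:, labels == cls]
        coef, *_ = np.linalg.lstsq(Dc, x, rcond=None)
        res = np.linalg.norm(x - Dc @ coef)
        if res < best_res:
            best, best_res = cls, res
    return best
```

    With a 32x32 face image and a 4x4 grid of 16-bin histograms, the feature vector has 256 dimensions; `D` stacks the training feature vectors as columns, one label per column.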

  12. Development of the Korean Facial Emotion Stimuli: Korea University Facial Expression Collection 2nd Edition

    Directory of Open Access Journals (Sweden)

    Sun-Min Kim

    2017-05-01

    Full Text Available Background: Developing valid emotional facial stimuli for specific ethnicities creates ample opportunities to investigate both the nature of emotional facial information processing in general and clinical populations as well as the underlying mechanisms of facial emotion processing within and across cultures. Given that most entries in emotional facial stimuli databases were developed with western samples, and given that very few of the eastern emotional facial stimuli sets were based strictly on Ekman's Facial Action Coding System, developing valid emotional facial stimuli of eastern samples remains a high priority. Aims: To develop and examine the psychometric properties of six basic emotional facial stimuli recruiting professional Korean actors and actresses based on Ekman's Facial Action Coding System for the Korea University Facial Expression Collection-Second Edition (KUFEC-II). Materials and Methods: Stimulus selection was done in two phases. First, researchers evaluated the clarity and intensity of each stimulus developed based on the Facial Action Coding System. Second, researchers selected a total of 399 stimuli from a total of 57 actors and actresses, which were then rated on accuracy, intensity, valence, and arousal by 75 independent raters. Conclusion: The hit rates between the targeted and rated expressions of the KUFEC-II were all above 80%, except for fear (50%) and disgust (63%). The KUFEC-II appears to be a valid emotional facial stimuli database, providing the largest set of emotional facial stimuli. The mean intensity score was 5.63 (out of 7), suggesting that the stimuli delivered the targeted emotions with great intensity. All positive expressions were rated as having a high positive valence, whereas all negative expressions were rated as having a high negative valence. The KUFEC-II is expected to be widely used in various psychological studies on emotional facial expression. KUFEC-II stimuli can be obtained through

  13. Caricaturing facial expressions.

    Science.gov (United States)

    Calder, A J; Rowland, D; Young, A W; Nimmo-Smith, I; Keane, J; Perrett, D I

    2000-08-14

    The physical differences between facial expressions (e.g. fear) and a reference norm (e.g. a neutral expression) were altered to produce photographic-quality caricatures. In Experiment 1, participants rated caricatures of fear, happiness and sadness for their intensity of these three emotions; a second group of participants rated how 'face-like' the caricatures appeared. With increasing levels of exaggeration the caricatures were rated as more emotionally intense, but less 'face-like'. Experiment 2 demonstrated a similar relationship between emotional intensity and level of caricature for six different facial expressions. Experiments 3 and 4 compared intensity ratings of facial expression caricatures prepared relative to a selection of reference norms - a neutral expression, an average expression, or a different facial expression (e.g. anger caricatured relative to fear). Each norm produced a linear relationship between caricature and rated intensity of emotion; this finding is inconsistent with two-dimensional models of the perceptual representation of facial expression. An exemplar-based multidimensional model is proposed as an alternative account.

  14. A novel malformation complex of bilateral and symmetric preaxial radial ray-thumb aplasia and lower limb defects with minimal facial dysmorphic features: a case report and literature review.

    Science.gov (United States)

    Al Kaissi, Ali; Klaushofer, Klaus; Krebs, Alexander; Grill, Franz

    2008-10-24

    Radial hemimelia is a congenital abnormality characterised by the partial or complete absence of the radius. Longitudinal hemimelia indicates the absence of one or more bones along the preaxial (medial) or postaxial (lateral) side of the limb. Preaxial limb defects occur more frequently in combination with microtia, esophageal atresia, anorectal atresia, heart defects, unilateral kidney dysgenesis, and some axial skeletal defects. Postaxial acrofacial dysostoses are characterised by distinctive facies and postaxial limb deficiencies involving the 5th finger, metacarpal, ulna, fibula, and metatarsal. The patient was an 8-year-old boy with minimal craniofacial dysmorphic features but profound upper limb defects: bilateral and symmetrical absence of the radius and the thumbs. In addition, there was unilateral tibio-fibular hypoplasia (hemimelia) associated with hypoplasia of the terminal phalanges and malsegmentation of the upper thoracic vertebrae, effectively causing the development of thoracic kyphosis. In the typical form of preaxial acrofacial dysostosis, there are aberrations in the development of the first and second branchial arches and limb buds, and the craniofacial dysmorphic features are characteristic, such as micrognathia, zygomatic hypoplasia, cleft palate, and preaxial limb defects. It was Nager and de Reynier in 1948 who used the term acrofacial dysostosis (AFD) to distinguish the condition from mandibulofacial dysostosis. Neither the facial features nor the limb defects in our patient appear to be typical of the previously reported cases of AFD. Our patient expands the phenotype of the syndromic preaxial limb malformation complex and might represent a new syndromic entity of mild naso-maxillary malformation in connection with an axial and extra-axial malformation complex.

  15. Parotidectomía y vena facial Parotidectomy and facial vein

    Directory of Open Access Journals (Sweden)

    F. Hernández Altemir

    2009-10-01

    Full Text Available Surgery for benign parotid tumors is a surgery of relationships with structures, chiefly nervous, whose damage represents, to define it generically, a very serious psychosomatic problem. To aid the surgical handling of the peripheral facial nerve, this article emphasizes the importance of the facial vein in the dissection and preservation of the nerve, precisely where its dissection tends to be most delicate, that is, in the most caudal branches. The present work should therefore be seen as praise for the venous structures in following and controlling the peripheral facial nerve, and for the great auricular nerve, which is not always sufficiently valued in parotid surgery as it loses prominence to the facial nerve.

  16. Facial Nerve Palsy: An Unusual Presenting Feature of Small Cell Lung Cancer

    Directory of Open Access Journals (Sweden)

    Ozcan Yildiz

    2011-01-01

    Full Text Available Lung cancer is the second most common type of cancer in the world and is the most common cause of cancer-related death in men and women; it is responsible for 1.3 million deaths annually worldwide. It can metastasize to any organ. The most common site of metastasis in the head and neck region is the brain; however, it can also metastasize to the oral cavity, gingiva, tongue, parotid gland and lymph nodes. This article reports a case of small cell lung cancer presenting with metastasis to the facial nerve.

  17. 3D Facial Similarity Measure Based on Geodesic Network and Curvatures

    Directory of Open Access Journals (Sweden)

    Junli Zhao

    2014-01-01

    Full Text Available Automated 3D facial similarity measure is a challenging and valuable research topic in anthropology and computer graphics. It is widely used in various fields, such as criminal investigation, kinship confirmation, and face recognition. This paper proposes a 3D facial similarity measure method based on a combination of geodesic and curvature features. First, a geodesic network is generated for each face, with geodesics and iso-geodesics determined, and these network points are adopted as the correspondence across face models. Then, four metrics associated with curvatures, that is, the mean curvature, Gaussian curvature, shape index, and curvedness, are computed for each network point by using a weighted average of its neighborhood points. Finally, correlation coefficients according to these metrics are computed, respectively, as the similarity measures between two 3D face models. Experiments on 3D facial models of different persons and on different 3D facial models of the same person were conducted and compared with a subjective face similarity study. The results show that the geodesic network plays an important role in the 3D facial similarity measure. The similarity measure defined by shape index is largely consistent with humans' subjective evaluation, and it can measure 3D face similarity more objectively than the other indices.
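    The last two steps above, computing the four curvature metrics at corresponding network points and correlating them between two faces, can be sketched as below. This is an illustrative reading of the abstract, assuming principal curvatures k1 >= k2 are already available at the corresponding geodesic-network points; the shape-index formula used here is one common convention, and the geodesic-network construction and neighborhood weighting are omitted.

```python
import numpy as np

def curvature_metrics(k1, k2):
    """The four point-wise metrics named in the paper, from principal
    curvatures k1 >= k2 (arrays over corresponding network points)."""
    return {
        "mean":        (k1 + k2) / 2.0,                       # mean curvature H
        "gaussian":    k1 * k2,                               # Gaussian curvature K
        "shape_index": (2 / np.pi) * np.arctan2(k1 + k2, k1 - k2),  # one common convention
        "curvedness":  np.sqrt((k1 ** 2 + k2 ** 2) / 2.0),
    }

def facial_similarity(k1_a, k2_a, k1_b, k2_b):
    """Per-metric Pearson correlation across corresponding network points,
    used as the similarity score between two 3D face models."""
    ma = curvature_metrics(k1_a, k2_a)
    mb = curvature_metrics(k1_b, k2_b)
    return {name: float(np.corrcoef(ma[name], mb[name])[0, 1]) for name in ma}
```

    Identical faces yield a correlation of 1 for every metric; unrelated curvature fields yield values near 0.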

  18. A Report of Two Cases of Solid Facial Edema in Acne.

    Science.gov (United States)

    Kuhn-Régnier, Sarah; Mangana, Joanna; Kerl, Katrin; Kamarachev, Jivko; French, Lars E; Cozzio, Antonio; Navarini, Alexander A

    2017-03-01

    Solid facial edema (SFE) is a rare complication of acne vulgaris. Our aim was to examine the clinical features of acne patients with solid facial edema, and to give an overview of the outcomes of previous topical and systemic treatments in the cases published so far. We report two cases from Switzerland, both young men with initially papulopustular acne resistant to topical retinoids. Both cases responded to oral isotretinoin, in one case combined with oral steroids. Our cases show a strikingly similar clinical appearance to the cases described by Connelly and Winkelmann in 1985 (Connelly MG, Winkelmann RK. Solid facial edema as a complication of acne vulgaris. Arch Dermatol. 1985;121(1):87), as well as to cases of Morbihan's disease, which occurs as a rare complication of rosacea. Even 30 years later, the cause of the edema remains unknown. In two of the original four cases, a potential triggering factor such as facial trauma or insect bites was identified; however, our two patients did not report such occurrences. The rare cases of solid facial edema in both acne and rosacea might hold the key to understanding the specific inflammatory pattern that creates both persisting inflammation and disturbed fluid homeostasis, which can occur with a slightly different presentation in dermatomyositis, angioedema, Heerfordt's syndrome and other conditions.

  19. Facial reanimation by muscle-nerve neurotization after facial nerve sacrifice. Case report.

    Science.gov (United States)

    Taupin, A; Labbé, D; Babin, E; Fromager, G

    2016-12-01

    Recovering a certain degree of mimicry after sacrifice of the facial nerve is a clinically recognized finding. The authors report a case of hemifacial reanimation suggesting a phenomenon of muscle-to-nerve neurotization. A woman underwent a parotidectomy with sacrifice of the left facial nerve, indicated for a recurrent tumor in the gland. The distal branches of the facial nerve, isolated at the time of resection, were buried in the masseter muscle underneath. The patient recovered voluntary hemifacial motricity. Electromyographic analysis of the motor activity of the zygomaticus major before and after block of the masseter nerve showed a dependence between the mimic muscles and the masseter muscle. Several hypotheses have been advanced to explain the spontaneous reanimation of facial paralysis. This clinical case makes it possible to argue in favor of muscle-to-nerve neurotization from the masseter muscle to the distal branches of the facial nerve, and it illustrates the quality of motricity that can be obtained thanks to this procedure. The authors describe a simple technique of implanting the distal branches of the facial nerve in the masseter muscle during radical parotidectomy with facial nerve sacrifice, with recovery not only of resting tone but also of quality voluntary mimicry. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  20. Deliberately generated and imitated facial expressions of emotions in people with eating disorders.

    Science.gov (United States)

    Dapelo, Marcela Marin; Bodas, Sergio; Morris, Robin; Tchanturia, Kate

    2016-02-01

    People with eating disorders have difficulties in socio-emotional functioning that could contribute to maintaining the functional consequences of the disorder. This study aimed to explore the ability to deliberately generate (i.e., pose) and imitate facial expressions of emotions in women with anorexia (AN) and bulimia nervosa (BN), compared to healthy controls (HC). One hundred and three participants (36 AN, 25 BN, and 42 HC) were asked to pose and imitate facial expressions of anger, disgust, fear, happiness, and sadness. Their facial expressions were recorded and coded. Participants with eating disorders (both AN and BN) were less accurate than HC when posing facial expressions of emotions. Participants with AN were less accurate than HC when imitating facial expressions, whilst BN participants showed intermediate performance. All results remained significant after controlling for anxiety, depression and autistic features. A limitation is the relatively small number of BN participants recruited for this study. The findings suggest that people with eating disorders, particularly those with AN, have difficulties posing and imitating facial expressions of emotions. These difficulties could have an impact on social communication and social functioning. This is the first study to investigate the ability to pose and imitate facial expressions of emotions in people with eating disorders, and the findings suggest this area should be further explored in future studies. Copyright © 2015. Published by Elsevier B.V.

  1. Deep Spatial-Temporal Joint Feature Representation for Video Object Detection.

    Science.gov (United States)

    Zhao, Baojun; Zhao, Boya; Tang, Linbo; Han, Yuqi; Wang, Wenzheng

    2018-03-04

    With the development of deep neural networks, many object detection frameworks have shown great success in the fields of smart surveillance, self-driving cars, and facial recognition. However, the data sources are usually videos, and the object detection frameworks are mostly established on still images and only use the spatial information, which means that the feature consistency cannot be ensured because the training procedure loses temporal information. To address these problems, we propose a single, fully-convolutional neural network-based object detection framework that involves temporal information by using Siamese networks. In the training procedure, first, the prediction network combines the multiscale feature map to handle objects of various sizes. Second, we introduce a correlation loss by using the Siamese network, which provides neighboring frame features. This correlation loss represents object co-occurrences across time to aid the consistent feature generation. Since the correlation loss should use the information of the track ID and detection label, our video object detection network has been evaluated on the large-scale ImageNet VID dataset where it achieves a 69.5% mean average precision (mAP).
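    The correlation-loss idea above, making features of the same tracked object similar across neighboring frames via Siamese branches, can be sketched in a framework-agnostic way. This is a hedged illustration, not the paper's exact formulation: it uses cosine dissimilarity averaged over pairs matched by track ID, standing in for whatever distance the authors optimize inside their network.

```python
import numpy as np

def correlation_loss(feat_t, feat_t1, ids_t, ids_t1):
    """Penalize dissimilarity between per-object features of the *same*
    track ID in two neighboring frames.
    feat_t, feat_t1: (N, D) and (M, D) feature vectors from the two Siamese branches.
    ids_t, ids_t1:   (N,) and (M,) track IDs used to match objects across frames."""
    loss, pairs = 0.0, 0
    for i, tid in enumerate(ids_t):
        for j in np.flatnonzero(ids_t1 == tid):
            a, b = feat_t[i], feat_t1[j]
            cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)
            loss += 1.0 - cos  # 0 when the matched features point the same way
            pairs += 1
    return loss / max(pairs, 1)
```

    In training this term would be added to the usual detection losses, encouraging the backbone to produce temporally consistent features for each track.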

  2. Atuação da fonoaudiologia na estética facial: relato de caso clínico Speech therapy performance in facial aesthetics: a clinical case report

    Directory of Open Access Journals (Sweden)

    Carla Cristina Gonçalves dos Santos

    2011-08-01

    Full Text Available BACKGROUND: facial aesthetics. PROCEDURES: by means of anamnesis and clinical evaluation, a treatment protocol was applied with functional manipulation of the facial masticatory muscles accompanied by isometric exercises, in a total of 8 weekly sessions over 2 months; patients were photographed before and after treatment. The aim was to characterize, from a qualitative standpoint, the facial changes assessed clinically after speech therapy treatment, under an etiological focus of biomechanical character. RESULTS: an improvement was observed in facial symmetry and in functions related to mandibular biomechanics. CONCLUSION: the findings suggest the importance of speech therapy intervention in restoring facial and oral motor function, with repercussions in the reduction of wrinkles, expression marks and flaccidity.

  3. Facial emotion identification in early-onset psychosis.

    Science.gov (United States)

    Barkl, Sophie J; Lah, Suncica; Starling, Jean; Hainsworth, Cassandra; Harris, Anthony W F; Williams, Leanne M

    2014-12-01

    Facial emotion identification (FEI) deficits are common in patients with chronic schizophrenia and are strongly related to impaired functioning. The objectives of this study were to determine whether FEI deficits are present and emotion-specific in people experiencing early-onset psychosis (EOP), and related to current clinical symptoms and functioning. Patients with EOP (n=34, mean age=14.11, 53% female) and healthy controls (HC, n=42, mean age=13.80, 51% female) completed a task of FEI that measured accuracy, error pattern and response time. Relative to HC, patients with EOP (i) had lower accuracy for identifying facial expressions of emotions, especially fear, anger and disgust, (ii) were more likely to misattribute other emotional expressions as fear or disgust, and (iii) were slower at accurately identifying all facial expressions. FEI accuracy was not related to clinical symptoms or current functioning. Deficits in FEI (especially for fear, anger and disgust) are evident in EOP. Our findings suggest that while emotion identification deficits may reflect a trait susceptibility marker, functional deficits may represent a sequela of illness. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. Speech Signal and Facial Image Processing for Obstructive Sleep Apnea Assessment

    Directory of Open Access Journals (Sweden)

    Fernando Espinoza-Cuadros

    2015-01-01

    Full Text Available Obstructive sleep apnea (OSA) is a common sleep disorder characterized by recurring breathing pauses during sleep caused by a blockage of the upper airway (UA). OSA is generally diagnosed through a costly procedure requiring an overnight stay of the patient at the hospital. This has led to proposing less costly procedures based on the analysis of patients’ facial images and voice recordings to help in OSA detection and severity assessment. In this paper we investigate the use of both image and speech processing to estimate the apnea-hypopnea index, AHI (which describes the severity of the condition), over a population of 285 male Spanish subjects suspected of suffering from OSA and referred to a Sleep Disorders Unit. Photographs and voice recordings were collected in a supervised but not highly controlled way, trying to test a scenario close to an OSA assessment application running on a mobile device (i.e., smartphones or tablets). Spectral information in speech utterances is modeled by a state-of-the-art low-dimensional acoustic representation, called i-vector. A set of local craniofacial features related to OSA are extracted from images after detecting facial landmarks using Active Appearance Models (AAMs). Support vector regression (SVR) is applied on facial features and i-vectors to estimate the AHI.
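As a rough illustration of the final regression step, here is a minimal linear epsilon-insensitive SVR fitted by subgradient descent on synthetic data. In the study, each row of `X` would be the concatenation of a subject's i-vector and craniofacial measurements, and `y` the measured AHI; all names and hyperparameters here are illustrative, not the authors'.

```python
import numpy as np

def fit_linear_svr(X, y, epsilon=0.1, alpha=1e-3, lr=0.1, epochs=4000):
    """Minimal linear epsilon-insensitive SVR, fitted by subgradient descent:
    minimize mean(max(0, |y - Xw - b| - epsilon)) + 0.5*alpha*||w||^2.
    Residuals inside the epsilon 'tube' contribute no gradient."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for t in range(epochs):
        r = X @ w + b - y
        g = np.sign(r) * (np.abs(r) > epsilon)   # subgradient of the tube loss
        step = lr / np.sqrt(t + 1.0)             # decaying step size
        w -= step * (X.T @ g / n + alpha * w)
        b -= step * g.mean()
    return w, b
```

A full implementation would use a kernelized SVR from an established library; this sketch only shows the epsilon-insensitive objective that distinguishes SVR from ordinary least squares.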

  5. Dispersion assessment in the location of facial landmarks on photographs.

    Science.gov (United States)

    Campomanes-Álvarez, B R; Ibáñez, O; Navarro, F; Alemán, I; Cordón, O; Damas, S

    2015-01-01

    The morphological assessment of facial features using photographs has played an important role in forensic anthropology. The analysis of anthropometric landmarks for determining facial dimensions and angles has been considered in diverse forensic areas. Hence, quantifying the error associated with the location of facial landmarks becomes necessary when photographs are a key element of the forensic procedure. In this work, we statistically evaluate the inter- and intra-observer dispersions related to facial landmark identification on photographs. In the inter-observer experiment, a set of 18 facial landmarks was provided to 39 operators. They were requested to mark only those that they could precisely place on 10 photographs with different poses (frontal, oblique, and lateral views). The frequency of landmark location was studied together with their dispersion. Regarding the intra-observer evaluation, three participants identified 13 facial points on five photographs classified into the frontal and oblique views. Each landmark location was repeated five times at intervals of at least 24 h. The frequency results reveal that glabella, nasion, subnasale, labiale superius, and pogonion obtained the highest location frequency in the three image categories. On the contrary, the lowest rate corresponds to labiale inferius and menton. Meanwhile, zygia, gonia, and gnathion were significantly more difficult to locate than other facial landmarks. They produced a significant effect on the dispersion depending on the pose of the image where they were placed, regardless of the type of observer that positioned them. In particular, zygia and gonia presented statistically greater variation in the three image poses, while the location of gnathion is less precise in oblique-view photographs. Hence, our findings suggest that the latter landmarks tend to be highly variable when determining their exact position.
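The abstract does not specify how dispersion was quantified; one simple candidate measure, the mean Euclidean distance of each observer's placement to the centroid of all placements of a landmark, can be sketched as follows (the function name and input layout are our own):

```python
import numpy as np

def landmark_dispersion(points):
    """Inter-observer dispersion of one landmark: mean Euclidean distance
    of each observer's (x, y) placement to the centroid of all placements.
    `points` is an iterable of (x, y) pairs, one per observer."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    return float(np.mean(np.linalg.norm(pts - centroid, axis=1)))
```

For example, four placements at the corners of a 2-by-2 square have centroid (1, 1), and each corner lies sqrt(2) from it, so the dispersion is sqrt(2).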

  6. Neurinomas of the facial nerve extending to the middle cranial fossa

    International Nuclear Information System (INIS)

    Ichikawa, Akimichi; Tanaka, Ryuichi; Matsumura, Kenichiro; Takeda, Norio; Ishii, Ryoji; Ito, Jusuke.

    1986-01-01

    Three cases with neurinomas of the facial nerve are reported, especially with regard to the computerized tomographic (CT) findings. All of them had a long history of facial-nerve dysfunction, associated with hearing loss over periods from several to twenty-five years. Intraoperative findings demonstrated that these tumors arose from the intrapetrous portion, the horizontal portion, or the geniculate portion of the facial nerve and that they were located in the middle cranial fossa. The histological diagnoses were neurinomas. CT scans of three cases demonstrated round and low-density masses with marginal high-density areas in the middle cranial fossa, in one associated with diffuse low-density areas in the left temporal and parietal lobes. The low-density areas on CT were thought to be cysts; this was confirmed by surgery. Enhanced CT scans showed irregular enhancement in one case and ring-like enhancement in two cases. High-resolution CT scans of the temporal bone in two cases revealed a soft tissue mass in the middle ear, a well-circumscribed irregular destruction of the anterior aspect of the petrous bone, and calcifications. These findings seemed to be significant features of the neurinomas of the facial nerve extending to the middle cranial fossa. We emphasize that bone-window CT of the temporal bone is most useful in detecting a neurinoma of the facial nerve in its early stage in order to preserve the facial- and acoustic-nerve functions. (author)

  7. Pediatric facial injuries: their management

    OpenAIRE

    Singh, Geeta; Mohammad, Shadab; Pal, U. S.; Hariram,; Malkunje, Laxman R.; Singh, Nimisha

    2011-01-01

    Background: Facial injuries in children always present a challenge in respect of their diagnosis and management. Since these children are of a growing age, every care should be taken so that the overall growth pattern of the facial skeleton in these children is not later jeopardized. Purpose: To assess the most feasible method for the management of facial injuries in children without hampering facial growth. Materials and Methods: Sixty child patients with facial trauma were selected rando...

  8. Facial Sports Injuries

    Science.gov (United States)

    ... the patient has HIV or hepatitis. Facial Fractures Sports injuries can cause potentially serious broken bones or fractures of the face. Common symptoms of facial fractures include: swelling and bruising, ...

  9. Automatic change detection to facial expressions in adolescents

    DEFF Research Database (Denmark)

    Liu, Tongran; Xiao, Tong; Jiannong, Shi

    2016-01-01

    Adolescence is a critical period for the neurodevelopment of social-emotional processing, wherein the automatic detection of changes in facial expressions is crucial for the development of interpersonal communication. Two groups of participants (an adolescent group and an adult group) were...... in facial expressions between the two age groups. The current findings demonstrated that the adolescent group featured more negative vMMN amplitudes than the adult group in the fronto-central region during the 120–200 ms interval. During the time window of 370–450 ms, only the adult group showed better...... automatic processing of fearful faces than happy faces. The present study indicated that adolescents possess stronger automatic detection of changes in emotional expression relative to adults, and sheds light on the neurodevelopment of automatic processes concerning social-emotional information....

  10. Outcome of a graduated minimally invasive facial reanimation in patients with facial paralysis.

    Science.gov (United States)

    Holtmann, Laura C; Eckstein, Anja; Stähr, Kerstin; Xing, Minzhi; Lang, Stephan; Mattheis, Stefan

    2017-08-01

    Peripheral paralysis of the facial nerve is the most frequent of all cranial nerve disorders. Despite advances in facial surgery, the functional and aesthetic reconstruction of a paralyzed face remains a challenge. Graduated minimally invasive facial reanimation is based on a modular principle. According to the patients' needs, preconditions, and expectations, the following modules can be performed: temporalis muscle transposition and facelift, nasal valve suspension, endoscopic brow lift, and eyelid reconstruction. Applying a concept of graduated minimally invasive facial reanimation may help minimize surgical trauma and reduce morbidity. Twenty patients underwent a graduated minimally invasive facial reanimation. A retrospective chart review was performed with a follow-up examination between 1 and 8 months after surgery. The FACEgram software was used to calculate pre- and postoperative eyelid closure, the level of the brows, nasal and philtral symmetry, as well as oral commissure position at rest and oral commissure excursion with smile. As a patient-oriented outcome parameter, the Glasgow Benefit Inventory questionnaire was applied. There was a statistically significant improvement in the postoperative scores of eyelid closure, brow asymmetry, nasal asymmetry, philtral asymmetry, as well as oral commissure symmetry at rest (p < …). When facial nerve repair or microneurovascular tissue transfer cannot be applied, graduated minimally invasive facial reanimation is a promising option to restore facial function and symmetry at rest.

  11. Younger and Older Users’ Recognition of Virtual Agent Facial Expressions

    Science.gov (United States)

    Beer, Jenay M.; Smarr, Cory-Ann; Fisk, Arthur D.; Rogers, Wendy A.

    2015-01-01

    As technology advances, robots and virtual agents will be introduced into the home and healthcare settings to assist individuals, both young and old, with everyday living tasks. Understanding how users recognize an agent’s social cues is therefore imperative, especially in social interactions. Facial expression, in particular, is one of the most common non-verbal cues used to display and communicate emotion in on-screen agents (Cassell, Sullivan, Prevost, & Churchill, 2000). Age is important to consider because age-related differences in emotion recognition of human facial expression have been supported (Ruffman et al., 2008), with older adults showing a deficit for recognition of negative facial expressions. Previous work has shown that younger adults can effectively recognize facial emotions displayed by agents (Bartneck & Reichenbach, 2005; Courgeon et al. 2009; 2011; Breazeal, 2003); however, little research has compared in depth younger and older adults’ ability to label a virtual agent’s facial emotions, an important consideration because social agents will be required to interact with users of varying ages. If such age-related differences exist for recognition of virtual agent facial expressions, we aim to understand whether those age-related differences are influenced by the intensity of the emotion, dynamic formation of emotion (i.e., a neutral expression developing into an expression of emotion through motion), or the type of virtual character differing by human-likeness. Study 1 investigated the relationship between age-related differences, the implication of dynamic formation of emotion, and the role of emotion intensity in emotion recognition of the facial expressions of a virtual agent (iCat). Study 2 examined age-related differences in recognition expressed by three types of virtual characters differing by human-likeness (non-humanoid iCat, synthetic human, and human). Study 2 also investigated the role of configural and featural processing as a

  12. FEATURES OF NEED-MOTIVATION ORIENTATION OF STUDENTS WHO REPRESENT THE CHINESE CULTURE

    Directory of Open Access Journals (Sweden)

    T. V. Mayasova

    2016-01-01

    Full Text Available This article investigates the features of the need-motivation orientation of students representing Chinese culture who study in higher educational institutions of Russia. The degree of satisfaction of basic needs, the level of motivation to succeed, and the motivational structure of personality are analyzed as personal characteristics of Chinese and Russian students. Studying the personality characteristics of foreign university students helps professionals establish the conditions for successful social and cross-cultural adaptation of students in a foreign country. The results obtained during the empirical research confirm that there are certain differences in the needs and motivation of students representing Chinese and Russian culture. Significant differences were found between Chinese and Russian students in interpersonal needs, the need for recognition, motivation for comfort, and motivation toward "total activity", which makes it possible to predict adaptation and socialization difficulties of foreign students during training.

  13. The distinguishing motor features of cataplexy: a study from video-recorded attacks.

    Science.gov (United States)

    Pizza, Fabio; Antelmi, Elena; Vandi, Stefano; Meletti, Stefano; Erro, Roberto; Baumann, Christian R; Bhatia, Kailash P; Dauvilliers, Yves; Edwards, Mark J; Iranzo, Alex; Overeem, Sebastiaan; Tinazzi, Michele; Liguori, Rocco; Plazzi, Giuseppe

    2018-05-01

    To describe the motor pattern of cataplexy and to determine its phenomenological differences from pseudocataplexy in the differential diagnosis of episodic falls. We selected 30 video-recorded cataplexy and 21 pseudocataplexy attacks in 17 and 10 patients evaluated for suspected narcolepsy, with final diagnoses of narcolepsy type 1 and conversion disorder, respectively, together with self-reported attack features, and asked expert neurologists to blindly evaluate the motor features of the attacks. Video-documented and self-reported attack features of cataplexy and pseudocataplexy were contrasted. Video-recorded cataplexy can be positively differentiated from pseudocataplexy by the occurrence of facial hypotonia (ptosis, mouth opening, tongue protrusion) intermingled with jerks and grimaces abruptly interrupting laughter behavior (i.e. smile, facial expression) and postural control (head drops, trunk fall) under a clear emotional trigger. Facial involvement is present in both partial and generalized cataplexy. Conversely, generalized pseudocataplexy is associated with persistence of deep tendon reflexes during the attack. Self-reported features confirmed the important role of positive emotions (laughter, telling a joke) in triggering the attacks, as well as the more frequent occurrence of partial body involvement in cataplexy compared with pseudocataplexy. Cataplexy is characterized by abrupt facial involvement during laughter behavior. Video recording of suspected cataplexy attacks allows the identification of positive clinical signs useful for diagnosis and, possibly in the future, for severity assessment.

  14. Facial nerve palsy as a primary presentation of advanced carcinoma ...

    African Journals Online (AJOL)

    Introduction: Cranial nerve neuropathy is a rare presentation of advanced cancer of the prostate. Observation: We report a case of 65-year-old man who presented with right lower motor neuron (LMN) facial nerve palsy. The prostate had malignant features on digital rectal examination (DRE) and the prostate specific antigen ...

  15. Emotion Index of Cover Song Music Video Clips based on Facial Expression Recognition

    DEFF Research Database (Denmark)

    Kavallakis, George; Vidakis, Nikolaos; Triantafyllidis, Georgios

    2017-01-01

    This paper presents a scheme of creating an emotion index of cover song music video clips by recognizing and classifying facial expressions of the artist in the video. More specifically, it fuses effective and robust algorithms which are employed for expression recognition, along with the use...... of a neural network system using the features extracted by the SIFT algorithm. Also we support the need of this fusion of different expression recognition algorithms, because of the way that emotions are linked to facial expressions in music video clips....

  16. Hypoglossal-facial nerve "side"-to-side neurorrhaphy for facial paralysis resulting from closed temporal bone fractures.

    Science.gov (United States)

    Su, Diya; Li, Dezhi; Wang, Shiwei; Qiao, Hui; Li, Ping; Wang, Binbin; Wan, Hong; Schumacher, Michael; Liu, Song

    2018-06-06

    Closed temporal bone fractures due to cranial trauma often result in facial nerve injury, frequently inducing incomplete facial paralysis. Conventional hypoglossal-facial nerve end-to-end neurorrhaphy may not be suitable for these injuries because sacrificing the lesioned facial nerve for neurorrhaphy destroys the remnant axons and/or potential spontaneous reinnervation. We therefore modified the classical method to a hypoglossal-facial nerve "side"-to-side neurorrhaphy using an interpositional predegenerated nerve graft to treat these injuries. Five patients who experienced facial paralysis resulting from closed temporal bone fractures due to cranial trauma were treated with the "side"-to-side neurorrhaphy. An additional 4 patients did not receive the neurorrhaphy and served as controls. Before treatment, all patients had suffered House-Brackmann (H-B) grade V or VI facial paralysis for a mean of 5 months. During the 12-30 months of follow-up, no further detectable deficits were observed, but an improvement in facial nerve function was evident over time in the 5 neurorrhaphy-treated patients. At the end of follow-up, the improved facial function reached H-B grade II in 3, grade III in 1 and grade IV in 1 of the 5 patients, consistent with the electrophysiological examinations. In the control group, two patients showed slight spontaneous reinnervation, with facial function improving from H-B grade VI to V, and the other patients remained unchanged at H-B grade V or VI. We conclude that hypoglossal-facial nerve "side"-to-side neurorrhaphy can preserve the injured facial nerve and is suitable for treating significant incomplete facial paralysis resulting from closed temporal bone fractures, providing an evident beneficial effect. Moreover, this treatment may be performed earlier after the onset of facial paralysis in order to reduce the unfavorable changes to the injured facial nerve and atrophy of its target muscles due to long-term denervation and allow axonal

  17. [Descending hypoglossal branch-facial nerve anastomosis in treating unilateral facial palsy after acoustic neuroma resection].

    Science.gov (United States)

    Liang, Jiantao; Li, Mingchu; Chen, Ge; Guo, Hongchuan; Zhang, Qiuhang; Bao, Yuhai

    2015-12-15

    To evaluate the efficiency of descending hypoglossal branch-facial nerve anastomosis for severe facial palsy after acoustic neuroma resection. The clinical data of 14 patients (6 males, 8 females, average age 45.6 years) who underwent descending hypoglossal branch-facial nerve anastomosis for treatment of unilateral facial palsy were analyzed retrospectively. All patients had previously undergone resection of a large acoustic neuroma. The House-Brackmann (H-B) grading system was used to evaluate the pre-operative, post-operative and follow-up facial nerve function status. 12 cases (85.7%) had long follow-up, with an average follow-up period of 24.6 months. 6 patients had a good outcome (H-B grade 2-3); 5 patients had a fair outcome (H-B grade 3-4) and 1 patient had a poor outcome (H-B grade 5). Only 1 patient suffered hemitongue myoparalysis owing to the operation. Descending hypoglossal branch-facial nerve anastomosis is effective for facial reanimation, and it has little impact on the function of chewing, swallowing and pronunciation compared with traditional hypoglossal-facial nerve anastomosis.

  18. Adapting Local Features for Face Detection in Thermal Image

    Directory of Open Access Journals (Sweden)

    Chao Ma

    2017-11-01

    Full Text Available A thermal camera captures the temperature distribution of a scene as a thermal image. In thermal images, the facial appearances of different people under different lighting conditions are similar, because facial temperature distribution is generally constant and not affected by lighting. This similarity in face appearance is advantageous for face detection. To detect faces in thermal images, cascade classifiers with Haar-like features are generally used; however, there are few studies exploring local features for face detection in thermal images. In this paper, we introduce two approaches relying on local features for face detection in thermal images. First, we create new feature types by extending Multi-Block LBP, taking into account a margin around the reference and the generally constant distribution of facial temperature. In this way, we make the features more robust to image noise and more effective for face detection in thermal images. Second, we propose an AdaBoost-based training method to obtain cascade classifiers with multiple types of local features; since these feature types have different advantages, combining them enhances the descriptive power of the local features. We conducted a hold-out validation experiment and a field experiment. In the hold-out validation experiment, we captured a dataset from 20 participants (14 males and 6 females). For each participant, we captured 420 images with 10 variations in camera distance, 21 poses, and 2 appearances (with/without glasses). We compared the performance of cascade classifiers trained with different feature sets; the results showed that the proposed approaches effectively improve the performance of face detection in thermal images. In the field experiment, we compared face detection performance in realistic scenes using thermal and RGB images, and discuss the results.
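The exact margin-augmented feature definition is not given in the abstract; under our own reading (a 3x3 grid of blocks, with a neighbor bit set only when its block mean exceeds the center block mean by more than a margin), a standard Multi-Block LBP code with such a margin could be sketched as:

```python
import numpy as np

def mb_lbp_code(patch, margin=0.0):
    """Multi-Block LBP code of a square patch whose side is divisible by 3.
    The patch is split into a 3x3 grid of blocks; each of the 8 outer block
    means is compared against the center block mean. `margin` is our reading
    of the paper's extension: a neighbor bit is set only if its mean exceeds
    the center mean by more than `margin`, adding robustness to image noise."""
    patch = np.asarray(patch, dtype=float)
    s = patch.shape[0] // 3
    means = patch.reshape(3, s, 3, s).mean(axis=(1, 3))   # 3x3 block means
    center = means[1, 1]
    # neighbor blocks in clockwise order, starting at the top-left block
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    bits = [means[r, c] > center + margin for r, c in order]
    return sum(int(b) << i for i, b in enumerate(bits))
```

With `margin=0` this reduces to plain Multi-Block LBP; a larger margin suppresses bits caused by small temperature fluctuations.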

  19. Detection of emotional faces: salient physical features guide effective visual search.

    Science.gov (United States)

    Calvo, Manuel G; Nummenmaa, Lauri

    2008-08-01

    In this study, the authors investigated how salient visual features capture attention and facilitate detection of emotional facial expressions. In a visual search task, a target emotional face (happy, disgusted, fearful, angry, sad, or surprised) was presented in an array of neutral faces. Faster detection of happy and, to a lesser extent, surprised and disgusted faces was found both under upright and inverted display conditions. Inversion slowed down the detection of these faces less than that of others (fearful, angry, and sad). Accordingly, the detection advantage involves processing of featural rather than configural information. The facial features responsible for the detection advantage are located in the mouth rather than the eye region. Computationally modeled visual saliency predicted both attentional orienting and detection. Saliency was greatest for the faces (happy) and regions (mouth) that were fixated earlier and detected faster, and there was close correspondence between the onset of the modeled saliency peak and the time at which observers initially fixated the faces. The authors conclude that visual saliency of specific facial features--especially the smiling mouth--is responsible for facilitated initial orienting, which thus shortens detection. (PsycINFO Database Record (c) 2008 APA, all rights reserved).

  20. Facial recognition software success rates for the identification of 3D surface reconstructed facial images: implications for patient privacy and security.

    Science.gov (United States)

    Mazura, Jan C; Juluru, Krishna; Chen, Joseph J; Morgan, Tara A; John, Majnu; Siegel, Eliot L

    2012-06-01

    Image de-identification has focused on the removal of textual protected health information (PHI). Surface reconstructions of the face have the potential to reveal a subject's identity even when textual PHI is absent. This study assessed the ability of a computer application to match research subjects' 3D facial reconstructions with conventional photographs of their face. In a prospective study, 29 subjects underwent CT scans of the head and had frontal digital photographs of their face taken. Facial reconstructions of each CT dataset were generated on a 3D workstation. In phase 1, photographs of the 29 subjects undergoing CT scans were added to a digital directory and tested for recognition using facial recognition software. In phases 2-4, additional photographs were added in groups of 50 to increase the pool of possible matches and the test for recognition was repeated. As an internal control, photographs of all subjects were tested for recognition against an identical photograph. Of 3D reconstructions, 27.5% were matched correctly to corresponding photographs (95% upper CL, 40.1%). All study subject photographs were matched correctly to identical photographs (95% lower CL, 88.6%). Of 3D reconstructions, 96.6% were recognized simply as a face by the software (95% lower CL, 83.5%). Facial recognition software has the potential to recognize features on 3D CT surface reconstructions and match these with photographs, with implications for PHI.
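The abstract reports 95% confidence limits without naming the method used; the Wilson score interval is one common choice for a binomial proportion such as the 8-of-29 match rate (the authors may well have used a different, e.g. exact one-sided, method, so the values need not agree with theirs):

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Two-sided Wilson score confidence interval for a binomial proportion,
    e.g. the fraction of 3D reconstructions matched to the correct photo.
    z=1.96 corresponds to 95% confidence."""
    p = successes / n
    denom = 1.0 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half
```

Unlike the naive normal approximation, the Wilson interval stays inside [0, 1] even at the extremes, e.g. for the 29-of-29 result on identical photographs.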

  1. Facial Expression Recognition from Video Sequences Based on Spatial-Temporal Motion Local Binary Pattern and Gabor Multiorientation Fusion Histogram

    Directory of Open Access Journals (Sweden)

    Lei Zhao

    2017-01-01

    Full Text Available This paper proposes a novel framework for facial expression analysis using dynamic and static information in video sequences. First, based on an incremental formulation, a discriminative deformable face alignment method is adapted to locate facial points, correct in-plane head rotation, and separate the facial region from the background. Then, a spatial-temporal motion local binary pattern (LBP) feature is extracted and integrated with a Gabor multiorientation fusion histogram to give descriptors that reflect the static and dynamic texture information of facial expressions. Finally, a one-versus-one strategy based multiclass support vector machine (SVM) classifier is applied to classify facial expressions. Experiments on the Cohn-Kanade (CK+) facial expression dataset illustrate that the integrated framework outperforms methods using single descriptors. Compared with other state-of-the-art methods on the CK+, MMI, and Oulu-CASIA VIS datasets, our proposed framework performs better.
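The one-versus-one strategy named above reduces a K-class problem to K(K-1)/2 pairwise classifiers plus majority voting; a minimal sketch of the voting step follows, where `binary_predict` is a hypothetical stand-in for a trained pairwise SVM (the paper's actual training and tie-breaking details are not given in the abstract):

```python
from itertools import combinations

def one_vs_one_predict(sample, classes, binary_predict):
    """One-versus-one multiclass voting: one binary classifier per class
    pair; the class that wins the most pairwise duels is the prediction.
    `binary_predict(sample, a, b)` must return the winner of the (a, b)
    duel and stands in for a trained pairwise SVM decision."""
    votes = {c: 0 for c in classes}
    for a, b in combinations(classes, 2):
        votes[binary_predict(sample, a, b)] += 1
    # ties are broken by class order, mirroring common library behavior
    return max(classes, key=lambda c: votes[c])
```

For K=6 expressions this means 15 pairwise classifiers, each trained only on the two classes it separates, which is the usual trade-off against a single one-versus-rest model.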

  2. Perceptual and affective mechanisms in facial expression recognition: An integrative review.

    Science.gov (United States)

    Calvo, Manuel G; Nummenmaa, Lauri

    2016-09-01

    Facial expressions of emotion involve a physical component of morphological changes in a face and an affective component conveying information about the expresser's internal feelings. It remains unresolved how much recognition and discrimination of expressions rely on the perception of morphological patterns or the processing of affective content. This review of research on the role of visual and emotional factors in expression recognition reached three major conclusions. First, behavioral, neurophysiological, and computational measures indicate that basic expressions are reliably recognized and discriminated from one another, albeit the effect may be inflated by the use of prototypical expression stimuli and forced-choice responses. Second, affective content along the dimensions of valence and arousal is extracted early from facial expressions, although this coarse affective representation contributes minimally to categorical recognition of specific expressions. Third, the physical configuration and visual saliency of facial features contribute significantly to expression recognition, with "emotionless" computational models being able to reproduce some of the basic phenomena demonstrated in human observers. We conclude that facial expression recognition, as it has been investigated in conventional laboratory tasks, depends to a greater extent on perceptual than affective information and mechanisms.

  3. Facial Visualizations of Women's Voices Suggest a Cross-Modality Preference for Femininity

    Directory of Open Access Journals (Sweden)

    Susanne Röder

    2013-01-01

    Full Text Available Women with higher-pitched voices and more feminine facial features are commonly judged as more attractive than women with lower-pitched voices and less feminine faces, possibly because both features are affected by (age-related) variations in endocrine status. These results are primarily derived from investigations of perceptions of variations in single-modality stimuli (i.e., faces or voices) in samples of young adult women. In the present study we sought to test whether male and female perceptions of women's voices affect visual representations of facial femininity. Eighty men and women judged voice recordings of 10 young girls (11–15 years), 10 adult women (19–28 years) and 10 peri-/post-menopausal women (50–64 years) on age, attractiveness, and femininity. Another 80 men and women were asked to indicate the face they thought each voice corresponded to, using a video that gradually changed from a masculine-looking male face into a feminine-looking female face. Both male and female participants perceived the voices of young girls and adult women to be significantly younger, more attractive and more feminine than those of peri-/post-menopausal women. Hearing young girls' and adult women's voices led both men and women to select faces that differed markedly in apparent femininity from those associated with peri-/post-menopausal women's voices. Voices of young girls had the strongest effect on visualizations of facial femininity. Our results suggest a cross-modal preference for women's vocal and facial femininity, which depends on female age and is independent of the perceiver's sex.

  4. Impaired Overt Facial Mimicry in Response to Dynamic Facial Expressions in High-Functioning Autism Spectrum Disorders

    Science.gov (United States)

    Yoshimura, Sayaka; Sato, Wataru; Uono, Shota; Toichi, Motomi

    2015-01-01

    Previous electromyographic studies have reported that individuals with autism spectrum disorders (ASD) exhibited atypical patterns of facial muscle activity in response to facial expression stimuli. However, whether such activity is expressed in visible facial mimicry remains unknown. To investigate this issue, we videotaped facial responses in…

  5. The MPI facial expression database--a validated database of emotional and conversational facial expressions.

    Directory of Open Access Journals (Sweden)

    Kathrin Kaulard

    Full Text Available The ability to communicate is one of the core aspects of human life. For this, we use not only verbal but also nonverbal signals of remarkable complexity. Among the latter, facial expressions belong to the most important information channels. Despite the large variety of facial expressions we use in daily life, research on facial expressions has so far mostly focused on the emotional aspect. Consequently, most databases of facial expressions available to the research community also include only emotional expressions, neglecting the largely unexplored aspect of conversational expressions. To fill this gap, we present the MPI facial expression database, which contains a large variety of natural emotional and conversational expressions. The database contains 55 different facial expressions performed by 19 German participants. Expressions were elicited with the help of a method-acting protocol, which guarantees both well-defined and natural facial expressions. The method-acting protocol was based on everyday scenarios, which are used to define the necessary context information for each expression. All facial expressions are available in three repetitions, in two intensities, as well as from three different camera angles. A detailed frame annotation is provided, from which a dynamic and a static version of the database have been created. In addition to describing the database in detail, we also present the results of an experiment with two conditions that serve to validate the context scenarios as well as the naturalness and recognizability of the video sequences. Our results provide clear evidence that conversational expressions can be recognized surprisingly well from visual information alone. The MPI facial expression database will enable researchers from different research fields (including the perceptual and cognitive sciences, but also affective computing, as well as computer vision) to investigate the processing of a wider range of natural

  6. Unmasking Zorro: functional importance of the facial mask in the Masked Shrike (Lanius nubicus)

    OpenAIRE

    Reuven Yosef; Piotr Zduniak; Piotr Tryjanowski

    2012-01-01

    The facial mask is a prominent feature in the animal kingdom. We hypothesized that the facial mask of shrikes allows them to hunt into the sun, which accords them detection and surprise-attack capabilities. We conducted a field experiment to determine whether the mask facilitated foraging while facing into the sun. Male shrikes with white-painted masks hunted facing away from the sun more than birds with black-painted masks, which are the natural color, and more than individuals in the contro...

  7. Facial Identification in Observers with Colour-Grapheme Synaesthesia

    DEFF Research Database (Denmark)

    Sørensen, Thomas Alrik

    2013-01-01

    Synaesthesia between colours and graphemes is often reported as one of the most common forms of cross-modal perception [Colizoli et al, 2012, PLoS ONE, 7(6), e39799]. In this particular synaesthetic sub-type the perception of a letterform is followed by an additional experience of a colour quality....... Both colour [McKeefry and Zeki, 1997, Brain, 120(12), 2229–2242] and visual word forms [McCandliss et al, 2003, Trends in Cognitive Sciences, 7(7), 293–299] have previously been linked to the fusiform gyrus. As neighbouring functions, speculations of cross-wiring between the areas have been...... of Neuroscience, 17(11), 4302–4311], increased colour-word form representations in observers with colour-grapheme synaesthesia may affect facial identification in people with synaesthesia. This study investigates the ability to process facial features for identification in observers with colour...

  8. Facial EMG responses to dynamic emotional facial expressions in boys with disruptive behavior disorders

    NARCIS (Netherlands)

    Wied, de M.; Boxtel, van Anton; Zaalberg, R.; Goudena, P.P.; Matthys, W.

    2006-01-01

    Based on the assumption that facial mimicry is a key factor in emotional empathy, and clinical observations that children with disruptive behavior disorders (DBD) are weak empathizers, the present study explored whether DBD boys are less facially responsive to facial expressions of emotions than

  9. Case Report: A true median facial cleft (cranio-facial dysraphia ...

    African Journals Online (AJOL)

    Case Report: A true median facial cleft (cranio-facial dysraphia, a Tessier type 0) in Bingham University Teaching Hospital, Jos. ... The patient had multidisciplinary care by the obstetrician, neonatologist, anesthesiologist and the plastic surgery team, who scheduled a soft tissue repair of the upper lip defect, columella and ...

  10. Outcome of different facial nerve reconstruction techniques

    OpenAIRE

    Mohamed, Aboshanif; Omi, Eigo; Honda, Kohei; Suzuki, Shinsuke; Ishikawa, Kazuo

    2016-01-01

    Abstract Introduction: There is no technique of facial nerve reconstruction that guarantees facial function recovery up to grade III. Objective: To evaluate the efficacy and safety of different facial nerve reconstruction techniques. Methods: Facial nerve reconstruction was performed in 22 patients (facial nerve interpositional graft in 11 patients and hypoglossal-facial nerve transfer in another 11 patients). All patients had facial function House-Brackmann (HB) grade VI, either caused by...

  11. Single trial classification for the categories of perceived emotional facial expressions: an event-related fMRI study

    Science.gov (United States)

    Song, Sutao; Huang, Yuxia; Long, Zhiying; Zhang, Jiacai; Chen, Gongxiang; Wang, Shuqing

    2016-03-01

    Recently, several studies have successfully applied multivariate pattern analysis methods to predict the categories of emotions. These studies have mainly focused on self-experienced emotions, such as the emotional states elicited by music or movies. In fact, most of our social interactions involve the perception of emotional information from the expressions of other people, and recognizing the emotional facial expressions of other people in a short time is an important basic skill for humans. In this study, we aimed to determine the discriminability of perceived emotional facial expressions. In a rapid event-related fMRI design, subjects were instructed to classify four categories of facial expressions (happy, disgust, angry and neutral) by pressing different buttons, and each facial expression stimulus lasted for 2 s. All participants performed 5 fMRI runs. One multivariate pattern analysis method, the support vector machine, was trained to predict the categories of facial expressions. For feature selection, ninety masks defined from the anatomical automatic labeling (AAL) atlas were first generated and each was treated as the input of the classifier; then, the most stable AAL areas were selected according to prediction accuracies and comprised the final feature sets. Results showed that for the 6 pair-wise classification conditions, the accuracy, sensitivity and specificity were all above chance prediction, among which happy vs. neutral and angry vs. disgust achieved the lowest results. These results suggest that specific neural signatures of perceived emotional facial expressions may exist, and that happy vs. neutral and angry vs. disgust might be more similar in information representation in the brain.
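
    The mask-wise feature-selection-plus-SVM pipeline described above can be sketched with simulated data. This is a minimal illustration, not the authors' code: the trial counts, voxel counts, and the top-k selection rule stand in for the paper's "most stable AAL areas" criterion.

```python
# Sketch of pair-wise SVM decoding with per-ROI feature selection,
# using simulated patterns in place of real fMRI data (all sizes and
# the top-k rule are assumptions for illustration).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials_per_class = 40   # trials per expression category (assumed)
n_rois = 90               # AAL atlas regions
n_voxels_per_roi = 20     # voxels per region (assumed)

# Simulated trial-by-voxel patterns for one pair of categories,
# e.g. "happy" vs "disgust"; class 1 carries a weak mean shift.
X0 = rng.normal(0.0, 1.0, (n_trials_per_class, n_rois * n_voxels_per_roi))
X1 = rng.normal(0.3, 1.0, (n_trials_per_class, n_rois * n_voxels_per_roi))
X = np.vstack([X0, X1])
y = np.array([0] * n_trials_per_class + [1] * n_trials_per_class)

# Step 1: score each ROI mask separately with a linear SVM.
roi_scores = []
for r in range(n_rois):
    cols = slice(r * n_voxels_per_roi, (r + 1) * n_voxels_per_roi)
    acc = cross_val_score(SVC(kernel="linear"), X[:, cols], y, cv=5).mean()
    roi_scores.append(acc)

# Step 2: keep the best-scoring ROIs as the final feature set.
top_k = 5
best = np.argsort(roi_scores)[-top_k:]
cols = np.concatenate(
    [np.arange(r * n_voxels_per_roi, (r + 1) * n_voxels_per_roi) for r in best]
)
final_acc = cross_val_score(SVC(kernel="linear"), X[:, cols], y, cv=5).mean()
print(round(final_acc, 2))
```

    Note that in practice the ROI selection would be done on data held out from the final accuracy estimate, to avoid circularity.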

  12. Delineation and Diagnostic Criteria of Oral-Facial-Digital Syndrome Type VI

    NARCIS (Netherlands)

    Poretti, Andrea; Vitiello, Giuseppina; Hennekam, Raoul C. M.; Arrigoni, Filippo; Bertini, Enrico; Borgatti, Renato; Brancati, Francesco; D'Arrigo, Stefano; Faravelli, Francesca; Giordano, Lucio; Huisman, Thierry A. G. M.; Iannicelli, Miriam; Kluger, Gerhard; Kyllerman, Marten; Landgren, Magnus; Lees, Melissa M.; Pinelli, Lorenzo; Romaniello, Romina; Scheer, Ianina; Schwarz, Christoph E.; Spiegel, Ronen; Tibussek, Daniel; Valente, Enza Maria; Boltshauser, Eugen

    2012-01-01

    Oral-Facial-Digital Syndrome type VI (OFD VI) represents a rare phenotypic subtype of Joubert syndrome and related disorders (JSRD). In the original report, polydactyly, oral findings, intellectual disability, and absence of the cerebellar vermis at post-mortem characterized the syndrome.

  13. Microbial biofilms on silicone facial prostheses

    NARCIS (Netherlands)

    Ariani, Nina

    2015-01-01

    Facial disfigurements can result from oncologic surgery, trauma and congenital deformities. These disfigurements can be rehabilitated with facial prostheses. Facial prostheses are usually made of silicones. A problem of facial prostheses is that microorganisms can colonize their surface. It is hard

  14. The activation of visual memory for facial identity is task-dependent: evidence from human electrophysiology.

    Science.gov (United States)

    Zimmermann, Friederike G S; Eimer, Martin

    2014-05-01

    The question of whether the recognition of individual faces is mandatory or task-dependent is still controversial. We employed the N250r component of the event-related potential as a marker of the activation of representations of facial identity in visual memory, in order to find out whether identity-related information from faces is encoded and maintained even when facial identity is task-irrelevant. Pairs of faces appeared in rapid succession, and the N250r was measured in response to repetitions of the same individual face, as compared to presentations of two different faces. In Experiment 1, an N250r was present in an identity matching task where identity information was relevant, but not when participants had to detect infrequent targets (inverted faces), and facial identity was task-irrelevant. This was the case not only for unfamiliar faces, but also for famous faces, suggesting that even famous face recognition is not as automatic as is often assumed. In Experiment 2, an N250r was triggered by repetitions of non-famous faces in a task where participants had to match the view of each face pair, and facial identity had to be ignored. This shows that when facial features have to be maintained in visual memory for a subsequent comparison, identity-related information is retained as well, even when it is irrelevant. Our results suggest that individual face recognition is neither fully mandatory nor completely task-dependent. Facial identity is encoded and maintained in tasks that involve visual memory for individual faces, regardless of the to-be-remembered feature. In tasks without this memory component, irrelevant visual identity information can be completely ignored. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. Comparison of self-reported signs of facial ageing among Caucasian women in Australia versus those in the USA, the UK and Canada.

    Science.gov (United States)

    Goodman, Greg J; Armour, Katherine S; Kolodziejczyk, Julia K; Santangelo, Samantha; Gallagher, Conor J

    2017-04-10

    Australians are more exposed to higher solar UV radiation levels that accelerate signs of facial ageing than individuals who live in temperate northern countries. The severity and course of self-reported facial ageing among fair-skinned Australian women were compared with those living in Canada, the UK and the USA. Women voluntarily recruited into a proprietary opt-in survey panel completed an internet-based questionnaire about their facial ageing. Participants aged 18-75 years compared their features against photonumeric rating scales depicting degrees of severity for forehead, crow's feet and glabellar lines, tear troughs, midface volume loss, nasolabial folds, oral commissures and perioral lines. Data from Caucasian and Asian women with Fitzpatrick skin types I-III were analysed by linear regression for the impact of country (Australia versus Canada, the UK and the USA) on ageing severity for each feature, after controlling for age and race. Among 1472 women, Australians reported higher rates of change and significantly more severe facial lines (P ≤ 0.040) and volume-related features like tear troughs and nasolabial folds (P ≤ 0.03) than women from the other countries. More Australians also reported moderate to severe ageing for all features one to two decades earlier than US women. Australian women reported more severe signs of facial ageing sooner than other women and volume-related changes up to 20 years earlier than those in the USA, which may suggest that environmental factors also impact volume-related ageing. These findings have implications for managing their facial aesthetic concerns. © 2017 The Authors. Australasian Journal of Dermatology published by John Wiley and Sons Australia, Ltd on behalf of The Australasian College of Dermatologists.
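
    The country-effect analysis described above (ageing severity regressed on country after controlling for age and race) can be sketched with simulated data. All variable names, sample values, and effect sizes below are illustrative assumptions, not the study's dataset.

```python
# Hedged sketch: OLS regression of a facial-ageing severity score on a
# country indicator, controlling for age and race (simulated data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1472  # same overall sample size as the survey
df = pd.DataFrame({
    "age": rng.uniform(18, 75, n),
    "australia": rng.integers(0, 2, n),  # 1 = Australian respondent
    "asian": rng.integers(0, 2, n),      # 1 = Asian, 0 = Caucasian
})
# Simulated severity: rises with age, plus an assumed Australia effect.
df["severity"] = (0.05 * df["age"] + 0.4 * df["australia"]
                  + rng.normal(0, 1, n))

# Country coefficient = extra severity for Australians at a given
# age and race, mirroring the "after controlling for" analysis.
model = smf.ols("severity ~ australia + age + asian", data=df).fit()
print(model.params["australia"].round(2))
```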

  16. The importance of internal facial features in learning new faces.

    Science.gov (United States)

    Longmore, Christopher A; Liu, Chang Hong; Young, Andrew W

    2015-01-01

    For familiar faces, the internal features (eyes, nose, and mouth) are known to be differentially salient for recognition compared to external features such as hairstyle. Two experiments are reported that investigate how this internal feature advantage accrues as a face becomes familiar. In Experiment 1, we tested the contribution of internal and external features to the ability to generalize from a single studied photograph to different views of the same face. A recognition advantage for the internal features over the external features was found after a change of viewpoint, whereas there was no internal feature advantage when the same image was used at study and test. In Experiment 2, we removed the most salient external feature (hairstyle) from studied photographs and looked at how this affected generalization to a novel viewpoint. Removing the hair from images of the face assisted generalization to novel viewpoints, and this was especially the case when photographs showing more than one viewpoint were studied. The results suggest that the internal features play an important role in the generalization between different images of an individual's face by enabling the viewer to detect the common identity-diagnostic elements across non-identical instances of the face.

  17. [Facial tics and spasms].

    Science.gov (United States)

    Potgieser, Adriaan R E; van Dijk, J Marc C; Elting, Jan Willem J; de Koning-Tijssen, Marina A J

    2014-01-01

    Facial tics and spasms are socially incapacitating, but effective treatment is often available. The clinical picture is sufficient for distinguishing between the different diseases that cause this affliction. We describe three cases of patients with facial tics or spasms: one case of tics, which are familiar to many physicians; one case of blepharospasm; and one case of hemifacial spasm. We discuss the differential diagnosis and the treatment possibilities for facial tics and spasms. Early diagnosis and treatment are important because of the associated social incapacitation. Botulinum toxin should be considered as a treatment option for facial tics, and a curative neurosurgical intervention should be considered for hemifacial spasm.

  18. Facial pain syndrome (Síndrome de dolor facial)

    Directory of Open Access Journals (Sweden)

    DR. F. Eugenio Tenhamm

    2014-07-01

    Full Text Available Facial pain constitutes a painful syndrome of the craniofacial structures under which a large number of diseases are grouped. The best way to approach the differential diagnosis of the entities that cause facial pain is to use an algorithm that identifies four main pain syndromes: facial neuralgias, facial pain with neurological symptoms and signs, trigeminal autonomic cephalalgias, and facial pain without neurological symptoms or signs. A detailed clinical evaluation of patients allows an aetiological approximation, which guides the diagnostic workup and makes it possible to offer specific therapy in the majority of cases

  19. Naumoff short-rib polydactyly syndrome compounded with Mohr oral-facial-digital syndrome

    Energy Technology Data Exchange (ETDEWEB)

    Young, L.W.; Wilhelm, L.L. [Loma Linda Univ., CA (United States). Medical Center; Zuppan, C.W. [Div. of Pediatric Pathology, Loma Linda University Medical Center, CA (United States); Clark, R. [Div. of Medical Genetics, Loma Linda University Medical Center, CA (United States)

    2001-01-01

    A stillborn baby boy had findings of severe constitutional dwarfism with short limbs, short ribs, and polydactyly that were consistent with Naumoff (type III) short-rib polydactyly syndrome. He also had additional congenital anomalies, including cleft palate, notching of the upper lip, and a small tongue with accessory sublingual tissue. These oral and pharyngeal anomalies were consistent with Mohr (type II) oral-facial-digital syndrome. We suggest the stillborn infant represented a compound of Naumoff short-rib polydactyly syndrome (SRPS-III) and Mohr oral-facial-digital syndrome (OFDS-II). (orig.)

  20. Outcome of different facial nerve reconstruction techniques.

    Science.gov (United States)

    Mohamed, Aboshanif; Omi, Eigo; Honda, Kohei; Suzuki, Shinsuke; Ishikawa, Kazuo

    There is no technique of facial nerve reconstruction that guarantees facial function recovery up to grade III. To evaluate the efficacy and safety of different facial nerve reconstruction techniques. Facial nerve reconstruction was performed in 22 patients (facial nerve interpositional graft in 11 patients and hypoglossal-facial nerve transfer in another 11 patients). All patients had facial function House-Brackmann (HB) grade VI, either caused by trauma or after resection of a tumor. All patients were submitted to a primary nerve reconstruction except for 7 patients, in whom late reconstruction was performed two weeks to four months after the initial surgery. The follow-up period was at least two years. For the facial nerve interpositional graft technique, we achieved facial function HB grade III in eight patients and grade IV in three patients. Synkinesis was found in eight patients, and facial contracture with synkinesis was found in two patients. With regard to hypoglossal-facial nerve transfer using different modifications, we achieved facial function HB grade III in nine patients and grade IV in two patients. Facial contracture, synkinesis and tongue atrophy were found in three patients, and synkinesis was found in five patients. However, those who had primary direct facial-hypoglossal end-to-side anastomosis showed the best result without any neurological deficit. Among various reanimation techniques, when indicated, direct end-to-side facial-hypoglossal anastomosis through epineural suturing is the most effective technique with excellent outcomes for facial reanimation and preservation of tongue movement, particularly when performed as a primary technique. Copyright © 2016 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  1. Intelligent Facial Recognition Systems: Technology advancements for security applications

    Energy Technology Data Exchange (ETDEWEB)

    Beer, C.L.

    1993-07-01

    Insider problems such as theft and sabotage can occur within the security and surveillance realm of operations when unauthorized people obtain access to sensitive areas. A possible solution to these problems is a means to identify individuals (not just credentials or badges) in a given sensitive area and provide full time personnel accountability. One approach desirable at Department of Energy facilities for access control and/or personnel identification is an Intelligent Facial Recognition System (IFRS) that is non-invasive to personnel. Automatic facial recognition does not require the active participation of the enrolled subjects, unlike most other biological measurement (biometric) systems (e.g., fingerprint, hand geometry, or eye retinal scan systems). It is this feature that makes an IFRS attractive for applications other than access control such as emergency evacuation verification, screening, and personnel tracking. This paper discusses current technology that shows promising results for DOE and other security applications. A survey of research and development in facial recognition identified several companies and universities that were interested and/or involved in the area. A few advanced prototype systems were also identified. Sandia National Laboratories is currently evaluating facial recognition systems that are in the advanced prototype stage. The initial application for the evaluation is access control in a controlled environment with a constant background and with cooperative subjects. Further evaluations will be conducted in a less controlled environment, which may include a cluttered background and subjects that are not looking towards the camera. The outcome of the evaluations will help identify areas of facial recognition systems that need further development and will help to determine the effectiveness of the current systems for security applications.

  2. Facial Transplantation Surgery Introduction

    OpenAIRE

    Eun, Seok-Chan

    2015-01-01

    Severely disfiguring facial injuries can have a devastating impact on the patient's quality of life. During the past decade, vascularized facial allotransplantation has progressed from an experimental possibility to a clinical reality in the fields of disease, trauma, and congenital malformations. This technique may now be considered a viable option for repairing complex craniofacial defects for which the results of autologous reconstruction remain suboptimal. Vascularized facial allotranspla...

  3. The bony crescent sign - a new sign of facial nerve schwannoma

    International Nuclear Information System (INIS)

    Watts, A.; Fagan, P.

    1992-01-01

    Schwannomas are relatively uncommon intracranial tumours. They most commonly involve the acoustic nerve followed in frequency by the trigeminal nerve. Other cranial nerves are rarely involved. Facial nerve schwannomas occurring within the petrous temporal bone are very rare. Their diagnosis may be missed prospectively even when appropriate computerized tomography (CT) scans are performed. Even in retrospect the site of abnormality may be difficult to identify, especially if there is an associated middle ear mass such as a cholesteatoma. In the 4 cases presented the facial nerve schwannoma was seen on high resolution CT as a soft tissue mass bounded anteriorly by a thin rim of bone. This bony crescent sign is a previously undescribed feature of facial nerve schwannoma which appears to be strongly indicative of the presence of this tumour. Recognition of this sign makes these tumours arising in the region of the geniculate ganglion easy to diagnose prospectively. 12 refs., 6 figs

  4. Contemporary Koreans’ Perceptions of Facial Beauty

    Directory of Open Access Journals (Sweden)

    Seung Chul Rhee

    2017-09-01

    Full Text Available Background This article aims to investigate current perceptions of beauty of the general public and physicians without a specialization in plastic surgery performing aesthetic procedures. Methods A cross-sectional and interviewing questionnaire was administered to 290 people in Seoul, South Korea in September 2015. The questionnaire addressed three issues: general attitudes about plastic surgery (Q1), perception of and preferences regarding Korean female celebrities’ facial attractiveness (Q2), and the relative influence of each facial aesthetic subunit on overall facial attractiveness. The survey’s results were gathered by a professional research agency and classified according to a respondent’s gender, age, and job type (95%±5.75% confidence interval). Statistical analysis was performed using SPSS ver. 10.1, calculating one-way analysis of variance with post hoc analysis and Tukey’s t-test. Results Among the respondents, 38.3% were in favor of aesthetic plastic surgery. The most common source of plastic surgery information was the internet (50.0%). The most powerful factor influencing hospital or clinic selection was the postoperative surgical results of acquaintances (74.9%). We created a composite face of an attractive Korean female, representing the current facial configuration considered appealing to Koreans. Beauty perceptions differed to some degree based on gender and generational differences. We found that there were certain differences in beauty perceptions between general physicians who perform aesthetic procedures and the general public. Conclusions Our study results provide aesthetic plastic surgeons with detailed information about contemporary Korean people’s attitudes toward and perceptions of plastic surgery and the specific characteristics of female Korean faces currently considered attractive, plus trends in these perceptions, which should inform plastic surgeons within their specialized fields.
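
    The statistics named above (one-way analysis of variance followed by a Tukey post hoc test) can be sketched in Python on simulated ratings; the group labels and values below are assumptions for illustration, not the survey data.

```python
# One-way ANOVA plus Tukey HSD post hoc comparisons on simulated
# attractiveness ratings from three respondent groups (assumed labels).
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(2)
ratings = {
    "20s": rng.normal(7.0, 1.0, 100),
    "40s": rng.normal(6.5, 1.0, 100),
    "60s": rng.normal(6.4, 1.0, 100),
}

# Omnibus test: do mean ratings differ across groups at all?
f_stat, p_val = f_oneway(*ratings.values())
print(f"F = {f_stat:.2f}, p = {p_val:.4f}")

# Post hoc: which specific pairs of groups differ?
values = np.concatenate(list(ratings.values()))
labels = np.repeat(list(ratings.keys()), [len(v) for v in ratings.values()])
print(pairwise_tukeyhsd(values, labels))
```

    The Tukey step only makes sense after a significant omnibus F; it controls the family-wise error rate across the pairwise group comparisons.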

  5. Outcome of different facial nerve reconstruction techniques

    Directory of Open Access Journals (Sweden)

    Aboshanif Mohamed

    Full Text Available Abstract Introduction: There is no technique of facial nerve reconstruction that guarantees facial function recovery up to grade III. Objective: To evaluate the efficacy and safety of different facial nerve reconstruction techniques. Methods: Facial nerve reconstruction was performed in 22 patients (facial nerve interpositional graft in 11 patients and hypoglossal-facial nerve transfer in another 11 patients. All patients had facial function House-Brackmann (HB grade VI, either caused by trauma or after resection of a tumor. All patients were submitted to a primary nerve reconstruction except 7 patients, where late reconstruction was performed two weeks to four months after the initial surgery. The follow-up period was at least two years. Results: For facial nerve interpositional graft technique, we achieved facial function HB grade III in eight patients and grade IV in three patients. Synkinesis was found in eight patients, and facial contracture with synkinesis was found in two patients. In regards to hypoglossal-facial nerve transfer using different modifications, we achieved facial function HB grade III in nine patients and grade IV in two patients. Facial contracture, synkinesis and tongue atrophy were found in three patients, and synkinesis was found in five patients. However, those who had primary direct facial-hypoglossal end-to-side anastomosis showed the best result without any neurological deficit. Conclusion: Among various reanimation techniques, when indicated, direct end-to-side facial-hypoglossal anastomosis through epineural suturing is the most effective technique with excellent outcomes for facial reanimation and preservation of tongue movement, particularly when performed as a primary technique.

  6. Recognizing Facial Expressions Automatically from Video

    Science.gov (United States)

    Shan, Caifeng; Braspenning, Ralph

    Facial expressions, resulting from movements of the facial muscles, are the face changes in response to a person's internal emotional states, intentions, or social communications. There is a considerable history associated with the study of facial expressions. Darwin [22] was the first to describe in detail the specific facial expressions associated with emotions in animals and humans, arguing that all mammals show emotions reliably in their faces. Since then, facial expression analysis has been an area of great research interest for behavioral scientists [27]. Psychological studies [48, 3] suggest that facial expressions, as the main mode of nonverbal communication, play a vital role in human face-to-face communication. For illustration, we show some examples of facial expressions in Fig. 1.

  7. Hirschsprung disease, microcephaly, mental retardation, and characteristic facial features: delineation of a new syndrome and identification of a locus at chromosome 2q22-q23.

    Science.gov (United States)

    Mowat, D R; Croaker, G D; Cass, D T; Kerr, B A; Chaitow, J; Adès, L C; Chia, N L; Wilson, M J

    1998-01-01

    We have identified six children with a distinctive facial phenotype in association with mental retardation (MR), microcephaly, and short stature, four of whom presented with Hirschsprung (HSCR) disease in the neonatal period. HSCR was diagnosed in a further child at the age of 3 years after investigation for severe chronic constipation and another child, identified as sharing the same facial phenotype, had chronic constipation, but did not have HSCR. One of our patients has an interstitial deletion of chromosome 2, del(2)(q21q23). These children strongly resemble the patient reported by Lurie et al with HSCR and dysmorphic features associated with del(2)(q22q23). All patients have been isolated cases, suggesting a contiguous gene syndrome or a dominant single gene disorder involving a locus for HSCR located at 2q22-q23. Review of published reports suggests that there is significant phenotypic and genetic heterogeneity within the group of patients with HSCR, MR, and microcephaly. In particular, our patients appear to have a separate disorder from Goldberg-Shprintzen syndrome, for which autosomal recessive inheritance has been proposed because of sib recurrence and consanguinity in some families. PMID:9719364

  8. "Double ogee" facial rejuvenation (Rejuvenecimiento facial en "doble sigma")

    Directory of Open Access Journals (Sweden)

    O. M. Ramírez

    2007-03-01

    Full Text Available The subperiosteal techniques described by Tessier revolutionized the treatment of facial ageing, recommending this approach to treat the early signs of ageing in young and middle-aged patients. Psillakis refined the technique and Ramírez described a safer and more effective method of subperiosteal lifting, demonstrating that the subperiosteal facial rejuvenation technique can be applied across the broad spectrum of facial ageing. The introduction of the endoscope in the treatment of facial ageing has opened a new era in aesthetic surgery. Today, endoscopically assisted subperiosteal dissection of the upper, middle and lower thirds of the face provides an effective means of repositioning the soft tissues, with the possibility of augmenting the craniofacial skeleton, less postoperative facial oedema, minimal injury to the branches of the facial nerve, and better treatment of the cheeks. This approach, developed and refined over the last decade, is known as the "double ogee rhytidectomy". The double ogee arch, well known in architecture since antiquity, is characterized by a harmonious line of convex curve followed by concave curve. When a young face is observed from an oblique angle, it presents a characteristic distribution of tissues, previously described for the midface as an architectural ogee arch or an "S"-shaped curve. However, on closer examination of the young face in the three-quarter view, the full profile reveals a "double ogee arch" or a double "S". To see this reciprocal, multicurvilinear line of beauty, we must view the face obliquely so that both medial canthi can be seen. In this position, the young face presents a characteristic convexity of the tail of the eyebrow that flows into the concavity of the lateral orbital wall, thus forming the first arch (superior

  9. Facial transplantation for massive traumatic injuries.

    Science.gov (United States)

    Alam, Daniel S; Chi, John J

    2013-10-01

    This article describes the challenges of facial reconstruction and the role of facial transplantation in certain facial defects and injuries. This information is of value to surgeons assessing facial injuries with massive soft tissue loss or injury. Copyright © 2013 Elsevier Inc. All rights reserved.

  10. Facial nerve conduction after sclerotherapy in children with facial lymphatic malformations: report of two cases.

    Science.gov (United States)

    Lin, Pei-Jung; Guo, Yuh-Cherng; Lin, Jan-You; Chang, Yu-Tang

    2007-04-01

Surgical excision is thought to be the standard treatment of choice for lymphatic malformations. However, when the lesions are limited to the face, surgical scarring and facial nerve injury may impair cosmesis and facial expression. Sclerotherapy, an injection of a sclerosing agent directly through the skin into a lesion, is an alternative method. By evaluating facial nerve conduction, we observed the long-term effect of facial lymphatic malformations after intralesional injection of OK-432 and correlated the findings with anatomic outcomes. A 12-year-old boy with a lesion over the right preauricular area adjacent to the main trunk of the facial nerve and a 5-year-old boy with a lesion in the left cheek involving the buccinator muscle were enrolled. Follow-up data covering more than one year, including clinical appearance, computed tomography (CT) scans and facial nerve evaluation, were collected. The facial nerve conduction study was normal in both cases. Blink reflex in both children revealed normal results as well. Complete resolution was noted on outward appearance and CT scan. The neurophysiologic data were compatible with good anatomic and functional outcomes. Our report suggests that the inflammatory reaction of OK-432 did not interfere with adjacent facial nerve conduction.

  11. Keloid Skin Flap Retention and Resurfacing in Facial Keloid Treatment.

    Science.gov (United States)

    Liu, Shu; Liang, Weizhong; Song, Kexin; Wang, Youbin

    2018-02-01

    Facial keloids commonly occur in young patients. Multiple keloid masses often converge into a large lesion on the face, representing a significant obstacle to keloid mass excision and reconstruction. We describe a new surgical method that excises the keloid mass and resurfaces the wound by saving the keloid skin as a skin flap during facial keloid treatment. Forty-five patients with facial keloids were treated in our department between January 2013 and January 2016. Multiple incisions were made along the facial esthetic line on the keloid mass. The keloid skin was dissected and elevated as a skin flap with one or two pedicles. The scar tissue in the keloid was then removed through the incision. The wound was covered with the preserved keloid skin flap and closed without tension. Radiotherapy and hyperbaric oxygen were applied after surgery. Patients underwent follow-up examinations 6 and 12 months after surgery. Of the 45 total patients, 32 patients were cured and seven patients were partially cured. The efficacy rate was 88.9%, and 38 patients (84.4%) were satisfied with the esthetic result. We describe an efficacious and esthetically satisfactory surgical method for managing facial keloids by preserving the keloid skin as a skin flap. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .

  12. Facial Nerve Paralysis due to a Pleomorphic Adenoma with the Imaging Characteristics of a Facial Nerve Schwannoma.

    Science.gov (United States)

    Nader, Marc-Elie; Bell, Diana; Sturgis, Erich M; Ginsberg, Lawrence E; Gidley, Paul W

    2014-08-01

    Background Facial nerve paralysis in a patient with a salivary gland mass usually denotes malignancy. However, facial paralysis can also be caused by benign salivary gland tumors. Methods We present a case of facial nerve paralysis due to a benign salivary gland tumor that had the imaging characteristics of an intraparotid facial nerve schwannoma. Results The patient presented to our clinic 4 years after the onset of facial nerve paralysis initially diagnosed as Bell palsy. Computed tomography demonstrated filling and erosion of the stylomastoid foramen with a mass on the facial nerve. Postoperative histopathology showed the presence of a pleomorphic adenoma. Facial paralysis was thought to be caused by extrinsic nerve compression. Conclusions This case illustrates the difficulty of accurate preoperative diagnosis of a parotid gland mass and reinforces the concept that facial nerve paralysis in the context of salivary gland tumors may not always indicate malignancy.

  13. Advances in facial reanimation.

    Science.gov (United States)

    Tate, James R; Tollefson, Travis T

    2006-08-01

    Facial paralysis often has a significant emotional impact on patients. Along with the myriad of new surgical techniques in managing facial paralysis comes the challenge of selecting the most effective procedure for the patient. This review delineates common surgical techniques and reviews state-of-the-art techniques. The options for dynamic reanimation of the paralyzed face must be examined in the context of several patient factors, including age, overall health, and patient desires. The best functional results are obtained with direct facial nerve anastomosis and interpositional nerve grafts. In long-standing facial paralysis, temporalis muscle transfer gives a dependable and quick result. Microvascular free tissue transfer is a reliable technique with reanimation potential whose results continue to improve as microsurgical expertise increases. Postoperative results can be improved with ancillary soft tissue procedures, as well as botulinum toxin. The paper provides an overview of recent advances in facial reanimation, including preoperative assessment, surgical reconstruction options, and postoperative management.

  14. BMI and WHR Are Reflected in Female Facial Shape and Texture: A Geometric Morphometric Image Analysis.

    Directory of Open Access Journals (Sweden)

    Christine Mayer

Full Text Available Facial markers of body composition are frequently studied in evolutionary psychology and are important in computational and forensic face recognition. We assessed the association of body mass index (BMI) and waist-to-hip ratio (WHR) with facial shape and texture (color pattern) in a sample of young Middle European women by a combination of geometric morphometrics and image analysis. Faces of women with high BMI had a wider and rounder facial outline relative to the size of the eyes and lips, and relatively lower eyebrows. Furthermore, women with high BMI had a brighter and more reddish skin color than women with lower BMI. The same facial features were associated with WHR, even though BMI and WHR were only moderately correlated. Yet BMI was better predicted than WHR from facial attributes. After leave-one-out cross-validation, we were able to predict 25% of variation in BMI and 10% of variation in WHR from facial shape. Facial texture predicted only about 3-10% of variation in BMI and WHR. This indicates that facial shape primarily reflects total fat proportion, rather than the distribution of fat within the body. The association of reddish facial texture in high-BMI women may be mediated by increased blood pressure and superficial blood flow as well as diet. Our study elucidates how geometric morphometric image analysis serves to quantify the effect of biological factors such as BMI and WHR on facial shape and color, which in turn contributes to social perception.
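The leave-one-out prediction scheme described above can be sketched as follows. This is an illustration on synthetic data, not the study's actual morphometric pipeline: the "shape" features and BMI values are randomly generated, and the model is a plain linear regression.

```python
# Leave-one-out cross-validated prediction of BMI from shape features.
# All data here are synthetic and for illustration only.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
n_faces, n_features = 60, 5               # e.g. 60 faces, 5 shape components
shape = rng.normal(size=(n_faces, n_features))
# Synthetic BMI: partially determined by shape, plus noise
bmi = 22 + shape @ rng.normal(size=n_features) + rng.normal(scale=2.0, size=n_faces)

# Each face is predicted by a model trained on the remaining n-1 faces
pred = cross_val_predict(LinearRegression(), shape, bmi, cv=LeaveOneOut())
r2 = np.corrcoef(pred, bmi)[0, 1] ** 2    # proportion of BMI variance predicted
print(f"LOO cross-validated R^2: {r2:.2f}")
```

The squared correlation between held-out predictions and observed values is the "percent of variation predicted" that the abstract reports.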

  15. Orangutans modify facial displays depending on recipient attention

    Directory of Open Access Journals (Sweden)

    Bridget M. Waller

    2015-03-01

Full Text Available Primate facial expressions are widely accepted as underpinned by reflexive emotional processes and not under voluntary control. In contrast, other modes of primate communication, especially gestures, are widely accepted as underpinned by intentional, goal-driven cognitive processes. One reason for this distinction is that production of primate gestures is often sensitive to the attentional state of the recipient, a phenomenon used as one of the key behavioural criteria for identifying intentionality in signal production. The reasoning is that modifying/producing a signal when a potential recipient is looking could demonstrate that the sender intends to communicate with them. Here, we show that the production of a primate facial expression can also be sensitive to the attention of the play partner. Using the orangutan (Pongo pygmaeus) Facial Action Coding System (OrangFACS), we demonstrate that facial movements are more intense and more complex when recipient attention is directed towards the sender. Therefore, production of the playface is not an automated response to play (or simply a play behaviour itself) and is instead produced flexibly depending on the context. If sensitivity to attentional stance is a good indicator of intentionality, we must also conclude that the orangutan playface is intentionally produced. However, a number of alternative, lower level interpretations for flexible production of signals in response to the attention of another are discussed. As intentionality is a key feature of human language, claims of intentional communication in related primate species are powerful drivers in language evolution debates, and thus caution in identifying intentionality is important.

  16. The role of encoding and attention in facial emotion memory: an EEG investigation.

    Science.gov (United States)

    Brenner, Colleen A; Rumak, Samuel P; Burns, Amy M N; Kieffaber, Paul D

    2014-09-01

Facial expressions are encoded via sensory mechanisms, but meaning extraction and salience of these expressions involve cognitive functions. We investigated the time course of sensory encoding and subsequent maintenance in memory via EEG. Twenty-nine healthy participants completed a facial emotion delayed match-to-sample task. P100, N170 and N250 ERPs were measured in response to the first stimulus, and evoked theta power (4-7 Hz) was measured during the delay interval. Negative facial expressions produced larger N170 amplitudes and greater theta power early in the delay. N170 amplitude correlated with theta power; however, larger N170 amplitude coupled with greater theta power only predicted behavioural performance for one emotion condition (very happy) out of six tested (see Supplemental Data). These findings indicate that the N170 ERP may be sensitive to emotional facial expressions when task demands require encoding and retention of this information. Furthermore, sustained theta activity may represent continued attentional processing that supports short-term memory, especially of negative facial stimuli. Further study is needed to investigate the potential influence of these measures, and their interaction, on behavioural performance. Crown Copyright © 2014. Published by Elsevier B.V. All rights reserved.
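Band-limited power of the kind reported above (theta, 4-7 Hz, during a delay interval) is conventionally estimated from the power spectral density. A minimal sketch on a synthetic single-channel signal, with an assumed sampling rate; this is not the study's analysis pipeline:

```python
# Estimate theta-band (4-7 Hz) power from a synthetic EEG trace via Welch's method.
import numpy as np
from scipy.signal import welch

fs = 250                                   # Hz, assumed sampling rate
t = np.arange(0, 2.0, 1 / fs)              # a 2 s delay interval
# Synthetic EEG: a 6 Hz theta component buried in noise
eeg = 4 * np.sin(2 * np.pi * 6 * t) + np.random.default_rng(1).normal(size=t.size)

freqs, psd = welch(eeg, fs=fs, nperseg=fs)  # 1 Hz frequency resolution
theta = (freqs >= 4) & (freqs <= 7)
theta_power = np.trapz(psd[theta], freqs[theta])  # integrate PSD over the band
print(f"theta-band power: {theta_power:.2f}")
```

In practice this is computed per trial and channel, and "evoked" power additionally requires averaging over trials before the spectral estimate.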

  17. Modelling the perceptual similarity of facial expressions from image statistics and neural responses.

    Science.gov (United States)

    Sormaz, Mladen; Watson, David M; Smith, William A P; Young, Andrew W; Andrews, Timothy J

    2016-04-01

    The ability to perceive facial expressions of emotion is essential for effective social communication. We investigated how the perception of facial expression emerges from the image properties that convey this important social signal, and how neural responses in face-selective brain regions might track these properties. To do this, we measured the perceptual similarity between expressions of basic emotions, and investigated how this is reflected in image measures and in the neural response of different face-selective regions. We show that the perceptual similarity of different facial expressions (fear, anger, disgust, sadness, happiness) can be predicted by both surface and feature shape information in the image. Using block design fMRI, we found that the perceptual similarity of expressions could also be predicted from the patterns of neural response in the face-selective posterior superior temporal sulcus (STS), but not in the fusiform face area (FFA). These results show that the perception of facial expression is dependent on the shape and surface properties of the image and on the activity of specific face-selective regions. Copyright © 2016 Elsevier Inc. All rights reserved.
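The logic of predicting perceptual similarity from patterns of neural response is that of representational similarity analysis: build one similarity matrix over the five expressions from neural patterns, another from perceptual judgements, and correlate their off-diagonal entries. A sketch with synthetic data (the patterns and judgements here are random placeholders, not the study's measurements):

```python
# Representational-similarity sketch: correlate neural pattern similarity
# with perceptual similarity across five facial expressions. Synthetic data.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_expr = 5                                 # fear, anger, disgust, sadness, happiness
patterns = rng.normal(size=(n_expr, 50))   # e.g. response patterns per expression
neural_sim = np.corrcoef(patterns)         # 5x5 pattern-similarity matrix

perceptual_sim = rng.uniform(size=(n_expr, n_expr))
perceptual_sim = (perceptual_sim + perceptual_sim.T) / 2   # symmetrize

iu = np.triu_indices(n_expr, k=1)          # unique off-diagonal pairs only
rho, p = spearmanr(neural_sim[iu], perceptual_sim[iu])
print(f"rank correlation: {rho:.2f}")
```

A reliable positive correlation in a region (as found for posterior STS but not FFA) indicates that its response geometry tracks the perceptual geometry.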

  18. Photometric facial analysis of the Igbo Nigerian adult male

    Science.gov (United States)

    Ukoha, Ukoha Ukoha; Udemezue, Onochie Okwudili; Oranusi, Chidi Kingsley; Asomugha, Azuoma Lasbrey; Dimkpa, Uchechukwu; Nzeukwu, Lynda Chinenye

    2012-01-01

Background: A carefully performed facial analysis can serve as a strong foundation for successful facial reconstructive and plastic surgeries, rhinoplasty or orthodontics. Aim: The purpose of this study is to determine the facial features and qualities of the Igbo Nigerian adult male using photometry. Materials and Methods: One hundred and twenty subjects aged between 18 and 28 years were studied at the Anambra State University, Uli, Nigeria. The frontal and right lateral view photographs of their faces were taken and traced out on tracing papers. On these, two vertical distances, nasion to subnasale and subnasale to menton, and four angles, nasofrontal (NF), nasofacial, nasomental (NM) and mentocervical, were measured. Results: The result showed that the Igbo Nigerian adult male had a middle face that was shorter than the lower one (41.76% vs. 58.24%), a moderate glabella (NF = 133.97°), a projected nose (nasofacial = 38.68°) and a less prominent chin (NM = 125.87°). Conclusion: This study is very important in medical practice as it can be used to compare the pre- and post-operative results of plastic surgery and other related surgeries of the face. PMID:23661886
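Angular measures of this kind reduce to the angle at a landmark between two rays to neighbouring landmarks. A small sketch with made-up profile coordinates; the landmark positions below are illustrative, not taken from the study:

```python
# Compute a facial angle from three 2D landmarks, e.g. the nasofrontal
# angle at the nasion between the glabella and the nasal dorsum.
import math

def angle_at(vertex, p1, p2):
    """Angle in degrees at `vertex` between rays vertex->p1 and vertex->p2."""
    v1 = (p1[0] - vertex[0], p1[1] - vertex[1])
    v2 = (p2[0] - vertex[0], p2[1] - vertex[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

# Hypothetical profile landmarks (x, y) in image coordinates
glabella, nasion, dorsum = (10.0, 100.0), (14.0, 88.0), (22.0, 70.0)
print(f"nasofrontal angle: {angle_at(nasion, glabella, dorsum):.1f} deg")
```

The two vertical-distance measures (nasion-subnasale, subnasale-menton) are likewise simple Euclidean distances between traced landmarks, expressed as percentages of their sum.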

  19. Virtual facial expressions of emotions: An initial concomitant and construct validity study.

    Directory of Open Access Journals (Sweden)

    Christian eJoyal

    2014-09-01

Full Text Available Abstract. Background. Facial expressions of emotions represent classic stimuli for the study of social cognition. Developing virtual dynamic facial expressions of emotions, however, would open up possibilities for both fundamental and clinical research. For instance, virtual faces allow real-time human-computer feedback between physiological measures and the virtual agent. Objectives. The goal of this study was to initially assess the concomitant and construct validity of a newly developed set of virtual faces expressing six fundamental emotions (happiness, surprise, anger, sadness, fear, or disgust). Recognition rates, facial electromyography (zygomatic major and corrugator supercilii muscles), and regional gaze fixation latencies (eye and mouth regions) were compared in 41 adult volunteers (20 ♂, 21 ♀) during the presentation of video clips depicting real vs. virtual adults expressing emotions. Results. Emotions expressed by each set of stimuli were similarly recognized, both by men and women. Accordingly, both sets of stimuli elicited similar activation of facial muscles and similar ocular fixation times in eye regions from male and female participants. Conclusion. Further validation studies can be performed with these virtual faces among clinical populations known to present social cognition difficulties. Brain-computer interface studies with feedback-feedforward interactions based on facial emotion expressions can also be conducted with these stimuli.

  20. Facial first impressions and partner preference models: Comparable or distinct underlying structures?

    Science.gov (United States)

    South Palomares, Jennifer K; Sutherland, Clare A M; Young, Andrew W

    2017-12-17

Given the frequency of relationships nowadays initiated online, where impressions from face photographs may influence relationship initiation, it is important to understand how facial first impressions might be used in such contexts. We therefore examined the applicability of a leading model of verbally expressed partner preferences to impressions derived from real face images and investigated how the factor structure of first impressions based on potential partner preference-related traits might relate to a more general model of facial first impressions. Participants rated 1,000 everyday face photographs on 12 traits selected to represent Fletcher et al.'s (1999, Journal of Personality and Social Psychology, 76, 72) verbal model of partner preferences. Facial trait judgements showed an underlying structure that largely paralleled the tripartite structure of Fletcher et al.'s verbal preference model, regardless of either face gender or participant gender. Furthermore, there was close correspondence between the verbal partner preference model and a more general tripartite model of facial first impressions derived from a different literature (Sutherland et al., 2013, Cognition, 127, 105), suggesting an underlying correspondence between verbal conceptual models of romantic preferences and more general models of facial first impressions. © 2017 The British Psychological Society.

  1. Computer Aided Facial Prosthetics Manufacturing System

    Directory of Open Access Journals (Sweden)

    Peng H.K.

    2016-01-01

Full Text Available Facial deformities can impose a burden on the patient. There are many solutions for facial deformities, such as plastic surgery and facial prosthetics. However, the current fabrication method for facial prosthetics is costly and time-consuming. This study aimed to identify a new method to construct a customized facial prosthetic. A 3D scanner, computer software and a 3D printer were used in this study. Results showed that the newly developed method can be used to produce customized facial prosthetics. The advantages of the developed method over the conventional process are lower cost and reduced material waste and pollution, in line with green manufacturing principles.

  2. Creating fair lineups for suspects with distinctive features

    OpenAIRE

    Zarkadi, Theodora; Wade, Kimberley A.; Stewart, Neil

    2009-01-01

    In their descriptions, eyewitnesses often refer to a culprit's distinctive facial features. However, in a police lineup, selecting the only member with the described distinctive feature is unfair to the suspect and provides the police with little further information. For fair and informative lineups, the distinctive feature should be either replicated across foils or concealed on the target. In the present experiments, replication produced more correct identifications in target-present lineup...

  3. Satisfaction with facial appearance and its determinants in adults with severe congenital facial disfigurement: a case-referent study.

    Science.gov (United States)

    Versnel, S L; Duivenvoorden, H J; Passchier, J; Mathijssen, I M J

    2010-10-01

    Patients with severe congenital facial disfigurement have a long track record of operations and hospital visits by the time they are 18 years old. The fact that their facial deformity is congenital may have an impact on how satisfied these patients are with their appearance. This study evaluated the level of satisfaction with facial appearance of congenital and of acquired facially disfigured adults, and explored demographic, physical and psychological determinants of this satisfaction. Differences compared with non-disfigured adults were examined. Fifty-nine adults with a rare facial cleft, 59 adults with a facial deformity traumatically acquired in adulthood, and a reference group of 201 non-disfigured adults completed standardised demographic, physical and psychological questionnaires. The congenital and acquired groups did not differ significantly in the level of satisfaction with facial appearance, but both were significantly less satisfied than the reference group. In facially disfigured adults, level of education, number of affected facial parts and facial function were determinants of the level of satisfaction. High fear of negative appearance evaluation by others (FNAE) and low self-esteem (SE) were strong psychological determinants. Although FNAE was higher in both patient groups, SE was similar in all three groups. Satisfaction with facial appearance of individuals with a congenital or acquired facial deformity is similar and will seldom reach the level of satisfaction of non-disfigured persons. A combination of surgical correction (with attention for facial profile and restoring facial functions) and psychological help (to increase SE and lower FNAE) may improve patient satisfaction. Copyright 2009 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  4. Recognition of facial expressions and prosodic cues with graded emotional intensities in adults with Asperger syndrome.

    Science.gov (United States)

    Doi, Hirokazu; Fujisawa, Takashi X; Kanai, Chieko; Ohta, Haruhisa; Yokoi, Hideki; Iwanami, Akira; Kato, Nobumasa; Shinohara, Kazuyuki

    2013-09-01

    This study investigated the ability of adults with Asperger syndrome to recognize emotional categories of facial expressions and emotional prosodies with graded emotional intensities. The individuals with Asperger syndrome showed poorer recognition performance for angry and sad expressions from both facial and vocal information. The group difference in facial expression recognition was prominent for stimuli with low or intermediate emotional intensities. In contrast to this, the individuals with Asperger syndrome exhibited lower recognition accuracy than typically-developed controls mainly for emotional prosody with high emotional intensity. In facial expression recognition, Asperger and control groups showed an inversion effect for all categories. The magnitude of this effect was less in the Asperger group for angry and sad expressions, presumably attributable to reduced recruitment of the configural mode of face processing. The individuals with Asperger syndrome outperformed the control participants in recognizing inverted sad expressions, indicating enhanced processing of local facial information representing sad emotion. These results suggest that the adults with Asperger syndrome rely on modality-specific strategies in emotion recognition from facial expression and prosodic information.

  5. P2-28: An Amplification of Feedback from Facial Muscles Strengthened Sympathetic Activations to Emotional Facial Cues

    Directory of Open Access Journals (Sweden)

    Younbyoung Chae

    2012-10-01

Full Text Available The facial feedback hypothesis suggests that feedback from cutaneous and muscular afferents influences our emotions during the control of facial expressions. Enhanced facial expressiveness is correlated with an increase in autonomic arousal and self-reported emotional experience, while limited facial expression attenuates these responses. The present study was aimed at investigating the difference in emotional response between imitated and observed facial expressions. For this, we measured the facial electromyogram of the corrugator muscle as well as the skin conductance response (SCR) while participants were either imitating or simply observing emotional facial expressions. We found that participants produced significantly greater facial electromyogram activation during imitation of angry faces compared to observation. Similarly, they exhibited significantly greater SCR during imitation of angry faces compared to observation. An amplification of feedback from facial muscles during imitation strengthened sympathetic activation to negative emotional cues. These findings suggest that manipulations of muscular feedback could modulate the bodily expression of emotion and perhaps also the emotional response itself.

  6. Exacerbation of Facial Motoneuron Loss after Facial Nerve Axotomy in CCR3-Deficient Mice

    Directory of Open Access Journals (Sweden)

    Derek A Wainwright

    2009-11-01

Full Text Available We have previously demonstrated a neuroprotective mechanism of FMN (facial motoneuron) survival after facial nerve axotomy that is dependent on CD4+ Th2 cell interaction with peripheral antigen-presenting cells, as well as CNS (central nervous system)-resident microglia. PACAP (pituitary adenylate cyclase-activating polypeptide) is expressed by injured FMN and increases Th2-associated chemokine expression in cultured murine microglia. Collectively, these results suggest a model involving CD4+ Th2 cell migration to the facial motor nucleus after injury via microglial expression of Th2-associated chemokines. However, to respond to Th2-associated chemokines, Th2 cells must express the appropriate Th2-associated chemokine receptors. In the present study, we tested the hypothesis that Th2-associated chemokine receptors increase in the facial motor nucleus after facial nerve axotomy at timepoints consistent with significant T-cell infiltration. Microarray analysis of Th2-associated chemokine receptors was followed up with real-time PCR for CCR3, which indicated that facial nerve injury increases CCR3 mRNA levels in mouse facial motor nucleus. Unexpectedly, quantitative- and co-immunofluorescence revealed increased CCR3 expression localizing to FMN in the facial motor nucleus after facial nerve axotomy. Compared with WT (wild-type), a significant decrease in FMN survival 4 weeks after axotomy was observed in CCR3–/– mice. Additionally, compared with WT, a significant decrease in FMN survival 4 weeks after axotomy was observed in Rag2–/– (recombination activating gene 2-deficient) mice adoptively transferred CD4+ T-cells isolated from CCR3–/– mice, but not in CCR3–/– mice adoptively transferred CD4+ T-cells derived from WT mice. These results provide a basis for further investigation into the co-operation between CD4+ T-cell- and CCR3-mediated neuroprotection after FMN injury.

  7. Deep Spatial-Temporal Joint Feature Representation for Video Object Detection

    Directory of Open Access Journals (Sweden)

    Baojun Zhao

    2018-03-01

Full Text Available With the development of deep neural networks, many object detection frameworks have shown great success in the fields of smart surveillance, self-driving cars, and facial recognition. However, the data sources are usually videos, and the object detection frameworks are mostly established on still images and only use the spatial information, which means that the feature consistency cannot be ensured because the training procedure loses temporal information. To address these problems, we propose a single, fully-convolutional neural network-based object detection framework that involves temporal information by using Siamese networks. In the training procedure, first, the prediction network combines the multiscale feature map to handle objects of various sizes. Second, we introduce a correlation loss by using the Siamese network, which provides neighboring frame features. This correlation loss represents object co-occurrences across time to aid the consistent feature generation. Since the correlation loss should use the information of the track ID and detection label, our video object detection network has been evaluated on the large-scale ImageNet VID dataset, where it achieves a 69.5% mean average precision (mAP).
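The core idea of a correlation loss between Siamese branches can be sketched as below: pull the features of the same track ID in neighboring frames toward high similarity, and push mismatched pairs apart. This is an illustration of the concept with random feature vectors, not the authors' implementation, and the loss form chosen here (cosine similarity with a hinge at zero) is an assumption.

```python
# Conceptual correlation loss between features of two neighboring frames.
import numpy as np

def correlation_loss(feat_t, feat_t1, same_track):
    """feat_t, feat_t1: (N, D) features from the two Siamese branches;
    same_track: (N,) bool, True where row i is the same object in both frames."""
    num = np.sum(feat_t * feat_t1, axis=1)
    sim = num / (np.linalg.norm(feat_t, axis=1) * np.linalg.norm(feat_t1, axis=1))
    # matched tracks: penalize dissimilarity (1 - sim);
    # mismatches: penalize similarity (floored at 0)
    return np.mean(np.where(same_track, 1 - sim, np.clip(sim, 0, None)))

rng = np.random.default_rng(3)
feat_t = rng.normal(size=(8, 128))
feat_t1 = feat_t + 0.1 * rng.normal(size=(8, 128))   # nearly identical features
loss = correlation_loss(feat_t, feat_t1, np.ones(8, dtype=bool))
print(f"correlation loss: {loss:.3f}")
```

Because matched features across frames drive the loss toward zero, minimizing it encourages the backbone to emit temporally consistent features for the same object.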

  8. Facial emotion recognition in Parkinson's disease: A review and new hypotheses

    Science.gov (United States)

    Vérin, Marc; Sauleau, Paul; Grandjean, Didier

    2018-01-01

    Abstract Parkinson's disease is a neurodegenerative disorder classically characterized by motor symptoms. Among them, hypomimia affects facial expressiveness and social communication and has a highly negative impact on patients' and relatives' quality of life. Patients also frequently experience nonmotor symptoms, including emotional‐processing impairments, leading to difficulty in recognizing emotions from faces. Aside from its theoretical importance, understanding the disruption of facial emotion recognition in PD is crucial for improving quality of life for both patients and caregivers, as this impairment is associated with heightened interpersonal difficulties. However, studies assessing abilities in recognizing facial emotions in PD still report contradictory outcomes. The origins of this inconsistency are unclear, and several questions (regarding the role of dopamine replacement therapy or the possible consequences of hypomimia) remain unanswered. We therefore undertook a fresh review of relevant articles focusing on facial emotion recognition in PD to deepen current understanding of this nonmotor feature, exploring multiple significant potential confounding factors, both clinical and methodological, and discussing probable pathophysiological mechanisms. This led us to examine recent proposals about the role of basal ganglia‐based circuits in emotion and to consider the involvement of facial mimicry in this deficit from the perspective of embodied simulation theory. We believe our findings will inform clinical practice and increase fundamental knowledge, particularly in relation to potential embodied emotion impairment in PD. © 2018 The Authors. Movement Disorders published by Wiley Periodicals, Inc. on behalf of International Parkinson and Movement Disorder Society. PMID:29473661

  9. The MPI Facial Expression Database — A Validated Database of Emotional and Conversational Facial Expressions

    Science.gov (United States)

    Kaulard, Kathrin; Cunningham, Douglas W.; Bülthoff, Heinrich H.; Wallraven, Christian

    2012-01-01

The ability to communicate is one of the core aspects of human life. For this, we use not only verbal but also nonverbal signals of remarkable complexity. Among the latter, facial expressions belong to the most important information channels. Despite the large variety of facial expressions we use in daily life, research on facial expressions has so far mostly focused on the emotional aspect. Consequently, most databases of facial expressions available to the research community also include only emotional expressions, neglecting the largely unexplored aspect of conversational expressions. To fill this gap, we present the MPI facial expression database, which contains a large variety of natural emotional and conversational expressions. The database contains 55 different facial expressions performed by 19 German participants. Expressions were elicited with the help of a method-acting protocol, which guarantees both well-defined and natural facial expressions. The method-acting protocol was based on every-day scenarios, which are used to define the necessary context information for each expression. All facial expressions are available in three repetitions, in two intensities, as well as from three different camera angles. A detailed frame annotation is provided, from which a dynamic and a static version of the database have been created. In addition to describing the database in detail, we also present the results of an experiment with two conditions that serve to validate the context scenarios as well as the naturalness and recognizability of the video sequences. Our results provide clear evidence that conversational expressions can be recognized surprisingly well from visual information alone. The MPI facial expression database will enable researchers from different research fields (including the perceptual and cognitive sciences, but also affective computing, as well as computer vision) to investigate the processing of a wider range of natural facial expressions.

  10. Effects of Orientation on Recognition of Facial Affect

    Science.gov (United States)

    Cohen, M. M.; Mealey, J. B.; Hargens, Alan R. (Technical Monitor)

    1997-01-01

    The ability to discriminate facial features is often degraded when the orientation of the face and/or the observer is altered. Previous studies have shown that gross distortions of facial features can go unrecognized when the image of the face is inverted, as exemplified by the 'Margaret Thatcher' effect. This study examines how quickly erect and supine observers can distinguish between smiling and frowning faces that are presented at various orientations. The effects of orientation are of particular interest in space, where astronauts frequently view one another in orientations other than the upright. Sixteen observers viewed individual facial images of six people on a computer screen; on a given trial, the image was either smiling or frowning. Each image was viewed when it was erect and when it was rotated (rolled) by 45 degrees, 90 degrees, 135 degrees, 180 degrees, 225 degrees and 270 degrees about the line of sight. The observers were required to respond as rapidly and accurately as possible to identify if the face presented was smiling or frowning. Measures of reaction time were obtained when the observers were both upright and supine. Analyses of variance revealed that mean reaction time, which increased with stimulus rotation (F=18.54, df 7/15, p < 0.001), was 22% longer when the faces were inverted than when they were erect, but that the orientation of the observer had no significant effect on reaction time (F=1.07, df 1/15, p > 0.30). These data strongly suggest that the orientation of the image of a face on the observer's retina, but not its orientation with respect to gravity, is important in identifying the expression on the face.

  11. Common cues to emotion in the dynamic facial expressions of speech and song.

    Science.gov (United States)

    Livingstone, Steven R; Thompson, William F; Wanderley, Marcelo M; Palmer, Caroline

    2015-01-01

    Speech and song are universal forms of vocalization that may share aspects of emotional expression. Research has focused on parallels in acoustic features, overlooking facial cues to emotion. In three experiments, we compared moving facial expressions in speech and song. In Experiment 1, vocalists spoke and sang statements each with five emotions. Vocalists exhibited emotion-dependent movements of the eyebrows and lip corners that transcended speech-song differences. Vocalists' jaw movements were coupled to their acoustic intensity, exhibiting differences across emotion and speech-song. Vocalists' emotional movements extended beyond vocal sound to include large sustained expressions, suggesting a communicative function. In Experiment 2, viewers judged silent videos of vocalists' facial expressions prior to, during, and following vocalization. Emotional intentions were identified accurately for movements during and after vocalization, suggesting that these movements support the acoustic message. Experiment 3 compared emotional identification in voice-only, face-only, and face-and-voice recordings. Emotions were identified poorly from voice-only singing yet accurately in all other conditions, confirming that facial expressions conveyed emotion more accurately than the voice in song but equivalently in speech. Collectively, these findings highlight broad commonalities in the facial cues to emotion in speech and song, while also revealing differences in perception and acoustic-motor production.

  12. The enlargement of geniculate fossa of facial nerve canal: a new CT finding of facial nerve canal fracture

    International Nuclear Information System (INIS)

    Gong Ruozhen; Li Yuhua; Gong Wuxian; Wu Lebin

    2006-01-01

    Objective: To discuss the value of enlargement of the geniculate fossa of the facial nerve canal in the diagnosis of facial nerve canal fracture. Methods: Thirty patients with facial nerve canal fracture underwent axial and coronal CT scans. The correlation between the fracture and enlargement of the geniculate fossa of the facial nerve canal was analyzed, and the ability of axial and coronal imaging to show the fracture and the enlargement was compared. Results: Fracture of the geniculate fossa of the facial nerve canal was found at operation in 30 patients, while the fracture was detected on CT in 18 patients. Enlargement of the geniculate ganglion of the facial nerve was detected at operation in 30 patients, while enlargement of the fossa was found on CT in 28 cases. Both enlargement and fracture of the geniculate fossa were detected on CT images in 18 patients; in 12 patients, only the enlargement was shown. Conclusion: Enlargement of the geniculate fossa of the facial nerve canal is a useful finding in the diagnosis of fracture of the geniculate fossa in patients with facial paralysis, even when no fracture line is shown on CT images. (authors)

  13. Facial Nerve Paralysis due to a Pleomorphic Adenoma with the Imaging Characteristics of a Facial Nerve Schwannoma

    OpenAIRE

    Nader, Marc-Elie; Bell, Diana; Sturgis, Erich M.; Ginsberg, Lawrence E.; Gidley, Paul W.

    2014-01-01

    Background Facial nerve paralysis in a patient with a salivary gland mass usually denotes malignancy. However, facial paralysis can also be caused by benign salivary gland tumors. Methods We present a case of facial nerve paralysis due to a benign salivary gland tumor that had the imaging characteristics of an intraparotid facial nerve schwannoma. Results The patient presented to our clinic 4 years after the onset of facial nerve paralysis initially diagnosed as Bell palsy. Computed tomograph...

  14. Quantitative Magnetic Resonance Imaging Volumetry of Facial Muscles in Healthy Subjects and Patients with Facial Palsy

    Science.gov (United States)

    Volk, Gerd F.; Karamyan, Inna; Klingner, Carsten M.; Reichenbach, Jürgen R.

    2014-01-01

    Background: Magnetic resonance imaging (MRI) has not yet been established systematically to detect structural muscular changes after facial nerve lesion. The purpose of this pilot study was to investigate quantitative assessment of MRI muscle volume data for facial muscles. Methods: Ten healthy subjects and 5 patients with facial palsy were recruited. Using manual or semiautomatic segmentation of 3T MRI, volume measurements were performed for the frontal, procerus, risorius, corrugator supercilii, orbicularis oculi, nasalis, zygomaticus major, zygomaticus minor, levator labii superioris, orbicularis oris, depressor anguli oris, depressor labii inferioris, and mentalis, as well as for the masseter and temporalis as masticatory muscles for control. Results: All muscles except the frontal (identification in 4/10 volunteers), procerus (4/10), risorius (6/10), and zygomaticus minor (8/10) were identified in all volunteers. Sex or age effects were not seen (all P > 0.05). There was no facial asymmetry with exception of the zygomaticus major (larger on the left side; P = 0.012). The exploratory examination of 5 patients revealed considerably smaller muscle volumes on the palsy side 2 months after facial injury. One patient with chronic palsy showed substantial muscle volume decrease, which also occurred in another patient with incomplete chronic palsy restricted to the involved facial area. Facial nerve reconstruction led to mixed results of decreased but also increased muscle volumes on the palsy side compared with the healthy side. Conclusions: First systematic quantitative MRI volume measures of 5 different clinical presentations of facial paralysis are provided. PMID:25289366

  15. [Prosopagnosia and facial expression recognition].

    Science.gov (United States)

    Koyama, Shinichi

    2014-04-01

    This paper reviews clinical neuropsychological studies that have indicated that the recognition of a person's identity and the recognition of facial expressions are processed by different cortical and subcortical areas of the brain. The fusiform gyrus, especially the right fusiform gyrus, plays an important role in the recognition of identity. The superior temporal sulcus, amygdala, and medial frontal cortex play important roles in facial-expression recognition. Both facial recognition and facial-expression recognition are highly intellectual processes that involve several regions of the brain.

  16. Holistic face processing can inhibit recognition of forensic facial composites.

    Science.gov (United States)

    McIntyre, Alex H; Hancock, Peter J B; Frowd, Charlie D; Langton, Stephen R H

    2016-04-01

    Facial composite systems help eyewitnesses to show the appearance of criminals. However, likenesses created by unfamiliar witnesses will not be completely accurate, and people familiar with the target can find them difficult to identify. Faces are processed holistically; we explore whether this impairs identification of inaccurate composite images and whether recognition can be improved. In Experiment 1 (n = 64) an imaging technique was used to make composites of celebrity faces more accurate and identification was contrasted with the original composite images. Corrected composites were better recognized, confirming that errors in production of the likenesses impair identification. The influence of holistic face processing was explored by misaligning the top and bottom parts of the composites (cf. Young, Hellawell, & Hay, 1987). Misalignment impaired recognition of corrected composites but identification of the original, inaccurate composites significantly improved. This effect was replicated with facial composites of noncelebrities in Experiment 2 (n = 57). We conclude that, like real faces, facial composites are processed holistically: recognition is impaired because unlike real faces, composites contain inaccuracies and holistic face processing makes it difficult to perceive identifiable features. This effect was consistent across composites of celebrities and composites of people who are personally familiar. Our findings suggest that identification of forensic facial composites can be enhanced by presenting composites in a misaligned format. (c) 2016 APA, all rights reserved.
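The misalignment manipulation used in these experiments, offsetting the top and bottom halves of a face image (after Young, Hellawell, & Hay, 1987), amounts to a simple array operation. The sketch below is illustrative only; the split point, offset, and synthetic image are assumptions, not the authors' materials:

```python
import numpy as np

def misalign(face, offset):
    """Shift the bottom half of a face image sideways, disrupting
    holistic processing in composite-face fashion."""
    out = face.copy()
    mid = out.shape[0] // 2                         # row separating the halves
    out[mid:] = np.roll(out[mid:], offset, axis=1)  # slide the bottom half
    return out

# Tiny synthetic "image": each pixel holds its column index
face = np.tile(np.arange(6), (6, 1))
shifted = misalign(face, 2)
```

In an experiment the aligned and misaligned versions would be counterbalanced across trials; here the top half is untouched while the bottom half wraps around by two columns.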

  17. Alteration of Occlusal Plane in Orthognathic Surgery: Clinical Features to Help Treatment Planning on Class III Patients

    Directory of Open Access Journals (Sweden)

    Daniel Amaral Alves Marlière

    2018-01-01

    Full Text Available Dentofacial deformities (DFD) present mainly as Class III malocclusions that require orthognathic surgery as part of definitive treatment. Class III patients can have obvious signs such as increased chin projection and chin-throat length, nasolabial folds, reverse overjet, and lack of upper lip support. However, Class III patients can present different facial patterns depending on the angulation of the occlusal plane (OP), and bite correction alone does not always improve facial esthetics. We describe two Class III patients with different clinical features and OP inclinations who underwent different treatment planning based on six clinical features: (I) facial type; (II) upper incisor display at rest; (III) dental and gingival display on smile; (IV) soft tissue support; (V) chin projection; and (VI) lower lip projection. These patients underwent orthognathic surgery with different treatment plans: clockwise or counterclockwise rotation of the OP according to their facial features. The clinical features and OP inclination helped define treatment planning through clockwise and counterclockwise rotations of the maxillomandibular complex, and the two patients, who underwent bimaxillary orthognathic surgery, showed harmonious outcomes that remained stable after 2 years of follow-up.

  18. Antenatal diagnosis of complete facial duplication--a case report of a rare craniofacial defect.

    Science.gov (United States)

    Rai, V S; Gaffney, G; Manning, N; Pirrone, P G; Chamberlain, P F

    1998-06-01

    We report a case of the prenatal sonographic detection of facial duplication, the diprosopus abnormality, in a twin pregnancy. The characteristic sonographic features of the condition include duplication of eyes, mouth, nose and both mid- and anterior intracranial structures. A heart-shaped abnormality of the cranial vault should prompt more detailed examination for other supportive features of this rare condition.

  19. MR findings of facial nerve on oblique sagittal MRI using TMJ surface coil: normal vs peripheral facial nerve palsy

    International Nuclear Information System (INIS)

    Park, Yong Ok; Lee, Myeong Jun; Lee, Chang Joon; Yoo, Jeong Hyun

    2000-01-01

    To evaluate the findings of the normal facial nerve, as seen on oblique sagittal MRI using a TMJ (temporomandibular joint) surface coil, and then to evaluate abnormal findings of peripheral facial nerve palsy. We retrospectively reviewed the MR findings of 20 patients with peripheral facial palsy and 50 normal facial nerves of 36 patients without facial palsy. All underwent oblique sagittal MRI using a TMJ surface coil. We analyzed the course, signal intensity, thickness, location, and degree of enhancement of the facial nerve. According to the angle made by the proximal parotid segment on the axis of the mastoid segment, the course was classified as anterior angulation (obtuse, acute, or buckling), straight, or posterior angulation. Among 50 normal facial nerves, 24 (48%) were straight and 23 (46%) demonstrated anterior angulation; 34 (68%) showed isointense signal on T1WI. In the patient group, the course on the affected side was either straight (40%) or anteriorly angulated (55%), and signal intensity in 80% of cases was isointense. These findings were similar to those in the normal group, but in patients with post-traumatic or post-operative facial palsy, buckling of the course appeared. In 12 of 18 facial palsy cases (66.6%) in which contrast material was administered, the normal facial nerve of the opposite facial canal showed mild enhancement in more than one segment, but on the affected side the facial nerve showed diffuse enhancement in all 14 patients with acute facial palsy. Eleven of these (79%) showed fair or marked enhancement in more than one segment, and in 12 (86%), mild enhancement of the proximal parotid segment was noted. Four of six chronic facial palsy cases (66.6%) showed atrophy of the facial nerve. 
When oblique sagittal MR images are obtained using a TMJ surface coil, enhancement of the proximal parotid segment of the facial nerve and fair or marked enhancement of at least one segment within the facial canal always suggests pathology of

  20. Facial neuroma masquerading as acoustic neuroma.

    Science.gov (United States)

    Sayegh, Eli T; Kaur, Gurvinder; Ivan, Michael E; Bloch, Orin; Cheung, Steven W; Parsa, Andrew T

    2014-10-01

    Facial nerve neuromas are rare benign tumors that may be initially misdiagnosed as acoustic neuromas when situated near the auditory apparatus. We describe a patient with a large cystic tumor with associated trigeminal, facial, audiovestibular, and brainstem dysfunction, which was suspicious for acoustic neuroma on preoperative neuroimaging. Intraoperative investigation revealed a facial nerve neuroma located in the cerebellopontine angle and internal acoustic canal. Gross total resection of the tumor via retrosigmoid craniotomy was curative. Transection of the facial nerve necessitated facial reanimation 4 months later via hypoglossal-facial cross-anastomosis. Clinicians should recognize the natural history, diagnostic approach, and management of this unusual and mimetic lesion. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Cultural similarities and differences in perceiving and recognizing facial expressions of basic emotions.

    Science.gov (United States)

    Yan, Xiaoqian; Andrews, Timothy J; Young, Andrew W

    2016-03-01

    The ability to recognize facial expressions of basic emotions is often considered a universal human ability. However, recent studies have suggested that this commonality has been overestimated and that people from different cultures use different facial signals to represent expressions (Jack, Blais, Scheepers, Schyns, & Caldara, 2009; Jack, Caldara, & Schyns, 2012). We investigated this possibility by examining similarities and differences in the perception and categorization of facial expressions between Chinese and white British participants using whole-face and partial-face images. Our results showed no cultural difference in the patterns of perceptual similarity of expressions from whole-face images. When categorizing the same expressions, however, both British and Chinese participants were slightly more accurate with whole-face images of their own ethnic group. To further investigate potential strategy differences, we repeated the perceptual similarity and categorization tasks with presentation of only the upper or lower half of each face. Again, the perceptual similarity of facial expressions was similar between Chinese and British participants for both the upper and lower face regions. However, participants were slightly better at categorizing facial expressions of their own ethnic group for the lower face regions, indicating that the way in which culture shapes the categorization of facial expressions is largely driven by differences in information decoding from this part of the face. (c) 2016 APA, all rights reserved.

  2. Random fractional ultrapulsed CO2 resurfacing of photodamaged facial skin: long-term evaluation.

    Science.gov (United States)

    Tretti Clementoni, Matteo; Galimberti, Michela; Tourlaki, Athanasia; Catenacci, Maximilian; Lavagno, Rosalia; Bencini, Pier Luca

    2013-02-01

    Although numerous papers have recently been published on ablative fractional resurfacing, there is a lack of information in the literature on very long-term results. The aim of this retrospective study is to evaluate the efficacy, adverse side effects, and long-term results of a random fractional ultrapulsed CO2 laser on a large population with photodamaged facial skin. Three hundred twelve patients with facial photodamaged skin were enrolled and underwent a single full-face treatment. Six aspects of photodamaged skin were recorded using a 5-point scale at 3, 6, and 24 months after the treatment. The results were compared with a non-parametric statistical test, the Wilcoxon exact test. Three hundred one patients completed the study. All analyzed features showed a statistically significant improvement 3 months after the procedure. Three months later all features, except for pigmentations, once again showed a statistically significant improvement. Results after 24 months were similar to those assessed 18 months before. No long-term or other serious complications were observed. From the significant number of patients analyzed, long-term results demonstrate not only how fractional ultrapulsed CO2 resurfacing can achieve good results on photodamaged facial skin but also how these results can be considered stable 2 years after the procedure.
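The Wilcoxon exact test named above compares paired ordinal scores without assuming normality. As a self-contained stand-in for the same idea, the sketch below runs an exact sign-flip permutation test on invented 5-point photodamage scores; the data and sample size are hypothetical, not the study's:

```python
from itertools import product

def exact_paired_p(before, after):
    """Exact two-sided paired permutation test: under the null, each
    patient's score difference is equally likely to carry either sign."""
    diffs = [b - a for b, a in zip(before, after)]
    observed = abs(sum(diffs))
    hits = sum(1 for signs in product((1, -1), repeat=len(diffs))
               if abs(sum(s * d for s, d in zip(signs, diffs))) >= observed)
    return hits / 2 ** len(diffs)

# Hypothetical 5-point photodamage scores, baseline vs. 3 months (lower = better)
baseline = [4, 5, 3, 4, 5, 4, 3, 5, 4, 4]
month_3  = [2, 3, 2, 3, 3, 2, 2, 3, 3, 2]
p = exact_paired_p(baseline, month_3)   # every patient improved
```

With all ten differences positive, only the two all-same-sign assignments reach the observed total, giving p = 2/1024, about 0.002.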

  3. Delayed appearance of tracer lead in facial hair

    International Nuclear Information System (INIS)

    Rabinowitz, M.; Wetherill, G.; Kopple, J.

    1976-01-01

    Three adult men were fed ²⁰⁴Pb--a rare, stable isotope of lead--daily for about 100 days. Simultaneous blood and facial hair measurements of this tracer and of total lead concentrations were made by mass spectrometric isotope dilution analysis. Although the blood showed an immediate response to the intake of the tracer, the facial hair showed a more gradual response and a delay of approximately 35 days. Since the pattern of appearance of lead in hair does not appear to represent a simple time delay of blood lead concentration, the existence of a physiological pool of lead fed by the blood and giving rise to the content in hair is suggested. Hair lead values should therefore be interpreted as the integral of the blood lead values over the mean life of this intermediate pool--about 100 days.
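The suggested blood-to-hair relationship behaves like a first-order compartment: hair tracks an intermediate pool that integrates blood lead with a roughly 100-day mean life, which reproduces the gradual, delayed hair response described. A minimal discrete-time sketch, with illustrative parameters rather than the study's fitted values:

```python
def simulate(days=300, mean_life=100.0, intake_days=100):
    """Blood tracer feeds an intermediate pool; hair lead follows the pool."""
    k = 1.0 / mean_life                          # first-order turnover rate
    blood = [1.0 if t < intake_days else 0.0 for t in range(days)]
    pool, trace = 0.0, []
    for t in range(days):
        pool += blood[t] - k * pool              # daily inflow minus outflow
        trace.append(pool)
    return blood, trace

blood, pool = simulate()
peak_day = max(range(len(pool)), key=pool.__getitem__)
```

The pool rises and falls far more slowly than the step-shaped blood signal: it peaks on the last intake day and remains above half its maximum more than a month after intake stops, mirroring the smoothed, lagged hair-lead pattern the abstract describes.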

  4. Computed tomography in facial trauma

    International Nuclear Information System (INIS)

    Zilkha, A.

    1982-01-01

    Computed tomography (CT), plain radiography, and conventional tomography were performed on 30 patients with facial trauma. CT demonstrated bone and soft-tissue involvement. In all cases, CT was superior to tomography in the assessment of facial injury. It is suggested that CT follow plain radiography in the evaluation of facial trauma

  5. A moat around castle walls. The role of axillary and facial hair in lymph node protection from mutagenic factors.

    Science.gov (United States)

    Komarova, Svetlana V

    2006-01-01

    Axillary hair is a highly conserved phenotypical feature in humans, and as such deserves at least consideration of its functional significance. Protection from environmental factors is one of the main functions attributed to hair in furred vertebrates, but is believed to be inapplicable to humans. I considered the hypothesis that the phenotypic preservation of axillary hair is due to its unrecognized role in protecting the organism. Two immediate questions arise--what exactly is being protected and what it is protected from. A large group of axillary lymph nodes represents a major difference between the underarms and the adjacent areas of the trunk. The consideration of potential factors from which hair can offer protection identifies sunlight as the most likely candidate. Intense sweat production underarms may represent an independent defense mechanism, specifically protecting lymph nodes from overheating. Moreover, the pattern of facial hair growth in males strikingly overlaps with the distribution of superficial lymph nodes, suggesting a potential role for facial hair in the protection of lymph nodes, and possibly the thymus and thyroid. The idea of lymph node protection from environmental mutagenic factors, such as UV radiation and heat, appears particularly important in light of the wide association of lymph nodes with cancers. The position of contemporary fashion towards body hair is aggressively negative, including the social pressure for removal of axillary and bikini-line hair for women, facial hair for men in many professional occupations, and even body hair for men. If this hypothesis is proven to be true, the implications will be significant for immunology (by providing new insights into lymph node physiology), health sciences (depilation is a painful and therefore easily modifiable habit if proven to increase disease risk), as well as art, social fashion, and the economy.

  6. Facial transplantation surgery introduction.

    Science.gov (United States)

    Eun, Seok-Chan

    2015-06-01

    Severely disfiguring facial injuries can have a devastating impact on the patient's quality of life. During the past decade, vascularized facial allotransplantation has progressed from an experimental possibility to a clinical reality in the fields of disease, trauma, and congenital malformations. This technique may now be considered a viable option for repairing complex craniofacial defects for which the results of autologous reconstruction remain suboptimal. Vascularized facial allotransplantation permits optimal anatomical reconstruction and provides desired functional, esthetic, and psychosocial benefits that are far superior to those achieved with conventional methods. Along with dramatic improvements in their functional statuses, patients regain the ability to make facial expressions such as smiling and to perform various functions such as smelling, eating, drinking, and speaking. The ideas in the 1997 movie "Face/Off" have now been realized in the clinical field. The objective of this article is to introduce this new surgical field, provide a basis for examining the status of the field of face transplantation, and stimulate and enhance facial transplantation studies in Korea.

  7. Facial reanimation with gracilis muscle transfer neurotized to cross-facial nerve graft versus masseteric nerve: a comparative study using the FACIAL CLIMA evaluating system.

    Science.gov (United States)

    Hontanilla, Bernardo; Marre, Diego; Cabello, Alvaro

    2013-06-01

    Longstanding unilateral facial paralysis is best addressed with microneurovascular muscle transplantation. Neurotization can be obtained from the cross-facial or the masseter nerve. The authors present a quantitative comparison of both procedures using the FACIAL CLIMA system. Forty-seven patients with complete unilateral facial paralysis underwent reanimation with a free gracilis transplant neurotized to either a cross-facial nerve graft (group I, n=20) or to the ipsilateral masseteric nerve (group II, n=27). Commissural displacement and commissural contraction velocity were measured using the FACIAL CLIMA system. Postoperative intragroup commissural displacement and commissural contraction velocity means of the reanimated versus the normal side were first compared using the independent samples t test. Mean percentage of recovery of both parameters were compared between the groups using the independent samples t test. Significant differences of mean commissural displacement and commissural contraction velocity between the reanimated side and the normal side were observed in group I (p=0.001 and p=0.014, respectively) but not in group II. Intergroup comparisons showed that both commissural displacement and commissural contraction velocity were higher in group II, with significant differences for commissural displacement (p=0.048). Mean percentage of recovery of both parameters was higher in group II, with significant differences for commissural displacement (p=0.042). Free gracilis muscle transfer neurotized by the masseteric nerve is a reliable technique for reanimation of longstanding facial paralysis. Compared with cross-facial nerve graft neurotization, this technique provides better symmetry and a higher degree of recovery. Therapeutic, III.
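The group comparisons reported above rest on independent-samples t tests of the percentage of recovery. A pure-Python sketch of that statistic follows; the recovery values are invented for illustration only, since the abstract does not reproduce the FACIAL CLIMA data:

```python
from statistics import mean, stdev

def t_independent(a, b):
    """Student's two-sample t statistic with pooled variance."""
    na, nb = len(a), len(b)
    pooled = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / (pooled * (1 / na + 1 / nb)) ** 0.5

# Hypothetical percentage-of-recovery values for the two neurotization groups
cross_facial = [55, 60, 48, 52, 58, 50]   # group I: cross-facial nerve graft
masseteric   = [70, 75, 68, 72, 66, 74]   # group II: masseteric nerve
t = t_independent(masseteric, cross_facial)
```

A large positive t here would echo the reported advantage of masseteric neurotization; in practice the p-value would be read from the t distribution with na + nb - 2 degrees of freedom.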

  8. Are facial injuries really different? An observational cohort study comparing appearance concern and psychological distress in facial trauma and non-facial trauma patients.

    Science.gov (United States)

    Rahtz, Emmylou; Bhui, Kamaldeep; Hutchison, Iain; Korszun, Ania

    2018-01-01

    Facial injuries are widely assumed to lead to stigma and significant psychosocial burden. Experimental studies of face perception support this idea, but there is very little empirical evidence to guide treatment. This study sought to address the gap. Data were collected from 193 patients admitted to hospital following facial or other trauma. Ninety participants were successfully followed up 8 months later. Participants completed measures of appearance concern and psychological distress (post-traumatic stress symptoms (PTSS), depressive symptoms, anxiety symptoms). Participants were classified by site of injury (facial or non-facial injury). The overall levels of appearance concern were comparable to those of the general population, and there was no evidence of more appearance concern among people with facial injuries. Women and younger people were significantly more likely to experience appearance concern at baseline. Baseline and 8-month psychological distress, although common in the sample, did not differ according to the site of injury. Changes in appearance concern were, however, strongly associated with psychological distress at follow-up. We conclude that although appearance concern is severe among some people with facial injury, it is not especially different from that of people with non-facial injuries or the general public; changes in appearance concern, however, appear to correlate with psychological distress. We therefore suggest that interventions might focus on those with heightened appearance concern and should target cognitive bias and psychological distress. Copyright © 2017 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  9. Intact mirror mechanisms for automatic facial emotions in children and adolescents with autism spectrum disorder.

    Science.gov (United States)

    Schulte-Rüther, Martin; Otte, Ellen; Adigüzel, Kübra; Firk, Christine; Herpertz-Dahlmann, Beate; Koch, Iring; Konrad, Kerstin

    2017-02-01

    It has been suggested that an early deficit in the human mirror neuron system (MNS) is an important feature of autism. Recent findings related to simple hand and finger movements do not support a general dysfunction of the MNS in autism. Studies investigating facial actions (e.g., emotional expressions) have been more consistent; however, they mostly relied on passive observation tasks. We used a new variant of a compatibility task for the assessment of automatic facial mimicry responses that allowed for simultaneous control of attention to facial stimuli. We used facial electromyography in 18 children and adolescents with Autism spectrum disorder (ASD) and 18 typically developing controls (TDCs). We observed a robust compatibility effect in ASD, that is, the execution of a facial expression was facilitated if a congruent facial expression was observed. Time course analysis of RT distributions and comparison to a classic compatibility task (symbolic Simon task) revealed that the facial compatibility effect appeared early and increased with time, suggesting fast and sustained activation of motor codes during observation of facial expressions. We observed a negative correlation of the compatibility effect with age across participants and in ASD, and a positive correlation between self-rated empathy and congruency for smiling faces in TDC but not in ASD. This pattern of results suggests that basic motor mimicry is intact in ASD, but is not associated with complex social cognitive abilities such as emotion understanding and empathy. Autism Res 2017, 10: 298-310. © 2016 International Society for Autism Research, Wiley Periodicals, Inc.

  10. Opioid modulation of facial itch- and pain-related responses and grooming behavior in rats.

    Science.gov (United States)

    Spradley, Jessica M; Davoodi, Auva; Carstens, Mirela Iodi; Carstens, Earl

    2012-09-01

    Intradermal facial injections of pruritogens or algogens elicit distinct behavioral hindlimb scratch or forelimb wiping responses in rodents. We systematically investigated the parameters and opioid modulation of these evoked behaviors and spontaneous facial grooming in rats. Serotonin (5-HT) elicited hindlimb scratch bouts with few wipes. Scratching was attenuated by the µ-opiate antagonist naltrexone but not morphine. In contrast, cheek injection of mustard oil (allyl isothiocyanate (AITC)) elicited ipsilateral forelimb wipes but little hindlimb scratching. AITC-evoked wiping was significantly attenuated by morphine but not naltrexone. Spontaneous facial grooming by the forepaws was attenuated by naltrexone, whereas morphine did not affect grooming behavior before or after cheek injections of 5-HT or AITC. These data validate that the rodent "cheek" model discriminates between itch- and pain-related behaviors. Naltrexone sensitivity of facial grooming and 5-HT-evoked scratching suggests a common functionality. Forelimb wipes may represent a nocifensive response akin to rubbing an injury to relieve pain.

  11. Tracking Subtle Stereotypes of Children with Trisomy 21: From Facial-Feature-Based to Implicit Stereotyping

    OpenAIRE

    Enea-Drapeau , Claire; Carlier , Michèle; Huguet , Pascal

    2012-01-01

    International audience; Background: Stigmatization is one of the greatest obstacles to the successful integration of people with Trisomy 21 (T21 or Down syndrome), the most frequent genetic disorder associated with intellectual disability. Research on attitudes and stereotypes toward these people still focuses on explicit measures subjected to social-desirability biases, and neglects how variability in facial stigmata influences attitudes and stereotyping. Methodology/Principal Findings: The parti...

  12. Misrecognition of facial expressions in delinquents

    Directory of Open Access Journals (Sweden)

    Matsuura Naomi

    2009-09-01

    Full Text Available Abstract Background Previous reports have suggested impairment in facial expression recognition in delinquents, but controversy remains with respect to how such recognition is impaired. To address this issue, we investigated facial expression recognition in delinquents in detail. Methods We tested 24 male adolescent/young adult delinquents incarcerated in correctional facilities. We compared their performances with those of 24 age- and gender-matched control participants. Using standard photographs of facial expressions illustrating six basic emotions, participants matched each emotional facial expression with an appropriate verbal label. Results Delinquents were less accurate in the recognition of facial expressions that conveyed disgust than were control participants. The delinquents misrecognized the facial expressions of disgust as anger more frequently than did controls. Conclusion These results suggest that one of the underpinnings of delinquency might be impaired recognition of emotional facial expressions, with a specific bias toward interpreting disgusted expressions as hostile angry expressions.

  13. Peripheral facial palsy in children.

    Science.gov (United States)

    Yılmaz, Unsal; Cubukçu, Duygu; Yılmaz, Tuba Sevim; Akıncı, Gülçin; Ozcan, Muazzez; Güzel, Orkide

    2014-11-01

    The aim of this study is to evaluate the types and clinical characteristics of peripheral facial palsy in children. The hospital charts of children diagnosed with peripheral facial palsy were reviewed retrospectively. A total of 81 children (42 female and 39 male) with a mean age of 9.2 ± 4.3 years were included in the study. The causes of facial palsy were idiopathic facial palsy (Bell palsy) in 65 (80.2%), otitis media/mastoiditis in 9 (11.1%), and tumor, trauma, congenital facial palsy, chickenpox, Melkersson-Rosenthal syndrome, enlarged lymph nodes, and familial Mediterranean fever in 1 each (1.2%). Five (6.1%) patients had recurrent attacks. In patients with Bell palsy, the female/male and right/left ratios were 36/29 and 35/30, respectively. Of them, 31 (47.7%) had a history of preceding infection. The overall rate of complete recovery was 98.4%. A wide variety of disorders can present with peripheral facial palsy in children; therefore, careful investigation and differential diagnosis are essential. © The Author(s) 2013.

  14. Facial expressions and pair bonds in hylobatids.

    Science.gov (United States)

    Florkiewicz, Brittany; Skollar, Gabriella; Reichard, Ulrich H

    2018-06-06

    Facial expressions are an important component of primate communication that functions to transmit social information and modulate intentions and motivations. Chimpanzees and macaques, for example, produce a variety of facial expressions when communicating with conspecifics. Hylobatids also produce various facial expressions; however, the origin and function of these facial expressions are still largely unclear. It has been suggested that larger facial expression repertoires may have evolved in the context of social complexity, but this link has yet to be tested at a broader empirical basis. The social complexity hypothesis offers a possible explanation for the evolution of complex communicative signals such as facial expressions, because as the complexity of an individual's social environment increases so does the need for communicative signals. We used an intraspecies, pair-focused study design to test the link between facial expressions and sociality within hylobatids, specifically the strength of pair-bonds. The current study compared 206 hr of video and 103 hr of focal animal data for ten hylobatid pairs from three genera (Nomascus, Hoolock, and Hylobates) living at the Gibbon Conservation Center. Using video footage, we explored 5,969 facial expressions along three dimensions: repertoire use, repertoire breadth, and facial expression synchrony [FES]. We then used focal animal data to compare dimensions of facial expressiveness to pair bond strength and behavioral synchrony. Hylobatids in our study overlapped in only half of their facial expressions (50%) with the only other detailed, quantitative study of hylobatid facial expressions, while 27 facial expressions were uniquely observed in our study animals. Taken together, hylobatids have a large facial expression repertoire of at least 80 unique facial expressions. Contrary to our prediction, facial repertoire composition was not significantly correlated with pair bond strength, rates of territorial synchrony ...

  15. Elevated responses to constant facial emotions in different faces in the human amygdala: an fMRI study of facial identity and expression

    Directory of Open Access Journals (Sweden)

    Weiller Cornelius

    2004-11-01

    Full Text Available Abstract Background Human faces provide important signals in social interactions by conveying two main types of information, individual identity and emotional expression. The ability to readily assess both the variability and consistency of emotional expressions in different individuals is central to one's own interpretation of the imminent environment. A factorial design was used to systematically test the interaction of either constant or variable emotional expressions with constant or variable facial identities in areas involved in face processing using functional magnetic resonance imaging. Results Previous studies suggest a predominant role of the amygdala in the assessment of emotional variability. Here we extend this view by showing that this structure activated to faces with changing identities that display constant emotional expressions. Within this condition, amygdala activation was dependent on the type and intensity of displayed emotion, with significant responses to fearful expressions and, to a lesser extent, to neutral and happy expressions. In contrast, the lateral fusiform gyrus showed a binary pattern of increased activation to changing stimulus features while it was also differentially responsive to the intensity of displayed emotion when processing different facial identities. Conclusions These results suggest that the amygdala might serve to detect constant facial emotions in different individuals, complementing its established role for detecting emotional variability.

  16. Facial talon cusps.

    LENUS (Irish Health Repository)

    McNamara, T

    1997-12-01

    This is a report of two patients with isolated facial talon cusps. One occurred on a permanent mandibular central incisor, the other on a permanent maxillary canine. The locations of these talon cusps suggest that the definition of a talon cusp be extended to include teeth beyond the incisor group and the facial aspect of teeth.

  17. The impact of face skin tone on perceived facial attractiveness: A study realized with an innovative methodology.

    Science.gov (United States)

    Vera Cruz, Germano

    2017-12-19

    This study aimed to assess the impact of target faces' skin tone and perceivers' skin tone on the participants' attractiveness judgment regarding a symmetrical representative range of target faces as stimuli. Presented with a set of facial features, 240 Mozambican adults rated their attractiveness along a continuous scale. ANOVA and Chi-square were used to analyze the data. The results revealed that the skin tone of the target faces had an impact on the participants' attractiveness judgment. Overall, participants preferred light-skinned faces over dark-skinned ones. This finding is not only consistent with previous results on skin tone preferences, but it is even more powerful because it demonstrates that the light skin tone preference occurs regardless of the symmetry and baseline attractiveness of the stimuli.

  18. Imaging the Facial Nerve: A Contemporary Review

    International Nuclear Information System (INIS)

    Gupta, S.; Roehm, P.C.; Mends, F.; Hagiwara, M.; Fatterpekar, G.

    2013-01-01

    Imaging plays a critical role in the evaluation of a number of facial nerve disorders. The facial nerve has a complex anatomical course; thus, a thorough understanding of the course of the facial nerve is essential to localize the sites of pathology. Facial nerve dysfunction can occur from a variety of causes, which can often be identified on imaging. Computed tomography and magnetic resonance imaging are helpful for identifying bony facial canal and soft tissue abnormalities, respectively. Ultrasound of the facial nerve has been used to predict functional outcomes in patients with Bell’s palsy. More recently, diffusion tensor tractography has appeared as a new modality which allows three-dimensional display of facial nerve fibers.

  19. Angular photogrammetric soft tissue facial profile analysis of Bangladeshi young adults

    Directory of Open Access Journals (Sweden)

    Lubna Akter

    2017-01-01

    planning by specialists such as orthodontists, prosthodontists, plastic surgeons, and maxillofacial surgeons, who have the capability to change the soft tissue facial features.

  20. Facial Resemblance Exaggerates Sex-Specific Jealousy-Based Decisions

    Directory of Open Access Journals (Sweden)

    Steven M. Platek

    2007-01-01

    Full Text Available Sex differences in reaction to a romantic partner's infidelity are well documented and are hypothesized to be attributable to sex-specific jealousy mechanisms, which are utilized to solve adaptive problems associated with the risk of extra-pair copulation. Males, because of the risk of cuckoldry, become more upset by sexual infidelity, while females, because of the loss of resources and biparental investment, tend to become more distressed by emotional infidelity. However, the degree to which these sex-specific reactions to jealousy interact with cues to kin is completely unknown. Here we investigated the interaction of facial resemblance with decisions about sex-specific jealousy scenarios. Fifty-nine volunteers were asked to imagine that two different people (represented by facial composites) informed them about their romantic partner's sexual or emotional infidelity. Consistent with previous research, males ranked sexual infidelity scenarios as most upsetting and females ranked emotional infidelity scenarios most upsetting. However, when information about the infidelity was provided by a face that resembled the subject, sex-specific reactions to jealousy were exaggerated. This finding highlights the use of facial resemblance as a putative self-referent phenotypic matching cue that impacts trusting behavior in sexual contexts.

  1. Facial nerve palsy due to birth trauma

    Science.gov (United States)

    Seventh cranial nerve palsy due to birth trauma; Facial palsy - birth trauma; Facial palsy - neonate; Facial palsy - infant ... An infant's facial nerve is also called the seventh cranial nerve. It can be damaged just before or at the time of delivery. ...

  2. A case of oral-facial-digital syndrome with overlapping manifestations of type V and type VI: a possible new OFD syndrome

    International Nuclear Information System (INIS)

    Chung Wongyiu; Chung Laupo

    1999-01-01

    We report a child with clinical and radiological manifestations characteristic of both Váradi syndrome (oral-facial-digital syndrome type VI) and Thurston syndrome (oral-facial-digital syndrome type V). These findings have not been reported previously, and we believe that they represent a new variant. (orig.)

  3. Electrical and transcranial magnetic stimulation of the facial nerve: diagnostic relevance in acute isolated facial nerve palsy.

    Science.gov (United States)

    Happe, Svenja; Bunten, Sabine

    2012-01-01

    Unilateral facial weakness is common. Transcranial magnetic stimulation (TMS) allows identification of a conduction failure at the level of the canalicular portion of the facial nerve and may help to confirm the diagnosis. We retrospectively analyzed 216 patients with the diagnosis of peripheral facial palsy. The electrophysiological investigations included the blink reflex, preauricular electrical stimulation, and the response to TMS at the labyrinthine part of the canalicular portion of the facial nerve within 3 days after symptom onset. A similar reduction or loss of the TMS amplitude occurred in peripheral facial palsy without being specific for Bell's palsy. These data shed light on the TMS-based diagnosis of peripheral facial palsy, an ability to localize the site of the lesion within the Fallopian channel regardless of the underlying pathology. Copyright © 2012 S. Karger AG, Basel.

  4. Less Empathic and More Reactive: The Different Impact of Childhood Maltreatment on Facial Mimicry and Vagal Regulation.

    Directory of Open Access Journals (Sweden)

    Martina Ardizzi

    Full Text Available Facial mimicry and vagal regulation represent two crucial physiological responses to others' facial expressions of emotions. Facial mimicry, defined as the automatic, rapid and congruent electromyographic activation to others' facial expressions, is implicated in empathy, emotional reciprocity and emotions recognition. Vagal regulation, quantified by the computation of Respiratory Sinus Arrhythmia (RSA), exemplifies the autonomic adaptation to contingent social cues. Although it has been demonstrated that childhood maltreatment induces alterations in the processing of the facial expression of emotions, both at an explicit and implicit level, the effects of maltreatment on children's facial mimicry and vagal regulation in response to facial expressions of emotions remain unknown. The purpose of the present study was to fill this gap, involving 24 street-children (maltreated group) and 20 age-matched controls (control group). We recorded their spontaneous facial electromyographic activations of corrugator and zygomaticus muscles and RSA responses during the visualization of the facial expressions of anger, fear, joy and sadness. Results demonstrated a different impact of childhood maltreatment on facial mimicry and vagal regulation. Maltreated children did not show the typical positive-negative modulation of corrugator mimicry. Furthermore, when only negative facial expressions were considered, maltreated children demonstrated lower corrugator mimicry than controls. With respect to vagal regulation, whereas maltreated children manifested the expected and functional inverse correlation between RSA value at rest and RSA response to angry facial expressions, controls did not. These results describe an early and divergent functional adaptation to hostile environment of the two investigated physiological mechanisms. On the one side, maltreatment leads to the suppression of the spontaneous facial mimicry normally concurring to empathic understanding of ...

  5. Facial Muscle Coordination in Monkeys During Rhythmic Facial Expressions and Ingestive Movements

    Science.gov (United States)

    Shepherd, Stephen V.; Lanzilotto, Marco; Ghazanfar, Asif A.

    2012-01-01

    Evolutionary hypotheses regarding the origins of communication signals generally, and primate orofacial communication signals in particular, suggest that these signals derive by ritualization of noncommunicative behaviors, notably including ingestive behaviors such as chewing and nursing. These theories are appealing in part because of the prominent periodicities in both types of behavior. Despite their intuitive appeal, however, there are little or no data with which to evaluate these theories because the coordination of muscles innervated by the facial nucleus has not been carefully compared between communicative and ingestive movements. Such data are especially crucial for reconciling neurophysiological assumptions regarding facial motor control in communication and ingestion. We here address this gap by contrasting the coordination of facial muscles during different types of rhythmic orofacial behavior in macaque monkeys, finding that the perioral muscles innervated by the facial nucleus are rhythmically coordinated during lipsmacks and that this coordination appears distinct from that observed during ingestion. PMID:22553017

  6. Facial Action Units Recognition: A Comparative Study

    NARCIS (Netherlands)

    Popa, M.C.; Rothkrantz, L.J.M.; Wiggers, P.; Braspenning, R.A.C.; Shan, C.

    2011-01-01

    Many approaches to facial expression recognition focus on assessing the six basic emotions (anger, disgust, happiness, fear, sadness, and surprise). Real-life situations have proved to produce many more subtle facial expressions. A reliable way of analyzing facial behavior is the Facial Action Coding System (FACS).

  7. Pediatric facial injuries: It's management.

    Science.gov (United States)

    Singh, Geeta; Mohammad, Shadab; Pal, U S; Hariram; Malkunje, Laxman R; Singh, Nimisha

    2011-07-01

    Facial injuries in children always present a challenge in respect of their diagnosis and management. Since these children are of a growing age, every care should be taken to ensure that the overall growth pattern of the facial skeleton is not later jeopardized. The aim was to assess the most feasible method for the management of facial injuries in children without hampering facial growth. Sixty child patients with facial trauma were selected randomly for this study. On the basis of examination and investigations, a suitable management approach involving rest and observation, open or closed reduction and immobilization, trans-osseous (TO) wiring, mini bone plate fixation, splinting and replantation, elevation and fixation of zygoma, etc. was carried out. In our study, fall was the predominant cause of facial injuries in children. There was a 1.09% incidence of facial injuries in children up to 16 years of age amongst the total patients. The age-wise distribution of fractures amongst groups (I, II and III) was 26.67%, 51.67% and 21.67%, respectively. The male to female patient ratio was 3:1. The majority of facial injuries were seen in Group II patients (6-11 years), i.e. 51.67%. Mandibular fracture was the most common fracture (0.60%), followed by dentoalveolar (0.27%), mandibular + midface (0.07%) and midface (0.02%) fractures. Most mandibular fractures were found in the parasymphysis region. Simple fracture appears to be the commonest in the mandible. Most mandibular and midface fractures in children were amenable to conservative therapies, except a few which required surgical intervention.

  8. Computer-analyzed facial expression as a surrogate marker for autism spectrum social core symptoms.

    Directory of Open Access Journals (Sweden)

    Keiho Owada

    Full Text Available To develop novel interventions for autism spectrum disorder (ASD) core symptoms, valid, reliable, and sensitive longitudinal outcome measures are required for detecting symptom change over time. Here, we tested whether a computerized analysis of quantitative facial expression measures could act as a marker for core ASD social symptoms. Facial expression intensity values during a semi-structured socially interactive situation extracted from the Autism Diagnostic Observation Schedule (ADOS) were quantified by dedicated software in 18 high-functioning adult males with ASD. Controls were 17 age-, gender-, parental socioeconomic background-, and intellectual level-matched typically developing (TD) individuals. Statistical analyses determined whether values representing the strength and variability of each facial expression element differed significantly between the ASD and TD groups and whether they correlated with ADOS reciprocal social interaction scores. Compared with the TD controls, facial expressions in the ASD group appeared more "Neutral" (d = 1.02, P = 0.005, PFDR < 0.05) with lower variability in Happy expression (d = 1.10, P = 0.003, PFDR < 0.05). Moreover, the stronger Neutral facial expressions in the ASD participants were positively correlated with poorer ADOS reciprocal social interaction scores (ρ = 0.48, P = 0.042). These findings indicate that our method for quantitatively measuring reduced facial expressivity during social interactions can be a promising marker for core ASD social symptoms.

  9. Computer-analyzed facial expression as a surrogate marker for autism spectrum social core symptoms.

    Science.gov (United States)

    Owada, Keiho; Kojima, Masaki; Yassin, Walid; Kuroda, Miho; Kawakubo, Yuki; Kuwabara, Hitoshi; Kano, Yukiko; Yamasue, Hidenori

    2018-01-01

    To develop novel interventions for autism spectrum disorder (ASD) core symptoms, valid, reliable, and sensitive longitudinal outcome measures are required for detecting symptom change over time. Here, we tested whether a computerized analysis of quantitative facial expression measures could act as a marker for core ASD social symptoms. Facial expression intensity values during a semi-structured socially interactive situation extracted from the Autism Diagnostic Observation Schedule (ADOS) were quantified by dedicated software in 18 high-functioning adult males with ASD. Controls were 17 age-, gender-, parental socioeconomic background-, and intellectual level-matched typically developing (TD) individuals. Statistical analyses determined whether values representing the strength and variability of each facial expression element differed significantly between the ASD and TD groups and whether they correlated with ADOS reciprocal social interaction scores. Compared with the TD controls, facial expressions in the ASD group appeared more "Neutral" (d = 1.02, P = 0.005, PFDR < 0.05), with greater intensity of Neutral expression (d = 1.08, P = 0.003, PFDR < 0.05) and lower variability in Happy expression (d = 1.10, P = 0.003, PFDR < 0.05). Moreover, the stronger Neutral facial expressions in the ASD participants were positively correlated with poorer ADOS reciprocal social interaction scores (ρ = 0.48, P = 0.042). These findings indicate that our method for quantitatively measuring reduced facial expressivity during social interactions can be a promising marker for core ASD social symptoms.

  10. Cerebral Angiographic Findings of Cosmetic Facial Filler-related Ophthalmic and Retinal Artery Occlusion.

    Science.gov (United States)

    Kim, Yong-Kyu; Jung, Cheolkyu; Woo, Se Joon; Park, Kyu Hyung

    2015-12-01

    Cosmetic facial filler-related ophthalmic artery occlusion is a rare but devastating complication, and its exact pathophysiology is still elusive. Cerebral angiography provides more detailed information on blood flow of the ophthalmic artery, as well as the surrounding orbital area, that cannot be obtained with fundus fluorescein angiography. This study aimed to evaluate the cerebral angiographic features of patients with cosmetic facial filler-related ophthalmic artery occlusion. We retrospectively reviewed cerebral angiography of 7 patients (4 hyaluronic acid [HA]- and 3 autologous fat-injected cases) showing occlusion of the ophthalmic artery and its branches after cosmetic facial filler injections, who underwent intra-arterial thrombolysis. On selective ophthalmic artery angiograms, all fat-injected patients showed a large filling defect in the proximal ophthalmic artery, whereas the HA-injected patients showed occlusion of the distal branches of the ophthalmic artery. Three HA-injected patients revealed diminished distal runoff of the internal maxillary and facial arteries, which clinically corresponded with skin necrosis. However, all fat-injected patients and one HA-injected patient who was immediately treated with subcutaneous hyaluronidase injection showed preserved distal runoff of the internal maxillary and facial arteries and mild skin problems. The size difference between the injected materials seems to be associated with the different angiographic findings: autologous fat is more prone to obstruct the proximal part of the ophthalmic artery, whereas HA obstructs the distal branches. In addition, the hydrophilic and volume-expanding properties of HA might further compromise blood flow in the injected area, which is also related to skin necrosis. Intra-arterial thrombolysis has a limited role in reconstituting blood flow or regaining vision in cosmetic facial filler-associated ophthalmic artery occlusions.

  11. Multiple recurrent and de novo odontogenic keratocysts associated with oral-facial-digital syndrome

    NARCIS (Netherlands)

    Lindeboom, Jerome A. H.; Kroon, Frans H. M.; de Vires, Jan; van den Akker, Hans P.

    2003-01-01

    In 1954, Papillon-Leage and Psaume were the first to describe the clinical characteristics of oral-facial-digital syndrome (OFDS). On the basis of their clinical features and the inheritance pattern, 2 variants were initially distinguished, namely OFDS type I (Papillon-Leage and Psaume) and OFDS

  12. Virtual 3-D Facial Reconstruction

    Directory of Open Access Journals (Sweden)

    Martin Paul Evison

    2000-06-01

    Full Text Available Facial reconstructions in archaeology allow empathy with people who lived in the past and enjoy considerable popularity with the public. It is a common misconception that facial reconstruction will produce an exact likeness; a resemblance is the best that can be hoped for. Research at Sheffield University is aimed at the development of a computer system for facial reconstruction that will be accurate, rapid, repeatable, accessible and flexible. This research is described and prototypical 3-D facial reconstructions are presented. Interpolation models simulating obesity, ageing and ethnic affiliation are also described. Some strengths and weaknesses in the models, and their potential for application in archaeology are discussed.

  13. Facial Baroparesis Caused by Scuba Diving

    Directory of Open Access Journals (Sweden)

    Daisuke Kamide

    2012-01-01

    tympanic membrane and right facial palsy without other neurological findings. However, the facial palsy disappeared immediately after myringotomy. We considered the etiology in this case to be neuropraxia of the facial nerve in the middle ear caused by overpressure of the middle ear.

  14. Botulinum Toxin (Botox) for Facial Wrinkles

    Science.gov (United States)

  15. High-intensity facial nerve lesions on T2-weighted images in chronic persistent facial nerve palsy

    Energy Technology Data Exchange (ETDEWEB)

    Kinoshita, T. [Dept. of Radiology, Sendai City Hospital, Sendai (Japan); Dept. of Radiology, Tottori Univ. (Japan); Ishii, K. [Dept. of Radiology, Sendai City Hospital, Sendai (Japan); Okitsu, T. [Dept. of Otolaryngology, Sendai City Hospital (Japan); Ogawa, T. [Dept. of Radiology, Tottori Univ. (Japan); Okudera, T. [Dept. of Radiology, Research Inst. of Brain and Blood Vessels-Akita, Akita (Japan)

    2001-05-01

    Our aim was to estimate the value of MRI in detecting irreversibly paralysed facial nerves. We examined 95 consecutive patients with a facial nerve palsy (14 with a persistent palsy and 81 with good recovery), using a 1.0 T unit, with T2-weighted and contrast-enhanced T1-weighted images. The geniculate ganglion and tympanic segment gave high signal on T2-weighted images in the chronic stage of persistent palsy, but not in acute palsy. The enhancement pattern of the facial nerve in chronic persistent facial nerve palsy is similar to that in acute palsy with good recovery. These findings suggest that T2-weighted MRI can be used to show severely damaged facial nerves. (orig.)

  16. Operant conditioning of facial displays of pain.

    Science.gov (United States)

    Kunz, Miriam; Rainville, Pierre; Lautenbacher, Stefan

    2011-06-01

    The operant model of chronic pain posits that nonverbal pain behavior, such as facial expressions, is sensitive to reinforcement, but experimental evidence supporting this assumption is sparse. The aim of the present study was to investigate in a healthy population a) whether facial pain behavior can indeed be operantly conditioned using a discriminative reinforcement schedule to increase and decrease facial pain behavior and b) to what extent these changes affect pain experience indexed by self-ratings. In the experimental group (n = 29), the participants were reinforced every time that they showed pain-indicative facial behavior (up-conditioning) or a neutral expression (down-conditioning) in response to painful heat stimulation. Once facial pain behavior was successfully up- or down-conditioned, respectively (which occurred in 72% of participants), facial pain displays and self-report ratings were assessed. In addition, a control group (n = 11) was used that was yoked to the reinforcement plans of the experimental group. During the conditioning phases, reinforcement led to significant changes in facial pain behavior in the majority of the experimental group (p < .05) but not in the yoked control group (p > .136). Fine-grained analyses of facial muscle movements revealed a similar picture. Furthermore, the decline in facial pain displays (as observed during down-conditioning) strongly predicted changes in pain ratings (R² = 0.329). These results suggest that a) facial pain displays are sensitive to reinforcement and b) changes in facial pain displays can affect self-report ratings.

  17. Reconocimiento facial

    OpenAIRE

    Urtiaga Abad, Juan Alfonso

    2014-01-01

    This project deals with one of the most challenging fields of artificial intelligence: facial recognition. Something as simple for people as recognizing a familiar face translates into complex algorithms and thousands of items of data processed in a matter of seconds. The project begins with a survey of the state of the art in facial recognition techniques, from the most widely used and proven ones, such as PCA and LDA, to experimental techniques that use ...
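
    The PCA technique named in this record can be illustrated with a minimal eigenfaces-style sketch. This is not the project's actual pipeline: the synthetic data, image size, and component count below are arbitrary assumptions chosen only to show the projection-and-nearest-neighbour idea.

```python
# Minimal eigenfaces-style PCA matching sketch (synthetic data only).
import numpy as np

rng = np.random.default_rng(0)

# Pretend "gallery": 10 flattened 8x8 face images (64 pixels each).
gallery = rng.normal(size=(10, 64))

# Center the data and take the top principal components via SVD.
mean_face = gallery.mean(axis=0)
centered = gallery - mean_face
_, _, vt = np.linalg.svd(centered, full_matrices=False)
components = vt[:5]                      # top-5 "eigenfaces"

# Project the gallery and a probe image into the PCA subspace.
proj_gallery = centered @ components.T
probe = gallery[3] + 0.01 * rng.normal(size=64)   # noisy copy of face 3
proj_probe = (probe - mean_face) @ components.T

# Nearest neighbour in the subspace identifies the match.
match = int(np.argmin(np.linalg.norm(proj_gallery - proj_probe, axis=1)))
print(match)  # → 3
```

    LDA differs in that it would choose projection directions that separate labeled identity classes rather than directions of maximum variance.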

  18. Facial Displays Are Tools for Social Influence.

    Science.gov (United States)

    Crivelli, Carlos; Fridlund, Alan J

    2018-05-01

    Based on modern theories of signal evolution and animal communication, the behavioral ecology view of facial displays (BECV) reconceives our 'facial expressions of emotion' as social tools that serve as lead signs to contingent action in social negotiation. BECV offers an externalist, functionalist view of facial displays that is not bound to Western conceptions about either expressions or emotions. It easily accommodates recent findings of diversity in facial displays, their public context-dependency, and the curious but common occurrence of solitary facial behavior. Finally, BECV restores continuity of human facial behavior research with modern functional accounts of non-human communication, and provides a non-mentalistic account of facial displays well-suited to new developments in artificial intelligence and social robotics. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  19. Aggressive osteoblastoma in mastoid process of temporal bone with facial palsy

    Directory of Open Access Journals (Sweden)

    Manoj Jain

    2013-01-01

    Full Text Available Osteoblastoma is an uncommon primary bone tumor with a predilection for the posterior elements of the spine. Its occurrence in the temporal bone and middle ear is extremely rare. Clinical symptoms are non-specific, and cranial nerve involvement is uncommon. The cytomorphological features of osteoblastoma are not very well defined, and experience is limited to only a few reports. We report an interesting and rare case of aggressive osteoblastoma, with progressive hearing loss and facial palsy, involving the mastoid process of the temporal bone and middle ear, along with a description of its cytomorphological features.

  20. Does facial resemblance enhance cooperation?

    Directory of Open Access Journals (Sweden)

    Trang Giang

    Full Text Available Facial self-resemblance has been proposed to serve as a kinship cue that facilitates cooperation between kin. In the present study, facial resemblance was manipulated by morphing stimulus faces with the participants' own faces or control faces (resulting in self-resemblant or other-resemblant composite faces). A norming study showed that the perceived degree of kinship between the participants and the self-resemblant composite faces was higher than that for actual first-degree relatives. Effects of facial self-resemblance on trust and cooperation were tested in a paradigm that has proven to be sensitive to facial trustworthiness, facial likability, and facial expression. First, participants played a cooperation game in which the composite faces were shown. Then, likability ratings were assessed. In a source memory test, participants were required to identify old and new faces, and were asked to remember whether the faces belonged to cooperators or cheaters in the cooperation game. Old-new recognition was enhanced for self-resemblant faces in comparison to other-resemblant faces. However, facial self-resemblance had no effects on the degree of cooperation in the cooperation game, on the emotional evaluation of the faces as reflected in the likability judgments, or on the expectation that a face belonged to a cooperator rather than to a cheater. Therefore, the present results are clearly inconsistent with the assumption of an evolved kin recognition module built into the human face recognition system.

  1. Facial skin care products and cosmetics.

    Science.gov (United States)

    Draelos, Zoe Diana

    2014-01-01

    Facial skin care products and cosmetics can both aid or incite facial dermatoses. Properly selected skin care can create an environment for barrier repair aiding in the re-establishment of a healing biofilm and diminution of facial redness; however, skin care products that aggressively remove intercellular lipids or cause irritation must be eliminated before the red face will resolve. Cosmetics are an additive variable either aiding or challenging facial skin health. Copyright © 2014 Elsevier Inc. All rights reserved.

  2. A Statistical Model for Synthesis of Detailed Facial Geometry

    OpenAIRE

    Golovinskiy, Aleksey; Matusik, Wojciech; Pfister, Hanspeter; Rusinkiewicz, Szymon; Funkhouser, Thomas

    2006-01-01

    Detailed surface geometry contributes greatly to the visual realism of 3D face models. However, acquiring high-resolution face geometry is often tedious and expensive. Consequently, most face models used in games, virtual reality, or computer vision look unrealistically smooth. In this paper, we introduce a new statistical technique for the analysis and synthesis of small three-dimensional facial features, such as wrinkles and pores. We acquire high-resolution face geometry for people across ...

  3. A computational model of the development of separate representations of facial identity and expression in the primate visual system.

    Science.gov (United States)

    Tromans, James Matthew; Harris, Mitchell; Stringer, Simon Maitland

    2011-01-01

    Experimental studies have provided evidence that the visual processing areas of the primate brain represent facial identity and facial expression within different subpopulations of neurons. For example, in non-human primates there is evidence that cells within the inferior temporal gyrus (TE) respond primarily to facial identity, while cells within the superior temporal sulcus (STS) respond to facial expression. More recently, it has been found that the orbitofrontal cortex (OFC) of non-human primates contains some cells that respond exclusively to changes in facial identity, while other cells respond exclusively to facial expression. How might the primate visual system develop physically separate representations of facial identity and expression given that the visual system is always exposed to simultaneous combinations of facial identity and expression during learning? In this paper, a biologically plausible neural network model, VisNet, of the ventral visual pathway is trained on a set of carefully-designed cartoon faces with different identities and expressions. The VisNet model architecture is composed of a hierarchical series of four Self-Organising Maps (SOMs), with associative learning in the feedforward synaptic connections between successive layers. During learning, the network develops separate clusters of cells that respond exclusively to either facial identity or facial expression. We interpret the performance of the network in terms of the learning properties of SOMs, which are able to exploit the statistical independence between facial identity and expression.

  4. A computational model of the development of separate representations of facial identity and expression in the primate visual system.

    Directory of Open Access Journals (Sweden)

    James Matthew Tromans

    Full Text Available Experimental studies have provided evidence that the visual processing areas of the primate brain represent facial identity and facial expression within different subpopulations of neurons. For example, in non-human primates there is evidence that cells within the inferior temporal gyrus (TE) respond primarily to facial identity, while cells within the superior temporal sulcus (STS) respond to facial expression. More recently, it has been found that the orbitofrontal cortex (OFC) of non-human primates contains some cells that respond exclusively to changes in facial identity, while other cells respond exclusively to facial expression. How might the primate visual system develop physically separate representations of facial identity and expression given that the visual system is always exposed to simultaneous combinations of facial identity and expression during learning? In this paper, a biologically plausible neural network model, VisNet, of the ventral visual pathway is trained on a set of carefully-designed cartoon faces with different identities and expressions. The VisNet model architecture is composed of a hierarchical series of four Self-Organising Maps (SOMs), with associative learning in the feedforward synaptic connections between successive layers. During learning, the network develops separate clusters of cells that respond exclusively to either facial identity or facial expression. We interpret the performance of the network in terms of the learning properties of SOMs, which are able to exploit the statistical independence between facial identity and expression.
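
    The Self-Organising Map at the heart of the VisNet layers can be illustrated with a minimal single-layer sketch (the parameters and the 1-D map topology below are illustrative assumptions; VisNet itself stacks four layers with feedforward associative learning):

```python
import numpy as np

def train_som(X, n_units=16, n_epochs=30, lr=0.5, sigma=3.0, seed=0):
    """Minimal 1-D self-organising map: each input pulls its best-matching
    unit and that unit's neighbours toward it, with a Gaussian neighbourhood
    that shrinks over training so nearby units come to respond to similar inputs."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(n_units, X.shape[1]))
    units = np.arange(n_units)
    for epoch in range(n_epochs):
        s = sigma * (1.0 - epoch / n_epochs) + 0.5   # shrinking neighbourhood width
        for x in X[rng.permutation(len(X))]:
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))          # best-matching unit
            h = np.exp(-((units - bmu) ** 2) / (2.0 * s ** 2))   # neighbourhood weights
            W += lr * h[:, None] * (x - W)
    return W

def best_unit(W, x):
    """Index of the unit whose weight vector is closest to x."""
    return int(np.argmin(((W - x) ** 2).sum(axis=1)))
```

    After training on inputs from two well-separated clusters, distinct units win for each cluster; this is the sense in which a SOM can develop separate clusters of cells for statistically independent factors such as identity and expression.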

  5. The Two Sides of Beauty: Laterality and the Duality of Facial Attractiveness

    Science.gov (United States)

    Franklin, Robert G., Jr.; Adams, Reginald B., Jr.

    2010-01-01

    We hypothesized that facial attractiveness represents a dual judgment, a combination of reward-based, sexual processes, and aesthetic, cognitive processes. Herein we describe a study that demonstrates that sexual and nonsexual processes both contribute to attractiveness judgments and that these processes can be dissociated. Female participants…

  6. Pseudotumoural hypertrophic neuritis of the facial nerve

    OpenAIRE

    Zanoletti, E; Mazzoni, A; Barbò, R

    2008-01-01

    In a retrospective study of our cases of recurrent paralysis of the facial nerve of tumoural and non-tumoural origin, a tumour-like lesion of the intra-temporal course of the facial nerve, mimicking facial nerve schwannoma, was found and investigated in 4 cases. This was defined as pseudotumoural hypertrophic neuritis of the facial nerve. The picture was one of recurrent acute facial palsy with incomplete recovery and imaging of a benign tumour. It was different from the well-known recurrent ...

  7. Reduced Reliance on Optimal Facial Information for Identity Recognition in Autism Spectrum Disorder

    Science.gov (United States)

    Leonard, Hayley C.; Annaz, Dagmara; Karmiloff-Smith, Annette; Johnson, Mark H.

    2013-01-01

    Previous research into face processing in autism spectrum disorder (ASD) has revealed atypical biases toward particular facial information during identity recognition. Specifically, a focus on features (or high spatial frequencies [HSFs]) has been reported for both face and nonface processing in ASD. The current study investigated the development…

  8. Imaging of the facial nerve

    Energy Technology Data Exchange (ETDEWEB)

    Veillon, F. [Service de Radiologie I, Hopital de Hautepierre, 67098 Strasbourg Cedex (France)], E-mail: Francis.Veillon@chru-strasbourg.fr; Ramos-Taboada, L.; Abu-Eid, M. [Service de Radiologie I, Hopital de Hautepierre, 67098 Strasbourg Cedex (France); Charpiot, A. [Service d' ORL, Hopital de Hautepierre, 67098 Strasbourg Cedex (France); Riehm, S. [Service de Radiologie I, Hopital de Hautepierre, 67098 Strasbourg Cedex (France)

    2010-05-15

    The facial nerve is responsible for the motor innervation of the face. It has a visceral motor function (lacrimal, submandibular and sublingual glands, and secretion of the nose); it conveys a great part of the taste fibers and participates in the general sensory innervation of the auricle (skin of the concha) and the wall of the external auditory meatus. Facial mimicry, production of tears, nasal flow and salivation all depend on the facial nerve. To image the facial nerve it is essential to know its normal anatomy, including the course of its efferent and afferent fibers, as well as the relevant technical considerations regarding CT and MR needed to achieve high-resolution images of the nerve.

  9. Reverse correlating love: highly passionate women idealize their partner's facial appearance.

    Science.gov (United States)

    Gunaydin, Gul; DeLong, Jordan E

    2015-01-01

    A defining feature of passionate love is idealization--evaluating romantic partners in an overly favorable light. Although passionate love can be expected to color how favorably individuals represent their partner in their mind, little is known about how passionate love is linked with visual representations of the partner. Using reverse correlation techniques for the first time to study partner representations, the present study investigated whether women who are passionately in love represent their partner's facial appearance more favorably than individuals who are less passionately in love. In a within-participants design, heterosexual women completed two forced-choice classification tasks, one for their romantic partner and one for a male acquaintance, and a measure of passionate love. In each classification task, participants saw two faces superimposed with noise and selected the face that most resembled their partner (or an acquaintance). Classification images for each of high passion and low passion groups were calculated by averaging across noise patterns selected as resembling the partner or the acquaintance and superimposing the averaged noise on an average male face. A separate group of women evaluated the classification images on attractiveness, trustworthiness, and competence. Results showed that women who feel high (vs. low) passionate love toward their partner tend to represent his face as more attractive and trustworthy, even when controlling for familiarity effects using the acquaintance representation. Using an innovative method to study partner representations, these findings extend our understanding of cognitive processes in romantic relationships.
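
    The classification-image computation described above — average the noise patterns selected as resembling the partner, then superimpose the mean noise on an average base face — can be sketched as follows (the array shapes, the [0, 1] intensity range, and the blending weight are assumptions for illustration):

```python
import numpy as np

def classification_image(noise_patterns, chosen, base_face, weight=0.5):
    """noise_patterns: (n_trials, h, w) noise fields shown across trials;
    chosen: boolean mask marking the patterns the participant selected as
    resembling the partner; base_face: (h, w) average face in [0, 1].
    Returns the base face with the averaged selected noise superimposed."""
    mean_noise = noise_patterns[chosen].mean(axis=0)   # average selected noise
    ci = base_face + weight * mean_noise               # superimpose on base face
    return np.clip(ci, 0.0, 1.0)                       # keep valid intensities
```

    The resulting images for the high-passion and low-passion groups would then be rated by independent judges, as in the study.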

  10. Forensic Facial Reconstruction: The Final Frontier.

    Science.gov (United States)

    Gupta, Sonia; Gupta, Vineeta; Vij, Hitesh; Vij, Ruchieka; Tyagi, Nutan

    2015-09-01

    Forensic facial reconstruction can be used to identify unknown human remains when other techniques fail. Through this article, we attempt to review the different methods of facial reconstruction reported in the literature. There are several techniques of facial reconstruction, which vary from two-dimensional drawings to three-dimensional clay models. With the advancement of 3D technology, a rapid, efficient and cost-effective computerized 3D forensic facial reconstruction method has been developed, which has brought down the degree of error previously encountered. There are several methods of manual facial reconstruction, but the combination Manchester method has been reported to be the best and most accurate method for the positive recognition of an individual. Recognition allows the involved government agencies to make a list of suspected victims. This list can then be narrowed down and a positive identification may be given by the more conventional methods of forensic medicine. Facial reconstruction allows visual identification by the individual's family and associates to become easy and more definite.

  11. Magnetic resonance imaging of facial muscles

    Energy Technology Data Exchange (ETDEWEB)

    Farrugia, M.E. [Department of Clinical Neurology, University of Oxford, Radcliffe Infirmary, Oxford (United Kingdom)], E-mail: m.e.farrugia@doctors.org.uk; Bydder, G.M. [Department of Radiology, University of California, San Diego, CA 92103-8226 (United States); Francis, J.M.; Robson, M.D. [OCMR, Department of Cardiovascular Medicine, University of Oxford, John Radcliffe Hospital, Oxford (United Kingdom)

    2007-11-15

    Facial and tongue muscles are commonly involved in patients with neuromuscular disorders. However, these muscles are not as easily accessible for biopsy and pathological examination as limb muscles. We have previously investigated myasthenia gravis patients with MuSK antibodies for facial and tongue muscle atrophy using different magnetic resonance imaging sequences, including ultrashort echo time techniques and image analysis tools that allowed us to obtain quantitative assessments of facial muscles. This imaging study had shown that facial muscle measurement is possible and that useful information can be obtained using a quantitative approach. In this paper we aim to review in detail the methods that we applied to our study, to enable clinicians to study these muscles within the domain of neuromuscular disease, oncological or head and neck specialties. Quantitative assessment of the facial musculature may be of value in improving the understanding of pathological processes occurring within facial muscles in certain neuromuscular disorders.

  12. Magnetic resonance imaging of facial muscles

    International Nuclear Information System (INIS)

    Farrugia, M.E.; Bydder, G.M.; Francis, J.M.; Robson, M.D.

    2007-01-01

    Facial and tongue muscles are commonly involved in patients with neuromuscular disorders. However, these muscles are not as easily accessible for biopsy and pathological examination as limb muscles. We have previously investigated myasthenia gravis patients with MuSK antibodies for facial and tongue muscle atrophy using different magnetic resonance imaging sequences, including ultrashort echo time techniques and image analysis tools that allowed us to obtain quantitative assessments of facial muscles. This imaging study had shown that facial muscle measurement is possible and that useful information can be obtained using a quantitative approach. In this paper we aim to review in detail the methods that we applied to our study, to enable clinicians to study these muscles within the domain of neuromuscular disease, oncological or head and neck specialties. Quantitative assessment of the facial musculature may be of value in improving the understanding of pathological processes occurring within facial muscles in certain neuromuscular disorders.

  13. Biometric identification based on novel frequency domain facial asymmetry measures

    Science.gov (United States)

    Mitra, Sinjini; Savvides, Marios; Vijaya Kumar, B. V. K.

    2005-03-01

    In the modern world, the ever-growing need to ensure a system's security has spurred the growth of the newly emerging technology of biometric identification. The present paper introduces a novel set of facial biometrics based on quantified facial asymmetry measures in the frequency domain. In particular, we show that these biometrics work well for face images showing expression variations and have the potential to do so in presence of illumination variations as well. A comparison of the recognition rates with those obtained from spatial domain asymmetry measures based on raw intensity values suggests that the frequency domain representation is more robust to intra-personal distortions and is a novel approach for performing biometric identification. In addition, some feature analysis based on statistical methods comparing the asymmetry measures across different individuals and across different expressions is presented.
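
    One plausible reading of the frequency-domain asymmetry measures is sketched below (an illustrative reconstruction, not the authors' exact features): form the difference between a face image and its left-right mirror, then summarise the energy of that difference in the 2-D Fourier domain:

```python
import numpy as np

def frequency_asymmetry_features(face):
    """face: (h, w) grayscale image, assumed roughly centred on the facial
    midline. Returns one energy value per spatial-frequency row, so a
    perfectly symmetric face yields an all-zero feature vector."""
    d_face = face - face[:, ::-1]                  # spatial asymmetry image
    spectrum = np.abs(np.fft.fft2(d_face))         # frequency-domain magnitude
    return np.sqrt((spectrum ** 2).mean(axis=1))   # per-row energy profile
```

    Expression changes tend to perturb the two halves of the face unequally, so this profile varies with expression while remaining comparatively stable for a given identity.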

  14. Perceived functional impact of abnormal facial appearance.

    Science.gov (United States)

    Rankin, Marlene; Borah, Gregory L

    2003-06-01

    Functional facial deformities are usually described as those that impair respiration, eating, hearing, or speech. Yet facial scars and cutaneous deformities have a significant negative effect on social functionality that has been poorly documented in the scientific literature. Insurance companies are declining payments for reconstructive surgical procedures for facial deformities caused by congenital disabilities and after cancer or trauma operations that do not affect mechanical facial activity. The purpose of this study was to establish a large, sample-based evaluation of the perceived social functioning, interpersonal characteristics, and employability indices for a range of facial appearances (normal and abnormal). Adult volunteer evaluators (n = 210) provided their subjective perceptions based on facial physical appearance, and an analysis of the consequences of facial deformity on parameters of preferential treatment was performed. A two-group comparative research design rated the differences among 10 examples of digitally altered facial photographs of actual patients among various age and ethnic groups with "normal" and "abnormal" congenital deformities or posttrauma scars. Photographs of adult patients with observable congenital and posttraumatic deformities (abnormal) were digitally retouched to eliminate the stigmatic defects (normal). The normal and abnormal photographs of identical patients were evaluated by the large sample study group on nine parameters of social functioning, such as honesty, employability, attractiveness, and effectiveness, using a visual analogue rating scale. Patients with abnormal facial characteristics were rated as significantly less honest (p = 0.007), less employable (p = 0.001), less trustworthy (p = 0.01), less optimistic (p = 0.001), less effective (p = 0.02), less capable (p = 0.002), less intelligent (p = 0.03), less popular (p = 0.001), and less attractive (p = 0.001) than were the same patients with normal facial

  15. Cerebro-fronto-facial syndrome type 3 with polymicrogyria: a clinical presentation of Baraitser-Winter syndrome.

    Science.gov (United States)

    Eker, Hatice Koçak; Derinkuyu, Betül Emine; Ünal, Sevim; Masliah-Planchon, Julien; Drunat, Séverine; Verloes, Alain

    2014-01-01

    Baraitser-Winter syndrome (BRWS) is a rare condition affecting the development of the brain and the face. The most common characteristics are unusual facial appearance including hypertelorism and ptosis, ocular colobomas, hearing loss, impaired neuronal migration and intellectual disability. BRWS is caused by mutations in the ACTB and ACTG1 genes. Cerebro-fronto-facial syndrome (CFFS) is a clinically heterogeneous condition with distinct facial dysmorphism and brain abnormalities. Three subtypes are identified. We report a female infant with striking facial features and brain anomalies (including polymicrogyria) that fit into the spectrum of CFFS type 3 (CFFS3). She also had minor anomalies of her hands and feet, heart and kidney malformations, and recurrent infections. DNA investigations revealed a c.586C>T mutation (p.Arg196Cys) in ACTB. This mutation places this patient in the spectrum of BRWS. The same mutation has been detected in a polymicrogyric patient reported previously in the literature. We expand the malformation spectrum of BRWS/CFFS3, and present preliminary findings for phenotype-genotype correlation in this spectrum. Copyright © 2013 Elsevier Masson SAS. All rights reserved.

  16. Possibilities of physiotherapy in facial nerve paresis

    OpenAIRE

    ZIFČÁKOVÁ, Šárka

    2015-01-01

    The bachelor thesis addresses paresis of the facial nerve. Facial nerve paresis is a rather common illness which, despite all modern treatments, often cannot be cured without consequences. The paresis of the facial nerve occurs in two forms, central and peripheral. A central paresis is a result of a lesion located above the motor nucleus of the facial nerve. A peripheral paresis is caused by a lesion located either in the location of the motor nucleus or in the course of the facial ner...

  17. Facial colliculus syndrome

    Directory of Open Access Journals (Sweden)

    Rupinderjeet Kaur

    2016-01-01

    Full Text Available A male patient presented with horizontal diplopia and conjugate gaze palsy. Magnetic resonance imaging (MRI) revealed an acute infarct in the right facial colliculus, an anatomical elevation on the dorsal aspect of the pons. This elevation is due to the 6th cranial nerve nucleus and the motor fibres of the facial nerve, which loop dorsal to this nucleus. An anatomical correlation of the clinical symptoms is also depicted in this report.

  18. [Surgical treatment in otogenic facial nerve palsy].

    Science.gov (United States)

    Feng, Guo-Dong; Gao, Zhi-Qiang; Zhai, Meng-Yao; Lü, Wei; Qi, Fang; Jiang, Hong; Zha, Yang; Shen, Peng

    2008-06-01

    To study the characteristics of facial nerve palsy due to four different ear diseases, including chronic otitis media, Hunt syndrome, tumor and physical or chemical factors, and to discuss the principles of the surgical management of otogenic facial nerve palsy. The clinical characteristics of 24 patients with otogenic facial nerve palsy caused by the four different ear diseases were retrospectively analyzed; all cases underwent surgical management from October 1991 to March 2007. Facial nerve function was evaluated with the House-Brackmann (HB) grading system. The 24 patients (10 males and 14 females) were analyzed; 12 cases were due to cholesteatoma, 3 cases to chronic otitis media, 3 cases to Hunt syndrome, 2 cases resulted from acute otitis media, 2 cases were due to physical or chemical factors and 2 cases to tumor. All cases were treated with operations that included facial nerve decompression, lesion resection with facial nerve decompression, or lesion resection without facial nerve decompression; 1 patient's facial nerve was resected because of the tumor. According to the HB grading system, grade I recovery was attained in 4 cases, grade II in 10 cases, grade III in 6 cases, grade IV in 2 cases, grade V in 2 cases and grade VI in 1 case. Complete removal of the lesions was the basic requirement in the surgery of otogenic facial palsy; moreover, it was important to perform facial nerve decompression soon after lesion removal.

  19. Unspoken vowel recognition using facial electromyogram.

    Science.gov (United States)

    Arjunan, Sridhar P; Kumar, Dinesh K; Yau, Wai C; Weghorn, Hans

    2006-01-01

    The paper aims to identify speech using facial muscle activity, without audio signals. It presents an effective technique that measures the relative activity of the articulatory muscles. Five English vowels were used as recognition variables. This paper reports using the moving root mean square (RMS) of the surface electromyogram (SEMG) of four facial muscles to segment the signal and identify the start and end of the utterance. The RMS of the signal between the start and end markers was integrated and normalised. This represented the relative muscle activity of the four muscles. These were classified using a back-propagation neural network to identify the speech. The technique was successfully used to classify the 5 vowels into three classes and was not sensitive to variation in the speed and style of speaking of the different subjects. The results also show that this technique was suitable for classifying the 5 vowels into 5 classes when trained for each of the subjects. It is suggested that such a technology may be used for the user to give simple unvoiced commands when trained for the specific user.
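
    The segmentation and normalisation steps described above — moving RMS to locate the utterance, then integrating and normalising activity across the four muscles — might look like this (the window length and threshold are illustrative assumptions, not the authors' values):

```python
import numpy as np

def moving_rms(x, win):
    """Moving root-mean-square over a sliding window, via a cumulative
    sum of squares for efficiency."""
    sq = np.concatenate(([0.0], np.cumsum(np.asarray(x, dtype=float) ** 2)))
    return np.sqrt((sq[win:] - sq[:-win]) / win)

def relative_muscle_activity(channels, win=50, thresh=0.2):
    """channels: (n_muscles, n_samples) SEMG. Detect the utterance start
    and end from the summed RMS envelope, integrate each muscle's RMS over
    the utterance, and normalise so the relative activities sum to 1."""
    rms = np.stack([moving_rms(c, win) for c in channels])
    envelope = rms.sum(axis=0)
    active = np.flatnonzero(envelope > thresh * envelope.max())
    start, end = active[0], active[-1] + 1       # utterance markers
    integrated = rms[:, start:end].sum(axis=1)   # integrate between markers
    return integrated / integrated.sum()         # relative muscle activity
```

    The resulting normalised vector would then be fed to the back-propagation network for vowel classification.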

  20. Recognition of schematic facial displays of emotion in parents of children with autism.

    Science.gov (United States)

    Palermo, Mark T; Pasqualetti, Patrizio; Barbati, Giulia; Intelligente, Fabio; Rossini, Paolo Maria

    2006-07-01

    Performance on an emotional labeling task in response to schematic facial patterns representing five basic emotions without the concurrent presentation of a verbal category was investigated in 40 parents of children with autism and 40 matched controls. 'Autism fathers' performed worse than 'autism mothers', who performed worse than controls in decoding displays representing sadness or disgust. This indicates the need to include facial expression decoding tasks in genetic research of autism. In addition, emotional expression interactions between parents and their children with autism, particularly through play, where affect and prosody are 'physiologically' exaggerated, may stimulate development of social competence. Future studies could benefit from a combination of stimuli including photographs and schematic drawings, with and without associated verbal categories. This may allow the subdivision of patients and relatives on the basis of the amount of information needed to understand and process social-emotionally relevant information.

  1. Face recognition using slow feature analysis and contourlet transform

    Science.gov (United States)

    Wang, Yuehao; Peng, Lingling; Zhe, Fuchuan

    2018-04-01

    In this paper we propose a novel face recognition approach based on slow feature analysis (SFA) in the contourlet transform domain. This method first uses the contourlet transform to decompose the face image into low-frequency and high-frequency parts, and then takes advantage of slow feature analysis for facial feature extraction. We name the new method, which combines slow feature analysis and the contourlet transform, CT-SFA. Experimental results on international standard face databases demonstrate that the new face recognition method is effective and competitive.
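
    Linear slow feature analysis itself is compact enough to sketch: whiten the signal, then keep the directions whose temporal derivative has the least variance (the contourlet decomposition step of CT-SFA is omitted here):

```python
import numpy as np

def slow_feature_analysis(X, n_features=1):
    """X: (T, d) multivariate time series. Returns the n_features slowest
    output signals: after whitening, the directions along which the
    temporal derivative has minimal variance."""
    X = X - X.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(X, rowvar=False))
    Z = X @ (evecs / np.sqrt(evals + 1e-12))   # whitened signal
    d_evals, d_evecs = np.linalg.eigh(np.cov(np.diff(Z, axis=0), rowvar=False))
    # eigh sorts eigenvalues ascending, so the first columns are slowest
    return Z @ d_evecs[:, :n_features]
```

    Applied to a linear mixture of a slow and a fast sinusoid, the first output recovers the slow source up to sign and scale.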

  2. Facial identity and facial expression are initially integrated at visual perceptual stages of face processing.

    Science.gov (United States)

    Fisher, Katie; Towler, John; Eimer, Martin

    2016-01-08

    It is frequently assumed that facial identity and facial expression are analysed in functionally and anatomically distinct streams within the core visual face processing system. To investigate whether expression and identity interact during the visual processing of faces, we employed a sequential matching procedure where participants compared either the identity or the expression of two successively presented faces, and ignored the other irrelevant dimension. Repetitions versus changes of facial identity and expression were varied independently across trials, and event-related potentials (ERPs) were recorded during task performance. Irrelevant facial identity and irrelevant expression both interfered with performance in the expression and identity matching tasks. These symmetrical interference effects show that neither identity nor expression can be selectively ignored during face matching, and suggest that they are not processed independently. N250r components to identity repetitions that reflect identity matching mechanisms in face-selective visual cortex were delayed and attenuated when there was an expression change, demonstrating that facial expression interferes with visual identity matching. These findings provide new evidence for interactions between facial identity and expression within the core visual processing system, and question the hypothesis that these two attributes are processed independently. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Robust representation and recognition of facial emotions using extreme sparse learning.

    Science.gov (United States)

    Shojaeilangari, Seyedehsamaneh; Yau, Wei-Yun; Nandakumar, Karthik; Li, Jun; Teoh, Eam Khwang

    2015-07-01

    Recognition of natural emotions from human faces is an interesting topic with a wide range of potential applications, such as human-computer interaction, automated tutoring systems, image and video retrieval, smart environments, and driver warning systems. Traditionally, facial emotion recognition systems have been evaluated on laboratory controlled data, which is not representative of the environment faced in real-world applications. To robustly recognize the facial emotions in real-world natural situations, this paper proposes an approach called extreme sparse learning, which has the ability to jointly learn a dictionary (set of basis vectors) and a nonlinear classification model. The proposed approach combines the discriminative power of extreme learning machine with the reconstruction property of sparse representation to enable accurate classification when presented with noisy signals and imperfect data recorded in natural settings. In addition, this paper presents a new local spatio-temporal descriptor that is distinctive and pose-invariant. The proposed framework is able to achieve the state-of-the-art recognition accuracy on both acted and spontaneous facial emotion databases.
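
    The sparse-representation half of the approach can be illustrated with a generic iterative shrinkage-thresholding (ISTA) solver for the l1 sparse-coding step (a textbook sketch under a fixed dictionary, not the authors' joint dictionary-and-classifier learning):

```python
import numpy as np

def ista(D, x, lam=0.01, n_iter=500):
    """Solve min_a 0.5*||D a - x||^2 + lam*||a||_1 by iterative
    shrinkage-thresholding. D: (m, k) dictionary, x: (m,) signal."""
    L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = a - D.T @ (D @ a - x) / L          # gradient step on the data term
        a = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-threshold
    return a
```

    In the paper this coding step is instead learned jointly with an extreme learning machine classifier, so that the dictionary is shaped by both reconstruction and discrimination.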

  4. Objective assessment of facial skin aging and the associated environmental factors in Japanese monozygotic twins.

    Science.gov (United States)

    Ichibori, Ryoko; Fujiwara, Takashi; Tanigawa, Tomoko; Kanazawa, Shigeyuki; Shingaki, Kenta; Torii, Kosuke; Tomita, Koichi; Yano, Kenji; Sakai, Yasuo; Hosokawa, Ko

    2014-06-01

    Twin studies, especially those involving monozygotic (MZ) twins, facilitate the analysis of factors affecting skin aging while controlling for age, gender, and genetic susceptibility. The purpose of this study was to objectively assess various features of facial skin and analyze the effects of environmental factors on these features in MZ twins. At the Osaka Twin Research Center, 67 pairs of MZ twins underwent medical interviews and photographic assessments, using the VISIA(®) Complexion Analysis System. First, the average scores of the right and left cheek skin spots, wrinkles, pores, texture, and erythema were calculated; the differences between the scores were then compared in each pair of twins. Next, using the results of medical interviews and VISIA data, we investigated the effects of environmental factors on skin aging. The data were analyzed using Pearson's correlation coefficient test and the Wilcoxon signed-rank test. The intrapair differences in facial texture scores significantly increased as the age of the twins increased (P = 0.03). Among the twin pairs who provided answers to the questions regarding history differences in medical interviews, the twins who smoked or did not use skin protection showed significantly higher facial texture or wrinkle scores compared with the twins not exposed to cigarettes or protectants (P = 0.04 and 0.03, respectively). The study demonstrated that skin aging among Japanese MZ twins, especially in terms of facial texture, was significantly influenced by environmental factors. In addition, smoking and skin protectant use were important environmental factors influencing skin aging. © 2014 The Authors Journal of Cosmetic Dermatology Published by Wiley Periodicals, Inc.

  5. Perceptually Valid Facial Expressions for Character-Based Applications

    Directory of Open Access Journals (Sweden)

    Ali Arya

    2009-01-01

Full Text Available This paper addresses the problem of creating facial expressions of mixed emotions in a perceptually valid way. The research was done in the context of “game-like” health and education applications aimed at studying social competency and facial expression awareness in autistic children, as well as native language learning, but the results can be applied to many other applications, such as games that need dynamic facial expressions or tools for automating the creation of facial animations. Most existing methods for creating facial expressions of mixed emotions use operations like averaging to create the combined effect of two universal emotions. Such methods may be mathematically justifiable but are not necessarily valid from a perceptual point of view. The research reported here starts with user experiments aimed at understanding how people combine facial actions to express mixed emotions, and how viewers perceive a set of facial actions in terms of underlying emotions. Using the results of these experiments and a three-dimensional emotion model, we associate facial actions with dimensions and regions in the emotion space, and create a facial expression based on the location of the mixed emotion in the three-dimensional space. We call these regionalized facial actions “facial expression units.”

  6. Evidence for Anger Saliency during the Recognition of Chimeric Facial Expressions of Emotions in Underage Ebola Survivors

    Directory of Open Access Journals (Sweden)

    Martina Ardizzi

    2017-06-01

Full Text Available One of the crucial features defining basic emotions and their prototypical facial expressions is their value for survival. Childhood traumatic experiences affect the effective recognition of facial expressions of negative emotions, normally allowing the recruitment of adequate behavioral responses to environmental threats. Specifically, anger becomes an extraordinarily salient stimulus unbalancing victims’ recognition of negative emotions. Despite the plethora of studies on this topic, to date, it is not clear whether this phenomenon reflects an overall response tendency toward anger recognition or a selective proneness to the salience of specific facial expressive cues of anger after trauma exposure. To address this issue, a group of underage Sierra Leonean Ebola virus disease survivors (mean age 15.40 years, SE 0.35; years of schooling 8.8 years, SE 0.46; 14 males) and a control group (mean age 14.55, SE 0.30; years of schooling 8.07 years, SE 0.30; 15 males) performed a forced-choice chimeric facial expressions recognition task. The chimeric facial expressions were obtained by pairing upper and lower half faces of two different negative emotions (selected from anger, fear and sadness) for a total of six different combinations. Overall, results showed that upper facial expressive cues were more salient than lower facial expressive cues. This priority was lost among Ebola virus disease survivors for the chimeric facial expressions of anger. In this case, differently from controls, Ebola virus disease survivors recognized anger regardless of the upper or lower position of the facial expressive cues of this emotion. The present results demonstrate that victims’ performance in the recognition of the facial expression of anger does not reflect an overall response tendency toward anger recognition, but rather the specific greater salience of facial expressive cues of anger. Furthermore, the present results show that traumatic experiences deeply modify

  7. Facial Expression and Vocal Pitch Height: Evidence of an Intermodal Association

    Directory of Open Access Journals (Sweden)

    David Huron

    2009-11-01

Full Text Available Forty-four participants were asked to sing moderate, high, and low pitches while their faces were photographed. In a two-alternative forced choice task, independent judges selected the high-pitch faces as more friendly than the low-pitch faces. When photographs were cropped to show only the eye region, judges still rated the high-pitch faces friendlier than the low-pitch faces. These results are consistent with prior research showing that vocal pitch height is used to signal aggression (low pitch) or appeasement (high pitch). An analysis of the facial features shows a strong correlation between eyebrow position and sung pitch—consistent with the role of eyebrows in signaling aggression and appeasement. Overall, the results are consistent with an inter-modal linkage between vocal and facial expressions.

  8. Intraoperative facial motor evoked potentials monitoring with transcranial electrical stimulation for preservation of facial nerve function in patients with large acoustic neuroma

    Institute of Scientific and Technical Information of China (English)

    LIU Bai-yun; TIAN Yong-ji; LIU Wen; LIU Shu-ling; QIAO Hui; ZHANG Jun-ting; JIA Gui-jun

    2007-01-01

Background Although various monitoring techniques have been used routinely in the treatment of lesions in the skull base, iatrogenic facial paresis or paralysis remains a significant clinical problem. The aim of this study was to investigate the effect of intraoperative facial motor evoked potentials monitoring with transcranial electrical stimulation on preservation of facial nerve function. Method From January to November 2005, 19 patients with large acoustic neuroma were treated using intraoperative facial motor evoked potentials monitoring with transcranial electrical stimulation (TCEMEP) for preservation of facial nerve function. The relationship between the decrease in MEP amplitude after tumor removal and the postoperative function of the facial nerve was analyzed. Results MEP amplitude decreased by more than 75% in 11 patients, of whom 6 presented with significant facial paralysis (H-B grade 3) and 5 had mild facial paralysis (H-B grade 2). In the other 8 patients, whose MEP amplitude decreased by less than 75%, 1 experienced significant facial paralysis, 5 had mild facial paralysis, and 2 were normal. Conclusions Intraoperative TCEMEP can be used to predict postoperative function of the facial nerve. A decrease in MEP amplitude of more than 75% is an alarm point for possible severe facial paralysis.
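The alarm criterion in this abstract is a simple percentage threshold on MEP amplitude. A minimal sketch follows; the function names and example amplitudes are illustrative, not taken from the paper:

```python
def mep_amplitude_decrease(baseline_uv, post_resection_uv):
    """Percentage drop in facial MEP amplitude after tumor removal."""
    return 100.0 * (baseline_uv - post_resection_uv) / baseline_uv

def severe_palsy_alarm(baseline_uv, post_resection_uv, threshold=75.0):
    """Flag the study's alarm point for possible severe facial paralysis."""
    return mep_amplitude_decrease(baseline_uv, post_resection_uv) > threshold

# An amplitude falling from 200 uV to 40 uV is an 80% decrease -> alarm.
print(severe_palsy_alarm(200.0, 40.0))   # True
print(severe_palsy_alarm(200.0, 120.0))  # False (40% decrease)
```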

  9. What do facial expressions of emotion express in young children? The relationship between facial display and EMG measures

    Directory of Open Access Journals (Sweden)

    Michela Balconi

    2014-04-01

Full Text Available The present paper explored the relationship between emotional facial response and electromyographic modulation in children when they observe facial expressions of emotions. Facial responsiveness (evaluated by arousal and valence ratings) and psychophysiological correlates (facial electromyography, EMG) were analyzed while children looked at six facial expressions of emotions (happiness, anger, fear, sadness, surprise and disgust). For the EMG measure, corrugator and zygomatic muscle activity was monitored in response to the different emotion types. ANOVAs showed differences in both EMG and facial response across subjects as a function of emotion. Specifically, some emotions (such as happiness, anger and fear) were well expressed by all subjects in terms of high arousal, whereas others (such as sadness) elicited lower arousal. Zygomatic activity increased mainly for happiness, whereas corrugator activity increased mainly for anger, fear and surprise. More generally, EMG and facial behavior were highly correlated with each other, showing a “mirror” effect with respect to the observed faces.
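The correlation between EMG and facial behavior reported above is a plain Pearson coefficient. A minimal sketch with entirely hypothetical values; neither the data nor the variable pairing come from the study:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(np.sum(xc * yc) / np.sqrt(np.sum(xc**2) * np.sum(yc**2)))

# Hypothetical zygomatic EMG amplitudes vs. valence ratings across the six
# emotions (happiness, anger, fear, sadness, surprise, disgust):
emg = [4.2, 1.1, 0.9, 0.8, 2.0, 1.0]
valence = [8.5, 2.0, 1.5, 2.2, 5.0, 1.8]
r = pearson_r(emg, valence)
```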

  10. Cascaded face alignment via intimacy definition feature

    Science.gov (United States)

    Li, Hailiang; Lam, Kin-Man; Chiu, Man-Yau; Wu, Kangheng; Lei, Zhibin

    2017-09-01

Recent years have witnessed the emerging popularity of regression-based face aligners, which directly learn mappings between facial appearance and shape-increment manifolds. We propose a random-forest-based, cascaded regression model for face alignment that uses a locally lightweight feature, namely the intimacy definition feature. This feature is more discriminative than the pose-indexed feature, more efficient than the histogram of oriented gradients feature and the scale-invariant feature transform feature, and more compact than the local binary feature (LBF). Experimental validation shows that our approach achieves state-of-the-art performance on several challenging datasets. Compared with the LBF-based algorithm, our method runs about twice as fast, improves alignment accuracy by about 20%, and requires an order of magnitude less memory.
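The cascaded-regression idea described in this record can be sketched generically: starting from a mean shape, each stage extracts features indexed by the current shape estimate and predicts an additive shape increment. The intimacy definition feature itself is not publicly specified, so the per-landmark feature extractor and the untrained linear stages below are placeholders, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(image, shape):
    """Placeholder shape-indexed feature: one pixel value per landmark.
    (The paper's intimacy definition feature is not publicly specified.)"""
    idx = np.clip(shape.astype(int), 0, image.shape[0] - 1)
    return image[idx[:, 0], idx[:, 1]]

class CascadedAligner:
    """Generic cascaded-regression aligner: each stage maps features computed
    at the current shape estimate to an additive shape increment."""
    def __init__(self, stages):
        self.stages = stages  # list of (w, b) toy linear regressors

    def align(self, image, mean_shape):
        shape = mean_shape.copy()
        for w, b in self.stages:
            phi = extract_features(image, shape)  # (n_landmarks,)
            shape = shape + phi[:, None] * w + b  # additive shape increment
        return shape

# Toy run: 5 landmarks, 3 stages with random (untrained) regressors.
image = rng.random((64, 64))
mean_shape = rng.uniform(10.0, 50.0, size=(5, 2))
stages = [(rng.normal(scale=0.5, size=2), np.zeros(2)) for _ in range(3)]
aligned = CascadedAligner(stages).align(image, mean_shape)
```

In a real aligner the stages would be trained regressors (random forests in the paper) fitted to ground-truth shape increments; the cascade structure is the same.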

  11. Inter-Ethnic/Racial Facial Variations: A Systematic Review and Bayesian Meta-Analysis of Photogrammetric Studies.

    Science.gov (United States)

    Wen, Yi Feng; Wong, Hai Ming; Lin, Ruitao; Yin, Guosheng; McGrath, Colman

    2015-01-01

    Numerous facial photogrammetric studies have been published around the world. We aimed to critically review these studies so as to establish population norms for various angular and linear facial measurements; and to determine inter-ethnic/racial facial variations. A comprehensive and systematic search of PubMed, ISI Web of Science, Embase, and Scopus was conducted to identify facial photogrammetric studies published before December, 2014. Subjects of eligible studies were either Africans, Asians or Caucasians. A Bayesian hierarchical random effects model was developed to estimate posterior means and 95% credible intervals (CrI) for each measurement by ethnicity/race. Linear contrasts were constructed to explore inter-ethnic/racial facial variations. We identified 38 eligible studies reporting 11 angular and 18 linear facial measurements. Risk of bias of the studies ranged from 0.06 to 0.66. At the significance level of 0.05, African males were found to have smaller nasofrontal angle (posterior mean difference: 8.1°, 95% CrI: 2.2°-13.5°) compared to Caucasian males and larger nasofacial angle (7.4°, 0.1°-13.2°) compared to Asian males. Nasolabial angle was more obtuse in Caucasian females than in African (17.4°, 0.2°-35.3°) and Asian (9.1°, 0.4°-17.3°) females. Additional inter-ethnic/racial variations were revealed when the level of statistical significance was set at 0.10. A comprehensive database for angular and linear facial measurements was established from existing studies using the statistical model and inter-ethnic/racial variations of facial features were observed. The results have implications for clinical practice and highlight the need and value for high quality photogrammetric studies.
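The review pools study-level means with a Bayesian hierarchical random effects model. As an illustrative stand-in, the sketch below uses classical DerSimonian-Laird random-effects pooling with made-up study values; it is not the authors' model, only the same pooling idea:

```python
import numpy as np

def random_effects_pool(means, ses):
    """DerSimonian-Laird random-effects pooling of study-level means.
    Classical analogue of the review's Bayesian hierarchical model,
    shown for illustration only."""
    means, ses = np.asarray(means, float), np.asarray(ses, float)
    w = 1.0 / ses**2                             # fixed-effect weights
    mu_fe = np.sum(w * means) / np.sum(w)
    q = np.sum(w * (means - mu_fe) ** 2)         # Cochran's Q heterogeneity
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(means) - 1)) / c)  # between-study variance
    w_re = 1.0 / (ses**2 + tau2)                 # random-effects weights
    mu = np.sum(w_re * means) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return mu, (mu - 1.96 * se, mu + 1.96 * se)  # pooled mean and 95% CI

# Hypothetical nasolabial-angle means (degrees) and SEs from three studies:
mu, ci = random_effects_pool([95.0, 102.0, 98.5], [1.5, 2.0, 1.8])
```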

  12. Inter-Ethnic/Racial Facial Variations: A Systematic Review and Bayesian Meta-Analysis of Photogrammetric Studies

    Science.gov (United States)

    Wen, Yi Feng; Wong, Hai Ming; Lin, Ruitao; Yin, Guosheng; McGrath, Colman

    2015-01-01

    Background Numerous facial photogrammetric studies have been published around the world. We aimed to critically review these studies so as to establish population norms for various angular and linear facial measurements; and to determine inter-ethnic/racial facial variations. Methods and Findings A comprehensive and systematic search of PubMed, ISI Web of Science, Embase, and Scopus was conducted to identify facial photogrammetric studies published before December, 2014. Subjects of eligible studies were either Africans, Asians or Caucasians. A Bayesian hierarchical random effects model was developed to estimate posterior means and 95% credible intervals (CrI) for each measurement by ethnicity/race. Linear contrasts were constructed to explore inter-ethnic/racial facial variations. We identified 38 eligible studies reporting 11 angular and 18 linear facial measurements. Risk of bias of the studies ranged from 0.06 to 0.66. At the significance level of 0.05, African males were found to have smaller nasofrontal angle (posterior mean difference: 8.1°, 95% CrI: 2.2°–13.5°) compared to Caucasian males and larger nasofacial angle (7.4°, 0.1°–13.2°) compared to Asian males. Nasolabial angle was more obtuse in Caucasian females than in African (17.4°, 0.2°–35.3°) and Asian (9.1°, 0.4°–17.3°) females. Additional inter-ethnic/racial variations were revealed when the level of statistical significance was set at 0.10. Conclusions A comprehensive database for angular and linear facial measurements was established from existing studies using the statistical model and inter-ethnic/racial variations of facial features were observed. The results have implications for clinical practice and highlight the need and value for high quality photogrammetric studies. PMID:26247212

  13. Inter-Ethnic/Racial Facial Variations: A Systematic Review and Bayesian Meta-Analysis of Photogrammetric Studies.

    Directory of Open Access Journals (Sweden)

    Yi Feng Wen

Full Text Available Numerous facial photogrammetric studies have been published around the world. We aimed to critically review these studies so as to establish population norms for various angular and linear facial measurements; and to determine inter-ethnic/racial facial variations. A comprehensive and systematic search of PubMed, ISI Web of Science, Embase, and Scopus was conducted to identify facial photogrammetric studies published before December, 2014. Subjects of eligible studies were either Africans, Asians or Caucasians. A Bayesian hierarchical random effects model was developed to estimate posterior means and 95% credible intervals (CrI) for each measurement by ethnicity/race. Linear contrasts were constructed to explore inter-ethnic/racial facial variations. We identified 38 eligible studies reporting 11 angular and 18 linear facial measurements. Risk of bias of the studies ranged from 0.06 to 0.66. At the significance level of 0.05, African males were found to have smaller nasofrontal angle (posterior mean difference: 8.1°, 95% CrI: 2.2°-13.5°) compared to Caucasian males and larger nasofacial angle (7.4°, 0.1°-13.2°) compared to Asian males. Nasolabial angle was more obtuse in Caucasian females than in African (17.4°, 0.2°-35.3°) and Asian (9.1°, 0.4°-17.3°) females. Additional inter-ethnic/racial variations were revealed when the level of statistical significance was set at 0.10. A comprehensive database for angular and linear facial measurements was established from existing studies using the statistical model and inter-ethnic/racial variations of facial features were observed. The results have implications for clinical practice and highlight the need and value for high quality photogrammetric studies.

  14. Neural Temporal Dynamics of Facial Emotion Processing: Age Effects and Relationship to Cognitive Function

    Directory of Open Access Journals (Sweden)

    Xiaoyan Liao

    2017-06-01

Full Text Available This study used event-related potentials (ERPs) to investigate the effects of age on the neural temporal dynamics of processing task-relevant facial expressions and their relationship to cognitive functions. Negative (sad, afraid, angry, and disgusted), positive (happy), and neutral faces were presented to 30 older and 31 young participants who performed a facial emotion categorization task. Behavioral and ERP indices of facial emotion processing were analyzed. An enhanced N170 for negative faces, in addition to intact right-hemispheric N170 for positive faces, was observed in older adults relative to their younger counterparts. Moreover, older adults demonstrated an attenuated within-group N170 laterality effect for neutral faces, while younger adults showed the opposite pattern. Furthermore, older adults exhibited sustained temporo-occipital negativity deflection over the time range of 200–500 ms post-stimulus, while young adults showed posterior positivity and subsequent emotion-specific frontal negativity deflections. In older adults, decreased accuracy for labeling negative faces was positively correlated with Montreal Cognitive Assessment Scores, and accuracy for labeling neutral faces was negatively correlated with age. These findings suggest that older people may exert more effort in structural encoding for negative faces and there are different response patterns for the categorization of different facial emotions. Cognitive functioning may be related to facial emotion categorization deficits observed in older adults. This may not be attributable to positivity effects: it may represent a selective deficit for the processing of negative facial expressions in older adults.

  15. Traumatic facial nerve palsy: CT patterns of facial nerve canal fracture and correlation with clinical severity

    Energy Technology Data Exchange (ETDEWEB)

    Seo, Jae Cheol; Kim, Sang Joon; Park, Hyun Min; Lee, Young Suk; Lee, Jee Young [College of Medicine, Dankook Univ., Chonan (Korea, Republic of)

    2002-07-01

To analyse the patterns of facial nerve canal injury seen at temporal bone computed tomography (CT) in patients with traumatic facial nerve palsy and to correlate these with clinical manifestations and outcome. Thirty temporal bone CT examinations in 29 patients with traumatic facial nerve palsy were analyzed with regard to the patterns of facial nerve canal involvement. The patterns were correlated with clinical grade, the electroneurographic (ENoG) findings, and clinical outcome. For clinical grading, the House-Brackmann scale was used, as follows: grade I-IV, partial palsy group; grade V-VI, complete palsy group. The electroneuronographic findings were categorized as mild to moderate (below 90%) or severe (90% and over) degeneration. In 25 cases, the bony wall of the facial nerve canal was involved directly (direct finding): discontinuity of the bony wall was noted in 22 cases, bony spicules in ten, and bony wall displacement in five. Indirect findings were canal widening in nine cases and adjacent bone fracture in two. In one case, there were no direct or indirect findings. All cases with complete palsy (n=8) showed one or more direct findings, including spicules in six, while in the incomplete palsy group (n=22), 17 cases showed direct findings. In the severe degeneration group (n=13) on ENoG, 12 cases demonstrated direct findings, including spicules in nine cases. In 24 patients, symptoms of facial palsy showed improvement at follow-up evaluation. Four of the five patients in whom symptoms did not improve had spicules. Among the ten patients with spicules, five underwent surgery and symptoms improved in four of these; among the five patients not operated on, symptoms did not improve in three. In most patients with facial palsy after temporal bone injury, temporal bone CT revealed direct or indirect facial nerve canal involvement, and in the complete palsy and severe degeneration groups, there were direct findings in most cases. We believe that meticulous

  16. Intratemporal Facial Nerve Paralysis- A Three Year Study

    Directory of Open Access Journals (Sweden)

    Anirban Ghosh

    2016-08-01

Full Text Available Introduction This study on intratemporal facial paralysis is an attempt to understand the aetiology of facial nerve paralysis, the effects of different management protocols, and the outcomes after long-term follow-up. Materials and Methods A prospective longitudinal study was conducted from September 2005 to August 2008 at the Department of Otorhinolaryngology of a medical college in Kolkata, comprising 50 patients with intratemporal facial palsy. All cases were followed up periodically for at least 6 months, and their prognostic outcomes along with the different treatment options were analyzed. Result Among the different causes of facial palsy, Bell’s palsy was the commonest, whereas cholesteatoma and granulation were common findings in otogenic facial palsy. Traumatic facial palsies were exclusively due to longitudinal fractures of the temporal bone running through the geniculate ganglion. Herpes zoster oticus and neoplasia-related facial palsies had significantly poorer outcomes. Discussion Otogenic facial palsy showed excellent outcomes after mastoid exploration and facial decompression. Transcanal decompression was performed in traumatic facial palsies showing inadequate recovery. Complete removal of cholesteatoma over a dehiscent facial nerve gave better postoperative recovery. Conclusion The stapedial reflex test is the most objective and reproducible of all topodiagnostic tests. Return of the stapedial reflex within 3 weeks of injury indicates a good prognosis. Bell’s palsy responded well to conservative measures. All traumatic facial palsies were due to longitudinal fractures, and two-thirds of these patients showed a favourable outcome with medical therapy.

  17. Social Use of Facial Expressions in Hylobatids

    Science.gov (United States)

    Scheider, Linda; Waller, Bridget M.; Oña, Leonardo; Burrows, Anne M.; Liebal, Katja

    2016-01-01

    Non-human primates use various communicative means in interactions with others. While primate gestures are commonly considered to be intentionally and flexibly used signals, facial expressions are often referred to as inflexible, automatic expressions of affective internal states. To explore whether and how non-human primates use facial expressions in specific communicative interactions, we studied five species of small apes (gibbons) by employing a newly established Facial Action Coding System for hylobatid species (GibbonFACS). We found that, despite individuals often being in close proximity to each other, in social (as opposed to non-social contexts) the duration of facial expressions was significantly longer when gibbons were facing another individual compared to non-facing situations. Social contexts included grooming, agonistic interactions and play, whereas non-social contexts included resting and self-grooming. Additionally, gibbons used facial expressions while facing another individual more often in social contexts than non-social contexts where facial expressions were produced regardless of the attentional state of the partner. Also, facial expressions were more likely ‘responded to’ by the partner’s facial expressions when facing another individual than non-facing. Taken together, our results indicate that gibbons use their facial expressions differentially depending on the social context and are able to use them in a directed way in communicative interactions with other conspecifics. PMID:26978660

  18. Survey of methods of facial palsy documentation in use by members of the Sir Charles Bell Society.

    Science.gov (United States)

    Fattah, Adel Y; Gavilan, Javier; Hadlock, Tessa A; Marcus, Jeffrey R; Marres, Henri; Nduka, Charles; Slattery, William H; Snyder-Warwick, Alison K

    2014-10-01

    Facial palsy manifests a broad array of deficits affecting function, form, and psychological well-being. Assessment scales were introduced to standardize and document the features of facial palsy and to facilitate the exchange of information and comparison of outcomes. The aim of this study was to determine which assessment methodologies are currently employed by those involved in the care of patients with facial palsy as a first step toward the development of consensus on the appropriate assessments for this patient population. Online questionnaire. The Sir Charles Bell Society, a group of professionals dedicated to the care of patients with facial palsy, were surveyed to determine the scales used to document facial nerve function, patient reported outcome measures (PROM), and photographic documentation. Fifty-five percent of the membership responded (n = 83). Grading scales were used by 95%, most commonly the House-Brackmann and Sunnybrook scales. PROMs were used by 58%, typically the Facial Clinimetric Evaluation scale or Facial Disability Index. All used photographic recordings, but variability existed among the facial expressions used. Videography was performed by 82%, and mostly involved the same views as still photography; it was also used to document spontaneous movement and speech. Three-dimensional imaging was employed by 18% of respondents. There exists significant heterogeneity in assessments among clinicians, which impedes straightforward comparisons of outcomes following recovery and intervention. Widespread adoption of structured assessments, including scales, PROMs, photography, and videography, will facilitate communication and comparison among those who study the effects of interventions on this population. © 2014 The American Laryngological, Rhinological and Otological Society, Inc.

  19. Facial motion parameter estimation and error criteria in model-based image coding

    Science.gov (United States)

    Liu, Yunhai; Yu, Lu; Yao, Qingdong

    2000-04-01

Model-based image coding has received extensive attention due to its high subjective image quality and low bit-rates. However, the estimation of object motion parameters remains a difficult problem, and there are no proper error criteria for quality assessment that are consistent with visual properties. This paper presents an algorithm for facial motion parameter estimation based on feature point correspondence and gives motion parameter error criteria. The facial motion model comprises three parts. The first part is the global 3-D rigid motion of the head, the second part is non-rigid translation motion in the jaw area, and the third part consists of local non-rigid expression motion in the eye and mouth areas. The feature points are automatically selected by a function of edges, brightness and end-nodes outside the blocks of the eyes and mouth. The number of feature points is adjusted adaptively. The jaw translation motion is tracked by the changes in the positions of the jaw feature points. The areas of non-rigid expression motion can be rebuilt using a block-pasting method. An approach for estimating motion parameter error based on the quality of the reconstructed image is suggested, with an area error function and an error function of the contour transition-turn rate used as quality criteria. These criteria properly reflect the image geometric distortion caused by errors in the estimated motion parameters.
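The first part of the model, global 3-D rigid head motion, can be recovered from feature-point correspondences by a standard least-squares fit. A sketch using the Kabsch/SVD method, which is a common estimator for this step but not necessarily the one the paper uses:

```python
import numpy as np

def estimate_rigid_motion(p, q):
    """Least-squares rigid motion (R, t) mapping point set p onto q,
    via the Kabsch/SVD method: center both sets, take the SVD of the
    cross-covariance, and guard against reflections."""
    pc, qc = p - p.mean(0), q - q.mean(0)     # center both point sets
    u, _, vt = np.linalg.svd(pc.T @ qc)       # SVD of cross-covariance
    d = np.sign(np.linalg.det(vt.T @ u.T))    # reflection guard
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = q.mean(0) - r @ p.mean(0)
    return r, t

# Toy check: rotate and translate known points, then recover the motion.
rng = np.random.default_rng(1)
p = rng.random((10, 3))                       # tracked feature points
angle = 0.3
r_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])          # rotation about z-axis
q = p @ r_true.T + np.array([0.1, -0.2, 0.05])
r_est, t_est = estimate_rigid_motion(p, q)
residual = np.abs(q - (p @ r_est.T + t_est)).max()
```

With noise-free correspondences the residual is at numerical precision; with real tracked points the residual itself can feed the error criteria discussed in the abstract.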

  20. Enhanced MRI in patients with facial palsy

    International Nuclear Information System (INIS)

    Yanagida, Masahiro; Kato, Tsutomu; Ushiro, Koichi; Kitajiri, Masanori; Yamashita, Toshio; Kumazawa, Tadami; Tanaka, Yoshimasa

    1991-01-01

We performed Gd-DTPA-enhanced magnetic resonance imaging (MRI) examinations at several stages in 40 patients with peripheral facial nerve palsy (Bell's palsy and Ramsay-Hunt syndrome). In 38 of the 40 patients, one or more enhanced regions could be seen in certain portions of the facial nerve in the temporal bone on the affected side, whereas no enhanced regions were seen on the intact side. Correlations between the timing of the MRI examination and the location of the enhanced regions were analysed. In all 6 patients examined by MRI within 5 days after the onset of facial nerve palsy, enhanced regions were present in the meatal portion. In 3 of the 8 patients (38%) examined by MRI 6 to 10 days after the onset of facial palsy, enhanced areas were seen in both the meatal and labyrinthine portions. In 8 of the 9 patients (89%) examined 11 to 20 days after the onset of palsy, the vertical portion was enhanced. In the 12 patients examined by MRI 21 to 40 days after the onset of facial nerve palsy, the meatal portion was not enhanced, while the labyrinthine, horizontal and vertical portions were enhanced in 5 (42%), 8 (67%) and 11 (92%), respectively. Enhancement in the vertical portion was observed in all 5 patients examined more than 41 days after the onset of facial palsy. These results suggest that the central portion of the facial nerve in the temporal bone tends to be enhanced in the early stage of facial nerve palsy, while the peripheral portion is enhanced in the late stage. These changes in Gd-DTPA-enhanced regions of the facial nerve may suggest dromic degeneration of the facial nerve in peripheral facial nerve palsy. (author)