WorldWideScience

Sample records for facial features representative

  1. Learning representative features for facial images based on a modified principal component analysis

    Science.gov (United States)

    Averkin, Anton; Potapov, Alexey

    2013-05-01

    The paper is devoted to facial image analysis and particularly deals with the problem of automatic evaluation of the attractiveness of human faces. We propose a new approach for the automatic construction of a feature space based on a modified principal component analysis. The input to the algorithm is a learning set of facial images rated by a single person. The proposed approach allows one to extract features of that individual's subjective perception of face beauty and to predict attractiveness values for new facial images that were not included in the learning set. The Pearson correlation coefficient between the values predicted by our method for new facial images and the personal attractiveness ratings equals 0.89. This means the proposed approach is promising and can be used for predicting subjective face attractiveness in real facial image analysis systems.
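
    The pipeline the abstract describes maps naturally onto a few library calls. The following is a minimal sketch that uses standard PCA and an ordinary linear regressor as stand-ins for the authors' modified PCA and prediction step; the function names, the number of components and the flattened-image input format are assumptions for illustration.

```python
# Sketch of a PCA-based attractiveness predictor. Standard PCA and a linear
# regressor are stand-ins for the paper's modified PCA and prediction step.
import numpy as np
from scipy.stats import pearsonr
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

def train_attractiveness_model(train_images, train_ratings, n_components=20):
    """train_images: (n_samples, n_pixels) flattened face images;
    train_ratings: attractiveness scores given by a single rater."""
    pca = PCA(n_components=n_components).fit(train_images)
    reg = LinearRegression().fit(pca.transform(train_images), train_ratings)
    return pca, reg

def evaluate(pca, reg, test_images, test_ratings):
    predictions = reg.predict(pca.transform(test_images))
    r, _ = pearsonr(predictions, test_ratings)  # the abstract reports r = 0.89
    return predictions, r
```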

  2. Enhancing facial features by using clear facial features

    Science.gov (United States)

    Rofoo, Fanar Fareed Hanna

    2017-09-01

    The similarity of features between individuals of the same ethnicity motivated this project. The idea is to extract features from a clear facial image and impose them on a blurred facial image of the same ethnic origin as a way to enhance the blurred image. A database of clear images was collected containing 30 individuals equally divided into five ethnicities: Arab, African, Chinese, European and Indian. Software was built to perform pre-processing on the images in order to align the features of the clear and blurred images. Features of a clear facial image, or of a template built from clear facial images, were extracted using the wavelet transform and imposed on the blurred image using the inverse wavelet transform. The results of this approach were not good because the features did not all align: in most cases the eyes were aligned but the nose or mouth was not. A second approach therefore dealt with features separately, but in some cases this produced a blocky effect on features because no closely matching features were available. In general, the small available database did not allow the goal results to be achieved because of the limited number of individuals. Colour information and feature similarity could be investigated further with a larger database, and the enhancement process could be improved by the availability of closer matches within each ethnicity.
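
    As a rough illustration of the wavelet-based idea described above, the sketch below swaps the high-frequency (detail) sub-bands of an aligned clear face into the decomposition of a blurred face and inverts the transform. The wavelet choice, the single decomposition level, and the assumption that alignment has already been done are illustrative, not taken from the paper.

```python
# Sketch: impose wavelet detail coefficients from a clear (template) face onto a
# blurred face, then invert the transform. Assumes both grayscale images are
# already aligned and the same size; 'haar' and one level are arbitrary choices.
import numpy as np
import pywt

def impose_clear_features(blurred, clear, wavelet="haar"):
    b_approx, _ = pywt.dwt2(blurred, wavelet)
    _, c_details = pywt.dwt2(clear, wavelet)
    # Keep the blurred image's low-frequency content and replace its detail
    # sub-bands (horizontal, vertical, diagonal) with those of the clear face.
    enhanced = pywt.idwt2((b_approx, c_details), wavelet)
    return np.clip(enhanced, 0, 255)
```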

  3. Fusing Facial Features for Face Recognition

    Directory of Open Access Journals (Sweden)

    Jamal Ahmad Dargham

    2012-06-01

    Full Text Available Face recognition is an important biometric method because of its potential applications in many fields, such as access control, surveillance, and human-computer interaction. In this paper, a face recognition system that fuses the outputs of three face recognition systems based on Gabor jets is presented. The first system uses the magnitude, the second the phase, and the third the phase-weighted magnitude of the jets. The jets are generated from facial landmarks selected using three selection methods. It was found that fusing the facial features gives a better recognition rate than any single facial feature used individually, regardless of the landmark selection method.
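
    The three similarity measures and their fusion could look roughly like the sketch below, which assumes the complex Gabor jets have already been computed at corresponding landmarks. The cosine-style similarity formulas and the simple average-rule fusion are assumptions; the abstract only states that the three system outputs are fused.

```python
# Sketch of fusing magnitude, phase, and phase-weighted-magnitude similarities
# of pre-computed complex Gabor jets sampled at facial landmarks.
import numpy as np

def magnitude_similarity(j1, j2):
    a, b = np.abs(j1), np.abs(j2)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def phase_similarity(j1, j2):
    # Mean cosine of the phase difference, mapped to [0, 1].
    return float((np.cos(np.angle(j1) - np.angle(j2)).mean() + 1) / 2)

def phase_weighted_magnitude_similarity(j1, j2):
    a, b = np.abs(j1), np.abs(j2)
    weights = np.cos(np.angle(j1) - np.angle(j2))
    return float(np.sum(a * b * weights) / (np.linalg.norm(a) * np.linalg.norm(b)))

def fused_score(jets_probe, jets_gallery):
    """jets_*: lists of complex jets, one per facial landmark."""
    scores = [(magnitude_similarity(jp, jg)
               + phase_similarity(jp, jg)
               + phase_weighted_magnitude_similarity(jp, jg)) / 3
              for jp, jg in zip(jets_probe, jets_gallery)]
    return float(np.mean(scores))
```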

  4. Extraction and representation of common feature from uncertain facial expressions with cloud model.

    Science.gov (United States)

    Wang, Shuliang; Chi, Hehua; Yuan, Hanning; Geng, Jing

    2017-12-01

    Human facial expressions are a key ingredient for conveying an individual's innate emotions in communication. However, variation in facial expressions affects the reliable identification of human emotions. In this paper, we present a cloud model to extract facial features for representing human emotion. First, the uncertainties in facial expression are analyzed in the context of the cloud model. The feature extraction and representation algorithm is built on cloud generators. With the forward cloud generator, facial expression images can be re-generated in any number for visually representing the three extracted features, each of which plays a different role. The effectiveness of the computing model is tested on the Japanese Female Facial Expression database. Three common features are extracted from seven facial expression images. Finally, the paper is concluded with remarks.
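
    The forward cloud generator mentioned above is a standard construct of cloud-model theory; a minimal sketch is given below. It only produces cloud drops from the three numerical characteristics (Ex, En, He) and does not reproduce the paper's facial-feature extraction.

```python
# Minimal forward cloud generator: given expectation Ex, entropy En and
# hyper-entropy He, produce n cloud drops (x_i, mu_i).
import numpy as np

def forward_cloud_generator(ex, en, he, n, seed=None):
    rng = np.random.default_rng(seed)
    drops = []
    for _ in range(n):
        en_i = rng.normal(en, abs(he))          # per-drop entropy
        x_i = rng.normal(ex, abs(en_i))         # drop position
        mu_i = np.exp(-(x_i - ex) ** 2 / (2 * en_i ** 2 + 1e-12))  # membership degree
        drops.append((x_i, mu_i))
    return drops

drops = forward_cloud_generator(ex=0.5, en=0.1, he=0.01, n=1000, seed=0)
```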

  5. Effects of Bariatric Surgery on Facial Features

    Directory of Open Access Journals (Sweden)

    Vardan Papoian

    2015-09-01

    Full Text Available Background: The number of bariatric surgeries performed in the USA has increased twelve-fold in the past two decades. The effects of rapid weight loss on facial features have not been previously studied. We hypothesized that bariatric surgery would mimic the effects of aging, giving the patient an older and less attractive appearance. Methods: Consecutive patients were enrolled from the bariatric surgical clinic at our institution. Pre- and post-weight-loss photographs were taken and used to generate two surveys. The surveys were distributed through social media to assess the difference between the preoperative and postoperative facial photos in terms of patients' perceived age and overall attractiveness; 102 respondents completed the first survey and 95 completed the second. Results: Of the 14 patients, five showed a statistically significant change in perceived age (three more likely to be perceived as older and two less likely to be perceived as older). The patients were assessed to be more attractive postoperatively, which was statistically significant. Conclusions: Weight loss does affect facial aesthetics. Mild weight loss is perceived by survey respondents to give the appearance of a younger but less attractive patient, while substantial weight loss is perceived to give the appearance of an older but more attractive patient.

  6. Dynamic facial expression recognition based on geometric and texture features

    Science.gov (United States)

    Li, Ming; Wang, Zengfu

    2018-04-01

    Recently, dynamic facial expression recognition in videos has attracted growing attention. In this paper, we propose a novel dynamic facial expression recognition method that uses geometric and texture features. In our system, the facial landmark movements and texture variations over pairwise images are used to perform dynamic facial expression recognition. For each facial expression sequence, pairwise images are created between the first frame and each of its subsequent frames. Integrating geometric and texture features further enhances the representation of the facial expressions. Finally, a Support Vector Machine is used for facial expression recognition. Experiments conducted on the extended Cohn-Kanade database show that our proposed method achieves performance competitive with other methods.
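
    A minimal sketch of the pairwise-frame idea follows: landmark displacements between the first frame and a later frame form the geometric feature, a simple patch difference stands in for the paper's texture descriptor, and an SVM does the classification. The feature choices and SVM parameters are assumptions.

```python
# Sketch of pairwise-frame features (geometric + texture) fed to an SVM.
import numpy as np
from sklearn.svm import SVC

def pairwise_features(landmarks_first, landmarks_t, texture_first, texture_t):
    geometric = (landmarks_t - landmarks_first).ravel()   # landmark movements
    texture = (texture_t - texture_first).ravel()         # texture variation (stand-in)
    return np.concatenate([geometric, texture])

def train_expression_classifier(feature_vectors, labels):
    clf = SVC(kernel="rbf", C=1.0)
    clf.fit(np.vstack(feature_vectors), labels)
    return clf
```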

  7. Facial and Ocular Features of Marfan Syndrome

    Directory of Open Access Journals (Sweden)

    Juan C. Leoni

    2014-10-01

    Full Text Available Marfan syndrome is the most common inherited disorder of connective tissue affecting multiple organ systems. Identification of the facial, ocular and skeletal features should prompt referral for aortic imaging since sudden death by aortic dissection and rupture remains a major cause of death in patients with unrecognized Marfan syndrome. Echocardiography is recommended as the initial imaging test, and once a dilated aortic root is identified magnetic resonance or computed tomography should be done to assess the entire aorta. Prophylactic aortic root replacement is safe and has been demonstrated to improve life expectancy in patients with Marfan syndrome. Medical therapy for Marfan syndrome includes the use of beta blockers in older children and adults with an enlarged aorta. Addition of angiotensin receptor antagonists has been shown to slow the progression of aortic root dilation compared to beta blockers alone. Lifelong and regular follow up in a center for specialized care is important for patients with Marfan syndrome. We present a case of a patient with clinical features of Marfan syndrome and discuss possible therapeutic interventions for her dilated aorta.

  8. Facial feature tracking: a psychophysiological measure to assess exercise intensity?

    Science.gov (United States)

    Miles, Kathleen H; Clark, Bradley; Périard, Julien D; Goecke, Roland; Thompson, Kevin G

    2018-04-01

    The primary aim of this study was to determine whether facial feature tracking reliably measures changes in facial movement across varying exercise intensities. Fifteen cyclists completed three incremental-intensity cycling trials to exhaustion while their faces were recorded with video cameras. Facial feature tracking was found to be a moderately reliable measure of facial movement during incremental-intensity cycling (intra-class correlation coefficient = 0.65-0.68). Facial movement (whole face (WF), upper face (UF), lower face (LF) and head movement (HM)) increased with exercise intensity from lactate threshold one (LT1) until attainment of maximal aerobic power (MAP) (WF 3464 ± 3364 mm), and upper face movement exceeded lower face movement across exercise intensities (UF minus LF at LT1, 1048 ± 383 mm; LT2, 1208 ± 611 mm; MAP, 1401 ± 712 mm).

  9. Representing affective facial expressions for robots and embodied conversational agents by facial landmarks

    NARCIS (Netherlands)

    Liu, C.; Ham, J.R.C.; Postma, E.O.; Midden, C.J.H.; Joosten, B.; Goudbeek, M.

    2013-01-01

    Affective robots and embodied conversational agents require convincing facial expressions to make them socially acceptable. To be able to virtually generate facial expressions, we need to investigate the relationship between technology and human perception of affective and social signals. Facial

  10. Frame-Based Facial Expression Recognition Using Geometrical Features

    Directory of Open Access Journals (Sweden)

    Anwar Saeed

    2014-01-01

    Full Text Available To improve human-computer interaction (HCI) until it is as good as human-human interaction, an efficient approach to human emotion recognition is required. These emotions could be fused from several modalities such as facial expression, hand gesture, acoustic data, and biophysiological data. In this paper, we address the frame-based perception of the universal human facial expressions (happiness, surprise, anger, disgust, fear, and sadness) with the help of several geometrical features. Unlike many other geometry-based approaches, the frame-based method does not rely on prior knowledge of a person-specific neutral expression; such knowledge is gained through human intervention and is not available in real scenarios. Additionally, we provide a method to investigate the performance of geometry-based approaches under various facial point localization errors. From an evaluation on two public benchmark datasets, we found that using eight facial points we can achieve the state-of-the-art recognition rate. However, the existing state-of-the-art geometry-based approach exploits features derived from 68 facial points and requires prior knowledge of the person-specific neutral expression. The expression recognition rate using geometrical features is adversely affected by errors in facial point localization, especially for expressions with subtle facial deformations.

  11. An Improved AAM Method for Extracting Human Facial Features

    Directory of Open Access Journals (Sweden)

    Tao Zhou

    2012-01-01

    Full Text Available The active appearance model (AAM) is a statistical parametric model widely used for extracting human facial features and for recognition. However, the intensity values used in the original AAM cannot provide enough information about image texture, which can lead to larger errors or fitting failures. In order to overcome these defects and improve the fitting performance of the AAM, an improved texture representation is proposed in this paper. First, a translation-invariant wavelet transform is applied to face images, and the image structure is then represented using a measure obtained by fusing the low-frequency coefficients with edge intensity. Experimental results show that the improved algorithm increases the accuracy of AAM fitting and expresses more information about edge and texture structures.

  12. Feature selection from a facial image for distinction of sasang constitution.

    Science.gov (United States)

    Koo, Imhoi; Kim, Jong Yeol; Kim, Myoung Geun; Kim, Keun Ho

    2009-09-01

    Recently, oriental medicine has received attention for providing personalized medicine through consideration of the unique nature and constitution of individual patients. With the eventual goal of globalization, the current trend in oriental medicine research is the standardization by adopting western scientific methods, which could represent a scientific revolution. The purpose of this study is to establish methods for finding statistically significant features in a facial image with respect to distinguishing constitution and to show the meaning of those features. From facial photo images, facial elements are analyzed in terms of the distance, angle and the distance ratios, for which there are 1225, 61 250 and 749 700 features, respectively. Due to the very large number of facial features, it is quite difficult to determine truly meaningful features. We suggest a process for the efficient analysis of facial features including the removal of outliers, control for missing data to guarantee data confidence and calculation of statistical significance by applying ANOVA. We show the statistical properties of selected features according to different constitutions using the nine distances, 10 angles and 10 rates of distance features that are finally established. Additionally, the Sasang constitutional meaning of the selected features is shown here.
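
    The three feature families and the ANOVA screening described in this record could be computed roughly as in the sketch below; the landmark input format, the angle formula and the significance threshold are assumptions for illustration, and the outlier and missing-data handling steps are omitted.

```python
# Sketch: build distance and angle features from facial landmarks and screen
# them with one-way ANOVA across Sasang constitution groups. Distance ratios
# can be formed by dividing pairs of the distance features.
import itertools
import numpy as np
from scipy.stats import f_oneway

def distance_features(points):
    """points: (n_landmarks, 2) array of facial landmark coordinates."""
    return {f"d_{i}_{j}": np.linalg.norm(points[i] - points[j])
            for i, j in itertools.combinations(range(len(points)), 2)}

def angle_features(points):
    feats = {}
    for i, j, k in itertools.combinations(range(len(points)), 3):
        v1, v2 = points[i] - points[j], points[k] - points[j]
        cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-12)
        feats[f"a_{i}_{j}_{k}"] = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return feats

def anova_screen(feature_table, groups, alpha=0.05):
    """feature_table: dict name -> per-subject values; groups: constitution per subject."""
    groups = np.asarray(groups)
    significant = {}
    for name, values in feature_table.items():
        values = np.asarray(values)
        samples = [values[groups == g] for g in np.unique(groups)]
        _, p = f_oneway(*samples)
        if p < alpha:
            significant[name] = p
    return significant
```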

  13. Feature Selection from a Facial Image for Distinction of Sasang Constitution

    Directory of Open Access Journals (Sweden)

    Imhoi Koo

    2009-01-01

    Full Text Available Recently, oriental medicine has received attention for providing personalized medicine through consideration of the unique nature and constitution of individual patients. With the eventual goal of globalization, the current trend in oriental medicine research is the standardization by adopting western scientific methods, which could represent a scientific revolution. The purpose of this study is to establish methods for finding statistically significant features in a facial image with respect to distinguishing constitution and to show the meaning of those features. From facial photo images, facial elements are analyzed in terms of the distance, angle and the distance ratios, for which there are 1225, 61 250 and 749 700 features, respectively. Due to the very large number of facial features, it is quite difficult to determine truly meaningful features. We suggest a process for the efficient analysis of facial features including the removal of outliers, control for missing data to guarantee data confidence and calculation of statistical significance by applying ANOVA. We show the statistical properties of selected features according to different constitutions using the nine distances, 10 angles and 10 rates of distance features that are finally established. Additionally, the Sasang constitutional meaning of the selected features is shown here.

  14. Feature Selection from a Facial Image for Distinction of Sasang Constitution

    Science.gov (United States)

    Koo, Imhoi; Kim, Jong Yeol; Kim, Myoung Geun

    2009-01-01

    Recently, oriental medicine has received attention for providing personalized medicine through consideration of the unique nature and constitution of individual patients. With the eventual goal of globalization, the current trend in oriental medicine research is the standardization by adopting western scientific methods, which could represent a scientific revolution. The purpose of this study is to establish methods for finding statistically significant features in a facial image with respect to distinguishing constitution and to show the meaning of those features. From facial photo images, facial elements are analyzed in terms of the distance, angle and the distance ratios, for which there are 1225, 61 250 and 749 700 features, respectively. Due to the very large number of facial features, it is quite difficult to determine truly meaningful features. We suggest a process for the efficient analysis of facial features including the removal of outliers, control for missing data to guarantee data confidence and calculation of statistical significance by applying ANOVA. We show the statistical properties of selected features according to different constitutions using the nine distances, 10 angles and 10 rates of distance features that are finally established. Additionally, the Sasang constitutional meaning of the selected features is shown here. PMID:19745013

  15. Extracted facial feature of racial closely related faces

    Science.gov (United States)

    Liewchavalit, Chalothorn; Akiba, Masakazu; Kanno, Tsuneo; Nagao, Tomoharu

    2010-02-01

    Human faces contain a lot of demographic information such as identity, gender, age, race and emotion. Human beings can perceive these pieces of information and use them as important clues in social interaction with other people. Race perception is considered one of the most delicate and sensitive parts of face perception. There is much research concerning image-based race recognition, but most of it focuses on major race groups such as Caucasoid, Negroid and Mongoloid. This paper focuses on how people classify the races of racially closely related groups. As a sample of a racially closely related group, we chose Japanese and Thai faces to represent the difference between Northern and Southern Mongoloid. Three psychological experiments were performed to study the strategies of face perception in race classification. The results suggest that race perception is an ability that can be learned. Eyes and eyebrows attract the most attention, and the eyes are a significant factor in race perception. Principal Component Analysis (PCA) was performed to extract facial features of the sample race groups. The extracted race features of texture and shape were used to synthesize faces. The results suggest that racial features rely on detailed texture rather than shape. This is fundamental research on race perception, which is essential for establishing a human-like race recognition system.

  16. [Infantile facial paralysis: diagnostic and therapeutic features].

    Science.gov (United States)

    Montalt, J; Barona, R; Comeche, C; Basterra, J

    2000-01-01

    This paper deals with a series of 11 cases of peripheral unilateral facial paralysis affecting children under 15 years. The following parameters are reviewed: age, sex, affected side, origin, morbid antecedents, clinical and neurophysiological explorations (electroneurography through magnetic stimulation) and the evolutive course of the cases. These items are assembled in 3 sketches in the article. Clinical assessment of facial motility is more difficult the younger the patient; nevertheless, electroneurography was possible in the whole group. Clinical recovery was complete except in one patient with a complicated cholesteatoma. Some aspects concerning the etiology, diagnostic explorations and management of each pediatric case are discussed.

  17. Facial expression identification using 3D geometric features from Microsoft Kinect device

    Science.gov (United States)

    Han, Dongxu; Al Jawad, Naseer; Du, Hongbo

    2016-05-01

    Facial expression identification is an important part of face recognition and is closely related to emotion detection from face images. Various solutions have been proposed in the past using different types of cameras and features. The Microsoft Kinect device has been widely used for multimedia interactions and, more recently, has increasingly been deployed to support scientific investigations. This paper explores the effectiveness of using the device to identify emotional facial expressions such as surprise, smile and sadness, and evaluates the usefulness of the 3D data points of a face mesh structure obtained from the Kinect device. We present a distance-based geometric feature component that is derived from the distances between points on the face mesh and selected reference points in a single frame. The feature components extracted across a sequence of frames starting and ending with a neutral expression represent a whole expression. The feature vector eliminates the need for complex face orientation correction, simplifying the feature extraction process and making it more efficient. We applied a kNN classifier that exploits a feature-component-based similarity measure following the principle of dynamic time warping to determine the closest neighbours. Preliminary tests on a small-scale database of different facial expressions show the promise of the newly developed features and the usefulness of the Kinect device in facial expression identification.
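
    A sketch of the sequence-matching step follows: per-frame feature vectors (distances from mesh points to reference points) are compared with dynamic time warping, and a nearest-neighbour rule assigns the expression. The plain DTW recursion and k = 1 are assumptions; the Kinect feature extraction is presumed done elsewhere.

```python
# Sketch of DTW-based nearest-neighbour matching of expression sequences.
import numpy as np

def dtw_distance(seq_a, seq_b):
    """seq_a, seq_b: (T, D) arrays of per-frame distance-feature vectors."""
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def classify_expression(test_seq, train_seqs, train_labels):
    distances = [dtw_distance(test_seq, s) for s in train_seqs]
    return train_labels[int(np.argmin(distances))]   # 1-nearest neighbour
```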

  18. Facial Feature Extraction Using Frequency Map Series in PCNN

    Directory of Open Access Journals (Sweden)

    Rencan Nie

    2016-01-01

    Full Text Available The pulse coupled neural network (PCNN) has been widely used in image processing. The 3D binary map series (BMS) generated by PCNN effectively describes image feature information such as edges and regional distribution, so BMS can be treated as the basis for extracting a 1D oscillation time series (OTS) for an image. However, traditional methods using BMS did not consider the correlation of the binary sequences in BMS or the spatial structure of each map. By further processing BMS, a novel facial feature extraction method is proposed. First, considering the correlation among maps in BMS, a method is put forward to transform BMS into a frequency map series (FMS); this lessens the influence of non-continuous feature regions in binary images on OTS-BMS. Then, by computing the 2D entropy of every map in FMS, the 3D FMS is transformed into a 1D OTS (OTS-FMS), which has good geometric invariance for facial images and contains the spatial structure information of the image. Finally, in analyzing the OTS-FMS, the standard Euclidean distance is used to measure distances between OTS-FMS. Experimental results verify the effectiveness of OTS-FMS in facial recognition, and it shows better recognition performance than other feature extraction methods.

  19. Face detection and facial feature localization using notch based templates

    International Nuclear Information System (INIS)

    Qayyum, U.

    2007-01-01

    We present real-time detection of faces from video with facial feature localization, as well as an algorithm capable of differentiating between face and non-face patterns. The need for face detection and facial feature localization arises in various applications of computer vision, so a great deal of research has been dedicated to finding a real-time solution. The algorithm should remain simple enough to run in real time while not compromising on the challenges encountered during the detection and localization phases, i.e. it should be invariant to scale, translation, and (±45 degree) rotation transformations. The proposed system contains two parts: visual guidance and face/non-face classification. The visual guidance phase uses the fusion of motion and colour cues to classify skin colour. A morphological operation with a union-structure component labeling algorithm extracts contiguous regions. Scale normalization is applied by nearest-neighbour interpolation to avoid the effect of different scales. Using the aspect ratio of width to height, a region of interest (ROI) is obtained and passed to the face/non-face classifier. Notch (Gaussian) based templates/filters are used to find circular darker regions in the ROI. The classified face region is handed over to the facial feature localization phase, which uses a YCbCr eyes/lips mask for facial feature localization. Empirical results show an accuracy of 90% for five different videos with 1000 face/non-face patterns, and the processing rate of the proposed algorithm is 15 frames/sec. (author)
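
    The visual-guidance stage could be sketched with OpenCV as below: skin-colour thresholding in YCrCb, morphological clean-up, connected-component labelling and an aspect-ratio check yield candidate face ROIs. The threshold values and aspect-ratio bounds are common heuristics rather than the paper's, and the motion cue and notch-template classifier are omitted.

```python
# Sketch of skin-colour guidance producing face candidate ROIs.
import cv2
import numpy as np

def face_candidate_rois(bgr_frame, min_area=400):
    ycrcb = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))   # heuristic skin range
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    skin = cv2.morphologyEx(skin, cv2.MORPH_OPEN, kernel)
    skin = cv2.morphologyEx(skin, cv2.MORPH_CLOSE, kernel)
    n_labels, _, stats, _ = cv2.connectedComponentsWithStats(skin)
    rois = []
    for i in range(1, n_labels):                               # label 0 is background
        x, y, w, h, area = stats[i]
        if area >= min_area and 0.7 <= w / float(h) <= 1.4:    # rough face aspect ratio
            rois.append(bgr_frame[y:y + h, x:x + w])
    return rois
```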

  20. Orientations for the successful categorization of facial expressions and their link with facial features.

    Science.gov (United States)

    Duncan, Justin; Gosselin, Frédéric; Cobarro, Charlène; Dugas, Gabrielle; Blais, Caroline; Fiset, Daniel

    2017-12-01

    Horizontal information was recently suggested to be crucial for face identification. In the present paper, we expand on this finding and investigate the role of orientations for all the basic facial expressions and neutrality. To this end, we developed orientation bubbles to quantify utilization of the orientation spectrum by the visual system in a facial expression categorization task. We first validated the procedure in Experiment 1 with a simple plaid-detection task. In Experiment 2, we used orientation bubbles to reveal the diagnostic (i.e., task-relevant) orientations for the basic facial expressions and neutrality. Overall, we found that horizontal information was highly diagnostic for expressions, surprise excepted. We also found that utilization of horizontal information strongly predicted performance level in this task. Despite the recent surge of research on horizontals, the link with local features remains unexplored. We were thus also interested in investigating this link. In Experiment 3, location bubbles were used to reveal the diagnostic features for the basic facial expressions. Crucially, Experiments 2 and 3 were run in parallel on the same participants, in an interleaved fashion. This way, we were able to correlate individual orientation and local diagnostic profiles. Our results indicate that individual differences in horizontal tuning are best predicted by utilization of the eyes.

  1. Seven Non-melanoma Features to Rule Out Facial Melanoma

    Directory of Open Access Journals (Sweden)

    Philipp Tschandl

    2017-08-01

    Full Text Available Facial melanoma is difficult to diagnose and dermatoscopic features are often subtle. Dermatoscopic non-melanoma patterns may have a comparable diagnostic value. In this pilot study, facial lesions were collected retrospectively, resulting in a case set of 339 melanomas and 308 non-melanomas. Lesions were evaluated for the prevalence (> 50% of lesional surface) of 7 dermatoscopic non-melanoma features: scales, white follicles, erythema/reticular vessels, reticular and/or curved lines/fingerprints, structureless brown colour, sharp demarcation, and classic criteria of seborrhoeic keratosis. Melanomas had a lower number of non-melanoma patterns (p < 0.001). Scoring a lesion suspicious when no prevalent non-melanoma pattern is found resulted in a sensitivity of 88.5% and a specificity of 66.9% for the diagnosis of melanoma. Specificity was higher for solar lentigo (78.8%) and seborrhoeic keratosis (74.3%) and lower for actinic keratosis (61.4%) and lichenoid keratosis (25.6%). Evaluation of prevalent non-melanoma patterns can provide slightly lower sensitivity and higher specificity in detecting facial melanoma compared with already known malignant features.

  2. Towards the automation of forensic facial individualisation: Comparing forensic to non forensic eyebrow features

    NARCIS (Netherlands)

    Zeinstra, Christopher Gerard; Veldhuis, Raymond N.J.; Spreeuwers, Lieuwe Jan

    2014-01-01

    The Facial Identification Scientific Working Group (FISWG) publishes recommendations regarding one-to-one facial comparisons. At this moment a draft version of a facial image comparison feature list for morphological analysis has been published. This feature list is based on casework experience by

  3. Millennial Filipino Student Engagement Analyzer Using Facial Feature Classification

    Science.gov (United States)

    Manseras, R.; Eugenio, F.; Palaoag, T.

    2018-03-01

    Millennials are on everyone's lips and are a target market for various companies nowadays. In the Philippines, they comprise one third of the total population and most of them are still in school. Having a good education system is important to prepare this generation for better careers, and a good education system means having quality instruction as one of its input component indicators. In a classroom environment, teachers use facial features to gauge the affect state of the class. Emerging technologies such as affective computing are among today's trends for improving quality instruction delivery; together with computer vision, they can be used to analyze the affect states of students. This paper proposes a system for classifying student engagement using facial features. Identifying affect state, specifically Millennial Filipino student engagement, is one of the main priorities of every educator, and this directed the authors to develop a tool to assess engagement percentage. A multiple face detection framework using Face API was employed to detect as many student faces as possible and gauge the current engagement percentage of the whole class. A binary classifier based on a Support Vector Machine (SVM) was set in the conceptual framework of this study. To establish the best accuracy for this model, SVM was compared with two other widely used binary classifiers on the different test datasets. Results show that SVM outperformed the Random Forest and Naive Bayes algorithms in most of the experiments.

  4. Accurate landmarking of three-dimensional facial data in the presence of facial expressions and occlusions using a three-dimensional statistical facial feature model.

    Science.gov (United States)

    Zhao, Xi; Dellandréa, Emmanuel; Chen, Liming; Kakadiaris, Ioannis A

    2011-10-01

    Three-dimensional face landmarking aims at automatically localizing facial landmarks and has a wide range of applications (e.g., face recognition, face tracking, and facial expression analysis). Existing methods assume neutral facial expressions and unoccluded faces. In this paper, we propose a general learning-based framework for reliable landmark localization on 3-D facial data under challenging conditions (i.e., facial expressions and occlusions). Our approach relies on a statistical model, called 3-D statistical facial feature model, which learns both the global variations in configurational relationships between landmarks and the local variations of texture and geometry around each landmark. Based on this model, we further propose an occlusion classifier and a fitting algorithm. Results from experiments on three publicly available 3-D face databases (FRGC, BU-3-DFE, and Bosphorus) demonstrate the effectiveness of our approach, in terms of landmarking accuracy and robustness, in the presence of expressions and occlusions.

  5. Sensorineural Deafness, Distinctive Facial Features and Abnormal Cranial Bones

    Science.gov (United States)

    Gad, Alona; Laurino, Mercy; Maravilla, Kenneth R.; Matsushita, Mark; Raskind, Wendy H.

    2008-01-01

    The Waardenburg syndromes (WS) account for approximately 2% of congenital sensorineural deafness. This heterogeneous group of diseases currently can be categorized into four major subtypes (WS types 1-4) on the basis of characteristic clinical features. Multiple genes have been implicated in WS, and mutations in some genes can cause more than one WS subtype. In addition to eye, hair and skin pigmentary abnormalities, dystopia canthorum and broad nasal bridge are seen in WS type 1. Mutations in the PAX3 gene are responsible for the condition in the majority of these patients. In addition, mutations in PAX3 have been found in WS type 3 that is distinguished by musculoskeletal abnormalities, and in a family with a rare subtype of WS, craniofacial-deafness-hand syndrome (CDHS), characterized by dysmorphic facial features, hand abnormalities, and absent or hypoplastic nasal and wrist bones. Here we describe a woman who shares some, but not all features of WS type 3 and CDHS, and who also has abnormal cranial bones. All sinuses were hypoplastic, and the cochlea were small. No sequence alteration in PAX3 was found. These observations broaden the clinical range of WS and suggest there may be genetic heterogeneity even within the CDHS subtype. PMID:18553554

  6. A Robust Shape Reconstruction Method for Facial Feature Point Detection

    Directory of Open Access Journals (Sweden)

    Shuqiu Tan

    2017-01-01

    Full Text Available Facial feature point detection has seen great research advances in recent years. Numerous methods have been developed and applied in practical face analysis systems. However, it is still a quite challenging task because of the large variability in expressions and gestures and the existence of occlusions in real-world photo shoots. In this paper, we present a robust sparse reconstruction method for face alignment problems. Instead of a direct regression between the feature space and the shape space, the concept of shape increment reconstruction is introduced. Moreover, a set of coupled overcomplete dictionaries, termed the shape increment dictionary and the local appearance dictionary, is learned in a regressive manner to select robust features and fit shape increments. Additionally, to make the learned model more general, we select the best matched parameter set through extensive validation tests. Experimental results on three public datasets demonstrate that the proposed method achieves better robustness than the state-of-the-art methods.

  7. Nine-year-old children use norm-based coding to visually represent facial expression.

    Science.gov (United States)

    Burton, Nichola; Jeffery, Linda; Skinner, Andrew L; Benton, Christopher P; Rhodes, Gillian

    2013-10-01

    Children are less skilled than adults at making judgments about facial expression. This could be because they have not yet developed adult-like mechanisms for visually representing faces. Adults are thought to represent faces in a multidimensional face-space, and have been shown to code the expression of a face relative to the norm or average face in face-space. Norm-based coding is economical and adaptive, and may be what makes adults more sensitive to facial expression than children. This study investigated the coding system that children use to represent facial expression. An adaptation aftereffect paradigm was used to test 24 adults and 18 children (9 years 2 months to 9 years 11 months old). Participants adapted to weak and strong antiexpressions. They then judged the expression of an average face. Adaptation created aftereffects that made the test face look like the expression opposite that of the adaptor. Consistent with the predictions of norm-based but not exemplar-based coding, aftereffects were larger for strong than weak adaptors for both age groups. Results indicate that, like adults, children's coding of facial expressions is norm-based. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  8. Interpretation of appearance: the effect of facial features on first impressions and personality

    DEFF Research Database (Denmark)

    Wolffhechel, Karin Marie Brandt; Fagertun, Jens; Jacobsen, Ulrik Plesner

    2014-01-01

    Appearance is known to influence social interactions, which in turn could potentially influence personality development. In this study we focus on discovering the relationship between self-reported personality traits, first impressions and facial characteristics. The results reveal that several personality traits can be read above chance from a face, and that facial features influence first impressions. Despite the former, our prediction model fails to reliably infer personality traits from either facial features or first impressions. First impressions, however, could be inferred more reliably from facial features. We have generated artificial, extreme faces visualising the characteristics having an effect on first impressions for several traits. Conclusively, we find a relationship between first impressions, some personality traits and facial features and consolidate that people on average assess

  9. iFER: facial expression recognition using automatically selected geometric eye and eyebrow features

    Science.gov (United States)

    Oztel, Ismail; Yolcu, Gozde; Oz, Cemil; Kazan, Serap; Bunyak, Filiz

    2018-03-01

    Facial expressions have an important role in interpersonal communication and the estimation of emotional states or intentions. Automatic recognition of facial expressions has led to many practical applications and has become one of the important topics in computer vision. We present a facial expression recognition system that relies on geometry-based features extracted from the eye and eyebrow regions of the face. The proposed system detects keypoints on frontal face images and forms a feature set using geometric relationships among groups of detected keypoints. The obtained feature set is refined and reduced using the sequential forward selection (SFS) algorithm and fed to a support vector machine classifier to recognize five facial expression classes. The proposed system, iFER (eye-eyebrow only facial expression recognition), is robust to lower-face occlusions that may be caused by beards, mustaches, scarves, etc., and to lower-face motion during speech production. Preliminary experiments on benchmark datasets produced promising results, outperforming previous facial expression recognition studies that use partial face features and giving results comparable to studies that use whole-face information, only slightly lower by ~2.5% compared to the best whole-face system while using only ~1/3 of the facial region.

  10. Facial expression recognition in the wild based on multimodal texture features

    Science.gov (United States)

    Sun, Bo; Li, Liandong; Zhou, Guoyan; He, Jun

    2016-11-01

    Facial expression recognition in the wild is a very challenging task. We describe our work on static and continuous facial expression recognition in the wild. We evaluate the recognition results of gray deep features and color deep features, and explore the fusion of multimodal texture features. For continuous facial expression recognition, we design two temporal-spatial dense scale-invariant feature transform (SIFT) features and combine multimodal features to recognize expressions from image sequences. For static facial expression recognition based on video frames, we extract dense SIFT and several deep convolutional neural network (CNN) features, including features from our proposed CNN architecture. We train linear support vector machine and partial least squares classifiers for these kinds of features on the static facial expression in the wild (SFEW) and acted facial expression in the wild (AFEW) datasets, and we propose a fusion network to combine all the extracted features at the decision level. The final results are 56.32% on the SFEW testing set and 50.67% on the AFEW validation set, which are much better than the baseline recognition rates of 35.96% and 36.08%.

  11. Assessing the accuracy of perceptions of intelligence based on heritable facial features

    OpenAIRE

    Lee, Anthony J.; Hibbs, Courtney; Wright, Margaret J.; Martin, Nicholas G.; Keller, Matthew C.; Zietsch, Brendan P.

    2017-01-01

    Perceptions of intelligence based on facial features can have a profound impact on many social situations, but findings have been mixed as to whether these judgements are accurate. Even if such perceptions were accurate, the underlying mechanism is unclear. Several possibilities have been proposed, including evolutionary explanations where certain morphological facial features are associated with fitness-related traits (including cognitive development), or that intelligence judgements are ove...

  12. Contributions of feature shapes and surface cues to the recognition of facial expressions.

    Science.gov (United States)

    Sormaz, Mladen; Young, Andrew W; Andrews, Timothy J

    2016-10-01

    Theoretical accounts of face processing often emphasise feature shapes as the primary visual cue to the recognition of facial expressions. However, changes in facial expression also affect the surface properties of the face. In this study, we investigated whether this surface information can also be used in the recognition of facial expression. First, participants identified facial expressions (fear, anger, disgust, sadness, happiness) from images that were manipulated such that they varied mainly in shape or mainly in surface properties. We found that the categorization of facial expression is possible in either type of image, but that different expressions are relatively dependent on surface or shape properties. Next, we investigated the relative contributions of shape and surface information to the categorization of facial expressions. This employed a complementary method that involved combining the surface properties of one expression with the shape properties from a different expression. Our results showed that the categorization of facial expressions in these hybrid images was equally dependent on the surface and shape properties of the image. Together, these findings provide a direct demonstration that both feature shape and surface information make significant contributions to the recognition of facial expressions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. A newly recognized syndrome of severe growth deficiency, microcephaly, intellectual disability, and characteristic facial features.

    Science.gov (United States)

    Vinkler, Chana; Leshinsky-Silver, Esther; Michelson, Marina; Haas, Dorothea; Lerman-Sagie, Tally; Lev, Dorit

    2014-01-01

    Genetic syndromes with proportionate severe short stature are rare. We describe two sisters born to nonconsanguineous parents with severe linear growth retardation, poor weight gain, microcephaly, characteristic facial features, cutaneous syndactyly of the toes, high myopia, and severe intellectual disability. During infancy and early childhood, the girls had transient hepatosplenomegaly and low blood cholesterol levels that normalized later. A thorough evaluation including metabolic studies, radiological, and genetic investigations were all normal. Cholesterol metabolism and transport were studied and no definitive abnormality was found. No clinical deterioration was observed and no metabolic crises were reported. After due consideration of other known hereditary causes of post-natal severe linear growth retardation, microcephaly, and intellectual disability, we propose that this condition represents a newly recognized autosomal recessive multiple congenital anomaly-intellectual disability syndrome. Copyright © 2014 Elsevier Masson SAS. All rights reserved.

  14. Binary pattern flavored feature extractors for Facial Expression Recognition: An overview

    DEFF Research Database (Denmark)

    Kristensen, Rasmus Lyngby; Tan, Zheng-Hua; Ma, Zhanyu

    2015-01-01

    This paper conducts a survey of modern binary pattern flavored feature extractors applied to the Facial Expression Recognition (FER) problem. In total, 26 different feature extractors are included, of which six are selected for in depth description. In addition, the paper unifies important FER...

  15. Joint Facial Action Unit Detection and Feature Fusion: A Multi-conditional Learning Approach.

    Science.gov (United States)

    Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja

    2016-10-05

    Automated analysis of facial expressions can benefit many domains, from marketing to clinical diagnosis of neurodevelopmental disorders. Facial expressions are typically encoded as a combination of facial muscle activations, i.e., action units. Depending on context, these action units co-occur in specific patterns, and rarely in isolation. Yet, most existing methods for automatic action unit detection fail to exploit dependencies among them, and the corresponding facial features. To address this, we propose a novel multi-conditional latent variable model for simultaneous fusion of facial features and joint action unit detection. Specifically, the proposed model performs feature fusion in a generative fashion via a low-dimensional shared subspace, while simultaneously performing action unit detection using a discriminative classification approach. We show that by combining the merits of both approaches, the proposed methodology outperforms existing purely discriminative/generative methods for the target task. To reduce the number of parameters, and avoid overfitting, a novel Bayesian learning approach based on Monte Carlo sampling is proposed, to integrate out the shared subspace. We validate the proposed method on posed and spontaneous data from three publicly available datasets (CK+, DISFA and Shoulder-pain), and show that both feature fusion and joint learning of action units leads to improved performance compared to the state-of-the-art methods for the task.

  16. An Algorithm Based on the Self-Organized Maps for the Classification of Facial Features

    Directory of Open Access Journals (Sweden)

    Gheorghe Gîlcă

    2015-12-01

    Full Text Available This paper deals with an algorithm based on Self-Organizing Map (SOM) networks for classifying facial features. The proposed algorithm categorizes the facial features defined by the input variables (eyebrows, mouth, eyelids) into a map of their groupings. The group map is based on calculating the distance between each input vector and each neuron in the output layer, the neuron with the minimum distance being declared the winner. The network structure consists of two levels: the first level contains three input vectors, each having forty-one values, while the second level contains the SOM competitive network, which consists of 100 neurons. The proposed system can classify facial features quickly and easily using the proposed SOM-based algorithm.
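
    The winner-takes-all step of the algorithm reduces to a distance computation over the 100 output neurons; a minimal sketch with placeholder weights is shown below (training of the map itself is omitted).

```python
# Sketch of the winner-neuron computation on a trained 10 x 10 SOM.
import numpy as np

def winner_neuron(weights, x):
    """weights: (100, 41) trained SOM weight vectors; x: (41,) input feature vector."""
    distances = np.linalg.norm(weights - x, axis=1)
    idx = int(np.argmin(distances))
    return divmod(idx, 10)              # (row, column) on the 10 x 10 map

rng = np.random.default_rng(0)
weights = rng.random((100, 41))         # placeholder for trained weights
print(winner_neuron(weights, rng.random(41)))
```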

  17. The importance of internal facial features in learning new faces.

    Science.gov (United States)

    Longmore, Christopher A; Liu, Chang Hong; Young, Andrew W

    2015-01-01

    For familiar faces, the internal features (eyes, nose, and mouth) are known to be differentially salient for recognition compared to external features such as hairstyle. Two experiments are reported that investigate how this internal feature advantage accrues as a face becomes familiar. In Experiment 1, we tested the contribution of internal and external features to the ability to generalize from a single studied photograph to different views of the same face. A recognition advantage for the internal features over the external features was found after a change of viewpoint, whereas there was no internal feature advantage when the same image was used at study and test. In Experiment 2, we removed the most salient external feature (hairstyle) from studied photographs and looked at how this affected generalization to a novel viewpoint. Removing the hair from images of the face assisted generalization to novel viewpoints, and this was especially the case when photographs showing more than one viewpoint were studied. The results suggest that the internal features play an important role in the generalization between different images of an individual's face by enabling the viewer to detect the common identity-diagnostic elements across non-identical instances of the face.

  18. Recovering Faces from Memory: The Distracting Influence of External Facial Features

    Science.gov (United States)

    Frowd, Charlie D.; Skelton, Faye; Atherton, Chris; Pitchford, Melanie; Hepton, Gemma; Holden, Laura; McIntyre, Alex H.; Hancock, Peter J. B.

    2012-01-01

    Recognition memory for unfamiliar faces is facilitated when contextual cues (e.g., head pose, background environment, hair and clothing) are consistent between study and test. By contrast, inconsistencies in external features, especially hair, promote errors in unfamiliar face-matching tasks. For the construction of facial composites, as carried…

  19. Newborns' Face Recognition: Role of Inner and Outer Facial Features

    Science.gov (United States)

    Turati, Chiara; Macchi Cassia, Viola; Simion, Francesca; Leo, Irene

    2006-01-01

    Existing data indicate that newborns are able to recognize individual faces, but little is known about what perceptual cues drive this ability. The current study showed that either the inner or outer features of the face can act as sufficient cues for newborns' face recognition (Experiment 1), but the outer part of the face enjoys an advantage…

  20. Kernel-based discriminant feature extraction using a representative dataset

    Science.gov (United States)

    Li, Honglin; Sancho Gomez, Jose-Luis; Ahalt, Stanley C.

    2002-07-01

    Discriminant Feature Extraction (DFE) is widely recognized as an important pre-processing step in classification applications. Most DFE algorithms are linear and thus can only explore the linear discriminant information among the different classes. Recently, there have been several promising attempts to develop nonlinear DFE algorithms, among which is Kernel-based Feature Extraction (KFE). The efficacy of KFE has been experimentally verified on both synthetic data and real problems. However, KFE has some known limitations. First, KFE does not work well for strongly overlapped data. Second, KFE employs all of the training-set samples during the feature extraction phase, which can result in significant computation when applied to very large datasets. Finally, KFE can result in overfitting. In this paper, we propose a substantial improvement to KFE that overcomes the above limitations by using a representative dataset, which consists of critical points generated by data-editing techniques and centroid points determined by the Frequency Sensitive Competitive Learning (FSCL) algorithm. Experiments show that this new KFE algorithm performs well on significantly overlapped datasets and also reduces computational complexity. Further, by controlling the number of centroids, the overfitting problem can be effectively alleviated.
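
    The representative-dataset idea can be sketched as follows, with KMeans standing in for Frequency Sensitive Competitive Learning (FSCL) and KernelPCA standing in for the paper's discriminant kernel feature extractor; both substitutions, and the omission of the data-editing step that produces critical points, are simplifications for illustration.

```python
# Sketch: fit a kernel feature extractor on a small representative set of
# centroids instead of the full training set, then transform all data.
from sklearn.cluster import KMeans
from sklearn.decomposition import KernelPCA

def representative_kernel_features(X_train, X_test, n_centroids=50, n_components=10):
    centroids = (KMeans(n_clusters=n_centroids, n_init=10, random_state=0)
                 .fit(X_train).cluster_centers_)            # stand-in for FSCL centroids
    kfe = KernelPCA(n_components=n_components, kernel="rbf").fit(centroids)
    return kfe.transform(X_train), kfe.transform(X_test)
```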

  1. The Association of Quantitative Facial Color Features with Cold Pattern in Traditional East Asian Medicine

    Directory of Open Access Journals (Sweden)

    Sujeong Mun

    2017-01-01

    Full Text Available Introduction. Facial diagnosis is a major component of the diagnostic method in traditional East Asian medicine. We investigated the association of quantitative facial color features with cold pattern using a fully automated facial color parameterization system. Methods. The facial color parameters of 64 participants were obtained from digital photographs using an automatic color correction and color parameter calculation system. Cold pattern severity was evaluated using a questionnaire. Results. The a* values of the whole face, lower cheek, and chin were negatively associated with cold pattern score (CPS) (whole face: B = -1.048, P = 0.021; lower cheek: B = -0.494, P = 0.007; chin: B = -0.640, P = 0.031), while the b* value of the lower cheek was positively associated with CPS (B = 0.234, P = 0.019). The a* values of the whole face were significantly correlated with specific cold pattern symptoms including cold abdomen (partial ρ = -0.354, P < 0.01) and cold sensation in the body (partial ρ = -0.255, P < 0.05). Conclusions. a* values of the whole face were negatively associated with CPS, indicating that individuals with increased levels of cold pattern had paler faces. These findings suggest that objective facial diagnosis has utility for pattern identification.
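
    A sketch of the colour-parameter step follows: a facial-region crop is converted to CIELAB, the mean a* and b* values are taken, and a* is related to the cold pattern score (CPS) with an ordinary linear regression. The region cropping, questionnaire scoring, colour correction, and the exact regression model behind the reported B coefficients are assumed to be handled elsewhere.

```python
# Sketch: mean CIELAB a*/b* of a facial region, then a simple regression on CPS.
import numpy as np
from skimage.color import rgb2lab
from sklearn.linear_model import LinearRegression

def mean_lab_parameters(rgb_region):
    """rgb_region: (H, W, 3) RGB crop of a facial region."""
    lab = rgb2lab(rgb_region)
    return lab[..., 1].mean(), lab[..., 2].mean()    # mean a*, mean b*

def regress_cps_on_a_star(a_star_values, cps_scores):
    model = LinearRegression().fit(np.asarray(a_star_values).reshape(-1, 1), cps_scores)
    return model.coef_[0], model.intercept_          # slope plays the role of B above
```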

  2. The extraction and use of facial features in low bit-rate visual communication.

    Science.gov (United States)

    Pearson, D

    1992-01-29

    A review is given of experimental investigations by the author and his collaborators into methods of extracting binary features from images of the face and hands. The aim of the research has been to enable deaf people to communicate by sign language over the telephone network. Other applications include model-based image coding and facial-recognition systems. The paper deals with the theoretical postulates underlying the successful experimental extraction of facial features. The basic philosophy has been to treat the face as an illuminated three-dimensional object and to identify features from characteristics of their Gaussian maps. It can be shown that in general a composite image operator linked to a directional-illumination estimator is required to accomplish this, although the latter can often be omitted in practice.

  3. Feature Fusion Algorithm for Multimodal Emotion Recognition from Speech and Facial Expression Signal

    Directory of Open Access Journals (Sweden)

    Han Zhiyan

    2016-01-01

    Full Text Available In order to overcome the limitations of single-mode emotion recognition, this paper describes a novel multimodal emotion recognition algorithm that takes speech and facial expression signals as the research subjects. First, the speech-signal features and facial-expression features are fused, sample sets are obtained by sampling with replacement, and classifiers are then trained with BP neural networks (BPNN). Second, the difference between two classifiers is measured with a double error difference selection strategy. Finally, the final recognition result is obtained by the majority voting rule. Experiments show that the method improves the accuracy of emotion recognition by giving full play to the advantages of decision-level fusion and feature-level fusion, bringing the whole fusion process closer to human emotion recognition, with a recognition rate of 90.4%.
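
    A rough sketch of the fusion pipeline follows: speech and facial-expression feature vectors are concatenated (feature-level fusion), several networks are trained on with-replacement bootstrap samples, and their predictions are combined by majority vote (decision-level fusion). scikit-learn's MLPClassifier stands in for the BPNN, and the double error difference selection step is omitted.

```python
# Sketch of feature-level fusion plus a majority-voted ensemble of networks.
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_fused_ensemble(speech_feats, face_feats, labels, n_classifiers=5, seed=0):
    X = np.hstack([speech_feats, face_feats])          # feature-level fusion
    y = np.asarray(labels)
    rng = np.random.default_rng(seed)
    ensemble = []
    for _ in range(n_classifiers):
        idx = rng.integers(0, len(X), size=len(X))     # sampling with replacement
        clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
        clf.fit(X[idx], y[idx])
        ensemble.append(clf)
    return ensemble

def predict_majority(ensemble, speech_feats, face_feats):
    X = np.hstack([speech_feats, face_feats])
    votes = np.stack([clf.predict(X) for clf in ensemble])   # (n_classifiers, n_samples)
    majority = []
    for column in votes.T:                                    # decision-level fusion
        values, counts = np.unique(column, return_counts=True)
        majority.append(values[np.argmax(counts)])
    return np.array(majority)
```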

  4. Evidence from facial morphology for similarity of Asian and African representatives of Homo erectus.

    Science.gov (United States)

    Rightmire, G P

    1998-05-01

    It has been argued that Homo erectus is a species confined to Asia. Specialized characters displayed by the Indonesian and Chinese skulls are said to be absent in material from eastern Africa, and individuals from Koobi Fora and Nariokotome are now referred by some workers to H. ergaster. This second species is held to be the ancestor from which later human populations are derived. The claim for two taxa is evaluated here with special reference to the facial skeleton. Asian fossils examined include Sangiran 4 and Sangiran 17, several of the Ngandong crania, Gongwangling, and of course the material from Zhoukoudian described by Weidenreich ([1943] Palaeontol. Sin. [New Ser. D] 10:1-484). African specimens compared are KNM-ER 3733 and KNM-ER 3883 from Koobi Fora and KNM-WT 15000 from Nariokotome. Hominid 9 from Olduvai is useful only insofar as the brows and interorbital pillar are preserved. Neither detailed anatomical comparisons nor measurements bring to light any consistent patterns in facial morphology which set the African hominids apart from Asian H. erectus. Faces of the African individuals do tend to be high and less broad across the orbits. Both of the Koobi Fora crania but not KNM-WT 15000 have nasal bones that are narrow superiorly, while the piriform aperture is relatively wide. In many other characters, including contour of the supraorbital torus, glabellar prominence, nasal bridge dimensions, internasal keeling, anatomy of the nasal sill and floor, development of the canine jugum, orientation of the zygomaticoalveolar pillar, rounding of the anterolateral surface of the cheek, formation of a malar tubercle, and palatal rugosity, there is variation among individuals from localities within the major geographic provinces. Here it is not possible to identify features that are unique to either the Asian or African assemblages. Additional traits such as a forward sloping "crista nasalis," presence of a "sulcus maxillaris," a high (and massive) cheek coupled

  5. Alagille syndrome in a Vietnamese cohort: mutation analysis and assessment of facial features.

    Science.gov (United States)

    Lin, Henry C; Le Hoang, Phuc; Hutchinson, Anne; Chao, Grace; Gerfen, Jennifer; Loomes, Kathleen M; Krantz, Ian; Kamath, Binita M; Spinner, Nancy B

    2012-05-01

    Alagille syndrome (ALGS, OMIM #118450) is an autosomal dominant disorder that affects multiple organ systems including the liver, heart, eyes, vertebrae, and face. ALGS is caused by mutations in one of two genes in the Notch Signaling Pathway, Jagged1 (JAG1) or NOTCH2. In this study, analysis of 21 Vietnamese ALGS individuals led to the identification of 19 different mutations (18 JAG1 and 1 NOTCH2), 17 of which are novel, including the third reported NOTCH2 mutation in Alagille Syndrome. The spectrum of JAG1 mutations in the Vietnamese patients is similar to that previously reported, including nine frameshift, three missense, two splice site, one nonsense, two whole gene, and one partial gene deletion. The missense mutations are all likely to be disease causing, as two are loss of cysteines (C22R and C78G) and the third creates a cryptic splice site in exon 9 (G386R). No correlation between genotype and phenotype was observed. Assessment of clinical phenotype revealed that skeletal manifestations occur with a higher frequency than in previously reported Alagille cohorts. Facial features were difficult to assess and a Vietnamese pediatric gastroenterologist was only able to identify the facial phenotype in 61% of the cohort. To assess the agreement among North American dysmorphologists at detecting the presence of ALGS facial features in the Vietnamese patients, 37 clinical dysmorphologists evaluated a photographic panel of 20 Vietnamese children with and without ALGS. The dysmorphologists were unable to identify the individuals with ALGS in the majority of cases, suggesting that evaluation of facial features should not be used in the diagnosis of ALGS in this population. This is the first report of mutations and phenotypic spectrum of ALGS in a Vietnamese population. Copyright © 2012 Wiley Periodicals, Inc.

  6. Ring 2 chromosome associated with failure to thrive, microcephaly and dysmorphic facial features.

    Science.gov (United States)

    López-Uriarte, Arelí; Quintero-Rivera, Fabiola; de la Fuente Cortez, Beatriz; Puente, Viviana Gómez; Campos, María Del Roble Velazco; de Villarreal, Laura E Martínez

    2013-10-15

    We report here a child with a ring chromosome 2 [r(2)] associated with failure to thrive, microcephaly and dysmorphic features. The chromosomal aberration was defined by chromosome microarray analysis, revealing two small deletions of 2p25.3 (139 kb) and 2q37.3 (147 kb). We show the clinical phenotype of the patient, using a conventional approach and the molecular cytogenetics of a male with a history of prenatal intrauterine growth restriction (IUGR), failure to thrive, microcephaly and dysmorphic facial features. The phenotype is very similar to that reported in other clinical cases with ring chromosome 2. © 2013 Elsevier B.V. All rights reserved.

  7. A Classification Method of Normal and Overweight Females Based on Facial Features for Automated Medical Applications

    Directory of Open Access Journals (Sweden)

    Bum Ju Lee

    2012-01-01

Full Text Available Obesity and overweight have become serious public health problems worldwide. Obesity and abdominal obesity are associated with type 2 diabetes, cardiovascular diseases, and metabolic syndrome. In this paper, we first suggest a method of predicting normal and overweight females according to body mass index (BMI) based on facial features. A total of 688 subjects participated in this study. We obtained an area under the ROC curve (AUC) value of 0.861 and a kappa value of 0.521 in the Female: 21–40 group (females aged 21–40 years), and an AUC value of 0.76 and a kappa value of 0.401 in the Female: 41–60 group (females aged 41–60 years). In both groups, we found many features showing statistical differences between normal and overweight subjects using an independent two-sample t-test. We demonstrated that it is possible to predict BMI status using facial characteristics. Our results provide useful information for studies of obesity and facial characteristics, and may provide useful clues for the development of applications for alternative diagnosis of obesity in remote healthcare.
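
    As a rough illustration of the kind of analysis reported above (not the study's own code), the sketch below screens facial measurements with an independent two-sample t-test and then evaluates a simple classifier with AUC and Cohen's kappa; the variable names and the logistic-regression stand-in classifier are assumptions.

```python
import numpy as np
from scipy.stats import ttest_ind
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, cohen_kappa_score
from sklearn.model_selection import train_test_split

def significant_features(X, y, alpha=0.05):
    """X: (n_subjects, n_facial_features); y: 0 = normal, 1 = overweight (by BMI)."""
    keep = []
    for j in range(X.shape[1]):
        _, p = ttest_ind(X[y == 0, j], X[y == 1, j], equal_var=False)  # two-sample t-test
        if p < alpha:
            keep.append(j)
    return keep

def evaluate(X, y, seed=0):
    """Train a stand-in classifier and report AUC and Cohen's kappa on a held-out split."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              random_state=seed, stratify=y)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    kappa = cohen_kappa_score(y_te, clf.predict(X_te))
    return auc, kappa
```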

  8. Contributions of feature shapes and surface cues to the recognition and neural representation of facial identity.

    Science.gov (United States)

    Andrews, Timothy J; Baseler, Heidi; Jenkins, Rob; Burton, A Mike; Young, Andrew W

    2016-10-01

    A full understanding of face recognition will involve identifying the visual information that is used to discriminate different identities and how this is represented in the brain. The aim of this study was to explore the importance of shape and surface properties in the recognition and neural representation of familiar faces. We used image morphing techniques to generate hybrid faces that mixed shape properties (more specifically, second order spatial configural information as defined by feature positions in the 2D-image) from one identity and surface properties from a different identity. Behavioural responses showed that recognition and matching of these hybrid faces was primarily based on their surface properties. These behavioural findings contrasted with neural responses recorded using a block design fMRI adaptation paradigm to test the sensitivity of Haxby et al.'s (2000) core face-selective regions in the human brain to the shape or surface properties of the face. The fusiform face area (FFA) and occipital face area (OFA) showed a lower response (adaptation) to repeated images of the same face (same shape, same surface) compared to different faces (different shapes, different surfaces). From the behavioural data indicating the critical contribution of surface properties to the recognition of identity, we predicted that brain regions responsible for familiar face recognition should continue to adapt to faces that vary in shape but not surface properties, but show a release from adaptation to faces that vary in surface properties but not shape. However, we found that the FFA and OFA showed an equivalent release from adaptation to changes in both shape and surface properties. The dissociation between the neural and perceptual responses suggests that, although they may play a role in the process, these core face regions are not solely responsible for the recognition of facial identity. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Mirror on the wall: a study of women's perception of facial features as they age.

    Science.gov (United States)

    Sezgin, Billur; Findikcioglu, Kemal; Kaya, Basar; Sibar, Serhat; Yavuzer, Reha

    2012-05-01

    Facial aesthetic treatments are among the most popular cosmetic procedures worldwide, but the factors that motivate women to change their facial appearance are not fully understood. The authors examine the relationships among the facial areas on which women focus most as they age, women's general self-perception, and the effect of their personal focus on "beauty points" on their perception of other women's faces. In this prospective study, 200 women who presented to a cosmetic surgery outpatient clinic for consultation between December 2009 and February 2010 completed a questionnaire. The 200 participants were grouped by age: 20-29 years, 30-39, 40-49, and 50 or older (50 women in each group). They were asked which part of their face they focus on most when looking in the mirror, which part they notice most in other women (of different age groups), what they like/dislike most about their own face, and whether they wished to change any facial feature. A positive correlation was found between women's focal points and the areas they dislike or desire to change. Younger women focused mainly on their nose and skin, while older women focused on their periorbital area and jawline. Women focus on their personal focal points when looking at other women in their 20s and 30s, but not when looking at older women. Women presenting for cosmetic surgery consultation focus on the areas that they dislike most, which leads to a desire to change those features. The plastic surgeon must fully understand patients' expectations to select appropriate candidates and maximize satisfaction with the outcomes.

  10. Facial expression recognition under partial occlusion based on fusion of global and local features

    Science.gov (United States)

    Wang, Xiaohua; Xia, Chen; Hu, Min; Ren, Fuji

    2018-04-01

Facial expression recognition under partial occlusion is a challenging research problem. This paper proposes a novel framework for facial expression recognition under occlusion that fuses global and local features. For the global aspect, information entropy is first employed to locate the occluded region. Second, a Principal Component Analysis (PCA) method is adopted to reconstruct the occluded region of the image; a replacement strategy then rebuilds the image by substituting the occluded region with the corresponding region of the best-matched image in the training set, after which a Pyramid Weber Local Descriptor (PWLD) feature is extracted. Finally, the outputs of an SVM are mapped to class probabilities with a sigmoid function. For the local aspect, an overlapping block-based method is adopted to extract WLD features, with each block weighted adaptively by information entropy; chi-square distance and similar-block summation methods are then applied to obtain the probability of each emotion. Finally, decision-level fusion of the global and local features is performed using the Dempster-Shafer theory of evidence. Experimental results on the Cohn-Kanade and JAFFE databases demonstrate the effectiveness and fault tolerance of this method.
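
    The decision-level fusion step can be sketched as follows. This is not the authors' implementation: it only shows Dempster's rule of combination applied to the per-emotion probability vectors produced by the global and local pipelines, with all mass placed on singleton classes.

```python
import numpy as np

def dempster_combine(m_global, m_local):
    """m_global, m_local: per-emotion mass (probability) vectors that each sum to 1."""
    joint = np.outer(m_global, m_local)        # products of masses from the two sources
    agreement = np.diag(joint)                 # both sources support the same emotion
    conflict = joint.sum() - agreement.sum()   # mass assigned to conflicting emotions
    if np.isclose(conflict, 1.0):
        raise ValueError("Total conflict: the two sources cannot be combined.")
    return agreement / (1.0 - conflict)        # normalised combined belief per emotion

# Example: the global pipeline favours emotion 0, the local pipeline favours emotion 1.
fused = dempster_combine(np.array([0.6, 0.3, 0.1]), np.array([0.2, 0.7, 0.1]))
# fused sums to 1 and reflects where the two sources agree.
```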

  11. Microalbuminuria represents a feature of advanced renal disease in ...

    African Journals Online (AJOL)

The systematic screening for microalbuminuria represents the touchstone to prevent CRF in patients with diabetes mellitus. Microalbuminuria has also been demonstrated in patients with sickle cell disease. Whether this has the same ...

  12. Artificial Neural Networks and Gene Expression Programing based age estimation using facial features

    Directory of Open Access Journals (Sweden)

    Baddrud Z. Laskar

    2015-10-01

Full Text Available This work is about estimating human age automatically through analysis of facial images, a task with many real-world applications. Owing to rapid advances in machine vision, facial image processing, and computer graphics, automatic age estimation from faces has become a prominent topic, with applications in biometrics, security, surveillance, control, forensic art, entertainment, online customer management and support, and cosmetology. Because exact age is difficult to estimate, the system estimates an age range instead. Four classification sets are used to assign a person's data to one of the age groups. The distinguishing aspect of this study is the use of two techniques, Artificial Neural Networks (ANN) and Gene Expression Programming (GEP), to estimate age and then compare the results. GEP, a comparatively new methodology, was explored here and yielded significant results. The dataset was prepared with improved preprocessing methods to provide more reliable results. The proposed approach was developed, trained, and tested with both methods on the public FG-NET data set, and broad experiments on FG-NET demonstrate the quality of the proposed system for age estimation using facial features.
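
    A minimal sketch of the ANN branch is given below, assuming pre-extracted facial feature vectors and integer age-group labels; the Gene Expression Programming branch and the paper's preprocessing are not reproduced, and the scikit-learn network is only a stand-in.

```python
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

def train_age_group_ann(features, age_groups):
    """features: (n_faces, n_features); age_groups: integer labels for the four ranges."""
    ann = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)
    accuracy = cross_val_score(ann, features, age_groups, cv=5).mean()  # rough estimate
    ann.fit(features, age_groups)
    return ann, accuracy
```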

  13. Dysmorphic Facial Features and Other Clinical Characteristics in Two Patients with PEX1 Gene Mutations

    Science.gov (United States)

    Gunduz, Mehmet

    2016-01-01

Peroxisomal disorders are a group of genetically heterogeneous metabolic diseases related to dysfunction of peroxisomes. Dysmorphic features, neurological abnormalities, and hepatic dysfunction can be presenting signs of peroxisomal disorders. Here we present the dysmorphic facial features and other clinical characteristics of two patients with PEX1 gene mutations. Follow-up periods were 3.5 years and 1 year. Case I was a one-year-old girl who presented with neurodevelopmental delay, hepatomegaly, bilateral hearing loss, and visual problems. Ophthalmologic examination suggested septooptic dysplasia. Cranial magnetic resonance imaging (MRI) showed nonspecific gliosis in the subcortical and periventricular deep white matter. Case II was a 2.5-year-old girl referred for investigation of global developmental delay and elevated liver enzymes. Ophthalmologic examination findings were consistent with bilateral nystagmus and retinitis pigmentosa. Cranial MRI was normal. Dysmorphic facial features including a broad nasal root, low-set ears, downward-slanting eyes, downward-slanting eyebrows, and epicanthal folds were common findings in both patients. Molecular genetic analysis indicated a homozygous novel IVS1-2A>G mutation in Case I and a homozygous p.G843D (c.2528G>A) mutation in Case II in the PEX1 gene. Clinical findings and developmental prognosis vary with PEX1 gene mutations. A Kabuki-like phenotype associated with liver pathology may indicate Zellweger spectrum disorders (ZSD). PMID:27882258

  14. A Diagnosis to Consider in an Adult Patient with Facial Features and Intellectual Disability: Williams Syndrome.

    Science.gov (United States)

    Doğan, Özlem Akgün; Şimşek Kiper, Pelin Özlem; Utine, Gülen Eda; Alikaşifoğlu, Mehmet; Boduroğlu, Koray

    2017-03-01

    Williams syndrome (OMIM #194050) is a rare, well-recognized, multisystemic genetic condition affecting approximately 1/7,500 individuals. There are no marked regional differences in the incidence of Williams syndrome. The syndrome is caused by a hemizygous deletion of approximately 28 genes, including ELN on chromosome 7q11.2. Prenatal-onset growth retardation, distinct facial appearance, cardiovascular abnormalities, and unique hypersocial behavior are among the most common clinical features. Here, we report the case of a patient referred to us with distinct facial features and intellectual disability, who was diagnosed with Williams syndrome at the age of 37 years. Our aim is to increase awareness regarding the diagnostic features and complications of this recognizable syndrome among adult health care providers. Williams syndrome is usually diagnosed during infancy or childhood, but in the absence of classical findings, such as cardiovascular anomalies, hypercalcemia, and cognitive impairment, the diagnosis could be delayed. Due to the multisystemic and progressive nature of the syndrome, accurate diagnosis is critical for appropriate care and screening for the associated morbidities that may affect the patient's health and well-being.

  15. Contactless measurement of muscles fatigue by tracking facial feature points in a video

    DEFF Research Database (Denmark)

    Irani, Ramin; Nasrollahi, Kamal; Moeslund, Thomas B.

    2014-01-01

    their exercises when the level of the fatigue might be dangerous for the patients. The current technology for measuring tiredness, like Electromyography (EMG), requires installing some sensors on the body. In some applications, like remote patient monitoring, this however might not be possible. To deal...... with such cases, in this paper we present a contactless method based on computer vision techniques to measure tiredness by detecting, tracking, and analyzing some facial feature points during the exercise. Experimental results on several test subjects and comparing them against ground truth data show...... that the proposed system can properly find the temporal point of tiredness of the muscles when the test subjects are doing physical exercises....

  16. An adaptation study of internal and external features in facial representations.

    Science.gov (United States)

    Hills, Charlotte; Romano, Kali; Davies-Thompson, Jodie; Barton, Jason J S

    2014-07-01

Prior work suggests that internal features contribute more than external features to face processing. Whether this asymmetry is also true of the mental representations of faces is not known. We used face adaptation to determine whether the internal and external features of faces contribute differently to the representation of facial identity, whether this was affected by familiarity, and whether the results differed if the features were presented in isolation or as part of a whole face. In a first experiment, subjects performed a study of identity adaptation for famous and novel faces, in which the adapting stimuli were whole faces, the internal features alone, or the external features alone. In a second experiment, the same faces were used, but the adapting internal and external features were superimposed on whole faces that were ambiguous to identity. The first experiment showed larger aftereffects for unfamiliar faces, and greater aftereffects from internal than from external features, and the latter was true for both familiar and unfamiliar faces. When internal and external features were presented in a whole-face context in the second experiment, aftereffects from either internal or external features were less than those from the whole face, and did not differ from each other. While we reproduce the greater importance of internal features when presented in isolation, we find this is equally true for familiar and unfamiliar faces. The dominant influence of internal features is reduced when integrated into a whole-face context, suggesting another facet of expert face processing. Copyright © 2014 Elsevier B.V. All rights reserved.

  17. Is the emotion recognition deficit associated with frontotemporal dementia caused by selective inattention to diagnostic facial features?

    Science.gov (United States)

    Oliver, Lindsay D; Virani, Karim; Finger, Elizabeth C; Mitchell, Derek G V

    2014-07-01

    Frontotemporal dementia (FTD) is a debilitating neurodegenerative disorder characterized by severely impaired social and emotional behaviour, including emotion recognition deficits. Though fear recognition impairments seen in particular neurological and developmental disorders can be ameliorated by reallocating attention to critical facial features, the possibility that similar benefits can be conferred to patients with FTD has yet to be explored. In the current study, we examined the impact of presenting distinct regions of the face (whole face, eyes-only, and eyes-removed) on the ability to recognize expressions of anger, fear, disgust, and happiness in 24 patients with FTD and 24 healthy controls. A recognition deficit was demonstrated across emotions by patients with FTD relative to controls. Crucially, removal of diagnostic facial features resulted in an appropriate decline in performance for both groups; furthermore, patients with FTD demonstrated a lack of disproportionate improvement in emotion recognition accuracy as a result of isolating critical facial features relative to controls. Thus, unlike some neurological and developmental disorders featuring amygdala dysfunction, the emotion recognition deficit observed in FTD is not likely driven by selective inattention to critical facial features. Patients with FTD also mislabelled negative facial expressions as happy more often than controls, providing further evidence for abnormalities in the representation of positive affect in FTD. This work suggests that the emotional expression recognition deficit associated with FTD is unlikely to be rectified by adjusting selective attention to diagnostic features, as has proven useful in other select disorders. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Human Amygdala Tracks a Feature-Based Valence Signal Embedded within the Facial Expression of Surprise.

    Science.gov (United States)

    Kim, M Justin; Mattek, Alison M; Bennett, Randi H; Solomon, Kimberly M; Shin, Jin; Whalen, Paul J

    2017-09-27

    Human amygdala function has been traditionally associated with processing the affective valence (negative vs positive) of an emotionally charged event, especially those that signal fear or threat. However, this account of human amygdala function can be explained by alternative views, which posit that the amygdala might be tuned to either (1) general emotional arousal (activation vs deactivation) or (2) specific emotion categories (fear vs happy). Delineating the pure effects of valence independent of arousal or emotion category is a challenging task, given that these variables naturally covary under many circumstances. To circumvent this issue and test the sensitivity of the human amygdala to valence values specifically, we measured the dimension of valence within the single facial expression category of surprise. Given the inherent valence ambiguity of this category, we show that surprised expression exemplars are attributed valence and arousal values that are uniquely and naturally uncorrelated. We then present fMRI data from both sexes, showing that the amygdala tracks these consensus valence values. Finally, we provide evidence that these valence values are linked to specific visual features of the mouth region, isolating the signal by which the amygdala detects this valence information. SIGNIFICANCE STATEMENT There is an open question as to whether human amygdala function tracks the valence value of cues in the environment, as opposed to either a more general emotional arousal value or a more specific emotion category distinction. Here, we demonstrate the utility of surprised facial expressions because exemplars within this emotion category take on valence values spanning the dimension of bipolar valence (positive to negative) at a consistent level of emotional arousal. Functional neuroimaging data showed that amygdala responses tracked the valence of surprised facial expressions, unconfounded by arousal. Furthermore, a machine learning classifier identified

  19. Fixation to features and neural processing of facial expressions in a gender discrimination task.

    Science.gov (United States)

    Neath, Karly N; Itier, Roxane J

    2015-10-01

Early face encoding, as reflected by the N170 ERP component, is sensitive to fixation to the eyes. Whether this sensitivity varies with facial expressions of emotion, and whether it can also be seen on other ERP components such as the P1 and EPN, was investigated. Using eye-tracking to manipulate fixation on facial features, we found the N170 to be the only eye-sensitive component, and this was true for fearful, happy and neutral faces. A different effect of fixation to features was seen for the earlier P1 that likely reflected general sensitivity to face position. An early effect of emotion (∼120 ms) for happy faces was seen at occipital sites and was sustained until ∼350 ms post-stimulus. For fearful faces, an early effect was seen around 80 ms followed by a later effect appearing at ∼150 ms until ∼300 ms at lateral posterior sites. Results suggest that in this emotion-irrelevant gender discrimination task, processing of fearful and happy expressions occurred early and largely independently of the eye-sensitivity indexed by the N170. Processing of the two emotions involved different underlying brain networks active at different times. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. Local binary pattern variants-based adaptive texture features analysis for posed and nonposed facial expression recognition

    Science.gov (United States)

    Sultana, Maryam; Bhatti, Naeem; Javed, Sajid; Jung, Soon Ki

    2017-09-01

Facial expression recognition (FER) is an important task for various computer vision applications. The task becomes challenging when it requires the detection and encoding of macro- and micropatterns of facial expressions. We present a two-stage texture feature extraction framework based on the local binary pattern (LBP) variants and evaluate its significance in recognizing posed and nonposed facial expressions. We focus on the parametric limitations of the LBP variants and investigate their effects for optimal FER. The size of the local neighborhood is an important parameter of the LBP technique for its extraction in images. To make the LBP adaptive, we exploit the granulometric information of the facial images to find the local neighborhood size for the extraction of center-symmetric LBP (CS-LBP) features. Our two-stage texture representations consist of an LBP variant and the adaptive CS-LBP features. Among the presented two-stage texture feature extraction schemes, the binarized statistical image features and adaptive CS-LBP features were found to achieve high FER rates. Evaluation of the adaptive texture features shows competitive and higher performance than the nonadaptive features and other state-of-the-art approaches, respectively.
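
    The center-symmetric LBP operator mentioned above can be sketched as follows for a fixed radius-1, 8-neighbour ring; the paper's adaptive, granulometry-driven choice of neighbourhood size is not reproduced here, and the function names are illustrative.

```python
import numpy as np

def cs_lbp(image, threshold=0.01):
    """image: 2-D float array in [0, 1]; returns a 4-bit CS-LBP code per interior pixel."""
    img = image.astype(float)
    # Four centre-symmetric neighbour pairs at radius 1: (E, W), (SE, NW), (S, N), (SW, NE)
    pairs = [
        (img[1:-1, 2:],  img[1:-1, :-2]),
        (img[2:,  2:],   img[:-2, :-2]),
        (img[2:,  1:-1], img[:-2, 1:-1]),
        (img[2:,  :-2],  img[:-2, 2:]),
    ]
    code = np.zeros(img[1:-1, 1:-1].shape, dtype=int)
    for bit, (a, b) in enumerate(pairs):
        # set the bit when the first member of the pair is brighter by more than the threshold
        code += ((a - b) > threshold).astype(int) << bit
    return code

def cs_lbp_histogram(image, threshold=0.01):
    """16-bin normalised histogram of CS-LBP codes, usable as a block's texture feature."""
    hist = np.bincount(cs_lbp(image, threshold).ravel(), minlength=16).astype(float)
    return hist / hist.sum()
```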

  1. Replicating distinctive facial features in lineups: identification performance in young versus older adults.

    Science.gov (United States)

    Badham, Stephen P; Wade, Kimberley A; Watts, Hannah J E; Woods, Natalie G; Maylor, Elizabeth A

    2013-04-01

    Criminal suspects with distinctive facial features, such as tattoos or bruising, may stand out in a police lineup. To prevent suspects from being unfairly identified on the basis of their distinctive feature, the police often manipulate lineup images to ensure that all of the members appear similar. Recent research shows that replicating a distinctive feature across lineup members enhances eyewitness identification performance, relative to removing that feature on the target. In line with this finding, the present study demonstrated that with young adults (n = 60; mean age = 20), replication resulted in more target identifications than did removal in target-present lineups and that replication did not impair performance, relative to removal, in target-absent lineups. Older adults (n = 90; mean age = 74) performed significantly worse than young adults, identifying fewer targets and more foils; moreover, older adults showed a minimal benefit from replication over removal. This pattern is consistent with the associative deficit hypothesis of aging, such that older adults form weaker links between faces and their distinctive features. Although replication did not produce much benefit over removal for older adults, it was not detrimental to their performance. Therefore, the results suggest that replication may not be as beneficial to older adults as it is to young adults and demonstrate a new practical implication of age-related associative deficits in memory.

  2. Tracking subtle stereotypes of children with trisomy 21: from facial-feature-based to implicit stereotyping.

    Directory of Open Access Journals (Sweden)

    Claire Enea-Drapeau

Full Text Available BACKGROUND: Stigmatization is one of the greatest obstacles to the successful integration of people with Trisomy 21 (T21 or Down syndrome), the most frequent genetic disorder associated with intellectual disability. Research on attitudes and stereotypes toward these people still focuses on explicit measures subjected to social-desirability biases, and neglects how variability in facial stigmata influences attitudes and stereotyping. METHODOLOGY/PRINCIPAL FINDINGS: The participants were 165 adults including 55 young adult students, 55 non-student adults, and 55 professional caregivers working with intellectually disabled persons. They were faced with implicit association tests (IAT), a well-known technique whereby response latency is used to capture the relative strength with which some groups of people--here photographed faces of typically developing children and children with T21--are automatically (without conscious awareness) associated with positive versus negative attributes in memory. Each participant also rated the same photographed faces (consciously accessible evaluations). We provide the first evidence that the positive bias typically found in explicit judgments of children with T21 is smaller for those whose facial features are highly characteristic of this disorder, compared to their counterparts with less distinctive features and to typically developing children. We also show that this bias can coexist with negative evaluations at the implicit level (with large effect sizes), even among professional caregivers. CONCLUSION: These findings support recent models of feature-based stereotyping, and more importantly show how crucial it is to go beyond explicit evaluations to estimate the true extent of stigmatization of intellectually disabled people.

  3. An Efficient Multimodal 2D + 3D Feature-based Approach to Automatic Facial Expression Recognition

    KAUST Repository

    Li, Huibin

    2015-07-29

    We present a fully automatic multimodal 2D + 3D feature-based facial expression recognition approach and demonstrate its performance on the BU-3DFE database. Our approach combines multi-order gradient-based local texture and shape descriptors in order to achieve efficiency and robustness. First, a large set of fiducial facial landmarks of 2D face images along with their 3D face scans are localized using a novel algorithm namely incremental Parallel Cascade of Linear Regression (iPar-CLR). Then, a novel Histogram of Second Order Gradients (HSOG) based local image descriptor in conjunction with the widely used first-order gradient based SIFT descriptor are used to describe the local texture around each 2D landmark. Similarly, the local geometry around each 3D landmark is described by two novel local shape descriptors constructed using the first-order and the second-order surface differential geometry quantities, i.e., Histogram of mesh Gradients (meshHOG) and Histogram of mesh Shape index (curvature quantization, meshHOS). Finally, the Support Vector Machine (SVM) based recognition results of all 2D and 3D descriptors are fused at both feature-level and score-level to further improve the accuracy. Comprehensive experimental results demonstrate that there exist impressive complementary characteristics between the 2D and 3D descriptors. We use the BU-3DFE benchmark to compare our approach to the state-of-the-art ones. Our multimodal feature-based approach outperforms the others by achieving an average recognition accuracy of 86.32%. Moreover, a good generalization ability is shown on the Bosphorus database.
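
    The two fusion strategies mentioned above can be sketched as follows, under assumed interfaces and with illustrative function names: feature-level fusion concatenates the 2D and 3D descriptors before training a single SVM, while score-level fusion averages the class probabilities of SVMs trained separately on each modality.

```python
import numpy as np
from sklearn.svm import SVC

def feature_level_fusion(desc2d, desc3d, labels):
    """Concatenate per-sample 2D and 3D descriptors and train one SVM."""
    X = np.hstack([desc2d, desc3d])
    return SVC(kernel="rbf", probability=True).fit(X, labels)

def score_level_fusion(desc2d, desc3d, labels, test2d, test3d, w=0.5):
    """Train one SVM per modality and average their class probabilities."""
    clf2d = SVC(kernel="rbf", probability=True).fit(desc2d, labels)
    clf3d = SVC(kernel="rbf", probability=True).fit(desc3d, labels)
    scores = w * clf2d.predict_proba(test2d) + (1 - w) * clf3d.predict_proba(test3d)
    return clf2d.classes_[scores.argmax(axis=1)]   # fused expression decision
```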

  4. An Efficient Multimodal 2D + 3D Feature-based Approach to Automatic Facial Expression Recognition

    KAUST Repository

    Li, Huibin; Ding, Huaxiong; Huang, Di; Wang, Yunhong; Zhao, Xi; Morvan, Jean-Marie; Chen, Liming

    2015-01-01

    We present a fully automatic multimodal 2D + 3D feature-based facial expression recognition approach and demonstrate its performance on the BU-3DFE database. Our approach combines multi-order gradient-based local texture and shape descriptors in order to achieve efficiency and robustness. First, a large set of fiducial facial landmarks of 2D face images along with their 3D face scans are localized using a novel algorithm namely incremental Parallel Cascade of Linear Regression (iPar-CLR). Then, a novel Histogram of Second Order Gradients (HSOG) based local image descriptor in conjunction with the widely used first-order gradient based SIFT descriptor are used to describe the local texture around each 2D landmark. Similarly, the local geometry around each 3D landmark is described by two novel local shape descriptors constructed using the first-order and the second-order surface differential geometry quantities, i.e., Histogram of mesh Gradients (meshHOG) and Histogram of mesh Shape index (curvature quantization, meshHOS). Finally, the Support Vector Machine (SVM) based recognition results of all 2D and 3D descriptors are fused at both feature-level and score-level to further improve the accuracy. Comprehensive experimental results demonstrate that there exist impressive complementary characteristics between the 2D and 3D descriptors. We use the BU-3DFE benchmark to compare our approach to the state-of-the-art ones. Our multimodal feature-based approach outperforms the others by achieving an average recognition accuracy of 86.32%. Moreover, a good generalization ability is shown on the Bosphorus database.

  5. Joint Facial Action Unit Detection and Feature Fusion: A Multi-Conditional Learning Approach

    NARCIS (Netherlands)

    Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja

    2016-01-01

    Automated analysis of facial expressions can benefit many domains, from marketing to clinical diagnosis of neurodevelopmental disorders. Facial expressions are typically encoded as a combination of facial muscle activations, i.e., action units. Depending on context, these action units co-occur in

  6. Facial Nerve Schwannoma: A Case Report, Radiological Features and Literature Review.

    Science.gov (United States)

    Pilloni, Giulia; Mico, Barbara Massa; Altieri, Roberto; Zenga, Francesco; Ducati, Alessandro; Garbossa, Diego; Tartara, Fulvio

    2017-12-22

    Facial nerve schwannoma localized in the middle fossa is a rare lesion. We report a case of a facial nerve schwannoma in a 30-year-old male presenting with facial nerve palsy. Magnetic resonance imaging (MRI) showed a 3 cm diameter tumor of the right middle fossa. The tumor was removed using a sub-temporal approach. Intraoperative monitoring allowed for identification of the facial nerve, so it was not damaged during the surgical excision. Neurological clinical examination at discharge demonstrated moderate facial nerve improvement (Grade III House-Brackmann).

  7. Sensorineural deafness, distinctive facial features, and abnormal cranial bones: a new variant of Waardenburg syndrome?

    Science.gov (United States)

    Gad, Alona; Laurino, Mercy; Maravilla, Kenneth R; Matsushita, Mark; Raskind, Wendy H

    2008-07-15

    The Waardenburg syndromes (WS) account for approximately 2% of congenital sensorineural deafness. This heterogeneous group of diseases currently can be categorized into four major subtypes (WS types 1-4) on the basis of characteristic clinical features. Multiple genes have been implicated in WS, and mutations in some genes can cause more than one WS subtype. In addition to eye, hair, and skin pigmentary abnormalities, dystopia canthorum and broad nasal bridge are seen in WS type 1. Mutations in the PAX3 gene are responsible for the condition in the majority of these patients. In addition, mutations in PAX3 have been found in WS type 3 that is distinguished by musculoskeletal abnormalities, and in a family with a rare subtype of WS, craniofacial-deafness-hand syndrome (CDHS), characterized by dysmorphic facial features, hand abnormalities, and absent or hypoplastic nasal and wrist bones. Here we describe a woman who shares some, but not all features of WS type 3 and CDHS, and who also has abnormal cranial bones. All sinuses were hypoplastic, and the cochlea were small. No sequence alteration in PAX3 was found. These observations broaden the clinical range of WS and suggest there may be genetic heterogeneity even within the CDHS subtype. 2008 Wiley-Liss, Inc.

  8. Odor valence linearly modulates attractiveness, but not age assessment, of invariant facial features in a memory-based rating task.

    Science.gov (United States)

    Seubert, Janina; Gregory, Kristen M; Chamberland, Jessica; Dessirier, Jean-Marc; Lundström, Johan N

    2014-01-01

    Scented cosmetic products are used across cultures as a way to favorably influence one's appearance. While crossmodal effects of odor valence on perceived attractiveness of facial features have been demonstrated experimentally, it is unknown whether they represent a phenomenon specific to affective processing. In this experiment, we presented odors in the context of a face battery with systematic feature manipulations during a speeded response task. Modulatory effects of linear increases of odor valence were investigated by juxtaposing subsequent memory-based ratings tasks--one predominantly affective (attractiveness) and a second, cognitive (age). The linear modulation pattern observed for attractiveness was consistent with additive effects of face and odor appraisal. Effects of odor valence on age perception were not linearly modulated and may be the result of cognitive interference. Affective and cognitive processing of faces thus appear to differ in their susceptibility to modulation by odors, likely as a result of privileged access of olfactory stimuli to affective brain networks. These results are critically discussed with respect to potential biases introduced by the preceding speeded response task.

  9. Steel syndrome: dislocated hips and radial heads, carpal coalition, scoliosis, short stature, and characteristic facial features.

    Science.gov (United States)

    Flynn, John M; Ramirez, Norman; Betz, Randal; Mulcahey, Mary Jane; Pino, Franz; Herrera-Soto, Jose A; Carlo, Simon; Cornier, Alberto S

    2010-01-01

A syndrome of short stature, bilateral hip dislocations, radial head dislocations, carpal coalitions, scoliosis, and cavus feet in Puerto Rican children was reported by Steel et al in 1993. The syndrome was described as a unique entity with dismal results after conventional treatment of dislocated hips. The purpose of this study is to reevaluate this patient population with a longer follow-up and delineate the clinical and radiologic features, treatment outcomes, and genetic characteristics. This is a retrospective cohort study of 32 patients in whom we evaluated the clinical and imaging data and genetic characteristics. We compare the findings and quality of life in patients with this syndrome who had attempts at reduction of the hips versus those who did not have the treatment. Congenital hip dislocations were present in 100% of the patients. There was no attempt at reduction in 39% (25/64) of the hips. The remaining 61% (39/64) were treated with a variety of modalities fraught with complications. Of those treated, 85% (33/39) remain dislocated; the rest remain subluxated with acetabular dysplasia and pain. The group whose hips were not treated reported fewer complaints and less limitation in daily activities compared with the group whose hips underwent attempted reduction. Steel syndrome is a distinct clinical entity characterized by short stature, bilateral hip and radial head dislocation, carpal coalition, scoliosis, cavus feet, and characteristic facial features, with dismal results for attempts at reduction of the hips. Prognostic Study Level II.

  10. Spectrum of mucocutaneous, ocular and facial features and delineation of novel presentations in 62 classical Ehlers-Danlos syndrome patients.

    Science.gov (United States)

    Colombi, M; Dordoni, C; Venturini, M; Ciaccio, C; Morlino, S; Chiarelli, N; Zanca, A; Calzavara-Pinton, P; Zoppi, N; Castori, M; Ritelli, M

    2017-12-01

    Classical Ehlers-Danlos syndrome (cEDS) is characterized by marked cutaneous involvement, according to the Villefranche nosology and its 2017 revision. However, the diagnostic flow-chart that prompts molecular testing is still based on experts' opinion rather than systematic published data. Here we report on 62 molecularly characterized cEDS patients with focus on skin, mucosal, facial, and articular manifestations. The major and minor Villefranche criteria, additional 11 mucocutaneous signs and 15 facial dysmorphic traits were ascertained and feature rates compared by sex and age. In our cohort, we did not observe any mandatory clinical sign. Skin hyperextensibility plus atrophic scars was the most frequent combination, whereas generalized joint hypermobility according to the Beighton score decreased with age. Skin was more commonly hyperextensible on elbows, neck, and knees. The sites more frequently affected by abnormal atrophic scarring were knees, face (especially forehead), pretibial area, and elbows. Facial dysmorphism commonly affected midface/orbital areas with epicanthal folds and infraorbital creases more commonly observed in young patients. Our findings suggest that the combination of ≥1 eye dysmorphism and facial/forehead scars may support the diagnosis in children. Minor acquired traits, such as molluscoid pseudotumors, subcutaneous spheroids, and signs of premature skin aging are equally useful in adults. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  11. [Facial palsy].

    Science.gov (United States)

    Cavoy, R

    2013-09-01

Facial palsy is a daily challenge for clinicians. Determining whether facial nerve palsy is peripheral or central is a key step in the diagnosis. Central nervous system lesions can cause a facial palsy that is easily differentiated from a peripheral palsy. The next question is whether the peripheral facial paralysis is idiopathic or symptomatic. A good knowledge of the anatomy of the facial nerve is helpful. A structured approach is given to identify additional features that distinguish symptomatic facial palsy from the idiopathic form. The main cause of peripheral facial palsy is the idiopathic form, or Bell's palsy, which remains a diagnosis of exclusion. The most common cause of symptomatic peripheral facial palsy is Ramsay-Hunt syndrome. Early identification of symptomatic facial palsy is important because of its often worse outcome and different management. The prognosis of Bell's palsy is on the whole favorable and is improved with a prompt tapering course of prednisone. In Ramsay-Hunt syndrome, antiviral therapy is added along with prednisone. We also discuss current treatment recommendations and review the short- and long-term complications of peripheral facial palsy.

  12. Hierarchical Spatio-Temporal Probabilistic Graphical Model with Multiple Feature Fusion for Binary Facial Attribute Classification in Real-World Face Videos.

    Science.gov (United States)

    Demirkus, Meltem; Precup, Doina; Clark, James J; Arbel, Tal

    2016-06-01

    Recent literature shows that facial attributes, i.e., contextual facial information, can be beneficial for improving the performance of real-world applications, such as face verification, face recognition, and image search. Examples of face attributes include gender, skin color, facial hair, etc. How to robustly obtain these facial attributes (traits) is still an open problem, especially in the presence of the challenges of real-world environments: non-uniform illumination conditions, arbitrary occlusions, motion blur and background clutter. What makes this problem even more difficult is the enormous variability presented by the same subject, due to arbitrary face scales, head poses, and facial expressions. In this paper, we focus on the problem of facial trait classification in real-world face videos. We have developed a fully automatic hierarchical and probabilistic framework that models the collective set of frame class distributions and feature spatial information over a video sequence. The experiments are conducted on a large real-world face video database that we have collected, labelled and made publicly available. The proposed method is flexible enough to be applied to any facial classification problem. Experiments on a large, real-world video database McGillFaces [1] of 18,000 video frames reveal that the proposed framework outperforms alternative approaches, by up to 16.96 and 10.13%, for the facial attributes of gender and facial hair, respectively.

  13. Cosmetics as a Feature of the Extended Human Phenotype: Modulation of the Perception of Biologically Important Facial Signals

    Science.gov (United States)

    Etcoff, Nancy L.; Stock, Shannon; Haley, Lauren E.; Vickery, Sarah A.; House, David M.

    2011-01-01

    Research on the perception of faces has focused on the size, shape, and configuration of inherited features or the biological phenotype, and largely ignored the effects of adornment, or the extended phenotype. Research on the evolution of signaling has shown that animals frequently alter visual features, including color cues, to attract, intimidate or protect themselves from conspecifics. Humans engage in conscious manipulation of visual signals using cultural tools in real time rather than genetic changes over evolutionary time. Here, we investigate one tool, the use of color cosmetics. In two studies, we asked viewers to rate the same female faces with or without color cosmetics, and we varied the style of makeup from minimal (natural), to moderate (professional), to dramatic (glamorous). Each look provided increasing luminance contrast between the facial features and surrounding skin. Faces were shown for 250 ms or for unlimited inspection time, and subjects rated them for attractiveness, competence, likeability and trustworthiness. At 250 ms, cosmetics had significant positive effects on all outcomes. Length of inspection time did not change the effect for competence or attractiveness. However, with longer inspection time, the effect of cosmetics on likability and trust varied by specific makeup looks, indicating that cosmetics could impact automatic and deliberative judgments differently. The results suggest that cosmetics can create supernormal facial stimuli, and that one way they may do so is by exaggerating cues to sexual dimorphism. Our results provide evidence that judgments of facial trustworthiness and attractiveness are at least partially separable, that beauty has a significant positive effect on judgment of competence, a universal dimension of social cognition, but has a more nuanced effect on the other universal dimension of social warmth, and that the extended phenotype significantly influences perception of biologically important signals at first

  14. Cosmetics as a feature of the extended human phenotype: modulation of the perception of biologically important facial signals.

    Directory of Open Access Journals (Sweden)

    Nancy L Etcoff

Full Text Available Research on the perception of faces has focused on the size, shape, and configuration of inherited features or the biological phenotype, and largely ignored the effects of adornment, or the extended phenotype. Research on the evolution of signaling has shown that animals frequently alter visual features, including color cues, to attract, intimidate or protect themselves from conspecifics. Humans engage in conscious manipulation of visual signals using cultural tools in real time rather than genetic changes over evolutionary time. Here, we investigate one tool, the use of color cosmetics. In two studies, we asked viewers to rate the same female faces with or without color cosmetics, and we varied the style of makeup from minimal (natural), to moderate (professional), to dramatic (glamorous). Each look provided increasing luminance contrast between the facial features and surrounding skin. Faces were shown for 250 ms or for unlimited inspection time, and subjects rated them for attractiveness, competence, likeability and trustworthiness. At 250 ms, cosmetics had significant positive effects on all outcomes. Length of inspection time did not change the effect for competence or attractiveness. However, with longer inspection time, the effect of cosmetics on likability and trust varied by specific makeup looks, indicating that cosmetics could impact automatic and deliberative judgments differently. The results suggest that cosmetics can create supernormal facial stimuli, and that one way they may do so is by exaggerating cues to sexual dimorphism. Our results provide evidence that judgments of facial trustworthiness and attractiveness are at least partially separable, that beauty has a significant positive effect on judgment of competence, a universal dimension of social cognition, but has a more nuanced effect on the other universal dimension of social warmth, and that the extended phenotype significantly influences perception of biologically important

  15. Cosmetics as a feature of the extended human phenotype: modulation of the perception of biologically important facial signals.

    Science.gov (United States)

    Etcoff, Nancy L; Stock, Shannon; Haley, Lauren E; Vickery, Sarah A; House, David M

    2011-01-01

    Research on the perception of faces has focused on the size, shape, and configuration of inherited features or the biological phenotype, and largely ignored the effects of adornment, or the extended phenotype. Research on the evolution of signaling has shown that animals frequently alter visual features, including color cues, to attract, intimidate or protect themselves from conspecifics. Humans engage in conscious manipulation of visual signals using cultural tools in real time rather than genetic changes over evolutionary time. Here, we investigate one tool, the use of color cosmetics. In two studies, we asked viewers to rate the same female faces with or without color cosmetics, and we varied the style of makeup from minimal (natural), to moderate (professional), to dramatic (glamorous). Each look provided increasing luminance contrast between the facial features and surrounding skin. Faces were shown for 250 ms or for unlimited inspection time, and subjects rated them for attractiveness, competence, likeability and trustworthiness. At 250 ms, cosmetics had significant positive effects on all outcomes. Length of inspection time did not change the effect for competence or attractiveness. However, with longer inspection time, the effect of cosmetics on likability and trust varied by specific makeup looks, indicating that cosmetics could impact automatic and deliberative judgments differently. The results suggest that cosmetics can create supernormal facial stimuli, and that one way they may do so is by exaggerating cues to sexual dimorphism. Our results provide evidence that judgments of facial trustworthiness and attractiveness are at least partially separable, that beauty has a significant positive effect on judgment of competence, a universal dimension of social cognition, but has a more nuanced effect on the other universal dimension of social warmth, and that the extended phenotype significantly influences perception of biologically important signals at first

  16. Tracking Subtle Stereotypes of Children with Trisomy 21: From Facial-Feature-Based to Implicit Stereotyping

    OpenAIRE

    Enea-Drapeau , Claire; Carlier , Michèle; Huguet , Pascal

    2012-01-01

    International audience; BackgroundStigmatization is one of the greatest obstacles to the successful integration of people with Trisomy 21 (T21 or Down syndrome), the most frequent genetic disorder associated with intellectual disability. Research on attitudes and stereotypes toward these people still focuses on explicit measures subjected to social-desirability biases, and neglects how variability in facial stigmata influences attitudes and stereotyping.Methodology/Principal FindingsThe parti...

  17. Facial Nerve Palsy: An Unusual Presenting Feature of Small Cell Lung Cancer

    Directory of Open Access Journals (Sweden)

    Ozcan Yildiz

    2011-01-01

    Full Text Available Lung cancer is the second most common type of cancer in the world and is the most common cause of cancer-related death in men and women; it is responsible for 1.3 million deaths annually worldwide. It can metastasize to any organ. The most common site of metastasis in the head and neck region is the brain; however, it can also metastasize to the oral cavity, gingiva, tongue, parotid gland and lymph nodes. This article reports a case of small cell lung cancer presenting with metastasis to the facial nerve.

  18. Integration of internal and external facial features in 8- to 10-year-old children and adults.

    Science.gov (United States)

    Meinhardt-Injac, Bozana; Persike, Malte; Meinhardt, Günter

    2014-06-01

    Investigation of whole-part and composite effects in 4- to 6-year-old children gave rise to claims that face perception is fully mature within the first decade of life (Crookes & McKone, 2009). However, only internal features were tested, and the role of external features was not addressed, although external features are highly relevant for holistic face perception (Sinha & Poggio, 1996; Axelrod & Yovel, 2010, 2011). In this study, 8- to 10-year-old children and adults performed a same-different matching task with faces and watches. In this task participants attended to either internal or external features. Holistic face perception was tested using a congruency paradigm, in which face and non-face stimuli either agreed or disagreed in both features (congruent contexts) or just in the attended ones (incongruent contexts). In both age groups, pronounced context congruency and inversion effects were found for faces, but not for watches. These findings indicate holistic feature integration for faces. While inversion effects were highly similar in both age groups, context congruency effects were stronger for children. Moreover, children's face matching performance was generally better when attending to external compared to internal features. Adults tended to perform better when attending to internal features. Our results indicate that both adults and 8- to 10-year-old children integrate external and internal facial features into holistic face representations. However, in children's face representations external features are much more relevant. These findings suggest that face perception is holistic but still not adult-like at the end of the first decade of life. Copyright © 2014 Elsevier B.V. All rights reserved.

  19. Assessment of the facial features and chin development of fetuses with use of serial three-dimensional sonography and the mandibular size nomogram in a Chinese population.

    Science.gov (United States)

    Tsai, Meng-Yin; Lan, Kuo-Chung; Ou, Chia-Yo; Chen, Jen-Huang; Chang, Shiuh-Young; Hsu, Te-Yao

    2004-02-01

Our purpose was to evaluate whether the application of serial three-dimensional (3D) sonography and the mandibular size nomogram allows observation of dynamic changes in facial features, as well as chin development, in utero. The mandibular size nomogram was established through a cross-sectional study involving 183 fetal images. The serial changes in facial features and chin development were assessed in a cohort study involving 40 patients. The nomogram reveals that the biparietal distance (BPD)/mandibular body length (MBL) ratio gradually decreases with advancing gestational age. The cohort study conducted with serial 3D sonography shows the same tendency. Both the images and the results of the paired-samples t test, together with the nomogram, display disproportionate growth of the fetal head and chin that leads to changes in facial features in late gestation. This fact must be considered when evaluating fetuses at risk for development of micrognathia.

  20. Familiarity and Within-Person Facial Variability: The Importance of the Internal and External Features.

    Science.gov (United States)

    Kramer, Robin S S; Manesi, Zoi; Towler, Alice; Reynolds, Michael G; Burton, A Mike

    2018-01-01

    As faces become familiar, we come to rely more on their internal features for recognition and matching tasks. Here, we assess whether this same pattern is also observed for a card sorting task. Participants sorted photos showing either the full face, only the internal features, or only the external features into multiple piles, one pile per identity. In Experiments 1 and 2, we showed the standard advantage for familiar faces-sorting was more accurate and showed very few errors in comparison with unfamiliar faces. However, for both familiar and unfamiliar faces, sorting was less accurate for external features and equivalent for internal and full faces. In Experiment 3, we asked whether external features can ever be used to make an accurate sort. Using familiar faces and instructions on the number of identities present, we nevertheless found worse performance for the external in comparison with the internal features, suggesting that less identity information was available in the former. Taken together, we show that full faces and internal features are similarly informative with regard to identity. In comparison, external features contain less identity information and produce worse card sorting performance. This research extends current thinking on the shift in focus, both in attention and importance, toward the internal features and away from the external features as familiarity with a face increases.

  1. Representing Objects using Global 3D Relational Features for Recognition Tasks

    DEFF Research Database (Denmark)

    Mustafa, Wail

    2015-01-01

    representations. For representing objects, we derive global descriptors encoding shape using viewpoint-invariant features obtained from multiple sensors observing the scene. Objects are also described using color independently. This allows for combining color and shape when it is required for the task. For more...... robust color description, color calibration is performed. The framework was used in three recognition tasks: object instance recognition, object category recognition, and object spatial relationship recognition. For the object instance recognition task, we present a system that utilizes color and scale...

  2. FEATURES OF NEED-MOTIVATION ORIENTATION OF STUDENTS WHO REPRESENT THE CHINESE CULTURE

    Directory of Open Access Journals (Sweden)

    T. V. Mayasova

    2016-01-01

    Full Text Available The article investigates the features of the need-motivational orientation of students representing Chinese culture who study at higher educational institutions in Russia. The personal characteristics analyzed are the degree of satisfaction of basic needs, the level of motivation to succeed, and the motivational structure of personality in Chinese and Russian students. Studying the personality characteristics of a university's foreign students helps professionals identify the conditions for their successful social and cross-cultural adaptation in a foreign country. The results obtained during the empirical research confirm that there are certain differences in the needs and motivation of students representing Chinese and Russian culture. Significant differences were found in interpersonal needs, the need for recognition, and in the levels of motivation toward comfort and "total activity" in Chinese and Russian students, which makes it possible to predict adaptation and socialization difficulties of foreign students during training.

  3. Familiarity and within-person facial variability: the importance of the internal and external features

    OpenAIRE

    Kramer, R. S. S.; Manesi, Z.; Towler, A.; Reynolds, M. G.; Burton, A. M.

    2018-01-01

    As faces become familiar, we come to rely more on their internal features for recognition and matching tasks. Here, we assess whether this same pattern is also observed for a card sorting task. Participants sorted photos showing either the full face, only the internal features, or only the external features into multiple piles, one pile per identity. In Experiments 1 and 2, we showed the standard advantage for familiar faces—sorting was more accurate and showed very few errors in comparison w...

  4. Facial anatomy.

    Science.gov (United States)

    Marur, Tania; Tuna, Yakup; Demirci, Selman

    2014-01-01

    Dermatologic problems of the face affect both function and aesthetics, which are based on complex anatomical features. Treating dermatologic problems while preserving the aesthetics and functions of the face requires knowledge of normal anatomy. To perform invasive procedures on the face successfully, it is essential to understand its underlying topographic anatomy. This chapter presents the anatomy of the facial musculature and neurovascular structures in a systematic way with some clinically important aspects. We describe the attachments of the mimetic and masticatory muscles and emphasize their functions and nerve supply. We highlight clinically relevant facial topographic anatomy by explaining the course and location of the sensory and motor nerves of the face and facial vasculature with their relations. Additionally, this chapter reviews the recent nomenclature of the branching pattern of the facial artery. © 2013 Elsevier Inc. All rights reserved.

  5. Variable developmental delays and characteristic facial features-A novel 7p22.3p22.2 microdeletion syndrome?

    Science.gov (United States)

    Yu, Andrea C; Zambrano, Regina M; Cristian, Ingrid; Price, Sue; Bernhard, Birgitta; Zucker, Marc; Venkateswaran, Sunita; McGowan-Jordan, Jean; Armour, Christine M

    2017-06-01

    Isolated 7p22.3p22.2 deletions are rarely described, with only two reports in the literature. Most other reported cases either involve a much larger region of the 7p arm or have an additional copy number variation. Here, we report five patients with overlapping microdeletions at 7p22.3p22.2. The patients presented with variable developmental delays, exhibiting relative weaknesses in expressive language skills and relative strengths in gross and fine motor skills. The most consistent facial features seen in these patients included a broad nasal root, a prominent forehead, a prominent glabella, and arched eyebrows. Additional variable features amongst the patients included microcephaly, metopic ridging or craniosynostosis, cleft palate, cardiac defects, and mild hypotonia. Although the patients' deletions varied in size, there was a 0.47 Mb region of overlap which contained 7 OMIM genes: EIF3B, CHST12, LFNG, BRAT1, TTYH3, AMZ1, and GNA12. We propose that monosomy of this region represents a novel microdeletion syndrome. We recommend that individuals with 7p22.3p22.2 deletions should receive a developmental assessment and a thorough cardiac exam, with consideration of an echocardiogram, as part of their initial evaluation. © 2017 Wiley Periodicals, Inc.

  6. Facial Expression Recognition

    NARCIS (Netherlands)

    Pantic, Maja; Li, S.; Jain, A.

    2009-01-01

    Facial expression recognition is a process performed by humans or computers, which consists of: 1. Locating faces in the scene (e.g., in an image; this step is also referred to as face detection), 2. Extracting facial features from the detected face region (e.g., detecting the shape of facial

  7. External and internal facial features modulate processing of vertical but not horizontal spatial relations.

    Science.gov (United States)

    Meinhardt, Günter; Kurbel, David; Meinhardt-Injac, Bozana; Persike, Malte

    2018-03-22

    Some years ago an asymmetry was reported for the inversion effect for horizontal (H) and vertical (V) relational face manipulations (Goffaux & Rossion, 2007). Subsequent research examined whether a specific disruption of long-range relations underlies the H/V inversion asymmetry (Sekunova & Barton, 2008). Here, we tested how detection of changes in interocular distance (H) and eye height (V) depends on cardinal internal features and external feature surround. Results replicated the H/V inversion asymmetry. Moreover, we found very different face cue dependencies for both change types. Performance and inversion effects did not depend on the presence of other face cues for detecting H changes. In contrast, accuracy for detecting V changes strongly depended on internal and external features, showing cumulative improvement when more cues were added. Inversion effects were generally large, and larger with external feature surround. The cue independence in detecting H relational changes indicates specialized local processing tightly tuned to the eyes region, while the strong cue dependency in detecting V relational changes indicates a global mechanism of cue integration across different face regions. These findings suggest that the H/V asymmetry of the inversion effect rests on an H/V anisotropy of face cue dependency, since only the global V mechanism suffers from disruption of cue integration as the major effect of face inversion. Copyright © 2018. Published by Elsevier Ltd.

  8. EXTRACTING SPATIOTEMPORAL OBJECTS FROM RASTER DATA TO REPRESENT PHYSICAL FEATURES AND ANALYZE RELATED PROCESSES

    Directory of Open Access Journals (Sweden)

    J. A. Zollweg

    2017-10-01

    Full Text Available Numerous ground-based, airborne, and orbiting platforms provide remotely-sensed data of remarkable spatial resolution at short time intervals. However, this spatiotemporal data is most valuable if it can be processed into information, thereby creating meaning. We live in a world of objects: cars, buildings, farms, etc. On a stormy day, we don’t see millions of cubes of atmosphere; we see a thunderstorm ‘object’. Temporally, we don’t see the properties of those individual cubes changing, we see the thunderstorm as a whole evolving and moving. There is a need to represent the bulky, raw spatiotemporal data from remote sensors as a small number of relevant spatiotemporal objects, thereby matching the human brain’s perception of the world. This presentation reveals an efficient algorithm and system to extract the objects/features from raster-formatted remotely-sensed data. The system makes use of the Python object-oriented programming language, SciPy/NumPy for matrix manipulation and scientific computation, and export/import to the GeoJSON standard geographic object data format. The example presented will show how thunderstorms can be identified and characterized in a spatiotemporal continuum using a Python program to process raster data from NOAA’s High-Resolution Rapid Refresh v2 (HRRRv2) data stream.

  9. Extracting Spatiotemporal Objects from Raster Data to Represent Physical Features and Analyze Related Processes

    Science.gov (United States)

    Zollweg, J. A.

    2017-10-01

    Numerous ground-based, airborne, and orbiting platforms provide remotely-sensed data of remarkable spatial resolution at short time intervals. However, this spatiotemporal data is most valuable if it can be processed into information, thereby creating meaning. We live in a world of objects: cars, buildings, farms, etc. On a stormy day, we don't see millions of cubes of atmosphere; we see a thunderstorm `object'. Temporally, we don't see the properties of those individual cubes changing, we see the thunderstorm as a whole evolving and moving. There is a need to represent the bulky, raw spatiotemporal data from remote sensors as a small number of relevant spatiotemporal objects, thereby matching the human brain's perception of the world. This presentation reveals an efficient algorithm and system to extract the objects/features from raster-formatted remotely-sensed data. The system makes use of the Python object-oriented programming language, SciPy/NumPy for matrix manipulation and scientific computation, and export/import to the GeoJSON standard geographic object data format. The example presented will show how thunderstorms can be identified and characterized in a spatiotemporal continuum using a Python program to process raster data from NOAA's High-Resolution Rapid Refresh v2 (HRRRv2) data stream.
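
    The record above describes the general approach (threshold a raster, group contiguous cells into objects, export GeoJSON) rather than publishing code, so the following is only a minimal sketch of that idea using SciPy's connected-component labeling. The random demo array and the GeoJSON property names are assumptions, and reading HRRRv2 data is out of scope here.

        import json
        import numpy as np
        from scipy import ndimage

        def extract_objects(raster, threshold):
            """Label connected regions of a 2D raster that exceed a threshold and
            return them as simple GeoJSON-like point features (object centroids)."""
            mask = raster > threshold
            labeled, n = ndimage.label(mask)                                  # connected-component labeling
            centroids = ndimage.center_of_mass(mask, labeled, range(1, n + 1))
            features = []
            for i, (row, col) in enumerate(centroids, start=1):
                features.append({
                    "type": "Feature",
                    "geometry": {"type": "Point", "coordinates": [float(col), float(row)]},
                    "properties": {"id": i, "pixel_count": int((labeled == i).sum())},
                })
            return {"type": "FeatureCollection", "features": features}

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            demo = rng.random((200, 200))            # stand-in for a reflectivity/precipitation field
            objects = extract_objects(demo, threshold=0.999)
            print(json.dumps(objects)[:300])

    In a real pipeline the same labeling step would run per time step, and objects would be linked across steps (e.g. by centroid proximity) to obtain spatiotemporal tracks.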

  10. Neural processing of fearful and happy facial expressions during emotion-relevant and emotion-irrelevant tasks: a fixation-to-feature approach

    Science.gov (United States)

    Neath-Tavares, Karly N.; Itier, Roxane J.

    2017-01-01

    Research suggests an important role of the eyes and mouth for discriminating facial expressions of emotion. A gaze-contingent procedure was used to test the impact of fixation to facial features on the neural response to fearful, happy and neutral facial expressions in an emotion discrimination (Exp.1) and an oddball detection (Exp.2) task. The N170 was the only eye-sensitive ERP component, and this sensitivity did not vary across facial expressions. In both tasks, compared to neutral faces, responses to happy expressions were seen as early as 100–120ms occipitally, while responses to fearful expressions started around 150ms, on or after the N170, at both occipital and lateral-posterior sites. Analyses of scalp topographies revealed different distributions of these two emotion effects across most of the epoch. Emotion processing interacted with fixation location at different times between tasks. Results suggest a role of both the eyes and mouth in the neural processing of fearful expressions and of the mouth in the processing of happy expressions, before 350ms. PMID:27430934

  11. The look of fear and anger: facial maturity modulates recognition of fearful and angry expressions.

    Science.gov (United States)

    Sacco, Donald F; Hugenberg, Kurt

    2009-02-01

    The current series of studies provide converging evidence that facial expressions of fear and anger may have co-evolved to mimic mature and babyish faces in order to enhance their communicative signal. In Studies 1 and 2, fearful and angry facial expressions were manipulated to have enhanced babyish features (larger eyes) or enhanced mature features (smaller eyes) and in the context of a speeded categorization task in Study 1 and a visual noise paradigm in Study 2, results indicated that larger eyes facilitated the recognition of fearful facial expressions, while smaller eyes facilitated the recognition of angry facial expressions. Study 3 manipulated facial roundness, a stable structure that does not vary systematically with expressions, and found that congruency between maturity and expression (narrow face-anger; round face-fear) facilitated expression recognition accuracy. Results are discussed as representing a broad co-evolutionary relationship between facial maturity and fearful and angry facial expressions. (c) 2009 APA, all rights reserved

  12. The features of scientist’s lingual identity as representative of Russian cosmism discourse

    Directory of Open Access Journals (Sweden)

    Anita B. Tikhonova

    2010-12-01

    Full Text Available This article presents the lingual identity of a representative of Russian cosmism. The analysis covers the three levels of the structure of lingual identity: the verbal and semantic level, the lingual and cognitive level, and the motivational level.

  13. 3D Representative Volume Element Reconstruction of Fiber Composites via Orientation Tensor and Substructure Features

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yi; Chen, Wei; Xu, Hongyi; Jin, Xuejun

    2016-01-01

    To provide a seamless integration of manufacturing processing simulation and fiber microstructure modeling, two new stochastic 3D microstructure reconstruction methods are proposed for two types of random fiber composites: random short fiber composites, and Sheet Molding Compounds (SMC) chopped fiber composites. A Random Sequential Adsorption (RSA) algorithm is first developed to embed statistical orientation information into 3D RVE reconstruction of random short fiber composites. For the SMC composites, an optimized Voronoi diagram based approach is developed for capturing the substructure features of SMC chopped fiber composites. The proposed methods are distinguished from other reconstruction works by providing a way of integrating statistical information (fiber orientation tensor) obtained from material processing simulation, as well as capturing the multiscale substructures of the SMC composites.
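
    The RSA step can be illustrated with a much-simplified 2D sketch: disks standing in for fiber cross-sections are proposed at random and accepted only if they do not overlap previously placed ones. This is not the authors' implementation (which works in 3D and embeds the fiber orientation tensor from processing simulation); the function name and parameters are illustrative assumptions.

        import numpy as np

        def rsa_disks(n_target, radius, max_attempts=20000, seed=0):
            """Simplified 2D Random Sequential Adsorption: sequentially place
            non-overlapping disks (idealized fiber cross-sections) in a unit square."""
            rng = np.random.default_rng(seed)
            centers = []
            attempts = 0
            while len(centers) < n_target and attempts < max_attempts:
                attempts += 1
                candidate = rng.uniform(radius, 1.0 - radius, size=2)          # keep disk fully inside the box
                if all(np.linalg.norm(candidate - c) >= 2 * radius for c in centers):
                    centers.append(candidate)                                  # accept: no overlap with existing disks
            return np.array(centers)

        print("placed", len(rsa_disks(n_target=60, radius=0.04)), "disks")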

  14. PSYCHOLOGICAL FEATURES OF REPRESENTATIVES OF THE CHECHEN YOUTH PROFESSING ISLAM AND EXPERIENCING MYTHOLOGICAL FEARS

    Directory of Open Access Journals (Sweden)

    Razet Grimsoltanova

    2015-06-01

    Full Text Available The article describes a study of the relevance, role, and place of religious content among young people in the post-conflict areas of the South of Russia. The rate of experiences of mythological fear was explored with the help of a survey, and the individual psychological characteristics of the subjects were studied with the methods of Eysenck, Schmieschek, J. Rotter, and Taylor. Those representatives of the surveyed youth sample who experience a high level of mythological fear could fall into the danger zone of initiation into non-traditional religious sects and come under extremists' influence, since the manipulation of consciousness and human behavior is most effective when it exploits individual psychological characteristics and mythological content, such as the fear of possession by jinni. The study involved representatives of Islam aged 19-21, divided by gender: 100 young men and 100 girls. It was aimed at identifying individual personality characteristics of temperament, character accentuations, locus of control, and the level of personal anxiety, and the results of a content analysis of the survey conducted by the author were classified according to five scales. The results revealed that about 80% of subjects experiencing a high level of mythological fears showed the same characteristic correlation indices. In connection with these results, we developed and proposed a complex of psycho-pedagogical support consisting of four modules for educational, preventive, and corrective activities with young people experiencing a high level of mythological fear (fear of possession by jinni).

  15. Facial paralysis

    Science.gov (United States)

    ... otherwise healthy, facial paralysis is often due to Bell palsy . This is a condition in which the facial ... speech, or occupational therapist. If facial paralysis from Bell palsy lasts for more than 6 to 12 months, ...

  16. Human Rhinovirus 87 and Enterovirus 68 Represent a Unique Serotype with Rhinovirus and Enterovirus Features

    Science.gov (United States)

    Blomqvist, Soile; Savolainen, Carita; Råman, Laura; Roivainen, Merja; Hovi, Tapani

    2002-01-01

    It has recently been reported that all but one of the 102 known serotypes of the genus Rhinovirus segregate into two genetic clusters (C. Savolainen, S. Blomqvist, M. N. Mulders, and T. Hovi, J. Gen. Virol. 83:333-340, 2002). The only exception is human rhinovirus 87 (HRV87). Here we demonstrate that HRV87 is genetically and antigenically highly similar to enterovirus 68 (EV68) and is related to EV70, the other member of human enterovirus group D. The partial nucleotide sequences of the 5′ untranslated region, capsid regions VP4/VP2 and VP1, and the 3D RNA polymerase gene of the HRV87 prototype strain F02-3607 Corn showed 97.3, 97.8, 95.2, and 95.9% identity to the corresponding regions of EV68 prototype strain Fermon. The amino acid identities were 100 and 98.1% for the products of the two capsid regions and 97.9% for 3D RNA polymerase. Antigenic cross-reaction between HRV87 and EV68 was indicated by microneutralization with monotypic antisera. Phylogenetic analysis showed definite clustering of HRV87 and EV68 with EV70 for all sequences examined. Both HRV87 and EV68 were shown to be acid sensitive by two different assays, while EV70 was acid resistant, which is typical of enteroviruses. The cytopathic effect induced by HRV87 or EV68 was inhibited by monoclonal antibodies to the decay-accelerating factor known to be the receptor of EV70. We conclude that HRV87 and EV68 are strains of the same picornavirus serotype presenting features of both rhinoviruses and enteroviruses. PMID:12409401

  17. Hirschsprung disease, microcephaly, mental retardation, and characteristic facial features: delineation of a new syndrome and identification of a locus at chromosome 2q22-q23.

    Science.gov (United States)

    Mowat, D R; Croaker, G D; Cass, D T; Kerr, B A; Chaitow, J; Adès, L C; Chia, N L; Wilson, M J

    1998-01-01

    We have identified six children with a distinctive facial phenotype in association with mental retardation (MR), microcephaly, and short stature, four of whom presented with Hirschsprung (HSCR) disease in the neonatal period. HSCR was diagnosed in a further child at the age of 3 years after investigation for severe chronic constipation and another child, identified as sharing the same facial phenotype, had chronic constipation, but did not have HSCR. One of our patients has an interstitial deletion of chromosome 2, del(2)(q21q23). These children strongly resemble the patient reported by Lurie et al with HSCR and dysmorphic features associated with del(2)(q22q23). All patients have been isolated cases, suggesting a contiguous gene syndrome or a dominant single gene disorder involving a locus for HSCR located at 2q22-q23. Review of published reports suggests that there is significant phenotypic and genetic heterogeneity within the group of patients with HSCR, MR, and microcephaly. In particular, our patients appear to have a separate disorder from Goldberg-Shprintzen syndrome, for which autosomal recessive inheritance has been proposed because of sib recurrence and consanguinity in some families. Images PMID:9719364

  18. The review and results of different methods for facial recognition

    Science.gov (United States)

    Le, Yifan

    2017-09-01

    In recent years, facial recognition has drawn much attention due to its wide range of potential applications. As a unique technology in biometric identification, facial recognition represents a significant advantage because it can operate without the cooperation of the person under detection. Hence, facial recognition can be applied to defense systems, medical detection, human behavior understanding, etc. Several theories and methods have been established to make progress in facial recognition: (1) a novel two-stage facial landmark localization method that achieves more accurate localization on specific databases; (2) a statistical face frontalization method that outperforms state-of-the-art methods for face landmark localization; (3) a general facial landmark detection algorithm that handles images with severe occlusion and images with large head poses; (4) three methods for face alignment, including a shape-augmented regression method, a pose-indexed multi-view method, and a learning-based method that regresses local binary features. The aim of this paper is to analyze previous work on different aspects of facial recognition, focusing on concrete methods and their performance on various databases. In addition, some improvement measures and suggestions for potential applications are put forward.
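
    As a generic illustration of the facial landmark localization task surveyed above (not any of the specific methods in this review), the sketch below uses dlib's pretrained 68-point shape predictor; the model file and image path are assumptions that must exist locally.

        import cv2
        import dlib

        detector = dlib.get_frontal_face_detector()                                # HOG-based face detector
        predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # assumed local model file

        image = cv2.imread("face.jpg")                                             # assumed input image
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

        for face in detector(gray):                                                # one rectangle per detected face
            shape = predictor(gray, face)                                          # 68 landmark points
            for i in range(68):
                cv2.circle(image, (shape.part(i).x, shape.part(i).y), 2, (0, 255, 0), -1)

        cv2.imwrite("face_landmarks.jpg", image)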

  19. Hair length, facial attractiveness, personality attribution: A multiple fitness model of hairdressing

    OpenAIRE

    Bereczkei, Tamas; Mesko, Norbert

    2007-01-01

    Multiple Fitness Model states that attractiveness varies across multiple dimensions, with each feature representing a different aspect of mate value. In the present study, male raters judged the attractiveness of young females with neotenous and mature facial features, with various hair lengths. Results revealed that the physical appearance of long-haired women was rated high, regardless of their facial attractiveness being valued high or low. Women rated as most attractive were those whose f...

  20. Persistent facial pain conditions

    DEFF Research Database (Denmark)

    Forssell, Heli; Alstergren, Per; Bakke, Merete

    2016-01-01

    Persistent facial pains, especially temporomandibular disorders (TMD), are common conditions. As dentists are responsible for the treatment of most of these disorders, up-to date knowledge on the latest advances in the field is essential for successful diagnosis and management. The review covers...... TMD, and different neuropathic or putative neuropathic facial pains such as persistent idiopathic facial pain and atypical odontalgia, trigeminal neuralgia and painful posttraumatic trigeminal neuropathy. The article presents an overview of TMD pain as a biopsychosocial condition, its prevalence......, clinical features, consequences, central and peripheral mechanisms, diagnostic criteria (DC/TMD), and principles of management. For each of the neuropathic facial pain entities, the definitions, prevalence, clinical features, and diagnostics are described. The current understanding of the pathophysiology...

  1. Nablus mask-like facial syndrome

    DEFF Research Database (Denmark)

    Allanson, Judith; Smith, Amanda; Hare, Heather

    2012-01-01

    Nablus mask-like facial syndrome (NMLFS) has many distinctive phenotypic features, particularly tight glistening skin with reduced facial expression, blepharophimosis, telecanthus, bulky nasal tip, abnormal external ear architecture, upswept frontal hairline, and sparse eyebrows. Over the last few...

  2. A novel malformation complex of bilateral and symmetric preaxial radial ray-thumb aplasia and lower limb defects with minimal facial dysmorphic features: a case report and literature review.

    Science.gov (United States)

    Al Kaissi, Ali; Klaushofer, Klaus; Krebs, Alexander; Grill, Franz

    2008-10-24

    Radial hemimelia is a congenital abnormality characterised by the partial or complete absence of the radius. Longitudinal hemimelia indicates the absence of one or more bones along the preaxial (medial) or postaxial (lateral) side of the limb. Preaxial limb defects occur more frequently in combination with microtia, esophageal atresia, anorectal atresia, heart defects, unilateral kidney dysgenesis, and some axial skeletal defects. Postaxial acrofacial dysostoses are characterised by distinctive facies and postaxial limb deficiencies involving the 5th finger, metacarpal, ulna, fibula, and metatarsal. The patient was an 8-year-old boy with minimal craniofacial dysmorphic features but with profound upper limb defects: bilateral, symmetrical absence of the radii and thumbs. In addition, there was unilateral tibio-fibular hypoplasia (hemimelia) associated with hypoplasia of the terminal phalanges and malsegmentation of the upper thoracic vertebrae, effectively causing the development of thoracic kyphosis. In the typical form of preaxial acrofacial dysostosis, there are aberrations in the development of the first and second branchial arches and limb buds. The craniofacial dysmorphic features are characteristic, such as micrognathia, zygomatic hypoplasia, cleft palate, and preaxial limb defects. It was Nager and de Reynier in 1948 who used the term acrofacial dysostosis (AFD) to distinguish the condition from mandibulofacial dysostosis. Neither the facial features nor the limb defects in our present patient appear to be absolutely typical of the previously reported cases of AFD. Our patient expands the phenotype of the syndromic preaxial limb malformation complex. He might represent a new syndromic entity of mild naso-maxillary malformation in connection with an axial and extra-axial malformation complex.

  3. Election Districts and Precincts, PrecinctPoly-The data set is a polygon feature consisting of 220 segments representing voter precinct boundaries., Published in 1991, Davis County Government.

    Data.gov (United States)

    NSGIC Local Govt | GIS Inventory — Election Districts and Precincts dataset current as of 1991. PrecinctPoly-The data set is a polygon feature consisting of 220 segments representing voter precinct...

  4. Facial Expression Recognition of Various Internal States via Manifold Learning

    Institute of Scientific and Technical Information of China (English)

    Young-Suk Shin

    2009-01-01

    Emotions are becoming increasingly important in human-centered interaction architectures. Recognition of facial expressions, which are central to human-computer interactions, seems natural and desirable. However, facial expressions include mixed emotions that are continuous rather than discrete and vary from moment to moment. This paper presents a novel method of recognizing facial expressions of various internal states via manifold learning, to achieve the aim of human-centered interaction studies. A critical review of widely used emotion models is given; then, facial expression features of various internal states are extracted via locally linear embedding (LLE). The recognition of facial expressions is carried out within the pleasure-displeasure and arousal-sleep dimensions of a two-dimensional model of emotion. The recognition result of various internal state expressions mapped to the embedding space via the LLE algorithm can effectively represent the structural nature of the two-dimensional model of emotion. Our research has therefore established that the relationship between facial expressions of various internal states can be elaborated in the two-dimensional model of emotion via the locally linear embedding algorithm.
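
    A minimal sketch of the LLE step described above, using scikit-learn's LocallyLinearEmbedding on stand-in data: random vectors take the place of real facial-expression features, and the neighborhood size and dimensionality are illustrative assumptions rather than the paper's settings.

        import numpy as np
        from sklearn.manifold import LocallyLinearEmbedding

        rng = np.random.default_rng(0)
        X = rng.random((200, 1024))          # stand-in for 200 flattened facial-expression feature vectors

        # Embed into two dimensions, loosely analogous to the pleasure-displeasure
        # and arousal-sleep axes of the two-dimensional emotion model.
        lle = LocallyLinearEmbedding(n_components=2, n_neighbors=12)
        embedded = lle.fit_transform(X)
        print(embedded.shape)                # (200, 2)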

  5. Facial trauma

    Science.gov (United States)

    Maxillofacial injury; Midface trauma; Facial injury; LeFort injuries ... Hockberger RS, Walls RM, eds. Rosen's Emergency Medicine: Concepts and Clinical Practice . 8th ed. Philadelphia, PA: Elsevier ...

  6. Recognizing Facial Slivers.

    Science.gov (United States)

    Gilad-Gutnick, Sharon; Harmatz, Elia Samuel; Tsourides, Kleovoulos; Yovel, Galit; Sinha, Pawan

    2018-07-01

    We report here an unexpectedly robust ability of healthy human individuals ( n = 40) to recognize extremely distorted needle-like facial images, challenging the well-entrenched notion that veridical spatial configuration is necessary for extracting facial identity. In face identification tasks of parametrically compressed internal and external features, we found that the sum of performances on each cue falls significantly short of performance on full faces, despite the equal visual information available from both measures (with full faces essentially being a superposition of internal and external features). We hypothesize that this large deficit stems from the use of positional information about how the internal features are positioned relative to the external features. To test this, we systematically changed the relations between internal and external features and found preferential encoding of vertical but not horizontal spatial relationships in facial representations ( n = 20). Finally, we employ magnetoencephalography imaging ( n = 20) to demonstrate a close mapping between the behavioral psychometric curve and the amplitude of the M250 face familiarity, but not M170 face-sensitive evoked response field component, providing evidence that the M250 can be modulated by faces that are perceptually identifiable, irrespective of extreme distortions to the face's veridical configuration. We theorize that the tolerance to compressive distortions has evolved from the need to recognize faces across varying viewpoints. Our findings help clarify the important, but poorly defined, concept of facial configuration and also enable an association between behavioral performance and previously reported neural correlates of face perception.

  7. Genetics Home Reference: oral-facial-digital syndrome

    Science.gov (United States)

    ... related conditions that affect the development of the oral cavity (the mouth and teeth), facial features, and digits ( ... this disorder involve problems with development of the oral cavity , facial features, and digits. Most forms are also ...

  8. Facial Fractures.

    Science.gov (United States)

    Ricketts, Sophie; Gill, Hameet S; Fialkov, Jeffery A; Matic, Damir B; Antonyshyn, Oleh M

    2016-02-01

    After reading this article, the participant should be able to: 1. Demonstrate an understanding of some of the changes in aspects of facial fracture management. 2. Assess a patient presenting with facial fractures. 3. Understand indications and timing of surgery. 4. Recognize exposures of the craniomaxillofacial skeleton. 5. Identify methods for repair of typical facial fracture patterns. 6. Discuss the common complications seen with facial fractures. Restoration of the facial skeleton and associated soft tissues after trauma involves accurate clinical and radiologic assessment to effectively plan a management approach for these injuries. When surgical intervention is necessary, timing, exposure, sequencing, and execution of repair are all integral to achieving the best long-term outcomes for these patients.

  9. A Genome-Wide Association Study Identifies Five Loci Influencing Facial Morphology in Europeans

    Science.gov (United States)

    Liu, Fan; van der Lijn, Fedde; Schurmann, Claudia; Zhu, Gu; Chakravarty, M. Mallar; Hysi, Pirro G.; Wollstein, Andreas; Lao, Oscar; de Bruijne, Marleen; Ikram, M. Arfan; van der Lugt, Aad; Rivadeneira, Fernando; Uitterlinden, André G.; Hofman, Albert; Niessen, Wiro J.; Homuth, Georg; de Zubicaray, Greig; McMahon, Katie L.; Thompson, Paul M.; Daboul, Amro; Puls, Ralf; Hegenscheid, Katrin; Bevan, Liisa; Pausova, Zdenka; Medland, Sarah E.; Montgomery, Grant W.; Wright, Margaret J.; Wicking, Carol; Boehringer, Stefan; Spector, Timothy D.; Paus, Tomáš; Martin, Nicholas G.; Biffar, Reiner; Kayser, Manfred

    2012-01-01

    Inter-individual variation in facial shape is one of the most noticeable phenotypes in humans, and it is clearly under genetic regulation; however, almost nothing is known about the genetic basis of normal human facial morphology. We therefore conducted a genome-wide association study for facial shape phenotypes in multiple discovery and replication cohorts, considering almost ten thousand individuals of European descent from several countries. Phenotyping of facial shape features was based on landmark data obtained from three-dimensional head magnetic resonance images (MRIs) and two-dimensional portrait images. We identified five independent genetic loci associated with different facial phenotypes, suggesting the involvement of five candidate genes—PRDM16, PAX3, TP63, C5orf50, and COL17A1—in the determination of the human face. Three of them have been implicated previously in vertebrate craniofacial development and disease, and the remaining two genes potentially represent novel players in the molecular networks governing facial development. Our finding at PAX3 influencing the position of the nasion replicates a recent GWAS of facial features. In addition to the reported GWA findings, we established links between common DNA variants previously associated with NSCL/P at 2p21, 8q24, 13q31, and 17q22 and normal facial-shape variations based on a candidate gene approach. Overall our study implies that DNA variants in genes essential for craniofacial development contribute with relatively small effect size to the spectrum of normal variation in human facial morphology. This observation has important consequences for future studies aiming to identify more genes involved in the human facial morphology, as well as for potential applications of DNA prediction of facial shape such as in future forensic applications. PMID:23028347

  10. A genome-wide association study identifies five loci influencing facial morphology in Europeans.

    Directory of Open Access Journals (Sweden)

    Fan Liu

    2012-09-01

    Full Text Available Inter-individual variation in facial shape is one of the most noticeable phenotypes in humans, and it is clearly under genetic regulation; however, almost nothing is known about the genetic basis of normal human facial morphology. We therefore conducted a genome-wide association study for facial shape phenotypes in multiple discovery and replication cohorts, considering almost ten thousand individuals of European descent from several countries. Phenotyping of facial shape features was based on landmark data obtained from three-dimensional head magnetic resonance images (MRIs and two-dimensional portrait images. We identified five independent genetic loci associated with different facial phenotypes, suggesting the involvement of five candidate genes--PRDM16, PAX3, TP63, C5orf50, and COL17A1--in the determination of the human face. Three of them have been implicated previously in vertebrate craniofacial development and disease, and the remaining two genes potentially represent novel players in the molecular networks governing facial development. Our finding at PAX3 influencing the position of the nasion replicates a recent GWAS of facial features. In addition to the reported GWA findings, we established links between common DNA variants previously associated with NSCL/P at 2p21, 8q24, 13q31, and 17q22 and normal facial-shape variations based on a candidate gene approach. Overall our study implies that DNA variants in genes essential for craniofacial development contribute with relatively small effect size to the spectrum of normal variation in human facial morphology. This observation has important consequences for future studies aiming to identify more genes involved in the human facial morphology, as well as for potential applications of DNA prediction of facial shape such as in future forensic applications.

  11. Impaired recognition of facial emotions from low-spatial frequencies in Asperger syndrome.

    Science.gov (United States)

    Kätsyri, Jari; Saalasti, Satu; Tiippana, Kaisa; von Wendt, Lennart; Sams, Mikko

    2008-01-01

    The theory of 'weak central coherence' [Happe, F., & Frith, U. (2006). The weak coherence account: Detail-focused cognitive style in autism spectrum disorders. Journal of Autism and Developmental Disorders, 36(1), 5-25] implies that persons with autism spectrum disorders (ASDs) have a perceptual bias for local but not for global stimulus features. The recognition of emotional facial expressions representing various different levels of detail has not been studied previously in ASDs. We analyzed the recognition of four basic emotional facial expressions (anger, disgust, fear and happiness) from low-spatial frequencies (overall global shapes without local features) in adults with an ASD. A group of 20 participants with Asperger syndrome (AS) was compared to a group of non-autistic age- and sex-matched controls. Emotion recognition was tested from static and dynamic facial expressions whose spatial frequency contents had been manipulated by low-pass filtering at two levels. The two groups recognized emotions similarly from non-filtered faces and from dynamic vs. static facial expressions. In contrast, the participants with AS were less accurate than controls in recognizing facial emotions from very low-spatial frequencies. The results suggest intact recognition of basic facial emotions and dynamic facial information, but impaired visual processing of global features in ASDs.
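
    The spatial-frequency manipulation described above can be approximated with simple Gaussian low-pass filtering: larger filter widths remove more high spatial frequencies, leaving only the overall global shape. The sigma values below are illustrative assumptions, not the cutoffs used in the study, and "face.jpg" is an assumed input file.

        import cv2

        face = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)     # assumed input image

        # With ksize=(0, 0), OpenCV derives the kernel size from sigma.
        mild = cv2.GaussianBlur(face, (0, 0), sigmaX=4)          # mild low-pass: coarse features remain
        strong = cv2.GaussianBlur(face, (0, 0), sigmaX=12)       # strong low-pass: only global shape survives

        cv2.imwrite("face_lowpass_mild.png", mild)
        cv2.imwrite("face_lowpass_strong.png", strong)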

  12. Facial Fractures.

    Science.gov (United States)

    Ghosh, Rajarshi; Gopalkrishnan, Kulandaswamy

    2018-06-01

    The aim of this study is to retrospectively analyze the incidence of facial fractures along with age, gender predilection, etiology, commonest site, associated dental injuries, and any complications in patients operated on in the Craniofacial Unit of SDM College of Dental Sciences and Hospital. This retrospective study was conducted at the Department of OMFS, SDM College of Dental Sciences, Dharwad from January 2003 to December 2013. Data were recorded for the cause of injury, age and gender distribution, frequency and type of injury, localization and frequency of soft tissue injuries, dentoalveolar trauma, facial bone fractures, complications, concomitant injuries, and different treatment protocols. All data were analyzed using the chi-squared test. A total of 1146 patients reported to our unit with facial fractures during these 10 years. Males accounted for a higher frequency of facial fractures (88.8%). The mandible was the commonest bone to be fractured among all the facial bones (71.2%). Maxillary central incisors were the most common teeth to be injured (33.8%) and avulsion was the most common type of injury (44.6%). The commonest postoperative complication was plate infection (11%) leading to plate removal. Other injuries associated with facial fractures were rib fractures, head injuries, upper and lower limb fractures, etc.; among these, rib fractures were seen most frequently (21.6%). This study was performed to compare the different etiologic factors leading to diverse facial fracture patterns. Statistical analysis of these records allows the relationship of facial fractures to gender, age, associated comorbidities, etc. to be determined.

  13. Exploring the Relation between Prenatal and Neonatal Complications and Later Autistic-Like Features in a Representative Community Sample of Twins

    Science.gov (United States)

    Ronald, Angelica; Happe, Francesca; Dworzynski, Katharina; Bolton, Patrick; Plomin, Robert

    2010-01-01

    Prenatal and neonatal events were reported by parents of 13,690 eighteen-month-old twins enrolled in the Twins Early Development Study, a representative community sample born in England and Wales. At ages 7-8, parents and teachers completed questionnaires on social and nonsocial autistic-like features and parents completed the Childhood Asperger…

  14. Análisis comparativo de descriptores de forma 3D para detección de características faciales / Comparative analysis of 3D shape descriptors for facial feature detection

    OpenAIRE

    Cerón Correa, Alexander

    2011-01-01

    The human face presents a large number of characteristics that can currently be modeled by a simple 2D pattern, a complex set of 3D vertices forming a polygonal mesh, or a set of parameters for each degree of freedom or variation. Facial characterization has a wide range of applications, including face identification, face modeling, voice synthesis, expression identification, and facial surgery. The models ...

  15. Facial Sports Injuries

    Science.gov (United States)

    ... Facial Sports Injuries ... should receive immediate medical attention. Prevention Of Facial Sports Injuries The best way to treat facial sports ...

  16. Facial Cosmetic Surgery

    Science.gov (United States)

    ... Facial Cosmetic Surgery Extensive education and training in surgical procedures ...

  17. Facial trauma.

    Science.gov (United States)

    Peeters, N; Lemkens, P; Leach, R; Gemels B; Schepers, S; Lemmens, W

    Facial trauma. Patients with facial trauma must be assessed in a systematic way so as to avoid missing any injury. Severe and disfiguring facial injuries can be distracting. However, clinicians must first focus on the basics of trauma care, following the Advanced Trauma Life Support (ATLS) system of care. Maxillofacial trauma occurs in a significant number of severely injured patients. Life- and sight-threatening injuries must be excluded during the primary and secondary surveys. Special attention must be paid to sight-threatening injuries in stabilized patients through early referral to an appropriate specialist or the early initiation of emergency care treatment. The gold standard for the radiographic evaluation of facial injuries is computed tomography (CT) imaging. Nasal fractures are the most frequent isolated facial fractures. Isolated nasal fractures are principally diagnosed through history and clinical examination. Closed reduction is the most frequently performed treatment for isolated nasal fractures, with a fractured nasal septum as a predictor of failure. Ear, nose and throat surgeons, maxillofacial surgeons and ophthalmologists must all develop an adequate treatment plan for patients with complex maxillofacial trauma.

  18. Rejuvenecimiento facial

    Directory of Open Access Journals (Sweden)

    L. Daniel Jacubovsky, Dr.

    2010-01-01

    Full Text Available Facial aging is a process that is unique and particular to each individual and is governed above all by his or her genetic load. The facelift is a complex technique, developed in our specialty since the beginning of the century, to reverse the main signs of this process. The secondary factors that bear on facial aging are multiple, and for this reason the rhytidectomies, or cervicofacial lifts, that have been described have sought to correct the physiognomic changes of aging by working, as described, in all the tissue planes involved. This surgery therefore demands thorough knowledge of surgical anatomy, skill, and experience in order to reduce complications, surgical stigmata, and secondary revisions. Facial rhytidectomy has evolved toward a simpler procedure, with shorter incisions and less extensive dissections. Muscle suspensions have varied in their execution, and the vectors of lift and skin resection are crucial to the aesthetic results of cervicofacial surgery. Today these vectors involve more vertical traction. The correction of laxity is accompanied by an interest in restoring the volume of the facial surface, especially the middle third. Surgical rejuvenation techniques, in particular the facelift, require planning for each patient. Techniques adjunct to the facelift, such as blepharoplasty, mentoplasty, neck liposuction, facial implants, and others, have also evolved positively toward reduced risks and better aesthetic success.

  19. Reconocimiento facial

    OpenAIRE

    Urtiaga Abad, Juan Alfonso

    2014-01-01

    This project deals with one of the most challenging fields of artificial intelligence: facial recognition. Something as simple for people as recognizing a familiar face translates into complex algorithms and thousands of data points processed in a matter of seconds. The project begins with a study of the state of the art of the various facial recognition techniques, from the most widely used and proven ones, such as PCA and LDA, to experimental techniques that use ...
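
    As a generic illustration of the PCA ("eigenfaces") family of techniques mentioned above, and not this project's own implementation, the following sketch projects faces from scikit-learn's bundled LFW dataset onto principal components and classifies them with a nearest-neighbour rule; the component count and neighbour count are arbitrary assumptions.

        from sklearn.datasets import fetch_lfw_people
        from sklearn.decomposition import PCA
        from sklearn.model_selection import train_test_split
        from sklearn.neighbors import KNeighborsClassifier

        faces = fetch_lfw_people(min_faces_per_person=50)        # downloads the LFW data on first use
        X_train, X_test, y_train, y_test = train_test_split(
            faces.data, faces.target, test_size=0.25, random_state=0)

        pca = PCA(n_components=100, whiten=True).fit(X_train)    # the "eigenfaces" subspace
        clf = KNeighborsClassifier(n_neighbors=3).fit(pca.transform(X_train), y_train)
        print("test accuracy:", round(clf.score(pca.transform(X_test), y_test), 3))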

  20. FACIAL PAIN·

    African Journals Online (AJOL)

    As the conditions which cause pain in the facial structures are many and varied, the ... involvement of the auriculo-temporal nerve and is usually relieved by avulsion of that .... of its effects. If it is suspected that a lesion in the posterior fossa may ...

  1. Delayed speech development, facial asymmetry, strabismus, and transverse ear lobe creases: a new syndrome?

    OpenAIRE

    Méhes, K

    1993-01-01

    A 4 year 9 month old boy and his 3 year 5 month old sister presented with delayed speech development, facial asymmetry, strabismus, and transverse ear lobe creases. The same features were found in their mother, but the father had no such anomalies. To our knowledge this familial association has not been described before and may represent an autosomal dominant syndrome.

  2. Chondromyxoid fibroma of the mastoid facial nerve canal mimicking a facial nerve schwannoma.

    Science.gov (United States)

    Thompson, Andrew L; Bharatha, Aditya; Aviv, Richard I; Nedzelski, Julian; Chen, Joseph; Bilbao, Juan M; Wong, John; Saad, Reda; Symons, Sean P

    2009-07-01

    Chondromyxoid fibroma of the skull base is a rare entity. Involvement of the temporal bone is particularly rare. We present an unusual case of progressive facial nerve paralysis with imaging and clinical findings most suggestive of a facial nerve schwannoma. The lesion was tubular in appearance, expanded the mastoid facial nerve canal, protruded out of the stylomastoid foramen, and enhanced homogeneously. The only unusual imaging feature was minor calcification within the tumor. Surgery revealed an irregular, cystic lesion. Pathology diagnosed a chondromyxoid fibroma involving the mastoid portion of the facial nerve canal, destroying the facial nerve.

  3. Colesteatoma causando paralisia facial Cholesteatoma causing facial paralysis

    Directory of Open Access Journals (Sweden)

    José Ricardo Gurgel Testa

    2003-10-01

    blood supply or production of neurotoxic substances secreted from either the cholesteatoma matrix or bacteria enclosed in the tumor. AIM: To evaluate the incidence, clinical features and treatment of facial palsy due to cholesteatoma. STUDY DESIGN: Clinical retrospective. MATERIAL AND METHOD: Retrospective study of 10 cases of facial paralysis due to cholesteatoma, selected through a survey of 206 decompressions of the facial nerve for various aetiologies performed over the last 10 years at UNIFESP-EPM. RESULTS: The incidence of facial paralysis due to cholesteatoma in this study was 4.85%, with female predominance (60%). The average age of the patients was 39 years. The duration and severity of the facial palsy, together with the extension of the lesion, were important for the functional recovery of the facial nerve. CONCLUSION: An early surgical approach is necessary in these cases to improve nerve function more adequately. When disruption or intense fibrous replacement occurs in the facial nerve, nerve grafting (greater auricular/sural nerves) and/or hypoglossal-facial anastomosis may be suggested.

  4. Facial nerve paralysis in children

    Science.gov (United States)

    Ciorba, Andrea; Corazzi, Virginia; Conz, Veronica; Bianchini, Chiara; Aimoni, Claudia

    2015-01-01

    Facial nerve palsy is a condition with several implications, particularly when occurring in childhood. It represents a serious clinical problem as it causes significant concern in doctors because of its etiology, treatment options and outcome, as well as in young patients and their parents, because of the functional and aesthetic consequences. There are several described causes of facial nerve paralysis in children, as it can be congenital (due to delivery traumas and genetic or malformative diseases) or acquired (due to infective, inflammatory, neoplastic, traumatic or iatrogenic causes). Nonetheless, in approximately 40%-75% of the cases, the cause of unilateral facial paralysis still remains idiopathic. A careful diagnostic workup and differential diagnosis are particularly recommended in cases of pediatric facial nerve palsy, in order to establish the most appropriate treatment, as the therapeutic approach differs in relation to the etiology. PMID:26677445

  5. Influence of gravity upon some facial signs.

    Science.gov (United States)

    Flament, F; Bazin, R; Piot, B

    2015-06-01

    Facial clinical signs and their integration form the basis of the perception that others have of us, notably the age they imagine us to be. Objective measurement of facial modifications in motion, before and after application of a skin regimen, is essential for improving our capacity to describe efficacy in facial dynamics. Quantifying facial modifications with respect to gravity allows us to address how facial shape is 'controlled' in daily activities. Standardized photographs of the faces of 30 Caucasian female subjects of various ages (24-73 years) were successively taken in upright and supine positions within a short time interval. All pictures were then reframed, so that any bias due to facial features was avoided when evaluating a single sign, and several facial signs were clinically graded by trained experts against published standardized photographic scales. For all subjects, the supine position increased facial width but not height, giving a fuller appearance to the face. More importantly, the supine position changed the severity of facial ageing features (e.g. wrinkles) compared to an upright position, and whether these features were attenuated or exacerbated depended on their facial location. The supine position mostly modifies signs of the lower half of the face, whereas those of the upper half appear unchanged or slightly accentuated. These changes appear much more marked in the older groups, where some deep labial folds almost vanish. These alterations decreased the perceived ages of the subjects by an average of 3.8 years. Although preliminary, this study suggests that a 90° rotation of the facial skin with respect to gravity induces rapid rearrangements, among which changes in tensional forces within and across the face, motility of interstitial free water in the underlying skin tissue and/or alterations of facial Langer lines likely play a significant role. © 2015 Society of Cosmetic Scientists and the Société Fran

  6. Morphological integration of soft-tissue facial morphology in Down Syndrome and siblings.

    Science.gov (United States)

    Starbuck, John; Reeves, Roger H; Richtsmeier, Joan

    2011-12-01

    Down syndrome (DS), resulting from trisomy of chromosome 21, is the most common live-born human aneuploidy. The phenotypic expression of trisomy 21 produces variable, though characteristic, facial morphology. Although certain facial features have been documented quantitatively and qualitatively as characteristic of DS (e.g., epicanthic folds, macroglossia, and hypertelorism), all of these traits occur in other craniofacial conditions with an underlying genetic cause. We hypothesize that the typical DS face is integrated differently than the face of non-DS siblings, and that the pattern of morphological integration unique to individuals with DS will yield information about underlying developmental associations between facial regions. We statistically compared morphological integration patterns of immature DS faces (N = 53) with those of non-DS siblings (N = 54), aged 6-12 years using 31 distances estimated from 3D coordinate data representing 17 anthropometric landmarks recorded on 3D digital photographic images. Facial features are affected differentially in DS, as evidenced by statistically significant differences in integration both within and between facial regions. Our results suggest a differential affect of trisomy on facial prominences during craniofacial development. 2011 Wiley Periodicals, Inc.

  7. 3D Facial Pattern Analysis for Autism

    Science.gov (United States)

    2010-07-01

    [Abstract garbled in extraction; only fragments remain: a two-level Gabor wavelet network (GWN) proposed to detect eight facial features (2001); six facial features detected in Bhuiyan et al. (2003); the reference Toyama, K., Krüger, V., 2001, Hierarchical Wavelet Networks for Facial Feature Localization, ICCV'01 Workshop on Recognition, Analysis and ...; and figure-caption fragments: pathological (red) and normal structure (blue); (b) signed distance map (negative distance indicates the pathological shape is inside); (c) raw ...]

  8. Quantitative anatomical analysis of facial expression using a 3D motion capture system: Application to cosmetic surgery and facial recognition technology.

    Science.gov (United States)

    Lee, Jae-Gi; Jung, Su-Jin; Lee, Hyung-Jin; Seo, Jung-Hyuk; Choi, You-Jin; Bae, Hyun-Sook; Park, Jong-Tae; Kim, Hee-Jin

    2015-09-01

    The topography of the facial muscles differs between males and females and among individuals of the same gender. To explain the unique expressions that people can make, it is important to define the shapes of the muscle, their associations with the skin, and their relative functions. Three-dimensional (3D) motion-capture analysis, often used to study facial expression, was used in this study to identify characteristic skin movements in males and females when they made six representative basic expressions. The movements of 44 reflective markers (RMs) positioned on anatomical landmarks were measured. Their mean displacement was large in males [ranging from 14.31 mm (fear) to 41.15 mm (anger)], and 3.35-4.76 mm smaller in females [ranging from 9.55 mm (fear) to 37.80 mm (anger)]. The percentages of RMs involved in the ten highest mean maximum displacement values in making at least one expression were 47.6% in males and 61.9% in females. The movements of the RMs were larger in males than females but were more limited. Expanding our understanding of facial expression requires morphological studies of facial muscles and studies of related complex functionality. Conducting these together with quantitative analyses, as in the present study, will yield data valuable for medicine, dentistry, and engineering, for example, for surgical operations on facial regions, software for predicting changes in facial features and expressions after corrective surgery, and the development of face-mimicking robots. © 2015 Wiley Periodicals, Inc.
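
    The displacement measure reported above reduces to per-marker Euclidean distances between a neutral frame and the expression peak. The sketch below illustrates that computation on random stand-in coordinates (44 markers, xyz in mm), since the study's motion-capture data are not available here.

        import numpy as np

        rng = np.random.default_rng(0)
        neutral = rng.random((44, 3)) * 100                        # 44 reflective markers at rest (stand-in data)
        peak = neutral + rng.normal(0, 5, size=(44, 3))            # same markers at an expression's peak

        displacement = np.linalg.norm(peak - neutral, axis=1)      # per-marker displacement in mm
        print("mean displacement: %.2f mm" % displacement.mean())
        print("ten largest:", np.sort(displacement)[-10:].round(1))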

  9. A statistical method for 2D facial landmarking

    NARCIS (Netherlands)

    Dibeklioğlu, H.; Salah, A.A.; Gevers, T.

    2012-01-01

    Many facial-analysis approaches rely on robust and accurate automatic facial landmarking to correctly function. In this paper, we describe a statistical method for automatic facial-landmark localization. Our landmarking relies on a parsimonious mixture model of Gabor wavelet features, computed in

  10. Estimador de calidad en sistemas de reconocimiento facial

    OpenAIRE

    Espejo Caballero, Daniel

    2015-01-01

    The goal of this project is to obtain an estimate of the quality of a facial image, based on the study and extraction of features obtained from facial images.

  11. Facial Prototype Formation in Children.

    Science.gov (United States)

    Inn, Donald; And Others

    This study examined memory representation as it is exhibited in young children's formation of facial prototypes. In the first part of the study, researchers constructed images of faces using an Identikit that provided the features of hair, eyes, mouth, nose, and chin. Images were varied systematically. A series of these images, called exemplar…

  12. Complex chromosome rearrangement in a child with microcephaly, dysmorphic facial features and mosaicism for a terminal deletion del(18)(q21.32-qter) investigated by FISH and array-CGH: Case report

    Directory of Open Access Journals (Sweden)

    Kokotas Haris

    2008-11-01

    Full Text Available Abstract We report on a 7 years and 4 months old Greek boy with mild microcephaly and dysmorphic facial features. He was a sociable child with maxillary hypoplasia, epicanthal folds, upslanting palpebral fissures with long eyelashes, and hypertelorism. His ears were prominent and dysmorphic, he had a long philtrum and a high arched palate. His weight was 17 kg (25th percentile) and his height 120 cm (50th percentile). High resolution chromosome analysis identified in 50% of the cells a normal male karyotype, and in 50% of the cells one chromosome 18 showed a terminal deletion from 18q21.32. Molecular cytogenetic investigation confirmed a del(18)(q21.32-qter) in the one chromosome 18, but furthermore revealed the presence of a duplication in q21.2 in the other chromosome 18. The case is discussed concerning comparable previously reported cases and the possible mechanisms of formation.

  13. Facial Nerve Schwannoma of the Cerebellopontine Angle: A Diagnostic Challenge

    OpenAIRE

    Lassaletta, Luis; Roda, José María; Frutos, Remedios; Patrón, Mercedes; Gavilán, Javier

    2002-01-01

    Facial nerve schwannomas are rare lesions that may involve any segment of the facial nerve. Because of their rarity and the lack of a consistent clinical and radiological pattern, facial nerve schwannomas located at the cerebellopontine angle (CPA) and internal auditory canal (IAC) represent a diagnostic and therapeutic challenge for clinicians. In this report, a case of a CPA/IAC facial nerve schwannoma is presented. Contemporary diagnosis and management of this rare lesion are analyzed.

  14. Heartbeat Rate Measurement from Facial Video

    DEFF Research Database (Denmark)

    Haque, Mohammad Ahsanul; Irani, Ramin; Nasrollahi, Kamal

    2016-01-01

    Heartbeat Rate (HR) reveals a person’s health condition. This paper presents an effective system for measuring HR from facial videos acquired in a more realistic environment than the testing environment of current systems. The proposed method utilizes a facial feature point tracking method...... by combining a ‘Good feature to track’ and a ‘Supervised descent method’ in order to overcome the limitations of currently available facial video based HR measuring systems. Such limitations include, e.g., unrealistic restriction of the subject’s movement and artificial lighting during data capture. A face...

  15. Magnetoencephalographic study on facial movements

    Directory of Open Access Journals (Sweden)

    Kensaku eMiki

    2014-07-01

    Full Text Available In this review, we introduced our three studies that focused on facial movements. In the first study, we examined the temporal characteristics of neural responses elicited by viewing mouth movements, and assessed differences between the responses to mouth opening and closing movements and an averting eyes condition. Our results showed that the occipitotemporal area, the human MT/V5 homologue, was active in the perception of both mouth and eye motions. Viewing mouth and eye movements did not elicit significantly different activity in the occipitotemporal area, which indicated that perception of the movement of facial parts may be processed in the same manner, and this is different from motion in general. In the second study, we investigated whether early activity in the occipitotemporal region evoked by eye movements was influenced by a face contour and/or features such as the mouth. Our results revealed specific information processing for eye movements in the occipitotemporal region, and this activity was significantly influenced by whether movements appeared with the facial contour and/or features, in other words, whether the eyes moved, even if the movement itself was the same. In the third study, we examined the effects of inverting the facial contour (hair and chin) and features (eyes, nose, and mouth) on processing for static and dynamic face perception. Our results showed the following: (1) In static face perception, activity in the right fusiform area was affected more by the inversion of features while that in the left fusiform area was affected more by a disruption in the spatial relationship between the contour and features, and (2) In dynamic face perception, activity in the right occipitotemporal area was affected by the inversion of the facial contour.

  16. Impaired social brain network for processing dynamic facial expressions in autism spectrum disorders

    Directory of Open Access Journals (Sweden)

    Sato Wataru

    2012-08-01

    Full Text Available Abstract Background Impairment of social interaction via facial expressions represents a core clinical feature of autism spectrum disorders (ASD). However, the neural correlates of this dysfunction remain unidentified. Because this dysfunction is manifested in real-life situations, we hypothesized that the observation of dynamic, compared with static, facial expressions would reveal abnormal brain functioning in individuals with ASD. We presented dynamic and static facial expressions of fear and happiness to individuals with high-functioning ASD and to age- and sex-matched typically developing controls and recorded their brain activities using functional magnetic resonance imaging (fMRI). Results Regional analysis revealed reduced activation of several brain regions in the ASD group compared with controls in response to dynamic versus static facial expressions, including the middle temporal gyrus (MTG), fusiform gyrus, amygdala, medial prefrontal cortex, and inferior frontal gyrus (IFG). Dynamic causal modeling analyses revealed that bi-directional effective connectivity involving the primary visual cortex–MTG–IFG circuit was enhanced in response to dynamic as compared with static facial expressions in the control group. Group comparisons revealed that all these modulatory effects were weaker in the ASD group than in the control group. Conclusions These results suggest that weak activity and connectivity of the social brain network underlie the impairment in social interaction involving dynamic facial expressions in individuals with ASD.

  17. Complex Odontome Causing Facial Asymmetry

    Directory of Open Access Journals (Sweden)

    Karthikeya Patil

    2006-01-01

    Full Text Available Odontomas are the most common non-cystic odontogenic lesions representing 70% of all odontogenic tumors. Often small and asymptomatic, they are detected on routine radiographs. Occasionally they become large and produce expansion of bone with consequent facial asymmetry. We report a case of such a lesion causing expansion of the mandible in an otherwise asymptomatic patient.

  18. Facial orientation and facial shape in extant great apes: a geometric morphometric analysis of covariation.

    Science.gov (United States)

    Neaux, Dimitri; Guy, Franck; Gilissen, Emmanuel; Coudyzer, Walter; Vignaud, Patrick; Ducrocq, Stéphane

    2013-01-01

    The organization of the bony face is complex, its morphology being influenced in part by the rest of the cranium. Characterizing the facial morphological variation and craniofacial covariation patterns in extant hominids is fundamental to the understanding of their evolutionary history. Numerous studies on hominid facial shape have proposed hypotheses concerning the relationship between the anterior facial shape, facial block orientation and basicranial flexion. In this study we test these hypotheses in a sample of adult specimens belonging to three extant hominid genera (Homo, Pan and Gorilla). Intraspecific variation and covariation patterns are analyzed using geometric morphometric methods and multivariate statistics, such as partial least squares on three-dimensional landmark coordinates. Our results indicate significant intraspecific covariation between facial shape, facial block orientation and basicranial flexion. Hominids share similar characteristics in the relationship between anterior facial shape and facial block orientation. Modern humans exhibit a specific pattern in the covariation between anterior facial shape and basicranial flexion. This peculiar feature underscores the role of modern humans' highly-flexed basicranium in the overall integration of the cranium. Furthermore, our results are consistent with the hypothesis of a relationship between the reduction of the value of the cranial base angle and a downward rotation of the facial block in modern humans, and to a lesser extent in chimpanzees.

  19. Recurrent unilateral facial nerve palsy in a child with dehiscent facial nerve canal

    Directory of Open Access Journals (Sweden)

    Christopher Liu

    2016-12-01

    Full Text Available Objective: The dehiscent facial nerve canal has been well documented in histopathological studies of temporal bones as well as in the clinical setting. We describe the clinical and radiologic features of a child with recurrent facial nerve palsy and a dehiscent facial nerve canal. Methods: Retrospective chart review. Results: A 5-year-old male was referred to the otolaryngology clinic for evaluation of recurrent acute otitis media and hearing loss. He also developed recurrent left peripheral facial nerve (FN) palsy associated with episodes of bilateral acute otitis media. High resolution computed tomography of the temporal bones revealed incomplete bony coverage of the tympanic segment of the left facial nerve. Conclusions: Recurrent peripheral FN palsy may occur in children with recurrent acute otitis media in the presence of a dehiscent facial nerve canal. Facial nerve canal dehiscence should be considered in the differential diagnosis of children with recurrent peripheral FN palsy.

  20. Facial Expression Recognition Through Machine Learning

    Directory of Open Access Journals (Sweden)

    Nazia Perveen

    2015-08-01

    Full Text Available Facial expressions communicate non-verbal cues that play an important role in interpersonal relations. Automatic recognition of facial expressions can be an important element of natural human-machine interfaces, and it can likewise be applied in behavioral science and clinical practice. Although people perceive facial expressions virtually instantaneously, robust expression recognition by machine is still a challenge. From the point of view of automatic recognition, a facial expression can be considered to comprise deformations of the facial parts and their spatial relations, or changes in facial pigmentation. Research into automatic recognition of facial expressions addresses the representation and classification of the static or dynamic properties of these deformations or of face pigmentation. We obtained our results using CVIPtools. The training set consists of six facial expressions from three persons, with 90 border-mask samples for training and 30 border-mask samples for testing; RST-invariant features and texture features were used for feature analysis and then classified with the k-Nearest Neighbor algorithm. The maximum accuracy achieved is 90%.
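
    The classification step in this record can be sketched as a generic k-Nearest Neighbor vote over precomputed feature vectors. The feature extraction itself (RST-invariant and texture features from CVIPtools) is assumed to have been done already, and the tiny arrays below are placeholders rather than the paper's data.

```python
# Minimal k-NN sketch over precomputed expression feature vectors (illustrative only).
import numpy as np
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Classify one feature vector x by majority vote among its k nearest training samples."""
    dists = np.linalg.norm(train_X - x, axis=1)   # Euclidean distance to every training sample
    nearest = np.argsort(dists)[:k]               # indices of the k closest samples
    return Counter(train_y[i] for i in nearest).most_common(1)[0][0]

# Placeholder feature vectors (standing in for RST-invariant + texture features) and labels.
train_X = np.array([[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.2]])
train_y = ["happy", "happy", "angry", "angry"]
print(knn_predict(train_X, train_y, np.array([0.15, 0.85])))   # -> "happy"
```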

  1. СREATING OF BARCODES FOR FACIAL IMAGES BASED ON INTENSITY GRADIENTS

    Directory of Open Access Journals (Sweden)

    G. A. Kukharev

    2014-05-01

    Full Text Available The paper provides an analysis of existing approaches to barcode generation and a description of the system structure for generating barcodes from facial images. A method for generating standard linear barcodes from facial images is proposed. This method is based on differences of intensity gradients, which represent images in the form of initial features. These features are then averaged over a limited number of intervals, the results are quantized into decimal digits from 0 to 9, and a table conversion into the standard barcode is performed. Testing was conducted on the Face94 database and a database of composite faces of different ages. It showed that the proposed method ensures the stability of the generated barcodes under changes of scale, pose and mirroring of facial images, as well as changes of facial expressions and shadows on faces from local lighting. The proposed solutions are computationally low-cost and do not require any specialized image processing software for generating facial barcodes in real-time systems.
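
    The pipeline described above (intensity-gradient features, averaging over a limited number of intervals, quantization to decimal digits 0-9) can be sketched roughly as follows. The block count, the 0-9 scaling, and the synthetic image are illustrative assumptions, and the final table conversion of the digit string into a printed standard barcode is omitted.

```python
# Illustrative sketch: turn a grayscale face image into a short digit string
# by averaging intensity-gradient features over a fixed number of intervals
# and quantizing to 0-9 (the conversion to a printed barcode is not shown).
import numpy as np

def face_to_digits(gray, n_intervals=13):
    gray = np.asarray(gray, dtype=float)
    grad = np.abs(np.diff(gray, axis=1)).sum(axis=0)        # column-wise intensity-gradient profile
    chunks = np.array_split(grad, n_intervals)              # average into a limited number of intervals
    means = np.array([c.mean() for c in chunks])
    lo, hi = means.min(), means.max()
    digits = np.round(9 * (means - lo) / (hi - lo + 1e-9))  # quantize to decimal digits 0-9
    return "".join(str(int(d)) for d in digits)

rng = np.random.default_rng(1)
print(face_to_digits(rng.integers(0, 256, size=(128, 128))))
```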

  2. Deletion of 11q12.3-11q13.1 in a patient with intellectual disability and childhood facial features resembling Cornelia de Lange syndrome

    DEFF Research Database (Denmark)

    Boyle, Martine Isabel; Jespersgaard, Cathrine; Nazaryan, Lusine

    2015-01-01

    Deletions within 11q12.3-11q13.1 are very rare and to date only two cases have been described in the literature. In this study we describe a 23-year-old male patient with intellectual disability, behavioral problems, dysmorphic features, dysphagia, gastroesophageal reflux and skeletal abnormalities...

  3. Local intensity area descriptor for facial recognition in ideal and noise conditions

    Science.gov (United States)

    Tran, Chi-Kien; Tseng, Chin-Dar; Chao, Pei-Ju; Ting, Hui-Min; Chang, Liyun; Huang, Yu-Jie; Lee, Tsair-Fwu

    2017-03-01

    We propose a local texture descriptor, local intensity area descriptor (LIAD), which is applied for human facial recognition in ideal and noisy conditions. Each facial image is divided into small regions from which LIAD histograms are extracted and concatenated into a single feature vector to represent the facial image. The recognition is performed using a nearest neighbor classifier with histogram intersection and chi-square statistics as dissimilarity measures. Experiments were conducted with LIAD using the ORL database of faces (Olivetti Research Laboratory, Cambridge), the Face94 face database, the Georgia Tech face database, and the FERET database. The results demonstrated the improvement in accuracy of our proposed descriptor compared to conventional descriptors [local binary pattern (LBP), uniform LBP, local ternary pattern, histogram of oriented gradients, and local directional pattern]. Moreover, the proposed descriptor was less sensitive to noise and had low histogram dimensionality. Thus, it is expected to be a powerful texture descriptor that can be used for various computer vision problems.
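
    The recognition pipeline summarized above (per-region histograms concatenated into one vector, then nearest-neighbor matching with chi-square or histogram-intersection dissimilarity) can be sketched generically. The sketch substitutes a plain intensity histogram for the LIAD descriptor, so it illustrates the pipeline rather than the descriptor itself.

```python
# Generic sketch of region-histogram matching (plain intensity histograms stand in for LIAD).
import numpy as np

def region_histograms(gray, grid=(4, 4), bins=16):
    """Split the image into grid cells, histogram each cell, concatenate into one vector."""
    rows = np.array_split(np.asarray(gray, float), grid[0], axis=0)
    feats = []
    for r in rows:
        for cell in np.array_split(r, grid[1], axis=1):
            h, _ = np.histogram(cell, bins=bins, range=(0, 256), density=True)
            feats.append(h)
    return np.concatenate(feats)

def chi_square(h1, h2, eps=1e-9):
    return float(np.sum((h1 - h2) ** 2 / (h1 + h2 + eps)))

rng = np.random.default_rng(2)
gallery = {name: region_histograms(rng.integers(0, 256, (64, 64))) for name in ("id_a", "id_b")}
probe = region_histograms(rng.integers(0, 256, (64, 64)))
print(min(gallery, key=lambda n: chi_square(gallery[n], probe)))  # nearest-neighbor identity
```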

  4. Representing dispositions

    Directory of Open Access Journals (Sweden)

    Röhl Johannes

    2011-08-01

    Full Text Available Abstract Dispositions and tendencies feature significantly in the biomedical domain and therefore in representations of knowledge of that domain. They are not only important for specific applications like an infectious disease ontology, but also as part of a general strategy for modelling knowledge about molecular interactions. But the task of representing dispositions in some formal ontological systems is fraught with several problems, which are partly due to the fact that Description Logics can only deal well with binary relations. The paper will discuss some of the results of the philosophical debate about dispositions, in order to see whether the formal relations needed to represent dispositions can be broken down to binary relations. Finally, we will discuss problems arising from the possibility of the absence of realizations, of multi-track or multi-trigger dispositions and offer suggestions on how to deal with them.

  5. Road and Street Centerlines, Street-The data set is a line feature consisting of 13948 line segments representing streets. It was created to maintain the location of city and county based streets., Published in 1989, Davis County Government.

    Data.gov (United States)

    NSGIC Local Govt | GIS Inventory — Road and Street Centerlines dataset current as of 1989. Street-The data set is a line feature consisting of 13948 line segments representing streets. It was created...

  6. A small-world network model of facial emotion recognition.

    Science.gov (United States)

    Takehara, Takuma; Ochiai, Fumio; Suzuki, Naoto

    2016-01-01

    Various models have been proposed to increase understanding of the cognitive basis of facial emotions. Despite those efforts, interactions between facial emotions have received minimal attention. If collective behaviours relating to each facial emotion in the comprehensive cognitive system could be assumed, specific facial emotion relationship patterns might emerge. In this study, we demonstrate that the frameworks of complex networks can effectively capture those patterns. We generate 81 facial emotion images (6 prototypes and 75 morphs) and then ask participants to rate degrees of similarity in 3240 facial emotion pairs in a paired comparison task. A facial emotion network constructed on the basis of similarity clearly forms a small-world network, which features an extremely short average network distance and close connectivity. Further, even if two facial emotions have opposing valences, they are connected within only two steps. In addition, we show that intermediary morphs are crucial for maintaining full network integration, whereas prototypes are not at all important. These results suggest the existence of collective behaviours in the cognitive systems of facial emotions and also describe why people can efficiently recognize facial emotions in terms of information transmission and propagation. For comparison, we construct three simulated networks--one based on the categorical model, one based on the dimensional model, and one random network. The results reveal that small-world connectivity in facial emotion networks is apparently different from those networks, suggesting that a small-world network is the most suitable model for capturing the cognitive basis of facial emotions.
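
    The network construction described above can be illustrated briefly: pairwise similarity ratings are thresholded into an undirected graph, and the usual small-world diagnostics (average shortest path length and average clustering) are computed. The random similarity matrix, the threshold, and the use of networkx are assumptions for the example, not the study's data or code.

```python
# Illustrative sketch: build a graph from a similarity matrix and compute small-world metrics.
import numpy as np
import networkx as nx

def similarity_to_graph(sim, threshold=0.5):
    """Connect nodes whose pairwise similarity exceeds a threshold."""
    G = nx.Graph()
    n = sim.shape[0]
    G.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if sim[i, j] > threshold:
                G.add_edge(i, j)
    return G

rng = np.random.default_rng(3)
sim = rng.random((20, 20))
sim = (sim + sim.T) / 2                      # make the similarity matrix symmetric
G = similarity_to_graph(sim, threshold=0.6)
if nx.is_connected(G):
    print("avg path length:", nx.average_shortest_path_length(G))
print("avg clustering:", nx.average_clustering(G))
```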

  7. Facial Sports Injuries

    Science.gov (United States)

    ... the patient has HIV or hepatitis. Facial Fractures Sports injuries can cause potentially serious broken bones or fractures of the face. Common symptoms of facial fractures include: swelling and bruising, ...

  8. Incongruence Between Observers’ and Observed Facial Muscle Activation Reduces Recognition of Emotional Facial Expressions From Video Stimuli

    Directory of Open Access Journals (Sweden)

    Tanja S. H. Wingenbach

    2018-06-01

    Full Text Available According to embodied cognition accounts, viewing others’ facial emotion can elicit the respective emotion representation in observers which entails simulations of sensory, motor, and contextual experiences. In line with that, published research found viewing others’ facial emotion to elicit automatic matched facial muscle activation, which was further found to facilitate emotion recognition. Perhaps making congruent facial muscle activity explicit produces an even greater recognition advantage. If there is conflicting sensory information, i.e., incongruent facial muscle activity, this might impede recognition. The effects of actively manipulating facial muscle activity on facial emotion recognition from videos were investigated across three experimental conditions: (a) explicit imitation of viewed facial emotional expressions (stimulus-congruent condition), (b) pen-holding with the lips (stimulus-incongruent condition), and (c) passive viewing (control condition). It was hypothesised that (1) experimental condition (a) and (b) result in greater facial muscle activity than (c), (2) experimental condition (a) increases emotion recognition accuracy from others’ faces compared to (c), (3) experimental condition (b) lowers recognition accuracy for expressions with a salient facial feature in the lower, but not the upper face area, compared to (c). Participants (42 males, 42 females) underwent a facial emotion recognition experiment (ADFES-BIV) while electromyography (EMG) was recorded from five facial muscle sites. The experimental conditions’ order was counter-balanced. Pen-holding caused stimulus-incongruent facial muscle activity for expressions with facial feature saliency in the lower face region, which reduced recognition of lower face region emotions. Explicit imitation caused stimulus-congruent facial muscle activity without modulating recognition. Methodological implications are discussed.

  9. Incongruence Between Observers' and Observed Facial Muscle Activation Reduces Recognition of Emotional Facial Expressions From Video Stimuli.

    Science.gov (United States)

    Wingenbach, Tanja S H; Brosnan, Mark; Pfaltz, Monique C; Plichta, Michael M; Ashwin, Chris

    2018-01-01

    According to embodied cognition accounts, viewing others' facial emotion can elicit the respective emotion representation in observers which entails simulations of sensory, motor, and contextual experiences. In line with that, published research found viewing others' facial emotion to elicit automatic matched facial muscle activation, which was further found to facilitate emotion recognition. Perhaps making congruent facial muscle activity explicit produces an even greater recognition advantage. If there is conflicting sensory information, i.e., incongruent facial muscle activity, this might impede recognition. The effects of actively manipulating facial muscle activity on facial emotion recognition from videos were investigated across three experimental conditions: (a) explicit imitation of viewed facial emotional expressions (stimulus-congruent condition), (b) pen-holding with the lips (stimulus-incongruent condition), and (c) passive viewing (control condition). It was hypothesised that (1) experimental condition (a) and (b) result in greater facial muscle activity than (c), (2) experimental condition (a) increases emotion recognition accuracy from others' faces compared to (c), (3) experimental condition (b) lowers recognition accuracy for expressions with a salient facial feature in the lower, but not the upper face area, compared to (c). Participants (42 males, 42 females) underwent a facial emotion recognition experiment (ADFES-BIV) while electromyography (EMG) was recorded from five facial muscle sites. The experimental conditions' order was counter-balanced. Pen-holding caused stimulus-incongruent facial muscle activity for expressions with facial feature saliency in the lower face region, which reduced recognition of lower face region emotions. Explicit imitation caused stimulus-congruent facial muscle activity without modulating recognition. Methodological implications are discussed.

  10. Incongruence Between Observers’ and Observed Facial Muscle Activation Reduces Recognition of Emotional Facial Expressions From Video Stimuli

    Science.gov (United States)

    Wingenbach, Tanja S. H.; Brosnan, Mark; Pfaltz, Monique C.; Plichta, Michael M.; Ashwin, Chris

    2018-01-01

    According to embodied cognition accounts, viewing others’ facial emotion can elicit the respective emotion representation in observers which entails simulations of sensory, motor, and contextual experiences. In line with that, published research found viewing others’ facial emotion to elicit automatic matched facial muscle activation, which was further found to facilitate emotion recognition. Perhaps making congruent facial muscle activity explicit produces an even greater recognition advantage. If there is conflicting sensory information, i.e., incongruent facial muscle activity, this might impede recognition. The effects of actively manipulating facial muscle activity on facial emotion recognition from videos were investigated across three experimental conditions: (a) explicit imitation of viewed facial emotional expressions (stimulus-congruent condition), (b) pen-holding with the lips (stimulus-incongruent condition), and (c) passive viewing (control condition). It was hypothesised that (1) experimental condition (a) and (b) result in greater facial muscle activity than (c), (2) experimental condition (a) increases emotion recognition accuracy from others’ faces compared to (c), (3) experimental condition (b) lowers recognition accuracy for expressions with a salient facial feature in the lower, but not the upper face area, compared to (c). Participants (42 males, 42 females) underwent a facial emotion recognition experiment (ADFES-BIV) while electromyography (EMG) was recorded from five facial muscle sites. The experimental conditions’ order was counter-balanced. Pen-holding caused stimulus-incongruent facial muscle activity for expressions with facial feature saliency in the lower face region, which reduced recognition of lower face region emotions. Explicit imitation caused stimulus-congruent facial muscle activity without modulating recognition. Methodological implications are discussed. PMID:29928240

  11. Genetics Home Reference: branchio-oculo-facial syndrome

    Science.gov (United States)

    ... face and neck. Its characteristic features include skin anomalies on the neck, malformations of the eyes and ears, and distinctive facial features. "Branchio-" refers to the branchial arches, which are structures in the developing embryo ...

  12. Factors contributing to the adaptation aftereffects of facial expression.

    Science.gov (United States)

    Butler, Andrea; Oruc, Ipek; Fox, Christopher J; Barton, Jason J S

    2008-01-29

    Previous studies have demonstrated the existence of adaptation aftereffects for facial expressions. Here we investigated which aspects of facial stimuli contribute to these aftereffects. In Experiment 1, we examined the role of local adaptation to image elements such as curvature, shape and orientation, independent of expression, by using hybrid faces constructed from either the same or opposing expressions. While hybrid faces made with consistent expressions generated aftereffects as large as those with normal faces, there were no aftereffects from hybrid faces made from different expressions, despite the fact that these contained the same local image elements. In Experiment 2, we examined the role of facial features independent of the normal face configuration by contrasting adaptation with whole faces to adaptation with scrambled faces. We found that scrambled faces also generated significant aftereffects, indicating that expressive features without a normal facial configuration could generate expression aftereffects. In Experiment 3, we examined the role of facial configuration by using schematic faces made from line elements that in isolation do not carry expression-related information (e.g. curved segments and straight lines) but that convey an expression when arranged in a normal facial configuration. We obtained a significant aftereffect for facial configurations but not scrambled configurations of these line elements. We conclude that facial expression aftereffects are not due to local adaptation to image elements but due to high-level adaptation of neural representations that involve both facial features and facial configuration.

  13. Eagle's syndrome with facial palsy

    Directory of Open Access Journals (Sweden)

    Mohammed Al-Hashim

    2017-01-01

    Full Text Available Eagle's syndrome (ES) is a rare disease in which the styloid process is elongated and compresses adjacent structures. We describe a rare presentation of ES in which the patient presented with facial palsy. Facial palsy as a presentation of ES is very rare. A review of the English literature revealed only one previously reported case. Our case is a 39-year-old male who presented with left facial palsy. He also reported a 9-year history of the classical symptoms of ES. A computed tomography scan with three-dimensional reconstruction confirmed the diagnosis. He was started on conservative management but without significant improvement. Surgical intervention was offered, but the patient refused. It is important for otolaryngologists, dentists, and other specialists who deal with head and neck problems to be able to recognize ES despite its rarity. Although the patient responded to treatment similar to that for Bell's palsy, the clinical features and imaging suggest that ES was most likely the cause of his facial palsy.

  14. Non-odontogenic tumors of the facial bones in children and adolescents: role of multiparametric imaging

    International Nuclear Information System (INIS)

    Becker, Minerva; Stefanelli, Salvatore; Poletti, Pierre Alexandre; Merlini, Laura; Rougemont, Anne-Laure

    2017-01-01

    Tumors of the pediatric facial skeleton represent a major challenge in clinical practice because they can lead to functional impairment, facial deformation, and long-term disfigurement. Their treatment often requires a multidisciplinary approach, and radiologists play a pivotal role in the diagnosis and management of these lesions. Although rare, pediatric tumors arising in the facial bones comprise a wide spectrum of benign and malignant lesions of osteogenic, fibrogenic, hematopoietic, neurogenic, or epithelial origin. The more common lesions include Langerhans cell histiocytosis and osteoma, while rare lesions include inflammatory myofibroblastic and desmoid tumors; juvenile ossifying fibroma; primary intraosseous lymphoma; Ewing sarcoma; and metastases to the facial bones from neuroblastoma, Ewing sarcoma, or retinoblastoma. This article provides a comprehensive approach for the evaluation of children with non-odontogenic tumors of the facial skeleton. Typical findings are discussed with emphasis on the added value of multimodality multiparametric imaging with computed tomography (CT), magnetic resonance imaging (MRI) with diffusion-weighted imaging (DWI), positron emission tomography CT (PET CT), and PET MRI. Key imaging findings and characteristic histologic features of benign and malignant lesions are reviewed and the respective role of each modality for pretherapeutic assessment and post-treatment follow-up. Pitfalls of image interpretation are addressed and how to avoid them. (orig.)

  15. Non-odontogenic tumors of the facial bones in children and adolescents: role of multiparametric imaging

    Energy Technology Data Exchange (ETDEWEB)

    Becker, Minerva; Stefanelli, Salvatore; Poletti, Pierre Alexandre; Merlini, Laura [University of Geneva, Division of Radiology, Department of Imaging and Medical Informatics, Geneva University Hospital, Geneva (Switzerland); Rougemont, Anne-Laure [University of Geneva, Division of Clinical Pathology, Department of Genetic and Laboratory Medicine, Geneva University Hospital, Geneva (Switzerland)

    2017-04-15

    Tumors of the pediatric facial skeleton represent a major challenge in clinical practice because they can lead to functional impairment, facial deformation, and long-term disfigurement. Their treatment often requires a multidisciplinary approach, and radiologists play a pivotal role in the diagnosis and management of these lesions. Although rare, pediatric tumors arising in the facial bones comprise a wide spectrum of benign and malignant lesions of osteogenic, fibrogenic, hematopoietic, neurogenic, or epithelial origin. The more common lesions include Langerhans cell histiocytosis and osteoma, while rare lesions include inflammatory myofibroblastic and desmoid tumors; juvenile ossifying fibroma; primary intraosseous lymphoma; Ewing sarcoma; and metastases to the facial bones from neuroblastoma, Ewing sarcoma, or retinoblastoma. This article provides a comprehensive approach for the evaluation of children with non-odontogenic tumors of the facial skeleton. Typical findings are discussed with emphasis on the added value of multimodality multiparametric imaging with computed tomography (CT), magnetic resonance imaging (MRI) with diffusion-weighted imaging (DWI), positron emission tomography CT (PET CT), and PET MRI. Key imaging findings and characteristic histologic features of benign and malignant lesions are reviewed and the respective role of each modality for pretherapeutic assessment and post-treatment follow-up. Pitfalls of image interpretation are addressed and how to avoid them. (orig.)

  16. Idiopathic ophthalmodynia and idiopathic rhinalgia: two topographic facial pain syndromes.

    Science.gov (United States)

    Pareja, Juan A; Cuadrado, María L; Porta-Etessam, Jesús; Fernández-de-las-Peñas, César; Gili, Pablo; Caminero, Ana B; Cebrián, José L

    2010-09-01

    To describe 2 topographic facial pain conditions with the pain clearly localized in the eye (idiopathic ophthalmodynia) or in the nose (idiopathic rhinalgia), and to propose their distinction from persistent idiopathic facial pain. Persistent idiopathic facial pain, burning mouth syndrome, atypical odontalgia, and facial arthromyalgia are idiopathic facial pain syndromes that have been separated according to topographical criteria. Still, some other facial pain syndromes might have been veiled under the broad term of persistent idiopathic facial pain. Through a 10-year period we have studied all patients referred to our neurological clinic because of facial pain of unknown etiology that might deviate from all well-characterized facial pain syndromes. In a group of patients we have identified 2 consistent clinical pictures with pain precisely located either in the eye (n=11) or in the nose (n=7). Clinical features resembled those of other localized idiopathic facial syndromes, the key differences relying on the topographic distribution of the pain. Both idiopathic ophthalmodynia and idiopathic rhinalgia seem specific pain syndromes with a distinctive location, and may deserve a nosologic status just as other focal pain syndromes of the face. Whether all such focal syndromes are topographic variants of persistent idiopathic facial pain or independent disorders remains a controversial issue.

  17. [Neural representations of facial identity and its associative meaning].

    Science.gov (United States)

    Eifuku, Satoshi

    2012-07-01

    Since the discovery of "face cells" in the early 1980s, single-cell recording experiments in non-human primates have made significant contributions toward the elucidation of neural mechanisms underlying face perception and recognition. In this paper, we review the recent progress in face cell studies, including the recent remarkable findings of the face patches that are scattered around the anterior temporal cortical areas of monkeys. In particular, we focus on the neural representations of facial identity within these areas. The identification of faces requires both discrimination of facial identities and generalization across facial views. It has been indicated by some laboratories that the population of face cells found in the anterior ventral inferior temporal cortex of monkeys represent facial identity in a manner which is facial view-invariant. These findings suggest a relatively distributed representation that operates for facial identification. It has also been shown that certain individual neurons in the medial temporal lobe of humans represent view-invariant facial identity. This finding suggests a relatively sparse representation that may be employed for memory formation. Finally, we summarize our recent study, showing that the population of face cells in the anterior ventral inferior temporal cortex of monkeys that represent view-invariant facial identity, can also represent learned paired associations between an abstract picture and a particular facial identity, extending our understanding of the function of the anterior ventral inferior temporal cortex in the recognition of associative meanings of faces.

  18. Does Facial Amimia Impact the Recognition of Facial Emotions? An EMG Study in Parkinson’s Disease

    Science.gov (United States)

    Argaud, Soizic; Delplanque, Sylvain; Houvenaghel, Jean-François; Auffret, Manon; Duprez, Joan; Vérin, Marc; Grandjean, Didier; Sauleau, Paul

    2016-01-01

    According to embodied simulation theory, understanding other people’s emotions is fostered by facial mimicry. However, studies assessing the effect of facial mimicry on the recognition of emotion are still controversial. In Parkinson’s disease (PD), one of the most distinctive clinical features is facial amimia, a reduction in facial expressiveness, but patients also show emotional disturbances. The present study used the pathological model of PD to examine the role of facial mimicry on emotion recognition by investigating EMG responses in PD patients during a facial emotion recognition task (anger, joy, neutral). Our results evidenced a significant decrease in facial mimicry for joy in PD, essentially linked to the absence of reaction of the zygomaticus major and the orbicularis oculi muscles in response to happy avatars, whereas facial mimicry for expressions of anger was relatively preserved. We also confirmed that PD patients were less accurate in recognizing positive and neutral facial expressions and highlighted a beneficial effect of facial mimicry on the recognition of emotion. We thus provide additional arguments for embodied simulation theory suggesting that facial mimicry is a potential lever for therapeutic actions in PD even if it seems not to be necessarily required in recognizing emotion as such. PMID:27467393

  19. Greater perceptual sensitivity to happy facial expression.

    Science.gov (United States)

    Maher, Stephen; Ekstrom, Tor; Chen, Yue

    2014-01-01

    Perception of subtle facial expressions is essential for social functioning; yet it is unclear if human perceptual sensitivities differ in detecting varying types of facial emotions. Evidence diverges as to whether salient negative versus positive emotions (such as sadness versus happiness) are preferentially processed. Here, we measured perceptual thresholds for the detection of four types of emotion in faces--happiness, fear, anger, and sadness--using psychophysical methods. We also evaluated the association of the perceptual performances with facial morphological changes between neutral and respective emotion types. Human observers were highly sensitive to happiness compared with the other emotional expressions. Further, this heightened perceptual sensitivity to happy expressions can be attributed largely to the emotion-induced morphological change of a particular facial feature (end-lip raise).

  20. Asians' Facial Responsiveness to Basic Tastes by Automated Facial Expression Analysis System.

    Science.gov (United States)

    Zhi, Ruicong; Cao, Lianyu; Cao, Gang

    2017-03-01

    Growing evidence shows that consumer choices in real life are mostly driven by unconscious rather than conscious mechanisms. The unconscious process could be measured by behavioral measurements. This study aims to apply automatic facial expression analysis techniques for representing consumers' emotions, and to explore the relationships between sensory perception and facial responses. Basic taste solutions (sourness, sweetness, bitterness, umami, and saltiness) with 6 levels plus water were used, which could cover most of the tastes found in food and drink. The other contribution of this study is to analyze the characteristics of facial expressions and the correlation between facial expressions and perceived hedonic liking for Asian consumers. Until now, facial expression applications have been reported only for Western consumers, and few related studies have investigated facial responses during food consumption for Asian consumers. Experimental results indicated that facial expressions could identify different stimuli with various concentrations and different hedonic levels. The perceived liking increased at lower concentrations and decreased at higher concentrations, while samples with medium concentrations were perceived as the most pleasant, except for sweetness and bitterness. High correlations were found between perceived intensities of bitterness, umami, and saltiness and facial reactions of disgust and fear. The facial expressions disgust and anger could characterize the emotion "dislike," and happiness could characterize the emotion "like," while neutral could represent "neither like nor dislike." The identified facial expressions agree with the perceived sensory emotions elicited by basic taste solutions. The correlations between hedonic levels and facial expression intensities obtained in this study are in accordance with those reported for Western consumers. © 2017 Institute of Food Technologists®.

  1. Automatic facial animation parameters extraction in MPEG-4 visual communication

    Science.gov (United States)

    Yang, Chenggen; Gong, Wanwei; Yu, Lu

    2002-01-01

    Facial Animation Parameters (FAPs) are defined in MPEG-4 to animate a facial object. The algorithm proposed in this paper to extract these FAPs is applied to very low bit-rate video communication, in which the scene is composed of a head-and-shoulder object with complex background. This paper presents an algorithm to automatically extract all FAPs needed to animate a generic facial model and to estimate the 3D motion of the head from corresponding points. The proposed algorithm extracts the human facial region by color segmentation and intra-frame and inter-frame edge detection. Facial structure and the edge distributions of facial features, such as vertical and horizontal gradient histograms, are used to locate the facial feature region. Parabola and circle deformable templates are employed to fit facial features and extract a subset of the FAPs. A special data structure is proposed to describe the deformable templates in order to reduce the time consumed in computing energy functions. Another subset of FAPs, the 3D rigid head motion vectors, is estimated by the corresponding-points method. A 3D head wire-frame model provides facial semantic information for the selection of proper corresponding points, which helps to increase the accuracy of 3D rigid object motion estimation.
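
    One step described above, locating facial feature regions from vertical and horizontal gradient histograms, can be sketched as a simple projection of gradient magnitudes onto rows and columns. The synthetic image and the peak-picking heuristic are illustrative assumptions, not the paper's algorithm in detail.

```python
# Illustrative sketch: locate candidate feature rows/columns from gradient projections.
import numpy as np

def gradient_projections(gray):
    gray = np.asarray(gray, float)
    gy = np.abs(np.diff(gray, axis=0))        # vertical intensity changes
    gx = np.abs(np.diff(gray, axis=1))        # horizontal intensity changes
    row_profile = gy.sum(axis=1)              # horizontal gradient histogram (per row)
    col_profile = gx.sum(axis=0)              # vertical gradient histogram (per column)
    return row_profile, col_profile

rng = np.random.default_rng(4)
face = rng.integers(0, 40, size=(96, 96)).astype(float)
face[30:34, 20:76] += 180.0                   # synthetic high-contrast band (e.g., the eye line)
rows, cols = gradient_projections(face)
print("strongest feature row:", int(rows.argmax()))
```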

  2. The face is not an empty canvas: how facial expressions interact with facial appearance.

    Science.gov (United States)

    Hess, Ursula; Adams, Reginald B; Kleck, Robert E

    2009-12-12

    Faces are not simply blank canvases upon which facial expressions write their emotional messages. In fact, facial appearance and facial movement are both important social signalling systems in their own right. We here provide multiple lines of evidence for the notion that the social signals derived from facial appearance on the one hand and facial movement on the other interact in a complex manner, sometimes reinforcing and sometimes contradicting one another. Faces provide information on who a person is. Sex, age, ethnicity, personality and other characteristics that can define a person and the social group the person belongs to can all be derived from the face alone. The present article argues that faces interact with the perception of emotion expressions because this information informs a decoder's expectations regarding an expresser's probable emotional reactions. Facial appearance also interacts more directly with the interpretation of facial movement because some of the features that are used to derive personality or sex information are also features that closely resemble certain emotional expressions, thereby enhancing or diluting the perceived strength of particular expressions.

  3. Contralateral botulinum toxin injection to improve facial asymmetry after acute facial paralysis.

    Science.gov (United States)

    Kim, Jin

    2013-02-01

    The application of botulinum toxin to the healthy side of the face in patients with long-standing facial paralysis has been shown to be a minimally invasive technique that improves facial symmetry at rest and during facial motion, but our experience using botulinum toxin therapy for facial sequelae prompted the idea that botulinum toxin might be useful in acute cases of facial paralysis to improve facial asymmetry. In cases in which medical or surgical treatment options are limited because of existing medical problems or advanced age, most patients with acute facial palsy are advised to await spontaneous recovery or are informed that no effective intervention exists. The purpose of this study was to evaluate the effect of botulinum toxin treatment for facial asymmetry in 18 patients after acute facial palsy who could not be optimally treated by medical or surgical management because of severe medical or other problems. From 2009 to 2011, nine patients with Bell's palsy, 5 with herpes zoster oticus and 4 with traumatic facial palsy (10 men and 8 women; age range, 22-82 yr; mean, 50.8 yr) participated in this study. Botulinum toxin A (Botox; Allergan Incorporated, Irvine, CA, USA) was injected using a tuberculin syringe with a 27-gauge needle. The amount injected per site varied from 2.5 to 3 U, and the total dose used per patient was 32 to 68 U (mean, 47.5 +/- 8.4 U). After administration of a single dose of botulinum toxin A on the nonparalyzed side of 18 patients with acute facial paralysis, marked relief of facial asymmetry was observed in 8 patients within 1 month of injection. Decreased facial asymmetry and strengthened facial function on the paralyzed side led to an increased HB and SB grade within 6 months after injection. Use of botulinum toxin in acute facial palsy cases is of great value. Such therapy decreases the relative hyperkinesis contralateral to the paralysis, leading to greater symmetric function. Especially in patients with medical

  4. Judgment of Nasolabial Esthetics in Cleft Lip and Palate Is Not Influenced by Overall Facial Attractiveness.

    Science.gov (United States)

    Kocher, Katharina; Kowalski, Piotr; Kolokitha, Olga-Elpis; Katsaros, Christos; Fudalej, Piotr S

    2016-05-01

    To determine whether judgment of nasolabial esthetics in cleft lip and palate (CLP) is influenced by overall facial attractiveness. Experimental study. University of Bern, Switzerland. Seventy-two fused images (36 of boys, 36 of girls) were constructed. Each image comprised (1) the nasolabial region of a treated child with complete unilateral CLP (UCLP) and (2) the external facial features, i.e., the face with masked nasolabial region, of a noncleft child. Photographs of the nasolabial region of six boys and six girls with UCLP representing a wide range of esthetic outcomes, i.e., from very good to very poor appearance, were randomly chosen from a sample of 60 consecutively treated patients in whom nasolabial esthetics had been rated in a previous study. Photographs of external facial features of six boys and six girls without UCLP with various esthetics were randomly selected from patients' files. Eight lay raters evaluated the fused images using a 100-mm visual analogue scale. Method reliability was assessed by reevaluation of fused images after >1 month. A regression model was used to analyze which elements of facial esthetics influenced the perception of nasolabial appearance. Method reliability was good. A regression analysis demonstrated that only the appearance of the nasolabial area affected the esthetic scores of fused images (coefficient = -11.44; P esthetic evaluation can be performed on images of full faces.

  5. Remembering facial configurations.

    Science.gov (United States)

    Bruce, V; Doyle, T; Dench, N; Burton, M

    1991-02-01

    Eight experiments are reported showing that subjects can remember rather subtle aspects of the configuration of facial features to which they have earlier been exposed. Subjects saw several slightly different configurations (formed by altering the relative placement of internal features of the face) of each of ten different faces, and they were asked to rate the apparent age and masculinity-femininity of each. Afterwards, subjects were asked to select from pairs of faces the configuration which was identical to one previously rated. Subjects responded strongly to the central or "prototypical" configuration of each studied face where this was included as one member of each test pair, whether or not it had been studied (Experiments 1, 2 and 4). Subjects were also quite accurate at recognizing one of the previously encountered extremes of the series of configurations that had been rated (Experiment 3), but when unseen prototypes were paired with seen exemplars subjects' performance was at chance (Experiment 5). Prototype learning of face patterns was shown to be stronger than that for house patterns, though both classes of patterns were affected equally by inversion (Experiment 6). The final two experiments demonstrated that preferences for the prototype could be affected by instructions at study and by whether different exemplars of the same face were shown consecutively or distributed through the study series. The discussion examines the implications of these results for theories of the representation of faces and for instance-based models of memory.

  6. Facial Transplantation Surgery Introduction

    OpenAIRE

    Eun, Seok-Chan

    2015-01-01

    Severely disfiguring facial injuries can have a devastating impact on the patient's quality of life. During the past decade, vascularized facial allotransplantation has progressed from an experimental possibility to a clinical reality in the fields of disease, trauma, and congenital malformations. This technique may now be considered a viable option for repairing complex craniofacial defects for which the results of autologous reconstruction remain suboptimal. Vascularized facial allotranspla...

  7. Regression-based Multi-View Facial Expression Recognition

    NARCIS (Netherlands)

    Rudovic, Ognjen; Patras, Ioannis; Pantic, Maja

    2010-01-01

    We present a regression-based scheme for multi-view facial expression recognition based on 2D geometric features. We address the problem by mapping facial points (e.g. mouth corners) from non-frontal to frontal view where further recognition of the expressions can be performed using a

  8. Binary pattern analysis for 3D facial action unit detection

    NARCIS (Netherlands)

    Sandbach, Georgia; Zafeiriou, Stefanos; Pantic, Maja

    2012-01-01

    In this paper we propose new binary pattern features for use in the problem of 3D facial action unit (AU) detection. Two representations of 3D facial geometries are employed, the depth map and the Azimuthal Projection Distance Image (APDI). To these the traditional Local Binary Pattern is applied,
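
    As a rough sketch of the binary-pattern idea this record builds on, the code below computes the traditional 8-neighbor Local Binary Pattern over a depth map and histograms the codes. The depth values are synthetic, and the extension to the APDI representation and AU-specific features mentioned in the abstract is not shown.

```python
# Minimal 8-neighbour Local Binary Pattern over a (synthetic) depth map.
import numpy as np

def lbp(img):
    img = np.asarray(img, float)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # Offsets of the 8 neighbours, ordered so each contributes one bit of the code.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    center = img[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= ((neighbour >= center).astype(np.uint8) << bit)
    return out

depth = np.random.default_rng(5).random((32, 32))
codes = lbp(depth)
hist, _ = np.histogram(codes, bins=256, range=(0, 256))   # LBP histogram as a feature vector
print(hist.sum())   # equals the number of coded pixels: 30 * 30
```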

  9. [Facial tics and spasms].

    Science.gov (United States)

    Potgieser, Adriaan R E; van Dijk, J Marc C; Elting, Jan Willem J; de Koning-Tijssen, Marina A J

    2014-01-01

    Facial tics and spasms are socially incapacitating, but effective treatment is often available. The clinical picture is sufficient for distinguishing between the different diseases that cause this affliction. We describe three cases of patients with facial tics or spasms: one case of tics, which are familiar to many physicians; one case of blepharospasms; and one case of hemifacial spasms. We discuss the differential diagnosis and the treatment possibilities for facial tics and spasms. Early diagnosis and treatment are important because of the associated social incapacitation. Botulinum toxin should be considered as a treatment option for facial tics, and a curative neurosurgical intervention should be considered for hemifacial spasms.

  10. Very late relapse in diffuse large B-cell lymphoma represents clonally related disease and is marked by germinal center cell features

    NARCIS (Netherlands)

    de Jong, Daphne; Glas, Annuska M.; Boerrigter, Lucie; Hermus, Marie-Christine; Dalesio, Otilia; Willemse, Els; Nederlof, Petra M.; Kersten, Marie José

    2003-01-01

    Patients with diffuse large B-cell lymphoma (DLBCL) rarely show relapse after 4 years of complete remission (CR). In this study, we addressed the following questions: (1) Does late-relapsing DLBCL represent clonally related disease or a second malignancy; and (2) is there a characteristic biologic

  11. Irregular echogenic foci representing coagulation necrosis: a useful but perhaps under-recognized EUS echo feature of malignant lymph node invasion.

    Science.gov (United States)

    Bhutani, Manoop S; Saftoiu, Adrian; Chaya, Charles; Gupta, Parantap; Markowitz, Avi B; Willis, Maurice; Kessel, Ivan; Sharma, Gulshan; Zwischenberger, Joseph B

    2009-06-01

    Coagulation necrosis has been described in malignant lymph nodes. Our aim was to determine if coagulation necrosis in mediastinal lymph nodes imaged by EUS could be used as a useful echo feature for predicting malignant invasion. The study included patients with known or suspected lung cancer who had undergone mediastinal lymph node staging by EUS at a tertiary care university hospital. An expert endosonographer, blinded to the final diagnosis, reviewed the archived digital EUS images of lymph nodes before they were sampled by FNA. Lymph nodes positive for malignancy by FNA were included. The benign group included lymph node images with either negative EUS-FNA or lymph nodes imaged by EUS but not subjected to EUS-FNA, with surgical correlation of their benign nature. 24 patients were included. 8 patients were found to have coagulation necrosis. 7/8 patients had a positive result for malignancy by EUS-FNA. One patient determined to have coagulation necrosis had a non-malignant diagnosis, indicating a false positive result. 16 patients had no coagulation necrosis. In 6 patients with no coagulation necrosis, the final diagnosis was malignant, and in the remaining 10 cases, the final diagnosis was benign. For coagulation necrosis as an echo feature for malignant invasion, sensitivity was 54%, specificity was 91%, positive predictive value was 88%, negative predictive value was 63% and accuracy was 71%. Coagulation necrosis is a useful echo feature for mediastinal lymph node staging by EUS.
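
    The diagnostic-performance figures quoted above follow directly from the 2x2 counts reported in the abstract (7 true positives, 1 false positive, 6 false negatives, 10 true negatives); the short sketch below reproduces that arithmetic.

```python
# Reproducing the reported diagnostic metrics from the 2x2 counts in the abstract.
tp, fp, fn, tn = 7, 1, 6, 10   # necrosis+/malignant, necrosis+/benign, necrosis-/malignant, necrosis-/benign

sensitivity = tp / (tp + fn)                 # 7/13  ~ 54%
specificity = tn / (tn + fp)                 # 10/11 ~ 91%
ppv = tp / (tp + fp)                         # 7/8   ~ 88%
npv = tn / (tn + fn)                         # 10/16 ~ 63%
accuracy = (tp + tn) / (tp + fp + fn + tn)   # 17/24 ~ 71%

for name, value in [("sensitivity", sensitivity), ("specificity", specificity),
                    ("PPV", ppv), ("NPV", npv), ("accuracy", accuracy)]:
    print(f"{name}: {value:.1%}")
```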

  12. Quality of life assessment in facial palsy: validation of the Dutch Facial Clinimetric Evaluation Scale.

    Science.gov (United States)

    Kleiss, Ingrid J; Beurskens, Carien H G; Stalmeier, Peep F M; Ingels, Koen J A O; Marres, Henri A M

    2015-08-01

    This study aimed at validating an existing health-related quality of life questionnaire for patients with facial palsy for implementation in the Dutch language and culture. The Facial Clinimetric Evaluation (FaCE) Scale was translated into the Dutch language using a forward-backward translation method. A pilot test with the translated questionnaire was performed in 10 patients with facial palsy and 10 normal subjects. Finally, cross-cultural adaptation was accomplished at our outpatient clinic for facial palsy. Analyses for internal consistency, test-retest reliability, construct validity and responsiveness were performed. Ninety-three patients completed the Dutch Facial Clinimetric Evaluation Scale, the Dutch Facial Disability Index, and the Dutch Short Form (36) Health Survey. Cronbach's α, representing internal consistency, was 0.800. Test-retest reliability was shown by an intraclass correlation coefficient of 0.737. Correlations with the House-Brackmann score, Sunnybrook score, Facial Disability Index physical function, and social/well-being function were -0.292, 0.570, 0.713, and 0.575, respectively. The SF-36 domains correlate best with the FaCE social function domain, with the strongest correlation between the two social function domains (r = 0.576). The FaCE score increased statistically significantly in 35 patients receiving botulinum toxin type A (P = 0.042, Student t-test). The domains 'facial comfort' and 'social function' improved statistically significantly as well (P = 0.022 and P = 0.046, respectively, Student t-test). The Dutch Facial Clinimetric Evaluation Scale shows good psychometric values and can be implemented in the management of Dutch-speaking patients with facial palsy in the Netherlands. Translation of the instrument into other languages may lead to widespread use, making evaluation and comparison possible among different providers.
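
    The internal-consistency statistic reported above, Cronbach's alpha, has a simple closed form: alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A minimal sketch with made-up item scores is shown below; it does not use the study's data.

```python
# Minimal Cronbach's alpha sketch (scores below are made up, not the study's data).
import numpy as np

def cronbach_alpha(items):
    """items: array of shape (n_respondents, n_items)."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item across respondents
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

scores = np.array([[4, 5, 4, 3], [2, 2, 3, 2], [5, 5, 4, 5], [3, 4, 3, 3], [1, 2, 2, 1]])
print(round(float(cronbach_alpha(scores)), 3))
```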

  13. Automated facial acne assessment from smartphone images

    Science.gov (United States)

    Amini, Mohammad; Vasefi, Fartash; Valdebran, Manuel; Huang, Kevin; Zhang, Haomiao; Kemp, William; MacKinnon, Nicholas

    2018-02-01

    A smartphone mobile medical application is presented that provides analysis of the health of the skin on the face using a smartphone image and cloud-based image processing techniques. The mobile application uses the camera to capture a front face image of a subject, after which the captured image is spatially calibrated based on fiducial points such as the position of the iris of the eye. A facial recognition algorithm is used to identify features of the human face image, to normalize the image, and to define facial regions of interest (ROI) for acne assessment. We identify acne lesions and classify them into two categories: those that are papules and those that are pustules. Automated facial acne assessment was validated by performing tests on images of 60 digital human models and 10 real human face images. The application was able to identify 92% of acne lesions within five facial ROIs. The classification accuracy for separating papules from pustules was 98%. Combined with in-app documentation of treatment, lifestyle factors, and automated facial acne assessment, the app can be used in both cosmetic and clinical dermatology. It allows users to quantitatively self-measure acne severity and treatment efficacy on an ongoing basis to help them manage their chronic facial acne.

  14. Active AU Based Patch Weighting for Facial Expression Recognition

    Directory of Open Access Journals (Sweden)

    Weicheng Xie

    2017-01-01

    Full Text Available Facial expression has many applications in human-computer interaction. Although feature extraction and selection have been well studied, the specificity of each expression variation is not fully explored in state-of-the-art works. In this work, the problem of multiclass expression recognition is converted into triplet-wise expression recognition. For each expression triplet, a new feature optimization model based on action unit (AU weighting and patch weight optimization is proposed to represent the specificity of the expression triplet. The sparse representation-based approach is then proposed to detect the active AUs of the testing sample for better generalization. The algorithm achieved competitive accuracies of 89.67% and 94.09% for the Jaffe and Cohn–Kanade (CK+ databases, respectively. Better cross-database performance has also been observed.
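
    The multiclass-to-triplet conversion can be sketched as training one classifier per three-expression subset and letting the triplet models vote; the linear SVM and the random features below are placeholders for the paper's AU-weighted patch model:

    ```python
    import numpy as np
    from itertools import combinations
    from collections import Counter
    from sklearn.svm import SVC

    def train_triplet_models(X, y, n_classes):
        """One classifier per expression triplet (stand-in for the AU-weighted model)."""
        models = {}
        for triplet in combinations(range(n_classes), 3):
            idx = np.isin(y, triplet)                     # keep samples of this triplet only
            models[triplet] = SVC(kernel="linear").fit(X[idx], y[idx])
        return models

    def predict_by_vote(models, x):
        """Majority vote over all triplet-wise classifiers."""
        votes = Counter(int(m.predict(x.reshape(1, -1))[0]) for m in models.values())
        return votes.most_common(1)[0][0]

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 50))        # placeholder patch features
    y = rng.integers(0, 6, size=300)      # 6 basic expressions
    models = train_triplet_models(X, y, n_classes=6)
    print(len(models), predict_by_vote(models, X[0]))   # C(6,3) = 20 triplet models
    ```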

  15. Facial Expression Generation from Speaker's Emotional States in Daily Conversation

    Science.gov (United States)

    Mori, Hiroki; Ohshima, Koh

    A framework for generating facial expressions from emotional states in daily conversation is described. It provides a mapping between emotional states and facial expressions, where the former is represented by vectors with psychologically-defined abstract dimensions, and the latter is coded by the Facial Action Coding System. In order to obtain the mapping, parallel data with rated emotional states and facial expressions were collected for utterances of a female speaker, and a neural network was trained with the data. The effectiveness of the proposed method is verified by a subjective evaluation test. As a result, the Mean Opinion Score with respect to the suitability of the generated facial expressions was 3.86 for the speaker, which was close to that of hand-made facial expressions.
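
    A minimal sketch of the kind of learned mapping described, from a low-dimensional emotional-state vector to FACS action-unit intensities, using a small feed-forward network; the dimensions (valence, arousal), the chosen AUs, and the training data are illustrative assumptions, not the authors' corpus:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    # Hypothetical training data: emotional states as 2-D (valence, arousal)
    # vectors, targets as intensities of a few FACS action units.
    rng = np.random.default_rng(0)
    emotions = rng.uniform(-1, 1, size=(200, 2))      # (valence, arousal)
    # Toy ground truth: smiling (AU12) follows valence, brow lowering (AU4)
    # follows negative valence, inner brow raise (AU1) follows arousal.
    aus = np.column_stack([
        np.clip(emotions[:, 1], 0, 1),    # AU1
        np.clip(-emotions[:, 0], 0, 1),   # AU4
        np.clip(emotions[:, 0], 0, 1),    # AU12
    ])

    model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    model.fit(emotions, aus)

    # Predict AU intensities for a pleasant, calm state.
    print(model.predict([[0.8, -0.2]]).round(2))
    ```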

  16. Facial talon cusps.

    LENUS (Irish Health Repository)

    McNamara, T

    1997-12-01

    This is a report of two patients with isolated facial talon cusps. One occurred on a permanent mandibular central incisor; the other on a permanent maxillary canine. The locations of these talon cusps suggest that the definition of a talon cusp should include teeth in addition to the incisor group and be extended to include the facial aspect of teeth.

  17. Cranio-facial clefts in pre-hispanic America.

    Science.gov (United States)

    Marius-Nunez, A L; Wasiak, D T

    2015-10-01

    Among the representations of congenital malformations in Moche ceramic art, cranio-facial clefts have been portrayed in pottery found in Moche burials. These pottery vessels were used as domestic items during lifetime and as funerary offerings upon death. The aim of this study was to examine archeological evidence for representations of cranio-facial cleft malformations in Moche vessels. Pottery depicting malformations of the midface in Moche collections in Lima, Peru, was studied. The malformations portrayed on pottery were analyzed using the Tessier classification. Photographs were authorized by the Larco Museo. Three vessels were observed to have median cranio-facial dysraphia in association with a midline cleft of the lower lip and a cleft of the mandible. ML001489 portrays a median cranio-facial dysraphia with an orbital cleft and a midline cleft of the lower lip extending to the mandible. ML001514 represents a median facial dysraphia in association with an orbital facial cleft and a vertical orbital dystopia. ML001491 illustrates a median facial cleft with a soft tissue cleft. Three cases of midline, orbital and lateral facial clefts have been portrayed in Moche full-figure portrait vessels. They represent the earliest registries of congenital cranio-facial malformations in ancient Peru. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  18. A facial marker in facial wasting rehabilitation.

    Science.gov (United States)

    Rauso, Raffaele; Tartaro, Gianpaolo; Freda, Nicola; Rusciani, Antonio; Curinga, Giuseppe

    2012-02-01

    Facial lipoatrophy is one of the most distressing manifestations for HIV patients. It can be stigmatizing, severely affecting quality of life and self-esteem, and it may result in reduced antiretroviral adherence. Several filling techniques have been proposed for facial wasting restoration, with different outcomes. The aim of this study is to present a triangular area that is useful to fill in facial wasting rehabilitation. Twenty-eight HIV patients rehabilitated for facial wasting were enrolled in this study. Sixteen were rehabilitated with a non-resorbable filler and twelve with structural fat graft harvested from lipohypertrophied areas. A photographic pre-operative and post-operative evaluation was performed by the patients and by two plastic surgeons who were "blinded." The filled area, in both the patients rehabilitated with structural fat grafts and those treated with non-resorbable filler, was a triangular area of depression identified between the nasolabial fold, the malar arch, and the line that connects these two anatomical landmarks. The cosmetic result was evaluated three months after the last filling procedure in the non-resorbable filler group and three months post-surgery in the structural fat graft group. The mean patient satisfaction score was 8.7 as assessed with a visual analogue scale. The mean score for blinded evaluators was 7.6. In this study the authors describe a triangular area of the face, between the nasolabial fold, the malar arch, and the line that connects these two anatomical landmarks, where a good aesthetic facial restoration in HIV patients with facial wasting may be achieved regardless of which filling technique is used.

  19. Advances in facial reanimation.

    Science.gov (United States)

    Tate, James R; Tollefson, Travis T

    2006-08-01

    Facial paralysis often has a significant emotional impact on patients. Along with the myriad of new surgical techniques in managing facial paralysis comes the challenge of selecting the most effective procedure for the patient. This review delineates common surgical techniques and reviews state-of-the-art techniques. The options for dynamic reanimation of the paralyzed face must be examined in the context of several patient factors, including age, overall health, and patient desires. The best functional results are obtained with direct facial nerve anastomosis and interpositional nerve grafts. In long-standing facial paralysis, temporalis muscle transfer gives a dependable and quick result. Microvascular free tissue transfer is a reliable technique with reanimation potential whose results continue to improve as microsurgical expertise increases. Postoperative results can be improved with ancillary soft tissue procedures, as well as botulinum toxin. The paper provides an overview of recent advances in facial reanimation, including preoperative assessment, surgical reconstruction options, and postoperative management.

  20. Sad Facial Expressions Increase Choice Blindness

    Directory of Open Access Journals (Sweden)

    Yajie Wang

    2018-01-01

    Full Text Available Previous studies have discovered a fascinating phenomenon known as choice blindness—individuals fail to detect mismatches between the face they choose and the face replaced by the experimenter. Although previous studies have reported a couple of factors that can modulate the magnitude of choice blindness, the potential effect of facial expression on choice blindness has not yet been explored. Using faces with sad and neutral expressions (Experiment 1) and faces with happy and neutral expressions (Experiment 2) in the classic choice blindness paradigm, the present study investigated the effects of facial expressions on choice blindness. The results showed that the detection rate was significantly lower on sad faces than neutral faces, whereas no significant difference was observed between happy faces and neutral faces. The exploratory analysis of verbal reports found that participants who reported less facial features for sad (as compared to neutral) expressions also tended to show a lower detection rate of sad (as compared to neutral) faces. These findings indicated that sad facial expressions increased choice blindness, which might have resulted from inhibition of further processing of the detailed facial features by the less attractive sad expressions (as compared to neutral expressions).

  1. Sad Facial Expressions Increase Choice Blindness.

    Science.gov (United States)

    Wang, Yajie; Zhao, Song; Zhang, Zhijie; Feng, Wenfeng

    2017-01-01

    Previous studies have discovered a fascinating phenomenon known as choice blindness-individuals fail to detect mismatches between the face they choose and the face replaced by the experimenter. Although previous studies have reported a couple of factors that can modulate the magnitude of choice blindness, the potential effect of facial expression on choice blindness has not yet been explored. Using faces with sad and neutral expressions (Experiment 1) and faces with happy and neutral expressions (Experiment 2) in the classic choice blindness paradigm, the present study investigated the effects of facial expressions on choice blindness. The results showed that the detection rate was significantly lower on sad faces than neutral faces, whereas no significant difference was observed between happy faces and neutral faces. The exploratory analysis of verbal reports found that participants who reported less facial features for sad (as compared to neutral) expressions also tended to show a lower detection rate of sad (as compared to neutral) faces. These findings indicated that sad facial expressions increased choice blindness, which might have resulted from inhibition of further processing of the detailed facial features by the less attractive sad expressions (as compared to neutral expressions).

  2. Aetiological Profile of Facial Nerve ...

    African Journals Online (AJOL)

    Background: Facial nerve abnormalities represent a broad spectrum of lesions which are commonly seen by the otolaryngologist. The aim of this paper is to highlight the aetiologic profile of facial nerve palsy. Methods: A retrospective study of patients with facial nerve palsy seen in the Ear, Nose and Throat clinic for 5 years.

  3. [Facial nerve neurinomas].

    Science.gov (United States)

    Sokołowski, Jacek; Bartoszewicz, Robert; Morawski, Krzysztof; Jamróz, Barbara; Niemczyk, Kazimierz

    2013-01-01

    Evaluation of the diagnostics, surgical technique and treatment results of facial nerve neurinomas, and their comparison with the literature, was the main purpose of this study. Seven cases of patients (2005-2011) with facial nerve schwannomas were included in a retrospective analysis in the Department of Otolaryngology, Medical University of Warsaw. All patients were assessed with history of the disease, physical examination, hearing tests, computed tomography and/or magnetic resonance imaging, and electronystagmography. Cases were followed for potential complications and recurrences. Neurinoma of the facial nerve occurred in the vertical segment (n=2), the facial nerve geniculum (n=1) and the internal auditory canal (n=4). The symptoms observed in patients were analyzed: facial nerve paresis (n=3), hearing loss (n=2), dizziness (n=1). Magnetic resonance imaging and computed tomography allowed confirmation of the presence of the tumor and assessment of its staging. Schwannoma of the facial nerve was surgically removed using the middle fossa approach (n=5) and by antromastoidectomy (n=2). Anatomical continuity of the facial nerve was achieved in 3 cases. In the twelve months after surgery, facial nerve paresis was rated at level II-III° HB. There was no recurrence of the tumor on radiological observation. Facial nerve neurinoma is a rare tumor. Current surgical techniques allow, in most cases, radical removal of the lesion and reconstruction of the function of nerve VII. The rate of recurrence is low. A tumor of the facial nerve should be considered in the differential diagnosis of nerve VII paresis. Copyright © 2013 Polish Otorhinolaryngology - Head and Neck Surgery Society. Published by Elsevier Urban & Partner Sp. z.o.o. All rights reserved.

  4. Facial skin follicular hyperkeratosis of patients with basal cell carcinoma

    Directory of Open Access Journals (Sweden)

    M. V. Zhuchkov

    2016-01-01

    Full Text Available This article provides a clinical observation of a paraneoplastic syndrome in a patient with basal cell carcinoma of the skin. The authors present the clinical features of paraneoplastic retentional follicular hyperkeratosis of the facial area, described here for the first time.

  5. Toward a universal, automated facial measurement tool in facial reanimation.

    Science.gov (United States)

    Hadlock, Tessa A; Urban, Luke S

    2012-01-01

    To describe a highly quantitative facial function-measuring tool that yields accurate, objective measures of facial position in significantly less time than existing methods. Facial Assessment by Computer Evaluation (FACE) software was designed for facial analysis. Outputs report the static facial landmark positions and dynamic facial movements relevant in facial reanimation. Fifty individuals underwent facial movement analysis using Photoshop-based measurements and the new software; comparisons of agreement and efficiency were made. Comparisons were made between individuals with normal facial animation and patients with paralysis to gauge sensitivity to abnormal movements. Facial measurements were matched using FACE software and Photoshop-based measures at rest and during expressions. The automated assessments required significantly less time than Photoshop-based assessments. FACE measurements easily revealed differences between individuals with normal facial animation and patients with facial paralysis. FACE software produces accurate measurements of facial landmarks and facial movements and is sensitive to paralysis. Given its efficiency, it serves as a useful tool in the clinical setting for zonal facial movement analysis in comprehensive facial nerve rehabilitation programs.

  6. Sound-induced facial synkinesis following facial nerve paralysis

    NARCIS (Netherlands)

    Ma, Ming-San; van der Hoeven, Johannes H.; Nicolai, Jean-Philippe A.; Meek, Marcel F.

    Facial synkinesis (or synkinesia) (FS) occurs frequently after paresis or paralysis of the facial nerve and is in most cases due to aberrant regeneration of (branches of) the facial nerve. Patients suffer from inappropriate and involuntary synchronous facial muscle contractions. Here we describe two

  7. Facial Emotions Recognition using Gabor Transform and Facial Animation Parameters with Neural Networks

    Science.gov (United States)

    Harit, Aditya; Joshi, J. C., Col; Gupta, K. K.

    2018-03-01

    The paper proposes an automatic facial emotion recognition algorithm which comprises two main components: feature extraction and expression recognition. The algorithm uses a Gabor filter bank on fiducial points to find the facial expression features. The resulting magnitudes of the Gabor transforms, along with 14 chosen FAPs (Facial Animation Parameters), compose the feature space. There are two stages: the training phase and the recognition phase. First, for the 6 emotions considered, the system classifies all training expressions into 6 classes (one for each emotion) during the training stage. In the recognition phase, it recognizes the emotion by applying the Gabor bank to a face image, locating the fiducial points, and feeding the resulting features to the trained neural architecture.
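
    A minimal sketch of the Gabor-magnitude part of such a feature extractor using OpenCV; the filter-bank parameters and the fiducial point below are illustrative assumptions, not the paper's settings:

    ```python
    import cv2
    import numpy as np

    def gabor_magnitudes_at_point(gray, point, ksize=31,
                                  wavelengths=(4.0, 6.0, 8.0), orientations=8):
        """|Gabor| responses at one fiducial point for a small filter bank."""
        x, y = point
        img = gray.astype(np.float32)
        feats = []
        for lambd in wavelengths:
            for k in range(orientations):
                theta = np.pi * k / orientations
                # Even (cosine) and odd (sine) kernels -> complex magnitude.
                k_even = cv2.getGaborKernel((ksize, ksize), lambd / 2.0, theta,
                                            lambd, 0.5, 0)
                k_odd = cv2.getGaborKernel((ksize, ksize), lambd / 2.0, theta,
                                           lambd, 0.5, np.pi / 2)
                re = cv2.filter2D(img, cv2.CV_32F, k_even)[y, x]
                im = cv2.filter2D(img, cv2.CV_32F, k_odd)[y, x]
                feats.append(float(np.hypot(re, im)))
        return np.array(feats)

    # Hypothetical fiducial point on a synthetic grayscale image.
    img = (np.random.rand(128, 128) * 255).astype(np.uint8)
    print(gabor_magnitudes_at_point(img, point=(64, 64)).shape)  # (24,)
    ```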

  8. Facial Scar Revision: Understanding Facial Scar Treatment

    Science.gov (United States)

    ... keep the head elevated when lying down, to use cold compresses to reduce swelling, and to avoid any activity that places undue stress on the area of the incision. Depending on the surgery performed and the site of the scar, the facial plastic surgeon will explain the types of activities to ...

  9. Facial expression recognition based on improved deep belief networks

    Science.gov (United States)

    Wu, Yao; Qiu, Weigen

    2017-08-01

    In order to improve the robustness of facial expression recognition, a method of facial expression recognition based on Local Binary Pattern (LBP) combined with improved deep belief networks (DBNs) is proposed. This method uses LBP to extract features, and then uses the improved deep belief networks as the detector and classifier of the LBP features. The combination of LBP and improved deep belief networks is thereby realized in facial expression recognition. On the JAFFE (Japanese Female Facial Expression) database, the recognition rate improved significantly.
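
    A minimal sketch of the LBP feature-extraction step using scikit-image; the radius, neighbourhood size and histogram binning are illustrative choices, and the classifier stage is omitted:

    ```python
    import numpy as np
    from skimage.feature import local_binary_pattern

    def lbp_histogram(gray, n_points=8, radius=1):
        """Uniform LBP histogram of a grayscale face image (or face patch)."""
        codes = local_binary_pattern(gray, n_points, radius, method="uniform")
        n_bins = n_points + 2                  # uniform patterns + "other" bin
        hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
        return hist

    face = (np.random.rand(64, 64) * 255).astype(np.uint8)   # placeholder image
    print(lbp_histogram(face))   # 10-bin descriptor fed to the classifier
    ```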

  10. Geographic variation in chin shape challenges the universal facial attractiveness hypothesis.

    Directory of Open Access Journals (Sweden)

    Zaneta M Thayer

    Full Text Available The universal facial attractiveness (UFA) hypothesis proposes that some facial features are universally preferred because they are reliable signals of mate quality. The primary evidence for this hypothesis comes from cross-cultural studies of perceived attractiveness. However, these studies do not directly address patterns of morphological variation at the population level. An unanswered question is therefore: Are universally preferred facial phenotypes geographically invariant, as the UFA hypothesis implies? The purpose of our study is to evaluate this often overlooked aspect of the UFA hypothesis by examining patterns of geographic variation in chin shape. We collected symphyseal outlines from 180 recent human mandibles (90 male, 90 female) representing nine geographic regions. Elliptical Fourier functions analysis was used to quantify chin shape, and principal components analysis was used to compute shape descriptors. In contrast to the expectations of the UFA hypothesis, we found significant geographic differences in male and female chin shape. These findings are consistent with region-specific sexual selection and/or random genetic drift, but not universal sexual selection. We recommend that future studies of facial attractiveness take into consideration patterns of morphological variation within and between diverse human populations.

  11. Facial Structure Predicts Sexual Orientation in Both Men and Women.

    Science.gov (United States)

    Skorska, Malvina N; Geniole, Shawn N; Vrysen, Brandon M; McCormick, Cheryl M; Bogaert, Anthony F

    2015-07-01

    Biological models have typically framed sexual orientation in terms of effects of variation in fetal androgen signaling on sexual differentiation, although other biological models exist. Despite marked sex differences in facial structure, the relationship between sexual orientation and facial structure is understudied. A total of 52 lesbian women, 134 heterosexual women, 77 gay men, and 127 heterosexual men were recruited at a Canadian campus and various Canadian Pride and sexuality events. We found that facial structure differed depending on sexual orientation; substantial variation in sexual orientation was predicted using facial metrics computed by a facial modelling program from photographs of White faces. At the univariate level, lesbian and heterosexual women differed in 17 facial features (out of 63) and four were unique multivariate predictors in logistic regression. Gay and heterosexual men differed in 11 facial features at the univariate level, of which three were unique multivariate predictors. Some, but not all, of the facial metrics differed between the sexes. Lesbian women had noses that were more turned up (also more turned up in heterosexual men), mouths that were more puckered, smaller foreheads, and marginally more masculine face shapes (also in heterosexual men) than heterosexual women. Gay men had more convex cheeks, shorter noses (also in heterosexual women), and foreheads that were more tilted back relative to heterosexual men. Principal components analysis and discriminant functions analysis generally corroborated these results. The mechanisms underlying variation in craniofacial structure--both related and unrelated to sexual differentiation--may thus be important in understanding the development of sexual orientation.
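
    The multivariate step described here is essentially a logistic regression of group membership on facial metrics; a minimal sketch with scikit-learn, where the 63-metric feature matrix and the labels are synthetic placeholders rather than the study data:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n, n_metrics = 200, 63                 # participants x facial metrics
    X = rng.normal(size=(n, n_metrics))    # placeholder facial metrics
    y = rng.integers(0, 2, size=n)         # placeholder group labels

    clf = LogisticRegression(max_iter=1000)
    print(cross_val_score(clf, X, y, cv=5).mean())   # chance-level on random data

    # With real data, the coefficients with the largest |weight| point to the
    # metrics that act as unique multivariate predictors.
    clf.fit(X, y)
    top = np.argsort(np.abs(clf.coef_[0]))[::-1][:4]
    print("strongest predictors (indices):", top)
    ```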

  12. Are symptom features of depression during pregnancy, the postpartum period and outside the peripartum period distinct? Results from a nationally representative sample using item response theory (IRT).

    Science.gov (United States)

    Hoertel, Nicolas; López, Saioa; Peyre, Hugo; Wall, Melanie M; González-Pinto, Ana; Limosin, Frédéric; Blanco, Carlos

    2015-02-01

    Whether there are systematic differences in depression symptom expression during pregnancy, the postpartum period and outside these periods (i.e., outside the peripartum period) remains debated. The aim of this study was to use methods based on item response theory (IRT) to examine, after equating for depression severity, differences in the likelihood of reporting DSM-IV symptoms of major depressive episode (MDE) in women of childbearing age (i.e., aged 18-50) during pregnancy, the postpartum period and outside the peripartum period. We conducted these analyses using a large, nationally representative sample of women of childbearing age from the United States (n = 11,256) who participated in the second wave of the National Epidemiologic Survey on Alcohol and Related Conditions (NESARC). The overall 12-month prevalence of all depressive criteria (except for worthlessness/guilt) was significantly lower in pregnant women than in women of childbearing age outside the peripartum period, whereas the prevalence of all symptoms (except for "psychomotor symptoms") was not significantly different between the postpartum and the nonperipartum group. There were no clinically significant differences in the endorsement rates of symptoms of MDE by pregnancy status when equating for levels of depression severity. This study suggests that the clinical presentation of depressive symptoms in women of childbearing age does not differ during pregnancy, the postpartum period and outside the peripartum period. These findings do not provide psychometric support for the inclusion of the peripartum onset specifier for major depressive disorder in the DSM-5. © 2014 Wiley Periodicals, Inc.
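
    For context on the IRT machinery, a worked sketch of the two-parameter logistic (2PL) item response model, a common choice in this kind of analysis: the probability of endorsing a symptom is modelled as a function of latent severity, and group differences in item parameters after equating for severity would indicate differential symptom expression. The item names and parameter values below are illustrative assumptions:

    ```python
    import numpy as np

    def irt_2pl(theta, a, b):
        """P(endorse item | latent severity theta) under a 2PL model."""
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    theta = np.linspace(-3, 3, 7)            # latent depression severity grid
    # Hypothetical item parameters: discrimination a, difficulty (severity) b.
    items = {"depressed mood": (1.8, -0.5),
             "worthlessness/guilt": (1.2, 1.0)}

    for name, (a, b) in items.items():
        print(name, np.round(irt_2pl(theta, a, b), 2))
    # Differential item functioning by pregnancy status would show up as
    # group-specific a or b after equating for theta.
    ```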

  13. Pediatric facial injuries: It's management

    OpenAIRE

    Singh, Geeta; Mohammad, Shadab; Pal, U. S.; Hariram,; Malkunje, Laxman R.; Singh, Nimisha

    2011-01-01

    Background: Facial injuries in children always present a challenge with respect to their diagnosis and management. Since these children are of a growing age, every care should be taken so that the overall growth pattern of the facial skeleton in these children is not later jeopardized. Purpose: To assess the most feasible method for the management of facial injuries in children without hampering facial growth. Materials and Methods: Sixty child patients with facial trauma were selected rando...

  14. Body size and allometric variation in facial shape in children.

    Science.gov (United States)

    Larson, Jacinda R; Manyama, Mange F; Cole, Joanne B; Gonzalez, Paula N; Percival, Christopher J; Liberton, Denise K; Ferrara, Tracey M; Riccardi, Sheri L; Kimwaga, Emmanuel A; Mathayo, Joshua; Spitzmacher, Jared A; Rolian, Campbell; Jamniczky, Heather A; Weinberg, Seth M; Roseman, Charles C; Klein, Ophir; Lukowiak, Ken; Spritz, Richard A; Hallgrimsson, Benedikt

    2018-02-01

    Morphological integration, or the tendency for covariation, is commonly seen in complex traits such as the human face. The effects of growth on shape, or allometry, represent a ubiquitous but poorly understood axis of integration. We address the question of the extent to which age and measures of size converge on a single pattern of allometry for human facial shape. Our study is based on two large cross-sectional cohorts of children, one from Tanzania and the other from the United States (N = 7,173). We employ 3D facial imaging and geometric morphometrics to relate facial shape to age and anthropometric measures. The two populations differ significantly in facial shape, but the magnitude of this difference is small relative to the variation within each group. Allometric variation for facial shape is similar in both populations, representing a small but significant proportion of total variation in facial shape. Different measures of size are associated with overlapping but statistically distinct aspects of shape variation. Only half of the size-related variation in facial shape can be explained by the first principal component of four size measures and age, while the remainder associates distinctly with individual measures. Allometric variation in the human face is complex and should not be regarded as a singular effect. This finding has important implications for how size is treated in studies of human facial shape and for the developmental basis for allometric variation more generally. © 2017 Wiley Periodicals, Inc.
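
    A minimal sketch of the kind of allometry analysis described: several size measures plus age are reduced to a first principal component, and shape variables are regressed on that common axis; all arrays below are synthetic placeholders, not the study cohorts:

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n = 500
    size_measures = rng.normal(size=(n, 4))    # e.g. height, weight, head size, centroid size
    age = rng.uniform(3, 18, size=(n, 1))
    shape = rng.normal(size=(n, 20))           # placeholder shape coordinates (post-Procrustes)

    # First PC of the size measures plus age, as a common "size" axis.
    size_pc1 = PCA(n_components=1).fit_transform(np.hstack([size_measures, age]))

    # Allometric variation: the part of shape explained by that common axis.
    reg = LinearRegression().fit(size_pc1, shape)
    r2 = reg.score(size_pc1, shape)
    print(f"shape variance explained by the common size axis: {r2:.3f}")
    ```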

  15. Traumatic facial nerve neuroma with facial palsy presenting in infancy.

    Science.gov (United States)

    Clark, James H; Burger, Peter C; Boahene, Derek Kofi; Niparko, John K

    2010-07-01

    To describe the management of traumatic neuroma of the facial nerve in a child and to review the literature. The patient was a 16-month-old male subject; the interventions were radiological imaging and surgery, and the main outcome measure was facial nerve function. The patient presented at 16 months with a right facial palsy and was found to have a right facial nerve traumatic neuroma. A transmastoid, middle fossa resection of the right facial nerve lesion was undertaken with a successful facial nerve-to-hypoglossal nerve anastomosis. The facial palsy improved postoperatively. A traumatic neuroma should be considered in an infant who presents with facial palsy, even in the absence of an obvious history of trauma. The treatment of such a lesion is complex in any age group but especially in young children. Symptoms, age, lesion size, growth rate, and facial nerve function determine the appropriate management.

  16. Facial colliculus syndrome

    Directory of Open Access Journals (Sweden)

    Rupinderjeet Kaur

    2016-01-01

    Full Text Available A male patient presented with horizontal diplopia and conjugate gaze palsy. Magnetic resonance imaging (MRI) revealed an acute infarct in the right facial colliculus, which is an anatomical elevation on the dorsal aspect of the pons. This elevation is due to the 6th cranial nerve nucleus and the motor fibres of the facial nerve, which loop dorsal to this nucleus. Anatomical correlation of the clinical symptoms is also depicted in this report.

  17. Stability of Facial Affective Expressions in Schizophrenia

    Directory of Open Access Journals (Sweden)

    H. Fatouros-Bergman

    2012-01-01

    Full Text Available Thirty-two videorecorded interviews were conducted by two interviewers with eight patients diagnosed with schizophrenia. Each patient was interviewed four times: three weekly interviews by the first interviewer and one additional interview by the second interviewer. 64 selected sequences where the patients were speaking about psychotic experiences were scored for facial affective behaviour with the Emotion Facial Action Coding System (EMFACS). In accordance with previous research, the results show that patients diagnosed with schizophrenia express negative facial affectivity. Facial affective behaviour seems not to be dependent on temporality, since within-subjects ANOVA revealed no substantial changes in the amount of affect displayed across the weekly interview occasions. Whereas previous findings identified contempt as the most frequent affect in patients, in the present material disgust was as common, but depended on the interviewer. The results suggest that facial affectivity in these patients is primarily dominated by the negative emotions of disgust and, to a lesser extent, contempt, and imply that this is a fairly stable feature.

  18. Facial infiltrative lipomatosis

    International Nuclear Information System (INIS)

    Haloi, A.K.; Ditchfield, M.; Pennington, A.; Philips, R.

    2006-01-01

    Although there are multiple case reports and small series concerning facial infiltrative lipomatosis, there is no composite radiological description of the condition. Radiological evaluation of facial infiltrative lipomatosis using plain film, sonography, CT and MRI. We radiologically evaluated four patients with facial infiltrative lipomatosis. Initial plain radiographs of the face were acquired in all patients. Three children had an initial sonographic examination to evaluate the condition, followed by MRI. One child had a CT and then MRI. One child had abnormalities on plain radiographs. Sonographically, the lesions were seen as ill-defined heterogeneously hypoechoic areas with indistinct margins. On CT images, the lesions did not have a homogeneous fat density but showed some relatively more dense areas in deeper parts of the lesions. MRI provided better delineation of the exact extent of the process and characterization of facial infiltrative lipomatosis. Facial infiltrative lipomatosis should be considered as a differential diagnosis of vascular or lymphatic malformation when a child presents with unilateral facial swelling. MRI is the most useful single imaging modality to evaluate the condition, as it provides the best delineation of the exact extent of the process. (orig.)

  19. A facial expression image database and norm for Asian population: a preliminary report

    Science.gov (United States)

    Chen, Chien-Chung; Cho, Shu-ling; Horszowska, Katarzyna; Chen, Mei-Yen; Wu, Chia-Ching; Chen, Hsueh-Chih; Yeh, Yi-Yu; Cheng, Chao-Min

    2009-01-01

    We collected 6604 images of 30 models in eight types of facial expression: happiness, anger, sadness, disgust, fear, surprise, contempt and neutral. Among them, the 406 most representative images from 12 models were rated by more than 200 human raters for perceived emotion category and intensity. Such a large number of emotion categories, models and raters is sufficient for most serious expression recognition research both in psychology and in computer science. All the models and raters are of Asian background. Hence, this database can also be used when cultural background is a concern. In addition, 43 landmarks in each of the 291 rated frontal-view images were identified and recorded. This information should facilitate feature-based research on facial expression. Overall, the diversity in images and richness in information should make our database and norm useful for a wide range of research.

  20. Evaluation of a Topical Anti-inflammatory/Antifungal Combination Cream in Mild-to-moderate Facial Seborrheic Dermatitis: An Intra-subject Controlled Trial Examining Treated vs. Untreated Skin Utilizing Clinical Features and Erythema-directed Digital Photography.

    Science.gov (United States)

    Dall'Oglio, Federica; Tedeschi, Aurora; Guardabasso, Vincenzo; Micali, Giuseppe

    2015-09-01

    To evaluate whether nonprescription topical agents may provide positive outcomes in the management of mild-to-moderate facial seborrheic dermatitis by reducing inflammation and scale production, through clinical evaluation and erythema-directed digital photography. Open-label, prospective, non-blinded, intra-patient, controlled clinical trial (target area). Twenty adult subjects affected by mild-to-moderate facial seborrheic dermatitis were enrolled and instructed to apply the study cream two times daily, initially on a selected target area only for seven days. If the subject developed visible improvement, they were advised to extend the application to all affected facial areas for 21 additional days. Efficacy was evaluated by measuring the grade of erythema (by clinical examination and by erythema-directed digital photography), desquamation (by clinical examination), and pruritus (by subject-completed visual analog scale). Additionally, at the end of the protocol, a Physician Global Assessment was carried out. Eighteen subjects completed the study, whereas two subjects were lost to follow-up for nonadherence and personal reasons, respectively. Day 7 data from target areas showed a significant reduction in erythema. At the end of the study, a significant improvement was recorded for erythema, desquamation, and pruritus compared to baseline. Physician Global Assessment showed improvement in 89 percent of patients, with a complete response in 56 percent of cases. These preliminary results indicate that the study cream may be a viable nonprescription therapeutic option for patients affected by facial seborrheic dermatitis, able to produce early and significant improvement. This study also emphasizes the advantages of using an erythema-directed digital photography system to assist in simpler, more accurate erythema severity grading and therapeutic monitoring in patients affected by seborrheic dermatitis.

  1. Facial expression recognition based on improved local ternary pattern and stacked auto-encoder

    Science.gov (United States)

    Wu, Yao; Qiu, Weigen

    2017-08-01

    In order to enhance the robustness of facial expression recognition, we propose a method of facial expression recognition based on improved Local Ternary Pattern (LTP) combined with a Stacked Auto-Encoder (SAE). This method uses the improved LTP to extract features, and then uses the improved deep belief network as the detector and classifier of the LTP features. The combination of LTP and the improved deep belief network is thereby realized in facial expression recognition. The recognition rate on the CK+ database improved significantly.
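
    A minimal sketch of the basic (non-improved) local ternary pattern operator, split into its conventional upper and lower binary codes; the threshold and the 3x3 neighbourhood are illustrative choices:

    ```python
    import numpy as np

    def ltp_codes(gray, t=5):
        """Basic 3x3 Local Ternary Pattern split into upper/lower binary codes."""
        g = gray.astype(np.int32)
        h, w = g.shape
        upper = np.zeros((h - 2, w - 2), dtype=np.int32)
        lower = np.zeros((h - 2, w - 2), dtype=np.int32)
        # 8 neighbours in clockwise order around the centre pixel.
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                   (1, 1), (1, 0), (1, -1), (0, -1)]
        centre = g[1:-1, 1:-1]
        for bit, (dy, dx) in enumerate(offsets):
            nb = g[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
            upper |= (nb >= centre + t).astype(np.int32) << bit   # ternary +1
            lower |= (nb <= centre - t).astype(np.int32) << bit   # ternary -1
        return upper, lower

    face = (np.random.rand(32, 32) * 255).astype(np.uint8)   # placeholder image
    up, lo = ltp_codes(face)
    feature = np.concatenate([np.bincount(up.ravel(), minlength=256),
                              np.bincount(lo.ravel(), minlength=256)])
    print(feature.shape)   # (512,) histogram descriptor passed to the classifier
    ```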

  2. Representing Development

    DEFF Research Database (Denmark)

    Representing Development presents the different social representations that have formed the idea of development in Western thinking over the past three centuries. Offering an acute perspective on the current state of developmental science and providing constructive insights into future pathways, ...

  3. Ethnic differences in the structural properties of facial skin.

    Science.gov (United States)

    Sugiyama-Nakagiri, Yoriko; Sugata, Keiichi; Hachiya, Akira; Osanai, Osamu; Ohuchi, Atsushi; Kitahara, Takashi

    2009-02-01

    Conspicuous facial pores are one type of serious aesthetic defect for many women. However, the mechanisms that underlie the conspicuousness of facial pores remain unclear. We previously characterized the epidermal architecture around facial pores that correlated with the appearance of those pores. A survey was carried out to elucidate ethnicity-dependent differences in facial pore size and in epidermal architecture. The subjects included 80 healthy women (aged 30-39: Caucasians, Asians, Hispanics and African Americans) living in Dallas in the USA. First, surface replicas were collected to compare pore sizes of cheek skin. Second, horizontal cross-sectioned images from cheek skin were obtained non-invasively from the same subjects using in vivo confocal laser scanning microscopy (CLSM) and the severity of impairment of epidermal architecture around facial pores was determined. Finally, to compare racial differences in the architecture of the interfollicular epidermis of facial cheek skin, horizontal cross-sectioned images were obtained and the numbers of dermal papillae were counted. Asians had the smallest pore areas compared with other racial groups. Regarding the epidermal architecture around facial pores, all ethnic groups observed in this study had similar morphological features and African Americans showed substantially more severe impairment of architecture around facial pores than any other racial group. In addition, significant differences were observed in the architecture of the interfollicular epidermis between ethnic groups. These results suggest that facial pore size, the epidermal architecture around facial pores and the architecture of the interfollicular epidermis differ between ethnic groups. This might affect the appearance of facial pores.

  4. Discrimination of gender using facial image with expression change

    Science.gov (United States)

    Kuniyada, Jun; Fukuda, Takahiro; Terada, Kenji

    2005-12-01

    By carrying out marketing research, the managers of large department stores or small convenience stores obtain information such as the ratio of male to female visitors and their age group, and improve their management plans. However, this work is carried out manually, and it becomes a large burden for small stores. In this paper, the authors propose a method of discriminating between men and women by extracting differences in facial expression change from color facial images. There are currently many methods for the automatic recognition of individuals using moving or still facial images in the field of image processing. However, it is very difficult to discriminate gender under the influence of hairstyle, clothes, etc. Therefore, we propose a method which is not affected by individual characteristics such as the size and position of facial parts, by paying attention to changes of expression. In this method, it is necessary to obtain two facial images: one with an expression and one expressionless. First, the region of the facial surface and the regions of facial parts such as the eyes, nose, and mouth are extracted from the facial image using hue and saturation information in the HSV color system and emphasized edge information. Next, features are extracted by calculating the rate of change of each facial part generated by the expression change. In the last step, the values of those features are compared between the input data and the database, and the gender is discriminated. In this paper, experiments were carried out for laughing and smiling expressions, and good results were obtained for discriminating gender.
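
    A minimal sketch of the first steps described (hue/saturation-based extraction of the facial skin region in the HSV color system, and a toy rate-of-change feature between an expressionless and an expressive frame); the threshold values and file names are hypothetical:

    ```python
    import cv2
    import numpy as np

    def skin_mask(bgr):
        """Rough facial-skin mask from hue/saturation thresholds (illustrative)."""
        hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
        # Hypothetical skin range: OpenCV uses H in [0,179], S and V in [0,255].
        return cv2.inRange(hsv, (0, 30, 60), (25, 180, 255))

    def part_change_ratio(mask_neutral, mask_expressive):
        """Relative change in region area between the two frames."""
        a0 = float(np.count_nonzero(mask_neutral))
        a1 = float(np.count_nonzero(mask_expressive))
        return (a1 - a0) / max(a0, 1.0)

    neutral = cv2.imread("face_neutral.jpg")      # hypothetical file names
    smiling = cv2.imread("face_smiling.jpg")
    if neutral is not None and smiling is not None:
        r = part_change_ratio(skin_mask(neutral), skin_mask(smiling))
        print(f"change ratio: {r:+.3f}")          # compared against a gender database
    ```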

  5. Tensor Rank Preserving Discriminant Analysis for Facial Recognition.

    Science.gov (United States)

    Tao, Dapeng; Guo, Yanan; Li, Yaotang; Gao, Xinbo

    2017-10-12

    Facial recognition, one of the basic topics in computer vision and pattern recognition, has received substantial attention in recent years. However, in traditional facial recognition algorithms, the facial images are reshaped into a long vector, thereby losing part of the original spatial constraints of each pixel. In this paper, a new tensor-based feature extraction algorithm termed tensor rank preserving discriminant analysis (TRPDA) for facial image recognition is proposed; the proposed method involves two stages: in the first stage, the low-dimensional tensor subspace of the original input tensor samples is obtained; in the second stage, discriminative locality alignment is utilized to obtain the ultimate vector feature representation for subsequent facial recognition. On the one hand, the proposed TRPDA algorithm fully utilizes the natural structure of the input samples, and it applies an optimization criterion that can directly handle the tensor spectral analysis problem, thereby decreasing the computation cost compared with traditional tensor-based feature selection algorithms. On the other hand, the proposed TRPDA algorithm extracts features by finding a tensor subspace that preserves most of the rank order information of the intra-class input samples. Experiments on three facial databases are performed to determine the effectiveness of the proposed TRPDA algorithm.

  6. Facial dynamics and emotional expressions in facial aging treatments.

    Science.gov (United States)

    Michaud, Thierry; Gassia, Véronique; Belhaouari, Lakhdar

    2015-03-01

    Facial expressions convey emotions that form the foundation of interpersonal relationships, and many of these emotions promote and regulate our social linkages. Hence, the facial aging symptomatological analysis and the treatment plan must of necessity include knowledge of the facial dynamics and the emotional expressions of the face. This approach aims to more closely meet patients' expectations of natural-looking results, by correcting age-related negative expressions while observing the emotional language of the face. This article will successively describe patients' expectations, the role of facial expressions in relational dynamics, the relationship between facial structures and facial expressions, and the way facial aging mimics negative expressions. Eventually, therapeutic implications for facial aging treatment will be addressed. © 2015 Wiley Periodicals, Inc.

  7. Sound-induced facial synkinesis following facial nerve paralysis.

    Science.gov (United States)

    Ma, Ming-San; van der Hoeven, Johannes H; Nicolai, Jean-Philippe A; Meek, Marcel F

    2009-08-01

    Facial synkinesis (or synkinesia) (FS) occurs frequently after paresis or paralysis of the facial nerve and is in most cases due to aberrant regeneration of (branches of) the facial nerve. Patients suffer from inappropriate and involuntary synchronous facial muscle contractions. Here we describe two cases of sound-induced facial synkinesis (SFS) after facial nerve injury. As far as we know, this phenomenon has not been described in the English literature before. Patient A presented with right hemifacial palsy after lesion of the facial nerve due to skull base fracture. He reported involuntary muscle activity at the right corner of the mouth, specifically on hearing ringing keys. Patient B suffered from left hemifacial palsy following otitis media and developed involuntary muscle contraction in the facial musculature specifically on hearing clapping hands or a trumpet sound. Both patients were evaluated by means of video, audio and EMG analysis. Possible mechanisms in the pathophysiology of SFS are postulated and therapeutic options are discussed.

  8. Facial transplantation surgery introduction.

    Science.gov (United States)

    Eun, Seok-Chan

    2015-06-01

    Severely disfiguring facial injuries can have a devastating impact on the patient's quality of life. During the past decade, vascularized facial allotransplantation has progressed from an experimental possibility to a clinical reality in the fields of disease, trauma, and congenital malformations. This technique may now be considered a viable option for repairing complex craniofacial defects for which the results of autologous reconstruction remain suboptimal. Vascularized facial allotransplantation permits optimal anatomical reconstruction and provides desired functional, esthetic, and psychosocial benefits that are far superior to those achieved with conventional methods. Along with dramatic improvements in their functional statuses, patients regain the ability to make facial expressions such as smiling and to perform various functions such as smelling, eating, drinking, and speaking. The ideas in the 1997 movie "Face/Off" have now been realized in the clinical field. The objective of this article is to introduce this new surgical field, provide a basis for examining the status of the field of face transplantation, and stimulate and enhance facial transplantation studies in Korea.

  9. Caricaturing facial expressions.

    Science.gov (United States)

    Calder, A J; Rowland, D; Young, A W; Nimmo-Smith, I; Keane, J; Perrett, D I

    2000-08-14

    The physical differences between facial expressions (e.g. fear) and a reference norm (e.g. a neutral expression) were altered to produce photographic-quality caricatures. In Experiment 1, participants rated caricatures of fear, happiness and sadness for their intensity of these three emotions; a second group of participants rated how 'face-like' the caricatures appeared. With increasing levels of exaggeration the caricatures were rated as more emotionally intense, but less 'face-like'. Experiment 2 demonstrated a similar relationship between emotional intensity and level of caricature for six different facial expressions. Experiments 3 and 4 compared intensity ratings of facial expression caricatures prepared relative to a selection of reference norms - a neutral expression, an average expression, or a different facial expression (e.g. anger caricatured relative to fear). Each norm produced a linear relationship between caricature and rated intensity of emotion; this finding is inconsistent with two-dimensional models of the perceptual representation of facial expression. An exemplar-based multidimensional model is proposed as an alternative account.
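
    The caricaturing operation itself amounts to exaggerating the difference between an expression and a reference norm; a minimal sketch on 2-D landmark coordinates (the landmark values are illustrative placeholders):

    ```python
    import numpy as np

    def caricature(expression_pts, norm_pts, level=0.5):
        """Exaggerate an expression's landmarks away from a reference norm.

        level = 0 returns the original expression; positive values move each
        landmark further along the (expression - norm) difference vector.
        """
        expression_pts = np.asarray(expression_pts, dtype=float)
        norm_pts = np.asarray(norm_pts, dtype=float)
        return expression_pts + level * (expression_pts - norm_pts)

    # Toy example: a single mouth-corner landmark, neutral vs. fearful.
    neutral = [[10.0, 20.0]]
    fear = [[10.0, 23.0]]
    print(caricature(fear, neutral, level=0.5))   # [[10.  24.5]] -- a 50% caricature
    ```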

  10. Facial Image Compression Based on Structured Codebooks in Overcomplete Domain

    Directory of Open Access Journals (Sweden)

    Vila-Forcén JE

    2006-01-01

    Full Text Available We advocate a facial image compression technique within the scope of the distributed source coding framework. The novelty of the proposed approach is twofold: image compression is considered from the position of source coding with side information and, contrary to the existing scenarios where the side information is given explicitly, the side information is created based on a deterministic approximation of the local image features. We consider an image in the overcomplete transform domain as a realization of a random source with a structured codebook of symbols where each symbol represents a particular edge shape. Due to the partial availability of the side information at both encoder and decoder, we treat our problem as a modification of the Berger-Flynn-Gray problem and investigate a possible gain over the solutions when side information is either unavailable or available only at the decoder. Finally, the paper presents a practical image compression algorithm for facial images based on our concept that demonstrates superior performance in the very-low-bit-rate regime.

  11. Botulinum toxin treatment for facial palsy: A systematic review.

    Science.gov (United States)

    Cooper, Lilli; Lui, Michael; Nduka, Charles

    2017-06-01

    Facial palsy may be complicated by ipsilateral synkinesis or contralateral hyperkinesis. Botulinum toxin is increasingly used in the management of facial palsy; however, the optimum dose, treatment interval, adjunct therapy and performance as compared with alternative treatments have not been well established. This study aimed to systematically review the evidence for the use of botulinum toxin in facial palsy. The Cochrane central register of controlled trials (CENTRAL), MEDLINE(R) (1946 to September 2015) and Embase Classic + Embase (1947 to September 2015) were searched for randomised studies using botulinum toxin in facial palsy. Forty-seven studies were identified, and three included. Their physical and patient-reported outcomes are described, and observations and cautions are discussed. Facial asymmetry has a strong correlation to subjective domains such as impairment in social interaction and perception of self-image and appearance. Botulinum toxin injections represent a minimally invasive technique that is helpful in restoring facial symmetry at rest and during movement in chronic, and potentially acute, facial palsy. Botulinum toxin in combination with physical therapy may be particularly helpful. Currently, there is a paucity of data; areas for further research are suggested. A strong body of evidence may allow botulinum toxin treatment to be nationally standardised and recommended in the management of facial palsy. Copyright © 2017 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  12. Facial emotion perception impairments in schizophrenia patients with comorbid antisocial personality disorder.

    Science.gov (United States)

    Tang, Dorothy Y Y; Liu, Amy C Y; Lui, Simon S Y; Lam, Bess Y H; Siu, Bonnie W M; Lee, Tatia M C; Cheung, Eric F C

    2016-02-28

    Impairment in facial emotion perception is believed to be associated with aggression. Schizophrenia patients with antisocial features are more impaired in facial emotion perception than their counterparts without these features. However, previous studies did not define the comorbidity of antisocial personality disorder (ASPD) using stringent criteria. We recruited 30 participants with dual diagnoses of ASPD and schizophrenia, 30 participants with schizophrenia and 30 controls. We employed the Facial Emotional Recognition paradigm to measure facial emotion perception, and administered a battery of neurocognitive tests. The Life History of Aggression scale was used. ANOVAs and ANCOVAs were conducted to examine group differences in facial emotion perception, and control for the effect of other neurocognitive dysfunctions on facial emotion perception. Correlational analyses were conducted to examine the association between facial emotion perception and aggression. Patients with dual diagnoses performed worst in facial emotion perception among the three groups. The group differences in facial emotion perception remained significant, even after other neurocognitive impairments were controlled for. Severity of aggression was correlated with impairment in perceiving negative-valenced facial emotions in patients with dual diagnoses. Our findings support the presence of facial emotion perception impairment and its association with aggression in schizophrenia patients with comorbid ASPD. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  13. The influence of different facial components on facial aesthetics.

    NARCIS (Netherlands)

    Faure, J.C.; Rieffe, C.; Maltha, J.C.

    2002-01-01

    Facial aesthetics have an important influence on social behaviour and perception in our society. The purpose of the present study was to evaluate the effect of facial symmetry and inter-ocular distance on the assessment of facial aesthetics, factors that are often suggested as major contributors to

  14. Facial Onset Sensory and Motor Neuronopathy: Further Evidence for a TDP-43 Proteinopathy

    Directory of Open Access Journals (Sweden)

    Besa Ziso

    2015-04-01

    Full Text Available Three patients with the clinical and investigation features of facial onset sensory and motor neuronopathy (FOSMN) syndrome are presented, one of whom came to a post-mortem examination. This showed TDP-43-positive inclusions in the bulbar and spinal motor neurones as well as in the trigeminal nerve nuclei, consistent with a neurodegenerative pathogenesis. These data support the idea that at least some FOSMN cases fall within the spectrum of the TDP-43 proteinopathies, and represent a focal form of this pathology.

  15. Children's understanding of facial expression of emotion: II. Drawing of emotion-faces.

    Science.gov (United States)

    Missaghi-Lakshman, M; Whissell, C

    1991-06-01

    67 children from Grades 2, 4, and 7 drew faces representing the emotional expressions of fear, anger, surprise, disgust, happiness, and sadness. The children themselves and 29 adults later decoded the drawings in an emotion-recognition task. Children were the more accurate decoders, and their accuracy and the accuracy of adults increased significantly for judgments of 7th-grade drawings. The emotions happy and sad were most accurately decoded. There were no significant differences associated with sex. In their drawings, children utilized a symbol system that seems to be based on a highlighting or exaggeration of features of the innately governed facial expression of emotion.

  16. How Do Typically Developing Deaf Children and Deaf Children with Autism Spectrum Disorder Use the Face When Comprehending Emotional Facial Expressions in British Sign Language?

    Science.gov (United States)

    Denmark, Tanya; Atkinson, Joanna; Campbell, Ruth; Swettenham, John

    2014-01-01

    Facial expressions in sign language carry a variety of communicative features. While emotion can modulate a spoken utterance through changes in intonation, duration and intensity, in sign language specific facial expressions presented concurrently with a manual sign perform this function. When deaf adult signers cannot see facial features, their…

  17. Oro-facial-digital syndrome Type 1: A case report

    Directory of Open Access Journals (Sweden)

    Kanika Singh Dhull

    2014-01-01

    Full Text Available Oro-Facial Digital Syndrome (OFDS) is a generic term for a group of apparently distinctive genetic diseases that affect the development of the oral cavity, facial features, and digits. One of these is OFDS type I (OFDS-I), which has rarely been reported in Asian countries. This is the case report of a 13-year-old patient with OFDS type I who reported to the Department of Pedodontics and Preventive Dentistry with the complaint of discolored upper front teeth.

  18. Computer facial animation

    CERN Document Server

    Parke, Frederic I

    2008-01-01

    This comprehensive work provides the fundamentals of computer facial animation and brings into sharper focus techniques that are becoming mainstream in the industry. Over the past decade, since the publication of the first edition, there have been significant developments by academic research groups and in the film and games industries leading to the development of morphable face models, performance driven animation, as well as increasingly detailed lip-synchronization and hair modeling techniques. These topics are described in the context of existing facial animation principles. The second ed

  19. Representing time

    Directory of Open Access Journals (Sweden)

    Luca Poncellini

    2010-06-01

    Full Text Available The analysis of natural phenomena applied to architectural planning and design is facing the most fascinating and elusive of the four dimensions through which man attempts to define life within the universe: time. We all know what time is, said St. Augustine, but nobody knows how to describe it. Within architectural projects and representations, time rarely appears in explicit form. This paper presents the results of a research project conducted by students of NABA and of the Polytechnic of Milan with the purpose of representing time as a key element within architectural projects. Students investigated new approaches and methodologies to represent time using the two-dimensional support of a sheet of paper.

  20. Automatic recognition of emotions from facial expressions

    Science.gov (United States)

    Xue, Henry; Gertner, Izidor

    2014-06-01

    In the human-computer interaction (HCI) process it is desirable to have an artificial intelligent (AI) system that can identify and categorize human emotions from facial expressions. Such systems can be used in security, in entertainment industries, and also to study visual perception, social interactions and disorders (e.g. schizophrenia and autism). In this work we survey and compare the performance of different feature extraction algorithms and classification schemes. We introduce a faster feature extraction method that resizes and applies a set of filters to the data images without sacrificing the accuracy. In addition, we have enhanced SVM to multiple dimensions while retaining the high accuracy rate of SVM. The algorithms were tested using the Japanese Female Facial Expression (JAFFE) Database and the Database of Faces (AT&T Faces).

  1. Computed tomography in facial trauma

    International Nuclear Information System (INIS)

    Zilkha, A.

    1982-01-01

    Computed tomography (CT), plain radiography, and conventional tomography were performed on 30 patients with facial trauma. CT demonstrated bone and soft-tissue involvement. In all cases, CT was superior to tomography in the assessment of facial injury. It is suggested that CT follow plain radiography in the evaluation of facial trauma

  2. Spoofing detection on facial images recognition using LBP and GLCM combination

    Science.gov (United States)

    Sthevanie, F.; Ramadhani, K. N.

    2018-03-01

    The challenge for facial-image-based security systems is how to detect facial image falsification such as spoofing. Spoofing occurs when someone tries to pass as a registered user to obtain illegal access and gain advantage from the protected system. This research implements a facial image spoofing detection method based on image texture analysis. The proposed method for texture analysis combines the Local Binary Pattern (LBP) and Gray Level Co-occurrence Matrix (GLCM) methods. The experimental results show that spoofing detection using the LBP and GLCM combination achieves a higher detection rate than using either the LBP or the GLCM feature alone.
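
    A minimal sketch of the LBP + GLCM combination, assuming an 8-bit grayscale face crop; the neighbourhood size, quantization level, and GLCM properties below are illustrative choices, not the settings used in the paper. The concatenated vector would then feed any binary classifier separating genuine from spoofed faces.

        import numpy as np
        from skimage.feature import local_binary_pattern, graycomatrix, graycoprops

        def lbp_histogram(gray, P=8, R=1):
            # Uniform LBP yields values in [0, P + 1], hence P + 2 histogram bins.
            lbp = local_binary_pattern(gray, P, R, method="uniform")
            hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
            return hist

        def glcm_descriptor(gray, levels=32):
            # Quantize the 8-bit image so the co-occurrence matrix stays small.
            q = (gray // (256 // levels)).astype(np.uint8)
            glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                                levels=levels, symmetric=True, normed=True)
            props = ("contrast", "homogeneity", "energy", "correlation")
            return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

        def spoof_features(gray_face):
            # Concatenate the LBP texture histogram with the GLCM statistics for one face crop.
            return np.concatenate([lbp_histogram(gray_face), glcm_descriptor(gray_face)])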

  3. Soft-tissue facial characteristics of attractive Chinese men compared to normal men.

    Science.gov (United States)

    Wu, Feng; Li, Junfang; He, Hong; Huang, Na; Tang, Youchao; Wang, Yuanqing

    2015-01-01

    To compare the facial characteristics of attractive Chinese men with those of reference men. The three-dimensional coordinates of 50 facial landmarks were collected in 40 healthy reference men and in 40 "attractive" men; soft tissue facial angles, distances, areas, and volumes were then computed and compared using analysis of variance. When compared with reference men, attractive men shared several similar facial characteristics: a relatively large forehead, reduced mandible, and rounded face. They had a more acute soft tissue profile, an increased upper facial width and middle facial depth, a larger mouth, and more voluminous lips than reference men. Attractive men had several facial characteristics suggesting babyness. Nonetheless, each group of men was characterized by a different development of these features. Esthetic reference values can be a useful tool for clinicians, but should always consider the characteristics of individual faces.

  4. Facial Expression Recognition using Multiclass Ensemble Least-Square Support Vector Machine

    Science.gov (United States)

    Lawi, Armin; Sya'Rani Machrizzandi, M.

    2018-03-01

    Facial expression is one of the behavioral characteristics of human beings. Using a biometric system based on facial expression characteristics makes it possible to recognize a person's mood or emotion. The basic components of a facial expression analysis system are face detection, face image extraction, facial classification and facial expression recognition. This paper uses the Principal Component Analysis (PCA) algorithm to extract facial features for the expression parameters happy, sad, neutral, angry, fear, and disgust. A Multiclass Ensemble Least-Squares Support Vector Machine (MELS-SVM) is then used for the classification of facial expressions. The MELS-SVM model, evaluated on our 185 expression images of 10 persons, achieved a high accuracy of 99.998% using the RBF kernel.
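
    The sketch below mirrors the described components with off-the-shelf tools: PCA to extract expression features from flattened face images, followed by a multi-class RBF-kernel SVM standing in for the paper's MELS-SVM ensemble; the number of components and the SVM hyperparameters are assumptions.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.svm import SVC
        from sklearn.pipeline import Pipeline
        from sklearn.model_selection import cross_val_score

        def build_expression_pipeline(n_components=40):
            # PCA compresses pixel space; the SVM separates the six expression classes.
            return Pipeline([
                ("pca", PCA(n_components=n_components, whiten=True)),
                ("svm", SVC(kernel="rbf", C=10.0, gamma="scale")),
            ])

        def evaluate(X, y):
            # X: (n_images, n_pixels) flattened grayscale faces; y: integer expression labels.
            scores = cross_val_score(build_expression_pipeline(), X, y, cv=5)
            return scores.mean()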

  5. Delineation of facial archetypes by 3d averaging.

    Science.gov (United States)

    Shaweesh, Ashraf I; Thomas, C David L; Bankier, Agnes; Clement, John G

    2004-10-01

    The objective of this study was to investigate the feasibility of creating archetypal 3D faces through computerized 3D facial averaging. A Fiore 3D surface scanner and its software were used to acquire the 3D scans of the faces, while 3D Rugle3 and locally developed software generated the holistic facial averages. 3D facial averages were created for two ethnic groups (European and Japanese) and for children with three genetic disorders (Williams syndrome, achondroplasia and Sotos syndrome) as well as for a normal control group. The method consisted of averaging the corresponding depth (z) coordinates of the 3D facial scans. Compared with other face-averaging techniques there was no warping or filling-in of spaces by interpolation; however, the facial average lacked colour information. The results showed that as few as 14 faces were sufficient to create an archetypal facial average. In turn this would make it practical to use face averaging as an identification tool in cases where it would be difficult to recruit a larger number of participants. In generating the average, correcting for size differences among faces was shown to adjust the average outlines of the facial features. It is assumed that 3D facial averaging would help in the identification of the ethnic status of persons whose identity may not be known with certainty. In clinical medicine, it would have great potential for the diagnosis of syndromes with distinctive facial features. The system would also assist in the education of clinicians in the recognition and identification of such syndromes.
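
    The averaging step itself reduces to a per-pixel mean of registered depth maps; the sketch below assumes the scans have already been aligned to a common (x, y) grid, which is the part handled by the scanner software in the study, and the size correction shown is a crude stand-in for the authors' procedure.

        import numpy as np

        def average_face(depth_maps):
            # depth_maps: iterable of 2-D arrays of z values sampled on one common grid.
            stack = np.stack(list(depth_maps), axis=0)
            return np.nanmean(stack, axis=0)  # nanmean tolerates holes in individual scans

        def size_normalized(depth_map, target_depth_range=1.0):
            # Rescale one scan's depth range before averaging (illustrative size correction).
            span = np.nanmax(depth_map) - np.nanmin(depth_map)
            return (depth_map - np.nanmin(depth_map)) * (target_depth_range / span)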

  6. The Emotional Modulation of Facial Mimicry: A Kinematic Study

    Directory of Open Access Journals (Sweden)

    Antonella Tramacere

    2018-01-01

    Full Text Available It is well-established that the observation of emotional facial expression induces facial mimicry responses in the observers. However, how the interaction between emotional and motor components of facial expressions can modulate the motor behavior of the perceiver is still unknown. We have developed a kinematic experiment to evaluate the effect of different oro-facial expressions on perceiver's face movements. Participants were asked to perform two movements, i.e., lip stretching and lip protrusion, in response to the observation of four meaningful (i.e., smile, angry-mouth, kiss, and spit) and two meaningless mouth gestures. All the stimuli were characterized by different motor patterns (mouth aperture or mouth closure). Response Times and kinematics parameters of the movements (amplitude, duration, and mean velocity) were recorded and analyzed. Results evidenced a dissociated effect on reaction times and movement kinematics. We found shorter reaction time when a mouth movement was preceded by the observation of a meaningful and motorically congruent oro-facial gesture, in line with facial mimicry effect. On the contrary, during execution, the perception of smile was associated with the facilitation, in terms of shorter duration and higher velocity of the incongruent movement, i.e., lip protrusion. The same effect resulted in response to kiss and spit that significantly facilitated the execution of lip stretching. We called this phenomenon facial mimicry reversal effect, intended as the overturning of the effect normally observed during facial mimicry. In general, the findings show that both motor features and types of emotional oro-facial gestures (conveying positive or negative valence) affect the kinematics of subsequent mouth movements at different levels: while congruent motor features facilitate a general motor response, motor execution could be speeded by gestures that are motorically incongruent with the observed one. Moreover, valence

  7. The Emotional Modulation of Facial Mimicry: A Kinematic Study.

    Science.gov (United States)

    Tramacere, Antonella; Ferrari, Pier F; Gentilucci, Maurizio; Giuffrida, Valeria; De Marco, Doriana

    2017-01-01

    It is well-established that the observation of emotional facial expression induces facial mimicry responses in the observers. However, how the interaction between emotional and motor components of facial expressions can modulate the motor behavior of the perceiver is still unknown. We have developed a kinematic experiment to evaluate the effect of different oro-facial expressions on perceiver's face movements. Participants were asked to perform two movements, i.e., lip stretching and lip protrusion, in response to the observation of four meaningful (i.e., smile, angry-mouth, kiss, and spit) and two meaningless mouth gestures. All the stimuli were characterized by different motor patterns (mouth aperture or mouth closure). Response Times and kinematics parameters of the movements (amplitude, duration, and mean velocity) were recorded and analyzed. Results evidenced a dissociated effect on reaction times and movement kinematics. We found shorter reaction time when a mouth movement was preceded by the observation of a meaningful and motorically congruent oro-facial gesture, in line with facial mimicry effect. On the contrary, during execution, the perception of smile was associated with the facilitation, in terms of shorter duration and higher velocity of the incongruent movement, i.e., lip protrusion. The same effect resulted in response to kiss and spit that significantly facilitated the execution of lip stretching. We called this phenomenon facial mimicry reversal effect , intended as the overturning of the effect normally observed during facial mimicry. In general, the findings show that both motor features and types of emotional oro-facial gestures (conveying positive or negative valence) affect the kinematics of subsequent mouth movements at different levels: while congruent motor features facilitate a general motor response, motor execution could be speeded by gestures that are motorically incongruent with the observed one. Moreover, valence effect depends on

  8. Performance-driven facial animation: basic research on human judgments of emotional state in facial avatars.

    Science.gov (United States)

    Rizzo, A A; Neumann, U; Enciso, R; Fidaleo, D; Noh, J Y

    2001-08-01

    three-dimensional avatar using a performance-driven facial animation (PDFA) system developed at the University of Southern California Integrated Media Systems Center. PDFA offers a means for creating high-fidelity visual representations of human faces and bodies. This effort explores the feasibility of sensing and reproducing a range of facial expressions with a PDFA system. In order to test concordance of human ratings of emotional expression between video and avatar facial delivery, we first had facial model subjects observe stimuli that were designed to elicit naturalistic facial expressions. The emotional stimulus induction involved presenting text-based, still image, and video clips to subjects that were previously rated to induce facial expressions for the six universals of facial expression (happy, sad, fear, anger, disgust, and surprise), in addition to attentiveness, puzzlement and frustration. Videotapes of these induced facial expressions that best represented prototypic examples of the above emotional states and three-dimensional avatar animations of the same facial expressions were randomly presented to 38 human raters. The raters used open-end, forced choice and seven-point Likert-type scales to rate expression in terms of identification. The forced choice and seven-point ratings provided the most usable data to determine video/animation concordance and these data are presented. To support a clear understanding of this data, a website has been set up that will allow readers to view the video and facial animation clips to illustrate the assets and limitations of these types of facial expression-rendering methods (www.USCAvatars.com/MMVR). This methodological first step in our research program has served to provide valuable human user-centered feedback to support the iterative design and development of facial avatar characteristics for expression of emotional communication.

  9. Paralisia facial bilateral

    Directory of Open Access Journals (Sweden)

    J. Fortes-Rego

    1976-03-01

    Full Text Available A case of facial diplegia that appeared after meningococcal meningitis and herpes simplex infection is presented. After discussing the various conditions in which this phenomenon may present, the author favours a herpetic etiology.

  10. Diplegia facial traumatica

    Directory of Open Access Journals (Sweden)

    J. Fortes-Rego

    1975-12-01

    Full Text Available A case of incomplete bilateral facial palsy, associated with left-sided hearing loss, following craniocerebral trauma with radiologically demonstrated fractures is reported. Some considerations are offered in an attempt to relate these manifestations to fractures of the temporal bone.

  11. Multiracial Facial Golden Ratio and Evaluation of Facial Appearance.

    Science.gov (United States)

    Alam, Mohammad Khursheed; Mohd Noor, Nor Farid; Basri, Rehana; Yew, Tan Fo; Wen, Tay Hui

    2015-01-01

    This study aimed to investigate the association of facial proportion and its relation to the golden ratio with the evaluation of facial appearance among the Malaysian population. This was a cross-sectional study with 286 subjects randomly selected from Universiti Sains Malaysia (USM) Health Campus students (150 females and 136 males; 100 Malaysian Chinese, 100 Malaysian Malay and 86 Malaysian Indian), with a mean age of 21.54 ± 1.56 (age range, 18-25). Facial indices obtained from direct facial measurements were used for the classification of facial shape into short, ideal and long. A validated structured questionnaire was used to assess subjects' evaluation of their own facial appearance. The mean facial indices of Malaysian Indian (MI), Malaysian Chinese (MC) and Malaysian Malay (MM) were 1.59 ± 0.19, 1.57 ± 0.25 and 1.54 ± 0.23 respectively. Only MC showed significant sexual dimorphism in facial index (P = 0.047; P < 0.05), but no significant difference was found between races. Out of the 286 subjects, 49 (17.1%) were of ideal facial shape, 156 (54.5%) short and 81 (28.3%) long. The facial evaluation questionnaire showed that MC had the lowest satisfaction, with a mean score of 2.18 ± 0.97 for overall impression and 2.15 ± 1.04 for facial parts, compared to MM and MI, with mean scores of 1.80 ± 0.97 and 1.64 ± 0.74 respectively for overall impression and 1.75 ± 0.95 and 1.70 ± 0.83 respectively for facial parts. 1) Only 17.1% of Malaysian facial proportions conformed to the golden ratio, with the majority of the population having a short face (54.5%); 2) facial index did not depend significantly on race; 3) significant sexual dimorphism was shown among Malaysian Chinese; 4) all three races are generally satisfied with their own facial appearance; 5) no significant association was found between the golden ratio and facial evaluation score among the Malaysian population.
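
    For illustration, the facial-index check against the golden ratio can be reduced to a single ratio of facial height to width; the tolerance band in the sketch below is an assumption, not the authors' cut-off.

        GOLDEN_RATIO = (1 + 5 ** 0.5) / 2  # ~1.618

        def classify_face_shape(height_mm, width_mm, tolerance=0.05):
            # Facial index = height / width; faces near the golden ratio count as "ideal".
            index = height_mm / width_mm
            if abs(index - GOLDEN_RATIO) <= tolerance:
                return index, "ideal"
            return index, "long" if index > GOLDEN_RATIO else "short"

        print(classify_face_shape(180.0, 115.0))  # -> (1.565..., 'short')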

  12. Facial Asymmetry-Based Age Group Estimation: Role in Recognizing Age-Separated Face Images.

    Science.gov (United States)

    Sajid, Muhammad; Taj, Imtiaz Ahmad; Bajwa, Usama Ijaz; Ratyal, Naeem Iqbal

    2018-04-23

    Face recognition aims to establish the identity of a person based on facial characteristics. On the other hand, age group estimation is the automatic calculation of an individual's age range based on facial features. Recognizing age-separated face images is still a challenging research problem due to complex aging processes involving different types of facial tissues, skin, fat, muscles, and bones. Certain holistic and local facial features are used to recognize age-separated face images. However, most of the existing methods recognize face images without incorporating the knowledge learned from age group estimation. In this paper, we propose an age-assisted face recognition approach to handle aging variations. Inspired by the observation that facial asymmetry is an age-dependent intrinsic facial feature, we first use asymmetric facial dimensions to estimate the age group of a given face image. Deeply learned asymmetric facial features are then extracted for face recognition using a deep convolutional neural network (dCNN). Finally, we integrate the knowledge learned from the age group estimation into the face recognition algorithm using the same dCNN. This integration results in a significant improvement in the overall performance compared to using the face recognition algorithm alone. The experimental results on two large facial aging datasets, the MORPH and FERET sets, show that the proposed age-group-estimation-assisted face recognition approach yields superior performance compared to some existing state-of-the-art methods. © 2018 American Academy of Forensic Sciences.

  13. Perception of facial expressions produced by configural relations

    Directory of Open Access Journals (Sweden)

    V A Barabanschikov

    2010-06-01

    Full Text Available The authors discuss the problem of perception of facial expressions produced by configural features. Experimentally identified configural features were found to influence the perception of emotional expression in a subjectively emotionless face. Classical results by E. Brunswik on the perception of schematic faces are partly confirmed.

  14. Cognitive penetrability and emotion recognition in human facial expressions

    Directory of Open Access Journals (Sweden)

    Francesco Marchi

    2015-06-01

    Full Text Available Do our background beliefs, desires, and mental images influence our perceptual experience of the emotions of others? In this paper, we will address the possibility of cognitive penetration of perceptual experience in the domain of social cognition. In particular, we focus on emotion recognition based on the visual experience of facial expressions. After introducing the current debate on cognitive penetration, we review examples of perceptual adaptation for facial expressions of emotion. This evidence supports the idea that facial expressions are perceptually processed as wholes. That is, the perceptual system integrates lower-level facial features, such as eyebrow orientation, mouth angle etc., into facial compounds. We then present additional experimental evidence showing that in some cases, emotion recognition on the basis of facial expression is sensitive to and modified by the background knowledge of the subject. We argue that such sensitivity is best explained as a difference in the visual experience of the facial expression, not just as a modification of the judgment based on this experience. The difference in experience is characterized as the result of the interference of background knowledge with the perceptual integration process for faces. Thus, according to the best explanation, we have to accept cognitive penetration in some cases of emotion recognition. Finally, we highlight a recent model of social vision in order to propose a mechanism for cognitive penetration used in the face-based recognition of emotion.

  15. Automatic Facial Expression Recognition and Operator Functional State

    Science.gov (United States)

    Blanson, Nina

    2012-01-01

    The prevalence of human error in safety-critical occupations remains a major challenge to mission success despite increasing automation in control processes. Although various methods have been proposed to prevent incidences of human error, none of these have been developed to employ the detection and regulation of Operator Functional State (OFS), or the optimal condition of the operator while performing a task, in work environments due to drawbacks such as obtrusiveness and impracticality. A video-based system with the ability to infer an individual's emotional state from facial feature patterning mitigates some of the problems associated with other methods of detecting OFS, like obtrusiveness and impracticality in integration with the mission environment. This paper explores the utility of facial expression recognition as a technology for inferring OFS by first expounding on the intricacies of OFS and the scientific background behind emotion and its relationship with an individual's state. Then, descriptions of the feedback loop and the emotion protocols proposed for the facial recognition program are explained. A basic version of the facial expression recognition program uses Haar classifiers and OpenCV libraries to automatically locate key facial landmarks during a live video stream. Various methods of creating facial expression recognition software are reviewed to guide future extensions of the program. The paper concludes with an examination of the steps necessary in the research of emotion and recommendations for the creation of an automatic facial expression recognition program for use in real-time, safety-critical missions
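
    Since the abstract names Haar classifiers and OpenCV explicitly, a minimal version of that detection loop could look like the following; the bundled frontal-face and eye cascades are used here as stand-ins for whatever landmark models the actual program loads.

        import cv2

        face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

        cap = cv2.VideoCapture(0)  # default webcam
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            for (x, y, w, h) in face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
                roi = gray[y:y + h, x:x + w]  # search for eyes only inside the face box
                for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi):
                    cv2.rectangle(frame, (x + ex, y + ey), (x + ex + ew, y + ey + eh), (255, 0, 0), 1)
            cv2.imshow("landmarks", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
        cap.release()
        cv2.destroyAllWindows()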

  16. Automatic Facial Expression Recognition and Operator Functional State

    Science.gov (United States)

    Blanson, Nina

    2011-01-01

    The prevalence of human error in safety-critical occupations remains a major challenge to mission success despite increasing automation in control processes. Although various methods have been proposed to prevent incidences of human error, none of these have been developed to employ the detection and regulation of Operator Functional State (OFS), or the optimal condition of the operator while performing a task, in work environments due to drawbacks such as obtrusiveness and impracticality. A video-based system with the ability to infer an individual's emotional state from facial feature patterning mitigates some of the problems associated with other methods of detecting OFS, like obtrusiveness and impracticality in integration with the mission environment. This paper explores the utility of facial expression recognition as a technology for inferring OFS by first expounding on the intricacies of OFS and the scientific background behind emotion and its relationship with an individual's state. Then, descriptions of the feedback loop and the emotion protocols proposed for the facial recognition program are explained. A basic version of the facial expression recognition program uses Haar classifiers and OpenCV libraries to automatically locate key facial landmarks during a live video stream. Various methods of creating facial expression recognition software are reviewed to guide future extensions of the program. The paper concludes with an examination of the steps necessary in the research of emotion and recommendations for the creation of an automatic facial expression recognition program for use in real-time, safety-critical missions.

  17. Adaptive weighted local textural features for illumination, expression, and occlusion invariant face recognition

    Science.gov (United States)

    Cui, Chen; Asari, Vijayan K.

    2014-03-01

    Biometric features such as fingerprints, iris patterns, and face features help to identify people and restrict access to secure areas by performing advanced pattern analysis and matching. Face recognition is one of the most promising biometric methodologies for human identification in a non-cooperative security environment. However, the recognition results obtained by face recognition systems are affected by several variations that may happen to the patterns in an unrestricted environment. As a result, several algorithms have been developed for extracting different facial features for face recognition. Due to the various possible challenges of data captured at different lighting conditions, viewing angles, facial expressions, and partial occlusions in natural environmental conditions, automatic facial recognition still remains a difficult issue that needs to be resolved. In this paper, we propose a novel approach to tackling some of these issues by analyzing the local textural descriptions for facial feature representation. The textural information is extracted by an enhanced local binary pattern (ELBP) description of all the local regions of the face. The relationship of each pixel with respect to its neighborhood is extracted and employed to calculate the new representation. ELBP reconstructs a much better textural feature extraction vector from an original gray level image in different lighting conditions. The dimensionality of the texture image is reduced by principal component analysis performed on each local face region. Each low dimensional vector representing a local region is then weighted based on the significance of the sub-region. The weight of each sub-region is determined by employing the local variance estimate of the respective region, which represents the significance of the region. The final facial textural feature vector is obtained by concatenating the reduced dimensional weight sets of all the modules (sub-regions) of the face image.
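
    A simplified stand-in for the described pipeline: plain uniform LBP histograms per face sub-region (in place of the paper's enhanced LBP), one PCA per region fitted on training data, and a variance-based weight applied before concatenation; the 4x4 grid, component count, and the use of histogram variance as the significance estimate are assumptions.

        import numpy as np
        from skimage.feature import local_binary_pattern
        from sklearn.decomposition import PCA

        def region_lbp(gray, grid=(4, 4), P=8, R=1):
            # Split the face into grid cells and return one LBP histogram per cell.
            lbp = local_binary_pattern(gray, P, R, method="uniform")
            h, w = lbp.shape
            cells = []
            for i in range(grid[0]):
                for j in range(grid[1]):
                    cell = lbp[i * h // grid[0]:(i + 1) * h // grid[0],
                               j * w // grid[1]:(j + 1) * w // grid[1]]
                    hist, _ = np.histogram(cell, bins=P + 2, range=(0, P + 2), density=True)
                    cells.append(hist)
            return cells

        def fit_region_pcas(train_grays, n_components=5):
            # Fit one PCA per sub-region over the whole training set.
            per_region = None
            for g in train_grays:
                cells = region_lbp(g)
                if per_region is None:
                    per_region = [[] for _ in cells]
                for k, hist in enumerate(cells):
                    per_region[k].append(hist)
            return [PCA(n_components=n_components).fit(np.stack(h)) for h in per_region]

        def weighted_face_vector(gray, pcas):
            # Weight each reduced region descriptor by a crude significance proxy (histogram variance).
            parts = []
            for hist, pca in zip(region_lbp(gray), pcas):
                parts.append(np.var(hist) * pca.transform(hist[None, :])[0])
            return np.concatenate(parts)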

  18. Effect of a Facial Muscle Exercise Device on Facial Rejuvenation.

    Science.gov (United States)

    Hwang, Ui-Jae; Kwon, Oh-Yun; Jung, Sung-Hoon; Ahn, Sun-Hee; Gwak, Gyeong-Tae

    2018-01-20

    The efficacy of facial muscle exercises (FMEs) for facial rejuvenation is controversial. In the majority of previous studies, nonquantitative assessment tools were used to assess the benefits of FMEs. This study examined the effectiveness of FMEs using a Pao (MTG, Nagoya, Japan) device to quantify facial rejuvenation. Fifty females were asked to perform FMEs using a Pao device for 30 seconds twice a day for 8 weeks. Facial muscle thickness and cross-sectional area were measured sonographically. Facial surface distance, surface area, and volumes were determined using a laser scanning system before and after FME. Facial muscle thickness, cross-sectional area, midfacial surface distances, jawline surface distance, and lower facial surface area and volume were compared bilaterally before and after FME using a paired Student t test. The cross-sectional areas of the zygomaticus major and digastric muscles increased significantly, and the jawline surface distances (right: P = 0.004, left: P = 0.003) decreased significantly after FME using the Pao device. The lower facial surface areas (right: P = 0.005, left: P = 0.006) and volumes (right: P = 0.001, left: P = 0.002) were also significantly reduced after FME using the Pao device. FME using the Pao device can increase facial muscle thickness and cross-sectional area, thus contributing to facial rejuvenation. © 2018 The American Society for Aesthetic Plastic Surgery, Inc.

  19. Outcome of different facial nerve reconstruction techniques

    OpenAIRE

    Mohamed, Aboshanif; Omi, Eigo; Honda, Kohei; Suzuki, Shinsuke; Ishikawa, Kazuo

    2016-01-01

    Abstract Introduction: There is no technique of facial nerve reconstruction that guarantees facial function recovery up to grade III. Objective: To evaluate the efficacy and safety of different facial nerve reconstruction techniques. Methods: Facial nerve reconstruction was performed in 22 patients (facial nerve interpositional graft in 11 patients and hypoglossal-facial nerve transfer in another 11 patients). All patients had facial function House-Brackmann (HB) grade VI, either caused by...

  20. Deep learning the dynamic appearance and shape of facial action units

    OpenAIRE

    Jaiswal, Shashank; Valstar, Michel F.

    2016-01-01

    Spontaneous facial expression recognition under uncontrolled conditions is a hard task. It depends on multiple factors including shape, appearance and dynamics of the facial features, all of which are adversely affected by environmental noise and low intensity signals typical of such conditions. In this work, we present a novel approach to Facial Action Unit detection using a combination of Convolutional and Bi-directional Long Short-Term Memory Neural Networks (CNN-BLSTM), which jointly lear...
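
    Although only an excerpt of the abstract is indexed here, the named architecture (a per-frame CNN feeding a bidirectional LSTM over the sequence) can be sketched in PyTorch roughly as follows; the layer sizes, the 64x64 single-channel input, and the number of action units are assumptions, and the shape stream described in the paper is omitted.

        import torch
        import torch.nn as nn

        class CNNBLSTM(nn.Module):
            def __init__(self, num_aus=12, hidden=128):
                super().__init__()
                self.cnn = nn.Sequential(  # appearance encoder applied to each frame
                    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.AdaptiveAvgPool2d(4),
                )
                self.rnn = nn.LSTM(64 * 4 * 4, hidden, batch_first=True, bidirectional=True)
                self.head = nn.Linear(2 * hidden, num_aus)  # one logit per AU, per frame

            def forward(self, clips):  # clips: (batch, time, 1, 64, 64)
                b, t = clips.shape[:2]
                feats = self.cnn(clips.flatten(0, 1))       # (b*t, 64, 4, 4)
                feats = feats.flatten(1).view(b, t, -1)     # back to (b, t, features)
                seq, _ = self.rnn(feats)                    # temporal context in both directions
                return self.head(seq)                       # (b, t, num_aus) AU logits

        logits = CNNBLSTM()(torch.randn(2, 16, 1, 64, 64))  # multi-label: pair with BCEWithLogitsLoss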

  1. Walk and Learn: Facial Attribute Representation Learning from Egocentric Video and Contextual Data

    OpenAIRE

    Wang, Jing; Cheng, Yu; Feris, Rogerio Schmidt

    2016-01-01

    The way people look in terms of facial attributes (ethnicity, hair color, facial hair, etc.) and the clothes or accessories they wear (sunglasses, hat, hoodies, etc.) is highly dependent on geo-location and weather condition, respectively. This work explores, for the first time, the use of this contextual information, as people with wearable cameras walk across different neighborhoods of a city, in order to learn a rich feature representation for facial attribute classification, without the c...

  2. Automated facial recognition of manually generated clay facial approximations: Potential application in unidentified persons data repositories.

    Science.gov (United States)

    Parks, Connie L; Monson, Keith L

    2018-01-01

    This research examined how accurately 2D images (i.e., photographs) of 3D clay facial approximations were matched to corresponding photographs of the approximated individuals using an objective automated facial recognition system. Irrespective of search filter (i.e., blind, sex, or ancestry) or rank class (R1, R10, R25, and R50) employed, few operationally informative results were observed. In only a single instance of 48 potential match opportunities was a clay approximation matched to a corresponding life photograph within the top 50 images (R50) of a candidate list, even with relatively small gallery sizes created from the application of search filters (e.g., sex or ancestry search restrictions). Increasing the candidate lists to include the top 100 images (R100) resulted in only two additional instances of correct match. Although other untested variables (e.g., approximation method, 2D photographic process, and practitioner skill level) may have impacted the observed results, this study suggests that 2D images of manually generated clay approximations are not readily matched to life photos by automated facial recognition systems. Further investigation is necessary in order to identify the underlying cause(s), if any, of the poor recognition results observed in this study (e.g., potential inferior facial feature detection and extraction). Additional inquiry exploring prospective remedial measures (e.g., stronger feature differentiation) is also warranted, particularly given the prominent use of clay approximations in unidentified persons casework. Copyright © 2017. Published by Elsevier B.V.

  3. Facial attractiveness, symmetry and cues of good genes.

    Science.gov (United States)

    Scheib, J E; Gangestad, S W; Thornhill, R

    1999-09-22

    Cues of phenotypic condition should be among those used by women in their choice of mates. One marker of better phenotypic condition is thought to be symmetrical bilateral body and facial features. However, it is not clear whether women use symmetry as the primary cue in assessing the phenotypic quality of potential mates or whether symmetry is correlated with other facial markers affecting physical attractiveness. Using photographs of men's faces, for which facial symmetry had been measured, we found a relationship between women's attractiveness ratings of these faces and symmetry, but the subjects could not rate facial symmetry accurately. Moreover, the relationship between facial attractiveness and symmetry was still observed, even when symmetry cues were removed by presenting only the left or right half of faces. These results suggest that attractive features other than symmetry can be used to assess phenotypic condition. We identified one such cue, facial masculinity (cheek-bone prominence and a relatively longer lower face), which was related to both symmetry and full- and half-face attractiveness.

  4. Cues of fatigue: effects of sleep deprivation on facial appearance.

    Science.gov (United States)

    Sundelin, Tina; Lekander, Mats; Kecklund, Göran; Van Someren, Eus J W; Olsson, Andreas; Axelsson, John

    2013-09-01

    To investigate the facial cues by which one recognizes that someone is sleep deprived versus not sleep deprived. Experimental laboratory study. Karolinska Institutet, Stockholm, Sweden. Forty observers (20 women, mean age 25 ± 5 y) rated 20 facial photographs with respect to fatigue, 10 facial cues, and sadness. The stimulus material consisted of 10 individuals (five women) photographed at 14:30 after normal sleep and after 31 h of sleep deprivation following a night with 5 h of sleep. Ratings of fatigue, fatigue-related cues, and sadness in facial photographs. The faces of sleep deprived individuals were perceived as having more hanging eyelids, redder eyes, more swollen eyes, darker circles under the eyes, paler skin, more wrinkles/fine lines, and more droopy corners of the mouth (effects ranging from b = +3 ± 1 to b = +15 ± 1 mm on 100-mm visual analog scales). Some rated cues were not affected by sleep deprivation, nor associated with judgements of fatigue. In addition, sleep-deprived individuals looked sadder than after normal sleep, and sadness was related to looking fatigued. Taken together, sleep deprivation affects features relating to the eyes, mouth, and skin, and these features function as cues of sleep loss to other people. Because these facial regions are important in the communication between humans, facial cues of sleep deprivation and fatigue may carry social consequences for the sleep deprived individual in everyday life.

  5. When Age Matters: Differences in Facial Mimicry and Autonomic Responses to Peers' Emotions in Teenagers and Adults

    Science.gov (United States)

    Ardizzi, Martina; Sestito, Mariateresa; Martini, Francesca; Umiltà, Maria Alessandra; Ravera, Roberto; Gallese, Vittorio

    2014-01-01

    Age-group membership effects on explicit emotional facial expression recognition have been widely demonstrated. In this study we investigated whether age-group membership could also affect implicit physiological responses, such as facial mimicry and autonomic regulation, to the observation of emotional facial expressions. To this aim, facial electromyography (EMG) and respiratory sinus arrhythmia (RSA) were recorded from teenage and adult participants during the observation of facial expressions performed by teenage and adult models. Results highlighted that teenagers exhibited greater facial EMG responses to peers' facial expressions, whereas adults showed higher RSA responses to adult facial expressions. The different physiological modalities through which young people and adults respond to peers' emotional expressions are likely to reflect two different ways of engaging in social interactions with people of their own age. The findings confirmed that age is an important and powerful social feature that modulates interpersonal interactions by influencing low-level physiological responses. PMID:25337916

  6. Human age estimation framework using different facial parts

    Directory of Open Access Journals (Sweden)

    Mohamed Y. El Dib

    2011-03-01

    Full Text Available Human age estimation from facial images has a wide range of real-world applications in human-computer interaction (HCI). In this paper, we use bio-inspired features (BIF) to analyze different facial parts: (a) eye wrinkles, (b) the whole internal face (without the forehead area) and (c) the whole face (with the forehead area), using different feature shape points. The analysis shows that the eye wrinkles, which cover about 30% of the facial area, contain the most important aging features compared to the internal face and the whole face. Furthermore, more extensive experiments are carried out on the FG-NET database, using images from the MORPH database to compensate for the small number of pictures in the older age groups and enhance the results.

  7. Facial Affect Recognition Using Regularized Discriminant Analysis-Based Algorithms

    Directory of Open Access Journals (Sweden)

    Cheng-Yuan Shih

    2010-01-01

    Full Text Available This paper presents a novel and effective method for facial expression recognition including happiness, disgust, fear, anger, sadness, surprise, and the neutral state. The proposed method utilizes a regularized discriminant analysis-based boosting algorithm (RDAB) with effective Gabor features to recognize the facial expressions. An entropy criterion is applied to select effective Gabor features, i.e., a subset of informative and nonredundant Gabor features. The proposed RDAB algorithm uses RDA as a learner in the boosting algorithm. RDA combines strengths of linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA). It solves the small-sample-size and ill-posed problems suffered by QDA and LDA through a regularization technique. Additionally, this study uses the particle swarm optimization (PSO) algorithm to estimate optimal parameters in RDA. Experiment results demonstrate that our approach can accurately and robustly recognize facial expressions.
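
    The regularization mentioned above can be written down compactly: each class covariance is shrunk toward the pooled covariance (a lambda term) and then toward a scaled identity (a gamma term), which keeps the estimate invertible when samples are scarce. Below is a bare-bones NumPy sketch with illustrative default parameters, rather than the PSO-tuned values from the paper.

        import numpy as np

        class RDA:
            def __init__(self, lam=0.5, gamma=0.1):
                self.lam, self.gamma = lam, gamma

            def fit(self, X, y):
                self.classes_ = np.unique(y)
                n, d = X.shape
                self.means_, covs, self.priors_ = {}, {}, {}
                pooled = np.zeros((d, d))
                for c in self.classes_:
                    Xc = X[y == c]
                    self.means_[c] = Xc.mean(axis=0)
                    covs[c] = np.cov(Xc, rowvar=False)
                    self.priors_[c] = len(Xc) / n
                    pooled += (len(Xc) - 1) * covs[c]
                pooled /= n - len(self.classes_)
                self.covs_ = {}
                for c in self.classes_:
                    s = (1 - self.lam) * covs[c] + self.lam * pooled  # shrink toward pooled covariance
                    s = (1 - self.gamma) * s + self.gamma * (np.trace(s) / d) * np.eye(d)  # and toward identity
                    self.covs_[c] = s
                return self

            def predict(self, X):
                scores = []
                for c in self.classes_:
                    diff = X - self.means_[c]
                    inv = np.linalg.inv(self.covs_[c])
                    _, logdet = np.linalg.slogdet(self.covs_[c])
                    maha = np.einsum("ij,jk,ik->i", diff, inv, diff)  # squared Mahalanobis distance
                    scores.append(-0.5 * (maha + logdet) + np.log(self.priors_[c]))
                return self.classes_[np.argmax(np.stack(scores, axis=1), axis=1)]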

  8. Estimation of human emotions using thermal facial information

    Science.gov (United States)

    Nguyen, Hung; Kotani, Kazunori; Chen, Fan; Le, Bac

    2014-01-01

    In recent years, research on human emotion estimation using thermal infrared (IR) imagery has appealed to many researchers due to its invariance to visible illumination changes. Although infrared imagery is superior to visible imagery in its invariance to illumination changes and appearance differences, it has difficulties in handling transparent glasses in the thermal infrared spectrum. As a result, when using infrared imagery for the analysis of human facial information, the regions of eyeglasses are dark and eyes' thermal information is not given. We propose a temperature space method to correct eyeglasses' effect using the thermal facial information in the neighboring facial regions, and then use Principal Component Analysis (PCA), Eigen-space Method based on class-features (EMC), and PCA-EMC method to classify human emotions from the corrected thermal images. We collected the Kotani Thermal Facial Emotion (KTFE) database and performed the experiments, which show the improved accuracy rate in estimating human emotions.

  9. Automatic prediction of facial trait judgments: appearance vs. structural models.

    Directory of Open Access Journals (Sweden)

    Mario Rojas

    Full Text Available Evaluating other individuals with respect to personality characteristics plays a crucial role in human relations and it is the focus of attention for research in diverse fields such as psychology and interactive computer systems. In psychology, face perception has been recognized as a key component of this evaluation system. Multiple studies suggest that observers use face information to infer personality characteristics. Interactive computer systems are trying to take advantage of these findings and apply them to increase the natural aspect of interaction and to improve the performance of interactive computer systems. Here, we experimentally test whether the automatic prediction of facial trait judgments (e.g. dominance) can be made by using the full appearance information of the face and whether a reduced representation of its structure is sufficient. We evaluate two separate approaches: a holistic representation model using the facial appearance information and a structural model constructed from the relations among facial salient points. State-of-the-art machine learning methods are applied to (a) derive a facial trait judgment model from training data and (b) predict a facial trait value for any face. Furthermore, we address the issue of whether there are specific structural relations among facial points that predict perception of facial traits. Experimental results over a set of labeled data (9 different trait evaluations) and classification rules (4 rules) suggest that (a) prediction of perception of facial traits is learnable by both holistic and structural approaches; (b) the most reliable prediction of facial trait judgments is obtained by certain types of holistic descriptions of the face appearance; and (c) for some traits such as attractiveness and extroversion, there are relationships between specific structural features and social perceptions.
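
    As a toy version of the structural branch, pairwise distances between salient facial points can serve as features for a linear model predicting a rated trait such as dominance; the landmark format and the ridge regressor below are assumptions, not the authors' exact learners.

        import numpy as np
        from itertools import combinations
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import cross_val_score

        def structural_features(landmarks):
            # landmarks: (n_points, 2) array of salient facial points for one face.
            pts = np.asarray(landmarks, dtype=float)
            dists = np.array([np.linalg.norm(pts[i] - pts[j])
                              for i, j in combinations(range(len(pts)), 2)])
            return dists / dists.max()  # scale-normalize the distance vector

        def trait_model_score(all_landmarks, trait_ratings):
            # Cross-validated R^2 of a ridge model mapping structure to the perceived trait.
            X = np.stack([structural_features(lm) for lm in all_landmarks])
            return cross_val_score(Ridge(alpha=1.0), X, trait_ratings, cv=5, scoring="r2").mean()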

  10. Anatomia del nervo faciale

    OpenAIRE

    Barbut , J.; Tankere , F.; Bernat , I.

    2017-01-01

    International audience; The facial nerve is at the heart of everyday practice in otorhinolaryngology. Its singular physiology and its pathology make this pair of cranial nerves a fascinating subject in which some practitioners have specialized. Precise knowledge of its anatomy, whose course is tortuous and has many relationships with other noble structures, is an indispensable prerequisite for approaching it, whether in cervical surgery, in otologic surgery or in...

  11. Anatomía del Nervio Facial y sus Implicancias en los Procedimientos Quirúrgicos

    OpenAIRE

    Rodrigues, Antonio de Castro; Andreo, Jesus Carlos; Menezes, Laura de Freitas; Chinellato, Tatiana Pimentel; Rosa Júnior, Geraldo Marco

    2009-01-01

    Facial palsy, parotid diseases and related disorders are relatively common clinical conditions with a variety of causes. Irrespective of its etiology, facial palsy always represents a very serious problem for the patient. Parotid gland diseases are also a very common occurrence. In this particular case, knowledge of the surgical anatomy of the facial nerve and its correlations with the parotid gland is very important for adequate preservation in surgery for benign and malignant diseases of...

  12. Facial Symmetry: An Illusion?

    Directory of Open Access Journals (Sweden)

    Naveen Reddy Admala

    2013-01-01

    Materials and methods: A sample of 120 patients (60 males and 60 females; mean age, 15 years; range, 16-22 years) who had received orthodontic clinical examination at AME's Dental College and Hospital was selected. Selection was made in such a way that the following malocclusions, with equal sex distribution, could be drawn from the patient database. The patients selected were classified into skeletal Class I (25 males and 25 females), Class II (25 males and 25 females) and Class III (10 males and 10 females) based on ANB angle. The numbers were decided in advance and were also based on the number of patients with these malocclusions reporting to the department. Differences in length between the distances from the points at which the ear rods were inserted to the facial midline, and the perpendicular distance from the soft-tissue menton to the facial midline, were measured on a frontofacial photograph. Subjects with a discrepancy of more than three standard deviations of the measurement error were categorized as having left- or right-sided laterality. Results: Of subjects with facial asymmetry, 74.1% had a wider right hemiface, and 51.6% of those with chin deviation had left-sided laterality. These tendencies were independent of sex or skeletal jaw relationships. Conclusion: These results suggest that laterality in the normal asymmetry of the face, which is consistently found in humans, is likely to be a hereditary rather than an acquired trait.

  13. Multiracial Facial Golden Ratio and Evaluation of Facial Appearance.

    Directory of Open Access Journals (Sweden)

    Mohammad Khursheed Alam

    Full Text Available This study aimed to investigate the association of facial proportion and its relation to the golden ratio with the evaluation of facial appearance among the Malaysian population. This was a cross-sectional study with 286 subjects randomly selected from Universiti Sains Malaysia (USM) Health Campus students (150 females and 136 males; 100 Malaysian Chinese, 100 Malaysian Malay and 86 Malaysian Indian), with a mean age of 21.54 ± 1.56 (age range, 18-25). Facial indices obtained from direct facial measurements were used for the classification of facial shape into short, ideal and long. A validated structured questionnaire was used to assess subjects' evaluation of their own facial appearance. The mean facial indices of Malaysian Indian (MI), Malaysian Chinese (MC) and Malaysian Malay (MM) were 1.59 ± 0.19, 1.57 ± 0.25 and 1.54 ± 0.23 respectively. Only MC showed significant sexual dimorphism in facial index (P = 0.047; P < 0.05), but no significant difference was found between races. Out of the 286 subjects, 49 (17.1%) were of ideal facial shape, 156 (54.5%) short and 81 (28.3%) long. The facial evaluation questionnaire showed that MC had the lowest satisfaction, with a mean score of 2.18 ± 0.97 for overall impression and 2.15 ± 1.04 for facial parts, compared to MM and MI, with mean scores of 1.80 ± 0.97 and 1.64 ± 0.74 respectively for overall impression and 1.75 ± 0.95 and 1.70 ± 0.83 respectively for facial parts. 1) Only 17.1% of Malaysian facial proportions conformed to the golden ratio, with the majority of the population having a short face (54.5%); 2) facial index did not depend significantly on race; 3) significant sexual dimorphism was shown among Malaysian Chinese; 4) all three races are generally satisfied with their own facial appearance; 5) no significant association was found between the golden ratio and facial evaluation score among the Malaysian population.

  14. Adolescents with HIV and facial lipoatrophy: response to facial stimulation

    Directory of Open Access Journals (Sweden)

    Jesus Claudio Gabana-Silveira

    2014-08-01

    Full Text Available OBJECTIVES: This study evaluated the effects of facial stimulation over the superficial muscles of the face in individuals with facial lipoatrophy associated with human immunodeficiency virus (HIV) and with no indication for treatment with polymethyl methacrylate. METHOD: The study sample comprised four adolescents of both genders ranging from 13 to 17 years of age. To participate in the study, the participants had to score six points or fewer on the Facial Lipoatrophy Index. The facial stimulation program used in our study consisted of 12 weekly 30-minute sessions during which the individuals received therapy. The therapy consisted of intra- and extra-oral muscle contraction and stretching maneuvers of the zygomaticus major and minor and the masseter muscles. Pre- and post-treatment results were obtained using anthropometric static measurements of the face and the Facial Lipoatrophy Index. RESULTS: The results suggest that the therapeutic program effectively improved the volume of the buccinator muscles. No significant differences were observed for the measurements of the medial portion of the face, the lateral portion of the face, the volume of the masseter muscle, or Facial Lipoatrophy Index scores. CONCLUSION: The results of our study suggest that facial maneuvers applied to the superficial muscles of the face of adolescents with facial lipoatrophy associated with HIV improved the facial volume related to the buccinator muscles. We believe that our results will encourage future research with HIV patients, especially patients who do not have the possibility of receiving an alternative aesthetic treatment.

  15. Four not six: Revealing culturally common facial expressions of emotion.

    Science.gov (United States)

    Jack, Rachael E; Sun, Wei; Delis, Ioannis; Garrod, Oliver G B; Schyns, Philippe G

    2016-06-01

    As a highly social species, humans generate complex facial expressions to communicate a diverse range of emotions. Since Darwin's work, identifying among these complex patterns which are common across cultures and which are culture-specific has remained a central question in psychology, anthropology, philosophy, and more recently machine vision and social robotics. Classic approaches to addressing this question typically tested the cross-cultural recognition of theoretically motivated facial expressions representing 6 emotions, and reported universality. Yet, variable recognition accuracy across cultures suggests a narrower cross-cultural communication supported by sets of simpler expressive patterns embedded in more complex facial expressions. We explore this hypothesis by modeling the facial expressions of over 60 emotions across 2 cultures, and segregating out the latent expressive patterns. Using a multidisciplinary approach, we first map the conceptual organization of a broad spectrum of emotion words by building semantic networks in 2 cultures. For each emotion word in each culture, we then model and validate its corresponding dynamic facial expression, producing over 60 culturally valid facial expression models. We then apply to the pooled models a multivariate data reduction technique, revealing 4 latent and culturally common facial expression patterns that each communicates specific combinations of valence, arousal, and dominance. We then reveal the face movements that accentuate each latent expressive pattern to create complex facial expressions. Our data questions the widely held view that 6 facial expression patterns are universal, instead suggesting 4 latent expressive patterns with direct implications for emotion communication, social psychology, cognitive neuroscience, and social robotics. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  16. Research on facial expression simulation based on depth image

    Science.gov (United States)

    Ding, Sha-sha; Duan, Jin; Zhao, Yi-wu; Xiao, Bo; Wang, Hao

    2017-11-01

    Nowadays, facial expression simulation is widely used in film and television special effects, human-computer interaction and many other fields. Facial expressions are captured with a Kinect camera. An AAM algorithm based on statistical information is employed to detect and track faces, and a 2D regression algorithm is applied to align the feature points. The facial feature points are detected automatically, while the feature points of the 3D cartoon model are marked manually. The aligned feature points are mapped using keyframe techniques. In order to improve the animation effect, non-feature points are interpolated based on empirical models, and the mapping and interpolation are completed under the constraint of Bézier curves. Thus the feature points on the cartoon face model can be driven as the facial expression varies, achieving real-time cartoon facial expression simulation. The experimental results show that the method proposed in this paper can accurately simulate facial expressions. Finally, our method is compared with a previous method, and the data show that the implementation efficiency is greatly improved.
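
    The Bézier constraint on the non-feature points amounts to evaluating a cubic curve between two mapped feature points; in the sketch below the two inner control points are invented purely for illustration.

        import numpy as np

        def cubic_bezier(p0, p1, p2, p3, n=10):
            # Evaluate a cubic Bezier curve at n parameter values in [0, 1].
            t = np.linspace(0.0, 1.0, n)[:, None]
            p0, p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p0, p1, p2, p3))
            return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
                    + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

        # Example: place in-between mouth points from two tracked lip corners,
        # bulging the curve with two synthetic control points.
        corner_l, corner_r = np.array([0.0, 0.0]), np.array([4.0, 0.0])
        ctrl1, ctrl2 = np.array([1.3, 0.8]), np.array([2.7, 0.8])
        in_between = cubic_bezier(corner_l, ctrl1, ctrl2, corner_r, n=8)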

  17. Basal cell carcinoma appearing in a facial nevus sebaceous of Jadassohn: dermoscopic features

    Directory of Open Access Journals (Sweden)

    Maria Leonor Enei

    2012-08-01

    Full Text Available The nevus sebaceous of Jadassohn usually affects the face or scalp. It tends to evolve in three stages, and the final stage is characterized by the appearance of tumours. We present the case of a facial nevus sebaceous of Jadassohn in which a basal cell carcinoma developed. We also explore the diagnosis of this disease, which was established through dermoscopy, and propose using this technique in the clinical follow-up of this type of hamartoma, thereby allowing the early detection of cancer development.

  18. Improving the Quality of Facial Composites Using a Holistic Cognitive Interview

    Science.gov (United States)

    Frowd, Charlie D.; Bruce, Vicki; Smith, Ashley J.; Hancock, Peter J. B.

    2008-01-01

    Witnesses to and victims of serious crime are normally asked to describe the appearance of a criminal suspect, using a Cognitive Interview (CI), and to construct a facial composite, a visual representation of the face. Research suggests that focusing on the global aspects of a face, as opposed to its facial features, facilitates recognition and…

  19. An Improved Surface Simplification Method for Facial Expression Animation Based on Homogeneous Coordinate Transformation Matrix and Maximum Shape Operator

    Directory of Open Access Journals (Sweden)

    Juin-Ling Tseng

    2016-01-01

    Full Text Available Facial animation is one of the most popular 3D animation topics researched in recent years. However, when using facial animation, a 3D facial animation model has to be stored. This 3D facial animation model requires many triangles to accurately describe and demonstrate facial expression animation because the face often presents a number of different expressions. Consequently, the costs associated with facial animation have increased rapidly. In an effort to reduce storage costs, researchers have sought to simplify 3D animation models using techniques such as Deformation Sensitive Decimation and Feature Edge Quadric. The studies conducted have examined the problems in the homogeneity of the local coordinate system between different expression models and in the retainment of simplified model characteristics. This paper proposes a method that applies Homogeneous Coordinate Transformation Matrix to solve the problem of homogeneity of the local coordinate system and Maximum Shape Operator to detect shape changes in facial animation so as to properly preserve the features of facial expressions. Further, root mean square error and perceived quality error are used to compare the errors generated by different simplification methods in experiments. Experimental results show that, compared with Deformation Sensitive Decimation and Feature Edge Quadric, our method can not only reduce the errors caused by simplification of facial animation, but also retain more facial features.

  20. [Prosopagnosia and facial expression recognition].

    Science.gov (United States)

    Koyama, Shinichi

    2014-04-01

    This paper reviews clinical neuropsychological studies that have indicated that the recognition of a person's identity and the recognition of facial expressions are processed by different cortical and subcortical areas of the brain. The fusiform gyrus, especially the right fusiform gyrus, plays an important role in the recognition of identity. The superior temporal sulcus, amygdala, and medial frontal cortex play important roles in facial-expression recognition. Both facial recognition and facial-expression recognition are highly intellectual processes that involve several regions of the brain.

  1. Virtual 3-D Facial Reconstruction

    Directory of Open Access Journals (Sweden)

    Martin Paul Evison

    2000-06-01

    Full Text Available Facial reconstructions in archaeology allow empathy with people who lived in the past and enjoy considerable popularity with the public. It is a common misconception that facial reconstruction will produce an exact likeness; a resemblance is the best that can be hoped for. Research at Sheffield University is aimed at the development of a computer system for facial reconstruction that will be accurate, rapid, repeatable, accessible and flexible. This research is described and prototypical 3-D facial reconstructions are presented. Interpolation models simulating obesity, ageing and ethnic affiliation are also described. Some strengths and weaknesses in the models, and their potential for application in archaeology are discussed.

  2. Relation between facial affect recognition and configural face processing in antipsychotic-free schizophrenia.

    Science.gov (United States)

    Fakra, Eric; Jouve, Elisabeth; Guillaume, Fabrice; Azorin, Jean-Michel; Blin, Olivier

    2015-03-01

    Deficit in facial affect recognition is a well-documented impairment in schizophrenia, closely connected to social outcome. This deficit could be related to psychopathology, but also to a broader dysfunction in processing facial information. In addition, patients with schizophrenia inadequately use configural information-a type of processing that relies on spatial relationships between facial features. To date, no study has specifically examined the link between symptoms and misuse of configural information in the deficit in facial affect recognition. Unmedicated schizophrenia patients (n = 30) and matched healthy controls (n = 30) performed a facial affect recognition task and a face inversion task, which tests aptitude to rely on configural information. In patients, regressions were carried out between facial affect recognition, symptom dimensions and inversion effect. Patients, compared with controls, showed a deficit in facial affect recognition and a lower inversion effect. Negative symptoms and lower inversion effect could account for 41.2% of the variance in facial affect recognition. This study confirms the presence of a deficit in facial affect recognition, and also of dysfunctional manipulation in configural information in antipsychotic-free patients. Negative symptoms and poor processing of configural information explained a substantial part of the deficient recognition of facial affect. We speculate that this deficit may be caused by several factors, among which independently stand psychopathology and failure in correctly manipulating configural information. PsycINFO Database Record (c) 2015 APA, all rights reserved.

  3. BMI and WHR Are Reflected in Female Facial Shape and Texture: A Geometric Morphometric Image Analysis.

    Directory of Open Access Journals (Sweden)

    Christine Mayer

    Full Text Available Facial markers of body composition are frequently studied in evolutionary psychology and are important in computational and forensic face recognition. We assessed the association of body mass index (BMI) and waist-to-hip ratio (WHR) with facial shape and texture (color pattern) in a sample of young Middle European women by a combination of geometric morphometrics and image analysis. Faces of women with high BMI had a wider and rounder facial outline relative to the size of the eyes and lips, and relatively lower eyebrows. Furthermore, women with high BMI had a brighter and more reddish skin color than women with lower BMI. The same facial features were associated with WHR, even though BMI and WHR were only moderately correlated. Yet BMI was more predictable than WHR from facial attributes. After leave-one-out cross-validation, we were able to predict 25% of variation in BMI and 10% of variation in WHR by facial shape. Facial texture predicted only about 3-10% of variation in BMI and WHR. This indicates that facial shape primarily reflects total fat proportion, rather than the distribution of fat within the body. The association of reddish facial texture in high-BMI women may be mediated by increased blood pressure and superficial blood flow as well as diet. Our study elucidates how geometric morphometric image analysis serves to quantify the effect of biological factors such as BMI and WHR on facial shape and color, which in turn contributes to social perception.
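
    The leave-one-out prediction reported above can be illustrated with a hedged scikit-learn sketch on random stand-in data; the plain linear model and the feature layout are assumptions, not the authors' actual geometric morphometric pipeline.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# X: facial shape descriptors (e.g., flattened, aligned landmark coordinates),
# y: BMI values. Random data stands in for the real measurements.
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 20))
y = 22 + 2 * X[:, 0] + rng.normal(scale=1.0, size=60)

# Leave-one-out cross-validated predictions of BMI from facial shape.
pred = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
explained = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"Cross-validated proportion of explained variance: {explained:.2f}")
```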

  4. Facial Expression Recognition via Non-Negative Least-Squares Sparse Coding

    Directory of Open Access Journals (Sweden)

    Ying Chen

    2014-05-01

    Full Text Available Sparse coding is an active research subject in signal processing, computer vision, and pattern recognition. A novel method of facial expression recognition via non-negative least squares (NNLS) sparse coding is presented in this paper. The NNLS sparse coding is used to form a facial expression classifier. To evaluate the performance of the presented method, local binary patterns (LBP) and raw pixels are extracted for facial feature representation. Facial expression recognition experiments are conducted on the Japanese Female Facial Expression (JAFFE) database. Compared with other widely used methods such as linear support vector machines (SVM), the sparse representation-based classifier (SRC), the nearest subspace classifier (NSC), K-nearest neighbor (KNN) and radial basis function neural networks (RBFNN), the experimental results indicate that the presented NNLS method performs better on facial expression recognition tasks.
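
    A minimal sketch of how an NNLS sparse-coding classifier of the kind described above might be assembled with scipy; the dictionary layout, the residual-based decision rule, and all names are assumptions for illustration rather than the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import nnls

def nnls_classify(D: np.ndarray, labels: np.ndarray, x: np.ndarray):
    """Classify x by non-negative least-squares coding over dictionary D.

    D: (d, n) matrix whose columns are training feature vectors (e.g., LBP
    histograms), labels: length-n array of class labels, x: length-d test vector.
    """
    coef, _ = nnls(D, x)  # non-negative coding coefficients over all training samples
    residuals = {}
    for c in np.unique(labels):
        mask = labels == c
        # Reconstruct x using only the coefficients belonging to class c.
        residuals[c] = np.linalg.norm(x - D[:, mask] @ coef[mask])
    return min(residuals, key=residuals.get)  # class with the smallest residual

# Illustrative usage with random stand-in features.
rng = np.random.default_rng(2)
D = rng.random((256, 90))               # 90 training vectors of dimension 256
labels = np.repeat(np.arange(6), 15)    # six expression classes
x = rng.random(256)
print(nnls_classify(D, labels, x))
```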

  5. Facial soft tissue analysis among various vertical facial patterns

    International Nuclear Information System (INIS)

    Jeelani, W.; Fida, M.; Shaikh, A.

    2016-01-01

    Background: The emergence of soft tissue paradigm in orthodontics has made various soft tissue parameters an integral part of the orthodontic problem list. The purpose of this study was to determine and compare various facial soft tissue parameters on lateral cephalograms among patients with short, average and long facial patterns. Methods: A cross-sectional study was conducted on the lateral cephalograms of 180 adult subjects divided into three equal groups, i.e., short, average and long face according to the vertical facial pattern. Incisal display at rest, nose height, upper and lower lip lengths, degree of lip procumbency and the nasolabial angle were measured for each individual. The gender differences for these soft tissue parameters were determined using Mann-Whitney U test while the comparison among different facial patterns was performed using Kruskal-Wallis test. Results: Significant differences in the incisal display at rest, total nasal height, lip procumbency, the nasolabial angle and the upper and lower lip lengths were found among the three vertical facial patterns. A significant positive correlation of nose and lip dimensions was found with the underlying skeletal pattern. Similarly, the incisal display at rest, upper and lower lip procumbency and the nasolabial angle were significantly correlated with the lower anterior facial height. Conclusion: Short facial pattern is associated with minimal incisal display, recumbent upper and lower lips and acute nasolabial angle while the long facial pattern is associated with excessive incisal display, procumbent upper and lower lips and obtuse nasolabial angle. (author)
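
    The group comparisons above rely on nonparametric tests. Here is a hedged scipy sketch of a Kruskal-Wallis test across the three facial patterns and a Mann-Whitney U test for gender, on random stand-in data; the variable choice and group means are invented for illustration.

```python
import numpy as np
from scipy.stats import kruskal, mannwhitneyu

rng = np.random.default_rng(3)
# Hypothetical nasolabial angle values (degrees) for the three facial patterns.
short_face = rng.normal(100, 8, 60)
average_face = rng.normal(105, 8, 60)
long_face = rng.normal(110, 8, 60)

h_stat, p_groups = kruskal(short_face, average_face, long_face)
print(f"Kruskal-Wallis across facial patterns: H={h_stat:.2f}, p={p_groups:.4f}")

# A gender comparison within one group would use a Mann-Whitney U test.
males, females = short_face[:30], short_face[30:]
u_stat, p_gender = mannwhitneyu(males, females)
print(f"Mann-Whitney U (gender): U={u_stat:.1f}, p={p_gender:.4f}")
```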

  6. Facial soft tissue thickness in North Indian adult population

    Directory of Open Access Journals (Sweden)

    Tanushri Saxena

    2012-01-01

    Full Text Available Objectives: Forensic facial reconstruction is an attempt to reproduce a likeness of facial features of an individual, based on characteristics of the skull, for the purpose of individual identification - The aim of this study was to determine the soft tissue thickness values of individuals of Bareilly population, Uttar Pradesh, India and to evaluate whether these values can help in forensic identification. Study design: A total of 40 individuals (19 males, 21 females were evaluated using spiral computed tomographic (CT scan with 2 mm slice thickness in axial sections and soft tissue thicknesses were measured at seven midfacial anthropological facial landmarks. Results: It was found that facial soft tissue thickness values decreased with age. Soft tissue thickness values were less in females than in males, except at ramus region. Comparing the left and right values in individuals it was found to be not significant. Conclusion: Soft tissue thickness values are an important factor in facial reconstruction and also help in forensic identification of an individual. CT scan gives a good representation of these values and hence is considered an important tool in facial reconstruction- This study has been conducted in North Indian population and further studies with larger sample size can surely add to the data regarding soft tissue thicknesses.

  7. Heartbeat Signal from Facial Video for Biometric Recognition

    DEFF Research Database (Denmark)

    Haque, Mohammad Ahsanul; Nasrollahi, Kamal; Moeslund, Thomas B.

    2015-01-01

    Different biometric traits such as face appearance and heartbeat signal from Electrocardiogram (ECG)/Phonocardiogram (PCG) are widely used in human identity recognition. Recent advances in facial video-based measurement of cardio-physiological parameters such as heartbeat rate, respiratory rate, and blood volume pressure provide the possibility of extracting the heartbeat signal from facial video instead of using obtrusive ECG or PCG sensors on the body. This paper proposes the Heartbeat Signal from Facial Video (HSFV) as a new biometric trait for human identity recognition, for the first time to the best of our knowledge. Feature extraction from the HSFV is accomplished by employing the Radon transform on a waterfall model of the replicated HSFV. The pairwise Minkowski distances are obtained from the Radon image as the features. The authentication is accomplished by a decision tree based supervised...
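
    A rough sketch of the feature-extraction idea described above, a Radon transform of a waterfall-style image followed by pairwise Minkowski distances, using scikit-image and scipy; the waterfall construction, the angle sweep, and the Minkowski order are assumptions for illustration, not the authors' exact pipeline.

```python
import numpy as np
from skimage.transform import radon
from scipy.spatial.distance import pdist

# A random "waterfall" image stands in for the stacked, replicated heartbeat signal.
rng = np.random.default_rng(4)
waterfall = rng.random((128, 128))

# Radon transform of the waterfall image over a sweep of projection angles.
angles = np.linspace(0.0, 180.0, 64, endpoint=False)
sinogram = radon(waterfall, theta=angles, circle=False)

# Pairwise Minkowski distances between projection columns serve as features.
features = pdist(sinogram.T, metric="minkowski", p=3)
print(features.shape)
```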

  8. Facial expressions : What the mirror neuron system can and cannot tell us

    NARCIS (Netherlands)

    van der Gaag, Christiaan; Minderaa, Ruud B.; Keysers, Christian

    2007-01-01

    Facial expressions contain both motor and emotional components. The inferior frontal gyrus (IFG) and posterior parietal cortex have been considered to compose a mirror neuron system (MNS) for the motor components of facial expressions, while the amygdala and insula may represent an "additional" MNS

  9. Facial Emotion Recognition Impairment in Patients with Parkinson's Disease and Isolated Apathy

    Directory of Open Access Journals (Sweden)

    Mercè Martínez-Corral

    2010-01-01

    Full Text Available Apathy is a frequent feature of Parkinson's disease (PD), usually related to executive dysfunction. However, in a subgroup of PD patients apathy may represent the only or predominant neuropsychiatric feature. To understand the mechanisms underlying apathy in PD, we investigated emotional processing in PD patients with and without apathy and in healthy controls (HC), assessed by a facial emotion recognition task (FERT). We excluded PD patients with cognitive impairment, depression, other affective disturbances and previous surgery for PD. PD patients with apathy scored significantly worse on the FERT, performing worse in fear, anger, and sadness recognition. No differences, however, were found between nonapathetic PD patients and HC. These findings suggest the existence of a disruption of emotional-affective processing in cognitively preserved PD patients with apathy. Identifying specific dysfunction of limbic structures in PD patients with isolated apathy may have therapeutic and prognostic implications.

  10. Magnetically retained silicone facial prosthesis

    African Journals Online (AJOL)

    2013-06-09

    Jun 9, 2013 ... Prosthetic camouflaging of facial defects and use of silicone maxillofacial material are the alternatives to the surgical retreatment. Silicone elastomers provide more options to clinician for customization of the facial prosthesis which is simple, esthetically good when coupled with bio magnets for retention.

  11. [Multidisciplinary approach to facial injuries].

    NARCIS (Netherlands)

    Dubois, L.; Schreurs, R.; Lapid, O.; Saeed, P.; Adriaensen, G.F.; Hoefnagels, F.M.; Jong, V.M. de

    2017-01-01

    BACKGROUND: Approximately one quarter of polytrauma patients has facial injuries, which usually lead to loss of form and function. Several specialties are involved in the acute and reconstructive phases of facial injuries, such as oral and maxillofacial surgery, otorhinolaryngology, plastic surgery,

  12. Facial responsiveness of psychopaths to the emotional expressions of others.

    Directory of Open Access Journals (Sweden)

    Janina Künecke

    Full Text Available Psychopathic individuals show selfish, manipulative, and antisocial behavior in addition to emotional detachment and reduced empathy. Their empathic deficits are thought to be associated with a reduced responsiveness to emotional stimuli. Immediate facial muscle responses to the emotional expressions of others reflect the expressive part of emotional responsiveness and are positively related to trait empathy. Empirical evidence for reduced facial muscle responses in adult psychopathic individuals to the emotional expressions of others is rare. In the present study, 261 male criminal offenders and non-offenders categorized dynamically presented facial emotion expressions (angry, happy, sad, and neutral) during facial electromyography recording of their corrugator muscle activity. We replicated a measurement model of facial muscle activity, which controls for general facial responsiveness to face stimuli, and modeled three correlated emotion-specific factors (i.e., anger, happiness, and sadness) representing emotion-specific activity. In a multi-group confirmatory factor analysis, we compared the means of the anger, happiness, and sadness latent factors between three groups: 1) non-offenders, 2) low psychopathic offenders, and 3) high psychopathic offenders. There were no significant mean differences between groups. Our results challenge current theories that focus on deficits in emotional responsiveness as leading to the development of psychopathy and encourage further theoretical development on deviant emotional processes in psychopathic individuals.

  13. Women living with facial hair: the psychological and behavioral burden.

    Science.gov (United States)

    Lipton, Michelle G; Sherr, Lorraine; Elford, Jonathan; Rustin, Malcolm H A; Clayton, William J

    2006-08-01

    While unwanted facial hair is clearly distressing for women, relatively little is known about its psychological impact. This study reports on the psychological and behavioral burden of facial hair in women with suspected polycystic ovary syndrome. Eighty-eight women (90% participation rate) completed a self-administered questionnaire concerning hair removal practices; the impact of facial hair on social and emotional domains; relationships and daily life; anxiety and depression (Hospital Anxiety and Depression Scale); self-esteem (Rosenberg Self-esteem Scale); and quality of life (WHOQOL-BREF). Women spent considerable time on the management of their facial hair (mean, 104 min/week). Two thirds (67%) reported continually checking in mirrors and 76% by touch. Forty percent felt uncomfortable in social situations. High levels of emotional distress and psychological morbidity were detected; 30% had levels of depression above the clinical cut off point, while 75% reported clinical levels of anxiety; 29% reported both. Although overall quality of life was good, scores were low in social and relationship domains--reflecting the impact of unwanted facial hair. Unwanted facial hair carries a high psychological burden for women and represents a significant intrusion into their daily lives. Psychological support is a neglected element of care for these women.

  14. MRI of the facial nerve in idiopathic facial palsy

    International Nuclear Information System (INIS)

    Saatci, I.; Sahintuerk, F.; Sennaroglu, L.; Boyvat, F.; Guersel, B.; Besim, A.

    1996-01-01

    The purpose of this prospective study was to define the enhancement pattern of the facial nerve in idiopathic facial paralysis (Bell's palsy) on magnetic resonance (MR) imaging with routine doses of gadolinium-DTPA (0.1 mmol/kg). Using a 0.5-T imager, 24 patients were examined, with a mean interval of 13.7 days between the onset of symptoms and the MR examination. Contralateral asymptomatic facial nerves constituted the control group, and five of the normal facial nerves (20.8%) showed enhancement confined to the geniculate ganglion. Hence, contrast enhancement limited to the geniculate ganglion in the abnormal facial nerve (3 of 24) was regarded as equivocal. Not encountered in any of the normal facial nerves, enhancement of other segments alone or associated with geniculate ganglion enhancement was considered to be abnormal and was noted in 70.8% of the symptomatic facial nerves. The most frequently enhancing segments were the geniculate ganglion and the distal intracanalicular segment. (orig.)

  15. MRI of the facial nerve in idiopathic facial palsy

    Energy Technology Data Exchange (ETDEWEB)

    Saatci, I. [Dept. of Radiology, Hacettepe Univ., Hospital Sihhiye, Ankara (Turkey); Sahintuerk, F. [Dept. of Radiology, Hacettepe Univ., Hospital Sihhiye, Ankara (Turkey); Sennaroglu, L. [Dept. of Otolaryngology, Head and Neck Surgery, Hacettepe Univ., Hospital Sihhiye, Ankara (Turkey); Boyvat, F. [Dept. of Radiology, Hacettepe Univ., Hospital Sihhiye, Ankara (Turkey); Guersel, B. [Dept. of Otolaryngology, Head and Neck Surgery, Hacettepe Univ., Hospital Sihhiye, Ankara (Turkey); Besim, A. [Dept. of Radiology, Hacettepe Univ., Hospital Sihhiye, Ankara (Turkey)

    1996-10-01

    The purpose of this prospective study was to define the enhancement pattern of the facial nerve in idiopathic facial paralysis (Bell's palsy) on magnetic resonance (MR) imaging with routine doses of gadolinium-DTPA (0.1 mmol/kg). Using a 0.5-T imager, 24 patients were examined, with a mean interval of 13.7 days between the onset of symptoms and the MR examination. Contralateral asymptomatic facial nerves constituted the control group, and five of the normal facial nerves (20.8%) showed enhancement confined to the geniculate ganglion. Hence, contrast enhancement limited to the geniculate ganglion in the abnormal facial nerve (3 of 24) was regarded as equivocal. Not encountered in any of the normal facial nerves, enhancement of other segments alone or associated with geniculate ganglion enhancement was considered to be abnormal and was noted in 70.8% of the symptomatic facial nerves. The most frequently enhancing segments were the geniculate ganglion and the distal intracanalicular segment. (orig.)

  16. Representing vision and blindness.

    Science.gov (United States)

    Ray, Patrick L; Cox, Alexander P; Jensen, Mark; Allen, Travis; Duncan, William; Diehl, Alexander D

    2016-01-01

    There have been relatively few attempts to represent vision or blindness ontologically. This is unsurprising as the related phenomena of sight and blindness are difficult to represent ontologically for a variety of reasons. Blindness has escaped ontological capture at least in part because: blindness or the employment of the term 'blindness' seems to vary from context to context, blindness can present in a myriad of types and degrees, and there is no precedent for representing complex phenomena such as blindness. We explore current attempts to represent vision or blindness, and show how these attempts fail at representing subtypes of blindness (viz., color blindness, flash blindness, and inattentional blindness). We examine the results found through a review of current attempts and identify where they have failed. By analyzing our test cases of different types of blindness along with the strengths and weaknesses of previous attempts, we have identified the general features of blindness and vision. We propose an ontological solution to represent vision and blindness, which capitalizes on resources afforded to one who utilizes the Basic Formal Ontology as an upper-level ontology. The solution we propose here involves specifying the trigger conditions of a disposition as well as the processes that realize that disposition. Once these are specified we can characterize vision as a function that is realized by certain (in this case) biological processes under a range of triggering conditions. When the range of conditions under which the processes can be realized are reduced beyond a certain threshold, we are able to say that blindness is present. We characterize vision as a function that is realized as a seeing process and blindness as a reduction in the conditions under which the sight function is realized. This solution is desirable because it leverages current features of a major upper-level ontology, accurately captures the phenomenon of blindness, and can be

  17. Road and Street Centerlines, StreetLabels-The data set is a text feature consisting of 6329 label points representing street names. It was created to show the names of city and county based streets., Published in 1989, Davis County Government.

    Data.gov (United States)

    NSGIC Local Govt | GIS Inventory — Road and Street Centerlines dataset current as of 1989. StreetLabels-The data set is a text feature consisting of 6329 label points representing street names. It was...

  18. Facial nerve palsy as a primary presentation of advanced carcinoma ...

    African Journals Online (AJOL)

    Introduction: Cranial nerve neuropathy is a rare presentation of advanced cancer of the prostate. Observation: We report a case of 65-year-old man who presented with right lower motor neuron (LMN) facial nerve palsy. The prostate had malignant features on digital rectal examination (DRE) and the prostate specific antigen ...

  19. Combining Facial Dynamics With Appearance for Age Estimation

    NARCIS (Netherlands)

    Dibeklioğlu, H.; Alnajar, F.; Salah, A.A.; Gevers, T.

    2015-01-01

    Estimating the age of a human from the captured images of his/her face is a challenging problem. In general, the existing approaches to this problem use appearance features only. In this paper, we show that in addition to appearance information, facial dynamics can be leveraged in age estimation. We

  20. Borrowed beauty? Understanding identity in Asian facial cosmetic surgery

    NARCIS (Netherlands)

    Aquino, Y.S.; Steinkamp, N.L.

    2016-01-01

    This review aims to identify (1) sources of knowledge and (2) important themes of the ethical debate related to surgical alteration of facial features in East Asians. This article integrates narrative and systematic review methods. In March 2014, we searched databases including PubMed, Philosopher's

  1. 3D Facial Landmarking under Expression, Pose, and Occlusion Variations

    NARCIS (Netherlands)

    H. Dibeklioğlu; A.A. Salah (Albert Ali); L. Akarun

    2008-01-01

    Automatic localization of 3D facial features is important for face recognition, tracking, modeling and expression analysis. Methods developed for 2D images were shown to have problems working across databases acquired with different illumination conditions. Expression variations, pose

  2. Contemporary Koreans’ Perceptions of Facial Beauty

    Directory of Open Access Journals (Sweden)

    Seung Chul Rhee

    2017-09-01

    Full Text Available Background This article aims to investigate the current perceptions of beauty held by the general public and by physicians who perform aesthetic procedures without a specialization in plastic surgery. Methods A cross-sectional interview questionnaire was administered to 290 people in Seoul, South Korea in September 2015. The questionnaire addressed three issues: general attitudes about plastic surgery (Q1), perception of and preferences regarding Korean female celebrities’ facial attractiveness (Q2), and the relative influence of each facial aesthetic subunit on overall facial attractiveness. The survey’s results were gathered by a professional research agency and classified according to the respondent’s gender, age, and job type (95%±5.75% confidence interval). Statistical analysis was performed using SPSS ver. 10.1, calculating one-way analysis of variance with post hoc analysis and Tukey’s t-test. Results Among the respondents, 38.3% were in favor of aesthetic plastic surgery. The most common source of plastic surgery information was the internet (50.0%). The most powerful factor influencing hospital or clinic selection was the postoperative surgical results of acquaintances (74.9%). We created a composite face of an attractive Korean female, representing the facial configuration currently considered appealing to Koreans. Beauty perceptions differed to some degree across gender and generations. We found certain differences in beauty perceptions between general physicians who perform aesthetic procedures and the general public. Conclusions Our study results provide aesthetic plastic surgeons with detailed information about contemporary Korean people’s attitudes toward and perceptions of plastic surgery and the specific characteristics of female Korean faces currently considered attractive, plus trends in these perceptions, which should inform plastic surgeons within their specialized fields.
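
    As a hedged illustration of the one-way ANOVA with Tukey post hoc comparisons mentioned above, the following sketch uses scipy and statsmodels on invented ratings; the group labels, sample sizes, and values are placeholders, not the survey data.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(5)
# Hypothetical attractiveness ratings from three respondent groups.
ratings = {
    "public": rng.normal(6.0, 1.0, 100),
    "physicians": rng.normal(5.5, 1.0, 60),
    "students": rng.normal(6.2, 1.0, 80),
}

f_stat, p_value = f_oneway(*ratings.values())
print(f"One-way ANOVA: F={f_stat:.2f}, p={p_value:.4f}")

# Tukey's HSD post hoc test over all pairwise group comparisons.
scores = np.concatenate(list(ratings.values()))
groups = np.repeat(list(ratings.keys()), [len(v) for v in ratings.values()])
print(pairwise_tukeyhsd(scores, groups, alpha=0.05))
```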

  3. Diplegia facial traumatica Traumatic facial diplegia: a case report

    Directory of Open Access Journals (Sweden)

    J. Fortes-Rego

    1975-12-01

    Full Text Available A case of incomplete bilateral facial paralysis, associated with left-sided hearing loss, following craniocerebral trauma with radiologically demonstrated fractures is reported; some considerations are offered to relate these manifestations to fractures of the temporal bone. A case of traumatic facial diplegia with left partial loss of hearing following head injury is reported. X-rays showed fractures of the occipital and left temporal bones. A review of traumatic facial paralysis is made.

  4. A Brief Review of Facial Emotion Recognition Based on Visual Information

    Science.gov (United States)

    2018-01-01

    Facial emotion recognition (FER) is an important topic in the fields of computer vision and artificial intelligence owing to its significant academic and commercial potential. Although FER can be conducted using multiple sensors, this review focuses on studies that exclusively use facial images, because visual expressions are one of the main information channels in interpersonal communication. This paper provides a brief review of researches in the field of FER conducted over the past decades. First, conventional FER approaches are described along with a summary of the representative categories of FER systems and their main algorithms. Deep-learning-based FER approaches using deep networks enabling “end-to-end” learning are then presented. This review also focuses on an up-to-date hybrid deep-learning approach combining a convolutional neural network (CNN) for the spatial features of an individual frame and long short-term memory (LSTM) for temporal features of consecutive frames. In the later part of this paper, a brief review of publicly available evaluation metrics is given, and a comparison with benchmark results, which are a standard for a quantitative comparison of FER researches, is described. This review can serve as a brief guidebook to newcomers in the field of FER, providing basic knowledge and a general understanding of the latest state-of-the-art studies, as well as to experienced researchers looking for productive directions for future work. PMID:29385749
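
    To make the hybrid spatial-temporal idea above concrete, here is a minimal, hedged PyTorch sketch of a CNN that encodes each frame followed by an LSTM over the frame sequence; the layer sizes, the seven-class output, and the input shape are illustrative assumptions, not the architecture of any specific work cited in the review.

```python
import torch
import torch.nn as nn

class CnnLstmFer(nn.Module):
    """CNN extracts per-frame spatial features; an LSTM models temporal dynamics."""

    def __init__(self, num_classes: int = 7):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),  # one 32-dimensional descriptor per frame
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, frames, channels, height, width)
        b, t, c, h, w = clips.shape
        frame_feats = self.cnn(clips.reshape(b * t, c, h, w)).reshape(b, t, -1)
        _, (hidden, _) = self.lstm(frame_feats)  # last hidden state summarizes the clip
        return self.classifier(hidden[-1])

model = CnnLstmFer()
logits = model(torch.randn(2, 16, 1, 48, 48))  # two clips of 16 grayscale 48x48 frames
print(logits.shape)  # torch.Size([2, 7])
```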

  5. A Brief Review of Facial Emotion Recognition Based on Visual Information.

    Science.gov (United States)

    Ko, Byoung Chul

    2018-01-30

    Facial emotion recognition (FER) is an important topic in the fields of computer vision and artificial intelligence owing to its significant academic and commercial potential. Although FER can be conducted using multiple sensors, this review focuses on studies that exclusively use facial images, because visual expressions are one of the main information channels in interpersonal communication. This paper provides a brief review of researches in the field of FER conducted over the past decades. First, conventional FER approaches are described along with a summary of the representative categories of FER systems and their main algorithms. Deep-learning-based FER approaches using deep networks enabling "end-to-end" learning are then presented. This review also focuses on an up-to-date hybrid deep-learning approach combining a convolutional neural network (CNN) for the spatial features of an individual frame and long short-term memory (LSTM) for temporal features of consecutive frames. In the later part of this paper, a brief review of publicly available evaluation metrics is given, and a comparison with benchmark results, which are a standard for a quantitative comparison of FER researches, is described. This review can serve as a brief guidebook to newcomers in the field of FER, providing basic knowledge and a general understanding of the latest state-of-the-art studies, as well as to experienced researchers looking for productive directions for future work.

  6. A Brief Review of Facial Emotion Recognition Based on Visual Information

    Directory of Open Access Journals (Sweden)

    Byoung Chul Ko

    2018-01-01

    Full Text Available Facial emotion recognition (FER) is an important topic in the fields of computer vision and artificial intelligence owing to its significant academic and commercial potential. Although FER can be conducted using multiple sensors, this review focuses on studies that exclusively use facial images, because visual expressions are one of the main information channels in interpersonal communication. This paper provides a brief review of researches in the field of FER conducted over the past decades. First, conventional FER approaches are described along with a summary of the representative categories of FER systems and their main algorithms. Deep-learning-based FER approaches using deep networks enabling “end-to-end” learning are then presented. This review also focuses on an up-to-date hybrid deep-learning approach combining a convolutional neural network (CNN) for the spatial features of an individual frame and long short-term memory (LSTM) for temporal features of consecutive frames. In the later part of this paper, a brief review of publicly available evaluation metrics is given, and a comparison with benchmark results, which are a standard for a quantitative comparison of FER researches, is described. This review can serve as a brief guidebook to newcomers in the field of FER, providing basic knowledge and a general understanding of the latest state-of-the-art studies, as well as to experienced researchers looking for productive directions for future work.

  7. Automated Analysis of Facial Cues from Videos as a Potential Method for Differentiating Stress and Boredom of Players in Games

    Directory of Open Access Journals (Sweden)

    Fernando Bevilacqua

    2018-01-01

    Full Text Available Facial analysis is a promising approach to detect emotions of players unobtrusively; however, existing approaches are commonly evaluated in contexts not related to games, or facial cues are derived from models not designed for the analysis of emotions during interactions with games. We present a method for automated analysis of facial cues from videos as a potential tool for detecting stress and boredom of players behaving naturally while playing games. Computer vision is used to automatically and unobtrusively extract 7 facial features aimed at detecting the activity of a set of facial muscles. Features are mainly based on the Euclidean distances between facial landmarks and do not rely on predefined facial expressions, training of a model, or the use of facial standards. An empirical evaluation was conducted on video recordings of an experiment involving games as emotion elicitation sources. Results show statistically significant differences in the values of facial features during boring and stressful periods of gameplay for 5 of the 7 features. We believe our approach is more user-tailored, convenient, and better suited for contexts involving games.
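
    A small sketch of the kind of landmark-distance features described above, computed with numpy; the 68-point layout, the chosen landmark pairs, and the face-width normalisation are assumptions for illustration, not the seven features used in the study.

```python
import numpy as np

def landmark_distances(landmarks: np.ndarray, pairs) -> np.ndarray:
    """Euclidean distances between selected landmark pairs for one video frame.

    landmarks: (K, 2) array of (x, y) landmark positions; pairs: list of index
    tuples. The indices used below are illustrative only.
    """
    return np.array([np.linalg.norm(landmarks[i] - landmarks[j]) for i, j in pairs])

# Example: eyebrow-to-eye and mouth-corner distances, normalised by face width
# so the features stay comparable when the player moves relative to the camera.
rng = np.random.default_rng(6)
frame = rng.random((68, 2)) * 200        # stand-in for detected landmarks
pairs = [(19, 37), (24, 44), (48, 54)]   # hypothetical feature pairs
face_width = np.linalg.norm(frame[0] - frame[16])
print(landmark_distances(frame, pairs) / face_width)
```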

  8. People with chronic facial pain perform worse than controls at a facial emotion recognition task, but it is not all about the emotion.

    Science.gov (United States)

    von Piekartz, H; Wallwork, S B; Mohr, G; Butler, D S; Moseley, G L

    2015-04-01

    Alexithymia, or a lack of emotional awareness, is prevalent in some chronic pain conditions and has been linked to poor recognition of others' emotions. Recognising others' emotions from their facial expression involves both emotional and motor processing, but the possible contribution of motor disruption has not been considered. It is possible that poor performance on emotional recognition tasks could reflect problems with emotional processing, motor processing or both. We hypothesised that people with chronic facial pain would be less accurate in recognising others' emotions from facial expressions, would be less accurate in a motor imagery task involving the face, and that performance on both tasks would be positively related. A convenience sample of 19 people (15 females) with chronic facial pain and 19 gender-matched controls participated. They undertook two tasks; in the first task, they identified the facial emotion presented in a photograph. In the second, they identified whether the person in the image had a facial feature pointed towards their left or right side, a well-recognised paradigm to induce implicit motor imagery. People with chronic facial pain performed worse than controls at both tasks: the Facially Expressed Emotion Labelling (FEEL) emotion recognition task and the left/right facial expression task, and performance covaried within participants. We propose that disrupted motor processing may underpin or at least contribute to the difficulty that facial pain patients have in emotion recognition and that further research that tests this proposal is warranted. © 2014 John Wiley & Sons Ltd.

  9. Facial anthropometric differences among gender, ethnicity, and age groups.

    Science.gov (United States)

    Zhuang, Ziqing; Landsittel, Douglas; Benson, Stacey; Roberge, Raymond; Shaffer, Ronald

    2010-06-01

    The impact of race/ethnicity on facial anthropometric data in the US workforce, and hence on the development of personal protective equipment, has not been investigated to any significant degree. The proliferation of minority populations in the US workforce has increased the need to investigate differences in facial dimensions among these workers. The objective of this study was to determine the face shape and size differences among race and age groups from the National Institute for Occupational Safety and Health survey of 3997 US civilian workers. Survey participants were divided into two gender groups, four racial/ethnic groups, and three age groups. Measurements of height, weight, neck circumference, and 18 facial dimensions were collected using traditional anthropometric techniques. A multivariate analysis of the data was performed using Principal Component Analysis. An exploratory analysis of the effect that different demographic factors had on anthropometric features was carried out with a linear model. The 21 anthropometric measurements, body mass index, and the first and second principal component scores were dependent variables, while gender, ethnicity, age, occupation, weight, and height served as independent variables. Gender significantly contributes to size for 19 of 24 dependent variables. African-Americans have statistically shorter, wider, and shallower noses than Caucasians. Hispanic workers have 14 facial features that are significantly larger than those of Caucasians, while their nose protrusion, height, and head length are significantly shorter. The other ethnic group was composed primarily of Asian subjects and has statistically different dimensions from Caucasians for 16 anthropometric values. Nineteen anthropometric values for subjects at least 45 years of age are statistically different from those measured for subjects between 18 and 29 years of age. Workers employed in manufacturing, fire fighting, healthcare, law enforcement, and other occupational
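
    A hedged sketch of the multivariate step described above, reducing a table of facial measurements to principal component scores with scikit-learn; the random data and the two-component choice are placeholders for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Rows are subjects, columns are anthropometric measurements (random stand-ins).
rng = np.random.default_rng(7)
measurements = rng.normal(size=(400, 21))

# Standardize, then project onto the first two principal components.
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(measurements))
print(scores.shape)  # (400, 2)

# The two principal component scores could then serve as dependent variables in
# a linear model with gender, ethnicity, age, occupation, weight, and height as predictors.
```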

  10. Does facial resemblance enhance cooperation?

    Directory of Open Access Journals (Sweden)

    Trang Giang

    Full Text Available Facial self-resemblance has been proposed to serve as a kinship cue that facilitates cooperation between kin. In the present study, facial resemblance was manipulated by morphing stimulus faces with the participants' own faces or control faces (resulting in self-resemblant or other-resemblant composite faces). A norming study showed that the perceived degree of kinship was higher for the participants and the self-resemblant composite faces than for actual first-degree relatives. Effects of facial self-resemblance on trust and cooperation were tested in a paradigm that has proven to be sensitive to facial trustworthiness, facial likability, and facial expression. First, participants played a cooperation game in which the composite faces were shown. Then, likability ratings were assessed. In a source memory test, participants were required to identify old and new faces, and were asked to remember whether the faces belonged to cooperators or cheaters in the cooperation game. Old-new recognition was enhanced for self-resemblant faces in comparison to other-resemblant faces. However, facial self-resemblance had no effects on the degree of cooperation in the cooperation game, on the emotional evaluation of the faces as reflected in the likability judgments, and on the expectation that a face belonged to a cooperator rather than to a cheater. Therefore, the present results are clearly inconsistent with the assumption of an evolved kin recognition module built into the human face recognition system.

  11. Facial Action Units Recognition: A Comparative Study

    NARCIS (Netherlands)

    Popa, M.C.; Rothkrantz, L.J.M.; Wiggers, P.; Braspenning, R.A.C.; Shan, C.

    2011-01-01

    Many approaches to facial expression recognition focus on assessing the six basic emotions (anger, disgust, happiness, fear, sadness, and surprise). Real-life situations proved to produce many more subtle facial expressions. A reliable way of analyzing the facial behavior is the Facial Action Coding

  12. Microbial biofilms on silicone facial prostheses

    NARCIS (Netherlands)

    Ariani, Nina

    2015-01-01

    Facial disfigurements can result from oncologic surgery, trauma and congenital deformities. These disfigurements can be rehabilitated with facial prostheses. Facial prostheses are usually made of silicones. A problem of facial prostheses is that microorganisms can colonize their surface. It is hard

  13. Facial nerve palsy due to birth trauma

    Science.gov (United States)

    Seventh cranial nerve palsy due to birth trauma; Facial palsy - birth trauma; Facial palsy - neonate; Facial palsy - infant ... An infant's facial nerve is also called the seventh cranial nerve. It can be damaged just before or at the time of delivery. ...

  14. Facial transplantation for massive traumatic injuries.

    Science.gov (United States)

    Alam, Daniel S; Chi, John J

    2013-10-01

    This article describes the challenges of facial reconstruction and the role of facial transplantation in certain facial defects and injuries. This information is of value to surgeons assessing facial injuries with massive soft tissue loss or injury. Copyright © 2013 Elsevier Inc. All rights reserved.

  15. Persistent idiopathic facial pain

    DEFF Research Database (Denmark)

    Maarbjerg, Stine; Wolfram, Frauke; Heinskou, Tone Bruvik

    2017-01-01

    Introduction: Persistent idiopathic facial pain (PIFP) is a poorly understood chronic orofacial pain disorder and a differential diagnosis to trigeminal neuralgia. To address the lack of systematic studies in PIFP, we here report clinical characteristics and neuroimaging findings in PIFP. Methods...... pain 7 (13%), hypoesthesia 23 (48%), depression 16 (30%) and other chronic pain conditions 17 (32%) and a low prevalence of stabbing pain 21 (40%), touch-evoked pain 14 (26%) and remission periods 10 (19%). The odds ratio between neurovascular contact and the painful side was 1.4 (95% CI 0.4–4.4, p = 0.565) and the odds ratio between neurovascular contact with displacement of the trigeminal nerve and the painful side was 0.2 (95% CI 0.0–2.1, p = 0.195). Conclusion: PIFP is separated from trigeminal neuralgia both with respect to the clinical characteristics and neuroimaging findings, as NVC was not associated...

  16. When is facial paralysis Bell palsy? Current diagnosis and treatment.

    Science.gov (United States)

    Ahmed, Anwar

    2005-05-01

    Bell palsy is largely a diagnosis of exclusion, but certain features in the history and physical examination help distinguish it from facial paralysis due to other conditions: e.g., abrupt onset with complete, unilateral facial weakness at 24 to 72 hours, and, on the affected side, numbness or pain around the ear, a reduction in taste, and hypersensitivity to sounds. Corticosteroids and antivirals given within 10 days of onset have been shown to help, but Bell palsy resolves spontaneously without treatment in most patients within 6 months.

  17. Ellis-van Creveld syndrome with facial hemiatrophy

    Directory of Open Access Journals (Sweden)

    Bhat Yasmeen

    2010-01-01

    Full Text Available Ellis-van Creveld (EVC) syndrome is a rare autosomal recessive congenital disorder characterized by chondrodysplasia and polydactyly, ectodermal dysplasia and congenital defects of the heart. We present here a case of a 16-year-old short-limbed dwarf with skeletal deformities and bilateral postaxial polydactyly, dysplastic nails and teeth, also having left-sided facial hemiatrophy. The diagnosis of EVC syndrome was made on the basis of clinical and radiological features. To the best of our knowledge, this is the first report of EVC syndrome with facial hemiatrophy in the medical literature from India.

  18. Facial recognition and laser surface scan: a pilot study

    DEFF Research Database (Denmark)

    Lynnerup, Niels; Clausen, Maja-Lisa; Kristoffersen, Agnethe May

    2009-01-01

    Surface scanning of the face of a suspect is presented as a way to better match the facial features with those of a perpetrator from CCTV footage. We performed a simple pilot study where we obtained facial surface scans of volunteers and then in blind trials tried to match these scans with 2D...... photographs of the faces of the volunteers. Fifteen male volunteers were surface scanned using a Polhemus FastSCAN Cobra Handheld Laser Scanner. Three photographs were taken of each volunteer's face in full frontal, profile and from above at an angle of 45 degrees and also 45 degrees laterally. Via special...

  19. Facial recognition in education system

    Science.gov (United States)

    Krithika, L. B.; Venkatesh, K.; Rathore, S.; Kumar, M. Harish

    2017-11-01

    Human beings exploit emotions comprehensively for conveying messages and their resolution. Emotion detection and face recognition can provide an interface between the individuals and technologies. The most successful applications of recognition analysis are recognition of faces. Many different techniques have been used to recognize the facial expressions and emotion detection handle varying poses. In this paper, we approach an efficient method to recognize the facial expressions to track face points and distances. This can automatically identify observer face movements and face expression in image. This can capture different aspects of emotion and facial expressions.

  20. [Presurgical orthodontics for facial asymmetry].

    Science.gov (United States)

    Labarrère, H

    2003-03-01

    As with the treatment of all facial deformities, orthodontic pre-surgical preparation for facial asymmetry should aim at correcting severe occlusal discrepancies not solely on the basis of a narrow occlusal analysis but also in a way that will not disturb the proposed surgical protocol. In addition, facial asymmetries require specific adjustments, difficult to derive and to apply because of their inherent atypical morphological orientation of both alveolar and basal bony support. Three treated cases illustrate different solutions to problems posed by pathological torque: this torque must be considered with respect to proposed surgical changes, within the framework of their limitations and their possible contra-indications.

  1. Details from dignity to decay: facial expression lines in visual arts.

    Science.gov (United States)

    Heckmann, Marc

    2003-10-01

    A number of dermatologic procedures are intended to reduce facial wrinkles. This article considers wrinkles as a statement in art: it explores how frown lines and other facial wrinkles are used in visual art to feature personal peculiarities and accentuate specific feelings or moods. Facial lines as an artistic element emerged with the advanced painting techniques that evolved during the Renaissance and the following periods. The skill to paint fine details, the use of light and shadow, and the understanding of space that allowed for a three-dimensional presentation of the human face were essential prerequisites. Painters used facial lines to emphasize respected values such as dignity, determination, diligence, and experience. Facial lines, however, were often accentuated to portray negative features such as anger, fear, aggression, sadness, exhaustion, and decay. This has reinforced a cultural stigma of facial wrinkles expressing not only age but also misfortune, dismay, or even tragedy. Removing wrinkles by dermatologic procedures may therefore aim not only to make people look younger but also to liberate them from unwelcome negative connotations. On the other hand, consideration and care must be taken, especially when interfering with facial muscles, to preserve a natural balance of emotional facial expressions.

  2. The Associations between Visual Attention and Facial Expression Identification in Patients with Schizophrenia.

    Science.gov (United States)

    Lin, I-Mei; Fan, Sheng-Yu; Huang, Tiao-Lai; Wu, Wan-Ting; Li, Shi-Ming

    2013-12-01

    Visual search is an important attention process that precedes information processing. Visual search also mediates the relationship between cognitive function (attention) and social cognition (such as facial expression identification). However, the association between visual attention and social cognition in patients with schizophrenia remains unknown. The purposes of this study were to examine the differences in visual search performance and facial expression identification between patients with schizophrenia and normal controls, and to explore the relationship between visual search performance and facial expression identification in patients with schizophrenia. Fourteen patients with schizophrenia (mean age=46.36±6.74) and 15 normal controls (mean age=40.87±9.33) participated in this study. A visual search task, including feature search and conjunction search, and the Japanese and Caucasian Facial Expressions of Emotion were administered. Patients with schizophrenia had worse visual search performance in both feature search and conjunction search than normal controls, as well as worse facial expression identification, especially for surprise and sadness. In addition, there were negative associations between visual search performance and facial expression identification in patients with schizophrenia, especially for surprise and sadness. This phenomenon was not observed in normal controls. Patients with schizophrenia who had visual search deficits showed impaired facial expression identification. Improving visual search ability and facial expression identification may improve their social function and interpersonal relationships.

  3. FaceWarehouse: a 3D facial expression database for visual computing.

    Science.gov (United States)

    Cao, Chen; Weng, Yanlin; Zhou, Shun; Tong, Yiying; Zhou, Kun

    2014-03-01

    We present FaceWarehouse, a database of 3D facial expressions for visual computing applications. We use Kinect, an off-the-shelf RGBD camera, to capture 150 individuals aged 7-80 from various ethnic backgrounds. For each person, we captured the RGBD data of her different expressions, including the neutral expression and 19 other expressions such as mouth-opening, smile, kiss, etc. For every RGBD raw data record, a set of facial feature points on the color image such as eye corners, mouth contour, and the nose tip are automatically localized, and manually adjusted if better accuracy is required. We then deform a template facial mesh to fit the depth data as closely as possible while matching the feature points on the color image to their corresponding points on the mesh. Starting from these fitted face meshes, we construct a set of individual-specific expression blendshapes for each person. These meshes with consistent topology are assembled as a rank-3 tensor to build a bilinear face model with two attributes: identity and expression. Compared with previous 3D facial databases, for every person in our database, there is a much richer matching collection of expressions, enabling depiction of most human facial actions. We demonstrate the potential of FaceWarehouse for visual computing with four applications: facial image manipulation, face component transfer, real-time performance-based facial image animation, and facial animation retargeting from video to image.
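
    To illustrate the bilinear (identity and expression) structure described above, here is a minimal numpy sketch that contracts a rank-3 core tensor with identity and expression weight vectors to produce a face mesh; the tensor sizes and random values are placeholders, not FaceWarehouse data.

```python
import numpy as np

rng = np.random.default_rng(8)

# Core tensor of a bilinear face model: (vertex coordinates, identity modes, expression modes).
num_vertices, num_ids, num_exprs = 5000, 50, 20
core = rng.random((3 * num_vertices, num_ids, num_exprs))

id_weights = rng.random(num_ids)      # identity attribute
expr_weights = rng.random(num_exprs)  # expression attribute

# Contract the rank-3 tensor with both weight vectors to generate a face mesh.
face = np.einsum('vie,i,e->v', core, id_weights, expr_weights).reshape(-1, 3)
print(face.shape)  # (5000, 3)
```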

  4. Dynamic Facial Prosthetics for Sufferers of Facial Paralysis

    Directory of Open Access Journals (Sweden)

    Fergal Coulter

    2011-10-01

    Full Text Available Background: This paper discusses the various methods and the materials for the fabrication of active artificial facial muscles. The primary use for these will be the reanimation of paralysed or atrophied muscles in sufferers of non-recoverable unilateral facial paralysis. Method: The prosthetic solution described in this paper is based on sensing muscle motion of the contralateral healthy muscles and replicating that motion across a patient's paralysed side of the face, via solid state and thin film actuators. The development of this facial prosthetic device focused on recreating a varying-intensity smile, with emphasis on timing, displacement and the appearance of the wrinkles and folds that commonly appear around the nose and eyes during the expression. An animatronic face was constructed, with actuations being made to a silicone representation of the musculature using multiple shape-memory alloy cascades. Alongside the artificial muscle physical prototype, a facial expression recognition software system was constructed. This forms the basis of an automated calibration and reconfiguration system for the artificial muscles following implantation, so as to suit the implantee's unique physiognomy. Results: An animatronic model face with silicone musculature was designed and built to evaluate the performance of shape-memory alloy artificial muscles, their power control circuitry and software control systems. A dual facial motion sensing system was designed to allow real-time control over the model: a piezoresistive flex sensor to measure physical motion, and a computer vision system to evaluate real-to-artificial muscle performance. Analysis of various facial expressions in real subjects was made, which gives useful data upon which to base the system's parameter limits. Conclusion: The system performed well, and the various strengths and shortcomings of the materials and methods are reviewed and considered for the next research phase, when new polymer-based artificial muscles are constructed

  5. Neurophysiology of spontaneous facial expressions: I. Motor control of the upper and lower face is behaviorally independent in adults.

    Science.gov (United States)

    Ross, Elliott D; Gupta, Smita S; Adnan, Asif M; Holden, Thomas L; Havlicek, Joseph; Radhakrishnan, Sridhar

    2016-03-01

    Facial expressions are described traditionally as monolithic entities. However, humans have the capacity to produce facial blends, in which the upper and lower face simultaneously display different emotional expressions. This, in turn, has led to the Component Theory of facial expressions. Recent neuroanatomical studies in monkeys have demonstrated that there are separate cortical motor areas for controlling the upper and lower face that, presumably, also occur in humans. The lower face is represented on the posterior ventrolateral surface of the frontal lobes in the primary motor and premotor cortices and the upper face is represented on the medial surface of the posterior frontal lobes in the supplementary motor and anterior cingulate cortices. Our laboratory has been engaged in a series of studies exploring the perception and production of facial blends. Using high-speed videography, we began measuring the temporal aspects of facial expressions to develop a more complete understanding of the neurophysiology underlying facial expressions and facial blends. The goal of the research presented here was to determine if spontaneous facial expressions in adults are predominantly monolithic or exhibit independent motor control of the upper and lower face. We found that spontaneous facial expressions are very complex and that the motor control of the upper and lower face is overwhelmingly independent, thus robustly supporting the Component Theory of facial expressions. Seemingly monolithic expressions, be they full facial or facial blends, are most likely the result of a timing coincidence rather than synchronous coordination between the ventrolateral and medial cortical motor areas responsible for controlling the lower and upper face, respectively. In addition, we found evidence that the right and left face may also exhibit independent motor control, thus supporting the concept that spontaneous facial expressions are organized predominantly across the horizontal facial

  6. Facial exercises for facial rejuvenation: a control group study.

    Science.gov (United States)

    De Vos, Marie-Camille; Van den Brande, Helen; Boone, Barbara; Van Borsel, John

    2013-01-01

    Facial exercises are a noninvasive alternative to medical approaches to facial rejuvenation. Logopedists could be involved in providing these exercises. Little research has been conducted, however, on the effectiveness of exercises for facial rejuvenation. This study assessed the effectiveness of 4 exercises purportedly reducing wrinkles and sagging of the facial skin. A control group study was conducted with 18 participants, 9 of whom (the experimental group) underwent daily training for 7 weeks. Pictures taken before and after 7 weeks of 5 facial areas (forehead, nasolabial folds, area above the upper lip, jawline and area under the chin) were evaluated by a panel of laypersons. In addition, the participants of the experimental group evaluated their own pictures. Evaluation included the pairwise presentation of pictures before and after 7 weeks and scoring of the same pictures by means of visual analogue scales in a random presentation. Only one significant difference was found between the control and experimental group. In the experimental group, the picture after therapy of the upper lip was more frequently chosen to be the younger-looking one by the panel. It cannot be concluded that facial exercises are effective. More systematic research is needed. © 2013 S. Karger AG, Basel.

  7. Dispersion assessment in the location of facial landmarks on photographs.

    Science.gov (United States)

    Campomanes-Álvarez, B R; Ibáñez, O; Navarro, F; Alemán, I; Cordón, O; Damas, S

    2015-01-01

    The morphological assessment of facial features using photographs has played an important role in forensic anthropology. The analysis of anthropometric landmarks for determining facial dimensions and angles has been considered in diverse forensic areas. Hence, the quantification of the error associated to the location of facial landmarks seems to be necessary when photographs become a key element of the forensic procedure. In this work, we statistically evaluate the inter- and intra-observer dispersions related to the facial landmark identification on photographs. In the inter-observer experiment, a set of 18 facial landmarks was provided to 39 operators. They were requested to mark only those that they could precisely place on 10 photographs with different poses (frontal, oblique, and lateral views). The frequency of landmark location was studied together with their dispersion. Regarding the intra-observer evaluation, three participants identified 13 facial points on five photographs classified in the frontal and oblique views. Each landmark location was repeated five times at intervals of at least 24 h. The frequency results reveal that glabella, nasion, subnasale, labiale superius, and pogonion obtained the highest location frequency in the three image categories. On the contrary, the lowest rate corresponds to labiale inferius and menton. Meanwhile, zygia, gonia, and gnathion were significantly more difficult to locate than other facial landmarks. They produced a significant effect on the dispersion depending on the pose of the image where they were placed, regardless of the type of observer that positioned them. In particular, zygia and gonia presented a statistically greater variation in the three image poses, while the location of gnathion is less precise in oblique view photographs. Hence, our findings suggest that the latter landmarks tend to be highly variable when determining their exact position.

  8. Enhancement of the facial nerve at MR imaging

    International Nuclear Information System (INIS)

    Gebarski, S.S.; Telian, S.; Niparko, J.

    1990-01-01

    In the few cases studied, normal facial nerves are reported to show no MR enhancement. Because this did not fit clinical experience, the authors designed a retrospective imaging review with anatomic correlation. Between June 1989 and June 1990, 175 patients underwent focused temporal bone MR imaging before and after administration of intravenous gadopentetate dimeglumine (0.1 mmol/kg). Exclusion criteria for the study included facial nerve dysfunction (subjective or objective); facial nerve mass; central nervous system infection, inflammation, or trauma; neurofibromatosis; or previous cranial surgery of any type. The following sequences were reviewed: GE 1.5-T axial spin-echo TR 567 msec, TE 20 msec, 256 x 192, 2.0 excitations, 20-cm field of view, 3-mm section thickness. Imaging analysis was a side-by side comparison of the images and region-of-interest quantified signal intensity. Anatomic correlation included a comparison with dissection and axial histologic sections. Ninety-three patients (aged 15-75 years) were available for imaging analysis after the exclusionary criteria were applied. With 46 patients (92 facial nerves) analyzed, they found that 76 nerves (83%) showed easily visible gadopentetate dimeglumine enhancement, especially about the geniculate ganglia. Sixteen (17%) of the 92 nerves did not show visible enhancement, but region-of-interest analysis showed increased intensity after gadopentetate dimeglumine administration. Sixteen patients (42%) showed right-to-left asymmetry in facial nerve enhancement. The facial nerves showed enhancement in the geniculate, tympanic, and fallopian portions; the facial nerve within the IAC showed no enhancement. This corresponded exactly with the topographic features of a circummeural arterial/venous plexus seen on the anatomic preparations

  9. Automatic Emotional State Detection using Facial Expression Dynamic in Videos

    Directory of Open Access Journals (Sweden)

    Hongying Meng

    2014-11-01

    Full Text Available In this paper, an automatic emotion detection system is built for a computer or machine to detect the emotional state from facial expressions in human-computer communication. Firstly, dynamic motion features are extracted from facial expression videos, and then advanced machine learning methods for classification and regression are used to predict the emotional states. The system is evaluated on two publicly available datasets, i.e. GEMEP_FERA and AVEC2013, and satisfactory performance is achieved in comparison with the provided baseline results. With this emotional state detection capability, a machine can read the facial expression of its user automatically. This technique can be integrated into applications such as smart robots, interactive games and smart surveillance systems.
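    The paper's exact features and learners are not reproduced here; the sketch below only illustrates the general shape of such a pipeline, using frame differencing as a crude dynamic motion feature and a closed-form ridge regression on synthetic data. All array shapes, labels, and the regularization value are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(0)

        def motion_features(video):
            """Mean absolute frame difference per frame pair: a crude dynamic descriptor."""
            diffs = np.abs(np.diff(video.astype(float), axis=0))
            return diffs.mean(axis=(1, 2))            # one value per frame transition

        # Synthetic stand-in data: 50 clips of 30 frames of 32x32 grey images,
        # each with a made-up continuous emotion label (e.g. arousal).
        videos = rng.random((50, 30, 32, 32))
        labels = rng.random(50)

        X = np.stack([motion_features(v) for v in videos])   # (50, 29) feature matrix
        y = labels

        # Ridge regression, closed form: w = (X^T X + lambda I)^-1 X^T y
        lam = 1.0
        w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
        pred = X @ w
        print("training RMSE:", np.sqrt(np.mean((pred - y) ** 2)))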

  10. Representing Color Ensembles.

    Science.gov (United States)

    Chetverikov, Andrey; Campana, Gianluca; Kristjánsson, Árni

    2017-10-01

    Colors are rarely uniform, yet little is known about how people represent color distributions. We introduce a new method for studying color ensembles based on intertrial learning in visual search. Participants looked for an oddly colored diamond among diamonds with colors taken from either uniform or Gaussian color distributions. On test trials, the targets had various distances in feature space from the mean of the preceding distractor color distribution. Targets on test trials therefore served as probes into probabilistic representations of distractor colors. Test-trial response times revealed a striking similarity between the physical distribution of colors and their internal representations. The results demonstrate that the visual system represents color ensembles in a more detailed way than previously thought, coding not only mean and variance but, most surprisingly, the actual shape (uniform or Gaussian) of the distribution of colors in the environment.

  11. Sympathicotomy for isolated facial blushing

    DEFF Research Database (Denmark)

    Licht, Peter Bjørn; Pilegaard, Hans K; Ladegaard, Lars

    2012-01-01

    Background. Facial blushing is one of the most peculiar of human expressions. The pathophysiology is unclear, and the prevalence is unknown. Thoracoscopic sympathectomy may cure the symptom and is increasingly used in patients with isolated facial blushing. The evidence base for the optimal level...... of targeting the sympathetic chain is limited to retrospective case studies. We present a randomized clinical trial. Methods. 100 patients were randomized (web-based, single-blinded) to rib-oriented (R2 or R2-R3) sympathicotomy for isolated facial blushing at two university hospitals during a 6-year period...... between R2 and R2-R3 sympathicotomy for isolated facial blushing. Both were effective, and QOL increased significantly. Despite very frequent side effects, the vast majority of patients were satisfied. Surprisingly, many patients experienced mild recurrent symptoms within the first year; this should...

  12. Measuring facial expression of emotion.

    Science.gov (United States)

    Wolf, Karsten

    2015-12-01

    Research into emotions has increased in recent decades, especially on the subject of recognition of emotions. However, studies of the facial expressions of emotion were compromised by technical problems with visible video analysis and electromyography in experimental settings. These have only recently been overcome. There have been new developments in the field of automated computerized facial recognition, allowing real-time identification of facial expression in social environments. This review addresses three approaches to measuring facial expression of emotion and describes their specific contributions to understanding emotion in the healthy population and in persons with mental illness. Despite recent progress, studies on human emotions have been hindered by the lack of consensus on an emotion theory suited to examining the dynamic aspects of emotion and its expression. Studying expression of emotion in patients with mental health conditions for diagnostic and therapeutic purposes will profit from theoretical and methodological progress.

  13. Imaging of the facial nerve

    Energy Technology Data Exchange (ETDEWEB)

    Veillon, F. [Service de Radiologie I, Hopital de Hautepierre, 67098 Strasbourg Cedex (France)], E-mail: Francis.Veillon@chru-strasbourg.fr; Ramos-Taboada, L.; Abu-Eid, M. [Service de Radiologie I, Hopital de Hautepierre, 67098 Strasbourg Cedex (France); Charpiot, A. [Service d' ORL, Hopital de Hautepierre, 67098 Strasbourg Cedex (France); Riehm, S. [Service de Radiologie I, Hopital de Hautepierre, 67098 Strasbourg Cedex (France)

    2010-05-15

    The facial nerve is responsible for the motor innervation of the face. It has a visceral motor function (lacrimal, submandibular, and sublingual glands and secretion of the nose); it conveys a large part of the taste fibers and participates in the general sensory innervation of the auricle (skin of the concha) and the wall of the external auditory meatus. Facial mimicry, the production of tears, nasal flow and salivation all depend on the facial nerve. In order to image the facial nerve, it is mandatory to be knowledgeable about its normal anatomy, including the course of its efferent and afferent fibers, and about relevant technical considerations regarding CT and MR to be able to achieve high-resolution images of the nerve.

  14. A computational model of the development of separate representations of facial identity and expression in the primate visual system.

    Science.gov (United States)

    Tromans, James Matthew; Harris, Mitchell; Stringer, Simon Maitland

    2011-01-01

    Experimental studies have provided evidence that the visual processing areas of the primate brain represent facial identity and facial expression within different subpopulations of neurons. For example, in non-human primates there is evidence that cells within the inferior temporal gyrus (TE) respond primarily to facial identity, while cells within the superior temporal sulcus (STS) respond to facial expression. More recently, it has been found that the orbitofrontal cortex (OFC) of non-human primates contains some cells that respond exclusively to changes in facial identity, while other cells respond exclusively to facial expression. How might the primate visual system develop physically separate representations of facial identity and expression given that the visual system is always exposed to simultaneous combinations of facial identity and expression during learning? In this paper, a biologically plausible neural network model, VisNet, of the ventral visual pathway is trained on a set of carefully designed cartoon faces with different identities and expressions. The VisNet model architecture is composed of a hierarchical series of four Self-Organising Maps (SOMs), with associative learning in the feedforward synaptic connections between successive layers. During learning, the network develops separate clusters of cells that respond exclusively to either facial identity or facial expression. We interpret the performance of the network in terms of the learning properties of SOMs, which are able to exploit the statistical independence between facial identity and expression.
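    VisNet itself is not reproduced here; the fragment below is only a generic self-organising-map update rule (best-matching unit plus Gaussian neighbourhood learning) of the kind such layered models are built from, driven by random input vectors. Grid size, learning rate, and neighbourhood width are arbitrary choices for illustration.

        import numpy as np

        rng = np.random.default_rng(1)
        grid_h, grid_w, dim = 8, 8, 16            # 8x8 map of 16-dimensional weight vectors
        weights = rng.random((grid_h, grid_w, dim))
        coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij"), axis=-1)

        def som_step(x, weights, lr=0.1, sigma=1.5):
            # Best-matching unit: node whose weight vector is closest to the input
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # Gaussian neighbourhood around the BMU pulls nearby nodes towards the input
            grid_d2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
            h = np.exp(-grid_d2 / (2 * sigma ** 2))[..., None]
            weights += lr * h * (x - weights)
            return weights

        for _ in range(1000):
            weights = som_step(rng.random(dim), weights)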

  15. A computational model of the development of separate representations of facial identity and expression in the primate visual system.

    Directory of Open Access Journals (Sweden)

    James Matthew Tromans

    Full Text Available Experimental studies have provided evidence that the visual processing areas of the primate brain represent facial identity and facial expression within different subpopulations of neurons. For example, in non-human primates there is evidence that cells within the inferior temporal gyrus (TE) respond primarily to facial identity, while cells within the superior temporal sulcus (STS) respond to facial expression. More recently, it has been found that the orbitofrontal cortex (OFC) of non-human primates contains some cells that respond exclusively to changes in facial identity, while other cells respond exclusively to facial expression. How might the primate visual system develop physically separate representations of facial identity and expression given that the visual system is always exposed to simultaneous combinations of facial identity and expression during learning? In this paper, a biologically plausible neural network model, VisNet, of the ventral visual pathway is trained on a set of carefully designed cartoon faces with different identities and expressions. The VisNet model architecture is composed of a hierarchical series of four Self-Organising Maps (SOMs), with associative learning in the feedforward synaptic connections between successive layers. During learning, the network develops separate clusters of cells that respond exclusively to either facial identity or facial expression. We interpret the performance of the network in terms of the learning properties of SOMs, which are able to exploit the statistical independence between facial identity and expression.

  16. Pediatric facial injuries: It's management.

    Science.gov (United States)

    Singh, Geeta; Mohammad, Shadab; Pal, U S; Hariram; Malkunje, Laxman R; Singh, Nimisha

    2011-07-01

    Facial injuries in children always present a challenge with respect to their diagnosis and management. Since these children are still growing, every care should be taken so that the overall growth pattern of the facial skeleton is not later jeopardized. The aim was to assess the most feasible method for the management of facial injuries in children without hampering facial growth. Sixty child patients with facial trauma were selected randomly for this study. On the basis of examination and investigations, a suitable management approach involving rest and observation, open or closed reduction and immobilization, trans-osseous (TO) wiring, mini bone plate fixation, splinting and replantation, elevation and fixation of the zygoma, etc. was carried out. In our study, falls were the predominant cause of facial injuries in children. There was a 1.09% incidence of facial injuries in children up to 16 years of age amongst the total patients. The age-wise distribution of fractures amongst groups (I, II and III) was found to be 26.67%, 51.67% and 21.67%, respectively. The male-to-female patient ratio was 3:1. The majority of the cases of facial injuries were seen in Group II patients (6-11 years), i.e. 51.67%. Mandibular fracture was found to be the most common fracture (0.60%), followed by dentoalveolar (0.27%), mandibular + midface (0.07%) and midface (0.02%) fractures. Most of the mandibular fractures were found in the parasymphysis region. Simple fractures were the most common in the mandible. Most of the mandibular and midface fractures in children were amenable to conservative therapies, except a few which required surgical intervention.

  17. Peripheral facial palsy in children.

    Science.gov (United States)

    Yılmaz, Unsal; Cubukçu, Duygu; Yılmaz, Tuba Sevim; Akıncı, Gülçin; Ozcan, Muazzez; Güzel, Orkide

    2014-11-01

    The aim of this study is to evaluate the types and clinical characteristics of peripheral facial palsy in children. The hospital charts of children diagnosed with peripheral facial palsy were reviewed retrospectively. A total of 81 children (42 female and 39 male) with a mean age of 9.2 ± 4.3 years were included in the study. Causes of facial palsy were 65 (80.2%) idiopathic (Bell palsy) facial palsy, 9 (11.1%) otitis media/mastoiditis, and tumor, trauma, congenital facial palsy, chickenpox, Melkersson-Rosenthal syndrome, enlarged lymph nodes, and familial Mediterranean fever (each 1; 1.2%). Five (6.1%) patients had recurrent attacks. In patients with Bell palsy, female/male and right/left ratios were 36/29 and 35/30, respectively. Of them, 31 (47.7%) had a history of preceding infection. The overall rate of complete recovery was 98.4%. A wide variety of disorders can present with peripheral facial palsy in children. Therefore, careful investigation and differential diagnosis is essential. © The Author(s) 2013.

  18. The role of great auricular-facial nerve neurorrhaphy in facial nerve damage

    OpenAIRE

    Sun, Yan; Liu, Limei; Han, Yuechen; Xu, Lei; Zhang, Daogong; Wang, Haibo

    2015-01-01

    Background: The facial nerve is easily damaged, and there are many reconstructive methods for it, such as facial nerve end-to-end anastomosis, the great auricular nerve graft, the sural nerve graft, or hypoglossal-facial nerve anastomosis. However, there is still little research on great auricular-facial nerve neurorrhaphy. The aim of the present study was to identify the role of great auricular-facial nerve neurorrhaphy and its mechanism. Methods: Rat models of facia...

  19. [Measuring impairment of facial affects recognition in schizophrenia. Preliminary study of the facial emotions recognition task (TREF)].

    Science.gov (United States)

    Gaudelus, B; Virgile, J; Peyroux, E; Leleu, A; Baudouin, J-Y; Franck, N

    2015-06-01

    The impairment of social cognition, including facial affects recognition, is a well-established trait in schizophrenia, and specific cognitive remediation programs focusing on facial affects recognition have been developed by different teams worldwide. However, even though social cognitive impairments have been confirmed, previous studies have also shown heterogeneity of the results between different subjects. Therefore, personal abilities should be assessed individually before proposing such programs. Most research teams apply tasks based on facial affects recognition by Ekman et al. or Gur et al. However, these tasks are not easily applicable in clinical practice. Here, we present the Facial Emotions Recognition Test (TREF), which is designed to identify facial affects recognition impairments in clinical practice. The test is composed of 54 photos and evaluates abilities in the recognition of six universal emotions (joy, anger, sadness, fear, disgust and contempt). Each of these emotions is represented with colored photos of 4 different models (two men and two women) at nine intensity levels from 20 to 100%. Each photo is presented for 10 seconds; no time limit for responding is applied. The present study compared the scores of the TREF test in a sample of healthy controls (64 subjects) and people with stabilized schizophrenia (45 subjects) according to the DSM IV-TR criteria. We analysed global scores for all emotions, as well as subscores for each emotion, between these two groups, taking into account gender differences. Our results were consistent with previous findings. Applying TREF, we confirmed an impairment in facial affects recognition in schizophrenia by showing significant differences between the two groups in their global results (76.45% for healthy controls versus 61.28% for people with schizophrenia), as well as in subscores for each emotion except for joy. Scores for women were significantly higher than for men in the population

  20. A Real-Time Interactive System for Facial Makeup of Peking Opera

    Science.gov (United States)

    Cai, Feilong; Yu, Jinhui

    In this paper we present a real-time interactive system for creating the facial makeup of Peking Opera. First, we analyze the process of drawing facial makeup and the characteristics of the patterns used in it, and then construct an SVG pattern bank based on local features such as the eyes, nose, mouth, etc. Next, we pick SVG patterns from the pattern bank and compose them into a new facial makeup. We offer a vector-based free form deformation (FFD) tool to edit patterns and, based on the editing, our system automatically creates texture maps for a template head model. Finally, the facial makeup is rendered on the 3D head model in real time. Our system offers flexibility in designing and synthesizing various 3D facial makeups. Potential applications of the system include decoration design, digital museum exhibition and Peking Opera education.

  1. Real Time 3D Facial Movement Tracking Using a Monocular Camera

    Directory of Open Access Journals (Sweden)

    Yanchao Dong

    2016-07-01

    Full Text Available The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid movement and mouth movement. The framework first utilizes the Discriminative Shape Regression method to locate the facial feature points on the 2D image and fuses the 2D data with a 3D face model using an Extended Kalman Filter to yield 3D facial movement information. An alternating optimization strategy is adopted to fit different persons automatically. Experiments show that the proposed framework can track 3D facial movement across various poses and illumination conditions. Given the real face scale, the framework can track the eyelid with an error of 1 mm and the mouth with an error of 2 mm. The tracking result is reliable for expression analysis or mental state inference.
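    The paper's full pose-and-animation filter is beyond the scope of an abstract; the sketch below shows only a standard linear Kalman update of the kind an Extended Kalman Filter linearises around, fusing noisy 2D landmark measurements into a constant-position state. The state, noise covariances, and measurement values are all invented.

        import numpy as np

        # State: landmark position (x, y); measurement: the same position observed with noise.
        x = np.zeros(2)                 # state estimate
        P = np.eye(2) * 1.0             # state covariance
        H = np.eye(2)                   # measurement model (identity here)
        R = np.eye(2) * 0.5             # measurement noise covariance
        Q = np.eye(2) * 0.01            # process noise covariance

        def kalman_update(x, P, z):
            # Predict step under a constant-position motion model
            P_pred = P + Q
            # Update step with measurement z
            S = H @ P_pred @ H.T + R
            K = P_pred @ H.T @ np.linalg.inv(S)
            x_new = x + K @ (z - H @ x)
            P_new = (np.eye(2) - K @ H) @ P_pred
            return x_new, P_new

        for z in [np.array([10.2, 5.1]), np.array([10.0, 5.3]), np.array([9.8, 4.9])]:
            x, P = kalman_update(x, P, z)
        print("filtered position:", x)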

  2. The Child Affective Facial Expression (CAFE) set: validity and reliability from untrained adults.

    Science.gov (United States)

    LoBue, Vanessa; Thrasher, Cat

    2014-01-01

    Emotional development is one of the largest and most productive areas of psychological research. For decades, researchers have been fascinated by how humans respond to, detect, and interpret emotional facial expressions. Much of the research in this area has relied on controlled stimulus sets of adults posing various facial expressions. Here we introduce a new stimulus set of emotional facial expressions into the domain of research on emotional development-The Child Affective Facial Expression set (CAFE). The CAFE set features photographs of a racially and ethnically diverse group of 2- to 8-year-old children posing for six emotional facial expressions-angry, fearful, sad, happy, surprised, and disgusted-and a neutral face. In the current work, we describe the set and report validity and reliability data on the set from 100 untrained adult participants.

  3. The Child Affective Facial Expression (CAFE) Set: Validity and Reliability from Untrained Adults

    Directory of Open Access Journals (Sweden)

    Vanessa eLoBue

    2015-01-01

    Full Text Available Emotional development is one of the largest and most productive areas of psychological research. For decades, researchers have been fascinated by how humans respond to, detect, and interpret emotional facial expressions. Much of the research in this area has relied on controlled stimulus sets of adults posing various facial expressions. Here we introduce a new stimulus set of emotional facial expressions into the domain of research on emotional development—The Child Affective Facial Expression set (CAFE). The CAFE set features photographs of a racially and ethnically diverse group of 2- to 8-year-old children posing for 6 emotional facial expressions—angry, fearful, sad, happy, surprised, and disgusted—and a neutral face. In the current work, we describe the set and report validity and reliability data on the set from 100 untrained adult participants.

  4. Keloid Skin Flap Retention and Resurfacing in Facial Keloid Treatment.

    Science.gov (United States)

    Liu, Shu; Liang, Weizhong; Song, Kexin; Wang, Youbin

    2018-02-01

    Facial keloids commonly occur in young patients. Multiple keloid masses often converge into a large lesion on the face, representing a significant obstacle to keloid mass excision and reconstruction. We describe a new surgical method that excises the keloid mass and resurfaces the wound by saving the keloid skin as a skin flap during facial keloid treatment. Forty-five patients with facial keloids were treated in our department between January 2013 and January 2016. Multiple incisions were made along the facial esthetic line on the keloid mass. The keloid skin was dissected and elevated as a skin flap with one or two pedicles. The scar tissue in the keloid was then removed through the incision. The wound was covered with the preserved keloid skin flap and closed without tension. Radiotherapy and hyperbaric oxygen were applied after surgery. Patients underwent follow-up examinations 6 and 12 months after surgery. Of the 45 total patients, 32 patients were cured and seven patients were partially cured. The efficacy rate was 88.9%, and 38 patients (84.4%) were satisfied with the esthetic result. We describe an efficacious and esthetically satisfactory surgical method for managing facial keloids by preserving the keloid skin as a skin flap. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .

  5. The masculinity paradox: facial masculinity and beardedness interact to determine women's ratings of men's facial attractiveness.

    Science.gov (United States)

    Dixson, B J W; Sulikowski, D; Gouda-Vossos, A; Rantala, M J; Brooks, R C

    2016-11-01

    In many species, male secondary sexual traits have evolved via female choice as they confer indirect (i.e. genetic) benefits or direct benefits such as enhanced fertility or survival. In humans, the role of men's characteristically masculine androgen-dependent facial traits in determining men's attractiveness has presented an enduring paradox in studies of human mate preferences. Male-typical facial features such as a pronounced brow ridge and a more robust jawline may signal underlying health, whereas beards may signal men's age and masculine social dominance. However, masculine faces are judged as more attractive for short-term relationships over less masculine faces, whereas beards are judged as more attractive than clean-shaven faces for long-term relationships. Why such divergent effects occur between preferences for two sexually dimorphic traits remains unresolved. In this study, we used computer graphic manipulation to morph male faces varying in facial hair from clean-shaven, light stubble, heavy stubble and full beards to appear more (+25% and +50%) or less (-25% and -50%) masculine. Women (N = 8520) were assigned to treatments wherein they rated these stimuli for physical attractiveness in general, for a short-term liaison or a long-term relationship. Results showed a significant interaction between beardedness and masculinity on attractiveness ratings. Masculinized and, to an even greater extent, feminized faces were less attractive than unmanipulated faces when all were clean-shaven, and stubble and beards dampened the polarizing effects of extreme masculinity and femininity. Relationship context also had effects on ratings, with facial hair enhancing long-term, and not short-term, attractiveness. Effects of facial masculinization appear to have been due to small differences in the relative attractiveness of each masculinity level under the three treatment conditions and not to any change in the order of their attractiveness. Our findings suggest that

  6. Survey of methods of facial palsy documentation in use by members of the Sir Charles Bell Society

    NARCIS (Netherlands)

    Fattah, A.Y.; Gavilan, J.; Hadlock, T.A.; Marcus, J.R.; Marres, H.A.; Nduka, C.; Slattery, W.H.; Snyder-Warwick, A.K.

    2014-01-01

    OBJECTIVES/HYPOTHESIS: Facial palsy manifests a broad array of deficits affecting function, form, and psychological well-being. Assessment scales were introduced to standardize and document the features of facial palsy and to facilitate the exchange of information and comparison of outcomes. The aim

  7. Temporal neural mechanisms underlying conscious access to different levels of facial stimulus contents.

    Science.gov (United States)

    Hsu, Shen-Mou; Yang, Yu-Fang

    2018-04-01

    An important issue facing the empirical study of consciousness concerns how the contents of incoming stimuli gain access to conscious processing. According to classic theories, facial stimuli are processed in a hierarchical manner. However, it remains unclear how the brain determines which level of stimulus content is consciously accessible when facing an incoming facial stimulus. Accordingly, with a magnetoencephalography technique, this study aims to investigate the temporal dynamics of the neural mechanism mediating which level of stimulus content is consciously accessible. Participants were instructed to view masked target faces at threshold so that, according to behavioral responses, their perceptual awareness alternated from consciously accessing facial identity in some trials to being able to consciously access facial configuration features but not facial identity in other trials. Conscious access at these two levels of facial contents were associated with a series of differential neural events. Before target presentation, different patterns of phase angle adjustment were observed between the two types of conscious access. This effect was followed by stronger phase clustering for awareness of facial identity immediately during stimulus presentation. After target onset, conscious access to facial identity, as opposed to facial configural features, was able to elicit more robust late positivity. In conclusion, we suggest that the stages of neural events, ranging from prestimulus to stimulus-related activities, may operate in combination to determine which level of stimulus contents is consciously accessed. Conscious access may thus be better construed as comprising various forms that depend on the level of stimulus contents accessed. NEW & NOTEWORTHY The present study investigates how the brain determines which level of stimulus contents is consciously accessible when facing an incoming facial stimulus. Using magnetoencephalography, we show that prestimulus

  8. The Mysterious Noh Mask: Contribution of Multiple Facial Parts to the Recognition of Emotional Expressions

    Science.gov (United States)

    Miyata, Hiromitsu; Nishimura, Ritsuko; Okanoya, Kazuo; Kawai, Nobuyuki

    2012-01-01

    Background A Noh mask worn by expert actors when performing in a Japanese traditional Noh drama is suggested to convey countless different facial expressions according to different angles of head/body orientation. The present study addressed the question of how different facial parts of a Noh mask, including the eyebrows, the eyes, and the mouth, may contribute to different emotional expressions. Both experimental situations of active creation and passive recognition of emotional facial expressions were introduced. Methodology/Principal Findings In Experiment 1, participants either created happy or sad facial expressions, or imitated a face that looked up or down, by actively changing each facial part of a Noh mask image presented on a computer screen. For an upward tilted mask, the eyebrows and the mouth shared common features with sad expressions, whereas the eyes shared features with happy expressions. This contingency tended to be reversed for a downward tilted mask. Experiment 2 further examined which facial parts of a Noh mask are crucial in determining emotional expressions. Participants were exposed to synthesized Noh mask images with different facial parts expressing different emotions. Results clearly revealed that participants primarily used the shape of the mouth in judging emotions. The facial images having the mouth of an upward/downward tilted Noh mask strongly tended to be evaluated as sad/happy, respectively. Conclusions/Significance The results suggest that Noh masks express chimeric emotional patterns, with different facial parts conveying different emotions. This appears consistent with the principles of Noh, which highly appreciate subtle and composite emotional expressions, as well as with the mysterious facial expressions observed in Western art. It was further demonstrated that the mouth serves as a diagnostic feature in characterizing the emotional expressions. This indicates the superiority of biologically-driven factors over the traditionally

  9. Facial Contrast Is a Cross-Cultural Cue for Perceiving Age.

    Science.gov (United States)

    Porcheron, Aurélie; Mauger, Emmanuelle; Soppelsa, Frédérique; Liu, Yuli; Ge, Liezhong; Pascalis, Olivier; Russell, Richard; Morizot, Frédérique

    2017-01-01

    Age is a fundamental social dimension and a youthful appearance is of importance for many individuals, perhaps because it is a relevant predictor of aspects of health, facial attractiveness and general well-being. We recently showed that facial contrast-the color and luminance difference between facial features and the surrounding skin-is age-related and a cue to age perception of Caucasian women. Specifically, aspects of facial contrast decrease with age in Caucasian women, and Caucasian female faces with higher contrast look younger (Porcheron et al., 2013). Here we investigated faces of other ethnic groups and raters of other cultures to see whether facial contrast is a cross-cultural youth-related attribute. Using large sets of full face color photographs of Chinese, Latin American and black South African women aged 20-80, we measured the luminance and color contrast between the facial features (the eyes, the lips, and the brows) and the surrounding skin. Most aspects of facial contrast that were previously found to decrease with age in Caucasian women were also found to decrease with age in the other ethnic groups. Though the overall pattern of changes with age was common to all women, there were also some differences between the groups. In a separate study, individual faces of the 4 ethnic groups were perceived younger by French and Chinese participants when the aspects of facial contrast that vary with age in the majority of faces were artificially increased, but older when they were artificially decreased. Altogether these findings indicate that facial contrast is a cross-cultural cue to youthfulness. Because cosmetics were shown to enhance facial contrast, this work provides some support for the notion that a universal function of cosmetics is to make female faces look younger.
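    The authors' precise contrast measure is not reproduced here; a commonly used luminance-contrast form, (L_skin - L_feature) / (L_skin + L_feature), is sketched below with hypothetical mean luminance values for an eye region and its surrounding skin. The pixel values and region sizes are made up.

        import numpy as np

        def luminance_contrast(feature_region, skin_region):
            """Michelson-style contrast between a facial feature and the surrounding skin.
            Inputs are arrays of pixel luminance values; the output lies in [-1, 1]."""
            lf, ls = float(np.mean(feature_region)), float(np.mean(skin_region))
            return (ls - lf) / (ls + lf)

        rng = np.random.default_rng(2)
        eye_pixels = rng.normal(60, 5, size=500)     # darker feature (hypothetical values)
        skin_pixels = rng.normal(140, 8, size=500)   # brighter surrounding skin
        print(f"eye/skin luminance contrast: {luminance_contrast(eye_pixels, skin_pixels):.3f}")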

  10. Towards Real-Time Facial Landmark Detection in Depth Data Using Auxiliary Information

    Directory of Open Access Journals (Sweden)

    Connah Kendrick

    2018-06-01

    Full Text Available Modern facial motion capture systems employ a two-pronged approach for capturing and rendering facial motion. Visual data (2D) is used for tracking the facial features and predicting facial expression, whereas Depth (3D) data is used to build a series of expressions on 3D face models. An issue with modern research approaches is the use of a single data stream that provides little indication of the 3D facial structure. We compare and analyse the performance of Convolutional Neural Networks (CNN) using visual, Depth and merged data to identify facial features in real-time using a Depth sensor. First, we review the facial landmarking algorithms and its datasets for Depth data. We address the limitation of the current datasets by introducing the Kinect One Expression Dataset (KOED). Then, we propose the use of CNNs for the single data stream and merged data streams for facial landmark detection. We contribute to existing work by performing a full evaluation on which streams are the most effective for the field of facial landmarking. Furthermore, we improve upon the existing work by extending neural networks to predict into 3D landmarks in real-time with additional observations on the impact of using 2D landmarks as auxiliary information. We evaluate the performance by using Mean Square Error (MSE) and Mean Average Error (MAE). We observe that the single data stream predicts accurate facial landmarks on Depth data when auxiliary information is used to train the network. The codes and dataset used in this paper will be made available.
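    The evaluation metrics named in the abstract are straightforward to compute; a minimal sketch with made-up predicted and ground-truth 3D landmark sets follows (the landmark count, units, and noise level are assumptions for illustration only).

        import numpy as np

        rng = np.random.default_rng(3)
        gt = rng.random((68, 3)) * 100             # hypothetical ground-truth 3D landmarks (mm)
        pred = gt + rng.normal(0, 2.0, gt.shape)   # hypothetical predictions with ~2 mm noise

        err = pred - gt
        mse = np.mean(err ** 2)                    # Mean Square Error over all coordinates
        mae = np.mean(np.abs(err))                 # mean absolute error ("Mean Average Error")
        print(f"MSE = {mse:.2f}, MAE = {mae:.2f}")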

  11. Facial Contrast Is a Cross-Cultural Cue for Perceiving Age

    Directory of Open Access Journals (Sweden)

    Aurélie Porcheron

    2017-07-01

    Full Text Available Age is a fundamental social dimension and a youthful appearance is of importance for many individuals, perhaps because it is a relevant predictor of aspects of health, facial attractiveness and general well-being. We recently showed that facial contrast—the color and luminance difference between facial features and the surrounding skin—is age-related and a cue to age perception of Caucasian women. Specifically, aspects of facial contrast decrease with age in Caucasian women, and Caucasian female faces with higher contrast look younger (Porcheron et al., 2013. Here we investigated faces of other ethnic groups and raters of other cultures to see whether facial contrast is a cross-cultural youth-related attribute. Using large sets of full face color photographs of Chinese, Latin American and black South African women aged 20–80, we measured the luminance and color contrast between the facial features (the eyes, the lips, and the brows and the surrounding skin. Most aspects of facial contrast that were previously found to decrease with age in Caucasian women were also found to decrease with age in the other ethnic groups. Though the overall pattern of changes with age was common to all women, there were also some differences between the groups. In a separate study, individual faces of the 4 ethnic groups were perceived younger by French and Chinese participants when the aspects of facial contrast that vary with age in the majority of faces were artificially increased, but older when they were artificially decreased. Altogether these findings indicate that facial contrast is a cross-cultural cue to youthfulness. Because cosmetics were shown to enhance facial contrast, this work provides some support for the notion that a universal function of cosmetics is to make female faces look younger.

  12. [Idiopathic facial paralysis in children].

    Science.gov (United States)

    Achour, I; Chakroun, A; Ayedi, S; Ben Rhaiem, Z; Mnejja, M; Charfeddine, I; Hammami, B; Ghorbel, A

    2015-05-01

    Idiopathic facial palsy is the most common cause of facial nerve palsy in children. Controversy exists regarding treatment options. The objectives of this study were to review the epidemiological and clinical characteristics as well as the outcome of idiopathic facial palsy in children to suggest appropriate treatment. A retrospective study was conducted on children with a diagnosis of idiopathic facial palsy from 2007 to 2012. A total of 37 cases (13 males, 24 females) with a mean age of 13.9 years were included in this analysis. The mean duration between onset of Bell's palsy and consultation was 3 days. Of these patients, 78.3% had moderately severe (grade IV) or severe paralysis (grade V on the House and Brackmann grading). Twenty-seven patients were treated in an outpatient context, three patients were hospitalized, and seven patients were treated as outpatients and subsequently hospitalized. All patients received corticosteroids. Eight of them also received antiviral treatment. The complete recovery rate was 94.6% (35/37). The duration of complete recovery was 7.4 weeks. Children with idiopathic facial palsy have a very good prognosis. The complete recovery rate exceeds 90%. However, controversy exists regarding treatment options. High-quality studies have been conducted on adult populations. Medical treatment based on corticosteroids alone or combined with antiviral treatment is certainly effective in improving facial function outcomes in adults. In children, the recommendation for prescription of steroids and antiviral drugs based on adult treatment appears to be justified. Randomized controlled trials in the pediatric population are recommended to define a strategy for management of idiopathic facial paralysis. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  13. Facial preservation following extreme mummification: Shrunken heads.

    Science.gov (United States)

    Houlton, Tobias M R; Wilkinson, Caroline

    2018-05-01

    Shrunken heads are a mummification phenomenon unique to South America. Ceremonial tsantsa are ritually reduced heads from enemy victims of the Shuar, Achuar, Awajún (Aguaruna), Wampís (Huambisa), and Candoshi-Shapra cultures. Commercial shrunken heads are comparatively modern and fraudulently produced for the curio-market, often using stolen bodies from hospital mortuaries and graves. To achieve shrinkage and desiccation, heads undergo skinning, simmering (in water) and drying. Considering the intensive treatments applied, this research aims to identify how the facial structure can alter and impact identification using post-mortem depiction. Sixty-five human shrunken heads were assessed: 6 ceremonial, 36 commercial, and 23 ambiguous. Investigations included manual inspection, multi-detector computerised tomography, infrared reflectography, ultraviolet fluorescence and microscopic hair analysis. The mummification process disfigures the outer face, cheeks, nasal root and bridge form, including brow ridge, eyes, ears, mouth, and nose projection. Melanin depletion, epidermal degeneration, and any applied staining changes the natural skin complexion. Papillary and reticular dermis separation is possible. Normal hair structure (cuticle, cortex, medulla) is retained. Hair appears longer (unless cut) and more profuse following shrinkage. Significant features retained include skin defects, facial creases, hairlines and earlobe form. Hair conditions that only affect living scalps are preserved (e.g. nits, hair casts). Ear and nose cartilage helps to retain some morphological information. Commercial heads appear less distorted than ceremonial tsantsa, often presenting a definable eyebrow shape, vermillion lip shape, lip thickness (if mouth is open), philtrum form, and palpebral slit angle. Facial identification capabilities are considered limited, and only perceived possible for commercial heads. Copyright © 2018 Elsevier B.V. All rights reserved.

  14. Computer Aided Facial Prosthetics Manufacturing System

    Directory of Open Access Journals (Sweden)

    Peng H.K.

    2016-01-01

    Full Text Available Facial deformities can impose a burden on the patient. There are many solutions for facial deformities, such as plastic surgery and facial prosthetics. However, the current fabrication method for facial prosthetics is costly and time-consuming. This study aimed to identify a new method for constructing a customized facial prosthesis. A 3D scanner, computer software and a 3D printer were used in this study. Results showed that the newly developed method can be used to produce a customized facial prosthesis. The advantages of the developed method over the conventional process are lower cost and reduced material waste and pollution, in keeping with the green concept.

  15. Three-Dimensional Facial Adaptation for MPEG-4 Talking Heads

    Directory of Open Access Journals (Sweden)

    Nikos Grammalidis

    2002-10-01

    Full Text Available This paper studies a new method for three-dimensional (3D) facial model adaptation and its integration into a text-to-speech (TTS) system. The 3D facial adaptation requires a set of two orthogonal views of the user's face with a number of feature points located on both views. Based on the correspondences of the feature points' positions, a generic face model is deformed nonrigidly, treating every facial part as a separate entity. A cylindrical texture map is then built from the two image views. The generated head models are compared to corresponding models obtained by the commonly used adaptation method that utilizes 3D radial basis functions. The generated 3D models are integrated into a talking head system, which consists of two distinct parts: a multilingual text-to-speech sub-system and an MPEG-4 compliant facial animation sub-system. Support for the Greek language has been added, while preserving lip and speech synchronization.
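    The radial-basis-function adaptation the paper compares against is not specified in the abstract; as a rough illustration only, the sketch below interpolates displacements for arbitrary points from a handful of feature-point correspondences using Gaussian RBFs. The control points, kernel width, and mesh vertices are all invented.

        import numpy as np

        def rbf_deform(points, ctrl_src, ctrl_dst, sigma=0.3):
            """Displace arbitrary points given source/target control points, via Gaussian RBF weights."""
            disp = ctrl_dst - ctrl_src                                   # control displacements
            d2 = ((ctrl_src[None, :, :] - ctrl_src[:, None, :]) ** 2).sum(-1)
            K = np.exp(-d2 / (2 * sigma ** 2))                           # RBF kernel between controls
            W = np.linalg.solve(K + 1e-6 * np.eye(len(ctrl_src)), disp)  # per-control weights
            d2p = ((points[:, None, :] - ctrl_src[None, :, :]) ** 2).sum(-1)
            return points + np.exp(-d2p / (2 * sigma ** 2)) @ W

        # Invented control correspondences (e.g. eye corners, nose tip) and a few mesh vertices
        src = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
        dst = np.array([[0.1, 0.0], [1.0, 0.1], [0.5, 1.2]])
        mesh = np.array([[0.5, 0.5], [0.2, 0.8]])
        print(rbf_deform(mesh, src, dst))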

  16. Tumors and tumor-like lesions of the oro-facial region at Mayo Hospital, Lahore - a five-year study

    International Nuclear Information System (INIS)

    Riaz, N.; Warriach, R.A.

    2011-01-01

    The oro-facial region, including the oral cavity, the maxilla and mandible and related tissues, can be the site of a multitude of neoplastic conditions. These tumours have a predilection for the entire facial region; however, odontogenic tumours tend to affect the mandible more than the maxilla. We report results from a retrospective study spanning five years on the frequency, clinical presentation, sites and character of orofacial tumors seen in the main referral hospital of Pakistan. Patients and Methods: Records of consecutive patients of all ages and both sexes seen by the author's team at the Department of Oral and Maxillofacial Surgery, Mayo Hospital with tumours affecting the oro-facial region from January 2005 to December 2009 were retrieved, coded and entered into a database. The data were then analyzed by age, sex, presenting signs and symptoms, site of lesion, and histology. Results: A total of 237 patients with oro-facial swellings were retrieved from the registry. The complete data set was obtained for 189 patients, comprising 108 (57.9%) males and 81 (42%) females. The most common clinical presenting features were mandibular facial swelling (63%), intra-oral swelling (55%), and ulceration (29%). The tumors were found in the mandible 67 (35%), buccal mucosa 33 (17%), floor of the mouth 22 (11%) and tongue 29 (15%). The remainder, making up almost 20%, was found in the palate, submandibular region, preauricular region and lips. Ninety-three (49.2%) of the patients presented with lesions that were classified as malignant, of which 64 (69%) were diagnosed as squamous cell carcinoma (SCC). Seventy (37.0%) had benign odontogenic tumors and twenty-six (13.7%) had non-odontogenic tumor-like lesions. Sixty-four (69%) of malignant tumors were squamous cell carcinoma; sixty-four (86.4%) of the benign odontogenic tumors were classified as ameloblastoma. The mean age at presentation of all lesions was 40.4 years with over 50% of benign lesions in patients aged

  17. Novel dynamic Bayesian networks for facial action element recognition and understanding

    Science.gov (United States)

    Zhao, Wei; Park, Jeong-Seon; Choi, Dong-You; Lee, Sang-Woong

    2011-12-01

    In daily life, language is an important tool of communication between people. Besides language, facial action can also provide a great amount of information. Therefore, facial action recognition has become a popular research topic in the field of human-computer interaction (HCI). However, facial action recognition is quite a challenging task due to its complexity. In a literal sense, there are thousands of facial muscular movements, many of which have very subtle differences. Moreover, muscular movements always occur simultaneously when the pose is changed. To address this problem, we first build a fully automatic facial points detection system based on a local Gabor filter bank and principal component analysis. Then, novel dynamic Bayesian networks are proposed to perform facial action recognition using the junction tree algorithm over a limited number of feature points. In order to evaluate the proposed method, we have used the Korean face database for model training. For testing, we used the CUbiC FacePix, facial expressions and emotion database, Japanese female facial expression database, and our own database. Our experimental results clearly demonstrate the feasibility of the proposed approach.
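    A local Gabor filter bank of the kind mentioned above is simple to construct; the fragment below builds real-valued Gabor kernels from standard parameters (the kernel size, wavelength, and orientations are arbitrary choices, not the paper's settings).

        import numpy as np

        def gabor_kernel(size=21, sigma=4.0, theta=0.0, lam=10.0, psi=0.0, gamma=0.5):
            """Real part of a 2D Gabor filter: a Gaussian envelope times an oriented cosine carrier."""
            half = size // 2
            y, x = np.mgrid[-half:half + 1, -half:half + 1]
            xr = x * np.cos(theta) + y * np.sin(theta)
            yr = -x * np.sin(theta) + y * np.cos(theta)
            envelope = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
            return envelope * np.cos(2 * np.pi * xr / lam + psi)

        # A small bank: 4 orientations at one scale
        bank = [gabor_kernel(theta=t) for t in np.linspace(0, np.pi, 4, endpoint=False)]
        print(len(bank), bank[0].shape)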

  18. 5-HTTLPR modulates the recognition accuracy and exploration of emotional facial expressions

    Directory of Open Access Journals (Sweden)

    Sabrina eBoll

    2014-07-01

    Full Text Available Individual genetic differences in the serotonin transporter-linked polymorphic region (5-HTTLPR) have been associated with variations in the sensitivity to social and emotional cues as well as altered amygdala reactivity to facial expressions of emotion. Amygdala activation has further been shown to trigger gaze changes towards diagnostically relevant facial features. The current study examined whether altered socio-emotional reactivity in variants of the 5-HTTLPR promoter polymorphism reflects individual differences in attending to diagnostic features of facial expressions. For this purpose, visual exploration of emotional facial expressions was compared between a low (n=39) and a high (n=40) 5-HTT expressing group of healthy human volunteers in an eye tracking paradigm. Emotional faces were presented while manipulating the initial fixation such that saccadic changes towards the eyes and towards the mouth could be identified. We found that the low versus the high 5-HTT group demonstrated greater accuracy with regard to emotion classifications, particularly when faces were presented for a longer duration. No group differences in gaze orientation towards diagnostic facial features could be observed. However, participants in the low 5-HTT group exhibited more and faster fixation changes for certain emotions when faces were presented for a longer duration, and overall face fixation times were reduced for this genotype group. These results suggest that the 5-HTT gene influences social perception by modulating the general vigilance to social cues rather than selectively affecting the pre-attentive detection of diagnostic facial features.

  19. Facial Expression Recognition from Video Sequences Based on Spatial-Temporal Motion Local Binary Pattern and Gabor Multiorientation Fusion Histogram

    Directory of Open Access Journals (Sweden)

    Lei Zhao

    2017-01-01

    Full Text Available This paper proposes a novel framework for facial expression analysis using dynamic and static information in video sequences. First, based on an incremental formulation, a discriminative deformable face alignment method is adapted to locate facial points, correct in-plane head rotation, and separate the facial region from the background. Then, a spatial-temporal motion local binary pattern (LBP) feature is extracted and integrated with a Gabor multiorientation fusion histogram to give descriptors that reflect the static and dynamic texture information of facial expressions. Finally, a one-versus-one strategy based multiclass support vector machine (SVM) classifier is applied to classify facial expressions. Experiments on the Cohn-Kanade (CK+) facial expression dataset illustrate that the integrated framework outperforms methods using single descriptors. Compared with other state-of-the-art methods on the CK+, MMI, and Oulu-CASIA VIS datasets, our proposed framework performs better.
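    The spatio-temporal variant used in the paper is not reproduced here; the sketch below computes only the basic 8-neighbour LBP code that such descriptors build on, applied to a random image and summarized as a 256-bin histogram. The image size and values are synthetic.

        import numpy as np

        def lbp_codes(img):
            """8-neighbour local binary pattern codes for the interior pixels of a 2D image."""
            c = img[1:-1, 1:-1]
            # Clockwise neighbour offsets starting at the top-left pixel
            offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
            codes = np.zeros(c.shape, dtype=np.int64)
            for bit, (dy, dx) in enumerate(offsets):
                neigh = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
                codes += (neigh >= c).astype(np.int64) << bit
            return codes

        rng = np.random.default_rng(4)
        img = rng.integers(0, 256, size=(64, 64))
        hist = np.bincount(lbp_codes(img).ravel(), minlength=256)   # 256-bin LBP histogram
        print(hist[:8])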

  20. [The history of facial paralysis].

    Science.gov (United States)

    Glicenstein, J

    2015-10-01

    Facial paralysis has been a recognized condition since Antiquity, and was mentioned by Hippocrates. In the 17th century, in 1687, the Dutch physician Stalpart Van der Wiel rendered a detailed observation. It was, however, Charles Bell who, in 1821, provided the description that specified the role of the facial nerve. Facial nerve surgery began at the end of the 19th century. Three different techniques were used successively: nerve anastomosis (XI-VII, Balance 1895; XII-VII, Korte 1903), myoplasties (Lexer 1908), and suspensions (Stein 1913). Bunnell successfully accomplished the first direct facial nerve repair in the temporal bone in 1927, and in 1932 Balance and Duel experimented with nerve grafts. Thanks to progress in microsurgical techniques, the first faciofacial anastomosis was realized in 1970 (Smith, Scaramella), and an account of the first microneurovascular muscle transfer was published in 1976 by Harii. Treatment of eyelid paralysis was at the origin of numerous operations beginning in the 1960s, including the palpebral spring (Morel Fatio 1962), silicone sling (Arion 1972), upper-lid loading with a gold plate (Illig 1968), magnets (Muhlbauer 1973) and transfacial nerve grafts (Anderl 1973). By the end of the 20th century, surgeons had at their disposal a wide range of valid techniques for facial nerve surgery, including modernized versions of older techniques. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  1. Peripheral facial weakness (Bell's palsy).

    Science.gov (United States)

    Basić-Kes, Vanja; Dobrota, Vesna Dermanović; Cesarik, Marijan; Matovina, Lucija Zadro; Madzar, Zrinko; Zavoreo, Iris; Demarin, Vida

    2013-06-01

    Peripheral facial weakness is a facial nerve damage that results in muscle weakness on one side of the face. It may be idiopathic (Bell's palsy) or may have a detectable cause. Almost 80% of peripheral facial weakness cases are primary and the rest of them are secondary. The most frequent causes of secondary peripheral facial weakness are systemic viral infections, trauma, surgery, diabetes, local infections, tumor, immune disorders, drugs, degenerative diseases of the central nervous system, etc. The diagnosis relies upon the presence of typical signs and symptoms, blood chemistry tests, cerebrospinal fluid investigations, nerve conduction studies and neuroimaging methods (cerebral MRI, x-ray of the skull and mastoid). Treatment of secondary peripheral facial weakness is based on therapy for the underlying disorder, unlike the treatment of Bell's palsy that is controversial due to the lack of large, randomized, controlled, prospective studies. There are some indications that steroids or antiviral agents are beneficial but there are also studies that show no beneficial effect. Additional treatments include eye protection, physiotherapy, acupuncture, botulinum toxin, or surgery. Bell's palsy has a benign prognosis with complete recovery in about 80% of patients, 15% experience some mode of permanent nerve damage and severe consequences remain in 5% of patients.

  2. Outcome of different facial nerve reconstruction techniques.

    Science.gov (United States)

    Mohamed, Aboshanif; Omi, Eigo; Honda, Kohei; Suzuki, Shinsuke; Ishikawa, Kazuo

    There is no technique of facial nerve reconstruction that guarantees facial function recovery up to grade III. To evaluate the efficacy and safety of different facial nerve reconstruction techniques. Facial nerve reconstruction was performed in 22 patients (facial nerve interpositional graft in 11 patients and hypoglossal-facial nerve transfer in another 11 patients). All patients had facial function House-Brackmann (HB) grade VI, either caused by trauma or after resection of a tumor. All patients were submitted to a primary nerve reconstruction except 7 patients, where late reconstruction was performed two weeks to four months after the initial surgery. The follow-up period was at least two years. For facial nerve interpositional graft technique, we achieved facial function HB grade III in eight patients and grade IV in three patients. Synkinesis was found in eight patients, and facial contracture with synkinesis was found in two patients. In regards to hypoglossal-facial nerve transfer using different modifications, we achieved facial function HB grade III in nine patients and grade IV in two patients. Facial contracture, synkinesis and tongue atrophy were found in three patients, and synkinesis was found in five patients. However, those who had primary direct facial-hypoglossal end-to-side anastomosis showed the best result without any neurological deficit. Among various reanimation techniques, when indicated, direct end-to-side facial-hypoglossal anastomosis through epineural suturing is the most effective technique with excellent outcomes for facial reanimation and preservation of tongue movement, particularly when performed as a primary technique. Copyright © 2016 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  3. Multistage feature extraction for accurate face alignment

    NARCIS (Netherlands)

    Zuo, F.; With, de P.H.N.

    2004-01-01

    We propose a novel multistage facial feature extraction approach using a combination of 'global' and 'local' techniques. At the first stage, we use template matching, based on an Edge-Orientation-Map for fast feature position estimation. Using this result, a statistical framework applying the Active

  4. Dissociable roles of internal feelings and face recognition ability in facial expression decoding.

    Science.gov (United States)

    Zhang, Lin; Song, Yiying; Liu, Ling; Liu, Jia

    2016-05-15

    The problem of emotion recognition has been tackled by researchers in both affective computing and cognitive neuroscience. While affective computing relies on analyzing visual features from facial expressions, it has been proposed that humans recognize emotions by internally simulating the emotional states conveyed by others' expressions, in addition to perceptual analysis of facial features. Here we investigated whether and how our internal feelings contributed to the ability to decode facial expressions. In two independent large samples of participants, we observed that individuals who generally experienced richer internal feelings exhibited a higher ability to decode facial expressions, and the contribution of internal feelings was independent of face recognition ability. Further, using voxel-based morphometry, we found that the gray matter volume (GMV) of bilateral superior temporal sulcus (STS) and the right inferior parietal lobule was associated with facial expression decoding through the mediating effect of internal feelings, while the GMV of bilateral STS, precuneus, and the right central opercular cortex contributed to facial expression decoding through the mediating effect of face recognition ability. In addition, the clusters in bilateral STS involved in the two components were neighboring yet separate. Our results may provide clues about the mechanism by which internal feelings, in addition to face recognition ability, serve as an important instrument for humans in facial expression decoding. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Anaplastology in times of facial transplantation: Still a reasonable treatment option?

    Science.gov (United States)

    Toso, Sabine Maria; Menzel, Kerstin; Motzkus, Yvonne; Klein, Martin; Menneking, Horst; Raguse, Jan-Dirk; Nahles, Susanne; Hoffmeister, Bodo; Adolphs, Nicolai

    2015-09-01

    Optimum functional and aesthetic facial reconstruction is still a challenge in patients who suffer from inborn or acquired facial deformity. It is known that functional and aesthetic impairment can result in significant psychosocial strain, leading to the social isolation of patients who are affected by major facial deformities. Microvascular techniques and increasing experience in facial transplantation certainly contribute to better restorative outcomes. However, these technologies also have some drawbacks, limitations and unsolved problems. Extensive facial defects which include several aesthetic units and dentition can be restored by combining dental prostheses and anaplastology, thus providing an adequate functional and aesthetic outcome in selected patients without the drawbacks of major surgical procedures. Referring to some representative patient cases, it is shown how extreme facial disfigurement after oncological surgery can be palliated by combining intraoral dentures with extraoral facial prostheses using individualized treatment and without the need for major reconstructive surgery. Copyright © 2015 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  6. Facial expression influences face identity recognition during the attentional blink.

    Science.gov (United States)

    Bach, Dominik R; Schmidt-Daffy, Martin; Dolan, Raymond J

    2014-12-01

    Emotional stimuli (e.g., negative facial expressions) enjoy prioritized memory access when task relevant, consistent with their ability to capture attention. Whether emotional expression also impacts memory access when task-irrelevant is important for arbitrating between feature-based and object-based attentional capture. Here, the authors address this question in 3 experiments using an attentional blink task with face photographs as first and second target (T1, T2). They demonstrate reduced neutral T2 identity recognition after angry or happy T1 expression, compared to neutral T1, and this supports attentional capture by a task-irrelevant feature. Crucially, after neutral T1, T2 identity recognition was enhanced and not suppressed when T2 was angry, suggesting that attentional capture by this task-irrelevant feature may be object-based and not feature-based. As an unexpected finding, both angry and happy facial expressions suppressed memory access for competing objects, but only angry facial expressions enjoyed privileged memory access. This could imply that these 2 processes are relatively independent of one another.

  7. Facial expressions of emotion are not culturally universal.

    Science.gov (United States)

    Jack, Rachael E; Garrod, Oliver G B; Yu, Hui; Caldara, Roberto; Schyns, Philippe G

    2012-05-08

    Since Darwin's seminal works, the universality of facial expressions of emotion has remained one of the longest standing debates in the biological and social sciences. Briefly stated, the universality hypothesis claims that all humans communicate six basic internal emotional states (happy, surprise, fear, disgust, anger, and sad) using the same facial movements by virtue of their biological and evolutionary origins [Susskind JM, et al. (2008) Nat Neurosci 11:843-850]. Here, we refute this assumed universality. Using a unique computer graphics platform that combines generative grammars [Chomsky N (1965) MIT Press, Cambridge, MA] with visual perception, we accessed the mind's eye of 30 Western and Eastern culture individuals and reconstructed their mental representations of the six basic facial expressions of emotion. Cross-cultural comparisons of the mental representations challenge universality on two separate counts. First, whereas Westerners represent each of the six basic emotions with a distinct set of facial movements common to the group, Easterners do not. Second, Easterners represent emotional intensity with distinctive dynamic eye activity. By refuting the long-standing universality hypothesis, our data highlight the powerful influence of culture on shaping basic behaviors once considered biologically hardwired. Consequently, our data open a unique nature-nurture debate across broad fields from evolutionary psychology and social neuroscience to social networking via digital avatars.
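
    The reconstruction of "mental representations" described above rests on a reverse-correlation logic: present many randomly generated stimuli, record which ones an observer accepts as a given emotion, and average the accepted stimuli. The sketch below is a deliberately simplified stand-in for the paper's generative facial-movement platform; the random feature vectors, the simulated observer, and all dimensions are assumptions made for illustration.

```python
# Simplified reverse-correlation sketch: recover a hidden "internal template"
# from accept/reject responses to random stimuli.
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_features = 5000, 64        # 64 abstract "facial movement" dimensions
template = rng.normal(size=n_features) # the observer's hidden internal template

stimuli = rng.normal(size=(n_trials, n_features))
# Simulated observer: accepts a stimulus when it correlates with the template.
accepted = stimuli @ template > 0

# Classification image: mean accepted stimulus minus mean rejected stimulus.
class_image = stimuli[accepted].mean(axis=0) - stimuli[~accepted].mean(axis=0)

# The reconstruction should correlate strongly with the hidden template.
r = np.corrcoef(class_image, template)[0, 1]
print(f"correlation between reconstruction and internal template: {r:.2f}")
```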

  8. Biometric morphing: a novel technique for the analysis of morphologic outcomes after facial surgery.

    Science.gov (United States)

    Pahuta, Markian A; Mainprize, James G; Rohlf, F James; Antonyshyn, Oleh M

    2009-01-01

    The results of facial surgery are intuitively judged in terms of the visible changes in facial features or proportions. However, describing these morphologic outcomes objectively remains a challenge. Biometric morphing addresses this issue by merging statistical shape analysis and image processing. This study describes the implementation of biometric morphing in describing the average morphologic result of facial surgery. The biometric morphing protocol was applied to pre- and postoperative images of the following: (1) 40 dorsal hump reduction rhinoplasties and (2) 20 unilateral enophthalmos repairs. Pre- and postoperative average images (average morphs) were generated. The average morphs provided an objective rendering of nasal and periorbital morphology, which summarized the average features and extent of deformity in a population of patients. Subtle alterations in morphology after surgery, which would otherwise be difficult to identify or demonstrate, were clearly illustrated. Biometric morphing is an effective instrument for describing average facial morphology in a population of patients.
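
    One ingredient of biometric morphing, averaging landmark configurations across a population of patients, can be sketched as follows. This is a hedged illustration rather than the authors' implementation: it aligns each simulated landmark set to a reference with scipy's Procrustes routine and averages the aligned shapes; warping the image pixels onto the average shape to produce the "average morph" is omitted.

```python
# Average facial shape from landmark configurations (landmark data are placeholders).
import numpy as np
from scipy.spatial import procrustes

rng = np.random.default_rng(2)
k, n_faces = 20, 40                                  # 20 landmarks, 40 pre-op images
base = rng.uniform(0, 1, size=(k, 2))
faces = [base + rng.normal(scale=0.02, size=(k, 2)) for _ in range(n_faces)]

# Align every configuration to the first one (a simple stand-in for generalized
# Procrustes analysis), then average the aligned shapes.
reference = faces[0]
aligned = []
for shape in faces:
    _, aligned_shape, _ = procrustes(reference, shape)
    aligned.append(aligned_shape)
average_shape = np.mean(aligned, axis=0)
print("average landmark configuration (first 3 points):\n", average_shape[:3])
```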

  9. Three-dimensional facial analyses of Indian and Malaysian women.

    Science.gov (United States)

    Kusugal, Preethi; Ruttonji, Zarir; Gowda, Roopa; Rajpurohit, Ladusingh; Lad, Pritam; Ritu

    2015-01-01

    Facial measurements serve as a valuable tool in the treatment planning of maxillofacial rehabilitation, orthodontic treatment, and orthognathic surgeries. The esthetic guidelines of the face are still based on neoclassical canons, which were used in ancient art. These canons are considered to be highly subjective, and there is ample evidence in the literature raising the question of whether these canons can be applied to the modern population. This study was carried out to analyze the facial features of Indian and Malaysian women by using a three-dimensional (3D) scanner and thus determine the prevalence of neoclassical facial esthetic canons in both groups. The study was carried out on 60 women in the age range of 18-25 years, of whom 30 were Indian and 30 Malaysian. As many as 16 facial measurements were taken by using a noncontact 3D scanner. The unpaired t-test was used for comparison of facial measurements between Indian and Malaysian females. The two-tailed Fisher exact test was used to determine the prevalence of the neoclassical canons. The orbital canon was prevalent in 80% of Malaysian women; the same was found in only 16% of Indian women (P = 0.00013). About 43% of Malaysian women exhibited the orbitonasal canon (P = 0.0470), whereas the nasoaural canon was prevalent in 73% of Malaysian and 33% of Indian women (P = 0.0068). The orbital, orbitonasal, and nasoaural canons were more prevalent in Malaysian women. The facial profile, nasooral, and nasofacial canons were not seen in either group. Though some canons provide guidelines in esthetic analyses of the face, complete reliance on these canons is not justifiable.
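
    The two statistics used in this study, the unpaired t-test for continuous facial measurements and the two-tailed Fisher exact test for canon prevalence, take only a few lines with scipy. In the sketch below the continuous measurements are simulated placeholders, and the 2x2 prevalence table is back-calculated from the reported 80% vs. roughly 16% orbital-canon prevalences (an assumption about the underlying counts, so the resulting p-value need not reproduce the published P = 0.00013).

```python
# Unpaired t-test and two-tailed Fisher exact test, as described in the abstract.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
indian = rng.normal(loc=31.0, scale=2.0, size=30)     # e.g., a facial distance (mm)
malaysian = rng.normal(loc=32.5, scale=2.0, size=30)

t, p_t = stats.ttest_ind(indian, malaysian, equal_var=True)   # unpaired t-test
print(f"t = {t:.2f}, p = {p_t:.4f}")

# 2x2 table: rows = group, columns = canon present / absent (back-calculated counts).
table = [[24, 6],   # Malaysian: 80% prevalence
         [5, 25]]   # Indian: ~16% prevalence
odds, p_fisher = stats.fisher_exact(table, alternative="two-sided")
print(f"Fisher exact p = {p_fisher:.2e}")
```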

  10. Outcome of different facial nerve reconstruction techniques

    Directory of Open Access Journals (Sweden)

    Aboshanif Mohamed

    Introduction: There is no technique of facial nerve reconstruction that guarantees facial function recovery up to grade III. Objective: To evaluate the efficacy and safety of different facial nerve reconstruction techniques. Methods: Facial nerve reconstruction was performed in 22 patients (facial nerve interpositional graft in 11 patients and hypoglossal-facial nerve transfer in another 11 patients). All patients had facial function House-Brackmann (HB) grade VI, caused either by trauma or by resection of a tumor. All patients underwent primary nerve reconstruction except 7 patients, in whom late reconstruction was performed two weeks to four months after the initial surgery. The follow-up period was at least two years. Results: With the facial nerve interpositional graft technique, we achieved facial function HB grade III in eight patients and grade IV in three patients. Synkinesis was found in eight patients, and facial contracture with synkinesis was found in two patients. With regard to hypoglossal-facial nerve transfer using different modifications, we achieved facial function HB grade III in nine patients and grade IV in two patients. Facial contracture, synkinesis and tongue atrophy were found in three patients, and synkinesis was found in five patients. However, those who had primary direct facial-hypoglossal end-to-side anastomosis showed the best results, without any neurological deficit. Conclusion: Among the various reanimation techniques, when indicated, direct end-to-side facial-hypoglossal anastomosis through epineural suturing is the most effective technique, with excellent outcomes for facial reanimation and preservation of tongue movement, particularly when performed as a primary technique.

  11. Dermal fillers for facial soft tissue augmentation.

    Science.gov (United States)

    Dastoor, Sarosh F; Misch, Carl E; Wang, Hom-Lay

    2007-01-01

    Nowadays, patients are demanding not only enhancement to their dental (micro) esthetics, but also their overall facial (macro) esthetics. Soft tissue augmentation via dermal filling agents may be used to correct facial defects such as wrinkles caused by age, gravity, and trauma; thin lips; asymmetrical facial appearances; buccal fold depressions; and others. This article will review the pathogenesis of facial wrinkles, history, techniques, materials, complications, and clinical controversies regarding dermal fillers for soft tissue augmentation.

  12. Facial skin care products and cosmetics.

    Science.gov (United States)

    Draelos, Zoe Diana

    2014-01-01

    Facial skin care products and cosmetics can either aid or incite facial dermatoses. Properly selected skin care can create an environment for barrier repair, aiding in the re-establishment of a healing biofilm and diminution of facial redness; however, skin care products that aggressively remove intercellular lipids or cause irritation must be eliminated before the red face will resolve. Cosmetics are an additive variable, either aiding or challenging facial skin health. Copyright © 2014 Elsevier Inc. All rights reserved.

  13. Gd-DTPA enhancement of the facial nerve in Ramsay Hunt's syndrome

    Energy Technology Data Exchange (ETDEWEB)

    Kato, Tsutomu; Yanagida, Masahiro; Yamauchi, Yasuo (Kansai Medical School, Moriguchi, Osaka (Japan)) (and others)

    1992-10-01

    A total of 21 MR images from 16 patients with Ramsay Hunt's syndrome were evaluated. In all images, the involved side of the peripheral facial nerve was enhanced after Gd-DTPA. However, 2 patients had already recovered from facial palsy when the MR images were taken. Nine of 19 cases with enhancement of the internal auditory canal portion had vertigo or tinnitus. Thus, it was suggested that enhancement of the internal auditory canal portion and the clinical features are closely related. (author).

  14. A Report of Two Cases of Solid Facial Edema in Acne

    OpenAIRE

    Kuhn-Régnier, Sarah; Mangana, Joanna; Kerl, Katrin; Kamarachev, Jivko; French, Lars E.; Cozzio, Antonio; Navarini, Alexander A.

    2017-01-01

    Introduction: Solid facial edema (SFE) is a rare complication of acne vulgaris. The aim was to examine the clinical features of acne patients with solid facial edema and to give an overview of the outcomes of previously published topical and systemic treatments. Methods: We report two cases from Switzerland, both young men with initially papulopustular acne resistant to topical retinoids. Results: Both cases responded to oral isotretinoin, in one case combined with oral steroids. Our cases...

  15. Cerebral Angiographic Findings of Cosmetic Facial Filler-related Ophthalmic and Retinal Artery Occlusion

    OpenAIRE

    Kim, Yong-Kyu; Jung, Cheolkyu; Woo, Se Joon; Park, Kyu Hyung

    2015-01-01

    Cosmetic facial filler-related ophthalmic artery occlusion is a rare but devastating complication, and its exact pathophysiology is still elusive. Cerebral angiography provides more detailed information on blood flow in the ophthalmic artery and the surrounding orbital area, which cannot be covered by fundus fluorescein angiography. This study aimed to evaluate the cerebral angiographic features of patients with cosmetic facial filler-related ophthalmic artery occlusion. We retrospectively reviewed...

  16. Gd-DTPA enhancement of the facial nerve in Ramsay Hunt's syndrome

    Energy Technology Data Exchange (ETDEWEB)

    Kato, Tsutomu; Yanagida, Masahiro; Yamauchi, Yasuo [Kansai Medical School, Moriguchi, Osaka (Japan); and others

    1992-10-01

    A total of 21 MR images from 16 patients with Ramsay Hunt's syndrome were evaluated. In all images, the involved side of the peripheral facial nerve was enhanced after Gd-DTPA. However, 2 patients had already recovered from facial palsy when the MR images were taken. Nine of 19 cases with enhancement of the internal auditory canal portion had vertigo or tinnitus. Thus, it was suggested that enhancement of the internal auditory canal portion and the clinical features are closely related. (author).

  17. Gd-DTPA enhancement of the facial nerve in Ramsay Hunt's syndrome

    International Nuclear Information System (INIS)

    Kato, Tsutomu; Yanagida, Masahiro; Yamauchi, Yasuo

    1992-01-01

    A total of 21 MR images from 16 patients with Ramsay Hunt's syndrome were evaluated. In all images, the involved side of the peripheral facial nerve was enhanced after Gd-DTPA. However, 2 patients had already recovered from facial palsy when the MR images were taken. Nine of 19 cases with enhancement of the internal auditory canal portion had vertigo or tinnitus. Thus, it was suggested that enhancement of the internal auditory canal portion and the clinical features are closely related. (author)

  18. Neural mechanism for judging the appropriateness of facial affect.

    Science.gov (United States)

    Kim, Ji-Woong; Kim, Jae-Jin; Jeong, Bum Seok; Ki, Seon Wan; Im, Dong-Mi; Lee, Soo Jung; Lee, Hong Shick

    2005-12-01

    Questions regarding the appropriateness of facial expressions in particular situations arise ubiquitously in everyday social interactions. To determine the appropriateness of facial affect, first of all, we should represent our own or the other's emotional state as induced by the social situation. Then, based on these representations, we should infer the possible affective response of the other person. In this study, we identified the brain mechanism mediating special types of social evaluative judgments of facial affect in which the internal reference is related to theory of mind (ToM) processing. Many previous ToM studies have used non-emotional stimuli, but, because so much valuable social information is conveyed through nonverbal emotional channels, this investigation used emotionally salient visual materials to tap ToM. Fourteen right-handed healthy subjects volunteered for our study. We used functional magnetic resonance imaging to examine brain activation during the judgmental task for the appropriateness of facial affects as opposed to gender matching tasks. We identified activation of a brain network, which includes the medial frontal cortex bilaterally, the left temporal pole, the left inferior frontal gyrus, and the left thalamus, during the judgmental task for the appropriateness of facial affect compared to the gender matching task. The results of this study suggest that the brain system involved in ToM plays a key role in judging the appropriateness of facial affect in an emotionally laden situation. In addition, our results support the view that common neural substrates are involved in performing diverse kinds of ToM tasks irrespective of perceptual modalities and the emotional salience of test materials.

  19. Unspoken vowel recognition using facial electromyogram.

    Science.gov (United States)

    Arjunan, Sridhar P; Kumar, Dinesh K; Yau, Wai C; Weghorn, Hans

    2006-01-01

    The paper aims to identify speech from facial muscle activity without audio signals. It presents an effective technique that measures the relative activity of the articulatory muscles. Five English vowels were used as recognition variables. The paper reports using the moving root mean square (RMS) of the surface electromyogram (SEMG) of four facial muscles to segment the signal and identify the start and end of the utterance. The RMS of the signal between the start and end markers was integrated and normalised. This represented the relative muscle activity of the four muscles. These features were classified using a backpropagation neural network to identify the speech. The technique was successfully used to classify the 5 vowels into three classes and was not sensitive to variation in the speed and style of speaking of the different subjects. The results also show that the technique was suitable for classifying the 5 vowels into 5 classes when trained for each subject. It is suggested that such a technology may be used to give simple unvoiced commands when trained for a specific user.
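
    A minimal sketch of the segmentation and feature step described above is given below; it is an interpretation under assumptions rather than the authors' exact pipeline. It computes a moving RMS for each SEMG channel, detects the utterance with a simple threshold on the summed envelope, and integrates and normalises the RMS between the detected start and end markers to obtain one relative-activity value per muscle. The backpropagation classification stage is not shown, and the sampling rate, window length and threshold are assumed values.

```python
# Moving-RMS segmentation and relative-activity features from 4-channel facial SEMG.
import numpy as np

def moving_rms(x, win):
    """Moving root-mean-square with a rectangular window of `win` samples."""
    power = np.convolve(x ** 2, np.ones(win) / win, mode="same")
    return np.sqrt(power)

def utterance_features(semg, fs=1000, win_ms=100, thresh_ratio=0.2):
    """semg: (n_channels, n_samples) surface EMG of the facial muscles."""
    win = int(fs * win_ms / 1000)
    rms = np.vstack([moving_rms(ch, win) for ch in semg])
    envelope = rms.sum(axis=0)
    active = envelope > thresh_ratio * envelope.max()    # crude start/end detection
    start, end = np.argmax(active), len(active) - np.argmax(active[::-1])
    integrated = rms[:, start:end].sum(axis=1)            # integrate between markers
    return integrated / integrated.sum()                  # relative muscle activity

# Toy signal: 4 channels, 2 s at 1 kHz, with a burst of activity in the middle.
rng = np.random.default_rng(4)
semg = rng.normal(scale=0.05, size=(4, 2000))
semg[:, 800:1400] += rng.normal(scale=[[0.5], [0.3], [0.2], [0.1]], size=(4, 600))
print(utterance_features(semg))
```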

  20. Facial aging: A clinical classification

    Directory of Open Access Journals (Sweden)

    Shiffman Melvin

    2007-01-01

    The purpose of this classification of facial aging is to provide a simple clinical method for determining the severity of the aging process in the face. This allows a quick estimate of the types of procedures the patient would need to achieve the best results. Procedures presently used for facial rejuvenation include laser, chemical peels, suture lifts, fillers, modified facelift and full facelift. The physician already uses his or her best judgment to determine which procedure would be best for any particular patient. This classification may help to refine these decisions.

  1. Distinct facial processing in schizophrenia and schizoaffective disorders

    Science.gov (United States)

    Chen, Yue; Cataldo, Andrea; Norton, Daniel J; Ongur, Dost

    2011-01-01

    Although schizophrenia and schizoaffective disorders have both similar and differing clinical features, it is not well understood whether similar or differing pathophysiological processes mediate patients’ cognitive functions. Using psychophysical methods, this study compared the performances of schizophrenia (SZ) patients, patients with schizoaffective disorder (SA), and a healthy control group in two face-related cognitive tasks: emotion discrimination, which tested perception of facial affect, and identity discrimination, which tested perception of non-affective facial features. Compared to healthy controls, SZ patients, but not SA patients, exhibited deficient performance in both fear and happiness discrimination, as well as identity discrimination. SZ patients, but not SA patients, also showed impaired performance in a theory-of-mind task for which emotional expressions are identified based upon the eye regions of face images. This pattern of results suggests distinct processing of face information in schizophrenia and schizoaffective disorders. PMID:21868199

  2. Facial Baroparesis Caused by Scuba Diving

    Directory of Open Access Journals (Sweden)

    Daisuke Kamide

    2012-01-01

    tympanic membrane and right facial palsy without other neurological findings. The facial palsy disappeared immediately after myringotomy. We considered that the etiology in this case was neuropraxia of the facial nerve in the middle ear caused by overpressure of the middle ear.

  3. Control de accesos mediante reconocimiento facial

    OpenAIRE

    Rodríguez Rodríguez, Bruno

    2011-01-01

    This report outlines the work carried out in the attempt to create a facial recognition system. (Abstract also provided in Spanish and Catalan.)

  4. Botulinum Toxin (Botox) for Facial Wrinkles

    Science.gov (United States)

    Patient-education page on botulinum toxin (Botox) for facial wrinkles; only navigation headings survive in the indexed text, including "How Does Botulinum Toxin (Botox) Work?", and a Spanish-language version is offered.

  5. Less Empathic and More Reactive: The Different Impact of Childhood Maltreatment on Facial Mimicry and Vagal Regulation.

    Directory of Open Access Journals (Sweden)

    Martina Ardizzi

    Facial mimicry and vagal regulation represent two crucial physiological responses to others' facial expressions of emotions. Facial mimicry, defined as the automatic, rapid and congruent electromyographic activation to others' facial expressions, is implicated in empathy, emotional reciprocity and emotions recognition. Vagal regulation, quantified by the computation of Respiratory Sinus Arrhythmia (RSA), exemplifies the autonomic adaptation to contingent social cues. Although it has been demonstrated that childhood maltreatment induces alterations in the processing of the facial expression of emotions, both at an explicit and implicit level, the effects of maltreatment on children's facial mimicry and vagal regulation in response to facial expressions of emotions remain unknown. The purpose of the present study was to fill this gap, involving 24 street-children (maltreated group) and 20 age-matched controls (control group). We recorded their spontaneous facial electromyographic activations of corrugator and zygomaticus muscles and RSA responses during the visualization of the facial expressions of anger, fear, joy and sadness. Results demonstrated a different impact of childhood maltreatment on facial mimicry and vagal regulation. Maltreated children did not show the typical positive-negative modulation of corrugator mimicry. Furthermore, when only negative facial expressions were considered, maltreated children demonstrated lower corrugator mimicry than controls. With respect to vagal regulation, whereas maltreated children manifested the expected and functional inverse correlation between RSA value at rest and RSA response to angry facial expressions, controls did not. These results describe an early and divergent functional adaptation to hostile environment of the two investigated physiological mechanisms. On the one side, maltreatment leads to the suppression of the spontaneous facial mimicry normally concurring to empathic understanding of
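
    For readers unfamiliar with RSA, the sketch below shows one common way to quantify it from an inter-beat-interval (IBI) series: resample the series onto an even time grid, band-pass it in a respiratory frequency band, and take the natural log of its variance. This is a generic illustration with simulated data and assumed parameters (sampling rate, frequency band), not the procedure used in this study.

```python
# One common RSA estimate: ln(variance) of the band-passed, resampled IBI series.
import numpy as np
from scipy import signal, interpolate

def rsa(ibi_ms, band=(0.12, 0.40), fs=4.0):
    """ibi_ms: successive inter-beat intervals in milliseconds."""
    beat_times = np.cumsum(ibi_ms) / 1000.0                 # beat times in seconds
    t = np.arange(beat_times[0], beat_times[-1], 1.0 / fs)  # even time grid
    ibi_even = interpolate.interp1d(beat_times, ibi_ms)(t)  # resampled IBI series
    ibi_even = signal.detrend(ibi_even)
    b, a = signal.butter(3, band, btype="bandpass", fs=fs)  # respiratory band
    filtered = signal.filtfilt(b, a, ibi_even)
    return np.log(np.var(filtered))                         # ln(ms^2)

# Toy IBI series: ~70 bpm with a respiratory oscillation at 0.25 Hz.
rng = np.random.default_rng(5)
n = 300
t_beats = np.cumsum(np.full(n, 0.85))                       # rough beat times (s)
ibi = 850 + 40 * np.sin(2 * np.pi * 0.25 * t_beats) + rng.normal(scale=10, size=n)
print(f"RSA estimate: {rsa(ibi):.2f} ln(ms^2)")
```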

  6. Facial Pain Followed by Unilateral Facial Nerve Palsy: A Case Report with Literature Review

    OpenAIRE

    GV, Sowmya; BS, Manjunatha; Goel, Saurabh; Singh, Mohit Pal; Astekar, Madhusudan

    2014-01-01

    Peripheral facial nerve palsy is the commonest cranial nerve motor neuropathy. The causes range from cerebrovascular accident to iatrogenic damage, but there are few reports of facial nerve paralysis attributable to odontogenic infections. In the majority of cases, recovery of facial muscle function begins within the first three weeks after onset. This article reports a unique case of a 32-year-old male patient who developed facial pain followed by unilateral facial nerve paralysis due to odontogen...

  7. Facial expressions and pair bonds in hylobatids.

    Science.gov (United States)

    Florkiewicz, Brittany; Skollar, Gabriella; Reichard, Ulrich H

    2018-06-06

    Facial expressions are an important component of primate communication that functions to transmit social information and modulate intentions and motivations. Chimpanzees and macaques, for example, produce a variety of facial expressions when communicating with conspecifics. Hylobatids also produce various facial expressions; however, the origin and function of these facial expressions are still largely unclear. It has been suggested that larger facial expression repertoires may have evolved in the context of social complexity, but this link has yet to be tested at a broader empirical basis. The social complexity hypothesis offers a possible explanation for the evolution of complex communicative signals such as facial expressions, because as the complexity of an individual's social environment increases so does the need for communicative signals. We used an intraspecies, pair-focused study design to test the link between facial expressions and sociality within hylobatids, specifically the strength of pair-bonds. The current study compared 206 hr of video and 103 hr of focal animal data for ten hylobatid pairs from three genera (Nomascus, Hoolock, and Hylobates) living at the Gibbon Conservation Center. Using video footage, we explored 5,969 facial expressions along three dimensions: repertoire use, repertoire breadth, and facial expression synchrony [FES]. We then used focal animal data to compare dimensions of facial expressiveness to pair bond strength and behavioral synchrony. Hylobatids in our study overlapped in only half of their facial expressions (50%) with the only other detailed, quantitative study of hylobatid facial expressions, while 27 facial expressions were uniquely observed in our study animals. Taken together, hylobatids have a large facial expression repertoire of at least 80 unique facial expressions. Contrary to our prediction, facial repertoire composition was not significantly correlated with pair bond strength, rates of territorial synchrony

  8. Optimal Geometrical Set for Automated Marker Placement to Virtualized Real-Time Facial Emotions.

    Directory of Open Access Journals (Sweden)

    Vasanthan Maruthapillai

    In recent years, real-time face recognition has been a major topic of interest in developing intelligent human-machine interaction systems. Over the past several decades, researchers have proposed different algorithms for facial expression recognition, but there has been little focus on detection in real-time scenarios. The present work proposes a new algorithmic method of automated marker placement used to classify six facial expressions: happiness, sadness, anger, fear, disgust, and surprise. Emotional facial expressions were captured using a webcam, while the proposed algorithm placed a set of eight virtual markers on each subject's face. Facial feature extraction methods, including marker distance (distance between each marker to the center of the face) and change in marker distance (change in distance between the original and new marker positions), were used to extract three statistical features (mean, variance, and root mean square) from the real-time video sequence. The initial position of each marker was subjected to the optical flow algorithm for marker tracking with each emotional facial expression. Finally, the extracted statistical features were mapped into corresponding emotional facial expressions using two simple non-linear classifiers, K-nearest neighbor and probabilistic neural network. The results indicate that the proposed automated marker placement algorithm effectively placed eight virtual markers on each subject's face and gave a maximum mean emotion classification rate of 96.94% using the probabilistic neural network.

  9. Optimal Geometrical Set for Automated Marker Placement to Virtualized Real-Time Facial Emotions.

    Science.gov (United States)

    Maruthapillai, Vasanthan; Murugappan, Murugappan

    2016-01-01

    In recent years, real-time face recognition has been a major topic of interest in developing intelligent human-machine interaction systems. Over the past several decades, researchers have proposed different algorithms for facial expression recognition, but there has been little focus on detection in real-time scenarios. The present work proposes a new algorithmic method of automated marker placement used to classify six facial expressions: happiness, sadness, anger, fear, disgust, and surprise. Emotional facial expressions were captured using a webcam, while the proposed algorithm placed a set of eight virtual markers on each subject's face. Facial feature extraction methods, including marker distance (distance between each marker to the center of the face) and change in marker distance (change in distance between the original and new marker positions), were used to extract three statistical features (mean, variance, and root mean square) from the real-time video sequence. The initial position of each marker was subjected to the optical flow algorithm for marker tracking with each emotional facial expression. Finally, the extracted statistical features were mapped into corresponding emotional facial expressions using two simple non-linear classifiers, K-nearest neighbor and probabilistic neural network. The results indicate that the proposed automated marker placement algorithm effectively placed eight virtual markers on each subject's face and gave a maximum mean emotion classification rate of 96.94% using the probabilistic neural network.
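
    The feature pipeline described in this record (marker-to-centre distances and their changes, reduced to mean, variance and RMS, then classified) can be sketched compactly. The code below uses simulated marker tracks rather than webcam data, and a k-nearest-neighbour classifier stands in for the probabilistic neural network used in the paper; dimensions, noise levels and the drift model are assumptions for illustration only.

```python
# Statistical features from eight tracked facial markers, classified with k-NN.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def features(markers, centre):
    """markers: (n_frames, 8, 2) tracked positions; centre: (2,) face centre."""
    dist = np.linalg.norm(markers - centre, axis=2)           # (n_frames, 8)
    change = dist - dist[0]                                    # change from first frame
    feats = []
    for series in (dist, change):
        feats += [series.mean(axis=0), series.var(axis=0),
                  np.sqrt((series ** 2).mean(axis=0))]         # mean, variance, RMS
    return np.concatenate(feats)                               # 48-dimensional vector

rng = np.random.default_rng(6)
centre = np.array([0.5, 0.5])

def simulate(label, n_frames=60):
    """Toy marker tracks whose drift depends on the (simulated) expression label."""
    base = rng.uniform(0.2, 0.8, size=(8, 2))
    drift = 0.002 * (label + 1) * np.arange(n_frames)[:, None, None]
    return base + drift + rng.normal(scale=0.005, size=(n_frames, 8, 2))

X = np.array([features(simulate(lab), centre) for lab in range(6) for _ in range(20)])
y = np.repeat(np.arange(6), 20)

clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)
print("training accuracy:", clf.score(X, y))
```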

  10. An Adult Developmental Approach to Perceived Facial Attractiveness and Distinctiveness

    OpenAIRE

    Natalie C. Ebner; Natalie C. Ebner; Natalie C. Ebner; Joerg Luedicke; Manuel C. Voelkle; Manuel C. Voelkle; Michaela Riediger; Michaela Riediger; Tian Lin; Ulman Lindenberger; Ulman Lindenberger

    2018-01-01

    Attractiveness and distinctiveness constitute facial features with high biological and social relevance. Bringing a developmental perspective to research on social-cognitive face perception, we used a large set of faces taken from the FACES Lifespan Database to examine effects of face and perceiver characteristics on subjective evaluations of attractiveness and distinctiveness in young (20–31 years), middle-aged (44–55 years), and older (70–81 years) men and women. We report novel findings su...

  11. A Statistical Model for Synthesis of Detailed Facial Geometry

    OpenAIRE

    Golovinskiy, Aleksey; Matusik, Wojciech; Pfister, Hanspeter; Rusinkiewicz, Szymon; Funkhouser, Thomas

    2006-01-01

    Detailed surface geometry contributes greatly to the visual realism of 3D face models. However, acquiring high-resolution face geometry is often tedious and expensive. Consequently, most face models used in games, virtual reality, or computer vision look unrealistically smooth. In this paper, we introduce a new statistical technique for the analysis and synthesis of small three-dimensional facial features, such as wrinkles and pores. We acquire high-resolution face geometry for people across ...

  12. Penetrating gunshot wound to the head: transotic approach to remove the bullet and masseteric-facial nerve anastomosis for early facial reanimation.

    Science.gov (United States)

    Donnarumma, Pasquale; Tarantino, Roberto; Gennaro, Paolo; Mitro, Valeria; Valentini, Valentino; Magliulo, Giuseppe; Delfini, Roberto

    2014-01-01

    Gunshot wounds to the head (GSWH) account for the majority of penetrating brain injuries and are the most lethal. Since they are rare in Europe, the number of neurosurgeons who have experienced this type of traumatic injury is decreasing, and fewer cases are reported in the literature. We describe a case of a gunshot to the temporal bone in which the bullet penetrated the skull, resulting in facial nerve paralysis. The bullet was excised via the transotic approach. Microsurgical anastomosis between the masseteric nerve and the facial nerve was performed. GSWH are often devastating. The in-hospital mortality for civilians with penetrating craniocerebral injury is very high, and survivors often have a high rate of complications. When facial paralysis is present, direct masseteric-facial neurorrhaphy represents a good treatment.

  13. Facial Features Can Induce Emotion: Evidence from Affective Priming Tasks

    Directory of Open Access Journals (Sweden)

    Chia-Chen Wu

    2011-05-01

    Our previous study found that schematic faces with direct gazes, with mouths, with horizontal oval eyes, or without noses tend to be perceived as expressing negative emotion. In this study we further explored these factors with an affective priming task. Faces were used as primes, and positive or negative words as probes. The task was to judge the valence of the probe. If the faces could induce emotions, a target word with the same emotional valence should be judged faster than one with the opposite valence (the congruency effect). Experiment 1 used the most positively and negatively rated faces from the previous study as primes. The positive faces had vertical oval eyes and no mouth, while the negative faces had horizontal eyes and a mouth. Results from 34 participants showed that those faces indeed elicited congruency effects. Experiment 2 manipulated gaze direction (N = 16). After the task the participants were asked to rate the prime faces. According to their ratings, faces with direct gaze were perceived as positive and elicited a congruency effect with positive words in the affective priming task. Our data thus support the conjecture that the shape of the eyes, the existence of a mouth, and gaze direction can induce emotion.

  14. The role of great auricular-facial nerve neurorrhaphy in facial nerve damage.

    Science.gov (United States)

    Sun, Yan; Liu, Limei; Han, Yuechen; Xu, Lei; Zhang, Daogong; Wang, Haibo

    2015-01-01

    The facial nerve is easily damaged, and there are many methods for facial nerve reconstruction, such as facial nerve end-to-end anastomosis, the great auricular nerve graft, the sural nerve graft, or hypoglossal-facial nerve anastomosis. However, there have been few studies of great auricular-facial nerve neurorrhaphy. The aim of the present study was to identify the role of great auricular-facial nerve neurorrhaphy and its mechanism. Rat models of facial nerve cut (FC), facial nerve end-to-end anastomosis (FF), facial-great auricular neurorrhaphy (FG), and control (Ctrl) were established. Apex nasi amesiality observation, electrophysiology and immunofluorescence assays were employed to investigate the function and mechanism. In the apex nasi amesiality observation, it was found that apex nasi amesiality in the FG group was partly recovered. Additionally, electrophysiology and immunofluorescence assays revealed that facial-great auricular neurorrhaphy could transfer nerve impulses and express AChR, performing better than facial nerve cut and worse than facial nerve end-to-end anastomosis. The present study indicated that great auricular-facial nerve neurorrhaphy is a substantial option for facial lesion repair, as it efficiently prevents facial muscle atrophy by generating neurotransmitters such as ACh.

  15. Prediction of mortality based on facial characteristics

    Directory of Open Access Journals (Sweden)

    Arnaud Delorme

    2016-05-01

    Recent studies have shown that characteristics of the face contain a wealth of information about health, age and chronic clinical conditions. Such studies involve objective measurement of facial features correlated with historical health information. But some individuals also claim to be adept at gauging mortality based on a glance at a person's photograph. To test this claim, we invited 12 such individuals to see if they could determine if a person was alive or dead based solely on a brief examination of facial photographs. All photos used in the experiment were transformed into a uniform gray scale and then counterbalanced across eight categories: gender, age, gaze direction, glasses, head position, smile, hair color, and image resolution. Participants examined 404 photographs displayed on a computer monitor, one photo at a time, each shown for a maximum of 8 seconds. Half of the individuals in the photos were deceased, and half were alive at the time the experiment was conducted. Participants were asked to press a button to indicate whether they thought the person in a photo was living or deceased. Overall mean accuracy on this task was 53.8%, where 50% was expected by chance (p < 0.004, two-tailed). Statistically significant accuracy was independently obtained in 5 of the 12 participants. We also collected 32-channel electrophysiological recordings and observed a robust difference between images of deceased individuals correctly vs. incorrectly classified in the early event-related potential at 100 ms post-stimulus onset. Our results support claims of individuals who report that some as-yet unknown features of the face predict mortality. The results are also compatible with claims about clairvoyance and warrant further investigation.

  16. Facial sculpting and tissue augmentation.

    Science.gov (United States)

    Carruthers, Jean D A; Carruthers, Alastair

    2005-11-01

    Until recently, deep facial sculpting was exclusively the domain of surgical interventions. Recent advances in the available array of dermal and subdermal fillers combined with an esthetic appreciation by both surgeons and nonsurgeons alike of the positive effect of filling the volume-depleted face have led to an expansion in the indications for the use of soft tissue augmenting agents. Subdermal support of the lateral two-thirds of the brow, the nasojugal fold, the malar and buccal fat pads, the lateral lip commissures, and the perioral region, including the pre-jowl sulcus, all restore youthful facial contour and harmony. An important advance in technique is the subdermal rather than the intradermal injection plane. "Instant" facial sculpting giving a brow-lift, cheek-lift, lip expansion, and perioral augmentation is possible using modern soft tissue augmenting agents. The softer, more relaxed appearance contrasts to the somewhat "pulled" appearance of subjects who have had surgical overcorrections. Treatments can be combined with botulinum toxin and other procedures if required. Newer advances in the use of fillers include the use of fillers injected in the subdermal plane for "lunchtime" facial sculpting. Using the modern esthetic filler compounds, which are biodegradable but longer lasting, subjects can have a "rehearsal" treatment or make it ongoing. Some individuals, such as those with human immunodeficiency virus (HIV)-related lipoatrophy or those who desire to obtain a longer-lasting effect, may elect to use a nonbiodegradable filling agent.

  17. Asyndromic Bilateral Transverse Facial Cleft

    African Journals Online (AJOL)

    2013-04-23

    of this atypical cleft is unknown although the frequency ... Facial cleft remains a source of social anxiety and in the past has led ...

  18. Genetic determinants of facial clefting

    DEFF Research Database (Denmark)

    Jugessur, Astanand; Shi, Min; Gjessing, Håkon Kristian

    2009-01-01

    BACKGROUND: Facial clefts are common birth defects with a strong genetic component. To identify fetal genetic risk factors for clefting, 1536 SNPs in 357 candidate genes were genotyped in two population-based samples from Scandinavia (Norway: 562 case-parent and 592 control-parent triads; Denmark...

  19. Mapping and Manipulating Facial Expression

    Science.gov (United States)

    Theobald, Barry-John; Matthews, Iain; Mangini, Michael; Spies, Jeffrey R.; Brick, Timothy R.; Cohn, Jeffrey F.; Boker, Steven M.

    2009-01-01

    Nonverbal visual cues accompany speech to supplement the meaning of spoken words, signify emotional state, indicate position in discourse, and provide back-channel feedback. This visual information includes head movements, facial expressions and body gestures. In this article we describe techniques for manipulating both verbal and nonverbal facial…

  20. Decoding facial expressions based on face-selective and motion-sensitive areas.

    Science.gov (United States)

    Liang, Yin; Liu, Baolin; Xu, Junhai; Zhang, Gaoyan; Li, Xianglin; Wang, Peiyuan; Wang, Bin

    2017-06-01

    Humans can easily recognize others' facial expressions. Among the brain substrates that enable this ability, considerable attention has been paid to face-selective areas; in contrast, whether motion-sensitive areas, which clearly exhibit sensitivity to facial movements, are involved in facial expression recognition remained unclear. The present functional magnetic resonance imaging (fMRI) study used multi-voxel pattern analysis (MVPA) to explore facial expression decoding in both face-selective and motion-sensitive areas. In a block design experiment, participants viewed facial expressions of six basic emotions (anger, disgust, fear, joy, sadness, and surprise) in images, videos, and eyes-obscured videos. Due to the use of multiple stimulus types, the impacts of facial motion and eye-related information on facial expression decoding were also examined. It was found that motion-sensitive areas showed significant responses to emotional expressions and that dynamic expressions could be successfully decoded in both face-selective and motion-sensitive areas. Compared with static stimuli, dynamic expressions elicited consistently higher neural responses and decoding performance in all regions. A significant decrease in both activation and decoding accuracy due to the absence of eye-related information was also observed. Overall, the findings showed that emotional expressions are represented in motion-sensitive areas in addition to conventional face-selective areas, suggesting that motion-sensitive regions may also effectively contribute to facial expression recognition. The results also suggested that facial motion and eye-related information played important roles by carrying considerable expression information that could facilitate facial expression recognition. Hum Brain Mapp 38:3113-3125, 2017. © 2017 Wiley Periodicals, Inc.
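
    The multi-voxel pattern analysis (MVPA) referred to above can be illustrated schematically: train a linear classifier on voxel patterns from a region of interest and estimate decoding accuracy with leave-one-run-out cross-validation. The sketch below uses simulated patterns rather than fMRI data; the number of runs, conditions and voxels are arbitrary assumptions, and the linear SVM is a common choice rather than necessarily the one used in the study.

```python
# Schematic MVPA decoding of six expression conditions from ROI voxel patterns.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score, LeaveOneGroupOut

rng = np.random.default_rng(7)
n_runs, n_conditions, n_voxels = 8, 6, 200          # 6 expressions, 200-voxel ROI

# Simulated patterns: each expression has a weak, consistent voxel signature plus noise.
signatures = rng.normal(scale=0.5, size=(n_conditions, n_voxels))
X, y, runs = [], [], []
for run in range(n_runs):
    for cond in range(n_conditions):
        X.append(signatures[cond] + rng.normal(size=n_voxels))
        y.append(cond)
        runs.append(run)
X, y, runs = np.array(X), np.array(y), np.array(runs)

scores = cross_val_score(LinearSVC(max_iter=10000), X, y,
                         groups=runs, cv=LeaveOneGroupOut())
print(f"decoding accuracy: {scores.mean():.2f} (chance = {1 / n_conditions:.2f})")
```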

  1. Targeted presurgical decompensation in patients with yaw-dependent facial asymmetry

    OpenAIRE

    Kim, Kyung-A; Lee, Ji-Won; Park, Jeong-Ho; Kim, Byoung-Ho; Ahn, Hyo-Won; Kim, Su-Jung

    2017-01-01

    Facial asymmetry can be classified into the rolling-dominant type (R-type), translation-dominant type (T-type), yawing-dominant type (Y-type), and atypical type (A-type) based on the distorted skeletal components that cause canting, translation, and yawing of the maxilla and/or mandible. Each facial asymmetry type represents dentoalveolar compensations in three dimensions that correspond to the main skeletal discrepancies. To obtain sufficient surgical correction, it is necessary to analyze t...

  2. Facial nerve paralysis associated with temporal bone masses.

    Science.gov (United States)

    Nishijima, Hironobu; Kondo, Kenji; Kagoya, Ryoji; Iwamura, Hitoshi; Yasuhara, Kazuo; Yamasoba, Tatsuya

    2017-10-01

    To investigate the clinical and electrophysiological features of facial nerve paralysis (FNP) due to benign temporal bone masses (TBMs) and elucidate its differences as compared with Bell's palsy. FNP assessed by the House-Brackmann (HB) grading system and by electroneurography (ENoG) were compared retrospectively. We reviewed 914 patient records and identified 31 patients with FNP due to benign TBMs. Moderate FNP (HB Grades II-IV) was dominant for facial nerve schwannoma (FNS) (n=15), whereas severe FNP (Grades V and VI) was dominant for cholesteatomas (n=8) and hemangiomas (n=3). The average ENoG value was 19.8% for FNS, 15.6% for cholesteatoma, and 0% for hemangioma. Analysis of the correlation between HB grade and ENoG value for FNP due to TBMs and Bell's palsy revealed that given the same ENoG value, the corresponding HB grade was better for FNS, followed by cholesteatoma, and worst in Bell's palsy. Facial nerve damage caused by benign TBMs could depend on the underlying pathology. Facial movement and ENoG values did not correlate when comparing TBMs and Bell's palsy. When the HB grade is found to be unexpectedly better than the ENoG value, TBMs should be included in the differential diagnosis. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Holistic face processing can inhibit recognition of forensic facial composites.

    Science.gov (United States)

    McIntyre, Alex H; Hancock, Peter J B; Frowd, Charlie D; Langton, Stephen R H

    2016-04-01

    Facial composite systems help eyewitnesses to show the appearance of criminals. However, likenesses created by unfamiliar witnesses will not be completely accurate, and people familiar with the target can find them difficult to identify. Faces are processed holistically; we explore whether this impairs identification of inaccurate composite images and whether recognition can be improved. In Experiment 1 (n = 64) an imaging technique was used to make composites of celebrity faces more accurate and identification was contrasted with the original composite images. Corrected composites were better recognized, confirming that errors in production of the likenesses impair identification. The influence of holistic face processing was explored by misaligning the top and bottom parts of the composites (cf. Young, Hellawell, & Hay, 1987). Misalignment impaired recognition of corrected composites but identification of the original, inaccurate composites significantly improved. This effect was replicated with facial composites of noncelebrities in Experiment 2 (n = 57). We conclude that, like real faces, facial composites are processed holistically: recognition is impaired because unlike real faces, composites contain inaccuracies and holistic face processing makes it difficult to perceive identifiable features. This effect was consistent across composites of celebrities and composites of people who are personally familiar. Our findings suggest that identification of forensic facial composites can be enhanced by presenting composites in a misaligned format. (c) 2016 APA, all rights reserved.
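
    The misalignment manipulation used in these experiments (offsetting the top and bottom halves of a face so that holistic processing is disrupted) is easy to prototype. The sketch below is a simplified stand-in: it shifts the bottom half of an image array sideways with wrap-around, whereas the classic stimuli pad the offset region with background; the random array here stands in for a real composite image, which could be loaded with Pillow or a similar library.

```python
# Produce a "misaligned" face by shifting the bottom half of the image horizontally.
import numpy as np

def misalign(image, offset):
    """image: (H, W) or (H, W, C) array; offset: horizontal shift of the bottom half."""
    out = image.copy()
    half = image.shape[0] // 2
    # np.roll wraps pixels around; padding with background would match the classic
    # stimuli more closely, but the holistic disruption is the same idea.
    out[half:] = np.roll(image[half:], offset, axis=1)
    return out

face = np.random.default_rng(8).integers(0, 256, size=(128, 96), dtype=np.uint8)
misaligned = misalign(face, offset=24)       # shift by ~25% of the image width
print(misaligned.shape)
```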

  4. Pseudotumoural hypertrophic neuritis of the facial nerve

    OpenAIRE

    Zanoletti, E; Mazzoni, A; Barbò, R

    2008-01-01

    In a retrospective study of our cases of recurrent paralysis of the facial nerve of tumoural and non-tumoural origin, a tumour-like lesion of the intra-temporal course of the facial nerve, mimicking facial nerve schwannoma, was found and investigated in 4 cases. This was defined as pseudotumoural hypertrophic neuritis of the facial nerve. The picture was one of recurrent acute facial palsy with incomplete recovery and imaging of a benign tumour. It was different from the well-known recurrent ...

  5. Possibilities of physiotherapy in facial nerve paresis

    OpenAIRE

    ZIFČÁKOVÁ, Šárka

    2015-01-01

    The bachelor thesis addresses paresis of the facial nerve. Facial nerve paresis is a rather common illness which, despite all modern treatments, often cannot be cured without consequences. Paresis of the facial nerve occurs in two forms, central and peripheral. A central paresis is the result of a lesion located above the motor nucleus of the facial nerve. A peripheral paresis is caused by a lesion located either at the motor nucleus or in the course of the facial ner...

  6. Antenatal diagnosis of complete facial duplication--a case report of a rare craniofacial defect.

    Science.gov (United States)

    Rai, V S; Gaffney, G; Manning, N; Pirrone, P G; Chamberlain, P F

    1998-06-01

    We report a case of the prenatal sonographic detection of facial duplication, the diprosopus abnormality, in a twin pregnancy. The characteristic sonographic features of the condition include duplication of eyes, mouth, nose and both mid- and anterior intracranial structures. A heart-shaped abnormality of the cranial vault should prompt more detailed examination for other supportive features of this rare condition.

  7. Four siblings with distal renal tubular acidosis and nephrocalcinosis, neurobehavioral impairment, short stature, and distinctive facial appearance: a possible new autosomal recessive syndrome.

    Science.gov (United States)

    Faqeih, Eissa; Al-Akash, Samhar I; Sakati, Nadia; Teebi, Prof Ahmad S

    2007-09-01

    We report on four siblings (three males, one female) born to first-cousin Arab parents with the constellation of distal renal tubular acidosis (RTA), small kidneys, nephrocalcinosis, neurobehavioral impairment, short stature, and distinctive facial features. They presented with early developmental delay with subsequent severe mental, behavioral and social impairment and autistic-like features. Their facial features are unique, with prominent cheeks, a well-defined philtrum, a large bulbous nose, a V-shaped upper lip border, a full lower lip, an open mouth with protruded tongue, and pits on the ear lobule. All had proteinuria, hypercalciuria, hypercalcemia, and normal anion-gap metabolic acidosis. Renal ultrasound examinations revealed small kidneys with varying degrees of hyperechogenicity and nephrocalcinosis. Additional findings included dilated ventricles and cerebral demyelination on brain imaging studies. Other than distal RTA, common causes of nephrocalcinosis were excluded. The constellation of features in this family likely represents a new autosomal recessive syndrome, providing further evidence of the heterogeneity of nephrocalcinosis syndromes. Copyright 2007 Wiley-Liss, Inc.

  8. [Neurological disease and facial recognition].

    Science.gov (United States)

    Kawamura, Mitsuru; Sugimoto, Azusa; Kobayakawa, Mutsutaka; Tsuruya, Natsuko

    2012-07-01

    To discuss the neurological basis of facial recognition, we present our case reports of impaired recognition and a review of previous literature. First, we present a case of infarction and discuss prosopagnosia, which has had a large impact on face recognition research. From a study of patient symptoms, we assume that prosopagnosia may be caused by unilateral right occipitotemporal lesion and right cerebral dominance of facial recognition. Further, circumscribed lesion and degenerative disease may also cause progressive prosopagnosia. Apperceptive prosopagnosia is observed in patients with posterior cortical atrophy (PCA), pathologically considered as Alzheimer's disease, and associative prosopagnosia in frontotemporal lobar degeneration (FTLD). Second, we discuss face recognition as part of communication. Patients with Parkinson disease show social cognitive impairments, such as difficulty in facial expression recognition and deficits in theory of mind as detected by the reading the mind in the eyes test. Pathological and functional imaging studies indicate that social cognitive impairment in Parkinson disease is possibly related to damages in the amygdalae and surrounding limbic system. The social cognitive deficits can be observed in the early stages of Parkinson disease, and even in the prodromal stage, for example, patients with rapid eye movement (REM) sleep behavior disorder (RBD) show impairment in facial expression recognition. Further, patients with myotonic dystrophy type 1 (DM 1), which is a multisystem disease that mainly affects the muscles, show social cognitive impairment similar to that of Parkinson disease. Our previous study showed that facial expression recognition impairment of DM 1 patients is associated with lesion in the amygdalae and insulae. Our study results indicate that behaviors and personality traits in DM 1 patients, which are revealed by social cognitive impairment, are attributable to dysfunction of the limbic system.

  9. A moat around castle walls. The role of axillary and facial hair in lymph node protection from mutagenic factors.

    Science.gov (United States)

    Komarova, Svetlana V

    2006-01-01

    Axillary hair is a highly conserved phenotypical feature in humans, and as such deserves at least consideration of its functional significance. Protection from environmental factors is one of the main functions attributed to hair in furred vertebrates, but is believed to be inapplicable to humans. I considered the hypothesis that the phenotypic preservation of axillary hair is due to its unrecognized role in protecting the organism. Two immediate questions arise: what exactly is being protected, and what is it protected from? A large group of axillary lymph nodes represents a major difference between the underarms and the adjacent areas of the trunk. The consideration of potential factors from which hair can offer protection identifies sunlight as the most likely candidate. Intense sweat production in the underarms may represent an independent defense mechanism, specifically protecting lymph nodes from overheating. Moreover, the pattern of facial hair growth in males strikingly overlaps with the distribution of superficial lymph nodes, suggesting a potential role for facial hair in the protection of lymph nodes, and possibly the thymus and thyroid. The idea of lymph node protection from environmental mutagenic factors, such as UV radiation and heat, appears particularly important in light of the wide association of lymph nodes with cancers. The position of contemporary fashion towards body hair is aggressively negative, including the social pressure for removal of axillary and bikini-line hair for women, facial hair for men in many professional occupations, and even body hair for men. If this hypothesis is proven to be true, the implications will be significant for immunology (by providing new insights into lymph node physiology), health sciences (depilation is a painful and therefore easily modifiable habit if proven to increase disease risk), as well as art, social fashion and the economy.

  10. Operant conditioning of facial displays of pain.

    Science.gov (United States)

    Kunz, Miriam; Rainville, Pierre; Lautenbacher, Stefan

    2011-06-01

    The operant model of chronic pain posits that nonverbal pain behavior, such as facial expressions, is sensitive to reinforcement, but experimental evidence supporting this assumption is sparse. The aim of the present study was to investigate in a healthy population a) whether facial pain behavior can indeed be operantly conditioned using a discriminative reinforcement schedule to increase and decrease facial pain behavior and b) to what extent these changes affect pain experience indexed by self-ratings. In the experimental group (n = 29), the participants were reinforced every time that they showed pain-indicative facial behavior (up-conditioning) or a neutral expression (down-conditioning) in response to painful heat stimulation. Once facial pain behavior was successfully up- or down-conditioned, respectively (which occurred in 72% of participants), facial pain displays and self-report ratings were assessed. In addition, a control group (n = 11) was used that was yoked to the reinforcement plans of the experimental group. During the conditioning phases, reinforcement led to significant changes in facial pain behavior in the majority of the experimental group (p .136). Fine-grained analyses of facial muscle movements revealed a similar picture. Furthermore, the decline in facial pain displays (as observed during down-conditioning) strongly predicted changes in pain ratings (R(2) = 0.329). These results suggest that a) facial pain displays are sensitive to reinforcement and b) that changes in facial pain displays can affect self-report ratings.

  11. Recognizing Facial Expressions Automatically from Video

    Science.gov (United States)

    Shan, Caifeng; Braspenning, Ralph

    Facial expressions, resulting from movements of the facial muscles, are the face changes in response to a person's internal emotional states, intentions, or social communications. There is a considerable history associated with the study of facial expressions. Darwin [22] was the first to describe in detail the specific facial expressions associated with emotions in animals and humans, and argued that all mammals show emotions reliably in their faces. Since then, facial expression analysis has been an area of great research interest for behavioral scientists [27]. Psychological studies [48, 3] suggest that facial expressions, as the main mode of nonverbal communication, play a vital role in human face-to-face communication. For illustration, we show some examples of facial expressions in Fig. 1.

  12. Imaging the Facial Nerve: A Contemporary Review

    International Nuclear Information System (INIS)

    Gupta, S.; Roehm, P.C.; Mends, F.; Hagiwara, M.; Fatterpekar, G.

    2013-01-01

    Imaging plays a critical role in the evaluation of a number of facial nerve disorders. The facial nerve has a complex anatomical course; thus, a thorough understanding of the course of the facial nerve is essential to localize the sites of pathology. Facial nerve dysfunction can occur from a variety of causes, which can often be identified on imaging. Computed tomography and magnetic resonance imaging are helpful for identifying bony facial canal and soft tissue abnormalities, respectively. Ultrasound of the facial nerve has been used to predict functional outcomes in patients with Bell’s palsy. More recently, diffusion tensor tractography has appeared as a new modality which allows three-dimensional display of facial nerve fibers

  13. Facial Displays Are Tools for Social Influence.

    Science.gov (United States)

    Crivelli, Carlos; Fridlund, Alan J

    2018-05-01

    Based on modern theories of signal evolution and animal communication, the behavioral ecology view of facial displays (BECV) reconceives our 'facial expressions of emotion' as social tools that serve as lead signs to contingent action in social negotiation. BECV offers an externalist, functionalist view of facial displays that is not bound to Western conceptions about either expressions or emotions. It easily accommodates recent findings of diversity in facial displays, their public context-dependency, and the curious but common occurrence of solitary facial behavior. Finally, BECV restores continuity of human facial behavior research with modern functional accounts of non-human communication, and provides a non-mentalistic account of facial displays well-suited to new developments in artificial intelligence and social robotics. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  14. Misrecognition of facial expressions in delinquents

    Directory of Open Access Journals (Sweden)

    Matsuura Naomi

    2009-09-01

    Background: Previous reports have suggested impairment in facial expression recognition in delinquents, but controversy remains with respect to how such recognition is impaired. To address this issue, we investigated facial expression recognition in delinquents in detail. Methods: We tested 24 male adolescent/young adult delinquents incarcerated in correctional facilities. We compared their performances with those of 24 age- and gender-matched control participants. Using standard photographs of facial expressions illustrating six basic emotions, participants matched each emotional facial expression with an appropriate verbal label. Results: Delinquents were less accurate in the recognition of facial expressions that conveyed disgust than were control participants. The delinquents misrecognized the facial expressions of disgust as anger more frequently than did controls. Conclusion: These results suggest that one of the underpinnings of delinquency might be impaired recognition of emotional facial expressions, with a specific bias toward interpreting disgusted expressions as hostile angry expressions.

  15. Facial Expression Recognition By Using Fisherface Methode With Backpropagation Neural Network

    Directory of Open Access Journals (Sweden)

    Zaenal Abidin

    2011-01-01

    Full Text Available Abstract— In daily life, especially in interpersonal communication, the face is often used for expression. Facial expressions give information about the emotional state of the person. A facial expression is one of the behavioral characteristics. The components of a basic facial expression analysis system are face detection, face data extraction, and facial expression recognition. The fisherface method with a backpropagation artificial neural network can be used for facial expression recognition. This method consists of a two-stage process, namely PCA and LDA: PCA is used to reduce the dimensionality, while LDA is used for feature extraction of facial expressions. The system was tested on two databases, the JAFFE database and the MUG database. It correctly classified expressions with an accuracy of 86.85% (25 false positives) for JAFFE image type I, 89.20% (15 false positives) for JAFFE image type II, and 87.79% (16 false positives) for JAFFE image type III; on the MUG images, the accuracy was 98.09% (5 false positives). Keywords— facial expression, fisherface method, PCA, LDA, backpropagation neural network.
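
    A minimal sketch of the two-stage fisherface-style pipeline described above (PCA for dimensionality reduction followed by LDA for discriminative feature extraction), with a small backpropagation network as the classifier. The data, array shapes, and parameters are placeholders for illustration only, not the JAFFE or MUG setups.

    ```python
    # Fisherface-style pipeline sketch: PCA reduces dimensionality, LDA extracts
    # discriminative expression features, and a small backpropagation network
    # (MLPClassifier) performs the final classification. Data are synthetic.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_samples, img_h, img_w, n_classes = 200, 32, 32, 7    # placeholder sizes
    X = rng.random((n_samples, img_h * img_w))              # flattened face images
    y = rng.integers(0, n_classes, size=n_samples)          # expression labels

    pipeline = make_pipeline(
        PCA(n_components=50),                                      # stage 1: compression
        LinearDiscriminantAnalysis(n_components=n_classes - 1),    # stage 2: fisher projection
        MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
    )

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    pipeline.fit(X_tr, y_tr)
    print("held-out accuracy:", pipeline.score(X_te, y_te))
    ```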

  16. Deliberately generated and imitated facial expressions of emotions in people with eating disorders.

    Science.gov (United States)

    Dapelo, Marcela Marin; Bodas, Sergio; Morris, Robin; Tchanturia, Kate

    2016-02-01

    People with eating disorders have difficulties in socio-emotional functioning that could contribute to maintaining the functional consequences of the disorder. This study aimed to explore the ability to deliberately generate (i.e., pose) and imitate facial expressions of emotions in women with anorexia (AN) and bulimia nervosa (BN), compared to healthy controls (HC). One hundred and three participants (36 AN, 25 BN, and 42 HC) were asked to pose and imitate facial expressions of anger, disgust, fear, happiness, and sadness. Their facial expressions were recorded and coded. Participants with eating disorders (both AN and BN) were less accurate than HC when posing facial expressions of emotions. Participants with AN were less accurate than HC when imitating facial expressions, whilst BN participants had a middle-range performance. All results remained significant after controlling for anxiety, depression and autistic features. A limitation is the relatively small number of BN participants recruited for this study. The study findings suggest that people with eating disorders, particularly those with AN, have difficulties posing and imitating facial expressions of emotions. These difficulties could have an impact on social communication and social functioning. This is the first study to investigate the ability to pose and imitate facial expressions of emotions in people with eating disorders, and the findings suggest this area should be further explored in future studies. Copyright © 2015. Published by Elsevier B.V.

  17. Evidence for Anger Saliency during the Recognition of Chimeric Facial Expressions of Emotions in Underage Ebola Survivors

    Directory of Open Access Journals (Sweden)

    Martina Ardizzi

    2017-06-01

    Full Text Available One of the crucial features defining basic emotions and their prototypical facial expressions is their value for survival. Childhood traumatic experiences affect the effective recognition of facial expressions of negative emotions, which normally allows the recruitment of adequate behavioral responses to environmental threats. Specifically, anger becomes an extraordinarily salient stimulus, unbalancing victims’ recognition of negative emotions. Despite the plethora of studies on this topic, to date it is not clear whether this phenomenon reflects an overall response tendency toward anger recognition or a selective proneness to the salience of specific facial expressive cues of anger after trauma exposure. To address this issue, a group of underage Sierra Leonean Ebola virus disease survivors (mean age 15.40 years, SE 0.35; years of schooling 8.8 years, SE 0.46; 14 males) and a control group (mean age 14.55, SE 0.30; years of schooling 8.07 years, SE 0.30; 15 males) performed a forced-choice chimeric facial expression recognition task. The chimeric facial expressions were obtained by pairing upper and lower half faces of two different negative emotions (selected from anger, fear and sadness) for a total of six different combinations. Overall, results showed that upper facial expressive cues were more salient than lower facial expressive cues. This priority was lost among Ebola virus disease survivors for the chimeric facial expressions of anger. In this case, differently from controls, Ebola virus disease survivors recognized anger regardless of the upper or lower position of the facial expressive cues of this emotion. The present results demonstrate that victims’ performance in the recognition of the facial expression of anger does not reflect an overall response tendency toward anger recognition, but rather the specific greater salience of facial expressive cues of anger. Furthermore, the present results show that traumatic experiences deeply modify

  18. Rejuvenecimiento facial en "doble sigma" "Double ogee" facial rejuvenation

    Directory of Open Access Journals (Sweden)

    O. M. Ramírez

    2007-03-01

    Full Text Available The subperiosteal techniques described by Tessier revolutionized the treatment of facial aging, with this approach recommended for treating the early signs of aging in young and middle-aged patients. Psillakis refined the technique, and Ramírez described a safer and more effective method of subperiosteal lifting, demonstrating that the subperiosteal facial rejuvenation technique can be applied across the broad spectrum of facial aging. The introduction of the endoscope into the treatment of facial aging has opened a new era in aesthetic surgery. Today, endoscopically assisted subperiosteal dissection of the upper, middle, and lower thirds of the face provides an effective means of repositioning the soft tissues, with the possibility of augmenting the craniofacial skeleton, less postoperative facial edema, minimal injury to the branches of the facial nerve, and better treatment of the cheeks. This approach, developed and refined over the last decade, is known as the "double ogee rhytidectomy". The double ogee Venetian arch, well known in architecture since antiquity, is characterized by a harmonious line formed by a convex curve followed by a concave curve. When a young face is observed from an oblique angle, it shows a characteristic distribution of the tissues, previously described for the midface as an architectural ogee arch or an "S"-shaped curve. However, on closer examination of the young face in the three-quarter view, the complete profile reveals a "double ogee arch" or a double "S". To see this reciprocal, multicurvilinear line of beauty, we must view the face in an oblique position so that both medial canthi are visible. In this position, the young face presents a characteristic convexity of the tail of the eyebrow that merges into the concavity of the lateral orbital wall, thus forming the first arch (superior

  19. Dermoscopic clues to differentiate facial lentigo maligna from pigmented actinic keratosis.

    Science.gov (United States)

    Lallas, A; Tschandl, P; Kyrgidis, A; Stolz, W; Rabinovitz, H; Cameron, A; Gourhant, J Y; Giacomel, J; Kittler, H; Muir, J; Argenziano, G; Hofmann-Wellenhof, R; Zalaudek, I

    2016-05-01

    Dermoscopy is limited in differentiating accurately between pigmented lentigo maligna (LM) and pigmented actinic keratosis (PAK). This might be related to the fact that most studies have focused on pigmented criteria only, without considering additional recognizable features. To investigate the diagnostic accuracy of established dermoscopic criteria for pigmented LM and PAK, but including in the evaluation features previously associated with nonpigmented facial actinic keratosis. Retrospectively enrolled cases of histopathologically diagnosed LM, PAK and solar lentigo/early seborrhoeic keratosis (SL/SK) were dermoscopically evaluated for the presence of predefined criteria. Univariate and multivariate regression analyses were performed and receiver operating characteristic curves were used. The study sample consisted of 70 LMs, 56 PAKs and 18 SL/SKs. In a multivariate analysis, the most potent predictors of LM were grey rhomboids (sixfold increased probability of LM), nonevident follicles (fourfold) and intense pigmentation (twofold). In contrast, white circles, scales and red colour were significantly correlated with PAK, posing a 14-fold, eightfold and fourfold probability for PAK, respectively. The absence of evident follicles also represented a frequent LM criterion, characterizing 71% of LMs. White and evident follicles, scales and red colour represent significant diagnostic clues for PAK. Conversely, intense pigmentation and grey rhomboidal lines appear highly suggestive of LM. © 2015 British Association of Dermatologists.
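
    The fold-increases in probability reported above come from a multivariate regression over binary dermoscopic criteria. A minimal sketch, on entirely synthetic stand-in data, of how such per-criterion odds ratios and a receiver operating characteristic summary could be computed (the criterion values below are random placeholders, not the study's dataset):

    ```python
    # Sketch: multivariate logistic regression over binary dermoscopic criteria,
    # reporting per-criterion odds ratios and the ROC AUC. Data are synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)
    criteria = ["grey_rhomboids", "nonevident_follicles", "intense_pigmentation",
                "white_circles", "scales", "red_colour"]
    n = 144                                            # 70 LM + 56 PAK + 18 SL/SK records
    X = rng.integers(0, 2, size=(n, len(criteria)))    # presence/absence of each criterion
    y = rng.integers(0, 2, size=n)                     # 1 = LM, 0 = other (placeholder labels)

    model = LogisticRegression().fit(X, y)
    odds_ratios = np.exp(model.coef_[0])               # exp(beta) = fold-change in odds
    for name, oratio in zip(criteria, odds_ratios):
        print(f"{name}: OR = {oratio:.2f}")
    print("ROC AUC:", roc_auc_score(y, model.predict_proba(X)[:, 1]))
    ```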

  20. Imaging features of thalassemia

    Energy Technology Data Exchange (ETDEWEB)

    Tunaci, M.; Tunaci, A.; Engin, G.; Oezkorkmaz, B.; Acunas, G.; Acunas, B. [Dept. of Radiology, Istanbul Univ. (Turkey); Dincol, G. [Dept. of Internal Medicine, Istanbul Univ. (Turkey)

    1999-07-01

    Thalassemia is a chronic, inherited, microcytic anemia characterized by defective hemoglobin synthesis and ineffective erythropoiesis. In all thalassemias, the clinical features that result from anemia and from transfusional and absorptive iron overload are similar but vary in severity. The radiographic features of {beta}-thalassemia are due in large part to marrow hyperplasia. The markedly expanded marrow space leads to various skeletal manifestations involving the spine, skull, facial bones, and ribs. Extramedullary hematopoiesis (ExmH), hemosiderosis, and cholelithiasis are among the non-skeletal manifestations of thalassemia. The skeletal X-ray findings show the characteristics of chronic overactivity of the marrow. In this article both skeletal and non-skeletal manifestations of thalassemia are discussed with an overview of X-ray findings, including MRI and CT findings. (orig.)

  1. Imaging features of thalassemia

    International Nuclear Information System (INIS)

    Tunaci, M.; Tunaci, A.; Engin, G.; Oezkorkmaz, B.; Acunas, G.; Acunas, B.; Dincol, G.

    1999-01-01

    Thalassemia is a chronic, inherited, microcytic anemia characterized by defective hemoglobin synthesis and ineffective erythropoiesis. In all thalassemias, the clinical features that result from anemia and from transfusional and absorptive iron overload are similar but vary in severity. The radiographic features of β-thalassemia are due in large part to marrow hyperplasia. The markedly expanded marrow space leads to various skeletal manifestations involving the spine, skull, facial bones, and ribs. Extramedullary hematopoiesis (ExmH), hemosiderosis, and cholelithiasis are among the non-skeletal manifestations of thalassemia. The skeletal X-ray findings show the characteristics of chronic overactivity of the marrow. In this article both skeletal and non-skeletal manifestations of thalassemia are discussed with an overview of X-ray findings, including MRI and CT findings. (orig.)

  2. Real Time Facial Expression Recognition Using Webcam and SDK Affectiva

    Directory of Open Access Journals (Sweden)

    Martin Magdin

    2018-06-01

    Full Text Available Facial expression is an essential part of communication. For this reason, the evaluation of human emotions by computer is a very interesting topic that has gained more and more attention in recent years. This is mainly due to the possibility of applying facial expression recognition in many fields such as HCI, video games, virtual reality, and the analysis of customer satisfaction. Emotion determination (recognition) is often performed in three basic phases: face detection, facial feature extraction, and, as the last stage, expression classification. The most common scheme is Ekman’s classification of 6 emotional expressions (or 7, including the neutral expression), although other classifications exist, such as the Russell circumplex model, which contains up to 24 categories, and the Plutchik Wheel of Emotions. The methods used in the three phases of the recognition process have not only improved over the last 60 years; new methods and algorithms, such as the Viola-Jones detector, have also emerged that offer greater accuracy and lower computational demands. As a result, various solutions are currently available in the form of a Software Development Kit (SDK). In this publication, we present the design and creation of our system for real-time emotion classification. Our intention was to create a system that would use all three phases of the recognition process and work quickly and stably in real time. That is why we decided to take advantage of the existing Affectiva SDK. Using a standard webcam, facial landmarks are detected on the image automatically with the Affectiva SDK. A geometric feature-based approach is used for feature extraction: the distances between landmarks are used as features, and a brute-force search is used to select an optimal feature subset. The proposed system uses a neural network for classification. The proposed system recognizes 6 (respectively 7) facial expressions
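
    A minimal sketch of the geometric, distance-based feature extraction and neural-network classification described above. The landmark coordinates below are placeholders standing in for whatever a face-tracking SDK such as Affectiva returns (the SDK's actual API is not shown), and the classifier settings are illustrative:

    ```python
    # Sketch: pairwise landmark distances as a geometric feature vector, fed to a
    # small neural-network classifier. Landmark coordinates are placeholder data
    # standing in for the output of a face-tracking SDK.
    import numpy as np
    from itertools import combinations
    from sklearn.neural_network import MLPClassifier

    def distance_features(landmarks):
        """landmarks: (n_points, 2) array of (x, y) facial landmark coordinates."""
        return np.array([np.linalg.norm(landmarks[i] - landmarks[j])
                         for i, j in combinations(range(len(landmarks)), 2)])

    rng = np.random.default_rng(0)
    n_faces, n_landmarks, n_expressions = 300, 34, 7        # illustrative sizes
    faces = rng.random((n_faces, n_landmarks, 2))           # placeholder landmarks
    labels = rng.integers(0, n_expressions, size=n_faces)   # placeholder expression labels

    X = np.stack([distance_features(f) for f in faces])     # one feature vector per face
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0).fit(X, labels)
    print("training accuracy:", clf.score(X, labels))
    ```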

  3. Supradural inflammatory soup in awake and freely moving rats induces facial allodynia that is blocked by putative immune modulators.

    Science.gov (United States)

    Wieseler, Julie; Ellis, Amanda; McFadden, Andrew; Stone, Kendra; Brown, Kimberley; Cady, Sara; Bastos, Leandro F; Sprunger, David; Rezvani, Niloofar; Johnson, Kirk; Rice, Kenner C; Maier, Steven F; Watkins, Linda R

    2017-06-01

    Facial allodynia is a migraine symptom that is generally considered to represent a pivotal point in migraine progression. Treatment before development of facial allodynia tends to be more successful than treatment afterwards. As such, understanding the underlying mechanisms of facial allodynia may lead to a better understanding of the mechanisms underlying migraine. Migraine facial allodynia is modeled by applying inflammatory soup (histamine, bradykinin, serotonin, prostaglandin E2) over the dura. Whether glial and/or immune activation contributes to such pain is unknown. Here we tested if trigeminal nucleus caudalis (Sp5C) glial and/or immune cells are activated following supradural inflammatory soup, and if putative glial/immune inhibitors suppress the consequent facial allodynia. Inflammatory soup was administered via bilateral indwelling supradural catheters in freely moving rats, inducing robust and reliable facial allodynia. Gene expression for microglial/macrophage activation markers, interleukin-1β, and tumor necrosis factor-α increased following inflammatory soup along with robust expression of facial allodynia. This provided the basis for pursuing studies of the behavioral effects of 3 diverse immunomodulatory drugs on facial allodynia. Pretreatment with either of two compounds broadly used as putative glial/immune inhibitors (minocycline, ibudilast) prevented the development of facial allodynia, as did treatment after supradural inflammatory soup but prior to the expression of facial allodynia. Lastly, the toll-like receptor 4 (TLR4) antagonist (+)-naltrexone likewise blocked development of facial allodynia after supradural inflammatory soup. Taken together, these exploratory data support that activated glia and/or immune cells may drive the development of facial allodynia in response to supradural inflammatory soup in unanesthetized male rats. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Effects of facial emotion recognition remediation on visual scanning of novel face stimuli.

    Science.gov (United States)

    Marsh, Pamela J; Luckett, Gemma; Russell, Tamara; Coltheart, Max; Green, Melissa J

    2012-11-01

    Previous research shows that emotion recognition in schizophrenia can be improved with targeted remediation that draws attention to important facial features (eyes, nose, mouth). Moreover, the effects of training have been shown to last for up to one month after training. The aim of this study was to investigate whether improved emotion recognition of novel faces is associated with concomitant changes in visual scanning of these same novel facial expressions. Thirty-nine participants with schizophrenia received emotion recognition training using Ekman's Micro-Expression Training Tool (METT), with emotion recognition and visual scanpath (VSP) recordings to face stimuli collected simultaneously. Baseline ratings of interpersonal and cognitive functioning were also collected from all participants. Post-METT training, participants showed changes in foveal attention to the features of facial expressions of emotion not used in METT training, which were generally consistent with the information about important features from the METT. In particular, there were changes in how participants looked at the features of the facial expressions of surprise, disgust, fear, happiness, and neutral, demonstrating that improved emotion recognition is paralleled by changes in the way participants with schizophrenia viewed novel facial expressions of emotion. However, there were overall decreases in foveal attention to sad and neutral faces, indicating that more intensive instruction might be needed for these faces during training. Most importantly, the evidence shows that participant gender may affect training outcomes. Copyright © 2012 Elsevier B.V. All rights reserved.

  5. Facial Expression at Retrieval Affects Recognition of Facial Identity

    Directory of Open Access Journals (Sweden)

    Wenfeng eChen

    2015-06-01

    Full Text Available It is well known that memory can be modulated by emotional stimuli at the time of encoding and consolidation. For example, happy faces create better identity recognition than faces with certain other expressions. However, the influence of facial expression at the time of retrieval remains unknown in the literature. To separate the potential influence of expression at retrieval from its effects at earlier stages, we had participants learn neutral faces but manipulated facial expression at the time of memory retrieval in a standard old/new recognition task. The results showed a clear effect of facial expression, where happy test faces were identified more successfully than angry test faces. This effect is unlikely due to greater image similarity between the neutral learning face and the happy test face, because image analysis showed that the happy test faces are in fact less similar to the neutral learning faces relative to the angry test faces. In the second experiment, we investigated whether this emotional effect is influenced by the expression at the time of learning. We employed angry or happy faces as learning stimuli, and angry, happy, and neutral faces as test stimuli. The results showed that the emotional effect at retrieval is robust across different encoding conditions with happy or angry expressions. These findings indicate that emotional expressions affect the retrieval process in identity recognition, and identity recognition does not rely on emotional association between learning and test faces.

  6. Novel Noninvasive Brain Disease Detection System Using a Facial Image Sensor

    Directory of Open Access Journals (Sweden)

    Ting Shu

    2017-12-01

    Full Text Available Brain disease, including any condition or disability that affects the brain, is fast becoming a leading cause of death. The traditional diagnostic methods for brain disease are time-consuming, inconvenient and not patient friendly. As more and more individuals undergo examinations to determine whether they suffer from any form of brain disease, developing noninvasive, efficient, and patient friendly detection systems will be beneficial. Therefore, in this paper, we propose a novel noninvasive brain disease detection system based on the analysis of facial colors. The system consists of four components. A facial image is first captured through a specialized sensor, from which four facial key blocks are then located automatically within the various facial regions. Color features are extracted from each block to form a feature vector for classification via the Probabilistic Collaborative based Classifier. To thoroughly test the system and its performance, seven facial key block combinations were evaluated. The best result was achieved using the second facial key block, and it showed that the Probabilistic Collaborative based Classifier is the most suitable. The overall performance of the proposed system achieves an accuracy of 95%, a sensitivity of 94.33%, a specificity of 95.67%, and an average processing time (for one sample) of less than 1 min for brain disease detection.
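
    A minimal sketch of the color-feature stage described above: mean color statistics are extracted from a few facial key blocks and concatenated into a feature vector for classification. The block coordinates, the classifier (a generic nearest-centroid stand-in rather than the paper's Probabilistic Collaborative based Classifier), and all data are illustrative assumptions:

    ```python
    # Sketch: per-block color statistics as features for facial-image classification.
    # Block coordinates and data are placeholders; a nearest-centroid classifier
    # stands in for the paper's Probabilistic Collaborative based Classifier.
    import numpy as np
    from sklearn.neighbors import NearestCentroid

    # Hypothetical facial key blocks as (row, col, height, width) within a 128x128 image.
    KEY_BLOCKS = [(30, 40, 16, 16), (30, 72, 16, 16), (70, 56, 16, 16), (95, 56, 16, 16)]

    def color_features(image):
        """image: (H, W, 3) RGB array. Returns mean and std of each channel per key block."""
        feats = []
        for r, c, h, w in KEY_BLOCKS:
            block = image[r:r + h, c:c + w].reshape(-1, 3)
            feats.extend(block.mean(axis=0))
            feats.extend(block.std(axis=0))
        return np.array(feats)

    rng = np.random.default_rng(0)
    images = rng.random((60, 128, 128, 3))          # placeholder facial images
    labels = rng.integers(0, 2, size=60)            # 1 = disease, 0 = healthy (placeholder)

    X = np.stack([color_features(img) for img in images])
    clf = NearestCentroid().fit(X, labels)
    print("training accuracy:", clf.score(X, labels))
    ```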

  7. Intact Rapid Facial Mimicry as well as Generally Reduced Mimic Responses in Stable Schizophrenia Patients

    Science.gov (United States)

    Chechko, Natalya; Pagel, Alena; Otte, Ellen; Koch, Iring; Habel, Ute

    2016-01-01

    Spontaneous emotional expressions (rapid facial mimicry) perform both emotional and social functions. In the current study, we sought to test whether there were deficits in automatic mimic responses to emotional facial expressions in 15 patients with stable schizophrenia compared to 15 controls. In a perception-action interference paradigm (the Simon task; first experiment), and in the context of a dual-task paradigm (second experiment), the task-relevant stimulus feature was the gender of a face, which, however, displayed a smiling or frowning expression (task-irrelevant stimulus feature). We measured the electromyographical activity in the corrugator supercilii and zygomaticus major muscle regions in response to either compatible or incompatible stimuli (i.e., when the required response did or did not correspond to the depicted facial expression). The compatibility effect, based on interactions between the implicit processing of a task-irrelevant emotional facial expression and the conscious production of an emotional facial expression, did not differ between the groups. In stable patients, in spite of a generally reduced mimic reaction, we observed an intact capacity to respond spontaneously to facial emotional stimuli. PMID:27303335

  8. Relative preservation of the recognition of positive facial expression "happiness" in Alzheimer disease.

    Science.gov (United States)

    Maki, Yohko; Yoshida, Hiroshi; Yamaguchi, Tomoharu; Yamaguchi, Haruyasu

    2013-01-01

    A positivity recognition bias has been reported for facial expressions as well as memory and visual stimuli in aged individuals, whereas emotional facial recognition in Alzheimer disease (AD) patients is controversial, with possible involvement of confounding factors such as deficits in the spatial processing of non-emotional facial features and in the verbal processing needed to express emotions. Thus, we examined whether recognition of positive facial expressions was preserved in AD patients, by adapting a new method that eliminated the influence of these confounding factors. Sensitivity to six basic facial expressions (happiness, sadness, surprise, anger, disgust, and fear) was evaluated in 12 outpatients with mild AD, 17 aged normal controls (ANC), and 25 young normal controls (YNC). To eliminate the factors related to non-emotional facial features, averaged faces were prepared as stimuli. To eliminate the factors related to verbal processing, the participants were required to match the stimulus and answer images, avoiding the use of verbal labels. In recognition of happiness, there was no difference in sensitivity between YNC and ANC, or between ANC and AD patients. AD patients were less sensitive than ANC in the recognition of sadness, surprise, and anger. ANC were less sensitive than YNC in the recognition of surprise, anger, and disgust. Within the AD patient group, sensitivity to happiness was significantly higher than that to the other five expressions. In AD patients, recognition of happiness was relatively preserved: it was the most sensitively recognized expression and was preserved against the influences of age and disease.

  9. Lateral facial profile may reveal the risk for sleep disordered breathing in children--the PANIC-study.

    Science.gov (United States)

    Ikävalko, Tiina; Närhi, Matti; Lakka, Timo; Myllykangas, Riitta; Tuomilehto, Henri; Vierola, Anu; Pahkala, Riitta

    2015-01-01

    To evaluate the lateral view photography of the face as a tool for assessing morphological properties (i.e. facial convexity) as a risk factor for sleep disordered breathing (SDB) in children and to test how reliably oral health and non-oral healthcare professionals can visually discern the lateral profile of the face from the photographs. The present study sample consisted of 382 children 6-8 years of age who were participants in the Physical Activity and Nutrition in Children (PANIC) Study. Sleep was assessed by a sleep questionnaire administered by the parents. SDB was defined as apnoeas, frequent or loud snoring or nocturnal mouth breathing observed by the parents. The facial convexity was assessed with three different methods. First, it was clinically evaluated by the reference orthodontist (T.I.). Second, lateral view photographs were taken to visually sub-divide the facial profile into convex, normal or concave. The photos were examined by a reference orthodontist and seven different healthcare professionals who work with children and also by a dental student. The inter- and intra-examiner consistencies were calculated by Kappa statistics. Three soft tissue landmarks of the facial profile, soft tissue Glabella (G`), Subnasale (Sn) and soft tissue Pogonion (Pg`) were digitally identified to analyze convexity of the face and the intra-examiner reproducibility of the reference orthodontist was determined by calculating intra-class correlation coefficients (ICCs). The third way to express the convexity of the face was to calculate the angle of facial convexity (G`-Sn-Pg`) and to group it into quintiles. For analysis the lowest quintile (≤164.2°) was set to represent the most convex facial profile. The prevalence of the SDB in children with the most convex profiles expressed with the lowest quintile of the angle G`-Sn-Pg` (≤164.2°) was almost 2-fold (14.5%) compared to those with normal profile (8.1%) (p = 0.084). The inter-examiner Kappa values between the
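
    The angle of facial convexity used above is the angle at Subnasale formed by the three soft-tissue landmarks G`, Sn and Pg`. A minimal sketch of that computation from digitized 2-D landmark coordinates (the coordinates below are made up for illustration):

    ```python
    # Sketch: angle of facial convexity (G'-Sn-Pg') from digitized landmark
    # coordinates, i.e. the angle at Subnasale between the vectors to soft-tissue
    # Glabella and soft-tissue Pogonion. Coordinates are illustrative only.
    import numpy as np

    def facial_convexity_angle(g, sn, pg):
        """g, sn, pg: (x, y) coordinates of G', Sn and Pg'. Returns the angle in degrees."""
        v1 = np.asarray(g, float) - np.asarray(sn, float)
        v2 = np.asarray(pg, float) - np.asarray(sn, float)
        cos_angle = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

    # Example: this profile's angle (about 161 degrees) falls at or below the study's
    # lowest-quintile cutoff of 164.2 degrees, i.e. among the most convex profiles.
    angle = facial_convexity_angle(g=(0.0, 100.0), sn=(8.0, 40.0), pg=(0.0, 0.0))
    print(f"angle of facial convexity: {angle:.1f} degrees")
    ```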

  10. Intelligent Facial Recognition Systems: Technology advancements for security applications

    Energy Technology Data Exchange (ETDEWEB)

    Beer, C.L.

    1993-07-01

    Insider problems such as theft and sabotage can occur within the security and surveillance realm of operations when unauthorized people obtain access to sensitive areas. A possible solution to these problems is a means to identify individuals (not just credentials or badges) in a given sensitive area and provide full time personnel accountability. One approach desirable at Department of Energy facilities for access control and/or personnel identification is an Intelligent Facial Recognition System (IFRS) that is non-invasive to personnel. Automatic facial recognition does not require the active participation of the enrolled subjects, unlike most other biological measurement (biometric) systems (e.g., fingerprint, hand geometry, or eye retinal scan systems). It is this feature that makes an IFRS attractive for applications other than access control such as emergency evacuation verification, screening, and personnel tracking. This paper discusses current technology that shows promising results for DOE and other security applications. A survey of research and development in facial recognition identified several companies and universities that were interested and/or involved in the area. A few advanced prototype systems were also identified. Sandia National Laboratories is currently evaluating facial recognition systems that are in the advanced prototype stage. The initial application for the evaluation is access control in a controlled environment with a constant background and with cooperative subjects. Further evaluations will be conducted in a less controlled environment, which may include a cluttered background and subjects that are not looking towards the camera. The outcome of the evaluations will help identify areas of facial recognition systems that need further development and will help to determine the effectiveness of the current systems for security applications.

  11. Overview of pediatric peripheral facial nerve paralysis: analysis of 40 patients.

    Science.gov (United States)

    Özkale, Yasemin; Erol, İlknur; Saygı, Semra; Yılmaz, İsmail

    2015-02-01

    Peripheral facial nerve paralysis in children might be an alarming sign of serious disease such as malignancy, systemic disease, congenital anomalies, trauma, infection, middle ear surgery, and hypertension. The cases of 40 consecutive children and adolescents who were diagnosed with peripheral facial nerve paralysis at Baskent University Adana Hospital Pediatrics and Pediatric Neurology Unit between January 2010 and January 2013 were retrospectively evaluated. We determined that the most common cause was Bell palsy, followed by infection, tumor lesion, and suspected chemotherapy toxicity. We noted that younger patients had generally poorer outcome than older patients regardless of disease etiology. Peripheral facial nerve paralysis has been reported in many countries in America and Europe; however, knowledge about its clinical features, microbiology, neuroimaging, and treatment in Turkey is incomplete. The present study demonstrated that Bell palsy and infection were the most common etiologies of peripheral facial nerve paralysis. © The Author(s) 2014.

  12. "Man-some": A Review of Male Facial Aging and Beauty.

    Science.gov (United States)

    Keaney, Terrence Colin

    2017-06-01

    Gender plays a significant role in determining facial anatomy and behavior, both of which are key factors in the aging process. Understanding the pattern of male facial aging is critical when planning aesthetic treatments on men. Men develop more severe rhytides in a unique pattern, show increased periocular aging changes, and are more prone to hair loss. What also needs to be considered when planning a treatment is what makes men beautiful or "man-some". Male beauty strikes a balance between masculine and feminine facial features. A hypermasculine face can have negative associations. Men also exhibit different cosmetic concerns. Men tend to focus on three areas of the face - hairline, periocular area, and jawline. A comprehensive understanding of the male patient including anatomy, facial aging, cosmetic concerns, and beauty are needed for successful cosmetic outcomes. J Drugs Dermatol. 2017;16(6 Suppl):s91-93..

  13. Forensic Facial Reconstruction: Relationship Between the Alar Cartilage and Piriform Aperture.

    Science.gov (United States)

    Strapasson, Raíssa Ananda Paim; Herrera, Lara Maria; Melani, Rodolfo Francisco Haltenhoff

    2017-11-01

    During forensic facial reconstruction, facial features may be predicted based on the parameters of the skull. This study evaluated the relationships between alar cartilage and piriform aperture and nose morphology and facial typology. Ninety-six cone beam computed tomography images of Brazilian subjects (49 males and 47 females) were used in this study. OsiriX software was used to perform the following measurements: nasal width, distance between alar base insertion points, lower width of the piriform aperture, and upper width of the piriform aperture. Nasal width was associated with the lower width of the piriform aperture, sex, skeletal vertical pattern of the face, and age. The current study contributes to the improvement of forensic facial guides by identifying the relationships between the alar cartilages and characteristics of the biological profile of members of a population that has been little studied thus far. © 2017 American Academy of Forensic Sciences.

  14. [Surgical treatment in otogenic facial nerve palsy].

    Science.gov (United States)

    Feng, Guo-Dong; Gao, Zhi-Qiang; Zhai, Meng-Yao; Lü, Wei; Qi, Fang; Jiang, Hong; Zha, Yang; Shen, Peng

    2008-06-01

    To study the characteristics of facial nerve palsy due to four different aural diseases, including chronic otitis media, Hunt syndrome, tumor, and physical or chemical factors, and to discuss the principles of surgical management of otogenic facial nerve palsy. The clinical characteristics of 24 patients with otogenic facial nerve palsy caused by these aural diseases were retrospectively analyzed; all cases underwent surgical management between October 1991 and March 2007. Facial nerve function was evaluated with the House-Brackmann (HB) grading system. The 24 patients comprised 10 males and 14 females: 12 cases were due to cholesteatoma, 3 to chronic otitis media, 3 to Hunt syndrome, 2 to acute otitis media, 2 to physical or chemical factors, and 2 to tumor. The operations performed included facial nerve decompression, lesion resection with facial nerve decompression, and lesion resection without facial nerve decompression; in 1 patient the facial nerve was resected because of the tumor. According to the HB grading system, grade I recovery was attained in 4 cases, grade II in 10 cases, grade III in 6 cases, grade IV in 2 cases, grade V in 2 cases, and grade VI in 1 case. Complete removal of the lesion is the fundamental requirement in surgery for otogenic facial palsy; moreover, facial nerve decompression should be performed soon after lesion removal.

  15. Reconstructing dynamic mental models of facial expressions in prosopagnosia reveals distinct representations for identity and expression.

    Science.gov (United States)

    Richoz, Anne-Raphaëlle; Jack, Rachael E; Garrod, Oliver G B; Schyns, Philippe G; Caldara, Roberto

    2015-04-01

    The human face transmits a wealth of signals that readily provide crucial information for social interactions, such as facial identity and emotional expression. Yet, a fundamental question remains unresolved: does the face information for identity and emotional expression categorization tap into common or distinct representational systems? To address this question we tested PS, a pure case of acquired prosopagnosia with bilateral occipitotemporal lesions anatomically sparing the regions that are assumed to contribute to facial expression (de)coding (i.e., the amygdala, the insula and the posterior superior temporal sulcus--pSTS). We previously demonstrated that PS does not use information from the eye region to identify faces, but relies on the suboptimal mouth region. PS's abnormal information use for identity, coupled with her neural dissociation, provides a unique opportunity to probe the existence of a dichotomy in the face representational system. To reconstruct the mental models of the six basic facial expressions of emotion in PS and age-matched healthy observers, we used a novel reverse correlation technique tracking information use on dynamic faces. PS was comparable to controls, using all facial features to (de)code facial expressions with the exception of fear. PS's normal (de)coding of dynamic facial expressions suggests that the face system relies either on distinct representational systems for identity and expression, or dissociable cortical pathways to access them. Interestingly, PS showed a selective impairment for categorizing many static facial expressions, which could be accounted for by her lesion in the right inferior occipital gyrus. PS's advantage for dynamic facial expressions might instead relate to a functionally distinct and sufficient cortical pathway directly connecting the early visual cortex to the spared pSTS. Altogether, our data provide critical insights on the healthy and impaired face systems, question evidence of deficits

  16. Perineural extension of facial melanoma

    Energy Technology Data Exchange (ETDEWEB)

    Kalina, Peter [Mayo Clinic, Department of Radiology, Rochester, Minnesota (United States); Bevilacqua, Paula

    2005-05-01

    A 64-year-old man presented with a pigmented cutaneous lesion on the right side of his face along with right facial numbness. Histological examination revealed malignant melanoma. Magnetic resonance imaging (MRI) revealed perineural extension along the entire course of the maxillary division of the right trigeminal nerve. This is a rare but important manifestation of the spread of head and neck malignancy. (orig.)

  17. [Fat grafting in facial burns sequelae].

    Science.gov (United States)

    Viard, R; Bouguila, J; Voulliaume, D; Comparin, J-P; Dionyssopoulos, A; Foyatier, J-L

    2012-06-01

    Fat grafting is now part of the armamentarium of facial plastic surgery and is successfully used in burn scars. The aim of our study is to discuss the value of this technique in optimizing the cosmetic result of facial burn sequelae. Fifteen adult patients (10 females and five males) with scars resulting from severe burns 2 to 9 years previously were selected. The patients were treated by injection of adipose tissue harvested from abdominal subcutaneous fat and processed according to Coleman's technique. Two to three injections were administered at the dermohypodermal junction. Age, sex, burn aetiology, facial burn sequelae, recipient sites, quantity of fat injected, and aesthetic results are discussed. Patient age ranged from 21 to 55 years (average: 38). The mean follow-up of the study was 66 months (23-118). Patients had undergone a mean of 7.5 (range 5-11) facial restorative surgeries before fat grafting. Patients underwent two sessions of fat transfer, with an average of 33 cc per session. We did not report any complications. The clinical appearance, assessed by three surgeons, and subjective patient feedback after a 6-month follow-up period suggest considerable improvement in mimic features, skin texture, and thickness. The result is good in 86% of cases and acceptable in the other cases. Burn sequelae present local conditions that justify the use of a special cannula capable of crossing fibrosis and explain the value of repeating the sessions. Indications for lipostructure include four distinct nosological situations, sometimes combined: lipostructure can restore a missing relief, fill a localized depression, reshape deficient facial volume, or smooth scarred skin. Fat grafting seems to complement and improve the results of the standard surgical approach to the burned face. Copyright © 2011 Elsevier Masson SAS. All rights reserved.

  18. In-the-wild facial expression recognition in extreme poses

    Science.gov (United States)

    Yang, Fei; Zhang, Qian; Zheng, Chi; Qiu, Guoping

    2018-04-01

    Facial expression recognition is an active research problem in computer vision. In recent years, research has moved from the lab environment to in-the-wild conditions. This is challenging, especially under extreme head poses, and current expression detection systems typically try to factor out pose effects in order to remain generally applicable. In this work, we take the opposite approach: we explicitly consider head pose and detect expressions within specific head-pose classes. Our work includes two parts: detecting the head pose and grouping it into one of the pre-defined head-pose classes, and then performing facial expression recognition within each pose class. Our experiments show that the recognition results with pose-class grouping are much better than those of direct recognition without considering pose. We combine hand-crafted features (SIFT, LBP, and geometric features) with deep learning features to represent the expressions. The hand-crafted features are fed into the deep learning framework alongside the high-level deep features. For comparison, we implement SVM and random forest as prediction models. To train and test our method, we labeled the face dataset with the 6 basic expressions.
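
    A minimal sketch of the fusion idea described above: hand-crafted descriptors (here an LBP histogram and pairwise landmark distances stand in for the SIFT/LBP/geometric features) are concatenated with a deep feature vector and classified with an SVM, once per head-pose class. All inputs are placeholders; scikit-image's local_binary_pattern supplies the LBP part, and the per-pose split is assumed to have been done beforehand.

    ```python
    # Sketch: concatenating hand-crafted features (LBP histogram + landmark
    # distances) with deep features, then training one SVM per head-pose class.
    # All inputs are placeholder arrays standing in for real images/landmarks/CNN outputs.
    import numpy as np
    from itertools import combinations
    from skimage.feature import local_binary_pattern
    from sklearn.svm import SVC

    def handcrafted_features(gray_face, landmarks):
        lbp = local_binary_pattern(gray_face, P=8, R=1, method="uniform")
        hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
        dists = [np.linalg.norm(landmarks[i] - landmarks[j])
                 for i, j in combinations(range(len(landmarks)), 2)]
        return np.concatenate([hist, dists])

    rng = np.random.default_rng(0)
    n, n_landmarks, n_poses = 120, 10, 3
    faces = rng.random((n, 64, 64))                     # placeholder grayscale faces
    landmarks = rng.random((n, n_landmarks, 2))         # placeholder landmarks
    deep_feats = rng.random((n, 128))                   # placeholder CNN features
    poses = rng.integers(0, n_poses, size=n)            # pre-assigned head-pose class
    labels = rng.integers(0, 6, size=n)                 # 6 basic expressions

    X = np.hstack([np.stack([handcrafted_features(f, l) for f, l in zip(faces, landmarks)]),
                   deep_feats])
    pose_models = {p: SVC(kernel="rbf").fit(X[poses == p], labels[poses == p])
                   for p in range(n_poses)}             # one expression classifier per pose
    print({p: m.score(X[poses == p], labels[poses == p]) for p, m in pose_models.items()})
    ```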

  19. Morphing between expressions dissociates continuous from categorical representations of facial expression in the human brain

    Science.gov (United States)

    Harris, Richard J.; Young, Andrew W.; Andrews, Timothy J.

    2012-01-01

    Whether the brain represents facial expressions as perceptual continua or as emotion categories remains controversial. Here, we measured the neural response to morphed images to directly address how facial expressions of emotion are represented in the brain. We found that face-selective regions in the posterior superior temporal sulcus and the amygdala responded selectively to changes in facial expression, independent of changes in identity. We then asked whether the responses in these regions reflected categorical or continuous neural representations of facial expression. Participants viewed images from continua generated by morphing between faces posing different expressions such that the expression could be the same, could involve a physical change but convey the same emotion, or could differ by the same physical amount but be perceived as two different emotions. We found that the posterior superior temporal sulcus was equally sensitive to all changes in facial expression, consistent with a continuous representation. In contrast, the amygdala was only sensitive to changes in expression that altered the perceived emotion, demonstrating a more categorical representation. These results offer a resolution to the controversy about how facial expression is processed in the brain by showing that both continuous and categorical representations underlie our ability to extract this important social cue. PMID:23213218

  20. Reverse Correlating Love: Highly Passionate Women Idealize Their Partner’s Facial Appearance

    Science.gov (United States)

    Gunaydin, Gul; DeLong, Jordan E.

    2015-01-01

    A defining feature of passionate love is idealization—evaluating romantic partners in an overly favorable light. Although passionate love can be expected to color how favorably individuals represent their partner in their mind, little is known about how passionate love is linked with visual representations of the partner. Using reverse correlation techniques for the first time to study partner representations, the present study investigated whether women who are passionately in love represent their partner’s facial appearance more favorably than individuals who are less passionately in love. In a within-participants design, heterosexual women completed two forced-choice classification tasks, one for their romantic partner and one for a male acquaintance, and a measure of passionate love. In each classification task, participants saw two faces superimposed with noise and selected the face that most resembled their partner (or an acquaintance). Classification images for each of high passion and low passion groups were calculated by averaging across noise patterns selected as resembling the partner or the acquaintance and superimposing the averaged noise on an average male face. A separate group of women evaluated the classification images on attractiveness, trustworthiness, and competence. Results showed that women who feel high (vs. low) passionate love toward their partner tend to represent his face as more attractive and trustworthy, even when controlling for familiarity effects using the acquaintance representation. Using an innovative method to study partner representations, these findings extend our understanding of cognitive processes in romantic relationships. PMID:25806540
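
    A minimal sketch of the reverse-correlation step described above: the noise patterns from trials on which a noisy face was selected as resembling the partner are averaged into a classification image and superimposed on a base face. All arrays are placeholders standing in for the actual stimuli and responses.

    ```python
    # Sketch: building a reverse-correlation classification image by averaging the
    # noise patterns of the stimuli a participant selected, then superimposing that
    # average on a base face. All images here are placeholder arrays.
    import numpy as np

    rng = np.random.default_rng(0)
    n_trials, h, w = 300, 128, 128
    base_face = rng.random((h, w))                       # placeholder average male face
    noise = rng.normal(0, 1, size=(n_trials, 2, h, w))   # two noisy alternatives per trial

    # On each trial the participant picks the alternative that most resembles the partner.
    # Here the choices are simulated at random; in the study they come from responses.
    choices = rng.integers(0, 2, size=n_trials)
    selected_noise = noise[np.arange(n_trials), choices]

    classification_image = selected_noise.mean(axis=0)        # average of selected noise
    partner_image = base_face + 0.5 * classification_image    # superimpose on base face
    print("classification image range:", classification_image.min(), classification_image.max())
    ```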

  1. Reverse correlating love: highly passionate women idealize their partner's facial appearance.

    Science.gov (United States)

    Gunaydin, Gul; DeLong, Jordan E

    2015-01-01

    A defining feature of passionate love is idealization--evaluating romantic partners in an overly favorable light. Although passionate love can be expected to color how favorably individuals represent their partner in their mind, little is known about how passionate love is linked with visual representations of the partner. Using reverse correlation techniques for the first time to study partner representations, the present study investigated whether women who are passionately in love represent their partner's facial appearance more favorably than individuals who are less passionately in love. In a within-participants design, heterosexual women completed two forced-choice classification tasks, one for their romantic partner and one for a male acquaintance, and a measure of passionate love. In each classification task, participants saw two faces superimposed with noise and selected the face that most resembled their partner (or an acquaintance). Classification images for each of high passion and low passion groups were calculated by averaging across noise patterns selected as resembling the partner or the acquaintance and superimposing the averaged noise on an average male face. A separate group of women evaluated the classification images on attractiveness, trustworthiness, and competence. Results showed that women who feel high (vs. low) passionate love toward their partner tend to represent his face as more attractive and trustworthy, even when controlling for familiarity effects using the acquaintance representation. Using an innovative method to study partner representations, these findings extend our understanding of cognitive processes in romantic relationships.

  2. Facial Identification in Observers with Colour-Grapheme Synaesthesia

    DEFF Research Database (Denmark)

    Sørensen, Thomas Alrik

    2013-01-01

    Synaesthesia between colours and graphemes is often reported as one of the most common forms of cross-modal perception [Colizolo et al, 2012, PLoS ONE, 7(6), e39799]. In this particular synaesthetic sub-type, the perception of a letterform is followed by an additional experience of a colour quality. Both colour [McKeefry and Zeki, 1997, Brain, 120(12), 2229–2242] and visual word forms [McCandliss et al, 2003, Trends in Cognitive Sciences, 7(7), 293–299] have previously been linked to the fusiform gyrus. As these are neighbouring functions, speculations of cross-wiring between the areas have been raised [… of Neuroscience, 17(11), 4302–4311]; increased colour-word form representations in observers with colour-grapheme synaesthesia may affect facial identification in people with synaesthesia. This study investigates the ability to process facial features for identification in observers with colour-grapheme synaesthesia.

  3. Biometric identification based on novel frequency domain facial asymmetry measures

    Science.gov (United States)

    Mitra, Sinjini; Savvides, Marios; Vijaya Kumar, B. V. K.

    2005-03-01

    In the modern world, the ever-growing need to ensure a system's security has spurred the growth of the newly emerging technology of biometric identification. The present paper introduces a novel set of facial biometrics based on quantified facial asymmetry measures in the frequency domain. In particular, we show that these biometrics work well for face images showing expression variations and have the potential to do so in presence of illumination variations as well. A comparison of the recognition rates with those obtained from spatial domain asymmetry measures based on raw intensity values suggests that the frequency domain representation is more robust to intra-personal distortions and is a novel approach for performing biometric identification. In addition, some feature analysis based on statistical methods comparing the asymmetry measures across different individuals and across different expressions is presented.
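
    A minimal sketch of one way frequency-domain asymmetry measures of the kind described above could be computed: the face is split at the vertical midline, the right half is mirrored, both halves are transformed with a 2-D FFT, and the difference between their magnitude spectra is summarized as an asymmetry feature. The exact measures used in the paper are not reproduced here; this is an illustrative stand-in on placeholder data.

    ```python
    # Sketch: a simple frequency-domain facial asymmetry measure. The left half of
    # the face is compared with the mirrored right half via their FFT magnitude
    # spectra. This illustrates the general idea, not the paper's exact measures.
    import numpy as np

    def frequency_asymmetry_features(face, n_coeffs=64):
        """face: (H, W) grayscale image with the face roughly centred on the midline."""
        h, w = face.shape
        left = face[:, : w // 2]
        right_mirrored = face[:, w - w // 2 :][:, ::-1]       # mirror the right half
        spec_left = np.abs(np.fft.fft2(left))
        spec_right = np.abs(np.fft.fft2(right_mirrored))
        diff = np.abs(spec_left - spec_right).ravel()
        return diff[:n_coeffs]                                # fixed-length feature vector

    rng = np.random.default_rng(0)
    face = rng.random((64, 64))                               # placeholder face image
    features = frequency_asymmetry_features(face)
    print("asymmetry feature vector length:", features.shape[0], "mean:", features.mean())
    ```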

  4. Automatic change detection to facial expressions in adolescents

    DEFF Research Database (Denmark)

    Liu, Tongran; Xiao, Tong; Jiannong, Shi

    2016-01-01

    Adolescence is a critical period for the neurodevelopment of social-emotional processing, wherein the automatic detection of changes in facial expressions is crucial for the development of interpersonal communication. Two groups of participants (an adolescent group and an adult group) were … in facial expressions between the two age groups. The current findings demonstrated that the adolescent group featured more negative vMMN amplitudes than the adult group in the fronto-central region during the 120–200 ms interval. During the time window of 370–450 ms, only the adult group showed better automatic processing of fearful faces than happy faces. The present study indicated that adolescents possess stronger automatic detection of changes in emotional expression relative to adults, and it sheds light on the neurodevelopment of automatic processes concerning social-emotional information.

  5. Síndrome de dolor facial

    Directory of Open Access Journals (Sweden)

    DR. F. Eugenio Tenhamm

    2014-07-01

    Full Text Available Facial pain constitutes a pain syndrome of the craniofacial structures under which a large number of diseases are grouped. The best way to approach the differential diagnosis of the entities that cause facial pain is to use an algorithm that identifies four main pain syndromes: facial neuralgias, facial pain with neurological symptoms and signs, trigeminal autonomic cephalalgias, and facial pain without neurological symptoms or signs. A detailed clinical evaluation of patients allows an etiological approximation, which guides the diagnostic work-up and makes it possible to offer specific therapy in the majority of cases.

  6. Reconstruction of facial nerve injuries in children.

    Science.gov (United States)

    Fattah, Adel; Borschel, Gregory H; Zuker, Ron M

    2011-05-01

    Facial nerve trauma is uncommon in children, and many spontaneously recover some function; nonetheless, loss of facial nerve activity leads to functional impairment of ocular and oral sphincters and nasal orifice. In many cases, the impediment posed by facial asymmetry and reduced mimetic function more significantly affects the child's psychosocial interactions. As such, reconstruction of the facial nerve affords great benefits in quality of life. The therapeutic strategy is dependent on numerous factors, including the cause of facial nerve injury, the deficit, the prognosis for recovery, and the time elapsed since the injury. The options for treatment include a diverse range of surgical techniques including static lifts and slings, nerve repairs, nerve grafts and nerve transfers, regional, and microvascular free muscle transfer. We review our strategies for addressing facial nerve injuries in children.

  7. Agency and facial emotion judgment in context.

    Science.gov (United States)

    Ito, Kenichi; Masuda, Takahiko; Li, Liman Man Wai

    2013-06-01

    Past research showed that East Asians' belief in holism was expressed as their tendencies to include background facial emotions into the evaluation of target faces more than North Americans. However, this pattern can be interpreted as North Americans' tendency to downplay background facial emotions due to their conceptualization of facial emotion as volitional expression of internal states. Examining this alternative explanation, we investigated whether different types of contextual information produce varying degrees of effect on one's face evaluation across cultures. In three studies, European Canadians and East Asians rated the intensity of target facial emotions surrounded with either affectively salient landscape sceneries or background facial emotions. The results showed that, although affectively salient landscapes influenced the judgment of both cultural groups, only European Canadians downplayed the background facial emotions. The role of agency as differently conceptualized across cultures and multilayered systems of cultural meanings are discussed.

  8. Magnetic resonance imaging of facial muscles

    Energy Technology Data Exchange (ETDEWEB)

    Farrugia, M.E. [Department of Clinical Neurology, University of Oxford, Radcliffe Infirmary, Oxford (United Kingdom)], E-mail: m.e.farrugia@doctors.org.uk; Bydder, G.M. [Department of Radiology, University of California, San Diego, CA 92103-8226 (United States); Francis, J.M.; Robson, M.D. [OCMR, Department of Cardiovascular Medicine, University of Oxford, John Radcliffe Hospital, Oxford (United Kingdom)

    2007-11-15

    Facial and tongue muscles are commonly involved in patients with neuromuscular disorders. However, these muscles are not as easily accessible for biopsy and pathological examination as limb muscles. We have previously investigated myasthenia gravis patients with MuSK antibodies for facial and tongue muscle atrophy using different magnetic resonance imaging sequences, including ultrashort echo time techniques and image analysis tools that allowed us to obtain quantitative assessments of facial muscles. This imaging study had shown that facial muscle measurement is possible and that useful information can be obtained using a quantitative approach. In this paper we aim to review in detail the methods that we applied to our study, to enable clinicians to study these muscles within the domain of neuromuscular disease, oncological or head and neck specialties. Quantitative assessment of the facial musculature may be of value in improving the understanding of pathological processes occurring within facial muscles in certain neuromuscular disorders.

  9. Magnetic resonance imaging of facial muscles

    International Nuclear Information System (INIS)

    Farrugia, M.E.; Bydder, G.M.; Francis, J.M.; Robson, M.D.

    2007-01-01

    Facial and tongue muscles are commonly involved in patients with neuromuscular disorders. However, these muscles are not as easily accessible for biopsy and pathological examination as limb muscles. We have previously investigated myasthenia gravis patients with MuSK antibodies for facial and tongue muscle atrophy using different magnetic resonance imaging sequences, including ultrashort echo time techniques and image analysis tools that allowed us to obtain quantitative assessments of facial muscles. This imaging study had shown that facial muscle measurement is possible and that useful information can be obtained using a quantitative approach. In this paper we aim to review in detail the methods that we applied to our study, to enable clinicians to study these muscles within the domain of neuromuscular disease, oncological or head and neck specialties. Quantitative assessment of the facial musculature may be of value in improving the understanding of pathological processes occurring within facial muscles in certain neuromuscular disorders

  10. Facial neuroma masquerading as acoustic neuroma.

    Science.gov (United States)

    Sayegh, Eli T; Kaur, Gurvinder; Ivan, Michael E; Bloch, Orin; Cheung, Steven W; Parsa, Andrew T

    2014-10-01

    Facial nerve neuromas are rare benign tumors that may be initially misdiagnosed as acoustic neuromas when situated near the auditory apparatus. We describe a patient with a large cystic tumor with associated trigeminal, facial, audiovestibular, and brainstem dysfunction, which was suspicious for acoustic neuroma on preoperative neuroimaging. Intraoperative investigation revealed a facial nerve neuroma located in the cerebellopontine angle and internal acoustic canal. Gross total resection of the tumor via retrosigmoid craniotomy was curative. Transection of the facial nerve necessitated facial reanimation 4 months later via hypoglossal-facial cross-anastomosis. Clinicians should recognize the natural history, diagnostic approach, and management of this unusual and mimetic lesion. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. Role of temporal processing stages by inferior temporal neurons in facial recognition

    Directory of Open Access Journals (Sweden)

    Yasuko eSugase-Miyamoto

    2011-06-01

    Full Text Available In this review, we focus on the role of temporal stages of encoded facial information in the visual system, which might enable the efficient determination of species, identity, and expression. Facial recognition is an important function of our brain and is known to be processed in the ventral visual pathway, where visual signals are processed through areas V1, V2, V4, and the inferior temporal (IT) cortex. In the IT cortex, neurons show selective responses to complex visual images such as faces, and at each stage along the pathway the stimulus selectivity of the neural responses becomes sharper, particularly in the later portion of the responses. In the IT cortex of the monkey, facial information is represented by different temporal stages of neural responses, as shown in our previous study: the initial transient response of face-responsive neurons represents information about global categories, i.e., human vs. monkey vs. simple shapes, whilst the later portion of these responses represents information about detailed facial categories, i.e., expression and/or identity. This suggests that the temporal stages of the neuronal firing pattern play an important role in the coding of visual stimuli, including faces. This type of coding may be a plausible mechanism underlying the temporal dynamics of recognition, including the process of detection/categorization followed by the identification of objects. Recent single-unit studies in monkeys have also provided evidence consistent with the important role of the temporal stages of encoded facial information. For example, view-invariant facial identity information is represented in the response at a later period within a region of face-selective neurons. Consistent with these findings, temporally modulated neural activity has also been observed in human studies. These results suggest a close correlation between the temporal processing stages of facial information by IT neurons and the temporal dynamics of

  12. Analysis of Facial Expression by Taste Stimulation

    Science.gov (United States)

    Tobitani, Kensuke; Kato, Kunihito; Yamamoto, Kazuhiko

    In this study, we focused on the basic taste stimulation for the analysis of real facial expressions. We considered that the expressions caused by taste stimulation were unaffected by individuality or emotion, that is, such expressions were involuntary. We analyzed the movement of facial muscles by taste stimulation and compared real expressions with artificial expressions. From the result, we identified an obvious difference between real and artificial expressions. Thus, our method would be a new approach for facial expression recognition.

  13. Dogs Evaluate Threatening Facial Expressions by Their Biological Validity--Evidence from Gazing Patterns.

    Directory of Open Access Journals (Sweden)

    Sanni Somppi

    Full Text Available Appropriate response to companions' emotional signals is important for all social creatures. The emotional expressions of humans and non-human animals have analogies in their form and function, suggesting shared evolutionary roots, but very little is known about how animals other than primates view and process facial expressions. In primates, threat-related facial expressions evoke exceptional viewing patterns compared with neutral or positive stimuli. Here, we explore if domestic dogs (Canis familiaris) have such an attentional bias toward threatening social stimuli and whether observed emotional expressions affect dogs' gaze fixation distribution among the facial features (eyes, midface and mouth). We recorded the voluntary eye gaze of 31 domestic dogs during viewing of facial photographs of humans and dogs with three emotional expressions (threatening, pleasant and neutral). We found that dogs' gaze fixations spread systematically among facial features. The distribution of fixations was altered by the seen expression, but eyes were the most probable targets of the first fixations and gathered longer looking durations than mouth regardless of the viewed expression. The examination of the inner facial features as a whole revealed more pronounced scanning differences among expressions. This suggests that dogs do not base their perception of facial expressions on the viewing of single structures, but the interpretation of the composition formed by eyes, midface and mouth. Dogs evaluated social threat rapidly and this evaluation led to attentional bias, which was dependent on the depicted species: threatening conspecifics' faces evoked heightened attention but threatening human faces instead an avoidance response. We propose that threatening signals carrying differential biological validity are processed via distinctive neurocognitive pathways. Both of these mechanisms may have an adaptive significance for domestic dogs. The findings provide a novel

  14. Support vector machine-based facial-expression recognition method combining shape and appearance

    Science.gov (United States)

    Han, Eun Jung; Kang, Byung Jun; Park, Kang Ryoung; Lee, Sangyoun

    2010-11-01

    Facial expression recognition can be widely used for various applications, such as emotion-based human-machine interaction, intelligent robot interfaces, face recognition robust to expression variation, etc. Previous studies have been classified as either shape- or appearance-based recognition. The shape-based method has the disadvantage that individual variance of facial feature points exists irrespective of similar expressions, which can reduce recognition accuracy. The appearance-based method has the limitation that the textural information of the face is very sensitive to variations in illumination. To overcome these problems, a new facial-expression recognition method is proposed, which combines both shape and appearance information based on the support vector machine (SVM). This research is novel in the following three ways compared to previous work. First, the facial feature points are automatically detected by using an active appearance model. From these, shape-based recognition is performed by using the ratios between the facial feature points based on the facial action coding system. Second, an SVM, which is trained to recognize same and different expression classes, is proposed to combine the two matching scores obtained from the shape- and appearance-based recognition. Finally, a single SVM is trained to discriminate four different expressions: neutral, smile, anger, and scream. By assigning the input facial image to the expression whose SVM output is minimal, the accuracy of the expression recognition is much enhanced. The experimental results showed that the recognition accuracy of the proposed method was better than that of previous studies and other fusion methods.
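
    The score-fusion step described in this record can be sketched roughly as follows; the data, feature dimensions, labels, and scikit-learn calls below are illustrative assumptions, not the authors' implementation or parameters.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(42)

# Hypothetical training data: each row holds a (shape_score, appearance_score)
# pair for a probe/gallery comparison, labelled 1 = same expression, 0 = different.
score_pairs = rng.uniform(0.0, 1.0, size=(200, 2))
same_labels = (score_pairs.mean(axis=1) > 0.5).astype(int)   # toy labels only

# SVM that fuses the two matching scores into one same/different decision.
fusion_svm = SVC(kernel="rbf", probability=True).fit(score_pairs, same_labels)

# Hypothetical per-image feature vectors (e.g., AAM-point ratios) and labels
# for the four classes: 0 neutral, 1 smile, 2 anger, 3 scream.
features = rng.normal(size=(400, 10))
expr_labels = rng.integers(0, 4, size=400)

# Single SVM discriminating the four expressions.
expression_svm = SVC(kernel="rbf", decision_function_shape="ovr").fit(features, expr_labels)

# At test time: fuse the two matching scores, then classify the expression.
probe_scores = np.array([[0.62, 0.71]])
p_same = fusion_svm.predict_proba(probe_scores)[0, 1]
predicted_expression = expression_svm.predict(rng.normal(size=(1, 10)))[0]
```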

  15. The neurosurgical treatment of neuropathic facial pain.

    Science.gov (United States)

    Brown, Jeffrey A

    2014-04-01

    This article reviews the definition, etiology and evaluation, and medical and neurosurgical treatment of neuropathic facial pain. A neuropathic origin for facial pain should be considered when evaluating a patient for rhinologic surgery because of complaints of facial pain. Neuropathic facial pain is caused by vascular compression of the trigeminal nerve in the prepontine cistern and is characterized by an intermittent prickling or stabbing component or a constant burning, searing pain. Medical treatment consists of anticonvulsant medication. Neurosurgical treatment may require microvascular decompression of the trigeminal nerve. Copyright © 2014 Elsevier Inc. All rights reserved.

  16. Predicting facial characteristics from complex polygenic variations

    DEFF Research Database (Denmark)

    Fagertun, Jens; Wolffhechel, Karin Marie Brandt; Pers, Tune

    2015-01-01

    Research into the importance of the human genome in the context of facial appearance is receiving increasing attention and has led to the detection of several Single Nucleotide Polymorphisms (SNPs) of importance. In this work we attempt a holistic approach predicting facial characteristics from...... genetic principal components across a population of 1,266 individuals. For this we perform a genome-wide association analysis to select a large number of SNPs linked to specific facial traits, recode these to genetic principal components and then use these principal components as predictors for facial...
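
    A hedged sketch of the modelling idea described above (GWAS-selected SNPs recoded into genetic principal components that then predict a facial trait); the genotype matrix, trait values, SNP count, and scikit-learn pipeline are assumptions for illustration, not the study's data or code.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical genotype matrix: 1,266 individuals x GWAS-selected SNPs coded 0/1/2.
genotypes = rng.integers(0, 3, size=(1266, 500)).astype(float)
facial_trait = rng.normal(size=1266)          # e.g., one facial shape measurement

# Recode the selected SNPs into genetic principal components.
genetic_pcs = PCA(n_components=20).fit_transform(genotypes)

# Use the principal components as predictors of the facial trait.
model = LinearRegression()
r2_scores = cross_val_score(model, genetic_pcs, facial_trait, cv=5, scoring="r2")
```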

  17. Development of a facial recognition system; Desarrollo de un sistema de reconocimiento facial

    OpenAIRE

    Vivas Imparato, Abdón Alejandro

    2014-01-01

    The main objective around which this project develops is the creation of a facial recognition system. Its specific objectives include: carrying out a first survey of the facial recognition techniques currently in existence, choosing an application where facial recognition could be useful, designing and developing a MATLAB program that performs the facial recognition function, and evaluating the performance of the developed syst...

  18. Social Use of Facial Expressions in Hylobatids

    Science.gov (United States)

    Scheider, Linda; Waller, Bridget M.; Oña, Leonardo; Burrows, Anne M.; Liebal, Katja

    2016-01-01

    Non-human primates use various communicative means in interactions with others. While primate gestures are commonly considered to be intentionally and flexibly used signals, facial expressions are often referred to as inflexible, automatic expressions of affective internal states. To explore whether and how non-human primates use facial expressions in specific communicative interactions, we studied five species of small apes (gibbons) by employing a newly established Facial Action Coding System for hylobatid species (GibbonFACS). We found that, despite individuals often being in close proximity to each other, in social (as opposed to non-social) contexts the duration of facial expressions was significantly longer when gibbons were facing another individual compared to non-facing situations. Social contexts included grooming, agonistic interactions and play, whereas non-social contexts included resting and self-grooming. Additionally, gibbons used facial expressions while facing another individual more often in social contexts than non-social contexts where facial expressions were produced regardless of the attentional state of the partner. Also, facial expressions were more likely ‘responded to’ by the partner’s facial expressions when facing another individual than non-facing. Taken together, our results indicate that gibbons use their facial expressions differentially depending on the social context and are able to use them in a directed way in communicative interactions with other conspecifics. PMID:26978660

  19. Radiologic evaluation of facial injury; Avaliacao radiologica dos traumatismos faciais

    Energy Technology Data Exchange (ETDEWEB)

    Souza, Ricardo Pires de; Volpato, Richard [Complexo Hospitalar Heliopolis, Sao Paulo, SP (Brazil). Servico de Diagnostico por Imagem]. E-mail: richard_volpato@uol.com.br; Nascimento, Lia Paula [Complexo Hospitalar Heliopolis, Sao Paulo, SP (Brazil)

    2003-03-01

    A detailed radiological investigation of maxillofacial injuries is essential to achieve good treatment results. The images should identify every lesion and guide the treatment, thus improving esthetic and functional results. With the aim of simplifying the diagnostic task, the face may be seen as a five-region structure that may suffer a regional fracture or combined fractures involving the adjacent regions. These regions represent areas of focus for presurgical planning and are as follows: nasal, orbital, zygomatic, maxillary, and mandibular. In order to understand the injury mechanisms and their consequences, it is useful to know the supporting buttresses, which are divided into five sagittal planes, three horizontal planes and two coronal planes. We reviewed the cases of patients with facial trauma treated at Complexo Hospitalar Heliopolis, Sao Paulo, Brazil. A review of the relevant issues concerning radiological investigation of these injuries is presented. This study allowed standardization and ordering of the radiological investigation in patients with facial trauma. (author)

  20. Delayed appearance of tracer lead in facial hair

    International Nuclear Information System (INIS)

    Rabinowitz, M.; Wetherill, G.; Kopple, J.

    1976-01-01

    Three adult men were fed 204Pb--a rare, stable isotope of lead--daily for about 100 days. Simultaneous blood and facial hair measurements of this tracer and of total lead concentrations were made by mass spectrometric isotope dilution analysis. Although the blood showed an immediate response to the intake of the tracer, the facial hair showed a more gradual response and a delay of approximately 35 days. Since the pattern of appearance of lead in hair does not appear to represent a simple time delay of the blood lead concentration, the existence of a physiological pool of lead, fed by the blood and giving rise to the lead content of hair, is suggested. Hair lead values should therefore be interpreted as the integral of the blood lead values over the mean life of this intermediate pool--about 100 days.
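
    The suggested interpretation (hair lead as an integral of blood lead over the mean life of an intermediate pool of roughly 100 days) can be illustrated with a simple first-order sketch; the input profile, time step, and kinetic form below are assumptions for illustration, not the authors' fitted model.

```python
import numpy as np

# Daily blood-lead tracer levels over ~300 days: a hypothetical step input
# during the ~100-day feeding period, in arbitrary units.
days = np.arange(300)
blood = np.where(days < 100, 1.0, 0.2)

# First-order intermediate pool fed by blood, mean life ~100 days: the hair
# signal behaves like an exponentially weighted running integral of past blood values.
mean_life = 100.0
kernel = np.exp(-np.arange(300) / mean_life)
kernel /= kernel.sum()
pool = np.convolve(blood, kernel)[:300]          # causal filtering of the blood curve

# Lag between the blood and pool responses (reported as roughly 35 days in the study).
lag_days = np.argmax(pool >= 0.5 * pool.max()) - np.argmax(blood >= 0.5 * blood.max())
```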

  1. Facial EMG responses to dynamic emotional facial expressions in boys with disruptive behavior disorders

    NARCIS (Netherlands)

    Wied, de M.; Boxtel, van Anton; Zaalberg, R.; Goudena, P.P.; Matthys, W.

    2006-01-01

    Based on the assumption that facial mimicry is a key factor in emotional empathy, and clinical observations that children with disruptive behavior disorders (DBD) are weak empathizers, the present study explored whether DBD boys are less facially responsive to facial expressions of emotions than

  2. Localized scleroderma: imaging features

    International Nuclear Information System (INIS)

    Liu, P.; Uziel, Y.; Chuang, S.; Silverman, E.; Krafchik, B.; Laxer, R.

    1994-01-01

    Localized scleroderma is distinct from the diffuse form of scleroderma and does not show Raynaud's phenomenon or visceral involvement. The imaging features in 23 patients ranging from 2 to 17 years of age (mean 11.1 years) were reviewed. Leg length discrepancy and muscle atrophy were the most common findings (five patients), with two patients also showing modelling deformity of the fibula. One patient with lower extremity involvement showed abnormal bone marrow signals on MR. Disabling joint contracture requiring orthopedic intervention was noted in one patient. In two patients with "en coup de sabre" facial deformity, CT and MR scans revealed intracranial calcifications and white matter abnormality in the ipsilateral frontal lobes, with one also showing migrational abnormality. In a third patient, CT revealed white matter abnormality in the ipsilateral parietal lobe. In one patient with progressive facial hemiatrophy, CT and MR scans showed the underlying hypoplastic left maxillary antrum and cheek. Imaging studies of areas of clinical concern revealed positive findings in half our patients. (orig.)

  3. Localized scleroderma: imaging features

    Energy Technology Data Exchange (ETDEWEB)

    Liu, P. (Dept. of Diagnostic Imaging, Hospital for Sick Children, Toronto, ON (Canada)); Uziel, Y. (Div. of Rheumatology, Hospital for Sick Children, Toronto, ON (Canada)); Chuang, S. (Dept. of Diagnostic Imaging, Hospital for Sick Children, Toronto, ON (Canada)); Silverman, E. (Div. of Rheumatology, Hospital for Sick Children, Toronto, ON (Canada)); Krafchik, B. (Div. of Dermatology, Dept. of Pediatrics, Hospital for Sick Children, Toronto, ON (Canada)); Laxer, R. (Div. of Rheumatology, Hospital for Sick Children, Toronto, ON (Canada))

    1994-06-01

    Localized scleroderma is distinct from the diffuse form of scleroderma and does not show Raynaud's phenomenon or visceral involvement. The imaging features in 23 patients ranging from 2 to 17 years of age (mean 11.1 years) were reviewed. Leg length discrepancy and muscle atrophy were the most common findings (five patients), with two patients also showing modelling deformity of the fibula. One patient with lower extremity involvement showed abnormal bone marrow signals on MR. Disabling joint contracture requiring orthopedic intervention was noted in one patient. In two patients with "en coup de sabre" facial deformity, CT and MR scans revealed intracranial calcifications and white matter abnormality in the ipsilateral frontal lobes, with one also showing migrational abnormality. In a third patient, CT revealed white matter abnormality in the ipsilateral parietal lobe. In one patient with progressive facial hemiatrophy, CT and MR scans showed the underlying hypoplastic left maxillary antrum and cheek. Imaging studies of areas of clinical concern revealed positive findings in half our patients. (orig.)

  4. Quantitative Anthropometric Measures of Facial Appearance of Healthy Hispanic/Latino White Children: Establishing Reference Data for Care of Cleft Lip With or Without Cleft Palate

    Science.gov (United States)

    Lee, Juhun; Ku, Brian; Combs, Patrick D.; Da Silveira, Adriana. C.; Markey, Mia K.

    2017-06-01

    Cleft lip with or without cleft palate (CL ± P) is one of the most common congenital facial deformities worldwide. To minimize the negative social consequences of CL ± P, reconstructive surgery is conducted to modify the face to a more normal appearance. Each racial/ethnic group requires its own facial norm data, yet there are no existing facial norm data for Hispanic/Latino White children. The objective of this paper is to identify measures of facial appearance relevant for planning reconstructive surgery for CL ± P in Hispanic/Latino White children. Quantitative analysis was conducted on 3D facial images of 82 (41 girls, 41 boys) healthy Hispanic/Latino White children whose ages ranged from 7 to 12 years. Twenty-eight facial anthropometric features related to CL ± P (mainly in the nasal and mouth area) were measured from the 3D facial images. In addition, facial aesthetic ratings were obtained from 16 non-clinical observers for the same 3D facial images using a 7-point Likert scale. Pearson correlation analysis was conducted to find features that were correlated with the panel ratings of the observers. Boys with a longer face and nose or thicker upper and lower lips were considered more attractive, while girls with a less curved middle-face contour were considered more attractive. The facial landmarks associated with these features are primary focus areas for reconstructive surgery for CL ± P. This study identified anthropometric measures of facial features of Hispanic/Latino White children that are pertinent to CL ± P and that correlate with the panel attractiveness ratings.
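
    A minimal sketch of the correlation analysis described above, assuming a matrix of the 28 anthropometric measures and the mean panel ratings; the variable names, simulated data, and SciPy calls are illustrative, not the authors' code.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
features = rng.normal(size=(82, 28))        # 82 children x 28 anthropometric measures (placeholder)
panel_rating = rng.uniform(1, 7, size=82)   # mean 7-point attractiveness rating per child (placeholder)

# Pearson correlation of each anthropometric feature with the panel rating.
results = []
for j in range(features.shape[1]):
    r, p = stats.pearsonr(features[:, j], panel_rating)
    results.append((j, r, p))

# Features whose correlation with the panel rating is nominally significant.
significant = [(j, r) for j, r, p in results if p < 0.05]
```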

  5. The impact of face skin tone on perceived facial attractiveness: A study realized with an innovative methodology.

    Science.gov (United States)

    Vera Cruz, Germano

    2017-12-19

    This study aimed to assess the impact of target faces' skin tone and perceivers' skin tone on attractiveness judgments, using a symmetrical, representative range of target faces as stimuli. Presented with this set of facial stimuli, 240 Mozambican adults rated the attractiveness of each face along a continuous scale. ANOVA and Chi-square tests were used to analyze the data. The results revealed that the skin tone of the target faces had an impact on the participants' attractiveness judgments. Overall, participants preferred light-skinned faces over dark-skinned ones. This finding is not only consistent with previous results on skin tone preferences, but is even more powerful because it demonstrates that the light skin tone preference occurs regardless of the symmetry and baseline attractiveness of the stimuli.

  6. Facial trauma in the Trojan War.

    Science.gov (United States)

    Ralli, Ioanna; Stathopoulos, Panagiotis; Mourouzis, Konstantinos; Piagkou, Mara; Rallis, George

    2015-06-01

    The Iliad and the Odyssey of Homer are the cornerstones of classical Greek literature and, subsequently, the foundations of Western literature. The Iliad, in particular, is the most famous and influential epic poem ever conceived and is considered to be the most prominent and representative work of ancient Greek epic poetry. We present the injuries involving the face, mentioned so vividly in the Iliad, and discuss the aetiology of their extraordinary mortality rate. We recorded the references to the injuries, the attacker and defender involved, the weapons that were used, and the site and the result of each injury. The face was involved in 21 trauma cases. The frontal area was traumatized in 7 cases; the oral cavity in 6; the auricular area in 4; the orbits and the retromandibular area in 3; the mandible and the nose in 2; and the maxilla, the submental area and the buccal area in 1 each. The mortality rate of the facial injuries reaches 100%. Homer's literary dexterity, charisma and unique aptitude in narrating the events of the Trojan War have established him as the greatest epic poet. We consider the study of these vibrantly described events to be recreational and entertaining for everyone, but especially for a surgeon.

  7. Enhanced MRI in patients with facial palsy

    International Nuclear Information System (INIS)

    Yanagida, Masahiro; Kato, Tsutomu; Ushiro, Koichi; Kitajiri, Masanori; Yamashita, Toshio; Kumazawa, Tadami; Tanaka, Yoshimasa

    1991-01-01

    We performed Gd-DTPA-enhanced magnetic resonance imaging (MRI) examinations at several stages in 40 patients with peripheral facial nerve palsy (Bell's palsy and Ramsay-Hunt syndrome). In 38 of the 40 patients, one or more enhanced regions could be seen in certain portions of the facial nerve in the temporal bone on the affected side, whereas no enhanced regions were seen on the intact side. Correlations between the timing of the MRI examination and the location of the enhanced regions were analysed. In all 6 patients examined by MRI within 5 days after the onset of facial nerve palsy, enhanced regions were present in the meatal portion. In 3 of the 8 patients (38%) examined by MRI 6 to 10 days after the onset of facial palsy, enhanced areas were seen in both the meatal and labyrinthine portions. In 8 of the 9 patients (89%) examined 11 to 20 days after the onset of palsy, the vertical portion was enhanced. In the 12 patients examined by MRI 21 to 40 days after the onset of facial nerve palsy, the meatal portion was not enhanced, while the labyrinthine portion, the horizontal portion and the vertical portion were enhanced in 5 (42%), 8 (67%) and 11 (92%), respectively. Enhancement of the vertical portion was observed in all 5 patients examined more than 41 days after the onset of facial palsy. These results suggest that the central portion of the facial nerve in the temporal bone tends to be enhanced in the early stage of facial nerve palsy, while the peripheral portion is enhanced in the late stage. These changes in the Gd-DTPA-enhanced regions of the facial nerve may suggest dromic degeneration of the facial nerve in peripheral facial nerve palsy. (author)

  8. Slowing down Presentation of Facial Movements and Vocal Sounds Enhances Facial Expression Recognition and Induces Facial-Vocal Imitation in Children with Autism

    Science.gov (United States)

    Tardif, Carole; Laine, France; Rodriguez, Melissa; Gepner, Bruno

    2007-01-01

    This study examined the effects of slowing down presentation of facial expressions and their corresponding vocal sounds on facial expression recognition and facial and/or vocal imitation in children with autism. Twelve autistic children and twenty-four normal control children were presented with emotional and non-emotional facial expressions on…

  9. Facial identification in very low-resolution images simulating prosthetic vision.

    Science.gov (United States)

    Chang, M H; Kim, H S; Shin, J H; Park, K S

    2012-08-01

    Familiar facial identification is important to blind or visually impaired patients and can be achieved using a retinal prosthesis. Nevertheless, there are limitations in delivering facial images with a resolution sufficient to distinguish facial features, such as the eyes and nose, through the multichannel electrode arrays used in current visual prostheses. This study verifies the feasibility of familiar facial identification under low-resolution prosthetic vision and proposes an edge-enhancement method to deliver more visual information of higher quality. We first generated a contrast-enhanced image and an edge image by applying the Sobel edge detector, and blocked each of them by averaging. Then, we subtracted the blocked edge image from the blocked contrast-enhanced image and produced a pixelized image imitating an array of phosphenes. Before subtraction, every gray value of the edge images was weighted as 50% (mode 2), 75% (mode 3) and 100% (mode 4). In mode 1, the facial image was blocked and pixelized with no further processing. The most successful identification was achieved with mode 3 at every resolution in terms of the identification index, which covers both accuracy and correct response time. We also found that the subjects recognized a distinctive face more accurately and faster than the other given facial images, even under low-resolution prosthetic vision. Every subject could identify familiar faces even in very low-resolution images, and the proposed edge-enhancement method seemed to contribute to intermediate-stage visual prostheses.
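
    The edge-enhancement pipeline described here (contrast enhancement, Sobel edges, blocking by averaging, weighted subtraction, pixelization) can be sketched roughly as below; the contrast-enhancement step, block grid, and parameter names are assumptions, not the authors' exact implementation.

```python
import cv2
import numpy as np

def pixelized_phosphene_image(gray_face, grid=(32, 32), edge_weight=0.75):
    """Sketch of the edge-enhancement pipeline (mode 3 by default).

    `gray_face` is assumed to be an 8-bit grayscale face image; the grid size
    and the use of histogram equalization for contrast are illustrative choices.
    """
    # Contrast enhancement (assumed: simple histogram equalization).
    contrast = cv2.equalizeHist(gray_face)

    # Sobel edge magnitude, rescaled to the 0-255 range.
    gx = cv2.Sobel(contrast, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(contrast, cv2.CV_32F, 0, 1)
    edges = cv2.normalize(np.hypot(gx, gy), None, 0, 255, cv2.NORM_MINMAX)

    # "Blocking" = averaging within each cell of the phosphene grid.
    def block(img):
        return cv2.resize(img.astype(np.float32), grid, interpolation=cv2.INTER_AREA)

    blocked_contrast = block(contrast)
    blocked_edges = block(edges)

    # Subtract the weighted, blocked edge image from the blocked contrast image.
    phosphenes = np.clip(blocked_contrast - edge_weight * blocked_edges, 0, 255)

    # Upsample with nearest-neighbour so each cell appears as one phosphene.
    return cv2.resize(phosphenes.astype(np.uint8), gray_face.shape[::-1],
                      interpolation=cv2.INTER_NEAREST)
```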

  10. Neurinomas of the facial nerve extending to the middle cranial fossa

    International Nuclear Information System (INIS)

    Ichikawa, Akimichi; Tanaka, Ryuichi; Matsumura, Kenichiro; Takeda, Norio; Ishii, Ryoji; Ito, Jusuke.

    1986-01-01

    Three cases with neurinomas of the facial nerve are reported, especially with regard to the computerized tomographic (CT) findings. All of them had a long history of facial-nerve dysfunction, associated with hearing loss over periods from several to twenty-five years. Intraoperative findings demonstrated that these tumors arose from the intrapetrous portion, the horizontal portion, or the geniculate portion of the facial nerve and that they were located in the middle cranial fossa. The histological diagnoses were neurinomas. CT scans of three cases demonstrated round and low-density masses with marginal high-density areas in the middle cranial fossa, in one associated with diffuse low-density areas in the left temporal and parietal lobes. The low-density areas on CT were thought to be cysts; this was confirmed by surgery. Enhanced CT scans showed irregular enhancement in one case and ring-like enhancement in two cases. High-resolution CT scans of the temporal bone in two cases revealed a soft tissue mass in the middle ear, a well-circumscribed irregular destruction of the anterior aspect of the petrous bone, and calcifications. These findings seemed to be significant features of the neurinomas of the facial nerve extending to the middle cranial fossa. We emphasize that bone-window CT of the temporal bone is most useful in detecting a neurinoma of the facial nerve in its early stage in order to preserve the facial- and acoustic-nerve functions. (author)

  11. Facial and extrafacial eosinophilic pustular folliculitis: a clinical and histopathological comparative study.

    Science.gov (United States)

    Lee, W J; Won, K H; Won, C H; Chang, S E; Choi, J H; Moon, K C; Lee, M W

    2014-05-01

    Although more than 300 cases of eosinophilic pustular folliculitis (EPF) have been reported to date, differences in clinicohistopathological findings among affected sites have not yet been evaluated. To evaluate differences in the clinical and histopathological features of facial and extrafacial EPF. Forty-six patients diagnosed with EPF were classified into those with facial and extrafacial disease according to the affected site. Clinical and histopathological characteristics were retrospectively compared, using all data available in the patient medical records. There were no significant between-group differences in subject ages at presentation, but a male predominance was observed in the extrafacial group. In addition, immunosuppression-associated type EPF was more common in the extrafacial group. Eruptions of plaques with an annular appearance were more common in the facial group. Histologically, perifollicular infiltration of eosinophils occurred more frequently in the facial group, whereas perivascular patterns occurred more frequently in the extrafacial group. Follicular mucinosis and exocytosis of inflammatory cells in the hair follicles were strongly associated with facial EPF. The clinical and histopathological characteristics of patients with facial and extrafacial EPF differ, suggesting the involvement of different pathogenic processes in the development of EPF at different sites. © 2013 British Association of Dermatologists.

  12. Mapping the impairment in decoding static facial expressions of emotion in prosopagnosia.

    Science.gov (United States)

    Fiset, Daniel; Blais, Caroline; Royer, Jessica; Richoz, Anne-Raphaëlle; Dugas, Gabrielle; Caldara, Roberto

    2017-08-01

    Acquired prosopagnosia is characterized by a deficit in face recognition due to diverse brain lesions, but interestingly most prosopagnosic patients suffering from posterior lesions use the mouth instead of the eyes for face identification. Whether this bias is present for the recognition of facial expressions of emotion has not yet been addressed. We tested PS, a pure case of acquired prosopagnosia with bilateral occipitotemporal lesions anatomically sparing the regions dedicated for facial expression recognition. PS used mostly the mouth to recognize facial expressions even when the eye area was the most diagnostic. Moreover, PS directed most of her fixations towards the mouth. Her impairment was still largely present when she was instructed to look at the eyes, or when she was forced to look at them. Control participants showed a performance comparable to PS when only the lower part of the face was available. These observations suggest that the deficits observed in PS with static images are not solely attentional, but are rooted at the level of facial information use. This study corroborates neuroimaging findings suggesting that the Occipital Face Area might play a critical role in extracting facial features that are integrated for both face identification and facial expression recognition in static images. © The Author (2017). Published by Oxford University Press.

  13. The facial nerve: anatomy and associated disorders for oral health professionals.

    Science.gov (United States)

    Takezawa, Kojiro; Townsend, Grant; Ghabriel, Mounir

    2018-04-01

    The facial nerve, the seventh cranial nerve, is of great clinical significance to oral health professionals. Most published literature either addresses the central connections of the nerve or its peripheral distribution but few integrate both of these components and also highlight the main disorders affecting the nerve that have clinical implications in dentistry. The aim of the current study is to provide a comprehensive description of the facial nerve. Multiple aspects of the facial nerve are discussed and integrated, including its neuroanatomy, functional anatomy, gross anatomy, clinical problems that may involve the nerve, and the use of detailed anatomical knowledge in the diagnosis of the site of facial nerve lesion in clinical neurology. Examples are provided of disorders that can affect the facial nerve during its intra-cranial, intra-temporal and extra-cranial pathways, and key aspects of clinical management are discussed. The current study is complemented by original detailed dissections and sketches that highlight key anatomical features and emphasise the extent and nature of anatomical variations displayed by the facial nerve.

  14. Branches of the Facial Artery.

    Science.gov (United States)

    Hwang, Kun; Lee, Geun In; Park, Hye Jin

    2015-06-01

    The aim of this study is to systematically review the names of the branches of the facial artery, to review the classification of its branching patterns, and to clarify the presence percentage of each branch. In a PubMed search, the search terms "facial," AND "artery," AND "classification OR variant OR pattern" were used. The IBM SPSS Statistics 20 system was used for statistical analysis. Among the 500 titles, 18 articles were selected and reviewed systematically. Most of the articles focused on "classification" according to the "terminal branch." Several authors classified the facial artery according to its terminal branches. Most of them, however, did not describe the definition of "terminal branch," and there was confusion among the classifications. When the inferior labial artery was absent, 3 different types were used. The "alar branch" or "nasal branch" was used instead of the "lateral nasal branch." The angular branch was used to refer to several different branches. The presence percentage of each branch named in Gray's Anatomy (premasseteric, inferior labial, superior labial, lateral nasal, and angular) varied, and no branch was present with 100% consistency. The superior labial branch was most frequently cited (95.7%, 382 arteries in 399 hemifaces). The angular branch (53.9%, 219 arteries in 406 hemifaces) and the premasseteric branch (53.8%, 43 arteries in 80 hemifaces) were least frequently cited. There were significant differences among each of the 5 branches (P < 0.05) except between the angular branch and the premasseteric branch and between the superior labial branch and the inferior labial branch. The authors believe that identifying the presence percentage of each branch will be helpful for surgical procedures.

  15. Photographic Standards for Patients With Facial Palsy and Recommendations by Members of the Sir Charles Bell Society.

    Science.gov (United States)

    Santosa, Katherine B; Fattah, Adel; Gavilán, Javier; Hadlock, Tessa A; Snyder-Warwick, Alison K

    2017-07-01

    There is no widely accepted assessment tool or common language used by clinicians caring for patients with facial palsy, making exchange of information challenging. Standardized photography may represent such a language and is imperative for precise exchange of information and comparison of outcomes in this special patient population. To review the literature to evaluate the use of facial photography in the management of patients with facial palsy and to examine the use of photography in documenting facial nerve function among members of the Sir Charles Bell Society-a group of medical professionals dedicated to care of patients with facial palsy. A literature search was performed to review photographic standards in patients with facial palsy. In addition, a cross-sectional survey of members of the Sir Charles Bell Society was conducted to examine use of medical photography in documenting facial nerve function. The literature search and analysis was performed in August and September 2015, and the survey was conducted in August and September 2013. The literature review searched EMBASE, CINAHL, and MEDLINE databases from inception of each database through September 2015. Additional studies were identified by scanning references from relevant studies. Only English-language articles were eligible for inclusion. Articles that discussed patients with facial palsy and outlined photographic guidelines for this patient population were included in the study. The survey was disseminated to the Sir Charles Bell Society members in electronic form. It consisted of 10 questions related to facial grading scales, patient-reported outcome measures, other psychological assessment tools, and photographic and videographic recordings. In total, 393 articles were identified in the literature search, 7 of which fit the inclusion criteria. Six of the 7 articles discussed or proposed views specific to patients with facial palsy. However, none of the articles specifically focused on

  16. Facial image identification using Photomodeler

    DEFF Research Database (Denmark)

    Lynnerup, Niels; Andersen, Marie; Lauritsen, Helle Petri

    2003-01-01

    We present the results of a preliminary study on the use of 3-D software (Photomodeler) for identification purposes. Perpetrators may be photographed or filmed by surveillance systems. The police may wish to have these images compared to photographs of suspects. The surveillance imagery will often consist of many images of the same person taken from different angles. We wanted to see if it was possible to combine such a suite of images in useful 3-D renderings of facial proportions. Fifteen male adults were photographed from four different angles. Based on these photographs, a 3-D wireframe model...

  17. A case of oral-facial-digital syndrome with overlapping manifestations of type V and type VI: a possible new OFD syndrome

    International Nuclear Information System (INIS)

    Chung Wongyiu; Chung Laupo

    1999-01-01

    We report a child with clinical and radiological manifestations characteristic of both Váradi syndrome (oral-facial-digital syndrome type VI) and Thurston syndrome (oral-facial-digital syndrome type V). The findings have not been reported previously, and we believe that it represents a new variant. (orig.)

  18. Facial skin pores: a multiethnic study.

    Science.gov (United States)

    Flament, Frederic; Francois, Ghislain; Qiu, Huixia; Ye, Chengda; Hanaya, Tomoo; Batisse, Dominique; Cointereau-Chardon, Suzy; Seixas, Mirela Donato Gianeti; Dal Belo, Susi Elaine; Bazin, Roland

    2015-01-01

    Skin pores (SP), as they are called by laymen, are common and benign features mostly located on the face (nose, cheeks, etc) that generate many aesthetic concerns or complaints. Despite the prevalence of skin pores, the related literature is scarce. With the aim of describing the prevalence and anatomic features of skin pores among ethnic groups, a dermatoscopic instrument using polarized lighting, coupled to a digital camera, recorded the major features of skin pores (size, density, coverage) on the cheeks of 2,585 women in different countries and continents. A detection threshold of 250 μm, correlated to clinical scorings by experts, was input into dedicated software to allow automatic counting of the SP density (N/cm²) and determination of their respective sizes in mm². Integrating both criteria also allowed the relative part of the skin surface (as a percentage) actually covered by SP on the cheeks to be established. The results showed that the respective size, density, and skin coverage values: 1) were recorded in all studied subjects; 2) varied greatly with ethnicity; 3) plateaued with age in most cases; and 4) globally reflected self-assessment by subjects, in particular those who self-declared having "enlarged pores," like Brazilian women. Inversely, Chinese women were clearly distinct from the other ethnicities in having very low pore densities and sizes. These results suggest that the morphology of facial skin pores, as perceived by the human eye, is not primarily determined by functional criteria of the associated appendages such as sebaceous glands. To what extent skin pores may be viewed as additional criteria of photo-altered skin is an issue to be further addressed.
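
    The automatic pore counting described above (250 μm detection threshold, density in N/cm², sizes in mm², and percentage coverage) might be approximated as in the sketch below; the segmentation input, variable names, and scikit-image calls are assumptions, not the instrument's actual software.

```python
import numpy as np
from skimage import measure

def pore_metrics(binary_pore_mask, mm_per_pixel, min_diameter_mm=0.25):
    """Illustrative pore-counting step.

    `binary_pore_mask` is assumed to be a boolean image in which detected pore
    regions are True; the 250 um (0.25 mm) threshold is applied to the
    equivalent diameter of each connected region.
    """
    labels = measure.label(binary_pore_mask)
    regions = measure.regionprops(labels)

    areas_mm2 = []
    for r in regions:
        diameter_mm = r.equivalent_diameter * mm_per_pixel
        if diameter_mm >= min_diameter_mm:          # keep pores >= 250 um
            areas_mm2.append(r.area * mm_per_pixel ** 2)

    field_area_cm2 = binary_pore_mask.size * (mm_per_pixel ** 2) / 100.0
    density_per_cm2 = len(areas_mm2) / field_area_cm2                     # N/cm^2
    mean_size_mm2 = float(np.mean(areas_mm2)) if areas_mm2 else 0.0       # mm^2
    coverage_percent = 100.0 * sum(areas_mm2) / (field_area_cm2 * 100.0)  # % of field
    return density_per_cm2, mean_size_mm2, coverage_percent
```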

  19. Polyacrylamide gel for facial wasting rehabilitation: how many milliliters per session?

    Science.gov (United States)

    Rauso, R; Gherardini, G; Parlato, V; Amore, R; Tartaro, G

    2012-02-01

    Facial lipoatrophy is most distressing for HIV patients receiving pharmacologic treatment. Nonabsorbable fillers are widely used to restore facial features in these patients. We evaluated the safety and aesthetic outcomes in two samples of HIV+ patients affected by facial wasting who received different filling protocols of the nonabsorbable filler Aquamid® to correct facial wasting. Thirty-one HIV+ patients affected by facial wasting received injections of the nonabsorbable filler Aquamid for facial wasting rehabilitation. Patients were randomly divided into two groups: A and B. In group A, the facial defect was corrected by injecting up to 8 ml of product in the first session; patients were retreated every 8 weeks with touch-up procedures until full correction was observed. In group B, facial defects were corrected by injecting 2 ml of product per session; patients were retreated every 8 weeks until full correction was observed. Patients in group A noted a great improvement after the first filling procedure. Patients in group B noted improvement of their face after four filling procedures on average. Local infection, foreign-body reaction, and migration of the product were not observed in either group during follow-up. Rehabilitation with a single megafilling session followed by touch-up procedures and rehabilitation with a gradual build-up of the localized soft-tissue loss appear to be equally safe for patients. However, with a megafilling session, satisfaction is achieved earlier and hospital costs for items such as gauze and gloves can be reduced.

  20. Intact mirror mechanisms for automatic facial emotions in children and adolescents with autism spectrum disorder.

    Science.gov (United States)

    Schulte-Rüther, Martin; Otte, Ellen; Adigüzel, Kübra; Firk, Christine; Herpertz-Dahlmann, Beate; Koch, Iring; Konrad, Kerstin

    2017-02-01

    It has been suggested that an early deficit in the human mirror neuron system (MNS) is an important feature of autism. Recent findings related to simple hand and finger movements do not support a general dysfunction of the MNS in autism. Studies investigating facial actions (e.g., emotional expressions) have been more consistent but have mostly relied on passive observation tasks. We used a new variant of a compatibility task for the assessment of automatic facial mimicry responses that allowed for simultaneous control of attention to facial stimuli. We used facial electromyography in 18 children and adolescents with Autism spectrum disorder (ASD) and 18 typically developing controls (TDCs). We observed a robust compatibility effect in ASD, that is, the execution of a facial expression was facilitated if a congruent facial expression was observed. Time course analysis of RT distributions and comparison to a classic compatibility task (symbolic Simon task) revealed that the facial compatibility effect appeared early and increased with time, suggesting fast and sustained activation of motor codes during observation of facial expressions. We observed a negative correlation of the compatibility effect with age across participants and in ASD, and a positive correlation between self-rated empathy and congruency for smiling faces in TDC but not in ASD. This pattern of results suggests that basic motor mimicry is intact in ASD, but is not associated with complex social cognitive abilities such as emotion understanding and empathy. Autism Res 2017, 10: 298-310. © 2016 International Society for Autism Research, Wiley Periodicals, Inc.

  1. Mobius syndrome: MRI features

    International Nuclear Information System (INIS)

    Markarian, Maria F.; Villarroel, Gonzalo M.; Nagel, Jorge R.

    2003-01-01

    Purpose: Mobius syndrome, or congenital facial diplegia, is associated with paralysis of lateral gaze movements. This syndrome may include other cranial nerve palsies and be associated with musculoskeletal anomalies. Our objective is to show the MRI findings in Mobius syndrome. Material and methods: MRI was performed in 3 patients with a clinical diagnosis of Mobius syndrome. MRI (1.5 T) exams included axial FSE (T1 and T2), FLAIR, SE/EPI, GRE/20, sagittal FSE T2, coronal T1, diffusion, MR angiography and spectroscopy sequences. Results: The common features of this syndrome found on MRI were: depression or straightening of the floor of the fourth ventricle, diminution of the anteroposterior diameter of the brainstem, morphologic alteration of the pons, medulla oblongata and hypoglossal nuclei, as well as severe micrognathia. Conclusion: The morphologic alterations of Mobius syndrome can be clearly identified by MRI; this method has proved to be a useful diagnostic examination. (author)

  2. The fate of facial asymmetry after surgery for "muscular torticollis" in early childhood

    Directory of Open Access Journals (Sweden)

    Dinesh Kittur

    2016-01-01

    Full Text Available Aims and Objectives: To study whether the facial features return to normal after surgery for muscular torticollis performed in early childhood. Materials and Methods: This is a long-term study of the fate of facial asymmetry in four children who underwent surgery for muscular torticollis in early childhood. All the patients presented late, i.e., after the age of 4 years, with a scarred sternomastoid and plagiocephaly, so conservative management with physiotherapy was not considered. All the patients had an X-ray of the cervical spine and an eye and dental checkup before a diagnosis of muscular torticollis was made. A preoperative photograph of the patient's face was taken to counsel the parents about the secondary effect of a short sternomastoid on the facial features and the need for surgery. After division of the sternomastoid muscle and release of the cervical fascia when indicated, the head was maintained in a hyperextended position supported by sandbags for three days. Gradual physiotherapy was then started, followed by a Minerva collar that the child wore for as much of each 24-hour period as possible. Physiotherapy was continued three times a day until the range of head movements returned to normal. During follow-up, serial photographs were taken to note the changes in the facial features. Results: In all four patients, the facial asymmetry was corrected and the facial features returned to normal. Conclusion: Most of the facial asymmetry is corrected in the first two years after surgery. By adolescence, the face returns to normal.

  3. Humanoid Head Face Mechanism with Expandable Facial Expressions

    Directory of Open Access Journals (Sweden)

    Wagshum Techane Asheber

    2016-02-01

    Full Text Available Social robots for daily life activities are becoming more common, and a humanoid robot with realistic facial expressions is a strong candidate for common chores. In this paper, the development of a humanoid face mechanism with simplified system complexity to generate human-like facial expressions is presented. The distinctive feature of this face robot is the use of significantly fewer actuators: only three servo motors for facial expressions and five for the rest of the head motions. This leads to low energy consumption, making it suitable for applications such as mobile humanoid robots. Moreover, the modular design makes it possible to have as many face appearances as needed on one structure, and the mechanism allows expansion to generate more expressions without addition or alteration of components. The robot is also equipped with an audio system and a camera inside each eyeball, so hearing and vision are used for localization, communication and enhancing the display of expressions.

  4. Facial dysmorphopsia: a notable variant of the "thin man" phenomenon?

    Science.gov (United States)

    Ganssauge, Martin; Papageorgiou, Eleni; Schiefer, Ulrich

    2012-10-01

    The aim of this work is to investigate the facial distortion (dysmorphopsia) experienced by patients with homonymous paracentral scotomas and to analyze the interrelationship with the previously described "thin man" phenomenon. Routine neuro-ophthalmological examination and brain MRI in three patients who suffered from small homonymous paracentral scotomas due to infarction or arteriovenous malformations of the occipital lobe. They all complained of distortion and shrinkage of their interlocutor's face contralateral to the brain lesion. The phenomenon appeared some seconds after steady fixation on the interlocutor's nose and was evident with both left and right homonymous scotomas. The patients did not notice a gap in the area corresponding to the scotoma and objects other than faces were perceived normally. Homonymous paracentral scotomas can lead to focal displacement of facial features towards the center of the field defect with resulting distortion of the face on the affected side. This so-called "dysmorphopsia" makes faces appear regionally narrower than they are in reality and may be induced even by visual field defects that remain undetected by conventional perimetry using 6° × 6° grids. Predilection for faces is probably associated with the superior location of scotomas or specific impairment of face processing abilities related to the lesion site. Facial dysmorphopsia is most probably associated with cortical "filling-in" and spatial distortion, and can hence be regarded as a special entity of the "thin man" phenomenon.

  5. Photometric facial analysis of the Igbo Nigerian adult male

    Science.gov (United States)

    Ukoha, Ukoha Ukoha; Udemezue, Onochie Okwudili; Oranusi, Chidi Kingsley; Asomugha, Azuoma Lasbrey; Dimkpa, Uchechukwu; Nzeukwu, Lynda Chinenye

    2012-01-01

    Background: A carefully performed facial analysis can serve as a strong foundation for successful facial reconstructive and plastic surgeries, rhinoplasty or orthodontics. Aim: The purpose of this study is to determine the facial features and qualities of the Igbo Nigerian adult male using photometry. Materials and Methods: One hundred and twenty subjects aged between 18 and 28 years were studied at the Anambra State University, Uli, Nigeria. Frontal and right lateral view photographs of their faces were taken and traced out on tracing paper. On these, two vertical distances, nasion to subnasale and subnasale to menton, and four angles, nasofrontal (NF), nasofacial, nasomental (NM) and mentocervical, were measured. Results: The results showed that the Igbo Nigerian adult male has a middle face that is shorter than the lower one (41.76% vs. 58.24%), a moderate glabella (NF = 133.97°), a projected nose (nasofacial angle = 38.68°) and a less prominent chin (NM = 125.87°). Conclusion: This study is very important in medical practice as it can be used to compare the pre- and post-operative results of plastic surgery and other related surgeries of the face. PMID:23661886
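
    The angular measurements described in this record can be computed from digitized landmark coordinates as sketched below; the landmark coordinates are purely hypothetical and the angle definitions follow common soft-tissue conventions, not necessarily the authors' exact tracing protocol.

```python
import numpy as np

def angle_at(vertex, p1, p2):
    """Angle in degrees at `vertex` formed by the rays toward p1 and p2."""
    v1 = np.asarray(p1, float) - np.asarray(vertex, float)
    v2 = np.asarray(p2, float) - np.asarray(vertex, float)
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Hypothetical 2D landmark coordinates (x, y) digitized from a lateral tracing.
glabella, nasion, pronasale, pogonion = (10, 95), (12, 88), (25, 70), (16, 20)

# Nasofrontal angle at the nasion (glabella-nasion-pronasale).
nasofrontal = angle_at(nasion, glabella, pronasale)

# Nasomental angle at the nasal tip (nasion-pronasale-pogonion); the choice of
# pogonion as the chin landmark is an assumption for this sketch.
nasomental = angle_at(pronasale, nasion, pogonion)
```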

  6. Facial Affect Displays during Tutoring Sessions

    NARCIS (Netherlands)

    Ghijsen, M.; Heylen, Dirk K.J.; Nijholt, Antinus; op den Akker, Hendrikus J.A.

    2005-01-01

    An emotionally intelligent tutoring system should be able to provide feedback to students, taking into account relevant aspects of the mental state of the student. Facial expressions, put in context, might provide some cues with respect to this state. We discuss the analysis of the facial expression

  7. Case Report: Magnetically retained silicone facial prosthesis ...

    African Journals Online (AJOL)

    Prosthetic camouflaging of facial defects and the use of silicone maxillofacial material are alternatives to surgical retreatment. Silicone elastomers give the clinician more options for customizing a facial prosthesis that is simple and esthetically good when coupled with biomagnets for retention. Key words: Magnet ...

  8. Facial Feedback Mechanisms in Autistic Spectrum Disorders

    Science.gov (United States)

    Stel, Marielle; van den Heuvel, Claudia; Smeets, Raymond C.

    2008-01-01

    Facial feedback mechanisms of adolescents with Autistic Spectrum Disorders (ASD) were investigated utilizing three studies. Facial expressions, which became activated via automatic (Studies 1 and 2) or intentional (Study 2) mimicry, or via holding a pen between the teeth (Study 3), influenced corresponding emotions for controls, while individuals…

  9. Some Aspects of Facial Nerve Paralysis

    African Journals Online (AJOL)

    1973-01-20

    Jan 20, 1973 ... the facial nerve has tremendous regenerative ability. The paretic, or flaccid, ... fresh axoplasm moving into it from the cell-body. Only when the axon .... tivity of the ear to sound, homolateral to the facial paralysis. The cause is ...

  10. Neural Temporal Dynamics of Facial Emotion Processing: Age Effects and Relationship to Cognitive Function

    Directory of Open Access Journals (Sweden)

    Xiaoyan Liao

    2017-06-01

    Full Text Available This study used event-related potentials (ERPs) to investigate the effects of age on neural temporal dynamics of processing task-relevant facial expressions and their relationship to cognitive functions. Negative (sad, afraid, angry, and disgusted), positive (happy), and neutral faces were presented to 30 older and 31 young participants who performed a facial emotion categorization task. Behavioral and ERP indices of facial emotion processing were analyzed. An enhanced N170 for negative faces, in addition to intact right-hemispheric N170 for positive faces, was observed in older adults relative to their younger counterparts. Moreover, older adults demonstrated an attenuated within-group N170 laterality effect for neutral faces, while younger adults showed the opposite pattern. Furthermore, older adults exhibited sustained temporo-occipital negativity deflection over the time range of 200–500 ms post-stimulus, while young adults showed posterior positivity and subsequent emotion-specific frontal negativity deflections. In older adults, decreased accuracy for labeling negative faces was positively correlated with Montreal Cognitive Assessment Scores, and accuracy for labeling neutral faces was negatively correlated with age. These findings suggest that older people may exert more effort in structural encoding for negative faces and there are different response patterns for the categorization of different facial emotions. Cognitive functioning may be related to facial emotion categorization deficits observed in older adults. This may not be attributable to positivity effects: it may represent a selective deficit for the processing of negative facial expressions in older adults.

  11. Computer-analyzed facial expression as a surrogate marker for autism spectrum social core symptoms.

    Directory of Open Access Journals (Sweden)

    Keiho Owada

    Full Text Available To develop novel interventions for autism spectrum disorder (ASD) core symptoms, valid, reliable, and sensitive longitudinal outcome measures are required for detecting symptom change over time. Here, we tested whether a computerized analysis of quantitative facial expression measures could act as a marker for core ASD social symptoms. Facial expression intensity values during a semi-structured socially interactive situation extracted from the Autism Diagnostic Observation Schedule (ADOS) were quantified by dedicated software in 18 high-functioning adult males with ASD. Controls were 17 age-, gender-, parental socioeconomic background-, and intellectual level-matched typically developing (TD) individuals. Statistical analyses determined whether values representing the strength and variability of each facial expression element differed significantly between the ASD and TD groups and whether they correlated with ADOS reciprocal social interaction scores. Compared with the TD controls, facial expressions in the ASD group appeared more "Neutral" (d = 1.02, P = 0.005, PFDR < 0.05), with lower variability in Happy expression (d = 1.10, P = 0.003, PFDR < 0.05). Moreover, the stronger Neutral facial expressions in the ASD participants were positively correlated with poorer ADOS reciprocal social interaction scores (ρ = 0.48, P = 0.042). These findings indicate that our method for quantitatively measuring reduced facial expressivity during social interactions can be a promising marker for core ASD social symptoms.

  12. Computer-analyzed facial expression as a surrogate marker for autism spectrum social core symptoms.

    Science.gov (United States)

    Owada, Keiho; Kojima, Masaki; Yassin, Walid; Kuroda, Miho; Kawakubo, Yuki; Kuwabara, Hitoshi; Kano, Yukiko; Yamasue, Hidenori

    2018-01-01

    To develop novel interventions for autism spectrum disorder (ASD) core symptoms, valid, reliable, and sensitive longitudinal outcome measures are required for detecting symptom change over time. Here, we tested whether a computerized analysis of quantitative facial expression measures could act as a marker for core ASD social symptoms. Facial expression intensity values during a semi-structured socially interactive situation extracted from the Autism Diagnostic Observation Schedule (ADOS) were quantified by dedicated software in 18 high-functioning adult males with ASD. Controls were 17 age-, gender-, parental socioeconomic background-, and intellectual level-matched typically developing (TD) individuals. Statistical analyses determined whether values representing the strength and variability of each facial expression element differed significantly between the ASD and TD groups and whether they correlated with ADOS reciprocal social interaction scores. Compared with the TD controls, facial expressions in the ASD group appeared more "Neutral" (d = 1.02, P = 0.005, PFDR < 0.05), with a further group difference related to Neutral expression (d = 1.08, P = 0.003) and lower variability in Happy expression (d = 1.10, P = 0.003, PFDR < 0.05). Moreover, the stronger Neutral facial expressions in the ASD participants were positively correlated with poorer ADOS reciprocal social interaction scores (ρ = 0.48, P = 0.042). These findings indicate that our method for quantitatively measuring reduced facial expressivity during social interactions can be a promising marker for core ASD social symptoms.
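
    The group comparison and correlation reported in this record can be reproduced in outline as below; the intensity values and ADOS scores are simulated placeholders, and the effect-size and Spearman calculations are a generic sketch, not the authors' analysis code.

```python
import numpy as np
from scipy import stats

def cohens_d(group_a, group_b):
    """Pooled-SD Cohen's d between two independent groups."""
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    pooled_sd = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                        / (len(a) + len(b) - 2))
    return (a.mean() - b.mean()) / pooled_sd

# Hypothetical per-participant mean "Neutral" intensity values (0-1) from the
# expression-analysis software, and ADOS reciprocal social interaction scores.
neutral_asd = np.random.default_rng(0).uniform(0.5, 0.9, 18)
neutral_td = np.random.default_rng(1).uniform(0.3, 0.7, 17)
ados_social = np.random.default_rng(2).integers(2, 14, 18)

d = cohens_d(neutral_asd, neutral_td)                    # group effect size
t, p = stats.ttest_ind(neutral_asd, neutral_td)          # group comparison
rho, p_rho = stats.spearmanr(neutral_asd, ados_social)   # ρ as reported in the abstract
```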

  13. Cultural similarities and differences in perceiving and recognizing facial expressions of basic emotions.

    Science.gov (United States)

    Yan, Xiaoqian; Andrews, Timothy J; Young, Andrew W

    2016-03-01

    The ability to recognize facial expressions of basic emotions is often considered a universal human ability. However, recent studies have suggested that this commonality has been overestimated and that people from different cultures use different facial signals to represent expressions (Jack, Blais, Scheepers, Schyns, & Caldara, 2009; Jack, Caldara, & Schyns, 2012). We investigated this possibility by examining similarities and differences in the perception and categorization of facial expressions between Chinese and white British participants using whole-face and partial-face images. Our results showed no cultural difference in the patterns of perceptual similarity of expressions from whole-face images. When categorizing the same expressions, however, both British and Chinese participants were slightly more accurate with whole-face images of their own ethnic group. To further investigate potential strategy differences, we repeated the perceptual similarity and categorization tasks with presentation of only the upper or lower half of each face. Again, the perceptual similarity of facial expressions was similar between Chinese and British participants for both the upper and lower face regions. However, participants were slightly better at categorizing facial expressions of their own ethnic group for the lower face regions, indicating that the way in which culture shapes the categorization of facial expressions is largely driven by differences in information decoding from this part of the face. (c) 2016 APA, all rights reserved.

  14. Recognition of facial expressions and prosodic cues with graded emotional intensities in adults with Asperger syndrome.

    Science.gov (United States)

    Doi, Hirokazu; Fujisawa, Takashi X; Kanai, Chieko; Ohta, Haruhisa; Yokoi, Hideki; Iwanami, Akira; Kato, Nobumasa; Shinohara, Kazuyuki

    2013-09-01

    This study investigated the ability of adults with Asperger syndrome to recognize emotional categories of facial expressions and emotional prosodies with graded emotional intensities. The individuals with Asperger syndrome showed poorer recognition performance for angry and sad expressions from both facial and vocal information. The group difference in facial expression recognition was prominent for stimuli with low or intermediate emotional intensities. In contrast to this, the individuals with Asperger syndrome exhibited lower recognition accuracy than typically-developed controls mainly for emotional prosody with high emotional intensity. In facial expression recognition, Asperger and control groups showed an inversion effect for all categories. The magnitude of this effect was less in the Asperger group for angry and sad expressions, presumably attributable to reduced recruitment of the configural mode of face processing. The individuals with Asperger syndrome outperformed the control participants in recognizing inverted sad expressions, indicating enhanced processing of local facial information representing sad emotion. These results suggest that the adults with Asperger syndrome rely on modality-specific strategies in emotion recognition from facial expression and prosodic information.

  15. Younger and Older Users’ Recognition of Virtual Agent Facial Expressions

    Science.gov (United States)

    Beer, Jenay M.; Smarr, Cory-Ann; Fisk, Arthur D.; Rogers, Wendy A.

    2015-01-01

    As technology advances, robots and virtual agents will be introduced into the home and healthcare settings to assist individuals, both young and old, with everyday living tasks. Understanding how users recognize an agent’s social cues is therefore imperative, especially in social interactions. Facial expression, in particular, is one of the most common non-verbal cues used to display and communicate emotion in on-screen agents (Cassell, Sullivan, Prevost, & Churchill, 2000). Age is important to consider because age-related differences in emotion recognition of human facial expression have been supported (Ruffman et al., 2008), with older adults showing a deficit for recognition of negative facial expressions. Previous work has shown that younger adults can effectively recognize facial emotions displayed by agents (Bartneck & Reichenbach, 2005; Courgeon et al. 2009; 2011; Breazeal, 2003); however, little research has compared in depth younger and older adults’ ability to label a virtual agent’s facial emotions, an important consideration because social agents will be required to interact with users of varying ages. If such age-related differences exist for recognition of virtual agent facial expressions, we aim to understand if those age-related differences are influenced by the intensity of the emotion, dynamic formation of emotion (i.e., a neutral expression developing into an expression of emotion through motion), or the type of virtual character differing by human-likeness. Study 1 investigated the relationship between age-related differences, the implication of dynamic formation of emotion, and the role of emotion intensity in emotion recognition of the facial expressions of a virtual agent (iCat). Study 2 examined age-related differences in recognition expressed by three types of virtual characters differing by human-likeness (non-humanoid iCat, synthetic human, and human). Study 2 also investigated the role of configural and featural processing as a

  16. Automatic face morphing for transferring facial animation

    NARCIS (Netherlands)

    Bui Huu Trung, B.H.T.; Bui, T.D.; Poel, Mannes; Heylen, Dirk K.J.; Nijholt, Antinus; Hamza, H.M.

    2003-01-01

    In this paper, we introduce a novel method of automatically finding the training set of RBF networks for morphing a prototype face to represent a new face. This is done by automatically specifying and adjusting corresponding feature points on a target face. The RBF networks are then used to transfer
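
    The abstract stops short of the warping step, but the RBF interpolation it relies on is standard. Below is a hedged numpy sketch, under the assumption of Gaussian radial basis functions and 2D landmarks, of fitting a displacement field from corresponding feature points and applying it to other mesh vertices; it is not the authors' implementation.

```python
import numpy as np

def fit_rbf_warp(src, dst, sigma=30.0):
    """Fit Gaussian-RBF weights for the displacement field mapping src landmarks onto dst."""
    d2 = ((src[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2.0 * sigma**2))
    # Solve K @ W = (dst - src); a small ridge term keeps the system well conditioned.
    return np.linalg.solve(K + 1e-8 * np.eye(len(src)), dst - src)

def apply_rbf_warp(points, src, W, sigma=30.0):
    """Warp arbitrary mesh vertices with the fitted displacement field."""
    d2 = ((points[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    return points + np.exp(-d2 / (2.0 * sigma**2)) @ W

# Hypothetical 2D feature points (eye corners, nose tip, mouth corners) on prototype and target.
src = np.array([[30.0, 40.0], [70.0, 40.0], [50.0, 60.0], [35.0, 80.0], [65.0, 80.0]])
dst = src + np.array([[2.0, 1.0], [-1.0, 2.0], [0.0, 3.0], [1.0, -2.0], [-2.0, 0.0]])
W = fit_rbf_warp(src, dst)
mesh_vertices = np.array([[50.0, 50.0], [40.0, 70.0]])
print(apply_rbf_warp(mesh_vertices, src, W))
```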

  17. The Prevalence of Cosmetic Facial Plastic Procedures among Facial Plastic Surgeons.

    Science.gov (United States)

    Moayer, Roxana; Sand, Jordan P; Han, Albert; Nabili, Vishad; Keller, Gregory S

    2018-04-01

    This is the first study to report on the prevalence of cosmetic facial plastic surgery use among facial plastic surgeons. The aim of this study is to determine the frequency with which facial plastic surgeons have cosmetic procedures themselves. A secondary aim is to determine whether trends in usage of cosmetic facial procedures among facial plastic surgeons are similar to those of nonsurgeons. The study design was an anonymous, five-question, Internet survey distributed via email, set in a single academic institution. Board-certified members of the American Academy of Facial Plastic and Reconstructive Surgery (AAFPRS) were included in this study. Self-reported history of cosmetic facial plastic surgery or minimally invasive procedures was recorded. The survey also queried participants for demographic data. A total of 216 members of the AAFPRS responded to the questionnaire. Ninety percent of respondents were male (n = 192) and 10.3% were female (n = 22). Thirty-three percent of respondents were aged 31 to 40 years (n = 70), 25% were aged 41 to 50 years (n = 53), 21.4% were aged 51 to 60 years (n = 46), and 20.5% were older than 60 years (n = 44). Thirty-six percent of respondents had a surgical cosmetic facial procedure and 75% had at least one minimally invasive cosmetic facial procedure. Facial plastic surgeons are frequent users of cosmetic facial plastic surgery. This finding may be due to access, knowledge base, values, or attitudes. By better understanding surgeon attitudes toward facial plastic surgery, we can improve communication with patients and delivery of care. This study is a first step in understanding the use of facial plastic procedures among facial plastic surgeons.

  18. Development of the Korean Facial Emotion Stimuli: Korea University Facial Expression Collection 2nd Edition

    Directory of Open Access Journals (Sweden)

    Sun-Min Kim

    2017-05-01

    Background: Developing valid emotional facial stimuli for specific ethnicities creates ample opportunities to investigate both the nature of emotional facial information processing in general and clinical populations as well as the underlying mechanisms of facial emotion processing within and across cultures. Given that most entries in emotional facial stimuli databases were developed with western samples, and given that very few of the eastern emotional facial stimuli sets were based strictly on Ekman's Facial Action Coding System, developing valid emotional facial stimuli from eastern samples remains a high priority. Aims: To develop and examine the psychometric properties of six basic emotional facial stimuli, recruiting professional Korean actors and actresses, based on Ekman's Facial Action Coding System for the Korea University Facial Expression Collection-Second Edition (KUFEC-II). Materials and Methods: Stimulus selection was done in two phases. First, researchers evaluated the clarity and intensity of each stimulus developed based on the Facial Action Coding System. Second, researchers selected a total of 399 stimuli from a total of 57 actors and actresses, which were then rated on accuracy, intensity, valence, and arousal by 75 independent raters. Conclusion: The hit rates between the targeted and rated expressions of the KUFEC-II were all above 80%, except for fear (50%) and disgust (63%). The KUFEC-II appears to be a valid emotional facial stimuli database, providing the largest set of emotional facial stimuli. The mean intensity score was 5.63 (out of 7), suggesting that the stimuli delivered the targeted emotions with great intensity. All positive expressions were rated as having a high positive valence, whereas all negative expressions were rated as having a high negative valence. The KUFEC-II is expected to be widely used in various psychological studies on emotional facial expression. KUFEC-II stimuli can be obtained through

  19. Microalbuminuria Represents a Feature of Advanced Renal Disease ...

    African Journals Online (AJOL)

    opsig

    2006-12-02


  20. Event-related theta synchronization predicts deficit in facial affect recognition in schizophrenia.

    Science.gov (United States)

    Csukly, Gábor; Stefanics, Gábor; Komlósi, Sarolta; Czigler, István; Czobor, Pál

    2014-02-01

    Growing evidence suggests that abnormalities in the synchronized oscillatory activity of neurons in schizophrenia may lead to impaired neural activation and temporal coding and thus lead to neurocognitive dysfunctions, such as deficits in facial affect recognition. To gain an insight into the neurobiological processes linked to facial affect recognition, we investigated both induced and evoked oscillatory activity by calculating the Event Related Spectral Perturbation (ERSP) and the Inter Trial Coherence (ITC) during facial affect recognition. Fearful and neutral faces as well as nonface patches were presented to 24 patients with schizophrenia and 24 matched healthy controls while EEG was recorded. The participants' task was to recognize facial expressions. Because previous findings with healthy controls showed that facial feature decoding was associated primarily with oscillatory activity in the theta band, we analyzed ERSP and ITC in this frequency band in the time interval of 140-200 ms, which corresponds to the N170 component. Event-related theta activity and phase-locking to facial expressions, but not to nonface patches, predicted emotion recognition performance in both controls and patients. Event-related changes in theta amplitude and phase-locking were found to be significantly weaker in patients compared with healthy controls, which is in line with previous investigations showing decreased neural synchronization in the low frequency bands in patients with schizophrenia. Neural synchrony is thought to underlie distributed information processing. Our results indicate a less effective functioning in the recognition process of facial features, which may contribute to a less effective social cognition in schizophrenia. PsycINFO Database Record (c) 2014 APA, all rights reserved.
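
    For readers unfamiliar with the two measures, a minimal numpy sketch of Event Related Spectral Perturbation (ERSP) and Inter Trial Coherence (ITC) at a single theta frequency is given below. The wavelet parameters, baseline window, and data are illustrative assumptions, not the study's analysis pipeline.

```python
import numpy as np

def morlet(freq, sfreq, n_cycles=5):
    """Complex Morlet wavelet at `freq` Hz, normalized to unit energy."""
    sigma_t = n_cycles / (2.0 * np.pi * freq)
    t = np.arange(-3 * sigma_t, 3 * sigma_t, 1.0 / sfreq)
    w = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2.0 * sigma_t**2))
    return w / np.sqrt((np.abs(w) ** 2).sum())

def theta_ersp_itc(epochs, sfreq, freq=5.0, baseline=slice(0, 50)):
    """epochs: (n_trials, n_times). Returns ERSP (dB vs. baseline) and ITC per time point."""
    w = morlet(freq, sfreq)
    analytic = np.array([np.convolve(ep, w, mode="same") for ep in epochs])
    power = np.abs(analytic) ** 2
    ersp = 10.0 * np.log10(power.mean(axis=0) / power[:, baseline].mean())
    itc = np.abs((analytic / np.abs(analytic)).mean(axis=0))  # magnitude of the mean phase vector
    return ersp, itc

# Simulated single-channel epochs at 500 Hz; a 140-200 ms window is roughly samples 70-100
# if the epoch starts at stimulus onset.
rng = np.random.default_rng(1)
epochs = rng.standard_normal((24, 300))
ersp, itc = theta_ersp_itc(epochs, sfreq=500.0)
print(ersp[70:100].mean(), itc[70:100].mean())
```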

  1. MR imaging of the intraparotid facial nerve

    International Nuclear Information System (INIS)

    Kurihara, Hiroaki; Iwasawa, Tae; Yoshida, Tetsuo; Furukawa, Masaki

    1996-01-01

    Using a 1.5T MR imaging system, seven normal volunteers and 6 patients with parotid tumors were studied and their intraparotid facial nerves were directly imaged. The findings were evaluated on T1-weighted axial, sagittal and oblique images. The facial nerve appeared relatively hypointense within the high-signal parotid parenchyma, and the main trunks of the facial nerves were observed directly in all the cases examined. Their main divisions were detected in all of the volunteers and in 5 of the 6 patients on the oblique images. The facial nerves follow variable courses, so the oblique scan planes were determined individually to depict each nerve's course directly. To verify our observations, the surgical findings of the facial nerve were compared with the MR imaging results. (author)

  2. Variant facial artery in the submandibular region.

    Science.gov (United States)

    Vadgaonkar, Rajanigandha; Rai, Rajalakshmi; Prabhu, Latha V; Bv, Murlimanju; Samapriya, Neha

    2012-07-01

    Facial artery has been considered to be the most important vascular pedicle in facial rejuvenation procedures and submandibular gland (SMG) resection. It usually arises from the external carotid artery and passes from the carotid to digastric triangle, deep to the posterior belly of digastric muscle, and lodges in a groove at the posterior end of the SMG. It then passes between SMG and the mandible to reach the face after winding around the base of the mandible. During a routine dissection, in a 62-year-old female cadaver, in Kasturba Medical College Mangalore, an unusual pattern in the cervical course of facial artery was revealed. The right facial artery was found to pierce the whole substance of the SMG before winding around the lower border of the mandible to enter the facial region. Awareness of existence of such a variant and its comparison to the normal anatomy will be useful to oral and maxillofacial surgeons.

  3. Facial Animations: Future Research Directions & Challenges

    Science.gov (United States)

    Alkawaz, Mohammed Hazim; Mohamad, Dzulkifli; Rehman, Amjad; Basori, Ahmad Hoirul

    2014-06-01

    Computer facial animation is now used across a wide range of fields, and the growth of computer games, films, and interactive multimedia has brought considerable human and social interest to its study. Authoring computer facial animation with complex and subtle expressions is challenging and fraught with problems; as a result, productions that rely on general-purpose animation techniques are often limited in both the quality and the quantity of facial animation they can deliver. Although computing power, facial understanding, software sophistication, and new face-centric methods continue to advance, many of the emerging techniques remain immature. This paper therefore surveys facial animation experts in order to define and categorize current and emerging techniques, the recent state of the field, the observed bottlenecks, and the developing methods. The paper further presents a real-time simulation model of human worry and howling, with a detailed discussion of the perception of astonishment, sorrow, annoyance, and panic.

  4. Platelet Preparations for Use in Facial Rejuvenation and Wound Healing: A Critical Review of Current Literature.

    Science.gov (United States)

    Sclafani, Anthony P; Azzi, James

    2015-08-01

    In facial plastic surgery, the potential for direct delivery of growth factors from platelet preparations has been of particular interest for use in facial rejuvenation, recovery after facial surgery, and wound healing. A literature search was conducted through PubMed for the terms PRP, PRFM, platelet-rich plasma, platelet-rich fibrin matrix, platelet preparations, platelet therapy, growth factors, platelet facial, platelet facial rejuvenation, platelet wound healing, platelet plastic surgery. Articles pertaining to the use of platelet preparations in facial surgery and wound healing in plastic surgery after 2001 were included. Thirteen in vitro studies showed use of platelet-rich plasma (PRP) and platelet-rich fibrin matrix (PRFM) had a significant effect on cellular activity. Twenty-four out of 28 animal studies exhibited favorable results with use of a platelet preparation, including five of six studies that showed enhanced fat graft survival with addition of a platelet preparation. Twenty-three case series and clinical trials were identified, only two of which showed no differences. Twenty-one reported favorable results with use of various platelet preparations. A total of 47 studies used PRP, four studies evaluated Leukocyte-rich PRP, and fourteen studies used PRFM. The vast majority of studies examined show a significant and measurable effect on cellular changes, wound healing, and facial esthetic outcomes with use of platelet preparations, both topical and injectable. One must also consider possible publication bias against null results that may have had an influence on the data that were available for review. However, the preponderance of studies suggests that platelet preparations might represent an as-of-yet untapped adjunct in facial plastic surgery.

  5. Preoperative embolization of facial angiomas

    International Nuclear Information System (INIS)

    Causmano, F.; Bruschi, G.; De Donatis, M.; Piazza, P.; Bassi, P.

    1988-01-01

    Preoperative embolization was performed on 27 patients with facial angiomas supplied by the external carotid branches. Sixteen were males and 11 females; 13 of these angiomas were high-flow arterio-venous (A-V), 14 were low-flow capillary malformations. Fourteen patients underwent surgical removal after preoperative embolization; in this group embolization was carried out with Spongel in 3 cases and with Lyodura in 11 cases. In 12 of these patients the last angiographic examination was performed 3-6 years later: angiography evidenced no recurrence in 8 cases (67%), while in 3 cases (25%) there was capillary residual angioma of negligible size. Treatment was unsuccessful in one patient only, due to the large recurrent A-V angioma. Thirteen patients underwent embolization only, which was carried out with Lyodura in 10 cases, and with Ivalon in 3 cases. On 12 of these patients the last angiographic study was performed 2-14 months later: there was recurrent A-V angioma in 5 patients (42%), who underwent a subsequent embolization; angiography evidenced no recurrence in the other 7 patients (58%). In both series, the best results were obtained in the patients with low-flow capillary angiomas. Embolization and subsequent surgical removal are the treatment of choice for facial angiomas; embolization alone is useful in the management of surgically inaccessible vascular malformations, and it can be the only treatment in patients with small low-flow angiomas when distal occlusion of the feeding vessel with Lyodura or Ivalon particles is performed.

  6. Orangutans modify facial displays depending on recipient attention

    Directory of Open Access Journals (Sweden)

    Bridget M. Waller

    2015-03-01

    Primate facial expressions are widely accepted as underpinned by reflexive emotional processes and not under voluntary control. In contrast, other modes of primate communication, especially gestures, are widely accepted as underpinned by intentional, goal-driven cognitive processes. One reason for this distinction is that production of primate gestures is often sensitive to the attentional state of the recipient, a phenomenon used as one of the key behavioural criteria for identifying intentionality in signal production. The reasoning is that modifying/producing a signal when a potential recipient is looking could demonstrate that the sender intends to communicate with them. Here, we show that the production of a primate facial expression can also be sensitive to the attention of the play partner. Using the orangutan (Pongo pygmaeus) Facial Action Coding System (OrangFACS), we demonstrate that facial movements are more intense and more complex when recipient attention is directed towards the sender. Therefore, production of the playface is not an automated response to play (or simply a play behaviour itself) and is instead produced flexibly depending on the context. If sensitivity to attentional stance is a good indicator of intentionality, we must also conclude that the orangutan playface is intentionally produced. However, a number of alternative, lower level interpretations for flexible production of signals in response to the attention of another are discussed. As intentionality is a key feature of human language, claims of intentional communication in related primate species are powerful drivers in language evolution debates, and thus caution in identifying intentionality is important.

  7. Signs of Facial Aging in Men in a Diverse, Multinational Study: Timing and Preventive Behaviors.

    Science.gov (United States)

    Rossi, Anthony M; Eviatar, Joseph; Green, Jeremy B; Anolik, Robert; Eidelman, Michael; Keaney, Terrence C; Narurkar, Vic; Jones, Derek; Kolodziejczyk, Julia; Drinkwater, Adrienne; Gallagher, Conor J

    2017-11-01

    Men are a growing patient population in aesthetic medicine and are increasingly seeking minimally invasive cosmetic procedures. To examine differences in the timing of facial aging and in the prevalence of preventive facial aging behaviors in men by race/ethnicity. Men aged 18 to 75 years in the United States, Canada, United Kingdom, and Australia rated their features using photonumeric rating scales for 10 facial aging characteristics. Impact of race/ethnicity (Caucasian, black, Asian, Hispanic) on severity of each feature was assessed. Subjects also reported the frequency of dermatologic facial product use. The study included 819 men. Glabellar lines, crow's feet lines, and nasolabial folds showed the greatest change with age. Caucasian men reported more severe signs of aging and earlier onset, by 10 to 20 years, compared with Asian, Hispanic, and, particularly, black men. In all racial/ethnic groups, most men did not regularly engage in basic, antiaging preventive behaviors, such as use of sunscreen. Findings from this study conducted in a globally diverse sample may guide clinical discussions with men about the prevention and treatment of signs of facial aging, to help men of all races/ethnicities achieve their desired aesthetic outcomes.

  8. Internal representations reveal cultural diversity in expectations of facial expressions of emotion.

    Science.gov (United States)

    Jack, Rachael E; Caldara, Roberto; Schyns, Philippe G

    2012-02-01

    Facial expressions have long been considered the "universal language of emotion." Yet consistent cultural differences in the recognition of facial expressions contradict such notions (e.g., R. E. Jack, C. Blais, C. Scheepers, P. G. Schyns, & R. Caldara, 2009). Rather, culture--as an intricate system of social concepts and beliefs--could generate different expectations (i.e., internal representations) of facial expression signals. To investigate, the authors used a powerful psychophysical technique (reverse correlation) to estimate the observer-specific internal representations of the 6 basic facial expressions of emotion (i.e., happy, surprise, fear, disgust, anger, and sad) in two culturally distinct groups (i.e., Western Caucasian [WC] and East Asian [EA]). Using complementary statistical image analyses, cultural specificity was directly revealed in these representations. Specifically, whereas WC internal representations predominantly featured the eyebrows and mouth, EA internal representations showed a preference for expressive information in the eye region. Closer inspection of the EA observer preference revealed a surprising feature: changes of gaze direction, shown primarily among the EA group. For the first time, it is revealed directly that culture can finely shape the internal representations of common facial expressions of emotion, challenging notions of a biologically hardwired "universal language of emotion."
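
    Reverse correlation, the psychophysical technique named here, derives a "classification image" by averaging the noise fields that drove a given response. The toy numpy sketch below simulates that logic with a synthetic observer; every name, size, and parameter is hypothetical rather than taken from the study.

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials, h, w = 2000, 32, 32

# Simulated observer whose internal template weights the eyebrow region.
template = np.zeros((h, w))
template[10:14, 8:24] = 1.0

base_face = np.zeros((h, w))                 # stand-in for a neutral base image
noise = rng.standard_normal((n_trials, h, w))
stimuli = base_face + noise                  # the noisy images a real observer would classify

# The simulated observer reports the target emotion when the noise matches the template.
responses = (noise * template).sum(axis=(1, 2)) > 0

# Classification image: mean noise on "target" trials minus mean noise on the remaining trials.
classification_image = noise[responses].mean(axis=0) - noise[~responses].mean(axis=0)
print(np.unravel_index(np.argmax(classification_image), classification_image.shape))
```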

  9. Internal versus external features in triggering the brain waveforms for conjunction and feature faces in recognition.

    Science.gov (United States)

    Nie, Aiqing; Jiang, Jingguo; Fu, Qiao

    2014-08-20

    Previous research has found that conjunction faces (whose internal features, e.g. eyes, nose, and mouth, and external features, e.g. hairstyle and ears, are from separate studied faces) and feature faces (partial features of these are studied) can produce higher false alarms than both old and new faces (i.e. those that are exactly the same as the studied faces and those that have not been previously presented) in recognition. The event-related potentials (ERPs) that relate to conjunction and feature faces at recognition, however, have not been described as yet; in addition, the contributions of different facial features toward ERPs have not been differentiated. To address these issues, the present study compared the ERPs elicited by old faces, conjunction faces (the internal and the external features were from two studied faces), old internal feature faces (whose internal features were studied), and old external feature faces (whose external features were studied) with those of new faces separately. The results showed that old faces not only elicited an early familiarity-related FN400, but a more anterior distributed late old/new effect that reflected recollection. Conjunction faces evoked similar late brain waveforms as old internal feature faces, but not to old external feature faces. These results suggest that, at recognition, old faces hold higher familiarity than compound faces in the profiles of ERPs and internal facial features are more crucial than external ones in triggering the brain waveforms that are characterized as reflecting the result of familiarity.

  10. Facial emotion identification in early-onset psychosis.

    Science.gov (United States)

    Barkl, Sophie J; Lah, Suncica; Starling, Jean; Hainsworth, Cassandra; Harris, Anthony W F; Williams, Leanne M

    2014-12-01

    Facial emotion identification (FEI) deficits are common in patients with chronic schizophrenia and are strongly related to impaired functioning. The objectives of this study were to determine whether FEI deficits are present and emotion specific in people experiencing early-onset psychosis (EOP), and related to current clinical symptoms and functioning. Patients with EOP (n=34, mean age=14.11, 53% female) and healthy controls (HC, n=42, mean age 13.80, 51% female) completed a task of FEI that measured accuracy, error pattern and response time. Relative to HC, patients with EOP (i) had lower accuracy for identifying facial expressions of emotions, especially fear, anger and disgust, (ii) were more likely to misattribute other emotional expressions as fear or disgust, and (iii) were slower at accurately identifying all facial expressions. FEI accuracy was not related to clinical symptoms or current functioning. Deficits in FEI (especially for fear, anger and disgust) are evident in EOP. Our findings suggest that while emotion identification deficits may reflect a trait susceptibility marker, functional deficits may represent a sequela of illness. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Extreme Facial Expressions Classification Based on Reality Parameters

    Science.gov (United States)

    Rahim, Mohd Shafry Mohd; Rad, Abdolvahab Ehsani; Rehman, Amjad; Altameem, Ayman

    2014-09-01

    Extreme expressions are emotional expressions triggered by particularly strong emotions; a typical example is an expression accompanied by tears. To reproduce such features, additional elements such as a fluid mechanism (a particle system) and physics-based techniques such as smoothed particle hydrodynamics (SPH) are introduced. The fusion of facial animation with SPH shows promising results. Accordingly, the proposed fluid-augmented facial animation technique is the core of this research, aimed at producing complex expressions such as laughing, smiling, crying (the emergence of tears), and sadness escalating into intense crying, as a classification of the extreme expressions that can appear on the human face.
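
    Smoothed particle hydrodynamics, the fluid technique the abstract couples with facial animation to render tears, boils down to estimating density from neighboring particles and integrating pressure and gravity forces. The following toy 2D sketch uses entirely illustrative constants and is not the paper's implementation.

```python
import numpy as np

# Toy 2D SPH fluid step (illustrative constants; stability is not carefully tuned).
H, MASS, STIFFNESS, GRAVITY, DT = 0.05, 1.0, 5.0, -9.81, 1e-3

def density(pos):
    """Poly6-kernel density estimate for every particle."""
    d2 = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(-1)
    w = np.where(d2 < H**2, (315.0 / (64.0 * np.pi * H**9)) * (H**2 - d2) ** 3, 0.0)
    return MASS * w.sum(axis=1)

def step(pos, vel, rho0):
    rho = density(pos)
    p = STIFFNESS * (rho - rho0)                       # simple equation of state
    d = pos[:, None, :] - pos[None, :, :]
    r = np.sqrt((d ** 2).sum(-1)) + 1e-12
    mag = np.where(r < H, -45.0 / (np.pi * H**6) * (H - r) ** 2, 0.0)  # spiky kernel gradient
    grad = mag[..., None] * d / r[..., None]
    f_press = -(MASS * ((p[:, None] + p[None, :]) / (2.0 * rho[None, :]))[..., None] * grad).sum(axis=1)
    acc = f_press / rho[:, None] + np.array([0.0, GRAVITY])
    vel = vel + DT * acc
    return pos + DT * vel, vel

rng = np.random.default_rng(3)
pos = rng.uniform(0.0, 0.1, (64, 2))       # a small blob of "tear" fluid particles
vel = np.zeros_like(pos)
rho0 = density(pos).mean()                 # rest density taken from the initial blob
for _ in range(100):
    pos, vel = step(pos, vel, rho0)
```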

  12. Multiple recurrent and de novo odontogenic keratocysts associated with oral-facial-digital syndrome

    NARCIS (Netherlands)

    Lindeboom, Jerome A. H.; Kroon, Frans H. M.; de Vires, Jan; van den Akker, Hans P.

    2003-01-01

    In 1954, Papillon-Leage and Psaume were the first to describe the clinical characteristics of oral-facial-digital syndrome (OFDS). On the basis of their clinical features and the inheritance pattern, 2 variants were initially distinguished, namely OFDS type I (Papillon-Leage and Psaume) and OFDS

  13. Reduced Reliance on Optimal Facial Information for Identity Recognition in Autism Spectrum Disorder

    Science.gov (United States)

    Leonard, Hayley C.; Annaz, Dagmara; Karmiloff-Smith, Annette; Johnson, Mark H.

    2013-01-01

    Previous research into face processing in autism spectrum disorder (ASD) has revealed atypical biases toward particular facial information during identity recognition. Specifically, a focus on features (or high spatial frequencies [HSFs]) has been reported for both face and nonface processing in ASD. The current study investigated the development…

  14. Understanding Legacy Features with Featureous

    DEFF Research Database (Denmark)

    Olszak, Andrzej; Jørgensen, Bo Nørregaard

    2011-01-01

    Java programs called Featureous that addresses this issue. Featureous allows a programmer to easily establish feature-code traceability links and to analyze their characteristics using a number of visualizations. Featureous is an extension to the NetBeans IDE, and can itself be extended by third...

  15. Perceived functional impact of abnormal facial appearance.

    Science.gov (United States)

    Rankin, Marlene; Borah, Gregory L

    2003-06-01

    Functional facial deformities are usually described as those that impair respiration, eating, hearing, or speech. Yet facial scars and cutaneous deformities have a significant negative effect on social functionality that has been poorly documented in the scientific literature. Insurance companies are declining payments for reconstructive surgical procedures for facial deformities caused by congenital disabilities and after cancer or trauma operations that do not affect mechanical facial activity. The purpose of this study was to establish a large, sample-based evaluation of the perceived social functioning, interpersonal characteristics, and employability indices for a range of facial appearances (normal and abnormal). Adult volunteer evaluators (n = 210) provided their subjective perceptions based on facial physical appearance, and an analysis of the consequences of facial deformity on parameters of preferential treatment was performed. A two-group comparative research design rated the differences among 10 examples of digitally altered facial photographs of actual patients among various age and ethnic groups with "normal" and "abnormal" congenital deformities or posttrauma scars. Photographs of adult patients with observable congenital and posttraumatic deformities (abnormal) were digitally retouched to eliminate the stigmatic defects (normal). The normal and abnormal photographs of identical patients were evaluated by the large sample study group on nine parameters of social functioning, such as honesty, employability, attractiveness, and effectiveness, using a visual analogue rating scale. Patients with abnormal facial characteristics were rated as significantly less honest (p = 0.007), less employable (p = 0.001), less trustworthy (p = 0.01), less optimistic (p = 0.001), less effective (p = 0.02), less capable (p = 0.002), less intelligent (p = 0.03), less popular (p = 0.001), and less attractive (p = 0.001) than were the same patients with normal facial

  16. [Peripheral facial nerve lesion induced long-term dendritic retraction in pyramidal cortico-facial neurons].

    Science.gov (United States)

    Urrego, Diana; Múnera, Alejandro; Troncoso, Julieta

    2011-01-01

    Little evidence is available concerning the morphological modifications of motor cortex neurons associated with peripheral nerve injuries, and the consequences of those injuries on post lesion functional recovery. Dendritic branching of cortico-facial neurons was characterized with respect to the effects of irreversible facial nerve injury. Twenty-four adult male rats were distributed into four groups: sham (no lesion surgery), and dendritic assessment at 1, 3 and 5 weeks post surgery. Eighteen lesion animals underwent surgical transection of the mandibular and buccal branches of the facial nerve. Dendritic branching was examined by contralateral primary motor cortex slices stained with the Golgi-Cox technique. Layer V pyramidal (cortico-facial) neurons from sham and injured animals were reconstructed and their dendritic branching was compared using Sholl analysis. Animals with facial nerve lesions displayed persistent vibrissal paralysis throughout the five week observation period. Compared with control animal neurons, cortico-facial pyramidal neurons of surgically injured animals displayed shrinkage of their dendritic branches at statistically significant levels. This shrinkage persisted for at least five weeks after facial nerve injury. Irreversible facial motoneuron axonal damage induced persistent dendritic arborization shrinkage in contralateral cortico-facial neurons. This morphological reorganization may be the physiological basis of functional sequelae observed in peripheral facial palsy patients.
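
    Sholl analysis, used here to compare dendritic branching, counts how often traced dendrites cross concentric circles centred on the soma. A hedged numpy sketch with simulated segment data (not the authors' reconstruction pipeline) is shown below; it approximates each crossing by checking whether a circle radius falls between a segment's endpoint distances.

```python
import numpy as np

def sholl_intersections(segments, soma, radii):
    """Count crossings of traced dendrite segments with concentric circles.

    segments: (n, 2, 2) array of 2D line segments (start, end) from a reconstruction.
    soma: (2,) centre of the analysis. radii: 1D array of circle radii (micrometres).
    """
    d_start = np.linalg.norm(segments[:, 0] - soma, axis=1)
    d_end = np.linalg.norm(segments[:, 1] - soma, axis=1)
    lo, hi = np.minimum(d_start, d_end), np.maximum(d_start, d_end)
    # A segment is counted as crossing a circle of radius r when r lies between its endpoint distances.
    return np.array([np.sum((lo <= r) & (r < hi)) for r in radii])

# Hypothetical reconstruction: radial segments around a soma at the origin.
rng = np.random.default_rng(4)
angles = rng.uniform(0, 2 * np.pi, 30)
starts = np.stack([20 * np.cos(angles), 20 * np.sin(angles)], axis=1)
ends = starts * rng.uniform(2.0, 6.0, (30, 1))
segments = np.stack([starts, ends], axis=1)
print(sholl_intersections(segments, soma=np.zeros(2), radii=np.arange(20, 140, 20)))
```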

  17. [Evidence of facial palsy and facial malformations in pottery from Peruvian Moche and Lambayeque pre-Columbian cultures].

    Science.gov (United States)

    Carod-Artal, F J; Vázquez Cabrera, C B

    2006-01-01

    Moche (100-700 AD) and Lambayeque-Sicán (750-1100 AD) are pre-Columbian cultures of the Regional States Period that developed in northern Peru. Information about daily life, religion, and medicine has been obtained through the study of Moche ceramics found in the tombs of lords and priests, in pyramids, and in temples. To analyze archeological evidence of Moche medicine and neurological diseases through ceramics. Representations of disease in Moche and Lambayeque iconography, together with the Moche pottery collections exhibited in the Casinelli Museum in Trujillo and the Brüning National Archeological Museum in Lambayeque, Peru, were studied. The most representative cases were analyzed and photographed, with prior authorization from the authorities and curators of the museums. The following pathologies were observed in the ceramic collections: peripheral facial palsy, facial malformations such as cleft lip, hemifacial spasm, leg and arm amputations, scoliosis, and conjoined (Siamese) twins. Male and female Moche doctors were also depicted in the ceramics, treating patients in ritual ceremonies. The main pathologies observed in Moche and Lambayeque pottery are facial palsy and cleft lip. These are among the earliest records of these pathologies in pre-Columbian cultures of South America.

  18. Reproducibility of the dynamics of facial expressions in unilateral facial palsy.

    Science.gov (United States)

    Alagha, M A; Ju, X; Morley, S; Ayoub, A

    2018-02-01

    The aim of this study was to assess the reproducibility of non-verbal facial expressions in unilateral facial paralysis using dynamic four-dimensional (4D) imaging. The Di4D system was used to record five facial expressions of 20 adult patients. The system captured 60 three-dimensional (3D) images per second; each facial expression took 3-4 seconds and was recorded in real time. Thus a set of 180 3D facial images was generated for each expression. The procedure was repeated after 30 min to assess the reproducibility of the expressions. A mathematical facial mesh consisting of thousands of quasi-point 'vertices' was conformed to the face in order to determine the morphological characteristics in a comprehensive manner. The vertices were tracked throughout the sequence of the 180 images. Five key 3D facial frames from each sequence of images were analyzed. Comparisons were made between the first and second capture of each facial expression to assess the reproducibility of facial movements. Corresponding images were aligned using partial Procrustes analysis, and the root mean square distance between them was calculated and analyzed statistically (paired Student t-test, P < 0.05). Facial expressions of lip purse, cheek puff, and raising of the eyebrows were reproducible. Facial expressions of maximum smile and forceful eye closure were not reproducible. The limited coordination of various groups of facial muscles contributed to the lack of reproducibility of these facial expressions. 4D imaging is a useful clinical tool for the assessment of facial expressions. Copyright © 2017 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
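
    The comparison step described (partial Procrustes alignment of corresponding 3D captures followed by a root-mean-square distance) can be sketched with a Kabsch-style SVD alignment. The code below is illustrative, with simulated vertex data rather than Di4D output.

```python
import numpy as np

def partial_procrustes_rms(A, B):
    """Align B to A with rotation + translation only (no scaling) and return the RMS distance.

    A, B: (n_vertices, 3) corresponding vertex coordinates from two captures.
    """
    cA, cB = A.mean(axis=0), B.mean(axis=0)
    A0, B0 = A - cA, B - cB
    U, _, Vt = np.linalg.svd(B0.T @ A0)     # cross-covariance SVD (Kabsch)
    R = U @ Vt
    if np.linalg.det(R) < 0:                # guard against reflections
        U[:, -1] *= -1
        R = U @ Vt
    B_aligned = B0 @ R + cA
    return np.sqrt(((A - B_aligned) ** 2).sum(axis=1).mean())

rng = np.random.default_rng(5)
first_capture = rng.standard_normal((5000, 3))
second_capture = first_capture + 0.01 * rng.standard_normal((5000, 3))
print(partial_procrustes_rms(first_capture, second_capture))
```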

  19. Shadows alter facial expressions of Noh masks.

    Directory of Open Access Journals (Sweden)

    Nobuyuki Kawai

    BACKGROUND: A Noh mask, worn by expert actors during performances of the traditional Japanese Noh drama, conveys various emotional expressions despite its fixed physical properties. How does the mask change its expressions? Shadows change subtly during the actual Noh drama, which plays a key role in creating its elusive artistic enchantment. We here describe evidence from two experiments regarding how the attached shadows of Noh masks influence observers' recognition of the emotional expressions. METHODOLOGY/PRINCIPAL FINDINGS: In Experiment 1, neutral-faced Noh masks bearing the attached shadows of the happy/sad masks were recognized as having happy/sad expressions, respectively. This was true for all four types of masks, each of which represented a character differing in sex and age, even though the original characteristics of the masks also greatly influenced the evaluation of emotions. Experiment 2 further revealed that frontal Noh mask images with shadows of upward/downward tilted masks were evaluated as sad/happy, respectively. This was consistent with outcomes from preceding studies using actually tilted Noh mask images. CONCLUSIONS/SIGNIFICANCE: Results from the two experiments concur that purely manipulating the attached shadows of the different types of Noh masks significantly alters emotion recognition. These findings are in line with the mysterious facial expressions observed in Western paintings, such as the elusive qualities of Mona Lisa's smile. They also agree with the aesthetic principle of traditional Japanese art, "yugen" (profound grace and subtlety), which highly appreciates subtle emotional expressions in the darkness.

  20. The identification of unfolding facial expressions.

    Science.gov (United States)

    Fiorentini, Chiara; Schmidt, Susanna; Viviani, Paolo

    2012-01-01

    We asked whether the identification of emotional facial expressions (FEs) involves the simultaneous perception of the facial configuration or the detection of emotion-specific diagnostic cues. We recorded at high speed (500 frames s-1) the unfolding of the FE in five actors, each expressing six emotions (anger, surprise, happiness, disgust, fear, sadness). Recordings were coded every 10 frames (20 ms of real time) with the Facial Action Coding System (FACS, Ekman et al 2002, Salt Lake City, UT: Research Nexus eBook) to identify the facial actions contributing to each expression, and their intensity changes over time. Recordings were shown in slow motion (1/20 of recording speed) to one hundred observers in a forced-choice identification task. Participants were asked to identify the emotion during the presentation as soon as they felt confident to do so. Responses were recorded along with the associated response times (RTs). The RT probability density functions for both correct and incorrect responses were correlated with the facial activity during the presentation. There were systematic correlations between facial activities, response probabilities, and RT peaks, and significant differences in RT distributions for correct and incorrect answers. The results show that a reliable response is possible long before the full FE configuration is reached. This suggests that identification is reached by integrating in time individual diagnostic facial actions, and does not require perceiving the full apex configuration.
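
    The analysis correlates response-time probability densities with the time course of facial action intensity. A toy scipy sketch of that comparison, with simulated action-unit and response-time data, is given below.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
frame_times = np.arange(0, 3.0, 0.02)   # one coding point every 20 ms of real time
# One simulated action-unit intensity trace ramping up over the unfolding expression.
au_intensity = np.clip(np.cumsum(rng.normal(0.02, 0.05, frame_times.size)), 0, None)

# Simulated response times of 100 observers (seconds from expression onset).
rts = rng.normal(1.8, 0.4, 100)

# Estimate the RT probability density on the same time grid and correlate it with the AU trace.
kde = stats.gaussian_kde(rts)
rt_density = kde(frame_times)
r, p = stats.pearsonr(rt_density, au_intensity)
print(round(r, 2), round(p, 3))
```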