WorldWideScience

Sample records for based facial feature

  1. Facial Feature Extraction Based on Wavelet Transform

    Science.gov (United States)

    Hung, Nguyen Viet

    Facial feature extraction is one of the most important processes in face recognition, expression recognition, and face detection. Its aims include locating the eyes and estimating the shapes of the eyes, eyebrows, mouth, head boundary, face boundary, chin, and so on. The purpose of this paper is to develop an automatic facial feature extraction system that identifies the eye locations, the detailed shapes of the eyes and mouth, the chin, and the inner boundary from facial images. This system not only extracts the location of each eye, but also estimates four key points per eye, which allows the eye shape to be rebuilt. To model the mouth shape, mouth extraction yields the mouth location together with the two mouth corners and the top and bottom lips. From the inner boundary and the chin, the face boundary is obtained. Based on wavelet features, we can reduce the noise in the input image and detect edge information. To extract the eyes, mouth, and inner boundary, we combine wavelet features and facial characteristics in algorithms that find the face midpoint, the eye coordinates and four key eye points, the mouth coordinates and four key mouth points, the chin coordinate, and finally the inner boundary. The developed system is tested on the Yale face database and on Pedagogy students' faces.
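
The wavelet stage described above (noise reduction plus edge detection) can be sketched with a one-level 2D Haar transform. This is an illustrative stand-in, not the authors' exact pipeline; the function names, the single decomposition level, and the threshold value are assumptions of this sketch:

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar decomposition into approximation (LL) and
    detail (LH, HL, HH) sub-bands. Assumes even image dimensions."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row-pair averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row-pair differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def edge_map(img, thresh=0.1):
    """Combine the detail sub-bands into an edge-strength map; small
    coefficients (assumed to be noise) are zeroed before combining."""
    _, LH, HL, HH = haar2d(img.astype(float))
    mag = np.sqrt(LH**2 + HL**2 + HH**2)
    mag[mag < thresh] = 0.0
    return mag
```

On a face image, peaks of `edge_map` would mark candidate component boundaries (eyes, mouth, chin) at half resolution.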

  2. Facial symmetry assessment based on geometric features

    Science.gov (United States)

    Xu, Guoping; Cao, Hanqiang

    2015-12-01

    Face image symmetry is an important factor affecting the accuracy of automatic face recognition, and selecting highly symmetrical face images can improve recognition performance. In this paper, we propose a novel facial symmetry evaluation scheme based on geometric features, including the centroid, singular values, the in-plane rotation angle of the face, and the structural similarity index (SSIM). First, we calculate the value of each of the four features according to its corresponding formula. Then, we use a fuzzy logic algorithm to integrate the four values into a single number that represents the facial symmetry. The proposed method is efficient and can adapt to different recognition methods. Experimental results demonstrate its effectiveness in improving the robustness of face detection and recognition.
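
The SSIM-based ingredient of such a symmetry score might look as follows. The single-window SSIM (instead of the usual sliding-window version) and the `mirror_symmetry` helper are simplifications assumed here, not the paper's exact formulas:

```python
import numpy as np

def global_ssim(x, y, c1=0.01**2, c2=0.03**2):
    """Single-window SSIM between two images with values in [0, 1];
    a simplification of the standard sliding-window SSIM."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2*mx*my + c1) * (2*cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

def mirror_symmetry(face):
    """Symmetry score of a face image: SSIM between the image and its
    horizontal mirror. A score of 1.0 means perfectly symmetric."""
    return global_ssim(face, face[:, ::-1])
```

In the paper's scheme, this score would be one of four inputs to the fuzzy-logic combiner.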

  3. Automatic facial expression recognition based on features extracted from tracking of facial landmarks

    Science.gov (United States)

    Ghimire, Deepak; Lee, Joonwhoan

    2014-01-01

    In this paper, we present a fully automatic facial expression recognition system using support vector machines, with geometric features extracted from the tracking of facial landmarks. Facial landmark initialization and tracking are performed using an elastic bunch graph matching algorithm. Expression recognition is based on features extracted from the tracking not only of individual landmarks but also of pairs of landmarks. The recognition accuracy on the Extended Cohn-Kanade (CK+) database shows that the proposed feature set produces better results, because it utilizes time-varying graph information as well as the motion of individual facial landmarks.

  4. Facial Features for Template Matching Based Face Recognition

    Directory of Open Access Journals (Sweden)

    Chai T. Yuen

    2009-01-01

    Problem statement: Template matching has been a conventional method for object detection, especially facial feature detection, since the early stages of face recognition research. The presence of a moustache or beard has long affected the performance of feature detection and face recognition systems. Approach: The proposed algorithm aimed to reduce the effect of beards and moustaches on facial feature detection and to introduce facial-feature-based template matching as the classification method. An automated face recognition algorithm based on the detected facial features, the irises and mouth, was developed. First, the face region was located using skin color information. Next, the algorithm computed costs for each pair of iris candidates drawn from intensity valleys as references for iris selection. For mouth detection, a color space method was used to locate the lip region, image processing methods were used to eliminate unwanted noise, and a corner detection technique refined the exact location of the mouth. Finally, template matching was used to classify faces based on the extracted features. Results: The proposed method showed a better feature detection rate (iris = 93.06%, mouth = 95.83%) than the conventional method. Template matching achieved a recognition rate of 86.11% with acceptable processing time (0.36 sec). Conclusion: The results indicate that eliminating the moustache and beard did not degrade facial feature detection, and the proposed feature-based template matching significantly improved the processing time of this face recognition method.
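
Template matching for the final classification step is commonly implemented with zero-mean normalized cross-correlation. A minimal sketch under the assumption that probe and template patches have the same size (the paper's actual templates and cost functions are not reproduced here):

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation between two same-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def classify(probe, templates):
    """Return the label of the stored template most correlated with the probe."""
    return max(templates, key=lambda label: ncc(probe, templates[label]))
```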

  5. Frame-Based Facial Expression Recognition Using Geometrical Features

    Directory of Open Access Journals (Sweden)

    Anwar Saeed

    2014-01-01

    To make human-computer interaction (HCI) as good as human-human interaction, an efficient approach to human emotion recognition is required. These emotions can be fused from several modalities such as facial expression, hand gesture, acoustic data, and biophysiological data. In this paper, we address frame-based recognition of the universal human facial expressions (happiness, surprise, anger, disgust, fear, and sadness) with the help of several geometrical features. Unlike many other geometry-based approaches, the frame-based method does not rely on prior knowledge of a person-specific neutral expression; such knowledge requires human intervention and is not available in real scenarios. Additionally, we provide a method to investigate the performance of geometry-based approaches under various facial point localization errors. From an evaluation on two public benchmark datasets, we found that with only eight facial points we can achieve a state-of-the-art recognition rate, whereas the competing state-of-the-art geometry-based approach exploits features derived from 68 facial points and requires prior knowledge of the person-specific neutral expression. The expression recognition rate using geometrical features is adversely affected by errors in facial point localization, especially for expressions with subtle facial deformations.

  6. Frame-Based Facial Expression Recognition Using Geometrical Features

    OpenAIRE

    Anwar Saeed; Ayoub Al-Hamadi; Robert Niese; Moftah Elzobi

    2014-01-01

    To improve the human-computer interaction (HCI) to be as good as human-human interaction, building an efficient approach for human emotion recognition is required. These emotions could be fused from several modalities such as facial expression, hand gesture, acoustic data, and biophysiological data. In this paper, we address the frame-based perception of the universal human facial expressions (happiness, surprise, anger, disgust, fear, and sadness), with the help of several geometrical featur...

  7. Sequential Clustering based Facial Feature Extraction Method for Automatic Creation of Facial Models from Orthogonal Views

    CERN Document Server

    Ghahari, Alireza

    2009-01-01

    Multiview 3D face modeling has attracted increasing attention recently and has become one of the potential avenues for future video systems. We aim at more reliable and robust automatic feature extraction and natural 3D feature construction from 2D features detected on a pair of frontal- and profile-view face images. We propose several heuristic algorithms to minimize possible errors introduced by the prevailing non-perfect orthogonality and non-coherent luminance of the two views. In our approach, we first extract the 2D features that are visible to both cameras in both views. Then, we estimate the coordinates of the features in the hidden profile view based on the visible features extracted in the two orthogonal views. Finally, based on the coordinates of the extracted features, we deform a 3D generic model to perform the desired 3D clone modeling. The present study demonstrates the suitability of the resulting facial models for practical applications such as face recognition and facial animation.

  8. Facial Feature Extraction Method Based on Coefficients of Variances

    Institute of Scientific and Technical Information of China (English)

    Feng-Xi Song; David Zhang; Cai-Kou Chen; Jing-Yu Yang

    2007-01-01

    Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) are two popular feature extraction techniques in the statistical pattern recognition field. Due to the small sample size problem, LDA cannot be directly applied to appearance-based face recognition tasks, and many LDA-based facial feature extraction techniques have consequently been proposed to deal with this problem. The Nullspace Method is one of the most effective among them: it tries to find a set of discriminant vectors that maximize the between-class scatter in the null space of the within-class scatter matrix. Calculating its discriminant vectors involves a singular value decomposition of a high-dimensional matrix, which is generally memory- and time-consuming. Borrowing the key idea of the Nullspace Method and the concept of the coefficient of variance from statistical analysis, we present a novel facial feature extraction method, Discriminant based on Coefficient of Variance (DCV), in this paper. Experimental results on the FERET and AR face image databases demonstrate that DCV is a promising technique in comparison with Eigenfaces, the Nullspace Method, and other state-of-the-art facial feature extraction methods.
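
As an illustration of the coefficient-of-variance idea only (the authors' DCV also borrows the null-space machinery, which is omitted here), features can be scored and ranked by their relative spread:

```python
import numpy as np

def coefficient_of_variance(X):
    """Per-feature coefficient of variance sigma/mu for a data matrix X
    of shape (samples, features)."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    # Guard against division by (near-)zero means.
    return sigma / np.where(np.abs(mu) > 1e-12, mu, 1e-12)

def rank_features(X, k):
    """Indices of the k features with the largest absolute coefficient
    of variance, i.e. the most relative variability."""
    cv = np.abs(coefficient_of_variance(X))
    return np.argsort(cv)[::-1][:k]
```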

  9. Novel Facial Features Segmentation Algorithm

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    An efficient algorithm for facial feature extraction is proposed. The facial features we segment are the two eyes, the nose, and the mouth. The algorithm is based on an improved Gabor wavelet edge detector, a morphological approach to detect the face region and facial feature regions, and an improved T-shape face mask to locate the exact positions of the facial features. The experimental results show that the proposed method is robust to facial expression and illumination changes and remains effective when the subject wears glasses.

  10. Recognition of facial expressions based on salient geometric features and support vector machines

    OpenAIRE

    Ghimire, Deepak; Lee, Joonwhoan; Li, Ze-Nian; Jeong, Sunghwan

    2016-01-01

    Facial expressions convey nonverbal cues which play an important role in interpersonal relations, and are widely used in behavior interpretation of emotions, cognitive science, and social interactions. In this paper we analyze different ways of representing geometric features and present a fully automatic facial expression recognition (FER) system using salient geometric features. In the geometric feature-based FER approach, the first important step is to initialize and track a dense set of facial p...

  11. The relative salience of facial features when differentiating faces based on an interference paradigm

    OpenAIRE

    Ruiz-Soler, Marcos; Salvador Beltrán, Francesc

    2012-01-01

    Research on face recognition and social judgment usually addresses the manipulation of facial features (eyes, nose, mouth, etc.). Using a procedure based on a Stroop-like task, Montepare and Opeyo (J Nonverbal Behav 26(1):43-59, 2002) established a hierarchy of the relative salience of cues based on facial attributes when differentiating faces. Using the same perceptual interference task, we established a hierarchy of facial features. Twenty-three participants (13 men and 10 women) volunteere...

  12. Model Based Analysis of Face Images for Facial Feature Extraction

    Science.gov (United States)

    Riaz, Zahid; Mayer, Christoph; Beetz, Michael; Radig, Bernd

    This paper describes a comprehensive approach to extracting a common feature set from image sequences. We use simple features that are easily extracted from a 3D wireframe model and efficiently used for different applications on a benchmark database. The versatility of the features is tested on facial expression recognition, face recognition, and gender classification. We experiment with different combinations of the features and find reasonable results with a combined-features approach that contains structural, textural, and temporal variations. The idea is to fit a model to human face images and extract shape and texture information. We parametrize this extracted information from the image sequences using the active appearance model (AAM) approach. We further compute temporal parameters using optical flow to capture local feature variations. Finally, we combine these parameters to form a feature vector for all the images in our database. These features are then classified with a binary decision tree (BDT) and a Bayesian network (BN). We evaluated our results on image sequences of the Cohn-Kanade Facial Expression Database (CKFED). The proposed system produced very promising recognition rates for our applications with the same set of features and classifiers. The system is also real-time capable and automatic.

  13. A spatiotemporal feature-based approach for facial expression recognition from depth video

    Science.gov (United States)

    Uddin, Md. Zia

    2015-07-01

    In this paper, a novel spatiotemporal feature-based method is proposed to recognize facial expressions from depth video. Independent Component Analysis (ICA) spatial features of the depth faces of facial expressions are first augmented with optical flow motion features. Then, the augmented features are enhanced by Fisher Linear Discriminant Analysis (FLDA) to make them robust. The features are then modeled with Hidden Markov Models (HMMs) representing the different facial expressions, which are later used to recognize the appropriate expression in a test depth video. The experimental results show superior performance of the proposed approach over conventional methods.

  14. Facial Composite System Using Real Facial Features

    Directory of Open Access Journals (Sweden)

    Duchovičová Soňa

    2014-12-01

    Facial feature point identification plays an important role in many facial image applications, such as face detection, face recognition, and facial expression classification. This paper describes the early stages of research on evolving a facial composite, primarily the main steps of face detection and facial feature extraction. Technological issues are identified and possible strategies to solve some of the problems are proposed.

  15. Facial Composite System Using Real Facial Features

    OpenAIRE

    Duchovičová Soňa; Zahradníková Barbora; Schreiber Peter

    2014-01-01

    Facial feature point identification plays an important role in many facial image applications, such as face detection, face recognition, and facial expression classification. This paper describes the early stages of research on evolving a facial composite, primarily the main steps of face detection and facial feature extraction. Technological issues are identified and possible strategies to solve some of the problems are proposed.

  16. Live facial feature extraction

    Institute of Scientific and Technical Information of China (English)

    ZHAO JieYu

    2008-01-01

    Precise facial feature extraction is essential to high-level face recognition and expression analysis. This paper presents a novel method for real-time geometric facial feature extraction from live video. The input image is viewed as a weighted graph. The segmentation of the pixels corresponding to the edges of the facial components of the mouth, eyes, brows, and nose is implemented by means of random walks on the weighted graph. The graph has an 8-connected lattice structure, and the weight value associated with each edge reflects the likelihood that a random walker will cross that edge. The random walks simulate an anisotropic diffusion process that filters out the noise while preserving the facial expression pixels. The seeds for the segmentation are obtained from a color and motion detector. The segmented facial pixels are represented with linked lists in the original geometric form and grouped into different parts corresponding to facial components. For the convenience of implementing high-level vision, the geometric description of facial component pixels is further decomposed into shape and registration information. Shape is defined as the geometric information that is invariant under the registration transformation, such as translation, rotation, and isotropic scale. Statistical shape analysis is carried out to capture global facial features, where the Procrustes shape distance measure is adopted. A Bayesian approach is used to incorporate high-level prior knowledge of face structure. Experimental results show that the proposed method is capable of real-time extraction of precise geometric facial features from live video. The feature extraction is robust against illumination changes, scale variation, head rotations, and hand interference.

  17. Learning representative features for facial images based on a modified principal component analysis

    Science.gov (United States)

    Averkin, Anton; Potapov, Alexey

    2013-05-01

    The paper is devoted to facial image analysis and particularly deals with the problem of automatic evaluation of the attractiveness of human faces. We propose a new approach for the automatic construction of a feature space based on a modified principal component analysis. The input data sets for the algorithm are learning sets of facial images rated by one person. The proposed approach allows one to extract features of an individual's subjective perception of face beauty and to predict attractiveness values for new facial images that were not included in the learning set. The Pearson correlation coefficient between the values predicted by our method for new facial images and the personal attractiveness ratings equals 0.89. This means that the proposed approach is promising and can be used for predicting subjective face attractiveness values in real facial image analysis systems.
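
The principal-component step such an approach builds on can be sketched via the singular value decomposition; this is the standard PCA recipe, not the authors' specific modification:

```python
import numpy as np

def pca_fit(X, k):
    """Top-k principal components of X (samples x features) via SVD of
    the mean-centered data matrix."""
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]          # components are rows of Vt

def pca_project(X, mean, components):
    """Project samples onto the learned component subspace."""
    return (X - mean) @ components.T
```

A learned attractiveness predictor would then operate on the projected coordinates rather than the raw pixels.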

  18. Facial expression recognition based on fused Feature of PCA and LDP

    Science.gov (United States)

    Yi, Zhang; Mao, Hou-lin; Luo, Yuan

    2014-11-01

    Facial expression recognition is an important part of the study of man-machine interaction. Principal component analysis (PCA) is an extraction method based on statistical features drawn from the global grayscale features of the whole image, but global grayscale features are sensitive to the environment. In order to recognize facial expressions accurately, a method fusing principal component analysis and the local directional pattern (LDP) is introduced in this paper. First, PCA extracts the global features of the whole grayscale image; LDP extracts the local grayscale texture features of the mouth and eye regions, which contribute most to facial expression recognition, to complement the global grayscale features of PCA. We then adopt a Support Vector Machine (SVM) classifier for expression classification. Experimental results demonstrate that this method can classify different expressions more effectively and achieves a higher recognition rate than the traditional method.
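
The LDP descriptor mentioned above assigns each pixel an 8-bit code derived from its Kirsch edge-mask responses. A minimal sketch for a single 3x3 patch follows; the mask-rotation order and the choice of the `k = 3` strongest directions are conventional assumptions here, not taken from this paper:

```python
import numpy as np

def rotate45(m):
    """Rotate the 8 border cells of a 3x3 mask one step (45 degrees)."""
    idx = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    vals = [m[i, j] for i, j in idx]
    vals = vals[-1:] + vals[:-1]
    out = m.copy()
    for (i, j), v in zip(idx, vals):
        out[i, j] = v
    return out

def kirsch_masks():
    """The 8 Kirsch directional masks, generated by rotating one mask."""
    masks, cur = [], np.array([[-3., -3., 5.], [-3., 0., 5.], [-3., -3., 5.]])
    for _ in range(8):
        masks.append(cur)
        cur = rotate45(cur)
    return masks

def ldp_code(patch, k=3):
    """LDP code of a 3x3 patch: set a bit for each of the k strongest
    absolute Kirsch responses, yielding an 8-bit texture code."""
    responses = np.array([abs(float((m * patch).sum())) for m in kirsch_masks()])
    code = 0
    for i in np.argsort(responses)[-k:]:
        code |= 1 << int(i)
    return code
```

A histogram of these codes over the mouth and eye regions would form the local feature vector that is fused with the PCA features.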

  19. Feature Extraction for Facial Expression Recognition based on Hybrid Face Regions

    Directory of Open Access Journals (Sweden)

    LAJEVARDI, S.M.

    2009-10-01

    Facial expression recognition has numerous applications, including psychological research, improved human-computer interaction, and sign language translation. A novel facial expression recognition system based on hybrid face regions (HFR) is investigated. The expression recognition system is fully automatic and consists of the following modules: face detection, feature extraction, optimal feature selection, and classification. The features are extracted from both the whole face image and face regions (eyes and mouth) using log-Gabor filters. Then, the most discriminative features are selected based on a mutual information criterion. The system can automatically recognize six expressions: anger, disgust, fear, happiness, sadness, and surprise. The selected features are classified using the Naive Bayesian (NB) classifier. The proposed method has been extensively assessed using the Cohn-Kanade and JAFFE databases. The experiments highlight the efficiency of the proposed HFR method in enhancing the classification rate.
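
Mutual-information-based feature selection of the kind described can be sketched with a simple histogram estimator; the bin count and the helper names are assumptions of this illustration, not the paper's exact criterion:

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram estimate of the mutual information between a continuous
    feature x and discrete class labels y."""
    pxy, _, _ = np.histogram2d(x, y, bins=(bins, len(set(y))))
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                       # avoid log(0) on empty cells
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def select_features(X, y, k):
    """Indices of the k features sharing the most information with y."""
    mi = np.array([mutual_information(X[:, j], y) for j in range(X.shape[1])])
    return np.argsort(mi)[::-1][:k]
```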

  20. A Classifier Model based on the Features Quantitative Analysis for Facial Expression Recognition

    Directory of Open Access Journals (Sweden)

    Amir Jamshidnezhad

    2011-01-01

    In recent decades, computer technology has developed considerably for use in intelligent classification systems, and the development of HCI systems depends heavily on accurate understanding of emotions. However, facial expressions are difficult to classify with mathematical models because of their natural variability. In this paper, quantitative analysis is used to find the most effective feature movements between the selected facial feature points. The features are therefore extracted not only on the basis of psychological studies, but also by quantitative methods, to raise the accuracy of recognition. In this model, fuzzy logic and a genetic algorithm are used to classify facial expressions; the genetic algorithm is a distinctive attribute of the proposed model, used for tuning the membership functions and increasing the accuracy.

  1. Multi-Cue-Based Face and Facial Feature Detection on Video Segments

    Institute of Scientific and Technical Information of China (English)

    PENG ZhenYun (彭振云); AI HaiZhou (艾海舟); HONG Wei (洪微); LIANG LuHong (梁路宏); XU GuangYou (徐光祐)

    2003-01-01

    An approach is presented to detect faces and facial features on a video segment based on multiple cues, including gray-level distribution, color, motion, templates, algebraic features and so on. Faces are first detected across the frames by using color segmentation, template matching and an artificial neural network. A PCA-based (Principal Component Analysis) feature detector for still images is then used to detect facial features on each single frame until the resulting features of three adjacent frames, named base frames, are consistent with each other. The features of frames neighboring the base frames are first detected by the still-image feature detector, then verified and corrected according to the smoothness constraint and the planar surface motion constraint. Experiments have been performed on video segments captured under different environments, and the presented method proves to be robust and accurate over variable poses, ages and illumination conditions.

  2. Simultaneous facial feature tracking and facial expression recognition.

    Science.gov (United States)

    Li, Yongqiang; Wang, Shangfei; Zhao, Yongping; Ji, Qiang

    2013-07-01

    The tracking and recognition of facial activities from images or videos have attracted great attention in the computer vision field. Facial activities are characterized at three levels. First, at the bottom level, facial feature points around each facial component (eyebrow, mouth, etc.) capture the detailed face shape information. Second, at the middle level, facial action units, defined in the facial action coding system, represent the contraction of a specific set of facial muscles (lid tightener, eyebrow raiser, etc.). Finally, at the top level, six prototypical facial expressions represent global facial muscle movement and are commonly used to describe human emotional states. In contrast to mainstream approaches, which usually focus on only one or two levels of facial activity and track (or recognize) them separately, this paper introduces a unified probabilistic framework based on a dynamic Bayesian network to simultaneously and coherently represent facial evolution at the different levels, together with their interactions and observations. Advanced machine learning methods are introduced to learn the model from both training data and subjective prior knowledge. Given the model and the measurements of facial motion, all three levels of facial activity are simultaneously recognized through probabilistic inference. Extensive experiments illustrate the feasibility and effectiveness of the proposed model on all three levels of facial activity.

  3. Recognition of Facial Expression Using Eigenvector Based Distributed Features and Euclidean Distance Based Decision Making Technique

    Directory of Open Access Journals (Sweden)

    Jeemoni Kalita

    2013-03-01

    In this paper, an Eigenvector-based system is presented to recognize facial expressions from digital facial images. In this approach, the images were first acquired and five significant portions were cropped from each image to extract and store the Eigenvectors specific to the expressions. The Eigenvectors for the test images were also computed, and finally the input facial image was recognized by finding the expression with the minimum Euclidean distance between the test image and the stored expressions.
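
The final minimum-Euclidean-distance decision reduces to a nearest-neighbor rule over stored expression-specific feature vectors. A minimal sketch of just that decision step (the eigenvector extraction itself is not shown, and the helper name is an assumption):

```python
import numpy as np

def nearest_expression(test_vec, class_vecs):
    """Return the expression label whose stored feature vector has the
    minimum Euclidean distance to the test vector."""
    return min(class_vecs,
               key=lambda lbl: np.linalg.norm(test_vec - class_vecs[lbl]))
```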

  4. Facial expression recognition based on local region specific features and support vector machines

    OpenAIRE

    Ghimire, Deepak; Jeong, Sunghwan; Lee, Joonwhoan; Park, Sang Hyun

    2016-01-01

    Facial expressions are one of the most powerful, natural and immediate means for human being to communicate their emotions and intensions. Recognition of facial expression has many applications including human-computer interaction, cognitive science, human emotion analysis, personality development etc. In this paper, we propose a new method for the recognition of facial expressions from single image frame that uses combination of appearance and geometric features with support vector machines ...

  5. Tracking facial features with occlusions

    Institute of Scientific and Technical Information of China (English)

    MARKIN Evgeny; PRAKASH Edmond C.

    2006-01-01

    Facial expression recognition consists of determining what kind of emotional content is presented in a human face. The problem presents a complex area for exploration, since it encompasses face acquisition, facial feature tracking, and facial expression classification; facial feature tracking is of the most interest here. The Active Appearance Model (AAM) enables accurate tracking of facial features in real time, but does not handle occlusions and self-occlusions. In this paper we propose a solution to improve the accuracy of the fitting technique. The idea is to include occluded images in the AAM training data. We demonstrate the results by running experiments using a gradient descent algorithm for fitting the AAM. Our experiments show that a fitting algorithm trained with occluded data improves the fitting quality of the algorithm.

  6. A Classification Method of Normal and Overweight Females Based on Facial Features for Automated Medical Applications

    Directory of Open Access Journals (Sweden)

    Bum Ju Lee

    2012-01-01

    Obesity and overweight have become serious public health problems worldwide, and obesity and abdominal obesity are associated with type 2 diabetes, cardiovascular diseases, and metabolic syndrome. In this paper, we suggest a method of predicting normal and overweight status in females according to body mass index (BMI) based on facial features. A total of 688 subjects participated in this study. We obtained an area under the ROC curve (AUC) of 0.861 and a kappa value of 0.521 in the Female 21–40 group (females aged 21–40 years), and an AUC of 0.76 and a kappa value of 0.401 in the Female 41–60 group (females aged 41–60 years). In both groups, we found many features showing statistically significant differences between normal and overweight subjects using an independent two-sample t-test. We demonstrated that it is possible to predict BMI status using facial characteristics. Our results provide useful information for studies of obesity and facial characteristics, and may provide useful clues for developing applications for alternative diagnosis of obesity in remote healthcare.
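
The two-sample t-test used to compare normal and overweight groups can be sketched as below. This uses the Welch (unequal-variance) form of the statistic, which may differ from the exact variant applied in the paper:

```python
import numpy as np

def welch_t(a, b):
    """Welch's two-sample t statistic for groups a and b (unequal
    variances allowed); |t| large suggests the group means differ."""
    va = a.var(ddof=1) / len(a)
    vb = b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(va + vb)
```

In practice one would convert the statistic to a p-value against the t distribution with the Welch-Satterthwaite degrees of freedom.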

  7. Automatic Facial Expression Recognition Using Features of Salient Facial Patches

    OpenAIRE

    Happy, S L; Routray, Aurobinda

    2015-01-01

    Extraction of discriminative features from salient facial patches plays a vital role in effective facial expression recognition. The accurate detection of facial landmarks improves the localization of the salient patches on face images. This paper proposes a novel framework for expression recognition by using appearance features of selected facial patches. A few prominent facial patches, depending on the position of facial landmarks, are extracted which are active during emotion elicitation. ...

  8. Automatic facial feature extraction and expression recognition based on neural network

    CERN Document Server

    Khandait, S P; Khandait, P D

    2012-01-01

    In this paper, an approach to automatic facial feature extraction from a still frontal posed image, and to classification and recognition of the facial expression and hence the emotion and mood of a person, is presented. A feed-forward back-propagation neural network is used as a classifier to sort the supplied face into seven basic categories: surprise, neutral, sad, disgust, fear, happy, and angry. For face portion segmentation and localization, morphological image processing operations are used. Permanent facial features such as the eyebrows, eyes, mouth, and nose are extracted using the SUSAN edge detection operator, facial geometry, and edge projection analysis. Experiments carried out on the JAFFE facial expression database achieve 100% accuracy on the training set and 95.26% accuracy on the test set.

  9. Facial expression recognition with facial parts based sparse representation classifier

    Science.gov (United States)

    Zhi, Ruicong; Ruan, Qiuqi

    2009-10-01

    Facial expressions play an important role in human communication, and understanding facial expression is a basic requirement in the development of next-generation human-computer interaction systems. Research shows that the intrinsic facial features always lie in low-dimensional facial subspaces. This paper presents a facial-parts-based facial expression recognition system with a sparse representation classifier. The sparse representation classifier exploits sparse representation to select face features and classify facial expressions; the sparse solution is obtained by solving an l1-norm minimization problem with a linear-combination equality constraint. Experimental results show that sparse representation is efficient for facial expression recognition, and the sparse representation classifier obtains much higher recognition accuracies than the other compared methods.
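
A sparse-representation-style classifier picks the class whose training dictionary best reconstructs the test sample. The sketch below substitutes per-class least squares for the paper's global l1-norm minimization, so it is only a rough stand-in for the actual SRC:

```python
import numpy as np

def src_classify(y, dictionaries):
    """Classify test vector y by reconstruction residual: represent y
    with each class's training dictionary (columns = training samples)
    and return the class with the smallest residual."""
    best, best_r = None, np.inf
    for label, D in dictionaries.items():
        x, *_ = np.linalg.lstsq(D, y, rcond=None)   # per-class coefficients
        r = np.linalg.norm(y - D @ x)               # reconstruction residual
        if r < best_r:
            best, best_r = label, r
    return best
```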

  10. Geometric Feature-Based Facial Expression Recognition in Image Sequences Using Multi-Class AdaBoost and Support Vector Machines

    OpenAIRE

    Joonwhoan Lee; Deepak Ghimire

    2013-01-01

    Facial expressions are widely used in the behavioral interpretation of emotions, cognitive science, and social interactions. In this paper, we present a novel method for fully automatic facial expression recognition in facial image sequences. As the facial expression evolves over time, facial landmarks are automatically tracked in consecutive video frames using displacements based on elastic bunch graph matching displacement estimation. Feature vectors from individual landmarks, as well as pa...

  11. An Efficient Multimodal 2D + 3D Feature-based Approach to Automatic Facial Expression Recognition

    KAUST Repository

    Li, Huibin

    2015-07-29

    We present a fully automatic multimodal 2D + 3D feature-based facial expression recognition approach and demonstrate its performance on the BU-3DFE database. Our approach combines multi-order gradient-based local texture and shape descriptors in order to achieve efficiency and robustness. First, a large set of fiducial facial landmarks of 2D face images along with their 3D face scans are localized using a novel algorithm, namely the incremental Parallel Cascade of Linear Regression (iPar-CLR). Then, a novel Histogram of Second Order Gradients (HSOG) based local image descriptor, in conjunction with the widely used first-order gradient based SIFT descriptor, is used to describe the local texture around each 2D landmark. Similarly, the local geometry around each 3D landmark is described by two novel local shape descriptors constructed using first-order and second-order surface differential geometry quantities, i.e., the Histogram of mesh Gradients (meshHOG) and the Histogram of mesh Shape index (meshHOS). Finally, the Support Vector Machine (SVM) based recognition results of all 2D and 3D descriptors are fused at both feature level and score level to further improve the accuracy. Comprehensive experimental results demonstrate impressive complementary characteristics between the 2D and 3D descriptors. We use the BU-3DFE benchmark to compare our approach to the state of the art. Our multimodal feature-based approach outperforms the others by achieving an average recognition accuracy of 86.32%. Moreover, good generalization ability is shown on the Bosphorus database.
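
The score-level fusion step mentioned above can be sketched in a simplified form; as a hedge, this is a generic min-max normalize-and-sum fusion on invented per-class scores, not the authors' exact SVM fusion rule.

```python
import numpy as np

def minmax(s):
    """Min-max normalize a score vector to [0, 1]."""
    s = np.asarray(s, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

# Hypothetical per-class scores from two descriptors (e.g. a 2D texture
# descriptor and a 3D shape descriptor) for one test face; higher is better.
scores_2d = [0.1, 0.7, 0.2]   # favours class 1
scores_3d = [0.2, 0.9, 0.4]   # also favours class 1

# Score-level fusion: normalize each score vector, then sum.
fused = minmax(scores_2d) + minmax(scores_3d)
predicted = int(np.argmax(fused))
```

Normalizing before summing matters because scores from different descriptors live on different scales; without it, one descriptor can silently dominate the fusion.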

  12. Automatic facial feature extraction and expression recognition based on neural network

    OpenAIRE

    Khandait, S. P.; Thool, R. C.; Khandait, P. D.

    2012-01-01

    In this paper, an approach to the problem of automatic facial feature extraction from a still frontal posed image, and the classification and recognition of facial expression, and hence the emotion and mood of a person, is presented. A feed-forward back-propagation neural network is used as a classifier for classifying the expressions of a supplied face into seven basic categories: surprise, neutral, sad, disgust, fear, happy and angry. For face portion segmentation and localization, morphological image...

  13. Facial Expression Recognition Using 3D Facial Feature Distances

    OpenAIRE

    Soyel, Hamit; Hasan DEMIREL

    2008-01-01

    In this chapter we have shown that a probabilistic neural network classifier can be used for the 3D analysis of facial expressions without relying on all 84 facial features or an error-prone face pose normalization stage. Face deformation as well as facial muscle contraction and expansion are important indicators of facial expression, and by using only 11 facial feature points and the symmetry of the human face, we are able to extract enough information from a face image. Our results sho...

  14. Unifying Geometric Features and Facial Action Units for Improved Performance of Facial Expression Analysis

    OpenAIRE

    Ghayoumi, Mehdi; Bansal, Arvind K.

    2016-01-01

    Previous approaches to modelling and analyzing facial expressions use three different techniques: facial action units, geometric features and graph-based modelling. However, previous approaches have treated these techniques separately, even though they are interrelated. Facial expression analysis is significantly improved by utilizing the mappings between the major geometric features involved in facial expressions and the subset of facial action units whose presence or...

  15. Tracking subtle stereotypes of children with trisomy 21: from facial-feature-based to implicit stereotyping.

    Directory of Open Access Journals (Sweden)

    Claire Enea-Drapeau

    Full Text Available BACKGROUND: Stigmatization is one of the greatest obstacles to the successful integration of people with Trisomy 21 (T21 or Down syndrome, the most frequent genetic disorder associated with intellectual disability. Research on attitudes and stereotypes toward these people still focuses on explicit measures subjected to social-desirability biases, and neglects how variability in facial stigmata influences attitudes and stereotyping. METHODOLOGY/PRINCIPAL FINDINGS: The participants were 165 adults including 55 young adult students, 55 non-student adults, and 55 professional caregivers working with intellectually disabled persons. They were faced with implicit association tests (IAT, a well-known technique whereby response latency is used to capture the relative strength with which some groups of people--here photographed faces of typically developing children and children with T21--are automatically (without conscious awareness associated with positive versus negative attributes in memory. Each participant also rated the same photographed faces (consciously accessible evaluations. We provide the first evidence that the positive bias typically found in explicit judgments of children with T21 is smaller for those whose facial features are highly characteristic of this disorder, compared to their counterparts with less distinctive features and to typically developing children. We also show that this bias can coexist with negative evaluations at the implicit level (with large effect sizes, even among professional caregivers. CONCLUSION: These findings support recent models of feature-based stereotyping, and more importantly show how crucial it is to go beyond explicit evaluations to estimate the true extent of stigmatization of intellectually disabled people.

  16. Geometric Feature-Based Facial Expression Recognition in Image Sequences Using Multi-Class AdaBoost and Support Vector Machines

    Directory of Open Access Journals (Sweden)

    Joonwhoan Lee

    2013-06-01

    Full Text Available Facial expressions are widely used in the behavioral interpretation of emotions, cognitive science, and social interactions. In this paper, we present a novel method for fully automatic facial expression recognition in facial image sequences. As the facial expression evolves over time, facial landmarks are automatically tracked in consecutive video frames, using displacements based on elastic bunch graph matching displacement estimation. Feature vectors from individual landmarks, as well as pairs of landmarks' tracking results, are extracted and normalized with respect to the first frame in the sequence. The prototypical expression sequence for each class of facial expression is formed by taking the median of the landmark tracking results from the training facial expression sequences. Multi-class AdaBoost with a dynamic time warping similarity distance between the feature vector of the input facial expression and the prototypical facial expression is used as a weak classifier to select the subset of discriminative feature vectors. Finally, two methods for facial expression recognition are presented: one using multi-class AdaBoost with dynamic time warping, and one using a support vector machine on the boosted feature vectors. Results on the extended Cohn-Kanade (CK+) facial expression database show recognition accuracies of 95.17% and 97.35% using multi-class AdaBoost and support vector machines, respectively.
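
The dynamic time warping similarity used by the weak classifiers above can be sketched for 1-D feature trajectories. This is the textbook DTW recurrence, not necessarily the authors' exact distance; the sequences are invented.

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    INF = float("inf")
    # D[i][j]: minimal cost of aligning a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

identical = dtw_distance([1, 2, 3], [1, 2, 3])   # 0.0
shifted = dtw_distance([1, 2, 3], [1, 1, 2, 3])  # warping absorbs the repeated 1
```

DTW is a natural fit here because different subjects reach the apex of an expression at different speeds; warping aligns trajectories of unequal tempo before comparing them.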

  17. Emotion Recognition based on 2D-3D Facial Feature Extraction from Color Image Sequences

    Directory of Open Access Journals (Sweden)

    Robert Niese

    2010-10-01

    Full Text Available In modern human computer interaction systems, emotion recognition from video is becoming an imperative feature. In this work we propose a new method for automatic recognition of facial expressions related to categories of basic emotions from image data. Our method incorporates a series of image processing, low level 3D computer vision and pattern recognition techniques. For image feature extraction, color and gradient information is used. Further, in terms of 3D processing, camera models are applied along with an initial registration step, in which person specific face models are automatically built from stereo. Based on these face models, geometric feature measures are computed and normalized using photogrammetric techniques. For recognition this normalization leads to minimal mixing between different emotion classes, which are determined with an artificial neural network classifier. Our framework achieves robust and superior classification results, also across a variety of head poses with resulting perspective foreshortening and changing face size. Results are presented for domestic and publicly available databases.

  18. Artificial Neural Networks and Gene Expression Programing based age estimation using facial features

    Directory of Open Access Journals (Sweden)

    Baddrud Z. Laskar

    2015-10-01

    Full Text Available This work is about estimating human age automatically through analysis of facial images, a task with many real-world applications. Due to prompt advances in machine vision, facial image processing and computer graphics, automatic age estimation from faces is one of the dominant topics these days, with applications in biometrics, security, surveillance, control, forensic art, entertainment, online customer management and support, and cosmetology. As it is difficult to estimate an exact age, this system estimates an age range: four sets of classifications are used to assign a person's data to one of the age groups. The distinctive aspect of this study is the use of two technologies, Artificial Neural Networks (ANN) and Gene Expression Programing (GEP), to estimate age and then compare the results; GEP in particular is a newer methodology explored here, with significant results. The dataset was developed with superior preprocessing methods to provide more reliable results. The proposed approach was developed, trained and tested using both methods, with the public FG-NET dataset used for testing. The quality of the proposed system for age estimation using facial features is shown by broad experiments on the FG-NET database.

  19. Research on Facial Expression Recognition Based on Facial Features

    Institute of Scientific and Technical Information of China (English)

    马飞; 刘红娟; 程荣花

    2011-01-01

    In the field of facial expression recognition, this paper analyzes the structural features of the facial organs (eyebrows, eyes, nose and mouth) and presents a method that uses these features for facial expression recognition. A new feature-vector weight function is constructed to discretize the feature data, and a new facial expression classifier is built on it; experiments show that the algorithms are effective.

  20. Facial Expression Recognition using Entropy and Brightness Features

    OpenAIRE

    Khan, Rizwan Ahmed; Meyer, Alexandre; Konik, Hubert; Bouakaz, Saïda

    2011-01-01

    International audience This paper proposes a novel framework for universal facial expression recognition. The framework is based on two sets of features extracted from the face image: entropy and brightness. First, saliency maps are obtained by state-of-the-art saliency detection algorithm i.e. "frequencytuned salient region detection". Then only localized salient facial regions from saliency maps are processed to extract entropy and brightness features. To validate the performance of sali...

  1. A New Method of Diagnosing Constitutional Types Based on Vocal and Facial Features for Personalized Medicine

    Directory of Open Access Journals (Sweden)

    Bum Ju Lee

    2012-01-01

    Full Text Available The aim of the present study is to develop an accurate constitution diagnostic method based solely on the individual's physical characteristics, irrespective of psychological traits, clinical characteristics, and genetic factors. In this paper, we suggest a novel method for diagnosing constitutional types using only speech and face characteristics. Based on 514 subjects, the area under the receiver operating characteristic curve (AUC) values of the classification models in the age and gender groups ranged from 0.64 to 0.89. We identified significant features showing statistical differences among the three constitutional types by performing statistical analysis, and selected a compact and discriminative feature subset for constitution diagnosis in each age and gender group. Our method may support improved diagnosis prediction and will serve in developing personal, automatic constitution diagnosis software to improve the effectiveness of prescribed medications and to advance personalized medicine.
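
The AUC figures quoted above (0.64 to 0.89) are areas under the ROC curve; for a binary one-vs-rest split this can be computed directly, for instance with scikit-learn. The labels and scores below are invented to illustrate the calculation only.

```python
from sklearn.metrics import roc_auc_score

# Hypothetical binary labels (1 = a given constitutional type) and
# classifier scores for ten subjects.
y_true  = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
y_score = [0.1, 0.2, 0.3, 0.35, 0.4, 0.6, 0.7, 0.8, 0.9, 0.95]

# AUC is 1.0 when every positive scores above every negative,
# and 0.5 for a classifier no better than chance.
auc = roc_auc_score(y_true, y_score)
```

An AUC of 0.89, as in the best group reported above, means a randomly chosen subject of the target type outscores a randomly chosen subject of another type 89% of the time.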

  2. Detection of Facial Features in Scale-Space

    Directory of Open Access Journals (Sweden)

    P. Hosten

    2007-01-01

    Full Text Available This paper presents a new approach to the detection of facial features. A scale adapted Harris Corner detector is used to find interest points in scale-space. These points are described by the SIFT descriptor. Thus invariance with respect to image scale, rotation and illumination is obtained. Applying a Karhunen-Loeve transform reduces the dimensionality of the feature space. In the training process these features are clustered by the k-means algorithm, followed by a cluster analysis to find the most distinctive clusters, which represent facial features in feature space. Finally, a classifier based on the nearest neighbor approach is used to decide whether the features obtained from the interest points are facial features or not. 

  3. Facial Expression Geometrical Feature Extraction Based on Chain Code

    Institute of Scientific and Technical Information of China (English)

    张庆; 代锐; 朱雪莹; 韦穗

    2012-01-01

    Existing facial expression feature extraction algorithms achieve low expression recognition rates. To address this, a chain-code-based geometric feature extraction algorithm for facial expressions is proposed. Taking Active Shape Model (ASM) feature-point localization as a basis, the feature points located on the facial targets are encoded as a circular chain code to extract geometric expression features. Experimental results show that, compared with the classical LBP expression feature method, the recognition rate of the algorithm is improved by about 10%.
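
The encoding step above can be sketched with the standard 8-direction Freeman chain code, applied to a sequence of adjacent landmark points. This is the generic coding idea, not the paper's exact circular encoding; the point sequence is invented and assumed to be 8-adjacent.

```python
# 8-direction Freeman chain code: 0 = east, counting counter-clockwise.
DIRECTIONS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
              (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(points):
    """Encode a sequence of 8-adjacent (x, y) points as Freeman chain codes."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        codes.append(DIRECTIONS[(x1 - x0, y1 - y0)])
    return codes

# East, then north, then west:
codes = chain_code([(0, 0), (1, 0), (1, 1), (0, 1)])   # → [0, 2, 4]
```

A chain code turns a 2-D contour into a compact 1-D symbol sequence, which is why it suits the geometric shape of landmark outlines better than raw coordinates.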

  4. An Improved Method of feature extraction technique for Facial Expression Recognition using Adaboost Neural Network

    Directory of Open Access Journals (Sweden)

    Aruna Bhadu

    2012-06-01

    Full Text Available The objective of this research is a comparative study of different feature extraction techniques for facial expression recognition, and the development of a feature extraction algorithm using an AdaBoost classifier to reduce the generalization error and improve performance by achieving a high recognition rate. For facial feature extraction, two techniques are followed: the Discrete Cosine Transform and the Wavelet Transform. Upon extraction of the facial expression information, the feature vector is given to the facial expression classifier. An AdaBoost-based classifier is designed to deal with the variety of facial expressions to be recognized, and the JAFFE database is used for facial expression recognition.
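
The DCT feature extraction mentioned above typically keeps a small block of low-frequency coefficients as the feature vector. A minimal sketch with SciPy follows; the 8x8 ramp "patch" and the 4x4 coefficient block are illustrative choices, not the paper's settings.

```python
import numpy as np
from scipy.fft import dct

# Hypothetical 8x8 face patch; real input would be a normalized face image.
patch = np.outer(np.arange(8), np.ones(8)).astype(float)

# 2-D DCT (type II), applied along both axes with orthonormal scaling.
coeffs = dct(dct(patch, axis=0, norm="ortho"), axis=1, norm="ortho")

# Keep a small block of low-frequency coefficients as the feature vector,
# a common dimensionality-reduction step for DCT-based features.
features = coeffs[:4, :4].flatten()
```

Most of a face patch's energy concentrates in the low-frequency corner of the DCT, so truncating to that block discards noise and fine texture while retaining the coarse appearance the classifier needs.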

  5. Extraction of Facial Features from Color Images

    Directory of Open Access Journals (Sweden)

    J. Pavlovicova

    2008-09-01

    Full Text Available In this paper, a method for the localization and extraction of faces and characteristic facial features such as the eyes, mouth and face boundaries from color image data is proposed. This approach exploits the color properties of human skin to localize image regions that are face candidates. Facial feature extraction is performed only on preselected face-candidate regions. For eye and mouth localization, color information and the local contrast around the eyes are used. The ellipse of the face boundary is determined using a gradient image and the Hough transform. The algorithm was tested on the FERET image database.
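
Skin-color face-candidate selection, the first step above, can be sketched with a simple explicit RGB rule. As a hedge, this is a generic Peer-style rule used as a stand-in for the paper's own skin-color model, and the two test pixels are invented.

```python
import numpy as np

def skin_mask(rgb):
    """Rough explicit RGB skin rule: True where a pixel looks skin-colored."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    spread = rgb.max(axis=-1).astype(int) - rgb.min(axis=-1).astype(int)
    return ((r > 95) & (g > 40) & (b > 20)
            & (spread > 15) & (abs(r - g) > 15)
            & (r > g) & (r > b))

pixels = np.array([[[220, 170, 140],    # skin-like pixel
                    [30, 60, 200]]],    # blue background pixel
                  dtype=np.uint8)
mask = skin_mask(pixels)
```

In a full pipeline, connected regions of the mask would then be filtered by size and aspect ratio before the eye/mouth detectors run on them.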

  6. Towards the automation of forensic facial individualisation: Comparing forensic to non forensic eyebrow features

    OpenAIRE

    Zeinstra, Chris; Veldhuis, Raymond; Spreeuwers, Luuk

    2014-01-01

    The Facial Identification Scientific Working Group (FISWG) publishes recommendations regarding one-to-one facial comparisons. At this moment a draft version of a facial image comparison feature list for morphological analysis has been published. This feature list is based on casework experience by forensic facial examiners. This paper investigates whether the performance of the FISWG eyebrow feature set can be considered as being "state-of-the-art". We compare the recognition performance of o...

  7. Odor valence linearly modulates attractiveness, but not age assessment, of invariant facial features in a memory-based rating task.

    Directory of Open Access Journals (Sweden)

    Janina Seubert

    Full Text Available Scented cosmetic products are used across cultures as a way to favorably influence one's appearance. While crossmodal effects of odor valence on the perceived attractiveness of facial features have been demonstrated experimentally, it is unknown whether they represent a phenomenon specific to affective processing. In this experiment, we presented odors in the context of a face battery with systematic feature manipulations during a speeded response task. Modulatory effects of linear increases of odor valence were investigated by juxtaposing two subsequent memory-based rating tasks: one predominantly affective (attractiveness) and a second, cognitive (age). The linear modulation pattern observed for attractiveness was consistent with additive effects of face and odor appraisal. Effects of odor valence on age perception were not linearly modulated and may be the result of cognitive interference. Affective and cognitive processing of faces thus appear to differ in their susceptibility to modulation by odors, likely as a result of privileged access of olfactory stimuli to affective brain networks. These results are critically discussed with respect to potential biases introduced by the preceding speeded response task.

  8. Detection and tracking of facial features

    Science.gov (United States)

    De Silva, Liyanage C.; Aizawa, Kiyoharu; Hatori, Mitsutoshi

    1995-04-01

    Detection and tracking of facial features without any head-mounted devices may be required in various future visual communication applications, such as teleconferencing and virtual reality. In this paper we propose an automatic method of face feature detection called edge pixel counting. Instead of utilizing the color or gray-scale information of the facial image, the edge pixel counting method uses edge information to estimate the positions of face features such as the eyes, nose and mouth in the first frame of a moving facial image sequence, using a variable-size face feature template. For the remaining frames, feature tracking is carried out by alternating between deformable template matching and edge pixel counting. One main advantage of using edge pixel counting in feature tracking is that it does not require a high inter-frame correlation around the feature areas, as template matching does. Experimental results are shown to demonstrate the effectiveness of the proposed method.
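
The core of edge pixel counting, locating a feature by the template window with the most edge pixels, can be sketched as an exhaustive window scan over a binary edge map. The edge map and window size below are toy values; a real system would first compute the edge map from the frame.

```python
import numpy as np

def best_window(edge_map, h, w):
    """Top-left corner of the h-by-w window containing the most edge pixels."""
    rows, cols = edge_map.shape
    best, best_pos = -1, (0, 0)
    for i in range(rows - h + 1):
        for j in range(cols - w + 1):
            count = int(edge_map[i:i + h, j:j + w].sum())
            if count > best:
                best, best_pos = count, (i, j)
    return best_pos

# Toy binary edge map with a dense "eye-like" edge cluster at rows 1-2, cols 3-4.
edges = np.zeros((6, 8), dtype=int)
edges[1:3, 3:5] = 1
pos = best_window(edges, 2, 2)   # → (1, 3)
```

Because the score is a plain count rather than a pixel-wise similarity, it tolerates appearance changes between frames, which is exactly the advantage over template matching that the abstract points out.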

  9. Towards the automation of forensic facial individualisation: Comparing forensic to non forensic eyebrow features

    NARCIS (Netherlands)

    Zeinstra, Chris; Veldhuis, Raymond; Spreeuwers, Luuk

    2014-01-01

    The Facial Identification Scientific Working Group (FISWG) publishes recommendations regarding one-to-one facial comparisons. At this moment a draft version of a facial image comparison feature list for morphological analysis has been published. This feature list is based on casework experience by f

  10. Combining appearance and geometric features for facial expression recognition

    Science.gov (United States)

    Yu, Hui; Liu, Honghai

    2015-03-01

    This paper introduces a method for facial expression recognition combining appearance and geometric facial features. The proposed framework consistently combines multiple facial representations at both global and local levels. First, covariance descriptors are computed to represent regional features combining various feature information with a low dimensionality. Then geometric features are detected to provide a general facial movement description of the facial expression. These appearance and geometric features are combined to form a vector representation of the facial expression. The proposed method is tested on the CK+ database and shows encouraging performance.
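
The regional covariance descriptors mentioned above summarize a region's per-pixel feature vectors by their covariance matrix. A minimal sketch follows; the feature set here (x, y, intensity) is a deliberately small illustrative choice, where real descriptors typically add gradient magnitudes and orientations.

```python
import numpy as np

def region_covariance(region):
    """Covariance descriptor of an image region: per-pixel feature vectors
    (here x, y, intensity) summarized by their covariance matrix."""
    ys, xs = np.mgrid[0:region.shape[0], 0:region.shape[1]]
    feats = np.stack([xs.ravel(), ys.ravel(), region.ravel()]).astype(float)
    return np.cov(feats)          # d x d symmetric matrix, d = 3 here

region = np.arange(16, dtype=float).reshape(4, 4)
C = region_covariance(region)
```

The descriptor's dimensionality depends only on the number of per-pixel features, not on the region size, which is the low-dimensionality property the abstract highlights.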

  11. Internal facial features are signals of personality and health.

    Science.gov (United States)

    Kramer, Robin S S; Ward, Robert

    2010-11-01

    We investigated forms of socially relevant information signalled from static images of the face. We created composite images from women scoring high and low values on personality and health dimensions and measured the accuracy of raters in discriminating high from low trait values. We also looked specifically at the information content within the internal facial features, by presenting the composite images with an occluding mask. Four of the Big Five traits were accurately discriminated on the basis of the internal facial features alone (conscientiousness was the exception), as was physical health. The addition of external features in the full-face images led to improved detection for extraversion and physical health and poorer performance on intellect/imagination (or openness). Visual appearance based on internal facial features alone can therefore accurately predict behavioural biases in the form of personality, as well as levels of physical health. PMID:20486018

  12. Dynamic Facial Fatigue Recognition Based on Independent Features Fusion

    Institute of Scientific and Technical Information of China (English)

    孙艳丰; 卢冰; 王立春

    2013-01-01

    To improve facial fatigue state recognition, a facial fatigue feature representation method that fuses global and local features is proposed. The method combines the discrete cosine transform (DCT), independent component analysis (ICA) and the Gabor transform, obtaining the final facial fatigue representation by fusing global independent DCT features with local dynamic Gabor features. Experiments on a previously self-built database of fatigue image sequences show that the fatigue features extracted by this method are more discriminative.

  13. A Novel Feature Extraction Technique for Facial Expression Recognition

    Directory of Open Access Journals (Sweden)

    Mohammad Shahidul Islam

    2013-01-01

    Full Text Available This paper presents a new technique to extract a light-invariant local feature for facial expression recognition. It is not only robust to monotonic gray-scale changes caused by light variations but also very simple to compute, which makes it possible to analyze images in challenging real-time settings. The local feature for a pixel is computed by finding the direction of the neighboring pixel with a particular rank, in terms of gray-scale value, among all the neighboring pixels. When eight neighboring pixels are considered, the direction of the neighboring pixel with the second minimum of the gray-scale intensity yields the best performance for facial expression recognition in our experiment. The facial expression classification in the experiment was performed using a support vector machine on the CK+ dataset. The average recognition rate achieved is 90.1 ± 3.8%, which is better than other previous local-feature-based methods for facial expression analysis. The experimental results show that the proposed feature extraction technique is fast, accurate and efficient for facial expression recognition.
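
The per-pixel feature described above, the direction of the 8-neighbor with the second-smallest intensity, can be sketched directly. The neighbor scan order and tie handling below are illustrative assumptions; the paper does not fix them in the abstract.

```python
import numpy as np

# Neighbor offsets in a fixed scan order; the index of an offset is the
# direction code assigned to that neighbor (an assumed convention).
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def second_min_direction(img, r, c):
    """Direction code of the 8-neighbor with the second-smallest intensity:
    the rank-based, light-invariant local feature described in the abstract."""
    vals = [img[r + dr, c + dc] for dr, dc in OFFSETS]
    order = np.argsort(vals, kind="stable")   # stable sort fixes tie handling
    return int(order[1])                      # index of the second minimum

img = np.array([[1, 2, 3],
                [4, 5, 6],
                [7, 8, 9]])
code = second_min_direction(img, 1, 1)   # neighbors 1,2,3,4,6,7,8,9 → 2nd min is 2
```

Because the feature depends only on the rank ordering of neighbor intensities, any monotonic lighting change leaves it untouched, which is the claimed light invariance.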

  14. Face Recognition Based on Explicit Facial Features by Fusion Construction Method

    Institute of Scientific and Technical Information of China (English)

    杨飞; 苏剑波

    2012-01-01

    In current research on face recognition, facial geometric features have not been fully utilized. The importance of geometric features for face recognition is therefore explicated, and a novel technique for facial geometric feature extraction is proposed. A facial explicit feature is then constructed by fusing geometric and texture information, and a corresponding face recognition method using these features is given. This face recognition method not only possesses some advantages over the popular subspace methods based on statistical learning, but can also complement them. Experiments demonstrate that the extracted features and the corresponding face recognition algorithm are robust to variations in facial expression and environmental illumination.

  15. Facial expression identification based on combinational feature of facial action units

    Institute of Scientific and Technical Information of China (English)

    欧阳琰; 桑农

    2011-01-01

    Facial expressions can be seen as combinations of the facial action units defined by the Facial Action Coding System (FACS). Unlike appearance features of face images, such as gray level and texture, combinational features based on facial action units describe expressions more accurately. However, facial action units are difficult to locate precisely; to avoid this problem, previous works divide the face image into many sub-blocks and extract facial action unit information from these sub-blocks to compose action-unit-based expression component features. Building on this, we first coarsely locate the eyes and mouth in the face image and then, according to their horizontal positions, extract image sub-blocks for the eye, mouth and nose regions. Haar features are extracted from each sub-block, and a minimum-error strategy is used to select combinational facial action unit features from these sub-blocks. The combinational features are used to learn weak classifiers, which are embedded in a boosting learning structure to construct a strong classifier. Tests on the Cohn-Kanade database demonstrate that the method achieves good expression classification performance.

  16. Mutual information-based facial expression recognition

    Science.gov (United States)

    Hazar, Mliki; Hammami, Mohamed; Hanêne, Ben-Abdallah

    2013-12-01

    This paper introduces a novel low-computation discriminative-region representation for the expression analysis task. The proposed approach relies on studies in psychology which show that most of the descriptive regions responsible for facial expression are located around certain face parts. The contribution of this work is a new approach that supports automatic facial expression recognition based on automatic region selection. The region selection step aims to select the descriptive regions responsible for facial expression and is performed using the Mutual Information (MI) technique. For facial feature extraction, we apply Local Binary Patterns (LBP) to the gradient image to encode salient micro-patterns of facial expressions. Experimental studies show that using discriminative regions provides better results than using the whole face region, whilst reducing the feature vector dimension.
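
The LBP-on-gradient step above can be sketched as a basic 8-neighbor LBP code computed on a gradient-magnitude image. The thresholding convention and neighbor order below are the common textbook choices, not necessarily the authors' exact variant.

```python
import numpy as np

def lbp_code(img, r, c):
    """Basic 8-neighbor LBP code: one bit per neighbor >= the center value."""
    center = img[r, c]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr, c + dc] >= center:
            code |= 1 << bit
    return code

# Gradient-magnitude image (the abstract applies LBP to gradients, not pixels).
img = np.arange(25, dtype=float).reshape(5, 5)
gy, gx = np.gradient(img)
grad_mag = np.hypot(gx, gy)
code = lbp_code(grad_mag, 2, 2)   # uniform gradient here → all bits set
```

Computing LBP on the gradient image rather than raw intensities emphasizes edge micro-patterns, which carry most of the expression-relevant structure.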

  17. Frontal Facial Pose Recognition Using a Discriminant Splitting Feature Extraction Procedure

    OpenAIRE

    Marras, Ioannis; Nikolaidis, Nikos; Pitas, Ioannis

    2011-01-01

    Frontal facial pose recognition deals with classifying facial images into two classes: frontal and non-frontal. Recognition of frontal poses is required as a preprocessing step for face analysis algorithms (e.g. face or facial expression recognition) that can operate only on frontal views. A novel frontal facial pose recognition technique based on discriminant image splitting for feature extraction is presented in this paper. Spatially homogeneous and discriminant regions for each faci...

  18. Fusing Facial Features for Face Recognition

    OpenAIRE

    Jamal Ahmad Dargham; Ali Chekima; Ervin Gubin Moung

    2012-01-01

    Face recognition is an important biometric method because of its potential applications in many fields, such as access control, surveillance, and human-computer interaction. In this paper, a face recognition system that fuses the outputs of three face recognition systems based on Gabor jets is presented. The first system uses the magnitude, the second uses the phase, and the third uses the phase-weighted magnitude of the jets. The jets are generated from facial landmarks selected using three ...

  19. Identification based on facial parts

    Directory of Open Access Journals (Sweden)

    Stevanov Zorica

    2007-01-01

    Full Text Available Two opposing views dominate the face identification literature: one suggesting that the face is processed as a whole, and another suggesting analysis based on parts. Our research tried to establish which of these two is the dominant strategy, and our results fell in the direction of analysis based on parts. The faces were covered with a mask and the participants uncovered different parts, one at a time, in an attempt to identify a person. Already at the level of a single facial feature, such as the mouth, an eye, or the top of the nose, some observers were capable of establishing the identity of a familiar face. Identification is exceptionally successful when a small assembly of facial parts is visible, such as an eye, an eyebrow and the top of the nose. Some facial parts are not very informative on their own but do enhance recognition when given as part of such an assembly. A novel finding here is the importance of the top of the nose for face identification. Additionally, observers have a preference toward the left side of the face. Typically subjects view the elements in the following order: left eye, left eyebrow, right eye, lips, region between the eyes, right eyebrow, region between the eyebrows, left cheek, right cheek. When observers are not in a position to see the eyes, eyebrows or top of the nose, they go for the lips first and then the region between the eyebrows, region between the eyes, left cheek, right cheek and finally the chin.

  20. Facial Expression Recognition Using New Feature Extraction Algorithm

    OpenAIRE

    Huang, Hung-Fu; Tai, Shen-Chuan

    2012-01-01

This paper proposes a method for facial expression recognition. Facial feature vectors are generated from keypoint descriptors using Speeded-Up Robust Features (SURF). Each facial feature vector is normalized, and a probability density function descriptor is then generated from it. The distance between two probability density function descriptors is calculated using the Kullback-Leibler divergence. A mathematical criterion is employed to select certain practicable probability density function descriptors for...
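The distance computation described here can be sketched in a few lines. This is a generic discrete KL-divergence helper under the assumption that each descriptor has been normalized to a probability distribution; the function names are illustrative, not from the paper:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p || q) of two discrete PDFs.
    A small epsilon avoids log(0) on empty histogram bins."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

def symmetric_kl(p, q):
    """Symmetrized variant, often preferred for comparing two
    descriptors, since plain KL divergence is not symmetric."""
    return 0.5 * (kl_divergence(p, q) + kl_divergence(q, p))
```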

  1. Improving Recognition and Identification of Facial Areas Involved in Non-verbal Communication by Feature Selection

    OpenAIRE

    Sheerman-Chase, T; Ong, E-J; Pugeault, N; Bowden, R.

    2013-01-01

    Meaningful Non-Verbal Communication (NVC) signals can be recognised by facial deformations based on video tracking. However, the geometric features previously used contain a significant amount of redundant or irrelevant information. A feature selection method is described for selecting a subset of features that improves performance and allows for the identification and visualisation of facial areas involved in NVC. The feature selection is based on a sequential backward elimination of features ...

  2. Feature selection for facial expression recognition using deformation modeling

    Science.gov (United States)

    Srivastava, Ruchir; Sim, Terence; Yan, Shuicheng; Ranganath, Surendra

    2010-02-01

Works on Facial Expression Recognition (FER) have mostly used image-based approaches. In recent years, however, researchers have also been exploring the use of 3D information for FER. Most of the time, a neutral (expressionless) face of the subject is needed in both the image-based and 3D model-based approaches, which might not be practical in many applications. This paper addresses this limitation of previous works by proposing a novel feature extraction technique that does not require any neutral face of the subjects. It is proposed and validated experimentally that the motion of some landmark points on the face, when exhibiting a particular facial expression, is similar across different persons. A separate classifier is built and relevant feature points are selected for each expression. One-vs-all SVM classification gives promising results.

  3. A Cloud Model-based Approach for Facial Expression Synthesis

    Directory of Open Access Journals (Sweden)

    Juebo Wu

    2011-04-01

The process of synthesizing features for human facial expression involves fuzziness, randomness and a certain relevance between them in image data. Exploiting the advantages of the cloud model, this paper presents a new approach and applications for the comprehensive analysis of human facial expression synthesis, in order to realize rapid and effective facial expression processing in analysis and application. It gives a comprehensive analysis of the fuzziness and randomness of facial expression features and the relationship between them based on the cloud model, including a new method of facial expression synthesis under uncertainty. It proposes a method of facial expression feature synthesis by cloud model, using the three numerical characteristics (Expectation, Entropy and Hyper-Entropy) as the features and concepts of facial expression, with their fuzziness, randomness and relevance. Through these three numerical characteristics, it introduces the framework of facial expression synthesis and the detailed procedures based on the cloud model. The synthesized facial expressions can express different expressions for one person and can meet a variety of demands for facial expression. The experimental results show that the proposed method is feasible and effective in facial expression synthesis.

  4. Perceived Attractiveness, Facial Features, and African Self-Consciousness.

    Science.gov (United States)

    Chambers, John W., Jr.; And Others

    1994-01-01

    Investigated relationships between perceived attractiveness, facial features, and African self-consciousness (ASC) among 149 African American college students. As predicted, high ASC subjects used more positive adjectives in descriptions of strong African facial features than did medium or low ASC subjects. Results are discussed in the context of…

  5. Facial Feature Extraction Based on NSCT and M-PCNN

    Institute of Scientific and Technical Information of China (English)

    杨光; 王晅; 徐鹏; 陈丹丹

    2012-01-01

In order to improve the robustness of face recognition to changes of facial pose, position and expression, a facial feature extraction method based on the Nonsubsampled Contourlet Transform (NSCT) and a Modified Pulse-Coupled Neural Network (M-PCNN) is proposed in this paper. Using NSCT, the input images are decomposed into a number of sub-images with various scales and directional features. The different subbands are then decomposed into sequences of binary images by the M-PCNN. The information entropy of each binary image is calculated and taken as a facial feature. A Support Vector Machine (SVM) classifier is employed for recognition and classification. Simulation results show that the method is robust and achieves good results in verification and classification.

  6. Wavelet based approach for facial expression recognition

    Directory of Open Access Journals (Sweden)

    Zaenal Abidin

    2015-03-01

Facial expression recognition is one of the most active fields of research, and many facial expression recognition methods have been developed and implemented. Neural networks (NNs) have the capability to undertake such pattern recognition tasks; the key factor in their use is their characteristics: they are capable of learning and generalizing, non-linear mapping, and parallel computation. Backpropagation neural networks (BPNNs) are the most commonly used approach. In this study, BPNNs were used as classifiers to categorize facial expression images into seven classes of expressions: anger, disgust, fear, happiness, sadness, neutral and surprise. For feature extraction, three discrete wavelet transforms were used to decompose the images, namely the Haar wavelet, the Daubechies(4) wavelet and the Coiflet(1) wavelet. To analyze the proposed method, a facial expression recognition system was built. The proposed method was tested on static images from the JAFFE database.
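As a rough illustration of the decomposition step, one analysis level of the 2-D Haar transform can be written directly in NumPy. This is a sketch with an averaging normalization; the study's exact wavelet implementation and normalization are not specified in the abstract:

```python
import numpy as np

def haar2d(img):
    """One level of the 2-D Haar wavelet transform.
    Returns the approximation (LL) and detail (LH, HL, HH) subbands.
    Image dimensions are assumed to be even."""
    a = img[0::2, :] + img[1::2, :]   # pairwise row sums
    d = img[0::2, :] - img[1::2, :]   # pairwise row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 4.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 4.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 4.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 4.0
    return ll, lh, hl, hh
```

Applying `haar2d` recursively to the LL subband yields the multi-level decomposition whose coefficients feed the BPNN classifier.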

  7. LBP and SIFT based facial expression recognition

    Science.gov (United States)

    Sumer, Omer; Gunes, Ece O.

    2015-02-01

This study compares the performance of local binary patterns (LBP) and the scale-invariant feature transform (SIFT) with support vector machines (SVM) in automatic classification of discrete facial expressions. Facial expression recognition is a multiclass classification problem with seven classes: happiness, anger, sadness, disgust, surprise, fear and contempt. Using SIFT feature vectors and a linear SVM, 93.1% mean accuracy is obtained on the CK+ database. The performance of the LBP-based classifier with a linear SVM is reported on SFEW using the strictly person-independent (SPI) protocol; the seven-class mean accuracy on SFEW is 59.76%. Experiments on both databases showed that LBP features can be used in a fairly descriptive way if good localization of facial points and a good partitioning strategy are followed.
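The basic LBP operator compared in this study encodes each pixel by thresholding its 8 neighbours against the centre value. A minimal sketch of the plain 3x3 variant (without the uniform-pattern or multi-scale extensions the literature often uses):

```python
import numpy as np

def lbp_3x3(img):
    """Basic 3x3 local binary pattern: each interior pixel is encoded
    by thresholding its 8 neighbours against the centre value."""
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]
    # neighbour offsets, clockwise from top-left
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy,
                 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code
```

In the full pipeline, the face is partitioned into cells and a histogram of these codes per cell, concatenated, forms the feature vector fed to the SVM.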

  8. Facial Expression Recognition in the Wild using Rich Deep Features

    OpenAIRE

    Karali, Abubakrelsedik; Bassiouny, Ahmad; El-Saban, Motaz

    2016-01-01

    Facial Expression Recognition is an active area of research in computer vision with a wide range of applications. Several approaches have been developed to solve this problem for different benchmark datasets. However, Facial Expression Recognition in the wild remains an area where much work is still needed to serve real-world applications. To this end, in this paper we present a novel approach towards facial expression recognition. We fuse rich deep features with domain knowledge through enco...

  9. Facial Expression Recognition Based on Automatic Segmentation of Feature Regions

    Institute of Scientific and Technical Information of China (English)

    张腾飞; 闵锐; 王保云

    2011-01-01

To improve 3D facial expression feature-region segmentation, which is currently complex and time-consuming, an automatic feature-region segmentation method is presented. Partial facial feature points are detected by projection and curvature calculation, and are used as the basis for automatic segmentation of facial expression regions. To obtain richer expression information, the Facial Action Coding System (FACS) coding rules are introduced to extend the extracted characteristic matrix, and facial expressions are recognized by combining classifiers. Experimental results on samples from a 3D facial expression database show that the method achieves a high recognition rate.

  10. Facial expression recognition based on CBP-TOP features

    Institute of Scientific and Technical Information of China (English)

    朱勇; 詹永照

    2011-01-01

For effective extraction of facial expression information in the space-time domain, this paper proposes a novel approach to facial expression recognition based on CBP-TOP (centralized binary patterns from three orthogonal panels) features and an SVM classifier. The original image sequences are first preprocessed, including face detection, image cropping and size normalization. Features are then extracted from image blocks using the CBP-TOP operator. Finally, six expressions are recognized by a support vector machine classifier. The experimental results show that this method extracts the motion features and dynamic texture information of image sequences more effectively and raises the accuracy of expression recognition. Compared with VLBP (volume local binary pattern) features, CBP-TOP features achieve a higher recognition rate and faster recognition speed.

  11. Facial features influence the categorization of female sexual orientation.

    Science.gov (United States)

    Tskhay, Konstantin O; Feriozzo, Melissa M; Rule, Nicholas O

    2013-01-01

Social categorization is a rapid and automatic process, and people rely on various facial cues to accurately categorize each other into social groups. Recently, studies have demonstrated that people integrate different cues to arrive at accurate impressions of others' sexual orientations. The amount of perceptual information available to perceivers could affect these categorizations, however. Here, we found that as visual information decreased from full faces to internal facial features to just pairs of eyes, so did the accuracy of judging women's sexual orientation. Still, accuracy remained significantly greater than chance across all conditions. More importantly, participants' response bias varied significantly depending on the facial feature judged: perceivers were significantly more likely to consider that a target might be lesbian as they viewed less of the face. Thus, although facial features may be continuously integrated in person construal, they can differentially affect how people see each other. PMID:24494440

  12. Regression-based Multi-View Facial Expression Recognition

    NARCIS (Netherlands)

    Rudovic, Ognjen; Patras, Ioannis; Pantic, Maja

    2010-01-01

    We present a regression-based scheme for multi-view facial expression recognition based on 2-D geometric features. We address the problem by mapping facial points (e.g. mouth corners) from non-frontal to frontal view where further recognition of the expressions can be performed using a state-of-the-

  13. Fusing facial feature recognition algorithm based on kernel canonical correlation analysis

    Institute of Scientific and Technical Information of China (English)

    王大伟; 陈浩; 王延杰

    2009-01-01

A new fused facial feature recognition algorithm based on kernel Canonical Correlation Analysis (Kernel CCA) is proposed for mapping image data into a feature space and improving classification accuracy. In our approach, we first map the image data through a kernel function, then extract features along both the row and column directions. The algorithm simplifies the computation by avoiding decomposition of the mapped matrix and obtains more discriminative features. Experimental results on the OTCBVS visible/infrared face database of Ohio State University show that our algorithm outperforms other CCA-based face recognition methods, with a recognition accuracy of more than 90%. In addition, it is robust to common face recognition problems such as non-uniform illumination changes and expression variations.

  14. Concealing the Level-3 features of Fingerprint in a Facial Image

    Directory of Open Access Journals (Sweden)

    Dr.R.Seshadri,

    2010-11-01

Biometrics identifies an individual based on the physical, chemical and behavioral characteristics of the person. Biometrics is increasingly being used for authentication and protection purposes, and this has generated considerable interest from many parts of the information technology community. In this paper we propose facial image watermarking methods that can embed fingerprint level-3 feature information into host facial images. This scheme has the advantage that, in addition to facial matching, the fingerprint level-3 features recovered during decoding can be used to establish authentication. The proposed system thus conceals vital identifying information about a person while protecting it from attackers.

  15. Face recognition using improved-LDA with facial combined feature

    Science.gov (United States)

    Zhou, Dake; Yang, Xin; Peng, Ningsong

    2005-06-01

Face recognition subjected to various conditions is a challenging task. This paper presents a combined-feature improved Fisher classifier method for face recognition. Both the facial holistic information and local information are used for face representation. In addition, improved linear discriminant analysis (I-LDA) is employed for good generalization capability. Experiments show that the method is not only robust to moderate changes of illumination, pose and facial expression but also superior to traditional methods such as eigenfaces and Fisherfaces.

  16. Face recognition using improved-LDA with facial combined feature

    Institute of Scientific and Technical Information of China (English)

    Dake Zhou; Xin Yang; Ningsong Peng

    2005-01-01

Face recognition subjected to various conditions is a challenging task. This paper presents a combined-feature improved Fisher classifier method for face recognition. Both the facial holistic information and local information are used for face representation. In addition, improved linear discriminant analysis (I-LDA) is employed for good generalization capability. Experiments show that the method is not only robust to moderate changes of illumination, pose and facial expression but also superior to traditional methods such as eigenfaces and Fisherfaces.

  17. Facial Affect Recognition Using Regularized Discriminant Analysis-Based Algorithms

    OpenAIRE

    Lee Chien-Cheng; Huang Shin-Sheng; Shih Cheng-Yuan

    2010-01-01

This paper presents a novel and effective method for facial expression recognition, covering happiness, disgust, fear, anger, sadness, surprise, and the neutral state. The proposed method utilizes a regularized discriminant analysis-based boosting algorithm (RDAB) with effective Gabor features to recognize the facial expressions. An entropy criterion is applied to select the effective Gabor features, a subset of informative and non-redundant Gabor features. The proposed RDAB algorithm uses RD...

  18. Geometric Feature Based Face-Sketch Recognition

    OpenAIRE

    Pramanik, Sourav; Bhattacharjee, Debotosh

    2013-01-01

This paper presents a novel facial sketch image or face-sketch recognition approach based on facial feature extraction. To recognize a face-sketch, we concentrate on a set of geometric facial features such as the eyes, nose, eyebrows and lips, and their length and width ratios, since it is difficult to match photos and sketches directly as they belong to two different modalities. In this system, the facial features/components are first extracted from training images; then ratios of length, width, a...

  19. Which 3D Geometric Facial Features Give Up Your Identity ?

    OpenAIRE

    Ballihi, Lahoucine; Boulbaba, Ben Amor; Daoudi, Mohamed; Srivastava, Anuj; Aboutajdine, Driss

    2012-01-01

The 3D face recognition literature has many papers that represent facial shapes as collections of curves of different kinds (level curves, iso-level curves, radial curves, profiles, geodesic polarization, iso-depth lines, iso-stripes, etc.). In contrast with holistic approaches, which match faces based on whole surfaces, the curve-based parametrization allows local analysis of facial shapes. This, in turn, facilitates handling of pose variations (pr...

  20. Detection of Facial Features in Scale-Space

    OpenAIRE

    P. Hosten; M. Asbach

    2007-01-01

This paper presents a new approach to the detection of facial features. A scale-adapted Harris corner detector is used to find interest points in scale-space. These points are described by the SIFT descriptor, so invariance with respect to image scale, rotation and illumination is obtained. Applying a Karhunen-Loeve transform reduces the dimensionality of the feature space. In the training process these features are clustered by the k-means algorithm, followed by a cluster analysis to find ...

  1. Extracted facial feature of racial closely related faces

    Science.gov (United States)

    Liewchavalit, Chalothorn; Akiba, Masakazu; Kanno, Tsuneo; Nagao, Tomoharu

    2010-02-01

Human faces contain a lot of demographic information such as identity, gender, age, race and emotion. Human beings can perceive these pieces of information and use them as important clues in social interaction with other people. Race perception is considered among the most delicate and sensitive parts of face perception. There is much research concerning image-based race recognition, but most of it focuses on major race groups such as Caucasoid, Negroid and Mongoloid. This paper focuses on how people classify the race of racially closely related groups. As a sample of such a group, we chose Japanese and Thai faces to represent the difference between Northern and Southern Mongoloid. Three psychological experiments were performed to study the strategies of face perception in race classification. The results suggest that race perception is an ability that can be learned. Eyes and eyebrows attract the most attention, and the eyes are a significant factor in race perception. Principal Component Analysis (PCA) was performed to extract facial features of the sample race groups, and the extracted texture and shape features were used to synthesize faces. The results suggest that racial features rely on detailed texture rather than shape. This is indispensable fundamental research on race perception, essential for the establishment of a human-like race recognition system.
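The PCA step used above to extract texture and shape features can be sketched as follows, assuming faces are row-vectorized into a data matrix. This is a generic SVD-based PCA, not the authors' exact pipeline:

```python
import numpy as np

def pca_features(X, k):
    """Project row-vectorized face data X (n_samples x n_dims)
    onto its top-k principal components.
    Returns (projections, components, mean)."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # economy-size SVD: rows of Vt are the principal axes
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k], mean
```

The projections serve as compact features; faces can be synthesized by mapping a feature vector back through the retained components and adding the mean face.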

  2. Facial expression feature selection method based on neighborhood rough set theory and quantum genetic algorithm

    Institute of Scientific and Technical Information of China (English)

    冯林; 李聪; 沈莉

    2013-01-01

Facial expression feature selection is one of the hot issues in the field of facial expression recognition. A novel facial expression feature selection method named Feature Selection based on Neighborhood Rough Set Theory and Quantum Genetic Algorithm (FSNRSTQGA) is proposed. First, an evaluation criterion for the optimal expression feature subset is established based on neighborhood rough set theory and used as the fitness function. Then, following the quantum genetic algorithm evolutionary strategy, an approach to facial expression feature selection is derived. Simulation results on the Cohn-Kanade expression dataset illustrate that the FSNRSTQGA method is effective.

  3. FACIAL EXPRESSION RECOGNITION BASED ON EDGE DETECTION

    OpenAIRE

    Chen, Xiaoming; Cheng, Wushan

    2015-01-01

Over the last two decades, advances in computer vision and pattern recognition have opened the door to new opportunities for automatic facial expression recognition systems [1]. This paper uses the Canny edge detection method for facial expression recognition. A color space transformation is applied first, and the face is then identified and located. Next, the edges of the eyes and mouth are extracted as features. Last, the facial expression is judged after comparison wi...

  4. Invariant facial feature extraction using biologically inspired strategies

    Science.gov (United States)

    Du, Xing; Gong, Weiguo

    2011-12-01

    In this paper, a feature extraction model for face recognition is proposed. This model is constructed by implementing three biologically inspired strategies, namely a hierarchical network, a learning mechanism of the V1 simple cells, and a data-driven attention mechanism. The hierarchical network emulates the functions of the V1 cortex to progressively extract facial features invariant to illumination, expression, slight pose change, and variations caused by local transformation of facial parts. In the network, filters that account for the local structures of the face are derived through the learning mechanism and used for the invariant feature extraction. The attention mechanism computes a saliency map for the face, and enhances the salient regions of the invariant features to further improve the performance. Experiments on the FERET and AR face databases show that the proposed model boosts the recognition accuracy effectively.

  5. Research of facial features extraction technology based on Trace transform

    Institute of Scientific and Technical Information of China (English)

    王景中; 王国庆; 伍淳华; 王龙

    2012-01-01

This paper proposes a novel facial feature extraction scheme based on the Trace transform, and with it a new form of facial representation, intended to improve both the stability and the discriminability of facial features: the extracted features should remain stable for the same person's face under different expressions and lighting conditions while distinguishing different persons' faces. The scheme combines several functionals to process the preprocessed face images, producing a Trace feature vector for each image. Experimental results on the ORL face database show that the extracted facial features discriminate between different persons' face images while still recognizing the same person's different face images. The algorithm is therefore feasible in practical applications of face recognition.

  6. 3D Facial Gender Classification Based on Multi-angle LBP Feature

    Institute of Scientific and Technical Information of China (English)

    赵海英; 杨一帆; 徐正光

    2012-01-01

Facial gender classification is a challenging topic, and current research remains incomplete. In this paper, we propose a gender classification method based on three-dimensional faces. Front-pose adjustment is first performed automatically through local-region iterative closest point (ICP) registration; the faces are then rotated in pitch, and multi-angle LBP (local binary patterns) features are extracted from depth thumbnail maps at different viewing angles; finally, a support vector machine (SVM) classifier performs training and prediction. The algorithm was evaluated on the CASIA database; for the neutral-expression faces in this database, a highest correct classification rate of 98.374% is obtained.

  7. Facial Expression Recognition Based on Gabor Feature and Adaboost

    Institute of Scientific and Technical Information of China (English)

    刘燚; 高智勇; 王军

    2011-01-01

In order to improve the facial expression recognition (FER) rate and enhance classifier performance, an approach is proposed that recognizes facial expressions using Gabor features combined with the Adaboost algorithm. Gabor filters are an important means of extracting facial expression features, and the Adaboost algorithm combines a series of weak classifiers into a strong classifier. To solve the multi-class classification problem, a one-vs-one scheme is adopted, producing k(k-1)/2 strong classifiers in total (k being the number of classes); these strong classifiers are cascaded to achieve multi-class classification of facial expressions. Experimental results show that the recognition rate of Gabor plus Adaboost is significantly higher than that of other methods such as the MVBoost algorithm.
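The one-vs-one construction described above, k(k-1)/2 pairwise classifiers combined by voting, can be sketched in plain Python. The stub classifiers here are placeholders for the trained Adaboost ensembles:

```python
from itertools import combinations

def one_vs_one_predict(classifiers, x):
    """Majority vote over pairwise classifiers.
    `classifiers` maps a label pair (i, j) to a callable that
    returns whichever of i or j it predicts for sample x."""
    votes = {}
    for pair, clf in classifiers.items():
        winner = clf(x)
        votes[winner] = votes.get(winner, 0) + 1
    return max(votes, key=votes.get)

labels = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]
pairs = list(combinations(labels, 2))
# k = 6 expression classes -> k(k-1)/2 = 15 pairwise classifiers
assert len(pairs) == len(labels) * (len(labels) - 1) // 2
```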

  8. Facial features and social attractiveness: preferences of Bosnian female students

    Directory of Open Access Journals (Sweden)

    Nina Bosankić

    2015-09-01

This research aimed to test the multiple fitness hypothesis of attraction by investigating the relationship between male facial characteristics and female students' reported readiness to engage in various social relations. A total of 27 male photos were evaluated on five dimensions on a seven-point Likert-type scale ranging from -3 to 3 by a convenience sample of 90 female students of the University of Sarajevo. The dimensions were: desirable to date – not desirable to date; desirable to marry – not desirable to marry; desirable to have sex with – not desirable to have sex with; desirable as a friend – not desirable as a friend; attractive – not attractive. Facial metric measurements of features such as the distance between the eyes and smile width and height were performed using AutoCAD. The results indicate that only smile width positively correlates with the desirability of establishing friendship, while none of the other characteristics correlates with any of the other dimensions. This leads to the conclusion that motivation to establish various social relations cannot be reduced to mere physical appearance, mainly facial features, and involves many other variables yet to be investigated.

  9. Extraction of Facial Feature Points Using Cumulative Histogram

    CERN Document Server

    Paul, Sushil Kumar; Bouakaz, Saida

    2012-01-01

This paper proposes a novel adaptive algorithm to automatically extract facial feature points such as eyebrow corners, eye corners, nostrils, nose tip, and mouth corners in frontal-view faces, based on a cumulative histogram approach with varying threshold values. First, the method adopts the Viola-Jones face detector to locate and crop the face region in an image. Based on the structure of the human face, six relevant regions (right eyebrow, left eyebrow, right eye, left eye, nose, and mouth) are cropped from the face image. The histogram of each cropped region is then computed, and its cumulative histogram is used with varying threshold values to create a new filtered image adaptively. The connected component of the area of interest in each relevant filtered image indicates the respective feature region. A simple linear search algorithm for the eyebrow, eye and mouth filtered images and a contour algorithm for nos...

  10. Extraction of Facial Feature Points Using Cumulative Histogram

    Directory of Open Access Journals (Sweden)

    Sushil Kumar Paul

    2012-01-01

This paper proposes a novel adaptive algorithm to automatically extract facial feature points such as eyebrow corners, eye corners, nostrils, nose tip, and mouth corners in frontal-view faces, based on a cumulative histogram approach with varying threshold values. First, the method adopts the Viola-Jones face detector to locate and crop the face region in an image. Based on the structure of the human face, six relevant regions (right eyebrow, left eyebrow, right eye, left eye, nose, and mouth) are cropped from the face image. The histogram of each cropped region is then computed, and its cumulative histogram is used with varying threshold values to create a new filtered image adaptively. The connected component of the area of interest in each relevant filtered image indicates the respective feature region. A simple linear search algorithm for the eyebrow, eye and mouth filtered images and a contour algorithm for the nose filtered image are applied to extract the desired corner points automatically. The method was tested on the large BioID frontal face database under different illuminations, expressions and lighting conditions, and the experimental results achieved an average success rate of 95.27%.
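The cumulative-histogram thresholding step can be sketched as follows: binarize a cropped feature region by keeping the fraction of darkest pixels selected from the cumulative histogram. The `fraction` parameter and helper name are illustrative; the paper varies the threshold adaptively rather than fixing one value:

```python
import numpy as np

def cumhist_filter(region, fraction):
    """Binarize an 8-bit grayscale region by keeping the darkest
    `fraction` of its pixels, chosen via the cumulative histogram."""
    hist = np.bincount(region.ravel(), minlength=256)
    cum = np.cumsum(hist) / region.size
    # smallest intensity whose cumulative share reaches the target fraction
    thresh = int(np.searchsorted(cum, fraction))
    return (region <= thresh).astype(np.uint8)
```

Connected components of the resulting binary mask then localize the dark structures (eyebrows, pupils, nostrils, lip line) within each cropped region.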

  11. Facial expression identification using 3D geometric features from Microsoft Kinect device

    Science.gov (United States)

    Han, Dongxu; Al Jawad, Naseer; Du, Hongbo

    2016-05-01

Facial expression identification is an important part of face recognition and closely related to emotion detection from face images. Various solutions have been proposed in the past using different types of cameras and features. The Microsoft Kinect device has been widely used for multimedia interactions and, more recently, increasingly deployed for supporting scientific investigations. This paper explores the effectiveness of using the device in identifying emotional facial expressions such as surprise, smile, sad, etc. and evaluates the usefulness of 3D data points on a face mesh structure obtained from the Kinect device. We present a distance-based geometric feature component derived from the distances between points on the face mesh and selected reference points in a single frame. The feature components extracted across a sequence of frames starting and ending with a neutral expression represent a whole expression. The feature vector eliminates the need for complex face-orientation correction, simplifying the feature extraction process and making it more efficient. We applied a kNN classifier that exploits a feature-component-based similarity measure following the principle of dynamic time warping to determine the closest neighbors. Preliminary tests on a small-scale database of different facial expressions show the promise of the newly developed features and the usefulness of the Kinect device in facial expression identification.
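The similarity measure following the principle of dynamic time warping can be sketched as the classic DTW recurrence. Here it is shown on scalar sequences with absolute difference as the local cost, whereas the paper applies it to per-frame feature components:

```python
def dtw_distance(a, b, dist=lambda x, y: abs(x - y)):
    """Classic dynamic-time-warping distance between two sequences,
    allowing non-linear alignment along the time axis."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(a[i - 1], b[j - 1])
            # best of insertion, deletion, and match moves
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

A kNN classifier can then rank training expressions by this distance and vote among the closest neighbors.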

  12. Intelligent Video Facial Recognition Technology Based on Features Fusion for Rail Transportation

    Institute of Scientific and Technical Information of China (English)

    沈海燕; 史宏; 冯云梅

    2011-01-01

    Considering the characteristics of passenger flow and the application requirements of rail transportation, this paper studies global, upper-face, and T-zone feature extraction, building on a detailed description of image pre-processing, face-space construction, and the eigenface recognition method. Recognition based on global features identifies a face by extracting its overall shape, and is easily affected by expression, pose, and occlusion. Recognition based on upper-face and T-zone features proves superior to the global approach in handling expression and occlusion problems, so the three feature types are fused with individual weights, and the paper proposes a facial recognition method based on features fusion. Pilot application at several stations on the Beijing-Shanghai high-speed line shows that this method effectively improves facial recognition accuracy.

  13. Facial Feature Extraction Using Frequency Map Series in PCNN

    Directory of Open Access Journals (Sweden)

    Rencan Nie

    2016-01-01

    Full Text Available Pulse coupled neural network (PCNN) has been widely used in image processing. The 3D binary map series (BMS) generated by PCNN effectively describes image feature information such as edges and regional distribution, so BMS can serve as the basis for extracting a 1D oscillation time series (OTS) for an image. However, traditional methods using BMS considered neither the correlation of the binary sequences in BMS nor the spatial structure of each map. By further processing BMS, a novel facial feature extraction method is proposed. Firstly, taking the correlation among maps in BMS into account, a method is put forward to transform BMS into a frequency map series (FMS), which lessens the influence of non-continuous feature regions in binary images on OTS-BMS. Then, by computing the 2D entropy of every map in FMS, the 3D FMS is transformed into a 1D OTS (OTS-FMS), which has good geometric invariance for facial images and contains the spatial structure information of the image. Finally, the standard Euclidean distance is used to measure distances between OTS-FMS. Experimental results verify the effectiveness of OTS-FMS in facial recognition, showing better recognition performance than other feature extraction methods.
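
A plain 2D image entropy of the kind used to collapse each map into one OTS value can be sketched as below, pairing each pixel with its 3x3 neighbourhood mean so that spatial structure enters the histogram. The bin count and toy maps are assumptions for illustration, not the paper's exact definition.

```python
import numpy as np

def entropy_2d(img, bins=16):
    """2D Shannon entropy over (pixel value, 3x3 neighbourhood mean) pairs."""
    img = np.asarray(img, dtype=float)
    padded = np.pad(img, 1, mode='edge')
    h, w = img.shape
    # mean of each pixel's 3x3 neighbourhood, via nine shifted windows
    nb = sum(padded[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    counts, _, _ = np.histogram2d(img.ravel(), nb.ravel(),
                                  bins=bins, range=[[0, 256], [0, 256]])
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum() + 0.0)   # +0.0 turns -0.0 into 0.0

flat = np.full((8, 8), 128, dtype=np.uint8)
checker = (np.indices((8, 8)).sum(axis=0) % 2) * 255
print(entropy_2d(flat))   # a constant map carries no information -> 0.0
```

Computing this entropy for each map in the series yields one scalar per map, i.e. the 1D oscillation time series; a structured map like `checker` scores strictly higher than `flat`.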

  14. Automatic Facial Expression Recognition Based on Hybrid Approach

    Directory of Open Access Journals (Sweden)

    Ali K. K. Bermani

    2012-12-01

    Full Text Available The topic of automatic recognition of facial expressions attracted many researchers in the late last century and has drawn increasing interest in the past few years. Several techniques have emerged to improve recognition efficiency by addressing problems in face detection and feature extraction. This paper proposes an automatic system for facial expression recognition whose feature extraction phase follows a hybrid approach, combining holistic and analytic methods to extract 307 facial expression features (19 geometric features and 288 appearance features). Expression recognition is performed by a radial basis function (RBF) artificial neural network that recognizes the six basic emotions (anger, fear, disgust, happiness, surprise, sadness) in addition to the neutral state. The system achieved a recognition rate of 97.08% on a person-dependent database and 93.98% on a person-independent one.
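
A minimal RBF-network classifier of the sort the abstract describes might look as below. The toy 2D data, the choice of class means as centers, and the least-squares fit of the output weights are illustrative assumptions, not the paper's 307-feature setup.

```python
import numpy as np

def rbf_design(X, centers, sigma):
    """Gaussian hidden-layer activations of an RBF network."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def train_rbf(X, y, centers, sigma, n_classes):
    """Fit the linear output weights by least squares against one-hot targets."""
    H = rbf_design(X, centers, sigma)
    T = np.eye(n_classes)[y]
    W, *_ = np.linalg.lstsq(H, T, rcond=None)
    return W

def predict_rbf(X, centers, sigma, W):
    return rbf_design(X, centers, sigma) @ W

# Toy stand-in for expression feature vectors: two well-separated classes in 2D.
X = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
y = np.array([0, 0, 1, 1])
centers = np.array([[0.0, 0.5], [5.0, 5.5]])   # one prototype per class
W = train_rbf(X, y, centers, sigma=1.0, n_classes=2)
labels = predict_rbf(X, centers, 1.0, W).argmax(axis=1)
```

In the full system the inputs would be the 307-dimensional hybrid feature vectors and there would be seven output classes (six emotions plus neutral).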

  15. Contactless measurement of muscles fatigue by tracking facial feature points in a video

    DEFF Research Database (Denmark)

    Irani, Ramin; Nasrollahi, Kamal; Moeslund, Thomas B.

    2014-01-01

    with such cases, in this paper we present a contactless method based on computer vision techniques to measure tiredness by detecting, tracking, and analyzing some facial feature points during the exercise. Experimental results on several test subjects and comparing them against ground truth data show...

  16. A Novel Feature Extraction Technique for Facial Expression Recognition

    OpenAIRE

    Mohammad Shahidul Islam; Surapong Auwatanamongkol

    2013-01-01

    This paper presents a new technique to extract a light-invariant local feature for facial expression recognition. It is not only robust to monotonic gray-scale changes caused by light variations but also very simple to perform, which makes it possible to analyze images in challenging real-time settings. The local feature for a pixel is computed by finding the direction of the neighboring pixel with a particular rank, in terms of gray-scale value, among all the neighboring pixels...
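
The rank-based direction feature can be sketched as follows, assuming an 8-neighbourhood indexed clockwise from the top-left; the indexing convention and the toy patch are assumptions for illustration, not the authors' exact encoding.

```python
import numpy as np

# The 8 neighbours, indexed clockwise starting at the top-left.
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]

def rank_direction(img, r, c, rank):
    """Direction code (0-7) of the neighbour holding `rank` by gray value.

    rank 0 is the darkest neighbour, rank 7 the brightest; ties are broken
    by neighbour index (stable sort), keeping the code deterministic.
    """
    values = [int(img[r + dr, c + dc]) for dr, dc in OFFSETS]
    order = np.argsort(values, kind='stable')   # ascending gray values
    return int(order[rank])

# Toy 3x3 patch: the neighbour to the right of the centre is the brightest.
img = np.array([[0, 1, 2],
                [6, 50, 100],
                [5, 4, 3]])
print(rank_direction(img, 1, 1, rank=7))   # brightest neighbour is at code 3
```

Because the code depends only on the ordering of the neighbours, any monotonic gray-scale change (as caused by uniform lighting shifts) leaves it unchanged, which is the source of the claimed light invariance.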

  17. Facial expression recognition based on differential texture features

    Institute of Scientific and Technical Information of China (English)

    夏海英; 徐鲁辉

    2015-01-01

    Considering the complex-background problem in automatic facial expression recognition, this paper proposes a novel method based on differential texture features, which can to some extent mask differences between individual faces while retaining expression information. First, a standard reference face model is selected, on which 55 landmark points are reasonably distributed according to the geometric information of the face; these points mainly lie on the eyes, eyebrows, nose, lips, and the expression-rich outer contour. Delaunay triangulation is then used to obtain the relative location information of these landmark points in the standard reference model. For a facial expression image, the 55 landmark points are first located by active shape model (ASM) tracking; then, using the relative location information from the triangulation together with texture mapping, the expression image is mapped onto the standard reference model, so that the neutral image (a face without expression information) and the non-neutral image (one of the six basic expressions) are both mapped into a frame of the same size. Their difference image is taken as the expression feature, called the DT (differential texture) feature. Experiments on mixed samples drawn from the JAFFE and CK facial expression databases show that the proposed method achieves good recognition rates for the six basic expressions, outperforms the traditional Gabor and LBP feature methods, and can be extended to expression recognition in dynamic images.

  18. FACIAL EXPRESSION CLASSIFICATION WITH HAAR FEATURES, GEOMETRIC FEATURES AND CUBIC BÉZIER CURVES

    OpenAIRE

    Kandemir, Rembiye; Özmen, Gonca

    2013-01-01

    Facial expressions are nonverbal communication channels for interacting with other people. Computer recognition of human emotions based on facial expression is an interesting and difficult problem. In this study, images were analyzed based on facial expressions in an attempt to identify different emotions, such as smile, surprise, sadness, fear, disgust, anger and neutral. In practice, the Viola-Jones face detector, which uses the AdaBoost algorithm, found the location of the face. Haar filt...

  19. Featural processing in recognition of emotional facial expressions.

    Science.gov (United States)

    Beaudry, Olivia; Roy-Charland, Annie; Perron, Melanie; Cormier, Isabelle; Tapp, Roxane

    2014-04-01

    The present study aimed to clarify the role played by the eye/brow and mouth areas in the recognition of the six basic emotions. In Experiment 1, accuracy was examined while participants viewed partial and full facial expressions; in Experiment 2, participants viewed full facial expressions while their eye movements were recorded. Recognition rates were consistent with previous research: happiness was highest and fear was lowest. The mouth and eye/brow areas were not equally important for the recognition of all emotions. More precisely, while the mouth was revealed to be important in the recognition of happiness and the eye/brow area of sadness, results are not as consistent for the other emotions. In Experiment 2, consistent with previous studies, the eyes/brows were fixated for longer periods than the mouth for all emotions. Again, variations occurred as a function of the emotions, the mouth having an important role in happiness and the eyes/brows in sadness. The general pattern of results for the other four emotions was inconsistent between the experiments as well as across different measures. The complexity of the results suggests that the recognition process of emotional facial expressions cannot be reduced to a simple feature processing or holistic processing for all emotions.

  20. Facial Expression Feature Extraction Based on the J-divergence Entropy of IMF

    Institute of Scientific and Technical Information of China (English)

    李茹; 张建伟

    2016-01-01

    Facial expression recognition refers to using computer technology, image processing, and machine vision to perform feature extraction, modeling, and expression classification on facial expression images or image sequences, so that a computer program can infer a person's psychological state from facial expression information. It is mainly divided into three stages: face detection, expression feature extraction, and expression classification. Among these, feature extraction and selection is the key step, and its quality directly affects the classification results. This paper proposes a facial expression feature extraction method based on the energy entropy of IMF analytic signals, applying the Hilbert-Huang transform to facial expression recognition. First, the Radon transform is applied to the expression image to obtain a facial expression signal, which is decomposed by empirical mode decomposition (EMD) into a series of intrinsic mode functions (IMFs). The Hilbert transform of each IMF yields its analytic signal, from which the instantaneous amplitude and instantaneous frequency are computed. The IMFs and the amplitudes of their analytic signals are taken as candidate features, and their energy discriminant entropy is computed; features with small within-class and large between-class discriminant entropy are selected as feature vectors for expression classification. PCA is used to reduce the dimensionality of the selected features, and a support vector machine (SVM) classifies the two expression classes.

  1. Men's preference for women's facial features: testing homogamy and the paternity uncertainty hypothesis.

    Directory of Open Access Journals (Sweden)

    Jeanne Bovet

    Full Text Available Male mate choice might be based on both absolute and relative strategies. Cues of female attractiveness are thus likely to reflect both fitness and reproductive potential, as well as compatibility with particular male phenotypes. In humans, absolute clues of fertility and indices of favorable developmental stability are generally associated with increased women's attractiveness. However, why men exhibit variable preferences remains less studied. Male mate choice might be influenced by uncertainty of paternity, a selective factor in species where the survival of the offspring depends on postnatal paternal care. For instance, in humans, a man might prefer a woman with recessive traits, thereby increasing the probability that his paternal traits will be visible in the child and ensuring paternity. Alternatively, attractiveness is hypothesized to be driven by self-resembling features (homogamy, which would reduce outbreeding depression. These hypotheses have been simultaneously evaluated for various facial traits using both real and artificial facial stimuli. The predicted preferences were then compared to realized mate choices using facial pictures from couples with at least 1 child. No evidence was found to support the paternity uncertainty hypothesis, as recessive features were not preferred by male raters. Conversely, preferences for self-resembling mates were found for several facial traits (hair and eye color, chin dimple, and thickness of lips and eyebrows. Moreover, realized homogamy for facial traits was also found in a sample of long-term mates. The advantages of homogamy in evolutionary terms are discussed.

  2. Wavelet-based Facial Expression Feature Extraction

    Institute of Scientific and Technical Information of China (English)

    张秀艳; 裴雷雷

    2011-01-01

    First, histogram equalization is used to enhance the overall image contrast and make image details clearer. The discrete cosine transform then reduces the image feature dimension, removes redundant information, and retains the important low-frequency information. Gabor wavelet transforms at selected scales and orientations are then used to extract facial expression features. Finally, experimental comparisons show that pre-processing the images in this way before the wavelet transform saves a large amount of computation time.
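
The pre-processing pipeline (histogram equalization followed by retaining low-frequency DCT coefficients) can be sketched as below; the 8x8 toy image and the number of retained coefficients are illustrative assumptions, and the Gabor stage is omitted for brevity.

```python
import numpy as np

def hist_equalize(img):
    """Global histogram equalization of an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist) / img.size
    return np.round(cdf[img] * 255).astype(np.uint8)

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are frequencies)."""
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C

def dct_lowfreq(img, keep):
    """2D DCT, keeping only the top-left keep x keep (low-frequency) block."""
    Cr, Cc = dct_matrix(img.shape[0]), dct_matrix(img.shape[1])
    coeffs = Cr @ img.astype(float) @ Cc.T
    return coeffs[:keep, :keep].ravel()

img = np.tile(np.arange(8, dtype=np.uint8) * 32, (8, 1))   # toy horizontal ramp
feat = dct_lowfreq(hist_equalize(img), keep=4)             # 16-dim low-frequency feature
```

Discarding the high-frequency DCT block is what shrinks the input handed to the Gabor stage, which is where the reported saving in computation time comes from.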

  3. Orientation-sensitivity to facial features explains the Thatcher illusion.

    Science.gov (United States)

    Psalta, Lilia; Young, Andrew W; Thompson, Peter; Andrews, Timothy J

    2014-10-09

    The Thatcher illusion provides a compelling example of the perceptual cost of face inversion. The Thatcher illusion is often thought to result from a disruption to the processing of spatial relations between face features. Here, we show the limitations of this account and instead demonstrate that the effect of inversion in the Thatcher illusion is better explained by a disruption to the processing of purely local facial features. Using a matching task, we found that participants were able to discriminate normal and Thatcherized versions of the same face when they were presented in an upright orientation, but not when the images were inverted. Next, we showed that the effect of inversion was also apparent when only the eye region or only the mouth region was visible. These results demonstrate that a key component of the Thatcher illusion is to be found in orientation-specific encoding of the expressive features (eyes and mouth) of the face.

  4. Quantification of Cranial Asymmetry in Infants by Facial Feature Extraction

    Institute of Scientific and Technical Information of China (English)

    Chun-Ming Chang; Wei-Cheng Li; Chung-Lin Huang; Pei-Yeh Chang

    2014-01-01

    In this paper, a facial feature extraction method is proposed to transform three-dimensional (3D) head images of infants with deformational plagiocephaly for assessment of asymmetry. The features of the 3D point clouds of an infant's cranium can be identified by local feature analysis and a two-phase k-means classification algorithm. The 3D images of infants with asymmetric crania can then be aligned to the same pose. The mirrored head model obtained from the symmetry plane is compared with the original model for the measurement of asymmetry. Numerical data of the cranial volume can be reviewed by a pediatrician to adjust the treatment plan. The system can also be used to demonstrate the treatment progress.
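
The mirror-and-compare step can be sketched as follows: reflect the point cloud across a candidate symmetry plane and score asymmetry by nearest-neighbour distances. The plane and toy cloud are assumptions for illustration, not the paper's k-means pipeline.

```python
import numpy as np

def reflect(points, n, d):
    """Reflect 3D points across the plane n . x = d (n must be unit length)."""
    n = np.asarray(n, dtype=float)
    signed = points @ n - d                      # signed distance to the plane
    return points - 2.0 * signed[:, None] * n[None, :]

def asymmetry_score(points, n, d):
    """Mean distance from each mirrored point to its nearest original point."""
    points = np.asarray(points, dtype=float)
    mirrored = reflect(points, n, d)
    dists = np.linalg.norm(mirrored[:, None, :] - points[None, :, :], axis=2)
    return float(dists.min(axis=1).mean())

# A toy cranium-like cloud, symmetric about the plane x = 0.
cloud = np.array([[-1.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
print(asymmetry_score(cloud, [1.0, 0.0, 0.0], 0.0))   # perfectly symmetric -> 0.0
```

A perfectly symmetric cranium scores 0; displacing any point off the mirror correspondence raises the score, giving a scalar a pediatrician could track across treatment.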

  5. An Improved Method of feature extraction technique for Facial Expression Recognition using Adaboost Neural Network

    OpenAIRE

    Aruna Bhadu; Dr. Vijay Kumar; Mr. Hardayal Singh Shekhawat; Rajbala Tokas

    2012-01-01

    The objective of this research is a comparative study of different feature extraction techniques for facial expression recognition and the development of an algorithm for feature extraction using an AdaBoost classifier to reduce the generalization error and improve performance by achieving a high recognition rate. For facial feature extraction, I will follow 2 different techniques: Discrete Cosine Transform and Wavelet Transform. Upon extraction of the facial expression information the feature vector is given t...

  6. Facial Expression Recognition Based on Feature Point Vector and Texture Deformation Energy Parameters

    Institute of Scientific and Technical Information of China (English)

    易积政; 毛峡; Ishizuka Mitsuru; 薛雨丽

    2013-01-01

    Facial expression recognition is a popular and difficult research field in human-computer interaction. In order to effectively remove the differences in expression features caused by individual differences, this paper first presents a feature point distance ratio coefficient based on feature point vectors, then introduces the concept of texture deformation energy parameters, and finally merges the two into a new expression feature for facial expression recognition. The proposed method is tested on the Cohn-Kanade database and the BHU facial expression database, and the experimental results show that its recognition rates increase by 4.5% and 3.9%, respectively, over existing methods.

  7. Facial Affect Recognition Using Regularized Discriminant Analysis-Based Algorithms

    Directory of Open Access Journals (Sweden)

    Cheng-Yuan Shih

    2010-01-01

    Full Text Available This paper presents a novel and effective method for facial expression recognition including happiness, disgust, fear, anger, sadness, surprise, and neutral state. The proposed method utilizes a regularized discriminant analysis-based boosting algorithm (RDAB with effective Gabor features to recognize the facial expressions. Entropy criterion is applied to select the effective Gabor feature which is a subset of informative and nonredundant Gabor features. The proposed RDAB algorithm uses RDA as a learner in the boosting algorithm. The RDA combines strengths of linear discriminant analysis (LDA and quadratic discriminant analysis (QDA. It solves the small sample size and ill-posed problems suffered from QDA and LDA through a regularization technique. Additionally, this study uses the particle swarm optimization (PSO algorithm to estimate optimal parameters in RDA. Experiment results demonstrate that our approach can accurately and robustly recognize facial expressions.

  8. Facial Affect Recognition Using Regularized Discriminant Analysis-Based Algorithms

    Science.gov (United States)

    Lee, Chien-Cheng; Huang, Shin-Sheng; Shih, Cheng-Yuan

    2010-12-01

    This paper presents a novel and effective method for facial expression recognition including happiness, disgust, fear, anger, sadness, surprise, and neutral state. The proposed method utilizes a regularized discriminant analysis-based boosting algorithm (RDAB) with effective Gabor features to recognize the facial expressions. Entropy criterion is applied to select the effective Gabor feature which is a subset of informative and nonredundant Gabor features. The proposed RDAB algorithm uses RDA as a learner in the boosting algorithm. The RDA combines strengths of linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA). It solves the small sample size and ill-posed problems suffered from QDA and LDA through a regularization technique. Additionally, this study uses the particle swarm optimization (PSO) algorithm to estimate optimal parameters in RDA. Experiment results demonstrate that our approach can accurately and robustly recognize facial expressions.
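
The RDA covariance regularization the abstract refers to, blending each class covariance (QDA) toward the pooled covariance (LDA) and then toward a scaled identity, following Friedman's formulation, can be sketched as below; the toy covariances are invented for illustration.

```python
import numpy as np

def rda_covariance(S_k, S_pooled, lam, gamma):
    """Friedman-style RDA covariance: blend the class covariance S_k toward
    the pooled covariance (lambda), then toward a scaled identity (gamma).

    lam = 0, gamma = 0 recovers QDA; lam = 1, gamma = 0 recovers LDA.
    The identity shrinkage keeps the matrix well-conditioned when the
    sample size is small relative to the dimension.
    """
    p = S_k.shape[0]
    S = (1.0 - lam) * S_k + lam * S_pooled
    return (1.0 - gamma) * S + gamma * (np.trace(S) / p) * np.eye(p)

# Toy 2x2 covariances for illustration.
S_k = np.array([[2.0, 0.0], [0.0, 1.0]])
S_pooled = np.eye(2)
S_half = rda_covariance(S_k, S_pooled, lam=0.5, gamma=0.0)   # halfway QDA -> LDA
```

In the paper, the pair (lambda, gamma) is tuned by particle swarm optimization rather than set by hand.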

  9. Facial Beautification Method Based on Age Evolution

    Institute of Scientific and Technical Information of China (English)

    CHEN Yan; DING Shou-hong; HU Gan-le; MA Li-zhuang

    2013-01-01

    This paper proposes a new facial beautification method using facial rejuvenation based on age evolution. Traditional facial beautification methods focus only on skin color and deformation, performing the transformation according to an experimental standard of beauty. Our method achieves the beautifying effect by making the facial image look younger, which differs from traditional methods and is more reasonable. Firstly, we decompose the image into different layers and obtain a detail layer. Secondly, we compute an age-related parameter, the standard deviation of the Gaussian distribution that the detail layer follows, and use support vector machine (SVM) regression to fit a function relating age to this standard deviation. Thirdly, we use this function to estimate the age of the input image and generate a new detail layer with a new standard deviation, calculated by decreasing the age. Lastly, we combine the original layers and the new detail layer to get a new face image. Experimental results show that this algorithm can make facial images more beautiful through facial rejuvenation. The proposed method opens up a new direction in facial beautification and has great potential for applications.

  10. Contribution of Facial Feature Dimensions and Velocity Parameters on Particle Inhalability

    OpenAIRE

    Anthony, T. Renée

    2010-01-01

    To examine whether the actual dimensions of human facial features are important to the development of a low-velocity inhalable particulate mass sampling criterion, this study evaluated the effect of facial feature dimensions (nose and lips) on estimates of aspiration efficiency of inhalable particles using computational fluid dynamics modeling over a range of indoor air and breathing velocities. Fluid flow and particle transport around four humanoid forms with different facial feature dimensi...

  11. Implicit binding of facial features during change blindness.

    Directory of Open Access Journals (Sweden)

    Pessi Lyyra

    Full Text Available Change blindness refers to the inability to detect visual changes if introduced together with an eye-movement, blink, flash of light, or with distracting stimuli. Evidence of implicit detection of changed visual features during change blindness has been reported in a number of studies using both behavioral and neurophysiological measurements. However, it is not known whether implicit detection occurs only at the level of single features or whether complex organizations of features can be implicitly detected as well. We tested this in adult humans using intact and scrambled versions of schematic faces as stimuli in a change blindness paradigm while recording event-related potentials (ERPs. An enlargement of the face-sensitive N170 ERP component was observed at the right temporal electrode site to changes from scrambled to intact faces, even if the participants were not consciously able to report such changes (change blindness. Similarly, the disintegration of an intact face to scrambled features resulted in attenuated N170 responses during change blindness. Other ERP deflections were modulated by changes, but unlike the N170 component, they were indifferent to the direction of the change. The bidirectional modulation of the N170 component during change blindness suggests that implicit change detection can also occur at the level of complex features in the case of facial stimuli.

  12. Implicit binding of facial features during change blindness.

    Science.gov (United States)

    Lyyra, Pessi; Mäkelä, Hanna; Hietanen, Jari K; Astikainen, Piia

    2014-01-01

    Change blindness refers to the inability to detect visual changes if introduced together with an eye-movement, blink, flash of light, or with distracting stimuli. Evidence of implicit detection of changed visual features during change blindness has been reported in a number of studies using both behavioral and neurophysiological measurements. However, it is not known whether implicit detection occurs only at the level of single features or whether complex organizations of features can be implicitly detected as well. We tested this in adult humans using intact and scrambled versions of schematic faces as stimuli in a change blindness paradigm while recording event-related potentials (ERPs). An enlargement of the face-sensitive N170 ERP component was observed at the right temporal electrode site to changes from scrambled to intact faces, even if the participants were not consciously able to report such changes (change blindness). Similarly, the disintegration of an intact face to scrambled features resulted in attenuated N170 responses during change blindness. Other ERP deflections were modulated by changes, but unlike the N170 component, they were indifferent to the direction of the change. The bidirectional modulation of the N170 component during change blindness suggests that implicit change detection can also occur at the level of complex features in the case of facial stimuli. PMID:24498165

  13. Alexithymic features and automatic amygdala reactivity to facial emotion.

    Science.gov (United States)

    Kugel, Harald; Eichmann, Mischa; Dannlowski, Udo; Ohrmann, Patricia; Bauer, Jochen; Arolt, Volker; Heindel, Walter; Suslow, Thomas

    2008-04-11

    Alexithymic individuals have difficulties in identifying and verbalizing their emotions. The amygdala is known to play a central role in processing emotion stimuli and in generating emotional experience. In the present study automatic amygdala reactivity to facial emotion was investigated as a function of alexithymia (as assessed by the 20-Item Toronto Alexithymia Scale). The Beck-Depression Inventory (BDI) and the State-Trait-Anxiety Inventory (STAI) were administered to measure participants' depressivity and trait anxiety. During 3T fMRI scanning, pictures of faces bearing sad, happy, and neutral expressions masked by neutral faces were presented to 21 healthy volunteers. The amygdala was selected as the region of interest (ROI) and voxel values of the ROI were extracted, summarized by mean and tested among the different conditions. A detection task was applied to assess participants' awareness of the masked emotional faces shown in the fMRI experiment. Masked sad and happy facial emotions led to greater right amygdala activation than masked neutral faces. The alexithymia feature difficulties identifying feelings was negatively correlated with the neural response of the right amygdala to masked sad faces, even when controlling for depressivity and anxiety. Reduced automatic amygdala responsivity may contribute to problems in identifying one's emotions in everyday life. Low spontaneous reactivity of the amygdala to sad faces could implicate less engagement in the encoding of negative emotional stimuli. PMID:18314269

  14. High-resolution face verification using pore-scale facial features.

    Science.gov (United States)

    Li, Dong; Zhou, Huiling; Lam, Kin-Man

    2015-08-01

    Face recognition methods, which usually represent face images using holistic or local facial features, rely heavily on alignment. Their performance also suffers severe degradation under variations in expression or pose, especially when there is only one gallery image per subject. With the easy access to high-resolution (HR) face images nowadays, some HR face databases have recently been developed. However, few studies have tackled the use of HR information for face recognition or verification. In this paper, we propose a pose-invariant face-verification method, robust to alignment errors, that uses HR information based on pore-scale facial features. A new keypoint descriptor, namely pore-Principal Component Analysis (PCA)-Scale Invariant Feature Transform (PPCASIFT), adapted from PCA-SIFT, is devised for the extraction of a compact set of distinctive pore-scale facial features. Having matched the pore-scale features of two face regions, an effective robust-fitting scheme is proposed for the face-verification task. Experiments show that, with only one frontal-view gallery image per subject, our proposed method outperforms a number of standard verification methods and can achieve excellent accuracy even when the faces are under large variations in expression and pose. PMID:25781876

  15. Facial expression recognition based on multi-feature fusion by multi-kernel SVM

    Institute of Scientific and Technical Information of China (English)

    钟志鹏; 张立保

    2015-01-01

    Traditional facial expression recognition based on texture features uses a single texture feature to build a single-kernel support vector machine (SVM) for expression classification, which inevitably loses expression feature information and lowers the recognition rate; yet too many features bring redundancy and over-fitting, which also lowers the recognition rate. To address these shortcomings of traditional methods, a facial expression recognition method based on multi-feature fusion by multi-kernel SVM is presented. Three features are extracted: Gabor texture features, gray-level histogram features, and LBP texture features, each reduced in dimensionality by principal component analysis (PCA). In multi-kernel SVM training, a feature fusion model based on a combination of radial basis function (RBF) kernels is used to find an optimal set of feature combination coefficients and to build the fused kernel for expression classification. The method makes fuller use of the useful features in expression images than a single feature can, while avoiding the over-fitting caused by irrelevant and redundant features. Experiments on a database of students' in-class expressions show a recognition rate of 88%, better than the 80% of the traditional method.
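
The kernel-combination model can be sketched as a weighted sum of per-feature RBF kernels; the feature matrices, weights `betas`, and widths `gammas` below are illustrative assumptions (in the paper, the combination coefficients are optimized during multi-kernel training rather than fixed).

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    """Gaussian (RBF) kernel matrix between two sample sets."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def combined_kernel(feats_a, feats_b, betas, gammas):
    """Weighted sum of per-feature RBF kernels (betas >= 0, summing to 1)."""
    return sum(b * rbf_kernel(Fa, Fb, g)
               for b, g, Fa, Fb in zip(betas, gammas, feats_a, feats_b))

rng = np.random.default_rng(0)
gabor_like = rng.random((3, 5))   # stand-in for PCA-reduced Gabor features
lbp_like = rng.random((3, 8))     # stand-in for PCA-reduced LBP features
K = combined_kernel([gabor_like, lbp_like], [gabor_like, lbp_like],
                    betas=[0.6, 0.4], gammas=[0.5, 0.5])
```

Because each base kernel is positive semidefinite and the weights are non-negative, the combined `K` is a valid SVM kernel, so it can be handed directly to any kernel SVM solver.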

  16. Constraint-based facial animation

    NARCIS (Netherlands)

    Ruttkay, Z.M.

    1999-01-01

    Constraints have been traditionally used for computer animation applications to define side conditions for generating synthesized motion according to a standard, usually physically realistic, set of motion equations. The case of facial animation is very different, as no set of motion equations for f

  17. An Improved AAM Method for Extracting Human Facial Features

    Directory of Open Access Journals (Sweden)

    Tao Zhou

    2012-01-01

    Full Text Available The active appearance model (AAM) is a statistical parametric model widely used for human facial feature extraction and recognition. However, the intensity values used in the original AAM cannot provide enough information about image texture, which leads to larger errors or failed AAM fittings. To overcome these defects and improve the fitting performance of the AAM, an improved texture representation is proposed in this paper. Firstly, a translation-invariant wavelet transform is performed on face images, and the image structure is then represented using a measure obtained by fusing the low-frequency coefficients with edge intensity. Experimental results show that the improved algorithm increases the accuracy of AAM fitting and expresses more information about edge and texture structures.

  18. New technique based on probabilistic model for facial features location

    Institute of Scientific and Technical Information of China (English)

    彭小宁; 邹北骥; 王磊; 罗平

    2009-01-01

    This paper describes a novel technique called the analytic boosted cascade detector (ABCD) to automatically locate features on the human face. ABCD extends the original boosted cascade detector (BCD) in three ways: a) a probabilistic model connects the classifier responses with the facial features; b) a feature location method is formulated based on this probabilistic model; c) two selection criteria for face candidates are presented. The new technique merges face detection and facial feature location into a unified process. It outperforms average positions (AVG) and boosted classifiers + best response (BestHit), and also shows a clear speed advantage over methods based on nonlinear optimization, e.g. AAM and SOS.

  19. Microanatomy and Histological Features of Central Myelin in the Root Exit Zone of Facial Nerve

    OpenAIRE

    Yee, Gi-Taek; Yoo, Chan-Jong; Han, Seong-Rok; Choi, Chan-Young

    2014-01-01

    Objective The aim of this study was to evaluate the microanatomy and histological features of the central myelin in the root exit zone of facial nerve. Methods Forty facial nerves with brain stem were obtained from 20 formalin fixed cadavers. Among them 17 facial nerves were ruined during preparation and 23 root entry zone (REZ) of facial nerves could be examined. The length of medial REZ, from detach point of facial nerve at the brain stem to transitional area, and the thickness of glial mem...

  20. Facial attractiveness: evolutionary based research

    OpenAIRE

    Little, Anthony C.; Jones, Benedict C.; DeBruine, Lisa M

    2011-01-01

    Face preferences affect a diverse range of critical social outcomes, from mate choices and decisions about platonic relationships to hiring decisions and decisions about social exchange. Firstly, we review the facial characteristics that influence attractiveness judgements of faces (e.g. symmetry, sexually dimorphic shape cues, averageness, skin colour/texture and cues to personality) and then review several important sources of individual differences in face preferences (e.g. hormone levels ...

  1. Facial expression recognition based on improved parallel feature fusion

    Institute of Scientific and Technical Information of China (English)

    罗飞; 王国胤; 杨勇; 李振静

    2009-01-01

    An improved maximum scatter difference discriminant criterion method based on information fusion theory is proposed for expression recognition. First, feature vectors from different representations are combined into complex feature vectors. Then, discriminative features are extracted from the complex vectors using a maximum scatter difference discriminant criterion with different weights. Experiments with different sample sets and feature types show the efficiency of the method, which optimizes the projection axes while avoiding the "small sample size" problem; the proposed feature fusion further improves the correct recognition rate.
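    The two steps the abstract names, complex-vector parallel fusion and the maximum scatter difference (MSD) criterion, can be sketched as follows. The fusion weights `a`, `b` and the balance constant `c` are placeholders, not the paper's settings:

```python
import numpy as np

def complex_fuse(X1, X2, a=1.0, b=1.0):
    """Parallel feature fusion: combine two real feature sets into
    complex vectors z = a*x + i*b*y (weights a, b are illustrative)."""
    return a * X1 + 1j * b * X2

def msd_projection(Z, labels, c=1.0, dim=1):
    """Maximum scatter difference criterion J(w) = w^H (Sb - c*Sw) w;
    the optimal projections are top eigenvectors of Sb - c*Sw."""
    mu = Z.mean(axis=0)
    d = Z.shape[1]
    Sb = np.zeros((d, d), complex)  # between-class scatter
    Sw = np.zeros((d, d), complex)  # within-class scatter
    for k in np.unique(labels):
        Zk = Z[labels == k]
        mk = Zk.mean(axis=0)
        diff = (mk - mu)[:, None]
        Sb += len(Zk) * diff @ diff.conj().T
        Dk = Zk - mk
        Sw += Dk.conj().T @ Dk
    vals, vecs = np.linalg.eigh(Sb - c * Sw)  # Hermitian matrix
    return vecs[:, np.argsort(vals)[::-1][:dim]]
```

Because the criterion is a difference rather than a ratio of scatters, no within-class scatter matrix has to be inverted, which is how this family of methods sidesteps the small-sample-size problem.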

  2. Interpretation of appearance: the effect of facial features on first impressions and personality

    DEFF Research Database (Denmark)

    Wolffhechel, Karin Marie Brandt; Fagertun, Jens; Jacobsen, Ulrik Plesner;

    2014-01-01

    Appearance is known to influence social interactions, which in turn could potentially influence personality development. In this study we focus on discovering the relationship between self-reported personality traits, first impressions and facial characteristics. The results reveal that several personality traits can be read above chance from a face, and that facial features influence first impressions. Despite the former, our prediction model fails to reliably infer personality traits from either facial features or first impressions. First impressions, however, could be inferred more reliably from facial features. We have generated artificial, extreme faces visualising the characteristics having an effect on first impressions for several traits. Conclusively, we find a relationship between first impressions, some personality traits and facial features and consolidate that people on average assess...

  3. Facial attractiveness: evolutionary based research.

    Science.gov (United States)

    Little, Anthony C; Jones, Benedict C; DeBruine, Lisa M

    2011-06-12

    Face preferences affect a diverse range of critical social outcomes, from mate choices and decisions about platonic relationships to hiring decisions and decisions about social exchange. Firstly, we review the facial characteristics that influence attractiveness judgements of faces (e.g. symmetry, sexually dimorphic shape cues, averageness, skin colour/texture and cues to personality) and then review several important sources of individual differences in face preferences (e.g. hormone levels and fertility, own attractiveness and personality, visual experience, familiarity and imprinting, social learning). The research relating to these issues highlights flexible, sophisticated systems that support and promote adaptive responses to faces that appear to function to maximize the benefits of both our mate choices and more general decisions about other types of social partners. PMID:21536551

  4. Robust Facial Feature Tracking Using Shape-Constrained Multi-Resolution Selected Linear Predictors.

    OpenAIRE

    Ong, EJ; Bowden, R.

    2011-01-01

    This paper proposes a learnt data-driven approach for accurate, real-time tracking of facial features using only intensity information, a non-trivial task since the face is a highly deformable object with large textural variations and motion in certain regions. The framework proposed here largely avoids the need for a priori design of feature trackers by automatically identifying the optimal visual support required for tracking a single facial feature point. This is essentially equivalen...

  5. A Similarity Measurement Method of Facial Expression Based on Geometric Features

    Institute of Scientific and Technical Information of China (English)

    黄忠; 胡敏; 王晓华

    2015-01-01

    In facial animation applications such as performance-driven animation and expression cloning, the most similar expression must be found to enhance the realism and fidelity of the animation. A feature-weighted expression similarity measurement method based on facial geometric features is proposed. First, on top of an active appearance model, chain codes are used to describe the shape features of each local region, capturing local expression detail, while deformation features built from the topological relations among regional feature points reflect holistic expression information. Then, a feature-weighted scheme is adopted to measure the similarity of the fused geometric features, and the solution of the weights is cast as the minimization of a weighted objective function. Finally, the solved weights and the feature-weighting functions are used to measure the similarity between expressions and to retrieve the image most similar to an input expression image. Experimental results on the BU-3DFE and FEEDTUM databases show that the proposed method finds similar expressions with markedly higher accuracy than existing measures, remains robust across expression types and intensities, and preserves high similarity in expression details such as mouth shape, cheek contraction, and the degree of mouth opening.
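    The three ingredients, chain codes for local shape, pairwise-distance deformation features for global structure, and a feature-weighted similarity, can be illustrated with a minimal sketch. The per-feature similarity function and the fixed weights below are illustrative; the paper solves the weights by minimizing a weighted objective function:

```python
import numpy as np

def chain_code(points):
    """8-direction Freeman chain code of an ordered contour
    (local shape feature). Assumes an 8-connected point sequence."""
    dirs = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
            (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}
    code = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        step = (int(np.sign(x1 - x0)), int(np.sign(y1 - y0)))
        code.append(dirs[step])
    return code

def deformation_features(landmarks):
    """Pairwise distances among feature points (global deformation
    feature reflecting the topology of the landmark set)."""
    lm = np.asarray(landmarks, float)
    d = np.linalg.norm(lm[:, None, :] - lm[None, :, :], axis=-1)
    return d[np.triu_indices(len(lm), 1)]

def weighted_similarity(f1, f2, weights):
    """Feature-weighted similarity; the weights would come from the
    paper's objective minimization (here they are simply given)."""
    f1, f2, w = map(np.asarray, (f1, f2, weights))
    sims = 1.0 / (1.0 + np.abs(f1 - f2))  # hypothetical per-feature similarity
    return float(np.dot(w, sims) / w.sum())
```

Retrieval then amounts to scoring a query expression against every candidate image and keeping the highest-scoring one.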

  6. Contributions of feature shapes and surface cues to the recognition of facial expressions.

    Science.gov (United States)

    Sormaz, Mladen; Young, Andrew W; Andrews, Timothy J

    2016-10-01

    Theoretical accounts of face processing often emphasise feature shapes as the primary visual cue to the recognition of facial expressions. However, changes in facial expression also affect the surface properties of the face. In this study, we investigated whether this surface information can also be used in the recognition of facial expression. First, participants identified facial expressions (fear, anger, disgust, sadness, happiness) from images that were manipulated such that they varied mainly in shape or mainly in surface properties. We found that the categorization of facial expression is possible in either type of image, but that different expressions are relatively dependent on surface or shape properties. Next, we investigated the relative contributions of shape and surface information to the categorization of facial expressions. This employed a complementary method that involved combining the surface properties of one expression with the shape properties from a different expression. Our results showed that the categorization of facial expressions in these hybrid images was equally dependent on the surface and shape properties of the image. Together, these findings provide a direct demonstration that both feature shape and surface information make significant contributions to the recognition of facial expressions. PMID:27425385

  7. Facial biometrics based on 2D vector geometry

    Science.gov (United States)

    Malek, Obaidul; Venetsanopoulos, Anastasios; Androutsos, Dimitrios

    2014-05-01

    The main challenge of facial biometrics is its robustness and ability to adapt to changes in position, orientation, facial expression, and illumination. This research addresses the predominant deficiencies in this regard and systematically investigates a facial authentication system in the Euclidean domain. In the proposed method, Euclidean geometry in 2D vector space is used for feature extraction and authentication. In particular, each assigned point of a candidate's biometric features is treated as a 2D geometric coordinate in the Euclidean vector space. Algebraic shapes of the extracted candidate features are also computed and compared. The proposed authentication method is tested on images from the public "Put Face Database". Performance is evaluated using the Correct Recognition Rate (CRR), False Acceptance Rate (FAR), and False Rejection Rate (FRR). The theoretical foundation of the proposed method and the experimental results are presented in this paper; the results demonstrate the effectiveness of the proposed method.
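    The core comparison, treating each biometric feature point as a 2D coordinate and matching a probe against an enrolled template, can be sketched as below. The mean-distance score and the acceptance threshold are illustrative stand-ins for the paper's full vector-geometry comparison:

```python
import numpy as np

def authenticate(probe, gallery, threshold=5.0):
    """Compare probe landmarks against an enrolled template by mean
    Euclidean distance between corresponding 2D points. The threshold
    is a hypothetical operating point; in practice it would be tuned
    on the FAR/FRR trade-off."""
    probe = np.asarray(probe, float)
    gallery = np.asarray(gallery, float)
    dist = np.linalg.norm(probe - gallery, axis=1).mean()
    return dist <= threshold, dist
```

Sweeping the threshold over a validation set traces out the FAR/FRR curve from which CRR is reported.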

  8. Effect of Different Occlusion on Facial Expressions Recognition

    OpenAIRE

    Ankita Vyas; Ramchand Hablani

    2014-01-01

    Occlusions around facial parts complicate the task of recognizing facial expressions from facial images. We propose a facial expression recognition method based on local facial regions, which provides a better recognition rate in the presence of facial occlusions. The proposed method uses the Uniform Local Binary Pattern as a feature extractor, which extracts discriminative features from important parts of the facial image. Feature vectors are classified using simplest classifier th...

  9. Extraction of Subject-Specific Facial Expression Categories and Generation of Facial Expression Feature Space using Self-Mapping

    Directory of Open Access Journals (Sweden)

    Masaki Ishii

    2008-06-01

    Full Text Available This paper proposes a generation method for a subject-specific Facial Expression Map (FEMap) using the Self-Organizing Map (SOM) of unsupervised learning and Counter Propagation Networks (CPN) of supervised learning together. The proposed method consists of two steps. In the first step, the topological change of a face pattern over the course of a facial expression is learned hierarchically using a SOM with a narrow mapping space, and the number of subject-specific facial expression categories and the representative images of each category are extracted. Psychological significance based on the neutral state and six basic emotions (anger, sadness, disgust, happiness, surprise, and fear) is assigned to each extracted category. In the second step, the categories and representative images described above are learned using a CPN with a large mapping space, and a category map that expresses the topological characteristics of facial expression is generated. This paper defines this category map as an FEMap. Experimental results for six subjects show that the proposed method can generate a subject-specific FEMap based on the topological characteristics of facial expressions appearing in face images.
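    The first (SOM) stage can be illustrated with a minimal self-organizing map. The grid size, learning rate, neighbourhood width and epoch count below are illustrative defaults, not the paper's settings:

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=30, lr0=0.5, sigma0=1.5, seed=0):
    """Minimal Self-Organizing Map: for each sample, find the
    best-matching unit (BMU) and pull it and its grid neighbours
    toward the sample, with decaying rate and neighbourhood."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(size=(h, w, data.shape[1]))
    coords = np.dstack(np.meshgrid(np.arange(h), np.arange(w), indexing='ij'))
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)
        sigma = sigma0 * (1 - t / epochs) + 1e-3
        for x in data:
            dist = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(dist), (h, w))
            # Gaussian neighbourhood on the map grid.
            g = np.exp(-np.sum((coords - bmu) ** 2, axis=2) / (2 * sigma ** 2))
            weights += lr * g[..., None] * (x - weights)
    return weights
```

In the paper's pipeline, clusters of BMUs on a narrow map would define the subject-specific expression categories, whose representatives are then relearned by a CPN on a larger grid.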

  10. Facial animation on an anatomy-based hierarchical face model

    Science.gov (United States)

    Zhang, Yu; Prakash, Edmond C.; Sung, Eric

    2003-04-01

    In this paper we propose a new hierarchical 3D facial model based on anatomical knowledge that provides high fidelity for realistic facial expression animation. Like a real human face, the facial model has a hierarchical biomechanical structure, incorporating a physically-based approximation to facial skin tissue, a set of anatomically-motivated facial muscle actuators and an underlying skull structure. The deformable skin model has a multi-layer structure to approximate different types of soft tissue. It takes into account the nonlinear stress-strain relationship of the skin and the fact that soft tissue is almost incompressible. Different types of muscle models have been developed to simulate the distribution of muscle force on the skin due to muscle contraction. Thanks to the skull model, our facial model achieves both more accurate facial deformation and proper consideration of facial anatomy during the interactive definition of facial muscles. Under the muscular forces, the deformation of the facial skin is evaluated by numerical integration of the governing dynamic equations. The dynamic facial animation algorithm runs at interactive rates and generates flexible, realistic facial expressions.
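    The "numerical integration of the governing dynamic equations" can be illustrated with a toy one-dimensional chain of skin nodes connected by linear springs, integrated with explicit Euler. The paper's model is multi-layer and nonlinear; everything here (linear springs, single layer, the constants) is a simplification for illustration:

```python
import numpy as np

def step(pos, vel, rest, k, c, mass, muscle_force, dt=0.01):
    """One explicit-Euler step for a chain of skin nodes joined by
    springs of stiffness k and rest length `rest`, with damping c and
    an external per-node muscle force. Assumes no two adjacent nodes
    coincide (non-zero spring length)."""
    force = np.zeros_like(pos) + muscle_force
    for i in range(len(pos) - 1):
        d = pos[i + 1] - pos[i]
        L = np.linalg.norm(d)
        f = k * (L - rest) * d / L   # Hooke's law along the spring axis
        force[i] += f
        force[i + 1] -= f
    force -= c * vel                  # viscous damping
    acc = force / mass
    vel = vel + dt * acc
    pos = pos + dt * vel
    return pos, vel
```

A production system would use a stabler integrator (e.g., semi-implicit Euler or Runge-Kutta) and the layered, nonlinear tissue model the abstract describes.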

  11. De Novo Mutation in ABCC9 Causes Hypertrichosis Acromegaloid Facial Features Disorder.

    Science.gov (United States)

    Afifi, Hanan H; Abdel-Hamid, Mohamed S; Eid, Maha M; Mostafa, Inas S; Abdel-Salam, Ghada M H

    2016-01-01

    A 13-year-old Egyptian girl with generalized hypertrichosis, gingival hyperplasia, coarse facial appearance, no cardiovascular or skeletal anomalies, keloid formation, and multiple labial frenula was referred to our clinic for counseling. Molecular analysis of the ABCC9 gene showed a de novo missense mutation located in exon 27, which has been described previously with Cantu syndrome. An overlap between Cantu syndrome, acromegaloid facial syndrome, and hypertrichosis acromegaloid facial features disorder is apparent at the phenotypic and molecular levels. The patient reported here gives further evidence that these syndromes are an expression of the ABCC9-related disorders, ranging from hypertrichosis and acromegaloid facies to the severe end of Cantu syndrome.

  12. Facial Features: What Women Perceive as Attractive and What Men Consider Attractive.

    Science.gov (United States)

    Muñoz-Reyes, José Antonio; Iglesias-Julios, Marta; Pita, Miguel; Turiegano, Enrique

    2015-01-01

    Attractiveness plays an important role in social exchange and in the ability to attract potential mates, especially for women. Several facial traits have been described as reliable indicators of attractiveness in women, but very few studies consider the influence of several measurements simultaneously. In addition, most studies consider just one of two assessments to directly measure attractiveness: either self-evaluation or men's ratings. We explored the relationship between these two estimators of attractiveness and a set of facial traits in a sample of 266 young Spanish women. These traits are: facial fluctuating asymmetry, facial averageness, facial sexual dimorphism, and facial maturity. We made use of the advantage of having recently developed methodologies that enabled us to measure these variables in real faces. We also controlled for three other widely used variables: age, body mass index and waist-to-hip ratio. The inclusion of many different variables allowed us to detect any possible interaction between the features described that could affect attractiveness perception. Our results show that facial fluctuating asymmetry is related both to self-perceived and male-rated attractiveness. Other facial traits are related only to one direct attractiveness measurement: facial averageness and facial maturity only affect men's ratings. Unmodified faces are closer to natural stimuli than are manipulated photographs, and therefore our results support the importance of employing unmodified faces to analyse the factors affecting attractiveness. We also discuss the relatively low equivalence between self-perceived and male-rated attractiveness and how various anthropometric traits are relevant to them in different ways. Finally, we highlight the need to perform integrated-variable studies to fully understand female attractiveness. PMID:26161954

  14. Facial Features: What Women Perceive as Attractive and What Men Consider Attractive.

    Directory of Open Access Journals (Sweden)

    José Antonio Muñoz-Reyes

    Full Text Available Attractiveness plays an important role in social exchange and in the ability to attract potential mates, especially for women. Several facial traits have been described as reliable indicators of attractiveness in women, but very few studies consider the influence of several measurements simultaneously. In addition, most studies consider just one of two assessments to directly measure attractiveness: either self-evaluation or men's ratings. We explored the relationship between these two estimators of attractiveness and a set of facial traits in a sample of 266 young Spanish women. These traits are: facial fluctuating asymmetry, facial averageness, facial sexual dimorphism, and facial maturity. We made use of the advantage of having recently developed methodologies that enabled us to measure these variables in real faces. We also controlled for three other widely used variables: age, body mass index and waist-to-hip ratio. The inclusion of many different variables allowed us to detect any possible interaction between the features described that could affect attractiveness perception. Our results show that facial fluctuating asymmetry is related both to self-perceived and male-rated attractiveness. Other facial traits are related only to one direct attractiveness measurement: facial averageness and facial maturity only affect men's ratings. Unmodified faces are closer to natural stimuli than are manipulated photographs, and therefore our results support the importance of employing unmodified faces to analyse the factors affecting attractiveness. We also discuss the relatively low equivalence between self-perceived and male-rated attractiveness and how various anthropometric traits are relevant to them in different ways. Finally, we highlight the need to perform integrated-variable studies to fully understand female attractiveness.

  15. Perceptually Valid Facial Expressions for Character-Based Applications

    Directory of Open Access Journals (Sweden)

    Ali Arya

    2009-01-01

    Full Text Available This paper addresses the problem of creating facial expressions of mixed emotions in a perceptually valid way. The research has been done in the context of “game-like” health and education applications aimed at studying social competency and facial expression awareness in autistic children, as well as native language learning, but the results can be applied to many other applications, such as games that need dynamic facial expressions or tools for automating the creation of facial animations. Most existing methods for creating facial expressions of mixed emotions use operations like averaging to create the combined effect of two universal emotions. Such methods may be mathematically justifiable but are not necessarily valid from a perceptual point of view. The research reported here starts with user experiments aimed at understanding how people combine facial actions to express mixed emotions, and how viewers perceive a set of facial actions in terms of underlying emotions. Using the results of these experiments and a three-dimensional emotion model, we associate facial actions with dimensions and regions in the emotion space, and create a facial expression based on the location of the mixed emotion in the three-dimensional space. We call these regionalized facial actions “facial expression units.”

  16. Faces in-between: evaluations reflect the interplay of facial features and task-dependent fluency.

    Science.gov (United States)

    Winkielman, Piotr; Olszanowski, Michal; Gola, Mateusz

    2015-04-01

    Facial features influence social evaluations. For example, faces are rated as more attractive and trustworthy when they have more smiling features and also more female features. However, the influence of facial features on evaluations should be qualified by the affective consequences of fluency (cognitive ease) with which such features are processed. Further, fluency (along with its affective consequences) should depend on whether the current task highlights conflict between specific features. Four experiments are presented. In 3 experiments, participants saw faces varying in expressions ranging from pure anger, through mixed expression, to pure happiness. Perceivers first categorized faces either on a control dimension, or an emotional dimension (angry/happy). Thus, the emotional categorization task made "pure" expressions fluent and "mixed" expressions disfluent. Next, participants made social evaluations. Results show that after emotional categorization, but not control categorization, targets with mixed expressions are relatively devalued. Further, this effect is mediated by categorization disfluency. Additional data from facial electromyography reveal that on a basic physiological level, affective devaluation of mixed expressions is driven by their objective ambiguity. The fourth experiment shows that the relative devaluation of mixed faces that vary in gender ambiguity requires a gender categorization task. Overall, these studies highlight that the impact of facial features on evaluation is qualified by their fluency, and that the fluency of features is a function of the current task. The discussion highlights the implications of these findings for research on emotional reactions to ambiguity. PMID:25642724

  17. Scattered Data Processing Approach Based on Optical Facial Motion Capture

    OpenAIRE

    Qiang Zhang; Xiaoying Liang; Xiaopeng Wei

    2013-01-01

    In recent years, animation reconstruction of facial expressions has become a popular research field in computer science and motion capture-based facial expression reconstruction is now emerging in this field. Based on the facial motion data obtained using a passive optical motion capture system, we propose a scattered data processing approach, which aims to solve the common problems of missing data and noise. To recover missing data, given the nonlinear relationships among neighbors with the ...

  18. Facial Expression Recognition

    OpenAIRE

    Neeta Sarode; Prof. Shalini Bhatia

    2010-01-01

    Facial expression analysis is rapidly becoming an area of intense interest in computer science and human-computer interaction design communities. The most expressive way humans display emotions is through facial expressions. In this paper a method is implemented using 2D appearance-based local approach for the extraction of intransient facial features and recognition of four facial expressions. The algorithm implements Radial Symmetry Transform and further uses edge projection analysis for fe...

  19. Facial expression analysis using LBP features. Computer Engineering and Applications, 2011, 47(2): 149-152.

    Institute of Scientific and Technical Information of China (English)

    刘伟锋; 李树娟; 王延江

    2011-01-01

    In order to effectively extract facial expression features, a novel feature extraction approach for facial expression recognition based on the Local Binary Pattern (LBP) is proposed in this paper. First, the gray levels of the expression images are normalized with the mean-variance method. By integral projection, critical facial feature points such as the eyebrows, eyes, nose and mouth are located, and the sub-regions containing each facial component are partitioned. Each sub-region is then divided into blocks, and the block-wise LBP histograms of each sub-region are extracted as the expression features. To validate the proposed method, experiments were conducted on the JAFFE (Japanese Female Facial Expression) database; the results show that the method effectively represents facial expression features.
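    The block-wise LBP histogram feature is standard enough to sketch directly. The basic 8-neighbour LBP operator and a concatenation of per-block normalized histograms are shown below; the 2x2 block grid is illustrative (the paper partitions sub-regions around each located facial component):

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour LBP code for each interior pixel: each
    neighbour >= centre contributes one bit, in a fixed order."""
    g = gray.astype(int)
    c = g[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(shifts):
        n = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= (n >= c).astype(int) << bit
    return code

def block_lbp_histogram(gray, blocks=(2, 2)):
    """Concatenated per-block, normalized LBP histograms, as in the
    abstract's sub-region scheme (block grid is illustrative)."""
    code = lbp_image(gray)
    by, bx = blocks
    h, w = code.shape
    feats = []
    for i in range(by):
        for j in range(bx):
            sub = code[i * h // by:(i + 1) * h // by,
                       j * w // bx:(j + 1) * w // bx]
            hist, _ = np.histogram(sub, bins=256, range=(0, 256))
            feats.append(hist / max(sub.size, 1))
    return np.concatenate(feats)
```

The paper uses the *uniform* LBP variant's cousin only implicitly; this sketch keeps the full 256-bin code for clarity. The concatenated histogram vector is what a downstream classifier consumes.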

  20. A kind of Face Recognition Method Based on CCA Feature Information Fusion

    OpenAIRE

    Li-Xia NIU; Li, Guo

    2013-01-01

    In order to capture more local facial features, a sub-image face recognition method based on RS-SpCCA feature information fusion is proposed in this paper. Samples are taken of the local facial features in the sub-images, and CCA is used to fuse the global facial features with the sampled local features, so that the global image features can be fully exploited to construct many different kinds of component classifiers. Then, experimental analysis is performed on databases of 3 ...
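    The CCA fusion step can be sketched with a standard whitening-plus-SVD construction of canonical correlation analysis. Here `X` would hold global features and `Y` local (sub-image) features; the ridge term `reg` and serial concatenation in `fuse` are illustrative choices, not necessarily the paper's:

```python
import numpy as np

def cca(X, Y, dim=1, reg=1e-6):
    """Canonical Correlation Analysis via whitening + SVD.
    Returns projection matrices Wx, Wy and the top canonical
    correlations. `reg` is a small ridge for numerical stability."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    n = len(X)
    Sxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Syy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / n

    def inv_sqrt(S):
        vals, vecs = np.linalg.eigh(S)
        return vecs @ np.diag(vals ** -0.5) @ vecs.T

    K = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    U, s, Vt = np.linalg.svd(K)
    Wx = inv_sqrt(Sxx) @ U[:, :dim]
    Wy = inv_sqrt(Syy) @ Vt[:dim].T
    return Wx, Wy, s[:dim]

def fuse(X, Y, Wx, Wy):
    """Serial fusion: concatenate the correlated projections."""
    return np.hstack([(X - X.mean(0)) @ Wx, (Y - Y.mean(0)) @ Wy])
```

Each component classifier would then be trained on a differently sampled local feature set fused with the shared global features.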

  1. Cranial base topology and basic trends in the facial evolution of Homo.

    Science.gov (United States)

    Bastir, Markus; Rosas, Antonio

    2016-02-01

    Facial prognathism and projection are important characteristics in human evolution but their three-dimensional (3D) architectonic relationships to basicranial morphology are not clear. We used geometric morphometrics and measured 51 3D-landmarks in a comparative sample of modern humans (N = 78) and fossil Pleistocene hominins (N = 10) to investigate the spatial features of covariation between basicranial and facial elements. The study reveals complex morphological integration patterns in craniofacial evolution of Middle and Late Pleistocene hominins. A downwards-orientated cranial base correlates with alveolar maxillary prognathism, relatively larger faces, and relatively larger distances between the anterior cranial base and the frontal bone (projection). This upper facial projection correlates with increased overall relative size of the maxillary alveolar process. Vertical facial height is associated with tall nasal cavities and is accommodated by an elevated anterior cranial base, possibly because of relations between the cribriform and the nasal cavity in relation to body size and energetics. Variation in upper- and mid-facial projection can further be produced by basicranial topology in which the midline base and nasal cavity are shifted anteriorly relative to retracted lateral parts of the base and the face. The zygomatics and the middle cranial fossae act together as bilateral vertical systems that are either projected or retracted relative to the midline facial elements, causing either midfacial flatness or midfacial projection correspondingly. We propose that facial flatness and facial projection reflect classical principles of craniofacial growth counterparts, while facial orientation relative to the basicranium as well as facial proportions reflect the complex interplay of head-body integration in the light of encephalization and body size decrease in Middle to Late Pleistocene hominin evolution. 
Developmental and evolutionary patterns of integration may

  2. Binary pattern flavored feature extractors for Facial Expression Recognition: An overview

    DEFF Research Database (Denmark)

    Kristensen, Rasmus Lyngby; Tan, Zheng-Hua; Ma, Zhanyu;

    2015-01-01

    This paper conducts a survey of modern binary pattern flavored feature extractors applied to the Facial Expression Recognition (FER) problem. In total, 26 different feature extractors are included, of which six are selected for in-depth description. In addition, the paper unifies important FER terminology, describes open challenges, and provides recommendations for the scientific evaluation of FER systems. Lastly, it studies the facial expression recognition accuracy and blur invariance of the Local Frequency Descriptor. The paper seeks to bring together disjointed studies, and the main contribution...

  3. Image Based Hair Segmentation Algorithm for the Application of Automatic Facial Caricature Synthesis

    OpenAIRE

    2014-01-01

    Hair is a salient feature of the human face region and one of the important cues for face analysis. Accurate detection and representation of the hair region is a key component of the automatic synthesis of human facial caricatures. In this paper, an automatic hair detection algorithm for the application of automatic facial caricature synthesis from a single image is proposed. Firstly, hair regions in training images are labeled manually and then the hair position prior distributions an...

  4. Facial-paralysis diagnostic system based on 3D reconstruction

    Science.gov (United States)

    Khairunnisaa, Aida; Basah, Shafriza Nisha; Yazid, Haniza; Basri, Hassrizal Hassan; Yaacob, Sazali; Chin, Lim Chee

    2015-05-01

    The diagnostic process of facial paralysis requires qualitative assessment for classification and treatment planning. This results in inconsistent assessments that can potentially affect treatment planning. We developed a facial-paralysis diagnostic system based on 3D reconstruction of RGB and depth data using a standard structured-light camera - Kinect 360 - and an implementation of Active Appearance Models (AAM). We also propose a quantitative assessment for facial paralysis based on a triangular model. In this paper, we report on the design and development process, including preliminary experimental results. Our preliminary experimental results demonstrate the feasibility of our quantitative assessment system for diagnosing facial paralysis.
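    The abstract does not define its "triangular model", but a quantitative asymmetry score in that spirit can be sketched by comparing the areas of mirrored landmark triangles on the two sides of the face. The specific score below is a hypothetical illustration, not the paper's measure:

```python
import numpy as np

def triangle_asymmetry(left_pts, right_pts):
    """Relative area difference between a landmark triangle on the
    left side of the face and its mirrored counterpart on the right
    (0.0 = perfectly symmetric). Hypothetical score for illustration."""
    def _area(p):
        a, b, c = (np.asarray(v, float) for v in p)
        u, v = b - a, c - a
        return 0.5 * abs(u[0] * v[1] - u[1] * v[0])  # 2D cross product
    al, ar = _area(left_pts), _area(right_pts)
    return abs(al - ar) / max(al, ar)
```

With AAM-fitted landmarks, such a score could be tracked over time to quantify recovery instead of relying on qualitative grading.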

  5. Vascular Ehlers-Danlos syndrome without the characteristic facial features: a case report.

    Science.gov (United States)

    Inokuchi, Ryota; Kurata, Hideaki; Endo, Kiyoshi; Kitsuta, Yoichi; Nakajima, Susumu; Hatamochi, Atsushi; Yahagi, Naoki

    2014-12-01

    As a type of Ehlers-Danlos syndrome (EDS), vascular EDs (vEDS) is typified by a number of characteristic facial features (eg, large eyes, small chin, sunken cheeks, thin nose and lips, lobeless ears). However, vEDs does not typically display hypermobility of the large joints and skin hyperextensibility, which are features typical of the more common forms of EDS. Thus, colonic perforation or aneurysm rupture may be the first presentation of the disease. Because both complications are associated with a reduced life expectancy for individuals with this condition, an awareness of the clinical features of vEDS is important. Here, we describe the treatment of vEDS lacking the characteristic facial attributes in a 24-year-old healthy man who presented to the emergency room with abdominal pain. Enhanced computed tomography revealed diverticula and perforation in the sigmoid colon. The lesion of the sigmoid colon perforation was removed, and Hartmann procedure was performed. During the surgery, the control of bleeding was required because of vascular fragility. Subsequent molecular and genetic analysis was performed based on the suspected diagnosis of vEDS. These analyses revealed reduced type III collagen synthesis in cultured skin fibroblasts and identified a previously undocumented mutation in the gene for a1 type III collagen, confirming the diagnosis of vEDS. After eliciting a detailed medical profile, we learned his mother had a history of extensive bruising since childhood and idiopathic hematothorax. Both were prescribed oral celiprolol. One year after admission, the patient was free of recurrent perforation. This case illustrates an awareness of the clinical characteristics of vEDS and the family history is important because of the high mortality from this condition even in young people. Importantly, genetic assays could help in determining the surgical procedure and offer benefits to relatives since this condition is inherited in an autosomal dominant manner. 

  6. Dense mesh sampling for video-based facial animation

    Science.gov (United States)

    Peszor, Damian; Wojciechowska, Marzena

    2016-06-01

    The paper describes an approach for the selection of feature points on a three-dimensional triangle mesh obtained using various techniques from several video footages. This approach has a dual purpose. First, it minimizes the data stored for the purpose of facial animation: instead of storing the position of each vertex in each frame, one can store only a small subset of vertices per frame and calculate the positions of the others from that subset. Second, it selects feature points that can be used for anthropometry-based retargeting of recorded mimicry to another model, with a sampling density beyond that achievable with marker-based performance-capture techniques. The developed approach was successfully tested on artificial models, models constructed using a structured-light scanner, and models constructed from video footage using stereophotogrammetry.
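
    The store-a-subset idea can be illustrated as fitting, from training frames, a linear map that reconstructs every vertex from the stored feature points. This is only a sketch under assumed conventions (synthetic data, a least-squares reconstruction map); the paper's actual selection and reconstruction schemes are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 40 training frames of a mesh with 60 vertices
# in 3-D, of which 8 (hypothetical) feature points are stored per frame.
n_frames, n_vertices, n_features = 40, 60, 8
feature_idx = rng.choice(n_vertices, size=n_features, replace=False)

# Fabricate frames that really are linear in the feature points.
basis = rng.normal(size=(n_features * 3, n_vertices * 3))
controls = rng.normal(size=(n_frames, n_features * 3))
frames = controls @ basis                      # (n_frames, n_vertices*3)

# Training: least-squares map W from stored feature coordinates to all vertices.
F = frames.reshape(n_frames, n_vertices, 3)[:, feature_idx].reshape(n_frames, -1)
W, *_ = np.linalg.lstsq(F, frames, rcond=None)

# Playback: reconstruct a new frame from its feature points alone.
new_frame = rng.normal(size=(1, n_features * 3)) @ basis
recon = new_frame.reshape(1, n_vertices, 3)[:, feature_idx].reshape(1, -1) @ W
print(float(np.abs(recon - new_frame).max()))
```

    With enough training frames the map is determined exactly for data that are truly linear in the subset; on real meshes the residual measures how well the chosen feature points summarise the deformation.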

  7. Effects of face feature and contour crowding in facial expression adaptation.

    Science.gov (United States)

    Liu, Pan; Montaser-Kouhsari, Leila; Xu, Hong

    2014-12-01

    Prolonged exposure to a visual stimulus, such as a happy face, biases the perception of a subsequently presented neutral face toward sadness, a phenomenon known as face adaptation. Face adaptation is affected by the visibility or awareness of the adapting face. However, whether it is affected by the discriminability of the adapting face is largely unknown. In the current study, we used crowding to manipulate the discriminability of the adapting face and tested its effect on face adaptation. Instead of presenting flanking faces near the target face, we shortened the distance between facial features (internal feature crowding) and reduced the size of the face contour (external contour crowding) to introduce crowding. In our first experiment, we asked whether internal feature crowding or external contour crowding is more effective in inducing a crowding effect. We found that combining internal feature and external contour crowding, but not either of them alone, induced a significant crowding effect. In Experiment 2, we further investigated its effect on adaptation. We found that both internal feature crowding and external contour crowding significantly reduced the facial expression aftereffect (FEA). However, we did not find a significant correlation between the discriminability of the adapting face and its FEA. Interestingly, we found a significant correlation between the discriminabilities of the adapting and test faces. Experiment 3 found that the reduced adaptation aftereffect under combined crowding by the external face contour and the internal facial features cannot be decomposed linearly into the effects of the face contour and the facial features. This suggests a nonlinear integration between facial features and face contour in face adaptation. PMID:25449164

  8. Automated Facial Expression Recognition Using Gradient-Based Ternary Texture Patterns

    Directory of Open Access Journals (Sweden)

    Faisal Ahmed

    2013-01-01

    Full Text Available Recognition of human expression from facial images is an interesting research area, which has received increasing attention in recent years. A robust and effective facial feature descriptor is the key to designing a successful expression recognition system. Although much progress has been made, deriving a face feature descriptor that performs consistently under changing environments is still a difficult and challenging task. In this paper, we present the gradient local ternary pattern (GLTP), a discriminative local texture feature for representing facial expression. The proposed GLTP operator encodes the local texture of an image by computing the gradient magnitudes of the local neighborhood and quantizing those values into three discrimination levels. The location and occurrence information of the resulting micropatterns is then used as the face feature descriptor. The performance of the proposed method has been evaluated for the person-independent facial expression recognition task. Experiments with prototypic expression images from the Cohn-Kanade (CK) face expression database validate that the GLTP feature descriptor can effectively encode facial texture and thus achieves better recognition performance than some well-known appearance-based facial features.
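
    The GLTP idea as described (gradient magnitudes, then a three-level ternary quantisation split into positive and negative binary patterns, then occurrence histograms) can be sketched as below. The Sobel operator, the threshold value, and the histogram layout are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude of a 2-D grayscale image via Sobel operators."""
    a = img.astype(float)
    gx = (-a[:-2, :-2] + a[:-2, 2:] - 2 * a[1:-1, :-2] + 2 * a[1:-1, 2:]
          - a[2:, :-2] + a[2:, 2:])
    gy = (-a[:-2, :-2] - 2 * a[:-2, 1:-1] - a[:-2, 2:]
          + a[2:, :-2] + 2 * a[2:, 1:-1] + a[2:, 2:])
    return np.hypot(gx, gy)                      # shape (H-2, W-2)

def gltp_histogram(img, t=10.0):
    """Positive/negative ternary-pattern histograms over the gradient image."""
    g = sobel_magnitude(img)
    c = g[1:-1, 1:-1]                            # centre pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    pos = np.zeros_like(c, dtype=int)
    neg = np.zeros_like(c, dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        n = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        pos += (n >= c + t).astype(int) << bit   # ternary code +1
        neg += (n <= c - t).astype(int) << bit   # ternary code -1
    return np.concatenate([np.bincount(pos.ravel(), minlength=256),
                           np.bincount(neg.ravel(), minlength=256)])

rng = np.random.default_rng(1)
h = gltp_histogram(rng.integers(0, 256, size=(48, 48)).astype(float))
print(h.shape, h.sum())
```

    In practice such histograms are computed per image block and concatenated, so that the descriptor retains location as well as occurrence information.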

  9. Recovering Faces from Memory: The Distracting Influence of External Facial Features

    Science.gov (United States)

    Frowd, Charlie D.; Skelton, Faye; Atherton, Chris; Pitchford, Melanie; Hepton, Gemma; Holden, Laura; McIntyre, Alex H.; Hancock, Peter J. B.

    2012-01-01

    Recognition memory for unfamiliar faces is facilitated when contextual cues (e.g., head pose, background environment, hair and clothing) are consistent between study and test. By contrast, inconsistencies in external features, especially hair, promote errors in unfamiliar face-matching tasks. For the construction of facial composites, as carried…

  10. Implicit binding of facial features during change blindness

    OpenAIRE

    Pessi Lyyra; Hanna Mäkelä; Hietanen, Jari K.; Piia Astikainen

    2014-01-01

    Abstract. Change blindness refers to the inability to detect visual changes if introduced together with an eye-movement, blink, flash of light, or with distracting stimuli. Evidence of implicit detection of changed visual features during change blindness has been reported in a number of studies using both behavioral and neurophysiological measurements. However, it is not known whether implicit detection occurs only at the level of single features or whether complex organizations of f...

  11. Support vector machine-based facial-expression recognition method combining shape and appearance

    Science.gov (United States)

    Han, Eun Jung; Kang, Byung Jun; Park, Kang Ryoung; Lee, Sangyoun

    2010-11-01

    Facial expression recognition can be widely used for various applications, such as emotion-based human-machine interaction, intelligent robot interfaces, face recognition robust to expression variation, etc. Previous studies have been classified as either shape- or appearance-based recognition. The shape-based method has the disadvantage that individual variance of facial feature points exists even across similar expressions, which can reduce recognition accuracy. The appearance-based method has the limitation that the textural information of the face is very sensitive to variations in illumination. To overcome these problems, a new facial-expression recognition method is proposed, which combines both shape and appearance information based on the support vector machine (SVM). This research is novel in the following three ways as compared to previous works. First, the facial feature points are automatically detected by using an active appearance model. From these, shape-based recognition is performed by using the ratios between the facial feature points based on the facial-action coding system. Second, an SVM, which is trained to recognize same- and different-expression classes, is proposed to combine the two matching scores obtained from the shape- and appearance-based recognitions. Finally, a single SVM is trained to discriminate four different expressions: neutral, smile, anger, and scream. By determining the expression of the input facial image as the one whose SVM output is at a minimum, the accuracy of the expression recognition is much enhanced. The experimental results showed that the recognition accuracy of the proposed method was better than that of previous research and other fusion methods.

  12. Robust facial expression recognition algorithm based on local metric learning

    Science.gov (United States)

    Jiang, Bin; Jia, Kebin

    2016-01-01

    In facial expression recognition tasks, different facial expressions are often confused with each other. Motivated by the fact that a learned metric can significantly improve the accuracy of classification, a facial expression recognition algorithm based on local metric learning is proposed. First, k-nearest neighbors of the given testing sample are determined from the total training data. Second, chunklets are selected from the k-nearest neighbors. Finally, the optimal transformation matrix is computed by maximizing the total variance between different chunklets and minimizing the total variance of instances in the same chunklet. The proposed algorithm can find the suitable distance metric for every testing sample and improve the performance on facial expression recognition. Furthermore, the proposed algorithm can be used for vector-based and matrix-based facial expression recognition. Experimental results demonstrate that the proposed algorithm could achieve higher recognition rates and be more robust than baseline algorithms on the JAFFE, CK, and RaFD databases.
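
    The three steps described (k-nearest neighbours, chunklets, and a transformation maximising between-chunklet while minimising within-chunklet variance) can be sketched as a local Fisher-style eigenproblem. The chunklet-formation rule (one chunklet per class label among the neighbours) and the output dimension are assumptions for illustration.

```python
import numpy as np

def local_metric(X_train, y_train, x_test, k=12, out_dim=2):
    """Per-test-sample transformation from the k nearest training samples:
    group neighbours by class into chunklets, then solve the eigenproblem
    favouring between-chunklet over within-chunklet scatter."""
    d = np.linalg.norm(X_train - x_test, axis=1)
    nn = np.argsort(d)[:k]
    Xk, yk = X_train[nn], y_train[nn]
    mu = Xk.mean(axis=0)
    dim = X_train.shape[1]
    Sw = np.zeros((dim, dim))
    Sb = np.zeros((dim, dim))
    for c in np.unique(yk):
        chunk = Xk[yk == c]                    # one chunklet per class label
        mc = chunk.mean(axis=0)
        Sw += (chunk - mc).T @ (chunk - mc)
        Sb += len(chunk) * np.outer(mc - mu, mc - mu)
    # Eigenvectors of pinv(Sw) @ Sb, largest eigenvalues first.
    vals, vecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-vals.real)
    return vecs[:, order[:out_dim]].real       # columns span the local metric

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (30, 4)), rng.normal(3, 1, (30, 4))])
y = np.array([0] * 30 + [1] * 30)
A = local_metric(X, y, rng.normal(1.5, 1, 4))
print(A.shape)
```

    Because the scatter matrices are recomputed per test sample, each sample is classified under its own locally adapted distance metric, which is the point of the approach.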

  13. FACIAL EXPRESSION RECOGNITION BASED ON WAPA AND OEPA FASTICA

    OpenAIRE

    Humayra Binte Ali; Powers, David M. W

    2014-01-01

    Face is one of the most important biometric traits for its uniqueness and robustness. For this reason researchers from many diversified fields, like: security, psychology, image processing, and computer vision, started to do research on face detection as well as facial expression recognition. Subspace learning methods work very good for recognizing same facial features. Among subspace learning techniques PCA, ICA, NMF are the most prominent topics. In this work, our main focus is on Indepe...

  14. Time-series classification model based on multiple facial features for real-time mental fatigue monitoring

    Institute of Scientific and Technical Information of China (English)

    陈云华; 张灵; 丁伍洋; 严明玉

    2013-01-01

    In computer-vision-based fatigue monitoring, several issues remain unresolved: low recognition accuracy in yawn detection based on the mouth shape in a single frame, poor adaptability of threshold-based blink analysis, and the inability to monitor the transition stages of fatigue in real time. To address these problems, we propose a new time-series classification model based on multiple facial features for real-time mental fatigue monitoring. First, the mouth-opening-degree curve and the iris circularity-ratio curve are extracted through facial visual features. Then, using sliding-window segmentation and hidden Markov model (HMM) modeling, a yawn feature time series, α (the proportion of the time during which the mouth opening exceeds a given threshold), is constructed and labeled from the mouth-opening-degree curve, and an eye blink time (EBT) series is constructed and labeled from the iris circularity-ratio curve. Finally, a time stamp is added to the HMM so that the initial time point of each series is selected adaptively and the multiple feature time series are synchronized and their labeling results fused. Experimental results show that the model reduces the yawn misjudgment rate, adapts well to the blinking of people of different ages, and enables real-time monitoring of the transition stages of mental fatigue.
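
    The construction of the α series (the proportion of time within a window during which the mouth opening exceeds a threshold) can be sketched as follows; the window length, step, and threshold are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def alpha_series(mouth_opening, threshold=0.5, window=30, step=10):
    """Proportion of frames per sliding window whose mouth-opening degree
    exceeds the threshold -- a crude stand-in for the paper's alpha series."""
    vals = []
    for start in range(0, len(mouth_opening) - window + 1, step):
        seg = mouth_opening[start:start + window]
        vals.append(float(np.mean(seg > threshold)))
    return np.array(vals)

# Synthetic mouth-opening curve: mostly closed, one yawn-like bump.
frames = np.zeros(120)
frames[50:80] = 0.9
series = alpha_series(frames)
print(series)
```

    A sequence of such windowed values, rather than a single-frame decision, is what the HMM then classifies, which is why isolated mouth shapes no longer dominate the yawn decision.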

  15. Facial Expression Recognition Method Based on Gabor Multi-orientation Features Fusion and Block Histogram

    Institute of Scientific and Technical Information of China (English)

    刘帅师; 田彦涛; 万川

    2011-01-01

    In this paper, the Gabor multi-orientation fused features are combined with a block histogram to extract facial expression features, in order to overcome the disadvantages of the traditional Gabor filter bank, whose high-dimensional Gabor features are redundant and whose capacity to represent global features is poor. First, to extract multi-orientation information and reduce the dimension of the features, two fusion rules are proposed to fuse the original Gabor features of the same scale into a single feature. Second, to represent the global features effectively, the fused image is divided into several nonoverlapping rectangular units of equal size, and the histogram of each unit is computed and concatenated as the facial expression feature. Experimental results show that the method is effective for both dimension reduction and recognition performance. The novelty of the method lies in using two fusion rules to fuse multi-orientation Gabor features. The best average recognition rate of 98.24% is achieved on the JAFFE database, which indicates that this method is suitable for facial expression analysis.
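
    The same-scale multi-orientation fusion followed by block histograms can be sketched as below. The kernel parameters, the max-magnitude fusion rule (the abstract does not spell out the paper's two rules), and the block/bin counts are all illustrative assumptions.

```python
import numpy as np

def gabor_kernel(theta, sigma=3.0, lam=6.0, size=15):
    """Real part of a Gabor kernel at orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def filter2d_same(img, k):
    """Naive 'same' 2-D correlation with zero padding."""
    half = k.shape[0] // 2
    padded = np.pad(img, half)
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + k.shape[0], j:j + k.shape[1]] * k)
    return out

def fused_block_histogram(img, n_orient=8, blocks=4, bins=16):
    responses = np.stack([np.abs(filter2d_same(img, gabor_kernel(t)))
                          for t in np.linspace(0, np.pi, n_orient, endpoint=False)])
    fused = responses.max(axis=0)            # fuse orientations at one scale
    h, w = fused.shape
    feats = []
    for bi in range(blocks):
        for bj in range(blocks):
            block = fused[bi * h // blocks:(bi + 1) * h // blocks,
                          bj * w // blocks:(bj + 1) * w // blocks]
            hist, _ = np.histogram(block, bins=bins, range=(0, fused.max() + 1e-9))
            feats.append(hist)
    return np.concatenate(feats)

rng = np.random.default_rng(3)
f = fused_block_histogram(rng.random((32, 32)))
print(f.shape, f.sum())
```

    Fusing the orientations first keeps the descriptor length at blocks² × bins instead of growing linearly with the number of filters, which is the dimensionality-reduction benefit the abstract describes.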

  16. Newborns' Face Recognition: Role of Inner and Outer Facial Features

    Science.gov (United States)

    Turati, Chiara; Macchi Cassia, Viola; Simion, Francesca; Leo, Irene

    2006-01-01

    Existing data indicate that newborns are able to recognize individual faces, but little is known about what perceptual cues drive this ability. The current study showed that either the inner or outer features of the face can act as sufficient cues for newborns' face recognition (Experiment 1), but the outer part of the face enjoys an advantage…

  17. De Novo Mutation in ABCC9 Causes Hypertrichosis Acromegaloid Facial Features Disorder.

    Science.gov (United States)

    Afifi, Hanan H; Abdel-Hamid, Mohamed S; Eid, Maha M; Mostafa, Inas S; Abdel-Salam, Ghada M H

    2016-01-01

    A 13-year-old Egyptian girl with generalized hypertrichosis, gingival hyperplasia, coarse facial appearance, no cardiovascular or skeletal anomalies, keloid formation, and multiple labial frenula was referred to our clinic for counseling. Molecular analysis of the ABCC9 gene showed a de novo missense mutation located in exon 27, which has been described previously with Cantu syndrome. An overlap between Cantu syndrome, acromegaloid facial syndrome, and hypertrichosis acromegaloid facial features disorder is apparent at the phenotypic and molecular levels. The patient reported here gives further evidence that these syndromes are an expression of the ABCC9-related disorders, ranging from hypertrichosis and acromegaloid facies to the severe end of Cantu syndrome. PMID:26871653

  18. Does my face FIT?: a face image task reveals structure and distortions of facial feature representation.

    Directory of Open Access Journals (Sweden)

    Christina T Fuentes

    Full Text Available Despite extensive research on face perception, few studies have investigated individuals' knowledge about the physical features of their own face. In this study, 50 participants indicated the location of key features of their own face, relative to an anchor point corresponding to the tip of the nose, and the results were compared to the true location of the same individual's features from a standardised photograph. Horizontal and vertical errors were analysed separately. An overall bias to underestimate vertical distances revealed a distorted face representation, with reduced face height. Factor analyses were used to identify separable subconfigurations of facial features with correlated localisation errors. Independent representations of upper and lower facial features emerged from the data pattern. The major source of variation across individuals was in representation of face shape, with a spectrum from tall/thin to short/wide representation. Visual identification of one's own face is excellent, and facial features are routinely used for establishing personal identity. However, our results show that spatial knowledge of one's own face is remarkably poor, suggesting that face representation may not contribute strongly to self-awareness.

  19. Detection of Human Head Direction Based on Facial Normal Algorithm

    Directory of Open Access Journals (Sweden)

    Lam Thanh Hien

    2015-01-01

    Full Text Available Many scholars worldwide have made special efforts to find advanced approaches for efficiently estimating human head direction, which has been successfully applied in numerous applications such as human-computer interaction, teleconferencing, virtual reality, and 3D audio rendering. However, one of the existing shortcomings in the current literature is the violation of some ideal assumptions in practice. Hence, this paper proposes a novel algorithm based on the normal of the human face to recognize head direction by optimizing a 3D face model combined with the facial normal model. In our experiments, a computational program was developed based on the proposed algorithm and integrated with a surveillance system to alert drivers to drowsiness. The program takes data from either video or a webcam, automatically identifies the critical points of facial features based on an analysis of the major components of the face, closely monitors the slant angle of the head, and produces an alarm signal whenever the driver dozes off. From our empirical experiments, we found that the proposed algorithm works effectively in real time and provides highly accurate results.

  20. Video-based facial animation with detailed appearance texture

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Facial shape transformation described by facial animation parameters (FAPs) involves the dynamic movement or deformation of the eyes, brows, mouth, and lips, while detailed facial appearance concerns facial textures such as creases and wrinkles. Video-based facial animation exhibits not only facial shape transformation but also detailed appearance updates. In this paper, a novel algorithm for effectively extracting FAPs from video is proposed. Our system adopts the ICA-enforced direct appearance model (DAM) to track faces in video sequences; FAPs are then extracted from every frame of the video based on an extended model of Wincandidate 3.1. Facial appearance details are transferred from each frame by mapping an expression ratio image to the original image. We adopt wavelets to synthesize expressive details by combining the low-frequency signals of the original face and the high-frequency signals of the expressive face from each frame of the video. Experimental results show that the proposed algorithm is suitable for reproducing realistic, expressive facial animations.

  1. Facial Expression Recognition Based on Local Binary Patterns and Kernel Discriminant Isomap

    Directory of Open Access Journals (Sweden)

    Xiaoming Zhao

    2011-10-01

    Full Text Available Facial expression recognition is an interesting and challenging subject. Considering the nonlinear manifold structure of facial images, a new kernel-based manifold learning method, called kernel discriminant isometric mapping (KDIsomap), is proposed. KDIsomap aims to nonlinearly extract the discriminant information by maximizing the interclass scatter while minimizing the intraclass scatter in a reproducing kernel Hilbert space. KDIsomap is used to perform nonlinear dimensionality reduction on the extracted local binary pattern (LBP) facial features, and produces low-dimensional discriminant embedded data representations with striking performance improvement on facial expression recognition tasks. The nearest-neighbor classifier with the Euclidean metric is used for facial expression classification. Facial expression recognition experiments are performed on two popular facial expression databases, i.e., the JAFFE database and the Cohn-Kanade database. Experimental results indicate that KDIsomap obtains the best accuracy of 81.59% on the JAFFE database, and 94.88% on the Cohn-Kanade database. KDIsomap outperforms the other methods used, such as principal component analysis (PCA), linear discriminant analysis (LDA), kernel principal component analysis (KPCA), kernel linear discriminant analysis (KLDA), and kernel isometric mapping (KIsomap).
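
    The LBP feature-extraction stage that precedes the KDIsomap embedding (which is not reproduced here) is the canonical 8-neighbour operator and can be sketched directly:

```python
import numpy as np

def lbp_histogram(img):
    """Basic 3x3 local binary pattern histogram (256 bins) of a grayscale image."""
    a = img.astype(float)
    c = a[1:-1, 1:-1]                            # centre pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        n = a[1 + dy:a.shape[0] - 1 + dy, 1 + dx:a.shape[1] - 1 + dx]
        code += (n >= c).astype(int) << bit      # 1 where neighbour >= centre
    return np.bincount(code.ravel(), minlength=256)

rng = np.random.default_rng(4)
hist = lbp_histogram(rng.integers(0, 256, (40, 40)))
print(hist.shape, hist.sum())
```

    The resulting 256-dimensional (or block-wise concatenated) histograms are what the nonlinear dimensionality-reduction step then embeds.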

  2. Feature Fusion Algorithm for Multimodal Emotion Recognition from Speech and Facial Expression Signal

    Directory of Open Access Journals (Sweden)

    Han Zhiyan

    2016-01-01

    Full Text Available In order to overcome the limitations of single-mode emotion recognition, this paper describes a novel multimodal emotion recognition algorithm, taking the speech signal and the facial expression signal as the research subjects. First, the speech-signal features and facial-expression features are fused, sample sets are obtained by sampling with replacement, and classifiers are then trained with a back-propagation neural network (BPNN). Second, the difference between two classifiers is measured by a double-error-difference selection strategy. Finally, the final recognition result is obtained by the majority voting rule. Experiments show that the method improves the accuracy of emotion recognition by giving full play to the advantages of decision-level fusion and feature-level fusion, and brings the whole fusion process closer to human emotion recognition, with a recognition rate of 90.4%.
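
    The ensemble step (sampling with replacement, one classifier per sample set, majority vote) can be sketched with a simple nearest-centroid learner standing in for the paper's BP neural network; the data, learner, and ensemble size are assumptions for illustration.

```python
import numpy as np

def nearest_centroid_fit(X, y):
    classes = np.unique(y)
    return classes, np.array([X[y == c].mean(axis=0) for c in classes])

def nearest_centroid_predict(model, X):
    classes, centroids = model
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]

def bagged_majority_vote(X, y, X_test, n_models=5, seed=0):
    rng = np.random.default_rng(seed)
    votes = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), len(X))    # sampling with replacement
        model = nearest_centroid_fit(X[idx], y[idx])
        votes.append(nearest_centroid_predict(model, X_test))
    votes = np.stack(votes)                      # (n_models, n_test)
    return np.array([np.bincount(col).argmax() for col in votes.T])

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0, 0.5, (40, 6)), rng.normal(2, 0.5, (40, 6))])
y = np.array([0] * 40 + [1] * 40)
pred = bagged_majority_vote(X, y, np.vstack([np.zeros((1, 6)), np.full((1, 6), 2.0)]))
print(pred)
```

    Resampling produces diverse classifiers from the same fused feature set, and the majority vote is the decision-level fusion the abstract refers to.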

  3. 3D facial expression recognition using maximum relevance minimum redundancy geometrical features

    Science.gov (United States)

    Rabiu, Habibu; Saripan, M. Iqbal; Mashohor, Syamsiah; Marhaban, Mohd Hamiruce

    2012-12-01

    In recent years, facial expression recognition (FER) has become an attractive research area which, besides the fundamental challenges it poses, finds application in areas such as human-computer interaction, clinical psychology, lie detection, pain assessment, and neurology. Generally, approaches to FER consist of three main steps: face detection, feature extraction, and expression recognition. The recognition accuracy of FER hinges immensely on the relevance of the selected features in representing the target expressions. In this article, we present a person- and gender-independent 3D facial expression recognition method using maximum relevance minimum redundancy geometrical features. The aim is to detect a compact set of features that sufficiently represents the most discriminative features between the target classes. A multi-class one-against-one SVM classifier was employed to recognize the seven facial expressions: neutral, happy, sad, angry, fear, disgust, and surprise. An average recognition accuracy of 92.2% was recorded. Furthermore, inter-database homogeneity was investigated between two independent databases, BU-3DFE and UPM-3DFE; the results showed a strong homogeneity between the two databases.
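
    Maximum-relevance minimum-redundancy selection can be sketched as a greedy search over discretised features, scoring each candidate by its mutual information with the class minus its average mutual information with already-selected features. The histogram MI estimator, the difference criterion, and the synthetic features below are assumptions, not the paper's exact formulation.

```python
import numpy as np

def mutual_info(a, b, bins=8):
    """MI (nats) between two 1-D variables via a joint histogram estimate."""
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))

def mrmr(X, y, n_select):
    """Greedy max-relevance min-redundancy feature selection."""
    n_feat = X.shape[1]
    relevance = np.array([mutual_info(X[:, j], y) for j in range(n_feat)])
    selected = [int(np.argmax(relevance))]
    while len(selected) < n_select:
        best, best_score = None, -np.inf
        for j in range(n_feat):
            if j in selected:
                continue
            redundancy = np.mean([mutual_info(X[:, j], X[:, s]) for s in selected])
            score = relevance[j] - redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected

rng = np.random.default_rng(6)
y = rng.integers(0, 2, 300).astype(float)
X = rng.normal(size=(300, 6))
X[:, 0] += 3 * y                                  # informative feature
X[:, 1] = X[:, 0] + 0.01 * rng.normal(size=300)   # near-duplicate of feature 0
X[:, 2] -= 3 * y                                  # informative, not a duplicate
selected = mrmr(X, y, 2)
print(selected)
```

    Note how the near-duplicate feature is skipped despite its high relevance: its redundancy with the first pick dominates the score, which is exactly the compactness the method targets.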

  4. Facial Expression Recognition Based on WAPA and OEPA Fastica

    Directory of Open Access Journals (Sweden)

    Humayra Binte Ali

    2014-06-01

    Full Text Available The face is one of the most important biometric traits for its uniqueness and robustness. For this reason, researchers from many diversified fields, such as security, psychology, image processing, and computer vision, have started to do research on face detection as well as facial expression recognition. Subspace learning methods work very well for recognizing facial features. Among subspace learning techniques, PCA, ICA, and NMF are the most prominent. In this work, our main focus is on Independent Component Analysis (ICA). Among the several architectures of ICA, we use the FastICA and LS-ICA algorithms. We applied FastICA to whole faces and to different facial parts to analyze the influence of different parts on the basic facial expressions. Our extended algorithms, WAPA-FastICA and OEPA-FastICA, are discussed in the proposed-algorithm section. Locally Salient ICA is implemented on the whole face by using 8x8 windows to find the more prominent facial features for facial expression. The experiments show that our proposed OEPA-FastICA and WAPA-FastICA outperform the existing prevalent Whole-FastICA and LS-ICA methods.
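
    The core FastICA step these variants build on can be sketched as a compact deflation-based implementation with the tanh nonlinearity; applying it to whole faces versus facial parts, as the paper does, is not reproduced here. The demo below unmixes two synthetic non-Gaussian signals rather than face data.

```python
import numpy as np

def fastica(X, n_components, seed=0, n_iter=200):
    """Deflation FastICA with tanh nonlinearity. X: (n_samples, n_features)."""
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)
    # Whitening via eigendecomposition of the covariance matrix.
    vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    K = vecs[:, -n_components:] / np.sqrt(vals[-n_components:])
    Z = Xc @ K                                   # whitened data
    W = np.zeros((n_components, n_components))
    for i in range(n_components):
        w = rng.normal(size=n_components)
        w /= np.linalg.norm(w)
        for _ in range(n_iter):
            wx = Z @ w
            g, g_prime = np.tanh(wx), 1 - np.tanh(wx) ** 2
            w_new = (Z * g[:, None]).mean(axis=0) - g_prime.mean() * w
            w_new -= W[:i].T @ (W[:i] @ w_new)   # deflation: decorrelate
            w_new /= np.linalg.norm(w_new)
            converged = abs(abs(w_new @ w) - 1) < 1e-8
            w = w_new
            if converged:
                break
        W[i] = w
    return Z @ W.T                               # estimated sources

# Demo: unmix two independent non-Gaussian signals.
t = np.linspace(0, 8, 2000)
S = np.c_[np.sign(np.sin(3 * t)), np.sin(5 * t)]
A = np.array([[1.0, 0.5], [0.4, 1.0]])
est = fastica(S @ A.T, 2)
C = np.abs(np.corrcoef(est.T, S.T)[:2, 2:])
print(C.round(2))
```

    In the face-analysis setting, the rows of the unmixing result play the role of statistically independent basis images from which expression features are taken.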

  6. Facial action detection using block-based pyramid appearance descriptors

    OpenAIRE

    Jiang, Bihan; Valstar, Michel F.; Pantic, Maja

    2012-01-01

    Facial expression is one of the most important non-verbal behavioural cues in social signals. Constructing an effective face representation from images is an essential step for successful facial behaviour analysis. Most existing face descriptors operate on the same scale, and do not leverage coarse vs. fine methods such as image pyramids. In this work, we propose the sparse appearance descriptors Block-based Pyramid Local Binary Pattern (B-PLBP) and Block-based Pyramid Local Phase Quantisati...

  7. A Method for Head-shoulder Segmentation and Human Facial Feature Positioning

    Institute of Scientific and Technical Information of China (English)

    1998-01-01

    This paper proposes a method for head-shoulder segmentation and human facial feature location for videotelephone applications. Utilizing the multi-resolution processing characteristic of human eyes and analyzing the edge information of only a single frame in different frequency bands, this method can automatically perform head-shoulder segmentation and locate the facial feature regions (eyes, mouth, etc.) with rather high precision and simple, fast computation. Therefore, this method makes automatic 3-D model adaptation and 3-D motion estimation possible. However, the method may fail when processing practical images with a complex background; in that case it is preferable to use some prior knowledge and multi-frame joint processing.

  8. Facial color feature analysis in patients with different zang-fu organ diseases based on image processing

    Institute of Scientific and Technical Information of China (English)

    董梦青; 李福凤; 周睿; 王忆勤

    2013-01-01

    Objective: To objectively investigate the facial color feature information of patients with coronary heart disease (CHD), chronic renal failure (CRF), and chronic hepatitis B (CHB). Methods: A digital detection instrument for traditional Chinese medicine (TCM) facial diagnosis was applied to collect and analyze the facial color feature information of CHD, CRF, and CHB patients. Results: In the CHD group, a subdued reddish-yellow or red complexion was most common, and the facial red index, black index, and overall index were significantly higher than in the CRF and CHB groups (P<0.05). In the CRF group, yellow, cyan, and white complexions were most common, and the facial white index and cyan index were significantly higher than in the CHD and CHB groups (P<0.05). In the CHB group, yellow and black complexions were most common, and the facial red index, white index, cyan index, and overall index were significantly lower than in the CRF group (P<0.05). The yellow index showed no significant difference among the three diseases. Conclusion: The complexion and its parameters in different zang-fu organ diseases change with certain regularity; it is feasible for the TCM facial diagnosis digital detection instrument to assist TCM clinical diagnosis, providing an objective basis for TCM syndrome differentiation of CRF, CHD, and CHB.

  9. Automatic Facial Expression Analysis A Survey

    Directory of Open Access Journals (Sweden)

    C.P. Sumathi

    2013-01-01

    Full Text Available Automatic facial expression recognition has been one of the latest research topics since the 1990s. There have been recent advances in detecting faces and in recognizing and classifying facial expressions. Multiple methods have been devised for facial feature extraction, which helps in identifying faces and facial expressions. This paper surveys some of the work published from 2003 to date. Various methods for identifying facial expressions are analysed. The paper also discusses facial parameterization using Facial Action Coding System (FACS) action units and the methods which recognize the action-unit parameters from extracted facial expression data. The various kinds of facial expressions present in the human face can be identified based on geometric features, appearance features, and hybrid features. The two basic concepts for extracting features are based on facial deformation and facial motion. This article also identifies techniques based on the characteristics of expressions and classifies the suitable methods that can be implemented.

  10. Surface Electromyography-Based Facial Expression Recognition in Bi-Polar Configuration

    Directory of Open Access Journals (Sweden)

    Mahyar Hamedi

    2011-01-01

    Full Text Available Problem statement: Facial expression recognition has improved recently and has become a significant issue in diagnostic and medical fields, particularly in the areas of assistive technology and rehabilitation. Apart from their usefulness, there are some problems in applying image- or video-based methods, such as peripheral conditions, lighting, contrast, and the quality of the video and images. Approach: The Facial Action Coding System (FACS) and some other methods based on images or videos have previously been applied. This study proposed two methods for recognizing eight different facial expressions, such as natural (rest), happiness in three conditions, anger, rage, gesturing 'a' as in the word 'apple', and gesturing 'no' by pulling up the eyebrows, based on three channels of surface electromyography (SEMG) in a bi-polar configuration. Raw signals were processed in three main steps (filtration, feature extraction, and active feature selection) sequentially. The processed data was fed into Support Vector Machine (SVM) and Fuzzy C-Means (FCM) classifiers to be classified into the eight facial expression groups. Results: Recognition rates of 91.8% and 80.4% were achieved for FCM and SVM, respectively. Conclusion: The results confirmed sufficient accuracy and power in this field of study, and FCM showed better ability and performance in comparison with SVM. It is expected that in the near future, new approaches to the frequency bandwidth of each facial gesture will provide better results.

  11. Facial expression recognition based on image Euclidean distance-supervised neighborhood preserving embedding

    Science.gov (United States)

    Chen, Li; Li, Yingjie; Li, Haibin

    2014-11-01

    High-dimensional data often lie on a relatively low-dimensional manifold, while the nonlinear geometry of that manifold is often embedded in the similarities between the data points. These similarity structures are captured effectively by Neighborhood Preserving Embedding (NPE). But NPE, as an unsupervised method, cannot utilize class information to guide the procedure of nonlinear dimensionality reduction, and it ignores the geometrical structure information of local data points and the spatial information of pixels, which leads to classification failures. To address this problem, a feature extraction method based on Image Euclidean Distance-Supervised NPE (IED-SNPE) is proposed and applied to facial expression recognition. Firstly, it employs the Image Euclidean Distance (IED) to characterize the dissimilarity of data points. Then the neighborhood graph of the input data is constructed according to a certain kind of dissimilarity between data points. Finally, it fuses the prior nonlinear facial expression manifold of facial expression images and class-label information to extract discriminative features for expression recognition. In classification experiments on the JAFFE facial expression database, IED-SNPE was used for feature extraction and compared with NPE, SNPE, and IED-NPE. The results reveal that IED-SNPE not only preserves the local structure of the expression manifold well but also explicitly considers the spatial relationships among pixels in the images. It thus excels NPE in feature extraction and is highly competitive with well-known feature extraction methods.
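
    The Image Euclidean Distance couples pixel differences through a spatial kernel, so that differences between nearby pixels partially cancel and small misalignments cost less than under the plain Euclidean distance. A direct, unoptimised sketch with a Gaussian spatial kernel (the kernel width is an illustrative assumption):

```python
import numpy as np

def image_euclidean_distance(x, y, sigma=1.0):
    """IED between two images of equal shape: the squared distance is
    d^T G d, where d is the pixel-wise difference and G couples pixel
    pairs by a Gaussian of their spatial separation."""
    h, w = x.shape
    coords = np.array([(i, j) for i in range(h) for j in range(w)], dtype=float)
    sq = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(axis=2)
    G = np.exp(-sq / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)
    d = (x - y).ravel().astype(float)
    return float(np.sqrt(d @ G @ d))

a = np.zeros((8, 8)); a[3, 3] = 1.0
b = np.zeros((8, 8)); b[3, 4] = 1.0   # same blob, shifted by one pixel
c = np.zeros((8, 8)); c[7, 0] = 1.0   # same blob, far away
near = image_euclidean_distance(a, b)
far = image_euclidean_distance(a, c)
print(near < far)
```

    Under the ordinary Euclidean distance the two comparisons would be identical; the spatial kernel is what makes the one-pixel shift cheaper, which is the deformation tolerance IED-SNPE exploits.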

  12. Application of LBP information of feature-points in facial expression recognition

    Institute of Scientific and Technical Information of China (English)

    刘伟锋; 王延江

    2009-01-01

    A facial expression recognition method is proposed based on the Local Binary Pattern (LBP) of feature points. First, the LBP feature in facial expression recognition is presented. Then feature points around the eyes in the upper face and the mouth in the lower face, which hold rich expression information, are selected, and the LBP map of the neighborhood of each feature point is computed as the expression feature for facial expression recognition. Experimental results show that face normalization is not necessary with the proposed method, which, compared with traditional LBP features, is more favourable for facial expression recognition.
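
A per-feature-point LBP descriptor of the kind the abstract describes could be sketched as follows; the patch radius and the 256-bin histogram are assumptions for illustration, not the authors' exact settings.

```python
import numpy as np

def lbp_code(img, r, c):
    """8-neighbour local binary pattern code at pixel (r, c)."""
    center = img[r, c]
    nbrs = [img[r-1, c-1], img[r-1, c], img[r-1, c+1], img[r, c+1],
            img[r+1, c+1], img[r+1, c], img[r+1, c-1], img[r, c-1]]
    return sum(int(v >= center) << i for i, v in enumerate(nbrs))

def point_lbp_histogram(img, points, radius=3):
    """Histogram of LBP codes in a (2*radius+1)^2 patch around each
    facial feature point; histograms are concatenated into one vector."""
    feats = []
    for (r, c) in points:
        codes = [lbp_code(img, i, j)
                 for i in range(r - radius, r + radius + 1)
                 for j in range(c - radius, c + radius + 1)]
        hist, _ = np.histogram(codes, bins=256, range=(0, 256))
        feats.append(hist / hist.sum())
    return np.concatenate(feats)
```

Because each histogram is computed relative to a detected feature point, no global face alignment is needed, which matches the claim in the abstract.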

  13. Enhancement of the Adaptive Shape Variants Average Values by Using Eight Movement Directions for Multi-Features Detection of Facial Sketch

    Directory of Open Access Journals (Sweden)

    Arif Muntasa

    2012-04-01

    Full Text Available This paper aims to detect multiple features of a facial sketch by using a novel approach. The detection of multiple features of facial sketches has been conducted by several researchers, but they mainly considered frontal face sketches as object samples. In fact, detecting the features of a facial sketch at a certain angle is very important to assist police in describing a criminal's face when it appears only at that angle. Integration of maximum line gradient value enhancement and level set methods was implemented to detect facial feature sketches with tilt angles of up to 15 degrees. However, these methods tend to move towards non-features when there is a lot of graffiti around the shape. To overcome this weakness, the author proposes a novel approach that moves the shape by adding a parameter to control the movement, based on enhancement of the adaptive shape variants' average values with 8 movement directions. The experimental results show that the proposed method can improve the detection accuracy up to 92.74%.

  14. Facial contour deformity correction with microvascular flaps based on the 3-dimensional template and facial moulage

    Directory of Open Access Journals (Sweden)

    Dinesh Kadam

    2013-01-01

    Full Text Available Introduction: Facial contour deformities present with varied aetiology and degrees of severity. Accurate assessment, selecting a suitable tissue and sculpting it to fill the defect are challenging and largely subjective tasks. Objective assessment with imaging and software is not always feasible, and preparing a template is complicated. A three-dimensional (3D) wax template pre-fabricated over the facial moulage helps surgeons accomplish these tasks. Severe deformities demand stable vascular tissue for an acceptable outcome. Materials and Methods: We present a review of eight consecutive patients who underwent augmentation of facial contour defects with free flaps between June 2005 and January 2011. A de-epithelialised free anterolateral thigh (ALT) flap was used in three patients, radial artery forearm and fibula osteocutaneous flaps in two each, and a groin flap in one. A 3D wax template was fabricated by augmenting the deformity on the facial moulage. It was utilised to select the flap, determine the exact dimensions and sculpt the flap intraoperatively. Ancillary procedures such as genioplasty, rhinoplasty and coloboma correction were performed. Results: The average age at presentation was 25 years, the average disease-free interval was 5.5 years, and all flaps survived. The mean follow-up period was 21.75 months. The correction was aesthetically acceptable and was maintained without any recurrence or atrophy. Conclusion: The 3D wax template on a facial moulage is a simple, inexpensive and precise objective tool. It provides an accurate guide for planning and executing flap reconstruction. The selection of the flap is based on the type and extent of the defect. The superiority of vascularised free tissue is well known, and the ALT flap offers a versatile option for correcting varying degrees of deformity. Ancillary procedures improve the overall aesthetic outcome, and minor flap touch-up procedures are generally required.

  15. A novel human-machine interface based on recognition of multi-channel facial bioelectric signals

    International Nuclear Information System (INIS)

    Full text: This paper presents a novel human-machine interface for disabled people to interact with assistive systems for a better quality of life. It is based on multichannel forehead bioelectric signals acquired by placing three pairs of electrodes (physical channels) on the Frontalis and Temporalis facial muscles. The acquired signals are passed through a parallel filter bank to explore three different sub-bands related to the facial electromyogram, electrooculogram and electroencephalogram. Root mean square features of the bioelectric signals, analyzed within non-overlapping 256 ms windows, were extracted. The subtractive fuzzy c-means clustering method (SFCM) was applied to segment the feature space and generate initial fuzzy-based Takagi-Sugeno rules. Then, an adaptive neuro-fuzzy inference system was exploited to tune the premise and consequence parameters of the extracted SFCM rules. The average classifier discrimination ratio for eight different facial gestures (smiling, frowning, pulling up the left/right lip corner, and eye movement to the left/right/up/down) is between 93.04% and 96.99%, according to different combinations and fusions of logical features. Experimental results show that the proposed interface has a high degree of accuracy and robustness for discrimination of 8 fundamental facial gestures. Some potential and further capabilities of our approach in human-machine interfaces are also discussed. (author)

  16. Local evidence aggregation for regression-based facial point detection

    NARCIS (Netherlands)

    Martinez, Brais; Valstar, Michel F.; Binefa, Xavier; Pantic, Maja

    2013-01-01

    We propose a new algorithm to detect facial points in frontal and near-frontal face images. It combines a regression-based approach with a probabilistic graphical model-based face shape model that restricts the search to anthropomorphically consistent regions. While most regression-based approaches

  17. Facial Sketch Synthesis Using 2D Direct Combined Model-Based Face-Specific Markov Network.

    Science.gov (United States)

    Tu, Ching-Ting; Chan, Yu-Hsien; Chen, Yi-Chung

    2016-08-01

    A facial sketch synthesis system is proposed, featuring a 2D direct combined model (2DDCM)-based face-specific Markov network. In contrast to the existing facial sketch synthesis systems, the proposed scheme aims to synthesize sketches, which reproduce the unique drawing style of a particular artist, where this drawing style is learned from a data set consisting of a large number of image/sketch pairwise training samples. The synthesis system comprises three modules, namely, a global module, a local module, and an enhancement module. The global module applies a 2DDCM approach to synthesize the global facial geometry and texture of the input image. The detailed texture is then added to the synthesized sketch in a local patch-based manner using a parametric 2DDCM model and a non-parametric Markov random field (MRF) network. Notably, the MRF approach gives the synthesized results an appearance more consistent with the drawing style of the training samples, while the 2DDCM approach enables the synthesis of outcomes with a more derivative style. As a result, the similarity between the synthesized sketches and the input images is greatly improved. Finally, a post-processing operation is performed to enhance the shadowed regions of the synthesized image by adding strong lines or curves to emphasize the lighting conditions. The experimental results confirm that the synthesized facial images are in good qualitative and quantitative agreement with the input images as well as the ground-truth sketches provided by the same artist. The representational power of the proposed framework is demonstrated by synthesizing facial sketches from input images with a wide variety of facial poses, lighting conditions, and races, even when such images are not included in the training data set. Moreover, the practical applicability of the proposed framework is demonstrated by means of automatic facial recognition tests. PMID:27244737

  18. Scattered Data Processing Approach Based on Optical Facial Motion Capture

    Directory of Open Access Journals (Sweden)

    Qiang Zhang

    2013-01-01

    Full Text Available In recent years, animation reconstruction of facial expressions has become a popular research field in computer science, and motion capture-based facial expression reconstruction is now emerging in this field. Based on facial motion data obtained using a passive optical motion capture system, we propose a scattered data processing approach that aims to solve the common problems of missing data and noise. To recover missing data, given the nonlinear relationships between the current missing marker and its neighbors, we propose an improved version of a previous method in which we use the motion of three muscles rather than one to recover the missing data. To reduce the noise, we first apply preprocessing to eliminate impulsive noise, before our proposed third-order quasi-uniform B-spline-based fitting method is used to reduce the remaining noise. Our experiments showed that the principles underlying this method are simple and straightforward, and it delivered acceptable precision during reconstruction.
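
The cubic B-spline smoothing step for a single marker coordinate can be approximated with SciPy's smoothing spline; the smoothing factor below is an assumed value rather than the paper's quasi-uniform construction.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def smooth_marker_track(t, y, noise_std=0.05):
    """Fit a cubic (k=3) smoothing B-spline to one marker coordinate.
    The smoothing factor s is chosen so residuals match the assumed noise."""
    spl = UnivariateSpline(t, y, k=3, s=len(t) * noise_std ** 2)
    return spl(t)
```

In practice each of the x, y, z coordinates of each marker would be smoothed independently over the frame times.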

  19. Feature Extraction based Face Recognition, Gender and Age Classification

    OpenAIRE

    Venugopal K R2; L M Patnaik; Ramesha K; K B Raja

    2010-01-01

    The face recognition system with large sets of training sets for personal identification normally attains good accuracy. In this paper, we proposed Feature Extraction based Face Recognition, Gender and Age Classification (FEBFRGAC) algorithm with only small training sets and it yields good results even with one image per person. This process involves three stages: Pre-processing, Feature Extraction and Classification. The geometric features of facial images like eyes, nose, mouth etc. are loc...

  20. Facial expression recognition using biologically inspired features and SVM

    Institute of Scientific and Technical Information of China (English)

    穆国旺; 王阳; 郭蔚

    2014-01-01

    C1 features are introduced to facial expression recognition for static images, and a new algorithm for facial expression recognition based on Biologically Inspired Features (BIFs) and SVM is proposed. C1 features of the facial images are extracted, the PCA+LDA method is used to reduce the dimensionality of the C1 features, and an SVM is used for classification of the expression. Experiments on the JAFFE and Extended Cohn-Kanade (CK+) facial expression data sets show the effectiveness and good performance of the algorithm.
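
The C1 feature extraction aside, the PCA+LDA dimensionality reduction followed by SVM classification maps directly onto scikit-learn; the toy vectors standing in for C1 features and the component counts below are assumptions for illustration.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

# Stand-in for C1 feature vectors: 3 expression classes in 50 dimensions.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(mu, 1.0, (30, 50)) for mu in (0.0, 2.0, 4.0)])
y = np.repeat([0, 1, 2], 30)

# PCA for decorrelation, LDA for class-discriminant projection, SVM on top.
clf = make_pipeline(PCA(n_components=20),
                    LinearDiscriminantAnalysis(n_components=2),
                    SVC(kernel="linear"))
clf.fit(X, y)
train_acc = clf.score(X, y)
```

PCA before LDA is the usual guard against singular within-class scatter when the feature dimension exceeds the sample count.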

  1. Local Illumination Normalization and Facial Feature Point Selection for Robust Face Recognition

    Directory of Open Access Journals (Sweden)

    Song HAN

    2013-03-01

    Full Text Available Face recognition systems must be robust to variation in factors such as facial expression, illumination, head pose and aging. In particular, robustness against illumination variation is one of the most important problems to be solved for the practical use of face recognition systems. The Gabor wavelet is widely used in face detection and recognition because it makes it possible to simulate the function of the human visual system. In this paper, we propose a method for extracting Gabor wavelet features that is stable under variation of local illumination, and we show experimental results demonstrating its effectiveness.
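
A single Gabor kernel of the kind used for such features, and its response at a facial landmark, can be built directly in numpy; all parameter values here are illustrative, not the paper's settings.

```python
import numpy as np

def gabor_kernel(size=21, theta=0.0, lam=8.0, sigma=4.0, gamma=0.5, psi=0.0):
    """Real part of a 2-D Gabor filter: Gaussian envelope times a cosine
    carrier of wavelength lam, oriented at angle theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return (np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2.0 * sigma ** 2))
            * np.cos(2.0 * np.pi * xr / lam + psi))

def response_at(img, kernel, r, c):
    """Filter response at one facial landmark (r, c)."""
    h = kernel.shape[0] // 2
    patch = img[r - h:r + h + 1, c - h:c + h + 1]
    return float((patch * kernel).sum())
```

A full Gabor feature (jet) would stack responses over several orientations and wavelengths at each landmark.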

  2. [Peculiar features of mastoiditis in a breast-fed infant with the "exposed" facial nerve].

    Science.gov (United States)

    Andreeva, I G

    2013-01-01

    This paper reports the clinical case of mastoiditis in a 5-month-old child in whom an unusual localization of the totally "naked" facial nerve outside the bone canal in the mastoid part was discovered intraoperatively. This finding was quite unexpected because nerves are not visible on CT scanograms. The author emphasizes that the clinical course of otitis media in breast-fed infants and young children is characterized by a number of peculiarities due to the specific anatomical, physiological, and immunological features of the child's organism. She also notes that the number of antromastoidotomies for the treatment of mastoiditis has increased in Tatarstan in recent years.

  3. Is the emotion recognition deficit associated with frontotemporal dementia caused by selective inattention to diagnostic facial features?

    Science.gov (United States)

    Oliver, Lindsay D; Virani, Karim; Finger, Elizabeth C; Mitchell, Derek G V

    2014-07-01

    Frontotemporal dementia (FTD) is a debilitating neurodegenerative disorder characterized by severely impaired social and emotional behaviour, including emotion recognition deficits. Though fear recognition impairments seen in particular neurological and developmental disorders can be ameliorated by reallocating attention to critical facial features, the possibility that similar benefits can be conferred to patients with FTD has yet to be explored. In the current study, we examined the impact of presenting distinct regions of the face (whole face, eyes-only, and eyes-removed) on the ability to recognize expressions of anger, fear, disgust, and happiness in 24 patients with FTD and 24 healthy controls. A recognition deficit was demonstrated across emotions by patients with FTD relative to controls. Crucially, removal of diagnostic facial features resulted in an appropriate decline in performance for both groups; furthermore, patients with FTD demonstrated a lack of disproportionate improvement in emotion recognition accuracy as a result of isolating critical facial features relative to controls. Thus, unlike some neurological and developmental disorders featuring amygdala dysfunction, the emotion recognition deficit observed in FTD is not likely driven by selective inattention to critical facial features. Patients with FTD also mislabelled negative facial expressions as happy more often than controls, providing further evidence for abnormalities in the representation of positive affect in FTD. This work suggests that the emotional expression recognition deficit associated with FTD is unlikely to be rectified by adjusting selective attention to diagnostic features, as has proven useful in other select disorders.

  5. Contributions of feature shapes and surface cues to the recognition and neural representation of facial identity.

    Science.gov (United States)

    Andrews, Timothy J; Baseler, Heidi; Jenkins, Rob; Burton, A Mike; Young, Andrew W

    2016-10-01

    A full understanding of face recognition will involve identifying the visual information that is used to discriminate different identities and how this is represented in the brain. The aim of this study was to explore the importance of shape and surface properties in the recognition and neural representation of familiar faces. We used image morphing techniques to generate hybrid faces that mixed shape properties (more specifically, second order spatial configural information as defined by feature positions in the 2D-image) from one identity and surface properties from a different identity. Behavioural responses showed that recognition and matching of these hybrid faces was primarily based on their surface properties. These behavioural findings contrasted with neural responses recorded using a block design fMRI adaptation paradigm to test the sensitivity of Haxby et al.'s (2000) core face-selective regions in the human brain to the shape or surface properties of the face. The fusiform face area (FFA) and occipital face area (OFA) showed a lower response (adaptation) to repeated images of the same face (same shape, same surface) compared to different faces (different shapes, different surfaces). From the behavioural data indicating the critical contribution of surface properties to the recognition of identity, we predicted that brain regions responsible for familiar face recognition should continue to adapt to faces that vary in shape but not surface properties, but show a release from adaptation to faces that vary in surface properties but not shape. However, we found that the FFA and OFA showed an equivalent release from adaptation to changes in both shape and surface properties. The dissociation between the neural and perceptual responses suggests that, although they may play a role in the process, these core face regions are not solely responsible for the recognition of facial identity.

  6. A novel three-dimensional smile analysis based on dynamic evaluation of facial curve contour

    OpenAIRE

    Yi Lin; Han Lin; Qiuping Lin; Jinxin Zhang; Ping Zhu; Yao Lu; Zhi Zhao; Jiahong Lv; Mln Kyeong Lee; Yue Xu

    2016-01-01

    The influence of three-dimensional facial contour and dynamic evaluation decoding on factors of smile esthetics is essential for facial beauty improvement. However, the kinematic features of the facial smile contour and the contribution from the soft tissue and underlying skeleton are uncharted. Here, the cheekbone-maxilla contour and nasolabial fold were combined into a “smile contour” delineating the overall facial topography emerges prominently in smiling. We screened out the stable and un...

  7. 3D Facial Similarity Measure Based on Geodesic Network and Curvatures

    Directory of Open Access Journals (Sweden)

    Junli Zhao

    2014-01-01

    Full Text Available Automated 3D facial similarity measurement is a challenging and valuable research topic in anthropology and computer graphics. It is widely used in various fields, such as criminal investigation, kinship confirmation, and face recognition. This paper proposes a 3D facial similarity measure based on a combination of geodesic and curvature features. Firstly, a geodesic network is generated for each face, with geodesics and iso-geodesics determined, and the network points are adopted as the correspondence across face models. Then, four metrics associated with curvatures, that is, the mean curvature, Gaussian curvature, shape index, and curvedness, are computed for each network point by using a weighted average of its neighborhood points. Finally, correlation coefficients according to these metrics are computed, respectively, as the similarity measures between two 3D face models. Experiments on 3D facial models of different persons and on different 3D facial models of the same person were carried out and compared with a subjective face-similarity study. The results show that the geodesic network plays an important role in 3D facial similarity measurement. The similarity measure defined by the shape index is basically consistent with human subjective evaluation, and it can measure 3D face similarity more objectively than the other indices.
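
The shape index, curvedness and correlation-based similarity in the abstract reduce to a few lines; the principal-curvature arrays are assumed to have been estimated from the mesh beforehand (e.g. by local surface fitting at each network point).

```python
import numpy as np

def shape_index(k1, k2):
    """Koenderink shape index from principal curvatures (k1 >= k2);
    a tiny offset guards against umbilic points where k1 == k2."""
    k1 = np.asarray(k1, dtype=float)
    k2 = np.asarray(k2, dtype=float)
    return (2.0 / np.pi) * np.arctan((k2 + k1) / (k2 - k1 - 1e-12))

def curvedness(k1, k2):
    """Magnitude of curvature, complementary to the shape index."""
    return np.sqrt((np.asarray(k1) ** 2 + np.asarray(k2) ** 2) / 2.0)

def face_similarity(metric_a, metric_b):
    """Pearson correlation of a per-network-point metric across two faces."""
    return float(np.corrcoef(metric_a, metric_b)[0, 1])
```

Because the geodesic network gives a point-to-point correspondence between models, the per-point metric vectors can be correlated directly.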

  8. Ultrasonographic evaluation of fetal facial anatomy (I): ultrasonographic features of the normal fetal face in an in vitro study

    Institute of Scientific and Technical Information of China (English)

    李胜利; 陈琮瑛; 刘菊玲; 欧阳淑媛

    2004-01-01

    Background Because skills for scanning the normal fetal facial structures and knowledge of their corresponding ultrasonic features are often lacking, misdiagnoses frequently occur. Therefore, we studied the appearance and improved the display of fetal facial anatomy in order to provide a basis for prenatal diagnosis. Methods Twenty fetuses with normal facial anatomy, from labor induced because of malformations other than facial anomalies, were immersed in a water bath and then scanned ultrasonographically on coronal, sagittal and transverse planes to define the ultrasonic image features of normal anatomy. The coronal and sagittal planes obtained from the submandibular triangle were used for displaying the soft and hard palate in particular. Results The facial anatomic structures of the fetus can be clearly displayed through the three routine orthogonal planes. However, the soft and hard palate can be displayed only on the planes obtained from the submandibular triangle. Conclusions The superficial soft tissues and deep bony structures of the fetal face can be recognized and evaluated by routine ultrasonographic imaging, which is a reliable prenatal diagnostic technique for evaluating fetal facial anatomy. The soft and hard palate can be well demonstrated by the submandibular triangle approach.

  9. Facial Expression Recognition based on Independent Component Analysis

    OpenAIRE

    XiaoHui Guo; Xiao Zhang; Chao Deng; Jianyu Wei

    2013-01-01

    As an important part of artificial intelligence and pattern recognition, facial expression recognition has drawn much attention recently and numerous methods have been proposed. Feature extraction is the most important part which directly affects the final recognition results. Independent component analysis (ICA) is a subspace analysis method, which is also a novel statistical technique in signal processing and machine learning that aims at finding linear projections of the data that maximize...
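
ICA-based feature extraction of the kind described can be sketched with scikit-learn's FastICA; the two synthetic sources below stand in for facial-image data, and the mixing matrix is an illustrative assumption.

```python
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 8, 1000)
s1 = np.sin(2 * t)                       # smooth source
s2 = np.sign(np.sin(3 * t))              # square-wave source
S = np.c_[s1, s2]
A = np.array([[1.0, 0.5], [0.5, 1.0]])   # assumed mixing matrix
X = S @ A.T                              # observed mixtures

# FastICA finds maximally statistically independent components.
ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)             # recovered independent components
```

For face images, each row of `X` would instead be a vectorized image, and the recovered components serve as basis features for classification.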

  10. Fixation to features and neural processing of facial expressions in a gender discrimination task.

    Science.gov (United States)

    Neath, Karly N; Itier, Roxane J

    2015-10-01

    Early face encoding, as reflected by the N170 ERP component, is sensitive to fixation to the eyes. Whether this sensitivity varies with facial expressions of emotion, and whether it can also be seen on other ERP components such as the P1 and EPN, was investigated. Using eye-tracking to manipulate fixation on facial features, we found the N170 to be the only eye-sensitive component, and this was true for fearful, happy and neutral faces. A different effect of fixation to features was seen for the earlier P1, which likely reflected a general sensitivity to face position. An early effect of emotion (∼120 ms) for happy faces was seen at occipital sites and was sustained until ∼350 ms post-stimulus. For fearful faces, an early effect was seen around 80 ms, followed by a later effect from ∼150 ms until ∼300 ms at lateral posterior sites. Results suggest that in this emotion-irrelevant gender discrimination task, processing of fearful and happy expressions occurred early and largely independently of the eye sensitivity indexed by the N170. Processing of the two emotions involved different underlying brain networks active at different times. PMID:26277653

  11. Gender Recognition Based on Sift Features

    CERN Document Server

    Yousefi, Sahar

    2011-01-01

    This paper proposes a robust approach for face detection and gender classification in color images. Previous research on gender recognition assumes a computationally expensive and time-consuming pre-processing alignment step, in which face images are aligned so that facial landmarks like the eyes, nose, lips and chin are placed in uniform locations in the image. In this paper, a novel technique based on mathematical analysis is presented in three stages that eliminates the alignment step. First, a new color-based face detection method is presented with better results and more robustness in complex backgrounds. Next, features which are invariant to affine transformations are extracted from each face using the scale invariant feature transform (SIFT) method. To evaluate the performance of the proposed algorithm, experiments have been conducted by employing an SVM classifier on a database of face images which contains 500 images of distinct people with an equal ratio of males and females.

  12. Using Computers for Assessment of Facial Features and Recognition of Anatomical Variants that Result in Unfavorable Rhinoplasty Outcomes

    Directory of Open Access Journals (Sweden)

    Tarik Ozkul

    2008-04-01

    Full Text Available Rhinoplasty and facial plastic surgery are among the most frequently performed surgical procedures in the world. Although the underlying anatomical features of the nose and face are very well known, performing a successful facial surgery requires not only surgical skill but also aesthetic talent from the surgeon. Surgically sculpting facial features in the correct proportions to end up with an aesthetically pleasing result is highly difficult. To further complicate the matter, some patients may have anatomical features that affect the outcome of a rhinoplasty operation negatively. If they go undetected, these anatomical variants jeopardize the surgery, causing unexpected rhinoplasty outcomes. In this study, a model is developed with the aid of artificial intelligence tools, which analyses the facial features of the patient from a photograph and generates an index of "appropriateness" of the facial features and an index of the existence of anatomical variants that affect rhinoplasty negatively. The software tool developed is intended to detect the variants and warn the surgeon before the surgery. Another purpose of the tool is to generate an objective score to assess the outcome of the surgery.

  13. 3D face recognition based on matching of facial surfaces

    Science.gov (United States)

    Echeagaray-Patrón, Beatriz A.; Kober, Vitaly

    2015-09-01

    Face recognition is an important task in pattern recognition and computer vision. In this work a method for 3D face recognition in the presence of facial expression and pose variations is proposed. The method uses 3D shape data without color or texture information. A new matching algorithm is suggested, based on conformal mapping of the original facial surfaces onto a Riemannian manifold, followed by comparison of conformal and isometric invariants computed in the manifold. Experimental results are presented using common 3D face databases that contain significant amounts of expression and pose variation.

  14. Automated detection of pain from facial expressions: a rule-based approach using AAM

    Science.gov (United States)

    Chen, Zhanli; Ansari, Rashid; Wilkie, Diana J.

    2012-02-01

    In this paper, we examine the problem of using video analysis to assess pain, an important problem especially for critically ill, non-communicative patients and people with dementia. We propose and evaluate an automated method to detect the presence of pain manifested in patient videos, using a unique and large collection of cancer patient videos captured in patient homes. The method is based on detecting pain-related facial action units defined in the Facial Action Coding System (FACS), which is widely used for objective assessment in pain analysis. In our research, a person-specific Active Appearance Model (AAM) based on the Project-Out Inverse Compositional Method is trained for each patient individually for modeling purposes. A flexible representation of the shape model is used in a rule-based method that is better suited than the more commonly used classifier-based methods for application to the cancer patient videos, in which pain-related facial actions occur infrequently and more subtly. The rule-based method relies on feature points that provide facial action cues and are extracted from the shape vertices of the AAM, which have a natural correspondence to facial muscle movement. In this paper, we investigate the detection of a commonly used set of pain-related action units in both the upper and lower face. Our detection results show good agreement with the results obtained by three trained FACS coders who independently reviewed and scored the action units in the cancer patient videos.
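
A rule of the kind described, stated over AAM shape vertices, might look like the following; the landmark indices, neutral baseline and threshold ratio are illustrative assumptions, not the authors' FACS rules.

```python
import numpy as np

def brow_lower_cue(shape, brow_idx, eye_idx, baseline_dist, ratio=0.8):
    """Flag an AU4-like brow-lowering cue when the brow-to-eye distance
    drops below a fraction of the person-specific neutral baseline.
    `shape` is an (n_points, 2) array of AAM shape vertices."""
    d = np.linalg.norm(shape[brow_idx] - shape[eye_idx])
    return d < ratio * baseline_dist
```

Because the baseline is measured per patient from a neutral frame, such rules stay meaningful even for subtle, infrequent facial actions where a generic classifier would lack training data.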

  15. Facial Expression Recognition Based on Discriminant Neighborhood Preserving Nonnegative Tensor Factorization and ELM

    Directory of Open Access Journals (Sweden)

    Gaoyun An

    2014-01-01

    Full Text Available A novel facial expression recognition algorithm based on discriminant neighborhood preserving nonnegative tensor factorization (DNPNTF) and an extreme learning machine (ELM) is proposed. A discriminant constraint is adopted according to manifold learning and graph embedding theory. The constraint is useful for exploiting the spatial neighborhood structure and the prior defined discriminant properties. The parts-based representations obtained by our algorithm vary smoothly along the geodesics of the data manifold and have good discriminant properties. To guarantee convergence, the projected gradient method is used for optimization. Features extracted by DNPNTF are then fed into the ELM, a training method for single-hidden-layer feed-forward networks (SLFNs). Experimental results on the JAFFE database and the Cohn-Kanade database demonstrate that our proposed algorithm extracts effective features and performs well in facial expression recognition.
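
The ELM classification stage is simple enough to sketch in pure numpy: a random hidden layer, a nonlinearity, and a least-squares read-out. The toy data and hidden-layer size below are assumptions for illustration.

```python
import numpy as np

class ELM:
    """Minimal extreme learning machine: random fixed hidden weights,
    tanh activation, and a pseudo-inverse least-squares output layer."""
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)       # hidden-layer outputs
        T = np.eye(n_classes)[y]               # one-hot targets
        self.beta = np.linalg.pinv(H) @ T      # least-squares output weights
        return self

    def predict(self, X):
        return np.argmax(np.tanh(X @ self.W + self.b) @ self.beta, axis=1)
```

Only the output weights are learned, which is why ELM training is a single linear solve rather than iterative back-propagation.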

  16. GENDER RECOGNITION BASED ON SIFT FEATURES

    Directory of Open Access Journals (Sweden)

    Sahar Yousefi

    2011-08-01

    Full Text Available This paper proposes a robust approach for face detection and gender classification in color images. Previous research on gender recognition assumes a computationally expensive and time-consuming pre-processing alignment step, in which face images are aligned so that facial landmarks like the eyes, nose, lips and chin are placed in uniform locations in the image. In this paper, a novel technique based on mathematical analysis is presented in three stages that eliminates the alignment step. First, a new color-based face detection method is presented with better results and more robustness in complex backgrounds. Next, features which are invariant to affine transformations are extracted from each face using the scale invariant feature transform (SIFT) method. To evaluate the performance of the proposed algorithm, experiments have been conducted by employing an SVM classifier on a database of face images which contains 500 images of distinct people with an equal ratio of males and females.

  17. A stable biologically motivated learning mechanism for visual feature extraction to handle facial categorization.

    Directory of Open Access Journals (Sweden)

    Karim Rajaei

    Full Text Available The brain mechanism of extracting visual features for recognizing various objects has consistently been a controversial issue in computational models of object recognition. To extract visual features, we introduce a new, biologically motivated model for facial categorization, which is an extension of the Hubel and Wiesel simple-to-complex cell hierarchy. To address the synaptic stability versus plasticity dilemma, we apply Adaptive Resonance Theory (ART) for extracting informative intermediate-level visual features during the learning process, which also makes this model stable against the destruction of previously learned information while learning new information. Such a mechanism has been suggested to be embedded within known laminar microcircuits of the cerebral cortex. To reveal the strength of the proposed visual feature learning mechanism, we show that when we use this mechanism in the training process of a well-known biologically motivated object recognition model (the HMAX model), it performs better than the HMAX model in face/non-face classification tasks. Furthermore, we demonstrate that our proposed mechanism is capable of following similar trends in performance as humans in a psychophysical experiment using a face versus non-face rapid categorization task.

  18. Data-driven facial animation based on manifold Bayesian regression

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Driving facial animation based on tens of tracked markers is a challenging task due to the complex topology and to the non-rigid nature of human faces. We propose a solution named manifold Bayesian regression. First a novel distance metric, the geodesic manifold distance, is introduced to replace the Euclidean distance. The problem of facial animation can be formulated as a sparse warping kernels regression problem, in which the geodesic manifold distance is used for modelling the topology and discontinuities of the face models. The geodesic manifold distance can be adopted in traditional regression methods, e.g. radial basis functions without much tuning. We put facial animation into the framework of Bayesian regression. Bayesian approaches provide an elegant way of dealing with noise and uncertainty. After the covariance matrix is properly modulated, Hybrid Monte Carlo is used to approximate the integration of probabilities and get deformation results. The experimental results showed that our algorithm can robustly produce facial animation with large motions and complex face models.
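The geodesic manifold distance introduced above can be approximated in the standard way: connect each sample to its nearest neighbours with Euclidean edges, then take shortest paths over the resulting graph. The sketch below is a generic illustration of that idea; the neighbour count, the toy curve and the Floyd-Warshall choice are assumptions, not the paper's implementation.

```python
import numpy as np

def geodesic_distances(points, k=3):
    """Approximate geodesic manifold distance: Euclidean edges on a kNN graph,
    then all-pairs shortest paths (Floyd-Warshall)."""
    n = len(points)
    eu = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    D = np.full((n, n), np.inf)
    np.fill_diagonal(D, 0.0)
    for i in range(n):
        for j in np.argsort(eu[i])[1:k + 1]:   # connect each point to its k nearest neighbours
            D[i, j] = D[j, i] = eu[i, j]       # symmetrize: undirected graph
    for m in range(n):                          # Floyd-Warshall relaxation
        D = np.minimum(D, D[:, m:m + 1] + D[m:m + 1, :])
    return D

# points along a half circle embedded in 2D: the geodesic follows the curve,
# while the Euclidean distance between the endpoints cuts across the chord
t = np.linspace(0, np.pi, 20)
curve = np.column_stack([np.cos(t), np.sin(t)])
D = geodesic_distances(curve, k=2)
# endpoint chord length is exactly 2; the geodesic approximates the arc length pi
```

On a face mesh, this is what lets the metric respect topology: the lips' upper and lower edges are close in Euclidean terms but far along the surface.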

  19. Shape-constrained Gaussian Process Regression for Facial-point-based Head-pose Normalization

    NARCIS (Netherlands)

    Rudovic, Ognjen; Pantic, Maja

    2011-01-01

    Given the facial points extracted from an image of a face in an arbitrary pose, the goal of facial-point-based headpose normalization is to obtain the corresponding facial points in a predefined pose (e.g., frontal). This involves inference of complex and high-dimensional mappings due to the large n

  20. Image Based Hair Segmentation Algorithm for the Application of Automatic Facial Caricature Synthesis

    Directory of Open Access Journals (Sweden)

    Yehu Shen

    2014-01-01

    Full Text Available Hair is a salient feature of the human face region and one of the important cues for face analysis. Accurate detection and representation of the hair region is one of the key components of automatic synthesis of human facial caricatures. In this paper, an automatic hair detection algorithm for automatic facial caricature synthesis from a single image is proposed. First, hair regions in training images are labeled manually, and the hair position prior distributions and the hair color likelihood distribution function are estimated from these labels efficiently. Second, the energy function of the test image is constructed from the estimated prior distributions of hair location and the hair color likelihood. This energy function is optimized using the graph cuts technique to obtain an initial hair region. Finally, the K-means algorithm and image post-processing techniques are applied to the initial hair region so that the final hair region can be segmented precisely. Experimental results show that the average processing time for each image is about 280 ms and the average hair region detection accuracy is above 90%. The proposed algorithm has been applied to a facial caricature synthesis system, and experiments confirmed that with the proposed hair segmentation the resulting caricatures are vivid and satisfying.
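The final K-means refinement step above can be illustrated on toy data: a rough "hair region" contaminated with bright skin pixels is split into two color clusters, and the darker cluster is kept as hair. The deterministic darkest/brightest initialisation and the toy RGB values are assumptions for this demo, not the paper's settings.

```python
import numpy as np

def kmeans2(X, iters=20):
    """Two-cluster Lloyd's algorithm, initialised at the darkest and brightest pixels."""
    centers = np.array([X[X.sum(1).argmin()], X[X.sum(1).argmax()]], dtype=float)
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)   # squared distance to each centre
        labels = d.argmin(1)
        centers = np.array([X[labels == c].mean(0) for c in (0, 1)])
    return labels, centers

# toy "initial hair region": dark hair RGB pixels plus some leaked bright skin pixels
rng = np.random.default_rng(3)
hair = rng.normal([30, 25, 20], 5, (200, 3))     # dark RGB cluster
skin = rng.normal([200, 170, 150], 5, (100, 3))  # bright RGB cluster inside the rough mask
region = np.vstack([hair, skin])
labels, centers = kmeans2(region)
refined_hair_mask = labels == 0   # cluster 0 was seeded at the darkest pixel
```

In the real pipeline the cluster assignment would then be mapped back to pixel coordinates and cleaned up morphologically.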

  1. Long-term assessment of facial features and functions needing more attention in treatment of Treacher Collins syndrome

    NARCIS (Netherlands)

    Plomp, Raul G.; Versnel, Sarah L.; van Lieshout, Manouk J. S.; Poublon, Rene M. L.; Mathijssen, Irene M. J.

    2013-01-01

    Aim: This study aimed to determine which facial features and functions need more attention during surgical treatment of Treacher Collins syndrome (TCS) in the long term. Method: A cross-sectional cohort study was conducted to compare 23 TCS patients with 206 controls (all >= 18 years) regarding sati

  2. Facial expression recognition on a people-dependent personal facial expression space (PFES)

    Science.gov (United States)

    Chandrasiri, N. P.; Park, Min Chul; Naemura, Takeshi; Harashima, Hiroshi

    2000-04-01

    In this paper, a person-specific facial expression recognition method based on a Personal Facial Expression Space (PFES) is presented. Multidimensional scaling maps facial images to points in the lower-dimensional PFES. The space reflects the personality of facial expressions, as it is built from peak-instant facial expression images of a specific person. In constructing the PFES for a person, the whole normalized facial image is treated as a single pattern without block segmentation, and the differences of 2-D DCT coefficients from the person's neutral facial image are used as features. Therefore, in the early part of the paper, the separation characteristics of facial expressions in the frequency domain are analyzed using a still facial image database consisting of neutral, smile, anger, surprise and sadness images for each of 60 Japanese males (300 facial images). Results show that facial expression categories are well separated in the low-frequency domain. The PFES is constructed using multidimensional scaling, taking these low-frequency differences of 2-D DCT coefficients as features. On the PFES, the trajectory of a facial image sequence of a person can be calculated in real time, and facial expressions can be recognized from this trajectory. Experimental results show the effectiveness of the method.
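The 2-D DCT feature described above can be sketched as follows: compute an orthonormal 2-D DCT, keep only the low-frequency (top-left) block, and take the difference between an expression image and the same person's neutral image. The 32x32 image size and the 8x8 cutoff are illustrative assumptions.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (n x n)."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0] /= np.sqrt(2.0)   # rescale the DC row so the basis is orthonormal
    return C

def low_freq_dct_features(img, keep=8):
    """2D DCT of a normalised face image; keep only the low-frequency top-left block."""
    C = dct_matrix(img.shape[0])
    R = dct_matrix(img.shape[1])
    return (C @ img @ R.T)[:keep, :keep].ravel()

# PFES-style feature: difference of low-frequency DCT coefficients between an
# expression image and the same person's neutral image (toy 32x32 arrays here)
rng = np.random.default_rng(0)
neutral = rng.random((32, 32))
smiling = neutral + 0.1 * rng.random((32, 32))
feat = low_freq_dct_features(smiling) - low_freq_dct_features(neutral)
```

Keeping only the top-left block is exactly the "low frequency domain" restriction the abstract motivates.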

  3. Performance of distance-based matching algorithms in 3D facial identification

    Directory of Open Access Journals (Sweden)

    Petra Urbanová

    2016-06-01

    Full Text Available Facial image identification is an area of forensic science in which an expert provides an opinion on whether or not two or more images depict the same individual. The primary concern for facial image identification is that it must be based on sound scientific principles. The recent extensive development in 3D recording technology, which is presumed to enhance the performance of identification tasks, has made it essential to question the conditions under which 3D images can yield accurate and reliable results. The present paper explores the effects of mesh resolution, the adequacy of selected measures of dissimilarity, and the number of variables employed to encode identity-specific facial features on a dataset of 528 3D face models sampled from the Fidentis 3D Face Database (N ∼ 2100). To match 3D images, two quantitative approaches were tested, the first based on closest point-to-point distances computed from registered surface models and the second grounded in Procrustes distances derived from discrete 3D facial points collected manually on textured 3D facial models. The results, expressed in terms of rank-1 identification rates, ROC curves and likelihood ratios, show that under optimized conditions the tested algorithms have the capacity to provide very accurate and reliable results. The performance of the tested algorithms is, however, highly dependent on mesh resolution and the number of variables employed in the task. The results also show that, in addition to numerical measures of dissimilarity, various 3D visualization tools can assist in decision-making.
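The second matching approach above rests on Procrustes distances between landmark configurations. A minimal sketch (ordinary orthogonal Procrustes after removing translation and scale; unlike stricter shape-analysis formulations, reflections are not excluded here):

```python
import numpy as np

def procrustes_distance(A, B):
    """Procrustes distance between two landmark sets (n_points, dim): remove
    translation and scale, then rotate B optimally onto A (orthogonal Procrustes)."""
    A = A - A.mean(0)
    B = B - B.mean(0)
    A = A / np.linalg.norm(A)
    B = B / np.linalg.norm(B)
    U, _, Vt = np.linalg.svd(B.T @ A)   # solution of min_R ||A - B R||_F
    R = U @ Vt                          # may include a reflection in this simple sketch
    return np.linalg.norm(A - B @ R)

# the same "facial landmarks" rotated, scaled and translated: distance ~ 0
rng = np.random.default_rng(0)
A = rng.random((10, 3))
theta = 0.7
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
B = 2.5 * A @ Rz + np.array([1.0, -2.0, 3.0])
```

Because pose and scale are factored out, what remains measures shape difference only, which is why the metric suits landmark-based identification.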

  4. Efficient Web-based Facial Recognition System Employing 2DHOG

    CERN Document Server

    Abdelwahab, Moataz M; Yousry, Islam

    2012-01-01

    In this paper, a system for facial recognition to identify missing and found people in Hajj and Umrah is described as a web portal. Specifically, we present a novel algorithm for recognition and classification of facial images based on applying 2DPCA to a 2D representation of the histogram of oriented gradients (2D-HOG), which maintains the spatial relation between pixels of the input images. This algorithm allows a compact representation of the images, which reduces the computational complexity and the storage requirements while maintaining the highest reported recognition accuracy. This makes the method suitable for very large datasets. A large dataset was collected for people in Hajj. Experimental results employing the ORL, UMIST, JAFFE, and HAJJ datasets confirm these excellent properties.

  5. An Experimental Investigation about the Integration of Facial Dynamics in Video-Based Face Recognition

    OpenAIRE

    Hadid, Abdenour; Pietikäinen, Matti

    2005-01-01

    Recent psychological and neural studies indicate that when people talk their changing facial expressions and head movements provide a dynamic cue for recognition. Therefore, both fixed facial features and dynamic personal characteristics are used in the human visual system (HVS) to recognize faces. However, most automatic recognition systems use only the static information as it is unclear how the dynamic cue can be integrated and exploited. The few works attempting to combine facial structur...

  6. Facial expression recognition based on improved DAGSVM

    Science.gov (United States)

    Luo, Yuan; Cui, Ye; Zhang, Yi

    2014-11-01

    To address the cumulative error caused by the random ordering of classes in traditional DAGSVM (Directed Acyclic Graph Support Vector Machine) classification, this paper presents an improved DAGSVM expression recognition method. The method uses between-class distance and standard deviation as measures for ordering the classifiers, which minimizes the error rate in the upper levels of the classification structure. At the same time, the paper combines the discrete cosine transform (DCT) with local binary patterns (LBP) to extract expression features, which are input to the improved DAGSVM classifier for recognition. Experimental results show that, compared with other multi-class support vector machine methods, the improved DAGSVM classifier achieves a higher recognition rate. When deployed on an intelligent wheelchair platform, experiments show that the method also has good robustness.
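The LBP half of the feature extraction above can be sketched in a few lines: each pixel's eight neighbours are thresholded against the centre and packed into a byte, and the normalised histogram of codes serves as a texture feature vector. This is the basic 3x3 operator; the paper's exact variant and bit ordering are not specified in the abstract.

```python
import numpy as np

def lbp_image(gray):
    """Basic 3x3 LBP: threshold each pixel's 8 neighbours against the centre
    and pack the results into one byte."""
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]                                  # centre pixels (border dropped)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]       # 8 neighbours, clockwise
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.int32) << bit
    return code

def lbp_histogram(gray, bins=256):
    """Normalised histogram of LBP codes, usable as an expression feature vector."""
    h = np.bincount(lbp_image(gray).ravel(), minlength=bins).astype(float)
    return h / h.sum()
```

In the paper's setting this histogram would be concatenated with DCT coefficients before being passed to the DAGSVM.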

  7. Content-based Image Retrieval Using Constrained Independent Component Analysis: Facial Image Retrieval Based on Compound Queries

    OpenAIRE

    Kim, Tae-Seong; Ahmed, Bilal

    2008-01-01

    In this work, we have proposed a new technique of facial image retrieval based on constrained ICA. Our technique requires no offline learning, pre-processing, and feature extraction. The system has been designed so that none of the user-provided information is lost, and in turn more semantically accurate images can be retrieved. As our future work, we would like to test the system in other domains such as the retrieval of chest x-rays and CT images.

  8. A Study of Techniques for Facial Detection and Expression Classification

    Directory of Open Access Journals (Sweden)

    G.Hemalatha

    2014-04-01

    Full Text Available Automatic recognition of facial expressions is an important component of human-machine interfaces and has attracted considerable research attention since the 1990s. Although humans recognize faces without effort or delay, recognition by a machine is still a challenge; its difficulties include highly dynamic variations in orientation, lighting, scale, facial expression and occlusion. Applications lie in fields such as user authentication, person identification, video surveillance, information security and data privacy. Approaches to facial recognition fall into two categories: holistic-based and feature-based. Holistic methods treat the image data as one entity without isolating different regions of the face, whereas feature-based methods identify certain points on the face such as the eyes, nose and mouth. In this paper, facial expression recognition is analyzed with various methods of facial detection, facial feature extraction and classification.

  9. Intensity Estimation of Spontaneous Facial Action Units Based on Their Sparsity Properties.

    Science.gov (United States)

    Mohammadi, Mohammad Reza; Fatemizadeh, Emad; Mahoor, Mohammad H

    2016-03-01

    Automatic measurement of spontaneous facial action units (AUs) defined by the facial action coding system (FACS) is a challenging problem. The recent FACS user manual defines 33 AUs to describe different facial activities and expressions. In spontaneous facial expressions, only a subset of AUs is activated at any time. Given that AUs occur sparsely over time, we propose a novel method to detect the absence and presence of AUs and estimate their intensity levels via sparse representation (SR). We use robust principal component analysis to decompose expression from facial identity and then estimate the intensity of multiple AUs jointly using a regression model formulated based on dictionary learning and SR. Our experiments on the Denver intensity of spontaneous facial action and UNBC-McMaster shoulder pain expression archive databases show that our method is a promising approach for the measurement of spontaneous facial AUs.

  10. Perceptually Valid Facial Expressions for Character-Based Applications

    OpenAIRE

    Ali Arya; Steve DiPaola; Avi Parush

    2009-01-01

    This paper addresses the problem of creating facial expression of mixed emotions in a perceptually valid way. The research has been done in the context of a “game-like” health and education applications aimed at studying social competency and facial expression awareness in autistic children as well as native language learning, but the results can be applied to many other applications such as games with need for dynamic facial expressions or tools for automating the creation of facial animatio...

  11. Facial Expressions recognition Based on Principal Component Analysis (PCA)

    OpenAIRE

    2014-01-01

    Facial expression recognition is a visual task that humans perform without discomfort, and it is a rapidly growing field of computer research. Many applications and programs use facial expressions to evaluate human character, judgment, feelings, and viewpoint. Recognizing facial expressions is a hard task due to circumstances such as facial occlusions, face shape, illumination, and face color. This paper presents a PCA methodology to ...

  12. Facial Image Analysis Based on Local Binary Patterns: A Survey

    NARCIS (Netherlands)

    Huang, D.; Shan, C.; Ardebilian, M.; Chen, L.

    2011-01-01

    Facial image analysis, including face detection, face recognition, facial expression analysis, facial demographic classification, and so on, is an important and interesting research topic in the computer vision and image processing area, which has many important applications such as human-computer

  13. Influence of skin ageing features on Chinese women's perception of facial age and attractiveness

    Science.gov (United States)

    Porcheron, A; Latreille, J; Jdid, R; Tschachler, E; Morizot, F

    2014-01-01

    Objectives Ageing leads to characteristic changes in the appearance of facial skin. Among these changes, we can distinguish the skin topographic cues (skin sagging and wrinkles), the dark spots and the dark circles around the eyes. Although skin changes are similar in Caucasian and Chinese faces, the age of occurrence and the severity of age-related features differ between the two populations. Little is known about how the ageing of skin influences the perception of female faces in Chinese women. The aim of this study is to evaluate the contribution of the different age-related skin features to the perception of age and attractiveness in Chinese women. Methods Facial images of Caucasian women and Chinese women in their 60s were manipulated separately to reduce the following skin features: (i) skin sagging and wrinkles, (ii) dark spots and (iii) dark circles. Finally, all signs were reduced simultaneously (iv). Female Chinese participants were asked to estimate the age difference between the modified and original images and evaluate the attractiveness of modified and original faces. Results Chinese women perceived the Chinese faces as younger after the manipulation of dark spots than after the reduction in wrinkles/sagging, whereas they perceived the Caucasian faces as the youngest after the manipulation of wrinkles/sagging. Interestingly, Chinese women evaluated faces with reduced dark spots as being the most attractive whatever the origin of the face. The manipulation of dark circles made both Caucasian and Chinese faces be perceived as younger and more attractive than the original faces, although the effect was less pronounced than for the two other types of manipulation. Conclusion This is the first study to have examined the influence of various age-related skin features on the facial age and attractiveness perception of Chinese women. The results highlight the different contributions of dark spots, sagging/wrinkles and dark circles to their perception.

  14. Recognition of 3D facial expression dynamics

    NARCIS (Netherlands)

    Sandbach, G.; Zafeiriou, S.; Pantic, Maja; Rueckert, D.

    2012-01-01

    In this paper we propose a method that exploits 3D motion-based features between frames of 3D facial geometry sequences for dynamic facial expression recognition. An expressive sequence is modelled to contain an onset followed by an apex and an offset. Feature selection methods are applied in order

  15. Man-machine collaboration using facial expressions

    Science.gov (United States)

    Dai, Ying; Katahera, S.; Cai, D.

    2002-09-01

    To realize flexible man-machine collaboration, understanding of facial expressions and gestures is indispensable. We propose a hierarchical recognition approach for understanding human emotions. In this method, facial AFs (action features) are first extracted and recognized using histograms of optical flow. Then, based on the facial AFs, facial expressions are classified into two classes, one representing positive emotions and the other negative ones. The expressions in the positive class and those in the negative class are then classified into more complex emotions revealed by the corresponding facial expressions. Finally, a system architecture is proposed for coordinating the recognition of facial action features and facial expressions for man-machine collaboration.
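The histogram-of-optical-flow step described above can be sketched as follows, assuming the per-pixel flow field has already been estimated by any standard optical-flow method; the 8-bin, magnitude-weighted layout is an illustrative choice, not necessarily the authors'.

```python
import numpy as np

def flow_direction_histogram(flow, n_bins=8):
    """Magnitude-weighted histogram of optical-flow directions.
    `flow` is an (H, W, 2) array of per-pixel (dx, dy) displacements."""
    dx = flow[..., 0].ravel()
    dy = flow[..., 1].ravel()
    mag = np.hypot(dx, dy)                              # motion magnitude per pixel
    ang = np.arctan2(dy, dx) % (2 * np.pi)              # direction in [0, 2*pi)
    idx = np.minimum((ang / (2 * np.pi) * n_bins).astype(int), n_bins - 1)
    hist = np.bincount(idx, weights=mag, minlength=n_bins)
    return hist / (hist.sum() + 1e-12)                  # normalise (guard empty flow)

# toy flow field: everything moves to the right, so all weight lands in bin 0
flow = np.zeros((4, 4, 2))
flow[..., 0] = 1.0
h = flow_direction_histogram(flow)
```

Computed over a facial region between consecutive frames, such a histogram summarises the dominant motion direction of that region, which is the cue the action features build on.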

  16. Automated Facial Expression Recognition Using Gradient-Based Ternary Texture Patterns

    OpenAIRE

    Faisal Ahmed; Emam Hossain

    2013-01-01

    Recognition of human expression from facial image is an interesting research area, which has received increasing attention in the recent years. A robust and effective facial feature descriptor is the key to designing a successful expression recognition system. Although much progress has been made, deriving a face feature descriptor that can perform consistently under changing environment is still a difficult and challenging task. In this paper, we present the gradient local ternary pattern (G...

  17. Recognizing Action Units for Facial Expression Analysis.

    Science.gov (United States)

    Tian, Ying-Li; Kanade, Takeo; Cohn, Jeffrey F

    2001-02-01

    Most automatic expression analysis systems attempt to recognize a small set of prototypic expressions, such as happiness, anger, surprise, and fear. Such prototypic expressions, however, occur rather infrequently. Human emotions and intentions are more often communicated by changes in one or a few discrete facial features. In this paper, we develop an Automatic Face Analysis (AFA) system to analyze facial expressions based on both permanent facial features (brows, eyes, mouth) and transient facial features (deepening of facial furrows) in a nearly frontal-view face image sequence. The AFA system recognizes fine-grained changes in facial expression into action units (AUs) of the Facial Action Coding System (FACS), instead of a few prototypic expressions. Multistate face and facial component models are proposed for tracking and modeling the various facial features, including lips, eyes, brows, cheeks, and furrows. During tracking, detailed parametric descriptions of the facial features are extracted. With these parameters as the inputs, a group of action units (neutral expression, six upper face AUs and 10 lower face AUs) are recognized whether they occur alone or in combinations. The system has achieved average recognition rates of 96.4 percent (95.4 percent if neutral expressions are excluded) for upper face AUs and 96.7 percent (95.6 percent with neutral expressions excluded) for lower face AUs. The generalizability of the system has been tested by using independent image databases collected and FACS-coded for ground-truth by different research teams.

  18. Personality Trait and Facial Expression Filter-Based Brain-Computer Interface

    OpenAIRE

    Seongah Chin; Chung-Yeon Lee

    2013-01-01

    In this paper, we present technical approaches that bridge the gap in the research related to the use of brain‐computer interfaces for entertainment and facial expressions. Such facial expressions that reflect an individual’s personal traits can be used to better realize artificial facial expressions in a gaming environment based on a brain‐computer interface. First, an emotion extraction filter is introduced in order to classify emotions on the basis of the users’ brain signals in real time....

  19. Fingerprints, Iris and DNA Features based Multimodal Systems: A Review

    Directory of Open Access Journals (Sweden)

    Prakash Chandra Srivastava

    2013-01-01

    Full Text Available Biometric systems are alternatives to traditional identification systems. This paper provides an overview of single-feature and multiple-feature biometric systems, including the performance of physiological characteristics (such as fingerprint, hand geometry, head recognition, iris, retina, face recognition, DNA recognition, palm prints, heartbeat, finger veins, palates, etc.) and behavioral characteristics (such as body language, facial expression, signature verification, speech recognition, gait signature, etc.). The fingerprint, iris image, and DNA features based multimodal systems and their performances are analyzed in terms of security, reliability, accuracy, and long-term stability. The strengths and weaknesses of the various multiple-feature biometric approaches published so far are analyzed. Directions of future research work for robust personal identification are outlined.

  20. Hierarchical Spatio-Temporal Probabilistic Graphical Model with Multiple Feature Fusion for Binary Facial Attribute Classification in Real-World Face Videos.

    Science.gov (United States)

    Demirkus, Meltem; Precup, Doina; Clark, James J; Arbel, Tal

    2016-06-01

    Recent literature shows that facial attributes, i.e., contextual facial information, can be beneficial for improving the performance of real-world applications, such as face verification, face recognition, and image search. Examples of face attributes include gender, skin color, facial hair, etc. How to robustly obtain these facial attributes (traits) is still an open problem, especially in the presence of the challenges of real-world environments: non-uniform illumination conditions, arbitrary occlusions, motion blur and background clutter. What makes this problem even more difficult is the enormous variability presented by the same subject, due to arbitrary face scales, head poses, and facial expressions. In this paper, we focus on the problem of facial trait classification in real-world face videos. We have developed a fully automatic hierarchical and probabilistic framework that models the collective set of frame class distributions and feature spatial information over a video sequence. The experiments are conducted on a large real-world face video database that we have collected, labelled and made publicly available. The proposed method is flexible enough to be applied to any facial classification problem. Experiments on a large, real-world video database McGillFaces [1] of 18,000 video frames reveal that the proposed framework outperforms alternative approaches, by up to 16.96 and 10.13%, for the facial attributes of gender and facial hair, respectively. PMID:26415152

  2. Information based universal feature extraction

    Science.gov (United States)

    Amiri, Mohammad; Brause, Rüdiger

    2015-02-01

    In many real-world image-based pattern recognition tasks, the extraction and usage of task-relevant features is the most crucial part of the diagnosis. In the standard approach, features mostly remain task-specific, although humans who perform such tasks always use the same image features, trained in early childhood. It seems that universal feature sets exist, but they have not yet been systematically found. In our contribution, we tried to find those universal image feature sets that are valuable for most image-related tasks. In our approach, we trained a neural network on natural and non-natural images of objects and background, using a Shannon information-based algorithm and learning constraints. The goal was to extract those features that give the most valuable information for the classification of visual objects and hand-written digits. This gives a good starting point and a performance increase for all other image learning tasks, implementing a transfer learning approach. As a result, we found that we could indeed extract features which are valid in all three kinds of tasks.
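The Shannon information criterion above can be illustrated with a simple estimate of the mutual information between a (discretised) feature and class labels: an informative feature carries close to one bit about a binary label, while pure noise carries close to zero. The equal-width binning and the toy data are assumptions for this demo.

```python
import numpy as np

def mutual_information(feature, labels, n_bins=8):
    """I(feature; label) in bits, with the feature discretised into equal-width bins."""
    edges = np.linspace(feature.min(), feature.max(), n_bins + 1)[1:-1]
    f = np.digitize(feature, edges)               # bin index 0..n_bins-1 per sample
    joint = np.zeros((n_bins, labels.max() + 1))
    for fi, li in zip(f, labels):
        joint[fi, li] += 1.0
    p = joint / joint.sum()                       # joint distribution p(bin, label)
    pf = p.sum(1, keepdims=True)                  # marginal over feature bins
    pl = p.sum(0, keepdims=True)                  # marginal over labels
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (pf @ pl)[nz])).sum())

# a feature that separates the classes carries ~1 bit; pure noise carries ~0
rng = np.random.default_rng(0)
labels = np.repeat([0, 1], 100)
informative = np.concatenate([rng.normal(0, 1, 100), rng.normal(5, 1, 100)])
noise = rng.normal(0, 1, 200)
```

Ranking candidate features by such a score is one concrete way to prefer "most valuable information" features during training.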

  3. Synthesizing Performance-driven Facial Animation

    Institute of Scientific and Technical Information of China (English)

    LUO Chang-Wei; YU Jun; WANG Zeng-Fu

    2014-01-01

    In this paper, we present a system for real-time performance-driven facial animation. With the system, the user can control the facial expression of a digital character by acting out the desired facial action in front of an ordinary camera. First, we create a muscle-based 3D face model. The muscle actuation parameters are used to animate the face model. To increase the realism of the facial animation, the orbicularis oris in our face model is divided into an inner part and an outer part. We also establish the relationship between jaw rotation and facial surface deformation. Second, a real-time facial tracking method is employed to track the facial features of a performer in the video. Finally, the tracked facial feature points are used to estimate muscle actuation parameters to drive the face model. Experimental results show that our system runs in real time and outputs realistic facial animations. Compared with most existing performance-based facial animation systems, ours does not require facial markers, intrusive lighting, or special scanning equipment, thus it is inexpensive and easy to use.

  4. Appearance-based human gesture recognition using multimodal features for human computer interaction

    Science.gov (United States)

    Luo, Dan; Gao, Hua; Ekenel, Hazim Kemal; Ohya, Jun

    2011-03-01

    The use of gesture as a natural interface plays a crucial role in achieving intelligent Human Computer Interaction (HCI). Human gestures include different components of visual actions, such as motion of the hands, facial expression, and torso, to convey meaning. So far, most previous work in gesture recognition has focused on the manual component of gestures. In this paper, we present an appearance-based multimodal gesture recognition framework which combines different groups of features, such as facial expression features and hand motion features, extracted from image frames captured by a single web camera. We consider 12 classes of human gestures with facial expressions conveying neutral, negative and positive meanings, drawn from American Sign Language (ASL). We combine the features at two levels by employing two fusion strategies. At the feature level, an early feature combination is performed by concatenating and weighting different feature groups, and LDA is used to choose the most discriminative elements by projecting the features onto a discriminative expression space. The second strategy is applied at the decision level, where weighted decisions from single modalities are fused at a later stage. A condensation-based algorithm is adopted for classification. We collected a data set with three to seven recording sessions and conducted experiments with the combination techniques. Experimental results show that facial analysis improves hand gesture recognition, and that decision-level fusion performs better than feature-level fusion.
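The two fusion strategies described above can be sketched with a two-class Fisher discriminant standing in for the LDA step. The toy "facial" and "hand" features and the 0.6/0.4 decision weights are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def fisher_lda(X, y):
    """Two-class Fisher discriminant direction w = Sw^{-1}(m1 - m0)."""
    X0, X1 = X[y == 0], X[y == 1]
    Sw = np.cov(X0.T) + np.cov(X1.T) + 1e-6 * np.eye(X.shape[1])
    return np.linalg.solve(Sw, X1.mean(0) - X0.mean(0))

rng = np.random.default_rng(0)
y = np.repeat([0, 1], 100)
face_feats = np.vstack([rng.normal(0.0, 1, (100, 3)), rng.normal(1.5, 1, (100, 3))])
hand_feats = np.vstack([rng.normal(0.0, 1, (100, 2)), rng.normal(1.5, 1, (100, 2))])

# feature-level (early) fusion: concatenate modalities, then one discriminant
X = np.hstack([face_feats, hand_feats])
s = X @ fisher_lda(X, y)
early_acc = ((s > s.mean()).astype(int) == y).mean()

# decision-level (late) fusion: one score per modality, combined with weights
scores = []
for Xm in (face_feats, hand_feats):
    sm = Xm @ fisher_lda(Xm, y)
    scores.append(sm - sm.mean())
late_acc = (((0.6 * scores[0] + 0.4 * scores[1]) > 0).astype(int) == y).mean()
```

Early fusion lets the discriminant exploit cross-modality correlations; late fusion keeps the modalities independent and only merges their scores, which is more robust when one modality fails.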

  5. CBFS: high performance feature selection algorithm based on feature clearness.

    Directory of Open Access Journals (Sweden)

    Minseok Seo

    Full Text Available BACKGROUND: The goal of feature selection is to select useful features and simultaneously exclude garbage features from a given dataset for classification purposes. This is expected to reduce processing time and improve classification accuracy. METHODOLOGY: In this study, we devised a new feature selection algorithm (CBFS) based on the clearness of features. Feature clearness expresses the separability among classes along a feature. Highly clear features contribute towards obtaining high classification accuracy. CScore is a measure of the clearness of each feature, based on how tightly samples cluster around the class centroids along that feature. We also suggest combining CBFS with other algorithms to improve classification accuracy. CONCLUSIONS/SIGNIFICANCE: Our experiments confirm that CBFS outperforms state-of-the-art feature selection algorithms, including FeaLect. CBFS can be applied to microarray gene selection, text categorization, and image classification.
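As an illustration of scoring "feature clearness", here is a Fisher-style stand-in for CScore (our formula, not the published one): it rewards features whose samples cluster tightly around well-separated class centroids.

```python
import numpy as np

# Illustrative clearness score for a single feature: ratio of the spread
# of class centroids (between-class) to the average within-class spread.
# Higher = classes better separated along this feature.
def clearness(x, y):
    classes = np.unique(y)
    centroids = np.array([x[y == c].mean() for c in classes])
    within = np.mean([x[y == c].std() for c in classes]) + 1e-12
    between = centroids.std()
    return between / within

y = np.array([0] * 5 + [1] * 5)
clear_feat = np.array([0., .1, .2, .1, 0., 5., 5.1, 5.2, 5.1, 5.])  # separable
noisy_feat = np.array([0., 5., 1., 4., 2., 3., 0., 5., 1., 4.])     # mixed
```

Ranking all features by such a score and keeping the top-k is the selection step; the paper additionally combines CBFS with other selectors.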

  6. Granuloma faciale: a cutaneous lesion sharing features with IgG4-associated sclerosing diseases.

    Science.gov (United States)

    Cesinaro, Anna Maria; Lonardi, Silvia; Facchetti, Fabio

    2013-01-01

    The pathogenesis of granuloma faciale (GF), framed in the group of cutaneous vasculopathic dermatitis, is poorly understood. The present study investigated whether GF might be part of the spectrum of IgG4-related sclerosing diseases (IgG4-RD). Erythema elevatum diutinum (EED), believed to belong to the same group of disorders as GF, was also studied for comparison. Thirty-one biopsies of GF obtained from 25 patients (18 men, 7 women) and 5 cases of EED (4 women and 1 man) were analyzed morphologically and for the expression of IgG and IgG4 by immunohistochemistry. The distribution of Th1, T regulatory and Th2 T-cell subsets, respectively, identified by anti-T-bet, anti-FoxP3, and anti-GATA-3 antibodies, was also evaluated. The dermal inflammatory infiltrate in GF contained eosinophils and plasma cells in variable proportions. Obliterative venulitis was found in 16 cases, and storiform fibrosis, a typical feature of IgG4-RD, was observed in 8 cases and was prominent in 3 of them. On immunohistochemical analysis 7 of 31 biopsies (22.6%) from 6 GF patients fulfilled the criteria for IgG4-RD (IgG4/IgG ratio >40%, and absolute number of IgG4 per high-power field >50). Interestingly, the 6 patients were male, and 4 showed recurrent and/or multiple lesions. In an additional 5 cases, only the IgG4/IgG ratio was abnormal. None of the 5 EED cases fulfilled the criteria for IgG4-RD. The T-cell subsets in GF were quite variable in number, GATA-3 lymphocytes were generally more abundant, but no relationship with the number of IgG4 plasma cells was found. The study indicates that a significant number of GF cases are associated with an abnormal content of IgG4 plasma cells; this association was particularly obvious in male patients and in cases presenting with multiple or recurrent lesions. 
As morphologic changes typically found in IgG4-RD, such as obliterative vascular inflammation and storiform sclerosis, are found in GF, we suggest that GF might represent a localized form of IgG4-RD.

  7. Quantitative assessment of the facial features of a Mexican population dataset.

    Science.gov (United States)

    Farrera, Arodi; García-Velasco, Maria; Villanueva, Maria

    2016-05-01

    The present study describes the morphological variation of a large database of facial photographs. The database comprises frontal (386 females, 764 males) and lateral (312 females, 666 males) images of Mexican individuals aged 14-69 years that were obtained under controlled conditions. We used geometric morphometric methods and multivariate statistics to describe the phenotypic variation within the dataset as well as the variation across sex and age groups. In addition, we explored the correlation between facial traits in both views. We found a spectrum of variation that encompasses broad and narrow faces. In frontal view, narrow faces are associated with a longer nose, a thinner upper lip, a shorter lower face and a longer upper face than broader faces. In lateral view, antero-posteriorly shortened faces are associated with a longer profile and a shortened helix compared with longer faces. Sexual dimorphism is found in all age groups except for individuals above 39 years old in lateral view. Likewise, age-related changes are significant for both sexes, except for females above 29 years old in both views. Finally, we observed that the pattern of covariation between views differs between males and females, mainly in the thickness of the upper lip and the angles of the facial profile and the auricle. The results of this study could contribute to forensic practice as a complement to the construction of biological profiles, for example by improving facial reconstruction procedures. PMID:27017173
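The geometric morphometric methods mentioned above start from a Procrustes superimposition of landmark configurations, removing translation, scale and rotation before shape variation is analysed; a minimal similarity-Procrustes sketch (landmark set illustrative) is:

```python
import numpy as np

# Align landmark set Y (n x 2) onto X by removing translation, scale and
# rotation; returns the aligned copy of Y.
def procrustes_align(X, Y):
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)               # centre
    Xn, Yn = Xc / np.linalg.norm(Xc), Yc / np.linalg.norm(Yc)  # unit scale
    U, _, Vt = np.linalg.svd(Yn.T @ Xn)                 # orthogonal Procrustes
    return Yn @ (U @ Vt) + X.mean(0)                    # rotate, re-position

X = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])  # toy landmarks
th = np.pi / 6
R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
Y = 2.0 * X @ R.T + np.array([3.0, 4.0])   # rotated, scaled, translated copy
aligned = procrustes_align(X, Y)
```

After superimposition, the residual landmark coordinates are the "shape" variables fed to multivariate statistics.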

  8. Facial action detection using block-based pyramid appearance descriptors

    NARCIS (Netherlands)

    Jiang, Bihan; Valstar, Michel F.; Pantic, Maja

    2012-01-01

    Facial expression is one of the most important non-verbal behavioural cues in social signals. Constructing an effective face representation from images is an essential step for successful facial behaviour analysis. Most existing face descriptors operate on the same scale, and do not leverage coarse

  9. Cosmetics as a feature of the extended human phenotype: modulation of the perception of biologically important facial signals.

    Directory of Open Access Journals (Sweden)

    Nancy L Etcoff

    Full Text Available Research on the perception of faces has focused on the size, shape, and configuration of inherited features or the biological phenotype, and largely ignored the effects of adornment, or the extended phenotype. Research on the evolution of signaling has shown that animals frequently alter visual features, including color cues, to attract, intimidate or protect themselves from conspecifics. Humans engage in conscious manipulation of visual signals using cultural tools in real time rather than genetic changes over evolutionary time. Here, we investigate one tool, the use of color cosmetics. In two studies, we asked viewers to rate the same female faces with or without color cosmetics, and we varied the style of makeup from minimal (natural), to moderate (professional), to dramatic (glamorous). Each look provided increasing luminance contrast between the facial features and surrounding skin. Faces were shown for 250 ms or for unlimited inspection time, and subjects rated them for attractiveness, competence, likability and trustworthiness. At 250 ms, cosmetics had significant positive effects on all outcomes. Length of inspection time did not change the effect for competence or attractiveness. However, with longer inspection time, the effect of cosmetics on likability and trust varied by specific makeup looks, indicating that cosmetics could impact automatic and deliberative judgments differently. The results suggest that cosmetics can create supernormal facial stimuli, and that one way they may do so is by exaggerating cues to sexual dimorphism. Our results provide evidence that judgments of facial trustworthiness and attractiveness are at least partially separable, that beauty has a significant positive effect on judgment of competence, a universal dimension of social cognition, but has a more nuanced effect on the other universal dimension of social warmth, and that the extended phenotype significantly influences perception of biologically important signals at first glance.

  10. Cosmetics as a feature of the extended human phenotype: modulation of the perception of biologically important facial signals.

    Science.gov (United States)

    Etcoff, Nancy L; Stock, Shannon; Haley, Lauren E; Vickery, Sarah A; House, David M

    2011-01-01

    Research on the perception of faces has focused on the size, shape, and configuration of inherited features or the biological phenotype, and largely ignored the effects of adornment, or the extended phenotype. Research on the evolution of signaling has shown that animals frequently alter visual features, including color cues, to attract, intimidate or protect themselves from conspecifics. Humans engage in conscious manipulation of visual signals using cultural tools in real time rather than genetic changes over evolutionary time. Here, we investigate one tool, the use of color cosmetics. In two studies, we asked viewers to rate the same female faces with or without color cosmetics, and we varied the style of makeup from minimal (natural), to moderate (professional), to dramatic (glamorous). Each look provided increasing luminance contrast between the facial features and surrounding skin. Faces were shown for 250 ms or for unlimited inspection time, and subjects rated them for attractiveness, competence, likeability and trustworthiness. At 250 ms, cosmetics had significant positive effects on all outcomes. Length of inspection time did not change the effect for competence or attractiveness. However, with longer inspection time, the effect of cosmetics on likability and trust varied by specific makeup looks, indicating that cosmetics could impact automatic and deliberative judgments differently. The results suggest that cosmetics can create supernormal facial stimuli, and that one way they may do so is by exaggerating cues to sexual dimorphism. 
Our results provide evidence that judgments of facial trustworthiness and attractiveness are at least partially separable, that beauty has a significant positive effect on judgment of competence, a universal dimension of social cognition, but has a more nuanced effect on the other universal dimension of social warmth, and that the extended phenotype significantly influences perception of biologically important signals at first glance.

  12. On Facial Expression Recognition Based on SKLLE and SVM

    Institute of Scientific and Technical Information of China (English)

    晏勇

    2014-01-01

    To extract facial expression image features efficiently and to reduce the dimensionality of the feature vectors, this paper proposes a dimension reduction and classification method that combines supervised kernel locally linear embedding (SKLLE) and a support vector machine (SVM). The nonlinear manifold structure of the facial expression data and the label information are used to reduce dimensionality and to extract low-dimensional embedded features for facial expression recognition, and a support vector machine replaces the traditional K-nearest neighbor (KNN) classifier. Experiments on the JAFFE facial expression image database and the Cohn-Kanade AU-coded facial expression database show that the method reduces dimensionality effectively and achieves a high recognition rate, improving the performance of facial expression recognition.
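A common way to make an embedding such as LLE "supervised" is to inflate pairwise distances between samples of different classes before neighbors are chosen, so the neighborhoods become label-aware. The sketch below shows that step only; alpha and the data are illustrative, and the paper's kernelized variant is not reproduced.

```python
import numpy as np

# Supervised distance modification in the SLLE style: add a penalty
# proportional to the largest pairwise distance whenever two samples
# belong to different classes.
def supervised_distances(X, y, alpha=0.5):
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    diff = (y[:, None] != y[None, :]).astype(float)
    return D + alpha * D.max() * diff

X = np.array([[0., 0.], [0.1, 0.], [1., 1.], [1.1, 1.]])
y = np.array([0, 1, 0, 1])
Ds = supervised_distances(X, y)
```

Nearest neighbors computed on `Ds` instead of raw Euclidean distance then prefer same-class samples, which is what pulls classes apart in the low-dimensional embedding.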

  13. A genome-wide association scan in admixed Latin Americans identifies loci influencing facial and scalp hair features

    OpenAIRE

    Adhikari, Kaustubh; Fontanil, Tania; Cal, Santiago; Mendoza-Revilla, Javier; Fuentes-Guajardo, Macarena; Chacón-Duque, Juan-Camilo; Al-Saadi, Farah; Johansson, Jeanette A.; Quinto-Sanchez, Mirsha; Acuña-Alonzo, Victor; Jaramillo, Claudia; Arias, William; Barquera Lozano, Rodrigo; Macín Pérez, Gastón; Gómez-Valdés, Jorge

    2016-01-01

    We report a genome-wide association scan in over 6,000 Latin Americans for features of scalp hair (shape, colour, greying, balding) and facial hair (beard thickness, monobrow, eyebrow thickness). We found 18 signals of association reaching genome-wide significance (P values 5 × 10(-8) to 3 × 10(-119)), including 10 novel associations. These include novel loci for scalp hair shape and balding, and the first reported loci for hair greying, monobrow, eyebrow and beard thickness. A newly identifi...

  15. Pediatric facial nerve rehabilitation.

    Science.gov (United States)

    Banks, Caroline A; Hadlock, Tessa A

    2014-11-01

    Facial paralysis is a rare but severe condition in the pediatric population. Impaired facial movement has multiple causes and varied presentations; therefore, individualized treatment plans are essential for optimal results. Advances in facial reanimation over the past 4 decades have given rise to new treatments designed to restore balance and function in pediatric patients with facial paralysis. This article provides a comprehensive review of pediatric facial rehabilitation and describes a zone-based approach to the assessment and treatment of impaired facial movement.

  16. Local Illumination Normalization and Facial Feature Point Selection for Robust Face Recognition

    OpenAIRE

    Han, Song; Jinsong KIM; Cholhun KIM; Jo, Jongchol

    2013-01-01

    Face recognition systems must be robust to the variation of various factors such as facial expression, illumination, head pose and aging. Especially, the robustness against illumination variation is one of the most important problems to be solved for the practical use of face recognition systems. Gabor wavelet is widely used in face detection and recognition because it gives the possibility to simulate the function of human visual system. In this paper, we propose a method for extracting Gab...

  17. Robust Facial Expression Recognition via Compressive Sensing

    OpenAIRE

    Shiqing Zhang; Xiaoming Zhao; Bicheng Lei

    2012-01-01

    Recently, compressive sensing (CS) has attracted increasing attention in the areas of signal processing, computer vision and pattern recognition. In this paper, a new method based on CS theory is presented for robust facial expression recognition. The CS theory is used to construct a sparse representation classifier (SRC). The effectiveness and robustness of the SRC method are investigated on clean and occluded facial expression images. Three typical facial features, i.e., the raw pixels, ...
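A toy sparse-representation classifier in the spirit of SRC can be sketched as follows: the test sample is coded over a dictionary whose columns are training samples (a few ISTA iterations stand in for the paper's l1 solver), and it is assigned to the class with the smallest reconstruction residual. The data and the regularization weight are illustrative.

```python
import numpy as np

# A few iterations of ISTA for the l1-regularized least squares
# min_x 0.5*||Ax - b||^2 + lam*||x||_1 (a simple stand-in solver).
def ista(A, b, lam=0.01, steps=300):
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of grad
    for _ in range(steps):
        z = x - A.T @ (A @ x - b) / L           # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

def src_classify(A, labels, b):
    x = ista(A, b)
    best, best_res = None, np.inf
    for c in sorted(set(labels)):
        xc = np.where(np.array(labels) == c, x, 0.0)   # keep class-c coefficients
        res = np.linalg.norm(b - A @ xc)               # class reconstruction error
        if res < best_res:
            best, best_res = c, res
    return best

cols = np.array([[1.0, 0.0], [0.98, 0.2], [0.0, 1.0], [0.2, 0.98]])
A = np.array([c / np.linalg.norm(c) for c in cols]).T   # unit-norm dictionary
labels = [0, 0, 1, 1]
b = np.array([0.05, 0.999])
b = b / np.linalg.norm(b)                               # test sample near class 1
```

Occlusion robustness in the paper comes from the same residual rule: corrupted pixels inflate all class residuals but tend to preserve their ordering.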

  18. A review of facial feature localization approaches

    Institute of Scientific and Technical Information of China (English)

    韩玉峰; 施铜兴; 王小林

    2012-01-01

    Face recognition technology has greatly promoted the development of image processing, pattern recognition and computer vision. Facial feature localization is a key step in face recognition, and the accuracy of the localization directly determines the reliability of subsequent applications. This paper systematically reviews six categories of facial feature localization methods: those based on grey-level information, a priori knowledge, geometric shapes, statistical models, wavelets, and 3D information. It evaluates the above-mentioned methods and discusses future prospects.

  19. Multiple feature extraction using Gabor wavelet transformation, Fisherfaces and integrated SVM with application to facial expression recognition

    Institute of Scientific and Technical Information of China (English)

    黄永明; 章国宝; 董飞; 达飞鹏

    2011-01-01

    For static grey-level facial expression databases, this paper proposes an expression recognition algorithm based on multi-stage classification over multiple facial expression features. First, local Gabor wavelet transforms are applied at selected facial landmarks. To speed up feature extraction, an improved elastic graph matching algorithm is used to extract the effective face region; geometric features are extracted from this region, and statistical features are extracted with the Fisherfaces method. The geometric features, together with a first-level integrated SVM, perform the initial classification; the Fisherfaces features, together with a second-level integrated SVM, perform the final classification. Experiments on the JAFFE and Cohn-Kanade expression databases show that, compared with single features, the method achieves a higher expression recognition rate and stronger robustness.
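The local Gabor responses at facial landmarks mentioned above can be sketched with a single hand-built kernel; the parameters and the synthetic "image" below are illustrative, whereas the paper uses a bank of scales and orientations.

```python
import numpy as np

# One real Gabor kernel: a Gaussian envelope modulating a cosine carrier
# of wavelength lam along orientation theta.
def gabor_kernel(size=9, theta=0.0, lam=4.0, sigma=2.0):
    r = np.arange(size) - size // 2
    xx, yy = np.meshgrid(r, r)
    xr = xx * np.cos(theta) + yy * np.sin(theta)
    return np.exp(-(xr**2 + yy**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def landmark_response(img, x, y, k):
    """Magnitude of the filter response on the patch centred at (x, y)."""
    h = k.shape[0] // 2
    patch = img[y - h:y + h + 1, x - h:x + h + 1]
    return abs(float((patch * k).sum()))

# Vertical stripes of period 4 match a theta=0 kernel with lam=4.
img = np.tile(np.cos(2 * np.pi * np.arange(32) / 4.0), (32, 1))
k = gabor_kernel()
strong = landmark_response(img, 16, 16, k)          # matched texture
weak = landmark_response(np.zeros((32, 32)), 16, 16, k)  # flat patch
```

Stacking such responses over several `theta`/`lam` pairs at each landmark yields the feature vector fed to the first-level classifier.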

  20. Feature Extraction based Face Recognition, Gender and Age Classification

    Directory of Open Access Journals (Sweden)

    Venugopal K R

    2010-01-01

    Full Text Available Face recognition systems for personal identification normally attain good accuracy with large training sets. In this paper, we propose a Feature Extraction based Face Recognition, Gender and Age Classification (FEBFRGAC) algorithm that requires only small training sets and yields good results even with one image per person. The process involves three stages: pre-processing, feature extraction and classification. The geometric features of facial images, such as the eyes, nose and mouth, are located using the Canny edge operator, and face recognition is performed. Based on texture and shape information, gender and age classification is done using posteriori class probability and an artificial neural network, respectively. Face recognition accuracy is observed to be 100%, while gender and age classification accuracies are around 98% and 94%, respectively.

  1. Features Based Text Similarity Detection

    CERN Document Server

    Kent, Chow Kok

    2010-01-01

    As the Internet helps us cross cultural borders by providing access to diverse information, plagiarism issues are bound to arise, and plagiarism detection becomes more demanding in overcoming them. Different plagiarism detection tools have been developed based on various detection techniques. Nowadays, the fingerprint matching technique plays an important role in these detection tools. However, in handling large articles, fingerprint matching has weaknesses, especially in its space and time consumption. In this paper, we propose a new approach to detect plagiarism that integrates the fingerprint matching technique with four key features to assist in the detection process. The proposed features are able to select the main points or key sentences in the articles to be compared. The selected sentences then undergo the fingerprint matching process in order to detect the similarity between them. Hence, time and space usage for the comparison process is reduced.
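The fingerprint-matching step can be sketched with hashed overlapping k-grams and a Jaccard overlap; selecting key sentences first, as proposed above, would simply shrink the input to these functions. All parameters here are illustrative.

```python
import hashlib

# Fingerprint a text as the set of hashes of its overlapping k-grams
# (whitespace and case normalized away first).
def fingerprints(text, k=5):
    text = "".join(text.lower().split())
    grams = {text[i:i + k] for i in range(len(text) - k + 1)}
    return {hashlib.md5(g.encode()).hexdigest()[:8] for g in grams}

# Jaccard similarity between the two fingerprint sets.
def similarity(a, b, k=5):
    fa, fb = fingerprints(a, k), fingerprints(b, k)
    return len(fa & fb) / max(1, len(fa | fb))

s1 = "Plagiarism detection compares documents for overlapping content."
s2 = "Plagiarism detection compares documents for overlapping passages."
s3 = "Completely unrelated sentence about cooking pasta at home."
```

Production systems usually keep only a subsample of the hashes (e.g. winnowing) to cut the space cost further, which is exactly the overhead the paper targets.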

  2. Spontaneous Subtle Expression Detection and Recognition based on Facial Strain

    OpenAIRE

    Liong, Sze-Teng; See, John; Phan, Raphael Chung-Wei; Oh, Yee-Hui; Ngo, Anh Cat Le; Wong, KokSheik; Tan, Su-Wei

    2016-01-01

    Optical strain is an extension of optical flow that is capable of quantifying subtle changes on faces and representing the minute facial motion intensities at the pixel level. This is computationally essential for the relatively new field of spontaneous micro-expression, where subtle expressions can be technically challenging to pinpoint. In this paper, we present a novel method for detecting and recognizing micro-expressions by utilizing facial optical strain magnitudes to construct optical ...

  3. Suitable models for face geometry normalization in facial expression recognition

    Science.gov (United States)

    Sadeghi, Hamid; Raie, Abolghasem A.

    2015-01-01

    Recently, facial expression recognition has attracted much attention in machine vision research because of its various applications. Accordingly, many facial expression recognition systems have been proposed. However, the majority of existing systems suffer from a critical problem: geometric variability. It directly affects the performance of geometric feature-based facial expression recognition approaches, and it is a crucial challenge for appearance feature-based techniques. This variability appears in both neutral faces and facial expressions. Appropriate face geometry normalization can improve the accuracy of any facial expression recognition system. Therefore, this paper proposes different geometric models or shapes for normalization. Face geometry normalization removes the geometric variability of facial images; consequently, appearance feature extraction methods can be accurately utilized to represent facial images. Thus, some expression-based geometric models are proposed for facial image normalization. Next, local binary patterns and local phase quantization are used for appearance feature extraction. A combination of an effective geometric normalization with accurate appearance representations results in more than a 4% accuracy improvement over several state-of-the-art methods in facial expression recognition. Moreover, utilizing the model for facial expressions with larger mouth and eye region sizes gives higher accuracy, due to the importance of these regions in facial expression.
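The local-binary-pattern descriptor named above can be sketched as follows: each pixel is coded by thresholding its 8 neighbours against it, and the codes are pooled into a 256-bin histogram. This toy version applies no geometric normalization, which is exactly the gap the paper's models address.

```python
import numpy as np

# Dense 8-neighbour LBP over the image interior, pooled into a normalized
# 256-bin histogram.
def lbp_histogram(img):
    c = img[1:-1, 1:-1]                                  # interior pixels
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]          # 8 neighbours
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= ((nb >= c).astype(np.uint8) << bit)     # set bit if nb >= centre
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

img = (np.arange(36).reshape(6, 6) % 7).astype(float)    # toy "face" patch
h = lbp_histogram(img)
```

In practice the face is divided into a grid and one such histogram per cell is concatenated, so the descriptor keeps coarse spatial layout.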

  4. Facial Nerve Palsy: An Unusual Presenting Feature of Small Cell Lung Cancer

    Directory of Open Access Journals (Sweden)

    Ozcan Yildiz

    2011-01-01

    Full Text Available Lung cancer is the second most common type of cancer in the world and is the most common cause of cancer-related death in men and women; it is responsible for 1.3 million deaths annually worldwide. It can metastasize to any organ. The most common site of metastasis in the head and neck region is the brain; however, it can also metastasize to the oral cavity, gingiva, tongue, parotid gland and lymph nodes. This article reports a case of small cell lung cancer presenting with metastasis to the facial nerve.

  5. A genome-wide association scan in admixed Latin Americans identifies loci influencing facial and scalp hair features.

    Science.gov (United States)

    Adhikari, Kaustubh; Fontanil, Tania; Cal, Santiago; Mendoza-Revilla, Javier; Fuentes-Guajardo, Macarena; Chacón-Duque, Juan-Camilo; Al-Saadi, Farah; Johansson, Jeanette A; Quinto-Sanchez, Mirsha; Acuña-Alonzo, Victor; Jaramillo, Claudia; Arias, William; Barquera Lozano, Rodrigo; Macín Pérez, Gastón; Gómez-Valdés, Jorge; Villamil-Ramírez, Hugo; Hunemeier, Tábita; Ramallo, Virginia; Silva de Cerqueira, Caio C; Hurtado, Malena; Villegas, Valeria; Granja, Vanessa; Gallo, Carla; Poletti, Giovanni; Schuler-Faccini, Lavinia; Salzano, Francisco M; Bortolini, Maria-Cátira; Canizales-Quinteros, Samuel; Rothhammer, Francisco; Bedoya, Gabriel; Gonzalez-José, Rolando; Headon, Denis; López-Otín, Carlos; Tobin, Desmond J; Balding, David; Ruiz-Linares, Andrés

    2016-01-01

    We report a genome-wide association scan in over 6,000 Latin Americans for features of scalp hair (shape, colour, greying, balding) and facial hair (beard thickness, monobrow, eyebrow thickness). We found 18 signals of association reaching genome-wide significance (P values 5 × 10(-8) to 3 × 10(-119)), including 10 novel associations. These include novel loci for scalp hair shape and balding, and the first reported loci for hair greying, monobrow, eyebrow and beard thickness. A newly identified locus influencing hair shape includes a Q30R substitution in the Protease Serine S1 family member 53 (PRSS53). We demonstrate that this enzyme is highly expressed in the hair follicle, especially the inner root sheath, and that the Q30R substitution affects enzyme processing and secretion. The genome regions associated with hair features are enriched for signals of selection, consistent with proposals regarding the evolution of human hair. PMID:26926045

  7. The Effects of Transformed Gender Facial Features on Face Preference of College Students: Based on the Test of Computer Graphics and Eye Movement Tracks

    Institute of Scientific and Technical Information of China (English)

    温芳芳; 佐斌

    2012-01-01

    Perceived facial attractiveness can influence people's social interactions with one another, including mate selection, intimate relationships, hiring decisions, and voting behavior. People evaluate faces on multiple trait dimensions such as attractiveness and trustworthiness, both of which are affected by facial masculinity or femininity cues. However, studies manipulating the computer graphics of sexual dimorphism in facial attractiveness have yielded inconsistent results: some found that feminine facial features in male faces were more attractive than masculine ones, others that women prefer masculine male faces, and still others that women preferred femininity in male faces. The current study used computer graphics and an eye tracker to assess the effect of dimorphic cues on the perception of facial attractiveness among Chinese college students in two experiments. Experiment 1 assessed women's perceptions of the attractiveness and trustworthiness of men's faces under either the perceived masculinity vs. femininity manipulation or the sexual dimorphism manipulation. Results showed that, when non-face cues (e.g., hairstyle) were masked, women perceived femininity in men's faces as more attractive and trustworthy than masculinity; however, in the sexual dimorphism condition in which the non-face cues were not masked, women found masculinity in men's faces more attractive and trustworthy. Experiment 2 showed that participants' average pupil size and number of fixations were larger for male faces than for female faces, while their first fixation on male faces occurred sooner; both the time to first fixation and the first fixation duration were longer for masculinized than for feminized faces.

  8. Facial expression discrimination varies with presentation time but not with fixation on features: a backward masking study using eye-tracking.

    Science.gov (United States)

    Neath, Karly N; Itier, Roxane J

    2014-01-01

    The current study investigated the effects of presentation time and fixation to expression-specific diagnostic features on emotion discrimination performance, in a backward masking task. While no differences were found when stimuli were presented for 16.67 ms, differences between facial emotions emerged beyond the happy-superiority effect at presentation times as early as 50 ms. Happy expressions were best discriminated, followed by neutral and disgusted, then surprised, and finally fearful expressions presented for 50 and 100 ms. While performance was not improved by the use of expression-specific diagnostic facial features, performance increased with presentation time for all emotions. Results support the idea of an integration of facial features (holistic processing) varying as a function of emotion and presentation time. PMID:23879672

  9. Evolutionary Computational Method of Facial Expression Analysis for Content-based Video Retrieval using 2-Dimensional Cellular Automata

    CERN Document Server

    Geetha, P

    2010-01-01

    In this paper, Deterministic Cellular Automata (DCA) based video shot classification and retrieval is proposed. The deterministic 2D cellular automata model captures human facial expressions, both spontaneous and posed. The determinism stems from the fact that the facial muscle actions are standardized by the encodings of the Facial Action Coding System (FACS) and Action Units (AUs). Based on these encodings, we generate the set of evolutionary update rules of the DCA for each facial expression. We consider a Person-Independent Facial Expression Space (PIFES) to analyze the facial expressions based on partitioned 2D cellular automata, which capture the dynamics of facial expressions and classify the shots accordingly. A target video shot is retrieved by comparing the expression obtained for the query frame's face with the key-face expressions in the database video. When consecutive key-face expressions in the database are highly similar to the query frame's face, then the key faces are use...
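    As a generic illustration of a deterministic 2-D cellular automaton update (not the paper's FACS-derived rule set), the sketch below evolves a grid of activation states with a fixed neighbourhood rule; the rule and the seed pattern are invented for illustration.

```python
import numpy as np

def ca_step(grid, rule):
    """One deterministic update: each cell's next state is a function of its
    own state and the sum of its 4-neighbourhood (toroidal wrap)."""
    nb = (np.roll(grid, 1, 0) + np.roll(grid, -1, 0) +
          np.roll(grid, 1, 1) + np.roll(grid, -1, 1))
    return rule(grid, nb)

# Illustrative rule standing in for an AU-derived rule set: a cell activates
# when at least two neighbours are active, and stays active once set.
rule = lambda g, nb: ((nb >= 2) | (g == 1)).astype(np.uint8)

grid = np.zeros((5, 5), dtype=np.uint8)
grid[2, 1:4] = 1                # seed: a horizontal AU-like activation
grid[1, 2] = 1
evolved = ca_step(grid, rule)   # the activation spreads deterministically
```

    In the paper, one such rule set is generated per facial expression, and shots are classified by which automaton best reproduces the observed dynamics.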

  10. 3D facial expression recognition based on histograms of surface differential quantities

    KAUST Repository

    Li, Huibin

    2011-01-01

    3D face models accurately capture facial surfaces, making possible a precise description of facial activities. In this paper, we present a novel mesh-based method for 3D facial expression recognition using two local shape descriptors. To characterize the shape information of the local neighborhood of facial landmarks, we calculate weighted statistical distributions of surface differential quantities, including a histogram of mesh gradient (HoG) and a histogram of shape index (HoS). A normal-cycle-theory-based curvature estimation method is employed on the 3D face models, along with the common cubic-fitting curvature estimation method for the purpose of comparison. Based on the basic fact that different expressions involve different local shape deformations, the SVM classifier with both linear and RBF kernels achieves results that outperform the state of the art on the subset of the BU-3DFE database with the same experimental setting. © 2011 Springer-Verlag.
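    As a rough illustration of the HoS idea, the sketch below computes a shape-index histogram from per-vertex principal curvatures, assuming the curvatures have already been estimated from the mesh; the function names, the shape-index convention, and the 8-bin layout are illustrative, not the paper's exact configuration.

```python
import numpy as np

def shape_index(k1, k2):
    """Shape index in [-1, 1] from principal curvatures (one common convention).
    arctan2 handles umbilical points (k1 == k2) gracefully."""
    k1, k2 = np.maximum(k1, k2), np.minimum(k1, k2)   # enforce k1 >= k2
    return (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)

def histogram_of_shape_index(k1, k2, bins=8):
    """HoS-style descriptor: normalized histogram of shape-index values
    over a landmark's neighborhood of vertices."""
    si = shape_index(k1, k2)
    hist, _ = np.histogram(si, bins=bins, range=(-1.0, 1.0))
    return hist / max(hist.sum(), 1)
```

    With this convention a dome (k1 = k2 > 0) maps to +1 and a cup (k1 = k2 < 0) to -1, so the histogram summarizes the mix of local surface types around a landmark.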

  11. Robust Wavelet-Based Facial Image Watermarking Against Geometric Attacks Using Coordinate System Recovery

    Institute of Scientific and Technical Information of China (English)

    ZHAO Pei-dong; XIE Jian-ying

    2008-01-01

    A coordinate system of the original image is established using a facial feature point localization technique. After the original image transformed into a new image with the standard coordinate system, a redundant watermark is adaptively embedded in the discrete wavelet transform(DWT) domain based on the statistical characteristics of the wavelet coefficient block. The coordinate system of watermarked image is reestablished as a calibration system. Regardless of the host image rotated, scaled, or translated(RST), all the geometric attacks are eliminated while the watermarked image is transformed into the standard coordinate system. The proposed watermark detection is a blind detection. Experimental results demonstrate the proposed scheme is robust against common and geometric image processing attacks, particularly its robustness against joint geometric attacks.
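    A toy sketch of the DWT-domain embedding idea: a hand-rolled one-level Haar transform, an additive keyed pseudo-noise mark in the diagonal detail band, and a blind correlation detector. This is a generic spread-spectrum scheme, not the paper's method, which first normalizes the image into a standard coordinate system and adapts the embedding to the statistics of each wavelet coefficient block; all names and the `alpha` strength are illustrative.

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2-D Haar transform: returns (LL, LH, HL, HH)."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0   # row averages
    d = (x[0::2, :] - x[1::2, :]) / 2.0   # row differences
    return ((a[:, 0::2] + a[:, 1::2]) / 2.0, (a[:, 0::2] - a[:, 1::2]) / 2.0,
            (d[:, 0::2] + d[:, 1::2]) / 2.0, (d[:, 0::2] - d[:, 1::2]) / 2.0)

def haar_idwt2(ll, lh, hl, hh):
    """Exact inverse of haar_dwt2."""
    a = np.empty((ll.shape[0], 2 * ll.shape[1]))
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d = np.empty_like(a)
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.empty((2 * a.shape[0], a.shape[1]))
    x[0::2, :], x[1::2, :] = a + d, a - d
    return x

def embed(img, key, alpha=4.0):
    """Add a keyed +/-1 pseudo-noise pattern to the HH band."""
    ll, lh, hl, hh = haar_dwt2(img)
    pn = np.sign(np.random.default_rng(key).standard_normal(hh.shape))
    return haar_idwt2(ll, lh, hl, hh + alpha * pn)

def detect(img, key):
    """Blind detector: correlate the HH band with the keyed pattern."""
    hh = haar_dwt2(img)[3]
    pn = np.sign(np.random.default_rng(key).standard_normal(hh.shape))
    return float((hh * pn).mean())
```

    For a watermarked image the correlation comes out near `alpha`; for an unmarked image it stays near zero, so a simple threshold decides presence.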

  12. Facial Emotion Recognition Using Context Based Multimodal Approach

    Directory of Open Access Journals (Sweden)

    Priya Metri

    2011-12-01

    Full Text Available Emotions play a crucial role in person-to-person interaction. In recent years, there has been a growing interest in improving all aspects of interaction between humans and computers. The ability to understand human emotions is desirable for the computer in several applications, especially by observing facial expressions. This paper explores ways of human-computer interaction that enable the computer to be more aware of the user’s emotional expressions. We present an approach for emotion recognition from facial expression, hand, and body posture. Our model uses a multimodal emotion recognition system in which two different models, one for facial expression recognition and one for hand and body posture recognition, are combined by a third classifier that gives the resulting emotion. The multimodal system gives more accurate results than a unimodal or bimodal system.

  13. Face recognition by weighted fusion of facial features

    Institute of Scientific and Technical Information of China (English)

    孙劲光; 孟凡宇

    2015-01-01

    The accuracy of face recognition is low under unconstrained conditions. To solve this problem, we propose a new method, DLWF+, based on deep learning and the weighted fusion of facial features. First, we divide facial feature points into five regions (left eye, right eye, nose, mouth, and chin) using an active shape model and then sample different facial components corresponding to those facial feature points. A corresponding deep belief network (DBN) was then trained based on these regional samples to obtain optimal network parameters. The five regional sampling regions and the entire facial image were then input into a corresponding neural network to adjust the network weights and complete the construction of sub-networks. Finally, using softmax regression, we obtained six similarity vectors of different components. These six similarity vectors comprise a similarity matrix, which is then multiplied by the weight vector to derive the final recognition result. Recognition accuracy was 97% and 91.63% on the ORL and WFL face databases, respectively. Compared with traditional recognition algorithms such as SVM, DBN, PCA, and FIP+LDA, recognition rates for both databases were improved in both constrained and unconstrained conditions. On the basis of
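    The final fusion step described above can be sketched directly: the six per-component similarity vectors are stacked into a matrix and multiplied by the weight vector, and the argmax gives the recognized identity. The similarity values and weights below are invented for illustration.

```python
import numpy as np

def fuse(similarity_matrix, weights):
    """Weighted fusion: (components x classes) similarities -> class scores."""
    scores = weights @ similarity_matrix   # one score per enrolled identity
    return int(np.argmax(scores)), scores

# Six components (two eyes, nose, mouth, chin, whole face) vs. 3 identities.
S = np.array([
    [0.80, 0.10, 0.10],   # left eye
    [0.70, 0.20, 0.10],   # right eye
    [0.40, 0.50, 0.10],   # nose
    [0.60, 0.30, 0.10],   # mouth
    [0.50, 0.25, 0.25],   # chin
    [0.75, 0.15, 0.10],   # whole face
])
w = np.array([0.15, 0.15, 0.10, 0.15, 0.10, 0.35])  # illustrative weights
identity, scores = fuse(S, w)   # identity 0 wins here
```

    In the paper each row of `S` comes from softmax regression over a component sub-network; the weights control how much each facial region contributes to the final decision.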

  14. Multi-Layer Sparse Representation for Weighted LBP-Patches Based Facial Expression Recognition

    OpenAIRE

    Qi Jia; Xinkai Gao; He Guo; Zhongxuan Luo; Yi Wang

    2015-01-01

    In this paper, a novel facial expression recognition method based on sparse representation is proposed. Most contemporary facial expression recognition systems suffer from limited ability to handle image nuisances such as low resolution and noise. Especially for low intensity expression, most of the existing training methods have quite low recognition rates. Motivated by sparse representation, the problem can be solved by finding sparse coefficients of the test image by the whole training set...

  15. Facial Expression Recognition Based on Local Binary Patterns and Kernel Discriminant Isomap

    OpenAIRE

    Xiaoming Zhao; Shiqing Zhang

    2011-01-01

    Facial expression recognition is an interesting and challenging subject. Considering the nonlinear manifold structure of facial images, a new kernel-based manifold learning method, called kernel discriminant isometric mapping (KDIsomap), is proposed. KDIsomap aims to nonlinearly extract the discriminant information by maximizing the interclass scatter while minimizing the intraclass scatter in a reproducing kernel Hilbert space. KDIsomap is used to perform nonlinear dimensionality reduction o...

  16. Multi-Layer Sparse Representation for Weighted LBP-Patches Based Facial Expression Recognition

    Directory of Open Access Journals (Sweden)

    Qi Jia

    2015-03-01

    Full Text Available In this paper, a novel facial expression recognition method based on sparse representation is proposed. Most contemporary facial expression recognition systems suffer from limited ability to handle image nuisances such as low resolution and noise. Especially for low intensity expression, most of the existing training methods have quite low recognition rates. Motivated by sparse representation, the problem can be solved by finding sparse coefficients of the test image over the whole training set. Deriving an effective facial representation from original face images is a vital step for successful facial expression recognition. We evaluate facial representation based on weighted local binary patterns, and the Fisher separation criterion is used to calculate the weights of patches. A multi-layer sparse representation framework is proposed for multi-intensity facial expression recognition, especially for low-intensity expressions and noisy expressions in reality, which is a critical problem but seldom addressed in the existing works. To this end, several experiments based on low-resolution and multi-intensity expressions are carried out. Promising results on publicly available databases demonstrate the potential of the proposed approach.

  17. Multi-layer sparse representation for weighted LBP-patches based facial expression recognition.

    Science.gov (United States)

    Jia, Qi; Gao, Xinkai; Guo, He; Luo, Zhongxuan; Wang, Yi

    2015-01-01

    In this paper, a novel facial expression recognition method based on sparse representation is proposed. Most contemporary facial expression recognition systems suffer from limited ability to handle image nuisances such as low resolution and noise. Especially for low intensity expression, most of the existing training methods have quite low recognition rates. Motivated by sparse representation, the problem can be solved by finding sparse coefficients of the test image over the whole training set. Deriving an effective facial representation from original face images is a vital step for successful facial expression recognition. We evaluate facial representation based on weighted local binary patterns, and the Fisher separation criterion is used to calculate the weights of patches. A multi-layer sparse representation framework is proposed for multi-intensity facial expression recognition, especially for low-intensity expressions and noisy expressions in reality, which is a critical problem but seldom addressed in the existing works. To this end, several experiments based on low-resolution and multi-intensity expressions are carried out. Promising results on publicly available databases demonstrate the potential of the proposed approach. PMID:25808772
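    A minimal sketch of the weighted LBP-patch representation: compute basic 8-neighbour LBP codes, histogram them per patch, and scale each patch histogram by its weight. The Fisher-criterion weights are assumed to be supplied externally (computing them requires labelled training data); the grid size and bin count are illustrative.

```python
import numpy as np

def lbp_codes(img):
    """Basic 3x3 LBP: an 8-bit code per interior pixel."""
    c = img[1:-1, 1:-1]
    codes = np.zeros(c.shape, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= ((nb >= c).astype(np.uint8) << bit)   # set bit if neighbour >= centre
    return codes

def weighted_lbp_feature(img, grid=(2, 2), weights=None, bins=256):
    """Concatenate per-patch LBP histograms, scaled by per-patch weights."""
    codes = lbp_codes(img)
    ph, pw = codes.shape[0] // grid[0], codes.shape[1] // grid[1]
    if weights is None:
        weights = np.ones(grid)   # stand-in for Fisher-criterion weights
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            patch = codes[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw]
            h, _ = np.histogram(patch, bins=bins, range=(0, bins))
            feats.append(weights[i, j] * h / max(patch.size, 1))
    return np.concatenate(feats)
```

    The resulting vector is what the multi-layer sparse-representation stage decomposes over the training set.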

  18. Facial Expression Analysis

    NARCIS (Netherlands)

    Pantic, Maja; Li, S.; Jain, A.

    2009-01-01

    Facial expression recognition is a process performed by humans or computers, which consists of: 1. Locating faces in the scene (e.g., in an image; this step is also referred to as face detection), 2. Extracting facial features from the detected face region (e.g., detecting the shape of facial compon

  19. Limitations in 4-Year-Old Children's Sensitivity to the Spacing among Facial Features

    Science.gov (United States)

    Mondloch, Catherine J.; Thomson, Kendra

    2008-01-01

    Four-year-olds' sensitivity to differences among faces in the spacing of features was tested under 4 task conditions: judging distinctiveness when the external contour was visible and when it was occluded, simultaneous match-to-sample, and recognizing the face of a friend. In each task, the foil differed only in the spacing of features, and…

  20. Emotion Index of Cover Song Music Video Clips based on Facial Expression Recognition

    DEFF Research Database (Denmark)

    Vidakis, Nikolaos; Kavallakis, George; Triantafyllidis, Georgios

    2016-01-01

    This paper presents a scheme for creating an emotion index of cover-song music video clips by recognizing and classifying the facial expressions of the artist in the video. More specifically, it fuses effective and robust algorithms employed for expression recognition, along with the use of a neural network system using the features extracted by the SIFT algorithm. We also support the need for this fusion of different expression recognition algorithms because of the way that emotions are linked to facial expressions in music video clips.

  1. Facial expression recognition based on local SVM classifiers

    Institute of Scientific and Technical Information of China (English)

    孙正兴; 徐文晖

    2008-01-01

    This paper presents a novel technique developed for the identification of facial expressions in video sources. The method uses two steps: facial expression feature extraction and expression classification. First we used an active shape model (ASM) based facial point tracking system to extract the geometric features of facial expressions in videos. Then a new type of local support vector machine (LSVM) was created to classify the facial expressions. Four different classifiers, KNN, SVM, KNN-SVM, and LSVM, were compared. The results on the Cohn-Kanade database showed the effectiveness of our method.
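    The "local" classification idea can be sketched in plain Python: restrict attention to the test sample's k nearest training vectors of geometric features, then decide within that neighbourhood. Here a majority vote stands in for the per-neighbourhood SVM the paper actually trains, and all feature values are invented.

```python
import math
from collections import Counter

def nearest_neighbors(x, X, k):
    """Indices of the k training vectors closest to x (Euclidean distance)."""
    d = [(math.dist(x, xi), i) for i, xi in enumerate(X)]
    return [i for _, i in sorted(d)[:k]]

def local_classify(x, X, y, k=3):
    """Local classifier: restrict to x's neighbourhood, then vote.
    A local SVM would instead fit an SVM on this subset and classify x with it."""
    idx = nearest_neighbors(x, X, k)
    return Counter(y[i] for i in idx).most_common(1)[0][0]

# Toy geometric features: (mouth-corner distance, brow raise), illustrative only.
X = [(0.20, 0.10), (0.25, 0.15), (0.80, 0.90), (0.75, 0.85), (0.22, 0.12)]
y = ["neutral", "neutral", "surprise", "surprise", "neutral"]
label = local_classify((0.78, 0.88), X, y, k=3)   # -> "surprise"
```

    Training only on the neighbourhood lets the decision boundary adapt locally, which is the motivation behind the LSVM variant compared in the paper.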

  2. Feature-based Image Sequence Compression Coding

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A novel compression method for video teleconference applications is presented. Semantic coding based on human image features is realized, with human features adopted as parameters. Model-based coding and the concept of vector coding are combined with image feature extraction to obtain the result.

  3. A novel three-dimensional smile analysis based on dynamic evaluation of facial curve contour

    Science.gov (United States)

    Lin, Yi; Lin, Han; Lin, Qiuping; Zhang, Jinxin; Zhu, Ping; Lu, Yao; Zhao, Zhi; Lv, Jiahong; Lee, Min Kyeong; Xu, Yue

    2016-02-01

    The influence of three-dimensional facial contour and dynamic evaluation decoding on factors of smile esthetics is essential for facial beauty improvement. However, the kinematic features of the facial smile contour and the contributions from the soft tissue and underlying skeleton are uncharted. Here, the cheekbone-maxilla contour and nasolabial fold were combined into a “smile contour” delineating the overall facial topography that emerges prominently in smiling. We screened out the stable and unstable points on the smile contour using facial motion capture and curve fitting, before analyzing the correlation between the soft tissue coordinates and their hard tissue counterparts at the screened points. Our findings suggest that the mouth corner region was the most mobile area characterizing the smile expression, while the other areas remained relatively stable. Therefore, the perioral area should be evaluated dynamically, while the static assessment of the other parts of the smile contour contributes partially to their dynamic esthetics. Moreover, unlike the end piece, the morphologies of the zygomatic area and the superior part of the nasolabial crease were determined largely by the skeleton at rest, implying that the latter can be altered by orthopedic or orthodontic correction while the former is better improved by cosmetic procedures.
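    The screening step, curve fitting plus soft/hard-tissue correlation, can be sketched with NumPy on invented contour samples; the quadratic profiles, the displacement threshold, and the region boundaries below are all illustrative, not the study's data.

```python
import numpy as np

# Hypothetical samples along a "smile contour" (x: lateral position).
x = np.linspace(-1.0, 1.0, 21)
soft = 0.50 * x**2 - 0.10 * x + 0.30   # soft-tissue profile at rest (toy)
hard = 0.45 * x**2 - 0.08 * x + 0.25   # underlying skeletal profile (toy)

# Curve fitting: a quadratic fit recovers the generating coefficients.
coeffs = np.polyfit(x, soft, deg=2)

# Point-wise displacement between rest and smile marks unstable regions:
# here the mouth-corner ends of the contour move, the middle stays put.
smile = soft + np.where(np.abs(x) > 0.75, 0.2, 0.0)
motion = np.abs(smile - soft)
unstable = x[motion > 0.1]             # points needing dynamic evaluation

# Soft/hard correlation: a high r suggests skeletally determined morphology.
r = np.corrcoef(soft, hard)[0, 1]
```

    Points with large motion are the ones the study argues must be assessed dynamically; high soft/hard correlation at the stable points is what supports altering them via the skeleton.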

  4. Facial Emotion Recognition Using Context Based Multimodal Approach

    OpenAIRE

    Priya Metri; Jayshree Ghorpade; Ayesha Butalia

    2011-01-01

    Emotions play a crucial role in person-to-person interaction. In recent years, there has been a growing interest in improving all aspects of interaction between humans and computers. The ability to understand human emotions is desirable for the computer in several applications, especially by observing facial expressions. This paper explores ways of human-computer interaction that enable the computer to be more aware of the user’s emotional expressions. We present an approach for the emotion re...

  5. Pain assessment in severe demented elderly based on facial expression

    OpenAIRE

    Leysens, Greet; Noben, Annelies; De Maesschalck, Lieven

    2010-01-01

    Introduction: Pain is an important and underestimated aspect in elderly people with dementia, especially when their communication skills deteriorate. Moreover, the risk of undertreatment increases with the progression of dementia, despite the increasing pharmacological possibilities and interest in pain. Facial expression can be considered a reflection of the real, authentic pain experience. Elderly people with cognitive limitations are less socially inhibited from expressing pain nonverbally. Therefore ...

  6. Hepatitis Diagnosis Using Facial Color Image

    Science.gov (United States)

    Liu, Mingjia; Guo, Zhenhua

    Facial color diagnosis is an important diagnostic method in traditional Chinese medicine (TCM). However, due to its qualitative, subjective and experience-based nature, traditional facial color diagnosis has a very limited application in clinical medicine. To circumvent the subjective and qualitative problems of facial color diagnosis of Traditional Chinese Medicine, in this paper, we present a novel computer aided facial color diagnosis method (CAFCDM). The method has three parts: face Image Database, Image Preprocessing Module and Diagnosis Engine. Face Image Database is carried out on a group of 116 patients affected by 2 kinds of liver diseases and 29 healthy volunteers. The quantitative color feature is extracted from facial images by using popular digital image processing techniques. Then, KNN classifier is employed to model the relationship between the quantitative color feature and diseases. The results show that the method can properly identify three groups: healthy, severe hepatitis with jaundice and severe hepatitis without jaundice with accuracy higher than 73%.
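    A toy sketch of the quantitative-color pipeline: extract simple per-channel statistics from a facial patch and assign the nearest class centroid. The paper uses a KNN classifier over such features; the centroid values below are invented for illustration, not clinical data.

```python
import numpy as np

def color_features(patch):
    """Quantitative color feature: per-channel mean and std of an RGB patch."""
    p = patch.reshape(-1, 3).astype(float)
    return np.concatenate([p.mean(axis=0), p.std(axis=0)])

def nearest_centroid(feature, centroids):
    """Assign to the class whose mean feature vector is closest.
    The paper's diagnosis engine uses KNN over such features instead."""
    names = list(centroids)
    d = [np.linalg.norm(feature - centroids[n]) for n in names]
    return names[int(np.argmin(d))]

# Illustrative class centroids (mean R, G, B, then std R, G, B).
centroids = {
    "healthy":            np.array([180, 140, 120, 20, 18, 15], float),
    "hepatitis_jaundice": np.array([200, 180,  90, 25, 22, 20], float),
}
patch = np.tile(np.array([178, 142, 118], np.uint8), (8, 8, 1))  # a face patch
label = nearest_centroid(color_features(patch), centroids)       # -> "healthy"
```

    Real systems would first segment skin regions and normalize illumination before extracting such statistics.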

  7. Facial Psoriasis Log-based Area and Severity Index: A valid and reliable severity measurement method detecting improvement of facial psoriasis in clinical practice settings.

    Science.gov (United States)

    Kwon, Hyuck Hoon; Kim, Min-Woo; Park, Gyeong-Hun; Bae, You In; Kuk, Su Kyung; Suh, Dae Hun; Youn, Jai Il; Kwon, In Ho

    2016-08-01

    Facial psoriasis is often observed in moderate to severe degrees of psoriasis. While we previously demonstrated construct validity of the facial Psoriasis Log-based Area and Severity Index (fPLASI) system for the cross-sectional evaluation of facial psoriasis, its reliability and accuracy to detect clinical improvement has not been confirmed yet. The aim of this study is to analyze whether the fPLASI properly represents the range of improvement for facial psoriasis compared with the existing facial Psoriasis Area and Severity Index (fPASI) after receiving systemic treatments in clinical practice settings. The changing severity of facial psoriasis for 118 patients was calculated by the scales of fPASI and fPLASI between two time points after systemic treatments. Then, percentage changes (ΔfPASI and ΔfPLASI) were analyzed from the perspective of both the Physician's Global Assessment of effectiveness (PGA) and patients' Subjective Global Assessment (SGA). As a result, the distribution of the fPASI was more heavily clustered around the low score range compared with the fPLASI at both first and second visits. Linear regression analysis between ΔfPASI and ΔfPLASI shows that the correlation coefficient was 0.94, and ΔfPLASI represented greater percentage changes than ΔfPASI. Remarkably, degrees of clinical improvement measured by the PGA matched better with ΔfPLASI, while ΔfPASI underestimated clinical improvements compared with ΔfPLASI from treatment-responding groups by the PGA and SGA. In conclusion, the fPLASI represented clinical improvement of facial psoriasis with more sensitivity and reliability compared with the fPASI. Therefore, the PLASI system would be a viable severity measurement method for facial psoriasis in clinical practice.

  8. A small-world network model of facial emotion recognition.

    Science.gov (United States)

    Takehara, Takuma; Ochiai, Fumio; Suzuki, Naoto

    2016-01-01

    Various models have been proposed to increase understanding of the cognitive basis of facial emotions. Despite those efforts, interactions between facial emotions have received minimal attention. If collective behaviours relating to each facial emotion in the comprehensive cognitive system could be assumed, specific facial emotion relationship patterns might emerge. In this study, we demonstrate that the frameworks of complex networks can effectively capture those patterns. We generate 81 facial emotion images (6 prototypes and 75 morphs) and then ask participants to rate degrees of similarity in 3240 facial emotion pairs in a paired comparison task. A facial emotion network constructed on the basis of similarity clearly forms a small-world network, which features an extremely short average network distance and close connectivity. Further, even if two facial emotions have opposing valences, they are connected within only two steps. In addition, we show that intermediary morphs are crucial for maintaining full network integration, whereas prototypes are not at all important. These results suggest the existence of collective behaviours in the cognitive systems of facial emotions and also describe why people can efficiently recognize facial emotions in terms of information transmission and propagation. For comparison, we construct three simulated networks--one based on the categorical model, one based on the dimensional model, and one random network. The results reveal that small-world connectivity in facial emotion networks is apparently different from those networks, suggesting that a small-world network is the most suitable model for capturing the cognitive basis of facial emotions. PMID:26315136
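    The network construction can be sketched with the standard library: threshold a similarity matrix into an undirected graph and measure the average shortest-path distance by breadth-first search. The 5x5 similarity matrix below is invented, not the paper's 81-node data, and the threshold is arbitrary.

```python
from collections import deque

def graph_from_similarity(sim, threshold):
    """Undirected graph: connect pairs whose similarity reaches the threshold."""
    n = len(sim)
    return {i: [j for j in range(n) if j != i and sim[i][j] >= threshold]
            for i in range(n)}

def average_path_length(adj):
    """Mean BFS distance over all connected ordered pairs of nodes."""
    total = count = 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(d for node, d in dist.items() if node != s)
        count += len(dist) - 1
    return total / count if count else float("inf")

# Tiny illustrative similarity matrix (symmetric, 1.0 on the diagonal).
sim = [
    [1.0, 0.9, 0.8, 0.2, 0.1],
    [0.9, 1.0, 0.7, 0.6, 0.2],
    [0.8, 0.7, 1.0, 0.3, 0.7],
    [0.2, 0.6, 0.3, 1.0, 0.8],
    [0.1, 0.2, 0.7, 0.8, 1.0],
]
adj = graph_from_similarity(sim, threshold=0.6)
L = average_path_length(adj)   # a short mean distance is small-world-like
```

    A small-world diagnosis would additionally compare `L` and the clustering coefficient against an equivalent random graph, as the paper does with its simulated networks.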

  10. Automated Facial Action Coding System for dynamic analysis of facial expressions in neuropsychiatric disorders.

    Science.gov (United States)

    Hamm, Jihun; Kohler, Christian G; Gur, Ruben C; Verma, Ragini

    2011-09-15

    Facial expression is widely used to evaluate emotional impairment in neuropsychiatric disorders. Ekman and Friesen's Facial Action Coding System (FACS) encodes movements of individual facial muscles from distinct momentary changes in facial appearance. Unlike facial expression ratings based on categorization of expressions into prototypical emotions (happiness, sadness, anger, fear, disgust, etc.), FACS can encode ambiguous and subtle expressions, and therefore is potentially more suitable for analyzing the small differences in facial affect. However, FACS rating requires extensive training, and is time consuming and subjective thus prone to bias. To overcome these limitations, we developed an automated FACS based on advanced computer science technology. The system automatically tracks faces in a video, extracts geometric and texture features, and produces temporal profiles of each facial muscle movement. These profiles are quantified to compute frequencies of single and combined Action Units (AUs) in videos, and they can facilitate a statistical study of large populations in disorders known to impact facial expression. We derived quantitative measures of flat and inappropriate facial affect automatically from temporal AU profiles. Applicability of the automated FACS was illustrated in a pilot study, by applying it to data of videos from eight schizophrenia patients and controls. We created temporal AU profiles that provided rich information on the dynamics of facial muscle movements for each subject. The quantitative measures of flatness and inappropriateness showed clear differences between patients and the controls, highlighting their potential in automatic and objective quantification of symptom severity.
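    The quantification step, counting how often single and combined AUs are active across a video, can be sketched as follows, assuming thresholded temporal intensity profiles. Frame-level thresholding, the AU names, and the values are all illustrative simplifications of the paper's tracked profiles.

```python
from collections import Counter

def active_aus(profiles, threshold=0.5):
    """Per-frame sets of active Action Units from temporal intensity profiles."""
    n_frames = len(next(iter(profiles.values())))
    return [frozenset(au for au, p in profiles.items() if p[t] >= threshold)
            for t in range(n_frames)]

def au_frequencies(profiles, threshold=0.5):
    """Frequencies of single AUs and of full AU combinations across frames."""
    frames = active_aus(profiles, threshold)
    singles, combos = Counter(), Counter()
    for f in frames:
        singles.update(f)
        if f:
            combos[f] += 1
    n = len(frames)
    return ({au: c / n for au, c in singles.items()},
            {combo: c / n for combo, c in combos.items()})

# Illustrative profiles: AU6 (cheek raiser) + AU12 (lip-corner puller) = smile.
profiles = {
    "AU6":  [0.1, 0.7, 0.9, 0.8, 0.2],
    "AU12": [0.2, 0.8, 0.9, 0.6, 0.1],
    "AU4":  [0.0, 0.0, 0.1, 0.0, 0.9],
}
singles, combos = au_frequencies(profiles)
```

    Summary statistics over such frequency tables are the kind of quantity from which flatness and inappropriateness measures can then be derived per subject.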

  11. An optimized ERP brain-computer interface based on facial expression changes

    Science.gov (United States)

    Jin, Jing; Daly, Ian; Zhang, Yu; Wang, Xingyu; Cichocki, Andrzej

    2014-06-01

    Objective. Interferences from spatially adjacent non-target stimuli are known to evoke event-related potentials (ERPs) during non-target flashes and, therefore, lead to false positives. This phenomenon was commonly seen in visual attention-based brain-computer interfaces (BCIs) using conspicuous stimuli and is known to adversely affect the performance of BCI systems. Although users try to focus on the target stimulus, they cannot help but be affected by conspicuous changes of the stimuli (such as flashes or presenting images) which were adjacent to the target stimulus. Furthermore, subjects have reported that conspicuous stimuli made them tired and annoyed. In view of this, the aim of this study was to reduce adjacent interference, annoyance and fatigue using a new stimulus presentation pattern based upon facial expression changes. Our goal was not to design a new pattern which could evoke larger ERPs than the face pattern, but to design a new pattern which could reduce adjacent interference, annoyance and fatigue, and evoke ERPs as good as those observed during the face pattern. Approach. Positive facial expressions could be changed to negative facial expressions by minor changes to the original facial image. Although the changes are minor, the contrast is big enough to evoke strong ERPs. In this paper, a facial expression change pattern between positive and negative facial expressions was used to attempt to minimize interference effects. This was compared against two different conditions, a shuffled pattern containing the same shapes and colours as the facial expression change pattern, but without the semantic content associated with a change in expression, and a face versus no face pattern. Comparisons were made in terms of classification accuracy and information transfer rate as well as user supplied subjective measures. Main results. The results showed that interferences from adjacent stimuli, annoyance and the fatigue experienced by the subjects could be

  12. Face Recognition from Still Images to Video Sequences: A Local-Feature-Based Framework

    OpenAIRE

    Shaokang Chen; Sandra Mau; Harandi, Mehrtash T.; Conrad Sanderson; Abbas Bigdeli; Lovell, Brian C.

    2011-01-01

    Although automatic face recognition has shown success for high-quality images under controlled conditions, for video-based recognition it is hard to attain similar levels of performance. We describe in this paper recent advances in a project being undertaken to trial and develop advanced surveillance systems for public safety. In this paper, we propose a local facial feature based framework for both still-image and video-based face recognition. The evaluation is performed on a still image d...

  13. Feature selection of facial displays for detection of non verbal communication in natural conversation

    OpenAIRE

    Sheerman-Chase T.; Ong E.-J.; Bowden R.

    2009-01-01

    Recognition of human communication has previously focused on deliberately acted emotions or in structured or artificial social contexts. This makes the result hard to apply to realistic social situations. This paper describes the recording of spontaneous human communication in a specific and common social situation: conversation between two people. The clips are then annotated by multiple observers to reduce individual variations in interpretation of social signals. Temporal and static featur...

  14. FACIAL LANDMARKING LOCALIZATION FOR EMOTION RECOGNITION USING BAYESIAN SHAPE MODELS

    Directory of Open Access Journals (Sweden)

    Hernan F. Garcia

    2013-02-01

    Full Text Available This work presents a framework for emotion recognition based on facial expression analysis using Bayesian Shape Models (BSM) for facial landmark localization. The facial feature tracking is Facial Action Coding System (FACS) compliant and based on the Bayesian Shape Model, which estimates the parameters of the model with an implementation of the EM algorithm. We describe the characterization methodology from the parametric model and evaluate the accuracy of feature detection and of the estimation of the parameters associated with facial expressions, analyzing its robustness to pose and local variations. Then, a methodology for emotion characterization is introduced to perform the recognition. The experimental results show that the proposed model can effectively detect the different facial expressions, outperforming conventional approaches for emotion recognition and obtaining high performance in estimating the emotion present in a given subject. The model and characterization methodology detected the emotion type correctly in 95.6% of the cases.

  15. Invariant facial feature extraction method with biologically-like mechanism

    Institute of Scientific and Technical Information of China (English)

    杜兴; 龚卫国; 张睿

    2011-01-01

    A biologically-like invariant facial feature extraction method is proposed to improve the face recognition rate obtained using methods based on subspace algorithms. A hierarchical network with two layers, constructed according to the information processing procedure in the primary visual cortex (V1), is put forward to extract invariant features from the face image. The first layer of the network, which simulates the function of V1 simple cells, learns a group of V1-simple-cell-like filters using a sparse coding method and employs these filters to extract a set of illumination-insensitive features from the face image. The second layer, which simulates the function of V1 complex cells, merges the output of the first layer over neighborhoods of positions and scales using a local maximum operation, so as to obtain facial features robust to illumination, expression, slight pose change, and local facial detail variations. The obtained invariant features are used to replace the original face image as the input of a subspace algorithm, and the performance of face recognition is improved. Experiments on the FERET and ORL face databases show that, compared with directly applying subspace algorithms to the image, the proposed method can increase the recognition rate by 4.95% to 20.35%.
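    A toy version of the two-layer network: hand-coded gradient filters stand in for the learned sparse-coding filter bank (the simple-cell layer), followed by non-overlapping local max pooling (the complex-cell layer). Filter shapes, pooling size, and the input are all illustrative.

```python
import numpy as np

def conv2d_valid(img, kern):
    """Plain 'valid' 2-D correlation."""
    kh, kw = kern.shape
    out = np.empty((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kern)
    return out

def max_pool(x, size=2):
    """Local maximum over non-overlapping size x size windows
    (the V1-complex-cell-like merging step)."""
    H, W = (x.shape[0] // size) * size, (x.shape[1] // size) * size
    x = x[:H, :W]
    return x.reshape(H // size, size, W // size, size).max(axis=(1, 3))

# Hand-coded oriented filters stand in for the learned sparse-coding bank.
filters = [
    np.array([[1.0, -1.0]]),      # horizontal-gradient filter
    np.array([[1.0], [-1.0]]),    # vertical-gradient filter
]

img = np.zeros((8, 8))
img[:, 4:] = 1.0                                        # a vertical edge
s1 = [np.abs(conv2d_valid(img, f)) for f in filters]    # simple-cell-like maps
c1 = [max_pool(r) for r in s1]                          # complex-cell-like maps
```

    Only the horizontal-gradient map responds to the vertical edge, and the pooled response tolerates small shifts of the edge position, which is the invariance the method exploits before the subspace stage.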

  16. Brief communication: MaqFACS: A muscle-based facial movement coding system for the rhesus macaque.

    Science.gov (United States)

    Parr, L A; Waller, B M; Burrows, A M; Gothard, K M; Vick, S J

    2010-12-01

    Over 125 years ago, Charles Darwin (1872) suggested that the only way to fully understand the form and function of human facial expression was to make comparisons with other species. Nevertheless, it has been only recently that facial expressions in humans and related primate species have been compared using systematic, anatomically based techniques. Through this approach, large-scale evolutionary and phylogenetic analyses of facial expressions, including their homology, can now be addressed. Here, the development of a muscular-based system for measuring facial movement in rhesus macaques (Macaca mulatta) is described based on the well-known FACS (Facial Action Coding System) and ChimpFACS. These systems describe facial movement according to the action of the underlying facial musculature, which is highly conserved across primates. The coding systems are standardized; thus, their use is comparable across laboratories and study populations. In the development of MaqFACS, several species differences in the facial movement repertoire of rhesus macaques were observed in comparison with chimpanzees and humans, particularly with regard to brow movements, puckering of the lips, and ear movements. These differences do not seem to be the result of constraints imposed by morphological differences in the facial structure of these three species. It is more likely that they reflect unique specializations in the communicative repertoire of each species.

  17. Survey on Sparse Coded Features for Content Based Face Image Retrieval

    OpenAIRE

    Johnvictor, D.; Selvavinayagam, G.

    2014-01-01

    Content-based image retrieval is a technique which uses the visual content of images to search images from large-scale image databases according to users' interests. This paper provides a comprehensive survey of recent technology used in the area of content-based face image retrieval. Nowadays, digital devices and photo sharing sites are gaining popularity, and large numbers of human face photos are available in databases. Multiple types of facial features are used to represent discriminality on large scale hu...

  18. Facial Expression Recognition Based on LBP and SVM Decision Tree

    Institute of Scientific and Technical Information of China (English)

    李扬; 郭海礁

    2014-01-01

    In order to improve the recognition rate of facial expression recognition, a facial expression recognition algorithm combining LBP and an SVM decision tree is proposed. First, the facial expression image is converted to an LBP characteristic spectrum using the LBP algorithm; the LBP characteristic spectrum is then converted into an LBP histogram feature sequence; finally, classification and recognition of facial expressions are completed by the SVM decision tree algorithm. Experiments on the JAFFE facial expression database demonstrate the effectiveness of the proposed method.
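The LBP-to-histogram feature stage described in this abstract can be sketched in NumPy as follows (a minimal version with an assumed 4x4 block grid; the SVM decision tree stage is omitted):

```python
import numpy as np

def lbp_image(img):
    # Basic 8-neighbour LBP code for each interior pixel.
    c = img[1:-1, 1:-1]
    codes = np.zeros_like(c, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= ((neigh >= c).astype(np.uint8) << bit)
    return codes

def lbp_histogram(img, grid=4):
    # Concatenate per-block normalized 256-bin histograms into the
    # "LBP histogram feature sequence" the abstract mentions.
    codes = lbp_image(img)
    H, W = codes.shape
    hists = []
    for i in range(grid):
        for j in range(grid):
            block = codes[i * H // grid:(i + 1) * H // grid,
                          j * W // grid:(j + 1) * W // grid]
            h, _ = np.histogram(block, bins=256, range=(0, 256))
            hists.append(h / max(block.size, 1))
    return np.concatenate(hists)
```

The concatenated histogram vector is what would be handed to the SVM decision tree for classification.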

  19. Personality Trait and Facial Expression Filter-Based Brain-Computer Interface

    Directory of Open Access Journals (Sweden)

    Seongah Chin

    2013-02-01

    Full Text Available In this paper, we present technical approaches that bridge the gap in the research related to the use of brain‐computer interfaces for entertainment and facial expressions. Such facial expressions that reflect an individual’s personal traits can be used to better realize artificial facial expressions in a gaming environment based on a brain‐computer interface. First, an emotion extraction filter is introduced in order to classify emotions on the basis of the users’ brain signals in real time. Next, a personality trait filter is defined to classify extrovert and introvert types, which manifest as five traits: very extrovert, extrovert, medium, introvert and very introvert. In addition, facial expressions derived from expression rates are obtained by an extrovert‐introvert fuzzy model through its defuzzification process. Finally, we confirm this validation via an analysis of the variance of the personality trait filter, a k‐fold cross validation of the emotion extraction filter, an accuracy analysis, a user study of facial synthesis and a test case game.

  20. Rough set-based feature selection method

    Institute of Scientific and Technical Information of China (English)

    ZHAN Yanmei; ZENG Xiangyang; SUN Jincai

    2005-01-01

    A new feature selection method based on the discernibility matrix in rough set theory is proposed in this paper. The main idea of this method is that the most effective feature, if used for classification, can distinguish the largest number of samples belonging to different classes. Experiments are performed using this method to select relevant features for artificial datasets and real-world datasets. Results show that the proposed selection method can correctly select all the relevant features of artificial datasets and drastically reduce the number of features at the same time. In addition, when this method is used for the selection of classification features of real-world underwater targets, the number of classification features after selection drops to 20% of the original feature set, and the classification accuracy increases by about 6% using the dataset after feature selection.
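A minimal sketch of the discernibility idea, assuming a greedy reduct construction (the abstract does not spell out the exact search strategy): repeatedly add the feature that distinguishes the most cross-class sample pairs not yet discerned by the features already selected.

```python
import numpy as np

def discern_count(X, y, feature, selected):
    # Count cross-class sample pairs not yet distinguished by `selected`
    # that the candidate feature would distinguish.
    n = len(y)
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            if y[i] == y[j]:
                continue
            if any(X[i, f] != X[j, f] for f in selected):
                continue  # pair already discerned
            if X[i, feature] != X[j, feature]:
                count += 1
    return count

def rough_set_select(X, y):
    # Greedy selection driven by the discernibility count.
    selected = []
    remaining = list(range(X.shape[1]))
    while remaining:
        gains = [discern_count(X, y, f, selected) for f in remaining]
        best = int(np.argmax(gains))
        if gains[best] == 0:
            break  # all cross-class pairs are already discerned
        selected.append(remaining.pop(best))
    return selected
```

On discrete-valued data this terminates once every cross-class pair is separated, which mirrors how a reduct discards redundant features.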

  1. A voxel-based lesion study on facial emotion recognition after penetrating brain injury

    OpenAIRE

    Dal Monte, Olga; Krueger, Frank; Solomon, Jeffrey M.; Schintu, Selene; Knutson, Kristine M.; Strenziok, Maren; Pardini, Matteo; Leopold, Anne; Raymont, Vanessa; Grafman, Jordan

    2012-01-01

    The ability to read emotions in the face of another person is an important social skill that can be impaired in subjects with traumatic brain injury (TBI). To determine the brain regions that modulate facial emotion recognition, we conducted a whole-brain analysis using a well-validated facial emotion recognition task and voxel-based lesion symptom mapping (VLSM) in a large sample of patients with focal penetrating TBIs (pTBIs). Our results revealed that individuals with pTBI performed signif...

  2. Avoiding occlusal derangement in facial fractures: An evidence based approach

    Directory of Open Access Journals (Sweden)

    Derick Mendonca

    2013-01-01

    Full Text Available Facial fractures with occlusal derangement describe any fracture which directly or indirectly affects the occlusal relationship. Such fractures include dento-alveolar fractures in the maxilla and mandible, midface fractures (Le Fort I, II, III) and mandible fractures of the symphysis, parasymphysis, body, angle, and condyle. In some of these fractures the fracture line runs through the dento-alveolar component, whereas in others the fracture line is remote from the occlusal plane but nevertheless alters the occlusion. The complications that could ensue from the management of maxillofacial fractures are predominantly iatrogenic, and therefore can be avoided if adequate care is exercised by the operating surgeon. This paper does not focus on complications arising from any particular technique in the management of maxillofacial fractures but rather discusses complications in general, irrespective of the technique used.

  3. Robust Facial Expression Recognition via Compressive Sensing

    Directory of Open Access Journals (Sweden)

    Shiqing Zhang

    2012-03-01

    Full Text Available Recently, compressive sensing (CS) has attracted increasing attention in the areas of signal processing, computer vision and pattern recognition. In this paper, a new method based on the CS theory is presented for robust facial expression recognition. The CS theory is used to construct a sparse representation classifier (SRC). The effectiveness and robustness of the SRC method are investigated on clean and occluded facial expression images. Three typical facial features, i.e., raw pixels, the Gabor wavelets representation and local binary patterns (LBP), are extracted to evaluate the performance of the SRC method. Compared with the nearest neighbor (NN), linear support vector machines (SVM) and the nearest subspace (NS) classifiers, experimental results on the popular Cohn-Kanade facial expression database demonstrate that the SRC method obtains better performance and stronger robustness to corruption and occlusion in robust facial expression recognition tasks.
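A sparse representation classifier of the kind described can be sketched with ISTA as the l1 solver (an assumption; the abstract does not name a solver): code the test sample over a dictionary whose columns are training samples, then classify by minimal per-class reconstruction residual.

```python
import numpy as np

def ista(D, x, lam=0.01, iters=500):
    # Iterative soft-thresholding for min 0.5*||D a - x||^2 + lam*||a||_1.
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the smooth part
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        g = a - (D.T @ (D @ a - x)) / L
        a = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
    return a

def src_classify(D, labels, x):
    # Sparse representation classifier: sparse-code x over the training
    # dictionary D, then pick the class whose atoms best reconstruct x.
    a = ista(D, x)
    best, best_res = None, np.inf
    for c in set(labels):
        mask = np.array([l == c for l in labels])
        ac = np.where(mask, a, 0.0)  # keep only this class's coefficients
        res = np.linalg.norm(D @ ac - x)
        if res < best_res:
            best, best_res = c, res
    return best
```

In the paper's setting the dictionary columns would be the extracted facial features (raw pixels, Gabor, or LBP), normalized to unit length.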

  4. Facial Expression Biometrics Using Statistical Shape Models

    Science.gov (United States)

    Quan, Wei; Matuszewski, Bogdan J.; Shark, Lik-Kwan; Ait-Boudaoud, Djamel

    2009-12-01

    This paper describes a novel method for representing different facial expressions based on the shape space vector (SSV) of the statistical shape model (SSM) built from 3D facial data. The method relies only on the 3D shape, with texture information not being used in any part of the algorithm, which makes it inherently invariant to changes in background and illumination, and to some extent to viewing angle variations. To evaluate the proposed method, two comprehensive 3D facial data sets have been used for testing. The experimental results show that the SSV not only controls the shape variations but also captures the expressive characteristics of the faces, and can be used as a significant feature for facial expression recognition. Finally, the paper suggests improving the discriminatory characteristics of the SSV by using 3D facial sequences rather than 3D stills.
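The SSV idea, PCA over vectorized landmark sets and projection of a new shape onto the leading modes, can be sketched as follows (a small 2D toy stand-in for the paper's 3D facial data):

```python
import numpy as np

def build_ssm(shapes):
    # Statistical shape model: PCA over vectorized landmark configurations.
    X = shapes.reshape(len(shapes), -1)
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt, s  # Vt rows are the shape modes

def shape_space_vector(shape, mean, Vt, k=2):
    # Project a shape onto the first k modes; this projection is the SSV
    # used as the expression feature in the abstract.
    return Vt[:k] @ (shape.ravel() - mean)
```

When the training variation lies within the first k modes, `mean + Vt[:k].T @ ssv` reconstructs the shape exactly, so the SSV compactly encodes the expression-driven deformation.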

  5. Facial Expression Biometrics Using Statistical Shape Models

    Directory of Open Access Journals (Sweden)

    Djamel Ait-Boudaoud

    2009-01-01

    Full Text Available This paper describes a novel method for representing different facial expressions based on the shape space vector (SSV) of the statistical shape model (SSM) built from 3D facial data. The method relies only on the 3D shape, with texture information not being used in any part of the algorithm, which makes it inherently invariant to changes in background and illumination, and to some extent to viewing angle variations. To evaluate the proposed method, two comprehensive 3D facial data sets have been used for testing. The experimental results show that the SSV not only controls the shape variations but also captures the expressive characteristics of the faces, and can be used as a significant feature for facial expression recognition. Finally, the paper suggests improving the discriminatory characteristics of the SSV by using 3D facial sequences rather than 3D stills.

  6. Feature-based sentiment analysis with ontologies

    OpenAIRE

    Taner, Berk

    2011-01-01

    Sentiment analysis is a topic that many researchers work on. In recent years, new research directions under sentiment analysis have appeared. Feature-based sentiment analysis is one such topic that deals not only with finding sentiment in a sentence but with providing a more detailed analysis on a given domain. In the beginning, researchers focused on commercial products and manually generated lists of features for a product. Then they tried to generate a feature-based approach to attach sentiments to th...

  7. Facial Video based Detection of Physical Fatigue for Maximal Muscle Activity

    DEFF Research Database (Denmark)

    Haque, Mohammad Ahsanul; Irani, Ramin; Nasrollahi, Kamal;

    2016-01-01

    Physical fatigue reveals the health condition of a person during, for example, a health checkup, fitness assessment or rehabilitation training. This paper presents an efficient noncontact system for detecting non-localized physical fatigue from maximal muscle activity using facial videos acquired...... results show that the proposed system outperforms an existing video-based system for physical fatigue detection.

  8. Generation of facial expressions from emotion using a fuzzy rule based system

    NARCIS (Netherlands)

    Bui, The Duy; Heylen, Dirk; Poel, Mannes; Nijholt, Anton; Stumptner, Markus; Corbett, Dan; Brooks, Mike

    2001-01-01

    We propose a fuzzy rule-based system to map representations of the emotional state of an animated agent onto muscle contraction values for the appropriate facial expressions. Our implementation pays special attention to the way in which continuous changes in the intensity of emotions can be displaye

  9. A Genetic Algorithm-Based Feature Selection

    Directory of Open Access Journals (Sweden)

    Babatunde Oluleye

    2014-07-01

    Full Text Available This article details the exploration and application of Genetic Algorithms (GA) for feature selection. In particular, a binary GA was used for dimensionality reduction to enhance the performance of the concerned classifiers. In this work, one hundred (100) features were extracted from the set of images found in the Flavia dataset (a publicly available dataset). The extracted features are Zernike Moments (ZM), Fourier Descriptors (FD), Legendre Moments (LM), Hu 7 Moments (Hu7M), Texture Properties (TP) and Geometrical Properties (GP). The main contributions of this article are (1) detailed documentation of the GA Toolbox in MATLAB and (2) the development of a GA-based feature selector using a novel fitness function (kNN-based classification error) which enabled the GA to obtain a combinatorial set of features giving rise to optimal accuracy. The results obtained were compared with various feature selectors from the WEKA software and outperformed the WEKA feature selectors in terms of classification accuracy.
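A compact sketch of a binary-GA feature selector with leave-one-out 1-NN error as the fitness, mirroring the kNN-based fitness named above (population size, rates and operators here are illustrative assumptions, not the article's MATLAB settings):

```python
import numpy as np

rng = np.random.default_rng(0)

def knn_error(X, y, mask):
    # Leave-one-out 1-NN error on the selected feature subset.
    if not mask.any():
        return 1.0  # empty subset: worst possible fitness
    Xs = X[:, mask]
    errs = 0
    for i in range(len(y)):
        d = np.sum((Xs - Xs[i]) ** 2, axis=1)
        d[i] = np.inf  # exclude the sample itself
        errs += y[int(np.argmin(d))] != y[i]
    return errs / len(y)

def ga_select(X, y, pop=20, gens=30, pmut=0.1):
    # Binary GA: roulette selection, single-point crossover, bit-flip mutation.
    n = X.shape[1]
    P = rng.random((pop, n)) < 0.5  # each row is a feature mask
    best = P[0]
    for _ in range(gens):
        fit = np.array([1.0 - knn_error(X, y, ind) for ind in P])
        probs = (fit + 1e-9) / (fit + 1e-9).sum()
        P = P[rng.choice(pop, size=pop, p=probs)]
        for i in range(0, pop - 1, 2):
            cut = rng.integers(1, n)
            P[i, cut:], P[i + 1, cut:] = P[i + 1, cut:].copy(), P[i, cut:].copy()
        P ^= rng.random((pop, n)) < pmut
        best = P[np.argmax([1.0 - knn_error(X, y, ind) for ind in P])]
    return best
```

On separable toy data the returned mask should retain the informative features and yield a low leave-one-out error.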

  10. Facial expression discrimination varies with presentation time but not with fixation on features: A backward masking study using eye-tracking

    OpenAIRE

    Neath, Karly N.; Itier, Roxane J.

    2013-01-01

    The current study investigated the effects of presentation time and fixation to expression-specific diagnostic features on emotion discrimination performance, in a backward masking task. While no differences were found when stimuli were presented for 16.67 ms, differences between facial emotions emerged beyond the happy-superiority effect at presentation times as early as 50 ms. Happy expressions were best discriminated, followed by neutral and disgusted, then surprised, and finally fearful e...

  11. Neural processing of fearful and happy facial expressions during emotion-relevant and emotion-irrelevant tasks: A fixation-to-feature approach.

    Science.gov (United States)

    Neath-Tavares, Karly N; Itier, Roxane J

    2016-09-01

    Research suggests an important role of the eyes and mouth for discriminating facial expressions of emotion. A gaze-contingent procedure was used to test the impact of fixation to facial features on the neural response to fearful, happy and neutral facial expressions in an emotion discrimination (Exp.1) and an oddball detection (Exp.2) task. The N170 was the only eye-sensitive ERP component, and this sensitivity did not vary across facial expressions. In both tasks, compared to neutral faces, responses to happy expressions were seen as early as 100-120ms occipitally, while responses to fearful expressions started around 150ms, on or after the N170, at both occipital and lateral-posterior sites. Analyses of scalp topographies revealed different distributions of these two emotion effects across most of the epoch. Emotion processing interacted with fixation location at different times between tasks. Results suggest a role of both the eyes and mouth in the neural processing of fearful expressions and of the mouth in the processing of happy expressions, before 350ms.

  12. Persistent facial pain conditions

    DEFF Research Database (Denmark)

    Forssell, Heli; Alstergren, Per; Bakke, Merete;

    2016-01-01

    , clinical features, consequences, central and peripheral mechanisms, diagnostic criteria (DC/TMD), and principles of management. For each of the neuropathic facial pain entities, the definitions, prevalence, clinical features, and diagnostics are described. The current understanding of the pathophysiology...

  13. A comprehensive approach to long-standing facial paralysis based on lengthening temporalis myoplasty.

    Science.gov (United States)

    Labbè, D; Bussu, F; Iodice, A

    2012-06-01

    Long-standing peripheral monolateral facial paralysis in the adult has challenged otolaryngologists, neurologists and plastic surgeons for centuries. Notwithstanding, the ultimate goal of normality of the paralyzed hemi-face with symmetry at rest, and the achievement of a spontaneous symmetrical smile with corneal protection, has not been fully reached. At the beginning of the 20th century, the main options were neural reconstructions including accessory to facial nerve transfer and hypoglossal to facial nerve crossover. In the first half of the 20th century, various techniques for static correction with autologous temporalis muscle and fascia grafts were proposed, such as the techniques of Gillies (1934) and McLaughlin (1949). Cross-facial nerve grafts have been performed since the beginning of the 1970s, often with the attempt to transplant free muscle to restore active movements. However, these transplants were non-vascularized, and further evaluations revealed central fibrosis and minimal return of function. A major step was taken in the second half of the 1970s, with the introduction of microneurovascular muscle transfer in facial reanimation, which, often combined in two steps with a cross-facial nerve graft, has become the most popular option for the comprehensive treatment of long-standing facial paralysis. In the second half of the 1990s in France, a regional muscle transfer technique with the definite advantages of being one-step, technically easier and relatively fast, namely lengthening temporalis myoplasty, acquired popularity and consensus among surgeons treating facial paralysis. A total of 111 patients with facial paralysis were treated in Caen between 1997 and 2005 by a single surgeon who developed 2 variants of the technique (V1, V2), each with its advantages and disadvantages, but both based on the same anatomo-functional background and aim, which is transfer of the temporalis muscle tendon on the coronoid process to the lips. For a comprehensive

  14. Infrared-based blink-detecting glasses for facial pacing: toward a bionic blink.

    Science.gov (United States)

    Frigerio, Alice; Hadlock, Tessa A; Murray, Elizabeth H; Heaton, James T

    2014-01-01

    IMPORTANCE Facial paralysis remains one of the most challenging conditions to effectively manage, often causing life-altering deficits in both function and appearance. Facial rehabilitation via pacing and robotic technology has great yet unmet potential. A critical first step toward reanimating symmetrical facial movement in cases of unilateral paralysis is the detection of healthy movement to use as a trigger for stimulated movement. OBJECTIVE To test a blink detection system that can be attached to standard eyeglasses and used as part of a closed-loop facial pacing system. DESIGN, SETTING, AND PARTICIPANTS Standard safety glasses were equipped with an infrared (IR) emitter-detector unit, oriented horizontally across the palpebral fissure, creating a monitored IR beam that became interrupted when the eyelids closed, and were tested in 24 healthy volunteers from a tertiary care facial nerve center community. MAIN OUTCOMES AND MEASURES Video-quantified blinking was compared with both IR sensor signal magnitude and rate of change in healthy participants with their gaze in repose, while they shifted their gaze from central to far-peripheral positions, and during the production of particular facial expressions. RESULTS Blink detection based on signal magnitude achieved 100% sensitivity in forward gaze but generated false detections on downward gaze. Calculations of peak rate of signal change (first derivative) typically distinguished blinks from gaze-related eyelid movements. During forward gaze, 87% of detected blink events were true positives, 11% were false positives, and 2% were false negatives. Of the 11% false positives, 6% were associated with partial eyelid closures. During gaze changes, false blink detection occurred 6% of the time during lateral eye movements, 10% of the time during upward movements, 47% of the time during downward movements, and 6% of the time for movements from an upward or downward gaze back to the primary gaze. Facial expressions

  16. Color Facial Discriminant Feature Extraction and Recognition

    Institute of Scientific and Technical Information of China (English)

    儒林

    2012-01-01

    Current popular face recognition algorithms share a common characteristic: they first convert the original color image to a gray-level image, and then apply feature extraction and recognition algorithms designed for gray-level images. In practice, the conversion from color to gray level uses a simple fixed combination of color coefficients, which cannot reflect the relative importance of the three RGB components. Based on the color composition of facial images, this paper seeks an optimal combination coefficient that retains the most discriminative color information of the image, by extracting and analyzing features of the R, G and B components of color facial images; PCA is then performed on the combined component. Finally, extensive experiments on the internationally used AR standard color face database verify the effectiveness of the proposed method.
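The search for discriminative grayscale combination coefficients can be illustrated with a simple Fisher-criterion grid search over the RGB weight simplex (an illustrative reading of the abstract; the paper's exact discriminant criterion may differ):

```python
import numpy as np

def fisher_score(gray_feats, y):
    # Ratio of between-class to within-class scatter of 1D features.
    classes = np.unique(y)
    overall = gray_feats.mean()
    sb = sum((gray_feats[y == c].mean() - overall) ** 2 * np.sum(y == c)
             for c in classes)
    sw = sum(((gray_feats[y == c] - gray_feats[y == c].mean()) ** 2).sum()
             for c in classes)
    return sb / (sw + 1e-12)

def best_gray_coeffs(R, G, B, y, steps=10):
    # Grid-search weights (wr, wg, wb) on the simplex so that the combined
    # gray value wr*R + wg*G + wb*B is most class-discriminative, instead of
    # the fixed 0.299/0.587/0.114 weights. R, G, B: per-sample channel values.
    best, best_s = None, -1.0
    for i in range(steps + 1):
        for j in range(steps + 1 - i):
            wr, wg = i / steps, j / steps
            wb = 1.0 - wr - wg
            s = fisher_score(wr * R + wg * G + wb * B, y)
            if s > best_s:
                best, best_s = (wr, wg, wb), s
    return best
```

If only the R channel carries class information, the search should push nearly all the weight onto R.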

  17. Human emotion detector based on genetic algorithm using lip features

    Science.gov (United States)

    Brown, Terrence; Fetanat, Gholamreza; Homaifar, Abdollah; Tsou, Brian; Mendoza-Schrock, Olga

    2010-04-01

    We predicted human emotion using a Genetic Algorithm (GA) based lip feature extractor from facial images to classify all seven universal emotions: fear, happiness, dislike, surprise, anger, sadness and neutrality. First, we isolated the mouth from the input images using methods such as Region of Interest (ROI) acquisition, grayscaling, histogram equalization, filtering, and edge detection. Next, the GA determined the optimal or near-optimal ellipse parameters that circumscribe the mouth and separate it into upper and lower lips. The two ellipses then went through fitness calculation, followed by training using a database of Japanese women's faces expressing all seven emotions. Finally, our proposed algorithm was tested using a published database consisting of emotions from several persons. The final results were presented in confusion matrices. Our results showed an accuracy that varies from 20% to 60% for each of the seven emotions. The errors were mainly due to inaccuracies in the classification, and also to the different expressions in the given emotion database. Detailed analysis of these errors pointed to the limitation of detecting emotion based on lip features alone. Similar work [1] in the literature detected emotion in only one person; we have successfully extended our GA-based solution to include several subjects.

  18. Spontaneous Facial Mimicry in Response to Dynamic Facial Expressions

    Science.gov (United States)

    Sato, Wataru; Yoshikawa, Sakiko

    2007-01-01

    Based on previous neuroscientific evidence indicating activation of the mirror neuron system in response to dynamic facial actions, we hypothesized that facial mimicry would occur while subjects viewed dynamic facial expressions. To test this hypothesis, dynamic/static facial expressions of anger/happiness were presented using computer-morphing…

  19. Spontaneous Facial Expression Recognition Based on RGB-D Dynamic Sequences

    Institute of Scientific and Technical Information of China (English)

    邵洁; 董楠

    2015-01-01

    Different from traditional facial expression recognition methods based on 2D static images, a spontaneous facial expression recognition algorithm is proposed for RGB-D image sequences. After pre-processing for image alignment and normalization, 4D spatio-temporal texture data are extracted as dynamic features. Slow Feature Analysis is then applied to detect the apex of the expression, and a 3D facial geometric model of the apex image is built and used as the static feature. The two kinds of features are combined, reduced in dimensionality by PCA, and finally trained and classified with Conditional Random Fields. Extensive experiments on the BU-4DFE facial expression database verify that the algorithm not only outperforms traditional static facial expression recognition methods and many other dynamic methods, but can also recognize spontaneously displayed expressions automatically, which makes further practical application possible.

  20. Linear feature detection based on ridgelet

    Institute of Scientific and Technical Information of China (English)

    HOU Biao (侯彪); LIU Fang (刘芳); JIAO Licheng (焦李成)

    2003-01-01

    Linear feature detection is very important in image processing. The detection efficiency directly affects the performance of pattern recognition and pattern classification. Based on the idea of the ridgelet, this paper presents a new discrete localized ridgelet transform and a new method for detecting linear features in anisotropic images. Experimental results prove the efficiency of the proposed method.
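The ridgelet's underlying idea, analyzing lines via Radon-style projections, can be illustrated crudely in NumPy (this is only a projection-based stand-in for intuition, not the paper's discrete localized ridgelet transform, which additionally applies a 1D wavelet along the projection axis):

```python
import numpy as np

def radon_line_strength(img, n_angles=36):
    # Discrete Radon-style projections: mass from a straight line collapses
    # into a single projection bin at the line's angle, so the maximum bin
    # value acts as a crude linear-feature indicator.
    H, W = img.shape
    cy, cx = (H - 1) / 2, (W - 1) / 2
    ys, xs = np.mgrid[0:H, 0:W]
    best = 0.0
    for k in range(n_angles):
        t = np.pi * k / n_angles
        # signed distance of each pixel to a line through the centre at angle t
        d = (xs - cx) * np.sin(t) - (ys - cy) * np.cos(t)
        bins = np.rint(d).astype(int)
        bins -= bins.min()
        proj = np.bincount(bins.ravel(), weights=img.ravel())
        best = max(best, proj.max())
    return best
```

An image containing a line scores far higher than a diffuse image of equal total intensity, which is the property ridgelet-based detectors exploit.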

  1. 3D face recognition using compositional features from facial curves

    Institute of Scientific and Technical Information of China (English)

    邹红艳; 达飞鹏; 李晓莉

    2012-01-01

    A 3D face recognition method combining local and global geometric features extracted from iso-geodesic curves is proposed. First, a set of facial curves with different geodesic distances from the nose tip is extracted to represent the facial surface. Then, for each point on the re-sampled facial curves, a pose-invariant local feature is computed from its local neighborhood, representing the geometric information of that neighborhood. Next, the shape information of the facial curves is computed, which constitutes the global feature. Finally, local and global features are compared separately, and the final result is their weighted sum. The method is tested on the FRGC (face recognition grand challenge) v2.0 data set, and the experimental results show that recognition performance using the compositional features is superior to that using a single feature, and is also robust to expression.

  2. Facial expression recognition and model-based regeneration for distance teaching

    Science.gov (United States)

    De Silva, Liyanage C.; Vinod, V. V.; Sengupta, Kuntal

    1998-12-01

    This paper presents a novel idea for a visual communication system which can support distance teaching using a network of computers. The authors' main focus is to enhance the quality of distance teaching by reducing the barrier between teacher and student that is formed by the remote connection of the networked participants. The paper presents an effective way of improving the teacher-student communication link of an IT (Information Technology) based distance teaching scenario, using facial expression recognition results and face global and local motion detection results from both the teacher and the student. It presents a way of regenerating the facial images for the teacher-student down-link, which can enhance the teacher's facial expressions while reducing network traffic compared with usual video broadcasting scenarios. At the same time, it presents a way of representing a large volume of facial expression data for the whole student population in the student-teacher up-link. This up-link representation helps the teacher to receive instant feedback on the talk, as if delivering a face-to-face lecture. In conventional video tele-conferencing applications this task is nearly impossible, due to the huge volume of upward network traffic. The authors draw on several of their previously published results for most of the image processing components needed to complete such a system; some of the remaining system components are covered by ongoing work.

  3. A fully three-dimensional method for facial reconstruction based on deformable models.

    Science.gov (United States)

    Quatrehomme, G; Cotin, S; Subsol, G; Delingette, H; Garidel, Y; Grévin, G; Fidrich, M; Bailet, P; Ollier, A

    1997-07-01

    Two facial models corresponding to two deceased subjects were manually created, and the two corresponding skulls were dissected and skeletonized. These pairs of skull/facial data were scanned with a CT scanner, and geometric three-dimensional models of both skulls and facial tissue were computed. One set of skull/facial data is used as a reference set, whereas the second set is used as ground truth for validating our method. After a semi-automatic face-skull registration, we apply an original global parametric transformation T that maps the reference skull onto the skull to be reconstructed. This algorithm is based upon salient lines of the skull called crest lines: more precisely, the crest lines of the first skull are matched to the crest lines of the second skull by an iterative closest point algorithm. We then apply this transformation to the reference face to obtain the "unknown" face to be reconstructed. The reliability and difficulties of this original technique are then discussed.
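The crest-line matching step uses an iterative closest point algorithm; a minimal rigid-ICP sketch is below (the paper's transformation T is a richer global parametric warp, so rigid motion here is a deliberate simplification):

```python
import numpy as np

def icp_rigid(src, dst, iters=20):
    # Iterative closest point: match each source point to its nearest target
    # point, then solve the best rigid transform in closed form (Kabsch/SVD).
    dim = src.shape[1]
    R, t = np.eye(dim), np.zeros(dim)
    cur = src.copy()
    for _ in range(iters):
        # nearest-neighbour correspondences
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[np.argmin(d2, axis=1)]
        # closed-form rigid alignment of cur onto matched
        mu_s, mu_d = cur.mean(0), matched.mean(0)
        U, _, Vt = np.linalg.svd((cur - mu_s).T @ (matched - mu_d))
        Ri = Vt.T @ U.T
        if np.linalg.det(Ri) < 0:  # avoid reflections
            Vt[-1] *= -1
            Ri = Vt.T @ U.T
        ti = mu_d - Ri @ mu_s
        cur = cur @ Ri.T + ti
        R, t = Ri @ R, Ri @ t + ti  # accumulate the composite transform
    return R, t, cur
```

With a good initial registration (which the semi-automatic face-skull registration provides), nearest-neighbour correspondences are mostly correct and the iteration converges quickly.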

  4. A Robust and Efficient Facial Feature Tracking Algorithm

    Institute of Scientific and Technical Information of China (English)

    黄琛; 丁晓青; 方驰

    2012-01-01

    Facial feature tracking obtains precise information about facial components beyond the coarse face position and motion trajectory, and is important to computer vision research. The active appearance model (AAM) is one of the most effective methods for describing facial feature point locations. However, its high-dimensional parameter space and gradient-descent optimization make it sensitive to initial parameters and prone to local minima, so trackers based on the traditional AAM cannot simultaneously cope well with large pose, illumination and expression changes. Within a multi-view AAM framework, this paper proposes a real-time pose estimation algorithm that combines random forests and linear discriminant analysis (LDA) to pre-estimate and update the head pose of the tracked face, effectively handling large pose variations in video. A modified online appearance model (OAM) is proposed to evaluate tracking accuracy, and the texture model of the AAM is adaptively updated through incremental principal component analysis (PCA) learning, which greatly improves tracking stability and the model's ability to cope with illumination and expression changes. Experimental results show that the proposed algorithm achieves good accuracy, robustness and real-time performance for facial feature point tracking in video.

  5. Initial assessment of facial nerve paralysis based on motion analysis using an optical flow method.

    Science.gov (United States)

    Samsudin, Wan Syahirah W; Sundaraj, Kenneth; Ahmad, Amirozi; Salleh, Hasriah

    2016-01-01

    An initial assessment method that can classify as well as categorize the severity of paralysis into one of six levels according to the House-Brackmann (HB) system based on facial landmarks motion using an Optical Flow (OF) algorithm is proposed. The desired landmarks were obtained from the video recordings of 5 normal and 3 Bell's Palsy subjects and tracked using the Kanade-Lucas-Tomasi (KLT) method. A new scoring system based on the motion analysis using area measurement is proposed. This scoring system uses the individual scores from the facial exercises and grades the paralysis based on the HB system. The proposed method has obtained promising results and may play a pivotal role towards improved rehabilitation programs for patients.
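The area-and-symmetry scoring idea in this record can be illustrated with a small sketch: given landmark trajectories tracked on each side of the face (e.g. by KLT), compare the motion swept on the two sides and map the asymmetry to one of six House-Brackmann-like levels. The function names and grade cut-offs below are hypothetical illustrations, not the paper's calibrated scoring system.

```python
import numpy as np

def motion_area(traj):
    """Total path length swept by one landmark trajectory (N x 2 array)."""
    steps = np.diff(traj, axis=0)               # frame-to-frame displacement vectors
    return float(np.sum(np.linalg.norm(steps, axis=1)))

def symmetry_score(left_traj, right_traj):
    """Ratio of weaker-side to stronger-side motion, in [0, 1]."""
    a, b = motion_area(left_traj), motion_area(right_traj)
    if max(a, b) == 0:
        return 1.0                              # no motion on either side
    return min(a, b) / max(a, b)

def hb_grade(score):
    """Map a symmetry score to one of six HB-like levels (illustrative cut-offs)."""
    cutoffs = [0.8, 0.6, 0.4, 0.2, 0.05]        # grades I..V; grade VI otherwise
    for grade, c in enumerate(cutoffs, start=1):
        if score >= c:
            return grade
    return 6
```

Perfectly symmetric motion maps to grade I; a side that barely moves during the facial exercises maps to grade VI.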

  6. Ontology Based Feature Driven Development Life Cycle

    Directory of Open Access Journals (Sweden)

    Farheen Siddiqui

    2012-01-01

    Full Text Available The upcoming technology support for the semantic web promises fresh directions for the Software Engineering community. The semantic web also has its roots in knowledge engineering, which prompts software engineers to look for applications of ontology throughout the Software Engineering lifecycle. The internal components of a semantic web application are "lightweight" and may be of lower quality standards than the externally visible modules; in fact, the internal components are generated from the external (ontological) components. That is why agile development approaches such as feature driven development are suitable for developing an application's internal components. As yet there is no particular procedure that describes the role of ontology in FDD processes. We therefore propose an ontology based feature driven development for semantic web applications that can be used from application model development to feature design and implementation. Features are precisely defined in the OWL-based domain model, and the transition from the OWL based domain model to the feature list is directly defined in transformation rules. On the other hand, the ontology based overall model can be easily validated through automated tools. Advantages of ontology-based feature driven development are also discussed.

  7. Feature-Based Classification of Networks

    CERN Document Server

    Barnett, Ian; Kuijjer, Marieke L; Mucha, Peter J; Onnela, Jukka-Pekka

    2016-01-01

    Network representations of systems from various scientific and societal domains are neither completely random nor fully regular, but instead appear to contain recurring structural building blocks. These features tend to be shared by networks belonging to the same broad class, such as the class of social networks or the class of biological networks. At a finer scale of classification within each such class, networks describing more similar systems tend to have more similar features. This occurs presumably because networks representing similar purposes or constructions would be expected to be generated by a shared set of domain specific mechanisms, and it should therefore be possible to classify these networks into categories based on their features at various structural levels. Here we describe and demonstrate a new, hybrid approach that combines manual selection of features of potential interest with existing automated classification methods. In particular, selecting well-known and well-studied features that ...

  8. Controversies in Contemporary Facial Reanimation.

    Science.gov (United States)

    Kim, Leslie; Byrne, Patrick J

    2016-08-01

    Facial palsy is a devastating condition with profound functional, aesthetic, and psychosocial implications. Although the complexity of facial expression and intricate synergy of facial mimetic muscles are difficult to restore, the goal of management is to reestablish facial symmetry and movement. Facial reanimation surgery requires an individualized treatment approach based on the cause, pattern, and duration of facial palsy while considering patient age, comorbidities, motivation, and goals. Contemporary reconstructive options include a spectrum of static and dynamic procedures. Controversies in the evaluation of patients with facial palsy, timing of intervention, and management decisions for dynamic smile reanimation are discussed. PMID:27400842

  9. A dynamic approach to the recognition of 3D facial expressions and their temporal models

    NARCIS (Netherlands)

    Sandbach, Georgia; Zafeiriou, Stefanos; Pantic, Maja; Rueckert, Daniel

    2011-01-01

    In this paper we propose a method that exploits 3D motion-based features between frames of 3D facial geometry sequences for dynamic facial expression recognition. An expressive sequence is modeled to contain an onset followed by an apex and an offset. Feature selection methods are applied in order t

  10. Facial Expression Recognition Based on FSVM and KNN

    Institute of Scientific and Technical Information of China (English)

    王小虎; 黄银珍; 张石清

    2013-01-01

    To improve recognition accuracy, a new approach for facial expression recognition combining a fuzzy support vector machine (FSVM) and K-nearest neighbor (KNN) is presented. Facial expression features are first extracted by principal component analysis (PCA). The regions to be classified are then adaptively divided into different region types according to their degree of separability, and, exploiting the complementary characteristics of the FSVM and KNN algorithms, the classification algorithm is switched according to the region type. Experiments show that this method maintains classification accuracy while reducing computational complexity.

  11. Facial Expressions Recognition Using Eigenspaces

    OpenAIRE

    Senthil Ragavan Valayapalayam Kittusamy; Venkatesh Chakrapani

    2012-01-01

    A challenging research topic is to make computer systems recognize facial expressions from face images. A method of facial expression recognition based on Eigenspaces is presented in this study. Here, the authors recognize the user's facial expressions from the input images, using a method that was customized from eigenface recognition. Evaluation was done for this method in terms of identification correctness using two different facial expressions databases, Cohn-Kanade facial exp...
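The eigenspace pipeline the abstract describes (project faces onto principal components, then classify in the reduced space) can be sketched in a few lines. This is a generic eigenface-style sketch under assumed data shapes, not the authors' customized method; `fit_eigenspace`, `project` and `classify` are illustrative names.

```python
import numpy as np

def fit_eigenspace(X, k):
    """X: (n_samples, n_pixels) flattened faces. Returns the mean face and
    the top-k eigenfaces (rows of W)."""
    mu = X.mean(axis=0)
    # SVD of the centered data; rows of Vt are the principal axes (eigenfaces)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def project(x, mu, W):
    """Coordinates of one face in the eigenspace."""
    return W @ (x - mu)

def classify(x, mu, W, gallery, labels):
    """Nearest neighbour in eigenspace; `gallery` holds projected training faces."""
    d = np.linalg.norm(gallery - project(x, mu, W), axis=1)
    return labels[int(np.argmin(d))]
```

In use, all training expressions are projected once into the eigenspace, and each probe image is classified by its nearest projected neighbour.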

  12. Dynamic Model of Facial Expression Recognition based on Eigen-face Approach

    OpenAIRE

    Bajaj, Nikunj; Routray, Aurobinda; Happy, S L

    2013-01-01

    Emotions are the best way of communicating information, and sometimes they carry more information than words. Recently, there has been huge interest in automatic recognition of human emotion because of its widespread applications in security, surveillance, marketing, advertisement, and human-computer interaction. To communicate with a computer in a natural way, it is desirable to use more natural modes of human communication based on voice, gestures and facial expressions. In this paper, a h...

  13. Robust feature-based object tracking

    Science.gov (United States)

    Han, Bing; Roberts, William; Wu, Dapeng; Li, Jian

    2007-04-01

    Object tracking is an important component of many computer vision systems. It is widely used in video surveillance, robotics, 3D image reconstruction, medical imaging, and human computer interface. In this paper, we focus on unsupervised object tracking, i.e., without prior knowledge about the object to be tracked. To address this problem, we take a feature-based approach, i.e., using feature points (or landmark points) to represent objects. Feature-based object tracking consists of feature extraction and feature correspondence. Feature correspondence is particularly challenging since a feature point in one image may have many similar points in another image, resulting in ambiguity in feature correspondence. To resolve the ambiguity, algorithms, which use exhaustive search and correlation over a large neighborhood, have been proposed. However, these algorithms incur high computational complexity, which is not suitable for real-time tracking. In contrast, Tomasi and Kanade's tracking algorithm only searches corresponding points in a small candidate set, which significantly reduces computational complexity; but the algorithm may lose track of feature points in a long image sequence. To mitigate the limitations of the aforementioned algorithms, this paper proposes an efficient and robust feature-based tracking algorithm. The key idea of our algorithm is as below. For a given target feature point in one frame, we first find a corresponding point in the next frame, which minimizes the sum-of-squared-difference (SSD) between the two points; then we test whether the corresponding point gives large value under the so-called Harris criterion. If not, we further identify a candidate set of feature points in a small neighborhood of the target point; then find a corresponding point from the candidate set, which minimizes the SSD between the two points. The algorithm may output no corresponding point due to disappearance of the target point. 
Our algorithm is capable of tracking
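The correspondence test described above (SSD minimisation over a small candidate neighbourhood, followed by a Harris corner check to reject unreliable matches) can be sketched as follows. Window sizes, the search radius and the Harris threshold are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np

def ssd(a, b):
    """Sum-of-squared-differences between two equally sized patches."""
    d = a.astype(float) - b.astype(float)
    return float(np.sum(d * d))

def harris_response(patch, k=0.04):
    """Harris corner measure on a small grayscale patch."""
    gy, gx = np.gradient(patch.astype(float))
    Sxx, Syy, Sxy = (gx * gx).sum(), (gy * gy).sum(), (gx * gy).sum()
    return Sxx * Syy - Sxy * Sxy - k * (Sxx + Syy) ** 2

def track_point(prev, curr, pt, half=2, search=3, harris_min=1.0):
    """Find the point in `curr` near `pt` whose patch minimises SSD to the
    patch around `pt` in `prev`; reject the match unless it is corner-like."""
    y, x = pt
    ref = prev[y - half:y + half + 1, x - half:x + half + 1]
    best, best_pt = None, None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            cand = curr[yy - half:yy + half + 1, xx - half:xx + half + 1]
            if cand.shape != ref.shape:
                continue                      # candidate window fell off the image
            s = ssd(ref, cand)
            if best is None or s < best:
                best, best_pt = s, (yy, xx)
    if best_pt is None:
        return None                           # no corresponding point found
    yb, xb = best_pt
    if harris_response(curr[yb - half:yb + half + 1, xb - half:xb + half + 1]) < harris_min:
        return None                           # match rejected by the Harris criterion
    return best_pt
```

A textured patch translated by one row and two columns between frames is recovered at its new position, while a featureless region fails the Harris test and returns no match.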

  14. Multiresolution Feature Based Fractional Power Polynomial Kernel Fisher Discriminant Model for Face Recognition

    OpenAIRE

    Dattatray V. Jadhav; Jayant V. Kulkarni; Raghunath S. Holambe

    2008-01-01

    This paper presents a technique for face recognition which uses wavelet transform to derive desirable facial features. Three level decompositions are used to form the pyramidal multiresolution features to cope with the variations due to illumination and facial expression changes. The fractional power polynomial kernel maps the input data into an implicit feature space with a nonlinear mapping. Being linear in the feature space, but nonlinear in the input space, kernel is capable of deriving ...

  15. Rehabilitation of long-standing facial nerve paralysis with percutaneous suture-based slings.

    Science.gov (United States)

    Alam, Daniel

    2007-01-01

    Long-standing facial paralysis creates significant functional and aesthetic problems for patients affected by this deficit. Traditional approaches to correct this problem have involved aggressive open procedures such as unilateral face-lifts and sling procedures using fascia and implantable materials. Unfortunately, our results with these techniques over the last 5 years have been suboptimal. The traditional face-lift techniques did not address the nasolabial fold to our satisfaction, and suture-based techniques alone, while offering excellent short-term results, failed to provide a long-term solution. This led to the development of a novel percutaneous technique combining the minimally invasive approach of suture-based lifts with the long-term efficacy of Gore-Tex-based slings. We report our results with this technique for static facial suspension in patients with long-standing facial nerve paralysis and our surgical outcomes in 13 patients. The procedure offers re-creation of the nasolabial crease and suspension of the oral commissure to its normal anatomic relationships. The recovery time is minimal, and the operation is performed as a short outpatient procedure. Long-term 2-year follow-up has shown effective preservation of the surgical results.

  16. FPGA Based Assembling of Facial Components for Human Face Construction

    CERN Document Server

    Halder, Santanu; Nasipuri, Mita; Basu, Dipak Kumar; Kundu, Mahantapas

    2010-01-01

    This paper aims at VLSI realization for generation of a new face from textual description. The FASY (FAce SYnthesis) System is a Face Database Retrieval and new Face generation System that is under development. One of its main features is the generation of the requested face when it is not found in the existing database. The new face generation system works in three steps - searching phase, assembling phase and tuning phase. In this paper the tuning phase using hardware description language and its implementation in a Field Programmable Gate Array (FPGA) device is presented.

  17. Facial Expression Recognition Techniques Based on Bilinear Model

    Institute of Scientific and Technical Information of China (English)

    徐欢

    2014-01-01

    Aiming at problems in current facial expression recognition, we study point-cloud alignment of 3D facial expression data from the 3D expression database BU-3DFE and build bilinear models on the aligned data. The recognition algorithm based on the bilinear model is improved to form a new recognition and classification algorithm, which reduces the amount of identity-feature computation in the original algorithm and minimizes the influence of identity features on the overall expression recognition process, thereby improving facial expression recognition results and ultimately achieving highly robust 3D facial expression recognition.

  18. Changing the facial features of patients with Treacher Collins syndrome: protocol for 3-stage treatment of hard and soft tissue hypoplasia in the upper half of the face.

    Science.gov (United States)

    Mitsukawa, Nobuyuki; Saiga, Atsuomi; Satoh, Kaneshige

    2014-07-01

    Treacher Collins syndrome is a disorder characterized by various congenital soft tissue anomalies involving hypoplasia of the zygoma, maxilla, and mandible. A variety of treatments have been reported to date. These treatments can be classified into 2 major types. The first type involves osteotomy for hard tissue such as the zygoma and mandible. The second type involves plastic surgery using bone grafting in the malar region and soft tissue repair of eyelid deformities. We devised a new treatment to comprehensively correct hard and soft tissue deformities in the upper half of the face of Treacher Collins patients. The aim was to "change facial features and make it difficult to tell that the patients have this disorder." This innovative treatment strategy consists of 3 stages: (1) placement of dermal fat graft from the lower eyelid to the malar subcutaneous area, (2) custom-made synthetic zygomatic bone grafting, and (3) Z-plasty flap transposition from the upper to the lower eyelid and superior repositioning and fixation of the lateral canthal tendon using a Mitek anchor system. This method was used on 4 patients with Treacher Collins syndrome who had moderate to severe hypoplasia of the zygomas and the lower eyelids. Facial features of these patients were markedly improved and very good results were obtained. There were no major complications intraoperatively or postoperatively in any of the patients during the series of treatments. In synthetic bone grafting in the second stage, the implant in some patients was in the way of the infraorbital nerve. Thus, the nerve was detached and then sutured under the microscope. Postoperatively, patients had almost full restoration of sensory nerve torpor within 5 to 6 months. We devised a 3-stage treatment to "change facial features" of patients with hypoplasia of the upper half of the face due to Treacher Collins syndrome. 
The treatment protocol provided a very effective way to treat deformities of the upper half of the face

  19. Intensity-based registration and fusion of thermal and visual facial-images

    Science.gov (United States)

    Arslan, Musa Serdar; Elbakaray, Mohamed I.; Reza, Shamim; Iftekharuddin, Khan M.

    2012-10-01

    Fusion of images from different modalities provides information that cannot be obtained by viewing the images separately and consecutively. Automatic fusion of thermal and visual images is of great interest in defense and medical applications. In this study, we implemented automatic intensity-based illumination, translation and scale invariant registration of deformable objects in thermal and visual images by maximization of a similarity measure such as generalized correlation ratio. This method was originally used to register ultrasound (US) and magnetic resonance images (MRI) successfully. In our current work, we propose a major modification to the original algorithm by investigating appropriate information content in the input data. The registration of facial thermal and visual images in this algorithm is achieved by maximization of the similarity measure between the input images in the appropriate image channel. The algorithm is tested using real facial images with illumination, scale, and translation variations and the results show acceptable accuracy.

  20. NEURO FUZZY MODEL FOR FACE RECOGNITION WITH CURVELET BASED FEATURE IMAGE

    OpenAIRE

    SHREEJA R,; KHUSHALI DEULKAR,; SHALINI BHATIA

    2011-01-01

    A facial recognition system is a computer application for automatically identifying or verifying a person from a digital image or a video frame from a video source. One way to do this is by comparing selected facial features from the image with a facial database. It is typically used in security systems and can be compared to other biometric techniques such as fingerprint or iris recognition systems. Every face has approximately 80 nodal points, such as the distance between the eyes, the width of...

  1. Face recognition via edge-based Gabor feature representation for plastic surgery-altered images

    Science.gov (United States)

    Chude-Olisah, Chollette C.; Sulong, Ghazali; Chude-Okonkwo, Uche A. K.; Hashim, Siti Z. M.

    2014-12-01

    Plastic surgery procedures on the face introduce skin texture variations between images of the same person (intra-subject), thereby making the task of face recognition more difficult than in normal scenario. Usually, in contemporary face recognition systems, the original gray-level face image is used as input to the Gabor descriptor, which translates to encoding some texture properties of the face image. The texture-encoding process significantly degrades the performance of such systems in the case of plastic surgery due to the presence of surgically induced intra-subject variations. Based on the proposition that the shape of significant facial components such as eyes, nose, eyebrow, and mouth remains unchanged after plastic surgery, this paper employs an edge-based Gabor feature representation approach for the recognition of surgically altered face images. We use the edge information, which is dependent on the shapes of the significant facial components, to address the plastic surgery-induced texture variation problems. To ensure that the significant facial components represent useful edge information with little or no false edges, a simple illumination normalization technique is proposed for preprocessing. Gabor wavelet is applied to the edge image to accentuate on the uniqueness of the significant facial components for discriminating among different subjects. The performance of the proposed method is evaluated on the Georgia Tech (GT) and the Labeled Faces in the Wild (LFW) databases with illumination and expression problems, and the plastic surgery database with texture changes. Results show that the proposed edge-based Gabor feature representation approach is robust against plastic surgery-induced face variations amidst expression and illumination problems and outperforms the existing plastic surgery face recognition methods reported in the literature.

  2. Statistical feature extraction based iris recognition system

    Indian Academy of Sciences (India)

    ATUL BANSAL; RAVINDER AGARWAL; R K SHARMA

    2016-05-01

    Iris recognition systems have been proposed by numerous researchers using different feature extraction techniques for accurate and reliable biometric authentication. In this paper, a statistical feature extraction technique based on correlation between adjacent pixels has been proposed and implemented. Hamming distance based metric has been used for matching. Performance of the proposed iris recognition system (IRS) has been measured by recording false acceptance rate (FAR) and false rejection rate (FRR) at different thresholds in the distance metric. System performance has been evaluated by computing statistical features along two directions, namely, radial direction of circular iris region and angular direction extending from pupil to sclera. Experiments have also been conducted to study the effect of number of statistical parameters on FAR and FRR. Results obtained from the experiments based on different set of statistical features of iris images show that there is a significant improvement in equal error rate (EER) when number of statistical parameters for feature extraction is increased from three to six. Further, it has also been found that increasing radial/angular resolution, with normalization in place, improves EER for the proposed iris recognition system.
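The Hamming-distance matching stage the abstract relies on can be sketched directly: two binary iris codes are compared bit-wise, and the pair is accepted when the normalised distance falls below a decision threshold, with the threshold controlling the FAR/FRR trade-off. The 0.32 threshold below is an illustrative assumption, not the paper's operating point.

```python
import numpy as np

def hamming_distance(code_a, code_b):
    """Fraction of disagreeing bits between two binary iris codes."""
    a, b = np.asarray(code_a, bool), np.asarray(code_b, bool)
    return float(np.mean(a ^ b))

def match(code_a, code_b, threshold=0.32):
    """Accept the pair as the same iris when the distance is below the
    decision threshold; raising it lowers FRR at the cost of a higher FAR."""
    return hamming_distance(code_a, code_b) < threshold
```

A probe that differs from the enrolled code in only a few bits (sensor noise) is accepted; an unrelated code is rejected.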

  3. A 3D facial expression animation system based on MPEG-4

    Institute of Scientific and Technical Information of China (English)

    於俊; 汪增福

    2011-01-01

    For model-based face video coding/decoding, a 3D facial expression animation algorithm based on MPEG-4 is proposed. On the coder side, the face and its feature points are detected in the first frame of the transmitted video using an Adaboost + Camshift + AAM (active appearance model) pipeline, and a simple generic face mesh model is adapted to obtain the FDPs (facial definition parameters). The decoder uses these FDPs to adapt a detailed generic face mesh model and then generates facial expression animation by combining a muscle model with a parameterized model; a scheme for partitioning the facial action areas is also proposed. Experiments confirm that, driven by an FAP (facial animation parameter) stream, the algorithm produces realistic 3D facial expression animation.

  4. Drawing Style Recognition of Facial Sketch Based on Multiple Kernel Learning

    Institute of Scientific and Technical Information of China (English)

    张铭津; 李洁; 王楠楠

    2015-01-01

    Drawing style recognition of facial sketches is widely used in painting authentication and criminal investigation. A drawing style recognition algorithm for facial sketches based on multiple kernel learning is presented. First, following the way art critics identify drawing style from how facial components are rendered, five parts are extracted from the sketch: the face, left eye, right eye, nose and mouth. Then, reflecting how artists judge style from a sketch's shading and the artist's pencil strokes, a gray histogram feature, a gray moment feature, the speeded-up robust feature (SURF) and a multi-scale local binary pattern feature are extracted from each part. Finally, the different parts and features are fused through multiple kernel learning to classify the drawing styles of facial sketches. Experimental results demonstrate that the proposed algorithm performs well and achieves high recognition rates.

  5. Facial Action Unit Recognition under Incomplete Data Based on Multi-label Learning with Missing Labels

    KAUST Repository

    Li, Yongqiang

    2016-07-07

    Facial action unit (AU) recognition has been applied in a wide range of fields, and has attracted great attention in the past two decades. Most existing works on AU recognition assumed that the complete label assignment for each training image is available, which is often not the case in practice. Labeling AUs is an expensive and time-consuming process. Moreover, due to AU ambiguity and subjective differences, some AUs are difficult to label reliably and confidently. Many AU recognition works try to train the classifier for each AU independently, which is of high computation cost and ignores the dependency among different AUs. In this work, we formulate AU recognition under incomplete data as a multi-label learning with missing labels (MLML) problem. Most existing MLML methods usually employ the same features for all classes. However, we find this setting is unreasonable in AU recognition, as the occurrence of different AUs produces changes of skin surface displacement or face appearance in different face regions. If using the shared features for all AUs, much noise will be involved due to the occurrence of other AUs. Consequently, the changes of the specific AUs cannot be clearly highlighted, leading to performance degradation. Instead, we propose to extract the most discriminative features for each AU individually, which are learned by the supervised learning method. The learned features are further embedded into the instance-level label smoothness term of our model, which also includes the label consistency and the class-level label smoothness. Both a global solution using st-cut and an approximated solution using conjugate gradient (CG) descent are provided. Experiments on both posed and spontaneous facial expression databases demonstrate the superiority of the proposed method in comparison with several state-of-the-art works.

  6. Rheology-based facial animation realistic face model

    Institute of Scientific and Technical Information of China (English)

    ZENG Dan; PEI Li

    2009-01-01

    This paper presents a rheology-based approach to animating a realistic face model. The dynamic and biorheological characteristics of the force members (muscles) and the stressed member (face) are considered. The stressed face can be modeled as a viscoelastic body with Hooke bodies and Newton bodies connected in a composite series-parallel manner. The stress-strain relationship is then derived, and the constitutive equations are established. Using these constitutive equations, the face model can be animated with the forces generated by the muscles. Experimental results show that this method can realistically simulate the mechanical properties and motion characteristics of the human face, and its performance is satisfactory.

  7. Facial Expression Recognition Based on RGB-D

    Institute of Scientific and Technical Information of China (English)

    吴会霞; 陶青川; 龚雪友

    2016-01-01

    To address the low recognition accuracy of two-dimensional facial expression recognition under complex or poor illumination, a facial expression recognition algorithm based on RGB-D fusion of multiple classifiers is proposed. The algorithm first extracts LPQ, Gabor, LBP and HOG features from the color channels (Y, Cr, Q) and the depth channel (D) of the image, then applies linear dimensionality reduction (PCA) and feature-space transformation (LDA) to the extracted high-dimensional features. A weak classifier is obtained for each expression using nearest-neighbor classification, the AdaBoost algorithm assigns weights to the weak classifiers to build a strong classifier, and finally the multiple classifiers are fused with a Bayes rule and the average recognition rate is reported. On the CurtinFaces and KinectFaceDB facial expression databases, which contain complex illumination variations, the algorithm achieves an average recognition rate of up to 98.80%. The results show that, compared with expression recognition on color images alone, fusing depth information clearly improves the recognition rate of facial expression recognition and has practical application value.
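The final fusion step (combining the per-feature classifiers with a Bayes rule) can be illustrated with a product-rule sketch over class posteriors. This is a generic naive-Bayes fusion under an independence assumption, not necessarily the exact combination rule used in the paper.

```python
import numpy as np

def bayes_fuse(posteriors):
    """Product-rule (naive-Bayes) fusion of per-classifier class posteriors.
    posteriors: (n_classifiers, n_classes); returns the fused class index.
    Working in log space avoids underflow when many classifiers are fused."""
    logp = np.sum(np.log(np.asarray(posteriors) + 1e-12), axis=0)
    return int(np.argmax(logp))
```

With three classifiers (e.g. one per feature channel), a class favored by two of the three wins the fused decision even when a single classifier disagrees.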

  8. Adaptive norm-based coding of facial identity.

    Science.gov (United States)

    Rhodes, Gillian; Jeffery, Linda

    2006-09-01

    Identification of a face is facilitated by adapting to its computationally opposite identity, suggesting that the average face functions as a norm for coding identity [Leopold, D. A., O'Toole, A. J., Vetter, T., & Blanz, V. (2001). Prototype-referenced shape encoding revealed by high-level aftereffects. Nature Neuroscience, 4, 89-94; Leopold, D. A., Rhodes, G., Müller, K. -M., & Jeffery, L. (2005). The dynamics of visual adaptation to faces. Proceedings of the Royal Society of London, Series B, 272, 897-904]. Crucially, this interpretation requires that the aftereffect is selective for the opposite identity, but this has not been convincingly demonstrated. We demonstrate such selectivity, observing a larger aftereffect for opposite than non-opposite adapt-test pairs that are matched on perceptual contrast (dissimilarity). Component identities were also harder to detect in morphs of opposite than non-opposite face pairs. We propose an adaptive norm-based coding model of face identity. PMID:16647736
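The notion of a "computationally opposite" identity in a norm-based face space can be made concrete with a toy vector model: each face is a feature vector, the norm is the average face, and the anti-face is the reflection of the face through the norm. This is an illustrative sketch of the coding scheme, not the authors' experimental stimuli.

```python
import numpy as np

def anti_face(face, norm):
    """Reflect a face's feature vector through the norm (average face):
    the 'computationally opposite' identity used in adaptation studies."""
    return 2.0 * norm - face

def identity_strength(face, norm, probe):
    """Signed projection of a probe onto the norm-to-face identity axis;
    positive values lie on the face's side of the norm."""
    axis = face - norm
    return float(np.dot(probe - norm, axis) / np.linalg.norm(axis))
```

By construction, the anti-face scores negatively on the original identity's axis, which is why adapting to it shifts perception toward the original face.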

  9. Silicone based artificial skin for humanoid facial expressions

    Science.gov (United States)

    Tadesse, Yonas; Moore, David; Thayer, Nick; Priya, Shashank

    2009-03-01

    Artificial skin materials were synthesized using platinum-cured silicone elastomeric material (Reynolds Advanced Materials Inc.) as the base consisting of mainly polyorganosiloxanes, amorphous silica and platinum-siloxane complex compounds. Systematic incorporation of porosity in this material was found to lower the force required to deform the skin in axial direction. In this study, we utilized foaming agents comprising of sodium bicarbonate and dilute form of acetic acid for modifying the polymeric chain and introducing the porosity. Experimental determination of functional relationship between the concentration of foaming agent, slacker and non-reactive silicone fluid and that of force - deformation behavior was conducted. Tensile testing of material showed a local parabolic relationship between the concentrations of foaming agents used (per milliliter of siloxane compound) and strain. This data can be used to optimize the amount of additives in platinum cured silicone to obtain desired force - displacement characteristics. Addition of "silicone thinner" and "slacker" showed a monotonically increasing strain behavior. A mathematical model was developed to arrive at the performance metrics of artificial skin.

  10. Partial fingerprint matching based on SIFT Features

    Directory of Open Access Journals (Sweden)

    Ms. S.Malathi,

    2010-07-01

    Full Text Available Fingerprints are being extensively used for person identification in a number of commercial, civil, and forensic applications. While current fingerprint matching technology is quite mature for matching full prints, matching partial fingerprints still needs considerable improvement. Most current fingerprint identification systems utilize features based on minutiae points and ridge patterns. The major challenges in partial fingerprint matching are the absence of sufficient minutiae features and of other structures such as core and delta; consequently, conventional systems handle incomplete prints poorly and often discard any partial fingerprints obtained. Recent research has begun to delve into the problems of latent or partial fingerprints. In this paper we present a novel approach for partial fingerprint matching based on SIFT (Scale Invariant Feature Transform) features, with matching achieved using a modified point matching process. Using the Neurotechnology database, we demonstrate that the proposed method exhibits improved performance when matching a full print against a partial print.
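
The abstract does not spell out the modified point-matching process; as a rough, library-free sketch, the core of SIFT-style descriptor matching is nearest-neighbour search with Lowe's ratio test (the `ratio` value and array shapes below are illustrative assumptions, not details from the paper):

```python
import numpy as np

def ratio_test_match(desc_a, desc_b, ratio=0.75):
    """Match descriptors from set A to set B using Lowe's ratio test.

    desc_a: (n, d) array, desc_b: (m, d) array with m >= 2.
    Returns a list of (index_in_a, index_in_b) pairs.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        # Accept only when the best match is clearly better than the runner-up.
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```

Ambiguous descriptors (whose two nearest neighbours are nearly equidistant) are discarded, which is what makes this scheme usable on noisy partial prints.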

  11. Facial Expression Recognition Based on the Difference Image Method

    Institute of Scientific and Technical Information of China (English)

    2013-01-01

    This paper discusses static-image expression feature extraction methods and presents a facial expression recognition method based on the difference image. Feature points are located from the difference image, and MATLAB toolboxes are used to fit the feature points in order to find the changes in the feature regions. Experiments verify the feasibility of the method.
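
As a minimal sketch of the difference-image idea (the threshold value is an assumption, and the paper's MATLAB fitting step is not reproduced):

```python
import numpy as np

def difference_mask(neutral, expressive, thresh=30):
    """Binary mask of pixels that changed between a neutral and an
    expressive grayscale frame; blobs in the mask suggest candidate
    feature regions (mouth corners, eyebrows, ...)."""
    diff = np.abs(expressive.astype(np.int16) - neutral.astype(np.int16))
    return (diff > thresh).astype(np.uint8)
```

Feature points would then be fitted to the connected regions of the mask.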

  12. Facial emotion recognition system for autistic children: a feasible study based on FPGA implementation.

    Science.gov (United States)

    Smitha, K G; Vinod, A P

    2015-11-01

    Children with autism spectrum disorder have difficulty understanding the emotional and mental states conveyed by the facial expressions of the people they interact with. The inability to understand other people's emotions hinders their interpersonal communication. Though many facial emotion recognition algorithms have been proposed in the literature, they are mainly intended for processing by a personal computer, which limits their usability in on-the-move applications where portability is desired. Portability ensures ease of use and real-time emotion recognition, providing immediate feedback while communicating with caretakers. Principal component analysis (PCA) has been identified as the least complex feature extraction algorithm to implement in hardware. In this paper, we present a detailed study of serial and parallel implementations of PCA in order to identify the most feasible method for realizing a portable emotion detector for autistic children. The proposed emotion recognizer architectures are implemented on a Virtex 7 XC7VX330T FFG1761-3 FPGA. We achieved 82.3% detection accuracy for a word length of 8 bits. PMID:26239162
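
The PCA feature extraction that the paper maps onto the FPGA can be sketched in software as follows (a generic reference implementation, not the paper's serial/parallel hardware architecture):

```python
import numpy as np

def pca_fit(X, k):
    """Return the mean and top-k principal directions of the rows of X."""
    mean = X.mean(axis=0)
    # Rows of Vt from the SVD of the centered data are the principal directions.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def pca_project(X, mean, components):
    """Project rows of X onto the learned components (the feature vector)."""
    return (X - mean) @ components.T
```

Each face image, flattened to a row vector, is reduced to a k-dimensional feature before classification.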

  13. Comparative analysis of the anterior and posterior length and deflection angle of the cranial base, in individuals with facial Pattern I, II and III

    Directory of Open Access Journals (Sweden)

    Guilherme Thiesen

    2013-02-01

    Full Text Available OBJECTIVE: This study evaluated the variations in the anterior cranial base (S-N), posterior cranial base (S-Ba) and deflection of the cranial base (SNBa) among three different facial patterns (Pattern I, II and III). METHOD: A sample of 60 lateral cephalometric radiographs of Brazilian Caucasian patients, of both genders, between 8 and 17 years of age was selected. The sample was divided into 3 groups (Pattern I, II and III) of 20 individuals each. The inclusion criteria for each group were the ANB angle, Wits appraisal and the facial profile angle (G'.Sn.Pg'). To compare the mean values (SNBa, S-N, S-Ba) obtained for each group, the ANOVA test and Scheffé's post-hoc test were applied. RESULTS AND CONCLUSIONS: There was no statistically significant difference in the deflection angle of the cranial base among the different facial patterns (Patterns I, II and III). There was no significant difference in the measures of the anterior and posterior cranial base between facial Patterns I and II. The mean values for S-Ba were lower in facial Pattern III, with a statistically significant difference. The mean values of S-N in facial Pattern III were also reduced, but without a statistically significant difference. This trend toward lower values in the cranial base measurements would explain the maxillary deficiency and/or mandibular prognathism features that characterize facial Pattern III.

  14. Facial Expression Recognition Using SVM Classifier

    Directory of Open Access Journals (Sweden)

    Vasanth P.C.

    2015-03-01

    Full Text Available Facial feature tracking and facial action recognition from image sequences have attracted great attention in the computer vision field. Computational facial expression analysis is a challenging research topic in computer vision, required by many applications such as human-computer interaction, computer graphics animation and automatic facial expression recognition. In recent years, plenty of computer vision techniques have been developed to track or recognize facial activities at three levels. First, at the bottom level, facial feature tracking, which usually detects and tracks prominent landmarks surrounding facial components (i.e., mouth, eyebrow, etc.), captures detailed face shape information. Second, facial action recognition, i.e., recognition of the facial action units (AUs) defined in FACS, tries to recognize meaningful facial activities (i.e., lid tightener, eyebrow raiser, etc.). At the top level, facial expression analysis attempts to recognize facial expressions that represent human emotional states. In the proposed algorithm, the eyes and mouth are detected first; their features are extracted using Gabor filters and Local Binary Patterns (LBP), and PCA is used to reduce the dimensionality of the features. Finally an SVM is used to classify expressions and facial action units.
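
A minimal sketch of the Gabor filtering step used for eye and mouth features (the kernel size and parameter values below are illustrative, not the paper's settings):

```python
import numpy as np

def gabor_kernel(size=9, sigma=2.0, theta=0.0, lambd=4.0, gamma=0.5):
    """Real part of a Gabor kernel: a Gaussian envelope times a cosine
    wave of wavelength `lambd`, oriented at angle `theta`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / lambd)
```

A bank of such kernels at several orientations and scales is convolved with the eye/mouth patches, and the responses form the feature vector passed to PCA.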

  15. Facial Reconstruction and Rehabilitation.

    Science.gov (United States)

    Guntinas-Lichius, Orlando; Genther, Dane J; Byrne, Patrick J

    2016-01-01

    Extracranial infiltration of the facial nerve by salivary gland tumors is the most frequent cause of facial palsy secondary to malignancy. Nevertheless, facial palsy related to salivary gland cancer is uncommon; therefore, reconstructive facial reanimation surgery is not a routine undertaking for most head and neck surgeons. The primary aims of facial reanimation are to restore tone, symmetry, and movement to the paralyzed face. Such restoration should improve the patient's objective motor function and subjective quality of life. The surgical procedures for facial reanimation rely heavily on long-established techniques, but many advances and improvements have been made in recent years. In the past, published experiences on strategies for optimizing functional outcomes in facial paralysis patients were primarily based on small case series and described a wide variety of surgical techniques. In recent years, however, larger series have been published from high-volume centers with significant and specialized experience in surgical and nonsurgical reanimation of the paralyzed face, and these have informed modern treatment. This chapter reviews the most important diagnostic methods used for the evaluation of facial paralysis to optimize the planning of each individual's treatment and discusses surgical and nonsurgical techniques for facial rehabilitation based on the contemporary literature. PMID:27093062

  17. Arabic writer identification based on diacritic's features

    Science.gov (United States)

    Maliki, Makki; Al-Jawad, Naseer; Jassim, Sabah A.

    2012-06-01

    Natural languages like Arabic, Kurdish, Farsi (Persian) and Urdu have many features that distinguish them from languages with Latin script. One of these important features is diacritics. These diacritics are classified as compulsory, like the dots used to identify/differentiate letters, and optional, like the short vowels used to emphasize consonants. Most indigenous and well-trained writers omit all or some of this second class of diacritics, and expert readers can infer their presence from the context of the written text. In this paper, we investigate the use of diacritic shapes and other characteristics as parameters of feature vectors for Arabic writer identification/verification. Segmentation techniques are used to extract the diacritics-based feature vectors from examples of Arabic handwritten text. The results of an evaluation test, carried out on an in-house database of 50 writers, are presented, and the viability of using diacritics for writer recognition is demonstrated.

  18. FEATURE EXTRACTION FOR EMG BASED PROSTHESES CONTROL

    Directory of Open Access Journals (Sweden)

    R. Aishwarya

    2013-01-01

    Full Text Available The control of a prosthetic limb would be more effective if it were based on Surface Electromyogram (SEMG) signals from remnant muscles. The analysis of SEMG signals depends on a number of factors, such as amplitude as well as time- and frequency-domain properties. Time series analysis using an Auto Regressive (AR) model, and the mean frequency, which is tolerant to white Gaussian noise, are used as feature extraction techniques. The EMG histogram is used as another feature vector that was seen to give more distinct classification. The work was done with an SEMG dataset obtained from the NINAPRO database, a resource for the biorobotics community. Eight classes of hand movements (hand open, hand close, wrist extension, wrist flexion, pointing index, ulnar deviation, thumbs up, thumb opposite to little finger) are taken into consideration and feature vectors are extracted. The feature vectors can be given to an artificial neural network for further classification in controlling the prosthetic arm, which is not dealt with in this paper.
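
Two of the features named above, the mean frequency and the EMG histogram, can be sketched as follows (the bin count and normalisation are assumptions; the AR-model feature is not shown):

```python
import numpy as np

def mean_frequency(signal, fs):
    """Power-weighted mean of the spectrum frequencies."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    return np.sum(freqs * spec) / np.sum(spec)

def emg_histogram(signal, bins=9, limit=None):
    """EMG histogram feature: fraction of samples falling in `bins`
    equal-width amplitude bins over [-limit, limit]."""
    if limit is None:
        limit = np.max(np.abs(signal))
    hist, _ = np.histogram(signal, bins=bins, range=(-limit, limit))
    return hist / len(signal)  # normalise so the feature sums to 1
```

Both are computed per analysis window and concatenated into the feature vector fed to the classifier.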

  19. BROAD PHONEME CLASSIFICATION USING SIGNAL BASED FEATURES

    Directory of Open Access Journals (Sweden)

    Deekshitha G

    2014-12-01

    Full Text Available Speech is the most efficient and popular means of human communication. Speech is produced as a sequence of phonemes, and phoneme recognition is the first step performed by an automatic speech recognition system. State-of-the-art recognizers use mel-frequency cepstral coefficient (MFCC) features derived through short-time analysis, for which the recognition accuracy is limited. Instead, broad phoneme classification is achieved here using features derived directly from the speech at the signal level itself. Broad phoneme classes include vowels, nasals, fricatives, stops, approximants and silence. The features identified as useful for broad phoneme classification are the voiced/unvoiced decision, zero crossing rate (ZCR), short-time energy, most dominant frequency, energy in the most dominant frequency, spectral flatness measure and first three formants. Features derived from short-time frames of training speech are used to train a multilayer feedforward neural network classifier with manually marked class labels as output, and classification accuracy is then tested. This broad phoneme classifier is later used for broad syllable structure prediction, which is useful for applications such as automatic speech recognition and automatic language identification.
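
Two of the signal-level features listed, zero crossing rate and short-time energy, reduce to a few lines per frame (framing and windowing details are omitted):

```python
import numpy as np

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ; high for
    fricatives and other noise-like sounds, low for voiced vowels."""
    return np.mean(np.signbit(frame[:-1]) != np.signbit(frame[1:]))

def short_time_energy(frame):
    """Mean squared amplitude of the frame; low for silence."""
    return np.sum(frame.astype(float) ** 2) / len(frame)
```

The per-frame values of these and the other listed features form the input vector of the feedforward classifier.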

  20. Facial Expression Recognition Using SVM Classifier

    OpenAIRE

    Vasanth P.C.; Nataraj. K. R

    2015-01-01

    Facial feature tracking and facial actions recognition from image sequence attracted great attention in computer vision field. Computational facial expression analysis is a challenging research topic in computer vision. It is required by many applications such as human-computer interaction, computer graphic animation and automatic facial expression recognition. In recent years, plenty of computer vision techniques have been developed to track or recognize the facial activities in three levels...

  1. Face Recognition from Still Images to Video Sequences: A Local-Feature-Based Framework

    Directory of Open Access Journals (Sweden)

    Chen Shaokang

    2011-01-01

    Full Text Available Although automatic face recognition has shown success for high-quality images under controlled conditions, it is hard to attain similar levels of performance for video-based recognition. We describe in this paper recent advances in a project being undertaken to trial and develop advanced surveillance systems for public safety. We propose a local-facial-feature-based framework for both still-image and video-based face recognition. The evaluation is performed on a still-image dataset (LFW) and a video-sequence dataset (MOBIO) to compare four methods operating on features (feature averaging (Avg-Feature), the Mutual Subspace Method (MSM), Manifold-to-Manifold Distance (MMD), and the Affine Hull Method (AHM)) and four methods operating on distances, over three different features. The experimental results show that the Multi-region Histogram (MRH) feature is more discriminative for face recognition than Local Binary Patterns (LBP) and raw pixel intensity. Under the limitation of a small number of images available per person, feature averaging is more reliable than MSM, MMD, and AHM, and is much faster. Thus, our proposed framework, averaging MRH features, is more suitable for CCTV surveillance systems with constraints on the number of images and the speed of processing.
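
The Avg-Feature operation favoured by the paper can be sketched as follows (the L2 normalisation and cosine scoring are assumptions about details the abstract leaves open):

```python
import numpy as np

def average_feature(frame_features):
    """Avg-Feature: mean of the per-frame feature vectors of one person,
    L2-normalised so sequences of different lengths are comparable."""
    avg = np.mean(frame_features, axis=0)
    return avg / np.linalg.norm(avg)

def match_score(f1, f2):
    """Cosine similarity of two normalised sequence features."""
    return float(f1 @ f2)
```

A probe video is matched to the gallery identity whose averaged feature gives the highest score.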

  2. An Age Estimation Method Based on Facial Images

    Institute of Scientific and Technical Information of China (English)

    罗佳佳; 蔡超

    2012-01-01

    Research on age estimation has a significant impact on human-computer interaction. In this paper, an age estimation method based on facial images is proposed. The method establishes a face anthropometry template based on craniofacial growth pattern theory to obtain facial geometric proportion features, extracts texture features of local facial areas using a fractional differential approach, and combines these two kinds of features to form personal age feature vectors. Machine learning methods such as clustering are used to obtain an age-feature knowledge matrix; when estimating age, this matrix votes on the estimated age of the input facial image. Experimental results show that the estimation error is small and the classification accuracy is close to human judgment.

  3. Comparative Study of Triangulation based and Feature based Image Morphing

    Directory of Open Access Journals (Sweden)

    Ms. Bhumika G. Bhatt

    2012-01-01

    Full Text Available Image morphing is one of the most powerful digital image processing techniques, used to enhance many multimedia projects, presentations, education and computer-based training. It is also used in the medical imaging field to recover features not visible in images by establishing correspondence of features among successive pairs of scanned images. This paper discusses what morphing is and the implementation of triangulation-based morphing and feature-based image morphing. It analyzes both morphing techniques in terms of different attributes, such as computational complexity, visual quality of the morphs obtained, and the complexity involved in the selection of features.

  4. A Robust Gender and Age Estimation under Varying Facial Pose

    Science.gov (United States)

    Takimoto, Hironori; Mitsukura, Yasue; Fukumi, Minoru; Akamatsu, Norio

    This paper presents a method for gender and age estimation that is robust to facial pose changes. We propose a feature point detection method, the Adapted Retinal Sampling Method (ARSM), and a feature extraction method. The basic concept of the ARSM is to add knowledge about the facial structure to the Retinal Sampling Method. In this method, feature points are detected based on 7 points corresponding to facial organs in the face image. These 7 points are used as the basis of feature point detection because facial organs are conspicuous in the facial region and comparatively easy to extract. As features robust to facial pose changes, skin texture, hue and Gabor jets are used for gender and age estimation. For classification of gender and estimation of seriate age, we use a multi-layered neural network. Moreover, we examine the left-right symmetry property of the face with respect to gender and age estimation by the proposed method.

  5. Competence Judgments Based on Facial Appearance Are Better Predictors of American Elections Than of Korean Elections.

    Science.gov (United States)

    Na, Jinkyung; Kim, Seunghee; Oh, Hyewon; Choi, Incheol; O'Toole, Alice

    2015-07-01

    Competence judgments based on facial appearance predict election results in Western countries, which indicates that these inferences contribute to decisions with social and political consequence. Because trait inferences are less pronounced in Asian cultures, such competence judgments should predict Asian election results less accurately than they do Western elections. In the study reported here, we compared Koreans' and Americans' competence judgments from face-to-trait inferences for candidates in U.S. Senate and state gubernatorial elections and Korean Assembly elections. Perceived competence was a far better predictor of the outcomes of real elections held in the United States than of elections held in Korea. When deciding which of two candidates to vote for in hypothetical elections, however, Koreans and Americans both voted on the basis of perceived competence inferred from facial appearance. Combining actual and hypothetical election results, we conclude that for Koreans, competence judgments from face-to-trait inferences are critical in voting only when other information is unavailable. However, in the United States, such competence judgments are substantially important, even in the presence of other information. PMID:25956912

  6. Facial Expression Recognition Based on Gabor Wavelet and Sparse Representation

    Institute of Scientific and Technical Information of China (English)

    张娟; 詹永照; 毛启容; 邹翔

    2012-01-01

    By analyzing the biological background and mathematical properties of the Gabor wavelet and sparse representation, a new approach for facial expression recognition based on the Gabor wavelet and sparse representation is presented in this paper. The Gabor wavelet transform is adopted to extract features from the static facial expression image. An over-complete dictionary is constructed from the Gabor features of all training samples, and the sparse feature vector of a facial expression image is obtained using a sparse representation model. A fusion recognition method is used to implement multiple-classifier fusion. Experimental results show that integrating the Gabor wavelet transform and sparse representation is effective for extracting expression image information, and the approach effectively raises the accuracy of expression recognition.
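
The sparse representation model above is typically solved by l1 minimisation; as an illustrative stand-in, greedy orthogonal matching pursuit computes a k-sparse code over a dictionary whose columns are training features (a generic sketch, not the paper's optimisation method):

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedy k-sparse approximation of y
    over the columns of dictionary D (columns assumed unit-norm)."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit y on the selected columns and update the residual.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x
```

In sparse-representation classification, the test feature is coded this way and assigned to the class whose dictionary columns yield the smallest residual.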

  7. Automatic 2.5-D Facial Landmarking and Emotion Annotation for Social Interaction Assistance.

    Science.gov (United States)

    Zhao, Xi; Zou, Jianhua; Li, Huibin; Dellandrea, Emmanuel; Kakadiaris, Ioannis A; Chen, Liming

    2016-09-01

    People with low vision, Alzheimer's disease, and autism spectrum disorder experience difficulties in perceiving or interpreting facial expressions of emotion in their social lives. Though automatic facial expression recognition (FER) methods on 2-D videos have been extensively investigated, their performance is constrained by challenges in head pose and lighting conditions. The shape information in 3-D facial data can reduce or even overcome these challenges, but the high expense of 3-D cameras prevents their widespread use. Fortunately, 2.5-D facial data from emerging portable RGB-D cameras provide a good balance for this dilemma. In this paper, we propose an automatic emotion annotation solution for 2.5-D facial data collected from RGB-D cameras. The solution consists of a facial landmarking method and a FER method. Specifically, we propose building a deformable partial face model and fitting the model to a 2.5-D face to localize facial landmarks automatically. For FER, we propose a novel action unit (AU) space-based method: facial features are extracted using the landmarks and represented as coordinates in the AU space, which are then classified into facial expressions. Evaluated on three publicly accessible facial databases, namely the EURECOM, FRGC, and Bosphorus databases, the proposed facial landmarking and expression recognition methods achieve satisfactory results. Possible real-world applications of our algorithms are also discussed. PMID:26316289

  8. Holistic facial expression classification

    Science.gov (United States)

    Ghent, John; McDonald, J.

    2005-06-01

    This paper details a procedure for classifying facial expressions, a growing and relatively new problem within computer vision. One of the fundamental problems of previous approaches to classifying facial expressions is the lack of a consistent method of measuring expression. This paper addresses the problem through computation of the Facial Expression Shape Model (FESM). This statistical model of facial expression is based on an anatomical analysis of facial expression called the Facial Action Coding System (FACS). We use the term Action Unit (AU) to describe a movement of one or more muscles of the face, and all expressions can be described using the AUs defined by FACS. The shape model is calculated by marking the face with 122 landmark points. We use Principal Component Analysis (PCA) to analyse how the landmark points move with respect to each other and to lower the dimensionality of the problem. Using the FESM in conjunction with Support Vector Machines (SVM), we classify facial expressions. SVMs are a powerful machine learning technique based on optimization theory. This project is largely concerned with the statistical models, machine learning techniques and psychological tools used in the classification of facial expression. This holistic approach to expression classification provides a level of interaction with a computer that is a significant step forward in human-computer interaction.

  9. SVM Based Recognition of Facial Expressions Used In Indian Sign Language

    OpenAIRE

    Daleesha M Viswanathan; Sumam Mary Idicula

    2015-01-01

    In sign language systems, facial expressions are an intrinsic component that usually accompanies hand gestures. Facial expressions can change the meaning of a hand gesture into a statement or a question, or improve the meaning and understanding of hand gestures. The scientific literature available on facial expression recognition in Indian Sign Language (ISL) is scanty. Contrary to American Sign Language (ASL), head movements are less conspicuous in ISL and the answers to questions...

  10. Augmented reality-based self-facial modeling to promote the emotional expression and social skills of adolescents with autism spectrum disorders.

    Science.gov (United States)

    Chen, Chien-Hsu; Lee, I-Jui; Lin, Ling-Yi

    2014-11-01

    Autism spectrum disorders (ASD) are characterized by a reduced ability to understand the emotions of other people; this ability involves recognizing facial expressions. This study assessed the possibility of enabling three adolescents with ASD to become aware of facial expressions observed in situations in a school setting simulated using augmented reality (AR) technology. The AR system provided three-dimensional (3-D) animations of six basic facial expressions overlaid on participant faces to facilitate practicing emotional judgments and social skills. Based on the multiple baseline design across subjects, the data indicated that AR intervention can improve the appropriate recognition and response to facial emotional expressions seen in the situational task.

  11. A hybrid features based image matching algorithm

    Science.gov (United States)

    Tu, Zhenbiao; Lin, Tao; Sun, Xiao; Dou, Hao; Ming, Delie

    2015-12-01

    In this paper, we present a novel image matching method to find the correspondences between two sets of image interest points. The proposed method is based on a revised third-order tensor graph matching method and introduces an energy function that takes four kinds of energy term into account. The third-order tensor method can hardly cope when the number of interest points is huge; to deal with this problem, we use a potential matching set and a vote mechanism to decompose the matching task into several sub-tasks. Moreover, the third-order tensor method sometimes finds only a local optimum. We therefore use a clustering method to divide the feature points into groups and only sample feature triangles between different groups, which makes it much easier for the algorithm to find the global optimum. Experiments on different image databases show that our method obtains correct matching results with relatively high efficiency.

  12. Dominant Local Binary Pattern Based Face Feature Selection and Detection

    Directory of Open Access Journals (Sweden)

    Kavitha.T

    2010-04-01

    Full Text Available Face detection plays a major role in biometrics, and feature selection is a problem of formidable complexity. This paper proposes a novel approach to extracting face features for face detection. LBP features can be extracted quickly, in a single scan through the raw image, and lie in a lower-dimensional space whilst still retaining facial information efficiently; they are also robust to low-resolution images. The dominant local binary pattern (DLBP) is used to extract features accurately. A number of trainable methods are emerging in empirical practice due to their effectiveness. The proposed method is a trainable system for selecting face features from over-complete dictionaries of image measurements. After the feature selection procedure is completed, an SVM classifier is used for face detection; the classifier is used to increase the selection accuracy. The main advantage of this proposal is that it is trained on a very small training set, which not only facilitates the data-gathering stage but, more importantly, limits the training time. The CBCL frontal faces dataset is used for training and validation.
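
A minimal implementation of the basic (non-dominant) 3x3 LBP operator underlying the DLBP features described above (the dominant-pattern selection step is not shown):

```python
import numpy as np

def lbp_image(img):
    """8-neighbour LBP code for each interior pixel of a grayscale image.

    Each neighbour contributes one bit: 1 if it is >= the centre pixel."""
    img = img.astype(np.int32)
    c = img[1:-1, 1:-1]
    codes = np.zeros_like(c)
    # Neighbours in a fixed clockwise order, each contributing one bit.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:img.shape[0] - 1 + dy,
                    1 + dx:img.shape[1] - 1 + dx]
        codes |= (neigh >= c).astype(np.int32) << bit
    return codes
```

Histograms of these codes over image regions form the feature vectors handed to the selection stage.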

  13. Multi scale feature based matched filter processing

    Institute of Scientific and Technical Information of China (English)

    LI Jun; HOU Chaohuan

    2004-01-01

    Using the extreme difference in self-similarity and kurtosis at large-level wavelet-transform approximation scales between PTFM (Pulse Trains of Frequency Modulated) signals and their reverberation, a feature-based matched filter method using the classify-before-detect paradigm is proposed to improve detection performance in reverberation and multipath environments. Processing of lake-trial data showed that the processing gain of the proposed method is about 10 dB higher than that of the matched filter. In multipath environments, the detection performance of the matched filter degrades severely, while that of the proposed method holds up much better, showing that the method is much more robust to multipath effects.
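
The baseline matched filter against which the paper compares can be sketched as a correlation with the known template (a generic sketch; the paper's wavelet-domain features are not reproduced):

```python
import numpy as np

def matched_filter(received, template):
    """Correlate the received signal with the known template; the
    position of the correlation peak estimates the template's delay."""
    corr = np.correlate(received, template, mode='valid')
    return int(np.argmax(corr)), corr
```

The classify-before-detect scheme would instead extract features first and only then apply detection per class.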

  14. Down syndrome detection from facial photographs using machine learning techniques

    Science.gov (United States)

    Zhao, Qian; Rosenbaum, Kenneth; Sze, Raymond; Zand, Dina; Summar, Marshall; Linguraru, Marius George

    2013-02-01

    Down syndrome is the most commonly occurring chromosomal condition; one in every 691 babies in the United States is born with it. Patients with Down syndrome have an increased risk of heart defects and respiratory and hearing problems, and early detection of the syndrome is fundamental for managing the disease. Clinically, facial appearance is an important indicator in diagnosing Down syndrome, and it paves the way for computer-aided diagnosis based on facial image analysis. In this study, we propose a novel method to detect Down syndrome using photography for computer-assisted, image-based facial dysmorphology. Geometric features based on facial anatomical landmarks, and local texture features based on the Contourlet transform and local binary patterns, are investigated to represent facial characteristics. A support vector machine classifier is then used to discriminate normal and abnormal cases; accuracy, precision and recall are used to evaluate the method. The comparison among the geometric, local texture and combined features was performed using leave-one-out validation. Our method achieved 97.92% accuracy with high precision and recall for the combined features; the detection results were higher than using only geometric or texture features. These promising results indicate that our method has the potential for automated assessment of Down syndrome from simple, noninvasive imaging data.
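
The feature-combination and leave-one-out protocol can be sketched as follows, with a nearest-centroid classifier standing in for the paper's support vector machine (both the z-scoring and the stand-in classifier are assumptions):

```python
import numpy as np

def zscore(X):
    """Standardise each feature column to zero mean, unit variance."""
    return (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-9)

def combine(geom, tex):
    """Concatenate z-scored geometric and texture feature blocks so
    neither block dominates by raw scale."""
    return np.hstack([zscore(geom), zscore(tex)])

def loo_accuracy(X, y):
    """Leave-one-out accuracy with a nearest-centroid classifier."""
    correct = 0
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        centroids = {c: X[mask][y[mask] == c].mean(axis=0)
                     for c in np.unique(y[mask])}
        pred = min(centroids, key=lambda c: np.linalg.norm(X[i] - centroids[c]))
        correct += (pred == y[i])
    return correct / len(X)
```

In the paper's setup, the nearest-centroid step would be replaced by an SVM fitted on each leave-one-out training fold.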

  15. Techniques in Facial Expression Recognition

    OpenAIRE

    Avinash Prakash Pandhare; Umesh Balkrishna Chavan

    2016-01-01

    Facial expression recognition is gaining widespread importance as applications related to human–computer interaction increase. This paper surveys various techniques and approaches that have been used in the field of facial expression recognition. Facial expression recognition takes place in various stages, and these stages have been implemented by various approaches: Viola–Jones for face detection, Gabor filters for feature extraction, SVM classifiers for classifi...

  16. Improved AAG based recognition of machining feature

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    The information lost through feature interaction is restored by using auxiliary faces (AF) and virtual links (VL). The delta volume of the interacting features, represented by a concave attachable connected graph (CACG), can be decomposed into several isolated features represented by complete concave adjacency graphs (CCAG). The rough type of a feature can be recognized by using the CCAG as a hint; the exact type of the feature can be obtained by deleting the auxiliary faces from the isolated feature. A united machining feature (UMF) is used to represent features that can be machined in the same machining process, which is important for rationalizing process plans and reducing machining time. An example is given to demonstrate the effectiveness of this method.

  17. Assessing facial wrinkles: automatic detection and quantification

    Science.gov (United States)

    Cula, Gabriela O.; Bargo, Paulo R.; Kollias, Nikiforos

    2009-02-01

    Nowadays, documenting facial appearance through imaging is prevalent in skin research; therefore, detection and quantitative assessment of the degree of facial wrinkling is a useful tool for establishing an objective baseline and for communicating the benefits to facial appearance of cosmetic procedures or product applications. In this work, an algorithm for automatic detection of facial wrinkles is developed, based on estimating the orientation and frequency of elongated features apparent on faces. By over-filtering the skin texture image with finely tuned oriented Gabor filters, an enhanced skin image is created. The wrinkles are detected by adaptively thresholding the enhanced image, and the degree of wrinkling is estimated from the magnitude of the filter responses. The algorithm is tested against a clinically scored set of images of periorbital lines of different severity, and we find that the proposed computational assessment correlates well with the corresponding clinical scores.
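
    A minimal version of the oriented-Gabor-plus-adaptive-threshold idea can be demonstrated on a synthetic image. The kernel parameters, the zero-mean normalization, and the mean-plus-two-sigma threshold are illustrative assumptions, not the paper's tuned values:

```python
import numpy as np

# Sketch of wrinkle enhancement with one oriented Gabor filter. Kernel
# parameters and the adaptive threshold rule are illustrative assumptions.
def gabor_kernel(size, theta, wavelength, sigma):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # coordinate along the carrier
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    g = envelope * np.cos(2 * np.pi * xr / wavelength)
    return g - g.mean()          # zero mean: flat skin gives no response

def filter_response(image, kernel):
    # Valid-mode 2D correlation via explicit loops (fine for tiny images).
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

img = np.ones((32, 32))          # uniform "skin"
img[16, :] = 0.0                 # one dark horizontal "wrinkle"

resp = np.abs(filter_response(img, gabor_kernel(9, np.pi / 2, 6.0, 2.0)))
mask = resp > resp.mean() + 2 * resp.std()     # adaptive threshold
rows = sorted(set(np.where(mask)[0].tolist()))
print("wrinkle rows detected:", rows)
```

    A real pipeline would filter at several orientations and wavelengths and keep the maximum response per pixel, so that wrinkles of any direction are enhanced.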

  18. Cosmetics as a Feature of the Extended Human Phenotype: Modulation of the Perception of Biologically Important Facial Signals

    OpenAIRE

    Nancy L Etcoff; Shannon Stock; Haley, Lauren E.; Vickery, Sarah A.; House, David M.

    2011-01-01

    Research on the perception of faces has focused on the size, shape, and configuration of inherited features or the biological phenotype, and largely ignored the effects of adornment, or the extended phenotype. Research on the evolution of signaling has shown that animals frequently alter visual features, including color cues, to attract, intimidate or protect themselves from conspecifics. Humans engage in conscious manipulation of visual signals using cultural tools in real time rather than g...

  19. Facial expression recognition based on Gabor wavelet transform

    Institute of Scientific and Technical Information of China (English)

    王甫龙; 薄华

    2012-01-01

    To enable computers to better recognize facial expressions, a method of facial expression recognition based on the Gabor wavelet transform is discussed. First, a static grey image containing facial expression information is pre-processed: the pure facial expression region is identified and normalized in size and grey scale. Features are then extracted with the two-dimensional Gabor wavelet transform, and the fast PCA method described in this paper is used for an initial reduction of the Gabor feature dimension. Next, in the low-dimensional space, the Fisher linear discriminant (FLD) is used to obtain the features useful for classification. Finally, an SVM classifier is applied to classify the facial expressions. Experimental results show that, compared with conventional methods, this method identifies expressions faster, meets real-time requirements, is robust, and achieves a higher recognition rate.
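
    The fast-PCA dimension-reduction step applied to the Gabor features can be sketched with an SVD. The feature matrix shape and the number of retained components below are arbitrary assumptions:

```python
import numpy as np

# PCA via SVD for reducing high-dimensional Gabor feature vectors.
# The 40x512 feature matrix and the 20 retained components are assumptions.
rng = np.random.default_rng(2)
features = rng.normal(size=(40, 512))     # 40 face images x 512 Gabor responses

def pca_project(X, n_components):
    X_centered = X - X.mean(axis=0)       # center each feature dimension
    # Rows of Vt are the principal directions, ordered by singular value.
    _, _, Vt = np.linalg.svd(X_centered, full_matrices=False)
    return X_centered @ Vt[:n_components].T   # project onto top components

low_dim = pca_project(features, 20)
print(low_dim.shape)  # (40, 20)
```

    The projected coordinates are mutually uncorrelated, which makes the subsequent FLD and SVM stages cheaper and better conditioned.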

  20. A Sparse-Feature-Based Face Detector

    Institute of Scientific and Technical Information of China (English)

    LU Xiaofeng; ZHENG Nanning; ZHENG Songfeng

    2003-01-01

    Local features and global features are two kinds of important statistical features used to distinguish faces from non-faces; both are special cases of sparse features. A final classifier can be considered a combination of a set of selected weak classifiers, each of which uses a sparse feature to classify samples. Motivated by this idea, we construct an overcomplete set of weak classifiers using the LPSVM (linear proximal support vector machine) algorithm, then select some of them using the AdaBoost algorithm and combine the selected weak classifiers into a strong classifier. During feature extraction and selection, our method can minimize the classification error directly, which most previous works cannot do. The main difference from other methods is that the local features are learned from the training set instead of being arbitrarily defined. We applied our method to face detection; the test results show that the method performs well.
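
    The select-and-combine step can be illustrated with a tiny AdaBoost over decision stumps. The stump learner and the one-dimensional toy data are assumptions; the paper builds its weak-classifier pool with LPSVM instead:

```python
import numpy as np

# Minimal AdaBoost with decision stumps as weak classifiers, illustrating the
# "select and combine weak classifiers" idea. The stump learner and the toy
# 1-D data are assumptions (the paper's weak pool comes from LPSVM).
def train_stump(X, y, w):
    """Pick the threshold/polarity minimizing the weighted error."""
    best = (0.0, 1, np.inf)
    for thr in np.unique(X):
        for polarity in (1, -1):
            pred = np.where(polarity * (X - thr) > 0, 1, -1)
            err = np.sum(w[pred != y])
            if err < best[2]:
                best = (thr, polarity, err)
    return best

def adaboost(X, y, rounds=10):
    n = len(y)
    w = np.full(n, 1.0 / n)                  # uniform sample weights
    stumps = []
    for _ in range(rounds):
        thr, pol, err = train_stump(X, y, w)
        err = max(err, 1e-10)                # avoid division by zero
        alpha = 0.5 * np.log((1 - err) / err)
        pred = np.where(pol * (X - thr) > 0, 1, -1)
        w *= np.exp(-alpha * y * pred)       # up-weight the mistakes
        w /= w.sum()
        stumps.append((thr, pol, alpha))
    return stumps

def predict(stumps, X):
    score = sum(a * np.where(p * (X - t) > 0, 1, -1) for t, p, a in stumps)
    return np.where(score > 0, 1, -1)

X = np.array([0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9])
y = np.array([-1, -1, -1, -1, 1, 1, 1, 1])
model = adaboost(X, y)
print("train accuracy:", np.mean(predict(model, X) == y))  # 1.0
```

    Each round re-weights the samples the current ensemble gets wrong, so later weak classifiers concentrate on the hard cases; the alphas weight each stump's vote in the final strong classifier.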

  1. SVM Based Recognition of Facial Expressions Used In Indian Sign Language

    Directory of Open Access Journals (Sweden)

    Daleesha M Viswanathan

    2015-02-01

    Full Text Available In sign language systems, facial expressions are an intrinsic component that usually accompanies hand gestures. Facial expressions can modify or change the meaning of a hand gesture into a statement or a question, or improve the meaning and understanding of hand gestures. The scientific literature on facial expression recognition in Indian Sign Language (ISL) is scanty. Contrary to American Sign Language (ASL), head movements are less conspicuous in ISL, and answers to questions such as yes or no are signed by hand. The purpose of this paper is to present our work on recognizing facial expression changes in isolated ISL sentences. Facial gesture patterns change skin texture by forming wrinkles and furrows, and the Gabor wavelet method is well known for capturing subtle textural changes on surfaces. Therefore, a unique approach was developed to model facial expression changes with Gabor wavelet parameters chosen from partitioned face areas. These parameters were incorporated with a Euclidean distance measure, and a multi-class SVM classifier was used to identify facial expressions in isolated facial expression sequences in ISL. An accuracy of 92.12% was achieved by our proposed system.

  2. Realistic Facial Expression of Virtual Human Based on Color, Sweat, and Tears Effects

    Directory of Open Access Journals (Sweden)

    Mohammed Hazim Alkawaz

    2014-01-01

    Full Text Available Generating extreme appearances, such as sweating when scared, tears when crying, and blushing in anger and happiness, is the key issue in achieving high-quality facial animation. The effects of sweat, tears, and color are integrated into a single animation model to create realistic facial expressions for a 3D avatar. The physical properties of muscles and emotions are incorporated, along with fluid properties via sweating and tear initiators. The action units (AUs) of the facial action coding system are merged with autonomous AUs to create expressions including sadness, anger with blushing, happiness with blushing, and fear. Fluid effects such as sweat and tears are simulated using the particle system and smoothed-particle hydrodynamics (SPH) methods, which are combined with a facial animation technique to produce complex facial expressions. The effects of oxygenation on facial skin color appearance are measured using a pulse oximeter system and a 3D skin analyzer. The results show that virtual human facial expression is enhanced by mimicking actual sweating and tear simulations for all extreme expressions. The proposed method contributes to the development of the facial animation and game industries as well as computer graphics.

  3. De Novo 17q24.2-q24.3 microdeletion presenting with generalized hypertrichosis terminalis, gingival fibromatous hyperplasia, and distinctive facial features.

    Science.gov (United States)

    Afifi, Hanan H; Fukai, Ryoko; Miyake, Noriko; Gamal El Din, Amina A; Eid, Maha M; Eid, Ola M; Thomas, Manal M; El-Badry, Tarek H; Tosson, Angie M S; Abdel-Salam, Ghada M H; Matsumoto, Naomichi

    2015-10-01

    Generalized hypertrichosis is a feature of several genetic disorders, and the nosology of these entities is still provisional. Recent studies have implicated a chromosome 17q24.2-q24.3 microdeletion and the reciprocal microduplication in a very rare form of congenital generalized hypertrichosis terminalis (CGHT) with or without gingival hyperplasia. Here, we report on a 5-year-old Egyptian girl born to consanguineous parents who presented with CGHT and gingival hyperplasia, for whom we performed detailed clinical, pathological, and molecular studies. The girl had coarse facies characterized by bilateral epicanthic folds, thick and abundant eyelashes, a broad nose, and full cheeks and lips, which constituted the distinctive facial features of this syndrome. Biopsy of the gingiva showed marked epithelial acanthosis and hyperkeratosis with hyperplastic thick collagen bundles and dense fibrosis in the underlying tissues. Array analysis indicated a 17q24.2-q24.3 chromosomal microdeletion. We validated this microdeletion by real-time quantitative PCR and confirmed perfect co-segregation of the disease phenotype within the family. In summary, this study indicates that the 17q24.2-q24.3 microdeletion caused CGHT with gingival hyperplasia and distinctive facies, which should be differentiated from the autosomal recessive type that lacks the distinctive facies.

  4. Research on Method of Facial Expression Recognition Based on Curvelet Transform and SVM

    Institute of Scientific and Technical Information of China (English)

    薄璐; 周菊香

    2013-01-01

    In this paper, the curvelet transform is used for facial expression recognition, and a method based on the curvelet transform and SVM is introduced. During expression feature extraction, principal component analysis is also used to reduce the dimension of the coefficient features obtained from curvelet decomposition. Experiments conducted on the JAFFE and Cohn-Kanade expression databases show that the method can effectively identify facial expressions and that its average recognition rate is significantly better than that of the compared methods.

  5. Using Kinect for real-time emotion recognition via facial expressions

    Institute of Scientific and Technical Information of China (English)

    Qi-rong MAO; Xin-yu PAN; Yong-zhao ZHAN; Xiang-jun SHEN

    2015-01-01

    Emotion recognition via facial expressions (ERFE) has attracted a great deal of interest with recent advances in artificial intelligence and pattern recognition. Most studies are based on 2D images, and their performance is usually computationally expensive. In this paper, we propose a real-time emotion recognition approach based on both 2D and 3D facial expression features captured by Kinect sensors. To capture the deformation of the 3D mesh during facial expression, we combine the features of animation units (AUs) and feature point positions (FPPs) tracked by Kinect. A fusion algorithm based on improved emotional profiles (IEPs) and maximum confidence is proposed to recognize emotions with these real-time facial expression features. Experiments on both an emotion dataset and a real-time video show the superior performance of our method.

  6. Facial Expression Recognition via Non-Negative Least-Squares Sparse Coding

    Directory of Open Access Journals (Sweden)

    Ying Chen

    2014-05-01

    Full Text Available Sparse coding is an active research subject in signal processing, computer vision, and pattern recognition. A novel method of facial expression recognition via non-negative least-squares (NNLS) sparse coding is presented in this paper. The NNLS sparse coding is used to form a facial expression classifier. To evaluate the performance of the presented method, local binary patterns (LBP) and raw pixels are extracted for facial feature representation. Facial expression recognition experiments are conducted on the Japanese Female Facial Expression (JAFFE) database. Compared with other widely used methods such as linear support vector machines (SVM), the sparse representation-based classifier (SRC), the nearest subspace classifier (NSC), K-nearest neighbors (KNN) and radial basis function neural networks (RBFNN), the experimental results indicate that the presented NNLS method performs better on facial expression recognition tasks.
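
    The classification rule can be sketched as: code the test sample over each class's training dictionary with non-negative coefficients, then assign the class with the smallest reconstruction residual. The projected-gradient NNLS solver and the toy Gaussian dictionaries below are assumptions (in practice one would use an off-the-shelf NNLS solver such as SciPy's):

```python
import numpy as np

# NNLS-based classification sketch: per-class non-negative coding, then pick
# the class with the smallest residual. Solver and toy data are assumptions.
def nnls_pg(A, b, steps=500):
    """Solve min ||A x - b||^2 s.t. x >= 0 by projected gradient descent."""
    lr = 1.0 / np.linalg.norm(A, 2) ** 2      # step size from spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x = np.maximum(0.0, x - lr * A.T @ (A @ x - b))  # step, then project
    return x

def classify(dictionaries, sample):
    residuals = []
    for D in dictionaries:                    # one dictionary per class
        x = nnls_pg(D, sample)
        residuals.append(np.linalg.norm(D @ x - sample))
    return int(np.argmin(residuals))

rng = np.random.default_rng(3)
basis0 = rng.normal(size=(30, 5))     # class-0 training samples as columns
basis1 = rng.normal(size=(30, 5))     # class-1 training samples as columns
test = basis0 @ np.array([0.5, 1.0, 0.0, 0.2, 0.3])  # lies in class 0's cone
print("predicted class:", classify([basis0, basis1], test))
```

    Because the test vector is an exact non-negative combination of class 0's columns, its class-0 residual drops to essentially zero while the class-1 residual stays large.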

  7. Local Feature based Gender Independent Bangla ASR

    Directory of Open Access Journals (Sweden)

    Bulbul Ahamed

    2012-11-01

    Full Text Available This paper presents an automatic speech recognition (ASR) system for Bangla (widely used as Bengali) that suppresses speaker gender effects based on local features extracted from the input speech. Speaker-specific characteristics play an important role in the performance of Bangla ASR. The gender factor has an adverse effect on the classifier when speech is recognized by the opposite gender, for example when a classifier trained on male speech is tested on female speech or vice versa. To obtain a robust ASR system in practice, it is necessary to build a system that is gender-independent. In this paper, we propose a gender-independent technique for ASR that focuses on the gender factor: the classifier is trained with both genders, male and female, and evaluated for both. For the experiments, we designed a medium-sized Bangla speech corpus with both male and female speakers. The proposed system showed a significant improvement in word correct rates, word accuracies and sentence correct rates in comparison with the method that suffers from gender effects. Moreover, it provides the highest recognition performance while using fewer mixture components in the hidden Markov models (HMMs).

  8. Facial Transplantation.

    Science.gov (United States)

    Russo, Jack E; Genden, Eric M

    2016-08-01

    Reconstruction of severe facial deformities poses a unique surgical challenge: restoring the aesthetic form and function of the face. Facial transplantation has emerged over the last decade as an option for reconstruction of these defects in carefully selected patients. As the world experience with facial transplantation grows, debate remains regarding whether such a highly technical, resource-intensive procedure is warranted, all to improve quality of life but not necessarily prolong it. This article reviews the current state of facial transplantation with focus on the current controversies and challenges, with particular attention to issues of technique, immunology, and ethics. PMID:27400850

  9. Fast Facial Detection by Depth Map Analysis

    OpenAIRE

    Ming-Yuan Shieh; Tsung-Min Hsieh

    2013-01-01

    In order to obtain correct facial recognition results, one needs to adopt appropriate facial detection techniques. Moreover, the effects of facial detection are usually affected by the environmental conditions such as background, illumination, and complexity of objectives. In this paper, the proposed facial detection scheme, which is based on depth map analysis, aims to improve the effectiveness of facial detection and recognition under different environmental illumination conditions. The pro...

  10. Predicting facial characteristics from complex polygenic variations

    DEFF Research Database (Denmark)

    Fagertun, Jens; Wolffhechel, Karin Marie Brandt; Pers, Tune;

    2015-01-01

    traits in a linear regression. We show in this proof-of-concept study for facial trait prediction from genome-wide SNP data that some facial characteristics can be modeled by genetic information: facial width, eyebrow width, distance between eyes, and features involving mouth shape are predicted with...

  11. Multi-Feature Segmentation and Cluster based Approach for Product Feature Categorization

    Directory of Open Access Journals (Sweden)

    Bharat Singh

    2016-03-01

    Full Text Available In recent times, the web has become a valuable source of online consumer reviews; however, the number of reviews is growing at high speed, and it is infeasible for a user to read all reviews to make a satisfying decision, because people can describe the same feature with contrary words or phrases. To produce a useful summary, domain-synonym words and phrases need to be grouped into the same feature group. We focus on the feature-based opinion mining problem, and this paper mainly studies feature-based product categorization from the user-generated reviews available on different websites. First, a multi-feature segmentation method is proposed that segments multi-feature review sentences into single-feature units. Second, a part-of-speech dictionary and context information are used for irrelevant-feature identification, and sentiment words are used to identify the polarity of a feature. Finally, an unsupervised clustering-based product feature categorization method is proposed. Clustering is an unsupervised machine learning approach that groups features with a high degree of similarity into the same cluster. The proposed approach provides satisfactory results, achieving 100% average precision on the clustering-based product feature categorization task, and it can be applied to different products.
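
    The clustering step that groups synonymous feature terms can be sketched with a plain k-means. The 2-D "term vectors" and the naive deterministic initialization are toy assumptions; a real system would cluster context-based similarity vectors as the paper describes:

```python
import numpy as np

# Minimal k-means sketch for grouping synonymous product-feature terms by
# vector similarity. The 2-D "term vectors" and k=2 are toy assumptions.
def kmeans(X, k, iters=20):
    # Naive deterministic init: spread the initial centers across the data.
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        # Assign each point to its nearest center, then recompute the centers.
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == c].mean(axis=0) for c in range(k)])
    return labels

# Two tight groups of "terms": e.g. {screen, display} vs {battery, charge}.
X = np.array([[0.0, 0.1], [0.1, 0.0], [5.0, 5.1], [5.1, 5.0]])
labels = kmeans(X, 2)
print(labels)  # → [0 0 1 1]
```

    Terms landing in the same cluster would then be reported as one product feature in the review summary.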

  12. Fingerprint Feature Extraction Based on Macroscopic Curvature

    Institute of Scientific and Technical Information of China (English)

    Zhang Xiong; He Gui-ming; Zhang Yun

    2003-01-01

    In the Automatic Fingerprint Identification System (AFIS), extracting the features of a fingerprint is very important. The local curvature of fingerprint ridges is irregular, so it is difficult to effectively extract curve features that describe a fingerprint. This article proposes a novel algorithm that uses information from a few nearby fingerprint ridges to extract a new characteristic describing the curvature of the fingerprint. Experimental results show the algorithm is feasible, and the characteristics it extracts clearly show the inner macroscopic curve properties of the fingerprint. The results also show that this kind of characteristic is robust to noise and pollution.

  14. Innovations in individual feature history management - The significance of feature-based temporal model

    Science.gov (United States)

    Choi, J.; Seong, J.C.; Kim, B.; Usery, E.L.

    2008-01-01

    A feature relies on three dimensions (space, theme, and time) for its representation. Even though spatiotemporal models have been proposed, they have principally focused on the spatial changes of a feature. In this paper, a feature-based temporal model is proposed to represent the changes of both space and theme independently. The proposed model modifies the ISO's temporal schema and adds a new explicit temporal relationship structure that stores the temporal topological relationship together with the ISO's temporal primitives of a feature, in order to keep track of feature history. The explicit temporal relationship can enhance query performance on feature history by removing topological comparison during the query process. Further, a prototype system has been developed to test the proposed feature-based temporal model by querying land parcel history in Athens, Georgia. The result of temporal queries on individual feature history shows the efficiency of the explicit temporal relationship structure. © Springer Science+Business Media, LLC 2007.

  15. Clustering Based Feature Learning on Variable Stars

    CERN Document Server

    Mackenzie, Cristóbal; Protopapas, Pavlos

    2016-01-01

    The success of automatic classification of variable stars strongly depends on the lightcurve representation. Usually, lightcurves are represented as a vector of many statistical descriptors designed by astronomers called features. These descriptors commonly demand significant computational power to calculate, require substantial research effort to develop and do not guarantee good performance on the final classification task. Today, lightcurve representation is not entirely automatic; algorithms that extract lightcurve features are designed by humans and must be manually tuned up for every survey. The vast amounts of data that will be generated in future surveys like LSST mean astronomers must develop analysis pipelines that are both scalable and automated. Recently, substantial efforts have been made in the machine learning community to develop methods that prescind from expert-designed and manually tuned features for features that are automatically learned from data. In this work we present what is, to our ...

  16. Facial Expression Recognition Based on Improved LTP and Sparse Representation

    Institute of Scientific and Technical Information of China (English)

    李立赛; 应自炉

    2015-01-01

    To improve the facial expression recognition rate in practical applications, an improved local ternary patterns (ILTP) algorithm is proposed on the basis of the local ternary patterns (LTP) algorithm and combined with a sparse representation-based classifier (SRC) to form a new algorithm for facial expression recognition. Facial expression features are first extracted by the ILTP algorithm, and these features are then used as the input of the SRC to complete facial expression classification. Experimental results on the JAFFE database show that the new algorithm achieves a facial expression recognition rate of 70.48% and is highly feasible.
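
    The LTP operator that ILTP builds on can be shown for a single 3x3 neighborhood: each neighbor is coded +1/0/-1 relative to the center pixel within a tolerance t, and the ternary code is split into the usual upper and lower binary patterns. The threshold t=5 and the sample patch are assumptions:

```python
import numpy as np

# Basic LTP code for one 3x3 neighborhood (the operator ILTP modifies).
# The tolerance t=5 and the example patch are illustrative assumptions.
def ltp_codes(patch, t=5):
    center = patch[1, 1]
    # The 8 neighbors in clockwise order starting at the top-left corner.
    neighbors = patch[[0, 0, 0, 1, 2, 2, 2, 1], [0, 1, 2, 2, 2, 1, 0, 0]]
    ternary = np.where(neighbors > center + t, 1,
                       np.where(neighbors < center - t, -1, 0))
    # Split the ternary code into two binary patterns.
    upper = sum(int(b) << i for i, b in enumerate(ternary == 1))
    lower = sum(int(b) << i for i, b in enumerate(ternary == -1))
    return upper, lower

patch = np.array([[ 90, 100, 118],
                  [ 80, 100, 104],
                  [100, 130,  95]])
print(ltp_codes(patch))  # (36, 129)
```

    Histograms of these codes over image blocks form the feature vector; the tolerance band around the center makes LTP less noise-sensitive than plain LBP.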

  17. Mesh Deformation Based 3D Facial Modeling From Images

    Institute of Scientific and Technical Information of China (English)

    董洪伟

    2012-01-01

    High-quality 3D facial modeling from images is an important problem in computer vision and graphics. Unlike traditional narrow-baseline multi-view geometry based on stereo matching and data-driven shape-blending approaches, we propose a new technique for high-quality facial modeling from images that combines mesh deformation with stereo-vision principles: a template mesh is deformed using Laplacian mesh deformation under a color-consistency metric defined over all input images. Given a few facial images taken from different viewpoints, we first perform sparse stereo matching on robustly matched image feature points to recover the camera extrinsic parameters and a sparse set of 3D points. An image-based mesh deformation process then deforms a 3D facial template, combining geometric detail preservation with an image-consistency constraint, so that the color intensities of the deformed face's visible projections are consistent across the images. Template-based deformation effectively handles occlusion in the images, a robust estimation technique reduces the influence of noise, outliers and illumination on the convergence of the objective function, and repeated nonlinear optimization of the objective further improves the reconstruction quality. Experiments on both synthetic and real facial images show that the method can reconstruct high-quality 3D face models from a few wide-baseline images.

  18. [Plant Spectral Discrimination Based on Phenological Features].

    Science.gov (United States)

    Zhang, Lei; Zhao, Jian-long; Jia, Kun; Li, Xiao-song

    2015-10-01

    Spectral analysis plays a significant role in plant characteristic identification and mechanism recognition. Many papers have been published on absorption features in the spectra of chlorophyll and moisture, spectral analysis of the vegetation red-edge effect, spectral profile feature extraction, spectral profile conversion, and the impacts of vegetation leaf structure and chemical composition on spectra. However, fewer studies have addressed spectral changes caused by seasonal changes in plant life form, chlorophyll, and leaf area index. This paper reports spectral observations of 11 plants of various life forms, leaf structures and sizes, and phenological characteristics, including deciduous forest with broad vertical leaves, evergreen needle-leaf forest, deciduous needle-leaf forest, deciduous forest with broad flat leaves, tall shrub with big leaves, tall shrub with little leaves, deciduous forest with broad little leaves, short shrub, meadow, steppe and grass. Field spectral data were collected with an SVC-HR768 spectrometer (Spectra Vista, USA); the band width covers 350-2 500 nm, and the spectral resolution reaches 1-4 nm. The features of NDVI, the maximum spectral absorption depth in the green band, and the maximum spectral absorption depth in the red band were measured after continuum-removal processing; the mean, amplitude and gradient of these features on the seasonal-change profile were analyzed, and the separability of plant spectral features between the growth period and the maturation period was compared. The paper presents a method for calculating the separability of vegetation spectra that considers feature-space distances, and this index is used in the analysis of vegetation discrimination. The results show that spectral features during the plant growth period are easier to distinguish than those during the maturation period: with the same features, plant separability in the growth period is 3 points higher than in the maturation period. The overall separability of vegetation
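
    Two of the spectral features named above are easy to state concretely: NDVI is (NIR − Red)/(NIR + Red), and the maximum absorption depth is one minus the smallest ratio of reflectance to its continuum line. The reflectance values and band endpoints below are made-up illustrations, not measurements from the paper:

```python
import numpy as np

# Toy computation of NDVI and continuum-removed maximum absorption depth.
# All reflectance values and band endpoints are illustrative assumptions.
red, nir = 0.08, 0.45
ndvi = (nir - red) / (nir + red)

# Continuum removal over a toy absorption band: divide reflectance by the
# straight line joining the band endpoints; depth = 1 - min(ratio).
wavelengths = np.array([550.0, 600.0, 650.0, 700.0])
reflect = np.array([0.30, 0.18, 0.12, 0.28])
continuum = np.interp(wavelengths, [550.0, 700.0], [0.30, 0.28])
depth = 1 - (reflect / continuum).min()
print(f"NDVI={ndvi:.3f}, max absorption depth={depth:.3f}")
```

    Tracking how these values change across the growing season gives the seasonal-profile features (mean, amplitude, gradient) the paper uses for discrimination.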

  19. Facial expression recognition based on wavelet transformed MCBP and WEF

    Institute of Scientific and Technical Information of China (English)

    胡敏; 陈杏; 王晓华; 许良凤; 李瑞

    2012-01-01

    Existing multi-scale centralized binary patterns (MCBP) obtain multi-scale features by changing the radius of the CBP operator on the original image; as the operator radius increases, the computation of the algorithm grows rapidly. To deal with this problem, a facial expression recognition method based on wavelet-transformed MCBP (WMCBP) is developed: CBP transforms are applied to the feature regions of the two low-frequency images obtained from wavelet decomposition, yielding multi-level local CBP histogram sequence features. The method not only obtains more accurate multi-scale information but also greatly reduces the computational complexity. Furthermore, weighted wavelet energy features (WWEF) are introduced to further improve the recognition rate. Experiments on the JAFFE facial expression database show that the two types of features are complementary to some extent, and fusing them enhances the performance of WMCBP in facial expression recognition without noticeably increasing the computation.

  20. Facial Schwannoma

    Directory of Open Access Journals (Sweden)

    Mohammadtaghi Khorsandi Ashtiani

    2005-06-01

    Full Text Available Background: Facial schwannoma is a rare tumor arising from any part of the nerve. Probable symptoms are partial or complete facial weakness, hearing loss, a visible mass in the ear, otorrhea, loss of taste, rarely pain, and sometimes no symptoms at all. Patients should undergo a complete neurotologic history and examination with documentation of facial and auditory function, especially CT scan or MRI. Surgery is the only treatment option, although the decision of when to remove a facial schwannoma in the presence of normal facial function is difficult. Case: A 19-year-old girl with all of the above symptoms on the right side except loss of taste was diagnosed with facial schwannoma after full examination and audiometric and radiological tests. She underwent surgery, and at follow-up facial function was mostly restored. Conclusion: The need for careful assessment of patients with Bell's palsy cannot be overemphasized. In spite of negative results, if any suspicion remains, total facial nerve exploration is necessary.

  1. Cosmetics alter biologically-based factors of beauty: evidence from facial contrast.

    Science.gov (United States)

    Jones, Alex L; Russell, Richard; Ward, Robert

    2015-01-01

    The use of cosmetics by women seems to consistently increase their attractiveness. What factors of attractiveness do cosmetics alter to achieve this? Facial contrast is a known cue to sexual dimorphism and youth, and cosmetics exaggerate sexual dimorphisms in facial contrast. Here, we demonstrate that the luminance contrast pattern of the eyes and eyebrows is consistently sexually dimorphic across a large sample of faces, with females possessing lower brow contrasts than males, and greater eye contrast than males. Red-green and yellow-blue color contrasts were not found to differ consistently between the sexes. We also show that women use cosmetics not only to exaggerate sexual dimorphisms of brow and eye contrasts, but also to increase contrasts that decline with age. These findings refine the notion of facial contrast, and demonstrate how cosmetics can increase attractiveness by manipulating factors of beauty associated with facial contrast. PMID:25725411
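
    A toy version of the facial-contrast measurement can be written directly: compare the mean luminance of a feature region (for example the brow) against the surrounding skin. The region masks and the Weber-style contrast formula are illustrative assumptions; the paper's exact definition may differ:

```python
import numpy as np

# Toy facial-contrast measurement. The region masks and the Weber-style
# contrast formula are illustrative assumptions, not the paper's definition.
def region_contrast(image, feature_mask, surround_mask):
    feature = image[feature_mask].mean()
    surround = image[surround_mask].mean()
    return (surround - feature) / surround   # darker feature -> higher contrast

img = np.full((20, 20), 0.8)      # light "skin" luminance
img[8:10, 4:16] = 0.3             # dark "brow" band
brow = np.zeros_like(img, dtype=bool)
brow[8:10, 4:16] = True
skin = ~brow
print(f"brow contrast: {region_contrast(img, brow, skin):.3f}")
```

    On this construction the contrast is (0.8 − 0.3)/0.8 = 0.625; darkening the brow with cosmetics would raise the value, which is the direction of the sexual-dimorphism effect the paper reports.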

  3. Analytical Features: A Knowledge-Based Approach to Audio Feature Generation

    Directory of Open Access Journals (Sweden)

    Pachet François

    2009-01-01

    Full Text Available We present a feature generation system designed to create audio features for supervised classification tasks. The main contribution to feature generation studies is the notion of analytical features (AFs), a construct designed to support the representation of knowledge about audio signal processing. We describe the most important aspects of AFs, in particular their dimensional type system, on which pattern-based random generators, heuristics, and rewriting rules are based. We show how AFs generalize or improve previous approaches used in feature generation. We report on several projects using AFs for difficult audio classification tasks, demonstrating their advantage over standard audio features. More generally, we propose analytical features as a paradigm to bring raw signals into the world of symbolic computation.

  4. Facial Expressions with Some Mixed Expressions Recognition Using Neural Networks

    Directory of Open Access Journals (Sweden)

    Dr.R.Parthasarathi

    2011-01-01

    Full Text Available Facial feature extraction is the essential step of facial expression recognition, and automatic facial expression evaluation has a wide range of applications. The facial feature vectors important for expression analysis are identified; the extracted feature vectors are used as input training vectors for a neural network (NN), while PCA is used for dimensionality reduction. Experimental results show that the method is effective for both dimension reduction and recognition performance in comparison with other proposed methods.

  5. Facial Expressions with Some Mixed Expressions Recognition Using Neural Networks

    OpenAIRE

    Dr.R.Parthasarathi; V.Lokeswar Reddy,; K.Vishnuthej,; G.Vishnu Vandan

    2011-01-01

    Facial feature extraction is the essential step of facial expression recognition, and automatic facial expression evaluation has a wide range of applications. The facial feature vectors important for expression analysis are identified; the extracted feature vectors are used as input training vectors for a neural network (NN), while PCA is used for dimensionality reduction. Experimental results show that the method is effective for both dimension reduction and recognition performance in comparison with other p...

  6. 3D Facial Depth Map Recognition in Different Poses with Surface Contour Feature

    Institute of Scientific and Technical Information of China (English)

    叶长明; 蒋建国; 詹曙; ANDO Shigeru

    2013-01-01

    Three-dimensional face recognition has drawn more and more attention, for it overcomes the shortcoming of two-dimensional face recognition of being susceptible to the influence of illumination, expression and pose variations. A face recognition method based on Fourier descriptors and contours (FDAC) is proposed in this paper. It operates on depth maps acquired by a three-dimensional facial imaging system in different poses. First, the depth maps are corrected under the guidance of differential geometry theory, and the facial features are described by surface contours. Second, a Fourier descriptor is employed to extract the facial features. Finally, the extracted contour features are used for face classification and recognition. Experimental results show that FDAC achieves good recognition accuracy on 3D face images in different poses and outperforms the conventional Eigenface method in time cost.

  7. Feature-based Ontology Mapping from an Information Receivers’ Viewpoint

    DEFF Research Database (Denmark)

    Glückstad, Fumiko Kano; Mørup, Morten

    2012-01-01

    This paper compares four algorithms for computing feature-based similarities between concepts respectively possessing a distinctive set of features. The eventual purpose of comparing these feature-based similarity algorithms is to identify a candidate term in a Target Language (TL) that can...

  8. Patch-guided facial image inpainting by shape propagation

    Institute of Scientific and Technical Information of China (English)

    Yue-ting ZHUANG; Yu-shun WANG; Timothy K. SHIH; Nick C. TANG

    2009-01-01

    Images with human faces comprise an essential part of the imaging realm. Occlusion or damage in facial portions brings remarkable discomfort and information loss. We propose an algorithm that can repair occluded or damaged facial images automatically, named 'facial image inpainting'. Inpainting is a set of image processing methods to recover missing image portions. We extend image inpainting methods by introducing facial domain knowledge. With the support of a face database, our approach propagates structural information, i.e., feature points and edge maps, from similar faces to the missing facial regions. Using the inferred structural information as guidance, an exemplar-based image inpainting algorithm is employed to copy patches of the same face from the source portion to the missing portion. This newly proposed concept of facial image inpainting outperforms traditional inpainting methods by propagating facial shapes from a face database, and avoids the problem of variations in imaging conditions across different images by inferring colors and textures from the same face image. Our system produces seamless faces with hardly any visible artifacts.

  9. Automatic Facial Expression Recognition and Operator Functional State

    Science.gov (United States)

    Blanson, Nina

    2012-01-01

    The prevalence of human error in safety-critical occupations remains a major challenge to mission success despite increasing automation in control processes. Although various methods have been proposed to prevent incidences of human error, none of these have been developed to employ the detection and regulation of Operator Functional State (OFS), or the optimal condition of the operator while performing a task, in work environments due to drawbacks such as obtrusiveness and impracticality. A video-based system with the ability to infer an individual's emotional state from facial feature patterning mitigates some of the problems associated with other methods of detecting OFS, like obtrusiveness and impracticality in integration with the mission environment. This paper explores the utility of facial expression recognition as a technology for inferring OFS by first expounding on the intricacies of OFS and the scientific background behind emotion and its relationship with an individual's state. Then, descriptions of the feedback loop and the emotion protocols proposed for the facial recognition program are explained. A basic version of the facial expression recognition program uses Haar classifiers and OpenCV libraries to automatically locate key facial landmarks during a live video stream. Various methods of creating facial expression recognition software are reviewed to guide future extensions of the program. The paper concludes with an examination of the steps necessary in the research of emotion and recommendations for the creation of an automatic facial expression recognition program for use in real-time, safety-critical missions.
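
    The abstract above mentions using Haar classifiers to locate facial landmarks. Haar classifiers are built on Haar-like rectangle features, each evaluated in constant time via an integral image; the following NumPy sketch illustrates that underlying idea (it is an illustration of the technique, not code from the paper):

```python
import numpy as np

def integral_image(img):
    """Cumulative sums over both axes; ii[y, x] = sum of img[:y+1, :x+1]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, y, x, h, w):
    """Sum of the h-by-w rectangle with top-left corner (y, x),
    computed with at most four lookups into the integral image."""
    total = ii[y + h - 1, x + w - 1]
    if y > 0:
        total -= ii[y - 1, x + w - 1]
    if x > 0:
        total -= ii[y + h - 1, x - 1]
    if y > 0 and x > 0:
        total += ii[y - 1, x - 1]
    return total

def haar_two_rect(ii, y, x, h, w):
    """Two-rectangle Haar-like feature: left half minus right half (w even)."""
    half = w // 2
    return rect_sum(ii, y, x, h, half) - rect_sum(ii, y, x + half, h, half)
```

    A cascade such as OpenCV's combines thousands of these features with boosted thresholds; the constant-time rectangle sums are what make real-time detection feasible.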

  10. Surface characterization based upon significant topographic features

    Energy Technology Data Exchange (ETDEWEB)

    Blanc, J; Grime, D; Blateyron, F, E-mail: fblateyron@digitalsurf.fr [Digital Surf, 16 rue Lavoisier, F-25000 Besancon (France)

    2011-08-19

    Watershed segmentation and Wolf pruning, as defined in ISO 25178-2, allow the detection of significant features on surfaces and their characterization in terms of dimension, area, volume, curvature, shape or morphology. These new tools provide a robust way to specify functional surfaces.

  11. Estimation of human emotions using thermal facial information

    Science.gov (United States)

    Nguyen, Hung; Kotani, Kazunori; Chen, Fan; Le, Bac

    2014-01-01

    In recent years, research on human emotion estimation using thermal infrared (IR) imagery has appealed to many researchers due to its invariance to visible illumination changes. Although infrared imagery is superior to visible imagery in its invariance to illumination changes and appearance differences, it has difficulties in handling transparent glasses in the thermal infrared spectrum. As a result, when using infrared imagery for the analysis of human facial information, the regions of eyeglasses are dark and eyes' thermal information is not given. We propose a temperature space method to correct eyeglasses' effect using the thermal facial information in the neighboring facial regions, and then use Principal Component Analysis (PCA), Eigen-space Method based on class-features (EMC), and PCA-EMC method to classify human emotions from the corrected thermal images. We collected the Kotani Thermal Facial Emotion (KTFE) database and performed the experiments, which show the improved accuracy rate in estimating human emotions.

  12. FACIAL GEOMETRIC BEAUTY SCORE BASED ON SEMI-SUPERVISED REGRESSION LEARNING

    Institute of Scientific and Technical Information of China (English)

    戴礼青; 金忠; 孙明明

    2015-01-01

    With the rapid development of facial aesthetics, we study the definition of facial geometric features, the normalisation of geometric features, and the contribution of geometric features to judging whether a face is beautiful. First, we define a facial geometric beauty score function; then we combine manifold learning with semi-supervised learning, using semi-supervised regression on manifolds to learn the geometric beauty scores of faces. In order to highlight the geometric features, we also verify the relationship between facial expression and geometric beauty scores. Compared with K-nearest neighbour (KNN), support vector machine (SVM) and C4.5 decision tree classification methods, the validity and feasibility of the proposed method are demonstrated by experiment.

  13. Content-Based Image Retrieval Using Multiple Features

    OpenAIRE

    Zhang, Chi; Huang, Lei

    2014-01-01

    Algorithms for Content-Based Image Retrieval (CBIR) have been well developed along with the explosion of information. These algorithms are mainly distinguished by the features used to describe the image content. In this paper, algorithms based on color features and texture features for image retrieval are presented. A Color Coherence Vector based image retrieval algorithm is also attempted during the implementation process, but the best result is generated from the algorithms tha...

  14. A Framework for Real-Time Face and Facial Feature Tracking using Optical Flow Pre-estimation and Template Tracking

    CERN Document Server

    Gast, E R

    2011-01-01

    This work presents a framework for tracking head movements and capturing the movements of the mouth and both eyebrows in real time. We present a head tracker which is a combination of an optical flow tracker and a template based tracker. The estimate of the optical flow head tracker is used as the starting point for the template tracker, which fine-tunes the head estimate. This approach, together with re-updating the optical flow points, prevents the head tracker from drifting. Combined with our switching scheme, it makes the tracker very robust against fast movement and motion blur. We also propose a way to reduce the influence of partial occlusion of the head: in both the optical flow and the template based tracker we identify and exclude occluded points.
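
    The template tracker described above fine-tunes the position predicted by optical flow. A minimal sketch of such a refinement step, assuming a grayscale frame and a sum-of-squared-differences (SSD) search in a small window around the prediction (the function and parameter names are hypothetical, not from the paper):

```python
import numpy as np

def template_refine(frame, template, pred_y, pred_x, radius=3):
    """Refine a predicted position (e.g. from optical flow) by searching a
    (2*radius+1)^2 window around it for the location that minimizes the
    sum of squared differences (SSD) against the template."""
    th, tw = template.shape
    best, best_pos = None, (pred_y, pred_x)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = pred_y + dy, pred_x + dx
            if y < 0 or x < 0 or y + th > frame.shape[0] or x + tw > frame.shape[1]:
                continue  # candidate window falls outside the frame
            patch = frame[y:y + th, x:x + tw]
            ssd = np.sum((patch - template) ** 2)
            if best is None or ssd < best:
                best, best_pos = ssd, (y, x)
    return best_pos
```

    The pre-estimation step matters because this exhaustive search is only cheap when the radius is small; the optical flow prediction keeps the true position within that radius even under fast motion.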

  15. Straight line feature based image distortion correction

    Institute of Scientific and Technical Information of China (English)

    Zhang Haofeng; Zhao Chunxia; Lu Jianfeng; Tang Zhenmin; Yang Jingyu

    2008-01-01

    An image distortion correction method is proposed which uses straight line features. Many parallel lines in different directions are extracted from different images and then used to optimize the distortion parameters by nonlinear least squares, with a step-by-step strategy applied during the optimization. The 3D world coordinates need not be known, and the method is easy to implement. Experimental results show its high accuracy.

  16. Feature selection with neighborhood entropy-based cooperative game theory.

    Science.gov (United States)

    Zeng, Kai; She, Kun; Niu, Xinzheng

    2014-01-01

    Feature selection plays an important role in machine learning and data mining. In recent years, various feature measurements have been proposed to select significant features from high-dimensional datasets. However, most traditional feature selection methods will ignore some features which have strong classification ability as a group but are weak as individuals. To deal with this problem, we redefine the redundancy, interdependence, and independence of features by using neighborhood entropy. Then the neighborhood entropy-based feature contribution is proposed under the framework of cooperative game. The evaluative criteria of features can be formalized as the product of contribution and other classical feature measures. Finally, the proposed method is tested on several UCI datasets. The results show that the neighborhood entropy-based cooperative game theory model (NECGT) yields better performance than classical ones.

  17. Feature Selection with Neighborhood Entropy-Based Cooperative Game Theory

    Directory of Open Access Journals (Sweden)

    Kai Zeng

    2014-01-01

    Full Text Available Feature selection plays an important role in machine learning and data mining. In recent years, various feature measurements have been proposed to select significant features from high-dimensional datasets. However, most traditional feature selection methods will ignore some features which have strong classification ability as a group but are weak as individuals. To deal with this problem, we redefine the redundancy, interdependence, and independence of features by using neighborhood entropy. Then the neighborhood entropy-based feature contribution is proposed under the framework of cooperative game. The evaluative criteria of features can be formalized as the product of contribution and other classical feature measures. Finally, the proposed method is tested on several UCI datasets. The results show that the neighborhood entropy-based cooperative game theory model (NECGT) yields better performance than classical ones.
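
    The neighborhood entropy underlying this method can be illustrated with a simplified sketch: take each sample's neighborhood to be the set of samples within Euclidean distance delta, and average -log of the relative neighborhood sizes. This is an illustrative reading of the measure, assuming the basic neighborhood rough set form, not the authors' code:

```python
import numpy as np

def neighborhood_entropy(X, delta):
    """Simplified neighborhood entropy of a sample matrix X (n x d):
    for each sample, count neighbours within Euclidean distance delta
    (the sample itself included), then average -log(|neighbourhood| / n)."""
    n = X.shape[0]
    # pairwise Euclidean distance matrix
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2))
    sizes = (d <= delta).sum(axis=1)  # neighbourhood cardinalities
    return float(-np.mean(np.log(sizes / n)))
```

    Identical samples give entropy 0 (maximal granule size), while well-separated samples give entropy log(n); group effects appear when the entropy of a feature subset drops more than the sum of the individual drops.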

  18. Facial Data Field

    Institute of Scientific and Technical Information of China (English)

    WANG Shuliang; YUAN Hanning; CAO Baohua; WANG Dakui

    2015-01-01

    Expressional face recognition is a challenge in computer vision for complex expressions. The facial data field is proposed to recognize expressions. Fundamentals of the methodology of face recognition upon data fields are presented, followed by technical algorithms including normalizing faces, generating the facial data field, extracting feature points in partitions, assigning weights and recognizing faces. A case study on the JAFFE database verifies the method. Results indicate that the proposed method is suitable and effective for expressional face recognition, with an overall average recognition rate of up to 94.3%. In conclusion, the data field is considered a valuable alternative for pattern recognition.

  19. Geometrically Invariant Watermarking Scheme Based on Local Feature Points

    Directory of Open Access Journals (Sweden)

    Jing Li

    2012-06-01

    Full Text Available Based on local invariant feature points and the cross ratio principle, this paper presents a feature-point-based image watermarking scheme that is robust to geometric attacks and some signal processing operations. It extracts local invariant feature points from the image using an improved scale invariant feature transform algorithm. Utilizing these points as vertexes, it constructs quadrilaterals to serve as local feature regions, into which the watermark is inserted repeatedly. To obtain stable local regions, it adjusts the number and distribution of extracted feature points. In every chosen local feature region, the locations for embedding watermark bits are decided by the cross ratio of four collinear points, which is invariant to projective transformation. Watermark bits are embedded by quantization modulation, in which the quantization step is computed from a given PSNR. Experimental results show that the proposed method withstands a wide range of geometric attacks as well as compound geometric attacks.
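
    The embedding locations above rely on the cross ratio of four collinear points being invariant under projective transformation. A small sketch of that invariance, for points parameterized by their scalar positions along a line (the particular Moebius map used for the check is an arbitrary example, not from the paper):

```python
def cross_ratio(a, b, c, d):
    """Cross ratio (AC * BD) / (BC * AD) of four collinear points,
    given by their scalar positions along the line."""
    return ((c - a) * (d - b)) / ((c - b) * (d - a))

def projective_map(t, m=((2.0, 1.0), (1.0, 3.0))):
    """A 1D projective (Moebius) transformation t -> (p*t + q) / (r*t + s),
    the restriction of a plane projective transformation to a line."""
    (p, q), (r, s) = m
    return (p * t + q) / (r * t + s)
```

    Because the cross ratio survives any such map, a detector can re-locate the embedding positions even after the image has been projectively distorted.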

  20. 3D animation of facial plastic surgery based on computer graphics

    Science.gov (United States)

    Zhang, Zonghua; Zhao, Yan

    2013-12-01

    More and more people, especially women, desire to be more beautiful than ever. To some extent this has become possible, as facial plastic surgery was already practiced in the early 20th century and even earlier, when doctors dealt with facial war injuries. However, the effect of the operation is not always satisfying, since no animation can be seen by the patients beforehand. In this paper, by combining facial plastic surgery and computer graphics, a novel method of simulating the post-operative appearance is given to demonstrate the modified face from different viewpoints. The 3D human face data are obtained using 3D fringe pattern imaging systems and CT imaging systems and then converted into the STL (STereo Lithography) file format. An STL file is made up of small 3D triangular primitives. The triangular mesh can be reconstructed by using a hash function. The frontmost triangular meshes in depth are picked out of the full set of triangles by a ray-casting technique. Mesh deformation is based on the front triangular mesh in the process of simulation, deforming the area of interest rather than control points. Experiments on a face model show that the proposed 3D animation of facial plastic surgery can effectively demonstrate the simulated post-operative appearance.

  1. Heartbeat Signal from Facial Video for Biometric Recognition

    DEFF Research Database (Denmark)

    Haque, Mohammad Ahsanul; Nasrollahi, Kamal; Moeslund, Thomas B.

    2015-01-01

    , and blood volume pressure provide the possibility of extracting heartbeat signal from facial video instead of using obtrusive ECG or PCG sensors in the body. This paper proposes the Heartbeat Signal from Facial Video (HSFV) as a new biometric trait for human identity recognition, for the first time...... to the best of our knowledge. Feature extraction from the HSFV is accomplished by employing Radon transform on a waterfall model of the replicated HSFV. The pairwise Minkowski distances are obtained from the Radon image as the features. The authentication is accomplished by a decision tree based supervised...

  2. Multiresolution Feature Based Fractional Power Polynomial Kernel Fisher Discriminant Model for Face Recognition

    Directory of Open Access Journals (Sweden)

    Dattatray V. Jadhav

    2008-05-01

    Full Text Available This paper presents a technique for face recognition which uses the wavelet transform to derive desirable facial features. Three-level decomposition is used to form pyramidal multiresolution features that cope with the variations due to illumination and facial expression changes. The fractional power polynomial kernel maps the input data into an implicit feature space with a nonlinear mapping. Being linear in the feature space but nonlinear in the input space, the kernel is capable of deriving low dimensional features that incorporate higher order statistics. Linear Discriminant Analysis is applied to the kernel-mapped multiresolution feature data. The effectiveness of this Wavelet Kernel Fisher Classifier algorithm is compared with existing popular algorithms for face recognition using the FERET, ORL, Yale and YaleB databases. The algorithm performs better than some of the existing popular algorithms.
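
    The fractional power polynomial kernel mentioned above has the form K(x, y) = <x, y>^d with 0 < d < 1. A minimal sketch of the kernel matrix computation; the sign/abs guard for negative inner products is a common convention assumed here, not taken from the paper:

```python
import numpy as np

def fractional_power_polynomial_kernel(X, Y, d=0.8):
    """Fractional power polynomial kernel K(x, y) = sign(<x, y>) * |<x, y>|^d
    with 0 < d < 1; the sign/abs guard keeps the fractional power defined
    when inner products are negative."""
    G = X @ Y.T                       # Gram matrix of inner products
    return np.sign(G) * np.abs(G) ** d
```

    Kernel Fisher discriminant analysis then works entirely with this matrix, so the nonlinear map into the implicit feature space never has to be computed explicitly.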

  3. A New Color Facial Identification Feature Extraction Method and Automatic Identification

    Institute of Scientific and Technical Information of China (English)

    高燕; 明曙军; 刘永俊

    2011-01-01

    Face recognition has achieved some success, and algorithms are constantly being improved. Noting that traditional linear analysis methods commonly rely on the average sample, this paper proposes face recognition based on intermediate samples, which removes the influence of interference samples on the average sample. Combining this with color face recognition, the paper proposes color facial identification feature extraction and automatic identification based on intermediate samples. Finally, extensive experiments performed on the internationally used AR standard color face database verify the effectiveness of the proposed method.

  4. Dwt - Based Feature Extraction from ecg Signal

    Directory of Open Access Journals (Sweden)

    V.K.Srivastava

    2013-01-01

    Full Text Available The electrocardiogram (ECG) is used to measure the rate and regularity of heartbeats and to detect any irregularity of the heart. An ECG translates the heart's electrical activity into a waveform on paper or screen. For the feature extraction and classification task we use the discrete wavelet transform (DWT): the wavelet transform is a two-dimensional time-scale processing method, so it is suitable for non-stationary ECG signals (given adequate scale values and shifting in time). The data are then analyzed and classified using a neuro-fuzzy system, a hybrid of artificial neural networks and fuzzy logic.
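
    The DWT step described above can be illustrated with a one-level Haar decomposition and a simple energy feature per detail band. In practice ECG work typically uses deeper decompositions with Daubechies wavelets (e.g. via PyWavelets), so this NumPy sketch is a simplified stand-in for the idea, not the paper's pipeline:

```python
import numpy as np

def haar_dwt_level(signal):
    """One level of the Haar DWT: approximation (low-pass) and detail
    (high-pass) coefficients of a 1D signal of even length."""
    x = np.asarray(signal, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

def haar_features(signal, levels=3):
    """Simple DWT feature vector: energy of the detail band at each level."""
    feats = []
    approx = np.asarray(signal, dtype=float)
    for _ in range(levels):
        approx, detail = haar_dwt_level(approx)
        feats.append(float(np.sum(detail ** 2)))
    return feats
```

    The orthonormal scaling (division by sqrt(2)) preserves signal energy across each level, which is why band energies are meaningful classification features.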

  5. Simultaneous Channel and Feature Selection of Fused EEG Features Based on Sparse Group Lasso

    Directory of Open Access Journals (Sweden)

    Jin-Jia Wang

    2015-01-01

    Full Text Available Feature extraction and classification of EEG signals are core parts of brain computer interfaces (BCIs). Due to the high dimension of the EEG feature vector, an effective feature selection algorithm has become an integral part of research studies. In this paper, we present a new method based on a wrapped Sparse Group Lasso for channel and feature selection of fused EEG signals. The high-dimensional fused features are first obtained; they include the power spectrum, time-domain statistics, AR model, and wavelet coefficient features extracted from the preprocessed EEG signals. The wrapped channel and feature selection method is then applied, which uses a logistic regression model with a Sparse Group Lasso penalty function. The model is fitted on the training data, and parameter estimation is obtained by a modified blockwise coordinate descent and coordinate gradient descent method. The best parameters and feature subset are selected by 10-fold cross-validation. Finally, the test data are classified using the trained model. Compared with existing channel and feature selection methods, results show that the proposed method is more suitable, more stable, and faster for high-dimensional feature fusion. It can simultaneously achieve channel and feature selection with a lower error rate. The test accuracy on data from the international BCI Competition IV reached 84.72%.
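
    The Sparse Group Lasso penalty used above, lam1*||w||_1 + lam2*sum_g ||w_g||_2, zeroes out whole groups (channels) as well as individual features. Its proximal operator, the core step inside blockwise coordinate descent, can be sketched as follows (this is the textbook form of the operator, not the authors' implementation):

```python
import numpy as np

def prox_sparse_group_lasso(w, groups, lam1, lam2, step=1.0):
    """Proximal operator of the Sparse Group Lasso penalty
    lam1 * ||w||_1 + lam2 * sum_g ||w_g||_2:
    elementwise soft-thresholding followed by group-wise shrinkage."""
    # l1 part: elementwise soft-threshold
    z = np.sign(w) * np.maximum(np.abs(w) - step * lam1, 0.0)
    out = np.zeros_like(z)
    for g in groups:                  # g is an index array for one group/channel
        norm = np.linalg.norm(z[g])
        if norm > step * lam2:
            out[g] = (1.0 - step * lam2 / norm) * z[g]  # shrink the whole group
        # else: the entire group (channel) is set to zero
    return out
```

    Groups whose post-threshold norm falls below step*lam2 vanish entirely, which is exactly how channel selection and feature selection happen in one pass.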

  6. Feature selection for splice site prediction: A new method using EDA-based feature ranking

    Directory of Open Access Journals (Sweden)

    Rouzé Pierre

    2004-05-01

    Full Text Available Abstract Background The identification of relevant biological features in large and complex datasets is an important step towards gaining insight in the processes underlying the data. Other advantages of feature selection include the ability of the classification system to attain good or even better solutions using a restricted subset of features, and a faster classification. Thus, robust methods for fast feature selection are of key importance in extracting knowledge from complex biological data. Results In this paper we present a novel method for feature subset selection applied to splice site prediction, based on estimation of distribution algorithms, a more general framework of genetic algorithms. From the estimated distribution of the algorithm, a feature ranking is derived. Afterwards this ranking is used to iteratively discard features. We apply this technique to the problem of splice site prediction, and show how it can be used to gain insight into the underlying biological process of splicing. Conclusion We show that this technique proves to be more robust than the traditional use of estimation of distribution algorithms for feature selection: instead of returning a single best subset of features (as they normally do), this method provides a dynamical view of the feature selection process, like the traditional sequential wrapper methods. However, the method is faster than the traditional techniques, and scales better to datasets described by a large number of features.

  7. A genome-wide association study identifies five loci influencing facial morphology in Europeans.

    Directory of Open Access Journals (Sweden)

    Fan Liu

    2012-09-01

    Full Text Available Inter-individual variation in facial shape is one of the most noticeable phenotypes in humans, and it is clearly under genetic regulation; however, almost nothing is known about the genetic basis of normal human facial morphology. We therefore conducted a genome-wide association study for facial shape phenotypes in multiple discovery and replication cohorts, considering almost ten thousand individuals of European descent from several countries. Phenotyping of facial shape features was based on landmark data obtained from three-dimensional head magnetic resonance images (MRIs) and two-dimensional portrait images. We identified five independent genetic loci associated with different facial phenotypes, suggesting the involvement of five candidate genes--PRDM16, PAX3, TP63, C5orf50, and COL17A1--in the determination of the human face. Three of them have been implicated previously in vertebrate craniofacial development and disease, and the remaining two genes potentially represent novel players in the molecular networks governing facial development. Our finding at PAX3 influencing the position of the nasion replicates a recent GWAS of facial features. In addition to the reported GWA findings, we established links between common DNA variants previously associated with NSCL/P at 2p21, 8q24, 13q31, and 17q22 and normal facial-shape variations based on a candidate gene approach. Overall our study implies that DNA variants in genes essential for craniofacial development contribute with relatively small effect size to the spectrum of normal variation in human facial morphology. This observation has important consequences for future studies aiming to identify more genes involved in the human facial morphology, as well as for potential applications of DNA prediction of facial shape such as in future forensic applications.

  8. Kernel based visual tracking with scale invariant features

    Institute of Scientific and Technical Information of China (English)

    Risheng Han; Zhongliang Jing; Yuanxiang Li

    2008-01-01

    Kernel based tracking has two disadvantages: the tracking window size cannot be adjusted efficiently, and the kernel based color distribution may not have enough ability to discriminate the object from a cluttered background. To boost the features' discriminating ability, both scale invariant features and kernel based color distribution features are used as descriptors of the tracked object. The proposed algorithm can keep tracking an object of varying scale even when the surrounding background is similar to the object's appearance.

  9. Facial trauma

    Science.gov (United States)

    Maxillofacial injury; Midface trauma; Facial injury; LeFort injuries ... Kellman RM. Maxillofacial trauma. In: Flint PW, Haughey BH, Lund LJ, et al, eds. Cummings Otolaryngology: Head & Neck Surgery . 6th ed. Philadelphia, PA: ...

  10. Study on Isomerous CAD Model Exchange Based on Feature

    Institute of Scientific and Technical Information of China (English)

    SHAO Xiaodong; CHEN Feng; XU Chenguang

    2006-01-01

    A model-exchange method between isomerous CAD systems based on features is put forward in this paper. In this method, CAD model information is accessed at both the feature and geometry levels and converted according to standard feature operations. The feature information, including the feature tree, dimensions and constraints, which would be lost in traditional data conversion, is converted completely from the source CAD system to the destination one, along with the geometry. The transferred model can therefore be edited through feature operations, which cannot be achieved by a general model-exchange interface.

  11. CONSTRUCTION AND MODIFICATION OF FLEXIBLE FEATURE-BASED MODELS

    Institute of Scientific and Technical Information of China (English)

    1999-01-01

    A new approach is proposed to generate flexible feature-based models (FFBM), which can be modified dynamically. A BRep/CSFG/FRG hybrid scheme is used to describe FFBM, in which BRep explicitly defines the model, the CSFG (Constructive solid-feature geometry) tree records the feature-based modelling procedure, and the FRG (Feature relation graph) reflects different kinds of relationships among features. Topological operators with local retrievability are designed to implement feature addition, which is traced in detail by a topological operation list (TOL). As a result, FFBM can be modified directly in the system database. Related features' chain reactions and variable topologies are supported in design modification, after which the product information adhering to features is not lost. Further, a feature can be modified as rapidly as it was added.

  12. Perceived sexual orientation based on vocal and facial stimuli is linked to self-rated sexual orientation in Czech men.

    Directory of Open Access Journals (Sweden)

    Jaroslava Varella Valentova

    Full Text Available Previous research has shown that lay people can accurately assess male sexual orientation based on limited information, such as face, voice, or behavioral display. Gender-atypical traits are thought to serve as cues to sexual orientation. We investigated the presumed mechanisms of sexual orientation attribution using a standardized set of facial and vocal stimuli of Czech men. Both types of stimuli were rated for sexual orientation and masculinity-femininity by non-student heterosexual women and homosexual men. Our data showed that by evaluating vocal stimuli both women and homosexual men can judge sexual orientation of the target men in agreement with their self-reported sexual orientation. Nevertheless, only homosexual men accurately attributed sexual orientation of the two groups from facial images. Interestingly, facial images of homosexual targets were rated as more masculine than heterosexual targets. This indicates that attributions of sexual orientation are affected by stereotyped association between femininity and male homosexuality; however, reliance on such cues can lead to frequent misjudgments as was the case with the female raters. Although our study is based on a community sample recruited in a non-English speaking country, the results are generally consistent with the previous research and thus corroborate the validity of sexual orientation attributions.

  13. Robust speech features representation based on computational auditory model

    Institute of Scientific and Technical Information of China (English)

    LU Xugang; JIA Chuan; DANG Jianwu

    2004-01-01

    A speech signal processing and feature extraction method based on a computational auditory model is proposed. The model draws on psychological and physiological knowledge as well as digital signal processing methods: each stage of the hearing perception system has a corresponding computational model that simulates its function, and features at different levels are extracted at each stage. A further processing step for the primary auditory spectrum, based on lateral inhibition, is proposed to extract still more robust speech features. All these features can be regarded as internal representations of the speech stimulus in the hearing system. Robust speech recognition experiments were conducted to test the robustness of the features; results show that the representations based on the proposed computational auditory model are robust for speech signals.
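
    Lateral inhibition of the kind described above can be sketched as a center-surround filter applied across the channels of a primary auditory spectrum. The kernel weights and the half-wave rectification below are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def lateral_inhibition(spectrum, kernel=(-0.25, -0.25, 1.0, -0.25, -0.25)):
    """Sketch of lateral inhibition over a primary auditory spectrum.

    spectrum: array of shape (frames, channels).
    Each channel is excited by itself and inhibited by its neighbours;
    the kernel (sum zero) suppresses flat regions and sharpens peaks.
    """
    k = np.asarray(kernel)
    out = np.apply_along_axis(
        lambda row: np.convolve(row, k, mode="same"), 1, spectrum)
    return np.maximum(out, 0.0)  # half-wave rectification
```

    With a zero-sum kernel, a spectrally flat input is suppressed while an isolated spectral peak is preserved, which is the robustness effect the abstract attributes to this stage.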

  14. Features Fusion Based on FLD for Face Recognition

    OpenAIRE

    Changjun Zhou; Qiang Zhang; Xiaopeng Wei; Ziqi Wei

    2010-01-01

    In this paper, we introduce a features fusion method for face recognition based on Fisher’s Linear Discriminant (FLD). The method extracts features using two-dimensional principal component analysis (2DPCA) and Gabor wavelets, and then fuses the extracted features with FLD. As a holistic feature extraction method, 2DPCA performs dimensionality reduction on the input dataset while retaining the characteristics that contribute most to its variance by elimin...

  15. Accurate Image Retrieval Algorithm Based on Color and Texture Feature

    Directory of Open Access Journals (Sweden)

    Chunlai Yan

    2013-06-01

    Full Text Available Content-Based Image Retrieval (CBIR) is one of the most active hot spots in current research on multimedia retrieval. Based on the description and extraction of visual content (features) of an image, CBIR aims to find images that contain specified features in an image database. In this paper, several key technologies of CBIR, e.g. the extraction of color and texture features and similarity measures, are investigated. On the basis of this theoretical research, an image retrieval system based on color and texture features is designed. The system adopts a weighted color feature based on HSV space as the color feature vector; uses four co-occurrence matrix features, namely energy, entropy, inertia quadrature and correlation, to construct texture vectors; and employs the Euclidean distance as the similarity measure. Experimental results show that this CBIR system is efficient in image retrieval.
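
    The color half of the pipeline described here (weighted HSV histogram plus Euclidean ranking) can be sketched as follows. The bin counts and channel weights are illustrative assumptions, and the co-occurrence-matrix texture features are omitted for brevity:

```python
import numpy as np

def hsv_histogram(hsv_img, bins=(8, 3, 3), weights=(0.6, 0.3, 0.1)):
    """Weighted HSV colour histogram (H weighted most heavily).

    hsv_img: float array of shape (H, W, 3), channels scaled to [0, 1].
    Returns one concatenated, per-channel-weighted feature vector.
    """
    feats = []
    for ch, (nb, w) in enumerate(zip(bins, weights)):
        hist, _ = np.histogram(hsv_img[..., ch], bins=nb, range=(0.0, 1.0))
        hist = hist / hist.sum()   # normalise to a distribution
        feats.append(w * hist)     # channel weighting
    return np.concatenate(feats)

def euclidean_rank(query_vec, db_vecs):
    """Return database indices sorted by Euclidean distance to the query."""
    d = np.linalg.norm(db_vecs - query_vec, axis=1)
    return np.argsort(d)
```

    A query image's histogram is ranked against the database histograms; the best match (distance 0) is the image itself, followed by increasingly dissimilar colour distributions.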

  16. High Dimensional Data Clustering Using Fast Cluster Based Feature Selection

    Directory of Open Access Journals (Sweden)

    Karthikeyan.P

    2014-03-01

    Full Text Available Feature selection involves identifying a subset of the most useful features that produces results compatible with the original entire set of features. A feature selection algorithm may be evaluated from both the efficiency and effectiveness points of view: efficiency concerns the time required to find a subset of features, while effectiveness relates to the quality of that subset. Based on these criteria, a fast clustering-based feature selection algorithm (FAST) is proposed and experimentally evaluated in this paper. The FAST algorithm works in two steps. In the first step, features are divided into clusters using graph-theoretic clustering methods. In the second step, the most representative feature that is strongly related to the target classes is selected from each cluster to form a subset of features. Because features in different clusters are relatively independent, the clustering-based strategy of FAST has a high probability of producing a subset of useful and independent features. To ensure the efficiency of FAST, we adopt the efficient minimum spanning tree (MST) clustering method using Kruskal's algorithm. The efficiency and effectiveness of the FAST algorithm are evaluated through an empirical study.
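
    A minimal sketch of the two-step FAST idea, under simplifying assumptions: 1 − |Pearson correlation| stands in for the paper's information-theoretic distance, Kruskal's algorithm (via union-find) builds the MST while edges longer than a cutoff are never added (which yields the clusters directly), and the feature most correlated with the target represents each cluster:

```python
import numpy as np

def fast_select(X, y, cut=0.5):
    """Two-step clustering-based feature selection (FAST-style sketch).

    Step 1: Kruskal's MST over features, edge weight = 1 - |corr|;
            edges longer than `cut` are never added, forming clusters.
    Step 2: keep the feature most correlated with y from each cluster.
    """
    n = X.shape[1]
    corr = np.corrcoef(X, rowvar=False)
    edges = sorted((1 - abs(corr[i, j]), i, j)
                   for i in range(n) for j in range(i + 1, n))
    parent = list(range(n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    added = 0
    for w, i, j in edges:          # Kruskal: shortest edges first
        if w > cut:                # remaining edges too long -> stop
            break
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            added += 1
            if added == n - 1:
                break
    rel = np.array([abs(np.corrcoef(X[:, k], y)[0, 1]) for k in range(n)])
    best = {}                      # cluster root -> most relevant feature
    for k in range(n):
        root = find(k)
        if root not in best or rel[k] > rel[best[root]]:
            best[root] = k
    return sorted(best.values())
```

    On data where two features are near-duplicates, the duplicate pair collapses into one cluster and only its more target-relevant member survives.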

  17. Segmentation-Based PolSAR Image Classification Using Visual Features: RHLBP and Color Features

    Directory of Open Access Journals (Sweden)

    Jian Cheng

    2015-05-01

    Full Text Available A segmentation-based fully-polarimetric synthetic aperture radar (PolSAR) image classification method that incorporates texture features and color features is designed and implemented. This method is based on a framework that conjunctively uses statistical region merging (SRM) for segmentation and a support vector machine (SVM) for classification. In the segmentation step, we propose an improved local binary pattern (LBP) operator named the regional homogeneity local binary pattern (RHLBP) to guarantee regional homogeneity in PolSAR images. In the classification step, color features extracted from false color images are applied to improve the classification accuracy. The RHLBP operator and color features can provide discriminative information to separate those pixels and regions with similar polarimetric features which are from different classes. Extensive experimental comparison results with conventional methods on L-band PolSAR data demonstrate the effectiveness of our proposed method for PolSAR image classification.

  18. Precise Facial Feature Localization under Non-Restraint Environment with Limited Training Images

    Institute of Scientific and Technical Information of China (English)

    陈莹; 张龙媛

    2013-01-01

    After analyzing the limitations of current methods, a precise localization strategy for facial features under non-restraint conditions using limited training data is proposed within a probability framework. Color and gray-level information of the main facial features, together with the geometric constraints among the features, are extracted after comparative analysis with other traditional descriptors, and Gaussian mixture models are used to model their probability distributions, which describe well the distributions of the model features extracted under different facial conditions. A series of fusion strategies is then designed for facial feature localization, considering not only the probability distribution of each facial feature but also the distribution characteristics of its surrounding elements and the geometric constraints among them. Experimental results show that the proposed method achieves precise localization of the main facial features with a small number of training images belonging to a single subject, and that it outperforms existing methods in localization accuracy.

  19. Multifinger Feature Level Fusion Based Fingerprint Identification

    OpenAIRE

    Praveen N; Tessamma Thomas

    2012-01-01

    Fingerprint based authentication systems are among the most cost-effective biometric techniques employed for personal identification. As the database population increases, fast identification/recognition algorithms with high accuracy are required. Accuracy can be increased using multimodal evidence collected from multiple biometric traits. In this work, consecutive fingerprint images are taken, global singularities are located using directional field strength and their local orient...

  20. Emotion Analysis Based on Facial Expression Recognition in Smart Learning Environment

    Institute of Scientific and Technical Information of China (English)

    孙波; 刘永娜; 陈玖冰; 罗继鸿; 张迪

    2015-01-01

    In an optimal situation, the individual facial features unrelated to expression can be separated out during expression recognition. Following the FACS (Facial Action Coding System) proposed by the famous psychologist Ekman, we constructed an emotion analysis framework based on facial expression recognition in a smart learning environment. A feature decomposition method decomposes facial features and expressional features into a face subspace and an expression subspace respectively; expression recognition is then performed in the expression subspace, eliminating the interference of identity-related facial features. Experimental results on the JAFFE database suggest that our method is effective. Facial expression recognition for emotional intervention has been deployed in Magic Learning, an emotional interaction subsystem between learners and virtual teachers in a 3D virtual learning environment.

  1. Feature Selection for Neural Network Based Stock Prediction

    Science.gov (United States)

    Sugunnasil, Prompong; Somhom, Samerkae

    We propose a new feature selection methodology for stock movement prediction, based on finding the features that minimize a correlation relation function. We first generate combinations of features and evaluate each of them with our evaluation function, searching the generated set with a hill-climbing approach. A self-organizing map based stock prediction model is used as the prediction method. We conduct experiments on data sets from the Microsoft Corporation, General Electric Co. and Ford Motor Co. The results show that our feature selection method can improve the efficiency of neural network based stock prediction.
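
    The search procedure described above can be sketched as follows. The objective here (mean pairwise absolute correlation of the selected subset) is a hypothetical stand-in for the paper's correlation relation function, and the swap-based neighborhood is one common hill-climbing choice:

```python
import numpy as np

def mean_abs_corr(X, subset):
    """Objective: average pairwise |correlation| within the subset
    (lower = less redundant). Illustrative stand-in objective."""
    if len(subset) < 2:
        return 0.0
    c = np.corrcoef(X[:, subset], rowvar=False)
    iu = np.triu_indices(len(subset), k=1)
    return float(np.mean(np.abs(c[iu])))

def hill_climb_select(X, k, seed=0):
    """Greedy hill climbing over k-subsets of features: start from a
    random subset and swap one feature at a time while the objective
    keeps strictly decreasing."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    current = list(rng.choice(n, size=k, replace=False))
    best = mean_abs_corr(X, current)
    improved = True
    while improved:
        improved = False
        for i in range(k):
            for cand in range(n):
                if cand in current:
                    continue
                trial = current.copy()
                trial[i] = cand
                score = mean_abs_corr(X, trial)
                if score < best:
                    best, current, improved = score, trial, True
    return sorted(current), best
```

    Because the objective strictly decreases at every accepted swap, the loop always terminates at a local minimum of subset redundancy.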

  2. Face Puzzle – Two new video-based tasks for measuring explicit and implicit aspects of facial emotion recognition

    Directory of Open Access Journals (Sweden)

    Dorit eKliemann

    2013-06-01

    Full Text Available Recognizing others’ emotional states is crucial for effective social interaction. While most facial emotion recognition tasks use explicit prompts that trigger consciously controlled processing, emotional faces are almost exclusively processed implicitly in real life. Recent attempts in social cognition suggest a dual process perspective, whereby explicit and implicit processes largely operate independently. However, due to differences in methodology the direct comparison of implicit and explicit social cognition has remained a challenge. Here, we introduce a new tool to comparably measure implicit and explicit processing aspects comprising basic and complex emotions in facial expressions. We developed two video-based tasks with similar answer formats to assess performance in respective facial emotion recognition processes: Face Puzzle, implicit and explicit. To assess the tasks’ sensitivity to atypical social cognition and to infer interrelationship patterns between explicit and implicit processes in typical and atypical development, we included healthy adults (NT, n = 24) and adults with autism spectrum disorder (ASD, n = 24). Item analyses yielded good reliability of the new tasks. Group-specific results indicated sensitivity to subtle social impairments in high-functioning ASD. Correlation analyses with established implicit and explicit socio-cognitive measures were further in favor of the tasks’ external validity. Between group comparisons provide first hints of differential relations between implicit and explicit aspects of facial emotion recognition processes in healthy compared to ASD participants. In addition, an increased magnitude of between group differences in the implicit task was found for a speed-accuracy composite measure. The new Face Puzzle tool thus provides two new tasks to separately assess explicit and implicit social functioning, for instance, to measure subtle impairments as well as potential improvements due to social

  3. Face puzzle-two new video-based tasks for measuring explicit and implicit aspects of facial emotion recognition.

    Science.gov (United States)

    Kliemann, Dorit; Rosenblau, Gabriela; Bölte, Sven; Heekeren, Hauke R; Dziobek, Isabel

    2013-01-01

    Recognizing others' emotional states is crucial for effective social interaction. While most facial emotion recognition tasks use explicit prompts that trigger consciously controlled processing, emotional faces are almost exclusively processed implicitly in real life. Recent attempts in social cognition suggest a dual process perspective, whereby explicit and implicit processes largely operate independently. However, due to differences in methodology the direct comparison of implicit and explicit social cognition has remained a challenge. Here, we introduce a new tool to comparably measure implicit and explicit processing aspects comprising basic and complex emotions in facial expressions. We developed two video-based tasks with similar answer formats to assess performance in respective facial emotion recognition processes: Face Puzzle, implicit and explicit. To assess the tasks' sensitivity to atypical social cognition and to infer interrelationship patterns between explicit and implicit processes in typical and atypical development, we included healthy adults (NT, n = 24) and adults with autism spectrum disorder (ASD, n = 24). Item analyses yielded good reliability of the new tasks. Group-specific results indicated sensitivity to subtle social impairments in high-functioning ASD. Correlation analyses with established implicit and explicit socio-cognitive measures were further in favor of the tasks' external validity. Between group comparisons provide first hints of differential relations between implicit and explicit aspects of facial emotion recognition processes in healthy compared to ASD participants. In addition, an increased magnitude of between group differences in the implicit task was found for a speed-accuracy composite measure. The new Face Puzzle tool thus provides two new tasks to separately assess explicit and implicit social functioning, for instance, to measure subtle impairments as well as potential improvements due to social cognitive

  4. Gender Classification Based on Geometry Features of Palm Image

    OpenAIRE

    Ming Wu; Yubo Yuan

    2014-01-01

    This paper presents a novel gender classification method based on geometry features of palm image which is simple, fast, and easy to handle. This gender classification method based on geometry features comprises two main attributes. The first one is feature extraction by image processing. The other one is classification system with polynomial smooth support vector machine (PSSVM). A total of 180 palm images were collected from 30 persons to verify the validity of the proposed gender classi...

  5. Unsupervised Feature Selection Based on the Distribution of Features Attributed to Imbalanced Data Sets

    Directory of Open Access Journals (Sweden)

    Mina Alibeigi, Sattar Hashemi & Ali Hamzeh

    2011-04-01

    Full Text Available Since dealing with high dimensional data is computationally complex and sometimes even intractable, recently several feature reduction methods have been developed to reduce the dimensionality of the data in order to simplify the calculation analysis in various applications such as text categorization, signal processing, image retrieval and gene expressions among many others. Among feature reduction techniques, feature selection is one of the most popular methods due to the preservation of the original meaning of features. However, most of the current feature selection methods do not have a good performance when fed on imbalanced data sets, which are pervasive in real world applications. In this paper, we propose a new unsupervised feature selection method attributed to imbalanced data sets, which will remove redundant features from the original feature space based on the distribution of features. To show the effectiveness of the proposed method, popular feature selection methods have been implemented and compared. Experimental results on several imbalanced data sets, derived from the UCI repository database, illustrate the effectiveness of the proposed method in comparison with other rival methods in terms of both AUC and F1 performance measures of 1-Nearest Neighbor and Naïve Bayes classifiers and the percent of the selected features.
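
    One simplified reading of the redundancy-removal step is a greedy correlation filter over the feature space; the threshold value and the use of Pearson correlation are illustrative assumptions, not the paper's exact distribution-based criterion:

```python
import numpy as np

def drop_redundant(X, threshold=0.9):
    """Unsupervised redundancy filter (simplified sketch).

    Scan features left to right and drop any feature whose absolute
    correlation with an already-kept feature exceeds the threshold,
    so only one representative of each highly correlated group survives.
    """
    kept = []
    for j in range(X.shape[1]):
        if all(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) < threshold
               for k in kept):
            kept.append(j)
    return kept
```

    Being fully unsupervised, the filter never consults class labels, which is what makes this style of selection usable on imbalanced data where label-driven criteria are biased toward the majority class.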

  6. Fingerprint image segmentation based on multi-features histogram analysis

    Science.gov (United States)

    Wang, Peng; Zhang, Youguang

    2007-11-01

    An effective fingerprint image segmentation method based on multi-feature histogram analysis is presented. We extract a new feature, together with three other features, to segment fingerprints. The four features form two groups of two, each feature being the reciprocal of its counterpart in the other group. The histograms of the two feature groups are computed to determine which group should be used to segment the target fingerprint; the features can also divide fingerprints into high- and low-quality classes. Experimental results show that our algorithm classifies foreground and background effectively with low computational cost, reduces the number of pseudo-minutiae detected and improves the performance of AFIS.

  7. Optimized features selection for gender classification using optimization algorithms

    OpenAIRE

    KHAN, Sajid Ali; Nazir, Muhammad; RIAZ, Naveed

    2013-01-01

    Optimized feature selection is an important task in gender classification. The optimized features not only reduce the dimensions, but also reduce the error rate. In this paper, we have proposed a technique for the extraction of facial features using both appearance-based and geometric-based feature extraction methods. The extracted features are then optimized using particle swarm optimization (PSO) and the bee algorithm. The geometric-based features are optimized by PSO with ensem...

  8. Facial Sports Injuries

    Science.gov (United States)

    ... Facial sports injuries ... should receive immediate medical attention. Prevention of Facial Sports Injuries: the best way to treat facial sports injuries ...

  9. Children's Facial Trustworthiness Judgments: Agreement and Relationship with Facial Attractiveness.

    Science.gov (United States)

    Ma, Fengling; Xu, Fen; Luo, Xianming

    2016-01-01

    This study examined developmental changes in children's abilities to make trustworthiness judgments based on faces, and the relationship between a child's perception of trustworthiness and facial attractiveness. One hundred and one 8-, 10-, and 12-year-olds, along with 37 undergraduates, were asked to judge the trustworthiness of 200 faces and then to judge their facial attractiveness. The results indicated that children made consistent trustworthiness and attractiveness judgments based on facial appearance, but agreement with adults and within-age agreement on these facial judgments increased with age. Additionally, the agreement levels of judgments made by girls were higher than those by boys. Furthermore, the relationship between trustworthiness and attractiveness judgments increased with age, and the relationship between the two judgments was closer for girls than for boys. These findings suggest that face-based trait judgment ability develops throughout childhood and that, like adults, children may use facial attractiveness as a heuristic cue that signals a stranger's trustworthiness. PMID:27148111

  10. Facial attractiveness.

    Science.gov (United States)

    Little, Anthony C

    2014-11-01

    Facial attractiveness has important social consequences. Despite a widespread belief that beauty cannot be defined, in fact, there is considerable agreement across individuals and cultures on what is found attractive. By considering that attraction and mate choice are critical components of evolutionary selection, we can better understand the importance of beauty. There are many traits that are linked to facial attractiveness in humans and each may in some way impart benefits to individuals who act on their preferences. If a trait is reliably associated with some benefit to the perceiver, then we would expect individuals in a population to find that trait attractive. Such an approach has highlighted face traits such as age, health, symmetry, and averageness, which are proposed to be associated with benefits and so associated with facial attractiveness. This view may postulate that some traits will be universally attractive; however, this does not preclude variation. Indeed, it would be surprising if there existed a template of a perfect face that was not affected by experience, environment, context, or the specific needs of an individual. Research on facial attractiveness has documented how various face traits are associated with attractiveness and various factors that impact on an individual's judgments of facial attractiveness. Overall, facial attractiveness is complex, both in the number of traits that determine attraction and in the large number of factors that can alter attraction to particular faces. A fuller understanding of facial beauty will come with an understanding of how these various factors interact with each other. WIREs Cogn Sci 2014, 5:621-634. doi: 10.1002/wcs.1316 CONFLICT OF INTEREST: The author has declared no conflicts of interest for this article. For further resources related to this article, please visit the WIREs website. PMID:26308869

  11. Application of data fusion in computer facial recognition

    Directory of Open Access Journals (Sweden)

    Wang Ai Qiang

    2013-11-01

    Full Text Available A single recognition method achieves only a low recognition rate in computer facial recognition. We propose a new confluent facial recognition method using data fusion technology, in which a variety of recognition algorithms are combined into a fusion-based face recognition system to improve the recognition rate in several ways. Three levels of fusion are considered: data-level fusion, feature-level fusion and decision-level fusion. The data level uses a simple weighted average algorithm, which is easy to implement; an artificial neural network algorithm is selected for the feature level, and a fuzzy reasoning algorithm is used at the decision level. Finally, we compared the system with the BP neural network algorithm on the MATLAB experimental platform. The results show that the recognition rate is greatly improved after adopting data fusion technology in computer facial recognition.
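
    The data-level step mentioned above, a simple weighted average, can be sketched as follows; the weights and score-matrix layout (rows = probe images, columns = identities) are illustrative assumptions:

```python
import numpy as np

def weighted_average_fusion(score_matrices, weights):
    """Data-level fusion: weighted average of per-recogniser score
    matrices. Each matrix holds match scores, rows = probe images,
    columns = candidate identities."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalise so the fused scores stay on scale
    return sum(wi * np.asarray(s) for wi, s in zip(w, score_matrices))
```

    The fused matrix is then decided as usual, e.g. `fused.argmax(axis=1)` picks the best identity per probe.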

  12. Facial expression recognition based on error-correcting output coding

    Institute of Scientific and Technical Information of China (English)

    余棉水; 朱岸青; 解晓萌

    2014-01-01

    Multi-class classification has long been a hot topic in pattern recognition. This paper proposes a multi-classifier algorithm based on error-correcting output coding (ECOC) and support vector machines (SVM). An error-correcting output coding matrix is designed according to communication coding theory; several mutually independent sub-SVMs are then constructed from the coding matrix and fused into a single multi-class classifier according to the coding principle. To verify the effectiveness of the classifier, Gabor wavelets are used to extract facial expression features, two-dimensional principal component analysis (2DPCA) is applied to reduce the dimensionality of the extracted features, and the classifier is used for facial expression recognition. Experimental results show that the proposed method effectively improves the facial expression recognition rate and is highly robust.
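
    The ECOC construction can be sketched as follows. To keep the example self-contained, a tiny nearest-centroid learner stands in for the sub-SVMs; the 3-class coding matrix is illustrative, and decoding is by minimum Hamming distance to the class codewords:

```python
import numpy as np

class Centroid2:
    """Tiny binary base learner (stand-in for a sub-SVM): predicts +1/-1
    according to which class centroid is nearer."""
    def fit(self, X, b):
        self.pos = X[b == 1].mean(axis=0)
        self.neg = X[b == -1].mean(axis=0)
        return self
    def predict(self, X):
        dp = np.linalg.norm(X - self.pos, axis=1)
        dn = np.linalg.norm(X - self.neg, axis=1)
        return np.where(dp < dn, 1, -1)

def ecoc_fit(X, y, code):
    """Train one binary learner per column of the +1/-1 coding matrix.
    code[c, j] is the bit class c carries in binary problem j."""
    return [Centroid2().fit(X, code[y, j]) for j in range(code.shape[1])]

def ecoc_predict(X, learners, code):
    """Decode: predict all bits, then pick the class whose codeword is
    closest in Hamming distance (this is where errors get corrected)."""
    bits = np.stack([m.predict(X) for m in learners], axis=1)
    dist = np.array([[np.sum(bits[i] != code[c])
                      for c in range(code.shape[0])]
                     for i in range(len(X))])
    return dist.argmin(axis=1)
```

    Because classes are decoded by nearest codeword rather than a single binary vote, a codeword set with large pairwise Hamming distance lets the ensemble absorb occasional bit errors from individual sub-classifiers.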

  13. Techniques in Facial Expression Recognition

    Directory of Open Access Journals (Sweden)

    Avinash Prakash Pandhare

    2016-05-01

    Full Text Available Facial expression recognition is gaining widespread importance as applications related to human-computer interaction increase. This paper surveys techniques and approaches that have been used in the field. Facial expression recognition proceeds in several stages, each of which has been implemented with various approaches: Viola-Jones for face detection, Gabor filters for feature extraction, SVM classifiers for classification, L1 minimization for sparse representation, geometric deformation models, multiple Gabor filters for robust feature extraction, and parallel implementations of Viola-Jones face detection and SVM-based expression classification are discussed in this paper.

  14. Mutual information-based feature selection for radiomics

    Science.gov (United States)

    Oubel, Estanislao; Beaumont, Hubert; Iannessi, Antoine

    2016-03-01

    Background The extraction and analysis of image features (radiomics) is a promising field in the precision medicine era, with applications to prognosis, prediction, and quantification of response to treatment. In this work, we present a mutual information-based method for quantifying the reproducibility of features, a necessary qualification step before their inclusion in big data systems. Materials and Methods Ten patients with Non-Small Cell Lung Cancer (NSCLC) lesions were followed over time (7 time points on average) with Computed Tomography (CT). Five observers segmented lesions using a semi-automatic method, and 27 features describing shape and intensity distribution were extracted. Inter-observer reproducibility was assessed by computing the multi-information (MI) of feature changes over time, and the variability of global extrema. Results The highest MI values were obtained for volume-based features (VBF). The lesion mass (M), surface to volume ratio (SVR) and volume (V) presented statistically significantly higher values of MI than the rest of the features. Within the VBF group, SVR also showed the lowest variability of extrema. The correlation coefficient (CC) of feature values was unable to differentiate between features. Conclusions MI discriminated three features (M, SVR, and V) from the rest in a statistically significant manner. This result is consistent with the order obtained when sorting features by increasing extrema variability. MI is a promising alternative for selecting features to be considered as surrogate biomarkers in a precision medicine context.
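
    A toy version of this reproducibility scoring: average pairwise mutual information of one feature's longitudinal changes across observers, using a histogram MI estimator. This is a simplified proxy for the multi-information the authors compute; the bin count and the use of pairwise (rather than full multi-) information are assumptions:

```python
import numpy as np

def mutual_info(a, b, bins=4):
    """Mutual information (nats) between two series, via a joint histogram."""
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    p = joint / joint.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / np.outer(px, py)[nz])))

def reproducibility(series_by_observer, bins=4):
    """Score one feature: average pairwise MI of its changes over time
    (np.diff) between all observer pairs. High score = observers agree
    on how the feature evolves, i.e. the feature is reproducible."""
    deltas = [np.diff(s) for s in series_by_observer]
    k = len(deltas)
    pairs = [(i, j) for i in range(k) for j in range(i + 1, k)]
    return np.mean([mutual_info(deltas[i], deltas[j], bins) for i, j in pairs])
```

    Features whose changes track a common underlying trend across observers score high, while features dominated by segmentation noise score near the estimator's small positive bias, so ranking by this score separates reproducible from noisy features.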

  15. Remote sensing image classification based on block feature point density analysis and multiple-feature fusion

    Science.gov (United States)

    Li, Shijin; Jiang, Yaping; Zhang, Yang; Feng, Jun

    2015-10-01

    With the development of remote sensing (RS) and related technologies, the resolution of RS images is improving. Compared with moderate or low resolution images, high-resolution images can provide more detailed ground information. However, different kinds of terrain have complex spatial distributions, and the different objects in high-resolution images exhibit a variety of features. These features are not equally effective, but some of them are complementary. Considering this, a new method is proposed to classify RS images based on hierarchical fusion of multiple features. Firstly, RS images are pre-classified into two categories according to whether their feature points are uniformly or non-uniformly distributed. Then, color histogram and Gabor texture features are extracted from the uniformly-distributed category, and linear spatial pyramid matching using sparse coding (ScSPM) features are obtained from the non-uniformly-distributed category. Finally, classification is performed by two support vector machine classifiers. Experimental results on a large RS image database with 2100 images show that the overall classification accuracy is boosted by 10.1% in comparison with the highest accuracy of single-feature classification methods. Compared with other multiple-feature fusion methods, the proposed method achieves the highest classification accuracy on this dataset, reaching 90.1%, and the time complexity of the algorithm is also greatly reduced.

  16. INTEGRATED EXPRESSIONAL AND COLOR INVARIANT FACIAL RECOGNITION SCHEME FOR HUMAN BIOMETRIC SYSTEM

    Directory of Open Access Journals (Sweden)

    M.Punithavalli

    2013-09-01

    Full Text Available In many practical applications such as biometrics, video surveillance and human-computer interaction, face recognition plays a major role. Previous works focused on recognizing and enhancing biometric systems based on facial components. In this work, we build an Integrated Expressional and Color Invariant Facial Recognition scheme for human biometric recognition, suited to public participation areas with different security provisioning. First, the features of the face are identified and processed using a Bayes classifier with RGB and HSV color bands. Second, psychological emotional variances are identified and linked with the respective human facial expressions based on the Facial Action Coding System. Finally, an integrated expressional and color invariant facial recognition scheme is proposed for varied conditions of illumination, pose, transformation, etc. These conditions on the color invariant model suit an easier and more efficient biometric recognition system in the public domain and in high-confidentiality security zones. The integration is derived from genetic operations on the color and expression components of the facial feature system. Experimental evaluation is planned with public face databases (DBs) such as CMU-PIE, Color FERET, XM2VTSDB, SCface, and FRGC 2.0 to estimate the performance of the proposed integrated expressional facial and color invariant recognition scheme (IEFCIRS). Performance evaluation is based on criteria such as recognition rate, security and evaluation time.

  17. Facial blindsight

    Directory of Open Access Journals (Sweden)

    Marco eSolcà

    2015-09-01

    Full Text Available Blindsight denotes unconscious residual visual capacities in the context of an inability to consciously recollect or identify visual information. It has been described for color and shape discrimination, movement or facial emotion recognition. The present study investigates a patient suffering from cortical blindness whilst maintaining select residual abilities in face detection. Our patient presented the capacity to distinguish between jumbled/normal faces, known/unknown faces or famous people’s categories although he failed to explicitly recognize or describe them. Conversely, performance was at chance level when asked to categorize non-facial stimuli. Our results provide clinical evidence for the notion that some aspects of facial processing can occur without perceptual awareness, possibly using direct tracts from the thalamus to associative visual cortex, bypassing the primary visual cortex.

  18. Facial blindsight.

    Science.gov (United States)

    Solcà, Marco; Guggisberg, Adrian G; Schnider, Armin; Leemann, Béatrice

    2015-01-01

    Blindsight denotes unconscious residual visual capacities in the context of an inability to consciously recollect or identify visual information. It has been described for color and shape discrimination, movement or facial emotion recognition. The present study investigates a patient suffering from cortical blindness whilst maintaining select residual abilities in face detection. Our patient presented the capacity to distinguish between jumbled/normal faces, known/unknown faces or famous people's categories although he failed to explicitly recognize or describe them. Conversely, performance was at chance level when asked to categorize non-facial stimuli. Our results provide clinical evidence for the notion that some aspects of facial processing can occur without perceptual awareness, possibly using direct tracts from the thalamus to associative visual cortex, bypassing the primary visual cortex. PMID:26483655

  19. Classifying Chimpanzee Facial Expressions Using Muscle Action

    OpenAIRE

    Parr, Lisa A.; Bridget M Waller; Vick, Sarah J.; Bard, Kim A.

    2007-01-01

    The Chimpanzee Facial Action Coding System (ChimpFACS) is an objective, standardized observational tool for measuring facial movement in chimpanzees based on the well-known human Facial Action Coding System (FACS; P. Ekman & W. V. Friesen, 1978). This tool enables direct structural comparisons of facial expressions between humans and chimpanzees in terms of their common underlying musculature. Here the authors provide data on the first application of the ChimpFACS to validate existing categor...

  20. Dynamic Approaches for Facial Recognition Using Digital Image Speckle Correlation

    Science.gov (United States)

    Rafailovich-Sokolov, Sara; Guan, E.; Afriat, Isablle; Rafailovich, Miriam; Sokolov, Jonathan; Clark, Richard

    2004-03-01

    Digital image analysis techniques have been extensively used in facial recognition. To date, most static facial characterization techniques, which are usually based on Fourier transforms, are sensitive to lighting, shadows, or modification of appearance by makeup, natural aging or surgery. In this study we have demonstrated that it is possible to uniquely identify faces by analyzing the natural motion of facial features with Digital Image Speckle Correlation (DISC). Human skin has a natural pattern produced by the texture of the skin pores, which is easily visible with conventional digital cameras of resolution greater than 4 megapixels. Hence the application of the DISC method to the analysis of facial motion appears to be very straightforward. Here we demonstrate that the vector diagrams produced by this method for facial images are directly correlated to the underlying muscle structure, which is unique to an individual and is not affected by lighting or make-up. Furthermore, we show that this method can also be used for medical diagnosis, in early detection of facial paralysis and other forms of skin disorders.

  1. Moment feature based fast feature extraction algorithm for moving object detection using aerial images.

    Directory of Open Access Journals (Sweden)

    A F M Saifuddin Saif

    Full Text Available Fast and computationally less complex feature extraction for moving object detection using aerial images from unmanned aerial vehicles (UAVs) remains an elusive goal in the field of computer vision research. The types of features used in current studies concerning moving object detection are typically chosen based on improving detection rate rather than on providing fast and computationally less complex feature extraction methods. Because moving object detection using aerial images from UAVs involves motion as seen from a certain altitude, effective and fast feature extraction is a vital issue for optimum detection performance. This research proposes a two-layer bucket approach based on a new feature extraction algorithm referred to as the moment-based feature extraction algorithm (MFEA). Because a moment represents the coherent intensity of pixels and motion estimation is a motion pixel intensity measurement, this research used this relation to develop the proposed algorithm. The experimental results reveal the successful performance of the proposed MFEA algorithm and the proposed methodology.
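
    The link between moments and coherent pixel intensity can be illustrated with raw spatial moments of a grayscale block. This is a minimal pure-Python sketch of the generic moment definition, not necessarily the exact moment set the MFEA uses; the toy block is invented:

```python
# Raw spatial moments of a grayscale image block: M_pq = sum over pixels
# of x^p * y^q * I(x, y). The intensity centroid follows from the
# first-order moments and summarizes where the pixel "mass" sits.

def raw_moment(block, p, q):
    return sum(
        (x ** p) * (y ** q) * val
        for y, row in enumerate(block)
        for x, val in enumerate(row)
    )

def centroid(block):
    """Intensity centroid (x_bar, y_bar) = (M10/M00, M01/M00)."""
    m00 = raw_moment(block, 0, 0)
    return raw_moment(block, 1, 0) / m00, raw_moment(block, 0, 1) / m00

block = [
    [0, 0, 0],
    [0, 9, 0],
    [0, 0, 0],
]
print(centroid(block))  # (1.0, 1.0): all intensity at the centre pixel
```

    A moving bright region shifts the centroid between frames, which is the intuition behind relating moments to motion-pixel intensity.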

  2. Highly comparative, feature-based time-series classification

    CERN Document Server

    Fulcher, Ben D

    2014-01-01

    A highly comparative, feature-based approach to time series classification is introduced that uses an extensive database of algorithms to extract thousands of interpretable features from time series. These features are derived from across the scientific time-series analysis literature, and include summaries of time series in terms of their correlation structure, distribution, entropy, stationarity, scaling properties, and fits to a range of time-series models. After computing thousands of features for each time series in a training set, those that are most informative of the class structure are selected using greedy forward feature selection with a linear classifier. The resulting feature-based classifiers automatically learn the differences between classes using a reduced number of time-series properties, and circumvent the need to calculate distances between time series. Representing time series in this way results in orders of magnitude of dimensionality reduction, allowing the method to perform well on ve...
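
    The greedy forward feature selection described above can be sketched in pure Python. The nearest-centroid scorer stands in for the paper's linear classifier (an assumption for brevity), and the toy data is invented:

```python
# Greedy forward feature selection: repeatedly add the single feature
# that most improves a simple classifier on the training set.

def accuracy(X, y, feats):
    # nearest-centroid classification restricted to the chosen features
    cents = {}
    for cls in set(y):
        rows = [[x[f] for f in feats] for x, lab in zip(X, y) if lab == cls]
        cents[cls] = [sum(col) / len(col) for col in zip(*rows)]
    correct = 0
    for x, lab in zip(X, y):
        v = [x[f] for f in feats]
        pred = min(cents, key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(v, cents[c])))
        correct += pred == lab
    return correct / len(y)

def forward_select(X, y, k):
    chosen, remaining = [], set(range(len(X[0])))
    while remaining and len(chosen) < k:
        best = max(remaining, key=lambda f: accuracy(X, y, chosen + [f]))
        chosen.append(best)
        remaining.remove(best)
    return chosen

# toy data: feature 1 separates the classes, feature 0 is noise
X = [[5, 0.1], [9, 0.0], [4, 1.1], [7, 0.9]]
y = [0, 0, 1, 1]
print(forward_select(X, y, 1))  # [1]
```

    In the paper's setting the same loop runs over thousands of candidate time-series features rather than two.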

  3. Parotid lymphangioma associated with facial nerve paralysis.

    Science.gov (United States)

    Imaizumi, Mitsuyoshi; Tani, Akiko; Ogawa, Hiroshi; Omori, Koichi

    2014-10-01

    Parotid lymphangioma is a relatively rare disease that is usually detected in infancy or early childhood, and which has typical features. Clinical reports of facial nerve paralysis caused by lymphangioma, however, are very rare. Usually, facial nerve paralysis in a child suggests malignancy. Here we report a very rare case of parotid lymphangioma associated with facial nerve paralysis. A 7-year-old boy was admitted to hospital with a rapidly enlarging mass in the left parotid region. Left peripheral-type facial nerve paralysis was also noted. Computed tomography and magnetic resonance imaging also revealed multiple cystic lesions. Open biopsy was undertaken in order to investigate the cause of the facial nerve paralysis. The histopathological findings of the excised tumor were consistent with lymphangioma. Prednisone (40 mg/day) was given in a tapering dose schedule. Facial nerve paralysis was completely cured 1 month after treatment. There has been no recurrent facial nerve paralysis for eight years.

  4. Automatic facial responses to near-threshold presented facial displays of emotion: imitation or evaluation?

    Science.gov (United States)

    Neumann, Roland; Schulz, Stefan M; Lozo, Ljubica; Alpers, Georg W

    2014-02-01

    Automatic facial reactions to near-threshold presented facial displays of emotion can be due to motor-mimicry or evaluation. To examine the mechanisms underlying such automatic facial responses we presented facial displays of joy, anger, and disgust for 16.67ms with a backwards masking technique and assessed electromyographic activity over the zygomaticus major, the levator labii, and the corrugator supercilii. As expected, we found that participants responded to displays of joy with contractions of the zygomaticus major and to expressions of anger with contractions of the corrugator supercilii. Critically, facial displays of disgust automatically activated the corrugator supercilii rather than the levator labii. This supports the notion that evaluative processes mediate facial responses to near-threshold presented facial displays of emotion rather than direct mimicry of emotional facial features. PMID:24370542

  5. A New Computational Methodology for the Construction of Forensic, Facial Composites

    Science.gov (United States)

    Solomon, Christopher; Gibson, Stuart; Maylin, Matthew

    A facial composite generated from an eyewitness’s memory often constitutes the first and only means available for police forces to identify a criminal suspect. To date, commercial computerised systems for constructing facial composites have relied almost exclusively on a feature-based, ‘cut-and-paste’ method whose effectiveness has been fundamentally limited both by the witness’s limited ability to recall and verbalise facial features and by the large dimensionality of the search space. We outline a radically new approach to composite generation which combines a parametric, statistical model of facial appearance with a computational search algorithm based on interactive, evolutionary principles. We describe the fundamental principles on which the new system has been constructed, outline recent innovations in the computational search procedure and also report on the real-world experience of UK police forces who have been using a commercial version of the system.

  6. Acoustic Event Detection Based on MRMR Selected Feature Vectors

    OpenAIRE

    Vozarikova, Eva; Juhar, Jozef; Cizmar, Anton

    2012-01-01

    This paper is focused on the detection of potentially dangerous acoustic events such as gun shots and breaking glass in the urban environment. Various feature extraction methods can be used for representing the sound in a detection system based on Hidden Markov Models of acoustic events. Mel-frequency cepstral coefficients, low-level descriptors defined in the MPEG-7 standard, and other time and spectral features were considered in the system. For the selection of the final subset of features Mi...

  7. Image Retrieval Based on Content Using Color Feature

    OpenAIRE

    Afifi, Ahmed J.; Wesam M. Ashour

    2012-01-01

    Content-based image retrieval from large resources has become an area of wide interest in many applications. In this paper we present a CBIR system that uses Ranklet Transform and the color feature as a visual feature to represent the images. Ranklet Transform is proposed as a preprocessing step to make the image invariant to rotation and any image enhancement operations. To speed up the retrieval time, images are clustered according to their features using k-means clustering algorithm.
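
    The clustering step described above can be sketched as a toy k-means over image feature vectors. This is a pure-Python illustration; the fixed initial centres and 2-D features are assumptions made for brevity, and a real system would use a library implementation:

```python
# Toy k-means: alternately assign points to the nearest centre and move
# each centre to the mean of its assigned points.

def kmeans(points, centres, iters=10):
    for _ in range(iters):
        groups = {i: [] for i in range(len(centres))}
        for p in points:
            i = min(groups, key=lambda c: sum((a - b) ** 2
                                              for a, b in zip(p, centres[c])))
            groups[i].append(p)
        centres = [
            [sum(col) / len(col) for col in zip(*g)] if g else centres[i]
            for i, g in groups.items()
        ]
    return centres

feats = [[0.0, 0.1], [0.1, 0.0], [0.9, 1.0], [1.0, 0.9]]
print(kmeans(feats, [[0.0, 0.0], [1.0, 1.0]]))
```

    At query time, the query image is compared only against images in the nearest cluster, which is what speeds up retrieval.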

  8. Feature-based multiresolution techniques for product design

    Institute of Scientific and Technical Information of China (English)

    LEE Sang Hun; LEE Kunwoo

    2006-01-01

    3D computer-aided design (CAD) systems based on the feature-based solid modelling technique have been widely spread and used for product design. However, when part models associated with features are used in various downstream applications, simplified models in various levels of detail (LODs) are frequently more desirable than the full details of the parts. In particular, the need for feature-based multiresolution representation of a solid model representing an object at multiple LODs in the feature unit is increasing for engineering tasks. One challenge is to generate valid models at various LODs after an arbitrary rearrangement of features using a certain LOD criterion, because composite Boolean operations consisting of union and subtraction are not commutative. The other challenges are to devise a proper topological framework for multiresolution representation, to suggest more reasonable LOD criteria, and to extend applications. This paper surveys the recent research on these issues.

  9. Spatial Circular Granulation Method Based on Multimodal Finger Feature

    Directory of Open Access Journals (Sweden)

    Jinfeng Yang

    2016-01-01

    Full Text Available Finger-based personal identification has become an active research topic in recent years because of its high user acceptance and convenience. How to reliably and effectively fuse multimodal finger features together, however, is still a challenging problem in practice. In this paper, viewing the finger trait as the combination of a fingerprint (FP), finger vein (FV), and finger-knuckle-print (FKP), a new multimodal finger feature recognition scheme is proposed based on granular computing. First, the ridge texture features of FP, FV, and FKP are extracted using Gabor Ordinal Measures (GOM). Second, combining the three-modal GOM feature maps in a color-based manner, we constitute the original feature object set of a finger. To represent finger features effectively, they are granulated at three levels of feature granules (FGs) in a bottom-up manner based on spatial circular granulation. In order to test the performance of the multilevel FGs, a top-down matching method is proposed. Experimental results show that the proposed method achieves a higher recognition rate in finger feature recognition.
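
    Gabor Ordinal Measures build on banks of Gabor filters. A minimal sketch of a single real-valued Gabor kernel in pure Python; the parameter values are illustrative assumptions, not those used for GOM:

```python
# A Gabor kernel is a sinusoidal carrier (wavelength lam, orientation
# theta) modulated by a Gaussian envelope (width sigma).
from math import cos, sin, exp, pi

def gabor_kernel(size=5, theta=0.0, lam=4.0, sigma=2.0):
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # rotate coordinates into the filter's orientation
            xr = x * cos(theta) + y * sin(theta)
            row.append(exp(-(x * x + y * y) / (2 * sigma ** 2))
                       * cos(2 * pi * xr / lam))
        kernel.append(row)
    return kernel

k = gabor_kernel()
print(round(k[2][2], 3))  # 1.0 at the centre: envelope and carrier both peak
```

    Convolving a ridge image with kernels at several orientations responds strongly where ridges align with the carrier, which is what makes such filters suited to fingerprint, vein, and knuckle-print textures.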

  10. Feature-based attention enhances performance by increasing response gain.

    Science.gov (United States)

    Herrmann, Katrin; Heeger, David J; Carrasco, Marisa

    2012-12-01

    Covert spatial attention can increase contrast sensitivity either by changes in contrast gain or by changes in response gain, depending on the size of the attention field and the size of the stimulus (Herrmann et al., 2010), as predicted by the normalization model of attention (Reynolds & Heeger, 2009). For feature-based attention, unlike spatial attention, the model predicts only changes in response gain, regardless of whether the featural extent of the attention field is small or large. To test this prediction, we measured the contrast dependence of feature-based attention. Observers performed an orientation-discrimination task on a spatial array of grating patches. The spatial locations of the gratings were varied randomly so that observers could not attend to specific locations. Feature-based attention was manipulated with a 75% valid and 25% invalid pre-cue, and the featural extent of the attention field was manipulated by introducing uncertainty about the upcoming grating orientation. Performance accuracy was better for valid than for invalid pre-cues, consistent with a change in response gain, when the featural extent of the attention field was small (low uncertainty) or when it was large (high uncertainty) relative to the featural extent of the stimulus. These results for feature-based attention clearly differ from results of analogous experiments with spatial attention, yet both support key predictions of the normalization model of attention. PMID:22580017

  11. Multi-features Based Approach for Moving Shadow Detection

    Institute of Scientific and Technical Information of China (English)

    ZHOU Ning; ZHOU Man-li; XU Yi-ping; FANG Bao-hong

    2004-01-01

    In video-based surveillance applications, moving shadows can affect the correct localization and detection of moving objects. This paper presents a method for shadow detection and suppression used for moving visual object detection. The major novelty of the shadow suppression is the integration of several features, including a photometric invariant color feature, a motion edge feature, and a spatial feature. By refining the handling of falsely detected shadows, the average detection rate of moving objects reaches above 90% in tests on the Hall-Monitor sequence.

  12. Linear feature selection in texture analysis - A PLS based method

    DEFF Research Database (Denmark)

    Marques, Joselene; Igel, Christian; Lillholm, Martin

    2013-01-01

    …, which first applied a PLS regression to rank the features and then defined the best number of features to retain in the model by an iterative learning phase. The outliers in the dataset, which could inflate the number of selected features, were eliminated by a pre-processing step. To cope … and considering all CV groups, the methods selected 36% of the original features available. The diagnosis evaluation reached a generalization area under the ROC curve of 0.92, which was higher than established cartilage-based markers known to relate to OA diagnosis.

  13. Opinion mining feature-level using Naive Bayes and feature extraction based analysis dependencies

    Science.gov (United States)

    Sanda, Regi; Baizal, Z. K. Abdurahman; Nhita, Fhira

    2015-12-01

    Development of the internet and technology has had a major impact, providing a new kind of business called e-commerce. Many e-commerce sites provide convenience in transactions, and consumers can also provide reviews or opinions on the products they purchased. These opinions can be used by both consumers and producers: consumers learn the advantages and disadvantages of particular features of a product, while producers can analyse the strengths and weaknesses of their own products as well as competitors' products. With many opinions, a method is needed so that the reader can grasp the point of the opinions as a whole. The idea emerged from review summarization, which summarizes the overall opinion based on the sentiments and features it contains. In this study, the domain that is the main focus is digital cameras. This research consisted of four steps: 1) giving the system the knowledge to recognize the semantic orientation of an opinion; 2) identifying the features of the product; 3) identifying whether an opinion is positive or negative; 4) summarizing the results. This research discusses methods such as Naïve Bayes for sentiment classification; a feature extraction algorithm based on dependency analysis, one of the tools in Natural Language Processing (NLP); and a knowledge-based dictionary, which is useful for handling implicit features. The end result of the research is a summary that contains a set of consumer reviews organized by feature and sentiment. With the proposed method, accuracy for sentiment classification reaches 81.2% for positive test data and 80.2% for negative test data, and accuracy for feature extraction reaches 90.3%.
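
    The Naive Bayes sentiment step can be sketched on toy review snippets. This is a generic multinomial Naive Bayes with Laplace smoothing, not the paper's exact setup; the tiny vocabulary and labels are invented for illustration:

```python
# Multinomial Naive Bayes: score each class by log prior plus
# log likelihood of each token, with add-one (Laplace) smoothing.
from collections import Counter
from math import log

def train(docs):
    # docs: list of (tokens, label)
    priors = Counter(lab for _, lab in docs)
    counts = {lab: Counter() for lab in priors}
    for toks, lab in docs:
        counts[lab].update(toks)
    vocab = {t for toks, _ in docs for t in toks}
    return priors, counts, vocab

def classify(toks, priors, counts, vocab):
    total = sum(priors.values())
    def score(lab):
        n = sum(counts[lab].values())
        s = log(priors[lab] / total)
        for t in toks:
            s += log((counts[lab][t] + 1) / (n + len(vocab)))
        return s
    return max(priors, key=score)

docs = [("great camera sharp lens".split(), "pos"),
        ("love the zoom great battery".split(), "pos"),
        ("blurry photos poor battery".split(), "neg"),
        ("poor focus terrible flash".split(), "neg")]
model = train(docs)
print(classify("great lens poor flash great".split(), *model))  # pos
```

    The feature-level step in the paper would first attach each such sentiment decision to a product feature extracted via dependency analysis.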

  14. Facial image of Biblical Jews from Israel.

    Science.gov (United States)

    Kobyliansky, E; Balueva, T; Veselovskaya, E; Arensburg, B

    2008-06-01

    The present report deals with reconstructing the facial shapes of ancient inhabitants of Israel based on their cranial remains. The skulls of a male from the Hellenistic period and a female from the Roman period have been reconstructed. They were restored using the most recently developed programs in anthropological facial reconstruction, especially that of the Institute of Ethnology and Anthropology of the Russian Academy of Sciences (Balueva & Veselovskaya 2004). The basic craniometrical measurements of the two skulls were taken according to Martin & Saller (1957) and compared to the data from three ancient populations of Israel described by Arensburg et al. (1980): that of the Hellenistic period dating from 332 to 37 B.C., that of the Roman period, from 37 B.C. to 324 C.E., and that of the Byzantine period that continued until the Arab conquest in 640 C.E. Most of this osteological material was excavated in the Jordan River and the Dead Sea areas. A sample from the XVIIth century Jews from Prague (Matiegka 1926) was also used for osteometrical comparisons. The present study will characterize not only the osteological morphology of the material, but also the facial appearance of ancient inhabitants of Israel. From an anthropometric point of view, the two skulls studied here definitely belong to the same sample from the Hellenistic, Roman, and Byzantine populations of Israel as well as from Jews from Prague. Based on its facial reconstruction, the male skull may belong to the large Mediterranean group that inhabited this area from historic to modern times. The female skull also exhibits all the Mediterranean features but, in addition, probably some equatorial (African) mixture manifested by the shape of the reconstructed nose and the facial prognathism. PMID:18712157

  15. Features of underwater echo extraction based on signal sparse decomposition

    Institute of Scientific and Technical Information of China (English)

    YANG Bo; BU Yinyong; ZHAO Haiming

    2012-01-01

    In order to better realize sound echo recognition of underwater materials with heavily uneven surfaces, a feature extraction method based on the theory of signal sparse decomposition has been proposed. Instead of a common time-frequency dictionary, sets of training echo samples are used directly as the dictionary to realize echo sparse decomposition under L1 optimization and extract a kind of energy feature of the echo. Experiments on three kinds of bottom materials, including the Cobalt Crust, show that the Fisher distribution with this method is superior to that of edge features and of Singular Value Decomposition (SVD) features in the wavelet domain. This means that a much better classification result for underwater bottom materials can be obtained with the proposed energy features than with the other two. It is concluded that using echo samples as a dictionary is feasible and that the class information of the echo introduced by this dictionary helps to obtain better echo features.

  16. Content Based Image Retrieval by Multi Features using Image Blocks

    Directory of Open Access Journals (Sweden)

    Arpita Mathur

    2013-12-01

    Full Text Available Content-based image retrieval (CBIR) is an effective method of retrieving images from large image resources. CBIR is a technique in which images are indexed by extracting their low-level features, such as color, texture, shape, and spatial location. Effective and efficient feature extraction mechanisms are required to improve existing CBIR performance. This paper presents a novel approach to a CBIR system in which higher retrieval efficiency is achieved by combining the information of the image features color, shape, and texture. The color feature is extracted using color histograms for image blocks, the Canny edge detection algorithm is used for the shape feature, and block-wise HSB extraction is used for the texture feature. The feature set of the query image is compared with the feature set of each image in the database. The experiments show that fusing multiple features gives better retrieval results than the approach used by Rao et al. This paper presents a comparative study of the performance of the two different approaches to a CBIR system in which the image features color, shape, and texture are used.
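
    The block-wise colour histogram feature can be sketched in pure Python. Block size, bin count, and intensity range here are illustrative assumptions, not the paper's settings:

```python
# Split a grayscale "image" into square blocks and concatenate one
# small intensity histogram per block into a single feature vector.

def block_histograms(img, block=2, bins=4, maxval=16):
    # assumes pixel values lie in [0, maxval)
    feats = []
    for by in range(0, len(img), block):
        for bx in range(0, len(img[0]), block):
            hist = [0] * bins
            for y in range(by, by + block):
                for x in range(bx, bx + block):
                    hist[img[y][x] * bins // maxval] += 1
            feats.extend(hist)
    return feats

img = [
    [0, 0, 15, 15],
    [0, 0, 15, 15],
    [4, 4, 8, 8],
    [4, 4, 8, 8],
]
print(block_histograms(img))
# [4, 0, 0, 0,  0, 0, 0, 4,  0, 4, 0, 0,  0, 0, 4, 0]
```

    Keeping one histogram per block preserves coarse spatial layout, which a single global histogram would discard.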

  17. Feature selection gait-based gender classification under different circumstances

    Science.gov (United States)

    Sabir, Azhin; Al-Jawad, Naseer; Jassim, Sabah

    2014-05-01

    This paper proposes gender classification based on human gait features and investigates the problem of two variations, clothing (wearing coats) and carrying a bag, in addition to the normal gait sequence. The feature vectors in the proposed system are constructed after applying a wavelet transform. Three different sets of features are proposed in this method. The first is a spatio-temporal distance set dealing with the distances between different parts of the human body (such as the feet, knees, hands, height, and shoulders) during one gait cycle. The second and third feature sets are constructed from the approximation and non-approximation coefficients of the human body, respectively. To extract these two sets of features we divided the human body into two parts, the upper and lower body, based on the golden-ratio proportion. In this paper, we have adopted a statistical method for constructing the feature vector from the above sets. The dimension of the constructed feature vector is reduced based on the Fisher score as a feature selection method to optimize its discriminating significance. Finally, k-Nearest Neighbor is applied as the classification method. Experimental results demonstrate that our approach provides a more realistic scenario and relatively better performance compared with existing approaches.
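
    The Fisher-score ranking used above for feature selection can be sketched in pure Python; the two-class toy data is an assumption:

```python
# Fisher score of one feature: between-class scatter of the class means
# divided by the within-class scatter. Higher = more discriminative.

def fisher_score(values, labels):
    overall = sum(values) / len(values)
    num = den = 0.0
    for cls in set(labels):
        v = [x for x, l in zip(values, labels) if l == cls]
        mu = sum(v) / len(v)
        var = sum((x - mu) ** 2 for x in v) / len(v)
        num += len(v) * (mu - overall) ** 2
        den += len(v) * var
    return num / den if den else float("inf")

labels = ["m", "m", "f", "f"]
feature_a = [1.0, 1.1, 3.0, 3.1]  # well separated between classes
feature_b = [1.0, 3.0, 1.1, 3.1]  # overlapping
print(fisher_score(feature_a, labels) > fisher_score(feature_b, labels))  # True
```

    Ranking every dimension this way and keeping only the top-scoring ones is the dimensionality reduction step before the k-Nearest Neighbor classifier.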

  18. An Effective Combined Feature For Web Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    H.M.R.B Herath

    2015-08-01

    Full Text Available Technology advances, the emergence of large-scale multimedia applications, and the revolution of the World Wide Web have changed the world into a digital age. Anybody can use their mobile phone to take a photo at any time, anywhere, and upload that image to ever-growing image databases. Development of effective techniques for visual and multimedia retrieval systems is one of the most challenging and important directions of future research. This paper proposes an effective combined feature for web-based image retrieval. Frequently used colour and texture features are explored in order to develop a combined feature for this purpose. Three widely used colour features (colour moments, colour coherence vector, and colour correlogram) and three texture features (grey-level co-occurrence matrix, Tamura features, and Gabor filter) were analyzed for their performance. Precision and recall were used to evaluate the performance of each of these techniques. By comparing precision and recall values, the methods that performed best were taken and combined to form a hybrid feature. The developed combined feature was evaluated by developing a web-based CBIR system. A web crawler was used to first crawl through web sites; images found in those sites were downloaded, and the combined feature representation technique was used to extract image features. The test results indicated that this web system can be used to index web images with the combined feature representation schema and to find similar images. Random image retrievals using the web system show that the combined feature can be used to retrieve images belonging to the general image domain. Retrieval accuracy is notably high for natural images such as outdoor scenes, images of flowers, etc. Also, images which have a similar colour and texture distribution were retrieved as similar even though the images belonged to different semantic categories.
This can be ideal for an artist who wants
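
    The precision and recall measures used above to compare feature combinations can be sketched as follows; the retrieved and relevant sets are toy assumptions:

```python
# Precision = fraction of retrieved images that are relevant.
# Recall    = fraction of relevant images that were retrieved.

def precision_recall(retrieved, relevant):
    hits = len(set(retrieved) & set(relevant))
    return hits / len(retrieved), hits / len(relevant)

retrieved = ["img1", "img4", "img7", "img9"]  # top-4 results for a query
relevant = ["img1", "img2", "img7"]           # ground-truth matches
print(precision_recall(retrieved, relevant))  # (0.5, 0.666...)
```

    Computing these per feature over a query set, and then combining the best-scoring features, is the selection procedure the abstract describes.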

  19. Age group and gender recognition from human facial images

    OpenAIRE

    Shewaye, Tizita Nesibu

    2013-01-01

    This work presents an automatic human gender and age group recognition system based on human facial images. It makes an extensive experiment with raw pixel intensity features and Discrete Cosine Transform (DCT) coefficient features, with Principal Component Analysis and k-Nearest Neighbor classification, to identify the best recognition approach. The final results show that approaches using DCT coefficients outperform their counterparts, resulting in a 99% correct gender recognition rate and 6...
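
    The DCT coefficient features can be illustrated with a direct 2-D DCT-II on a toy block, in pure Python. Which coefficients the paper keeps is not specified here, so the example only shows the transform itself:

```python
# Direct (O(n^4)) 2-D DCT-II with orthonormal scaling. Low-frequency
# coefficients summarize coarse intensity structure and are a common
# compact face feature.
from math import cos, pi, sqrt

def dct2(block):
    n = len(block)
    alpha = lambda k: sqrt(1 / n) if k == 0 else sqrt(2 / n)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = sum(
                block[x][y]
                * cos((2 * x + 1) * u * pi / (2 * n))
                * cos((2 * y + 1) * v * pi / (2 * n))
                for x in range(n) for y in range(n)
            )
            out[u][v] = alpha(u) * alpha(v) * s
    return out

flat = [[8, 8], [8, 8]]        # constant 2x2 "image"
coeffs = dct2(flat)
print(round(coeffs[0][0], 6))  # 16.0: DC term = n * mean intensity
print(round(abs(coeffs[0][1]), 6))  # 0.0: no variation, no AC energy
```

    Keeping a handful of low-index coefficients per image, then applying PCA, yields the compact vectors fed to the k-Nearest Neighbor classifier.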

  20. Novel dynamic Bayesian networks for facial action element recognition and understanding

    Science.gov (United States)

    Zhao, Wei; Park, Jeong-Seon; Choi, Dong-You; Lee, Sang-Woong

    2011-12-01

    In daily life, language is an important tool of communication between people. Besides language, facial action can also provide a great amount of information. Therefore, facial action recognition has become a popular research topic in the field of human-computer interaction (HCI). However, facial action recognition is quite a challenging task due to its complexity. In a literal sense, there are thousands of facial muscular movements, many of which have very subtle differences. Moreover, muscular movements always occur simultaneously when the pose is changed. To address this problem, we first build a fully automatic facial points detection system based on a local Gabor filter bank and principal component analysis. Then, novel dynamic Bayesian networks are proposed to perform facial action recognition using the junction tree algorithm over a limited number of feature points. In order to evaluate the proposed method, we have used the Korean face database for model training. For testing, we used the CUbiC FacePix, facial expressions and emotion database, Japanese female facial expression database, and our own database. Our experimental results clearly demonstrate the feasibility of the proposed approach.

  1. Facial expression (mood) recognition from facial images using committee neural networks

    OpenAIRE

    Hariharan SI; Reddy Narender P; Kulkarni Saket S

    2009-01-01

    Abstract Background Facial expressions are important in facilitating human communication and interactions. Also, they are used as an important tool in behavioural studies and in medical rehabilitation. Facial image based mood detection techniques may provide a fast and practical approach for non-invasive mood detection. The purpose of the present study was to develop an intelligent system for facial image based expression classification using committee neural networks. Methods Several facial ...

  2. Feature-based tolerancing for intelligent inspection process definition

    Energy Technology Data Exchange (ETDEWEB)

    Brown, C.W.

    1993-07-01

    This paper describes a feature-based tolerancing capability that complements a geometric solid model with an explicit representation of conventional and geometric tolerances. This capability is focused on supporting an intelligent inspection process definition system. The feature-based tolerance model's benefits include advancing complete product definition initiatives (e.g., STEP -- Standard for the Exchange of Product model data), supplying computer-integrated manufacturing applications (e.g., generative process planning and automated part programming) with product definition information, and assisting in the solution of measurement performance issues. A feature-based tolerance information model was developed based upon the notion of a feature's toleranceable aspects and describes an object-oriented scheme for representing and relating tolerance features, tolerances, and datum reference frames. For easy incorporation, the tolerance feature entities are interconnected with STEP solid model entities. This schema will explicitly represent the tolerance specification for mechanical products, support advanced dimensional measurement applications, and assist in tolerance-related methods divergence issues.

  3. A novel facial expression recognition method based on semantic knowledge of analytical hierarchy process

    Institute of Scientific and Technical Information of China (English)

    胡步发; 黄银成; 陈炳兴

    2011-01-01

    At present, there are intrinsic differences between machine recognition of facial expressions and human perception in facial expression recognition systems, which affect the precision of facial expression recognition. In order to reduce the semantic gap between the low-level visual features of face images and their high-level semantics, a novel facial expression recognition method based on semantic knowledge of the analytical hierarchy process (AHP) is presented. The analytical hierarchy process is adopted to describe the high-level semantics of the face images in the training set, which are further used to establish semantic feature vectors. In the low-level visual feature extraction stage, a second-order PCA (principal component analysis) method is proposed to extract the texture features of face images. In the recognition stage, only the low-level visual features of the input face image are used, and a k-nearest neighbor method combined with the semantic features from the training stage is used to classify facial expressions. The proposed method combines low-level visual features with high-level semantic features, reducing the semantic gap between them. Experiments were conducted on the Japanese Female Facial Expression (JAFFE) database and an overall recognition rate of 93.92% was achieved. Theoretical analysis and experimental results both show that the proposed method has a higher recognition rate.

  4. Methods to quantify soft-tissue based facial growth and treatment outcomes in children: a systematic review.

    Directory of Open Access Journals (Sweden)

    Sander Brons

    Full Text Available CONTEXT: Technological advancements have led craniofacial researchers and clinicians into the era of three-dimensional digital imaging for quantitative evaluation of craniofacial growth and treatment outcomes. OBJECTIVE: To give an overview of soft-tissue based methods for quantitative longitudinal assessment of facial dimensions in children until six years of age and to assess the reliability of these methods in studies with good methodological quality. DATA SOURCE: PubMed, EMBASE, Cochrane Library, Web of Science, Scopus and CINAHL were searched. A hand search was performed to check for additional relevant studies. STUDY SELECTION: Primary publications on facial growth and treatment outcomes in children younger than six years of age were included. DATA EXTRACTION: Independent data extraction by two observers. A quality assessment instrument was used to determine the methodological quality. Methods, used in studies with good methodological quality, were assessed for reliability expressed as the magnitude of the measurement error and the correlation coefficient between repeated measurements. RESULTS: In total, 47 studies were included describing 4 methods: 2D x-ray cephalometry; 2D photography; anthropometry; 3D imaging techniques (surface laser scanning, stereophotogrammetry and cone beam computed tomography). In general the measurement error was below 1 mm and 1° and correlation coefficients range from 0.65 to 1.0. CONCLUSION: Various methods have shown to be reliable. However, at present stereophotogrammetry seems to be the best 3D method for quantitative longitudinal assessment of facial dimensions in children until six years of age due to its millisecond fast image capture, archival capabilities, high resolution and no exposure to ionizing radiation.
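
    The review expresses reliability as the magnitude of the measurement error between repeated measurements. One common formulation is Dahlberg's method error; the review does not name a single formula, so treating it as Dahlberg's is an assumption, and the landmark distances below are toy values:

```python
# Dahlberg's error: sqrt( sum(d_i^2) / (2n) ) over paired differences
# d_i between a first and a repeated measurement of the same quantity.
from math import sqrt

def dahlberg(first, second):
    d = [a - b for a, b in zip(first, second)]
    return sqrt(sum(x * x for x in d) / (2 * len(d)))

m1 = [31.2, 28.5, 30.1, 29.8]  # first measurement session (mm)
m2 = [31.0, 28.9, 30.0, 29.5]  # repeated measurement session (mm)
print(round(dahlberg(m1, m2), 3))  # 0.194
```

    An error of this size, well below 1 mm, matches the magnitudes the review reports for the reliable methods.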

  5. Emotion Classification Using Facial Expression

    Directory of Open Access Journals (Sweden)

    Devi Arumugam

    2011-08-01

    Full Text Available Human emotional facial expressions play an important role in interpersonal relations, because humans demonstrate and convey a great deal of evident information visually rather than verbally. Although humans recognize facial expressions virtually without effort or delay, reliable expression recognition by machine remains a challenge to this day. To automate recognition of emotional state, machines must be taught to understand facial gestures. In this paper we developed an algorithm to identify a person's emotional state through facial expressions such as anger, disgust, and happiness, across different age groups and situations. We used a Radial Basis Function network (RBFN) for classification, and Fisher's Linear Discriminant (FLD) and Singular Value Decomposition (SVD) for feature selection.
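    The RBFN classification step can be sketched as Gaussian activations around class centers, with the predicted emotion being the class whose centers produce the largest total activation. This is a hedged, minimal illustration (toy centers and labels, not the paper's trained network):

```python
import numpy as np

def rbf_activations(x, centers, gamma=1.0):
    """Gaussian radial-basis activation of sample x for each center."""
    return np.exp(-gamma * np.linalg.norm(centers - x, axis=1) ** 2)

def rbf_classify(x, centers, center_labels):
    """Pick the emotion whose centers produce the largest summed activation."""
    act = rbf_activations(x, centers)
    scores = {}
    for a, lbl in zip(act, center_labels):
        scores[lbl] = scores.get(lbl, 0.0) + a
    return max(scores, key=scores.get)

# Toy feature-space centers, one per emotion class.
centers = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
center_labels = ["happy", "angry", "disgust"]
result = rbf_classify(np.array([0.9, 1.1]), centers, center_labels)
```

    In a full RBFN the hidden activations would feed trained linear output weights; the argmax-of-activation rule here is a simplification of that readout.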

  6. Spatiotemporal Features for Asynchronous Event-based Data

    Directory of Open Access Journals (Sweden)

    Xavier eLagorce

    2015-02-01

    Full Text Available Bio-inspired asynchronous event-based vision sensors are currently introducing a paradigm shift in visual information processing. These new sensors rely on a stimulus-driven principle of light acquisition similar to biological retinas. They are event-driven and fully asynchronous, thereby reducing redundancy and encoding exact times of input signal changes, leading to a very precise temporal resolution. Approaches for higher-level computer vision often rely on the reliable detection of features in visual frames, but similar definitions of features for the novel dynamic and event-based visual input representation of silicon retinas have so far been lacking. This article addresses the problem of learning and recognizing features for event-based vision sensors, which capture properties of truly spatiotemporal volumes of sparse visual event information. A novel computational architecture for learning and encoding spatiotemporal features is introduced based on a set of predictive recurrent reservoir networks, competing via winner-take-all selection. Features are learned in an unsupervised manner from real-world input recorded with event-based vision sensors. It is shown that the networks in the architecture learn distinct and task-specific dynamic visual features, and can predict their trajectories over time.

  7. Contextual Query Perfection by Affective Features Based Implicit Contextual Semantic Relevance Feedback in Multimedia Information Retrieval

    Directory of Open Access Journals (Sweden)

    Anil K. Tripathi

    2012-09-01

    Full Text Available Multimedia information may have multiple semantics depending on context, temporal interest, and user preferences. Hence we exploit the plausibility of the context associated with a semantic concept in retrieving relevant information. We propose an Affective Feature Based Implicit Contextual Semantic Relevance Feedback (AICSRF) scheme to investigate whether audio and speech, along with visual information, can determine the current context in which the user wants to retrieve information, and to further investigate whether affective feedback can be employed as an implicit source of evidence in the CSRF cycle to increase the system's contextual semantic understanding. We introduce an Emotion Recognition Unit (ERU) that comprises a spatiotemporal Gabor filter to capture spontaneous facial expressions and an emotional word recognition system that uses phonemes to recognize spoken emotional words. We propose a Contextual Query Perfection Scheme (CQPS) to learn and refine the current context, which can be used for query perfection in the RF cycle to understand the semantics of the query on the basis of relevance judgments made by the ERU. Observations suggest that CQPS in AICSRF, incorporating such affective features, reduces the search space, and hence retrieval time, and increases the system's contextual semantic understanding.

  8. Colesteatoma causando paralisia facial Cholesteatoma causing facial paralysis

    Directory of Open Access Journals (Sweden)

    José Ricardo Gurgel Testa

    2003-10-01

    blood supply or production of neurotoxic substances secreted from either the cholesteatoma matrix or bacteria enclosed in the tumor. AIM: To evaluate the incidence, clinical features and treatment of facial palsy due to cholesteatoma. STUDY DESIGN: Clinical retrospective. MATERIAL AND METHOD: Retrospective study of 10 cases of facial paralysis due to cholesteatoma, selected through a survey of 206 decompressions of the facial nerve due to various aetiologies performed over the last 10 years at UNIFESP-EPM. RESULTS: The incidence of facial paralysis due to cholesteatoma in this study was 4.85%, with female predominance (60%). The average age of the patients was 39 years. The duration and severity of the facial palsy, together with the extension of the lesion, were important for the functional recovery of the facial nerve. CONCLUSION: An early surgical approach is necessary in these cases to restore nerve function adequately. When disruption or intense fibrous replacement of the facial nerve occurs, nerve grafting (greater auricular/sural nerves) and/or hypoglossal-facial anastomosis may be suggested.

  9. EMOTION ANALYSIS OF SONGS BASED ON LYRICAL AND AUDIO FEATURES

    Directory of Open Access Journals (Sweden)

    Adit Jamdar

    2015-05-01

    Full Text Available In this paper, a method is proposed to detect the emotion of a song based on its lyrical and audio features. Lyrical features are generated by segmentation of lyrics during the process of data extraction. ANEW and WordNet knowledge is then incorporated to compute Valence and Arousal values. In addition to this, linguistic association rules are applied to ensure that the issue of ambiguity is properly addressed. Audio features are used to supplement the lyrical ones and include attributes like energy, tempo, and danceability. These features are extracted from The Echo Nest, a widely used music intelligence platform. Construction of training and test sets is done on the basis of social tags extracted from the last.fm website. The classification is done by applying feature weighting and stepwise threshold reduction on the k-Nearest Neighbors algorithm to provide fuzziness in the classification.
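    The ANEW-based computation above ultimately places each song in a valence/arousal space before classification. A minimal illustrative mapping from that space to emotion labels (the common four-quadrant convention, not the paper's exact k-NN classifier) might look like:

```python
def emotion_from_va(valence, arousal):
    """Map (valence, arousal) in [-1, 1] to a quadrant emotion label.
    The four labels follow a common convention and are not taken from the paper."""
    if valence >= 0:
        return "happy" if arousal >= 0 else "relaxed"
    return "angry" if arousal >= 0 else "sad"

high_energy_positive = emotion_from_va(0.7, 0.5)    # high valence, high arousal
low_energy_negative = emotion_from_va(-0.6, -0.4)   # low valence, low arousal
```

    The paper's actual classifier replaces these hard quadrant boundaries with feature-weighted k-NN and stepwise threshold reduction, which provides the fuzziness the abstract mentions.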

  10. A Facial Expression Recognition Method Based on Deep Learning

    Institute of Scientific and Technical Information of China (English)

    王剑云; 李小霞

    2015-01-01

    Addressing the problem that traditional facial expression recognition methods do not perform robustly, we propose an algorithm based on deep learning. First, we train two sparse auto-encoders at two scales; the hidden-layer parameters form a series of convolutional kernels, which we use to extract first-layer features. Then we obtain second-layer features through max-pooling operators, which improves the invariance of the features. Finally, we parallelize seven four-layer neural networks to accomplish the recognition task. The experimental results show that this deep neural network structure performs robustly on the facial expression recognition task when the test samples' identity information does not appear in the training samples.
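    The two-layer feature extraction described above (convolution with learned kernels, then max pooling) can be sketched as follows. The kernel here is a stand-in for one learned by the sparse auto-encoder; the sizes are toy values:

```python
import numpy as np

def conv2d_valid(img, kernel):
    """'Valid' 2-D convolution (no padding); implemented as cross-correlation."""
    kh, kw = kernel.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling; rows/cols that don't fit a window are dropped."""
    oh, ow = fmap.shape[0] // size, fmap.shape[1] // size
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = fmap[i * size:(i + 1) * size, j * size:(j + 1) * size].max()
    return out

img = np.ones((4, 4))        # toy image patch
kernel = np.ones((2, 2))     # stand-in for a learned kernel
fmap = conv2d_valid(img, kernel)
pooled = max_pool(fmap)
```

    Pooling the convolved map gives features that tolerate small translations, which is the invariance property the abstract refers to.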

  11. NEURO FUZZY MODEL FOR FACE RECOGNITION WITH CURVELET BASED FEATURE IMAGE

    Directory of Open Access Journals (Sweden)

    SHREEJA R,

    2011-06-01

    Full Text Available A facial recognition system is a computer application for automatically identifying or verifying a person from a digital image or a video frame from a video source. One way to do this is by comparing selected facial features from the image against a facial database. It is typically used in security systems and can be compared to other biometric techniques such as fingerprint or iris recognition systems. Every face has approximately 80 nodal points (distance between the eyes, width of the nose, etc.). The basic face recognition system captures the sample, extracts features, compares templates and performs matching. In this paper two methods of face recognition are compared: neural networks and the neuro-fuzzy method. For both, the curvelet transform is used for feature extraction. The feature vector is formed by extracting statistical quantities of the curvelet coefficients. From the statistical results it is concluded that the neuro-fuzzy method is the better technique for face recognition compared to the neural network.
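    Forming a feature vector from statistical quantities of transform coefficients can be sketched as below. The choice of statistics (mean, standard deviation, energy per subband) is illustrative; the curvelet decomposition itself is assumed to happen elsewhere:

```python
import numpy as np

def stats_feature_vector(subbands):
    """Concatenate mean, standard deviation, and energy of each coefficient subband."""
    feats = []
    for band in subbands:
        band = np.asarray(band, dtype=float)
        feats.extend([band.mean(), band.std(), np.sum(band ** 2)])
    return np.array(feats)

# Two toy "subbands" standing in for curvelet coefficient arrays.
vec = stats_feature_vector([[1.0, 1.0], [2.0, 2.0]])
```

    The resulting fixed-length vector is what a classifier (neural network or neuro-fuzzy) would consume.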

  12. Image feature extraction based multiple ant colonies cooperation

    Science.gov (United States)

    Zhang, Zhilong; Yang, Weiping; Li, Jicheng

    2015-05-01

    This paper presents a novel image feature extraction algorithm based on the cooperation of multiple ant colonies. First, a low-resolution version of the input image is created using the Gaussian pyramid algorithm, and two ant colonies are spread on the source image and the low-resolution image respectively. The colony on the low-resolution image uses phase congruency as its inspiration information, while the colony on the source image uses gradient magnitude. The two colonies cooperate to extract salient image features by sharing the same pheromone matrix. After the optimization process, image features are detected by thresholding the pheromone matrix. Since both the gradient magnitude and the phase congruency of the input image are used as inspiration information for the ant colonies, the algorithm shows higher intelligence and is capable of acquiring more complete and meaningful image features than simpler edge detectors.
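    The shared-pheromone mechanism can be sketched with the two core steps of any ant-colony optimizer: per-iteration evaporation plus deposits, followed by thresholding the accumulated matrix. The deposit map and rates below are toy values, not the paper's parameters:

```python
import numpy as np

def pheromone_step(pheromone, deposits, rho=0.1):
    """One iteration: evaporate by rate rho, then add this step's deposits."""
    return (1.0 - rho) * pheromone + deposits

def extract_features(pheromone, threshold):
    """Binary feature map: cells with enough accumulated pheromone."""
    return pheromone > threshold

pher = np.zeros((2, 2))
# Deposits proportional to each colony's inspiration map (e.g. a strong edge at (0, 0)).
deposits = np.array([[1.0, 0.0], [0.0, 0.2]])
for _ in range(5):
    pher = pheromone_step(pher, deposits)
feature_map = extract_features(pher, threshold=2.0)
```

    Because both colonies add to the same matrix, evidence from the gradient-based colony and the phase-congruency colony reinforces the same cells before the threshold is applied.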

  13. Intrinsic feature-based pose measurement for imaging motion compensation

    Science.gov (United States)

    Baba, Justin S.; Goddard, Jr., James Samuel

    2014-08-19

    Systems and methods for generating motion-corrected tomographic images are provided. A method includes obtaining first images of a region of interest (ROI) to be imaged, associated with a first time, where the first images are associated with different positions and orientations with respect to the ROI. The method also includes defining an active region in each of the first images and selecting intrinsic features in each of the first images based on the active region. Next, the method identifies the portion of the intrinsic features that temporally and spatially match intrinsic features in corresponding second images of the ROI associated with a second time prior to the first time, and computes three-dimensional (3D) coordinates for that portion of the intrinsic features. Finally, the method computes a relative pose for the first images based on the 3D coordinates.

  14. Facial expression recognition based on lifting wavelet and FLD

    Institute of Scientific and Technical Information of China (English)

    董玉龙; 姜威

    2012-01-01

    A new facial expression feature extraction method based on the lifting wavelet and FLD is presented. The lifting wavelet transform is computed entirely in the spatial domain and has multi-resolution characteristics, so it is advantageous for extracting the detail features of the image, and its short running time makes it easy to implement. After the lifting wavelet transform, the combination of the low-frequency and high-frequency components is taken as the overall feature; experiments show that this preserves the main expression components of the expression image. The Fisher linear discriminant (FLD) is then used to extract features from the lifting-wavelet-processed images, and the k-nearest neighbor method is used for classification. Experiments on the JAFFE database show a recognition rate of 94.3% and a recognition time of only 2.9 s, proving the new method to be faster and more effective.
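    One lifting step of the simplest (Haar-like) lifting wavelet illustrates the split/predict/update structure the abstract relies on; this is a generic sketch, not the paper's specific lifting filter:

```python
def haar_lifting_step(signal):
    """One lifting step: split into even/odd samples, predict the odds from the
    evens (the residual is the high-frequency detail), then update the evens
    to form the low-frequency approximation. Works entirely in place of any
    Fourier-domain machinery, which is why lifting is fast."""
    even, odd = signal[0::2], signal[1::2]
    detail = [o - e for e, o in zip(even, odd)]          # predict step
    approx = [e + d / 2 for e, d in zip(even, detail)]   # update step (preserves the mean)
    return approx, detail

approx, detail = haar_lifting_step([2.0, 4.0, 6.0, 8.0])
```

    The concatenation of `approx` and `detail` corresponds to the combined low-frequency and high-frequency feature the method uses.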

  15. Facial expression recognition algorithm based on brightness detection and SVM

    Institute of Scientific and Technical Information of China (English)

    陈亚雄; 王西博; 王超

    2011-01-01

    A facial expression recognition algorithm based on skin detection and SVM for still images containing expression information is introduced. First, skin detection combined with integral projection is used to locate the eyebrow-eye region and the mouth region, automatically segmenting the expression sub-regions. Second, features of the expression sub-regions are extracted by the Gabor wavelet transform, and effective Gabor expression features are then selected by the Fisher linear discriminant (FLD) to reduce the dimensionality and redundancy of the features. Finally, the features are sent to a support vector machine (SVM) to classify the different expressions. The algorithm was tested on a Japanese female expression database, achieving high recognition precision. The feasibility of the method was verified by experiments.

  16. Facial Expression Recognition Algorithm Based on Gabor Wavelet Automatic Segmentation and SVM

    Institute of Scientific and Technical Information of China (English)

    陈亚雄

    2011-01-01

    A facial expression recognition algorithm based on the Gabor wavelet and SVM is proposed for still images containing expression information. Mathematical morphology combined with integral projection is adopted to locate the eyebrow and eye region, and the mean value computed within a template is employed to locate the mouth region, automatically segmenting the expression sub-regions. Features of the expression sub-regions are extracted by the Gabor wavelet transform, and effective Gabor expression features are then selected by the Fisher linear discriminant (FLD) to reduce the dimensionality and redundancy of the features. The features are sent to a support vector machine (SVM) to classify the different expressions. The algorithm was tested on a Japanese female expression database, achieving high recognition precision. The feasibility of this method was verified by experiments.
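    As a sketch of the Gabor feature extraction step shared by records 15 and 16, a single real-valued Gabor kernel (one orientation and wavelength) can be generated as below; the parameter values are illustrative, and a full Gabor bank would vary `theta` and `wavelength`:

```python
import math

def gabor_kernel(size, theta, wavelength, sigma, gamma=0.5):
    """Real part of a Gabor filter: a Gaussian envelope multiplied by a cosine
    wave oriented at angle theta. Returns a (size x size) list of lists."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate coordinates into the filter's orientation.
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            envelope = math.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
            carrier = math.cos(2 * math.pi * xr / wavelength)
            row.append(envelope * carrier)
        kernel.append(row)
    return kernel

k = gabor_kernel(size=7, theta=0.0, wavelength=4.0, sigma=2.0)
```

    Convolving each expression sub-region with such kernels at several orientations and scales yields the raw Gabor features that FLD then compresses.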

  17. Electronic image stabilization system based on global feature tracking

    Institute of Scientific and Technical Information of China (English)

    Zhu Juanjuan; Guo Baolong

    2008-01-01

    A new robust electronic image stabilization system is presented, which involves feature-point-tracking-based global motion estimation and Kalman-filtering-based motion compensation. First, global motion is estimated from the local motions of selected feature points. To handle local moving objects and inevitable mismatches, a matching validation based on the stable relative distances within the point set is proposed, maintaining high accuracy and robustness. Next, the accumulated global motion parameters are corrected by Kalman filtering. The experimental results illustrate that the proposed system is effective at stabilizing translational, rotational, and zooming jitter and is robust to local motions.
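    The Kalman-filtering stage can be sketched with a minimal scalar filter smoothing one accumulated motion parameter (e.g. horizontal translation); the noise values are illustrative, not taken from the paper:

```python
def kalman_smooth(measurements, q=1e-3, r=0.25):
    """Scalar Kalman filter over a sequence of noisy motion measurements.
    q is the process-noise variance, r the measurement-noise variance (toy values)."""
    x, p = 0.0, 1.0          # state estimate and its variance
    smoothed = []
    for z in measurements:
        p += q               # predict: uncertainty grows over time
        k = p / (p + r)      # Kalman gain
        x += k * (z - x)     # update: pull the estimate toward the measurement
        p *= (1.0 - k)
        smoothed.append(x)
    return smoothed

out = kalman_smooth([1.0, 1.0, 1.0])
```

    The smoothed trajectory converges toward the measurements while suppressing frame-to-frame jitter; the difference between the raw and smoothed motion is what the compensation stage removes.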

  18. Facial Expression Recognition in Nonvisual Imagery

    Science.gov (United States)

    Olague, Gustavo; Hammoud, Riad; Trujillo, Leonardo; Hernández, Benjamín; Romero, Eva

    This chapter presents two novel approaches that allow computer vision applications to perform human facial expression recognition (FER). From a problem standpoint, we focus on FER beyond the human visual spectrum, in long-wave infrared imagery, thus allowing us to offer illumination-independent solutions to this important human-computer interaction problem. From a methodological standpoint, we introduce two different feature extraction techniques: a principal component analysis-based approach with automatic feature selection and one based on texture information selected by an evolutionary algorithm. In the former, facial features are selected based on interest point clusters, and classification is carried out using eigenfeature information; in the latter, an evolutionary-based learning algorithm searches for optimal regions of interest and texture features based on classification accuracy. Both of these approaches use a support vector machine committee for classification. Results show effective performance for both techniques, from which we can conclude that thermal imagery contains worthwhile information for the FER problem beyond the human visual spectrum.
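    A bare-bones version of the PCA step underlying the first approach (projecting face vectors onto eigenfeatures) might look like this; it is a generic PCA-via-SVD sketch under toy data, not the chapter's implementation:

```python
import numpy as np

def pca_project(X, n_components):
    """Project the rows of X onto the top principal components (computed via SVD)."""
    Xc = X - X.mean(axis=0)                      # center the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T              # eigenfeature coordinates

# Four toy "face vectors" whose variance lies mostly along one direction.
X = np.array([[0.0, 0.0], [1.0, 0.1], [2.0, -0.1], [3.0, 0.0]])
Z = pca_project(X, 1)
```

    The one retained coordinate captures most of the variance of the toy data; in the chapter's pipeline such coordinates, computed around interest-point clusters, feed the SVM committee.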

  19. Prototype Theory Based Feature Representation for PolSAR Images

    OpenAIRE

    Huang Xiaojing; Yang Xiangli; Huang Pingping; Yang Wen

    2016-01-01

    This study presents a new feature representation approach for Polarimetric Synthetic Aperture Radar (PolSAR) image based on prototype theory. First, multiple prototype sets are generated using prototype theory. Then, regularized logistic regression is used to predict similarities between a test sample and each prototype set. Finally, the PolSAR image feature representation is obtained by ensemble projection. Experimental results of an unsupervised classification of PolSAR images show that our...

  20. Feature Learning for Fingerprint-Based Positioning in Indoor Environment

    OpenAIRE

    Zengwei Zheng; Yuanyi Chen; Tao He; Lin Sun; Dan Chen

    2015-01-01

    Recent years have witnessed a growing interest in using Wi-Fi received signal strength for indoor fingerprint-based positioning. However, previous study about this problem has primarily faced two main challenges. One is that positioning fingerprint feature using received signal strength is unstable due to heterogeneous devices and dynamic environment status, which will greatly degrade the positioning accuracy. Another is that some improved positioning fingerprint features will suffer the curs...

  1. Feature-based attention enhances performance by increasing response gain

    OpenAIRE

    Herrmann, Katrin; Heeger, David J.; Carrasco, Marisa

    2012-01-01

    Covert spatial attention can increase contrast sensitivity either by changes in contrast gain or by changes in response gain, depending on the size of the attention field and the size of the stimulus (Herrmann, Montaser-Kouhsari, Carrasco, & Heeger, 2010), as predicted by the normalization model of attention (Reynolds & Heeger, 2009). For feature-based attention, unlike spatial attention, the model predicts only changes in response gain, regardless of whether the featural extent of the attent...

  2. Frequency feature based quantification of defect depth and thickness

    Science.gov (United States)

    Tian, Shulin; Chen, Kai; Bai, Libing; Cheng, Yuhua; Tian, Lulu; Zhang, Hong

    2014-06-01

    This study develops a frequency feature based pulsed eddy current method. A frequency feature, termed frequency to zero, is proposed for subsurface defects and metal loss quantification in metallic specimens. A curve fitting method is also employed to generate extra frequency components and improve the accuracy of the proposed method. Experimental validation is carried out. Conclusions and further work are derived on the basis of the studies.

  3. Dependence of the appearance-based perception of criminality, suggestibility, and trustworthiness on the level of pixelation of facial images.

    Science.gov (United States)

    Nurmoja, Merle; Eamets, Triin; Härma, Hanne-Loore; Bachmann, Talis

    2012-10-01

    While the dependence of face identification on the level of pixelation-transform of facial images has been well studied, similar research on face-based trait perception is underdeveloped. Because depiction formats used for hiding individual identity in visual media and evidential material recorded by surveillance cameras often consist of pixelized images, knowing the effects of pixelation on person perception has practical relevance. Here, the results of two experiments are presented showing the effect of facial image pixelation on the perception of criminality, trustworthiness, and suggestibility. It appears that individuals (N = 46, M age = 21.5 yr., SD = 3.1 for criminality ratings; N = 94, M age = 27.4 yr., SD = 10.1 for other ratings) have the ability to discriminate between facial cues indicative of these perceived traits even at a coarse level of image pixelation (10-12 pixels per face horizontally), and that the discriminability increases as the pixelation becomes less coarse. Perceived criminality and trustworthiness appear to be better carried by the pixelized images than perceived suggestibility.

  4. Gender differences in the neural network of facial mimicry of smiles – An rTMS study

    OpenAIRE

    Korb, Sebastian; Malsert, Jennifer; Rochas, Vincent; Rihs, Tonia; Rieger, Sebastian Walter; Schwab, Samir; Niedenthal, Paula M.; Grandjean, Didier Maurice

    2015-01-01

    Under theories of embodied emotion, exposure to a facial expression triggers facial mimicry. Facial feedback is then used to recognize and judge the perceived expression. However, the neural bases of facial mimicry and of the use of facial feedback remain poorly understood. Furthermore, gender differences in facial mimicry and emotion recognition suggest that different neural substrates might accompany the production of facial mimicry, and the processing of facial feedback, in men and women. ...

  5. A spiking neural network based cortex-like mechanism and application to facial expression recognition.

    Science.gov (United States)

    Fu, Si-Yao; Yang, Guo-Sheng; Kuai, Xin-Kai

    2012-01-01

    In this paper, we present a quantitative, highly structured cortex-simulated model, which can be simply described as feedforward, hierarchical simulation of ventral stream of visual cortex using biologically plausible, computationally convenient spiking neural network system. The motivation comes directly from recent pioneering works on detailed functional decomposition analysis of the feedforward pathway of the ventral stream of visual cortex and developments on artificial spiking neural networks (SNNs). By combining the logical structure of the cortical hierarchy and computing power of the spiking neuron model, a practical framework has been presented. As a proof of principle, we demonstrate our system on several facial expression recognition tasks. The proposed cortical-like feedforward hierarchy framework has the merit of capability of dealing with complicated pattern recognition problems, suggesting that, by combining the cognitive models with modern neurocomputational approaches, the neurosystematic approach to the study of cortex-like mechanism has the potential to extend our knowledge of brain mechanisms underlying the cognitive analysis and to advance theoretical models of how we recognize face or, more specifically, perceive other people's facial expression in a rich, dynamic, and complex environment, providing a new starting point for improved models of visual cortex-like mechanism. PMID:23193391

  6. High-precision Detection of Facial Landmarks to Estimate Head Motions Based on Vision Models

    Directory of Open Access Journals (Sweden)

    Xiaohong W. Gao

    2007-01-01

    Full Text Available A new approach to determining head movement is presented, based on pictures recorded via digital cameras monitoring the scanning process of PET. Two human vision models, CIECAMs and BMV, are applied to segment the face region via skin colour and to detect local facial landmarks, respectively. The developed algorithms are evaluated on pictures (n=12) monitoring a subject's head while simulating PET scanning, captured by two calibrated cameras (located at the front and left side of the subject). It is shown that the centres of the chosen facial landmarks, the eye corners and the middle point of the nose base, have been detected with very high precision (0.64 pixels). Three landmarks were identified on pictures received by the front camera and two by the side camera. Preliminary results on 2D images with known movement parameters show that rotations and translations along the X, Y, and Z directions can be obtained very accurately via the described methods.

  7. A Spiking Neural Network Based Cortex-Like Mechanism and Application to Facial Expression Recognition

    Directory of Open Access Journals (Sweden)

    Si-Yao Fu

    2012-01-01

    Full Text Available In this paper, we present a quantitative, highly structured cortex-simulated model, which can be simply described as a feedforward, hierarchical simulation of the ventral stream of the visual cortex using a biologically plausible, computationally convenient spiking neural network system. The motivation comes directly from recent pioneering works on detailed functional decomposition analysis of the feedforward pathway of the ventral stream of the visual cortex and developments on artificial spiking neural networks (SNNs). By combining the logical structure of the cortical hierarchy and the computing power of the spiking neuron model, a practical framework has been presented. As a proof of principle, we demonstrate our system on several facial expression recognition tasks. The proposed cortical-like feedforward hierarchy framework has the merit of being capable of dealing with complicated pattern recognition problems, suggesting that, by combining cognitive models with modern neurocomputational approaches, the neurosystematic approach to the study of cortex-like mechanisms has the potential to extend our knowledge of the brain mechanisms underlying cognitive analysis and to advance theoretical models of how we recognize faces or, more specifically, perceive other people's facial expressions in a rich, dynamic, and complex environment, providing a new starting point for improved models of visual cortex-like mechanisms.

  8. Facial Recognition Technology: An analysis with scope in India

    CERN Document Server

    Thorat, S B; Dandale, Jyoti P

    2010-01-01

    A facial recognition system is a computer application for automatically identifying or verifying a person from a digital image or a video frame from a video source. One of the ways to do this is by comparing selected facial features from the image against a facial database. It is typically used in security systems and can be compared to other biometrics such as fingerprint or iris recognition systems. In this paper we focus on 3-D facial recognition systems and biometric facial recognition systems. We critique facial recognition systems, noting their effectiveness and weaknesses. This paper also discusses the scope of recognition systems in India.

  9. Incorporating Feature-Based Annotations into Automatically Generated Knowledge Representations

    Science.gov (United States)

    Lumb, L. I.; Lederman, J. I.; Aldridge, K. D.

    2006-12-01

    Earth Science Markup Language (ESML) is efficient and effective in representing scientific data in an XML-based formalism. However, features of the data being represented are not accounted for in ESML. Such features might derive from events (e.g., a gap in data collection due to instrument servicing), identifications (e.g., a scientifically interesting area/volume in an image), or some other source. In order to account for features in an ESML context, we consider them from the perspective of annotation, i.e., the addition of information to existing documents without changing the originals. Although it is possible to extend ESML to incorporate feature-based annotations internally (e.g., by extending the XML schema for ESML), there are a number of complicating factors that we identify. Rather than pursuing the ESML-extension approach, we focus on an external representation for feature-based annotations via XML Pointer Language (XPointer). In previous work (Lumb & Aldridge, HPCS 2006, IEEE, doi:10.1109/HPCS.2006.26), we have shown that it is possible to extract relationships from ESML-based representations and capture the results in the Resource Description Framework (RDF). Thus we explore and report on this same requirement for XPointer-based annotations of ESML representations. As in our past efforts, the Global Geodynamics Project (GGP) allows us to illustrate this approach for introducing annotations into automatically generated knowledge representations with a real-world example.

  10. Feature-based tolerancing for advanced manufacturing applications

    Energy Technology Data Exchange (ETDEWEB)

    Brown, C.W.; Kirk, W.J. III; Simons, W.R.; Ward, R.C.; Brooks, S.L.

    1994-11-01

    A primary requirement for the successful deployment of advanced manufacturing applications is the need for a complete and accessible definition of the product. This product definition must not only provide an unambiguous description of a product's nominal shape but must also contain complete tolerance specification and general property attributes. Likewise, the product definition's geometry, topology, tolerance data, and modeler manipulative routines must be fully accessible through a robust application programmer interface. This paper describes a tolerancing capability using features that complements a geometric solid model with a representation of conventional and geometric tolerances and non-shape property attributes. This capability guarantees a complete and unambiguous definition of tolerances for manufacturing applications. An object-oriented analysis and design of the feature-based tolerance domain was performed. The design represents and relates tolerance features, tolerances, and datum reference frames. The design also incorporates operations that verify correctness and check for the completeness of the overall tolerance definition. The checking algorithm is based upon the notion of satisfying all of a feature's toleranceable aspects. Benefits from the feature-based tolerance modeler include: advancing complete product definition initiatives, incorporating tolerances in product data exchange, and supplying computer-integrated manufacturing applications with tolerance information.

  11. Facial Erythema of Rosacea - Aetiology, Different Pathophysiologies and Treatment Options.

    Science.gov (United States)

    Steinhoff, Martin; Schmelz, Martin; Schauber, Jürgen

    2016-06-15

    Rosacea is a common chronic skin condition that displays a broad diversity of clinical manifestations. Although the pathophysiological mechanisms of the four subtypes are not completely elucidated, the key elements often present are augmented immune responses of the innate and adaptive immune system, and neurovascular dysregulation. The most common primary feature of all cutaneous subtypes of rosacea is transient or persistent facial erythema. Perilesional erythema of papules or pustules is based on the sustained vasodilation and plasma extravasation induced by the inflammatory infiltrates. In contrast, transient erythema has rapid kinetics induced by trigger factors independent of papules or pustules. Amongst the current treatments for facial erythema of rosacea, only the selective α2-adrenergic receptor agonist brimonidine 0.33% topical gel (Mirvaso®) is approved. This review aims to discuss the potential causes, different pathophysiologies and current treatment options to address the unmet medical needs of patients with facial erythema of rosacea. PMID:26714888

  12. Guiding atypical facial growth back to normal. Part 1: Understanding facial growth.

    Science.gov (United States)

    Galella, Steve; Chow, Daniel; Jones, Earl; Enlow, Donald; Masters, Ari

    2011-01-01

    Many practitioners find the complexity of facial growth overwhelming; they merely observe and accept the clinical features of atypical growth without comprehending the long-term consequences. Facial growth and development is a strictly controlled biological process. Normal growth involves ongoing bone remodeling and positional displacement. Atypical growth begins when this biological balance is disturbed. With an understanding of these processes, clinicians can adequately assess patients, determine the causes of atypical facial growth patterns, and design effective treatment plans. This is the first in a series of articles addressing normal facial growth, atypical facial growth, patient assessment, causes of atypical facial growth, and guiding facial growth back to normal.

  13. Facial myokymia as a presenting symptom of vestibular schwannoma.

    Directory of Open Access Journals (Sweden)

    Joseph B

    2002-07-01

    Facial myokymia is a rare presenting feature of a vestibular schwannoma. We present a 48-year-old woman with a large right vestibular schwannoma who presented with facial myokymia. It is postulated that the facial myokymia might be due to a defect in the motor axons of the 7th nerve or to brainstem compression by the tumor.

  14. Feature-based attentional modulation of orientation perception in somatosensation

    Directory of Open Access Journals (Sweden)

    Meike Annika Schweisfurth

    2014-07-01

    In a reaction time study of human tactile orientation detection, the effects of spatial attention and feature-based attention were investigated. Subjects had to give speeded responses to target orientations (parallel and orthogonal to the finger axis) in a random stream of oblique tactile distractor orientations presented to their index and ring fingers. Before each block of trials, subjects received a tactile cue at one finger. By manipulating the validity of this cue with respect to its location and orientation (feature), we provided an incentive for subjects to attend spatially to the cued location and, only there, to the cued orientation. Subjects showed quicker responses to parallel than to orthogonal targets, pointing to an orientation anisotropy in sensory processing. Faster reaction times were also observed in location-matched trials, i.e. when targets appeared on the cued finger, representing a perceptual benefit of spatial attention. Most importantly, reaction times were shorter for orientations matching the cue, both at the cued and at the uncued location, documenting a global enhancement of tactile sensation by feature-based attention. This is the first report of a perceptual benefit of feature-based attention outside the spatial focus of attention in somatosensory perception. The similarity to effects of feature-based attention in visual perception supports the notion of matching attentional mechanisms across sensory domains.

  15. Iris Recognition System Based on Feature Level Fusion

    Directory of Open Access Journals (Sweden)

    Dr. S. R. Ganorkar

    2013-11-01

    Multibiometric systems utilize the evidence presented by multiple biometric sources (e.g., face and fingerprint, multiple fingers of a single user, multiple matchers, etc.) in order to determine or verify the identity of an individual. Information from multiple sources can be consolidated at several distinct levels. Fusing two different biometric traits is difficult, however, because (i) the feature sets of the modalities may be incompatible (e.g., the minutiae set of fingerprints and eigen-coefficients of faces); (ii) the relationship between the feature spaces of different biometric systems may not be known; (iii) concatenating two feature vectors may result in a feature vector of very large dimensionality, leading to the 'curse of dimensionality' problem, huge storage requirements and different processing algorithms. Moreover, multiple images of a single biometric trait do not show much variation. In this paper, we therefore present an efficient technique for feature-based fusion in a multimodal system where the left eye and right eye are used as input. Iris recognition basically comprises iris localization, feature extraction, and identification. The algorithm uses Canny edge detection to identify the inner and outer boundaries of the iris. The image is then fed to a Gabor wavelet transform to extract features, and matching is finally done using an indexing algorithm. The results of the analysis indicate that the proposed technique can lead to a substantial improvement in performance.
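As a rough illustration of the encoding-and-matching stage described above (boundary localization with Canny edge detection and iris unwrapping are omitted), the following NumPy sketch builds a toy "iris code" from the signs of Gabor filter responses and compares codes by Hamming distance. The kernel parameters, the four orientations, and the single-position sampling are simplifications for brevity, not the paper's algorithm:

```python
import numpy as np

def gabor_kernel(size=9, wavelength=4.0, theta=0.0, sigma=2.0):
    """Real part of a 2-D Gabor kernel at orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)

def iris_code(patch):
    """Binarize the signs of Gabor responses at four orientations."""
    bits = []
    for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
        k = gabor_kernel(theta=theta)
        h = k.shape[0]
        resp = float((patch[:h, :h] * k).sum())  # response at one position only, to keep the sketch short
        bits.append(resp > 0)
    return np.array(bits)

def hamming(a, b):
    """Fraction of disagreeing bits between two iris codes."""
    return float(np.mean(a != b))

rng = np.random.default_rng(0)
left = rng.standard_normal((16, 16))                  # stand-in for an unwrapped left-iris texture
right = left + 0.05 * rng.standard_normal((16, 16))   # nearly identical texture (same person)
code_l, code_r = iris_code(left), iris_code(right)
print(hamming(code_l, code_r))
```

A real system would sample responses over the whole unwrapped iris (thousands of bits rather than four) and accept a match when the Hamming distance falls below a threshold.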

  16. Web-based Visualisation of Head Pose and Facial Expressions Changes: Monitoring Human Activity Using Depth Data

    DEFF Research Database (Denmark)

    Kalliatakis, Grigorios; Vidakis, Nikolaos; Triantafyllidis, Georgios

    2016-01-01

    Despite significant recent advances in the field of head pose estimation and facial expression recognition, raising the cognitive level when analysing human activity presents serious challenges to current concepts. Motivated by the need of generating comprehensible visual representations from...... and accurately estimate head pose changes in unconstrained environment. In order to complete the secondary process of recognising four universal dominant facial expressions (happiness, anger, sadness and surprise), emotion recognition via facial expressions (ERFE) was adopted. After that, a lightweight data...

  17. Image counter-forensics based on feature injection

    Science.gov (United States)

    Iuliani, M.; Rossetto, S.; Bianchi, T.; De Rosa, Alessia; Piva, A.; Barni, M.

    2014-02-01

    Starting from the concept that many image forensic tools are based on the detection of some feature revealing a particular aspect of the history of an image, in this work we model the counter-forensic attack as the injection of a specific fake feature pointing to the same history as an authentic reference image. We propose a general attack strategy that does not rely on a specific detector structure. Given a source image x and a target image y, the adversary processes x in the pixel domain, producing an attacked image x̃, perceptually similar to x, whose feature f(x̃) is as close as possible to f(y) computed on y. Our proposed counter-forensic attack consists in the constrained minimization of the feature distance Φ(z) = ‖f(z) − f(y)‖ through iterative methods based on gradient descent. To overcome the intrinsic limit due to the numerical estimation of the gradient on large images, we propose a feature decomposition process that reduces the problem to many subproblems on the blocks into which the image is partitioned. The proposed strategy has been tested by attacking three different features, and its performance has been compared to state-of-the-art counter-forensic methods.
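The minimization can be sketched as follows. The feature f here is a toy two-dimensional statistic (mean and variance) standing in for a real forensic feature, and the image is a single small block so the finite-difference gradient stays cheap; the actual method likewise estimates the gradient numerically and decomposes large images into blocks, but also enforces perceptual similarity, which this sketch omits:

```python
import numpy as np

def feature(img):
    """Toy 2-D feature (mean, variance) standing in for a real forensic feature f."""
    return np.array([img.mean(), img.var()])

def attack(block, target_f, steps=200, lr=0.5, eps=1e-3):
    """Gradient descent on Phi(z) = ||f(z) - target_f|| with a finite-difference gradient.
    No perceptual or pixel-range constraint is enforced in this sketch."""
    z = block.copy()
    for _ in range(steps):
        phi = np.linalg.norm(feature(z) - target_f)
        if phi < 1e-4:
            break
        grad = np.zeros_like(z)
        for idx in np.ndindex(z.shape):   # numerical gradient, one pixel at a time
            z[idx] += eps
            grad[idx] = (np.linalg.norm(feature(z) - target_f) - phi) / eps
            z[idx] -= eps
        z -= lr * grad
    return z

rng = np.random.default_rng(1)
x = rng.uniform(0.3, 0.7, (8, 8))   # source image: one small block, so the gradient stays cheap
y = rng.uniform(0.0, 1.0, (8, 8))   # authentic reference image with a different feature value
x_att = attack(x, feature(y))
print(np.linalg.norm(feature(x_att) - feature(y)))
```

The per-pixel finite-difference loop is exactly what becomes intractable on full-size images, which is what motivates the block decomposition: each block is attacked independently against its share of the target feature.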

  19. A flower image retrieval method based on ROI feature

    Institute of Scientific and Technical Information of China (English)

    洪安祥; 陈刚; 李均利; 池哲儒; 张亶

    2004-01-01

    Flower image retrieval is a very important step for computer-aided plant species recognition. In this paper, we propose an efficient segmentation method based on color clustering and domain knowledge to extract flower regions from flower images. For flower retrieval, we use the color histogram of a flower region to characterize the color features of the flower and two shape-based feature sets, Centroid-Contour Distance (CCD) and Angle Code Histogram (ACH), to characterize the shape features of the flower contour. Experimental results showed that our flower region extraction method based on color clustering and domain knowledge can produce accurate flower regions. Flower retrieval results on a database of 885 flower images collected from 14 plant species showed that our Region-of-Interest (ROI) based retrieval approach using both color and shape features performs better than a method based on the global color histogram proposed by Swain and Ballard (1991) and a method based on domain knowledge-driven segmentation and color names proposed by Das et al. (1999).
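The Centroid-Contour Distance signature mentioned above can be sketched as follows; the angular binning and max-normalization choices here are illustrative assumptions, not necessarily those of the paper:

```python
import numpy as np

def centroid_contour_distance(contour, n_samples=36):
    """Centroid-Contour Distance (CCD): distances from the shape centroid to the
    contour, sampled in regular angular sectors and normalized for scale invariance."""
    contour = np.asarray(contour, dtype=float)
    centroid = contour.mean(axis=0)
    rel = contour - centroid
    angles = np.arctan2(rel[:, 1], rel[:, 0])
    dists = np.hypot(rel[:, 0], rel[:, 1])
    # bin contour points into angular sectors and take the mean radius per sector
    bins = ((angles + np.pi) / (2 * np.pi) * n_samples).astype(int) % n_samples
    sig = np.zeros(n_samples)
    for b in range(n_samples):
        mask = bins == b
        if mask.any():
            sig[b] = dists[mask].mean()
    return sig / sig.max()

# A circle gives a flat CCD signature; an ellipse does not.
t = np.linspace(0, 2 * np.pi, 360, endpoint=False)
circle = np.c_[np.cos(t), np.sin(t)]
ellipse = np.c_[2 * np.cos(t), np.sin(t)]
print(centroid_contour_distance(circle).std(), centroid_contour_distance(ellipse).std())
```

Two flower contours can then be compared by a distance between their CCD signatures, with the color histogram handling the complementary color cue.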

  20. Whispered speaker identification based on feature and model hybrid compensation

    Institute of Scientific and Technical Information of China (English)

    GU Xiaojiang; ZHAO Heming; LU Gang

    2012-01-01

    In order to increase the short-time whispered speaker recognition rate under variable channel conditions, hybrid compensation in the model and feature domains is proposed. The method applies joint factor analysis in the training stage: it extracts the speaker factor and eliminates the channel factor by estimating speaker and channel spaces from the training speech. In the test stage, the channel factor of the test speech is projected into feature space for feature compensation, so channel information is removed in both the model and feature domains to improve the recognition rate. Experimental results show that hybrid compensation obtains similar recognition rates under the three different training channel conditions and is more effective than joint factor analysis alone on short whispered speech.
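A toy sketch of the feature-domain half of this compensation: in practice the channel subspace U and the per-utterance channel factor x are estimated by joint factor analysis, whereas here they are simply generated, and the synthetic frame features are built so that the effect of subtracting the projected channel component is visible:

```python
import numpy as np

rng = np.random.default_rng(2)
dim, n_frames = 20, 100

# Assumed given: a learned low-rank channel subspace U and the channel
# factor x estimated for the test utterance (JFA would provide both).
U = 0.1 * rng.standard_normal((dim, 3))
x = rng.standard_normal(3)

speaker_offset = rng.standard_normal(dim)   # speaker-dependent shift we want to keep
frames = speaker_offset + U @ x + 0.01 * rng.standard_normal((n_frames, dim))

# Feature-domain compensation: project the estimated channel component
# out of every frame before scoring.
compensated = frames - U @ x

residual_channel = np.linalg.norm(compensated.mean(axis=0) - speaker_offset)
print(residual_channel)
```

After subtraction, the frame mean is dominated by the speaker-dependent term again, which is why the compensated features score consistently across different training channels.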