WorldWideScience

Sample records for based facial feature

  1. Face Recognition Based on Facial Features

    Directory of Open Access Journals (Sweden)

    Muhammad Sharif

    2012-08-01

    Over the last decade, many different methods have been proposed and developed for face recognition, one of the most challenging areas of image processing. Face recognition has numerous applications in security and crime-investigation systems. The study comprises three phases: face detection, facial feature extraction and face recognition. The first phase is face detection, in which the region of interest, i.e., the feature region, is extracted. The second phase is feature extraction, in which the facial features, i.e., eyes, nose and lips, are extracted from the detected face area. The last module is the face recognition phase, which uses the extracted left eye for recognition by combining Eigenfeature and Fisherfeature representations.

  2. Frame-Based Facial Expression Recognition Using Geometrical Features

    Directory of Open Access Journals (Sweden)

    Anwar Saeed

    2014-01-01

    To make human-computer interaction (HCI) as good as human-human interaction, an efficient approach to human emotion recognition is required. These emotions can be fused from several modalities such as facial expression, hand gesture, acoustic data, and biophysiological data. In this paper, we address frame-based recognition of the universal human facial expressions (happiness, surprise, anger, disgust, fear, and sadness) with the help of several geometrical features. Unlike many other geometry-based approaches, the frame-based method does not rely on prior knowledge of a person-specific neutral expression; such knowledge requires human intervention and is not available in real scenarios. Additionally, we provide a method to investigate the performance of geometry-based approaches under various facial point localization errors. From an evaluation on two public benchmark datasets, we found that using only eight facial points we can achieve the state-of-the-art recognition rate, whereas the competing state-of-the-art geometry-based approach exploits features derived from 68 facial points and requires prior knowledge of the person-specific neutral expression. The expression recognition rate using geometrical features is adversely affected by errors in facial point localization, especially for expressions with subtle facial deformations.
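
    As an illustration of the kind of geometric features such frame-based approaches use, the sketch below derives pairwise distances and orientation angles from a handful of 2D facial points and normalises them by the inter-ocular distance. The point set, the eye-centre convention and the normalisation are illustrative assumptions, not the authors' exact feature definitions.

```python
import numpy as np

def geometric_features(points):
    """Derive simple geometric features from 2D facial points.

    points : (N, 2) array of (x, y) landmark coordinates, e.g. eye
    centres, brow points and mouth corners.  Returns pairwise distances
    normalised by the inter-ocular distance plus the orientation angle
    of each point pair, so the features are scale invariant.
    """
    pts = np.asarray(points, dtype=float)
    # Hypothetical convention: points 0 and 1 are the two eye centres.
    iod = np.linalg.norm(pts[0] - pts[1])          # inter-ocular distance
    feats = []
    n = len(pts)
    for i in range(n):
        for j in range(i + 1, n):
            d = pts[j] - pts[i]
            feats.append(np.linalg.norm(d) / iod)  # normalised distance
            feats.append(np.arctan2(d[1], d[0]))   # orientation angle
    return np.array(feats)

# Example with 8 synthetic points (two eyes, two brow points, nose tip,
# nose base, two mouth corners).
example = np.array([[30, 40], [70, 40], [30, 30], [70, 30],
                    [50, 55], [50, 65], [38, 80], [62, 80]])
print(geometric_features(example).shape)   # (56,) = 28 pairs x 2 features
```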

  3. Sequential Clustering based Facial Feature Extraction Method for Automatic Creation of Facial Models from Orthogonal Views

    CERN Document Server

    Ghahari, Alireza

    2009-01-01

    Multiview 3D face modeling has attracted increasing attention recently and has become one of the potential avenues for future video systems. We aim at more reliable and robust automatic feature extraction and natural 3D feature construction from 2D features detected on a pair of frontal- and profile-view face images. We propose several heuristic algorithms to minimize possible errors introduced by the prevalent non-perfect orthogonal condition and non-coherent luminance. In our approach, we first extract the 2D features that are visible to both cameras in both views. Then, we estimate the coordinates of the features in the hidden profile view based on the visible features extracted in the two orthogonal views. Finally, based on the coordinates of the extracted features, we deform a 3D generic model to perform the desired 3D clone modeling. The present study demonstrates the suitability of the resulting facial models for practical applications such as face recognition and facial animation.
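
    A minimal sketch of the 3D lifting step under an ideal orthogonal-view assumption: the frontal view supplies (x, y), the profile view supplies (z, y), and the two y estimates are averaged. The paper's heuristics for non-perfect orthogonality and hidden-view estimation are not reproduced here.

```python
import numpy as np

def lift_to_3d(frontal_xy, profile_zy):
    """Combine 2D features from ideal orthogonal views into 3D points.

    frontal_xy : (N, 2) (x, y) coordinates seen in the frontal view.
    profile_zy : (N, 2) (z, y) coordinates of the same features in the
                 profile view.  The two y estimates rarely agree exactly
                 (non-perfect orthogonality, noise), so they are averaged.
    """
    frontal_xy = np.asarray(frontal_xy, float)
    profile_zy = np.asarray(profile_zy, float)
    x = frontal_xy[:, 0]
    z = profile_zy[:, 0]
    y = 0.5 * (frontal_xy[:, 1] + profile_zy[:, 1])  # reconcile the views
    return np.stack([x, y, z], axis=1)

pts3d = lift_to_3d([[10, 20], [30, 25]], [[5, 21], [8, 24]])
print(pts3d)   # [[10. 20.5 5.], [30. 24.5 8.]]
```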

  4. Facial Feature Extraction Method Based on Coefficients of Variances

    Institute of Scientific and Technical Information of China (English)

    Feng-Xi Song; David Zhang; Cai-Kou Chen; Jing-Yu Yang

    2007-01-01

    Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) are two popular feature extraction techniques in the field of statistical pattern recognition. Due to the small sample size problem, LDA cannot be directly applied to appearance-based face recognition tasks, and many LDA-based facial feature extraction techniques have been proposed to deal with this problem. The Nullspace Method is one of the most effective among them: it seeks a set of discriminant vectors that maximize the between-class scatter in the null space of the within-class scatter matrix. Calculating its discriminant vectors involves a singular value decomposition of a high-dimensional matrix, which is generally memory- and time-consuming. Borrowing the key idea of the Nullspace Method and the concept of the coefficient of variance from statistical analysis, we present a novel facial feature extraction method, Discriminant based on Coefficient of Variance (DCV), in this paper. Experimental results on the FERET and AR face image databases demonstrate that DCV is a promising technique in comparison with Eigenfaces, the Nullspace Method, and other state-of-the-art facial feature extraction methods.
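
    The snippet below only illustrates the coefficient-of-variance statistic that DCV is named after, used here as a per-dimension discriminability score (between-class spread over within-class coefficient of variation). The published DCV derives discriminant vectors rather than ranking raw dimensions, so treat this as a conceptual sketch.

```python
import numpy as np

def coefficient_of_variance_scores(X, y):
    """Score feature dimensions by a coefficient-of-variance criterion.

    X : (n_samples, n_features) data matrix, y : class labels.
    For every dimension the within-class coefficient of variation
    (std / |mean|) is averaged over classes; dimensions whose class
    means spread widely while their within-class variation stays small
    receive high scores.  This is only an illustration of the statistic
    named in the abstract, not the published DCV algorithm.
    """
    X, y = np.asarray(X, float), np.asarray(y)
    classes = np.unique(y)
    class_means = np.array([X[y == c].mean(axis=0) for c in classes])
    within_cv = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        within_cv += Xc.std(axis=0) / (np.abs(Xc.mean(axis=0)) + 1e-8)
    within_cv /= len(classes)
    between = class_means.std(axis=0)
    return between / (within_cv + 1e-8)

X = np.random.rand(40, 16)              # 40 samples, 16 toy features
y = np.repeat([0, 1, 2, 3], 10)         # 4 classes
print(coefficient_of_variance_scores(X, y).shape)   # (16,)
```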

  5. Novel Facial Features Segmentation Algorithm

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    An efficient algorithm for facial feature extraction is proposed. The facial features segmented are the two eyes, the nose and the mouth. The algorithm is based on an improved Gabor wavelet edge detector, a morphological approach to detect the face region and the facial feature regions, and an improved T-shape face mask to locate the exact positions of the facial features. The experimental results show that the proposed method is robust against facial expression and illumination changes, and remains effective when the person is wearing glasses.

  6. Live facial feature extraction

    Institute of Scientific and Technical Information of China (English)

    ZHAO JieYu

    2008-01-01

    Precise facial feature extraction is essential to the high-level face recognition and expression analysis. This paper presents a novel method for the real-time geometric facial feature extraction from live video. In this paper, the input image is viewed as a weighted graph. The segmentation of the pixels corresponding to the edges of facial components of the mouth, eyes, brows, and nose is implemented by means of random walks on the weighted graph. The graph has an 8-connected lattice structure and the weight value associated with each edge reflects the likelihood that a random walker will cross that edge. The random walks simulate an anisotropic diffusion process that filters out the noise while preserving the facial expression pixels. The seeds for the segmentation are obtained from a color and motion detector. The segmented facial pixels are represented with linked lists in the original geometric form and grouped into different parts corresponding to facial components. For the convenience of implementing high-level vision, the geometric description of facial component pixels is further decomposed into shape and registration information. Shape is defined as the geometric information that is invariant under the registration transformation, such as translation, rotation, and isotropic scale. Statistical shape analysis is carried out to capture global facial features where the Procrustes shape distance measure is adopted. A Bayesian approach is used to incorporate high-level prior knowledge of face structure. Experimental results show that the proposed method is capable of real-time extraction of precise geometric facial features from live video. The feature extraction is robust against the illumination changes, scale variation, head rotations, and hand interference.
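
    A compact sketch of the Procrustes shape distance mentioned above: translation, isotropic scale and rotation are removed via orthogonal Procrustes alignment before measuring the residual difference between two landmark configurations. The example shapes are synthetic.

```python
import numpy as np

def procrustes_distance(shape_a, shape_b):
    """Procrustes distance between two 2D point configurations.

    Both shapes are (N, 2) arrays of corresponding landmark points.
    Translation, isotropic scale and rotation are removed before the
    residual difference is measured, so the value reflects shape alone.
    """
    A = np.asarray(shape_a, float) - np.mean(shape_a, axis=0)
    B = np.asarray(shape_b, float) - np.mean(shape_b, axis=0)
    A /= np.linalg.norm(A)              # unit centroid size
    B /= np.linalg.norm(B)
    # Optimal rotation aligning B onto A (orthogonal Procrustes).
    U, _, Vt = np.linalg.svd(A.T @ B)
    R = U @ Vt
    return np.linalg.norm(A - B @ R.T)

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
rotated = square @ np.array([[0, -1], [1, 0]]) * 3.0 + 5.0   # rotated, scaled, shifted
print(procrustes_distance(square, rotated))   # close to 0: same shape
```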

  7. Age Estimation Based on AAM and 2D-DCT Features of Facial Images

    Directory of Open Access Journals (Sweden)

    Asuman Günay

    2015-02-01

    This paper proposes a novel age estimation method, Global and Local feAture based Age estiMation (GLAAM), relying on global and local features of facial images. Global features are obtained with Active Appearance Models (AAM). Local features are extracted with regional 2D-DCT (2-dimensional Discrete Cosine Transform) of normalized facial images. GLAAM consists of the following modules: face normalization, global feature extraction with AAM, local feature extraction with 2D-DCT, dimensionality reduction by means of Principal Component Analysis (PCA), and age estimation with multiple linear regression. Experiments have shown that GLAAM outperforms many methods previously applied to the FG-NET database.
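
    A rough sketch of the local feature path (regional 2D-DCT plus linear regression), assuming SciPy is available. The AAM global features, face normalisation and PCA reduction of the real pipeline are omitted, and the grid size, number of retained coefficients and the synthetic data are arbitrary choices.

```python
import numpy as np
from scipy.fftpack import dct   # assumes SciPy is installed

def regional_dct_features(face, grid=(4, 4), keep=3):
    """Extract local 2D-DCT features from a normalised face image.

    The image is split into grid[0] x grid[1] blocks; for each block a
    2D-DCT is computed and the keep x keep lowest-frequency coefficients
    in its upper-left corner are concatenated into one feature vector.
    """
    face = np.asarray(face, float)
    h, w = face.shape
    bh, bw = h // grid[0], w // grid[1]
    feats = []
    for r in range(grid[0]):
        for c in range(grid[1]):
            block = face[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            coeffs = dct(dct(block.T, norm='ortho').T, norm='ortho')
            feats.extend(coeffs[:keep, :keep].ravel())
    return np.array(feats)

# Toy age regression: ordinary least squares on the DCT features,
# standing in for the multiple linear regression the abstract mentions.
rng = np.random.default_rng(0)
faces = rng.random((30, 64, 64))                  # 30 fake normalised faces
ages = rng.integers(18, 60, size=30).astype(float)
X = np.array([regional_dct_features(f) for f in faces])
X = np.hstack([np.ones((len(X), 1)), X])          # bias column
w, *_ = np.linalg.lstsq(X, ages, rcond=None)
print("predicted age of first face:", X[0] @ w)
```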

  8. Learning representative features for facial images based on a modified principal component analysis

    Science.gov (United States)

    Averkin, Anton; Potapov, Alexey

    2013-05-01

    The paper is devoted to facial image analysis and particularly deals with the problem of automatically evaluating the attractiveness of human faces. We propose a new approach for automatic construction of a feature space based on a modified principal component analysis. The input to the algorithm is a learning set of facial images rated by one person. The proposed approach allows one to extract features of an individual's subjective perception of face beauty and to predict attractiveness values for new facial images that were not included in the learning set. The Pearson correlation coefficient between the values predicted by our method for new facial images and the personal attractiveness ratings equals 0.89. This indicates that the proposed approach is promising and can be used for predicting subjective face attractiveness values in real facial image analysis systems.

  9. Simultaneous facial feature tracking and facial expression recognition.

    Science.gov (United States)

    Li, Yongqiang; Wang, Shangfei; Zhao, Yongping; Ji, Qiang

    2013-07-01

    The tracking and recognition of facial activities from images or videos have attracted great attention in the computer vision field. Facial activities are characterized at three levels. First, at the bottom level, facial feature points around each facial component, i.e., eyebrow, mouth, etc., capture the detailed face shape information. Second, at the middle level, facial action units, defined in the facial action coding system, represent the contraction of a specific set of facial muscles, e.g., lid tightener, eyebrow raiser, etc. Finally, at the top level, six prototypical facial expressions represent the global facial muscle movement and are commonly used to describe human emotion states. In contrast to mainstream approaches, which usually focus on only one or two levels of facial activities and track (or recognize) them separately, this paper introduces a unified probabilistic framework based on the dynamic Bayesian network to simultaneously and coherently represent the facial evolvement at different levels, their interactions and their observations. Advanced machine learning methods are introduced to learn the model based on both training data and subjective prior knowledge. Given the model and the measurements of facial motions, all three levels of facial activities are simultaneously recognized through probabilistic inference. Extensive experiments are performed to illustrate the feasibility and effectiveness of the proposed model on all three levels of facial activities.

  10. A Classifier Model based on the Features Quantitative Analysis for Facial Expression Recognition

    Directory of Open Access Journals (Sweden)

    Amir Jamshidnezhad

    2011-01-01

    In recent decades, computer technology has developed considerably in the use of intelligent systems for classification. The development of HCI systems depends heavily on an accurate understanding of emotions. However, facial expressions are difficult to classify with mathematical models because of their natural variability. In this paper, quantitative analysis is used to find the most effective feature movements between the selected facial feature points. The features are therefore extracted not only on the basis of psychological studies, but also with quantitative methods, to raise recognition accuracy. In this model, fuzzy logic and a genetic algorithm are used to classify facial expressions. The genetic algorithm is a distinctive attribute of the proposed model and is used for tuning the membership functions and increasing accuracy.

  11. Multi-Cue-Based Face and Facial Feature Detection on Video Segments

    Institute of Scientific and Technical Information of China (English)

    PENG ZhenYun(彭振云); AI HaiZhou(艾海舟); Hong Wei(洪微); LIANG LuHong(梁路宏); XU GuangYou(徐光祐)

    2003-01-01

    An approach is presented to detect faces and facial features on a video segment based on multi-cues, including gray-level distribution, color, motion, templates, algebraic features and so on. Faces are first detected across the frames by using color segmentation, template matching and an artificial neural network. A PCA-based (Principal Component Analysis) feature detector for still images is then used to detect facial features on each single frame until the resulting features of three adjacent frames, named as base frames, are consistent with each other. The features of frames neighboring the base frames are first detected by the still-image feature detector, then verified and corrected according to the smoothness constraint and the planar surface motion constraint. Experiments have been performed on video segments captured under different environments, and the presented method is proved to be robust and accurate over variable poses, ages and illumination conditions.

  12. Facial Expression Recognition Based on Features Derived From the Distinct LBP and GLCM

    Directory of Open Access Journals (Sweden)

    Gorti Satyanarayana Murty

    2014-01-01

    Automatic recognition of facial expressions can be an important component of natural human-machine interfaces; it may also be used in behavioural science and in clinical practice. Although humans recognise facial expressions virtually without effort or delay, reliable expression recognition by machine is still a challenge. This paper presents recognition of facial expressions by integrating features derived from the Grey Level Co-occurrence Matrix (GLCM) with a new structural approach derived from distinct LBPs (DLBP) on a 3 x 3 first-order compressed image (FCI). The proposed method precisely recognizes the 7 categories of expressions, i.e., neutral, happiness, sadness, surprise, anger, disgust and fear. The method consists of three phases. In the first phase each 5 x 5 sub-image is compressed into a 3 x 3 sub-image. The second phase derives two distinct LBPs (DLBP) using triangular patterns between the upper and lower parts of the 3 x 3 sub-image. In the third phase a GLCM is constructed from the DLBPs and feature parameters are evaluated for precise facial expression recognition. The derived DLBP is effective because it is integrated with the GLCM and provides better classification performance. The proposed method overcomes the disadvantages of statistical and conventional LBP methods in estimating facial expressions. The experimental results demonstrate the effectiveness of the proposed method for facial expression recognition.
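
    For readers unfamiliar with GLCM features, the sketch below computes a co-occurrence matrix and three Haralick-style statistics on a grey-level patch. The paper's DLBP coding and 5 x 5 to 3 x 3 compression stages are not reproduced, so this is only the generic GLCM step on raw intensities.

```python
import numpy as np

def glcm(image, levels=8, dx=1, dy=0):
    """Grey Level Co-occurrence Matrix for one displacement (dx, dy).

    The image is quantised to `levels` grey levels, and co-occurrences
    of (i, j) value pairs at the given offset are counted and normalised.
    """
    img = np.asarray(image, float)
    q = np.clip((img / img.max() * levels).astype(int), 0, levels - 1)
    m = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1
    return m / m.sum()

def glcm_features(p):
    """Classic texture statistics derived from a normalised GLCM."""
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)
    energy = np.sum(p ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    return np.array([contrast, energy, homogeneity])

patch = np.random.rand(24, 24)          # stand-in for a face sub-image
print(glcm_features(glcm(patch)))
```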

  13. Hybrid Feature Extraction-based Approach for Facial Parts Representation and Recognition

    Science.gov (United States)

    Rouabhia, C.; Tebbikh, H.

    2008-06-01

    Face recognition is a specialized image-processing task that has attracted considerable attention in computer vision. In this article, we develop a new facial recognition system from video sequence images, dedicated to identifying persons whose faces are partly occluded. The system is based on a hybrid image feature extraction technique called ACPDL2D (Rouabhia et al. 2007), which combines two-dimensional principal component analysis and two-dimensional linear discriminant analysis with a neural network. We perform the feature extraction on the eye and nose images separately, and then a Multi-Layer Perceptron classifier is used. Compared to the whole face, the simulation results are in favor of the facial parts in terms of memory capacity and recognition rate (99.41% for the eye part, 98.16% for the nose part and 97.25% for the whole face).

  14. Facial Age Estimation based on Decision Level Fusion of AAM, LBP and Gabor Features

    Directory of Open Access Journals (Sweden)

    Asuman Günay

    2015-08-01

    In this paper a new hierarchical age estimation method based on decision-level fusion of global and local features is proposed. The shape and appearance information of human faces, extracted with active appearance models (AAM), is used as the global facial features. The local facial features are the wrinkle features extracted with Gabor filters and the skin features extracted with local binary patterns (LBP). Feature classification is then performed using a hierarchical classifier which combines an age group classification and a detailed age estimation. In the age group classification phase, three distinct support vector machine (SVM) classifiers are trained using each feature vector. Decision-level fusion is then performed to combine the results of these classifiers. The detailed age of the classified image is then estimated within that age group, using aging functions modeled with global and local features separately. The aging functions are modeled with multiple linear regression. To make a final decision, the results of these aging functions are also fused at decision level. Experimental results on the FG-NET and PAL aging databases have shown that the age estimation accuracy of the proposed method is better than that of previous methods.

  15. Recognition of Facial Expression Using Eigenvector Based Distributed Features and Euclidean Distance Based Decision Making Technique

    Directory of Open Access Journals (Sweden)

    Jeemoni Kalita

    2013-03-01

    In this paper, an Eigenvector-based system is presented to recognize facial expressions from digital facial images. In this approach, the images are first acquired and five significant portions are cropped from each image to extract and store the Eigenvectors specific to the expressions. The Eigenvectors for the test images are also computed, and finally the input facial image is recognized when similarity is obtained by calculating the minimum Euclidean distance between the test image and the different expressions.
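
    A minimal eigenface-style sketch of the pipeline described above: gallery and test images are projected into an eigenspace and the label with the minimum Euclidean distance is returned. The five-region cropping step is skipped and the data here are synthetic.

```python
import numpy as np

def train_eigenspace(images, n_components=10):
    """Build an eigenspace from flattened training images (eigenfaces)."""
    X = np.asarray([im.ravel() for im in images], float)
    mean = X.mean(axis=0)
    # SVD of the centred data gives the principal directions.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def project(image, mean, basis):
    return basis @ (np.asarray(image, float).ravel() - mean)

def recognise(test_image, gallery, labels, mean, basis):
    """Label whose projected gallery image has the minimum Euclidean
    distance to the projected test image."""
    t = project(test_image, mean, basis)
    dists = [np.linalg.norm(t - project(g, mean, basis)) for g in gallery]
    return labels[int(np.argmin(dists))]

rng = np.random.default_rng(1)
gallery = rng.random((6, 32, 32))                       # one image per expression
labels = ["anger", "disgust", "fear", "happy", "sad", "surprise"]
mean, basis = train_eigenspace(gallery, n_components=4)
test = gallery[3] + 0.01 * rng.random((32, 32))         # noisy copy of "happy"
print(recognise(test, gallery, labels, mean, basis))
```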

  16. Tracking facial features with occlusions

    Institute of Scientific and Technical Information of China (English)

    MARKIN Evgeny; PRAKASH Edmond C.

    2006-01-01

    Facial expression recognition consists of determining what kind of emotional content is presented in a human face. The problem presents a complex area for exploration, since it encompasses face acquisition, facial feature tracking, and facial expression classification. Facial feature tracking is of the most interest here. The Active Appearance Model (AAM) enables accurate tracking of facial features in real time, but does not handle occlusions and self-occlusions. In this paper we propose a solution to improve the accuracy of the fitting technique. The idea is to include occluded images in the AAM training data. We demonstrate the results by running experiments using a gradient descent algorithm for fitting the AAM. Our experiments show that fitting with occluded training data improves the fitting quality of the algorithm.

  17. A Classification Method of Normal and Overweight Females Based on Facial Features for Automated Medical Applications

    Directory of Open Access Journals (Sweden)

    Bum Ju Lee

    2012-01-01

    Obesity and overweight have become serious public health problems worldwide. Obesity and abdominal obesity are associated with type 2 diabetes, cardiovascular diseases, and metabolic syndrome. In this paper, we suggest a method of predicting normal and overweight females according to body mass index (BMI) based on facial features. A total of 688 subjects participated in this study. We obtained an area under the ROC curve (AUC) value of 0.861 and a kappa value of 0.521 in the Female 21–40 group (females aged 21–40 years), and an AUC value of 0.76 and a kappa value of 0.401 in the Female 41–60 group (females aged 41–60 years). In both groups, we found many features showing statistically significant differences between normal and overweight subjects using an independent two-sample t-test. We demonstrated that it is possible to predict BMI status using facial characteristics. Our results provide useful information for studies of obesity and facial characteristics, and may provide useful clues in the development of applications for alternative diagnosis of obesity in remote healthcare.

  18. Automatic facial feature extraction and expression recognition based on neural network

    CERN Document Server

    Khandait, S P; Khandait, P D

    2012-01-01

    In this paper, an approach to automatic facial feature extraction from a still frontal posed image, and classification and recognition of facial expression (and hence the emotion and mood of a person), is presented. A feed-forward back-propagation neural network is used as a classifier for classifying the expressions of the supplied face into seven basic categories: surprise, neutral, sad, disgust, fear, happy and angry. For face portion segmentation and localization, morphological image processing operations are used. Permanent facial features such as eyebrows, eyes, mouth and nose are extracted using the SUSAN edge detection operator, facial geometry and edge projection analysis. Experiments carried out on the JAFFE facial expression database give good performance: 100% accuracy on the training set and 95.26% accuracy on the test set.

  19. Fusing Facial Features for Face Recognition

    Directory of Open Access Journals (Sweden)

    Jamal Ahmad Dargham

    2012-06-01

    Face recognition is an important biometric method because of its potential applications in many fields, such as access control, surveillance, and human-computer interaction. In this paper, a face recognition system that fuses the outputs of three face recognition systems based on Gabor jets is presented. The first system uses the magnitude, the second uses the phase, and the third uses the phase-weighted magnitude of the jets. The jets are generated from facial landmarks selected using three selection methods. It was found that fusing the facial features gives a better recognition rate than any facial feature used individually, regardless of the landmark selection method.
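
    The sketch below shows how a Gabor jet (magnitude and phase responses of several kernels at one landmark) might be computed. The kernel parameters and the landmark position are arbitrary assumptions, and the fusion of the magnitude, phase and phase-weighted-magnitude systems is not shown.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """A single complex Gabor kernel (Gaussian envelope times carrier)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    carrier = np.exp(1j * 2 * np.pi * xr / wavelength)
    return envelope * carrier

def gabor_jet(image, x, y, scales=(4, 8), orientations=4, size=15):
    """Magnitude and phase responses of several Gabor kernels at (x, y)."""
    half = size // 2
    patch = np.asarray(image, float)[y - half:y + half + 1, x - half:x + half + 1]
    mags, phases = [], []
    for wl in scales:
        for k in range(orientations):
            kern = gabor_kernel(size, wl, k * np.pi / orientations, wl / 2)
            resp = np.sum(patch * kern)        # filter response at the landmark
            mags.append(np.abs(resp))
            phases.append(np.angle(resp))
    return np.array(mags), np.array(phases)

img = np.random.rand(64, 64)                   # stand-in face image
mag, ph = gabor_jet(img, 30, 30)               # jet at a hypothetical landmark
print(mag.shape, ph.shape)                     # (8,) (8,)
```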

  20. An Efficient Multimodal 2D + 3D Feature-based Approach to Automatic Facial Expression Recognition

    KAUST Repository

    Li, Huibin

    2015-07-29

    We present a fully automatic multimodal 2D + 3D feature-based facial expression recognition approach and demonstrate its performance on the BU-3DFE database. Our approach combines multi-order gradient-based local texture and shape descriptors in order to achieve efficiency and robustness. First, a large set of fiducial facial landmarks of 2D face images along with their 3D face scans are localized using a novel algorithm namely incremental Parallel Cascade of Linear Regression (iPar-CLR). Then, a novel Histogram of Second Order Gradients (HSOG) based local image descriptor in conjunction with the widely used first-order gradient based SIFT descriptor are used to describe the local texture around each 2D landmark. Similarly, the local geometry around each 3D landmark is described by two novel local shape descriptors constructed using the first-order and the second-order surface differential geometry quantities, i.e., Histogram of mesh Gradients (meshHOG) and Histogram of mesh Shape index (curvature quantization, meshHOS). Finally, the Support Vector Machine (SVM) based recognition results of all 2D and 3D descriptors are fused at both feature-level and score-level to further improve the accuracy. Comprehensive experimental results demonstrate that there exist impressive complementary characteristics between the 2D and 3D descriptors. We use the BU-3DFE benchmark to compare our approach to the state-of-the-art ones. Our multimodal feature-based approach outperforms the others by achieving an average recognition accuracy of 86.32%. Moreover, a good generalization ability is shown on the Bosphorus database.

  1. Tracking subtle stereotypes of children with trisomy 21: from facial-feature-based to implicit stereotyping.

    Directory of Open Access Journals (Sweden)

    Claire Enea-Drapeau

    BACKGROUND: Stigmatization is one of the greatest obstacles to the successful integration of people with Trisomy 21 (T21, or Down syndrome), the most frequent genetic disorder associated with intellectual disability. Research on attitudes and stereotypes toward these people still focuses on explicit measures subject to social-desirability biases, and neglects how variability in facial stigmata influences attitudes and stereotyping. METHODOLOGY/PRINCIPAL FINDINGS: The participants were 165 adults, including 55 young adult students, 55 non-student adults, and 55 professional caregivers working with intellectually disabled persons. They completed implicit association tests (IAT), a well-known technique whereby response latency is used to capture the relative strength with which some groups of people (here, photographed faces of typically developing children and children with T21) are automatically (without conscious awareness) associated with positive versus negative attributes in memory. Each participant also rated the same photographed faces (consciously accessible evaluations). We provide the first evidence that the positive bias typically found in explicit judgments of children with T21 is smaller for those whose facial features are highly characteristic of this disorder, compared to their counterparts with less distinctive features and to typically developing children. We also show that this bias can coexist with negative evaluations at the implicit level (with large effect sizes), even among professional caregivers. CONCLUSION: These findings support recent models of feature-based stereotyping, and more importantly show how crucial it is to go beyond explicit evaluations to estimate the true extent of stigmatization of intellectually disabled people.

  2. Responses in the right posterior superior temporal sulcus show a feature-based response to facial expression.

    Science.gov (United States)

    Flack, Tessa R; Andrews, Timothy J; Hymers, Mark; Al-Mosaiwi, Mohammed; Marsden, Samuel P; Strachan, James W A; Trakulpipat, Chayanit; Wang, Liang; Wu, Tian; Young, Andrew W

    2015-08-01

    The face-selective region of the right posterior superior temporal sulcus (pSTS) plays an important role in analysing facial expressions. However, it is less clear how facial expressions are represented in this region. In this study, we used the face composite effect to explore whether the pSTS contains a holistic or feature-based representation of facial expression. Aligned and misaligned composite images were created from the top and bottom halves of faces posing different expressions. In Experiment 1, participants performed a behavioural matching task in which they judged whether the top half of two images was the same or different. The ability to discriminate the top half of the face was affected by changes in the bottom half of the face when the images were aligned, but not when they were misaligned. This shows a holistic behavioural response to expression. In Experiment 2, we used fMR-adaptation to ask whether the pSTS has a corresponding holistic neural representation of expression. Aligned or misaligned images were presented in blocks that involved repeating the same image or in which the top or bottom half of the images changed. Increased neural responses were found in the right pSTS regardless of whether the change occurred in the top or bottom of the image, showing that changes in expression were detected across all parts of the face. However, in contrast to the behavioural data, the pattern did not differ between aligned and misaligned stimuli. This suggests that the pSTS does not encode facial expressions holistically. In contrast to the pSTS, a holistic pattern of response to facial expression was found in the right inferior frontal gyrus (IFG). Together, these results suggest that pSTS reflects an early stage in the processing of facial expression in which facial features are represented independently.

  3. Geometric Feature-Based Facial Expression Recognition in Image Sequences Using Multi-Class AdaBoost and Support Vector Machines

    Directory of Open Access Journals (Sweden)

    Joonwhoan Lee

    2013-06-01

    Facial expressions are widely used in the behavioral interpretation of emotions, cognitive science, and social interactions. In this paper, we present a novel method for fully automatic facial expression recognition in facial image sequences. As the facial expression evolves over time, facial landmarks are automatically tracked in consecutive video frames using displacement estimation based on elastic bunch graph matching. Feature vectors are extracted from the tracking results of individual landmarks, as well as from pairs of landmarks, and normalized with respect to the first frame in the sequence. The prototypical expression sequence for each class of facial expression is formed by taking the median of the landmark tracking results from the training facial expression sequences. Multi-class AdaBoost, with the dynamic time warping similarity distance between the feature vector of the input facial expression and the prototypical facial expression, is used as a weak classifier to select the subset of discriminative feature vectors. Finally, two methods for facial expression recognition are presented: multi-class AdaBoost with dynamic time warping, and a support vector machine on the boosted feature vectors. The results on the Cohn-Kanade (CK+) facial expression database show recognition accuracies of 95.17% and 97.35% using multi-class AdaBoost and support vector machines, respectively.
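
    Below is a plain dynamic-time-warping distance of the kind used to compare an input landmark-feature sequence against a prototypical expression sequence. It is the textbook O(Ta*Tb) formulation with Euclidean local cost; the boosting and SVM stages of the paper are not included.

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Dynamic time warping distance between two feature sequences.

    seq_a, seq_b : (Ta, d) and (Tb, d) arrays of per-frame feature
    vectors.  Standard dynamic programme with Euclidean local cost and
    no warping-window constraint, for brevity.
    """
    A, B = np.asarray(seq_a, float), np.asarray(seq_b, float)
    ta, tb = len(A), len(B)
    D = np.full((ta + 1, tb + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, ta + 1):
        for j in range(1, tb + 1):
            cost = np.linalg.norm(A[i - 1] - B[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[ta, tb]

proto = np.linspace(0, 1, 20)[:, None]     # prototypical 1-D trajectory
faster = np.linspace(0, 1, 12)[:, None]    # same expression, different timing
print(dtw_distance(proto, faster))         # small despite the length mismatch
```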

  4. Artificial Neural Networks and Gene Expression Programing based age estimation using facial features

    Directory of Open Access Journals (Sweden)

    Baddrud Z. Laskar

    2015-10-01

    This work is about estimating human age automatically through the analysis of facial images, which has many real-world applications. Due to rapid advances in machine vision, facial image processing and computer graphics, automatic age estimation from faces has become a prominent topic, with applications in biometrics, security, surveillance, control, forensic art, entertainment, online customer management and support, and cosmetology. As it is difficult to estimate the exact age, this system estimates a range of ages. Four sets of classifications have been used to place a person's data into one of the age groups. The distinctive aspect of this study is the use of two technologies, Artificial Neural Networks (ANN) and Gene Expression Programming (GEP), to estimate the age and then compare the results. New methodologies like GEP have been explored here and significant results were found. The dataset has been prepared with preprocessing methods to provide more reliable results. The proposed approach has been developed, trained and tested using both methods. The public FG-NET dataset was used to test the system, and the quality of the proposed age estimation approach is shown by broad experiments on this database.

  5. Research on Facial Expression Recognition Based on Facial Features

    Institute of Scientific and Technical Information of China (English)

    马飞; 刘红娟; 程荣花

    2011-01-01

    In the field of facial expression recognition, this paper analyses the structural features of the facial organs and presents a facial-feature-based method for expression recognition. A new feature-vector weight function is constructed to discretize the structural feature vectors of the facial organs, and a new facial expression recognition classifier is built. Experiments show that the proposed algorithms are effective.

  6. Reflectance confocal microscopy features of facial angiofibromas

    Science.gov (United States)

    Millán-Cayetano, José-Francisco; Yélamos, Oriol; Rossi, Anthony M.; Marchetti, Michael A.; Jain, Manu

    2017-01-01

    Facial angiofibromas are benign tumors presenting as firm, dome-shaped, flesh-colored to pink papules, typically on the nose and adjoining central face. Clinically and dermoscopically they can mimic melanocytic nevi or basal cell carcinomas (BCC). Reflectance confocal microscopy (RCM) is a noninvasive imaging tool that is useful in diagnosing melanocytic and non-melanocytic facial lesions. To date no studies have described the RCM features of facial angiofibromas. Herein, we present two cases of facial angiofibromas that were imaged with RCM and revealed tumor island-like structures that mimicked BCC, leading to skin biopsy.

  7. Facial expression feature extraction based on tensor analysis

    Institute of Scientific and Technical Information of China (English)

    孙波; 刘永娜; 罗继鸿; 张迪; 张树玲; 陈玖冰

    2016-01-01

    Facial expression feature extraction plays an important role in facial expression recognition. The expression features extracted by existing methods are essentially a mixture of identity (face) information and expression information, yet the facial differences between individuals are a major source of interference for expression recognition. Ideally, the person-specific facial features should be separated from the person-independent expression features during expression recognition. To address this problem, a third-order face tensor is built, and tensor analysis is used to decompose it into a person subspace and an expression subspace, so that the resulting expression parameters are independent of the individual face. This removes the interference of inter-individual facial differences in expression recognition. Finally, experiments on the JAFFE expression database verify the effectiveness of the method.

  8. Facial Expression Geometric Feature Extraction Based on Chain Code

    Institute of Scientific and Technical Information of China (English)

    张庆; 代锐; 朱雪莹; 韦穗

    2012-01-01

    The recognition rates achieved by existing facial expression feature extraction algorithms are low. To address this, a chain-code-based facial expression geometric feature extraction algorithm is proposed. Starting from the feature points located by an active shape model, the positions of the feature points on the facial targets are encoded with a circular chain code to extract the geometric features of the expression. Experimental results show that, compared with the classic LBP expression features, the recognition accuracy of the algorithm is improved by about 10%.
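
    A minimal sketch of 8-direction chain coding applied to an ordered sequence of landmark points. The coordinates and the mouth contour are made up, and the paper's ASM localisation and circular-coding details are not reproduced.

```python
import numpy as np

def chain_code(points):
    """Encode an ordered landmark contour as an 8-direction chain code.

    Each consecutive pair of points is mapped to the nearest of the
    eight 45-degree directions (0 = east, counting counter-clockwise),
    giving a compact, translation-invariant description of the contour.
    """
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        angle = np.arctan2(y1 - y0, x1 - x0)          # radians, -pi..pi
        codes.append(int(np.rint(angle / (np.pi / 4))) % 8)
    return codes

# Hypothetical landmark points around a mouth contour.
mouth = [(0, 0), (2, 1), (4, 1), (6, 0), (4, -1), (2, -1), (0, 0)]
print(chain_code(mouth))
```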

  9. Extraction of Facial Features from Color Images

    Directory of Open Access Journals (Sweden)

    J. Pavlovicova

    2008-09-01

    In this paper, a method for localization and extraction of faces and characteristic facial features such as eyes, mouth and face boundaries from color image data is proposed. The approach exploits the color properties of human skin to localize image regions that are face candidates. Facial feature extraction is performed only on the preselected face-candidate regions. Likewise, for eye and mouth localization, color information and the local contrast around the eyes are used. The ellipse of the face boundary is determined using a gradient image and the Hough transform. The algorithm was tested on the FERET image database.
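
    A rough face-candidate step in the spirit of the skin-colour localisation described above, using the JFIF RGB-to-YCbCr conversion and commonly quoted Cb/Cr skin bounds. The thresholds are illustrative, and the eye/mouth detection and Hough-based ellipse fitting are not shown.

```python
import numpy as np

def skin_mask_ycbcr(rgb):
    """Rough skin-colour mask in YCbCr for face-candidate localisation.

    rgb : (H, W, 3) array with values in [0, 1].  Uses the JFIF
    RGB -> YCbCr conversion and the often-quoted Cb in [77, 127],
    Cr in [133, 173] skin bounds; real systems refine these candidates
    (morphology, ellipse fitting) before searching for eyes and mouth.
    """
    x = np.asarray(rgb, float) * 255.0
    r, g, b = x[..., 0], x[..., 1], x[..., 2]
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)

img = np.random.rand(120, 160, 3)            # stand-in colour image
mask = skin_mask_ycbcr(img)
print("candidate skin pixels:", int(mask.sum()))
```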

  10. Tracking facial features in video sequences using a deformable-model-based approach

    Science.gov (United States)

    Malciu, Marius; Preteux, Francoise J.

    2000-10-01

    This paper addresses the issue of computer vision-based face motion capture as an alternative to physical sensor-based technologies. The proposed method combines deformable template-based tracking of the mouth and eyes in arbitrary video sequences of a single speaking person with a global 3D head pose estimation procedure yielding robust initializations. The mathematical principles underlying deformable template matching, together with the definition and extraction of salient image features, are presented. Specifically, interpolating cubic B-splines between the MPEG-4 Face Animation Parameters (FAPs) associated with the mouth and eyes are used as the template parameterization. Modeling the template as a network of springs interconnecting the mouth and eye FAPs, the internal energy is expressed as a combination of elastic and symmetry local constraints. The external energy function, which enforces interactions with the image data, involves contour, texture and topography properties combined within robust potential functions. Template matching is achieved by applying the downhill simplex method to minimize the global energy cost. The stability and accuracy of the results are discussed on a set of 2000 frames corresponding to 5 video sequences of speaking people.
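
    A toy version of the energy-minimisation idea, assuming SciPy is available: a simple elliptical mouth template with an external edge-attraction term and a small internal shape prior is fitted with the downhill simplex (Nelder-Mead) method. The B-spline/FAP parameterisation and the full contour-texture-topography energy of the paper are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize         # Nelder-Mead = downhill simplex
from scipy.ndimage import gaussian_filter

def template_energy(params, edge_map):
    """Internal + external energy of a simple elliptical mouth template.

    params = (cx, cy, a, b): centre and semi-axes of the ellipse.  The
    external term rewards lying on strong (blurred) edges; the internal
    term loosely penalises implausible aspect ratios, standing in for
    the spring-based constraints of the real template.
    """
    cx, cy, a, b = params
    t = np.linspace(0, 2 * np.pi, 60)
    xs = np.clip((cx + a * np.cos(t)).astype(int), 0, edge_map.shape[1] - 1)
    ys = np.clip((cy + b * np.sin(t)).astype(int), 0, edge_map.shape[0] - 1)
    external = -edge_map[ys, xs].mean()
    internal = 0.01 * (a / max(b, 1e-6) - 3.0) ** 2   # prefer ~3:1 mouths
    return external + internal

# Synthetic edge map: an elliptical "mouth" contour, blurred to give the
# optimiser a basin of attraction.
edges = np.zeros((100, 100))
tt = np.linspace(0, 2 * np.pi, 400)
edges[(50 + 10 * np.sin(tt)).astype(int), (50 + 30 * np.cos(tt)).astype(int)] = 1.0
edges = gaussian_filter(edges, sigma=3)

res = minimize(template_energy, x0=[45, 45, 25, 8], args=(edges,),
               method="Nelder-Mead")
print("fitted template:", np.round(res.x, 1))   # should move toward ~[50, 50, 30, 10]
```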

  11. Detection and tracking of facial features

    Science.gov (United States)

    De Silva, Liyanage C.; Aizawa, Kiyoharu; Hatori, Mitsutoshi

    1995-04-01

    Detection and tracking of facial features without using any head-mounted devices may be required in various future visual communication applications, such as teleconferencing and virtual reality. In this paper we propose an automatic method of face feature detection using a technique called edge pixel counting. Instead of utilizing color or gray-scale information of the facial image, the edge pixel counting method uses edge information to estimate the face feature positions, such as the eyes, nose and mouth, in the first frame of a moving facial image sequence, using a variable-size face feature template. For the remaining frames, feature tracking is carried out alternately using deformable template matching and edge pixel counting. One main advantage of using edge pixel counting in feature tracking is that it does not require a high inter-frame correlation around the feature areas, as template matching does. Experimental results are shown to demonstrate the effectiveness of the proposed method.

  12. Internal facial features are signals of personality and health.

    Science.gov (United States)

    Kramer, Robin S S; Ward, Robert

    2010-11-01

    We investigated forms of socially relevant information signalled from static images of the face. We created composite images from women scoring high and low values on personality and health dimensions and measured the accuracy of raters in discriminating high from low trait values. We also looked specifically at the information content within the internal facial features, by presenting the composite images with an occluding mask. Four of the Big Five traits were accurately discriminated on the basis of the internal facial features alone (conscientiousness was the exception), as was physical health. The addition of external features in the full-face images led to improved detection for extraversion and physical health and poorer performance on intellect/imagination (or openness). Visual appearance based on internal facial features alone can therefore accurately predict behavioural biases in the form of personality, as well as levels of physical health.

  13. Facial expression identification based on combinational feature of facial action units%基于面部动作单元组合特征的表情识别

    Institute of Scientific and Technical Information of China (English)

    欧阳琰; 桑农

    2011-01-01

    Facial expressions can be described as combinations of the facial action units defined by the Facial Action Coding System (FACS). Unlike appearance features of face images, such as grey level and texture, combinational features of facial action units can describe expressions more accurately. However, facial action units are difficult to locate precisely. To avoid this problem, previous work divided the face image into many sub-blocks and extracted facial action unit information from these blocks to compose action-unit-based expression features. Building on this, this paper first coarsely locates the eyes and mouth in the face image and then, according to their horizontal positions, extracts image sub-blocks from the eye, mouth and nose regions. Haar features are extracted from each sub-block, and a minimum-error combination strategy is used to build combinational facial action unit features from these block features. The combinational features are used to learn weak classifiers, which are embedded in a boosting learning structure to construct a strong classifier. Tests on the Cohn-Kanade database show that the method achieves good facial expression classification results.
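
    The Haar features mentioned above can be computed cheaply from an integral image; the sketch below evaluates a single two-rectangle feature on a synthetic patch. The patch layout, feature geometry and the minimum-error combination step are illustrative, not the paper's specifics.

```python
import numpy as np

def integral_image(img):
    """Summed-area table; lets any rectangle sum be read in O(1)."""
    return np.cumsum(np.cumsum(np.asarray(img, float), axis=0), axis=1)

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the rectangle with top-left (x, y) and size (w, h)."""
    a = ii[y + h - 1, x + w - 1]
    b = ii[y - 1, x + w - 1] if y > 0 else 0.0
    c = ii[y + h - 1, x - 1] if x > 0 else 0.0
    d = ii[y - 1, x - 1] if (x > 0 and y > 0) else 0.0
    return a - b - c + d

def haar_two_rect_horizontal(ii, x, y, w, h):
    """Two-rectangle Haar-like feature: left-half sum minus right-half sum."""
    return rect_sum(ii, x, y, w // 2, h) - rect_sum(ii, x + w // 2, y, w // 2, h)

patch = np.random.rand(24, 24)          # e.g. a sub-block around the eye region
ii = integral_image(patch)
print(haar_two_rect_horizontal(ii, 4, 6, 12, 8))
```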

  14. Mutual information-based facial expression recognition

    Science.gov (United States)

    Hazar, Mliki; Hammami, Mohamed; Hanêne, Ben-Abdallah

    2013-12-01

    This paper introduces a novel low-computation discriminative region representation for the expression analysis task. The proposed approach relies on studies in psychology which show that most of the descriptive regions responsible for facial expression are located around certain face parts. The contribution of this work lies in a new approach that supports automatic facial expression recognition based on automatic region selection. The region selection step aims to select the descriptive regions responsible for facial expression and is performed using the Mutual Information (MI) technique. For facial feature extraction, we apply Local Binary Patterns (LBP) to the gradient image to encode salient micro-patterns of facial expressions. Experimental studies have shown that using discriminative regions provides better results than using the whole face, while reducing the feature vector dimension.
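
    A small sketch of mutual-information scoring between a per-image region descriptor and the expression label, the kind of criterion a region-selection step can rely on. The scalar region summary and binning scheme are assumptions, and the LBP-on-gradient features are not shown.

```python
import numpy as np

def mutual_information(region_feature, labels, bins=8):
    """Mutual information between a region descriptor and the expression label.

    region_feature : (n_samples,) scalar summary of the region per image
                     (e.g. an LBP-histogram energy value).
    labels         : (n_samples,) expression class per image.
    The feature is discretised into `bins` bins and I(feature; label) is
    computed from the joint empirical distribution.
    """
    f = np.asarray(region_feature, float)
    y = np.asarray(labels)
    f_bins = np.digitize(f, np.histogram_bin_edges(f, bins=bins)[1:-1])
    mi = 0.0
    for b in np.unique(f_bins):
        p_b = np.mean(f_bins == b)
        for c in np.unique(y):
            p_c = np.mean(y == c)
            p_bc = np.mean((f_bins == b) & (y == c))
            if p_bc > 0:
                mi += p_bc * np.log(p_bc / (p_b * p_c))
    return mi

rng = np.random.default_rng(2)
labels = rng.integers(0, 6, 200)                        # six expressions
informative = labels + 0.3 * rng.standard_normal(200)   # tracks the label
noise = rng.standard_normal(200)                        # unrelated region
print(mutual_information(informative, labels), ">", mutual_information(noise, labels))
```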

  15. Identification based on facial parts

    Directory of Open Access Journals (Sweden)

    Stevanov Zorica

    2007-01-01

    Two opposing views dominate the face identification literature, one suggesting that the face is processed as a whole and another suggesting analysis based on parts. Our research tried to establish which of these two is the dominant strategy, and our results fell in the direction of analysis based on parts. The faces were covered with a mask and the participants uncovered different parts, one at a time, in an attempt to identify the person. Already at the level of a single facial feature, such as the mouth or an eye and the top of the nose, some observers were able to establish the identity of a familiar face. Identification is exceptionally successful when a small assembly of facial parts is visible, such as an eye, an eyebrow and the top of the nose. Some facial parts are not very informative on their own but do enhance recognition when given as part of such an assembly. A novel finding here is the importance of the top of the nose for face identification. Additionally, observers show a preference toward the left side of the face. Typically, subjects view the elements in the following order: left eye, left eyebrow, right eye, lips, region between the eyes, right eyebrow, region between the eyebrows, left cheek, right cheek. When observers are not in a position to see the eyes, eyebrows or top of the nose, they go for the lips first and then the region between the eyebrows, the region between the eyes, the left cheek, the right cheek and finally the chin.

  16. An Active Model for Facial Feature Tracking

    Directory of Open Access Journals (Sweden)

    Jörgen Ahlberg

    2002-06-01

    We present a system for finding and tracking a face and extracting global and local animation parameters from a video sequence. The system uses an initial colour processing step for finding a rough estimate of the position, size, and in-plane rotation of the face, followed by a refinement step driven by an active model. The latter step refines the previous estimate and also extracts local animation parameters. The system is able to track the face and some facial features in near real time, and can compress the result to a bitstream compliant with MPEG-4 face and body animation.

  17. Dermoscopic Features of Facial Pigmented Skin Lesions

    Science.gov (United States)

    Goncharova, Yana; Attia, Enas A. S.; Souid, Khawla; Vasilenko, Inna V.

    2013-01-01

    Four types of facial pigmented skin lesions (FPSLs) constitute a diagnostic challenge to dermatologists: early seborrheic keratosis (SK), pigmented actinic keratosis (AK), lentigo maligna (LM), and solar lentigo (SL). A retrospective analysis of dermoscopic images of 64 histopathologically diagnosed, clinically challenging flat FPSLs was conducted to establish the dermoscopic findings corresponding to each of SK, pigmented AK, LM, and SL. Four main dermoscopic features were evaluated: sharp demarcation, pigment pattern, follicular/epidermal pattern, and vascular pattern. In SK, the most specific dermoscopic features are the follicular/epidermal pattern (cerebriform pattern in 100% of lesions, milia-like cysts in 50%, and comedo-like openings in 37.50%) and sharp demarcation (54.17%). AK and LM showed a composite characteristic pattern named the "strawberry pattern" in 41.18% and 25% of lesions respectively, characterized by background erythema and a red pseudo-network, associated with prominent follicular openings surrounded by a white halo. However, in LM the "strawberry pattern" is largely covered by pseudonetwork (87.5%), homogeneous structureless pigmentation (75%) and other vascular patterns. In SL, structureless homogeneous pigmentation was recognized in all lesions (100%). From the above data, we developed an algorithm to guide the dermoscopic assessment of FPSLs. PMID:23431466

  18. Wavelet based approach for facial expression recognition

    Directory of Open Access Journals (Sweden)

    Zaenal Abidin

    2015-03-01

    Facial expression recognition is one of the most active fields of research. Many facial expression recognition methods have been developed and implemented. Neural networks (NNs) have the capability to undertake such pattern recognition tasks. The key factor in the use of NNs is their characteristics: they are capable of learning and generalization, non-linear mapping, and parallel computation. Backpropagation neural networks (BPNNs) are the most commonly used approach. In this study, BPNNs were used as classifiers to categorize facial expression images into seven classes of expressions: anger, disgust, fear, happiness, sadness, neutral and surprise. For feature extraction, three discrete wavelet transforms were used to decompose the images, namely the Haar wavelet, the Daubechies (4) wavelet and the Coiflet (1) wavelet. To analyze the proposed method, a facial expression recognition system was built. The proposed method was tested on static images from the JAFFE database.

  19. Facial Expression Recognition Using Stationary Wavelet Transform Features

    Directory of Open Access Journals (Sweden)

    Huma Qayyum

    2017-01-01

    Humans use facial expressions to convey personal feelings. Facial expressions need to be recognized automatically to design control and interactive applications. Accurate feature extraction is one of the key steps in an automatic facial expression recognition system. Current frequency-domain facial expression recognition systems have not fully utilized the facial elements and muscle movements for recognition. In this paper, the stationary wavelet transform is used to extract features for facial expression recognition due to its good localization characteristics in both the spectral and spatial domains. More specifically, a combination of the horizontal and vertical subbands of the stationary wavelet transform is used, as these subbands contain muscle movement information for the majority of facial expressions. Feature dimensionality is further reduced by applying the discrete cosine transform to these subbands. The selected features are then passed to a feed-forward neural network trained with the backpropagation algorithm. Average recognition rates of 98.83% and 96.61% are achieved for the JAFFE and CK+ datasets, respectively. An accuracy of 94.28% is achieved on a locally recorded MS-Kinect dataset. It has been observed that the proposed technique is very promising for facial expression recognition when compared to other state-of-the-art techniques.

  20. Detection of fatigue driving based on facial features

    Institute of Scientific and Technical Information of China (English)

    张丽雯; 杨艳芳; 齐美彬; 蒋建国

    2013-01-01

    A method to detect driving fatigue based on eye-closure and yawning features is proposed. First, the face area is detected and located using a Gaussian skin-colour model in the YCrCb colour space. The facial grey image is then binarized, and the eye regions are robustly located in the binary image under geometric constraints derived from the layout of the facial organs. Region growing and morphological operations are used to obtain the eye contours and accurately position the eyes, from which the degree of eye closure is computed. The candidate lip area is located according to an optimal lip-colour threshold and then refined using the grey-value features of the face; the degree of mouth opening indicates whether the driver is yawning. Finally, driving fatigue is decided on the basis of the two facial features. Combining the eye-closure and yawning-frequency features improves the detection of driving fatigue, and experiments show that the method gives good results.
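
    A schematic fusion of the two cues, assuming per-frame eye-closure and mouth-opening ratios are already available from the segmentation steps described above. The PERCLOS-style statistic, the yawn-duration rule and all thresholds are illustrative rather than the paper's calibrated values.

```python
import numpy as np

def eye_closure_ratio(eye_height_px, eye_width_px):
    """Eye openness as height/width of the segmented eye region."""
    return eye_height_px / max(eye_width_px, 1e-6)

def fatigue_decision(closure_ratios, mouth_open_ratios,
                     closed_thresh=0.20, perclos_thresh=0.4,
                     yawn_thresh=0.6, yawn_frames=15):
    """Fuse eye-closure and yawning cues over a window of frames.

    The driver is flagged as fatigued if the eyes are closed in a large
    fraction of frames (a PERCLOS-style statistic) or the mouth stays
    wide open for at least `yawn_frames` consecutive frames (a yawn).
    """
    closure_ratios = np.asarray(closure_ratios, float)
    mouth_open_ratios = np.asarray(mouth_open_ratios, float)
    perclos = np.mean(closure_ratios < closed_thresh)
    open_flags = (mouth_open_ratios > yawn_thresh).astype(float)
    yawning = np.max(np.convolve(open_flags, np.ones(yawn_frames),
                                 mode="valid")) >= yawn_frames
    return perclos > perclos_thresh or bool(yawning)

frames = 60
eyes = np.concatenate([np.full(30, 0.35), np.full(30, 0.10)])   # eyes droop
mouth = np.full(frames, 0.2)                                     # no yawn
print(fatigue_decision(eyes, mouth))   # True: eyes closed in half the frames
```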

  1. Facial Expression Recognition Based on Automatic Segmentation of Feature Regions

    Institute of Scientific and Technical Information of China (English)

    张腾飞; 闵锐; 王保云

    2011-01-01

    To address the complexity and high time cost of current 3D facial expression region segmentation methods, an automatic segmentation method for facial expression regions is presented. Facial feature points are detected by projection and curvature calculation, and these points are used as the basis for automatically segmenting the facial expression regions. To obtain richer expression information, the Facial Action Coding System (FACS) coding rules are introduced to extend the extracted feature matrix, and facial expressions are recognized by combining classifiers. Recognition results on samples from a 3D facial expression database show that the method achieves a high recognition rate.

  2. Regression-based Multi-View Facial Expression Recognition

    NARCIS (Netherlands)

    Rudovic, Ognjen; Patras, Ioannis; Pantic, Maja

    2010-01-01

    We present a regression-based scheme for multi-view facial expression recognition based on 2-D geometric features. We address the problem by mapping facial points (e.g. mouth corners) from non-frontal to frontal view, where further recognition of the expressions can be performed using state-of-the-art methods developed for frontal-view images.

  3. Facial Expression Synthesis Based on Imitation

    OpenAIRE

    Yihjia Tsai; Hwei Jen Lin; Fu Wen Yang

    2012-01-01

    Synthesising vivid facial expression images is an interesting and challenging problem. In this paper, we propose a facial expression synthesis system which imitates a reference facial expression image according to the difference between the shape feature vectors of the neutral image and the expression image. To improve the result, two stages of post-processing are involved. We focus on the facial expressions of happiness, sadness, and surprise. Experiments show vivid and flexible synthesis results.

  4. Concealing the Level-3 features of Fingerprint in a Facial Image

    Directory of Open Access Journals (Sweden)

    Dr.R.Seshadri,

    2010-11-01

    Biometrics identifies an individual based on the physical, chemical and behavioural characteristics of the person. Biometrics is increasingly being used for authentication and protection purposes, and this has generated considerable interest from many parts of the information technology community. In this paper we propose a facial image watermarking method that can embed fingerprint level-3 feature information into host facial images. This scheme has the advantage that, in addition to facial matching, the fingerprint level-3 features recovered during decoding can be used to establish authenticity. The proposed system thus conceals vital identification information about a person while at the same time protecting it from attackers.

  5. Face recognition using improved-LDA with facial combined feature

    Institute of Scientific and Technical Information of China (English)

    Dake Zhou; Xin Yang; Ningsong Peng

    2005-01-01

    Face recognition under various conditions is a challenging task. This paper presents a combined-feature improved Fisher classifier method for face recognition. Both holistic and local facial information are used for face representation. In addition, improved linear discriminant analysis (I-LDA) is employed for good generalization capability. Experiments show that the method is not only robust to moderate changes of illumination, pose and facial expression but also superior to traditional methods such as Eigenfaces and Fisherfaces.

  6. Extracted facial feature of racial closely related faces

    Science.gov (United States)

    Liewchavalit, Chalothorn; Akiba, Masakazu; Kanno, Tsuneo; Nagao, Tomoharu

    2010-02-01

    Human faces contain a great deal of demographic information such as identity, gender, age, race and emotion. Human beings can perceive these pieces of information and use them as important clues in social interaction with other people. Race perception is considered one of the most delicate and sensitive parts of face perception. There is much research on image-based race recognition, but most of it focuses on major race groups such as Caucasoid, Negroid and Mongoloid. This paper focuses on how people classify race within a racially closely related group. As a sample of such a group, we chose Japanese and Thai faces to represent the difference between Northern and Southern Mongoloid. Three psychological experiments were performed to study the strategies of face perception in race classification. The results suggest that race perception is an ability that can be learned. The eyes and eyebrows attract the most attention, and the eyes are a significant factor in race perception. Principal Component Analysis (PCA) was performed to extract facial features of the sample race groups, and the extracted race features of texture and shape were used to synthesize faces. The results suggest that racial features rely on detailed texture rather than shape. This work is a fundamental study of race perception, which is essential for the establishment of a human-like race recognition system.

  7. Facial Feature Tracking and Head Pose Tracking as Input for Platform Games

    OpenAIRE

    Andersson, Anders Tobias

    2016-01-01

    Modern facial feature tracking techniques can automatically extract and accurately track multiple facial landmark points from faces in video streams in real time. Facial landmark points are defined as points distributed on a face in regards to certain facial features, such as eye corners and face contour. This opens up for using facial feature movements as a handsfree human-computer interaction technique. These alternatives to traditional input devices can give a more interesting gaming experi...

  8. Facial expression feature selection method based on neighborhood rough set theory and quantum genetic algorithm

    Institute of Scientific and Technical Information of China (English)

    冯林; 李聪; 沈莉

    2013-01-01

    Facial expression feature selection is one of the hot issues in the field of facial expression recognition. A novel facial expression feature selection method named feature selection based on neighborhood rough set theory and quantum genetic algorithm (FSNRSTQGA) is proposed. First, an evaluation criterion for the optimal expression feature subset is established based on neighborhood rough set theory and used as the fitness function. Then, following the quantum genetic algorithm evolutionary strategy, an approach to facial expression feature selection is developed. Simulation results on the Cohn-Kanade expression dataset illustrate that the FSNRSTQGA method is effective.

  9. Analysis and Reliability Performance Comparison of Different Facial Image Features

    Directory of Open Access Journals (Sweden)

    J. Madhavan

    2014-11-01

    Full Text Available This study performs a reliability analysis of different facial features with weighted retrieval accuracy on facial databases of increasing size. Many methods have been analyzed in existing papers on facial databases of constant size, as mentioned in the literature review, but little work has been carried out to study performance in terms of reliability or to examine how a method behaves as the size of the database increases. In this study, certain feature extraction methods are analyzed with the regular performance measure, and the performance measures are also modified to fit real-time requirements by giving weightages to the closer matches. Four facial feature extraction methods are evaluated: DWT with PCA, LWT with PCA, HMM with SVD and Gabor wavelet with HMM. The reliability of these methods is analyzed and reported. Among them, Gabor wavelet with HMM gives higher reliability than the other three. Experiments are carried out to evaluate the proposed approach on the Olivetti Research Laboratory (ORL) face database.

  10. Extraction of Facial Feature Points Using Cumulative Histogram

    CERN Document Server

    Paul, Sushil Kumar; Bouakaz, Saida

    2012-01-01

    This paper proposes a novel adaptive algorithm to automatically extract facial feature points such as eyebrow corners, eye corners, nostrils, nose tip, and mouth corners in frontal-view faces, based on a cumulative histogram approach with varying threshold values. First, the method adopts the Viola-Jones face detector to detect the location of the face and crop the face region in an image. Following the structure of the human face, six relevant regions, namely the right eyebrow, left eyebrow, right eye, left eye, nose, and mouth areas, are cropped from the face image. The histogram of each cropped region is then computed, and its cumulative histogram is thresholded at varying values to create a new filtered image in an adaptive way. The connected component of the area of interest in each filtered image indicates the respective feature region. A simple linear search algorithm for the eyebrow, eye and mouth filtered images and a contour algorithm for nos...
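
    A minimal sketch of the cumulative-histogram step described above, assuming OpenCV's Haar cascade as the Viola-Jones detector. The region split and the cumulative-histogram fraction are illustrative assumptions, not the authors' exact settings.

```python
# Sketch: find a dark facial-feature blob in a cropped region by thresholding
# at the gray level where the cumulative histogram reaches a chosen fraction
# of the region's pixels, then keeping the largest connected component.
import cv2
import numpy as np

def feature_blob(region_gray, fraction=0.15):
    hist = cv2.calcHist([region_gray], [0], None, [256], [0, 256]).ravel()
    cum = np.cumsum(hist)
    thresh = int(np.searchsorted(cum, fraction * region_gray.size))
    binary = (region_gray <= thresh).astype(np.uint8)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    if n <= 1:
        return None
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    return tuple(centroids[largest])  # (x, y) inside the region

img = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)
detector = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
for (x, y, w, h) in detector.detectMultiScale(img, 1.1, 5):
    face = img[y:y + h, x:x + w]
    mouth_region = face[int(0.65 * h):, int(0.25 * w):int(0.75 * w)]  # assumed split
    print("mouth blob centre:", feature_blob(mouth_region))
```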

  11. Facial Expression Recognition Based on Gabor Feature and Adaboost

    Institute of Scientific and Technical Information of China (English)

    刘燚; 高智勇; 王军

    2011-01-01

    In order to improve the recognition rate of facial expressions and enhance the performance of the classifier, an approach is proposed to recognize facial expressions using Gabor features combined with the Adaboost algorithm. The Gabor filter is one of the most important methods for extracting facial expression features; the Adaboost algorithm combines a series of weak classifiers to generate a strong classifier. To solve the multi-class classification problem, the classifiers are designed in a one-to-one mode, so the number of strong Adaboost classifiers is k(k-1)/2 (k is the number of categories). Finally, all strong classifiers are cascaded, the Gabor features are fed into these classifiers, and the facial expression class is recognized. Experimental results show that the recognition rate of the Gabor plus Adaboost approach is significantly higher than that of other methods such as the MVBoost algorithm.
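
    A minimal sketch of the pipeline under stated assumptions: a Gabor filter bank from OpenCV with simple pooled statistics as features, and scikit-learn's one-vs-one wrapper around AdaBoost, which builds the k(k-1)/2 binary strong classifiers the abstract counts. The dataset variables X_img and y are assumed to be prepared elsewhere.

```python
# Sketch: Gabor-bank features + one-vs-one AdaBoost for expression classes.
import cv2
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.multiclass import OneVsOneClassifier

def gabor_features(gray, scales=(7, 11, 15), orientations=8):
    feats = []
    for ksize in scales:
        for k in range(orientations):
            kern = cv2.getGaborKernel((ksize, ksize), 4.0, np.pi * k / orientations,
                                      10.0, 0.5, 0, ktype=cv2.CV_32F)
            resp = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kern)
            feats.extend([resp.mean(), resp.std()])  # pooled response statistics
    return np.array(feats)

# X_img: list of aligned grayscale face images, y: expression labels (assumed).
X = np.vstack([gabor_features(im) for im in X_img])
clf = OneVsOneClassifier(AdaBoostClassifier(n_estimators=100))  # k(k-1)/2 binary classifiers
clf.fit(X, y)
print(clf.predict(X[:1]))
```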

  12. New three-dimensional expression synthesis based on facial feature constraint points

    Institute of Scientific and Technical Information of China (English)

    胡勇; 王国胤; 杨勇

    2012-01-01

    Many existing three-dimensional facial expression synthesis methods are computationally expensive, complex, and not very realistic. Taking the distribution characteristics of facial features into account, this paper proposes a new three-dimensional expression synthesis method based on Delaunay triangulation. The method performs fast triangulation of the facial feature point set, which avoids ill-conditioned triangle meshes, effectively improves the realism of the synthesized facial expressions, and reduces the complexity of the algorithm. Extensive experiments on real facial expression synthesis show that the three-dimensional expressions generated by the proposed algorithm from a small number of feature points are more realistic, and that various realistic facial expressions can be synthesized effectively and quickly.
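
    A small sketch of the triangulation step, assuming SciPy's Delaunay implementation; the point coordinates below are placeholders, not the paper's constraint-point set.

```python
# Sketch: Delaunay-triangulate 2D facial feature points; each triangle can
# then be deformed independently to synthesise an expression.
import numpy as np
from scipy.spatial import Delaunay

points = np.array([[30, 40], [70, 40], [50, 60],   # eye corners, nose tip (placeholders)
                   [35, 80], [65, 80], [50, 95]])  # mouth corners, chin (placeholders)
tri = Delaunay(points)
print(tri.simplices)  # index triples forming the mesh
```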

  13. Facial expression identification using 3D geometric features from Microsoft Kinect device

    Science.gov (United States)

    Han, Dongxu; Al Jawad, Naseer; Du, Hongbo

    2016-05-01

    Facial expression identification is an important part of face recognition and is closely related to emotion detection from face images. Various solutions have been proposed in the past using different types of cameras and features. The Microsoft Kinect device has been widely used for multimedia interactions and, more recently, has been increasingly deployed to support scientific investigations. This paper explores the effectiveness of using the device in identifying emotional facial expressions such as surprise, smile, and sadness, and evaluates the usefulness of 3D data points on a face mesh structure obtained from the Kinect device. We present a distance-based geometric feature component that is derived from the distances between points on the face mesh and selected reference points in a single frame. The feature components extracted across a sequence of frames starting and ending with a neutral expression represent a whole expression. The feature vector eliminates the need for complex face orientation correction, simplifying the feature extraction process and making it more efficient. We applied a kNN classifier that exploits a feature-component-based similarity measure following the principle of dynamic time warping to determine the closest neighbors. Preliminary tests on a small-scale database of different facial expressions show the promise of the newly developed features and the usefulness of the Kinect device in facial expression identification.
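
    A minimal sketch of the sequence-matching idea under stated assumptions: per-frame distances from mesh points to a single reference point form the feature sequence, a plain dynamic-programming DTW aligns sequences of unequal length, and a 1-nearest-neighbour rule makes the decision. The reference-point choice and data layout are assumptions.

```python
# Sketch: distance features per frame + DTW-based nearest-neighbour matching.
import numpy as np

def frame_features(points, ref):
    return np.linalg.norm(points - ref, axis=1)       # distances to one reference point

def dtw(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def knn_predict(query_seq, train_seqs, train_labels):
    dists = [dtw(query_seq, s) for s in train_seqs]
    return train_labels[int(np.argmin(dists))]        # 1-nearest neighbour
```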

  14. Facial Feature Extraction Using Frequency Map Series in PCNN

    Directory of Open Access Journals (Sweden)

    Rencan Nie

    2016-01-01

    Full Text Available Pulse coupled neural network (PCNN) has been widely used in image processing. The 3D binary map series (BMS) generated by PCNN effectively describes image feature information such as edges and regional distribution, so BMS can be treated as the basis for extracting a 1D oscillation time series (OTS) for an image. However, traditional methods using BMS did not consider the correlation of the binary sequences in BMS or the spatial structure of each map. By further processing BMS, a novel facial feature extraction method is proposed. Firstly, considering the correlation among maps in BMS, a method is put forward to transform BMS into a frequency map series (FMS); this lessens the influence of noncontinuous feature regions in the binary images on OTS-BMS. Then, by computing the 2D entropy of every map in FMS, the 3D FMS is transformed into a 1D OTS (OTS-FMS), which has good geometric invariance for the facial image and contains the spatial structure information of the image. Finally, the standard Euclidean distance is used to measure distances between OTS-FMS. Experimental results verify the effectiveness of OTS-FMS in facial recognition, and it shows better recognition performance than other feature extraction methods.
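
    A small sketch of a 2D entropy computation for one map, assuming the common joint-histogram formulation over each pixel value and its local neighbourhood mean; the binning and window size are illustrative, not the paper's exact definition.

```python
# Sketch: 2D entropy of a map from the joint (pixel value, local mean) distribution.
import numpy as np
from scipy.ndimage import uniform_filter

def entropy_2d(fmap, bins=16):
    local_mean = uniform_filter(fmap.astype(float), size=3)
    hist, _, _ = np.histogram2d(fmap.ravel(), local_mean.ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))
```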

  15. Featural processing in recognition of emotional facial expressions.

    Science.gov (United States)

    Beaudry, Olivia; Roy-Charland, Annie; Perron, Melanie; Cormier, Isabelle; Tapp, Roxane

    2014-04-01

    The present study aimed to clarify the role played by the eye/brow and mouth areas in the recognition of the six basic emotions. In Experiment 1, accuracy was examined while participants viewed partial and full facial expressions; in Experiment 2, participants viewed full facial expressions while their eye movements were recorded. Recognition rates were consistent with previous research: happiness was highest and fear was lowest. The mouth and eye/brow areas were not equally important for the recognition of all emotions. More precisely, while the mouth was revealed to be important in the recognition of happiness and the eye/brow area of sadness, results are not as consistent for the other emotions. In Experiment 2, consistent with previous studies, the eyes/brows were fixated for longer periods than the mouth for all emotions. Again, variations occurred as a function of the emotions, the mouth having an important role in happiness and the eyes/brows in sadness. The general pattern of results for the other four emotions was inconsistent between the experiments as well as across different measures. The complexity of the results suggests that the recognition process of emotional facial expressions cannot be reduced to a simple feature processing or holistic processing for all emotions.

  16. Men's preference for women's facial features: testing homogamy and the paternity uncertainty hypothesis.

    Directory of Open Access Journals (Sweden)

    Jeanne Bovet

    Full Text Available Male mate choice might be based on both absolute and relative strategies. Cues of female attractiveness are thus likely to reflect both fitness and reproductive potential, as well as compatibility with particular male phenotypes. In humans, absolute clues of fertility and indices of favorable developmental stability are generally associated with increased women's attractiveness. However, why men exhibit variable preferences remains less studied. Male mate choice might be influenced by uncertainty of paternity, a selective factor in species where the survival of the offspring depends on postnatal paternal care. For instance, in humans, a man might prefer a woman with recessive traits, thereby increasing the probability that his paternal traits will be visible in the child and ensuring paternity. Alternatively, attractiveness is hypothesized to be driven by self-resembling features (homogamy), which would reduce outbreeding depression. These hypotheses have been simultaneously evaluated for various facial traits using both real and artificial facial stimuli. The predicted preferences were then compared to realized mate choices using facial pictures from couples with at least 1 child. No evidence was found to support the paternity uncertainty hypothesis, as recessive features were not preferred by male raters. Conversely, preferences for self-resembling mates were found for several facial traits (hair and eye color, chin dimple, and thickness of lips and eyebrows). Moreover, realized homogamy for facial traits was also found in a sample of long-term mates. The advantages of homogamy in evolutionary terms are discussed.

  17. Men's preference for women's facial features: testing homogamy and the paternity uncertainty hypothesis.

    Science.gov (United States)

    Bovet, Jeanne; Barthes, Julien; Durand, Valérie; Raymond, Michel; Alvergne, Alexandra

    2012-01-01

    Male mate choice might be based on both absolute and relative strategies. Cues of female attractiveness are thus likely to reflect both fitness and reproductive potential, as well as compatibility with particular male phenotypes. In humans, absolute clues of fertility and indices of favorable developmental stability are generally associated with increased women's attractiveness. However, why men exhibit variable preferences remains less studied. Male mate choice might be influenced by uncertainty of paternity, a selective factor in species where the survival of the offspring depends on postnatal paternal care. For instance, in humans, a man might prefer a woman with recessive traits, thereby increasing the probability that his paternal traits will be visible in the child and ensuring paternity. Alternatively, attractiveness is hypothesized to be driven by self-resembling features (homogamy), which would reduce outbreeding depression. These hypotheses have been simultaneously evaluated for various facial traits using both real and artificial facial stimuli. The predicted preferences were then compared to realized mate choices using facial pictures from couples with at least 1 child. No evidence was found to support the paternity uncertainty hypothesis, as recessive features were not preferred by male raters. Conversely, preferences for self-resembling mates were found for several facial traits (hair and eye color, chin dimple, and thickness of lips and eyebrows). Moreover, realized homogamy for facial traits was also found in a sample of long-term mates. The advantages of homogamy in evolutionary terms are discussed.

  18. Facial Expression Feature Extraction Based on the J-divergence Entropy of IMF

    Institute of Scientific and Technical Information of China (English)

    李茹; 张建伟

    2016-01-01

    Facial expression recognition refers to the process of using computer technology, image processing and machine vision to perform feature extraction, modeling and expression classification on facial expression images or image sequences, so that a computer program can infer a person's psychological state from facial expression information. Facial expression recognition is mainly divided into three stages: face detection, expression feature extraction and expression classification. Among these, expression feature selection is the key step, and its quality directly affects the classification result. This paper proposes a facial expression feature extraction method based on the energy entropy of the analytic signals of intrinsic mode functions (IMFs), applying the Hilbert-Huang transform to facial expression recognition. First, the Radon transform is applied to the expression image to obtain a facial expression signal. Empirical mode decomposition (EMD) of this signal yields a series of IMFs, and the Hilbert transform of each IMF gives its analytic signal, from which the instantaneous amplitude and instantaneous frequency are computed. The IMFs and the amplitudes of their analytic signals are selected as feature vectors, their energy discriminant entropy is computed, and the features with small within-class and large between-class discriminant entropy are chosen as the feature vectors for expression classification. PCA is used to reduce the dimensionality of the selected features, and a support vector machine (SVM) is used to classify two expression classes.
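
    A minimal sketch of the IMF/Hilbert part of the pipeline under stated assumptions: PyEMD is an assumed third-party dependency for EMD, the input signal here is synthetic rather than a Radon projection of a face, and the energy-entropy formula is a simple segment-energy version, not necessarily the paper's exact definition.

```python
# Sketch: EMD -> IMFs -> Hilbert analytic amplitude -> per-IMF energy entropy.
import numpy as np
from scipy.signal import hilbert
from PyEMD import EMD  # assumed dependency

def energy_entropy(x, n_segments=8):
    seg = np.array_split(x ** 2, n_segments)        # segment energies
    e = np.array([s.sum() for s in seg])
    p = e / e.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

signal = np.sin(np.linspace(0, 20 * np.pi, 1024)) + 0.3 * np.random.randn(1024)  # toy signal
imfs = EMD().emd(signal)                             # intrinsic mode functions
features = []
for imf in imfs:
    amplitude = np.abs(hilbert(imf))                 # instantaneous amplitude
    features.append([energy_entropy(imf), energy_entropy(amplitude)])
print(np.array(features))
```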

  19. Wavelet-based Facial Expression Feature Extraction

    Institute of Scientific and Technical Information of China (English)

    张秀艳; 裴雷雷

    2011-01-01

    First, histogram equalization is used to enhance the overall image contrast and make image details clearer. A discrete cosine transform is then applied to reduce the image feature dimension, remove redundant information, and retain the important low-frequency information. The Gabor wavelet transform is then used at selected scales and orientations to extract facial expression features. Finally, experimental comparisons show that preprocessing the images in this way saves a large amount of computing time in the wavelet transform stage.
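
    A minimal sketch of that preprocessing chain with OpenCV; the fixed resize, the retained DCT block size and the Gabor parameters are assumptions chosen for illustration.

```python
# Sketch: histogram equalisation -> DCT low-frequency reduction -> one Gabor response.
import cv2
import numpy as np

img = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)
img = cv2.resize(img, (128, 128))                   # even size required by cv2.dct
img = cv2.equalizeHist(img)                         # contrast enhancement

coeffs = cv2.dct(np.float32(img))                   # 2D discrete cosine transform
low = np.zeros_like(coeffs)
low[:32, :32] = coeffs[:32, :32]                    # keep an assumed low-frequency block
reduced = cv2.idct(low)

kern = cv2.getGaborKernel((15, 15), 4.0, np.pi / 4, 10.0, 0.5, 0, ktype=cv2.CV_32F)
gabor = cv2.filter2D(reduced, cv2.CV_32F, kern)     # one scale/orientation response
```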

  20. Orientation-sensitivity to facial features explains the Thatcher illusion.

    Science.gov (United States)

    Psalta, Lilia; Young, Andrew W; Thompson, Peter; Andrews, Timothy J

    2014-10-09

    The Thatcher illusion provides a compelling example of the perceptual cost of face inversion. The Thatcher illusion is often thought to result from a disruption to the processing of spatial relations between face features. Here, we show the limitations of this account and instead demonstrate that the effect of inversion in the Thatcher illusion is better explained by a disruption to the processing of purely local facial features. Using a matching task, we found that participants were able to discriminate normal and Thatcherized versions of the same face when they were presented in an upright orientation, but not when the images were inverted. Next, we showed that the effect of inversion was also apparent when only the eye region or only the mouth region was visible. These results demonstrate that a key component of the Thatcher illusion is to be found in orientation-specific encoding of the expressive features (eyes and mouth) of the face.

  1. Quantification of Cranial Asymmetry in Infants by Facial Feature Extraction

    Institute of Scientific and Technical Information of China (English)

    Chun-Ming Chang; Wei-Cheng Li; Chung-Lin Huang; Pei-Yeh Chang

    2014-01-01

    In this paper, a facial feature extraction method is proposed to transform three-dimensional (3D) head images of infants with deformational plagiocephaly for assessment of asymmetry. The features of the 3D point cloud of an infant's cranium can be identified by local feature analysis and a two-phase k-means classification algorithm. The 3D images of infants with an asymmetric cranium can then be aligned to the same pose. The mirrored head model obtained from the symmetry plane is compared with the original model to measure asymmetry. Numerical data on the cranial volume can be reviewed by a pediatrician to adjust the treatment plan. The system can also be used to demonstrate treatment progress.
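
    A small sketch of the mirror-comparison idea: reflect the aligned point cloud across the mid-sagittal plane and use the mean nearest-point deviation as a crude asymmetry score. The assumption that the symmetry plane is x = 0 after alignment, and the toy point cloud, are illustrative only.

```python
# Sketch: asymmetry score from mirrored vs. original 3D points.
import numpy as np
from scipy.spatial import cKDTree

def asymmetry_score(points):
    mirrored = points * np.array([-1.0, 1.0, 1.0])   # reflect across x = 0 (assumed plane)
    tree = cKDTree(points)
    d, _ = tree.query(mirrored)                       # distance to nearest original point
    return d.mean()

cloud = np.random.rand(1000, 3) - [0.5, 0.0, 0.0]     # toy head point cloud
print(asymmetry_score(cloud))
```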

  2. Facial Affect Recognition Using Regularized Discriminant Analysis-Based Algorithms

    Directory of Open Access Journals (Sweden)

    Cheng-Yuan Shih

    2010-01-01

    Full Text Available This paper presents a novel and effective method for facial expression recognition including happiness, disgust, fear, anger, sadness, surprise, and the neutral state. The proposed method utilizes a regularized discriminant analysis-based boosting algorithm (RDAB) with effective Gabor features to recognize the facial expressions. An entropy criterion is applied to select effective Gabor features, a subset of informative and nonredundant Gabor features. The proposed RDAB algorithm uses RDA as a learner in the boosting algorithm. RDA combines strengths of linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA). It solves the small sample size and ill-posed problems suffered by QDA and LDA through a regularization technique. Additionally, this study uses the particle swarm optimization (PSO) algorithm to estimate optimal parameters in RDA. Experimental results demonstrate that our approach can accurately and robustly recognize facial expressions.

  3. Mood Extraction Using Facial Features to Improve Learning Curves of Students in E-Learning Systems

    Directory of Open Access Journals (Sweden)

    Abdulkareem Al-Alwani

    2016-11-01

    Full Text Available Students' interest and involvement during class lectures is imperative for grasping concepts and significantly improves the academic performance of students. Direct supervision of lectures by instructors is the main reason behind student attentiveness in class. Still, a sufficient percentage of students tend to lose concentration even under direct supervision. In the e-learning environment this problem is aggravated by the absence of any human supervision, which calls for an approach to assess and identify a student's lapses of attention during an e-learning session. This study was carried out to improve students' involvement in e-learning platforms by using their facial features to extract mood patterns. Analyzing the moods based on the emotional states of a student during an online lecture can provide interesting results which can be readily used to improve the efficacy of content delivery in an e-learning platform. A survey was carried out among instructors involved in e-learning to identify the most probable facial features that represent the facial expressions or mood patterns of a student. A neural network approach is used to train the system on facial feature sets to predict specific facial expressions. Moreover, a data-association-based algorithm for extracting information on emotional states by correlating multiple sets of facial features is also proposed. This framework showed promising results in inciting students' interest by varying the content being delivered. Different combinations of inter-related facial expressions over specific time frames were used to estimate mood patterns and subsequently the level of involvement of a student in an e-learning environment. The results achieved during the course of the research showed that the mood patterns of a student correlate well with his interest or involvement during online lectures and can be used to vary the content to improve students' involvement in e-learning.

  4. Active Shape Model of Combining Pca and Ica: Application to Facial Feature Extraction

    Institute of Scientific and Technical Information of China (English)

    DENG Lin; RAO Ni-ni; WANG Gang

    2006-01-01

    Active Shape Model (ASM) is a powerful statistical tool for extracting the facial features of a face image under frontal view. It mainly relies on Principal Component Analysis (PCA) to statistically model the variability in the training set of example shapes. Independent Component Analysis (ICA) has been proven to be more efficient than PCA for extracting face features. In this paper, we combine PCA and ICA in a consecutive strategy to form a novel ASM. Firstly, an initial model, which captures the global shape variability in the training set, is generated by the PCA-based ASM. Then, the final shape model, which contains more local characteristics, is established by the ICA-based ASM. Experimental results verify that the accuracy of facial feature extraction is statistically significantly improved by applying the ICA modes after the PCA modes.

  5. Facial Beautification Method Based on Age Evolution

    Institute of Scientific and Technical Information of China (English)

    CHEN Yan; DING Shou-hong; HU Gan-le; MA Li-zhuang

    2013-01-01

    This paper proposes a new facial beautification method using facial rejuvenation based on age evolution. Traditional facial beautification methods only focus on skin color and deformation, and perform the transformation according to an empirical standard of beauty. Our method achieves the beautification effect by making the facial image look younger, which differs from traditional methods and is more reasonable. Firstly, we decompose the image into different layers and obtain a detail layer. Secondly, we obtain an age-related parameter: the standard deviation of the Gaussian distribution that the detail layer follows; support vector machine (SVM) regression is used to fit a function relating age and this standard deviation. Thirdly, we use this function to estimate the age of the input image and generate a new detail layer with a new standard deviation, calculated by decreasing the age. Lastly, we combine the original layers and the new detail layer to obtain a new face image. Experimental results show that this algorithm can make facial images more beautiful through facial rejuvenation. The proposed method opens up a new direction for facial beautification and has great potential for applications.
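
    A simplified sketch of the age-evolution idea, assuming a training set of grayscale faces with known ages (faces, ages) and an input_face image; none of these names, the Gaussian-blur base layer, or the 10-year rejuvenation step come from the paper.

```python
# Sketch: detail layer via Gaussian base subtraction, SVR for the age <-> detail-std
# relation, then rescale the detail layer towards a younger standard deviation.
import cv2
import numpy as np
from sklearn.svm import SVR

def detail_layer(gray):
    base = cv2.GaussianBlur(gray.astype(np.float32), (0, 0), sigmaX=3)
    return gray.astype(np.float32) - base, base

ages_arr = np.array(ages, dtype=float).reshape(-1, 1)           # assumed training ages
stds = np.array([detail_layer(f)[0].std() for f in faces])      # assumed training faces
std_of_age = SVR().fit(ages_arr, stds)                          # std as a function of age
age_of_std = SVR().fit(stds.reshape(-1, 1), ages_arr.ravel())   # rough inverse mapping

detail, base = detail_layer(input_face)
est_age = age_of_std.predict([[detail.std()]])[0]               # estimated age of input
new_std = std_of_age.predict([[max(est_age - 10.0, 1.0)]])[0]   # target a younger age
rejuvenated = np.clip(base + detail * new_std / detail.std(), 0, 255).astype(np.uint8)
```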

  6. Constraint-based facial animation

    NARCIS (Netherlands)

    Ruttkay, Z.M.

    1999-01-01

    Constraints have been traditionally used for computer animation applications to define side conditions for generating synthesized motion according to a standard, usually physically realistic, set of motion equations. The case of facial animation is very different, as no set of motion equations for f

  7. An Improved AAM Method for Extracting Human Facial Features

    Directory of Open Access Journals (Sweden)

    Tao Zhou

    2012-01-01

    Full Text Available Active appearance model (AAM) is a statistical parametric model that is widely used to extract human facial features for recognition. However, the intensity values used in the original AAM cannot provide enough information about image texture, which leads to larger errors or fitting failures of the AAM. In order to overcome these defects and improve the fitting performance of the AAM, an improved texture representation is proposed in this paper. Firstly, a translation-invariant wavelet transform is performed on the face images, and the image structure is then represented using a measure obtained by fusing the low-frequency coefficients with edge intensity. Experimental results show that the improved algorithm increases the accuracy of AAM fitting and expresses more information about edge and texture structures.

  8. New technique based on a probabilistic model for facial feature location

    Institute of Scientific and Technical Information of China (English)

    彭小宁; 邹北骥; 王磊; 罗平

    2009-01-01

    This paper describes a novel technique called the analytic boosted cascade detector (ABCD) to automatically locate features on the human face. ABCD extends the original boosted cascade detector (BCD) in three ways: a) a probabilistic model is included to connect the classifier responses with the facial features; b) a feature location method is formulated based on this probabilistic model; c) two selection criteria for face candidates are presented. The new technique merges face detection and facial feature location into a unified process. It outperforms average positions (AVG) and boosted classifiers + best response (BestHit). It is also considerably faster than methods based on nonlinear optimization, e.g., AAM and SOS.

  9. Interpretation of appearance: the effect of facial features on first impressions and personality.

    Science.gov (United States)

    Wolffhechel, Karin; Fagertun, Jens; Jacobsen, Ulrik Plesner; Majewski, Wiktor; Hemmingsen, Astrid Sofie; Larsen, Catrine Lohmann; Lorentzen, Sofie Katrine; Jarmer, Hanne

    2014-01-01

    Appearance is known to influence social interactions, which in turn could potentially influence personality development. In this study we focus on discovering the relationship between self-reported personality traits, first impressions and facial characteristics. The results reveal that several personality traits can be read above chance from a face, and that facial features influence first impressions. Despite the former, our prediction model fails to reliably infer personality traits from either facial features or first impressions. First impressions, however, could be inferred more reliably from facial features. We have generated artificial, extreme faces visualising the characteristics having an effect on first impressions for several traits. Conclusively, we find a relationship between first impressions, some personality traits and facial features and consolidate that people on average assess a given face in a highly similar manner.

  10. Interpretation of appearance: the effect of facial features on first impressions and personality

    DEFF Research Database (Denmark)

    Wolffhechel, Karin Marie Brandt; Fagertun, Jens; Jacobsen, Ulrik Plesner;

    2014-01-01

    Appearance is known to influence social interactions, which in turn could potentially influence personality development. In this study we focus on discovering the relationship between self-reported personality traits, first impressions and facial characteristics. The results reveal that several personality traits can be read above chance from a face, and that facial features influence first impressions. Despite the former, our prediction model fails to reliably infer personality traits from either facial features or first impressions. First impressions, however, could be inferred more reliably from facial features. We have generated artificial, extreme faces visualising the characteristics having an effect on first impressions for several traits. Conclusively, we find a relationship between first impressions, some personality traits and facial features and consolidate that people on average assess...

  11. Facial Feature Movements Caused by Various Emotions: Differences According to Sex

    Directory of Open Access Journals (Sweden)

    Kun Ha Suh

    2016-08-01

    Full Text Available Facial muscle micro movements for eight emotions were induced via visual and auditory stimuli and were verified according to sex. Thirty-one main facial features were chosen from the Kinect API out of 121 initially obtained facial features; the average change of pixel value was measured after image alignment. The proposed method is advantageous as it allows for comparisons. Facial micro-expressions are analyzed in real time using 31 facial feature points. The amount of micro-expressions for the various emotion stimuli was comparatively analyzed for differences according to sex. Men’s facial movements were similar for each emotion, whereas women’s facial movements were different for each emotion. The six feature positions were significantly different according to sex; in particular, the inner eyebrow of the right eye had a confidence level of p < 0.01. Consequently, discriminative power showed that men’s ability to separate one emotion from the others was lower compared to women’s ability in terms of facial expression, despite men’s average movements being higher compared to women’s. Additionally, the asymmetric phenomena around the left eye region of women appeared more strongly in cases of positive emotions.

  12. 3D facial geometric features for constrained local model

    NARCIS (Netherlands)

    Cheng, Shiyang; Zafeiriou, Stefanos; Asthana, Akshay; Pantic, Maja

    2014-01-01

    We propose a 3D Constrained Local Model framework for deformable face alignment in depth image. Our framework exploits the intrinsic 3D geometric information in depth data by utilizing robust histogram-based 3D geometric features that are based on normal vectors. In addition, we demonstrate the fusi

  13. Analysis of Sasang constitutional types using facial features with compensation for photographic distance

    Directory of Open Access Journals (Sweden)

    Jun-Hyeong Do

    2012-12-01

    Conclusion: It is noted that the significant facial features represent common characteristics of each SC type in the sense that we collected extensive opinions from many Sasang constitutional medicine doctors with various points of view. Additionally, a compensation method for the photographic distance is needed to find the significant facial features. We expect these findings and the related compensation technique to contribute to establishing a scientific basis for the precise diagnosis of SC types in clinical practice.

  14. Contributions of feature shapes and surface cues to the recognition of facial expressions.

    Science.gov (United States)

    Sormaz, Mladen; Young, Andrew W; Andrews, Timothy J

    2016-10-01

    Theoretical accounts of face processing often emphasise feature shapes as the primary visual cue to the recognition of facial expressions. However, changes in facial expression also affect the surface properties of the face. In this study, we investigated whether this surface information can also be used in the recognition of facial expression. First, participants identified facial expressions (fear, anger, disgust, sadness, happiness) from images that were manipulated such that they varied mainly in shape or mainly in surface properties. We found that the categorization of facial expression is possible in either type of image, but that different expressions are relatively dependent on surface or shape properties. Next, we investigated the relative contributions of shape and surface information to the categorization of facial expressions. This employed a complementary method that involved combining the surface properties of one expression with the shape properties from a different expression. Our results showed that the categorization of facial expressions in these hybrid images was equally dependent on the surface and shape properties of the image. Together, these findings provide a direct demonstration that both feature shape and surface information make significant contributions to the recognition of facial expressions.

  15. A Similarity Measurement Method of Facial Expression Based on Geometric Features

    Institute of Scientific and Technical Information of China (English)

    黄忠; 胡敏; 王晓华

    2015-01-01

    In facial animation applications such as performance-driven animation and expression cloning, the most similar expression needs to be found to enhance the realism and fidelity of the animation. A feature-weighted expression similarity measurement method based on facial geometric features is proposed. Firstly, on an active appearance model, chain codes are used to describe the shape features of local expression regions and capture local expression details, while deformation features are built from the topological relations among regional feature points to reflect holistic expression information. Then, a feature-weighted approach is adopted to measure the similarity of the fused geometric features, and the solution of the feature weights is formulated as the minimization of a weighted objective function. Finally, the solved weights and the feature weighting functions are used to measure the similarity between two expressions and to find the expression image most similar to an input expression image. Experimental results on the BU-3DFE and FEEDTUM databases show that the proposed method achieves a noticeably higher accuracy in finding similar expressions than existing measurement methods, remains robust for expressions of different types and intensities, and maintains high similarity in expression details such as mouth shape, cheek contraction, and the degree of mouth opening.
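
    A minimal sketch of a feature-weighted similarity between two expression feature vectors; in the paper the weights would come from minimising the weighted objective, whereas here they are placeholders supplied by the caller.

```python
# Sketch: weighted similarity of geometric feature vectors and a best-match search.
import numpy as np

def weighted_similarity(f1, f2, w):
    w = w / w.sum()                                  # normalised feature weights
    return np.exp(-np.sum(w * (f1 - f2) ** 2))       # 1 = identical, -> 0 = dissimilar

def most_similar(query, gallery, w):
    sims = [weighted_similarity(query, g, w) for g in gallery]
    return int(np.argmax(sims))                      # index of the closest expression
```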

  16. Facial biometrics based on 2D vector geometry

    Science.gov (United States)

    Malek, Obaidul; Venetsanopoulos, Anastasios; Androutsos, Dimitrios

    2014-05-01

    The main challenge of facial biometrics is its robustness and ability to adapt to changes in position, orientation, facial expression, and illumination. This research addresses the predominant deficiencies in this regard and systematically investigates a facial authentication system in the Euclidean domain. In the proposed method, Euclidean geometry in 2D vector space is constructed for feature extraction and authentication. In particular, each assigned point of the candidates' biometric features is considered to be a 2D geometrical coordinate in the Euclidean vector space. Algebraic shapes of the extracted candidate features are also computed and compared. The proposed authentication method is tested on images from the public "Put Face Database". The performance of the proposed method is evaluated based on Correct Recognition (CRR), False Acceptance (FAR), and False Rejection (FRR) rates. The theoretical foundation of the proposed method along with the experimental results is also presented in this paper. The experimental results demonstrate the effectiveness of the proposed method.

  17. Extraction of Subject-Specific Facial Expression Categories and Generation of Facial Expression Feature Space using Self-Mapping

    Directory of Open Access Journals (Sweden)

    Masaki Ishii

    2008-06-01

    Full Text Available This paper proposes a generation method for a subject-specific Facial Expression Map (FEMap) using Self-Organizing Maps (SOM), an unsupervised learning technique, together with Counter Propagation Networks (CPN), a supervised learning technique. The proposed method consists of two steps. In the first step, the topological change of a face pattern during the course of a facial expression is learned hierarchically using an SOM with a narrow mapping space, and the number of subject-specific facial expression categories and the representative images of each category are extracted. Psychological significance based on the neutral and six basic emotions (anger, sadness, disgust, happiness, surprise, and fear) is assigned to each extracted category. In the second step, the categories and representative images described above are learned using a CPN with a large mapping space, and a category map that expresses the topological characteristics of facial expression is generated. This paper defines this category map as an FEMap. Experimental results for six subjects show that the proposed method can generate a subject-specific FEMap based on the topological characteristics of the facial expressions appearing in face images.

  18. Facial animation on an anatomy-based hierarchical face model

    Science.gov (United States)

    Zhang, Yu; Prakash, Edmond C.; Sung, Eric

    2003-04-01

    In this paper we propose a new hierarchical 3D facial model based on anatomical knowledge that provides high fidelity for realistic facial expression animation. Like real human face, the facial model has a hierarchical biomechanical structure, incorporating a physically-based approximation to facial skin tissue, a set of anatomically-motivated facial muscle actuators and underlying skull structure. The deformable skin model has multi-layer structure to approximate different types of soft tissue. It takes into account the nonlinear stress-strain relationship of the skin and the fact that soft tissue is almost incompressible. Different types of muscle models have been developed to simulate distribution of the muscle force on the skin due to muscle contraction. By the presence of the skull model, our facial model takes advantage of both more accurate facial deformation and the consideration of facial anatomy during the interactive definition of facial muscles. Under the muscular force, the deformation of the facial skin is evaluated using numerical integration of the governing dynamic equations. The dynamic facial animation algorithm runs at interactive rate with flexible and realistic facial expressions to be generated.

  19. De Novo Mutation in ABCC9 Causes Hypertrichosis Acromegaloid Facial Features Disorder.

    Science.gov (United States)

    Afifi, Hanan H; Abdel-Hamid, Mohamed S; Eid, Maha M; Mostafa, Inas S; Abdel-Salam, Ghada M H

    2016-01-01

    A 13-year-old Egyptian girl with generalized hypertrichosis, gingival hyperplasia, coarse facial appearance, no cardiovascular or skeletal anomalies, keloid formation, and multiple labial frenula was referred to our clinic for counseling. Molecular analysis of the ABCC9 gene showed a de novo missense mutation located in exon 27, which has been described previously with Cantu syndrome. An overlap between Cantu syndrome, acromegaloid facial syndrome, and hypertrichosis acromegaloid facial features disorder is apparent at the phenotypic and molecular levels. The patient reported here gives further evidence that these syndromes are an expression of the ABCC9-related disorders, ranging from hypertrichosis and acromegaloid facies to the severe end of Cantu syndrome.

  20. Enhanced retinal modeling for face recognition and facial feature point detection under complex illumination conditions

    Science.gov (United States)

    Cheng, Yong; Li, Zuoyong; Jiao, Liangbao; Lu, Hong; Cao, Xuehong

    2016-07-01

    We improved classic retinal modeling to alleviate the adverse effect of complex illumination on face recognition and extracted robust image features. Our improvements on classic retinal modeling included three aspects. First, a combined filtering scheme was applied to simulate functions of horizontal and amacrine cells for accurate local illumination estimation. Second, we developed an optimal threshold method for illumination classification. Finally, we proposed an adaptive factor acquisition model based on the arctangent function. Experimental results on the combined Yale B; the Carnegie Mellon University poses, illumination, and expression; and the Labeled Face Parts in the Wild databases show that the proposed method can effectively alleviate illumination difference of images under complex illumination conditions, which is helpful for improving the accuracy of face recognition and that of facial feature point detection.
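
    An illustrative sketch only of a retina-style local adaptation in which each pixel is compressed by a factor derived from the locally estimated illumination through an arctangent mapping; the exact factor model and constants used in the paper are not reproduced here, and the Gaussian illumination estimate and Naka-Rushton-style compression are assumptions.

```python
# Sketch: illumination normalisation with an arctangent-based adaptive factor.
import cv2
import numpy as np

def retina_normalise(gray, sigma=8.0, k=4.0):
    img = gray.astype(np.float32) / 255.0
    local = cv2.GaussianBlur(img, (0, 0), sigma)          # local illumination estimate
    factor = (2.0 / np.pi) * np.arctan(k * local)         # arctangent adaptive factor
    out = img / (img + factor + 1e-6)                      # compressive adaptation
    return cv2.normalize(out, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```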

  1. The Importance of Facial Features and Their Spatial Organization for Attractiveness is Modulated by Gender

    Directory of Open Access Journals (Sweden)

    D Gill

    2011-04-01

    Full Text Available Many studies suggest that facial attractiveness signals mate quality. Fewer studies argue that the preference criteria emerge as a by-product of cortical processes. One way or the other, preference criteria should not necessarily be identical between female and male observers, because either their preferences may have different evolutionary roles or they may even be due to known differences in visuospatial skills and brain function lateralization (i.e., advantages favoring males' ability to determine spatial relations despite distracting information). The goal of this study was to assess sex differences in face attractiveness judgments by estimating the importance of facial features and their spatial organization. To this end, semipartial correlations were measured between intact-face preferences and preferences based on specific facial parts (eyes, nose, mouth, and hairstyle) or preferences based more on configuration (as reflected by low spatial frequency images). The results show strategy modulations by both the observer's and the face's gender. In general, the association between intact-face preferences and parts-based preferences was significantly higher for female than for male participants. For female faces, males' preferences were more strongly associated with their low spatial frequency preferences than were those of females. The two genders' strategies were more similar when judging male faces, and males performed more criteria modifications across face gender. The similarities between sexes regarding male faces are in line with previous studies that showed higher assignment of importance among men to attractiveness. Moreover, the results may suggest that men adjust their strategy to assess the danger of other males as potential rivals for mates.

  2. Facial Features: What Women Perceive as Attractive and What Men Consider Attractive.

    Science.gov (United States)

    Muñoz-Reyes, José Antonio; Iglesias-Julios, Marta; Pita, Miguel; Turiegano, Enrique

    2015-01-01

    Attractiveness plays an important role in social exchange and in the ability to attract potential mates, especially for women. Several facial traits have been described as reliable indicators of attractiveness in women, but very few studies consider the influence of several measurements simultaneously. In addition, most studies consider just one of two assessments to directly measure attractiveness: either self-evaluation or men's ratings. We explored the relationship between these two estimators of attractiveness and a set of facial traits in a sample of 266 young Spanish women. These traits are: facial fluctuating asymmetry, facial averageness, facial sexual dimorphism, and facial maturity. We made use of the advantage of having recently developed methodologies that enabled us to measure these variables in real faces. We also controlled for three other widely used variables: age, body mass index and waist-to-hip ratio. The inclusion of many different variables allowed us to detect any possible interaction between the features described that could affect attractiveness perception. Our results show that facial fluctuating asymmetry is related both to self-perceived and male-rated attractiveness. Other facial traits are related only to one direct attractiveness measurement: facial averageness and facial maturity only affect men's ratings. Unmodified faces are closer to natural stimuli than are manipulated photographs, and therefore our results support the importance of employing unmodified faces to analyse the factors affecting attractiveness. We also discuss the relatively low equivalence between self-perceived and male-rated attractiveness and how various anthropometric traits are relevant to them in different ways. Finally, we highlight the need to perform integrated-variable studies to fully understand female attractiveness.

  3. Facial Features: What Women Perceive as Attractive and What Men Consider Attractive.

    Directory of Open Access Journals (Sweden)

    José Antonio Muñoz-Reyes

    Full Text Available Attractiveness plays an important role in social exchange and in the ability to attract potential mates, especially for women. Several facial traits have been described as reliable indicators of attractiveness in women, but very few studies consider the influence of several measurements simultaneously. In addition, most studies consider just one of two assessments to directly measure attractiveness: either self-evaluation or men's ratings. We explored the relationship between these two estimators of attractiveness and a set of facial traits in a sample of 266 young Spanish women. These traits are: facial fluctuating asymmetry, facial averageness, facial sexual dimorphism, and facial maturity. We made use of the advantage of having recently developed methodologies that enabled us to measure these variables in real faces. We also controlled for three other widely used variables: age, body mass index and waist-to-hip ratio. The inclusion of many different variables allowed us to detect any possible interaction between the features described that could affect attractiveness perception. Our results show that facial fluctuating asymmetry is related both to self-perceived and male-rated attractiveness. Other facial traits are related only to one direct attractiveness measurement: facial averageness and facial maturity only affect men's ratings. Unmodified faces are closer to natural stimuli than are manipulated photographs, and therefore our results support the importance of employing unmodified faces to analyse the factors affecting attractiveness. We also discuss the relatively low equivalence between self-perceived and male-rated attractiveness and how various anthropometric traits are relevant to them in different ways. Finally, we highlight the need to perform integrated-variable studies to fully understand female attractiveness.

  4. Facial expression analysis using LBP features. Computer Engineering and Applications, 2011, 47(2): 149-152.

    Institute of Scientific and Technical Information of China (English)

    刘伟锋; 李树娟; 王延江

    2011-01-01

    In order to effectively extract facial expression features, a novel facial feature extraction approach for facial expression recognition based on the Local Binary Pattern (LBP) is proposed. Firstly, the gray levels of the facial expression images are normalized with the mean-variance method. By integral projection, critical facial feature points such as the eyebrows, eyes, nose and mouth are located, and the sub-regions belonging to each facial component are partitioned. Facial expression features are then represented by the block-wise LBP histograms of each sub-region. To validate the proposed method, experiments are carried out on the JAFFE (Japanese Female Facial Expression) database. The results illustrate that the proposed method effectively represents facial expression features.
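
    A minimal sketch of the two building blocks under stated assumptions: a horizontal integral projection used to locate a dark feature band (e.g., the eye line), and block-wise uniform-LBP histograms over a sub-region. The grid size and the projection heuristic are illustrative, not the paper's settings.

```python
# Sketch: integral projection for a feature band + block LBP histograms.
import numpy as np
from skimage.feature import local_binary_pattern

def horizontal_projection_minimum(gray):
    proj = gray.sum(axis=1).astype(float)          # integral projection per row
    return int(np.argmin(proj))                    # darkest band, e.g. the eye line

def block_lbp_histogram(region, grid=(4, 4), P=8, R=1):
    lbp = local_binary_pattern(region, P, R, method="uniform")
    feats = []
    for rows in np.array_split(lbp, grid[0], axis=0):
        for block in np.array_split(rows, grid[1], axis=1):
            hist, _ = np.histogram(block, bins=P + 2, range=(0, P + 2), density=True)
            feats.extend(hist)
    return np.array(feats)                         # concatenated block histograms
```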

  5. Facial-paralysis diagnostic system based on 3D reconstruction

    Science.gov (United States)

    Khairunnisaa, Aida; Basah, Shafriza Nisha; Yazid, Haniza; Basri, Hassrizal Hassan; Yaacob, Sazali; Chin, Lim Chee

    2015-05-01

    The diagnostic process for facial paralysis requires qualitative assessment for classification and treatment planning. This results in inconsistent assessments that can potentially affect treatment planning. We developed a facial-paralysis diagnostic system based on 3D reconstruction of RGB and depth data using a standard structured-light camera - Kinect 360 - and an implementation of Active Appearance Models (AAM). We also propose a quantitative assessment of facial paralysis based on a triangular model. In this paper, we report on the design and development process, including preliminary experimental results. Our preliminary experimental results demonstrate the feasibility of our quantitative assessment system for diagnosing facial paralysis.

  6. Dense mesh sampling for video-based facial animation

    Science.gov (United States)

    Peszor, Damian; Wojciechowska, Marzena

    2016-06-01

    The paper describes an approach for the selection of feature points on a three-dimensional triangle mesh obtained using various techniques from several video footages. This approach has a dual purpose. First, it allows the data stored for the purpose of facial animation to be minimized, so that instead of storing the position of each vertex in each frame, one can store only a small subset of vertices per frame and calculate the positions of the others from that subset. The second purpose is to select feature points that can be used for anthropometry-based retargeting of recorded mimicry to another model, with a sampling density beyond that which can be achieved using marker-based performance capture techniques. The developed approach was successfully tested on artificial models, models constructed using a structured light scanner, and models constructed from video footages using stereophotogrammetry.

  7. Binary pattern flavored feature extractors for Facial Expression Recognition: An overview

    DEFF Research Database (Denmark)

    Kristensen, Rasmus Lyngby; Tan, Zheng-Hua; Ma, Zhanyu;

    2015-01-01

    This paper conducts a survey of modern binary pattern flavored feature extractors applied to the Facial Expression Recognition (FER) problem. In total, 26 different feature extractors are included, of which six are selected for in-depth description. In addition, the paper unifies important FER terminology, describes open challenges, and provides recommendations for the scientific evaluation of FER systems. Lastly, it studies the facial expression recognition accuracy and blur invariance of the Local Frequency Descriptor. The paper seeks to bring together disjointed studies, and the main contribution...

  8. Effects of face feature and contour crowding in facial expression adaptation.

    Science.gov (United States)

    Liu, Pan; Montaser-Kouhsari, Leila; Xu, Hong

    2014-12-01

    Prolonged exposure to a visual stimulus, such as a happy face, biases the perception of subsequently presented neutral face toward sad perception, the known face adaptation. Face adaptation is affected by visibility or awareness of the adapting face. However, whether it is affected by discriminability of the adapting face is largely unknown. In the current study, we used crowding to manipulate discriminability of the adapting face and test its effect on face adaptation. Instead of presenting flanking faces near the target face, we shortened the distance between facial features (internal feature crowding), and reduced the size of face contour (external contour crowding), to introduce crowding. We are interested in whether internal feature crowding or external contour crowding is more effective in inducing crowding effect in our first experiment. We found that combining internal feature and external contour crowding, but not either of them alone, induced significant crowding effect. In Experiment 2, we went on further to investigate its effect on adaptation. We found that both internal feature crowding and external contour crowding reduced its facial expression aftereffect (FEA) significantly. However, we did not find a significant correlation between discriminability of the adapting face and its FEA. Interestingly, we found a significant correlation between discriminabilities of the adapting and test faces. Experiment 3 found that the reduced adaptation aftereffect in combined crowding by the external face contour and the internal facial features cannot be decomposed into the effects from the face contour and facial features linearly. It thus suggested a nonlinear integration between facial features and face contour in face adaptation.

  9. Contactless measurement of muscles fatigue by tracking facial feature points in a video

    DEFF Research Database (Denmark)

    Irani, Ramin; Nasrollahi, Kamal; Moeslund, Thomas B.

    2014-01-01

    Physical exercise may result in muscle tiredness, which is known as muscle fatigue. This occurs when the muscles cannot exert normal force, or when more than normal effort is required. Fatigue is a vital sign, for example, for therapists to assess their patient's progress or to change their exercises when the level of the fatigue might be dangerous for the patients. The current technology for measuring tiredness, like Electromyography (EMG), requires installing some sensors on the body. In some applications, like remote patient monitoring, this however might not be possible. To deal with such cases, in this paper we present a contactless method based on computer vision techniques to measure tiredness by detecting, tracking, and analyzing some facial feature points during the exercise. Experimental results on several test subjects and comparing them against ground truth data show...

  10. Active AU Based Patch Weighting for Facial Expression Recognition

    Science.gov (United States)

    Xie, Weicheng; Shen, Linlin; Yang, Meng; Lai, Zhihui

    2017-01-01

    Facial expression has many applications in human-computer interaction. Although feature extraction and selection have been well studied, the specificity of each expression variation is not fully explored in state-of-the-art works. In this work, the problem of multiclass expression recognition is converted into triplet-wise expression recognition. For each expression triplet, a new feature optimization model based on action unit (AU) weighting and patch weight optimization is proposed to represent the specificity of the expression triplet. The sparse representation-based approach is then proposed to detect the active AUs of the testing sample for better generalization. The algorithm achieved competitive accuracies of 89.67% and 94.09% for the Jaffe and Cohn–Kanade (CK+) databases, respectively. Better cross-database performance has also been observed. PMID:28146094

  11. Active AU Based Patch Weighting for Facial Expression Recognition

    Directory of Open Access Journals (Sweden)

    Weicheng Xie

    2017-01-01

    Full Text Available Facial expression has many applications in human-computer interaction. Although feature extraction and selection have been well studied, the specificity of each expression variation is not fully explored in state-of-the-art works. In this work, the problem of multiclass expression recognition is converted into triplet-wise expression recognition. For each expression triplet, a new feature optimization model based on action unit (AU) weighting and patch weight optimization is proposed to represent the specificity of the expression triplet. The sparse representation-based approach is then proposed to detect the active AUs of the testing sample for better generalization. The algorithm achieved competitive accuracies of 89.67% and 94.09% for the Jaffe and Cohn–Kanade (CK+) databases, respectively. Better cross-database performance has also been observed.

  12. Automated Facial Expression Recognition Using Gradient-Based Ternary Texture Patterns

    Directory of Open Access Journals (Sweden)

    Faisal Ahmed

    2013-01-01

    Full Text Available Recognition of human expression from facial images is an interesting research area, which has received increasing attention in recent years. A robust and effective facial feature descriptor is the key to designing a successful expression recognition system. Although much progress has been made, deriving a face feature descriptor that can perform consistently under changing environments is still a difficult and challenging task. In this paper, we present the gradient local ternary pattern (GLTP), a discriminative local texture feature for representing facial expression. The proposed GLTP operator encodes the local texture of an image by computing the gradient magnitudes of the local neighborhood and quantizing those values into three discrimination levels. The location and occurrence information of the resulting micropatterns is then used as the face feature descriptor. The performance of the proposed method has been evaluated for the person-independent face expression recognition task. Experiments with prototypic expression images from the Cohn-Kanade (CK) face expression database validate that the GLTP feature descriptor can effectively encode the facial texture and thus achieves better recognition performance than some well-known appearance-based facial features.
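
    As a concrete illustration of the operator described above, the following NumPy/SciPy sketch computes a gradient local ternary pattern histogram for one grayscale image. It assumes Sobel gradients, an 8-neighbour code and a fixed threshold t, and splits the ternary code into positive and negative binary halves whose histograms are concatenated; the published GLTP may differ in details such as block-wise histogramming over facial regions.

```python
import numpy as np
from scipy import ndimage

def gltp_histogram(image, t=10.0):
    """Minimal sketch of a gradient local ternary pattern descriptor.

    Each pixel's gradient magnitude is compared with its 8 neighbours;
    differences above +t map to 1, below -t map to -1, the rest to 0.
    The ternary code is split into two binary codes (positive / negative
    halves) whose 256-bin histograms are concatenated.
    """
    img = image.astype(np.float64)
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    mag = np.hypot(gx, gy)

    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = mag.shape
    center = mag[1:h - 1, 1:w - 1]
    pos_code = np.zeros_like(center, dtype=np.int32)
    neg_code = np.zeros_like(center, dtype=np.int32)
    for k, (dy, dx) in enumerate(offsets):
        neigh = mag[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        diff = neigh - center
        pos_code |= ((diff > t).astype(np.int32) << k)
        neg_code |= ((diff < -t).astype(np.int32) << k)

    hist_pos, _ = np.histogram(pos_code, bins=256, range=(0, 256))
    hist_neg, _ = np.histogram(neg_code, bins=256, range=(0, 256))
    return np.concatenate([hist_pos, hist_neg]).astype(np.float64)

# Usage on a synthetic face patch: the descriptor has 512 bins.
face = np.random.randint(0, 256, (64, 64)).astype(np.uint8)
print(gltp_histogram(face).shape)  # (512,)
```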

  13. Recovering Faces from Memory: The Distracting Influence of External Facial Features

    Science.gov (United States)

    Frowd, Charlie D.; Skelton, Faye; Atherton, Chris; Pitchford, Melanie; Hepton, Gemma; Holden, Laura; McIntyre, Alex H.; Hancock, Peter J. B.

    2012-01-01

    Recognition memory for unfamiliar faces is facilitated when contextual cues (e.g., head pose, background environment, hair and clothing) are consistent between study and test. By contrast, inconsistencies in external features, especially hair, promote errors in unfamiliar face-matching tasks. For the construction of facial composites, as carried…

  14. Robust facial expression recognition algorithm based on local metric learning

    Science.gov (United States)

    Jiang, Bin; Jia, Kebin

    2016-01-01

    In facial expression recognition tasks, different facial expressions are often confused with each other. Motivated by the fact that a learned metric can significantly improve the accuracy of classification, a facial expression recognition algorithm based on local metric learning is proposed. First, k-nearest neighbors of the given testing sample are determined from the total training data. Second, chunklets are selected from the k-nearest neighbors. Finally, the optimal transformation matrix is computed by maximizing the total variance between different chunklets and minimizing the total variance of instances in the same chunklet. The proposed algorithm can find the suitable distance metric for every testing sample and improve the performance on facial expression recognition. Furthermore, the proposed algorithm can be used for vector-based and matrix-based facial expression recognition. Experimental results demonstrate that the proposed algorithm could achieve higher recognition rates and be more robust than baseline algorithms on the JAFFE, CK, and RaFD databases.

  15. Support vector machine-based facial-expression recognition method combining shape and appearance

    Science.gov (United States)

    Han, Eun Jung; Kang, Byung Jun; Park, Kang Ryoung; Lee, Sangyoun

    2010-11-01

    Facial expression recognition can be widely used for various applications, such as emotion-based human-machine interaction, intelligent robot interfaces, face recognition robust to expression variation, etc. Previous studies have been classified as either shape- or appearance-based recognition. The shape-based method has the disadvantage that the individual variance of facial feature points exists irrespective of similar expressions, which can cause a reduction of the recognition accuracy. The appearance-based method has a limitation in that the textural information of the face is very sensitive to variations in illumination. To overcome these problems, a new facial-expression recognition method is proposed, which combines both shape and appearance information, based on the support vector machine (SVM). This research is novel in the following three ways as compared to previous works. First, the facial feature points are automatically detected by using an active appearance model. From these, the shape-based recognition is performed by using the ratios between the facial feature points based on the facial-action coding system. Second, the SVM, which is trained to recognize the same and different expression classes, is proposed to combine two matching scores obtained from the shape- and appearance-based recognitions. Finally, a single SVM is trained to discriminate four different expressions, such as neutral, a smile, anger, and a scream. By determining the expression of the input facial image whose SVM output is at a minimum, the accuracy of the expression recognition is much enhanced. The experimental results showed that the recognition accuracy of the proposed method was better than previous research and other fusion methods.

  16. Facial Expression Recognition from Video Sequences Based on Spatial-Temporal Motion Local Binary Pattern and Gabor Multiorientation Fusion Histogram

    Directory of Open Access Journals (Sweden)

    Lei Zhao

    2017-01-01

    Full Text Available This paper proposes a novel framework for facial expression analysis using dynamic and static information in video sequences. First, based on an incremental formulation, a discriminative deformable face alignment method is adapted to locate facial points, correct in-plane head rotation, and separate the facial region from the background. Then, a spatial-temporal motion local binary pattern (LBP) feature is extracted and integrated with a Gabor multiorientation fusion histogram to give descriptors which reflect the static and dynamic texture information of facial expressions. Finally, a multiclass support vector machine (SVM) classifier with a one-versus-one strategy is applied to classify facial expressions. Experiments on the Cohn-Kanade (CK+) facial expression dataset illustrate that the integrated framework outperforms methods using single descriptors. Compared with other state-of-the-art methods on the CK+, MMI, and Oulu-CASIA VIS datasets, our proposed framework performs better.

  17. Video-based facial animation with detailed appearance texture

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Facial shape transformation described by facial animation parameters (FAPs) involves the dynamic movement or deformation of eyes, brows, mouth, and lips, while detailed facial appearance concerns the facial textures such as creases, wrinkles, etc. Video-based facial animation exhibits not only facial shape transformation but also detailed appearance updates. In this paper, a novel algorithm for effectively extracting FAPs from video is proposed. Our system adopts the ICA-enforced direct appearance model (DAM) to track faces from video sequences; and then, FAPs are extracted from every frame of the video based on an extended model of Wincandidate 3.1. Facial appearance details are transformed from each frame by mapping an expression ratio image to the original image. We adopt wavelet to synthesize expressive details by combining the low-frequency signals of the original face and high-frequency signals of the expressive face from each frame of the video. Experimental results show that our proposed algorithm is suitable for reproducing realistic, expressive facial animations.

  18. Detection of Human Head Direction Based on Facial Normal Algorithm

    Directory of Open Access Journals (Sweden)

    Lam Thanh Hien

    2015-01-01

    Full Text Available Many scholars worldwide have made special efforts to find advanced approaches for efficiently estimating human head direction, which has been successfully applied in numerous applications such as human-computer interaction, teleconferencing, virtual reality, and 3D audio rendering. However, one of the existing shortcomings in the current literature is the violation of some ideal assumptions in practice. Hence, this paper proposes a novel algorithm based on the normal of the human face to recognize head direction by optimizing a 3D face model combined with the facial normal model. In our experiments, a computational program was also developed based on the proposed algorithm and integrated with a surveillance system to alert driver drowsiness. The program takes data from either video or a webcam, automatically identifies the critical points of facial features based on the analysis of the major components of the face, closely monitors the slant angle of the head, and issues an alarm signal whenever the driver dozes off. From our empirical experiments, we found that the proposed algorithm works effectively in real time and provides highly accurate results.
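
    To make the geometric idea concrete, the sketch below estimates a facial normal from three 3D landmark points and converts it into a slant angle relative to the camera axis. This is a simplified stand-in for the optimized 3D face model described in the record; the landmark coordinates and the camera axis are purely illustrative assumptions.

```python
import numpy as np

def face_normal(left_eye, right_eye, mouth):
    """Approximate facial normal from three 3D landmark points.

    The normal of the plane spanned by the two eye centres and the mouth
    centre serves as a rough proxy for head direction; the full 3D face
    model optimization of the paper is not reproduced here.
    """
    p1, p2, p3 = map(np.asarray, (left_eye, right_eye, mouth))
    n = np.cross(p2 - p1, p3 - p1)
    return n / np.linalg.norm(n)

def slant_angle(normal, camera_axis=(0.0, 0.0, -1.0)):
    """Angle in degrees between the facial normal and the camera axis."""
    c = np.asarray(camera_axis)
    cosang = np.dot(normal, c) / (np.linalg.norm(normal) * np.linalg.norm(c))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Example: a face turned slightly away from the camera.
n = face_normal([-3.0, 0.0, 1.0], [3.0, 0.0, 0.2], [0.0, -4.0, 0.8])
print(round(slant_angle(n), 1))
```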

  19. Newborns' Face Recognition: Role of Inner and Outer Facial Features

    Science.gov (United States)

    Turati, Chiara; Macchi Cassia, Viola; Simion, Francesca; Leo, Irene

    2006-01-01

    Existing data indicate that newborns are able to recognize individual faces, but little is known about what perceptual cues drive this ability. The current study showed that either the inner or outer features of the face can act as sufficient cues for newborns' face recognition (Experiment 1), but the outer part of the face enjoys an advantage…

  20. Time-series classification model based on multiple facial features for real-time mental fatigue monitoring

    Institute of Scientific and Technical Information of China (English)

    陈云华; 张灵; 丁伍洋; 严明玉

    2013-01-01

    In computer vision based fatigue monitoring, several issues remain unresolved: yawn detection based on the mouth shape in a single frame has low recognition accuracy; threshold-based analysis of blink parameters adapts poorly across individuals; and the transition stages of fatigue cannot be monitored in real time. To address these problems, a new time-series classification model based on multiple facial features is proposed for real-time mental fatigue monitoring. First, the mouth opening degree curve and the iris circularity ratio curve are extracted through facial visual features. Then, sliding-window segmentation and hidden Markov model (HMM) modelling are used to construct and label a yawn feature time series from the mouth opening degree curve, and a blink duration time series from the iris circularity ratio curve. Finally, a time stamp is added to the HMM so that the initial time point of each time series can be selected adaptively, allowing multiple feature time series to be synchronized and their labelling results to be fused. Experimental results show that the proposed model reduces the yawn misjudgment rate, adapts well to the blinking of people of different ages, and enables real-time monitoring of the transition stages of mental fatigue.
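
    The sliding-window HMM step described above can be sketched as follows. The example uses the third-party hmmlearn package (an assumption, not named in the record) and synthetic windows standing in for the mouth-opening-degree curve; one HMM is fitted per class and a window is labelled by the higher log-likelihood.

```python
import numpy as np
from hmmlearn import hmm  # third-party package, assumed available

rng = np.random.default_rng(1)
# Synthetic sliding windows of the mouth-opening-degree curve.
normal_windows = [rng.normal(0.2, 0.05, size=(30, 1)) for _ in range(40)]
yawn_windows = [np.concatenate([rng.normal(0.2, 0.05, size=(10, 1)),
                                rng.normal(0.8, 0.05, size=(10, 1)),
                                rng.normal(0.2, 0.05, size=(10, 1))])
                for _ in range(40)]

def fit_hmm(windows):
    """Fit one 2-state Gaussian HMM to a list of equally long windows."""
    X = np.vstack(windows)
    lengths = [len(w) for w in windows]
    model = hmm.GaussianHMM(n_components=2, covariance_type="diag", n_iter=50)
    return model.fit(X, lengths)

m_normal, m_yawn = fit_hmm(normal_windows), fit_hmm(yawn_windows)

# Label a new window by comparing log-likelihoods under the two models.
test = yawn_windows[0]
label = "yawn" if m_yawn.score(test) > m_normal.score(test) else "no yawn"
print(label)
```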

  1. Does my face FIT?: a face image task reveals structure and distortions of facial feature representation.

    Directory of Open Access Journals (Sweden)

    Christina T Fuentes

    Full Text Available Despite extensive research on face perception, few studies have investigated individuals' knowledge about the physical features of their own face. In this study, 50 participants indicated the location of key features of their own face, relative to an anchor point corresponding to the tip of the nose, and the results were compared to the true location of the same individual's features from a standardised photograph. Horizontal and vertical errors were analysed separately. An overall bias to underestimate vertical distances revealed a distorted face representation, with reduced face height. Factor analyses were used to identify separable subconfigurations of facial features with correlated localisation errors. Independent representations of upper and lower facial features emerged from the data pattern. The major source of variation across individuals was in representation of face shape, with a spectrum from tall/thin to short/wide representation. Visual identification of one's own face is excellent, and facial features are routinely used for establishing personal identity. However, our results show that spatial knowledge of one's own face is remarkably poor, suggesting that face representation may not contribute strongly to self-awareness.

  2. Facial Expression Recognition Based on Local Binary Patterns and Kernel Discriminant Isomap

    Directory of Open Access Journals (Sweden)

    Xiaoming Zhao

    2011-10-01

    Full Text Available Facial expression recognition is an interesting and challenging subject. Considering the nonlinear manifold structure of facial images, a new kernel-based manifold learning method, called kernel discriminant isometric mapping (KDIsomap), is proposed. KDIsomap aims to nonlinearly extract the discriminant information by maximizing the interclass scatter while minimizing the intraclass scatter in a reproducing kernel Hilbert space. KDIsomap is used to perform nonlinear dimensionality reduction on the extracted local binary pattern (LBP) facial features, and produces low-dimensional discriminant embedded data representations with striking performance improvement on facial expression recognition tasks. The nearest neighbor classifier with the Euclidean metric is used for facial expression classification. Facial expression recognition experiments are performed on two popular facial expression databases, i.e., the JAFFE database and the Cohn-Kanade database. Experimental results indicate that KDIsomap obtains the best accuracy of 81.59% on the JAFFE database, and 94.88% on the Cohn-Kanade database. KDIsomap outperforms the other methods used, such as principal component analysis (PCA), linear discriminant analysis (LDA), kernel principal component analysis (KPCA), kernel linear discriminant analysis (KLDA), as well as kernel isometric mapping (KIsomap).

  3. Clustering Based Approximation in Facial Image Retrieval

    Directory of Open Access Journals (Sweden)

    R.Pitchaiah

    2016-11-01

    Full Text Available The web search engine returns a great many images ranked by the keywords extracted from the surrounding text. Existing object recognition techniques either train classification models from human-labeled training images or attempt to infer the relations/probabilities between images and annotated keywords. Although efficient in supporting the mining of similar-looking facial image results using weakly labeled ones, the learning stage of the above cluster-based approximations suffers from latency that limits real-time usage, which is a central point of our demonstrations. We therefore propose a color-segmentation-driven automatic face detection approach coupled with a modified Clustering Based Approximation (CBA) scheme to reduce the latency while retaining the same efficiency during querying. The technical stages of our proposed approach are highlighted in the accompanying flow diagram. Every stage of the above technical procedure ensures query results at greatly reduced processing time, thus making our method feasible for real-time usage.

  4. Feature Fusion Algorithm for Multimodal Emotion Recognition from Speech and Facial Expression Signal

    Directory of Open Access Journals (Sweden)

    Han Zhiyan

    2016-01-01

    Full Text Available In order to overcome the limitations of single-mode emotion recognition, this paper describes a novel multimodal emotion recognition algorithm that takes the speech signal and the facial expression signal as its research subjects. First, the speech signal features and facial expression signal features are fused, sample sets are obtained by sampling with replacement, and classifiers are trained with a BP neural network (BPNN). Second, the difference between two classifiers is measured by a double error difference selection strategy. Finally, the final recognition result is obtained by the majority voting rule. Experiments show the method improves the accuracy of emotion recognition by exploiting the advantages of both decision-level and feature-level fusion, bringing the whole fusion process closer to human emotion recognition, with a recognition rate of 90.4%.
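
    A minimal sketch of the fusion-and-voting idea is given below, assuming scikit-learn's MLPClassifier as a stand-in for the BP neural network and synthetic vectors standing in for the extracted speech and facial-expression features; the double error difference selection strategy from the record is not reproduced.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
speech = rng.normal(size=(300, 20))      # placeholder speech features
face = rng.normal(size=(300, 30))        # placeholder facial-expression features
X = np.hstack([speech, face])            # feature-level fusion by concatenation
y = rng.integers(0, 4, size=300)         # four hypothetical emotion classes

# Train several "BP" networks on bootstrap samples (sampling with replacement).
nets = []
for seed in range(5):
    idx = rng.integers(0, len(X), size=len(X))
    net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=seed)
    nets.append(net.fit(X[idx], y[idx]))

# Decision-level fusion: majority vote over the individual predictions.
votes = np.array([net.predict(X[:10]) for net in nets])
majority = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
print(majority)
```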

  5. Facial Expression Recognition Based on WAPA and OEPA Fastica

    Directory of Open Access Journals (Sweden)

    Humayra Binte Ali

    2014-06-01

    Full Text Available The face is one of the most important biometric traits because of its uniqueness and robustness. For this reason, researchers from many diverse fields, such as security, psychology, image processing, and computer vision, have started to do research on face detection as well as facial expression recognition. Subspace learning methods work very well for recognizing facial features, and among subspace learning techniques, PCA, ICA, and NMF are the most prominent topics. In this work, our main focus is on independent component analysis (ICA). Among several architectures of ICA, we used the FastICA and LS-ICA algorithms. We applied FastICA to whole faces and to different facial parts to analyze the influence of different parts on the basic facial expressions. Our extended algorithms, WAPA-FastICA and OEPA-FastICA, are discussed in the proposed algorithm section. Locally salient ICA is implemented on the whole face by using 8x8 windows to find the more prominent facial features for facial expression. The experiments show that our proposed OEPA-FastICA and WAPA-FastICA outperform the existing prevalent Whole-FastICA and LS-ICA methods.

  6. 3D facial expression recognition using maximum relevance minimum redundancy geometrical features

    Science.gov (United States)

    Rabiu, Habibu; Saripan, M. Iqbal; Mashohor, Syamsiah; Marhaban, Mohd Hamiruce

    2012-12-01

    In recent years, facial expression recognition (FER) has become an attractive research area which, besides the fundamental challenges it poses, finds application in areas such as human-computer interaction, clinical psychology, lie detection, pain assessment, and neurology. Generally, approaches to FER consist of three main steps: face detection, feature extraction, and expression recognition. The recognition accuracy of FER hinges immensely on the relevance of the selected features in representing the target expressions. In this article, we present a person- and gender-independent 3D facial expression recognition method using maximum relevance minimum redundancy geometrical features. The aim is to detect a compact set of features that sufficiently represents the most discriminative features between the target classes. A multi-class one-against-one SVM classifier was employed to recognize the seven facial expressions: neutral, happy, sad, angry, fear, disgust, and surprise. An average recognition accuracy of 92.2% was recorded. Furthermore, inter-database homogeneity was investigated between two independent databases, BU-3DFE and UPM-3DFE; the results showed a strong homogeneity between the two databases.

  7. Facial anatomy.

    Science.gov (United States)

    Marur, Tania; Tuna, Yakup; Demirci, Selman

    2014-01-01

    Dermatologic problems of the face affect both function and aesthetics, which are based on complex anatomical features. Treating dermatologic problems while preserving the aesthetics and functions of the face requires knowledge of normal anatomy. When performing successfully invasive procedures of the face, it is essential to understand its underlying topographic anatomy. This chapter presents the anatomy of the facial musculature and neurovascular structures in a systematic way with some clinically important aspects. We describe the attachments of the mimetic and masticatory muscles and emphasize their functions and nerve supply. We highlight clinically relevant facial topographic anatomy by explaining the course and location of the sensory and motor nerves of the face and facial vasculature with their relations. Additionally, this chapter reviews the recent nomenclature of the branching pattern of the facial artery.

  8. A Method for Head-shoulder Segmentation and Human Facial Feature Positioning

    Institute of Scientific and Technical Information of China (English)

    1998-01-01

    This paper proposes a method of head-shoulder segmentation and human facial feature allocation for videotelephone application. Utilizing the characteristic of multi-resolution processing of human eyes, analyzing the edge information of only a single frame in different frequency bands, this method can automatically perform head-shoulder segmentation and locate the facial feature regions (eyes, mouth, etc.) with rather high precision, simple and fast computation. Therefore, this method makes the 3-D model automatic adaptation and 3-D motion estimation possible. However, this method may fail while processing practical images with a complex background. Then it is preferable to use some pre-known information and multi-frame joint processing.

  9. Automatic Facial Expression Analysis A Survey

    Directory of Open Access Journals (Sweden)

    C.P. Sumathi

    2013-01-01

    Full Text Available Automatic facial expression recognition has been an active research topic since the 1990s. There have been recent advances in face detection, facial expression recognition and classification. Multiple methods have been devised for facial feature extraction which help in identifying faces and facial expressions. This paper surveys some of the work published from 2003 to date. Various methods are analysed to identify facial expressions. The paper also discusses facial parameterization using Facial Action Coding System (FACS) action units and the methods which recognize the action unit parameters from extracted facial expression data. Various kinds of facial expressions are present in the human face, which can be identified based on their geometric features, appearance features and hybrid features. The two basic concepts of extracting features are based on facial deformation and facial motion. This article also identifies techniques based on the characteristics of expressions and classifies the suitable methods that can be implemented.

  10. Facial color feature analysis of patients with different visceral organ diseases based on image processing

    Institute of Scientific and Technical Information of China (English)

    董梦青; 李福凤; 周睿; 王忆勤

    2013-01-01

    Objective: To objectively investigate the facial color feature information of patients with coronary heart disease (CHD), chronic renal failure (CRF), and chronic hepatitis B (CHB). Methods: A traditional Chinese medicine (TCM) facial diagnosis digital detection instrument was applied to collect and analyze the facial color feature information of CHD, CRF, and CHB patients. Results: In the CHD group, a reddish-yellow or red complexion was most common, and the facial red index, black index and overall facial index were significantly higher than in the CRF and CHB groups (P<0.05). In the CRF group, the complexion was mainly yellow, cyan and white, and the facial white index and cyan index were significantly higher than in the CHD and CHB groups (P<0.05). In the CHB group, the complexion was mainly yellow and black, and the facial red index, white index, cyan index and overall index were significantly lower than in the CRF group (P<0.05). The yellow index showed no significant difference among the three diseases. Conclusion: The facial color and its parameters change with certain regularity in different visceral organ diseases; the TCM facial diagnosis digital detection instrument is feasible for assisting clinical TCM diagnosis and provides an objective basis for TCM syndrome differentiation of chronic renal failure, coronary heart disease and chronic hepatitis B.

  11. Surface Electromyography-Based Facial Expression Recognition in Bi-Polar Configuration

    Directory of Open Access Journals (Sweden)

    Mahyar Hamedi

    2011-01-01

    Full Text Available Problem statement: Facial expression recognition has improved recently and has become a significant issue in diagnostic and medical fields, particularly in the areas of assistive technology and rehabilitation. Apart from their usefulness, there are some problems in these applications, such as peripheral conditions, lighting, contrast and the quality of video and images. Approach: The Facial Action Coding System (FACS) and some other methods based on images or videos have been applied. This study proposed two methods for recognizing 8 different facial expressions, such as natural (rest), happiness in three conditions, anger, rage, gesturing ‘a’ as in the word apple, and gesturing no by pulling up the eyebrows, based on three-channel surface EMG (SEMG) in a bi-polar configuration. Raw signals were processed in three main sequential steps (filtration, feature extraction and active feature selection). The processed data were fed into support vector machine (SVM) and fuzzy C-means (FCM) classifiers to be classified into the 8 facial expression groups. Results: Recognition ratios of 91.8 and 80.4% were achieved for FCM and SVM, respectively. Conclusion: The results confirmed sufficient accuracy and power in this field of study, and FCM showed better ability and performance in comparison with SVM. It is expected that in the near future, new approaches based on the frequency bandwidth of each facial gesture will provide better results.

  12. Facial contour deformity correction with microvascular flaps based on the 3-dimentional template and facial moulage

    Directory of Open Access Journals (Sweden)

    Dinesh Kadam

    2013-01-01

    Full Text Available Introduction: Facial contour deformities present with varied aetiology and degrees of severity. Accurate assessment, selecting a suitable tissue and sculpturing it to fill the defect is challenging and largely subjective. Objective assessment with imaging and software is not always feasible, and preparing a template is complicated. A three-dimensional (3D) wax template pre-fabricated over the facial moulage aids surgeons in fulfilling these tasks. Severe deformities demand a stable vascular tissue for an acceptable outcome. Materials and Methods: We present a review of eight consecutive patients who underwent augmentation of facial contour defects with free flaps between June 2005 and January 2011. A de-epithelialised free anterolateral thigh (ALT) flap was used in three patients, a radial artery forearm flap and a fibula osteocutaneous flap in two each, and a groin flap in one patient. A 3D wax template was fabricated by augmenting the deformity on the facial moulage. It was utilised to select the flap, to determine the exact dimensions and to sculpture intraoperatively. Ancillary procedures such as genioplasty, rhinoplasty and coloboma correction were performed. Results: The average age at presentation was 25 years, the average disease-free interval was 5.5 years, and all flaps survived. The mean follow-up period was 21.75 months. The correction was aesthetically acceptable and was maintained without any recurrence or atrophy. Conclusion: The 3D wax template on the facial moulage is a simple, inexpensive and precise objective tool. It provides an accurate guide for the planning and execution of flap reconstruction. The selection of the flap is based on the type and extent of the defect. The superiority of vascularised free tissue is well known, and the ALT flap offers a versatile option for correcting varying degrees of deformity. Ancillary procedures improve the overall aesthetic outcome, and minor flap touch-up procedures are generally required.

  13. Facial paralysis

    Science.gov (United States)

    Facial paralysis occurs when a person is no longer able ... (MedlinePlus encyclopedia entry: //medlineplus.gov/ency/article/003028.htm)

  14. Application of LBP information of feature points in facial expression recognition

    Institute of Scientific and Technical Information of China (English)

    刘伟锋; 王延江

    2009-01-01

    A facial expression recognition method based on the local binary pattern (LBP) information of feature points is proposed. First, the LBP feature in facial expression recognition is analysed. Then, feature points around the eyes in the upper face and around the mouth in the lower face, which hold rich expression information, are selected, and the LBP information of the neighbourhood of each feature point is computed as the expression feature for facial expression recognition. Experimental results show that, with the proposed method, pre-registration of the face is not necessary, and compared with the traditional LBP feature it is more suitable for facial expression recognition.
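
    A minimal sketch of the descriptor described above, assuming scikit-image's uniform LBP and hypothetical landmark coordinates (the landmark detector itself is outside the scope of this record): an LBP histogram is computed in a small window around each selected feature point and the histograms are concatenated.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def feature_point_lbp(gray, points, radius=2, n_points=16, patch=16):
    """Concatenate uniform-LBP histograms around selected facial landmarks.

    `points` are (row, col) landmark coordinates, e.g. around the eyes and
    mouth; how the landmarks are detected is not covered by this sketch.
    """
    lbp = local_binary_pattern(gray, n_points, radius, method="uniform")
    n_bins = n_points + 2          # number of uniform-LBP codes
    half = patch // 2
    hists = []
    for r, c in points:
        window = lbp[max(r - half, 0):r + half, max(c - half, 0):c + half]
        h, _ = np.histogram(window, bins=n_bins, range=(0, n_bins), density=True)
        hists.append(h)
    return np.concatenate(hists)

# Example on a random "image" with two hypothetical landmarks.
gray = np.random.randint(0, 256, (128, 128)).astype(np.uint8)
print(feature_point_lbp(gray, [(40, 40), (40, 88)]).shape)  # (36,)
```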

  15. Scattered Data Processing Approach Based on Optical Facial Motion Capture

    Directory of Open Access Journals (Sweden)

    Qiang Zhang

    2013-01-01

    Full Text Available In recent years, animation reconstruction of facial expressions has become a popular research field in computer science and motion capture-based facial expression reconstruction is now emerging in this field. Based on the facial motion data obtained using a passive optical motion capture system, we propose a scattered data processing approach, which aims to solve the common problems of missing data and noise. To recover missing data, given the nonlinear relationships among neighbors with the current missing marker, we propose an improved version of a previous method, where we use the motion of three muscles rather than one to recover the missing data. To reduce the noise, we initially apply preprocessing to eliminate impulsive noise, before our proposed three-order quasi-uniform B-spline-based fitting method is used to reduce the remaining noise. Our experiments showed that the principles that underlie this method are simple and straightforward, and it delivered acceptable precision during reconstruction.
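
    The noise-reduction step above fits smooth curves to noisy marker trajectories. The sketch below uses SciPy's cubic smoothing spline as a stand-in for the three-order quasi-uniform B-spline fitting named in the record; the trajectory and the smoothing factor are synthetic assumptions.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Synthetic, noisy 1D coordinate of a facial marker over time.
t = np.linspace(0.0, 2.0 * np.pi, 200)
clean = np.sin(t)
noisy = clean + np.random.default_rng(0).normal(scale=0.05, size=t.size)

# Cubic smoothing spline; s controls how much residual noise is tolerated.
spline = UnivariateSpline(t, noisy, k=3, s=0.5)
smoothed = spline(t)

# The smoothed curve is typically closer to the clean signal than the raw one.
print(np.abs(smoothed - clean).mean() < np.abs(noisy - clean).mean())
```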

  16. [Peculiar features of mastoiditis in a breast-fed infant with the "exposed" facial nerve].

    Science.gov (United States)

    Andreeva, I G

    2013-01-01

    This paper reports the clinical case of mastoiditis in a 5-month old child in whom an unusual localization of the totally "naked" facial nerve outside of the bone canal in the mastoid part was discovered intraoperatively. This finding was quite unexpected because nerves are not visible on CT scanograms. The author emphasizes that the clinical course of otitis media in the breast-fed infants and young children is characterized by a number of peculiarities due to specific anatomical, physiological, and immunological features of the child's organism. She also notes that the number of antromastoidotomies for the treatment of mastoiditis has increased in Tatarstan during the recent years.

  17. Local Illumination Normalization and Facial Feature Point Selection for Robust Face Recognition

    Directory of Open Access Journals (Sweden)

    Song HAN

    2013-03-01

    Full Text Available Face recognition systems must be robust to variation in various factors such as facial expression, illumination, head pose and aging. In particular, robustness against illumination variation is one of the most important problems to be solved for the practical use of face recognition systems. The Gabor wavelet is widely used in face detection and recognition because it makes it possible to simulate the function of the human visual system. In this paper, we propose a method for extracting Gabor wavelet features which is stable under variation of local illumination, and we show experimental results demonstrating its effectiveness.
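
    As an illustration of the kind of Gabor wavelet features discussed above, the sketch below builds a small filter bank with OpenCV and samples the magnitude responses at one facial point (a simplified "jet"). The filter parameters and the sample point are assumptions, and the local illumination normalization step of the record is not reproduced.

```python
import numpy as np
import cv2  # OpenCV, assumed available

def gabor_bank(ksize=21, sigmas=(4.0,), thetas=np.arange(0, np.pi, np.pi / 8),
               lambd=10.0, gamma=0.5):
    """Build a small Gabor filter bank (8 orientations by default)."""
    return [cv2.getGaborKernel((ksize, ksize), s, t, lambd, gamma, 0,
                               ktype=cv2.CV_32F)
            for s in sigmas for t in thetas]

def gabor_jet(gray, point, bank):
    """Gabor magnitude responses at one facial point."""
    r, c = point
    responses = [cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, k)[r, c]
                 for k in bank]
    return np.abs(np.array(responses))

# Example on a random patch with a hypothetical feature point at its centre.
gray = np.random.randint(0, 256, (96, 96)).astype(np.uint8)
print(gabor_jet(gray, (48, 48), gabor_bank()))
```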

  18. Dysmorphic Facial Features and Other Clinical Characteristics in Two Patients with PEX1 Gene Mutations

    Directory of Open Access Journals (Sweden)

    Mehmet Gunduz

    2016-01-01

    Full Text Available Peroxisomal disorders are a group of genetically heterogeneous metabolic diseases related to dysfunction of peroxisomes. Dysmorphic features, neurological abnormalities, and hepatic dysfunction can be presenting signs of peroxisomal disorders. Here we present the dysmorphic facial features and other clinical characteristics of two patients with PEX1 gene mutations. The follow-up periods were 3.5 years and 1 year. Case I was a one-year-old girl who presented with neurodevelopmental delay, hepatomegaly, bilateral hearing loss, and visual problems. Ophthalmologic examination suggested septooptic dysplasia. Cranial magnetic resonance imaging (MRI) showed nonspecific gliosis in the subcortical and periventricular deep white matter. Case II was a 2.5-year-old girl referred for investigation of global developmental delay and elevated liver enzymes. Ophthalmologic examination findings were consistent with bilateral nystagmus and retinitis pigmentosa. Cranial MRI was normal. Dysmorphic facial features including a broad nasal root, low-set ears, downward-slanting eyes, downward-slanting eyebrows, and epicanthal folds were common findings in the two patients. Molecular genetic analysis indicated a novel homozygous IVS1-2A>G mutation in Case I and a homozygous p.G843D (c.2528G>A) mutation in Case II in the PEX1 gene. Clinical findings and developmental prognosis vary with PEX1 gene mutations. A Kabuki-like phenotype associated with liver pathology may indicate Zellweger spectrum disorders (ZSD).

  19. Facial expression recognition using biologically inspired features and SVM

    Institute of Scientific and Technical Information of China (English)

    穆国旺; 王阳; 郭蔚

    2014-01-01

    C1 features are introduced into facial expression recognition for static images, and a new algorithm for facial expression recognition based on biologically inspired features (BIFs) and SVM is proposed. The C1 features of the facial images are extracted, the PCA+LDA method is used to reduce the dimensionality of the C1 features, and an SVM is used for classification of the expression. Experiments on the JAFFE and Extended Cohn-Kanade (CK+) facial expression datasets show the effectiveness and good performance of the algorithm.
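
    The dimensionality-reduction and classification stage described above can be wired up as a short scikit-learn pipeline. In the sketch below, random vectors stand in for the C1 (HMAX-like) features, which are not reproduced here; the component counts and class labels are illustrative assumptions.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(210, 500))   # placeholder for one C1 vector per face image
y = rng.integers(0, 7, size=210)  # seven expression classes

# PCA for dimensionality reduction, LDA for discriminant projection, SVM on top.
clf = make_pipeline(PCA(n_components=50),
                    LinearDiscriminantAnalysis(n_components=6),
                    SVC(kernel="linear"))
clf.fit(X, y)
print(clf.score(X, y))  # training accuracy on the synthetic data
```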

  20. Contributions of feature shapes and surface cues to the recognition and neural representation of facial identity.

    Science.gov (United States)

    Andrews, Timothy J; Baseler, Heidi; Jenkins, Rob; Burton, A Mike; Young, Andrew W

    2016-10-01

    A full understanding of face recognition will involve identifying the visual information that is used to discriminate different identities and how this is represented in the brain. The aim of this study was to explore the importance of shape and surface properties in the recognition and neural representation of familiar faces. We used image morphing techniques to generate hybrid faces that mixed shape properties (more specifically, second order spatial configural information as defined by feature positions in the 2D-image) from one identity and surface properties from a different identity. Behavioural responses showed that recognition and matching of these hybrid faces was primarily based on their surface properties. These behavioural findings contrasted with neural responses recorded using a block design fMRI adaptation paradigm to test the sensitivity of Haxby et al.'s (2000) core face-selective regions in the human brain to the shape or surface properties of the face. The fusiform face area (FFA) and occipital face area (OFA) showed a lower response (adaptation) to repeated images of the same face (same shape, same surface) compared to different faces (different shapes, different surfaces). From the behavioural data indicating the critical contribution of surface properties to the recognition of identity, we predicted that brain regions responsible for familiar face recognition should continue to adapt to faces that vary in shape but not surface properties, but show a release from adaptation to faces that vary in surface properties but not shape. However, we found that the FFA and OFA showed an equivalent release from adaptation to changes in both shape and surface properties. The dissociation between the neural and perceptual responses suggests that, although they may play a role in the process, these core face regions are not solely responsible for the recognition of facial identity.

  1. Is the emotion recognition deficit associated with frontotemporal dementia caused by selective inattention to diagnostic facial features?

    Science.gov (United States)

    Oliver, Lindsay D; Virani, Karim; Finger, Elizabeth C; Mitchell, Derek G V

    2014-07-01

    Frontotemporal dementia (FTD) is a debilitating neurodegenerative disorder characterized by severely impaired social and emotional behaviour, including emotion recognition deficits. Though fear recognition impairments seen in particular neurological and developmental disorders can be ameliorated by reallocating attention to critical facial features, the possibility that similar benefits can be conferred to patients with FTD has yet to be explored. In the current study, we examined the impact of presenting distinct regions of the face (whole face, eyes-only, and eyes-removed) on the ability to recognize expressions of anger, fear, disgust, and happiness in 24 patients with FTD and 24 healthy controls. A recognition deficit was demonstrated across emotions by patients with FTD relative to controls. Crucially, removal of diagnostic facial features resulted in an appropriate decline in performance for both groups; furthermore, patients with FTD demonstrated a lack of disproportionate improvement in emotion recognition accuracy as a result of isolating critical facial features relative to controls. Thus, unlike some neurological and developmental disorders featuring amygdala dysfunction, the emotion recognition deficit observed in FTD is not likely driven by selective inattention to critical facial features. Patients with FTD also mislabelled negative facial expressions as happy more often than controls, providing further evidence for abnormalities in the representation of positive affect in FTD. This work suggests that the emotional expression recognition deficit associated with FTD is unlikely to be rectified by adjusting selective attention to diagnostic features, as has proven useful in other select disorders.

  2. 3D Facial Similarity Measure Based on Geodesic Network and Curvatures

    Directory of Open Access Journals (Sweden)

    Junli Zhao

    2014-01-01

    Full Text Available Automated 3D facial similarity measurement is a challenging and valuable research topic in anthropology and computer graphics. It is widely used in various fields, such as criminal investigation, kinship confirmation, and face recognition. This paper proposes a 3D facial similarity measure based on a combination of geodesic and curvature features. Firstly, a geodesic network is generated for each face, with geodesics and iso-geodesics determined, and these network points are adopted as the correspondence across face models. Then, four metrics associated with curvatures, that is, the mean curvature, Gaussian curvature, shape index, and curvedness, are computed for each network point by using a weighted average of its neighborhood points. Finally, correlation coefficients according to these metrics are computed, respectively, as the similarity measures between two 3D face models. Experiments on 3D facial models of different persons and on different 3D facial models of the same person are implemented and compared with a subjective face similarity study. The results show that the geodesic network plays an important role in 3D facial similarity measurement. The similarity measure defined by the shape index is basically consistent with humans' subjective evaluation, and it can measure 3D face similarity more objectively than the other indices.
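
    The final step of the measure above reduces to correlating per-point descriptors sampled at corresponding network points. The sketch below shows that step only, with synthetic shape-index values standing in for real curvature data; computing the geodesic network and the curvatures themselves is outside its scope.

```python
import numpy as np

def facial_similarity(desc_a, desc_b):
    """Similarity between two faces as the Pearson correlation of a
    per-point descriptor (e.g. shape index) sampled at corresponding
    geodesic-network points."""
    a = np.asarray(desc_a, dtype=float)
    b = np.asarray(desc_b, dtype=float)
    return float(np.corrcoef(a, b)[0, 1])

rng = np.random.default_rng(0)
face_a = rng.normal(size=500)                        # shape-index values at 500 points
face_b = face_a + rng.normal(scale=0.2, size=500)    # a similar face
face_c = rng.normal(size=500)                        # an unrelated face
print(round(facial_similarity(face_a, face_b), 3))   # high correlation
print(round(facial_similarity(face_a, face_c), 3))   # near zero
```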

  3. Analysis of differences between Western and East-Asian faces based on facial region segmentation and PCA for facial expression recognition

    Science.gov (United States)

    Benitez-Garcia, Gibran; Nakamura, Tomoaki; Kaneko, Masahide

    2017-01-01

    Darwin was the first to assert that facial expressions are innate and universal, and that they are recognized across all cultures. However, some recent cross-cultural studies have questioned this assumed universality. Therefore, this paper presents an analysis of the differences between Western and East-Asian faces for the six basic expressions (anger, disgust, fear, happiness, sadness and surprise), focused on three individual facial regions: eyes-eyebrows, nose and mouth. The analysis is conducted by applying PCA for two feature extraction methods: appearance-based, using the pixel intensities of facial parts, and geometric-based, handling 125 feature points from the face. Both methods are evaluated using 4 standard databases for both racial groups, and the results are compared with a cross-cultural human study applied to 20 participants. Our analysis reveals that differences between Westerners and East-Asians exist mainly in the regions of the eyes-eyebrows and mouth for expressions of fear and disgust, respectively. This work presents important findings for a better design of automatic facial expression recognition systems based on the differences between the two racial groups.

  4. Ultrasonographic evaluation of fetal facial anatomy (I): ultrasonographic features of normal fetal face in vitro study

    Institute of Scientific and Technical Information of China (English)

    李胜利; 陈琮瑛; 刘菊玲; 欧阳淑媛

    2004-01-01

    Background Because of a lack of skill in scanning the normal fetal facial structures and their corresponding ultrasonic features, misdiagnoses frequently occur. Therefore, we studied the appearance features and improved the display skills of fetal facial anatomy in order to provide a basis for prenatal diagnosis. Methods Twenty fetuses with normal facial anatomy, from labor induced because of malformations other than facial anomalies, were immersed in a water bath and then scanned ultrasonographically on coronal, sagittal and transverse planes to define the ultrasonic image features of normal anatomy. The coronal and sagittal planes obtained from the submandibular triangle were used for displaying the soft and hard palate in particular. Results Facial anatomic structures of the fetus can be clearly displayed through the three routine orthogonal planes. However, the soft and hard palate can be displayed only on the planes obtained from the submandibular triangle. Conclusions The superficial soft tissues and deep bony structures of the fetal face can be recognized and evaluated by routine ultrasonographic images, which is a reliable prenatal diagnostic technique to evaluate the fetal facial anatomy. The soft and hard palate can be well demonstrated by the submandibular triangle approach.

  5. Neighbors Based Discriminative Feature Difference Learning for Kinship Verification

    DEFF Research Database (Denmark)

    Duan, Xiaodong; Tan, Zheng-Hua

    2015-01-01

    In this paper, we present a discriminative feature difference learning method for facial image based kinship verification. To transform the feature difference of an image pair to be discriminative for kinship verification, a linear transformation matrix for the feature difference between an image pair is ... databases show that the proposed method combined with an SVM classification method outperforms or is comparable to state-of-the-art kinship verification methods.

  6. Fixation to features and neural processing of facial expressions in a gender discrimination task.

    Science.gov (United States)

    Neath, Karly N; Itier, Roxane J

    2015-10-01

    Early face encoding, as reflected by the N170 ERP component, is sensitive to fixation on the eyes. Whether this sensitivity varies with facial expressions of emotion, and whether it can also be seen on other ERP components such as the P1 and EPN, was investigated. Using eye-tracking to manipulate fixation on facial features, we found the N170 to be the only eye-sensitive component, and this was true for fearful, happy and neutral faces. A different effect of fixation to features was seen for the earlier P1, which likely reflected a general sensitivity to face position. An early effect of emotion (∼120 ms) for happy faces was seen at occipital sites and was sustained until ∼350 ms post-stimulus. For fearful faces, an early effect was seen around 80 ms, followed by a later effect appearing at ∼150 ms and lasting until ∼300 ms at lateral posterior sites. The results suggest that in this emotion-irrelevant gender discrimination task, processing of fearful and happy expressions occurred early and largely independently of the eye sensitivity indexed by the N170. Processing of the two emotions involved different underlying brain networks active at different times.

  7. Using Computers for Assessment of Facial Features and Recognition of Anatomical Variants that Result in Unfavorable Rhinoplasty Outcomes

    Directory of Open Access Journals (Sweden)

    Tarik Ozkul

    2008-04-01

    Full Text Available Rhinoplasty and facial plastic surgery are among the most frequently performed surgical procedures in the world. Although the underlying anatomical features of the nose and face are very well known, performing a successful facial surgery requires not only surgical skill but also aesthetic talent from the surgeon. Sculpting facial features surgically in correct proportions to end up with an aesthetically pleasing result is highly difficult. To further complicate the matter, some patients may have anatomical features which affect the rhinoplasty outcome negatively. If they go undetected, these anatomical variants jeopardize the surgery, causing unexpected rhinoplasty outcomes. In this study, a model is developed with the aid of artificial intelligence tools, which analyses the facial features of the patient from a photograph and generates an index of "appropriateness" of the facial features and an index of the existence of anatomical variants that affect rhinoplasty negatively. The software tool developed is intended to detect these variants and warn the surgeon before the surgery. Another purpose of the tool is to generate an objective score to assess the outcome of the surgery.

  8. Automated detection of pain from facial expressions: a rule-based approach using AAM

    Science.gov (United States)

    Chen, Zhanli; Ansari, Rashid; Wilkie, Diana J.

    2012-02-01

    In this paper, we examine the problem of using video analysis to assess pain, an important problem especially for critically ill, non-communicative patients, and people with dementia. We propose and evaluate an automated method to detect the presence of pain manifested in patient videos using a unique and large collection of cancer patient videos captured in patient homes. The method is based on detecting pain-related facial action units defined in the Facial Action Coding System (FACS), which is widely used for objective assessment in pain analysis. In our research, a person-specific Active Appearance Model (AAM) based on the Project-Out Inverse Compositional method is trained for each patient individually for the modeling purpose. A flexible representation of the shape model is used in a rule-based method that is better suited than the more commonly used classifier-based methods for application to the cancer patient videos, in which pain-related facial actions occur infrequently and more subtly. The rule-based method relies on the feature points that provide facial action cues, which are extracted from the shape vertices of the AAM and have a natural correspondence to facial muscular movement. In this paper, we investigate the detection of a commonly used set of pain-related action units in both the upper and lower face. Our detection results show good agreement with the results obtained by three trained FACS coders who independently reviewed and scored the action units in the cancer patient videos.
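
    To illustrate the flavour of a rule over shape points, the toy sketch below flags a brow-lowering action (in the spirit of AU4) when the eyebrow-to-eye distance shrinks relative to a neutral frame. The landmark indices, the threshold ratio and the synthetic shapes are all assumptions; the actual rules in the record operate on the AAM shape vertices described above.

```python
import numpy as np

def detect_brow_lower(shape, neutral_shape, brow_idx, eye_idx, ratio=0.9):
    """Toy rule: flag brow lowering when the mean eyebrow-to-eye vertical
    distance drops below `ratio` times its value in the neutral frame.
    Image coordinates are assumed (y grows downward)."""
    def brow_eye_gap(s):
        return np.mean(s[eye_idx, 1] - s[brow_idx, 1])
    return brow_eye_gap(shape) < ratio * brow_eye_gap(neutral_shape)

# Synthetic 68-point-style shapes; only the chosen indices matter here.
neutral = np.zeros((68, 2))
brow_idx, eye_idx = [19, 24], [37, 44]   # hypothetical eyebrow / eyelid points
neutral[brow_idx, 1] = 30.0
neutral[eye_idx, 1] = 42.0

frame = neutral.copy()
frame[brow_idx, 1] += 4.0                # eyebrows pulled down toward the eyes
print(detect_brow_lower(frame, neutral, brow_idx, eye_idx))  # True
```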

  9. Gender Recognition Based on Sift Features

    CERN Document Server

    Yousefi, Sahar

    2011-01-01

    This paper proposes a robust approach for face detection and gender classification in color images. Previous research on gender recognition assumes a computationally expensive and time-consuming pre-processing step for alignment, in which face images are aligned so that facial landmarks such as the eyes, nose, lips and chin are placed at uniform locations in the image. In this paper, a novel technique based on mathematical analysis is presented in three stages, eliminating the alignment step. First, a new color-based face detection method is presented with better results and more robustness in complex backgrounds. Next, features which are invariant to affine transformations are extracted from each face using the scale-invariant feature transform (SIFT) method. To evaluate the performance of the proposed algorithm, experiments have been conducted by employing an SVM classifier on a database of face images which contains 500 images from distinct people with an equal ratio of males and females.

  10. Data-driven facial animation based on manifold Bayesian regression

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Driving facial animation based on tens of tracked markers is a challenging task due to the complex topology and to the non-rigid nature of human faces. We propose a solution named manifold Bayesian regression. First a novel distance metric, the geodesic manifold distance, is introduced to replace the Euclidean distance. The problem of facial animation can be formulated as a sparse warping kernels regression problem, in which the geodesic manifold distance is used for modelling the topology and discontinuities of the face models. The geodesic manifold distance can be adopted in traditional regression methods, e.g. radial basis functions without much tuning. We put facial animation into the framework of Bayesian regression. Bayesian approaches provide an elegant way of dealing with noise and uncertainty. After the covariance matrix is properly modulated, Hybrid Monte Carlo is used to approximate the integration of probabilities and get deformation results. The experimental results showed that our algorithm can robustly produce facial animation with large motions and complex face models.
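
    A rough sketch of the regression-with-geodesic-distance idea is given below, substituting scikit-learn's kernel ridge regression with a precomputed kernel for the Bayesian treatment and Hybrid Monte Carlo described in the record. The marker positions, displacements and distances are synthetic placeholders; in practice the distances would be geodesic distances on the face mesh rather than Euclidean ones.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def rbf_from_distances(D, gamma=0.5):
    """Turn a (geodesic) distance matrix into an RBF-style kernel matrix."""
    return np.exp(-gamma * D ** 2)

def pairwise_dist(A, B):
    # Euclidean stand-in; the paper would use geodesic manifold distances.
    return np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)

rng = np.random.default_rng(0)
markers = rng.uniform(0.0, 3.0, size=(20, 3))          # tracked marker positions
vertices = rng.uniform(0.0, 3.0, size=(100, 3))        # remaining mesh vertices
marker_motion = rng.normal(scale=0.1, size=(20, 3))    # observed 3D displacements

model = KernelRidge(alpha=1e-2, kernel="precomputed")
model.fit(rbf_from_distances(pairwise_dist(markers, markers)), marker_motion)
dense_motion = model.predict(rbf_from_distances(pairwise_dist(vertices, markers)))
print(dense_motion.shape)   # (100, 3): deformation propagated to all vertices
```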

  11. Shape-constrained Gaussian Process Regression for Facial-point-based Head-pose Normalization

    NARCIS (Netherlands)

    Rudovic, Ognjen; Pantic, Maja

    2011-01-01

    Given the facial points extracted from an image of a face in an arbitrary pose, the goal of facial-point-based headpose normalization is to obtain the corresponding facial points in a predefined pose (e.g., frontal). This involves inference of complex and high-dimensional mappings due to the large n

  12. Image Based Hair Segmentation Algorithm for the Application of Automatic Facial Caricature Synthesis

    Directory of Open Access Journals (Sweden)

    Yehu Shen

    2014-01-01

    Full Text Available Hair is a salient feature of the human face region and one of the important cues for face analysis. Accurate detection and representation of the hair region is one of the key components for the automatic synthesis of human facial caricatures. In this paper, an automatic hair detection algorithm for the application of automatic synthesis of facial caricatures based on a single image is proposed. Firstly, hair regions in training images are labeled manually, and the hair position prior distributions and hair color likelihood distribution function are estimated from these labels efficiently. Secondly, the energy function of the test image is constructed according to the estimated prior distributions of hair location and hair color likelihood. This energy function is further optimized using the graph cuts technique and an initial hair region is obtained. Finally, the K-means algorithm and image postprocessing techniques are applied to the initial hair region so that the final hair region can be segmented precisely. Experimental results show that the average processing time for each image is about 280 ms and the average hair region detection accuracy is above 90%. The proposed algorithm is applied to a facial caricature synthesis system. Experiments proved that with our proposed hair segmentation algorithm the facial caricatures are vivid and satisfying.
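
    The K-means refinement step mentioned above can be sketched as follows: the colours inside a rough hair mask are clustered and only pixels in the dominant colour cluster are kept. The image, the rough mask and the cluster count are synthetic assumptions, and the graph-cut stage of the record is not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans

def refine_hair_mask(image, init_mask, n_clusters=3):
    """Toy refinement: cluster the colours inside a rough hair mask and keep
    only the pixels that fall in the largest colour cluster."""
    pixels = image[init_mask].astype(np.float64)           # (N, 3) colours
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(pixels)
    dominant = np.bincount(labels).argmax()
    refined = np.zeros_like(init_mask)
    refined[init_mask] = labels == dominant
    return refined

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3)).astype(np.uint8)
rough = np.zeros((64, 64), dtype=bool)
rough[:20, :] = True                    # hypothetical rough hair region
print(refine_hair_mask(img, rough).sum())
```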

  13. Long-term assessment of facial features and functions needing more attention in treatment of Treacher Collins syndrome

    NARCIS (Netherlands)

    Plomp, Raul G.; Versnel, Sarah L.; van Lieshout, Manouk J. S.; Poublon, Rene M. L.; Mathijssen, Irene M. J.

    2013-01-01

    Aim: This study aimed to determine which facial features and functions need more attention during surgical treatment of Treacher Collins syndrome (TCS) in the long term. Method: A cross-sectional cohort study was conducted to compare 23 TCS patients with 206 controls (all >= 18 years) regarding sati

  14. Efficient Web-based Facial Recognition System Employing 2DHOG

    CERN Document Server

    Abdelwahab, Moataz M; Yousry, Islam

    2012-01-01

    In this paper, a system for facial recognition to identify missing and found people in Hajj and Umrah is described as a web portal. Explicitly, we present a novel algorithm for the recognition and classification of facial images based on applying 2DPCA to a 2D representation of the histogram of oriented gradients (2D-HOG), which maintains the spatial relation between pixels of the input images. This algorithm allows a compact representation of the images which reduces the computational complexity and the storage requirements, while maintaining the highest reported recognition accuracy. This makes the method suitable for use with very large datasets. A large dataset was collected for people in Hajj. Experimental results employing the ORL, UMIST, JAFFE, and HAJJ datasets confirm these excellent properties.

  15. Three-dimensional facial feature point matching based on K-means clustering of relative angle context distribution and support vector machine

    Institute of Scientific and Technical Information of China (English)

    麻宏静; 张德同; 冯筠; 耿国华

    2011-01-01

    Feature point searching, or point correspondence matching, is a challenge in computer vision and pattern recognition, and it is a very important prerequisite for many 2D/3D applications such as image registration, object recognition and statistical model construction. In this paper, we propose an algorithm for facial feature point matching among 3D point cloud models. Specifically, the surface points are clustered based on relative angle context (RAC) features, and then the geometric features of the clustered points are extracted. Afterwards, support vector machine based classification is employed for final accurate correspondence location. The experimental results demonstrate that our algorithm achieves better performance than the original RAC algorithm. Within a given distance threshold, the localization accuracy of 50% of the feature points even reaches 100%.

  16. QR Code Optimization with Salient Facial Feature

    Institute of Scientific and Technical Information of China (English)

    徐明亮; 孙亚西; 吕培; 郭毅博; 周兵; 周清雷

    2016-01-01

    This paper presents a method to generate visually optimized QR Code images with salient facial features. The input of our method consists of a facial image and its corresponding text. Firstly, we generate a standard QR Code from the given text. Secondly, we detect the face region with a face detection algorithm and use an iterative FDoG algorithm to extract the salient facial features. Lastly, we adopt an optimization-based pattern replacement algorithm to compute, for each module of the original QR Code, the optimal replacement pattern; a new QR Code image encoding the salient facial features is then generated from these new modules. Experiments show that our method generates more visually pleasant QR Code images without affecting the decoding speed or accuracy.

  17. Intensity Estimation of Spontaneous Facial Action Units Based on Their Sparsity Properties.

    Science.gov (United States)

    Mohammadi, Mohammad Reza; Fatemizadeh, Emad; Mahoor, Mohammad H

    2016-03-01

    Automatic measurement of spontaneous facial action units (AUs) defined by the Facial Action Coding System (FACS) is a challenging problem. The recent FACS user manual defines 33 AUs to describe different facial activities and expressions. In spontaneous facial expressions, only a subset of AUs is activated at any given time. Given that AUs occur sparsely over time, we propose a novel method to detect the absence and presence of AUs and estimate their intensity levels via sparse representation (SR). We use robust principal component analysis to decompose expression from facial identity and then estimate the intensity of multiple AUs jointly using a regression model formulated based on dictionary learning and SR. Our experiments on the Denver Intensity of Spontaneous Facial Action and UNBC-McMaster Shoulder Pain Expression Archive databases show that our method is a promising approach for the measurement of spontaneous facial AUs.
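
    A loose illustration of the dictionary-learning-plus-regression idea (not the authors' exact joint formulation) is sketched below; the feature matrix X and the AU intensity targets are assumed inputs.

```python
# Illustrative sketch: learn a dictionary over expression features, sparse-code
# each sample, and regress AU intensities from the sparse codes.
# `X` (n_samples, n_features) and `au_intensity` (n_samples, n_aus) are assumed,
# pre-extracted inputs.
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.linear_model import Ridge

def fit_au_intensity_model(X, au_intensity, n_atoms=64, alpha=1.0):
    dico = DictionaryLearning(n_components=n_atoms, transform_algorithm='lasso_lars',
                              random_state=0).fit(X)
    codes = dico.transform(X)                    # sparse representation of each sample
    reg = Ridge(alpha=alpha).fit(codes, au_intensity)
    return dico, reg

def predict_au_intensity(dico, reg, X_new):
    return reg.predict(dico.transform(X_new))
```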

  18. Facial Image Analysis Based on Local Binary Patterns: A Survey

    NARCIS (Netherlands)

    Huang, D.; Shan, C.; Ardebilian, M.; Chen, L.

    2011-01-01

    Facial image analysis, including face detection, face recognition, facial expression analysis, facial demographic classification, and so on, is an important and interesting research topic in the computer vision and image processing area, which has many important applications such as human-computer
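
    As a concrete example of the descriptor family surveyed, the following minimal sketch computes a uniform LBP histogram for a grayscale face image using scikit-image; the neighbourhood parameters are illustrative.

```python
# Minimal LBP sketch: compute a uniform local binary pattern histogram over a
# grayscale face image, a common descriptor in LBP-based facial analysis.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_image, n_points=8, radius=1):
    lbp = local_binary_pattern(gray_image, n_points, radius, method='uniform')
    n_bins = n_points + 2                     # uniform patterns plus one "non-uniform" bin
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist
```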

  19. [Facial palsy].

    Science.gov (United States)

    Cavoy, R

    2013-09-01

    Facial palsy is a daily challenge for clinicians. Determining whether facial nerve palsy is peripheral or central is a key step in the diagnosis. Central nervous system lesions can give a facial palsy that is easily differentiated from a peripheral palsy. The next question is whether the peripheral facial paralysis is idiopathic or symptomatic. A good knowledge of the anatomy of the facial nerve is helpful, and a structured approach is given to identify additional features that distinguish symptomatic facial palsy from the idiopathic form. The most common peripheral facial palsy is the idiopathic form, or Bell's palsy, which remains a diagnosis of exclusion. The most common cause of symptomatic peripheral facial palsy is Ramsay-Hunt syndrome. Early identification of symptomatic facial palsy is important because of its often worse outcome and different management. The prognosis of Bell's palsy is on the whole favorable and is improved with a prompt tapering course of prednisone. In Ramsay-Hunt syndrome, antiviral therapy is added along with prednisone. We also discuss current treatment recommendations and review the short- and long-term complications of peripheral facial palsy.

  20. Confidence-Based Feature Acquisition

    Science.gov (United States)

    Wagstaff, Kiri L.; desJardins, Marie; MacGlashan, James

    2010-01-01

    Confidence-based Feature Acquisition (CFA) is a novel, supervised learning method for acquiring missing feature values when there is missing data at both training (learning) and test (deployment) time. To train a machine learning classifier, data is encoded with a series of input features describing each item. In some applications, the training data may have missing values for some of the features, which can be acquired at a given cost. A relevant JPL example is that of the Mars rover exploration in which the features are obtained from a variety of different instruments, with different power consumption and integration time costs. The challenge is to decide which features will lead to increased classification performance and are therefore worth acquiring (paying the cost). To solve this problem, CFA, which is made up of two algorithms (CFA-train and CFA-predict), has been designed to greedily minimize total acquisition cost (during training and testing) while aiming for a specific accuracy level (specified as a confidence threshold). With this method, it is assumed that there is a nonempty subset of features that are free; that is, every instance in the data set includes these features initially for zero cost. It is also assumed that the feature acquisition (FA) cost associated with each feature is known in advance, and that the FA cost for a given feature is the same for all instances. Finally, CFA requires that the base-level classifiers produce not only a classification, but also a confidence (or posterior probability).
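
    The acquisition idea can be illustrated with a deliberately simplified sketch (it is not the published CFA-train/CFA-predict procedure): predictions are made from the free features, and the costly features are acquired only for instances whose confidence falls below the threshold. All names and inputs are assumptions.

```python
# Simplified sketch of confidence-driven feature acquisition at prediction time.
# Assumed inputs: `clf_free` is a classifier trained on the free features only,
# `clf_full` on free + costly features, and `acquire_fn(i)` returns (and charges
# for) the costly feature values of instance i.
import numpy as np

def predict_with_acquisition(clf_free, clf_full, X_free, acquire_fn,
                             cost_per_instance, threshold=0.9):
    confidence = clf_free.predict_proba(X_free).max(axis=1)
    preds = clf_free.predict(X_free)
    total_cost = 0.0
    for i in np.where(confidence < threshold)[0]:      # low-confidence instances only
        extra = acquire_fn(i)                          # pay to measure the costly features
        total_cost += cost_per_instance
        x_full = np.hstack([X_free[i], extra]).reshape(1, -1)
        preds[i] = clf_full.predict(x_full)[0]
    return preds, total_cost
```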

  1. Man-machine collaboration using facial expressions

    Science.gov (United States)

    Dai, Ying; Katahera, S.; Cai, D.

    2002-09-01

    For realizing flexible man-machine collaboration, understanding of facial expressions and gestures is indispensable. We propose a hierarchical recognition approach for understanding human emotions. First, facial AFs (action features) are extracted and recognized using histograms of optical flow. Then, based on the facial AFs, facial expressions are classified into two classes, one representing positive emotions and the other negative ones. The expressions within the positive class and those within the negative class are then further classified into more complex emotions revealed by the corresponding facial expressions. Finally, a system architecture that coordinates the recognition of facial action features and facial expressions for man-machine collaboration is proposed.
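
    The optical-flow histogram step can be sketched as follows, assuming two consecutive grayscale frames; the Farneback parameters and the number of orientation bins are illustrative choices, not those of the paper.

```python
# Minimal sketch of histogram-of-optical-flow action features between two
# consecutive grayscale frames.
import cv2
import numpy as np

def flow_orientation_histogram(prev_gray, next_gray, n_bins=8):
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    # Magnitude-weighted histogram of flow directions (0..2*pi).
    hist, _ = np.histogram(angle, bins=n_bins, range=(0, 2 * np.pi), weights=magnitude)
    return hist / (hist.sum() + 1e-8)
```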

  2. Suction based mechanical characterization of superficial facial soft tissues.

    Science.gov (United States)

    Weickenmeier, J; Jabareen, M; Mazza, E

    2015-12-16

    The present study is aimed at a combined experimental and numerical investigation of the mechanical response of superficial facial tissues. Suction based experiments provide the location, time, and history dependent behavior of skin and SMAS (superficial musculoaponeurotic system) by means of Cutometer and Aspiration measurements. The suction method is particularly suitable for in vivo, multi-axial testing of soft biological tissue including a high repeatability in subsequent tests. The campaign comprises three measurement sites in the face, i.e. jaw, parotid, and forehead, using two different loading profiles (instantaneous loading and a linearly increasing and decreasing loading curve), multiple loading magnitudes, and cyclic loading cases to quantify history dependent behavior. In an inverse finite element analysis based on anatomically detailed models an optimized set of material parameters for the implementation of an elastic-viscoplastic material model was determined, yielding an initial shear modulus of 2.32kPa for skin and 0.05kPa for SMAS, respectively. Apex displacements at maximum instantaneous and linear loading showed significant location specificity with variations of up to 18% with respect to the facial average response while observing variations in repeated measurements in the same location of less than 12%. In summary, the proposed parameter sets for skin and SMAS are shown to provide remarkable agreement between the experimentally observed and numerically predicted tissue response under all loading conditions considered in the present study, including cyclic tests.

  3. Recognizing Action Units for Facial Expression Analysis.

    Science.gov (United States)

    Tian, Ying-Li; Kanade, Takeo; Cohn, Jeffrey F

    2001-02-01

    Most automatic expression analysis systems attempt to recognize a small set of prototypic expressions, such as happiness, anger, surprise, and fear. Such prototypic expressions, however, occur rather infrequently. Human emotions and intentions are more often communicated by changes in one or a few discrete facial features. In this paper, we develop an Automatic Face Analysis (AFA) system to analyze facial expressions based on both permanent facial features (brows, eyes, mouth) and transient facial features (deepening of facial furrows) in a nearly frontal-view face image sequence. The AFA system recognizes fine-grained changes in facial expression into action units (AUs) of the Facial Action Coding System (FACS), instead of a few prototypic expressions. Multistate face and facial component models are proposed for tracking and modeling the various facial features, including lips, eyes, brows, cheeks, and furrows. During tracking, detailed parametric descriptions of the facial features are extracted. With these parameters as the inputs, a group of action units (neutral expression, six upper face AUs and 10 lower face AUs) are recognized whether they occur alone or in combinations. The system has achieved average recognition rates of 96.4 percent (95.4 percent if neutral expressions are excluded) for upper face AUs and 96.7 percent (95.6 percent with neutral expressions excluded) for lower face AUs. The generalizability of the system has been tested by using independent image databases collected and FACS-coded for ground-truth by different research teams.

  4. Relationship between clinical features of facial dry skin and biophysical parameters in Asians.

    Science.gov (United States)

    Baek, J H; Lee, M Y; Koh, J S

    2011-06-01

    There have been few reports classifying the biophysical characteristics of Korean women with healthy skin. Consequently, the aim of this study was to find the most useful parameters for categorizing skin types based on a clinical assessment. One hundred and three female volunteers, aged 20-59, participated in this study. We conducted a self-evaluation questionnaire, a clinical assessment of the facial skin, and non-invasive measurements on the cheek under controlled environmental conditions. The questionnaire survey indicated that 72% of respondents had dry skin. However, results of the clinical assessment focusing on skin roughness and scaling of the cheek showed that 6 subjects had very dry skin (6%), 29 had dry skin (28%) and 68 had normal skin with sufficient moisture (66%). We analysed the correlation between the clinical assessment and biophysical parameters. As a result, we obtained six biophysical parameters that had relatively higher correlations with clinical assessment than other parameters. Our study provided general information about the physiological characteristics of normal skin in Korean women and suggested useful parameters for characterizing dry skin.

  5. Personality Trait and Facial Expression Filter-Based Brain-Computer Interface

    OpenAIRE

    Seongah Chin; Chung-Yeon Lee

    2013-01-01

    In this paper, we present technical approaches that bridge the gap in the research related to the use of brain‐computer interfaces for entertainment and facial expressions. Such facial expressions that reflect an individual’s personal traits can be used to better realize artificial facial expressions in a gaming environment based on a brain‐computer interface. First, an emotion extraction filter is introduced in order to classify emotions on the basis of the users’ brain signals in real time....

  6. Hierarchical Spatio-Temporal Probabilistic Graphical Model with Multiple Feature Fusion for Binary Facial Attribute Classification in Real-World Face Videos.

    Science.gov (United States)

    Demirkus, Meltem; Precup, Doina; Clark, James J; Arbel, Tal

    2016-06-01

    Recent literature shows that facial attributes, i.e., contextual facial information, can be beneficial for improving the performance of real-world applications, such as face verification, face recognition, and image search. Examples of face attributes include gender, skin color, facial hair, etc. How to robustly obtain these facial attributes (traits) is still an open problem, especially in the presence of the challenges of real-world environments: non-uniform illumination conditions, arbitrary occlusions, motion blur and background clutter. What makes this problem even more difficult is the enormous variability presented by the same subject, due to arbitrary face scales, head poses, and facial expressions. In this paper, we focus on the problem of facial trait classification in real-world face videos. We have developed a fully automatic hierarchical and probabilistic framework that models the collective set of frame class distributions and feature spatial information over a video sequence. The experiments are conducted on a large real-world face video database that we have collected, labelled and made publicly available. The proposed method is flexible enough to be applied to any facial classification problem. Experiments on a large, real-world video database McGillFaces [1] of 18,000 video frames reveal that the proposed framework outperforms alternative approaches, by up to 16.96 and 10.13%, for the facial attributes of gender and facial hair, respectively.

  7. Chromosome 22q11.2 Deletion Syndrome Presenting as Adult Onset Hypoparathyroidism: Clues to Diagnosis from Dysmorphic Facial Features

    Directory of Open Access Journals (Sweden)

    Sira Korpaisarn

    2013-01-01

    Full Text Available We report a 26-year-old Thai man who presented with hypoparathyroidism in adulthood. He had no history of cardiac disease or recurrent infection. His subtle dysmorphic facial features and mild intellectual impairment raised suspicion of chromosome 22q11.2 deletion syndrome. The diagnosis was confirmed by fluorescence in situ hybridization, which found a microdeletion in the 22q11.2 region. The characteristic facial appearance can lead to clinical suspicion of this syndrome. The case report emphasizes that this syndrome is not uncommon and presents with remarkable variability in the severity and extent of expression. Accurate diagnosis is important for genetic counseling and long-term health supervision by a multidisciplinary team.

  8. Synthesizing Performance-driven Facial Animation

    Institute of Scientific and Technical Information of China (English)

    LUO Chang-Wei; YU Jun; WANG Zeng-Fu

    2014-01-01

    In this paper, we present a system for real-time performance-driven facial animation. With the system, the user can control the facial expression of a digital character by acting out the desired facial action in front of an ordinary camera. First, we create a muscle-based 3D face model. The muscle actuation parameters are used to animate the face model. To increase the reality of facial animation, the orbicularis oris in our face model is divided into the inner part and outer part. We also establish the relationship between jaw rotation and facial surface deformation. Second, a real-time facial tracking method is employed to track the facial features of a performer in the video. Finally, the tracked facial feature points are used to estimate muscle actuation parameters to drive the face model. Experimental results show that our system runs in real time and outputs realistic facial animations. Compared with most existing performance-based facial animation systems, ours does not require facial markers, intrusive lighting, or special scanning equipment, thus it is inexpensive and easy to use.

  9. Granuloma faciale: a cutaneous lesion sharing features with IgG4-associated sclerosing diseases.

    Science.gov (United States)

    Cesinaro, Anna Maria; Lonardi, Silvia; Facchetti, Fabio

    2013-01-01

    The pathogenesis of granuloma faciale (GF), framed in the group of cutaneous vasculopathic dermatitis, is poorly understood. The present study investigated whether GF might be part of the spectrum of IgG4-related sclerosing diseases (IgG4-RD). Erythema elevatum diutinum (EED), believed to belong to the same group of disorders as GF, was also studied for comparison. Thirty-one biopsies of GF obtained from 25 patients (18 men, 7 women) and 5 cases of EED (4 women and 1 man) were analyzed morphologically and for the expression of IgG and IgG4 by immunohistochemistry. The distribution of Th1, T regulatory and Th2 T-cell subsets, respectively, identified by anti-T-bet, anti-FoxP3, and anti-GATA-3 antibodies, was also evaluated. The dermal inflammatory infiltrate in GF contained eosinophils and plasma cells in variable proportions. Obliterative venulitis was found in 16 cases, and storiform fibrosis, a typical feature of IgG4-RD, was observed in 8 cases and was prominent in 3 of them. On immunohistochemical analysis 7 of 31 biopsies (22.6%) from 6 GF patients fulfilled the criteria for IgG4-RD (IgG4/IgG ratio >40%, and absolute number of IgG4 per high-power field >50). Interestingly, the 6 patients were male, and 4 showed recurrent and/or multiple lesions. In an additional 5 cases, only the IgG4/IgG ratio was abnormal. None of the 5 EED cases fulfilled the criteria for IgG4-RD. The T-cell subsets in GF were quite variable in number, GATA-3 lymphocytes were generally more abundant, but no relationship with the number of IgG4 plasma cells was found. The study indicates that a significant number of GF cases are associated with an abnormal content of IgG4 plasma cells; this association was particularly obvious in male patients and in cases presenting with multiple or recurrent lesions. As morphologic changes typically found in IgG4-RD, such as obliterative vascular inflammation and storiform sclerosis, are found in GF, we suggest that GF might represent a localized form of

  10. Pediatric facial nerve rehabilitation.

    Science.gov (United States)

    Banks, Caroline A; Hadlock, Tessa A

    2014-11-01

    Facial paralysis is a rare but severe condition in the pediatric population. Impaired facial movement has multiple causes and varied presentations, therefore individualized treatment plans are essential for optimal results. Advances in facial reanimation over the past 4 decades have given rise to new treatments designed to restore balance and function in pediatric patients with facial paralysis. This article provides a comprehensive review of pediatric facial rehabilitation and describes a zone-based approach to assessment and treatment of impaired facial movement.

  11. Appearance-based human gesture recognition using multimodal features for human computer interaction

    Science.gov (United States)

    Luo, Dan; Gao, Hua; Ekenel, Hazim Kemal; Ohya, Jun

    2011-03-01

    The use of gesture as a natural interface plays a crucially important role in achieving intelligent Human Computer Interaction (HCI). Human gestures include different components of visual actions, such as motion of the hands, facial expression, and torso, to convey meaning. So far, most previous work in gesture recognition has focused on the manual component of gestures. In this paper, we present an appearance-based multimodal gesture recognition framework, which combines different groups of features, such as facial expression features and hand motion features, extracted from image frames captured by a single web camera. We consider 12 classes of human gestures with facial expression, including neutral, negative and positive meanings, from American Sign Language (ASL). We combine the features at two levels by employing two fusion strategies. At the feature level, an early feature combination is performed by concatenating and weighting the different feature groups, and LDA is used to choose the most discriminative elements by projecting the features onto a discriminative expression space. The second strategy is applied at the decision level, where weighted decisions from the single modalities are fused at a later stage. A condensation-based algorithm is adopted for classification. We collected a data set with three to seven recording sessions and conducted experiments with the combination techniques. Experimental results show that facial analysis improves hand gesture recognition and that decision-level fusion performs better than feature-level fusion.
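
    The two fusion strategies can be sketched schematically as below, assuming per-sample facial and hand feature matrices and per-modality classifiers that expose class probabilities; the weights and classifier choices are illustrative.

```python
# Schematic sketch of feature-level (early) and decision-level (late) fusion.
# `face_feats`, `hand_feats` are per-sample feature matrices, `labels` the
# gesture classes; `face_clf`, `hand_clf` are trained per-modality classifiers.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def feature_level_fusion(face_feats, hand_feats, labels, w_face=0.5, w_hand=0.5):
    # Early fusion: weight and concatenate the feature groups, then project with LDA.
    fused = np.hstack([w_face * face_feats, w_hand * hand_feats])
    return LinearDiscriminantAnalysis().fit(fused, labels)

def decision_level_fusion(face_clf, hand_clf, face_feats, hand_feats,
                          w_face=0.5, w_hand=0.5):
    # Late fusion: combine per-modality class probabilities with fixed weights.
    proba = (w_face * face_clf.predict_proba(face_feats)
             + w_hand * hand_clf.predict_proba(hand_feats))
    return proba.argmax(axis=1)
```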

  12. History of facial pain diagnosis

    DEFF Research Database (Denmark)

    Zakrzewska, Joanna M; Jensen, Troels S

    2017-01-01

    Premise Facial pain refers to a heterogeneous group of clinically and etiologically different conditions with the common clinical feature of pain in the facial area. Among these conditions, trigeminal neuralgia (TN), persistent idiopathic facial pain, temporomandibular joint pain, and trigeminal...

  13. Cosmetics as a feature of the extended human phenotype: modulation of the perception of biologically important facial signals.

    Directory of Open Access Journals (Sweden)

    Nancy L Etcoff

    Full Text Available Research on the perception of faces has focused on the size, shape, and configuration of inherited features or the biological phenotype, and largely ignored the effects of adornment, or the extended phenotype. Research on the evolution of signaling has shown that animals frequently alter visual features, including color cues, to attract, intimidate or protect themselves from conspecifics. Humans engage in conscious manipulation of visual signals using cultural tools in real time rather than genetic changes over evolutionary time. Here, we investigate one tool, the use of color cosmetics. In two studies, we asked viewers to rate the same female faces with or without color cosmetics, and we varied the style of makeup from minimal (natural), to moderate (professional), to dramatic (glamorous). Each look provided increasing luminance contrast between the facial features and surrounding skin. Faces were shown for 250 ms or for unlimited inspection time, and subjects rated them for attractiveness, competence, likeability and trustworthiness. At 250 ms, cosmetics had significant positive effects on all outcomes. Length of inspection time did not change the effect for competence or attractiveness. However, with longer inspection time, the effect of cosmetics on likability and trust varied by specific makeup looks, indicating that cosmetics could impact automatic and deliberative judgments differently. The results suggest that cosmetics can create supernormal facial stimuli, and that one way they may do so is by exaggerating cues to sexual dimorphism. Our results provide evidence that judgments of facial trustworthiness and attractiveness are at least partially separable, that beauty has a significant positive effect on judgment of competence, a universal dimension of social cognition, but has a more nuanced effect on the other universal dimension of social warmth, and that the extended phenotype significantly influences perception of biologically important

  14. Cosmetics as a feature of the extended human phenotype: modulation of the perception of biologically important facial signals.

    Science.gov (United States)

    Etcoff, Nancy L; Stock, Shannon; Haley, Lauren E; Vickery, Sarah A; House, David M

    2011-01-01

    Research on the perception of faces has focused on the size, shape, and configuration of inherited features or the biological phenotype, and largely ignored the effects of adornment, or the extended phenotype. Research on the evolution of signaling has shown that animals frequently alter visual features, including color cues, to attract, intimidate or protect themselves from conspecifics. Humans engage in conscious manipulation of visual signals using cultural tools in real time rather than genetic changes over evolutionary time. Here, we investigate one tool, the use of color cosmetics. In two studies, we asked viewers to rate the same female faces with or without color cosmetics, and we varied the style of makeup from minimal (natural), to moderate (professional), to dramatic (glamorous). Each look provided increasing luminance contrast between the facial features and surrounding skin. Faces were shown for 250 ms or for unlimited inspection time, and subjects rated them for attractiveness, competence, likeability and trustworthiness. At 250 ms, cosmetics had significant positive effects on all outcomes. Length of inspection time did not change the effect for competence or attractiveness. However, with longer inspection time, the effect of cosmetics on likability and trust varied by specific makeup looks, indicating that cosmetics could impact automatic and deliberative judgments differently. The results suggest that cosmetics can create supernormal facial stimuli, and that one way they may do so is by exaggerating cues to sexual dimorphism. Our results provide evidence that judgments of facial trustworthiness and attractiveness are at least partially separable, that beauty has a significant positive effect on judgment of competence, a universal dimension of social cognition, but has a more nuanced effect on the other universal dimension of social warmth, and that the extended phenotype significantly influences perception of biologically important signals at first

  15. Cephalometric soft tissue facial analysis.

    Science.gov (United States)

    Bergman, R T

    1999-10-01

    My objective is to present a cephalometric-based facial analysis to correlate with an article that was published previously in the American Journal of Orthodontics and Dentofacial Orthopedics. Eighteen facial or soft tissue traits are discussed in this article. All of them are significant in successful orthodontic outcome, and none of them depend on skeletal landmarks for measurement. Orthodontic analysis most commonly relies on skeletal and dental measurements, placing far less emphasis on the measurement of facial features, particularly their relationships to one another. Yet, a thorough examination of the face is critical for understanding the changes in facial appearance that result from orthodontic treatment. A cephalometric approach to facial examination can also benefit the diagnosis and treatment plan. Individual facial traits and their balance with one another should be identified before treatment. Relying solely on skeletal analysis, assuming that the face will balance if the skeletal/dental cephalometric values are normalized, may not yield the desired outcome. Good occlusion does not necessarily mean good facial balance. Orthodontic norms for facial traits can permit their measurement. Further, with a knowledge of standard facial traits and the patient's soft tissue features, an individualized norm can be established for each patient to optimize facial attractiveness. Four questions should be asked regarding each facial trait before treatment: (1) What is the quality and quantity of the trait? (2) How will future growth affect the trait? (3) How will orthodontic tooth movement affect the existing trait (positively or negatively)? (4) How will surgical bone movement to correct the bite affect the trait (positively or negatively)?

  16. AAM Facial Feature Localization Algorithm Based on Skin Model and Breadth-First Search

    Institute of Scientific and Technical Information of China (English)

    薛卫; 梁敬东; 林金星

    2011-01-01

    This paper presents an improved AAM (Active Appearance Models) facial feature localization algorithm that combines skin color information with breadth-first search to accelerate the initialization process. Based on a skin color model, combined with morphological operations and breadth-first search, the algorithm first finds the face area and then gives a rough location of the centroid of the landmarks. This effectively narrows the search window and thereby reduces the AAM search time. Experiments show that, compared with the standard AAM algorithm, the improved algorithm increases the detection rate and reduces the computational cost by more than 60%.
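
    The initialization idea can be sketched as follows: threshold skin colour in YCrCb, clean the mask with morphology, keep the largest connected region (standing in here for the breadth-first region search), and use its centroid to place the AAM mean shape. The threshold values are common heuristics, not the paper's.

```python
# Rough sketch of skin-colour-based AAM initialization.
import cv2
import numpy as np

def estimate_face_centroid(bgr_image):
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))     # heuristic skin range
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    if n <= 1:
        return None
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])         # skip background label 0
    return tuple(centroids[largest])                             # (x, y) for AAM placement
```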

  17. Chronic neuropathic facial pain after intense pulsed light hair removal. Clinical features and pharmacological management

    Science.gov (United States)

    Párraga-Manzol, Gabriela; Sánchez-Torres, Alba; Moreno-Arias, Gerardo

    2015-01-01

    Intense pulsed light (IPL) photodepilation is usually performed as a hair removal method. The treatment is recommended to be indicated by a physician, depending on each patient and on his or her characteristics. However, the use of laser devices by medical laypersons is frequent, and it can pose a risk of harm to patients. Most side effects associated with IPL photodepilation are transient and minimal and disappear without sequelae; however, permanent side effects can occur. Some of the complications are laser related, but many of them are caused by operator error or mismanagement. In this work, we report a clinical case of a patient who developed chronic neuropathic facial pain following IPL removal of unwanted hair on the upper lip. The specific diagnosis was painful post-traumatic trigeminal neuropathy, reference 13.1.2.3 according to the International Headache Society (IHS). Key words: Neuropathic facial pain, photodepilation, intense pulsed light. PMID:26535105

  18. Facial Nerve Palsy: An Unusual Presenting Feature of Small Cell Lung Cancer

    Directory of Open Access Journals (Sweden)

    Ozcan Yildiz

    2011-01-01

    Full Text Available Lung cancer is the second most common type of cancer in the world and is the most common cause of cancer-related death in men and women; it is responsible for 1.3 million deaths annually worldwide. It can metastasize to any organ. The most common site of metastasis in the head and neck region is the brain; however, it can also metastasize to the oral cavity, gingiva, tongue, parotid gland and lymph nodes. This article reports a case of small cell lung cancer presenting with metastasis to the facial nerve.

  19. A genome-wide association scan in admixed Latin Americans identifies loci influencing facial and scalp hair features.

    Science.gov (United States)

    Adhikari, Kaustubh; Fontanil, Tania; Cal, Santiago; Mendoza-Revilla, Javier; Fuentes-Guajardo, Macarena; Chacón-Duque, Juan-Camilo; Al-Saadi, Farah; Johansson, Jeanette A; Quinto-Sanchez, Mirsha; Acuña-Alonzo, Victor; Jaramillo, Claudia; Arias, William; Barquera Lozano, Rodrigo; Macín Pérez, Gastón; Gómez-Valdés, Jorge; Villamil-Ramírez, Hugo; Hunemeier, Tábita; Ramallo, Virginia; Silva de Cerqueira, Caio C; Hurtado, Malena; Villegas, Valeria; Granja, Vanessa; Gallo, Carla; Poletti, Giovanni; Schuler-Faccini, Lavinia; Salzano, Francisco M; Bortolini, Maria-Cátira; Canizales-Quinteros, Samuel; Rothhammer, Francisco; Bedoya, Gabriel; Gonzalez-José, Rolando; Headon, Denis; López-Otín, Carlos; Tobin, Desmond J; Balding, David; Ruiz-Linares, Andrés

    2016-03-01

    We report a genome-wide association scan in over 6,000 Latin Americans for features of scalp hair (shape, colour, greying, balding) and facial hair (beard thickness, monobrow, eyebrow thickness). We found 18 signals of association reaching genome-wide significance (P values 5 × 10^-8 to 3 × 10^-119), including 10 novel associations. These include novel loci for scalp hair shape and balding, and the first reported loci for hair greying, monobrow, eyebrow and beard thickness. A newly identified locus influencing hair shape includes a Q30R substitution in the Protease Serine S1 family member 53 (PRSS53). We demonstrate that this enzyme is highly expressed in the hair follicle, especially the inner root sheath, and that the Q30R substitution affects enzyme processing and secretion. The genome regions associated with hair features are enriched for signals of selection, consistent with proposals regarding the evolution of human hair.

  20. Feature Extraction based Face Recognition, Gender and Age Classification

    Directory of Open Access Journals (Sweden)

    Venugopal K R

    2010-01-01

    Full Text Available A face recognition system with large training sets for personal identification normally attains good accuracy. In this paper, we propose a Feature Extraction based Face Recognition, Gender and Age Classification (FEBFRGAC) algorithm that uses only small training sets and yields good results even with one image per person. The process involves three stages: pre-processing, feature extraction and classification. The geometric features of facial images, such as the eyes, nose and mouth, are located using the Canny edge operator, and face recognition is performed. Based on the texture and shape information, gender and age classification is done using posteriori class probability and an artificial neural network, respectively. It is observed that the face recognition accuracy is 100%, while the gender and age classification accuracies are around 98% and 94%, respectively.
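
    The classification stage can be illustrated with a small sketch, assuming texture and shape features have already been extracted from the located facial regions; GaussianNB stands in for the posteriori class probability classifier and MLPClassifier for the artificial neural network, so both choices are assumptions.

```python
# Illustrative sketch of the gender and age classification stage.
# `X` holds texture/shape features extracted from the located facial regions.
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

def train_gender_age(X, gender_labels, age_labels):
    gender_clf = GaussianNB().fit(X, gender_labels)        # posterior P(gender | features)
    age_clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                            random_state=0).fit(X, age_labels)
    return gender_clf, age_clf
```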

  1. The Effects of Transformed Gender Facial Features on Face Preference of College Students: Based on the Test of Computer Graphics and Eye Movement Tracks

    Institute of Scientific and Technical Information of China (English)

    温芳芳; 佐斌

    2012-01-01

    Perceived facial attractiveness can influence people's social interactions with one another, including mate selection, intimate relationships, hiring decisions, and voting behavior. People evaluate faces along multiple trait dimensions, such as attractiveness and trustworthiness, both of which are affected by facial masculinity or femininity cues. However, studies manipulating the computer graphics of sexual dimorphism on facial attractiveness have yielded inconsistent results: some found that feminine facial features in male faces were more attractive than masculine ones, others found that women prefer masculine male faces, and still others found that women preferred femininity in male faces. The current study used computer graphics and an eye tracker to assess the effect of dimorphic cues on the perception of facial attractiveness among Chinese college students in two experiments. Experiment 1 assessed women's perceptions of the attractiveness and trustworthiness of men's faces under either the perceived masculinity vs. femininity condition or the sexual dimorphism condition. Results showed that, when non-face cues (e.g., hairstyle) were masked, women perceived femininity in men's faces as more attractive and trustworthy than masculinity. However, in the sexual dimorphism condition in which the non-face cues were not masked, women found masculinity in men's faces more attractive and trustworthy. Experiment 2 showed that participants' average pupil size and number of fixations were larger for male faces than for female faces, whereas the time to first fixation was shorter; both the time to first fixation and the duration of the first fixation were longer for masculinized than for feminized faces.

  2. 3D facial expression recognition based on histograms of surface differential quantities

    KAUST Repository

    Li, Huibin

    2011-01-01

    3D face models accurately capture facial surfaces, making it possible for precise description of facial activities. In this paper, we present a novel mesh-based method for 3D facial expression recognition using two local shape descriptors. To characterize shape information of the local neighborhood of facial landmarks, we calculate the weighted statistical distributions of surface differential quantities, including histogram of mesh gradient (HoG) and histogram of shape index (HoS). Normal cycle theory based curvature estimation method is employed on 3D face models along with the common cubic fitting curvature estimation method for the purpose of comparison. Based on the basic fact that different expressions involve different local shape deformations, the SVM classifier with both linear and RBF kernels outperforms the state of the art results on the subset of the BU-3DFE database with the same experimental setting. © 2011 Springer-Verlag.

  3. Evolutionary Computational Method of Facial Expression Analysis for Content-based Video Retrieval using 2-Dimensional Cellular Automata

    CERN Document Server

    Geetha, P

    2010-01-01

    In this paper, Deterministic Cellular Automata (DCA) based video shot classification and retrieval is proposed. The deterministic 2D cellular automata model captures human facial expressions, both spontaneous and posed. The determinism stems from the fact that the facial muscle actions are standardized by the encodings of the Facial Action Coding System (FACS) and Action Units (AUs). Based on these encodings, we generate the set of evolutionary update rules of the DCA for each facial expression. We consider a Person-Independent Facial Expression Space (PIFES) to analyze the facial expressions based on partitioned 2D cellular automata, which capture the dynamics of facial expressions and classify the shots accordingly. The target video shot is retrieved by comparing the expression obtained for the query frame's face with the key-face expressions in the database video. When consecutive key-face expressions in the database are highly similar to the query frame's face, then the key faces are use...

  4. Robust Wavelet-Based Facial Image Watermarking Against Geometric Attacks Using Coordinate System Recovery

    Institute of Scientific and Technical Information of China (English)

    ZHAO Pei-dong; XIE Jian-ying

    2008-01-01

    A coordinate system for the original image is established using a facial feature point localization technique. After the original image is transformed into a new image in this standard coordinate system, a redundant watermark is adaptively embedded in the discrete wavelet transform (DWT) domain based on the statistical characteristics of the wavelet coefficient blocks. The coordinate system of the watermarked image is re-established as a calibration system. Regardless of whether the host image is rotated, scaled, or translated (RST), all of these geometric attacks are eliminated when the watermarked image is transformed back into the standard coordinate system. The proposed watermark detection is blind. Experimental results demonstrate that the proposed scheme is robust against common and geometric image processing attacks, particularly joint geometric attacks.
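
    A minimal sketch of redundant embedding in the DWT domain is given below (additive spread-spectrum in the horizontal detail band); it assumes the facial-feature-based coordinate normalization has already been applied, and the embedding rule is illustrative rather than the paper's adaptive block-statistics scheme.

```python
# Minimal sketch of redundant additive watermark embedding in the DWT domain.
import numpy as np
import pywt

def embed_watermark(gray_image, watermark_bits, strength=2.0, seed=0):
    cA, (cH, cV, cD) = pywt.dwt2(gray_image.astype(float), 'haar')
    rng = np.random.default_rng(seed)
    # Repeat the bit sequence redundantly over all cH coefficients (bits -> +/-1).
    bits = np.resize(np.asarray(watermark_bits) * 2 - 1, cH.size).reshape(cH.shape)
    carrier = rng.standard_normal(cH.shape)        # pseudo-random carrier (shared key)
    cH_marked = cH + strength * bits * carrier
    return pywt.idwt2((cA, (cH_marked, cV, cD)), 'haar')
```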

  5. Facial Emotion Recognition Using Context Based Multimodal Approach

    Directory of Open Access Journals (Sweden)

    Priya Metri

    2011-12-01

    Full Text Available Emotions play a crucial role in person-to-person interaction. In recent years, there has been a growing interest in improving all aspects of interaction between humans and computers. The ability to understand human emotions is desirable for the computer in several applications, especially by observing facial expressions. This paper explores ways of human-computer interaction that enable the computer to be more aware of the user's emotional expressions. We present an approach to emotion recognition from facial expression, hand and body posture. Our model uses a multimodal emotion recognition system in which two different models are used for facial expression recognition and for hand and body posture recognition, and the results of both classifiers are then combined using a third classifier that gives the resulting emotion. The multimodal system gives more accurate results than a single-modality or bimodal system.

  6. Features Based Text Similarity Detection

    CERN Document Server

    Kent, Chow Kok

    2010-01-01

    As the Internet helps us cross cultural borders by providing access to diverse information, the issue of plagiarism is bound to arise, and plagiarism detection becomes more demanding. Different plagiarism detection tools have been developed based on various detection techniques. Nowadays, the fingerprint matching technique plays an important role in these tools. However, when handling large articles, fingerprint matching has some weaknesses, especially with respect to space and time consumption. In this paper, we propose a new approach to detecting plagiarism that integrates fingerprint matching with four key features to assist the detection process. The proposed features are capable of choosing the main points, or key sentences, of the articles to be compared. The selected sentences then undergo the fingerprint matching process in order to detect the similarity between them. Hence, time and space usage for the comparison process is r...

  7. Synthesis of Facial Image with Expression Based on Muscular Contraction Parameters Using Linear Muscle and Sphincter Muscle

    Science.gov (United States)

    Ahn, Seonju; Ozawa, Shinji

    We aim to synthesize individual facial images with expression based on muscular contraction parameters. We previously proposed a method of calculating the muscular contraction parameters from an arbitrary face image without learning for each individual; as a result, we could generate not only an individual facial expression, but also the facial expressions of various persons. In this paper, we propose a muscle-based facial model in which the facial muscles are defined as both linear muscles and a novel sphincter muscle. Additionally, we propose a method of synthesizing an individual facial image with expression based on muscular contraction parameters. First, the individual facial model with expression is generated by fitting to the arbitrary face image. Next, the muscular contraction parameters are calculated that correspond to the expression displacement of the input face image. Finally, the facial expression is synthesized from the vertex displacements of a neutral facial model based on the calculated muscular contraction parameters. Experimental results reveal that the novel sphincter muscle can synthesize facial expressions of the facial image that correspond to the actual face image with arbitrary mouth or eye expressions.

  8. Face recognition by weighted fusion of facial features

    Institute of Scientific and Technical Information of China (English)

    孙劲光; 孟凡宇

    2015-01-01

    The accuracy of traditional face recognition algorithms is low under unconstrained conditions. To solve this problem, we propose a face recognition method based on deep learning and the weighted fusion of facial features (DLWF+). First, we divide the facial feature points into five regions (left eye, right eye, nose, mouth, and chin) using an active shape model and then sample the facial components corresponding to those feature points. A corresponding deep belief network (DBN) is then trained on these regional samples to obtain optimal network parameters. The five regional sampling regions and the entire facial image are then input into the corresponding neural networks to adjust the network weights and complete the construction of the sub-networks. Finally, using softmax regression, we obtain six similarity vectors for the different components; these six similarity vectors form a similarity matrix, which is multiplied by a weight vector to derive the final recognition result. Recognition accuracy reached 97% and 91.63% on the ORL and WFL face databases, respectively. Compared with traditional recognition algorithms such as SVM, DBN, PCA, and FIP+LDA, recognition rates on both databases were improved under both constrained and unconstrained conditions. On the basis of
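
    The final fusion step reduces to a small amount of linear algebra; the sketch below assumes the six similarity vectors from the sub-networks and a weight vector are already available.

```python
# Minimal sketch of the final fusion step: weight and sum the per-region
# similarity vectors, then pick the gallery identity with the highest score.
import numpy as np

def fuse_and_identify(similarity_vectors, weights):
    """similarity_vectors: (6, n_gallery) per-region similarity scores;
    weights: (6,) weights for the five facial regions plus the whole face."""
    S = np.asarray(similarity_vectors, dtype=float)
    fused = np.asarray(weights, dtype=float) @ S    # weighted sum over the six rows
    return int(np.argmax(fused))                    # index of the recognized identity
```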

  9. Facial Expression Analysis

    NARCIS (Netherlands)

    Pantic, Maja; Li, S.; Jain, A.

    2009-01-01

    Facial expression recognition is a process performed by humans or computers, which consists of: 1. Locating faces in the scene (e.g., in an image; this step is also referred to as face detection), 2. Extracting facial features from the detected face region (e.g., detecting the shape of facial compon

  10. Topological Features Based Entity Disambiguation

    Institute of Scientific and Technical Information of China (English)

    Chen-Chen Sun; De-Rong Shen; Tie-Zheng Nie; Ge Yu

    2016-01-01

    This work proposes an unsupervised entity disambiguation solution based on topological features. Most existing studies leverage semantic information to resolve ambiguous references. However, the semantic information is not always accessible, because of privacy, or is too expensive to access. We consider the problem in a setting where only the relationships between references are available. A structural similarity algorithm via random walk with restarts is proposed to measure the similarity of references. Disambiguation is regarded as a clustering problem, and a family of graph-walk-based clustering algorithms is used to group ambiguous references. We evaluate our solution extensively on two real datasets and show its advantage in accuracy over two state-of-the-art approaches.
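
    The random-walk-with-restarts similarity can be sketched in a few lines of dense linear algebra, assuming the reference graph is given as an adjacency matrix; the restart probability and iteration count are illustrative.

```python
# Minimal sketch of random walk with restarts (RWR) on a reference graph,
# producing a structural-similarity vector for one reference node.
import numpy as np

def rwr_similarity(adjacency, start_node, restart_prob=0.15, n_iter=100):
    A = np.asarray(adjacency, dtype=float)
    # Column-normalize to get the transition matrix.
    col_sums = A.sum(axis=0)
    P = np.divide(A, col_sums, out=np.zeros_like(A), where=col_sums > 0)
    e = np.zeros(A.shape[0])
    e[start_node] = 1.0
    r = e.copy()
    for _ in range(n_iter):
        r = (1 - restart_prob) * P @ r + restart_prob * e
    return r           # r[j]: structural similarity of node j to the start node
```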

  11. Feature-based telescope scheduler

    Science.gov (United States)

    Naghib, Elahesadat; Vanderbei, Robert J.; Stubbs, Christopher

    2016-07-01

    The Feature-based Scheduler offers a sequencing strategy for ground-based telescopes. This scheduler is designed in the framework of a Markov Decision Process (MDP) and consists of a sub-linear online controller and an offline supervisory control optimizer. The online control law is computed at the moment of decision for the next visit, and the supervisory optimizer trains the controller with simulation data. The choice of the Differential Evolution (DE) optimizer, together with a reduced state space of the telescope system, offers an efficient and parallelizable optimization algorithm. In this study, we applied the proposed scheduler to the problem of the Large Synoptic Survey Telescope (LSST). Preliminary results for a simplified model of LSST are promising in terms of both optimality and computational cost.

  12. Emotion Index of Cover Song Music Video Clips based on Facial Expression Recognition

    DEFF Research Database (Denmark)

    Vidakis, Nikolaos; Kavallakis, George; Triantafyllidis, Georgios

    2017-01-01

    This paper presents a scheme for creating an emotion index of cover song music video clips by recognizing and classifying the facial expressions of the artist in the video. More specifically, it fuses effective and robust algorithms employed for expression recognition, along with a neural network system using the features extracted by the SIFT algorithm. We also argue for the need for this fusion of different expression recognition algorithms, because of the way emotions are linked to facial expressions in music video clips.

  13. Limitations in 4-Year-Old Children's Sensitivity to the Spacing among Facial Features

    Science.gov (United States)

    Mondloch, Catherine J.; Thomson, Kendra

    2008-01-01

    Four-year-olds' sensitivity to differences among faces in the spacing of features was tested under 4 task conditions: judging distinctiveness when the external contour was visible and when it was occluded, simultaneous match-to-sample, and recognizing the face of a friend. In each task, the foil differed only in the spacing of features, and…

  14. Facial expression recognition based on local SVM classifiers

    Institute of Scientific and Technical Information of China (English)

    孙正兴; 徐文晖

    2008-01-01

    This paper presents a novel technique developed for the identification of facial expressions in video sources. The method uses two steps: facial expression feature extraction and expression classification. First, we use an active shape model (ASM) based facial point tracking system to extract the geometric features of facial expressions in videos. Then, a new type of local support vector machine (LSVM) is created to classify the facial expressions. Comparative experiments on the Cohn-Kanade database with four classifiers (KNN, SVM, KNN-SVM, and LSVM) demonstrate the effectiveness of our method.

  15. Hepatitis Diagnosis Using Facial Color Image

    Science.gov (United States)

    Liu, Mingjia; Guo, Zhenhua

    Facial color diagnosis is an important diagnostic method in traditional Chinese medicine (TCM). However, due to its qualitative, subjective and experience-based nature, traditional facial color diagnosis has a very limited application in clinical medicine. To circumvent the subjective and qualitative problems of the facial color diagnosis of traditional Chinese medicine, in this paper we present a novel computer-aided facial color diagnosis method (CAFCDM). The method has three parts: a face image database, an image preprocessing module and a diagnosis engine. The face image database was collected from a group of 116 patients affected by two kinds of liver disease and 29 healthy volunteers. Quantitative color features are extracted from the facial images using standard digital image processing techniques. Then, a KNN classifier is employed to model the relationship between the quantitative color features and the diseases. The results show that the method can properly identify three groups (healthy, severe hepatitis with jaundice, and severe hepatitis without jaundice) with an accuracy higher than 73%.
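
    The diagnosis engine can be sketched as a simple colour-feature-plus-KNN pipeline; the mean/standard-deviation colour feature and the value of k are assumptions for illustration, not the paper's exact feature set.

```python
# Minimal sketch of the diagnosis engine: mean/std colour features from a facial
# region, classified with KNN into the three diagnostic groups.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def extract_color_feature(face_rgb, mask=None):
    pixels = face_rgb.reshape(-1, 3) if mask is None else face_rgb[mask]
    return np.concatenate([pixels.mean(axis=0), pixels.std(axis=0)])   # 6-D colour feature

def train_diagnosis_knn(features, labels, k=5):
    return KNeighborsClassifier(n_neighbors=k).fit(features, labels)
```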

  16. A novel three-dimensional smile analysis based on dynamic evaluation of facial curve contour

    Science.gov (United States)

    Lin, Yi; Lin, Han; Lin, Qiuping; Zhang, Jinxin; Zhu, Ping; Lu, Yao; Zhao, Zhi; Lv, Jiahong; Lee, Mln Kyeong; Xu, Yue

    2016-02-01

    The influence of three-dimensional facial contour and dynamic evaluation decoding on factors of smile esthetics is essential for facial beauty improvement. However, the kinematic features of the facial smile contour and the contributions of the soft tissue and the underlying skeleton are uncharted. Here, the cheekbone-maxilla contour and the nasolabial fold were combined into a “smile contour” delineating the overall facial topography that emerges prominently in smiling. We screened out the stable and unstable points on the smile contour using facial motion capture and curve fitting, before analyzing the correlation between the soft tissue coordinates and their hard tissue counterparts at the screened points. Our findings suggest that the mouth corner region is the most mobile area characterizing the smile expression, while the other areas remain relatively stable. Therefore, the perioral area should be evaluated dynamically, while the static assessment outcome of other parts of the smile contour contributes only partially to their dynamic esthetics. Moreover, different from the end piece, the morphologies of the zygomatic area and the superior part of the nasolabial crease are determined largely by the skeleton at rest, implying that the latter can be altered by orthopedic or orthodontic correction and the former better improved by cosmetic procedures to improve the beauty of the smile.

  17. A small-world network model of facial emotion recognition.

    Science.gov (United States)

    Takehara, Takuma; Ochiai, Fumio; Suzuki, Naoto

    2016-01-01

    Various models have been proposed to increase understanding of the cognitive basis of facial emotions. Despite those efforts, interactions between facial emotions have received minimal attention. If collective behaviours relating to each facial emotion in the comprehensive cognitive system could be assumed, specific facial emotion relationship patterns might emerge. In this study, we demonstrate that the frameworks of complex networks can effectively capture those patterns. We generate 81 facial emotion images (6 prototypes and 75 morphs) and then ask participants to rate degrees of similarity in 3240 facial emotion pairs in a paired comparison task. A facial emotion network constructed on the basis of similarity clearly forms a small-world network, which features an extremely short average network distance and close connectivity. Further, even if two facial emotions have opposing valences, they are connected within only two steps. In addition, we show that intermediary morphs are crucial for maintaining full network integration, whereas prototypes are not at all important. These results suggest the existence of collective behaviours in the cognitive systems of facial emotions and also describe why people can efficiently recognize facial emotions in terms of information transmission and propagation. For comparison, we construct three simulated networks--one based on the categorical model, one based on the dimensional model, and one random network. The results reveal that small-world connectivity in facial emotion networks is apparently different from those networks, suggesting that a small-world network is the most suitable model for capturing the cognitive basis of facial emotions.
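
    The small-world check itself can be sketched with networkx, assuming a symmetric matrix of pairwise similarity ratings and a similarity threshold for connecting emotion pairs; both are illustrative stand-ins for the paper's construction.

```python
# Minimal sketch of a small-world check for a similarity-based emotion network:
# connect sufficiently similar emotion pairs, then compare path length and
# clustering against a random graph of the same size and density.
import networkx as nx
import numpy as np

def small_world_stats(similarity, threshold):
    """similarity: symmetric (n, n) matrix of pairwise similarity ratings."""
    n = similarity.shape[0]
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if similarity[i, j] >= threshold]
    G = nx.Graph(edges)
    G = G.subgraph(max(nx.connected_components(G), key=len))      # largest component
    R = nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(), seed=0)
    R = R.subgraph(max(nx.connected_components(R), key=len))
    return {'L': nx.average_shortest_path_length(G), 'C': nx.average_clustering(G),
            'L_random': nx.average_shortest_path_length(R),
            'C_random': nx.average_clustering(R)}
```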

  18. Automated Facial Action Coding System for dynamic analysis of facial expressions in neuropsychiatric disorders.

    Science.gov (United States)

    Hamm, Jihun; Kohler, Christian G; Gur, Ruben C; Verma, Ragini

    2011-09-15

    Facial expression is widely used to evaluate emotional impairment in neuropsychiatric disorders. Ekman and Friesen's Facial Action Coding System (FACS) encodes movements of individual facial muscles from distinct momentary changes in facial appearance. Unlike facial expression ratings based on categorization of expressions into prototypical emotions (happiness, sadness, anger, fear, disgust, etc.), FACS can encode ambiguous and subtle expressions, and therefore is potentially more suitable for analyzing the small differences in facial affect. However, FACS rating requires extensive training, and is time consuming and subjective thus prone to bias. To overcome these limitations, we developed an automated FACS based on advanced computer science technology. The system automatically tracks faces in a video, extracts geometric and texture features, and produces temporal profiles of each facial muscle movement. These profiles are quantified to compute frequencies of single and combined Action Units (AUs) in videos, and they can facilitate a statistical study of large populations in disorders known to impact facial expression. We derived quantitative measures of flat and inappropriate facial affect automatically from temporal AU profiles. Applicability of the automated FACS was illustrated in a pilot study, by applying it to data of videos from eight schizophrenia patients and controls. We created temporal AU profiles that provided rich information on the dynamics of facial muscle movements for each subject. The quantitative measures of flatness and inappropriateness showed clear differences between patients and the controls, highlighting their potential in automatic and objective quantification of symptom severity.

  19. An optimized ERP brain-computer interface based on facial expression changes

    Science.gov (United States)

    Jin, Jing; Daly, Ian; Zhang, Yu; Wang, Xingyu; Cichocki, Andrzej

    2014-06-01

    Objective. Interferences from spatially adjacent non-target stimuli are known to evoke event-related potentials (ERPs) during non-target flashes and, therefore, lead to false positives. This phenomenon was commonly seen in visual attention-based brain-computer interfaces (BCIs) using conspicuous stimuli and is known to adversely affect the performance of BCI systems. Although users try to focus on the target stimulus, they cannot help but be affected by conspicuous changes of the stimuli (such as flashes or presenting images) which were adjacent to the target stimulus. Furthermore, subjects have reported that conspicuous stimuli made them tired and annoyed. In view of this, the aim of this study was to reduce adjacent interference, annoyance and fatigue using a new stimulus presentation pattern based upon facial expression changes. Our goal was not to design a new pattern which could evoke larger ERPs than the face pattern, but to design a new pattern which could reduce adjacent interference, annoyance and fatigue, and evoke ERPs as good as those observed during the face pattern. Approach. Positive facial expressions could be changed to negative facial expressions by minor changes to the original facial image. Although the changes are minor, the contrast is big enough to evoke strong ERPs. In this paper, a facial expression change pattern between positive and negative facial expressions was used to attempt to minimize interference effects. This was compared against two different conditions, a shuffled pattern containing the same shapes and colours as the facial expression change pattern, but without the semantic content associated with a change in expression, and a face versus no face pattern. Comparisons were made in terms of classification accuracy and information transfer rate as well as user supplied subjective measures. Main results. The results showed that interferences from adjacent stimuli, annoyance and the fatigue experienced by the subjects could be

  20. Qualitative and Quantitative Analysis for Facial Complexion in Traditional Chinese Medicine

    Directory of Open Access Journals (Sweden)

    Changbo Zhao

    2014-01-01

    Full Text Available Facial diagnosis is an important and very intuitive diagnostic method in Traditional Chinese Medicine (TCM). However, due to its qualitative and experience-based subjective character, traditional facial diagnosis has certain limitations in clinical medicine. Computerized inspection methods provide classification models to recognize facial complexion (including color and gloss). However, previous works only study the classification of facial complexion, which we regard as qualitative analysis; the severity or degree of facial complexion has not yet been quantified. This paper aims to provide both qualitative and quantitative analysis of facial complexion. We propose a novel feature representation of facial complexion computed from the whole face of a patient. The features are built from four chromaticity bases split by the luminance distribution in the CIELAB color space. The chromaticity bases are constructed from the dominant facial colors using two-level clustering, and the optimal luminance split is determined by experimental comparison. The features are shown to be more distinctive than previous facial complexion feature representations. Complexion recognition proceeds by training an SVM classifier with the optimal model parameters. In addition, the features are further improved by a weighted fusion of five local facial regions. Extensive experimental results show that the proposed features achieve the highest facial color recognition performance, with a total accuracy of 86.89%. Furthermore, the proposed recognition framework can also rate the degree of both color and gloss of the facial complexion by learning a ranking function.
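
    A minimal sketch of the chromaticity-basis idea, assuming scikit-image and scikit-learn; the number of bases, the fixed luminance split and the omission of the five-region fusion are simplifications, not the paper's exact configuration.

```python
import numpy as np
from skimage.color import rgb2lab
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def fit_chromaticity_bases(train_faces, n_bases=4):
    """Cluster a*/b* chromaticity of training pixels into shared bases."""
    ab = np.concatenate([rgb2lab(f)[..., 1:].reshape(-1, 2) for f in train_faces])
    return KMeans(n_clusters=n_bases, n_init=10).fit(ab)

def complexion_features(rgb_face, bases, luminance_split=50.0):
    """Histogram over the bases, split by a (hypothetical) luminance threshold."""
    lab = rgb2lab(rgb_face)
    L = lab[..., 0].ravel()
    labels = bases.predict(lab[..., 1:].reshape(-1, 2))
    feats = []
    for mask in (L < luminance_split, L >= luminance_split):
        hist = np.bincount(labels[mask], minlength=bases.n_clusters).astype(float)
        feats.append(hist / max(hist.sum(), 1.0))
    return np.concatenate(feats)

# bases = fit_chromaticity_bases(train_faces)
# X = np.stack([complexion_features(f, bases) for f in train_faces])
# clf = SVC(kernel='rbf').fit(X, complexion_labels)
```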

  1. Feature-based Image Sequence Compression Coding

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A novel compression method for video teleconferencing applications is presented. Semantic-based coding based on human image features is realized, where human features are adopted as parameters. Model-based coding and the concept of vector coding are combined with work on image feature extraction to obtain the result.

  2. Facial orientation and facial shape in extant great apes: a geometric morphometric analysis of covariation.

    Directory of Open Access Journals (Sweden)

    Dimitri Neaux

    Full Text Available The organization of the bony face is complex, its morphology being influenced in part by the rest of the cranium. Characterizing the facial morphological variation and craniofacial covariation patterns in extant hominids is fundamental to the understanding of their evolutionary history. Numerous studies on hominid facial shape have proposed hypotheses concerning the relationship between the anterior facial shape, facial block orientation and basicranial flexion. In this study we test these hypotheses in a sample of adult specimens belonging to three extant hominid genera (Homo, Pan and Gorilla). Intraspecific variation and covariation patterns are analyzed using geometric morphometric methods and multivariate statistics, such as partial least squares on three-dimensional landmark coordinates. Our results indicate significant intraspecific covariation between facial shape, facial block orientation and basicranial flexion. Hominids share similar characteristics in the relationship between anterior facial shape and facial block orientation. Modern humans exhibit a specific pattern in the covariation between anterior facial shape and basicranial flexion. This peculiar feature underscores the role of modern humans' highly-flexed basicranium in the overall integration of the cranium. Furthermore, our results are consistent with the hypothesis of a relationship between the reduction of the value of the cranial base angle and a downward rotation of the facial block in modern humans, and to a lesser extent in chimpanzees.

  3. FACIAL LANDMARKING LOCALIZATION FOR EMOTION RECOGNITION USING BAYESIAN SHAPE MODELS

    Directory of Open Access Journals (Sweden)

    Hernan F. Garcia

    2013-02-01

    Full Text Available This work presents a framework for emotion recognition based on facial expression analysis, using Bayesian Shape Models (BSM) for facial landmark localization. The FACS-compliant facial feature tracking is based on the Bayesian Shape Model, which estimates the model parameters with an implementation of the EM algorithm. We describe the characterization methodology derived from the parametric model and evaluate the accuracy of feature detection and of the estimation of the parameters associated with facial expressions, analyzing robustness to pose and local variations. A methodology for emotion characterization is then introduced to perform the recognition. The experimental results show that the proposed model can effectively detect the different facial expressions, outperforming conventional approaches to emotion recognition and achieving high performance in estimating the emotion of a given subject. The model and the characterization methodology identified the emotion type correctly in 95.6% of the cases.

  4. Automatic age estimation based on facial aging patterns.

    Science.gov (United States)

    Geng, Xin; Zhou, Zhi-Hua; Smith-Miles, Kate

    2007-12-01

    While recognition of most facial variations, such as identity, expression and gender, has been extensively studied, automatic age estimation has rarely been explored. In contrast to other facial variations, aging variation presents several unique characteristics which make age estimation a challenging task. This paper proposes an automatic age estimation method named AGES (AGing pattErn Subspace). The basic idea is to model the aging pattern, which is defined as the sequence of a particular individual's face images sorted in time order, by constructing a representative subspace. The proper aging pattern for a previously unseen face image is determined by the projection in the subspace that can reconstruct the face image with minimum reconstruction error, while the position of the face image in that aging pattern will then indicate its age. In the experiments, AGES and its variants are compared with the limited existing age estimation methods (WAS and AAS) and some well-established classification methods (kNN, BP, C4.5, and SVM). Moreover, a comparison with human perception ability on age is conducted. It is interesting to note that the performance of AGES is not only significantly better than that of all the other algorithms, but also comparable to that of the human observers.

  5. Brief communication: MaqFACS: A muscle-based facial movement coding system for the rhesus macaque.

    Science.gov (United States)

    Parr, L A; Waller, B M; Burrows, A M; Gothard, K M; Vick, S J

    2010-12-01

    Over 125 years ago, Charles Darwin (1872) suggested that the only way to fully understand the form and function of human facial expression was to make comparisons with other species. Nevertheless, it has been only recently that facial expressions in humans and related primate species have been compared using systematic, anatomically based techniques. Through this approach, large-scale evolutionary and phylogenetic analyses of facial expressions, including their homology, can now be addressed. Here, the development of a muscular-based system for measuring facial movement in rhesus macaques (Macaca mulatta) is described based on the well-known FACS (Facial Action Coding System) and ChimpFACS. These systems describe facial movement according to the action of the underlying facial musculature, which is highly conserved across primates. The coding systems are standardized; thus, their use is comparable across laboratories and study populations. In the development of MaqFACS, several species differences in the facial movement repertoire of rhesus macaques were observed in comparison with chimpanzees and humans, particularly with regard to brow movements, puckering of the lips, and ear movements. These differences do not seem to be the result of constraints imposed by morphological differences in the facial structure of these three species. It is more likely that they reflect unique specializations in the communicative repertoire of each species.

  6. Personality Trait and Facial Expression Filter-Based Brain-Computer Interface

    Directory of Open Access Journals (Sweden)

    Seongah Chin

    2013-02-01

    Full Text Available In this paper, we present technical approaches that bridge the gap in the research related to the use of brain‐computer interfaces for entertainment and facial expressions. Such facial expressions that reflect an individual’s personal traits can be used to better realize artificial facial expressions in a gaming environment based on a brain‐computer interface. First, an emotion extraction filter is introduced in order to classify emotions on the basis of the users’ brain signals in real time. Next, a personality trait filter is defined to classify extrovert and introvert types, which manifest as five traits: very extrovert, extrovert, medium, introvert and very introvert. In addition, facial expressions derived from expression rates are obtained by an extrovert‐introvert fuzzy model through its defuzzification process. Finally, we confirm this validation via an analysis of the variance of the personality trait filter, a k‐fold cross validation of the emotion extraction filter, an accuracy analysis, a user study of facial synthesis and a test case game.

  7. Facial Expression Recognition Based on LBP and SVM Decision Tree

    Institute of Scientific and Technical Information of China (English)

    李扬; 郭海礁

    2014-01-01

    In order to improve the recognition rate of facial expression recognition, a facial expression recognition algorithm combining LBP and an SVM decision tree is proposed. First, the facial expression image is converted into an LBP feature spectrum using the LBP algorithm; the LBP feature spectrum is then converted into an LBP histogram feature sequence; finally, classification and recognition of the facial expression is completed by the SVM decision tree algorithm. The effectiveness of the algorithm is demonstrated on the JAFFE facial expression database.
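
    A minimal sketch of the LBP-histogram-plus-SVM pipeline, assuming scikit-image and scikit-learn; a single multi-class SVM stands in for the paper's SVM decision tree, and all parameters are illustrative.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(gray_face, P=8, R=1):
    """Uniform LBP map reduced to a normalized histogram feature vector."""
    lbp = local_binary_pattern(gray_face, P, R, method='uniform')
    n_bins = P + 2                      # P+1 uniform codes plus one "non-uniform" bin
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins))
    return hist / max(hist.sum(), 1)

# X = np.stack([lbp_histogram(img) for img in gray_faces])   # gray_faces: aligned face crops
# clf = SVC(kernel='linear').fit(X, expression_labels)
```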

  8. Face verification system for Android mobile devices using histogram based features

    Science.gov (United States)

    Sato, Sho; Kobayashi, Kazuhiro; Chen, Qiu

    2016-07-01

    This paper proposes a face verification system that runs on Android mobile devices. In this system, a facial image is first captured by the built-in camera of the Android device, and face detection is then performed using Haar-like features and the AdaBoost learning algorithm. The proposed system verifies the detected face using histogram-based features, which are generated as a binary Vector Quantization (VQ) histogram of DCT coefficients in the low-frequency domain, as well as an Improved Local Binary Pattern (Improved LBP) histogram in the spatial domain. Verification results with the different types of histogram-based features are first obtained separately and then combined by weighted averaging. We evaluate the proposed algorithm using the publicly available ORL database and facial images captured by an Android tablet.
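
    The detection step described here corresponds to OpenCV's standard Haar cascade; a small sketch of that stage (the cascade path and parameters are assumptions, and the histogram-based verification stage is omitted) might look like this:

```python
import cv2

# OpenCV ships a pretrained frontal-face Haar cascade with the Python package.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

def detect_faces(bgr_image):
    """Return bounding boxes (x, y, w, h) of detected faces."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)        # mild illumination normalization
    return cascade.detectMultiScale(gray, scaleFactor=1.1,
                                    minNeighbors=5, minSize=(60, 60))

# frame = cv2.imread('capture.jpg')      # hypothetical camera capture
# for (x, y, w, h) in detect_faces(frame):
#     face_crop = frame[y:y + h, x:x + w]   # would then go to the verification stage
```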

  9. Return of feature-based cost modeling

    Science.gov (United States)

    Creese, Robert C.; Patrawala, Taher B.

    1998-10-01

    Feature-based cost modeling is thought of as a relatively new approach to cost modeling, but it saw considerable development in the 1950s. Substantial work was published in the 1950s by Boeing on the cost of various casting processes--sand casting, die casting, investment casting and permanent mold casting--as a function of a single casting feature, casting volume. Additional approaches to feature-based cost modeling have since been made, and this work reviews the previous work and proposes an integrated model for feature-based cost modeling.

  10. Avoiding occlusal derangement in facial fractures: An evidence based approach

    Directory of Open Access Journals (Sweden)

    Derick Mendonca

    2013-01-01

    Full Text Available Facial fractures with occlusal derangement describe any fracture which directly or indirectly affects the occlusal relationship. Such fractures include dento-alveolar fractures in the maxilla and mandible, midface fractures (Le Fort I, II and III) and mandible fractures of the symphysis, parasymphysis, body, angle, and condyle. In some of these fractures the fracture line runs through the dento-alveolar component, whereas in others the fracture line is remote from the occlusal plane but nevertheless alters the occlusion. The complications that can ensue from the management of maxillofacial fractures are predominantly iatrogenic and can therefore be avoided if adequate care is exercised by the operating surgeon. This paper does not emphasize complications arising from any particular technique in the management of maxillofacial fractures but rather discusses complications in general, irrespective of the technique used.

  11. Robust facial expression recognition via compressive sensing.

    Science.gov (United States)

    Zhang, Shiqing; Zhao, Xiaoming; Lei, Bicheng

    2012-01-01

    Recently, compressive sensing (CS) has attracted increasing attention in the areas of signal processing, computer vision and pattern recognition. In this paper, a new method based on the CS theory is presented for robust facial expression recognition. The CS theory is used to construct a sparse representation classifier (SRC). The effectiveness and robustness of the SRC method are investigated on clean and occluded facial expression images. Three typical facial features, i.e., the raw pixels, Gabor wavelet representation and local binary patterns (LBP), are extracted to evaluate the performance of the SRC method. Compared with the nearest neighbor (NN), linear support vector machines (SVM) and the nearest subspace (NS), experimental results on the popular Cohn-Kanade facial expression database demonstrate that the SRC method obtains better performance and stronger robustness to corruption and occlusion in robust facial expression recognition tasks.
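
    A compact sketch of a sparse-representation classifier of this general kind, using orthogonal matching pursuit from scikit-learn as the sparse solver; the paper's exact l1 solver, features and parameters are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def src_predict(X_train, y_train, x_test, n_nonzero=30):
    """Sparse-code the test sample over the training dictionary and
    return the class whose coefficients give the smallest residual."""
    D = X_train.T.astype(float)                     # columns = training samples
    D /= np.linalg.norm(D, axis=0, keepdims=True)   # normalize dictionary atoms
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero,
                                    fit_intercept=False).fit(D, x_test)
    coef = omp.coef_
    residuals = {}
    for c in np.unique(y_train):
        coef_c = np.where(y_train == c, coef, 0.0)  # keep only class-c coefficients
        residuals[c] = np.linalg.norm(x_test - D @ coef_c)
    return min(residuals, key=residuals.get)

# y_pred = src_predict(np.asarray(X_train), np.asarray(y_train), x_test)
```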

  12. Thermodynamics of micellization of cholic acid based facial amphiphiles carrying three permanent ionic head groups

    NARCIS (Netherlands)

    Willemen, H.M.; Marcelis, A.T.M.; Sudhölter, E.J.R.

    2003-01-01

    This paper describes a series of cholic acid based facial amphiphiles carrying three ionic headgroups. Their micellization behavior in water was studied as a function of spacer length and alkyl tail length: both were found to have a small influence on the critical micellization concentration (cmc).

  13. Nine-year-old children use norm-based coding to visually represent facial expression.

    Science.gov (United States)

    Burton, Nichola; Jeffery, Linda; Skinner, Andrew L; Benton, Christopher P; Rhodes, Gillian

    2013-10-01

    Children are less skilled than adults at making judgments about facial expression. This could be because they have not yet developed adult-like mechanisms for visually representing faces. Adults are thought to represent faces in a multidimensional face-space, and have been shown to code the expression of a face relative to the norm or average face in face-space. Norm-based coding is economical and adaptive, and may be what makes adults more sensitive to facial expression than children. This study investigated the coding system that children use to represent facial expression. An adaptation aftereffect paradigm was used to test 24 adults and 18 children (9 years 2 months to 9 years 11 months old). Participants adapted to weak and strong antiexpressions. They then judged the expression of an average expression. Adaptation created aftereffects that made the test face look like the expression opposite that of the adaptor. Consistent with the predictions of norm-based but not exemplar-based coding, aftereffects were larger for strong than weak adaptors for both age groups. Results indicate that, like adults, children's coding of facial expressions is norm-based.

  14. Persistent facial pain conditions

    DEFF Research Database (Denmark)

    Forssell, Heli; Alstergren, Per; Bakke, Merete;

    2016-01-01

    , clinical features, consequences, central and peripheral mechanisms, diagnostic criteria (DC/TMD), and principles of management. For each of the neuropathic facial pain entities, the definitions, prevalence, clinical features, and diagnostics are described. The current understanding of the pathophysiology...

  15. A comprehensive approach to long-standing facial paralysis based on lengthening temporalis myoplasty.

    Science.gov (United States)

    Labbè, D; Bussu, F; Iodice, A

    2012-06-01

    Long-standing peripheral monolateral facial paralysis in the adult has challenged otolaryngologists, neurologists and plastic surgeons for centuries. Notwithstanding, the ultimate goal of normality of the paralyzed hemi-face with symmetry at rest, and the achievement of a spontaneous symmetrical smile with corneal protection, has not been fully reached. At the beginning of the 20th century, the main options were neural reconstructions including accessory-to-facial nerve transfer and hypoglossal-to-facial nerve crossover. In the first half of the 20th century, various techniques for static correction with autologous temporalis muscle and fascia grafts were proposed, such as the techniques of Gillies (1934) and McLaughlin (1949). Cross-facial nerve grafts have been performed since the beginning of the 1970s, often with the attempt to transplant free muscle to restore active movements. However, these transplants were non-vascularized, and further evaluations revealed central fibrosis and minimal return of function. A major step was taken in the second half of the 1970s with the introduction of microneurovascular muscle transfer in facial reanimation, which, often combined in two steps with a cross-facial nerve graft, has become the most popular option for the comprehensive treatment of long-standing facial paralysis. In the second half of the 1990s in France, a regional muscle transfer technique with the definite advantages of being one-step, technically easier and relatively fast, namely lengthening temporalis myoplasty, acquired popularity and consensus among surgeons treating facial paralysis. A total of 111 patients with facial paralysis were treated in Caen between 1997 and 2005 by a single surgeon, who developed 2 variants of the technique (V1, V2), each with its advantages and disadvantages, but both based on the same anatomo-functional background and aim, which is transfer of the temporalis muscle tendon on the coronoid process to the lips. For a comprehensive

  16. Facial expression discrimination varies with presentation time but not with fixation on features: A backward masking study using eye-tracking

    OpenAIRE

    2013-01-01

    The current study investigated the effects of presentation time and fixation to expression-specific diagnostic features on emotion discrimination performance in a backward masking task. While no differences were found when stimuli were presented for 16.67 ms, differences between facial emotions emerged beyond the happy-superiority effect at presentation times as early as 50 ms. Happy expressions were best discriminated, followed by neutral and disgusted, then surprised, and finally fearful expressions.

  17. Neural processing of fearful and happy facial expressions during emotion-relevant and emotion-irrelevant tasks: A fixation-to-feature approach.

    Science.gov (United States)

    Neath-Tavares, Karly N; Itier, Roxane J

    2016-09-01

    Research suggests an important role of the eyes and mouth for discriminating facial expressions of emotion. A gaze-contingent procedure was used to test the impact of fixation to facial features on the neural response to fearful, happy and neutral facial expressions in an emotion discrimination (Exp.1) and an oddball detection (Exp.2) task. The N170 was the only eye-sensitive ERP component, and this sensitivity did not vary across facial expressions. In both tasks, compared to neutral faces, responses to happy expressions were seen as early as 100-120ms occipitally, while responses to fearful expressions started around 150ms, on or after the N170, at both occipital and lateral-posterior sites. Analyses of scalp topographies revealed different distributions of these two emotion effects across most of the epoch. Emotion processing interacted with fixation location at different times between tasks. Results suggest a role of both the eyes and mouth in the neural processing of fearful expressions and of the mouth in the processing of happy expressions, before 350ms.

  18. Infrared-based blink-detecting glasses for facial pacing: toward a bionic blink.

    Science.gov (United States)

    Frigerio, Alice; Hadlock, Tessa A; Murray, Elizabeth H; Heaton, James T

    2014-01-01

    IMPORTANCE Facial paralysis remains one of the most challenging conditions to effectively manage, often causing life-altering deficits in both function and appearance. Facial rehabilitation via pacing and robotic technology has great yet unmet potential. A critical first step toward reanimating symmetrical facial movement in cases of unilateral paralysis is the detection of healthy movement to use as a trigger for stimulated movement. OBJECTIVE To test a blink detection system that can be attached to standard eyeglasses and used as part of a closed-loop facial pacing system. DESIGN, SETTING, AND PARTICIPANTS Standard safety glasses were equipped with an infrared (IR) emitter-detector unit, oriented horizontally across the palpebral fissure, creating a monitored IR beam that became interrupted when the eyelids closed, and were tested in 24 healthy volunteers from a tertiary care facial nerve center community. MAIN OUTCOMES AND MEASURES Video-quantified blinking was compared with both IR sensor signal magnitude and rate of change in healthy participants with their gaze in repose, while they shifted their gaze from central to far-peripheral positions, and during the production of particular facial expressions. RESULTS Blink detection based on signal magnitude achieved 100% sensitivity in forward gaze but generated false detections on downward gaze. Calculations of peak rate of signal change (first derivative) typically distinguished blinks from gaze-related eyelid movements. During forward gaze, 87% of detected blink events were true positives, 11% were false positives, and 2% were false negatives. Of the 11% false positives, 6% were associated with partial eyelid closures. During gaze changes, false blink detection occurred 6% of the time during lateral eye movements, 10% of the time during upward movements, 47% of the time during downward movements, and 6% of the time for movements from an upward or downward gaze back to the primary gaze. Facial expressions

  19. Spontaneous Facial Mimicry in Response to Dynamic Facial Expressions

    Science.gov (United States)

    Sato, Wataru; Yoshikawa, Sakiko

    2007-01-01

    Based on previous neuroscientific evidence indicating activation of the mirror neuron system in response to dynamic facial actions, we hypothesized that facial mimicry would occur while subjects viewed dynamic facial expressions. To test this hypothesis, dynamic/static facial expressions of anger/happiness were presented using computer-morphing…

  20. Evaluation of Facial Proportions and Their Association with Thumbprint Patterns among Hausa Ethnic Group

    Directory of Open Access Journals (Sweden)

    Lawan Hassan Adamu

    2017-01-01

    Full Text Available Background. Evolutionary forces such as the founder effect, which result in reproductive isolation and reduced genetic diversity, may have led to ethnic variation in facial appearance and in other features such as fingerprint patterns. Aim. To determine the pattern of facial proportions based on the neoclassical facial canons. The associations between facial proportions and thumbprint patterns were also investigated. Subjects and Methods. A total of 534 subjects aged 18-25 years participated. Direct sensing and photographic methods were used to determine fingerprint and facial features, respectively. Fisher's Exact test was used to test for associations between variables. Results. It was observed that in both males and females there was no (0%) occurrence of the classical canon of facial proportion. There was also no association between sex and facial proportions. A significant association was found between thumbprint patterns and the vertical class III neoclassical facial proportion, but only when the frequencies of the left and right thumbprint patterns were considered as a single entity. There was no significant association between the thumbprint patterns of the right and left thumbs and the vertical or horizontal facial proportions in male and female participants. It was observed that the right and left thumbs tended more toward significant associations with facial proportions in males and females, respectively. Conclusion. Fingerprint pattern and its associated features may be controlled by a different mechanism, such that the two may correlate differently with other features, as may be the case with facial features.

  1. Rules versus Prototype Matching: Strategies of Perception of Emotional Facial Expressions in the Autism Spectrum

    Science.gov (United States)

    Rutherford, M. D.; McIntosh, Daniel N.

    2007-01-01

    When perceiving emotional facial expressions, people with autistic spectrum disorders (ASD) appear to focus on individual facial features rather than configurations. This paper tests whether individuals with ASD use these features in a rule-based strategy of emotional perception, rather than a typical, template-based strategy by considering…

  2. FaceTOON: a unified platform for feature-based cartoon expression generation

    Science.gov (United States)

    Zaharia, Titus; Marre, Olivier; Prêteux, Françoise; Monjaux, Perrine

    2008-02-01

    This paper presents the FaceTOON system, a semi-automatic platform dedicated to the creation of verbal and emotional facial expressions within the applicative framework of 2D cartoon production. The proposed FaceTOON platform makes it possible to rapidly create 3D facial animations with a minimum amount of user interaction. In contrast with existing commercial 3D modeling software, which usually requires advanced 3D graphics skills and competences from the users, the FaceTOON system is based exclusively on 2D interaction mechanisms, the 3D modeling stage being completely transparent for the user. The system takes as input a neutral 3D face model, free of any facial features, and a set of 2D drawings representing the desired facial features. A 2D/3D virtual mapping procedure makes it possible to obtain a ready-for-animation model which can be directly manipulated and deformed to generate expressions. The platform includes a complete set of dedicated tools for 2D/3D interactive deformation, pose management, key-frame interpolation and MPEG-4 compliant animation and rendering. The proposed FaceTOON system is currently considered for industrial evaluation and commercialization by the Quadraxis company.

  3. A Genetic Algorithm-Based Feature Selection

    Directory of Open Access Journals (Sweden)

    Babatunde Oluleye

    2014-07-01

    Full Text Available This article details the exploration and application of a Genetic Algorithm (GA) for feature selection. In particular, a binary GA was used for dimensionality reduction to enhance the performance of the classifiers concerned. In this work, one hundred (100) features were extracted from the set of images found in the Flavia dataset (a publicly available dataset). The extracted features are Zernike Moments (ZM), Fourier Descriptors (FD), Legendre Moments (LM), Hu 7 Moments (Hu7M), Texture Properties (TP) and Geometrical Properties (GP). The main contributions of this article are (1) detailed documentation of the GA Toolbox in MATLAB and (2) the development of a GA-based feature selector using a novel fitness function (kNN-based classification error), which enabled the GA to obtain a combinatorial set of features giving rise to optimal accuracy. The results obtained were compared with various feature selectors from the WEKA software and were better in many respects, in particular in terms of classification accuracy.
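
    A self-contained sketch of a binary GA feature selector with a kNN-based fitness, written in NumPy/scikit-learn rather than the MATLAB GA Toolbox documented in the article; the population size, operators and rates are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    """Cross-validated kNN accuracy on the selected feature subset."""
    if mask.sum() == 0:
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=3)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

def ga_select(X, y, pop_size=30, generations=40, p_mut=0.02):
    n = X.shape[1]
    pop = rng.integers(0, 2, size=(pop_size, n))
    for _ in range(generations):
        scores = np.array([fitness(ind, X, y) for ind in pop])
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]   # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n)                    # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n) < p_mut                # bit-flip mutation
            child[flip] = 1 - child[flip]
            children.append(child)
        pop = np.vstack([parents, children])
    best = pop[np.argmax([fitness(ind, X, y) for ind in pop])]
    return best.astype(bool)                            # boolean mask of selected features
```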

  4. Distance-based features in pattern classification

    Directory of Open Access Journals (Sweden)

    Lin Wei-Yang

    2011-01-01

    Full Text Available Abstract In data mining and pattern classification, feature extraction and representation methods are a very important step, since the extracted features have a direct and significant impact on classification accuracy. In the literature, a number of novel feature extraction and representation methods have been proposed. However, many of them only focus on specific domain problems. In this article, we introduce a novel distance-based feature extraction method for various pattern classification problems. Specifically, two distances are extracted, based on (1) the distance between the data and its intra-cluster center and (2) the distance between the data and its extra-cluster centers. Experiments based on ten datasets containing different numbers of classes, samples, and dimensions are examined. The experimental results using naïve Bayes, k-NN, and SVM classifiers show that concatenating the original features provided by the datasets with the distance-based features can improve classification accuracy, except for image-related datasets. In particular, the distance-based features are suitable for datasets with smaller numbers of classes and samples and lower feature dimensionality. Moreover, two datasets which have similar characteristics are further used to validate this finding. The result is consistent with the first experiment in that adding the distance-based features can improve classification performance.
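
    A small sketch of the two distance features described above, using class centroids as cluster centers; assigning query samples to the nearest centroid is an assumption made so the sketch also works for unlabeled data.

```python
import numpy as np

def add_distance_features(X_train, y_train, X_query):
    """Append (1) distance to the nearest ("intra") centroid and
    (2) mean distance to the remaining ("extra") centroids."""
    centroids = {c: X_train[y_train == c].mean(axis=0) for c in np.unique(y_train)}
    extra_feats = []
    for x in X_query:
        d = {c: np.linalg.norm(x - mu) for c, mu in centroids.items()}
        own = min(d, key=d.get)
        intra = d[own]
        extra = np.mean([v for c, v in d.items() if c != own])
        extra_feats.append([intra, extra])
    return np.hstack([X_query, np.asarray(extra_feats)])

# X_train_aug = add_distance_features(X_train, y_train, X_train)
# X_test_aug  = add_distance_features(X_train, y_train, X_test)
```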

  5. A Robust and Efficient Facial Feature Tracking Algorithm

    Institute of Scientific and Technical Information of China (English)

    黄琛; 丁晓青; 方驰

    2012-01-01

    Facial feature point tracking obtains precise information about facial components beyond the coarse face position and motion trajectory, and plays an important role in computer vision. The active appearance model (AAM) is one of the most effective methods for describing the locations of facial feature points. However, its high-dimensional parameter space and gradient-descent optimization make it sensitive to the initial parameters and prone to local minima, so trackers based on the traditional AAM cannot simultaneously cope well with large pose, illumination and expression changes. Within a multi-view AAM framework, a real-time pose estimation algorithm combining random forests and linear discriminant analysis (LDA) is proposed to estimate and update the head pose during tracking, which effectively handles large pose variations of faces in video. A modified online appearance model (OAM) is proposed to evaluate tracking accuracy, and the AAM texture model is adaptively updated through incremental principal component analysis (PCA) learning, which greatly improves the stability of tracking and the model's ability to cope with illumination and expression changes. Experimental results show that the proposed algorithm performs well in terms of accuracy, robustness and real-time performance for facial feature point tracking in video.

  6. Initial assessment of facial nerve paralysis based on motion analysis using an optical flow method.

    Science.gov (United States)

    Samsudin, Wan Syahirah W; Sundaraj, Kenneth; Ahmad, Amirozi; Salleh, Hasriah

    2016-01-01

    An initial assessment method is proposed that can classify and grade the severity of facial paralysis into one of six levels according to the House-Brackmann (HB) system, based on the motion of facial landmarks analyzed with an Optical Flow (OF) algorithm. The desired landmarks were obtained from video recordings of 5 normal and 3 Bell's Palsy subjects and tracked using the Kanade-Lucas-Tomasi (KLT) method. A new scoring system based on motion analysis using area measurement is proposed. This scoring system uses the individual scores from the facial exercises and grades the paralysis according to the HB system. The proposed method has produced promising results and may play a pivotal role in improved rehabilitation programs for patients.
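
    A minimal OpenCV sketch of the kind of KLT landmark tracking described (the video path, corner-based initialization and the displacement summary are assumptions; a real system would seed the tracker with detected facial landmarks and handle lost points).

```python
import cv2
import numpy as np

cap = cv2.VideoCapture('facial_exercise.mp4')      # hypothetical input video
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# Corner features stand in for manually chosen facial landmarks.
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=60,
                              qualityLevel=0.01, minDistance=7)
trajectories = [pts.reshape(-1, 2)]

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None,
                                              winSize=(15, 15), maxLevel=2)
    pts, prev_gray = nxt, gray
    trajectories.append(nxt.reshape(-1, 2))

# Total per-landmark displacement; asymmetry between face halves could be scored from this.
motion = np.linalg.norm(np.diff(np.stack(trajectories), axis=0), axis=2).sum(axis=0)
print(motion.round(1))
```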

  7. Facial expression recognition in perceptual color space.

    Science.gov (United States)

    Lajevardi, Seyed Mehdi; Wu, Hong Ren

    2012-08-01

    This paper introduces a tensor perceptual color framework (TPCF) for facial expression recognition (FER), which is based on information contained in color facial images. The TPCF enables multi-linear image analysis in different color spaces and demonstrates that color components provide additional information for robust FER. Using this framework, the components (in either RGB, YCbCr, CIELab or CIELuv space) of color images are unfolded to two-dimensional (2-D) tensors based on multi-linear algebra and tensor concepts, from which the features are extracted by Log-Gabor filters. The mutual information quotient (MIQ) method is employed for feature selection. These features are classified using a multi-class linear discriminant analysis (LDA) classifier. The effectiveness of color information for FER is assessed using low-resolution facial expression images with illumination variations. Experimental results demonstrate that color information has significant potential to improve emotion recognition performance due to the complementary characteristics of image textures. Furthermore, the perceptual color spaces (CIELab and CIELuv) are better overall for facial expression recognition than the other color spaces, providing more efficient and robust performance on facial images with illumination variation.
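
    As a rough sketch of extracting texture features in a perceptual color space, the code below converts an RGB face image to CIELAB and pools Gabor responses per channel; it uses plain Gabor filters and mean/standard-deviation pooling in place of the paper's Log-Gabor features, MIQ selection and LDA classifier.

```python
import numpy as np
from skimage.color import rgb2lab
from skimage.filters import gabor

def perceptual_color_features(rgb_face, frequencies=(0.1, 0.2, 0.3)):
    """Pooled Gabor magnitudes for each CIELAB channel and orientation."""
    lab = rgb2lab(rgb_face)
    feats = []
    for ch in range(3):                              # L, a, b channels, one 2-D slice at a time
        channel = lab[..., ch]
        for f in frequencies:
            for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
                real, imag = gabor(channel, frequency=f, theta=theta)
                mag = np.hypot(real, imag)
                feats.extend([mag.mean(), mag.std()])
    return np.asarray(feats)

# X = np.stack([perceptual_color_features(img) for img in face_images])
```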

  8. Multifinger Feature Level Fusion Based Fingerprint Identification

    Directory of Open Access Journals (Sweden)

    Praveen N

    2012-12-01

    Full Text Available Fingerprint-based authentication systems are one of the cost-effective biometric authentication techniques employed for personal identification. As the database population increases, fast identification/recognition algorithms with high accuracy are required. Accuracy can be increased using multimodal evidence collected from multiple biometric traits. In this work, consecutive fingerprint images are taken, global singularities are located using directional field strength, and their local orientation vector is formulated with respect to the baseline of the finger. Feature-level fusion is carried out and a 32-element feature template is obtained. A matching score is formulated for identification, and 100% accuracy was obtained for a database of 300 persons. The polygonal feature vector helps to reduce the size of the feature database from the present 70-100 minutiae features to just 32 features, and a lower matching threshold can be set compared to single-finger-based identification.

  9. SIFT based algorithm for point feature tracking

    Directory of Open Access Journals (Sweden)

    Adrian BURLACU

    2007-12-01

    Full Text Available In this paper, a tracking algorithm for SIFT features in image sequences is developed. For each point feature extracted using the SIFT algorithm, a descriptor is computed using information from its neighborhood. Point features are then tracked throughout the image sequence using an algorithm that minimizes the distance between descriptors. Experimental results, obtained from image sequences that capture the scaling of objects of different geometrical types, reveal the performance of the tracking algorithm.
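
    A minimal OpenCV sketch of descriptor-distance-based SIFT matching between two frames; Lowe's ratio test and the file names are assumptions rather than details taken from the paper.

```python
import cv2

sift = cv2.SIFT_create()
bf = cv2.BFMatcher(cv2.NORM_L2)

def match_sift(img1, img2, ratio=0.75):
    """Match SIFT descriptors between two frames, keeping pairs that pass the ratio test."""
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matches = bf.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]
    return [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in good]

# frame1 = cv2.imread('frame_000.png', cv2.IMREAD_GRAYSCALE)
# frame2 = cv2.imread('frame_001.png', cv2.IMREAD_GRAYSCALE)
# tracks = match_sift(frame1, frame2)        # list of (point in frame1, point in frame2)
```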

  10. Linear feature detection based on ridgelet

    Institute of Scientific and Technical Information of China (English)

    HOU Biao (侯彪); LIU Fang (刘芳); JIAO Licheng (焦李成)

    2003-01-01

    Linear feature detection is very important in image processing. The detection efficiency directly affects the performance of pattern recognition and pattern classification. Based on the idea of the ridgelet, this paper presents a new discrete localized ridgelet transform and a new method for detecting linear features in anisotropic images. Experimental results prove the efficiency of the proposed method.

  11. Rehabilitation of long-standing facial nerve paralysis with percutaneous suture-based slings.

    Science.gov (United States)

    Alam, Daniel

    2007-01-01

    Long-standing facial paralysis creates significant functional and aesthetic problems for patients affected by this deficit. Traditional approaches to correct this problem have involved aggressive open procedures such as unilateral face-lifts and sling procedures using fascia and implantable materials. Unfortunately, our results with these techniques over the last 5 years have been suboptimal. The traditional face-lift techniques did not address the nasolabial fold to our satisfaction, and suture-based techniques alone, while offering excellent short-term results, failed to provide a long-term solution. This led to the development of a novel percutaneous technique combining the minimally invasive approach of suture-based lifts with the long-term efficacy of Gore-Tex-based slings. We report our results with this technique for static facial suspension in patients with long-standing facial nerve paralysis and our surgical outcomes in 13 patients. The procedure offers re-creation of the nasolabial crease and suspension of the oral commissure to its normal anatomic relationships. The recovery time is minimal, and the operation is performed as a short outpatient procedure. Long-term 2-year follow-up has shown effective preservation of the surgical results.

  12. FPGA Based Assembling of Facial Components for Human Face Construction

    CERN Document Server

    Halder, Santanu; Nasipuri, Mita; Basu, Dipak Kumar; Kundu, Mahantapas

    2010-01-01

    This paper aims at the VLSI realization of generating a new face from a textual description. The FASY (FAce SYnthesis) System is a face database retrieval and new face generation system that is under development. One of its main features is the generation of the requested face when it is not found in the existing database. The new face generation system works in three steps - searching phase, assembling phase and tuning phase. In this paper, the tuning phase using a hardware description language and its implementation in a Field Programmable Gate Array (FPGA) device is presented.

  13. Facial Expression Recognition Techniques Based on Bilinear Model

    Institute of Scientific and Technical Information of China (English)

    徐欢

    2014-01-01

    Aiming at the problems in current facial expression recognition, and based on data from the 3D expression database BU-3DFE, we study the point-cloud alignment of 3D facial expression data, establish bilinear models from the aligned data, and improve the recognition algorithms based on the bilinear model to form new recognition and classification algorithms. The goal is to reduce the amount of identity-feature computation in the original algorithm, minimize the influence of identity features on the overall expression recognition process, and improve facial expression recognition results, ultimately achieving highly robust 3D facial expression recognition.

  14. Neural correlates of affective priming effects based on masked facial emotion: an fMRI study.

    Science.gov (United States)

    Suslow, Thomas; Kugel, Harald; Ohrmann, Patricia; Stuhrmann, Anja; Grotegerd, Dominik; Redlich, Ronny; Bauer, Jochen; Dannlowski, Udo

    2013-03-30

    Affective priming refers to the phenomenon that subliminal presentation of facial emotion biases subsequent evaluation of a neutral object in the direction of the prime. The aim of the present study was to specify the neural correlates of evaluative shifts elicited by facial emotion shown below the threshold of conscious perception. We tested the hypotheses whether the amygdala is involved in negative priming, whereas the nucleus accumbens participates in positive priming. In addition, exploratory whole brain correlation analyses were conducted. During 3T fMRI scanning, pictures of sad, happy, and neutral facial expression masked by neutral faces were presented to 110 healthy adults who had to judge valence of masks on a four-point scale. There was evidence for significant negative priming based on sad faces. A correlation was observed between amygdala activation and negative priming. Activation in medial, middle, and superior frontal and middle temporo-occipital areas, and insula was also associated with negative priming. No significant priming based on happy faces was found. However, nucleus accumbens activation to happy faces correlated with the positive priming score. The present findings confirm that the amygdala but also other brain regions, especially the medial frontal cortex, appear involved in automatically elicited negative evaluative shifts.

  15. Changing the facial features of patients with Treacher Collins syndrome: protocol for 3-stage treatment of hard and soft tissue hypoplasia in the upper half of the face.

    Science.gov (United States)

    Mitsukawa, Nobuyuki; Saiga, Atsuomi; Satoh, Kaneshige

    2014-07-01

    Treacher Collins syndrome is a disorder characterized by various congenital soft tissue anomalies involving hypoplasia of the zygoma, maxilla, and mandible. A variety of treatments have been reported to date. These treatments can be classified into 2 major types. The first type involves osteotomy of hard tissue such as the zygoma and mandible. The second type involves plastic surgery using bone grafting in the malar region and soft tissue repair of eyelid deformities. We devised a new treatment to comprehensively correct the hard and soft tissue deformities in the upper half of the face of Treacher Collins patients. The aim was to "change facial features and make it difficult to tell that the patients have this disorder." This innovative treatment strategy consists of 3 stages: (1) placement of a dermal fat graft from the lower eyelid to the malar subcutaneous area, (2) custom-made synthetic zygomatic bone grafting, and (3) Z-plasty flap transposition from the upper to the lower eyelid and superior repositioning and fixation of the lateral canthal tendon using a Mitek anchor system. This method was used on 4 patients with Treacher Collins syndrome who had moderate to severe hypoplasia of the zygomas and the lower eyelids. The facial features of these patients were markedly improved and very good results were obtained. There were no major complications intraoperatively or postoperatively in any of the patients during the series of treatments. In the synthetic bone grafting of the second stage, the implant in some patients was in the way of the infraorbital nerve; the nerve was therefore detached and then sutured under the microscope. Postoperatively, these patients recovered almost fully from the sensory nerve torpor within 5 to 6 months. We devised a 3-stage treatment to "change the facial features" of patients with hypoplasia of the upper half of the face due to Treacher Collins syndrome. The treatment protocol provided a very effective way to treat deformities of the upper half of the face.

  16. Ontology Based Feature Driven Development Life Cycle

    Directory of Open Access Journals (Sweden)

    Farheen Siddiqui

    2012-01-01

    Full Text Available The upcoming technology support for the semantic web promises fresh directions for the Software Engineering community. The semantic web also has its roots in knowledge engineering, which prompts software engineers to look for applications of ontologies throughout the Software Engineering lifecycle. The internal components of a semantic web application are "light weight" and may be held to lower quality standards than the externally visible modules; in fact, the internal components are generated from the external (ontological) components. That is why agile development approaches such as feature driven development are suitable for developing an application's internal components. As yet there is no particular procedure that describes the role of ontology in FDD processes. Therefore, we propose an ontology-based feature driven development for semantic web applications that can be used from application model development to feature design and implementation. Features are precisely defined in the OWL-based domain model. The transition from the OWL-based domain model to the feature list is defined directly by transformation rules. On the other hand, the ontology-based overall model can easily be validated by automated tools. Advantages of ontology-based feature driven development are also discussed.

  17. Feature-Based Classification of Networks

    CERN Document Server

    Barnett, Ian; Kuijjer, Marieke L; Mucha, Peter J; Onnela, Jukka-Pekka

    2016-01-01

    Network representations of systems from various scientific and societal domains are neither completely random nor fully regular, but instead appear to contain recurring structural building blocks. These features tend to be shared by networks belonging to the same broad class, such as the class of social networks or the class of biological networks. At a finer scale of classification within each such class, networks describing more similar systems tend to have more similar features. This occurs presumably because networks representing similar purposes or constructions would be expected to be generated by a shared set of domain specific mechanisms, and it should therefore be possible to classify these networks into categories based on their features at various structural levels. Here we describe and demonstrate a new, hybrid approach that combines manual selection of features of potential interest with existing automated classification methods. In particular, selecting well-known and well-studied features that ...

  18. A 3D facial expression animation algorithm based on MPEG-4

    Institute of Scientific and Technical Information of China (English)

    於俊; 汪增福

    2011-01-01

    For model-based face video coding and decoding, a 3D facial expression animation algorithm based on MPEG-4 is proposed. At the encoder, for the first frame of the video, the face is detected and its feature points are located using an Adaboost + Camshift + AAM (active appearance model) pipeline, and a simple generic face mesh is adapted to the face to obtain the FDPs (facial definition parameters). The decoder first uses these FDPs to adapt a detailed generic face mesh and then generates facial expression animation by combining a muscle model with a parameterized model; a scheme for partitioning the functional areas of the face is also proposed. Experiments show that, driven by an FAP (facial animation parameter) stream, the algorithm can generate realistic 3D facial expression animation.

  19. Robust feature-based object tracking

    Science.gov (United States)

    Han, Bing; Roberts, William; Wu, Dapeng; Li, Jian

    2007-04-01

    Object tracking is an important component of many computer vision systems. It is widely used in video surveillance, robotics, 3D image reconstruction, medical imaging, and human computer interface. In this paper, we focus on unsupervised object tracking, i.e., without prior knowledge about the object to be tracked. To address this problem, we take a feature-based approach, i.e., using feature points (or landmark points) to represent objects. Feature-based object tracking consists of feature extraction and feature correspondence. Feature correspondence is particularly challenging since a feature point in one image may have many similar points in another image, resulting in ambiguity in feature correspondence. To resolve the ambiguity, algorithms, which use exhaustive search and correlation over a large neighborhood, have been proposed. However, these algorithms incur high computational complexity, which is not suitable for real-time tracking. In contrast, Tomasi and Kanade's tracking algorithm only searches corresponding points in a small candidate set, which significantly reduces computational complexity; but the algorithm may lose track of feature points in a long image sequence. To mitigate the limitations of the aforementioned algorithms, this paper proposes an efficient and robust feature-based tracking algorithm. The key idea of our algorithm is as below. For a given target feature point in one frame, we first find a corresponding point in the next frame, which minimizes the sum-of-squared-difference (SSD) between the two points; then we test whether the corresponding point gives large value under the so-called Harris criterion. If not, we further identify a candidate set of feature points in a small neighborhood of the target point; then find a corresponding point from the candidate set, which minimizes the SSD between the two points. The algorithm may output no corresponding point due to disappearance of the target point. Our algorithm is capable of tracking
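
    A simplified sketch of the core step described above -- SSD matching within a small search window followed by a Harris-response check -- written with NumPy/OpenCV; the window sizes, relative threshold and loss handling are assumptions, and the candidate-set refinement is omitted.

```python
import numpy as np
import cv2

def ssd_match(prev_gray, next_gray, pt, patch=7, search=10, rel_thresh=0.01):
    """Locate pt from prev_gray in next_gray by minimizing SSD over a search
    window, then accept the match only if its Harris response is large enough."""
    r = patch // 2
    x, y = int(pt[0]), int(pt[1])
    template = prev_gray[y - r:y + r + 1, x - r:x + r + 1].astype(np.float32)
    best, best_ssd = None, np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = next_gray[y + dy - r:y + dy + r + 1,
                             x + dx - r:x + dx + r + 1].astype(np.float32)
            if cand.shape != template.shape:
                continue                      # candidate window falls outside the image
            ssd = float(((cand - template) ** 2).sum())
            if ssd < best_ssd:
                best, best_ssd = (x + dx, y + dy), ssd
    harris = cv2.cornerHarris(next_gray.astype(np.float32), 2, 3, 0.04)
    if best is None or harris[best[1], best[0]] < rel_thresh * harris.max():
        return None                           # target point considered lost
    return best
```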

  20. Texture feature based liver lesion classification

    Science.gov (United States)

    Doron, Yeela; Mayer-Wolf, Nitzan; Diamant, Idit; Greenspan, Hayit

    2014-03-01

    Liver lesion classification is a difficult clinical task. Computerized analysis can support the clinical workflow by enabling more objective and reproducible evaluation. In this paper, we evaluate the contribution of several types of texture features to a computer-aided diagnostic (CAD) system which automatically classifies liver lesions from CT images. Based on the assumption that liver lesions of different classes differ in their texture characteristics, a variety of texture features were examined as lesion descriptors. Although texture features are often used for this task, there is currently a lack of detailed research focusing on the comparison across different texture features, or their combinations, on a given dataset. In this work we investigated the performance of Gray Level Co-occurrence Matrix (GLCM), Local Binary Patterns (LBP), Gabor, gray level intensity values and Gabor-based LBP (GLBP) features, where the features are obtained from a given lesion's region of interest (ROI). For the classification module, SVM and KNN classifiers were examined. Using a single type of texture feature, the best result, 91% accuracy, was obtained with Gabor filtering and SVM classification. Combining Gabor, LBP and intensity features improved the results to a final accuracy of 97%.
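
    A small sketch of GLCM texture features feeding an SVM, assuming scikit-image (graycomatrix/graycoprops) and scikit-learn; the offsets, properties and classifier settings are illustrative, not the paper's configuration.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(roi_uint8):
    """GLCM statistics of a lesion ROI (uint8) at several offsets and orientations."""
    glcm = graycomatrix(roi_uint8, distances=[1, 2],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    props = ('contrast', 'homogeneity', 'energy', 'correlation')
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

# X = np.stack([glcm_features(roi) for roi in lesion_rois])   # lesion_rois: list of uint8 ROIs
# clf = SVC(kernel='rbf', C=10).fit(X, lesion_labels)
```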

  1. Facial Action Unit Recognition under Incomplete Data Based on Multi-label Learning with Missing Labels

    KAUST Repository

    Li, Yongqiang

    2016-07-07

    Facial action unit (AU) recognition has been applied in a wide range of fields and has attracted great attention in the past two decades. Most existing works on AU recognition assume that the complete label assignment for each training image is available, which is often not the case in practice. Labeling AUs is an expensive and time-consuming process. Moreover, due to AU ambiguity and subjective differences, some AUs are difficult to label reliably and confidently. Many AU recognition works try to train a classifier for each AU independently, which is computationally costly and ignores the dependency among different AUs. In this work, we formulate AU recognition under incomplete data as a multi-label learning with missing labels (MLML) problem. Most existing MLML methods employ the same features for all classes. However, we find this setting unreasonable for AU recognition, as the occurrence of different AUs produces changes in skin surface displacement or face appearance in different face regions. If shared features are used for all AUs, much noise is introduced by the occurrence of other AUs; consequently, the changes associated with specific AUs cannot be clearly highlighted, leading to performance degradation. Instead, we propose to extract the most discriminative features for each AU individually, learned by a supervised learning method. The learned features are further embedded into the instance-level label smoothness term of our model, which also includes label consistency and a class-level label smoothness term. Both a global solution using st-cut and an approximate solution using conjugate gradient (CG) descent are provided. Experiments on both posed and spontaneous facial expression databases demonstrate the superiority of the proposed method in comparison with several state-of-the-art works.

  2. Drawing Style Recognition of Facial Sketch Based on Multiple Kernel Learning

    Institute of Scientific and Technical Information of China (English)

    张铭津; 李洁; 王楠楠

    2015-01-01

    The drawing style recognition of facial sketches is widely used for painting authentication and criminal investigation. A drawing style recognition algorithm for facial sketches based on multiple kernel learning is presented. First, following the way art critics identify the drawing style from the treatment of facial components, five parts (the face, left eye, right eye, nose and mouth) are extracted from the facial sketch. Then, based on artists' differing understandings of light and shadow on a face and their various pencil techniques, a gray histogram feature, a gray moment feature, the speeded-up robust feature and a multi-scale local binary pattern feature are extracted from each part. Finally, the different parts and features are integrated and the drawing styles of facial sketches are classified by multiple kernel learning. Experimental results demonstrate that the proposed algorithm performs well and obtains high recognition rates.

  3. Vascularization of the facial bones by facial artery: implications for full face allotransplantation

    OpenAIRE

    Rampazzo, Antonio

    2014-01-01

    Background-The maxillary artery is recognized as the main vascular supply of the facial bones; nonetheless clinical evidence supports a co-dominant role for the facial artery. This study explores the extent of the facial skeleton within a facial allograft that can be harvested based on the facial artery. Methods-Twenty-three cadaver heads were used in this study. In 12 heads, the right facial, superficial temporal and maxillary arteries were injected. In 1 head, facial artery angiography w...

  4. Facial Expression Recognition Based on RGB-D

    Institute of Scientific and Technical Information of China (English)

    吴会霞; 陶青川; 龚雪友

    2016-01-01

    To address the low recognition accuracy of two-dimensional facial expression recognition under complex or poor illumination, a facial expression recognition algorithm based on RGB-D fusion of multiple classifiers is proposed. The algorithm first extracts LPQ, Gabor, LBP and HOG features from the color information (Y, Cr, Q) and the depth information (D) of the image, and applies linear dimensionality reduction (PCA) and feature-space transformation (LDA) to the extracted high-dimensional features. Weak classifiers for each expression are then obtained with a nearest-neighbor classifier and combined into a strong classifier through AdaBoost weighting; finally, the multiple classifiers are fused with a Bayes scheme and the average recognition rate is reported. On the CurtinFaces and KinectFaceDB facial expression databases, which contain complex illumination variations, the algorithm achieves an average recognition rate of up to 98.80%. The results show that, compared with expression recognition on color images alone, fusing depth information improves the facial expression recognition rate more noticeably and has practical application value.

  5. Face recognition via edge-based Gabor feature representation for plastic surgery-altered images

    Science.gov (United States)

    Chude-Olisah, Chollette C.; Sulong, Ghazali; Chude-Okonkwo, Uche A. K.; Hashim, Siti Z. M.

    2014-12-01

    Plastic surgery procedures on the face introduce skin texture variations between images of the same person (intra-subject), thereby making the task of face recognition more difficult than in normal scenario. Usually, in contemporary face recognition systems, the original gray-level face image is used as input to the Gabor descriptor, which translates to encoding some texture properties of the face image. The texture-encoding process significantly degrades the performance of such systems in the case of plastic surgery due to the presence of surgically induced intra-subject variations. Based on the proposition that the shape of significant facial components such as eyes, nose, eyebrow, and mouth remains unchanged after plastic surgery, this paper employs an edge-based Gabor feature representation approach for the recognition of surgically altered face images. We use the edge information, which is dependent on the shapes of the significant facial components, to address the plastic surgery-induced texture variation problems. To ensure that the significant facial components represent useful edge information with little or no false edges, a simple illumination normalization technique is proposed for preprocessing. Gabor wavelet is applied to the edge image to accentuate on the uniqueness of the significant facial components for discriminating among different subjects. The performance of the proposed method is evaluated on the Georgia Tech (GT) and the Labeled Faces in the Wild (LFW) databases with illumination and expression problems, and the plastic surgery database with texture changes. Results show that the proposed edge-based Gabor feature representation approach is robust against plastic surgery-induced face variations amidst expression and illumination problems and outperforms the existing plastic surgery face recognition methods reported in the literature.
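
A minimal sketch of the edge-based Gabor idea, assuming scikit-image is available: an edge map is computed first and then filtered with a small Gabor bank, so the descriptor reflects the shapes of facial components rather than skin texture. The example image, filter frequencies and pooling grid are illustrative choices, not the parameters of the cited method.

```python
# Sketch of an edge-based Gabor representation: compute an edge map, then
# filter it with a small Gabor bank so the descriptor emphasizes the shapes
# of facial components rather than skin texture. Parameters are illustrative.
import numpy as np
from skimage import data, color, feature, filters

face = color.rgb2gray(data.astronaut())        # stand-in face image
edges = feature.canny(face, sigma=2.0).astype(float)

responses = []
for theta in np.arange(0, np.pi, np.pi / 4):   # 4 orientations
    for frequency in (0.1, 0.2):               # 2 spatial frequencies
        real, imag = filters.gabor(edges, frequency=frequency, theta=theta)
        responses.append(np.hypot(real, imag))

def grid_pool(resp, grid=(8, 8)):
    """Pool a response map into a coarse grid of mean magnitudes."""
    h, w = resp.shape
    gh, gw = h // grid[0], w // grid[1]
    return np.array([resp[i*gh:(i+1)*gh, j*gw:(j+1)*gw].mean()
                     for i in range(grid[0]) for j in range(grid[1])])

feature_vector = np.concatenate([grid_pool(r) for r in responses])
```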

  6. A family with a complex clinical presentation characterized by arrhythmogenic right ventricular dysplasia/cardiomyopathy and features of branchio-oculo-facial syndrome.

    Science.gov (United States)

    Murray, Brittney; Wagle, Rohan; Amat-Alarcon, Nuria; Wilkens, Alisha; Stephens, Paul; Zackai, Elaine H; Goldmuntz, Elizabeth; Calkins, Hugh; Deardorff, Matthew A; Judge, Daniel P

    2013-02-01

    Arrhythmogenic right ventricular dysplasia/cardiomyopathy (ARVD/C) is a familial form of cardiomyopathy typically caused by mutations in genes that encode an element of the cardiac desmosome. Branchio-oculo-facial syndrome (BOFS) is a craniofacial disorder caused by TFAP2A mutations. In a family segregating ARVD/C, some members also had features of BOFS. Genetic testing for ARVD/C identified a mutation in PKP2, encoding plakophilin-2, a component of the cardiac desmosome. Evaluation of dysmorphology by chromosome microarray (CMA) identified a 4.4 Mb deletion at chromosome 6p24 that included both TFAP2A and DSP, encoding desmoplakin, an additional component of the cardiac desmosome implicated in ARVD/C. A family member with both the 6p24 deletion and PKP2 mutation had more severe cardiac dysfunction. These findings suggest that this contiguous gene deletion contributes to both ARVD/C and BOFS, and that DSP haploinsufficiency may contribute to cardiomyopathy. This family provides a clinical example that underscores the need for careful evaluation in clinical scenarios where genetic heterogeneity is known to exist. Finally, it suggests that individuals with unexplained cardiomyopathy and dysmorphic facial features may benefit from CMA analysis.

  7. Facial emotion recognition system for autistic children: a feasible study based on FPGA implementation.

    Science.gov (United States)

    Smitha, K G; Vinod, A P

    2015-11-01

    Children with autism spectrum disorder have difficulty in understanding the emotional and mental states from the facial expressions of the people they interact with. The inability to understand other people's emotions will hinder their interpersonal communication. Though many facial emotion recognition algorithms have been proposed in the literature, they are mainly intended for processing by a personal computer, which limits their usability in on-the-move applications where portability is desired. The portability of the system will ensure ease of use and real-time emotion recognition, which will aid immediate feedback while communicating with caretakers. Principal component analysis (PCA) has been identified as the least complex feature extraction algorithm to be implemented in hardware. In this paper, we present a detailed study of serial and parallel implementations of PCA in order to identify the most feasible method for realization of a portable emotion detector for autistic children. The proposed emotion recognizer architectures are implemented on a Virtex 7 XC7VX330T FFG1761-3 FPGA. We achieved 82.3% detection accuracy for a word length of 8 bits.

  8. Statistical feature extraction based iris recognition system

    Indian Academy of Sciences (India)

    ATUL BANSAL; RAVINDER AGARWAL; R K SHARMA

    2016-05-01

    Iris recognition systems have been proposed by numerous researchers using different feature extraction techniques for accurate and reliable biometric authentication. In this paper, a statistical feature extraction technique based on correlation between adjacent pixels has been proposed and implemented. A Hamming distance based metric has been used for matching. Performance of the proposed iris recognition system (IRS) has been measured by recording false acceptance rate (FAR) and false rejection rate (FRR) at different thresholds in the distance metric. System performance has been evaluated by computing statistical features along two directions, namely, the radial direction of the circular iris region and the angular direction extending from pupil to sclera. Experiments have also been conducted to study the effect of the number of statistical parameters on FAR and FRR. Results obtained from the experiments based on different sets of statistical features of iris images show that there is a significant improvement in equal error rate (EER) when the number of statistical parameters for feature extraction is increased from three to six. Further, it has also been found that increasing radial/angular resolution, with normalization in place, improves the EER for the proposed iris recognition system.
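
The sketch below illustrates one plausible reading of the correlation-based features, assuming a pre-normalized iris strip: adjacent-pixel correlations are computed per block along the angular and radial directions, binarized into a code, and compared with a Hamming distance. The block size, binarization rule and decision threshold are assumptions, not the paper's exact scheme.

```python
# Rough sketch of correlation-based iris features on a normalized iris strip
# (rows ~ radial direction, columns ~ angular direction), binarized into a
# code and compared with a Hamming distance. Block layout and threshold are
# assumed values, not those of the cited paper.
import numpy as np

def adjacent_correlation(block, axis):
    """Pearson correlation between each pixel and its neighbour along `axis`."""
    a = block if axis == 1 else block.T
    x, y = a[:, :-1].ravel(), a[:, 1:].ravel()
    return np.corrcoef(x, y)[0, 1]

def iris_code(strip, block=(8, 32)):
    h, w = strip.shape
    feats = []
    for i in range(0, h - block[0] + 1, block[0]):
        for j in range(0, w - block[1] + 1, block[1]):
            b = strip[i:i + block[0], j:j + block[1]]
            feats.append(adjacent_correlation(b, axis=1))   # angular direction
            feats.append(adjacent_correlation(b, axis=0))   # radial direction
    feats = np.array(feats)
    return (feats > np.median(feats)).astype(np.uint8)      # binarize

def hamming_distance(code_a, code_b):
    return np.mean(code_a != code_b)

rng = np.random.default_rng(1)
probe, gallery = rng.random((64, 256)), rng.random((64, 256))
match = hamming_distance(iris_code(probe), iris_code(gallery)) < 0.35  # assumed threshold
```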

  9. Heartbeat Rate Measurement from Facial Video

    DEFF Research Database (Denmark)

    Haque, Mohammad Ahsanul; Irani, Ramin; Nasrollahi, Kamal

    2016-01-01

    Heartbeat Rate (HR) reveals a person’s health condition. This paper presents an effective system for measuring HR from facial videos acquired in a more realistic environment than the testing environment of current systems. The proposed method utilizes a facial feature point tracking method by combining a ‘Good feature to track’ and a ‘Supervised descent method’ in order to overcome the limitations of currently available facial video based HR measuring systems. Such limitations include, e.g., unrealistic restriction of the subject’s movement and artificial lighting during data capture. A face ... in realistic scenarios. Experimental results show that the proposed system outperforms existing video based systems for HR measurement.

  10. Facial Reconstruction and Rehabilitation.

    Science.gov (United States)

    Guntinas-Lichius, Orlando; Genther, Dane J; Byrne, Patrick J

    2016-01-01

    Extracranial infiltration of the facial nerve by salivary gland tumors is the most frequent cause of facial palsy secondary to malignancy. Nevertheless, facial palsy related to salivary gland cancer is uncommon. Therefore, reconstructive facial reanimation surgery is not a routine undertaking for most head and neck surgeons. The primary aims of facial reanimation are to restore tone, symmetry, and movement to the paralyzed face. Such restoration should improve the patient's objective motor function and subjective quality of life. The surgical procedures for facial reanimation rely heavily on long-established techniques, but many advances and improvements have been made in recent years. In the past, published experiences on strategies for optimizing functional outcomes in facial paralysis patients were primarily based on small case series and described a wide variety of surgical techniques. However, in the recent years, larger series have been published from high-volume centers with significant and specialized experience in surgical and nonsurgical reanimation of the paralyzed face that have informed modern treatment. This chapter reviews the most important diagnostic methods used for the evaluation of facial paralysis to optimize the planning of each individual's treatment and discusses surgical and nonsurgical techniques for facial rehabilitation based on the contemporary literature.

  11. Comparative analysis of the anterior and posterior length and deflection angle of the cranial base, in individuals with facial Pattern I, II and III

    Directory of Open Access Journals (Sweden)

    Guilherme Thiesen

    2013-02-01

    Full Text Available OBJECTIVE: This study evaluated the variations in the anterior cranial base (S-N), posterior cranial base (S-Ba) and deflection of the cranial base (SNBa) among three different facial patterns (Pattern I, II and III). METHOD: A sample of 60 lateral cephalometric radiographs of Brazilian Caucasian patients, both genders, between 8 and 17 years of age was selected. The sample was divided into 3 groups (Pattern I, II and III) of 20 individuals each. The inclusion criteria for each group were the ANB angle, Wits appraisal and the facial profile angle (G'.Sn.Pg'). To compare the mean values (SNBa, S-N, S-Ba) obtained from each group, the ANOVA test and Scheffé's post-hoc test were applied. RESULTS AND CONCLUSIONS: There was no statistically significant difference in the deflection angle of the cranial base among the different facial patterns (Patterns I, II and III). There was no significant difference in the measures of the anterior and posterior cranial base between facial Patterns I and II. The mean values for S-Ba were lower in facial Pattern III, with a statistically significant difference. The mean values of S-N in facial Pattern III were also reduced, but without a statistically significant difference. This trend toward lower values in the cranial base measurements would explain the maxillary deficiency and/or mandibular prognathism features that characterize facial Pattern III.

  12. Facial Video based Detection of Physical Fatigue for Maximal Muscle Activity

    DEFF Research Database (Denmark)

    Haque, Mohammad Ahsanul; Irani, Ramin; Nasrollahi, Kamal;

    2016-01-01

    Physical fatigue reveals the health condition of a person at, for example, health checkup, fitness assessment or rehabilitation training. This paper presents an efficient noncontact system for detecting non-localized physical fatigue from maximal muscle activity using facial videos acquired ... the challenges originating from realistic scenarios. A face quality assessment system was also incorporated in the proposed system to reduce erroneous results by discarding low quality faces that occurred in a video sequence due to problems in realistic lighting, head motion and pose variation. Experimental results show that the proposed system outperforms existing video based systems for physical fatigue detection.

  13. Feature subset selection based on relevance

    Science.gov (United States)

    Wang, Hui; Bell, David; Murtagh, Fionn

    In this paper an axiomatic characterisation of feature subset selection is presented. Two axioms are presented: sufficiency axiom—preservation of learning information, and necessity axiom—minimising encoding length. The sufficiency axiom concerns the existing dataset and is derived based on the following understanding: any selected feature subset should be able to describe the training dataset without losing information, i.e. it is consistent with the training dataset. The necessity axiom concerns the predictability and is derived from Occam's razor, which states that the simplest among different alternatives is preferred for prediction. The two axioms are then restated in terms of relevance in a concise form: maximising both the r(X; Y) and r(Y; X) relevance. Based on the relevance characterisation, four feature subset selection algorithms are presented and analysed: one is exhaustive and the remaining three are heuristic. Experimentation is also presented and the results are encouraging. Comparison is also made with some well-known feature subset selection algorithms, in particular, with the built-in feature selection mechanism in C4.5.
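
As a small, hedged illustration of relevance-driven selection, the snippet below ranks features by mutual information with the class label using scikit-learn and keeps the top-k. It only covers the "maximise relevance" side of the paper's two-axiom characterisation, on toy data.

```python
# Small sketch of relevance-driven feature subset selection: rank features by
# mutual information with the class label and keep the top-k. The paper's
# axiomatic r(X; Y) / r(Y; X) characterisation is richer; this only shows the
# "maximise relevance" idea on a toy dataset.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.feature_selection import mutual_info_classif

X, y = load_iris(return_X_y=True)
relevance = mutual_info_classif(X, y, random_state=0)
top_k = np.argsort(relevance)[::-1][:2]        # keep the 2 most relevant features
X_selected = X[:, top_k]
print(dict(zip(top_k.tolist(), relevance[top_k])))
```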

  14. Facial Expression Recognition Using SVM Classifier

    OpenAIRE

    2015-01-01

    Facial feature tracking and facial actions recognition from image sequence attracted great attention in computer vision field. Computational facial expression analysis is a challenging research topic in computer vision. It is required by many applications such as human-computer interaction, computer graphic animation and automatic facial expression recognition. In recent years, plenty of computer vision techniques have been developed to track or recognize the facial activities in three levels...

  15. Quantitative Evaluation of the Efficiency of Facial Bio-potential Signals Based on Forehead Three-Channel Electrode Placement for Facial Gesture Recognition Applicable in a Human-Machine Interface

    Directory of Open Access Journals (Sweden)

    Iman Mohammad Rezazadeh

    2010-06-01

    Full Text Available Introduction: Today, facial bio-potential signals are employed in many human-machine interface applications for enhancing and empowering the rehabilitation process. The main point to achieve that goal is to record appropriate bioelectric signals from the human face by placing and configuring electrodes over it in the right way. In this paper, a heuristic geometrical position and configuration of the electrodes has been proposed for improving the quality of the acquired signals and consequently enhancing the performance of the facial gesture classifier. Materials and Methods: Investigation and evaluation of the electrodes' proper geometrical position and configuration can be performed using two methods: clinical and modeling. In the clinical method, the electrodes are placed in predefined positions and the elicited signals from them are then processed. The performance of the method is evaluated based on the results obtained. On the other hand, in the modeling approach, the quality of the recorded signals and their information content are evaluated only by modeling and simulation. In this paper, both methods have been utilized together. First, suitable electrode positions and configuration were proposed and evaluated by modeling and simulation. Then, the experiment was performed with a predefined protocol on 7 healthy subjects to validate the simulation results. Here, the recorded signals were passed through parallel Butterworth filter banks to obtain facial EMG, EOG and EEG signals, and the RMS features of each 256 msec time slot were extracted. By using the power of Subtractive Fuzzy C-Mean (SFCM), 8 different facial gestures (including smiling, frowning, pulling up left and right lip corners, and left/right/up and down movements of the eyes) were discriminated. Results: According to the three-channel electrode configuration derived from modeling of the dipoles effects on the surface electrodes and by employing the SFCM classifier, an average 94
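
A minimal sketch of the signal-conditioning step, assuming SciPy and a 1 kHz sampling rate: one forehead channel is split into EEG/EOG/EMG bands with Butterworth filters and an RMS feature is computed for every 256 ms slot. The band edges are typical textbook values, not those reported in the study.

```python
# Sketch of the conditioning step: band-split one forehead channel with
# Butterworth filters (band edges are typical assumed values) and compute an
# RMS feature for every 256 ms window.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 1000                                          # assumed sampling rate, Hz
BANDS = {"EEG": (1, 45), "EOG": (0.5, 10), "EMG": (45, 450)}

def bandpass(x, lo, hi, fs=FS, order=4):
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def rms_windows(x, fs=FS, win_ms=256):
    n = int(fs * win_ms / 1000)
    trimmed = x[: len(x) // n * n].reshape(-1, n)
    return np.sqrt(np.mean(trimmed ** 2, axis=1))

rng = np.random.default_rng(2)
channel = rng.normal(size=5 * FS)                  # 5 s of synthetic signal
features = np.column_stack([rms_windows(bandpass(channel, lo, hi))
                            for lo, hi in BANDS.values()])
# `features` has one row per 256 ms slot and one column per band (EEG, EOG, EMG).
```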

  16. Multi Feature Content Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    Rajshree S. Dubey,

    2010-09-01

    Full Text Available There are a number of prevailing methods for image mining. This paper includes the features of four techniques, i.e., color histogram, color moment, texture, and Edge Histogram Descriptor. The nature of an image is basically based on human perception of the image, while machine interpretation of the image is based on its contours and surfaces. The study of image mining is a very challenging task because it involves pattern recognition, which is a very important tool for machine vision systems. A combination of four feature extraction methods is used, namely color histogram, color moment, texture, and Edge Histogram Descriptor, with a provision to add new features in the future for better retrieval efficiency. In this paper the four techniques are combined: the Euclidean distances for every feature are calculated, added, and averaged. The user interface is provided by Matlab. The image properties analyzed in this work are obtained using computer vision and image processing algorithms. For color, the histogram of the image is computed; for texture, co-occurrence matrix based entropy, energy, etc., are calculated; and for edge density, the Edge Histogram Descriptor (EHD) is found. For retrieval of images, the averages of the four techniques are taken and the resulting images are retrieved.
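
The sketch below illustrates the distance-averaging retrieval step on simplified stand-in descriptors (a colour histogram and colour moments only): one Euclidean distance is computed per feature type, the distances are averaged, and the database is ranked by the average. The texture and edge-histogram descriptors of the paper are omitted for brevity.

```python
# Sketch of the retrieval step: compute one Euclidean distance per feature
# type, average them, and rank the database by the averaged distance. The
# extractors are simplified stand-ins for the four descriptors in the paper.
import numpy as np

def color_histogram(img, bins=8):
    hist, _ = np.histogramdd(img.reshape(-1, 3), bins=bins, range=[(0, 256)] * 3)
    return hist.ravel() / hist.sum()

def color_moments(img):
    flat = img.reshape(-1, 3).astype(float)
    return np.concatenate([flat.mean(0), flat.std(0)])

def extract(img):
    # One vector per feature type; texture and edge descriptors omitted here.
    return [color_histogram(img), color_moments(img)]

def averaged_distance(query_feats, db_feats):
    return np.mean([np.linalg.norm(q - d) for q, d in zip(query_feats, db_feats)])

rng = np.random.default_rng(3)
database = [rng.integers(0, 256, size=(64, 64, 3)) for _ in range(10)]
query = rng.integers(0, 256, size=(64, 64, 3))

q_feats = extract(query)
ranking = np.argsort([averaged_distance(q_feats, extract(img)) for img in database])
```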

  17. Persistent facial pain conditions

    DEFF Research Database (Denmark)

    Forssell, Heli; Alstergren, Per; Bakke, Merete

    2016-01-01

    Persistent facial pains, especially temporomandibular disorders (TMD), are common conditions. As dentists are responsible for the treatment of most of these disorders, up-to-date knowledge on the latest advances in the field is essential for successful diagnosis and management. The review covers TMD, and different neuropathic or putative neuropathic facial pains such as persistent idiopathic facial pain and atypical odontalgia, trigeminal neuralgia and painful posttraumatic trigeminal neuropathy. The article presents an overview of TMD pain as a biopsychosocial condition, its prevalence, clinical features, consequences, central and peripheral mechanisms, diagnostic criteria (DC/TMD), and principles of management. For each of the neuropathic facial pain entities, the definitions, prevalence, clinical features, and diagnostics are described. The current understanding of the pathophysiology ...

  18. An Age Estimation Method Based on Facial Images

    Institute of Scientific and Technical Information of China (English)

    罗佳佳; 蔡超

    2012-01-01

    Research on age estimation has a significant impact on human-computer interaction. In this paper, an age estimation method based on facial images is proposed. The method establishes a face anthropometry template based on craniofacial growth pattern theory to obtain facial geometric proportion features, extracts texture features of local facial areas using a fractional differential approach, and combines these two kinds of features into a personal age feature vector. Machine learning methods such as clustering are used to train the age feature vectors and obtain an age-feature mapping matrix, which then votes for the estimated age of an input facial image. Experimental results show that the age estimation model built on these two kinds of features achieves good results: the estimation error is small and the classification accuracy is close to human judgment.

  19. Hirschsprung disease, microcephaly, mental retardation, and characteristic facial features: delineation of a new syndrome and identification of a locus at chromosome 2q22-q23.

    Science.gov (United States)

    Mowat, D R; Croaker, G D; Cass, D T; Kerr, B A; Chaitow, J; Adès, L C; Chia, N L; Wilson, M J

    1998-08-01

    We have identified six children with a distinctive facial phenotype in association with mental retardation (MR), microcephaly, and short stature, four of whom presented with Hirschsprung (HSCR) disease in the neonatal period. HSCR was diagnosed in a further child at the age of 3 years after investigation for severe chronic constipation and another child, identified as sharing the same facial phenotype, had chronic constipation, but did not have HSCR. One of our patients has an interstitial deletion of chromosome 2, del(2)(q21q23). These children strongly resemble the patient reported by Lurie et al with HSCR and dysmorphic features associated with del(2)(q22q23). All patients have been isolated cases, suggesting a contiguous gene syndrome or a dominant single gene disorder involving a locus for HSCR located at 2q22-q23. Review of published reports suggests that there is significant phenotypic and genetic heterogeneity within the group of patients with HSCR, MR, and microcephaly. In particular, our patients appear to have a separate disorder from Goldberg-Shprintzen syndrome, for which autosomal recessive inheritance has been proposed because of sib recurrence and consanguinity in some families.

  20. Partial fingerprint matching based on SIFT Features

    Directory of Open Access Journals (Sweden)

    Ms. S.Malathi,

    2010-07-01

    Full Text Available Fingerprints are being extensively used for person identification in a number of commercial, civil, and forensic applications. While current fingerprint matching technology is quite mature for matching full prints, matching partial fingerprints still needs considerable improvement. Most current fingerprint identification systems utilize features that are based on minutiae points and ridge patterns. The major challenges faced in partial fingerprint matching are the absence of sufficient minutiae features and of other structures such as core and delta. As a result, this technology suffers from the problem of handling incomplete prints and often discards any partial fingerprints obtained. Recent research has begun to delve into the problems of latent or partial fingerprints. In this paper we present a novel approach for partial fingerprint matching based on SIFT (Scale Invariant Feature Transform) features, with matching achieved using a modified point matching process. Using the Neurotechnology database, we demonstrate that the proposed method exhibits improved performance when matching a full print against a partial print.
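
As a hedged sketch of SIFT-based matching, assuming OpenCV 4.4 or later (where SIFT is available in the main module), the snippet below extracts SIFT keypoints from a partial and a full print and counts the matches surviving Lowe's ratio test. The paper's modified point-matching process is not reproduced; this is plain descriptor matching, and the file names in the usage comment are hypothetical.

```python
# Sketch of SIFT-based matching between a partial print and a full print with
# OpenCV. Ordinary descriptor matching with Lowe's ratio test stands in for
# the paper's modified point-matching step.
import cv2

def sift_match_score(partial_img, full_img, ratio=0.75):
    sift = cv2.SIFT_create()
    _, des1 = sift.detectAndCompute(partial_img, None)
    _, des2 = sift.detectAndCompute(full_img, None)
    if des1 is None or des2 is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(des1, des2, k=2)
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good)        # more surviving matches -> more likely the same finger

# Usage with hypothetical grayscale fingerprint images on disk:
# partial = cv2.imread("partial_print.png", cv2.IMREAD_GRAYSCALE)
# full = cv2.imread("full_print.png", cv2.IMREAD_GRAYSCALE)
# score = sift_match_score(partial, full)
```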

  1. Facial melanoses: Indian perspective

    Directory of Open Access Journals (Sweden)

    Neena Khanna

    2011-01-01

    Full Text Available Facial melanoses (FM) are a common presentation in Indian patients, causing cosmetic disfigurement with considerable psychological impact. Some of the well defined causes of FM include melasma, Riehl's melanosis, lichen planus pigmentosus, erythema dyschromicum perstans (EDP), erythrosis, and poikiloderma of Civatte. But there is considerable overlap in features amongst the clinical entities. Etiology in most of the causes is unknown, but some factors such as UV radiation in melasma, exposure to chemicals in EDP, and exposure to allergens in Riehl's melanosis are implicated. Diagnosis is generally based on clinical features. The treatment of FM includes removal of aggravating factors, vigorous photoprotection, and some form of active pigment reduction either with topical agents or physical modes of treatment. Topical agents include hydroquinone (HQ), which is the most commonly used agent, often in combination with retinoic acid, corticosteroids, azelaic acid, kojic acid, and glycolic acid. Chemical peels are important modalities of physical therapy; other forms include lasers and dermabrasion.

  2. Spatiotemporal dynamics of similarity-based neural representations of facial identity.

    Science.gov (United States)

    Vida, Mark D; Nestor, Adrian; Plaut, David C; Behrmann, Marlene

    2017-01-10

    Humans' remarkable ability to quickly and accurately discriminate among thousands of highly similar complex objects demands rapid and precise neural computations. To elucidate the process by which this is achieved, we used magnetoencephalography to measure spatiotemporal patterns of neural activity with high temporal resolution during visual discrimination among a large and carefully controlled set of faces. We also compared these neural data to lower level "image-based" and higher level "identity-based" model-based representations of our stimuli and to behavioral similarity judgments of our stimuli. Between ∼50 and 400 ms after stimulus onset, face-selective sources in right lateral occipital cortex and right fusiform gyrus and sources in a control region (left V1) yielded successful classification of facial identity. In all regions, early responses were more similar to the image-based representation than to the identity-based representation. In the face-selective regions only, responses were more similar to the identity-based representation at several time points after 200 ms. Behavioral responses were more similar to the identity-based representation than to the image-based representation, and their structure was predicted by responses in the face-selective regions. These results provide a temporally precise description of the transformation from low- to high-level representations of facial identity in human face-selective cortex and demonstrate that face-selective cortical regions represent multiple distinct types of information about face identity at different times over the first 500 ms after stimulus onset. These results have important implications for understanding the rapid emergence of fine-grained, high-level representations of object identity, a computation essential to human visual expertise.

  3. Nablus mask-like facial syndrome

    DEFF Research Database (Denmark)

    Allanson, Judith; Smith, Amanda; Hare, Heather

    2012-01-01

    Nablus mask-like facial syndrome (NMLFS) has many distinctive phenotypic features, particularly tight glistening skin with reduced facial expression, blepharophimosis, telecanthus, bulky nasal tip, abnormal external ear architecture, upswept frontal hairline, and sparse eyebrows. Over the last fe...

  4. Holistic facial expression classification

    Science.gov (United States)

    Ghent, John; McDonald, J.

    2005-06-01

    This paper details a procedure for classifying facial expressions. This is a growing and relatively new type of problem within computer vision. One of the fundamental problems when classifying facial expressions in previous approaches is the lack of a consistent method of measuring expression. This paper solves this problem by the computation of the Facial Expression Shape Model (FESM). This statistical model of facial expression is based on an anatomical analysis of facial expression called the Facial Action Coding System (FACS). We use the term Action Unit (AU) to describe a movement of one or more muscles of the face and all expressions can be described using the AU's described by FACS. The shape model is calculated by marking the face with 122 landmark points. We use Principal Component Analysis (PCA) to analyse how the landmark points move with respect to each other and to lower the dimensionality of the problem. Using the FESM in conjunction with Support Vector Machines (SVM) we classify facial expressions. SVMs are a powerful machine learning technique based on optimisation theory. This project is largely concerned with statistical models, machine learning techniques and psychological tools used in the classification of facial expression. This holistic approach to expression classification provides a means for a level of interaction with a computer that is a significant step forward in human-computer interaction.
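
A minimal sketch of the shape-model idea, assuming scikit-learn and synthetic landmarks: the (x, y) coordinates of 122 landmark points are stacked into one vector per face, reduced with PCA, and classified with an SVM. The number of principal components and the use of an RBF kernel are illustrative choices, not the paper's configuration.

```python
# Sketch of the landmark shape-model idea: stack 122 landmark coordinates into
# one vector per face, reduce dimensionality with PCA, classify with an SVM.
# Landmarks and labels are synthetic; a real pipeline would use annotated or
# automatically tracked points.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

N_LANDMARKS = 122
rng = np.random.default_rng(4)
landmarks = rng.normal(size=(200, N_LANDMARKS, 2))     # 200 annotated faces
labels = rng.integers(0, 6, size=200)                  # six basic expressions

X = landmarks.reshape(len(landmarks), -1)              # (200, 244) shape vectors
model = make_pipeline(StandardScaler(), PCA(n_components=20), SVC(kernel="rbf"))
model.fit(X, labels)
print(model.predict(X[:5]))
```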

  5. FEATURE EXTRACTION FOR EMG BASED PROSTHESES CONTROL

    Directory of Open Access Journals (Sweden)

    R. Aishwarya

    2013-01-01

    Full Text Available The control of a prosthetic limb would be more effective if it were based on surface electromyogram (SEMG) signals from remnant muscles. The analysis of SEMG signals depends on a number of factors, such as amplitude as well as time- and frequency-domain properties. Time series analysis using an autoregressive (AR) model and the mean frequency, which is tolerant to white Gaussian noise, are used as feature extraction techniques. The EMG histogram is used as another feature vector and was seen to give a more distinct classification. The work was done with an SEMG dataset obtained from the NINAPRO DATABASE, a resource for the biorobotics community. Eight classes of hand movements (hand open, hand close, wrist extension, wrist flexion, pointing index, ulnar deviation, thumbs up, and thumb opposite to little finger) are taken into consideration and feature vectors are extracted. The feature vectors can be given to an artificial neural network for further classification in controlling the prosthetic arm, which is not dealt with in this paper.
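
The snippet below sketches the three feature types on a single SEMG window: AR coefficients obtained from the Yule-Walker equations, the mean frequency of the power spectrum, and an amplitude histogram. The model order, bin count and sampling rate are assumptions for illustration, and the window is synthetic rather than NINAPRO data.

```python
# Sketch of the three feature types on one SEMG window: AR coefficients via
# Yule-Walker, mean frequency of the power spectrum, and an amplitude
# histogram. Model order, bin count and sampling rate are assumed values.
import numpy as np

FS = 2000          # assumed sampling rate, Hz

def ar_coefficients(x, order=4):
    """AR coefficients from the Yule-Walker equations (autocorrelation method)."""
    x = x - x.mean()
    r = np.correlate(x, x, mode="full")[len(x) - 1:] / len(x)
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])

def mean_frequency(x, fs=FS):
    spectrum = np.abs(np.fft.rfft(x - x.mean())) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    return np.sum(freqs * spectrum) / np.sum(spectrum)

def emg_histogram(x, bins=9):
    hist, _ = np.histogram(x, bins=bins, range=(x.min(), x.max()))
    return hist / hist.sum()

rng = np.random.default_rng(5)
window = rng.normal(size=400)                 # one 200 ms window of synthetic SEMG
features = np.concatenate([ar_coefficients(window),
                           [mean_frequency(window)],
                           emg_histogram(window)])
```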

  6. Feature-based attention resolves depth ambiguity.

    Science.gov (United States)

    Yu, D; Levinthal, B; Franconeri, S L

    2016-09-07

    Perceiving the world around us requires that we resolve ambiguity. This process is often studied in the lab using ambiguous figures whose structures can be interpreted in multiple ways. One class of figures contains ambiguity in its depth relations, such that either of two surfaces could be seen as being the "front" of an object. Previous research suggests that selectively attending to a given location on such objects can bias the perception of that region as the front. This study asks whether selectively attending to a distributed feature can also bias that region toward the front. Participants viewed a structure-from-motion display of a rotating cylinder that could be perceived as rotating clockwise or counterclockwise (as imagined viewing from the top), depending on whether a set of red or green moving dots were seen as being in the front. A secondary task encouraged observers to globally attend to either red or green. Results from both Experiment 1 and 2 showed that the dots on the cylinder that shared the attended feature, and its corresponding surface, were more likely to be seen as being in the front, as measured by participants' clockwise versus counterclockwise percept reports. Feature-based attention, like location-based attention, is capable of biasing competition among potential interpretations of figures with ambiguous structure in depth.

  7. Vertical dimension: a dynamic concept based on facial form and oropharyngeal function.

    Science.gov (United States)

    Mack, M R

    1991-10-01

    Craniofacial vertical dimension is a more accurate measure of facial proportion than mere measurement of the mid and lower part of the face. Craniomaxillary dimension is skeletally determined, whereas facial height of the lower part of the face is partly dependent on the vertical dimension of occlusion. Alterations in the vertical dimension of occlusion can dramatically affect the esthetics of the soft facial tissue. The "Golden Proportion" quantitatively defines ideal measured relationships and encourages a scientific appreciation of beauty. Faces with deficiencies in lower facial balance (brachyfacial) often exhibit insufficient height of the occlusal plane. The scientific literature has suggested a pliability of skeletal muscle allowing for physiologic variance in vertical facial height. Temporomandibular joint compliance is demonstrated with elevations in resting muscle length. Facial balance and location of the occlusal planes are the primary determinants for establishing an appropriate vertical dimension of occlusion.

  8. Peripheral facial palsy, the only presentation of a primitive neuroectodermal tumor of the skull base

    OpenAIRE

    Kim, Hyung Jin; Kang, Ben; Joo, Eun Young; Kim, Eun Young; Kwon, Young Se

    2015-01-01

    Introduction: Peripheral facial palsy is rarely caused by primary neoplasms, which are mostly constituted of tumors of the central nervous system, head and neck, and leukemia. Presentation of case: A 2-month-old male infant presented with asymmetric facial expression for 3 weeks. Physical examination revealed suspicious findings of right peripheral facial palsy. Computed tomography of the temporal bone revealed a suspicious bone tumor centered in the right petrous bone involving surrounding bon...

  9. Comparative Study of Triangulation based and Feature based Image Morphing

    Directory of Open Access Journals (Sweden)

    Ms. Bhumika G. Bhatt

    2012-01-01

    Full Text Available Image morphing is one of the most powerful digital image processing techniques, used to enhance many multimedia projects, presentations, education and computer based training. It is also used in the medical imaging field to recover features not visible in images by establishing correspondence of features among successive pairs of scanned images. This paper discusses what morphing is and the implementation of the triangulation based morphing technique and feature based image morphing. It analyzes both morphing techniques in terms of different attributes such as computational complexity, visual quality of the morph obtained and the complexity involved in the selection of features.

  10. Augmented reality-based self-facial modeling to promote the emotional expression and social skills of adolescents with autism spectrum disorders.

    Science.gov (United States)

    Chen, Chien-Hsu; Lee, I-Jui; Lin, Ling-Yi

    2014-11-01

    Autism spectrum disorders (ASD) are characterized by a reduced ability to understand the emotions of other people; this ability involves recognizing facial expressions. This study assessed the possibility of enabling three adolescents with ASD to become aware of facial expressions observed in situations in a school setting simulated using augmented reality (AR) technology. The AR system provided three-dimensional (3-D) animations of six basic facial expressions overlaid on participant faces to facilitate practicing emotional judgments and social skills. Based on the multiple baseline design across subjects, the data indicated that AR intervention can improve the appropriate recognition and response to facial emotional expressions seen in the situational task.

  11. Down syndrome detection from facial photographs using machine learning techniques

    Science.gov (United States)

    Zhao, Qian; Rosenbaum, Kenneth; Sze, Raymond; Zand, Dina; Summar, Marshall; Linguraru, Marius George

    2013-02-01

    Down syndrome is the most commonly occurring chromosomal condition; one in every 691 babies in United States is born with it. Patients with Down syndrome have an increased risk for heart defects, respiratory and hearing problems and the early detection of the syndrome is fundamental for managing the disease. Clinically, facial appearance is an important indicator in diagnosing Down syndrome and it paves the way for computer-aided diagnosis based on facial image analysis. In this study, we propose a novel method to detect Down syndrome using photography for computer-assisted image-based facial dysmorphology. Geometric features based on facial anatomical landmarks, local texture features based on the Contourlet transform and local binary pattern are investigated to represent facial characteristics. Then a support vector machine classifier is used to discriminate normal and abnormal cases; accuracy, precision and recall are used to evaluate the method. The comparison among the geometric, local texture and combined features was performed using the leave-one-out validation. Our method achieved 97.92% accuracy with high precision and recall for the combined features; the detection results were higher than using only geometric or texture features. The promising results indicate that our method has the potential for automated assessment for Down syndrome from simple, noninvasive imaging data.

  12. Assessing facial wrinkles: automatic detection and quantification

    Science.gov (United States)

    Cula, Gabriela O.; Bargo, Paulo R.; Kollias, Nikiforos

    2009-02-01

    Nowadays, documenting the face appearance through imaging is prevalent in skin research, therefore detection and quantitative assessment of the degree of facial wrinkling is a useful tool for establishing an objective baseline and for communicating benefits to facial appearance due to cosmetic procedures or product applications. In this work, an algorithm for automatic detection of facial wrinkles is developed, based on estimating the orientation and the frequency of elongated features apparent on faces. By over-filtering the skin texture image with finely tuned oriented Gabor filters, an enhanced skin image is created. The wrinkles are detected by adaptively thresholding the enhanced image, and the degree of wrinkling is estimated based on the magnitude of the filter responses. The algorithm is tested against a clinically scored set of images of periorbital lines of different severity and we find that the proposed computational assessment correlates well with the corresponding clinical scores.
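
A rough sketch of the detection idea, assuming scikit-image: the skin image is filtered with a bank of oriented Gabor filters, the maximum magnitude response per pixel is kept as the enhanced image, the result is thresholded (Otsu is used here as a simple adaptive choice), and wrinkling is scored from the response magnitude on the detected pixels. The frequencies, orientations and scoring rule are illustrative, not the finely tuned values of the cited algorithm.

```python
# Sketch of wrinkle detection: oriented Gabor bank -> per-pixel maximum
# magnitude response -> adaptive threshold -> score from magnitude and
# coverage of detected pixels. Parameters are illustrative assumptions.
import numpy as np
from skimage import data, color, filters

skin = color.rgb2gray(data.astronaut())          # stand-in for a periorbital patch

responses = []
for theta in np.arange(0, np.pi, np.pi / 8):     # 8 orientations
    real, imag = filters.gabor(skin, frequency=0.15, theta=theta)
    responses.append(np.hypot(real, imag))
enhanced = np.max(responses, axis=0)

mask = enhanced > filters.threshold_otsu(enhanced)    # candidate wrinkle pixels
wrinkle_score = enhanced[mask].mean() * mask.mean()   # magnitude times coverage
```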

  13. Facial porokeratosis.

    Science.gov (United States)

    Carranza, Dafnis C; Haley, Jennifer C; Chiu, Melvin

    2008-01-01

    A 34-year-old man from El Salvador was referred to our clinic with a 10-year history of a pruritic erythematous facial eruption. He reported increased pruritus and scaling of lesions when exposed to the sun. He worked as a construction worker and admitted to frequent sun exposure. Physical examination revealed well-circumscribed erythematous to violaceous papules with raised borders and atrophic centers localized to the nose (Figure 1). He did not have lesions on the arms or legs. He did not report a family history of similar lesions. A biopsy specimen was obtained from the edge of a lesion on the right ala. Histologic examination of the biopsy specimen showed acanthosis of the epidermis with focal invagination of the corneal layer and a homogeneous column of parakeratosis in the center of that layer consistent with a cornoid lamella (Figure 2). Furthermore, the granular layer was absent at the cornoid lamella base. The superficial dermis contained a sparse, perivascular lymphocytic infiltrate. No evidence of dysplasia or malignancy was seen. These findings supported a diagnosis of porokeratosis. The patient underwent a trial of cryotherapy with moderate improvement of the facial lesions.

  14. Facial expression recognition based on Gabor wavelet transform

    Institute of Scientific and Technical Information of China (English)

    王甫龙; 薄华

    2012-01-01

    To enable computers to better recognize facial expressions, a facial expression recognition method based on the Gabor wavelet transform is discussed. First, a static grey-scale image containing facial expression information is pre-processed, including locating the pure facial expression region and normalizing its size and grey scale. Facial expression features are then extracted with the two-dimensional Gabor wavelet transform, and a fast PCA method is used for an initial reduction of the dimensionality of the Gabor features. In the resulting low-dimensional space, the Fisher criterion (FLD) is used to extract the features useful for classification, and finally an SVM classifier sorts the facial expressions. Experimental results show that, compared with conventional methods, the proposed approach is faster, meets real-time requirements, is robust, and achieves a higher recognition rate.

  15. Realistic facial expression of virtual human based on color, sweat, and tears effects.

    Science.gov (United States)

    Alkawaz, Mohammed Hazim; Basori, Ahmad Hoirul; Mohamad, Dzulkifli; Mohamed, Farhan

    2014-01-01

    Generating extreme appearances such as scared awaiting sweating while happy fit for tears (cry) and blushing (anger and happiness) is the key issue in achieving the high quality facial animation. The effects of sweat, tears, and colors are integrated into a single animation model to create realistic facial expressions of 3D avatar. The physical properties of muscles, emotions, or the fluid properties with sweating and tears initiators are incorporated. The action units (AUs) of facial action coding system are merged with autonomous AUs to create expressions including sadness, anger with blushing, happiness with blushing, and fear. Fluid effects such as sweat and tears are simulated using the particle system and smoothed-particle hydrodynamics (SPH) methods which are combined with facial animation technique to produce complex facial expressions. The effects of oxygenation of the facial skin color appearance are measured using the pulse oximeter system and the 3D skin analyzer. The result shows that virtual human facial expression is enhanced by mimicking actual sweating and tears simulations for all extreme expressions. The proposed method has contribution towards the development of facial animation industry and game as well as computer graphics.

  16. Realistic Facial Expression of Virtual Human Based on Color, Sweat, and Tears Effects

    Directory of Open Access Journals (Sweden)

    Mohammed Hazim Alkawaz

    2014-01-01

    Full Text Available Generating extreme appearances such as scared awaiting sweating while happy fit for tears (cry) and blushing (anger and happiness) is the key issue in achieving the high quality facial animation. The effects of sweat, tears, and colors are integrated into a single animation model to create realistic facial expressions of 3D avatar. The physical properties of muscles, emotions, or the fluid properties with sweating and tears initiators are incorporated. The action units (AUs) of facial action coding system are merged with autonomous AUs to create expressions including sadness, anger with blushing, happiness with blushing, and fear. Fluid effects such as sweat and tears are simulated using the particle system and smoothed-particle hydrodynamics (SPH) methods which are combined with facial animation technique to produce complex facial expressions. The effects of oxygenation of the facial skin color appearance are measured using the pulse oximeter system and the 3D skin analyzer. The result shows that virtual human facial expression is enhanced by mimicking actual sweating and tears simulations for all extreme expressions. The proposed method has contribution towards the development of facial animation industry and game as well as computer graphics.

  17. Research on Method of Facial Expression Recognition Based on Curvelet Transform and SVM

    Institute of Scientific and Technical Information of China (English)

    薄璐; 周菊香

    2013-01-01

    In this paper, the curvelet transform is applied to facial expression recognition, and a method combining the curvelet transform with an SVM is introduced. During expression feature extraction, principal component analysis is used to reduce the dimensionality of the coefficient features obtained from the curvelet decomposition. Experiments conducted on the JAFFE and Cohn-Kanade expression databases show that the method can effectively identify facial expressions and that its average recognition rate is significantly better than that of other methods.

  18. De Novo 17q24.2-q24.3 microdeletion presenting with generalized hypertrichosis terminalis, gingival fibromatous hyperplasia, and distinctive facial features.

    Science.gov (United States)

    Afifi, Hanan H; Fukai, Ryoko; Miyake, Noriko; Gamal El Din, Amina A; Eid, Maha M; Eid, Ola M; Thomas, Manal M; El-Badry, Tarek H; Tosson, Angie M S; Abdel-Salam, Ghada M H; Matsumoto, Naomichi

    2015-10-01

    Generalized hypertrichosis is a feature of several genetic disorders, and the nosology of these entities is still provisional. Recent studies have implicated chromosome 17q24.2-q24.3 microdeletion and the reciprocal microduplication in a very rare form of congenital generalized hypertrichosis terminalis (CGHT) with or without gingival hyperplasia. Here, we report on a 5-year-old Egyptian girl born to consanguineous parents. The girl presented with CGHT and gingival hyperplasia for whom we performed detailed clinical, pathological, and molecular studies. The girl had coarse facies characterized by bilateral epicanthic folds, thick and abundant eyelashes, a broad nose, full cheeks, and lips that constituted the distinctive facial features for this syndrome. Biopsy of the gingiva showed epithelial marked acanthosis and hyperkeratosis with hyperplastic thick collagen bundles and dense fibrosis in the underlying tissues. Array analysis indicated a 17q24.2-q24.3 chromosomal microdeletion. We validated this microdeletion by real-time quantitative PCR and confirmed a perfect co-segregation of the disease phenotype within the family. In summary, this study indicates that 17q24.2-q24.3 microdeletion caused CGHT with gingival hyperplasia and distinctive facies, which should be differentiated from the autosomal recessive type that lacks the distinctive facies.

  19. Using Kinect for real-time emotion recognition via facial expressions

    Institute of Scientific and Technical Information of China (English)

    Qi-rong MAO; Xin-yu PAN; Yong-zhao ZHAN; Xiang-jun SHEN

    2015-01-01

    Emotion recognition via facial expressions (ERFE) has attracted a great deal of interest with recent advances in artificial intelligence and pattern recognition. Most studies are based on 2D images, and their performance is usually computationally expensive. In this paper, we propose a real-time emotion recognition approach based on both 2D and 3D facial expression features captured by Kinect sensors. To capture the deformation of the 3D mesh during facial expression, we combine the features of animation units (AUs) and feature point positions (FPPs) tracked by Kinect. A fusion algorithm based on improved emotional profiles (IEPs) and maximum confidence is proposed to recognize emotions with these real-time facial expression features. Experiments on both an emotion dataset and a real-time video show the superior performance of our method.

  20. Dominant Local Binary Pattern Based Face Feature Selection and Detection

    Directory of Open Access Journals (Sweden)

    Kavitha.T

    2010-04-01

    Full Text Available Face detection plays a major role in biometrics. Feature selection is a problem of formidable complexity. This paper proposes a novel approach to extract face features for face detection. The LBP features can be extracted faster in a single scan through the raw image and lie in a lower dimensional space, whilst still retaining facial information efficiently. The LBP features are robust to low-resolution images. The dominant local binary pattern (DLBP) is used to extract features accurately. A number of trainable methods are emerging in empirical practice due to their effectiveness. The proposed method is a trainable system for selecting face features from over-complete dictionaries of image measurements. After the feature selection procedure is completed, an SVM classifier is used for face detection. The main advantage of this proposal is that it is trained on a very small training set, and the classifier is used to increase the selection accuracy. This is not only advantageous in facilitating the data gathering stage but, more importantly, in limiting the training time. The CBCL frontal faces dataset is used for training and validation.

  1. Facial Expression Recognition via Non-Negative Least-Squares Sparse Coding

    Directory of Open Access Journals (Sweden)

    Ying Chen

    2014-05-01

    Full Text Available Sparse coding is an active research subject in signal processing, computer vision, and pattern recognition. A novel method of facial expression recognition via non-negative least squares (NNLS sparse coding is presented in this paper. The NNLS sparse coding is used to form a facial expression classifier. To testify the performance of the presented method, local binary patterns (LBP and the raw pixels are extracted for facial feature representation. Facial expression recognition experiments are conducted on the Japanese Female Facial Expression (JAFFE database. Compared with other widely used methods such as linear support vector machines (SVM, sparse representation-based classifier (SRC, nearest subspace classifier (NSC, K-nearest neighbor (KNN and radial basis function neural networks (RBFNN, the experiment results indicate that the presented NNLS method performs better than other used methods on facial expression recognition tasks.
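
A minimal sketch of the NNLS sparse-coding classifier, assuming SciPy: the test sample is coded over the matrix of training samples with non-negative least squares, and it is assigned to the class whose columns yield the smallest reconstruction residual. Feature extraction (LBP or raw pixels in the paper) is replaced here by random toy vectors.

```python
# Minimal sketch of an NNLS sparse-coding classifier: code the test sample over
# the training matrix with non-negative least squares, then assign the class
# whose columns give the smallest reconstruction residual.
import numpy as np
from scipy.optimize import nnls

def nnls_classify(D, labels, x):
    """D: (d, n) dictionary of training columns; labels: (n,); x: (d,) test vector."""
    coef, _ = nnls(D, x)
    residuals = {}
    for c in np.unique(labels):
        mask = labels == c
        residuals[c] = np.linalg.norm(x - D[:, mask] @ coef[mask])
    return min(residuals, key=residuals.get)

rng = np.random.default_rng(6)
D = rng.random((100, 30))                 # 30 training samples, 100-D features
labels = np.repeat(np.arange(6), 5)       # six expression classes, 5 samples each
x = D[:, 7] + 0.05 * rng.random(100)      # noisy copy of a class-1 sample
print(nnls_classify(D, labels, x))        # expected: 1
```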

  2. Facial Expression Analysis Based on Synergetic Trust Model in Proactive Environment

    Institute of Scientific and Technical Information of China (English)

    徐超; 冯志勇; 王家昉

    2012-01-01

    With the rapid development of affective computing and facial expression analysis, it is important to understand trusted facial expressions during human-computer interaction. This paper presents a novel approach for synergetic trust analysis of facial expression. Based on the cooperative mechanism between facial expression features and affective trust evidence, synergy theory is applied to extend the evidence and to derive the reasoning algorithm in a proactive environment. The resulting model, built from cooperative interaction and synergetic dependence evaluation, is capable of analyzing trusted facial expressions. Experiments have been conducted to evaluate the rationality of the approach. The results suggest that the synergetic trust model can reduce the impact of subjective factors on the overall analysis and perform with higher credibility, allowing the user to further understand affective computing with trust factors.

  3. Multi scale feature based matched filter processing

    Institute of Scientific and Technical Information of China (English)

    LI Jun; HOU Chaohuan

    2004-01-01

    Using the extreme difference in self-similarity and kurtosis, at large-scale levels of the wavelet transform approximation, between PTFM (Pulse Trains of Frequency Modulated) signals and their reverberation, a feature-based matched filter method using the classify-before-detect paradigm is proposed to improve detection performance in reverberation and multipath environments. Processing of lake trial data showed that the processing gain of the proposed method is about 10 dB greater than that of the matched filter. In multipath environments, the detection performance of the matched filter becomes much poorer, while that of the proposed method remains better. This shows that the method is much more robust to the effects of multipath.

  4. Improved AAG based recognization of machining feature

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    The lost information caused by feature interaction is restored by using auxiliary faces (AF) and virtual links (VL). The delta volume of the interacting features, represented by a concave attachable connected graph (CACG), can be decomposed into several isolated features represented by complete concave adjacency graphs (CCAG). We can recognize a feature's rough type by using the CCAG as a hint; the exact type of the feature can be obtained by deleting the auxiliary faces from the isolated feature. A united machining feature (UMF) is used to represent the features that can be machined in the same machining process. This is important for rationalizing process plans and reducing machining time. An example is given to demonstrate the effectiveness of this method.

  5. Feature based sliding window technique for face recognition

    Science.gov (United States)

    Javed, Muhammad Younus; Mohsin, Syed Maajid; Anjum, Muhammad Almas

    2010-02-01

    Human beings are commonly identified by biometric schemes, which are concerned with identifying individuals by their unique physical characteristics. Passwords and personal identification numbers have been used to identify people for years. The disadvantages of these schemes are that they may be used by someone else or can easily be forgotten. Keeping these problems in view, biometric approaches such as face recognition, fingerprint, iris/retina and voice recognition have been developed, which provide a far better solution when identifying individuals. A number of methods have been developed for face recognition. This paper illustrates the employment of Gabor filters for extracting facial features by constructing a sliding window frame. Classification is done by assigning to the unknown image the label of the class whose stored database image has the maximum number of similar features. The proposed system gives a recognition rate of 96%, which is better than many of the similar techniques being used for face recognition.

  6. Facial swelling

    Science.gov (United States)

    ... help reduce facial swelling. When to Contact a Medical Professional Call your health care provider if you have: Sudden, painful, or severe facial ... or if you have breathing problems. The health care provider will ask about your medical and personal history. This helps determine treatment or ...

  7. A Sparse-Feature-Based Face Detector

    Institute of Scientific and Technical Information of China (English)

    LU Xiaofeng; ZHENG Nanning; ZHENG Songfeng

    2003-01-01

    Local features and global features are two kinds of important statistical features used to distinguish faces from nonfaces. They are both special cases of sparse features. A final classifier can be considered as a combination of a set of selected weak classifiers, and each weak classifier uses a sparse feature to classify samples. Motivated by this thought, we construct an over-complete set of weak classifiers using the LPSVM (linear proximal support vector machine) algorithm, and then we select part of them using the AdaBoost algorithm and combine the selected weak classifiers to form a strong classifier. During the course of feature extraction and selection, our method can minimize the classification error directly, whereas most previous works cannot do this. The main difference from other methods is that the local features are learned from the training set instead of being arbitrarily defined. We applied our method to face detection; the test result shows that this method performs well.

  8. Lack of Support for the Association between Facial Shape and Aggression: A Reappraisal Based on a Worldwide Population Genetics Perspective

    Science.gov (United States)

    Gómez-Valdés, Jorge; Hünemeier, Tábita; Quinto-Sánchez, Mirsha; Paschetta, Carolina; de Azevedo, Soledad; González, Marina F.; Martínez-Abadías, Neus; Esparza, Mireia; Pucciarelli, Héctor M.; Salzano, Francisco M.; Bau, Claiton H. D.; Bortolini, Maria Cátira; González-José, Rolando

    2013-01-01

    Antisocial and criminal behaviors are multifactorial traits whose interpretation relies on multiple disciplines. Since these interpretations may have social, moral and legal implications, a constant review of the evidence is necessary before any scientific claim is considered as truth. A recent study proposed that men with wider faces relative to facial height (fWHR) are more likely to develop unethical behaviour mediated by a psychological sense of power. This research was based on reports suggesting that sexual dimorphism and selection would be responsible for a correlation between fWHR and aggression. Here we show that 4,960 individuals from 94 modern human populations belonging to a vast array of genetic and cultural contexts do not display significant amounts of fWHR sexual dimorphism. Further analyses using populations with associated ethnographical records as well as samples of male prisoners of the Mexico City Federal Penitentiary condemned by crimes of variable level of inter-personal aggression (homicide, robbery, and minor faults) did not show significant evidence, suggesting that populations/individuals with higher levels of bellicosity, aggressive behaviour, or power-mediated behaviour display greater fWHR. Finally, a regression analysis of fWHR on individual's fitness showed no significant correlation between this facial trait and reproductive success. Overall, our results suggest that facial attributes are poor predictors of aggressive behaviour, or at least, that sexual selection was weak enough to leave a signal on patterns of between- and within-sex and population facial variation. PMID:23326328

  9. Facial Expression Recognition Based on Improved LTP and Sparse Representation

    Institute of Scientific and Technical Information of China (English)

    李立赛; 应自炉

    2015-01-01

    In order to improve the facial expression recognition rate in practical applications, an improved local ternary patterns (ILTP) algorithm is proposed on the basis of the local ternary patterns (LTP) algorithm and combined with a sparse representation-based classifier (SRC) to form a new algorithm for facial expression recognition. Facial expression features are first extracted by the ILTP algorithm, and these features are then used as the input of the SRC to complete facial expression classification. Experimental results on the JAFFE database show that the new algorithm achieves a facial expression recognition rate of 70.48% and is highly feasible.
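
    The sketch below illustrates the basic local ternary pattern (LTP) code on a single 3x3 neighbourhood, in the Tan-and-Triggs style that ILTP builds on; it is a hedged illustration and does not reproduce the paper's improved operator or its SRC stage.

```python
# Sketch of the basic local ternary pattern (LTP) code for one pixel's 3x3
# neighbourhood (Tan & Triggs style), not the paper's improved ILTP variant.
import numpy as np

def ltp_codes(patch, t=5):
    """patch: 3x3 grayscale block; returns (upper, lower) binary LTP codes."""
    center = patch[1, 1]
    # 8 neighbours in a fixed clockwise order
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    upper = lower = 0
    for bit, (r, c) in enumerate(order):
        diff = int(patch[r, c]) - int(center)
        if diff >= t:        # ternary value +1 -> bit in the upper pattern
            upper |= 1 << bit
        elif diff <= -t:     # ternary value -1 -> bit in the lower pattern
            lower |= 1 << bit
    return upper, lower

patch = np.array([[52, 60, 58], [49, 55, 70], [40, 54, 61]], dtype=np.uint8)
print(ltp_codes(patch, t=5))
```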

  10. Cosmetics alter biologically-based factors of beauty: evidence from facial contrast.

    Science.gov (United States)

    Jones, Alex L; Russell, Richard; Ward, Robert

    2015-02-28

    The use of cosmetics by women seems to consistently increase their attractiveness. What factors of attractiveness do cosmetics alter to achieve this? Facial contrast is a known cue to sexual dimorphism and youth, and cosmetics exaggerate sexual dimorphisms in facial contrast. Here, we demonstrate that the luminance contrast pattern of the eyes and eyebrows is consistently sexually dimorphic across a large sample of faces, with females possessing lower brow contrasts than males, and greater eye contrast than males. Red-green and yellow-blue color contrasts were not found to differ consistently between the sexes. We also show that women use cosmetics not only to exaggerate sexual dimorphisms of brow and eye contrasts, but also to increase contrasts that decline with age. These findings refine the notion of facial contrast, and demonstrate how cosmetics can increase attractiveness by manipulating factors of beauty associated with facial contrast.
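
    As a hedged illustration of the facial contrast idea, the sketch below computes a simple feature-to-skin luminance contrast from region masks; the exact contrast definition used in the study may differ, and the image and masks here are toy placeholders.

```python
# Simplified sketch of feature-to-skin luminance contrast: the difference in
# mean luminance between a feature region (e.g. brow or eye) and the
# surrounding skin, normalised by the skin luminance. Masks are assumed given.
import numpy as np

def luminance_contrast(gray, feature_mask, skin_mask):
    """gray: 2D float array of luminance; masks: boolean arrays of same shape."""
    feat = gray[feature_mask].mean()
    skin = gray[skin_mask].mean()
    return (skin - feat) / skin   # higher value = darker feature against the skin

# Toy example: a dark "brow" band inside a brighter skin patch.
gray = np.full((100, 100), 0.70)
gray[40:50, 20:80] = 0.35
feature_mask = np.zeros_like(gray, dtype=bool)
feature_mask[40:50, 20:80] = True
skin_mask = ~feature_mask
print(round(luminance_contrast(gray, feature_mask, skin_mask), 3))
```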

  11. Cosmetics Alter Biologically-Based Factors of Beauty: Evidence from Facial Contrast

    Directory of Open Access Journals (Sweden)

    Alex L. Jones

    2015-01-01

    Full Text Available The use of cosmetics by women seems to consistently increase their attractiveness. What factors of attractiveness do cosmetics alter to achieve this? Facial contrast is a known cue to sexual dimorphism and youth, and cosmetics exaggerate sexual dimorphisms in facial contrast. Here, we demonstrate that the luminance contrast pattern of the eyes and eyebrows is consistently sexually dimorphic across a large sample of faces, with females possessing lower brow contrasts than males, and greater eye contrast than males. Red-green and yellow-blue color contrasts were not found to differ consistently between the sexes. We also show that women use cosmetics not only to exaggerate sexual dimorphisms of brow and eye contrasts, but also to increase contrasts that decline with age. These findings refine the notion of facial contrast, and demonstrate how cosmetics can increase attractiveness by manipulating factors of beauty associated with facial contrast.

  12. Automatic Assessment of Facial Nerve Function Based on Infrared Thermal Imaging

    Institute of Scientific and Technical Information of China (English)

    刘旭龙; 付斌瑞; 许沥文; 鲁宁; 于长永; 柏禄一

    2016-01-01

    distribution features are extracted automatically, including the asymmetry degree of the facial temperature distribution, the effective thermal area ratio, and the temperature difference. An automatic classifier based on a radial basis function neural network (RBFNN) is used to assess facial nerve function. This method comprehensively utilizes the correlation and specificity of the facial temperature distribution and efficiently extracts the contralateral facial temperature asymmetry of facial paralysis from infrared thermal imaging. In our experiments, 390 infrared thermal images were collected from subjects with unilateral facial paralysis. The results show that the average classification accuracy of the proposed method was 94.10%, which is 9.31% higher than a k-nearest neighbor (kNN) classifier and 4.87% higher than a support vector machine (SVM). These results are also superior to the traditional House-Brackmann facial nerve function assessment method. The classification accuracy of facial nerve function with this method fully complies with the clinical application standard. A complete set of automated techniques for the computerized assessment of thermal images has been developed to assess thermal dysfunction caused by facial paralysis, and the clinical diagnosis and treatment of facial paralysis will also benefit from this method.

  13. Patch-guided facial image inpainting by shape propagation

    Institute of Scientific and Technical Information of China (English)

    Yue-ting ZHUANG; Yu-shun WANG; Timothy K. SHIH; Nick C. TANG

    2009-01-01

    Images with human faces comprise an essential part of the imaging realm. Occlusion or damage in facial portions brings remarkable discomfort and information loss. We propose an algorithm that can repair occluded or damaged facial images automatically, named 'facial image inpainting'. Inpainting is a set of image processing methods to recover missing image portions. We extend image inpainting methods by introducing facial domain knowledge. With the support of a face database, our approach propagates structural information, i.e., feature points and edge maps, from similar faces to the missing facial regions. Using the inferred structural information as guidance, an exemplar-based image inpainting algorithm is employed to copy patches within the same face from the source portion to the missing portion. This newly proposed concept of facial image inpainting outperforms traditional inpainting methods by propagating facial shapes from a face database, and avoids the problem of variations in imaging conditions across different images by inferring colors and textures from the same face image. Our system produces seamless faces in which artifacts are hardly visible.

  14. Automatic Facial Expression Recognition and Operator Functional State

    Science.gov (United States)

    Blanson, Nina

    2012-01-01

    The prevalence of human error in safety-critical occupations remains a major challenge to mission success despite increasing automation in control processes. Although various methods have been proposed to prevent incidences of human error, none of these have been developed to employ the detection and regulation of Operator Functional State (OFS), or the optimal condition of the operator while performing a task, in work environments due to drawbacks such as obtrusiveness and impracticality. A video-based system with the ability to infer an individual's emotional state from facial feature patterning mitigates some of the problems associated with other methods of detecting OFS, like obtrusiveness and impracticality in integration with the mission environment. This paper explores the utility of facial expression recognition as a technology for inferring OFS by first expounding on the intricacies of OFS and the scientific background behind emotion and its relationship with an individual's state. Then, descriptions of the feedback loop and the emotion protocols proposed for the facial recognition program are explained. A basic version of the facial expression recognition program uses Haar classifiers and OpenCV libraries to automatically locate key facial landmarks during a live video stream. Various methods of creating facial expression recognition software are reviewed to guide future extensions of the program. The paper concludes with an examination of the steps necessary in the research of emotion and recommendations for the creation of an automatic facial expression recognition program for use in real-time, safety-critical missions.
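
    A minimal sketch of the Haar-cascade step mentioned above is given below, using OpenCV's bundled cascades to locate faces and eyes in frames from a live video stream; the camera index and cascade files are assumptions rather than details of the program described.

```python
# Minimal sketch: locate faces (and eyes within them) in live video frames
# using OpenCV's bundled Haar cascades. Camera index and cascade choice are
# assumptions for illustration only.
import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture(0)                         # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
        roi = gray[y:y + h, x:x + w]              # search for eyes inside the face box
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi):
            cv2.rectangle(frame, (x + ex, y + ey), (x + ex + ew, y + ey + eh), (0, 255, 0), 1)
    cv2.imshow("landmarks", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```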

  15. Local Feature based Gender Independent Bangla ASR

    Directory of Open Access Journals (Sweden)

    Bulbul Ahamed

    2012-11-01

    Full Text Available This paper presents an automatic speech recognition (ASR) system for Bangla (also widely known as Bengali) that suppresses speaker gender effects based on local features extracted from input speech. Speaker-specific characteristics play an important role in the performance of Bangla automatic speech recognition (ASR). The gender factor has an adverse effect on the classifier when recognizing speech from the opposite gender, for example when a classifier trained on male speech is tested on female speech or vice versa. To obtain a robust ASR system in practice, it is necessary to build a system that incorporates gender-independent behaviour for each gender. In this paper, we propose a gender-independent technique for ASR that focuses on the gender factor. The proposed method trains the classifier with both genders, male and female, and evaluates the classifier on both male and female speech. For the experiments, we designed a medium-sized Bangla speech corpus for both male and female speakers. The proposed system shows a significant improvement in word correct rates, word accuracies, and sentence correct rates in comparison with a method that suffers from gender effects. Moreover, it provides the highest recognition performance while using fewer mixture components in the hidden Markov models (HMMs).

  16. Multi-Feature Segmentation and Cluster based Approach for Product Feature Categorization

    Directory of Open Access Journals (Sweden)

    Bharat Singh

    2016-03-01

    Full Text Available In recent times, the web has become a valuable source of online consumer reviews; however, the number of reviews is growing rapidly. It is infeasible for users to read all reviews to make an informed decision, and reviewers may describe the same feature with different words or phrases. To produce a useful summary, domain synonym words and phrases need to be grouped into the same feature group. We focus on the feature-based opinion mining problem, and this paper mainly studies feature-based product categorization from the large number of user-generated reviews available on different websites. First, a multi-feature segmentation method is proposed which segments multi-feature review sentences into single-feature units. Second, a part-of-speech dictionary and context information are used to identify irrelevant features, sentiment words are used to identify the polarity of features, and finally an unsupervised clustering-based product feature categorization method is proposed. Clustering is an unsupervised machine learning approach that groups features with a high degree of similarity into the same cluster. The proposed approach provides satisfactory results and can achieve 100% average precision for the clustering-based product feature categorization task. This approach can be applied to different products.

  17. Facial Data Field

    Institute of Scientific and Technical Information of China (English)

    WANG Shuliang; YUAN Hanning; CAO Baohua; WANG Dakui

    2015-01-01

    Expressional face recognition is a challenge in computer vision for complex expressions. Facial data field is proposed to recognize expression. Fundamentals are presented in the methodology of face recognition upon data field and subsequently, technical algorithms including normalizing faces, generating facial data field, extracting feature points in partitions, assigning weights and recognizing faces. A case is studied with the JAFFE database for its verification. Results indicate that the proposed method is suitable and effective in expressional face recognition, considering the whole average recognition rate is up to 94.3%. In conclusion, data field is considered as a valuable alternative to pattern recognition.

  18. Innovations in individual feature history management - The significance of feature-based temporal model

    Science.gov (United States)

    Choi, J.; Seong, J.C.; Kim, B.; Usery, E.L.

    2008-01-01

    A feature relies on three dimensions (space, theme, and time) for its representation. Even though spatiotemporal models have been proposed, they have principally focused on the spatial changes of a feature. In this paper, a feature-based temporal model is proposed to represent the changes of both space and theme independently. The proposed model modifies the ISO's temporal schema and adds a new explicit temporal relationship structure that stores the temporal topological relationship along with the ISO's temporal primitives of a feature in order to keep track of feature history. The explicit temporal relationship can enhance query performance on feature history by removing topological comparison during query processing. Further, a prototype system has been developed to test the proposed feature-based temporal model by querying land parcel history in Athens, Georgia. The result of the temporal query on individual feature history shows the efficiency of the explicit temporal relationship structure. © Springer Science+Business Media, LLC 2007.

  19. Gender classification and age estimation based on facial image

    Institute of Scientific and Technical Information of China (English)

    张天刚; 任培花; 张景安

    2012-01-01

    Compared to the one-sidedness of past gender classification or age estimation based on facial images, a novel method based on public features and private features for gender classification and age estimation is proposed. Face features are extracted by a Gabor wavelet transform, which is robust to illumination and scale variations. Effective face features whose dimensionality has been reduced are divided into public features and private features; public features are used for gender classification and private features are used for age estimation. Experiments were conducted with a radial basis function neural network on the FG-NET face database and our own OFID face database, and very promising results were achieved.
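
    The sketch below illustrates Gabor-wavelet feature extraction of the kind described above: a small bank of Gabor kernels at several scales and orientations is applied to a face crop and simple response statistics are kept as the feature vector. The parameter values are illustrative and not the paper's.

```python
# Sketch of Gabor-wavelet feature extraction for a face image: filter the
# image with a small bank of Gabor kernels (several orientations and scales)
# and keep simple statistics of each response as the feature vector.
import cv2
import numpy as np

def gabor_features(gray, ksize=31, sigmas=(2.0, 4.0), thetas=4, lambd=10.0, gamma=0.5):
    feats = []
    for sigma in sigmas:
        for k in range(thetas):
            theta = np.pi * k / thetas
            kern = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma, 0, ktype=cv2.CV_32F)
            resp = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kern)
            feats.extend([resp.mean(), resp.std()])   # two statistics per filter
    return np.array(feats)

gray = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in for a face crop
print(gabor_features(gray).shape)   # 2 scales x 4 orientations x 2 stats = 16 features
```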

  20. Fingerprint Feature Extraction Based on Macroscopic Curvature

    Institute of Scientific and Technical Information of China (English)

    Zhang Xiong; He Gui-ming; Zhang Yun

    2003-01-01

    In the Automatic Fingerprint Identification System (AFIS), extracting fingerprint features is very important. The local curvature of fingerprint ridges is irregular, making it difficult to effectively extract curve features that describe the fingerprint. This article proposes a novel algorithm; it embraces information from a few nearby fingerprint ridges to extract a new characteristic which can describe the curvature feature of the fingerprint. Experimental results show the algorithm is feasible, and the characteristics extracted by it can clearly show the inner macroscopic curve properties of the fingerprint. The results also show that this kind of characteristic is robust to noise and pollution.

  1. Fingerprint Feature Extraction Based on Macroscopic Curvature

    Institute of Scientific and Technical Information of China (English)

    Zhang Xiong; He Gui-Ming; et al.

    2003-01-01

    In the Automatic Fingerprint Identification System (AFIS), extracting fingerprint features is very important. The local curvature of fingerprint ridges is irregular, making it difficult to effectively extract curve features that describe the fingerprint. This article proposes a novel algorithm; it embraces information from a few nearby fingerprint ridges to extract a new characteristic which can describe the curvature feature of the fingerprint. Experimental results show the algorithm is feasible, and the characteristics extracted by it can clearly show the inner macroscopic curve properties of the fingerprint. The results also show that this kind of characteristic is robust to noise and pollution.

  2. Clustering Based Feature Learning on Variable Stars

    CERN Document Server

    Mackenzie, Cristóbal; Protopapas, Pavlos

    2016-01-01

    The success of automatic classification of variable stars strongly depends on the lightcurve representation. Usually, lightcurves are represented as a vector of many statistical descriptors designed by astronomers called features. These descriptors commonly demand significant computational power to calculate, require substantial research effort to develop and do not guarantee good performance on the final classification task. Today, lightcurve representation is not entirely automatic; algorithms that extract lightcurve features are designed by humans and must be manually tuned up for every survey. The vast amounts of data that will be generated in future surveys like LSST mean astronomers must develop analysis pipelines that are both scalable and automated. Recently, substantial efforts have been made in the machine learning community to develop methods that prescind from expert-designed and manually tuned features for features that are automatically learned from data. In this work we present what is, to our ...

  3. Facial pain and temporomandibular disorders

    OpenAIRE

    2002-01-01

    Abstract The study was undertaken to determine the prevalence of facial pain and the association of facial pain with temporomandibular disorders (TMD) as well as with other factors, in a geographically defined population-based sample consisting of subjects born in 1966 in northern Finland, and in a case-control study including subjects with facial pain and their healthy controls. In addition, the influence of conservative stomatognathic and necessary prosthetic treatme...

  4. Perceived Sexual Orientation Based on Vocal and Facial Stimuli Is Linked to Self-Rated Sexual Orientation in Czech Men

    OpenAIRE

    Jaroslava Varella Valentova; Jan Havlíček

    2013-01-01

    Previous research has shown that lay people can accurately assess male sexual orientation based on limited information, such as face, voice, or behavioral display. Gender-atypical traits are thought to serve as cues to sexual orientation. We investigated the presumed mechanisms of sexual orientation attribution using a standardized set of facial and vocal stimuli of Czech men. Both types of stimuli were rated for sexual orientation and masculinity-femininity by non-student heterosexual women ...

  5. 3D animation of facial plastic surgery based on computer graphics

    Science.gov (United States)

    Zhang, Zonghua; Zhao, Yan

    2013-12-01

    More and more people, especially women, desire to be more beautiful than ever. To some extent this has become possible, because facial plastic surgery was already practised in the early 20th century and even earlier, when doctors dealt with facial war injuries. However, the post-operative effect is not always satisfying, since no animation can be seen by the patients beforehand. In this paper, by combining facial plastic surgery and computer graphics, a novel method of simulating the post-operative appearance is presented to demonstrate the modified face from different viewpoints. The 3D human face data are obtained by using 3D fringe pattern imaging systems and CT imaging systems and then converted into the STL (STereoLithography) file format. An STL file is made up of small 3D triangular primitives. The triangular mesh can be reconstructed by using a hash function. The topmost triangular meshes in depth are selected from the set of triangles by a ray-casting technique. Mesh deformation is based on the front triangular mesh in the simulation process, which deforms the area of interest instead of control points. Experiments on a face model show that the proposed 3D facial plastic surgery animation can effectively demonstrate the simulated post-operative appearance.
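
    The mesh-reconstruction step can be illustrated with a short sketch: STL stores independent triangles, so shared vertices are merged by hashing rounded coordinates into indices, giving an indexed vertex/face mesh. This is a simplified, assumed reading of the hash-function step; parsing of the STL file itself is omitted.

```python
# Sketch of the mesh-reconstruction idea: STL stores independent triangles, so
# shared vertices are merged by hashing (rounded) coordinates into indices,
# yielding an indexed vertex/face mesh.
def stl_to_indexed_mesh(triangles, decimals=6):
    """triangles: list of three (x, y, z) tuples each. Returns (vertices, faces)."""
    index = {}            # hash map: rounded coordinate -> vertex index
    vertices, faces = [], []
    for tri in triangles:
        face = []
        for v in tri:
            key = tuple(round(c, decimals) for c in v)
            if key not in index:
                index[key] = len(vertices)
                vertices.append(key)
            face.append(index[key])
        faces.append(tuple(face))
    return vertices, faces

# Two triangles sharing an edge -> 4 unique vertices, not 6.
tris = [[(0, 0, 0), (1, 0, 0), (0, 1, 0)],
        [(1, 0, 0), (1, 1, 0), (0, 1, 0)]]
verts, faces = stl_to_indexed_mesh(tris)
print(len(verts), faces)   # 4 [(0, 1, 2), (1, 3, 2)]
```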

  6. Men’s Preference for Women’s Facial Features: Testing Homogamy and the Paternity Uncertainty Hypothesis

    OpenAIRE

    Jeanne Bovet; Julien Barthes; Valérie Durand; Michel Raymond; Alexandra Alvergne

    2012-01-01

    Male mate choice might be based on both absolute and relative strategies. Cues of female attractiveness are thus likely to reflect both fitness and reproductive potential, as well as compatibility with particular male phenotypes. In humans, absolute clues of fertility and indices of favorable developmental stability are generally associated with increased women's attractiveness. However, why men exhibit variable preferences remains less studied. Male mate choice might be influenced by uncerta...

  7. Facial tics

    Science.gov (United States)

    Tic - facial; Mimic spasm ... Tics may involve repeated, uncontrolled spasm-like muscle movements, such as eye blinking, grimacing, mouth twitching, nose wrinkling, and squinting. Repeated throat clearing or grunting may also be ...

  8. Hirschsprung disease, microcephaly, mental retardation, and characteristic facial features: delineation of a new syndrome and identification of a locus at chromosome 2q22-q23.

    OpenAIRE

    1998-01-01

    We have identified six children with a distinctive facial phenotype in association with mental retardation (MR), microcephaly, and short stature, four of whom presented with Hirschsprung (HSCR) disease in the neonatal period. HSCR was diagnosed in a further child at the age of 3 years after investigation for severe chronic constipation and another child, identified as sharing the same facial phenotype, had chronic constipation, but did not have HSCR. One of our patients has an interstitial de...

  9. A genome-wide association study identifies five loci influencing facial morphology in Europeans.

    Directory of Open Access Journals (Sweden)

    Fan Liu

    2012-09-01

    Full Text Available Inter-individual variation in facial shape is one of the most noticeable phenotypes in humans, and it is clearly under genetic regulation; however, almost nothing is known about the genetic basis of normal human facial morphology. We therefore conducted a genome-wide association study for facial shape phenotypes in multiple discovery and replication cohorts, considering almost ten thousand individuals of European descent from several countries. Phenotyping of facial shape features was based on landmark data obtained from three-dimensional head magnetic resonance images (MRIs) and two-dimensional portrait images. We identified five independent genetic loci associated with different facial phenotypes, suggesting the involvement of five candidate genes--PRDM16, PAX3, TP63, C5orf50, and COL17A1--in the determination of the human face. Three of them have been implicated previously in vertebrate craniofacial development and disease, and the remaining two genes potentially represent novel players in the molecular networks governing facial development. Our finding at PAX3 influencing the position of the nasion replicates a recent GWAS of facial features. In addition to the reported GWA findings, we established links between common DNA variants previously associated with NSCL/P at 2p21, 8q24, 13q31, and 17q22 and normal facial-shape variations based on a candidate gene approach. Overall our study implies that DNA variants in genes essential for craniofacial development contribute with relatively small effect size to the spectrum of normal variation in human facial morphology. This observation has important consequences for future studies aiming to identify more genes involved in the human facial morphology, as well as for potential applications of DNA prediction of facial shape such as in future forensic applications.

  10. Fully automatic recognition of the temporal phases of facial actions.

    Science.gov (United States)

    Valstar, Michel F; Pantic, Maja

    2012-02-01

    Past work on automatic analysis of facial expressions has focused mostly on detecting prototypic expressions of basic emotions like happiness and anger. The method proposed here enables the detection of a much larger range of facial behavior by recognizing facial muscle actions [action units (AUs)] that compound expressions. AUs are agnostic, leaving the inference about conveyed intent to higher order decision making (e.g., emotion recognition). The proposed fully automatic method not only allows the recognition of 22 AUs but also explicitly models their temporal characteristics (i.e., sequences of temporal segments: neutral, onset, apex, and offset). To do so, it uses a facial point detector based on Gabor-feature-based boosted classifiers to automatically localize 20 facial fiducial points. These points are tracked through a sequence of images using a method called particle filtering with factorized likelihoods. To encode AUs and their temporal activation models based on the tracking data, it applies a combination of GentleBoost, support vector machines, and hidden Markov models. We attain an average AU recognition rate of 95.3% when tested on a benchmark set of deliberately displayed facial expressions and 72% when tested on spontaneous expressions.

  11. Face Recognition Based on Nonlinear Feature Approach

    Directory of Open Access Journals (Sweden)

    Eimad E.A. Abusham

    2008-01-01

    Full Text Available Feature extraction techniques are widely used to reduce the complexity of high-dimensional data. Nonlinear feature extraction via Locally Linear Embedding (LLE) has attracted much attention due to its high performance. In this paper, we propose a novel approach for face recognition that integrates nonlinear dimensionality reduction by Locally Linear Embedding with Local Fisher Discriminant Analysis (LFDA) to improve the discriminating power of the extracted features, maximizing between-class separation while preserving the within-class local structure. Extensive experiments performed on the CMU-PIE database indicate that the proposed methodology outperforms benchmark methods such as Principal Component Analysis (PCA) and Fisher Discriminant Analysis (FDA). The results show that a recognition rate of 95% can be obtained using our proposed method.
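
    A hedged sketch of the pipeline's overall shape is shown below: Locally Linear Embedding followed by a discriminant step and a nearest-neighbour classifier. Plain Linear Discriminant Analysis stands in for LFDA (scikit-learn provides no LFDA), and the data are synthetic stand-ins for face images.

```python
# Sketch of the pipeline shape: nonlinear dimensionality reduction with LLE,
# then a discriminant projection, then a 1-NN classifier. Plain LDA stands in
# for LFDA, and the data are synthetic stand-ins for face images.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 100))            # 300 "face images" x 100 pixels
y = rng.integers(0, 5, size=300)           # 5 identities (toy labels)
X[np.arange(300), y] += 3.0                # inject some class structure

pipe = make_pipeline(
    LocallyLinearEmbedding(n_neighbors=12, n_components=10, random_state=0),
    LinearDiscriminantAnalysis(),
    KNeighborsClassifier(n_neighbors=1),
)
print("CV accuracy:", cross_val_score(pipe, X, y, cv=3).mean())
```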

  12. Palmprint Based Verification System Using SURF Features

    Science.gov (United States)

    Srinivas, Badrinath G.; Gupta, Phalguni

    This paper describes the design and development of a prototype of robust biometric system for verification. The system uses features extracted using Speeded Up Robust Features (SURF) operator of human hand. The hand image for features is acquired using a low cost scanner. The palmprint region extracted is robust to hand translation and rotation on the scanner. The system is tested on IITK database of 200 images and PolyU database of 7751 images. The system is found to be robust with respect to translation and rotation. It has FAR 0.02%, FRR 0.01% and accuracy of 99.98% and can be a suitable system for civilian applications and high-security environments.
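
    The following sketch shows SURF keypoint and descriptor extraction with OpenCV, the step on which such a palmprint system would build; SURF lives in the opencv-contrib xfeatures2d module and is patent-encumbered, so it may be unavailable in some builds, and ORB is used here as a fallback. The image path is an assumption.

```python
# Sketch of SURF keypoint/descriptor extraction for a palmprint region.
# SURF requires opencv-contrib (xfeatures2d) and may be unavailable; ORB is
# used as a free fallback. The image path is illustrative only.
import cv2
import numpy as np

img = cv2.imread("palmprint_roi.png", cv2.IMREAD_GRAYSCALE)   # illustrative path
if img is None:                                               # fall back to a synthetic patch
    img = np.random.randint(0, 256, (256, 256), dtype=np.uint8)

try:
    detector = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
except (AttributeError, cv2.error):
    detector = cv2.ORB_create(nfeatures=500)                  # patent-free fallback

keypoints, descriptors = detector.detectAndCompute(img, None)
print(len(keypoints), None if descriptors is None else descriptors.shape)

# Verification would then match a probe's descriptors against an enrolled
# template, e.g. with a brute-force matcher and a ratio test.
```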

  13. Phonetic Features in Patients with Velo-cardio-facial Syndrome

    Institute of Scientific and Technical Information of China (English)

    刘琼; 王国民; 蒋莉萍; 陈阳

    2011-01-01

    Objective: To retrospectively review the phonetic features of 17 patients with velo-cardio-facial syndrome (VCFS). Methods: 9 patients were male and 8 were female (aged 5-23 years, mean 13.2 years). All patients underwent a specialist voice analysis test. Results: The main phonetic feature of VCFS was found to be excessive nasal pronunciation, accompanied by consonant omission and attenuation. The omitted consonants were mainly z, zh, j, and g; the attenuated consonants were mainly s, sh, ch, x, q, k, and b. Conclusion: The phonetic features in VCFS patients appear to be excessive nasal pronunciation together with consonant omission and attenuation, which result in unclear pronunciation.

  14. Perceived sexual orientation based on vocal and facial stimuli is linked to self-rated sexual orientation in Czech men.

    Science.gov (United States)

    Valentova, Jaroslava Varella; Havlíček, Jan

    2013-01-01

    Previous research has shown that lay people can accurately assess male sexual orientation based on limited information, such as face, voice, or behavioral display. Gender-atypical traits are thought to serve as cues to sexual orientation. We investigated the presumed mechanisms of sexual orientation attribution using a standardized set of facial and vocal stimuli of Czech men. Both types of stimuli were rated for sexual orientation and masculinity-femininity by non-student heterosexual women and homosexual men. Our data showed that by evaluating vocal stimuli both women and homosexual men can judge sexual orientation of the target men in agreement with their self-reported sexual orientation. Nevertheless, only homosexual men accurately attributed sexual orientation of the two groups from facial images. Interestingly, facial images of homosexual targets were rated as more masculine than heterosexual targets. This indicates that attributions of sexual orientation are affected by stereotyped association between femininity and male homosexuality; however, reliance on such cues can lead to frequent misjudgments as was the case with the female raters. Although our study is based on a community sample recruited in a non-English speaking country, the results are generally consistent with the previous research and thus corroborate the validity of sexual orientation attributions.

  15. Perceived sexual orientation based on vocal and facial stimuli is linked to self-rated sexual orientation in Czech men.

    Directory of Open Access Journals (Sweden)

    Jaroslava Varella Valentova

    Full Text Available Previous research has shown that lay people can accurately assess male sexual orientation based on limited information, such as face, voice, or behavioral display. Gender-atypical traits are thought to serve as cues to sexual orientation. We investigated the presumed mechanisms of sexual orientation attribution using a standardized set of facial and vocal stimuli of Czech men. Both types of stimuli were rated for sexual orientation and masculinity-femininity by non-student heterosexual women and homosexual men. Our data showed that by evaluating vocal stimuli both women and homosexual men can judge sexual orientation of the target men in agreement with their self-reported sexual orientation. Nevertheless, only homosexual men accurately attributed sexual orientation of the two groups from facial images. Interestingly, facial images of homosexual targets were rated as more masculine than heterosexual targets. This indicates that attributions of sexual orientation are affected by stereotyped association between femininity and male homosexuality; however, reliance on such cues can lead to frequent misjudgments as was the case with the female raters. Although our study is based on a community sample recruited in a non-English speaking country, the results are generally consistent with the previous research and thus corroborate the validity of sexual orientation attributions.

  16. Ship Targets Discrimination Algorithm in SAR Images Based on Hu Moment Feature and Texture Feature

    Directory of Open Access Journals (Sweden)

    Liu Lei

    2016-01-01

    Full Text Available To discriminate ship targets in SAR images, this paper proposes a method based on the combination of Hu moment features and texture features. First, 7 Hu moment features are extracted; the gray-level co-occurrence matrix is then used to extract the features of mean, variance, uniformity, energy, entropy, inertia moment, correlation, and difference. Finally, a k-nearest neighbour classifier is used to analyse the 15-dimensional feature vectors. The experimental results show that the proposed method is effective.
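
    As a rough sketch of the 15-dimensional feature vector described above, the code below concatenates the 7 Hu moments with 8 gray-level co-occurrence matrix statistics and feeds them to a k-nearest-neighbour classifier; the GLCM statistics are approximated with what scikit-image exposes plus entropy and variance (not the paper's exact list), and the image chips and labels are synthetic.

```python
# Sketch: 7 Hu moments + 8 GLCM texture statistics per image chip, then kNN.
# GLCM statistics here approximate the paper's list with scikit-image props.
import numpy as np
import cv2
from skimage.feature import graycomatrix, graycoprops
from sklearn.neighbors import KNeighborsClassifier

def chip_features(chip):
    hu = cv2.HuMoments(cv2.moments(chip)).flatten()                # 7 Hu moments
    glcm = graycomatrix(chip, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    props = [graycoprops(glcm, p)[0, 0] for p in
             ("contrast", "dissimilarity", "homogeneity", "energy", "correlation", "ASM")]
    p = glcm[:, :, 0, 0]
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))                # GLCM entropy
    return np.concatenate([hu, props, [entropy, p.var()]])         # 7 + 6 + 2 = 15

rng = np.random.default_rng(1)
chips = rng.integers(0, 256, size=(40, 32, 32), dtype=np.uint8)    # toy image chips
labels = rng.integers(0, 2, size=40)                               # ship / non-ship (toy)
X = np.array([chip_features(c) for c in chips])
knn = KNeighborsClassifier(n_neighbors=3).fit(X[:30], labels[:30])
print("toy accuracy:", knn.score(X[30:], labels[30:]))
```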

  17. Corporate Features and Faith-Based Academies

    Science.gov (United States)

    Green, Elizabeth

    2009-01-01

    This article forms an introductory exploration into the relationship between corporate features and religious values in Academies sponsored by a Christian foundation. This is a theme which arose from research comprising the ethnography of a City Technology College (CTC) with a Christian ethos. The Christian foundation which sponsors the CTC also…

  18. Surface characterization based upon significant topographic features

    Energy Technology Data Exchange (ETDEWEB)

    Blanc, J; Grime, D; Blateyron, F, E-mail: fblateyron@digitalsurf.fr [Digital Surf, 16 rue Lavoisier, F-25000 Besancon (France)

    2011-08-19

    Watershed segmentation and Wolf pruning, as defined in ISO 25178-2, allow the detection of significant features on surfaces and their characterization in terms of dimension, area, volume, curvature, shape or morphology. These new tools provide a robust way to specify functional surfaces.

  19. FaceWarehouse: a 3D facial expression database for visual computing.

    Science.gov (United States)

    Cao, Chen; Weng, Yanlin; Zhou, Shun; Tong, Yiying; Zhou, Kun

    2014-03-01

    We present FaceWarehouse, a database of 3D facial expressions for visual computing applications. We use Kinect, an off-the-shelf RGBD camera, to capture 150 individuals aged 7-80 from various ethnic backgrounds. For each person, we captured the RGBD data of her different expressions, including the neutral expression and 19 other expressions such as mouth-opening, smile, kiss, etc. For every RGBD raw data record, a set of facial feature points on the color image such as eye corners, mouth contour, and the nose tip are automatically localized, and manually adjusted if better accuracy is required. We then deform a template facial mesh to fit the depth data as closely as possible while matching the feature points on the color image to their corresponding points on the mesh. Starting from these fitted face meshes, we construct a set of individual-specific expression blendshapes for each person. These meshes with consistent topology are assembled as a rank-3 tensor to build a bilinear face model with two attributes: identity and expression. Compared with previous 3D facial databases, for every person in our database, there is a much richer matching collection of expressions, enabling depiction of most human facial actions. We demonstrate the potential of FaceWarehouse for visual computing with four applications: facial image manipulation, face component transfer, real-time performance-based facial image animation, and facial animation retargeting from video to image.

  20. Facial Sports Injuries

    Science.gov (United States)

    ... should receive immediate medical attention. Prevention of Facial Sports Injuries: The best way to treat facial sports injuries ...

  1. Children and Facial Trauma

    Science.gov (United States)

    ... What is facial trauma? The term facial trauma means any injury to ...

  2. Feature selection with neighborhood entropy-based cooperative game theory.

    Science.gov (United States)

    Zeng, Kai; She, Kun; Niu, Xinzheng

    2014-01-01

    Feature selection plays an important role in machine learning and data mining. In recent years, various feature measurements have been proposed to select significant features from high-dimensional datasets. However, most traditional feature selection methods will ignore some features which have strong classification ability as a group but are weak as individuals. To deal with this problem, we redefine the redundancy, interdependence, and independence of features by using neighborhood entropy. Then the neighborhood entropy-based feature contribution is proposed under the framework of cooperative game. The evaluative criteria of features can be formalized as the product of contribution and other classical feature measures. Finally, the proposed method is tested on several UCI datasets. The results show that the neighborhood entropy-based cooperative game theory model (NECGT) yields better performance than classical ones.

  3. Facial Scar Revision: Understanding Facial Scar Treatment

    Science.gov (United States)

    ... When the skin is injured from a cut or tear, the body heals by forming scar tissue. The appearance of the scar can range from ...

  4. Emotion Analysis Based on Facial Expression Recognition in Smart Learning Environment

    Institute of Scientific and Technical Information of China (English)

    孙波; 刘永娜; 陈玖冰; 罗继鸿; 张迪

    2015-01-01

    expression recognition. In an optimal situation, the related individual facial features can be separated during the process of expression recognition. According to the FACS (Facial Action Coding System) proposed by Ekman, a famous psychologist, we constructed an emotion analysis framework based on facial expression recognition in a smart learning environment. We used a feature decomposition method to decompose the facial feature and the expressional feature into a face subspace and an expression subspace, respectively. Expression recognition is then performed in the expression subspace, eliminating the interference of facial features. Experimental results on the JAFFE database suggest that our method is effective. Facial expression recognition for emotional intervention has been implemented in Magic Learning, an emotional interaction subsystem between learners and virtual teachers in a 3D virtual learning environment.

  5. Structure damage detection based on random forest recursive feature elimination

    Science.gov (United States)

    Zhou, Qifeng; Zhou, Hao; Zhou, Qingqing; Yang, Fan; Luo, Linkai

    2014-05-01

    Feature extraction is a key preliminary step in structural damage detection. In this paper, a structural damage detection method based on wavelet packet decomposition (WPD) and random forest recursive feature elimination (RF-RFE) is proposed. In order to obtain the most effective feature subset and to improve identification accuracy, a two-stage feature selection method is adopted after WPD. First, the damage features are ranked according to the original random forest variable importance analysis. Second, RF-RFE is used to eliminate the least important feature and reorder the feature list at each iteration, yielding a new feature importance sequence. Finally, the k-nearest neighbor (KNN) algorithm, as a benchmark classifier, is used to evaluate the extracted feature subset. A four-storey steel shear building model is chosen as an example for method verification. The experimental results show that using the smaller feature set obtained from the proposed method can achieve higher identification accuracy and reduce the detection time cost.
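
    A hedged sketch of the two-stage selection idea follows: a random forest ranks (synthetic) wavelet-packet damage features, recursive feature elimination drops the least important feature per round, and a kNN benchmark scores the selected subset.

```python
# Sketch: rank damage features with a random forest, recursively eliminate the
# least important one (RFE), and score the selected subset with a kNN
# benchmark classifier. Features and labels are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))                    # 200 samples x 16 WPD energy features (toy)
y = (X[:, 2] - X[:, 7] + 0.3 * rng.normal(size=200) > 0).astype(int)  # toy damage labels

rf = RandomForestClassifier(n_estimators=200, random_state=0)
selector = RFE(rf, n_features_to_select=4, step=1).fit(X, y)   # drop one feature per round
subset = selector.support_

knn = KNeighborsClassifier(n_neighbors=5)
print("kNN CV accuracy on selected features:",
      cross_val_score(knn, X[:, subset], y, cv=5).mean())
```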

  6. Precise Facial Feature Localization under Non-Restraint Environment with Limited Training Images

    Institute of Scientific and Technical Information of China (English)

    陈莹; 张龙媛

    2013-01-01

    After analyzing the limitations of current methods, a precise localization strategy with limited training data is proposed in a probability framework. Texture and geometry information of facial elements are extracted as model features after comparative analysis with other traditional descriptors. A Gaussian mixture model is used for the probability modeling, which describes well the distribution of each model feature extracted under different facial conditions. Then, a series of fusion strategies are designed for facial feature localization, which consider the probability distribution of each facial feature, the distribution characteristics of their surrounding elements, and their geometric constraints. The experimental results show that the proposed method can precisely localize facial features with a limited number of training sample images belonging to a single subject, and that it outperforms other methods in localization accuracy.
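
    The Gaussian-mixture modelling step can be sketched as below: a GMM is fitted to colour samples of one facial feature and candidate pixels are scored by log-likelihood. The colour space, component count, and training samples are illustrative assumptions.

```python
# Sketch of the probabilistic modelling step: fit a Gaussian mixture model to
# colour samples of a facial feature (e.g. lips) and score candidate pixels by
# log-likelihood. Training colours here are synthetic placeholders.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Toy "lip" colour samples in a normalised colour space (3-D vectors).
lip_colors = rng.normal(loc=[0.6, 0.3, 0.3], scale=0.05, size=(300, 3))

gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0).fit(lip_colors)

# Score two candidate pixels: one lip-like, one skin-like.
candidates = np.array([[0.61, 0.31, 0.29],
                       [0.80, 0.65, 0.55]])
print(gmm.score_samples(candidates))   # higher log-likelihood = more lip-like
```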

  7. Straight line feature based image distortion correction

    Institute of Scientific and Technical Information of China (English)

    Zhang Haofeng; Zhao Chunxia; Lu Jianfeng; Tang Zhenmin; Yang Jingyu

    2008-01-01

    An image distortion correction method is proposed which uses straight-line features. Many parallel lines of different directions are extracted from different images and then used to optimize the distortion parameters by nonlinear least squares. A step-by-step strategy is adopted during the optimization. The 3D world coordinates do not need to be known, and the method is easy to implement. The experimental results show its high accuracy.

  8. Geometrically Invariant Watermarking Scheme Based on Local Feature Points

    Directory of Open Access Journals (Sweden)

    Jing Li

    2012-06-01

    Full Text Available Based on local invariant feature points and the cross-ratio principle, this paper presents a feature-point-based image watermarking scheme. It is robust to geometric attacks and some signal processing operations. It extracts local invariant feature points from the image using an improved scale invariant feature transform algorithm. Utilizing these points as vertexes, it constructs quadrilaterals to serve as local feature regions, and the watermark is inserted into these local feature regions repeatedly. In order to obtain stable local regions, it adjusts the number and distribution of extracted feature points. In every chosen local feature region, it decides the locations to embed watermark bits based on the cross ratio of four collinear points; the cross ratio is invariant to projective transformation. Watermark bits are embedded by quantization modulation, in which the quantization step value is computed from the given PSNR. Experimental results show that the proposed method can withstand a wide range of geometric attacks as well as compound geometric attacks.
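
    For reference, the cross ratio of four collinear points used above can be computed as in the short sketch below; the points are toy coordinates, and the unsigned-distance form assumes the points are given in order along the line.

```python
# Sketch of the cross ratio of four collinear points, the projective invariant
# used to decide embedding locations inside each feature region.
import numpy as np

def cross_ratio(a, b, c, d):
    """Points are 2-D coordinates lying (approximately) on one line, in order."""
    a, b, c, d = map(np.asarray, (a, b, c, d))
    ac = np.linalg.norm(c - a)
    bd = np.linalg.norm(d - b)
    bc = np.linalg.norm(c - b)
    ad = np.linalg.norm(d - a)
    return (ac * bd) / (bc * ad)

pts = [(0, 0), (1, 0), (3, 0), (7, 0)]
print(cross_ratio(*pts))   # unchanged under any projective map of the line
```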

  9. Prediction of Facial Beauty Based on HodgeRank

    Institute of Scientific and Technical Information of China (English)

    蒋婷; 朱明

    2016-01-01

    In recent years, with the rapid development of computer technology, the perception of human facial beauty has become an important aspect of human intelligence and has attracted more and more attention from researchers. Current methods depend heavily on manual scoring of the training data set, and their facial beauty assessments are not detailed enough. This paper therefore aims to investigate and develop intelligent systems that learn the concept of female facial beauty with data mining and produce human-like predictors. Our work is notably different from, and goes beyond, previous works. We impose fewer restrictions in terms of pose, lighting, and background on the face images used for training and testing, which greatly reduces the manual operation needed for classification, and we do not require costly manual annotation of landmark facial features but simply take raw pixels or texture features as inputs. We show that a biologically-inspired model with clustering and an improved BP network can produce results that are much more human-like.

  10. Simultaneous Channel and Feature Selection of Fused EEG Features Based on Sparse Group Lasso

    Directory of Open Access Journals (Sweden)

    Jin-Jia Wang

    2015-01-01

    Full Text Available Feature extraction and classification of EEG signals are core parts of brain computer interfaces (BCIs). Due to the high dimension of the EEG feature vector, an effective feature selection algorithm has become an integral part of research studies. In this paper, we present a new method based on a wrapped Sparse Group Lasso for channel and feature selection of fused EEG signals. The high-dimensional fused features are firstly obtained, which include the power spectrum, time-domain statistics, AR model, and the wavelet coefficient features extracted from the preprocessed EEG signals. The wrapped channel and feature selection method is then applied, which uses the logistic regression model with Sparse Group Lasso penalized function. The model is fitted on the training data, and parameter estimation is obtained by modified blockwise coordinate descent and coordinate gradient descent method. The best parameters and feature subset are selected by using a 10-fold cross-validation. Finally, the test data is classified using the trained model. Compared with existing channel and feature selection methods, results show that the proposed method is more suitable, more stable, and faster for high-dimensional feature fusion. It can simultaneously achieve channel and feature selection with a lower error rate. The test accuracy on the data used from international BCI Competition IV reached 84.72%.

  11. Simultaneous channel and feature selection of fused EEG features based on Sparse Group Lasso.

    Science.gov (United States)

    Wang, Jin-Jia; Xue, Fang; Li, Hui

    2015-01-01

    Feature extraction and classification of EEG signals are core parts of brain computer interfaces (BCIs). Due to the high dimension of the EEG feature vector, an effective feature selection algorithm has become an integral part of research studies. In this paper, we present a new method based on a wrapped Sparse Group Lasso for channel and feature selection of fused EEG signals. The high-dimensional fused features are firstly obtained, which include the power spectrum, time-domain statistics, AR model, and the wavelet coefficient features extracted from the preprocessed EEG signals. The wrapped channel and feature selection method is then applied, which uses the logistic regression model with Sparse Group Lasso penalized function. The model is fitted on the training data, and parameter estimation is obtained by modified blockwise coordinate descent and coordinate gradient descent method. The best parameters and feature subset are selected by using a 10-fold cross-validation. Finally, the test data is classified using the trained model. Compared with existing channel and feature selection methods, results show that the proposed method is more suitable, more stable, and faster for high-dimensional feature fusion. It can simultaneously achieve channel and feature selection with a lower error rate. The test accuracy on the data used from international BCI Competition IV reached 84.72%.

  12. Association study of Demodex bacteria and facial dermatoses based on DGGE technique.

    Science.gov (United States)

    Zhao, YaE; Yang, Fan; Wang, RuiLing; Niu, DongLing; Mu, Xin; Yang, Rui; Hu, Li

    2017-03-01

    The role of bacteria is unclear in the facial skin lesions caused by Demodex. To shed some light on this issue, we conducted a case-control study comparing cases with facial dermatoses with controls with healthy skin using denaturing gradient gel electrophoresis (DGGE) technique. The bacterial diversity, composition, and principal component were analyzed for Demodex bacteria and the matched facial skin bacteria. The result of mite examination showed that all 33 cases were infected with Demodex folliculorum (D. f), whereas 16 out of the 30 controls were infected with D. f, and the remaining 14 controls were infected with Demodex brevis (D. b). The diversity analysis showed that only evenness index presented statistical difference between mite bacteria and matched skin bacteria in the cases. The composition analysis showed that the DGGE bands of cases and controls were assigned to 12 taxa of 4 phyla, including Proteobacteria (39.37-52.78%), Firmicutes (2.7-26.77%), Actinobacteria (0-5.71%), and Bacteroidetes (0-2.08%). In cases, the proportion of Staphylococcus in Firmicutes was significantly higher than that in D. f controls and D. b controls, while the proportion of Sphingomonas in Proteobacteria was significantly lower than that in D. f controls. The between-group analysis (BGA) showed that all the banding patterns clustered into three groups, namely, D. f cases, D. f controls, and D. b controls. Our study suggests that the bacteria in Demodex should come from the matched facial skin bacteria. Proteobacteria and Firmicutes are the two main taxa. The increase of Staphylococcus and decrease of Sphingomonas might be associated with the development of facial dermatoses.

  13. Facial blindsight

    Directory of Open Access Journals (Sweden)

    Marco Solcà

    2015-09-01

    Full Text Available Blindsight denotes unconscious residual visual capacities in the context of an inability to consciously recollect or identify visual information. It has been described for color and shape discrimination, movement or facial emotion recognition. The present study investigates a patient suffering from cortical blindness whilst maintaining select residual abilities in face detection. Our patient presented the capacity to distinguish between jumbled/normal faces, known/unknown faces or famous people’s categories although he failed to explicitly recognize or describe them. Conversely, performance was at chance level when asked to categorize non-facial stimuli. Our results provide clinical evidence for the notion that some aspects of facial processing can occur without perceptual awareness, possibly using direct tracts from the thalamus to associative visual cortex, bypassing the primary visual cortex.

  14. Kernel based visual tracking with scale invariant features

    Institute of Scientific and Technical Information of China (English)

    Risheng Han; Zhongliang Jing; Yuanxiang Li

    2008-01-01

    Kernel-based tracking has two disadvantages: the tracking window size cannot be adjusted efficiently, and the kernel-based color distribution may not have enough ability to discriminate the object from a cluttered background. To boost the features' discriminating ability, both scale invariant features and kernel-based color distribution features are used as descriptors of the tracked object. The proposed algorithm can keep tracking an object of varying scale even when the surrounding background is similar to the object's appearance.

  15. Comparison of texture features based on Gabor filters

    NARCIS (Netherlands)

    Grigorescu, Simona E.; Petkov, Nicolai; Kruizinga, Peter

    2002-01-01

    Texture features that are based on the local power spectrum obtained by a bank of Gabor filters are compared. The features differ in the type of nonlinear post-processing which is applied to the local power spectrum. The following features are considered: Gabor energy, complex moments, and grating cell operator features.

  16. Study on Isomerous CAD Model Exchange Based on Feature

    Institute of Scientific and Technical Information of China (English)

    SHAO Xiaodong; CHEN Feng; XU Chenguang

    2006-01-01

    A feature-based model-exchange method between isomerous CAD systems is put forward in this paper. In this method, CAD model information is accessed at both the feature and geometry levels and converted according to standard feature operations. The feature information, including the feature tree, dimensions and constraints, which would be lost in traditional data conversion, as well as the geometry, is converted completely from the source CAD system to the destination one. Thus the transferred model can be edited through feature operations, which cannot be achieved by a general model-exchange interface.

  17. Application of data fusion in computer facial recognition

    Directory of Open Access Journals (Sweden)

    Wang Ai Qiang

    2013-11-01

    Full Text Available A single recognition method yields an insufficient recognition rate in computer facial recognition. We propose a new facial recognition method using data fusion technology, in which a variety of recognition algorithms are combined to form a fusion-based face recognition system that improves the recognition rate in several ways. Data fusion is considered at three levels: data-level fusion, feature-level fusion, and decision-level fusion. The data layer uses a simple weighted-average algorithm, which is easy to implement; an artificial neural network algorithm is selected for the feature layer, and a fuzzy reasoning algorithm is used at the decision layer. Finally, we compared the method with the BP neural network algorithm on the MATLAB experimental platform. The results show that the recognition rate is greatly improved after adopting data fusion technology in computer facial recognition.

  18. CONSTRUCTION AND MODIFICATION OF FLEXIBLE FEATURE-BASED MODELS

    Institute of Scientific and Technical Information of China (English)

    1999-01-01

    A new approach is proposed to generate flexible feature-based models (FFBM), which can be modified dynamically. A BRep/CSFG/FRG hybrid scheme is used to describe FFBM, in which BRep explicitly defines the model, the CSFG (Constructive solid-feature geometry) tree records the feature-based modelling procedure, and the FRG (Feature relation graph) reflects different kinds of relationships among features. Topological operators with local retrievability are designed to implement feature addition, which is traced in detail by a topological operation list (TOL). As a result, FFBM can be modified directly in the system database. Related features' chain reactions and variable topologies are supported in design modification, after which the product information adhering to features will not be lost. Further, a feature can be modified as rapidly as it was added.

  19. EEG signal features extraction based on fractal dimension.

    Science.gov (United States)

    Finotello, Francesca; Scarpa, Fabio; Zanon, Mattia

    2015-01-01

    The spread of electroencephalography (EEG) in countless applications has fostered the development of new techniques for extracting synthetic and informative features from EEG signals. However, the definition of an effective feature set depends on the specific problem to be addressed and is currently an active field of research. In this work, we investigated the application of features based on fractal dimension to a problem of sleep identification from EEG data. We demonstrated that features based on fractal dimension, including two novel indices defined in this work, add valuable information to standard EEG features and significantly improve sleep identification performance.
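
    One widely used fractal-dimension feature for EEG is the Higuchi fractal dimension, sketched below on a synthetic signal; this is a generic illustration and does not reproduce the paper's exact indices, including its two novel ones.

```python
# Sketch of the Higuchi fractal dimension, a common fractal-dimension feature
# for EEG, computed here on a synthetic signal.
import numpy as np

def higuchi_fd(x, kmax=10):
    x = np.asarray(x, dtype=float)
    n = len(x)
    lk = []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            # normalised curve length for this offset m and step k
            L = np.sum(np.abs(np.diff(x[idx]))) * (n - 1) / ((len(idx) - 1) * k * k)
            lengths.append(L)
        lk.append(np.mean(lengths))
    k_vals = np.arange(1, kmax + 1)
    slope, _ = np.polyfit(np.log(1.0 / k_vals), np.log(lk), 1)   # log L(k) vs log(1/k)
    return slope

t = np.linspace(0, 4, 1024)
eeg_like = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(0).normal(size=t.size)
print(round(higuchi_fd(eeg_like), 3))
```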

  20. Exploiting facial expressions for affective video summarisation

    NARCIS (Netherlands)

    Joho, H.; Jose, J.M.; Valenti, R.; Sebe, N.; Marchand-Maillet, S.; Kompatsiaris, I.

    2009-01-01

    This paper presents an approach to affective video summarisation based on the facial expressions (FX) of viewers. A facial expression recognition system was deployed to capture a viewer's face and his/her expressions. The user's facial expressions were analysed to infer personalised affective scenes

  1. Robust speech features representation based on computational auditory model

    Institute of Scientific and Technical Information of China (English)

    LU Xugang; JIA Chuan; DANG Jianwu

    2004-01-01

    A speech signal processing and feature extraction method based on a computational auditory model is proposed. The computational model is based on psychological and physiological knowledge and digital signal processing methods. For each stage of the hearing perception system, there is a corresponding computational model to simulate its function, and speech features are extracted at different levels in each stage. A further processing step for the primary auditory spectrum based on lateral inhibition is proposed to extract much more robust speech features. All these features can be regarded as internal representations of the speech stimulus in the hearing system. Robust speech recognition experiments were conducted to test the robustness of the features. Results show that the representations based on the proposed computational auditory model are robust representations for speech signals.

  2. Accurate Image Retrieval Algorithm Based on Color and Texture Feature

    Directory of Open Access Journals (Sweden)

    Chunlai Yan

    2013-06-01

    Full Text Available Content-Based Image Retrieval (CBIR) is one of the most active hot spots in the current research field of multimedia retrieval. Based on the description and extraction of the visual content (features) of an image, CBIR aims to find images that contain specified content (features) in an image database. In this paper, several key technologies of CBIR, e.g. the extraction of the color and texture features of the image as well as the similarity measures, are investigated. On the basis of the theoretical research, an image retrieval system based on color and texture features is designed. In this system, a weighted color feature based on HSV space is adopted as the color feature vector; four features of the co-occurrence matrix, namely energy, entropy, inertia quadrature and correlation, are used to construct the texture vectors; and the Euclidean distance is employed as the similarity measure. Experimental results show that this CBIR system is efficient in image retrieval.
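
    As a rough sketch of the feature pipeline described above (not the authors' exact weighting), the following code builds an HSV colour histogram and four grey-level co-occurrence-matrix statistics for an image and compares two images by Euclidean distance. scikit-image exposes contrast (inertia), energy and correlation directly, while entropy is computed from the normalised matrix; bin counts and GLCM offsets are assumptions.

```python
import numpy as np
from skimage import color
from skimage.feature import graycomatrix, graycoprops

def cbir_features(rgb):
    """Concatenate an HSV colour histogram with GLCM texture statistics."""
    hsv = color.rgb2hsv(rgb)
    hist, _ = np.histogramdd(hsv.reshape(-1, 3), bins=(8, 4, 4), range=((0, 1),) * 3)
    hist = hist.ravel() / hist.sum()                       # colour feature vector

    gray = (color.rgb2gray(rgb) * 255).astype(np.uint8)
    glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    texture = np.array([graycoprops(glcm, 'energy')[0, 0],
                        entropy,
                        graycoprops(glcm, 'contrast')[0, 0],   # inertia
                        graycoprops(glcm, 'correlation')[0, 0]])
    return np.concatenate([hist, texture])

def distance(img_a, img_b):
    return np.linalg.norm(cbir_features(img_a) - cbir_features(img_b))

# Toy usage with random "images"; real use would load the database images.
rng = np.random.default_rng(2)
a, b = rng.random((32, 32, 3)), rng.random((32, 32, 3))
print("Euclidean distance between feature vectors:", round(distance(a, b), 4))
```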

  3. INTEGRATED EXPRESSIONAL AND COLOR INVARIANT FACIAL RECOGNITION SCHEME FOR HUMAN BIOMETRIC SYSTEM

    Directory of Open Access Journals (Sweden)

    M.Punithavalli

    2013-09-01

    Full Text Available In many practical applications such as biometrics, video surveillance and human-computer interaction, face recognition plays a major role. Previous works focused on recognizing and enhancing biometric systems based on facial components. In this work, we build an Integrated Expressional and Color Invariant Facial Recognition scheme for human biometric recognition, suited to public-participation areas with different security-provisioning requirements. First, the features of the face are identified and processed using a Bayes classifier with RGB and HSV color bands. Second, psychological emotional variance is identified and linked with the respective human facial expression based on the facial action coding system. Finally, an integrated expressional and color invariant facial recognition is proposed for varied conditions of illumination, pose, transformation, etc. These conditions on the color invariant model suit an easy and more efficient biometric recognition system in the public domain and in highly confidential security zones. The integration is derived from a genetic operation on the color and expression components of the facial feature system. Experimental evaluation is planned on public face databases (DBs) such as CMU-PIE, Color FERET, XM2VTSDB, SCface, and FRGC 2.0 to estimate the performance of the proposed integrated expressional facial and color invariant recognition scheme (IEFCIRS). Performance evaluation is based on constraints such as recognition rate, security and evaluation time.

  4. Dynamic Approaches for Facial Recognition Using Digital Image Speckle Correlation

    Science.gov (United States)

    Rafailovich-Sokolov, Sara; Guan, E.; Afriat, Isablle; Rafailovich, Miriam; Sokolov, Jonathan; Clark, Richard

    2004-03-01

    Digital image analysis techniques have been extensively used in facial recognition. To date, most static facial characterization techniques, which are usually based on Fourier transform methods, are sensitive to lighting, shadows, or modification of appearance by makeup, natural aging or surgery. In this study we demonstrate that it is possible to uniquely identify faces by analyzing the natural motion of facial features with Digital Image Speckle Correlation (DISC). Human skin has a natural pattern produced by the texture of the skin pores, which is easily visible with conventional digital cameras of resolution greater than 4 megapixels. Hence the application of the DISC method to the analysis of facial motion is straightforward. Here we demonstrate that the vector diagrams produced by this method for facial images are directly correlated to the underlying muscle structure, which is unique to an individual and is not affected by lighting or make-up. Furthermore, we show that this method can also be used for medical diagnosis in the early detection of facial paralysis and other skin disorders.

  5. Parotid lymphangioma associated with facial nerve paralysis.

    Science.gov (United States)

    Imaizumi, Mitsuyoshi; Tani, Akiko; Ogawa, Hiroshi; Omori, Koichi

    2014-10-01

    Parotid lymphangioma is a relatively rare disease that is usually detected in infancy or early childhood, and which has typical features. Clinical reports of facial nerve paralysis caused by lymphangioma, however, are very rare. Usually, facial nerve paralysis in a child suggests malignancy. Here we report a very rare case of parotid lymphangioma associated with facial nerve paralysis. A 7-year-old boy was admitted to hospital with a rapidly enlarging mass in the left parotid region. Left peripheral-type facial nerve paralysis was also noted. Computed tomography and magnetic resonance imaging also revealed multiple cystic lesions. Open biopsy was undertaken in order to investigate the cause of the facial nerve paralysis. The histopathological findings of the excised tumor were consistent with lymphangioma. Prednisone (40 mg/day) was given in a tapering dose schedule. Facial nerve paralysis was completely cured 1 month after treatment. There has been no recurrent facial nerve paralysis for eight years.

  6. Segmentation-Based PolSAR Image Classification Using Visual Features: RHLBP and Color Features

    Directory of Open Access Journals (Sweden)

    Jian Cheng

    2015-05-01

    Full Text Available A segmentation-based fully-polarimetric synthetic aperture radar (PolSAR) image classification method that incorporates texture features and color features is designed and implemented. This method is based on a framework that conjunctively uses statistical region merging (SRM) for segmentation and a support vector machine (SVM) for classification. In the segmentation step, we propose an improved local binary pattern (LBP) operator named the regional homogeneity local binary pattern (RHLBP) to guarantee regional homogeneity in PolSAR images. In the classification step, color features extracted from false color images are applied to improve the classification accuracy. The RHLBP operator and color features can provide discriminative information to separate pixels and regions that have similar polarimetric features but belong to different classes. Extensive experimental comparisons with conventional methods on L-band PolSAR data demonstrate the effectiveness of our proposed method for PolSAR image classification.
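
    The RHLBP operator itself is the authors' contribution and is not specified in this record; as a hedged illustration of the idea it builds on, the sketch below computes an ordinary uniform local binary pattern histogram per image region with scikit-image. Grid size, radius and neighbour count are assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def regional_lbp_histograms(image, grid=(4, 4), p=8, r=1.0):
    """Split a grayscale image into a grid and compute a uniform-LBP histogram per cell."""
    lbp = local_binary_pattern(image, P=p, R=r, method="uniform")
    n_bins = p + 2                                  # uniform patterns + "non-uniform" bin
    h, w = image.shape
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            cell = lbp[i * h // grid[0]:(i + 1) * h // grid[0],
                       j * w // grid[1]:(j + 1) * w // grid[1]]
            hist, _ = np.histogram(cell, bins=n_bins, range=(0, n_bins), density=True)
            feats.append(hist)
    return np.concatenate(feats)                    # one texture vector per image region grid

# Toy usage on a random "image"; a real pipeline would feed these vectors to an SVM.
img = np.random.default_rng(3).integers(0, 256, (64, 64)).astype(np.uint8)
print("Feature vector length:", regional_lbp_histograms(img).shape[0])
```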

  7. A New Computational Methodology for the Construction of Forensic, Facial Composites

    Science.gov (United States)

    Solomon, Christopher; Gibson, Stuart; Maylin, Matthew

    A facial composite generated from an eyewitness’s memory often constitutes the first and only means available for police forces to identify a criminal suspect. To date, commercial computerised systems for constructing facial composites have relied almost exclusively on a feature-based, ‘cut-and-paste’ method whose effectiveness has been fundamentally limited by both the witness’s limited ability to recall and verbalise facial features and by the large dimensionality of the search space. We outline a radically new approach to composite generation which combines a parametric, statistical model of facial appearance with a computational search algorithm based on interactive, evolutionary principles. We describe the fundamental principles on which the new system has been constructed, outline recent innovations in the computational search procedure and also report on the real-world experience of UK police forces who have been using a commercial version of the system.

  8. Feature Selection for Neural Network Based Stock Prediction

    Science.gov (United States)

    Sugunnasil, Prompong; Somhom, Samerkae

    We propose a new feature selection methodology for stock movement prediction. The methodology is based on finding the features that minimize a correlation relation function. We first produce all combinations of features and evaluate each of them with our evaluation function, searching the generated set with a hill-climbing approach. A self-organizing map based stock prediction model is used as the prediction method. We conduct experiments on data sets of the Microsoft Corporation, General Electric Co. and Ford Motor Co. The results show that our feature selection method can improve the efficiency of neural network based stock prediction.
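
    The abstract outlines, but does not formalise, a search for the feature subset that minimises a correlation-based criterion. The sketch below is an assumed simplification that hill-climbs over fixed-size subsets using the mean absolute pairwise correlation as the objective; the authors' exact relation function and SOM predictor are not reproduced.

```python
import numpy as np

def mean_abs_correlation(X, subset):
    """Objective: average absolute pairwise correlation among the selected features."""
    if len(subset) < 2:
        return 0.0
    c = np.corrcoef(X[:, sorted(subset)], rowvar=False)
    iu = np.triu_indices_from(c, k=1)
    return float(np.mean(np.abs(c[iu])))

def hill_climb_select(X, k, seed=0):
    """Greedy hill climbing: swap one feature at a time while the objective decreases."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    current = set(rng.choice(n_features, size=k, replace=False))
    best = mean_abs_correlation(X, current)
    improved = True
    while improved:
        improved = False
        for out_f in list(current):
            for in_f in set(range(n_features)) - current:
                candidate = (current - {out_f}) | {in_f}
                score = mean_abs_correlation(X, candidate)
                if score < best:
                    current, best, improved = candidate, score, True
    return sorted(current), best

# Toy usage on synthetic "indicator" data.
X = np.random.default_rng(4).standard_normal((200, 12))
subset, score = hill_climb_select(X, k=4)
print("Selected features:", subset, "mean |corr|:", round(score, 3))
```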

  9. Spatio-Temporal Pain Recognition in CNN-based Super-Resolved Facial Images

    DEFF Research Database (Denmark)

    Bellantonio, Marco; Haque, Mohammad Ahsanul; Rodriguez, Pau

    2017-01-01

    Facial expression provides a way of efficient pain detection, and automatic pain detection has exhibited even better performance since deep machine learning methods came onto the scene. In this paper, we identify three important factors to exploit in automatic pain detection: the spatial information regarding pain available in each of the facial video frames, the temporal-axis information regarding the pain expression pattern in a subject's video sequence, and the variation of face resolution. We employed a combination of a convolutional neural network and a recurrent neural network to set up a deep hybrid pain detection framework, evaluated on the available UNBC-McMaster Shoulder Pain database. As a contribution, the paper provides novel and important information regarding the performance of a hybrid deep learning framework for pain detection in facial images of different resolutions.

  10. Level Sets and Voronoi based Feature Extraction from any Imagery

    DEFF Research Database (Denmark)

    Sharma, O.; Anton, François; Mioc, Darka

    2012-01-01

    Polygon features are of interest in many GEOProcessing applications like shoreline mapping, boundary delineation, change detection, etc. This paper presents a unique new GPU-based methodology to automate feature extraction combining level sets, or mean shift based segmentation together with Voronoi...

  11. Neuro-fuzzy quantification of personal perceptions of facial images based on a limited data set.

    Science.gov (United States)

    Diago, Luis; Kitaoka, Tetsuko; Hagiwara, Ichiro; Kambayashi, Toshiki

    2011-12-01

    Artificial neural networks are nonlinear techniques that typically provide some of the most accurate predictive models of how faces are perceived in terms of the social impressions they make on people. However, they are often not suitable for many practical application domains because of their lack of transparency and comprehensibility. This paper proposes a new neuro-fuzzy method to investigate the characteristics of the facial images perceived as Iyashi by one hundred and fourteen subjects. Iyashi is a Japanese word used to describe a peculiar phenomenon that is mentally soothing, but is yet to be clearly defined. In order to gain a clear insight into the reasoning made by nonlinear prediction models such as holographic neural networks (HNN) in the classification of Iyashi expressions, the interpretability of the proposed fuzzy-quantized HNN (FQHNN) is improved by reducing the number of input parameters, creating membership functions and extracting fuzzy rules from the responses provided by the subjects about a limited dataset of 20 facial images. The experimental results show that the proposed FQHNN achieves a 2-8% increase in prediction accuracy compared with traditional neuro-fuzzy classifiers, while extracting 35 fuzzy rules that explain what characteristics a facial image should have in order to be classified as an Iyashi-stimulus for 87 subjects.

  12. Facial Resemblance Exaggerates Sex-Specific Jealousy-Based Decisions

    Directory of Open Access Journals (Sweden)

    Steven M. Platek

    2007-01-01

    Full Text Available Sex differences in reaction to a romantic partner's infidelity are well documented and are hypothesized to be attributable to sex-specific jealousy mechanisms, which are utilized to solve adaptive problems associated with the risk of extra-pair copulation. Males, because of the risk of cuckoldry, become more upset by sexual infidelity, while females, because of the loss of resources and biparental investment, tend to become more distressed by emotional infidelity. However, the degree to which these sex-specific reactions to jealousy interact with cues to kin is completely unknown. Here we investigated the interaction of facial resemblance with decisions about sex-specific jealousy scenarios. Fifty-nine volunteers were asked to imagine that two different people (represented by facial composites) informed them about their romantic partner's sexual or emotional infidelity. Consistent with previous research, males ranked sexual infidelity scenarios as most upsetting and females ranked emotional infidelity scenarios as most upsetting. However, when information about the infidelity was provided by a face that resembled the subject, sex-specific reactions to jealousy were exaggerated. This finding highlights the use of facial resemblance as a putative self-referent phenotypic matching cue that impacts trusting behavior in sexual contexts.

  13. Feature selection using feature dissimilarity measure and density-based clustering: Application to biological data

    Indian Academy of Sciences (India)

    Debarka Sengupta; Indranil Aich; Sanghamitra Bandyopadhyay

    2015-10-01

    Reduction of dimensionality has emerged as a routine process in modelling complex biological systems. A large number of feature selection techniques have been reported in the literature to improve model performance in terms of accuracy and speed. In the present article an unsupervised feature selection technique is proposed, using maximum information compression index as the dissimilarity measure and the well-known density-based cluster identification technique DBSCAN for identifying the largest natural group of dissimilar features. The algorithm is fast and less sensitive to the user-supplied parameters. Moreover, the method automatically determines the required number of features and identifies them. We used the proposed method for reducing dimensionality of a number of benchmark data sets of varying sizes. Its performance was also extensively compared with some other well-known feature selection methods.
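
    As a hedged sketch of the clustering step described above, the code below clusters features (not samples) with scikit-learn's DBSCAN on a precomputed dissimilarity matrix; a simple 1 − |correlation| dissimilarity stands in for the maximum information compression index used in the paper, and eps/min_samples are illustrative values.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_features(X, eps=0.3, min_samples=2):
    """Group similar features by running DBSCAN on a feature-feature dissimilarity matrix."""
    corr = np.corrcoef(X, rowvar=False)              # features are the columns of X
    dissimilarity = 1.0 - np.abs(corr)               # stand-in for the MICI measure
    labels = DBSCAN(eps=eps, min_samples=min_samples,
                    metric="precomputed").fit_predict(dissimilarity)
    return labels                                     # -1 marks features left unclustered

# Toy data: features 0-2 are near-duplicates, features 3-5 are independent.
rng = np.random.default_rng(5)
base = rng.standard_normal((300, 1))
X = np.hstack([base + 0.05 * rng.standard_normal((300, 3)),
               rng.standard_normal((300, 3))])
print("Feature cluster labels:", cluster_features(X))
```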

  14. Driver Fatigue Features Extraction

    Directory of Open Access Journals (Sweden)

    Gengtian Niu

    2014-01-01

    Full Text Available Driver fatigue is the main cause of traffic accidents, and how to extract effective fatigue features is important for recognition accuracy and traffic safety. To address this problem, this paper proposes a new method for extracting driver fatigue features from facial image sequences. In this method, each facial image in the sequence is first divided into non-overlapping blocks of the same size, and Gabor wavelets are employed to extract multiscale and multiorientation features. The mean value and standard deviation of each block's features are then calculated. Because the facial manifestation of fatigue is a dynamic process that develops over time, each block's features are analyzed across the sequence. Finally, the Adaboost algorithm is applied to select the most discriminating fatigue features. The proposed method was tested on a self-built database that includes a wide range of human subjects of different genders, poses, and illuminations in real-life fatigue conditions. Experimental results show the effectiveness of the proposed method.
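
    The per-frame step above (blockwise Gabor responses summarised by mean and standard deviation) can be sketched as follows with OpenCV; the kernel parameters, block size and image size are illustrative assumptions, and the temporal analysis and Adaboost selection stages are omitted.

```python
import cv2
import numpy as np

def gabor_block_features(gray, block=16, ksize=15,
                         scales=(4.0, 8.0), orientations=4):
    """Mean and standard deviation of Gabor responses for each non-overlapping block."""
    feats = []
    for lambd in scales:
        for k in range(orientations):
            theta = k * np.pi / orientations
            kernel = cv2.getGaborKernel((ksize, ksize), sigma=4.0, theta=theta,
                                        lambd=lambd, gamma=0.5)
            response = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kernel)
            h, w = response.shape
            for i in range(0, h - block + 1, block):
                for j in range(0, w - block + 1, block):
                    patch = response[i:i + block, j:j + block]
                    feats.extend([patch.mean(), patch.std()])
    return np.array(feats)

# Toy usage on a random face-sized crop; a real system would track blocks over frames.
face = np.random.default_rng(6).integers(0, 256, (64, 64)).astype(np.uint8)
print("Per-frame Gabor feature length:", gabor_block_features(face).shape[0])
```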

  15. Individual discriminative face recognition models based on subsets of features

    DEFF Research Database (Denmark)

    Clemmensen, Line Katrine Harder; Gomez, David Delgado; Ersbøll, Bjarne Kjær

    2007-01-01

    The elastic net model is able to select a subset of features with low computational effort compared to other state-of-the-art feature selection methods. Furthermore, the fact that the number of features is usually larger than the number of images in the database makes feature selection techniques such as forward selection or lasso regression inadequate for this part of the face recognition problem. In the experimental section, the performance of the elastic net model is compared with geometrical and color based algorithms widely used in face recognition such as Procrustes nearest neighbor, Eigenfaces, or Fisher...

  16. Variations of midline facial soft tissue thicknesses among three skeletal classes in Central Anatolian adults.

    Science.gov (United States)

    Gungor, Kahraman; Bulut, Ozgur; Hizliol, Ismail; Hekimoglu, Baki; Gurcan, Safa

    2015-11-01

    Facial reconstruction is a technique employed in a forensic investigation as a last resort to recreate an individual's facial appearance from his/her skull. Forensic anthropologists or artists use facial soft tissue thickness (FSTT) measurements as a guide in facial reconstructions. The aim of this study was to develop FSTT values for Central Anatolian adults, taking into consideration sex and skeletal classes; first, to achieve better results obtaining the likenesses of deceased individuals in two or three-dimensional forensic facial reconstructions and, second, to compare these values to existing databases. Lateral cephalograms were used to determine FSTT values at 10 midline facial landmarks of 167 adults. Descriptive statistics were calculated for these facial soft tissue thickness values, and these values were compared to those reported in two other comparable databases. The majority of the landmarks showed sex-based differences. Males were found to have significantly larger landmark values than female subjects. These results point not only to the necessity to present data in accordance with sexual dimorphism, but also the need to consider that individuals from different geographical areas have unique facial features and that, as a result, geographical population-specific FSTT values are required.

  17. Novel dynamic Bayesian networks for facial action element recognition and understanding

    Science.gov (United States)

    Zhao, Wei; Park, Jeong-Seon; Choi, Dong-You; Lee, Sang-Woong

    2011-12-01

    In daily life, language is an important tool of communication between people. Besides language, facial action can also provide a great amount of information. Therefore, facial action recognition has become a popular research topic in the field of human-computer interaction (HCI). However, facial action recognition is quite a challenging task due to its complexity. In a literal sense, there are thousands of facial muscular movements, many of which have very subtle differences. Moreover, muscular movements always occur simultaneously when the pose is changed. To address this problem, we first build a fully automatic facial points detection system based on a local Gabor filter bank and principal component analysis. Then, novel dynamic Bayesian networks are proposed to perform facial action recognition using the junction tree algorithm over a limited number of feature points. In order to evaluate the proposed method, we have used the Korean face database for model training. For testing, we used the CUbiC FacePix, facial expressions and emotion database, Japanese female facial expression database, and our own database. Our experimental results clearly demonstrate the feasibility of the proposed approach.

  18. Remote sensing image classification based on block feature point density analysis and multiple-feature fusion

    Science.gov (United States)

    Li, Shijin; Jiang, Yaping; Zhang, Yang; Feng, Jun

    2015-10-01

    With the development of remote sensing (RS) and related technologies, the resolution of RS images is increasing. Compared with moderate- or low-resolution images, high-resolution ones can provide more detailed ground information. However, various kinds of terrain have complex spatial distributions, and the different objects in high-resolution images exhibit a variety of features. These features are not equally effective, but some of them are complementary. Considering this, a new method is proposed to classify RS images based on hierarchical fusion of multiple features. Firstly, RS images are pre-classified into two categories according to whether feature points are uniformly or non-uniformly distributed. Then, the color histogram and Gabor texture features are extracted from the uniformly-distributed category, and the linear spatial pyramid matching using sparse coding (ScSPM) feature is obtained from the non-uniformly-distributed category. Finally, the classification is performed by two support vector machine classifiers. The experimental results on a large RS image database with 2100 images show that the overall classification accuracy is boosted by 10.1% in comparison with the highest accuracy of the single-feature classification methods. Compared with other multiple-feature fusion methods, the proposed method achieves the highest classification accuracy on this dataset, reaching 90.1%, and the time complexity of the algorithm is also greatly reduced.

  19. A novel use of the facial artery based buccinator musculo-mucosal island flap for reconstruction of the oropharynx.

    Science.gov (United States)

    Khan, K; Hinckley, V; Cassell, O; Silva, P; Winter, S; Potter, M

    2013-10-01

    The buccinator musculo-mucosal island or Zhao flap can be used to reconstruct a wide range of intra-oral defects including floor of mouth, tonsillar fossa and lateral tongue. We describe our experience with the inferiorly based facial artery buccinator musculo-mucosal flap for a novel use in the reconstruction of oropharyngeal tumours at the tongue base and lateral pharyngeal wall. We prospectively reviewed all patients who underwent buccinator musculo-mucosal island flap reconstruction examining indication, operative details, and post-operative outcomes. We describe our technique for its novel use in lateral pharynx/tongue base reconstruction through neck dissection access. Deeper flaps were adequately visualised and monitored using flexible nasoendoscopy. There were no flap failures with all patients achieving primary healing with minimal complications. All donor sites closed directly with minimal scarring. Two patients reported mild tightness on mouth opening and two patients reported transient weakness of the mandibular branch of the facial nerve. In our experience the buccinator musculo-mucosal island flap is an extremely versatile 'like for like' local flap option due to its long arc of rotation. As inset can be achieved via neck dissection access, this avoids lip/jaw split as per conventional oropharyngeal surgical management further minimising morbidity. We present the first series of its effective use in oropharyngeal reconstruction.

  20. Moment feature based fast feature extraction algorithm for moving object detection using aerial images.

    Directory of Open Access Journals (Sweden)

    A F M Saifuddin Saif

    Full Text Available Fast and computationally less complex feature extraction for moving object detection using aerial images from unmanned aerial vehicles (UAVs) remains an elusive goal in the field of computer vision research. The types of features used in current studies of moving object detection are typically chosen to improve the detection rate rather than to provide fast and computationally less complex feature extraction. Because moving object detection using aerial images from UAVs involves motion as seen from a certain altitude, effective and fast feature extraction is a vital issue for optimum detection performance. This research proposes a two-layer bucket approach based on a new feature extraction algorithm referred to as the moment-based feature extraction algorithm (MFEA). Because a moment represents the coherent intensity of pixels and motion estimation is a measurement of motion pixel intensity, this research uses this relation to develop the proposed algorithm. The experimental results reveal the successful performance of the proposed MFEA algorithm and the proposed methodology.
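
    The MFEA itself is not detailed in this record; the snippet below only illustrates the building block it is named after, computing raw and central image moments for a frame patch with OpenCV, which is the kind of pixel-intensity coherence measurement the abstract relates to motion estimation. The patches and the simulated shift are invented for the demonstration.

```python
import cv2
import numpy as np

def moment_features(patch):
    """Raw, central moments and intensity centroid of a grayscale patch."""
    m = cv2.moments(patch.astype(np.float32))
    cx = m["m10"] / m["m00"] if m["m00"] else 0.0    # intensity centroid x
    cy = m["m01"] / m["m00"] if m["m00"] else 0.0    # intensity centroid y
    return {"mass": m["m00"], "centroid": (cx, cy),
            "mu20": m["mu20"], "mu02": m["mu02"], "mu11": m["mu11"]}

# Comparing the moments of the same patch in consecutive frames hints at pixel motion.
rng = np.random.default_rng(7)
frame_t = rng.integers(0, 256, (32, 32)).astype(np.uint8)
frame_t1 = np.roll(frame_t, shift=2, axis=1)          # simulated horizontal shift
print(moment_features(frame_t)["centroid"], moment_features(frame_t1)["centroid"])
```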

  1. Highly comparative, feature-based time-series classification

    CERN Document Server

    Fulcher, Ben D

    2014-01-01

    A highly comparative, feature-based approach to time series classification is introduced that uses an extensive database of algorithms to extract thousands of interpretable features from time series. These features are derived from across the scientific time-series analysis literature, and include summaries of time series in terms of their correlation structure, distribution, entropy, stationarity, scaling properties, and fits to a range of time-series models. After computing thousands of features for each time series in a training set, those that are most informative of the class structure are selected using greedy forward feature selection with a linear classifier. The resulting feature-based classifiers automatically learn the differences between classes using a reduced number of time-series properties, and circumvent the need to calculate distances between time series. Representing time series in this way results in orders of magnitude of dimensionality reduction, allowing the method to perform well on ve...

  2. Cholesteatoma causing facial paralysis

    Directory of Open Access Journals (Sweden)

    José Ricardo Gurgel Testa

    2003-10-01

    blood supply or production of neurotoxic substances secreted from either the cholesteatoma matrix or bacteria enclosed in the tumor. AIM: To evaluate the incidence, clinical features and treatment of facial palsy due to cholesteatoma. STUDY DESIGN: Clinical retrospective. MATERIAL AND METHOD: Retrospective study of 10 cases of facial paralysis due to cholesteatoma, selected through a survey of 206 decompressions of the facial nerve for various aetiologies performed in the last 10 years at UNIFESP-EPM. RESULTS: The incidence of facial paralysis due to cholesteatoma in this study was 4.85%, with female predominance (60%). The average age of the patients was 39 years. The duration and severity of the facial palsy, together with the extension of the lesion, were important for the functional recovery of the facial nerve. CONCLUSION: An early surgical approach is necessary in these cases to improve nerve function more adequately. When disruption or intense fibrous replacement occurs in the facial nerve, nerve grafting (greater auricular/sural nerves) and/or hypoglossal-facial anastomosis may be suggested.

  3. An expressional dynamic feature extraction method based on Gabor filtering

    Institute of Scientific and Technical Information of China (English)

    朱明旱; 伍宗富; 罗大庸

    2012-01-01

    When existing dynamic feature extraction methods are used to extract expression features from an image sequence, facial appearance features are extracted along with them. To address this problem, an expressional dynamic feature extraction method based on Gabor filtering is proposed in this paper. When extracting expression features, the method takes advantage of the frequency and orientation selectivity of the Gabor filter to suppress the extraction of facial appearance features, thereby reducing the facial appearance content in the expression features extracted from the sequence. Experimental results on the Cohn-Kanade and CMU-AMP face databases show that the expression features extracted by the proposed method are much more effective for expression recognition in image sequences.

  4. Dissociable roles of internal feelings and face recognition ability in facial expression decoding.

    Science.gov (United States)

    Zhang, Lin; Song, Yiying; Liu, Ling; Liu, Jia

    2016-05-15

    The problem of emotion recognition has been tackled by researchers in both affective computing and cognitive neuroscience. While affective computing relies on analyzing visual features from facial expressions, it has been proposed that humans recognize emotions by internally simulating the emotional states conveyed by others' expressions, in addition to perceptual analysis of facial features. Here we investigated whether and how our internal feelings contributed to the ability to decode facial expressions. In two independent large samples of participants, we observed that individuals who generally experienced richer internal feelings exhibited a higher ability to decode facial expressions, and the contribution of internal feelings was independent of face recognition ability. Further, using voxel-based morphometry, we found that the gray matter volume (GMV) of bilateral superior temporal sulcus (STS) and the right inferior parietal lobule was associated with facial expression decoding through the mediating effect of internal feelings, while the GMV of bilateral STS, precuneus, and the right central opercular cortex contributed to facial expression decoding through the mediating effect of face recognition ability. In addition, the clusters in bilateral STS involved in the two components were neighboring yet separate. Our results may provide clues about the mechanism by which internal feelings, in addition to face recognition ability, serve as an important instrument for humans in facial expression decoding.

  5. Feature Selection for Image Retrieval based on Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Preeti Kushwaha

    2016-12-01

    Full Text Available This paper describes the development and implementation of feature selection for content-based image retrieval (CBIR). The proposed CBIR system extracts multiple features: colour, texture and shape, using three techniques, namely the colour moment, the grey-level co-occurrence matrix and the edge histogram descriptor. To reduce the curse of dimensionality and find the best features in the feature set, feature selection based on a genetic algorithm is applied. The images are then divided into similar classes using k-means clustering for fast retrieval and improved execution time. The experimental results show that feature selection using the GA reduces the retrieval time and also increases the retrieval precision, thus giving better and faster results compared with a normal image retrieval system. The results also report the precision and recall of the proposed approach compared with the previous approach for each image class. The CBIR system is more efficient and performs better using feature selection based on a genetic algorithm.
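
    A minimal genetic-algorithm feature-selection loop in the spirit of the abstract is sketched below; the population size, mutation rate and fitness (cross-validated accuracy of a small kNN classifier on the Iris data) are assumptions, and the authors' colour/texture/shape descriptors and k-means retrieval stage are not reproduced.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(8)

def fitness(mask, X, y):
    """Score a binary feature mask by 3-fold CV accuracy of a kNN classifier."""
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(KNeighborsClassifier(3), X[:, mask.astype(bool)], y, cv=3).mean()

def ga_select(X, y, pop=20, gens=15, p_mut=0.1):
    n = X.shape[1]
    population = rng.integers(0, 2, (pop, n))
    for _ in range(gens):
        scores = np.array([fitness(ind, X, y) for ind in population])
        parents = population[np.argsort(scores)[-pop // 2:]]      # keep the fitter half
        children = []
        while len(children) < pop - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n)                               # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= (rng.random(n) < p_mut).astype(child.dtype)   # bit-flip mutation
            children.append(child)
        population = np.vstack([parents, children])
    best = max(population, key=lambda ind: fitness(ind, X, y))
    return np.flatnonzero(best)

X, y = load_iris(return_X_y=True)
print("Selected feature indices:", ga_select(X, y))
```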

  6. Feature-based multiresolution techniques for product design

    Institute of Scientific and Technical Information of China (English)

    LEE Sang Hun; LEE Kunwoo

    2006-01-01

    3D computer-aided design (CAD) systems based on feature-based solid modelling technique have been widely spread and used for product design. However, when part models associated with features are used in various downstream applications,simplified models in various levels of detail (LODs) are frequently more desirable than the full details of the parts. In particular,the need for feature-based multiresolution representation of a solid model representing an object at multiple LODs in the feature unit is increasing for engineering tasks. One challenge is to generate valid models at various LODs after an arbitrary rearrangement of features using a certain LOD criterion, because composite Boolean operations consisting of union and subtraction are not commutative. The other challenges are to devise proper topological framework for multiresolution representation, to suggest more reasonable LOD criteria, and to extend applications. This paper surveys the recent research on these issues.

  7. Spatial Circular Granulation Method Based on Multimodal Finger Feature

    Directory of Open Access Journals (Sweden)

    Jinfeng Yang

    2016-01-01

    Full Text Available Finger-based personal identification has become an active research topic in recent years because of its high user acceptance and convenience. How to reliably and effectively fuse multimodal finger features together, however, remains a challenging problem in practice. In this paper, viewing the finger trait as the combination of a fingerprint, finger vein, and finger-knuckle-print, a new multimodal finger feature recognition scheme is proposed based on granular computing. First, the ridge texture features of FP, FV, and FKP are extracted using Gabor Ordinal Measures (GOM). Second, by combining the three-modal GOM feature maps in a color-based manner, we constitute the original feature object set of a finger. To represent finger features effectively, they are granulated into three levels of feature granules (FGs) in a bottom-up manner based on spatial circular granulation. In order to test the performance of the multilevel FGs, a top-down matching method is proposed. Experimental results show that the proposed method achieves a higher recognition accuracy in finger feature recognition.

  8. Multi-features Based Approach for Moving Shadow Detection

    Institute of Scientific and Technical Information of China (English)

    ZHOU Ning; ZHOU Man-li; XU Yi-ping; FANG Bao-hong

    2004-01-01

    In video-based surveillance applications, moving shadows can affect the correct localization and detection of moving objects. This paper presents a method for shadow detection and suppression used for moving visual object detection. The major novelty of the shadow suppression is the integration of several features, including a photometric invariant color feature, a motion edge feature, and a spatial feature. By modifying the handling of falsely detected shadows, the average detection rate of moving objects reaches above 90% in tests on the Hall-Monitor sequence.

  9. Opinion mining feature-level using Naive Bayes and feature extraction based analysis dependencies

    Science.gov (United States)

    Sanda, Regi; Baizal, Z. K. Abdurahman; Nhita, Fhira

    2015-12-01

    The development of the internet and related technology has had a major impact and given rise to a new kind of business called e-commerce. Many e-commerce sites provide convenient transactions, and consumers can also provide reviews or opinions on the products they purchase. These opinions can be used by both consumers and producers: consumers can learn the advantages and disadvantages of particular features of a product, while producers can analyse their own strengths and weaknesses as well as those of competitors' products. With so many opinions, a method is needed that lets the reader grasp the gist of the opinions as a whole. The idea comes from review summarization, which summarizes the overall opinion based on the sentiment and features it contains. In this study, the domain of focus is digital cameras. The research consists of four steps: 1) giving the system the knowledge to recognize the semantic orientation of an opinion; 2) identifying the features of the product; 3) identifying whether an opinion is positive or negative; 4) summarizing the results. The methods discussed include Naïve Bayes for sentiment classification, a feature extraction algorithm based on dependency analysis, which is one of the tools in Natural Language Processing (NLP), and a knowledge-based dictionary, which is useful for handling implicit features. The end result of the research is a summary that contains a set of consumer reviews organized by feature and sentiment. With the proposed method, the sentiment classification accuracy is 81.2% for positive test data and 80.2% for negative test data, and the feature extraction accuracy reaches 90.3%.
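
    For the sentiment-classification step, a minimal bag-of-words Naïve Bayes pipeline in scikit-learn looks like the sketch below; the tiny review snippets are invented placeholders, and the dependency-based feature extraction described in the paper is not reproduced.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical camera-review snippets (placeholders, not data from the study).
reviews = ["the lens is sharp and the battery lasts long",
           "autofocus is fast and pictures look great",
           "battery drains quickly and the screen is dim",
           "the flash is weak and the menu is confusing"]
labels = ["positive", "positive", "negative", "negative"]

# Bag-of-words counts feed a multinomial Naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(reviews, labels)

print(model.predict(["the screen is dim but the lens is sharp"]))
```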

  10. Efficient sparse kernel feature extraction based on partial least squares.

    Science.gov (United States)

    Dhanjal, Charanpal; Gunn, Steve R; Shawe-Taylor, John

    2009-08-01

    The presence of irrelevant features in training data is a significant obstacle for many machine learning tasks. One approach to this problem is to extract appropriate features and, often, one selects a feature extraction method based on the inference algorithm. Here, we formalize a general framework for feature extraction, based on Partial Least Squares, in which one can select a user-defined criterion to compute projection directions. The framework draws together a number of existing results and provides additional insights into several popular feature extraction methods. Two new sparse kernel feature extraction methods are derived under the framework, called Sparse Maximal Alignment (SMA) and Sparse Maximal Covariance (SMC), respectively. Key advantages of these approaches include simple implementation and a training time which scales linearly in the number of examples. Furthermore, one can project a new test example using only k kernel evaluations, where k is the output dimensionality. Computational results on several real-world data sets show that SMA and SMC extract features which are as predictive as those found using other popular feature extraction methods. Additionally, on large text retrieval and face detection data sets, they produce features which match the performance of the original ones in conjunction with a Support Vector Machine.

  11. The Relationships between Processing Facial Identity, Emotional Expression, Facial Speech, and Gaze Direction during Development

    Science.gov (United States)

    Spangler, Sibylle M.; Schwarzer, Gudrun; Korell, Monika; Maier-Karius, Johanna

    2010-01-01

    Four experiments were conducted with 5- to 11-year-olds and adults to investigate whether facial identity, facial speech, emotional expression, and gaze direction are processed independently of or in interaction with one another. In a computer-based, speeded sorting task, participants sorted faces according to facial identity while disregarding…

  12. Content Based Image Retrieval by Multi Features using Image Blocks

    Directory of Open Access Journals (Sweden)

    Arpita Mathur

    2013-12-01

    Full Text Available Content-based image retrieval (CBIR) is an effective method of retrieving images from large image resources. CBIR is a technique in which images are indexed by extracting their low-level features, such as color, texture, shape, and spatial location. Effective and efficient feature extraction mechanisms are required to improve existing CBIR performance. This paper presents a novel CBIR approach in which higher retrieval efficiency is achieved by combining the information of the color, shape and texture features of an image. The color feature is extracted using a color histogram over image blocks, the Canny edge detection algorithm is used for the shape feature, and block-wise HSB extraction is used for the texture feature. The feature set of the query image is compared with the feature set of each image in the database. The experiments show that retrieval based on the fusion of multiple features gives better results than the approach used by Rao et al. The paper presents a comparative study of the performance of the two CBIR approaches in which the color, shape and texture features are used.

  13. Validation of a three-dimensional facial scanning system based on structured light techniques.

    Science.gov (United States)

    Ma, Lili; Xu, Tianmin; Lin, Jiuxiang

    2009-06-01

    The aim of this study was to validate a newly developed three-dimensional (3D) structured-light scanning system for recording facial morphology. The validation covered three aspects: accuracy, precision and reliability. The accuracy and precision were investigated using a plaster model with 19 marked landmarks. The accuracy was determined by comparing the coordinates obtained from the 3D images with those from a coordinate measuring machine (CMM). The precision was quantified through repeated landmark location on the 3D images. The reliability was investigated in 10 adult volunteers, each scanned five times over 3 weeks; the 3D images acquired at different times were compared with each other to measure the reliability. We found that the accuracy was 0.93 mm, the precision was 0.79 mm, and the reliability was 0.2 mm. These findings suggest that the structured-light scanning system is accurate, precise and reliable for recording facial morphology for both clinical and research purposes.

  14. High-precision Detection of Facial Landmarks to Estimate Head Motions Based on Vision Models

    Directory of Open Access Journals (Sweden)

    Xiaohong W. Gao

    2007-01-01

    Full Text Available A new approach to determining head movement is presented, based on pictures recorded by digital cameras monitoring the scanning process of PET. Two human vision models, CIECAMs and BMV, are applied to segment the face region via skin colour and to detect local facial landmarks, respectively. The developed algorithms are evaluated on pictures (n=12) of a subject's head, taken while simulating PET scanning, captured by two calibrated cameras (located in front of and to the left side of the subject). It is shown that the centres of the chosen facial landmarks, the eye corners and the middle point of the nose base, are detected with very high precision (within about 0.64 pixels). Three landmarks are identified on pictures received by the front camera and two by the side camera. Preliminary results on 2D images with known movement parameters show that rotation and translation parameters along the X, Y, and Z directions can be obtained very accurately via the described methods.

  15. Facial Recognition Technology: An analysis with scope in India

    CERN Document Server

    Thorat, S B; Dandale, Jyoti P

    2010-01-01

    A facial recognition system is a computer application for automatically identifying or verifying a person from a digital image or a video frame from a video source. One way to do this is by comparing selected facial features from the image with a facial database. It is typically used in security systems and can be compared to other biometrics such as fingerprint or iris recognition systems. In this paper we focus on 3-D facial recognition systems and biometric facial recognition systems. We critique facial recognition systems, discussing their effectiveness and weaknesses. The paper also discusses the scope for facial recognition systems in India.

  16. Feature selection gait-based gender classification under different circumstances

    Science.gov (United States)

    Sabir, Azhin; Al-Jawad, Naseer; Jassim, Sabah

    2014-05-01

    This paper proposes gender classification based on human gait features and investigates the problem of two variations, clothing (wearing coats) and carrying a bag, in addition to the normal gait sequence. The feature vectors in the proposed system are constructed after applying the wavelet transform. Three different sets of features are proposed in this method. The first, spatio-temporal distance, deals with the distances between different parts of the human body (such as the feet, knees, hands, height and shoulders) during one gait cycle. The second and third feature sets are constructed from the approximation and non-approximation coefficients of the human body, respectively. To extract these two sets of features we divided the human body into two parts, the upper and lower body, based on the golden ratio proportion. In this paper, we adopt a statistical method for constructing the feature vector from the above sets. The dimension of the constructed feature vector is reduced based on the Fisher score as a feature selection method to optimize its discriminating significance, and k-nearest neighbour is finally applied as the classification method. Experimental results demonstrate that our approach covers a more realistic scenario and gives relatively better performance compared with existing approaches.
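
    The Fisher-score selection mentioned above can be sketched generically as follows (assumed two-class synthetic data, not the authors' wavelet gait features): each feature is ranked by between-class mean separation over within-class variance, and the top-ranked features feed a k-nearest-neighbour classifier.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def fisher_scores(X, y):
    """Fisher score per feature: between-class mean spread over within-class variance."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / (den + 1e-12)

# Toy gait-like data: only the first two features actually separate the two classes.
rng = np.random.default_rng(9)
y = np.repeat([0, 1], 100)
X = rng.standard_normal((200, 6))
X[:, :2] += y[:, None] * 1.5
perm = rng.permutation(200)
X, y = X[perm], y[perm]

top = np.argsort(fisher_scores(X[:150], y[:150]))[::-1][:2]   # rank on the training split
clf = KNeighborsClassifier(5).fit(X[:150][:, top], y[:150])
print("Selected features:", top, "held-out accuracy:", clf.score(X[150:][:, top], y[150:]))
```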

  17. Ear Recognition Based on Gabor Features and KFDA

    Directory of Open Access Journals (Sweden)

    Li Yuan

    2014-01-01

    Full Text Available We propose an ear recognition system based on 2D ear images which includes three stages: ear enrollment, feature extraction, and ear recognition. Ear enrollment includes ear detection and ear normalization. The ear detection approach based on improved Adaboost algorithm detects the ear part under complex background using two steps: offline cascaded classifier training and online ear detection. Then Active Shape Model is applied to segment the ear part and normalize all the ear images to the same size. For its eminent characteristics in spatial local feature extraction and orientation selection, Gabor filter based ear feature extraction is presented in this paper. Kernel Fisher Discriminant Analysis (KFDA) is then applied for dimension reduction of the high-dimensional Gabor features. Finally distance based classifier is applied for ear recognition. Experimental results of ear recognition on two datasets (USTB and UND datasets) and the performance of the ear authentication system show the feasibility and effectiveness of the proposed approach.

  18. Ear recognition based on Gabor features and KFDA.

    Science.gov (United States)

    Yuan, Li; Mu, Zhichun

    2014-01-01

    We propose an ear recognition system based on 2D ear images which includes three stages: ear enrollment, feature extraction, and ear recognition. Ear enrollment includes ear detection and ear normalization. The ear detection approach based on improved Adaboost algorithm detects the ear part under complex background using two steps: offline cascaded classifier training and online ear detection. Then Active Shape Model is applied to segment the ear part and normalize all the ear images to the same size. For its eminent characteristics in spatial local feature extraction and orientation selection, Gabor filter based ear feature extraction is presented in this paper. Kernel Fisher Discriminant Analysis (KFDA) is then applied for dimension reduction of the high-dimensional Gabor features. Finally distance based classifier is applied for ear recognition. Experimental results of ear recognition on two datasets (USTB and UND datasets) and the performance of the ear authentication system show the feasibility and effectiveness of the proposed approach.

  19. Dependence of the appearance-based perception of criminality, suggestibility, and trustworthiness on the level of pixelation of facial images.

    Science.gov (United States)

    Nurmoja, Merle; Eamets, Triin; Härma, Hanne-Loore; Bachmann, Talis

    2012-10-01

    While the dependence of face identification on the level of pixelation of facial images has been well studied, similar research on face-based trait perception is underdeveloped. Because the depiction formats used for hiding individual identity in visual media and in evidential material recorded by surveillance cameras often consist of pixelized images, knowing the effects of pixelation on person perception has practical relevance. Here, the results of two experiments are presented showing the effect of facial image pixelation on the perception of criminality, trustworthiness, and suggestibility. It appears that individuals (N = 46, M age = 21.5 yr., SD = 3.1 for criminality ratings; N = 94, M age = 27.4 yr., SD = 10.1 for other ratings) have the ability to discriminate facial cues indicative of these perceived traits even at a coarse level of image pixelation (10-12 pixels per face horizontally), and that the discriminability increases as the pixelation becomes less coarse. Perceived criminality and trustworthiness appear to be better conveyed by the pixelized images than perceived suggestibility.

  20. Lazy learner text categorization algorithm based on embedded feature selection

    Institute of Scientific and Technical Information of China (English)

    Yan Peng; Zheng Xuefeng; Zhu Jianyong; Xiao Yunhong

    2009-01-01

    To avoid the curse of dimensionality, text categorization (TC) algorithms based on machine learning (ML) have to use a feature selection (FS) method to reduce the dimensionality of the feature space. Although widely used, the FS process generally causes information loss and thus has considerable side effects on the overall performance of TC algorithms. On the basis of the sparsity of text vectors, a new TC algorithm based on lazy feature selection (LFS) is presented. As a new type of embedded feature selection approach, the LFS method can greatly reduce the dimension of the features without any information loss, which improves both the efficiency and the performance of the algorithm. Experiments show that the new algorithm simultaneously achieves much higher performance and efficiency than several classical TC algorithms.

  1. Guiding atypical facial growth back to normal. Part 1: Understanding facial growth.

    Science.gov (United States)

    Galella, Steve; Chow, Daniel; Jones, Earl; Enlow, Donald; Masters, Ari

    2011-01-01

    Many practitioners find the complexity of facial growth overwhelming and thus merely observe and accept the clinical features of atypical growth and do not comprehend the long-term consequences. Facial growth and development is a strictly controlled biological process. Normal growth involves ongoing bone remodeling and positional displacement. Atypical growth begins when this biological balance is disturbed With the understanding of these processes, clinicians can adequately assess patients and determine the causes of these atypical facial growth patterns and design effective treatment plans. This is the first of a series of articles which addresses normal facial growth, atypical facial growth, patient assessment, causes of atypical facial growth, and guiding facial growth back to normal.

  2. Facial Erythema of Rosacea - Aetiology, Different Pathophysiologies and Treatment Options.

    Science.gov (United States)

    Steinhoff, Martin; Schmelz, Martin; Schauber, Jürgen

    2016-06-15

    Rosacea is a common chronic skin condition that displays a broad diversity of clinical manifestations. Although the pathophysiological mechanisms of the four subtypes are not completely elucidated, the key elements often present are augmented immune responses of the innate and adaptive immune system, and neurovascular dysregulation. The most common primary feature of all cutaneous subtypes of rosacea is transient or persistent facial erythema. Perilesional erythema of papules or pustules is based on the sustained vasodilation and plasma extravasation induced by the inflammatory infiltrates. In contrast, transient erythema has rapid kinetics induced by trigger factors independent of papules or pustules. Amongst the current treatments for facial erythema of rosacea, only the selective α2-adrenergic receptor agonist brimonidine 0.33% topical gel (Mirvaso®) is approved. This review aims to discuss the potential causes, different pathophysiologies and current treatment options to address the unmet medical needs of patients with facial erythema of rosacea.

  3. Freestyle Local Perforator Flaps for Facial Reconstruction

    Directory of Open Access Journals (Sweden)

    Jun Yong Lee

    2015-01-01

    Full Text Available For the successful reconstruction of facial defects, various perforator flaps have been used in single-stage surgery, where tissues are moved to adjacent defect sites. Our group successfully performed perforator flap surgery on 17 patients with small to moderate facial defects that affected the functional and aesthetic features of their faces. Of four complicated cases, three developed venous congestion, which resolved in the subacute postoperative period, and one patient with partial necrosis underwent minor revision. We reviewed the literature on freestyle perforator flaps for facial defect reconstruction and focused on English articles published in the last five years. With the advance of knowledge regarding the vascular anatomy of pedicled perforator flaps in the face, we found that some perforator flaps can improve functional and aesthetic reconstruction for the facial defects. We suggest that freestyle facial perforator flaps can serve as alternative, safe, and versatile treatment modalities for covering small to moderate facial defects.

  4. Freestyle Local Perforator Flaps for Facial Reconstruction.

    Science.gov (United States)

    Lee, Jun Yong; Kim, Ji Min; Kwon, Ho; Jung, Sung-No; Shim, Hyung Sup; Kim, Sang Wha

    2015-01-01

    For the successful reconstruction of facial defects, various perforator flaps have been used in single-stage surgery, where tissues are moved to adjacent defect sites. Our group successfully performed perforator flap surgery on 17 patients with small to moderate facial defects that affected the functional and aesthetic features of their faces. Of four complicated cases, three developed venous congestion, which resolved in the subacute postoperative period, and one patient with partial necrosis underwent minor revision. We reviewed the literature on freestyle perforator flaps for facial defect reconstruction and focused on English articles published in the last five years. With the advance of knowledge regarding the vascular anatomy of pedicled perforator flaps in the face, we found that some perforator flaps can improve functional and aesthetic reconstruction for the facial defects. We suggest that freestyle facial perforator flaps can serve as alternative, safe, and versatile treatment modalities for covering small to moderate facial defects.

  5. Image mosaic method based on SIFT features of line segment.

    Science.gov (United States)

    Zhu, Jun; Ren, Mingwu

    2014-01-01

    This paper proposes a novel image mosaic method based on the SIFT (Scale Invariant Feature Transform) features of line segments, aiming to handle scaling, rotation, changes in lighting conditions, and other differences between two images in the panoramic image mosaic process. The method first uses the Harris corner detection operator to detect key points. Second, it constructs directed line segments, describes them with SIFT features, and matches those directed segments to acquire a rough point matching. Finally, the RANSAC method is used to eliminate wrong pairs in order to accomplish the image mosaic. The results of experiments based on four pairs of images show that our method is strongly robust to resolution, lighting, rotation, and scaling.
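
    The line-segment SIFT descriptor is specific to the paper; as a generic sketch of the matching-plus-RANSAC stage it builds on, the OpenCV code below detects ORB keypoints (used here because SIFT availability varies across OpenCV builds), matches them, estimates a homography with RANSAC, and warps one image onto the other. The image paths in the usage comment are placeholders.

```python
import cv2
import numpy as np

def mosaic_pair(img1, img2):
    """Estimate a homography from keypoint matches and warp img2 into img1's frame."""
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)[:200]

    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)   # RANSAC drops wrong pairs

    h, w = img1.shape[:2]
    warped = cv2.warpPerspective(img2, H, (2 * w, h))
    warped[:, :w] = img1                                          # naive overlay blend
    return warped, int(inliers.sum())

# Usage (paths are placeholders):
# pano, n_inliers = mosaic_pair(cv2.imread("left.jpg", 0), cv2.imread("right.jpg", 0))
```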

  6. Feature-based Ontology Mapping from an Information Receivers’ Viewpoint

    DEFF Research Database (Denmark)

    Glückstad, Fumiko Kano; Mørup, Morten

    2012-01-01

    This paper compares four algorithms for computing feature-based similarities between concepts, each possessing a distinctive set of features. The eventual purpose of comparing these feature-based similarity algorithms is to identify a candidate term in a Target Language (TL) that can optimally convey the original meaning of a culturally-specific Source Language (SL) concept to a TL audience by aligning two culturally-dependent domain-specific ontologies. The results indicate that the Bayesian Model of Generalization [1] performs best, not only for identifying candidate translation terms...

  7. Feature-based tolerancing for intelligent inspection process definition

    Energy Technology Data Exchange (ETDEWEB)

    Brown, C.W.

    1993-07-01

    This paper describes a feature-based tolerancing capability that complements a geometric solid model with an explicit representation of conventional and geometric tolerances. This capability is focused on supporting an intelligent inspection process definition system. The feature-based tolerance model's benefits include advancing complete product definition initiatives (e.g., STEP -- Standard for the Exchange of Product model data), supplying computer-integrated manufacturing applications (e.g., generative process planning and automated part programming) with product definition information, and assisting in the solution of measurement performance issues. A feature-based tolerance information model was developed based upon the notion of a feature's toleranceable aspects, and describes an object-oriented scheme for representing and relating tolerance features, tolerances, and datum reference frames. For easy incorporation, the tolerance feature entities are interconnected with STEP solid model entities. This schema will explicitly represent the tolerance specification for mechanical products, support advanced dimensional measurement applications, and assist in tolerance-related methods divergence issues.

  8. Facial myokymia as a presenting symptom of vestibular schwannoma.

    Directory of Open Access Journals (Sweden)

    Joseph B

    2002-07-01

    Full Text Available Facial myokymia is a rare presenting feature of a vestibular schwannoma. We present a 48-year-old woman with a large right vestibular schwannoma, who presented with facial myokymia. It is postulated that facial myokymia might be due to a defect in the motor axons of the 7th nerve or due to brain stem compression by the tumor.

  9. Latent Trees for Estimating Intensity of Facial Action Units

    NARCIS (Netherlands)

    Kaltwang, Sebastian; Todorovic, Sinisa; Pantic, Maja

    2015-01-01

    This paper is about estimating intensity levels of Facial Action Units (FAUs) in videos as an important step toward interpreting facial expressions. As input features, we use locations of facial landmark points detected in video frames. To address uncertainty of input, we formulate a generative late

  10. Web-based Visualisation of Head Pose and Facial Expressions Changes: Monitoring Human Activity Using Depth Data

    DEFF Research Database (Denmark)

    Kalliatakis, Grigorios; Vidakis, Nikolaos; Triantafyllidis, Georgios

    Despite significant recent advances in the field of head pose estimation and facial expression recognition, raising the cognitive level when analysing human activity presents serious challenges to current concepts. Motivated by the need of generating comprehensible visual representations from...... and accurately estimate head pose changes in unconstrained environment. In order to complete the secondary process of recognising four universal dominant facial expressions (happiness, anger, sadness and surprise), emotion recognition via facial expressions (ERFE) was adopted. After that, a lightweight data...

  11. Contextual Query Perfection by Affective Features Based Implicit Contextual Semantic Relevance Feedback in Multimedia Information Retrieval

    Directory of Open Access Journals (Sweden)

    Anil K. Tripathi

    2012-09-01

    Full Text Available Multimedia information may have multiple semantics depending on context, temporal interest, and user preferences. Hence we exploit the plausibility of the context associated with a semantic concept in retrieving relevant information. We propose an Affective Feature Based Implicit Contextual Semantic Relevance Feedback (AICSRF) to investigate whether audio and speech, along with visual information, can determine the current context in which the user wants to retrieve information, and to further investigate whether Affective Feedback can be employed as an implicit source of evidence in the CSRF cycle to increase the system's contextual semantic understanding. We introduce an Emotion Recognition Unit (ERU) that comprises a spatiotemporal Gabor filter to capture spontaneous facial expressions and an emotional word recognition system that uses phonemes to recognize spoken emotional words. We propose a Contextual Query Perfection Scheme (CQPS) to learn and refine the current context, which can be used for query perfection in the RF cycle to understand the semantics of the query on the basis of the relevance judgment made by the ERU. Observations suggest that CQPS in AICSRF, incorporating such affective features, reduces the search space and hence the retrieval time, and increases the system's contextual semantic understanding.

  12. AGE CLASSIFICATION BASED ON FEATURES EXTRACTED FROM THIRD ORDER NEIGHBORHOOD LOCAL BINARY PATTERN

    Directory of Open Access Journals (Sweden)

    Pullela S.V.V.S.R. Kumar

    2014-11-01

    Full Text Available The present paper extends the work carried out by Kumar et al. [10] on the Third order Neighbourhood LBP (TN-LBP) and derives an approach that estimates pattern trends on the outer cell of the TN-LBP. The paper observes that the TN-LBP forms two types of V-patterns on its outer cell, i.e., Outer Right V Patterns (ORVP) and Outer Left V Patterns (OLVP). The ORVP and OLVP of the TN-LBP consist of 5 pixels each. The paper derives Grey Level Co-occurrence Matrix (GLCM) features based on the LBP values of the ORVP and OLVP. This GLCM is named ORLVP-GLCM (Outer cell Right and Left V-Patterns GLCM), and four features are evaluated on it to classify humans into child (0 to 12 years), young (13 to 30 years), middle aged (31 to 50 years), and senior adult (above 60 years) groups. The proposed method is tested on FGNET, GOOGLE, and scanned facial images, and the results are compared with existing methods. The results demonstrate the efficiency of the proposed method over the existing methods.
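
    As a general illustration of the GLCM feature step used above, the sketch below builds a co-occurrence matrix from an integer code map (for example, quantized LBP values) and derives four Haralick-style statistics with plain NumPy; the paper's specific ORVP/OLVP pattern construction is not reproduced.

```python
# Minimal GLCM sketch: co-occurrence matrix from an integer code map plus four
# Haralick-style features (contrast, energy, homogeneity, correlation).
import numpy as np

def glcm_features(codes: np.ndarray, levels: int = 8, dx: int = 1, dy: int = 0) -> np.ndarray:
    # Quantize the code map to a small number of grey levels.
    q = (codes.astype(np.float64) / (codes.max() + 1e-9) * (levels - 1)).astype(int)
    glcm = np.zeros((levels, levels), dtype=np.float64)
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[q[y, x], q[y + dy, x + dx]] += 1
    glcm /= glcm.sum() + 1e-12                      # normalize to joint probabilities

    i, j = np.indices((levels, levels))
    contrast = np.sum(glcm * (i - j) ** 2)
    energy = np.sum(glcm ** 2)
    homogeneity = np.sum(glcm / (1.0 + np.abs(i - j)))
    mu_i, mu_j = np.sum(i * glcm), np.sum(j * glcm)
    si = np.sqrt(np.sum(glcm * (i - mu_i) ** 2)) + 1e-12
    sj = np.sqrt(np.sum(glcm * (j - mu_j) ** 2)) + 1e-12
    correlation = np.sum(glcm * (i - mu_i) * (j - mu_j)) / (si * sj)
    return np.array([contrast, energy, homogeneity, correlation])
```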

  13. Spatiotemporal Features for Asynchronous Event-based Data

    Directory of Open Access Journals (Sweden)

    Xavier eLagorce

    2015-02-01

    Full Text Available Bio-inspired asynchronous event-based vision sensors are currently introducing a paradigm shift in visual information processing. These new sensors rely on a stimulus-driven principle of light acquisition similar to biological retinas. They are event-driven and fully asynchronous, thereby reducing redundancy and encoding exact times of input signal changes, leading to a very precise temporal resolution. Approaches for higher-level computer vision often rely on the reliable detection of features in visual frames, but similar definitions of features for the novel dynamic and event-based visual input representation of silicon retinas have so far been lacking. This article addresses the problem of learning and recognizing features for event-based vision sensors, which capture properties of truly spatiotemporal volumes of sparse visual event information. A novel computational architecture for learning and encoding spatiotemporal features is introduced based on a set of predictive recurrent reservoir networks, competing via winner-take-all selection. Features are learned in an unsupervised manner from real-world input recorded with event-based vision sensors. It is shown that the networks in the architecture learn distinct and task-specific dynamic visual features, and can predict their trajectories over time.

  14. Facial Scar Revision: Understanding Facial Scar Treatment

    Science.gov (United States)

    ... more to fully heal and achieve maximum improved appearance. Facial plastic surgery makes it possible to correct facial flaws that can undermine self-confidence. Changing how your scar looks can help change ...

  15. Allometry of facial mobility in anthropoid primates: implications for the evolution of facial expression.

    Science.gov (United States)

    Dobson, Seth D

    2009-01-01

    Body size may be an important factor influencing the evolution of facial expression in anthropoid primates due to allometric constraints on the perception of facial movements. Given this hypothesis, I tested the prediction that observed facial mobility is positively correlated with body size in a comparative sample of nonhuman anthropoids. Facial mobility, or the variety of facial movements a species can produce, was estimated using a novel application of the Facial Action Coding System (FACS). I used FACS to estimate facial mobility in 12 nonhuman anthropoid species, based on video recordings of facial activity in zoo animals. Body mass data were taken from the literature. I used phylogenetic generalized least squares (PGLS) to perform a multiple regression analysis with facial mobility as the dependent variable and two independent variables: log body mass and dummy-coded infraorder. Together, body mass and infraorder explain 92% of the variance in facial mobility. However, the partial effect of body mass is much stronger than for infraorder. The results of my study suggest that allometry is an important constraint on the evolution of facial mobility, which may limit the complexity of facial expression in smaller species. More work is needed to clarify the perceptual bases of this allometric pattern.

  16. EMOTION ANALYSIS OF SONGS BASED ON LYRICAL AND AUDIO FEATURES

    Directory of Open Access Journals (Sweden)

    Adit Jamdar

    2015-05-01

    Full Text Available In this paper, a method is proposed to detect the emotion of a song based on its lyrical and audio features. Lyrical features are generated by segmentation of lyrics during the process of data extraction. ANEW and WordNet knowledge is then incorporated to compute Valence and Arousal values. In addition to this, linguistic association rules are applied to ensure that the issue of ambiguity is properly addressed. Audio features are used to supplement the lyrical ones and include attributes like energy, tempo, and danceability. These features are extracted from The Echo Nest, a widely used music intelligence platform. Construction of training and test sets is done on the basis of social tags extracted from the last.fm website. The classification is done by applying feature weighting and stepwise threshold reduction on the k-Nearest Neighbors algorithm to provide fuzziness in the classification.
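
    To make the final classification step concrete, the sketch below weights heterogeneous song features (e.g., valence, arousal, energy, tempo, danceability) before a k-nearest-neighbour classifier; the weights, the value of k, and the omitted stepwise threshold reduction are illustrative assumptions rather than the paper's settings.

```python
# Hedged sketch: feature weighting + kNN mood classification for songs.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

def train_mood_knn(X: np.ndarray, y: np.ndarray, weights: np.ndarray, k: int = 5):
    """X: (n_songs, n_features) lyrical+audio features, y: mood labels from social tags."""
    scaler = StandardScaler().fit(X)
    Xw = scaler.transform(X) * weights          # feature weighting before the distance metric
    clf = KNeighborsClassifier(n_neighbors=k).fit(Xw, y)
    return scaler, clf

def predict_mood(scaler, clf, x: np.ndarray, weights: np.ndarray):
    return clf.predict(scaler.transform(x.reshape(1, -1)) * weights)[0]
```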

  17. SVM-based glioma grading: Optimization by feature reduction analysis.

    Science.gov (United States)

    Zöllner, Frank G; Emblem, Kyrre E; Schad, Lothar R

    2012-09-01

    We investigated the predictive power of feature reduction analysis approaches in support vector machine (SVM)-based classification of glioma grade. In 101 untreated glioma patients, three analytic approaches were evaluated to derive an optimal reduction in features; (i) Pearson's correlation coefficients (PCC), (ii) principal component analysis (PCA) and (iii) independent component analysis (ICA). Tumor grading was performed using a previously reported SVM approach including whole-tumor cerebral blood volume (CBV) histograms and patient age. Best classification accuracy was found using PCA at 85% (sensitivity=89%, specificity=84%) when reducing the feature vector from 101 (100-bins rCBV histogram+age) to 3 principal components. In comparison, classification accuracy by PCC was 82% (89%, 77%, 2 dimensions) and 79% by ICA (87%, 75%, 9 dimensions). For improved speed (up to 30%) and simplicity, feature reduction by all three methods provided similar classification accuracy to literature values (∼87%) while reducing the number of features by up to 98%.
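
    A minimal sketch of the reported pipeline is shown below: the 101-dimensional feature vector (100-bin rCBV histogram plus patient age) is reduced to three principal components before SVM classification; the kernel choice and cross-validation scheme here are assumptions, not the study's exact settings.

```python
# Minimal PCA + SVM grading sketch (feature reduction to 3 components).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def grade_accuracy(X: np.ndarray, y: np.ndarray) -> float:
    """X: (n_patients, 101) features, y: glioma grade labels (e.g. low/high)."""
    model = make_pipeline(
        StandardScaler(),
        PCA(n_components=3),        # the feature-reduction step evaluated in the study
        SVC(kernel="rbf"),          # kernel choice is an assumption
    )
    return cross_val_score(model, X, y, cv=5).mean()
```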

  18. [Surgical facial reanimation after persisting facial paralysis].

    Science.gov (United States)

    Pasche, Philippe

    2011-10-01

    Facial reanimation following persistent facial paralysis can be managed with surgical procedures of varying complexity. The choice of technique is mainly determined by the cause of the facial paralysis and the age and wishes of the patient. The techniques most commonly used are nerve grafts (VII-VII, XII-VII, cross-facial graft), dynamic muscle transfers (temporal myoplasty, free muscle transfer) and static suspensions. Intensive rehabilitation through specific exercises after all procedures is essential to achieve good results.

  19. NEURO FUZZY MODEL FOR FACE RECOGNITION WITH CURVELET BASED FEATURE IMAGE

    Directory of Open Access Journals (Sweden)

    SHREEJA R,

    2011-06-01

    Full Text Available A facial recognition system is a computer application for automatically identifying or verifying a person from a digital image or a video frame from a video source. One way to do this is by comparing selected facial features from the image against a facial database. It is typically used in security systems and can be compared to other biometric techniques such as fingerprint or iris recognition systems. Every face has approximately 80 nodal points (e.g., the distance between the eyes, the width of the nose, etc.). A basic face recognition system captures a sample, extracts features, compares templates and performs matching. In this paper two methods of face recognition are compared: neural networks and a neuro-fuzzy method. In both, the curvelet transform is used for feature extraction, and the feature vector is formed from statistical quantities of the curvelet coefficients. From the statistical results it is concluded that the neuro-fuzzy method is the better technique for face recognition compared to the neural network.

  20. Maximized Posteriori Attributes Selection from Facial Salient Landmarks for Face Recognition

    CERN Document Server

    Gupta, Phalguni; Sing, Jamuna Kanta; Tistarelli, Massimo

    2010-01-01

    This paper presents a robust and dynamic face recognition technique based on the extraction and matching of devised probabilistic graphs drawn on SIFT features related to independent face areas. The face matching strategy is based on matching individual salient facial graph characterized by SIFT features as connected to facial landmarks such as the eyes and the mouth. In order to reduce the face matching errors, the Dempster-Shafer decision theory is applied to fuse the individual matching scores obtained from each pair of salient facial features. The proposed algorithm is evaluated with the ORL and the IITK face databases. The experimental results demonstrate the effectiveness and potential of the proposed face recognition technique also in case of partially occluded faces.

  1. Dynamic Facial Prosthetics for Sufferers of Facial Paralysis

    Directory of Open Access Journals (Sweden)

    Fergal Coulter

    2011-10-01

    Full Text Available Background: This paper discusses the various methods and materials for the fabrication of active artificial facial muscles. The primary use for these will be the reanimation of paralysed or atrophied muscles in sufferers of non-recoverable unilateral facial paralysis. Method: The prosthetic solution described in this paper is based on sensing muscle motion of the contralateral healthy muscles and replicating that motion across a patient's paralysed side of the face, via solid-state and thin-film actuators. The development of this facial prosthetic device focused on recreating a varying-intensity smile, with emphasis on timing, displacement and the appearance of the wrinkles and folds that commonly appear around the nose and eyes during the expression. An animatronic face was constructed with actuations being made to a silicone representation of the musculature, using multiple shape-memory alloy cascades. Alongside the artificial muscle physical prototype, a facial expression recognition software system was constructed. This forms the basis of an automated calibration and reconfiguration system for the artificial muscles following implantation, so as to suit the implantee's unique physiognomy. Results: An animatronic model face with silicone musculature was designed and built to evaluate the performance of shape-memory alloy artificial muscles, their power control circuitry and software control systems. A dual facial motion sensing system was designed to allow real-time control over the model: a piezoresistive flex sensor to measure physical motion, and a computer vision system to evaluate real-to-artificial muscle performance. Analysis of various facial expressions in real subjects was made, which gives useful data upon which to base the system's parameter limits. Conclusion: The system performed well, and the various strengths and shortcomings of the materials and methods are reviewed and considered for the next research phase, when new polymer-based artificial muscles are constructed.

  2. Facial expression influences face identity recognition during the attentional blink.

    Science.gov (United States)

    Bach, Dominik R; Schmidt-Daffy, Martin; Dolan, Raymond J

    2014-12-01

    Emotional stimuli (e.g., negative facial expressions) enjoy prioritized memory access when task relevant, consistent with their ability to capture attention. Whether emotional expression also impacts on memory access when task-irrelevant is important for arbitrating between feature-based and object-based attentional capture. Here, the authors address this question in 3 experiments using an attentional blink task with face photographs as first and second target (T1, T2). They demonstrate reduced neutral T2 identity recognition after angry or happy T1 expression, compared to neutral T1, and this supports attentional capture by a task-irrelevant feature. Crucially, after neutral T1, T2 identity recognition was enhanced and not suppressed when T2 was angry, suggesting that attentional capture by this task-irrelevant feature may be object-based and not feature-based. As an unexpected finding, both angry and happy facial expressions suppressed memory access for competing objects, but only angry facial expressions enjoyed privileged memory access. This could imply that these 2 processes are relatively independent from one another.

  3. Deep Pain: Exploiting Long Short-Term Memory Networks for Facial Expression Classification

    DEFF Research Database (Denmark)

    Rodriguez, Pau; Cucurull, Guillem; Gonzàlez, Jordi

    2017-01-01

    in pain assessment, which are based on facial features only, we suggest that the performance can be enhanced by feeding the raw frames to deep learning models, outperforming the latest state-of-the-art results while also directly facing the problem of imbalanced data. As a baseline, our approach first...... uses convolutional neural networks (CNN) to learn facial features from VGG Faces, which are then linked to a Long Short-Term Memory (LSTM) to exploit the temporal relation between video frames. We further compare the performances of using the so popular schema based on the canonically normalized...

  4. Human age estimation framework using different facial parts

    OpenAIRE

    Mohamed Y. El Dib; Hoda M. Onsi

    2011-01-01

    Human age estimation from facial images has a wide range of real-world applications in human computer interaction (HCI). In this paper, we use the bio-inspired features (BIF) to analyze different facial parts: (a) eye wrinkles, (b) whole internal face (without forehead area) and (c) whole face (with forehead area) using different feature shape points. The analysis shows that eye wrinkles which cover 30% of the facial area contain the most important aging features compared to internal face and...

  5. Convolutional neural network features based change detection in satellite images

    Science.gov (United States)

    Mohammed El Amin, Arabi; Liu, Qingjie; Wang, Yunhong

    2016-07-01

    With the popular use of high resolution remote sensing (HRRS) satellite images, considerable research effort has been devoted to the change detection (CD) problem. An effective feature selection method can significantly boost the final result. While it has proven difficult to hand-design features that effectively capture mid- and high-level representations, recent developments in machine learning (deep learning) avoid this problem by learning hierarchical representations directly from data without human intervention. In this letter, we propose approaching the change detection problem from a feature learning perspective. A novel change detection method for HR satellite images based on deep Convolutional Neural Network (CNN) features is proposed. The main idea is to produce a change detection map directly from two images using a pretrained CNN, sidestepping the limited performance of hand-crafted features. First, CNN features are extracted through different convolutional layers. Then, a concatenation step is applied after a normalization step, resulting in a single higher-dimensional feature map. Finally, a change map is computed using the pixel-wise Euclidean distance. Our method has been validated on real bitemporal HRRS satellite images through qualitative and quantitative analyses. The results obtained confirm the interest of the proposed method.
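
    A hedged sketch of this idea is shown below: deep features are extracted from two co-registered images with a pretrained CNN (VGG16 from torchvision is used here, which is an assumption; the paper does not prescribe this backbone), and a pixel-wise Euclidean distance over the normalized feature maps serves as the change map. The layer choice, normalization, and upsampling are illustrative.

```python
# Hedged sketch: pretrained-CNN feature maps + pixel-wise Euclidean distance as a
# change map. Assumes a recent torchvision (weights API) and co-registered inputs.
import numpy as np
import torch
from torchvision import models, transforms

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def change_map(img_t1: np.ndarray, img_t2: np.ndarray) -> np.ndarray:
    """img_t1, img_t2: HxWx3 uint8 bitemporal images of the same scene."""
    backbone = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
    with torch.no_grad():
        f1 = backbone(preprocess(img_t1).unsqueeze(0))   # (1, C, h, w) feature maps
        f2 = backbone(preprocess(img_t2).unsqueeze(0))
    # Channel-wise L2 normalization, then per-pixel Euclidean distance.
    f1 = torch.nn.functional.normalize(f1, dim=1)
    f2 = torch.nn.functional.normalize(f2, dim=1)
    dist = torch.linalg.norm(f1 - f2, dim=1).squeeze(0)   # (h, w)
    # Upsample the distance map back to the input resolution.
    dist = torch.nn.functional.interpolate(
        dist[None, None], size=img_t1.shape[:2], mode="bilinear", align_corners=False
    )[0, 0]
    return dist.numpy()
```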

  6. Intrinsic feature-based pose measurement for imaging motion compensation

    Science.gov (United States)

    Baba, Justin S.; Goddard, Jr., James Samuel

    2014-08-19

    Systems and methods for generating motion-corrected tomographic images are provided. A method includes obtaining first images of a region of interest (ROI) to be imaged, associated with a first time, where the first images are associated with different positions and orientations with respect to the ROI. The method also includes defining an active region in each of the first images and selecting intrinsic features in each of the first images based on the active region. Second, the method identifies a portion of the intrinsic features that temporally and spatially match intrinsic features in corresponding second images of the ROI associated with a second time prior to the first time, and computes three-dimensional (3D) coordinates for that portion of the intrinsic features. Finally, the method includes computing a relative pose for the first images based on the 3D coordinates.

  7. The Phase Spectra Based Feature for Robust Speech Recognition

    Directory of Open Access Journals (Sweden)

    Abbasian ALI

    2009-07-01

    Full Text Available Speech recognition in adverse environments is one of the major issues in automatic speech recognition today. Most current speech recognition systems are highly efficient under ideal conditions, but their performance degrades dramatically when they are applied in real environments because of noise-affected speech. In this paper a new feature representation based on phase spectra and Perceptual Linear Prediction (PLP) is suggested, which can be used for robust speech recognition. It is shown that these new features can improve the performance of speech recognition not only in clean conditions but also at various noise levels, compared to PLP features.

  8. Electronic image stabilization system based on global feature tracking

    Institute of Scientific and Technical Information of China (English)

    Zhu Juanjuan; Guo Baolong

    2008-01-01

    A new robust electronic image stabilization system is presented, which involves feature-point-tracking-based global motion estimation and Kalman-filtering-based motion compensation. First, the global motion is estimated from the local motions of selected feature points. To cope with local moving objects and inevitable mismatches, a matching validation based on the stable relative distances within the point set is proposed, maintaining high accuracy and robustness. Next, the global motion parameters are accumulated and corrected by Kalman filtering. The experimental results illustrate that the proposed system is effective at stabilizing translational, rotational, and zooming jitter and is robust to local motions.
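
    To illustrate the Kalman-filtering compensation step, the sketch below smooths an accumulated global-motion trajectory (for example, the running sum of per-frame x-translations) with a constant-velocity Kalman filter; the state model and noise covariances are illustrative assumptions, not the paper's parameters.

```python
# Hedged sketch: constant-velocity Kalman smoothing of an accumulated motion path.
import numpy as np

def kalman_smooth(path: np.ndarray, q: float = 1e-3, r: float = 0.25) -> np.ndarray:
    """path: accumulated motion parameter per frame; returns the smoothed path."""
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state: [position, velocity]
    H = np.array([[1.0, 0.0]])
    Q = q * np.eye(2)                         # process noise (assumed)
    R = np.array([[r]])                       # measurement noise (assumed)
    x = np.array([[path[0]], [0.0]])
    P = np.eye(2)
    out = np.empty_like(path, dtype=float)
    for k, z in enumerate(path):
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out[k] = x[0, 0]
    return out

# The per-frame correction is the difference between raw and smoothed paths:
# correction[k] = kalman_smooth(path)[k] - path[k]
```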

  9. Frequency feature based quantification of defect depth and thickness

    Science.gov (United States)

    Tian, Shulin; Chen, Kai; Bai, Libing; Cheng, Yuhua; Tian, Lulu; Zhang, Hong

    2014-06-01

    This study develops a frequency feature based pulsed eddy current method. A frequency feature, termed frequency to zero, is proposed for subsurface defects and metal loss quantification in metallic specimens. A curve fitting method is also employed to generate extra frequency components and improve the accuracy of the proposed method. Experimental validation is carried out. Conclusions and further work are derived on the basis of the studies.

  10. Using PSO-Based Hierarchical Feature Selection Algorithm

    Directory of Open Access Journals (Sweden)

    Zhiwei Ji

    2014-01-01

    Full Text Available Hepatocellular carcinoma (HCC) is one of the most common malignant tumors. Clinical symptoms attributable to HCC are usually absent, so the best therapeutic opportunities are often missed. Traditional Chinese Medicine (TCM) plays an active role in the diagnosis and treatment of HCC. In this paper, we propose a particle swarm optimization-based hierarchical feature selection (PSOHFS) model to infer potential syndromes for the diagnosis of HCC. First, the hierarchical feature representation is developed as a three-layer tree. The clinical symptoms and the positive score of a patient are the leaf nodes and the root of the tree, respectively, while each syndrome feature on the middle layer is extracted from a group of symptoms. Second, an improved PSO-based algorithm is applied in the new reduced feature space to search for an optimal syndrome subset. Based on the result of feature selection, the causal relationships of symptoms and syndromes are inferred via Bayesian networks. In our experiment, 147 symptoms were aggregated into 27 groups and 27 syndrome features were extracted. The proposed approach discovered 24 syndromes, which clearly improved the diagnosis accuracy. Finally, the Bayesian approach was applied to represent the causal relationships at both the symptom and syndrome levels. The results show that our computational model can facilitate the clinical diagnosis of HCC.
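
    As a generic illustration of PSO-based feature selection (not the paper's exact PSOHFS formulation), the sketch below encodes each particle as a binary feature mask and uses cross-validated accuracy of a simple classifier as the fitness; the classifier, swarm size, and PSO coefficients are assumptions.

```python
# Hedged sketch: binary PSO feature (syndrome) selection with a wrapper fitness.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

def fitness(X, y, mask):
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(GaussianNB(), X[:, mask.astype(bool)], y, cv=3).mean()

def binary_pso(X, y, n_particles=20, n_iter=30, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    pos = (rng.random((n_particles, d)) > 0.5).astype(float)      # binary feature masks
    vel = rng.normal(0, 0.1, (n_particles, d))
    pbest = pos.copy()
    pbest_fit = np.array([fitness(X, y, p) for p in pos])
    gbest = pbest[pbest_fit.argmax()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((n_particles, d)), rng.random((n_particles, d))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        # Sigmoid-based stochastic update keeps positions binary.
        pos = (rng.random((n_particles, d)) < 1 / (1 + np.exp(-vel))).astype(float)
        fit = np.array([fitness(X, y, p) for p in pos])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[pbest_fit.argmax()].copy()
    return gbest.astype(bool)   # selected feature mask
```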

  11. Feature-based tolerancing for advanced manufacturing applications

    Energy Technology Data Exchange (ETDEWEB)

    Brown, C.W.; Kirk, W.J. III; Simons, W.R.; Ward, R.C.; Brooks, S.L.

    1994-11-01

    A primary requirement for the successful deployment of advanced manufacturing applications is the need for a complete and accessible definition of the product. This product definition must not only provide an unambiguous description of a product's nominal shape but must also contain complete tolerance specification and general property attributes. Likewise, the product definition's geometry, topology, tolerance data, and modeler manipulative routines must be fully accessible through a robust application programmer interface. This paper describes a tolerancing capability using features that complements a geometric solid model with a representation of conventional and geometric tolerances and non-shape property attributes. This capability guarantees a complete and unambiguous definition of tolerances for manufacturing applications. An object-oriented analysis and design of the feature-based tolerance domain was performed. The design represents and relates tolerance features, tolerances, and datum reference frames. The design also incorporates operations that verify correctness and check for the completeness of the overall tolerance definition. The checking algorithm is based upon the notion of satisfying all of a feature's toleranceable aspects. Benefits from the feature-based tolerance modeler include: advancing complete product definition initiatives, incorporating tolerances in product data exchange, and supplying computer-integrated manufacturing applications with tolerance information.

  12. Three-Dimensional Facial Adaptation for MPEG-4 Talking Heads

    Directory of Open Access Journals (Sweden)

    Nikos Grammalidis

    2002-10-01

    Full Text Available This paper studies a new method for three-dimensional (3D) facial model adaptation and its integration into a text-to-speech (TTS) system. The 3D facial adaptation requires a set of two orthogonal views of the user's face with a number of feature points located on both views. Based on the correspondences of the feature points' positions, a generic face model is deformed nonrigidly, treating every facial part as a separate entity. A cylindrical texture map is then built from the two image views. The generated head models are compared to corresponding models obtained by the commonly used adaptation method that utilizes 3D radial basis functions. The generated 3D models are integrated into a talking head system, which consists of two distinct parts: a multilingual text-to-speech sub-system and an MPEG-4 compliant facial animation sub-system. Support for the Greek language has been added, while preserving lip and speech synchronization.

  13. FACIAL EXPRESSION RECOGNITION BASED ON COMBINATION OF DIFFERENCE IMAGE AND GABOR WAVELET%结合差图像和Gabor小波的人脸表情识别

    Institute of Scientific and Technical Information of China (English)

    丁志起; 赵晖

    2011-01-01

    In this paper we introduce a facial expression feature extraction algorithm that combines the difference image and the Gabor wavelet transform, and use a support vector machine (SVM) to recognize facial expressions. For a given static grey image containing facial expression information, pre-processing is executed first: the expression sub-regions, including the eyes and the mouth, are cut from the face and their difference images are obtained. We then extract Gabor feature vectors of the difference images, employ downsampling to reduce the dimensionality of the feature vectors, and normalize the resulting data; finally, SVM is used to classify the facial expression. This combination method has been compared with a recognition method that only extracts Gabor features from the expression sub-regions; the results indicate that the combined method achieves better recognition performance.
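
    As a hedged sketch of the feature step described above, the code below computes Gabor magnitude responses of a difference image (expressive minus neutral sub-region), downsamples and normalizes them, and feeds the vectors to an SVM; the filter-bank parameters and downsampling factor are illustrative assumptions.

```python
# Hedged sketch: Gabor features of a difference image + SVM classification.
import cv2
import numpy as np
from sklearn.svm import SVC

def gabor_bank(ksize=21, sigma=4.0, lambd=10.0, gamma=0.5, n_orient=8):
    return [cv2.getGaborKernel((ksize, ksize), sigma, np.pi * k / n_orient,
                               lambd, gamma, 0, ktype=cv2.CV_32F)
            for k in range(n_orient)]

def diff_gabor_features(neutral: np.ndarray, expressive: np.ndarray, step: int = 4) -> np.ndarray:
    """neutral/expressive: grayscale sub-region images of the same size (e.g. the mouth area)."""
    diff = cv2.absdiff(expressive, neutral).astype(np.float32)
    feats = []
    for kern in gabor_bank():
        resp = cv2.filter2D(diff, cv2.CV_32F, kern)
        feats.append(np.abs(resp)[::step, ::step].ravel())   # downsampling reduces dimensionality
    v = np.concatenate(feats)
    return v / (np.linalg.norm(v) + 1e-12)                   # normalization

# Training: clf = SVC(kernel="rbf").fit(np.vstack(feature_list), labels)
```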

  14. Quantitative analysis of facial paralysis using local binary patterns in biomedical videos.

    Science.gov (United States)

    He, Shu; Soraghan, John J; O'Reilly, Brian F; Xing, Dongshan

    2009-07-01

    Facial paralysis is the loss of voluntary muscle movement of one side of the face. A quantitative, objective, and reliable assessment system would be an invaluable tool for clinicians treating patients with this condition. This paper presents a novel framework for objective measurement of facial paralysis. The motion information in the horizontal and vertical directions and the appearance features on the apex frames are extracted based on the local binary patterns (LBPs) on the temporal-spatial domain in each facial region. These features are temporally and spatially enhanced by the application of novel block processing schemes. A multiresolution extension of uniform LBP is proposed to efficiently combine the micropatterns and large-scale patterns into a feature vector. The symmetry of facial movements is measured by the resistor-average distance (RAD) between LBP features extracted from the two sides of the face. Support vector machine is applied to provide quantitative evaluation of facial paralysis based on the House-Brackmann (H-B) scale. The proposed method is validated by experiments with 197 subject videos, which demonstrates its accuracy and efficiency.
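
    The sketch below illustrates the symmetry measure in simplified form: uniform LBP histograms are computed for a facial region and its mirrored counterpart, and compared with the resistor-average distance (the "parallel resistor" combination of the two directed KL divergences). The region definitions, temporal block processing, and the SVM mapping to the House-Brackmann scale are omitted, and the LBP parameters are assumptions.

```python
# Hedged sketch: uniform LBP histograms + resistor-average distance as a symmetry score.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_hist(region: np.ndarray, p: int = 8, r: int = 1) -> np.ndarray:
    """region: 2D grayscale facial region."""
    codes = local_binary_pattern(region, p, r, method="uniform")
    hist, _ = np.histogram(codes, bins=p + 2, range=(0, p + 2), density=True)
    return hist + 1e-10            # avoid zero bins in the KL divergences

def resistor_average_distance(p_hist: np.ndarray, q_hist: np.ndarray) -> float:
    kl = lambda a, b: float(np.sum(a * np.log(a / b)))
    d1, d2 = kl(p_hist, q_hist), kl(q_hist, p_hist)
    return (d1 * d2) / (d1 + d2 + 1e-12)   # 1/RAD = 1/KL(p||q) + 1/KL(q||p)

def facial_symmetry(left_region: np.ndarray, right_region: np.ndarray) -> float:
    """Lower values indicate more symmetric facial movement/appearance."""
    return resistor_average_distance(lbp_hist(left_region),
                                     lbp_hist(np.fliplr(right_region)))
```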

  15. Object Analysis of Human Emotions by Contourlets and GLCM Features

    Directory of Open Access Journals (Sweden)

    R. Suresh

    2014-08-01

    Full Text Available Facial expression is one of the most significant ways for human beings to express intentions, emotions and other nonverbal messages. A computerized human emotion recognition system based on the contourlet transform is proposed. To analyze the presented approach, seven kinds of human emotion, namely anger, fear, happiness, surprise, sadness, disgust and neutral, are considered in the facial images. The emotional face images are represented by the contourlet transform, which decomposes the images into directional sub-bands at multiple levels. Features are extracted from the obtained sub-bands and stored for further analysis. In addition, texture features from the Gray Level Co-occurrence Matrix (GLCM) are extracted and fused with the contourlet features to obtain higher recognition accuracy. To recognize the facial expressions, a K Nearest Neighbor (KNN) classifier is used to assign the input facial image to one of the seven analyzed expressions, and over 90% accuracy is achieved.

  16. Measuring Facial Movement

    Science.gov (United States)

    Ekman, Paul; Friesen, Wallace V.

    1976-01-01

    The Facial Action Code (FAC) was derived from an analysis of the anatomical basis of facial movement. The development of the method is explained, contrasting it to other methods of measuring facial behavior. An example of how facial behavior is measured is provided, and ideas about research applications are discussed. (Author)

  17. Photo anthropometric variations in Japanese facial features: Establishment of large-sample standard reference data for personal identification using a three-dimensional capture system.

    Science.gov (United States)

    Ogawa, Y; Wada, B; Taniguchi, K; Miyasaka, S; Imaizumi, K

    2015-12-01

    This study clarifies the anthropometric variations of the Japanese face by presenting large-sample population data of photo anthropometric measurements. The measurements can be used as standard reference data for the personal identification of facial images in forensic practices. To this end, three-dimensional (3D) facial images of 1126 Japanese individuals (865 male and 261 female Japanese individuals, aged 19-60 years) were acquired as samples using an already validated 3D capture system, and normative anthropometric analysis was carried out. In this anthropometric analysis, first, anthropological landmarks (22 items, i.e., entocanthion (en), alare (al), cheilion (ch), zygion (zy), gonion (go), sellion (se), gnathion (gn), labrale superius (ls), stomion (sto), labrale inferius (li)) were positioned on each 3D facial image (the direction of which had been adjusted to the Frankfort horizontal plane as the standard position for appropriate anthropometry), and anthropometric absolute measurements (19 items, i.e., bientocanthion breadth (en-en), nose breadth (al-al), mouth breadth (ch-ch), bizygomatic breadth (zy-zy), bigonial breadth (go-go), morphologic face height (se-gn), upper-lip height (ls-sto), lower-lip height (sto-li)) were exported using computer software for the measurement of a 3D digital object. Second, anthropometric indices (21 items, i.e., (se-gn)/(zy-zy), (en-en)/(al-al), (ls-li)/(ch-ch), (ls-sto)/(sto-li)) were calculated from these exported measurements. As a result, basic statistics, such as the mean values, standard deviations, and quartiles, and details of the distributions of these anthropometric results were shown. All of the results except "upper/lower lip ratio (ls-sto)/(sto-li)" were normally distributed. They were acquired as carefully as possible employing a 3D capture system and 3D digital imaging technologies. The sample of images was much larger than any Japanese sample used before for the purpose of personal identification. The

  18. Feature-based attentional modulation of orientation perception in somatosensation

    Directory of Open Access Journals (Sweden)

    Meike Annika Schweisfurth

    2014-07-01

    Full Text Available In a reaction time study of human tactile orientation detection the effects of spatial attention and feature-based attention were investigated. Subjects had to give speeded responses to target orientations (parallel and orthogonal to the finger axis in a random stream of oblique tactile distractor orientations presented to their index and ring fingers. Before each block of trials, subjects received a tactile cue at one finger. By manipulating the validity of this cue with respect to its location and orientation (feature, we provided an incentive to subjects to attend spatially to the cued location and only there to the cued orientation. Subjects showed quicker responses to parallel compared to orthogonal targets, pointing to an orientation anisotropy in sensory processing. Also, faster reaction times were observed in location-matched trials, i.e. when targets appeared on the cued finger, representing a perceptual benefit of spatial attention. Most importantly, reaction times were shorter to orientations matching the cue, both at the cued and at the uncued location, documenting a global enhancement of tactile sensation by feature-based attention. This is the first report of a perceptual benefit of feature-based attention outside the spatial focus of attention in somatosensory perception. The similarity to effects of feature-based attention in visual perception supports the notion of matching attentional mechanisms across sensory domains.

  19. Syntactic and Sentence Feature Based Hybrid Approach for Text Summarization

    Directory of Open Access Journals (Sweden)

    D.Y. Sakhare

    2014-02-01

    Full Text Available Recently, there has been significant research in automatic text summarization using feature-based techniques, most of which utilize a soft computing technique. However, the syntactic structure of sentences has not been widely used for text summarization because it is difficult to handle during the summarization process. On the other hand, the feature-based techniques available in the literature have shown efficient results. Combining syntactic structure with feature-based techniques can therefore smooth the summarization process and improve its efficiency. With the intention of combining the two different techniques, we present an approach to text summarization that combines features and the syntactic structure of the sentences. Here, two neural networks are trained based on the feature scores and the syntactic structure of sentences. Finally, the two neural networks are combined with a weighted average to find the score of each sentence. The experimentation is carried out on the DUC 2002 dataset for various compression ratios. The results show that the proposed approach achieved an F-measure of 80% for a compression ratio of 50%, which is better than the existing techniques.

  20. Iris Recognition System Based on Feature Level Fusion

    Directory of Open Access Journals (Sweden)

    Dr. S. R. Ganorkar

    2013-11-01

    Full Text Available Multibiometric systems utilize the evidence presented by multiple biometric sources (e.g., face and fingerprint, multiple fingers of a single user, multiple matchers, etc.) in order to determine or verify the identity of an individual. Information from multiple sources can be consolidated at several distinct levels. However, fusion of two different biometric traits is difficult because (i) the feature sets of multiple modalities may be incompatible (e.g., minutiae sets of fingerprints and eigen-coefficients of faces); (ii) the relationship between the feature spaces of different biometric systems may not be known; and (iii) concatenating two feature vectors may result in a feature vector with very large dimensionality, leading to the 'curse of dimensionality' problem, huge storage space and different processing algorithms. Moreover, multiple images of a single biometric trait do not show much variation. In this paper, we therefore present an efficient technique of feature-based fusion in a multimodal system where the left eye and the right eye are used as input. Iris recognition basically consists of iris localization, feature extraction, and identification. The algorithm uses Canny edge detection to identify the inner and outer boundaries of the iris. The image is then fed to a Gabor wavelet transform to extract features, and finally matching is done using an indexing algorithm. The results of the analysis indicate that the proposed technique can lead to a substantial improvement in performance.

  1. [Peripheral facial nerve palsy].

    Science.gov (United States)

    Pons, Y; Ukkola-Pons, E; Ballivet de Régloix, S; Champagne, C; Raynal, M; Lepage, P; Kossowski, M

    2013-06-01

    Facial palsy can be defined as a decrease in function of the facial nerve, the primary motor nerve of the facial muscles. When the facial palsy is peripheral, it affects both the superior and inferior areas of the face as opposed to central palsies, which affect only the inferior portion. The main cause of peripheral facial palsies is Bell's palsy, which remains a diagnosis of exclusion. The prognosis is good in most cases. In cases with significant cosmetic sequelae, a variety of surgical procedures are available (such as hypoglossal-facial anastomosis, temporalis myoplasty and Tenzel external canthopexy) to rehabilitate facial aesthetics and function.

  2. A flower image retrieval method based on ROI feature

    Institute of Scientific and Technical Information of China (English)

    洪安祥; 陈刚; 李均利; 池哲儒; 张亶

    2004-01-01

    Flower image retrieval is a very important step for computer-aided plant species recognition. In this paper, we propose an efficient segmentation method based on color clustering and domain knowledge to extract flower regions from flower images. For flower retrieval, we use the color histogram of a flower region to characterize the color features of flower and two shape-based features sets, Centroid-Contour Distance (CCD) and Angle Code Histogram (ACH), to characterize the shape features of a flower contour. Experimental results showed that our flower region extraction method based on color clustering and domain knowledge can produce accurate flower regions. Flower retrieval results on a database of 885 flower images collected from 14 plant species showed that our Region-of-Interest (ROI) based retrieval approach using both color and shape features can perform better than a method based on the global color histogram proposed by Swain and Ballard (1991) and a method based on domain knowledge-driven segmentation and color names proposed by Das et al.(1999).

  4. Facial Recognition

    Directory of Open Access Journals (Sweden)

    Mihalache Sergiu

    2014-05-01

    Full Text Available During their lifetime, people learn to recognize thousands of faces that they interact with. Face perception refers to an individual's understanding and interpretation of the face, particularly the human face, especially in relation to the associated information processing in the brain. The proportions and expressions of the human face are important for identifying origin, emotional tendencies, health qualities, and some social information. From birth, faces are important in the individual's social interaction. Face perception is very complex, as the recognition of facial expressions involves extensive and diverse areas of the brain. Our main goal is to put emphasis on specialized studies of human faces, and also to highlight the importance of attractiveness in their retention. We will see that there are many factors that influence face recognition.

  5. A Facial Expression Classification System Integrating Canny, Principal Component Analysis and Artificial Neural Network

    CERN Document Server

    Thai, Le Hoang; Hai, Tran Son

    2011-01-01

    Facial expression classification is an interesting research problem that has received much attention in recent years, and many methods have been proposed to solve it. In this research, we propose a novel approach using Canny edge detection, Principal Component Analysis (PCA) and an Artificial Neural Network. First, in the preprocessing phase, we use Canny for local region detection in the facial images. The features of each local region are then represented based on Principal Component Analysis (PCA). Finally, an Artificial Neural Network (ANN) is applied for facial expression classification. We apply our proposed method (Canny_PCA_ANN) for the recognition of six basic facial expressions on the JAFFE database, consisting of 213 images posed by 10 Japanese female models. The experimental results show the feasibility of our proposed method.
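
    A hedged sketch of the Canny-then-PCA-then-ANN pipeline is shown below; sklearn's MLPClassifier stands in for the ANN, and the image size, Canny thresholds and component count are illustrative assumptions rather than the paper's settings.

```python
# Hedged sketch: Canny edge map -> PCA -> neural-network expression classifier.
import cv2
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

def edge_vector(face_img: np.ndarray, size=(64, 64)) -> np.ndarray:
    gray = cv2.cvtColor(face_img, cv2.COLOR_BGR2GRAY) if face_img.ndim == 3 else face_img
    edges = cv2.Canny(cv2.resize(gray, size), 100, 200)   # local edge/region map
    return edges.ravel() / 255.0

def build_classifier():
    return make_pipeline(
        PCA(n_components=50),                               # per-image feature representation
        MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000),
    )

# Usage: X = np.vstack([edge_vector(img) for img in face_images]); y = expression_labels
#        clf = build_classifier().fit(X, y)
```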

  6. Analysis of quantitative pore features based on mathematical morphology

    Institute of Scientific and Technical Information of China (English)

    QI Heng-nian; CHEN Feng-nong; WANG Hang-jun

    2008-01-01

    Wood identification is a basic technique of wood science and industry. Pore features are among the most important identification features for hardwoods. We use a method based on the analysis of quantitative pore features, which differs from traditional qualitative methods. We apply mathematical morphology methods such as dilation and erosion, open and close transformations of wood cross-sections, image repairing, noise filtering and edge detection to segment the pores from their background. The mean square errors (MSE) of the pores are then computed to describe the distribution of pores. Our experiment shows that it is easy to classify the pore features into three basic types, just as in traditional qualitative methods, but using the MSE of the pores. This quantitative method improves wood identification considerably.
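
    The sketch below illustrates one plausible reading of this pipeline: morphological closing and opening followed by Otsu thresholding to segment dark pores, and an MSE computed over pore areas as the distribution statistic; the kernel size, threshold polarity, and the choice of areas (rather than positions) for the MSE are assumptions.

```python
# Hedged sketch: morphological pore segmentation + a simple distribution statistic.
import cv2
import numpy as np

def pore_statistics(cross_section: np.ndarray, min_area: int = 20):
    gray = cv2.cvtColor(cross_section, cv2.COLOR_BGR2GRAY) if cross_section.ndim == 3 else cross_section
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    # Closing then opening suppresses noise and repairs small breaks before thresholding.
    smoothed = cv2.morphologyEx(cv2.morphologyEx(gray, cv2.MORPH_CLOSE, kernel),
                                cv2.MORPH_OPEN, kernel)
    _, mask = cv2.threshold(smoothed, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    areas = np.array([stats[i, cv2.CC_STAT_AREA] for i in range(1, n)
                      if stats[i, cv2.CC_STAT_AREA] >= min_area], dtype=float)
    mse = float(np.mean((areas - areas.mean()) ** 2)) if areas.size else 0.0
    return areas.size, mse   # pore count and spread of pore sizes
```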

  7. Whispered speaker identification based on feature and model hybrid compensation

    Institute of Scientific and Technical Information of China (English)

    GU Xiaojiang; ZHAO Heming; Lu Gang

    2012-01-01

    In order to increase the short-time whispered speaker recognition rate under variable channel conditions, hybrid compensation in the model and feature domains is proposed. The method is based on joint factor analysis in the model training stage: it extracts the speaker factor and eliminates the channel factor by estimating the speaker and channel spaces from the training speech. In the test stage, the channel factor of the test speech is projected into the feature space for feature compensation, so channel information is removed in both the model and feature domains in order to improve the recognition rate. The experimental results show that the hybrid compensation obtains similar recognition rates under the three different training channel conditions and is more effective than joint factor analysis alone on short whispered speech.

  8. Weighted feature fusion for content-based image retrieval

    Science.gov (United States)

    Soysal, Omurhan A.; Sumer, Emre

    2016-07-01

    The feature descriptors such as SIFT (Scale Invariant Feature Transform), SURF (Speeded-up Robust Features) and ORB (Oriented FAST and Rotated BRIEF) are known as the most commonly used solutions for the content-based image retrieval problems. In this paper, a novel approach called "Weighted Feature Fusion" is proposed as a generic solution instead of applying problem-specific descriptors alone. Experiments were performed on two basic data sets of the Inria in order to improve the precision of retrieval results. It was found that in cases where the descriptors were used alone the proposed approach yielded 10-30% more accurate results than the ORB alone. Besides, it yielded 9-22% and 12-29% less False Positives compared to the SIFT alone and SURF alone, respectively.

  9. Remote Sensing Image Feature Extracting Based Multiple Ant Colonies Cooperation

    Directory of Open Access Journals (Sweden)

    Zhang Zhi-long

    2014-02-01

    Full Text Available This paper presents a novel feature extraction method for remote sensing imagery based on the cooperation of multiple ant colonies. First, multiresolution expression of the input remote sensing imagery is created, and two different ant colonies are spread on different resolution images. The ant colony in the low-resolution image uses phase congruency as the inspiration information, whereas that in the high-resolution image uses gradient magnitude. The two ant colonies cooperate to detect features in the image by sharing the same pheromone matrix. Finally, the image features are extracted on the basis of the pheromone matrix threshold. Because a substantial amount of information in the input image is used as inspiration information of the ant colonies, the proposed method shows higher intelligence and acquires more complete and meaningful image features than those of other simple edge detectors.

  10. Hierarchical Geometric Constraint Model for Parametric Feature Based Modeling

    Institute of Scientific and Technical Information of China (English)

    高曙明; 彭群生

    1997-01-01

    A new geometric constraint model is described, which is hierarchical and suitable for parametric feature-based modeling. In this model, different levels of geometric information are represented to support various stages of a design process. An efficient approach to parametric feature-based modeling is also presented, adopting the high-level geometric constraint model. The low-level geometric model, such as a B-rep, can be derived automatically from the high-level geometric constraint model, enabling designers to perform their task of detailed design.

  11. Facial nerve palsy and hemifacial spasm.

    Science.gov (United States)

    Valls-Solé, Josep

    2013-01-01

    Facial nerve lesions are usually benign conditions even though patients may present with emotional distress. Facial palsy usually resolves in 3-6 weeks, but if axonal degeneration takes place, it is likely that the patient will end up with a postparalytic facial syndrome featuring synkinesis, myokymic discharges, and hemifacial mass contractions after abnormal reinnervation. Essential hemifacial spasm is one form of facial hyperactivity that must be distinguished from synkinesis after facial palsy and also from other forms of facial dyskinesias. In this condition, there can be ectopic discharges, ephaptic transmission, and lateral spread of excitation among nerve fibers, giving rise to involuntary muscle twitching and spasms. Electrodiagnostic assessment is of relevance for the diagnosis and prognosis of peripheral facial palsy and hemifacial spasm. In this chapter the most relevant clinical and electrodiagnostic aspects of the two disorders are reviewed, with emphasis on the various stages of facial palsy after axonal degeneration, the pathophysiological mechanisms underlying the various features of hemifacial spasm, and the cues for differential diagnosis between the two entities.

  12. MRI-based diagnostic imaging of the intratemporal facial nerve; Die kernspintomographische Darstellung des intratemporalen N. facialis

    Energy Technology Data Exchange (ETDEWEB)

    Kress, B.; Baehren, W. [Bundeswehrkrankenhaus Ulm (Germany). Abt. fuer Radiologie

    2001-07-01

    Detailed imaging of the five sections of the full intratemporal course of the facial nerve can be achieved with MRI using thin-section techniques and surface coils. Contrast media are required for imaging of pathological processes. MRI is an established method for the diagnostic evaluation of cerebellopontine angle tumors and chronic facial palsy, as well as hemifacial spasm. Still under discussion is MRI for the documentation of facial palsy in the presence of fractures of the petrous bone, where hemorrhage within the petrous bone makes evaluation even more difficult. MRI-based diagnostic evaluation of idiopathic facial paralysis is currently subject to change; in its conventional form it cannot be recommended for routine evaluation at present. However, a quantitative analysis of the contrast medium uptake of the nerve may be an approach to improve the prognostic value of MRI in the acute phase of Bell's palsy. (orig./CB)

  13. Magnetoencephalographic study on facial movements

    Directory of Open Access Journals (Sweden)

    Kensaku eMiki

    2014-07-01

    Full Text Available In this review, we introduce our three studies that focused on facial movements. In the first study, we examined the temporal characteristics of neural responses elicited by viewing mouth movements, and assessed differences between the responses to mouth opening and closing movements and an averting-eyes condition. Our results showed that the occipitotemporal area, the human MT/V5 homologue, was active in the perception of both mouth and eye motions. Viewing mouth and eye movements did not elicit significantly different activity in the occipitotemporal area, which indicates that perception of the movement of facial parts may be processed in the same manner, and that this differs from motion in general. In the second study, we investigated whether early activity in the occipitotemporal region evoked by eye movements was influenced by a face contour and/or features such as the mouth. Our results revealed specific information processing for eye movements in the occipitotemporal region, and this activity was significantly influenced by whether the movements appeared with the facial contour and/or features, in other words, whether the eyes moved, even if the movement itself was the same. In the third study, we examined the effects of inverting the facial contour (hair and chin) and features (eyes, nose, and mouth) on processing for static and dynamic face perception. Our results showed the following: (1) in static face perception, activity in the right fusiform area was affected more by the inversion of features, while that in the left fusiform area was affected more by a disruption of the spatial relationship between the contour and features; and (2) in dynamic face perception, activity in the right occipitotemporal area was affected by the inversion of the facial contour.

  14. Facial Expression Classification Based on Multi Artificial Neural Network and Two Dimensional Principal Component Analysis

    CERN Document Server

    Le, Thai; Tran, Hai

    2011-01-01

    Facial expression classification is a kind of image classification that has received much attention in recent years. There are many approaches to this problem that aim to increase classification efficiency. One well-known approach proceeds in three steps: first, project the image into different spaces; second, classify the image into a responsive class in each of these spaces; and finally, combine the above classification results into a final result. The advantage of this approach is that it reflects the full and multiform nature of the classified images. In this paper, we use 2D-PCA and its variants to project the pattern or image into different spaces with different grouping strategies. We then develop a model that combines many neural networks for the last step. This model evaluates the reliability of each space and gives the final classification conclusion. Our model links many neural networks together, so we call it a Multi Artificial Neural Network (MANN). We apply our proposed model to 6 basic fa...
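
    To make the projection step concrete, the sketch below is a minimal 2D-PCA implementation that projects image matrices directly without vectorization; the component count is an assumption, and the multi-space grouping and the MANN combination of per-space classifiers are omitted.

```python
# Minimal 2D-PCA sketch: eigenvectors of the image covariance matrix as a projection.
import numpy as np

def fit_2dpca(images: np.ndarray, n_components: int = 8) -> np.ndarray:
    """images: (n, h, w) array of aligned face images; returns a (w, n_components) projection."""
    mean = images.mean(axis=0)
    centered = images - mean
    # Image covariance matrix G = (1/n) * sum_i (A_i - mean)^T (A_i - mean), shape (w, w).
    G = np.einsum('nhw,nhv->wv', centered, centered) / images.shape[0]
    eigvals, eigvecs = np.linalg.eigh(G)
    return eigvecs[:, ::-1][:, :n_components]     # leading eigenvectors as columns

def project(images: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Feature matrices Y_i = A_i W, shape (n, h, n_components)."""
    return images @ W
```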

  15. Content Based Video Retrieval using trajectory and Velocity features

    Directory of Open Access Journals (Sweden)

    Dr. S. D. Sawarkar

    2012-09-01

    Full Text Available The Internet forms today's largest source of information, containing a high density of multimedia objects, and its content is often semantically related. The identification of relevant media objects in such a vast collection poses a major problem that is studied in the area of multimedia information retrieval. Before the emergence of content-based retrieval, media were annotated with text, allowing the media to be accessed by text-based searching based on the classification of subject or semantics. In typical content-based retrieval systems, the contents of the media in the database are extracted and described by multi-dimensional feature vectors, also called descriptors. To retrieve the desired data, users submit query examples to the retrieval system. The system then represents these examples with feature vectors. The distances (i.e., similarities) between the feature vectors of the query example and those of the media in the feature dataset are then computed and ranked. Retrieval is conducted by applying an indexing scheme to provide an efficient way to search the video database. Finally, the system ranks the search results and returns the top results that are most similar to the query examples. A content-based retrieval system therefore has four aspects: feature extraction and representation, dimension reduction of features, indexing, and query specification. With the search engine developed here, the user can initiate a retrieval procedure in a way that gives a better chance of finding the desired content.

  16. Robust Speech Recognition Method Based on Discriminative Environment Feature Extraction

    Institute of Scientific and Technical Information of China (English)

    HAN Jiqing; GAO Wen

    2001-01-01

    It is an effective approach to learn the influence of environmental parameters, such as additive noise and channel distortions, from training data for robust speech recognition. Most of the previous methods are based on the maximum likelihood estimation criterion. However, these methods do not lead to a minimum error rate result. In this paper, a novel discriminative learning method of environmental parameters, which is based on the Minimum Classification Error (MCE) criterion, is proposed. In the method, a simple classifier and the Generalized Probabilistic Descent (GPD) algorithm are adopted to iteratively learn the environmental parameters. Consequently, the clean speech features are estimated from the noisy speech features with the estimated environmental parameters, and then the estimations of clean speech features are utilized in the back-end HMM classifier. Experiments show that the best error rate reduction of 32.1% is obtained, tested on a task of 18 isolated confusion Korean words, relative to a conventional HMM system.

  17. A Distributed Feature-based Environment for Collaborative Design

    Directory of Open Access Journals (Sweden)

    Wei-Dong Li

    2003-02-01

    Full Text Available This paper presents a client/server design environment based on 3D feature-based modelling and Java technologies to enable design information to be shared efficiently among members within a design team. In this environment, design tasks and clients are organised through working sessions generated and maintained by a collaborative server. The information from an individual design client during a design process is updated and broadcast to other clients in the same session through an event-driven and call-back mechanism. The downstream manufacturing analysis modules can be wrapped as agents and plugged into the open environment to support the design activities. At the server side, a feature-feature relationship is established and maintained to filter the varied information of a working part, so as to facilitate efficient information update during the design process.

  18. Content Based Image Recognition by Information Fusion with Multiview Features

    Directory of Open Access Journals (Sweden)

    Rik Das

    2015-09-01

    Full Text Available Substantial research interest has been observed in the field of object recognition as a vital component for modern intelligent systems. Content based image classification and retrieval have been considered as two popular techniques for identifying the object of interest. Feature extraction has played the pivotal role towards successful implementation of the aforesaid techniques. The paper has presented two novel techniques of feature extraction from diverse image categories both in the spatial domain and in the frequency domain. The multi-view features from the image categories were evaluated for classification and retrieval performances by means of a fusion based recognition architecture. The experimentation was carried out with four different popular public datasets. The proposed fusion framework has exhibited an average increase of 24.71% and 20.78% in precision rates for classification and retrieval respectively, when compared to state-of-the-art techniques. The experimental findings were validated with a paired t test for statistical significance.

  19. [Electroencephalogram Feature Selection Based on Correlation Coefficient Analysis].

    Science.gov (United States)

    Zhou, Jinzhi; Tang, Xiaofang

    2015-08-01

    In order to improve the accuracy of classification with a small amount of motor imagery training data in the development of brain-computer interface (BCI) systems, we proposed an analysis method to automatically select characteristic parameters based on correlation coefficient analysis. For the five sample datasets of dataset IVa from the 2005 BCI Competition, we utilized the short-time Fourier transform (STFT) and correlation coefficient calculation to reduce the dimensionality of the raw electroencephalogram, then introduced feature extraction based on common spatial patterns (CSP) and classified with linear discriminant analysis (LDA). Simulation results showed that the average classification accuracy could be improved by using the correlation coefficient feature selection method compared with not using it. Compared with a support vector machine (SVM) based feature optimization algorithm, correlation coefficient analysis leads to better selection parameters and improves the classification accuracy.
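    As a rough illustration of the correlation-coefficient pre-selection idea, the sketch below ranks feature columns by the magnitude of their Pearson correlation with the class label and keeps the strongest ones. The STFT, CSP, and LDA stages from the paper are not shown, and the synthetic trial data and the number of retained features are assumptions.

```python
# Correlation-coefficient-based feature pre-selection (illustrative only).
import numpy as np

def select_by_correlation(X, y, n_keep=10):
    """Keep the n_keep columns of X with the largest |Pearson correlation| to y."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    corr = (Xc * yc[:, None]).sum(axis=0) / (
        np.sqrt((Xc ** 2).sum(axis=0)) * np.sqrt((yc ** 2).sum()) + 1e-12)
    keep = np.argsort(-np.abs(corr))[:n_keep]
    return np.sort(keep)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    X = rng.standard_normal((200, 60))           # trials x candidate features (synthetic)
    y = rng.integers(0, 2, size=200)             # two motor-imagery classes
    print(select_by_correlation(X, y, n_keep=10))
```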

  20. Image Mosaic Method Based on SIFT Features of Line Segment

    Directory of Open Access Journals (Sweden)

    Jun Zhu

    2014-01-01

    Full Text Available This paper proposes a novel image mosaic method based on SIFT (Scale Invariant Feature Transform) features of line segments, aiming to handle scaling, rotation, changes in lighting conditions, and other differences between two images in the panoramic image mosaic process. This method first uses the Harris corner detection operator to detect key points. Second, it constructs directed line segments, describes them with SIFT features, and matches those directed segments to acquire a rough point matching. Finally, the RANSAC method is used to eliminate wrong pairs in order to accomplish the image mosaic. Results from experiments based on four pairs of images show that our method has strong robustness to resolution, lighting, rotation, and scaling.
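    The pipeline of feature matching, RANSAC outlier rejection, and warping can be sketched with OpenCV as below. This simplified version uses standard point-based SIFT matching rather than the paper's directed line-segment descriptors, and the file paths, ratio threshold, and canvas size are placeholder assumptions.

```python
# Simplified SIFT + RANSAC mosaic sketch (point features, not line-segment features).
import cv2
import numpy as np

def stitch_pair(img1, img2, ratio=0.75):
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    # Lowe's ratio test discards ambiguous matches.
    good = [m for m, n in matcher.knnMatch(d1, d2, k=2) if m.distance < ratio * n.distance]
    src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # RANSAC removes remaining wrong pairs before estimating the homography.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = img2.shape[:2]
    canvas = cv2.warpPerspective(img1, H, (w * 2, h))
    canvas[:h, :w] = img2
    return canvas

if __name__ == "__main__":
    a = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)    # placeholder image paths
    b = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)
    if a is not None and b is not None:
        cv2.imwrite("mosaic.jpg", stitch_pair(a, b))
```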

  1. [Multiple transmission electron microscopic image stitching based on SIFT features].

    Science.gov (United States)

    Li, Mu; Lu, Yanmeng; Han, Shuaihu; Wu, Zhuobin; Chen, Jiajing; Liu, Zhexing; Cao, Lei

    2015-08-01

    We proposed a new stitching method based on SIFT features to obtain an enlarged view of transmission electron microscopic (TEM) images with a high resolution. The SIFT features were extracted from the images, which were then combined with a fitted polynomial correction field to correct the images, followed by image alignment based on the SIFT features. The image seams at the junction were finally removed by Poisson image editing to achieve seamless stitching, which was validated on 60 local glomerular TEM images with an image alignment error of 62.5 to 187.5 nm. Compared with 3 other stitching methods, the proposed method could effectively reduce image deformation and avoid artifacts to facilitate renal biopsy pathological diagnosis.

  2. GPR-Based Landmine Detection and Identification Using Multiple Features

    Directory of Open Access Journals (Sweden)

    Kwang Hee Ko

    2012-01-01

    Full Text Available This paper presents a method to identify landmines in various burial conditions. A ground penetrating radar (GPR) is used to generate the data set, which is then processed to reduce the ground effect and noise to obtain landmine signals. Principal components and Fourier coefficients of the landmine signals are computed and used as features of each landmine for detection and identification. A database is constructed based on the features of various types of landmines and the ground conditions, including different levels of moisture, types of ground, and the burial depths of the landmines. Detection and identification are performed by searching for features in the database. For a robust decision, the counting method and the Mahalanobis distance-based likelihood ratio test method are employed. Four landmines, different in size and material, are considered as examples that demonstrate the efficiency of the proposed method for detecting and identifying landmines.
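    The Mahalanobis-distance decision can be illustrated as follows: each class in the feature database is summarised by its mean and covariance, and a query signature is assigned to the class with the smallest Mahalanobis distance. The feature vectors here are synthetic stand-ins rather than GPR-derived principal components or Fourier coefficients, and the class names are invented for the example.

```python
# Mahalanobis-distance nearest-class identification sketch (synthetic data).
import numpy as np

def mahalanobis_classify(query, class_stats):
    """class_stats: {label: (mean, inverse_covariance)}; returns best label and distance."""
    best, best_d = None, np.inf
    for label, (mu, inv_cov) in class_stats.items():
        diff = query - mu
        d = float(diff @ inv_cov @ diff)
        if d < best_d:
            best, best_d = label, d
    return best, best_d

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    stats = {}
    for label in ["mine_A", "mine_B", "clutter"]:       # hypothetical class labels
        samples = rng.standard_normal((50, 8)) + rng.uniform(-2, 2, size=8)
        cov = np.cov(samples, rowvar=False) + 1e-6 * np.eye(8)
        stats[label] = (samples.mean(axis=0), np.linalg.inv(cov))
    print(mahalanobis_classify(rng.standard_normal(8), stats))
```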

  3. A Hybrid method of face detection based on Feature Extraction using PIFR and Feature Optimization using TLBO

    Directory of Open Access Journals (Sweden)

    Kapil Verma

    2016-01-01

    Full Text Available In this paper we propose a face detection method based on feature selection and feature optimization. Current research in biometric security uses feature optimization to improve face detection techniques. A face is basically characterized by three kinds of features: skin color, texture, and the shape and size of the face; the most important of these are skin color and texture. The proposed detection technique uses the texture features of the face image. For texture extraction, a partial feature extraction (PIFR) function is used, which is a promising shape feature analysis tool. For feature selection and optimization, a multi-objective TLBO is used. TLBO is a population-based search technique, and two constraint functions are defined for the selection and optimization process. The proposed face detection algorithm first passes the face image database through the partial feature extractor function, which yields the texture features of each face image. For performance evaluation, the proposed algorithm was implemented in MATLAB 7.8.0 using face images provided by the Google face image database, and the hit and miss ratio was used for numerical analysis. Our empirical evaluation shows better prediction results in comparison with the PIFR method of face detection.

  4. Expression Recognition Based on Variant Sampling Method and Gabor Features%基于多种采样方式和Gabor特征的表情识别

    Institute of Scientific and Technical Information of China (English)

    徐洁; 章毓晋

    2011-01-01

    This paper investigates a facial expression recognition system based on variant sampling methods and local Gabor features at different scales, optimized by Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). The sampling methods not only reduce the computation time and storage required for feature extraction and classification, but also improve the recognition rates. The results obtained from sampling in the vertical direction indicate that this direction contains more facial expression information. Regarding the influence of Gabor filters at different scales and orientations on recognition rates, the results show that after the Gabor transform the main facial expression feature information is concentrated and redundant across scales and orientations, and that a small-scale, all-orientation filter bank achieves better recognition.
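    The Gabor filtering stage described above can be sketched with OpenCV: a small bank of kernels at several scales and orientations is convolved with the face image and the (sub-sampled) magnitude responses are concatenated into a feature vector. The kernel size, scales, orientations, and sub-sampling step are illustrative assumptions, and the PCA+LDA selection stage is omitted.

```python
# Multi-scale, multi-orientation Gabor feature sketch (parameter values are illustrative).
import cv2
import numpy as np

def gabor_features(img, scales=(4, 8), orientations=8, step=4):
    feats = []
    img32 = img.astype(np.float32)
    for lambd in scales:
        for k in range(orientations):
            theta = k * np.pi / orientations
            # getGaborKernel(ksize, sigma, theta, lambd, gamma, psi)
            kernel = cv2.getGaborKernel((21, 21), lambd / 2.0, theta, lambd, 0.5, 0)
            response = cv2.filter2D(img32, cv2.CV_32F, kernel)
            # Sub-sample the magnitude response to keep the feature vector small.
            feats.append(np.abs(response)[::step, ::step].ravel())
    return np.concatenate(feats)

if __name__ == "__main__":
    face = np.random.default_rng(4).random((64, 64)).astype(np.float32)  # synthetic face
    print(gabor_features(face).shape)
```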

  5. Fusing LBP and HOG features by canonical correlation analysis for facial age estimation%典型相关分析融合LBP和HOG特征的人脸年龄估计

    Institute of Scientific and Technical Information of China (English)

    瞿中; 孔令军; 冯欣

    2014-01-01

    Estimating human age via facial image analysis is very difficult, because the factors causing variations in the appearance of the human face include not only aging but also lifestyle, living environment, and other influences. Uneven illumination and pose variations during image acquisition further complicate age estimation. Existing estimation methods use the shape or texture of the facial image to characterize human aging, after preprocessing such as gray-level balancing and Procrustes analysis. Motivated by the fact that LBP and HOG features of facial images are both robust to illumination and rotation and provide complementary information for characterizing human age, we propose fusing these two sources of information at the feature level using canonical correlation analysis (CCA) for enhanced facial age estimation. We then learn a multiple linear regression function to uncover the relation between the fused features and the ground-truth age values for age prediction. Experimental results demonstrate the efficacy of the proposed method, which achieves good performance without preprocessing such as face alignment.
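    A hedged sketch of the feature-level CCA fusion is given below: LBP histograms and HOG descriptors are extracted per face image, CCA learns a pair of projections on the training set, and the two projected views are concatenated as the fused feature. The LBP/HOG parameters, component count, and synthetic images are assumptions made for illustration, and the regression stage is omitted.

```python
# CCA fusion of LBP and HOG descriptors (illustrative parameters, synthetic data).
import numpy as np
from skimage.feature import local_binary_pattern, hog
from sklearn.cross_decomposition import CCA

def lbp_hist(img, P=8, R=1.0, bins=10):
    img8 = (img * 255).astype(np.uint8)      # assumes image values in [0, 1]
    codes = local_binary_pattern(img8, P, R, method="uniform")
    h, _ = np.histogram(codes, bins=bins, range=(0, bins), density=True)
    return h

def fuse_with_cca(images, n_components=5):
    X = np.array([lbp_hist(im) for im in images])                        # view 1: LBP
    Y = np.array([hog(im, pixels_per_cell=(16, 16)) for im in images])   # view 2: HOG
    cca = CCA(n_components=n_components).fit(X, Y)
    Xc, Yc = cca.transform(X, Y)
    return np.hstack([Xc, Yc])                                           # fused features

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    faces = rng.random((30, 64, 64))          # synthetic stand-in face images
    print(fuse_with_cca(faces).shape)         # (30, 10)
```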

  6. Dermoscopy analysis of RGB-images based on comparative features

    Science.gov (United States)

    Myakinin, Oleg O.; Zakharov, Valery P.; Bratchenko, Ivan A.; Artemyev, Dmitry N.; Neretin, Evgeny Y.; Kozlov, Sergey V.

    2015-09-01

    In this paper, we propose an algorithm for color and texture analysis of dermoscopic images of human skin based on Haar wavelets, Local Binary Patterns (LBP) and Histogram Analysis. This approach is a modification of the «7-point checklist» clinical method. In that form it is an "absolute" diagnostic method, because it uses only features extracted from the tumor's ROI (Region of Interest), which can be selected manually and/or with a special algorithm. We propose additional features extracted from the same image for comparative analysis of tumor and healthy skin. We used Euclidean distance, cosine similarity, and the Tanimoto coefficient as comparison metrics between color and texture features extracted separately from the tumor's and the healthy skin's ROI. A classifier for separating melanoma images from other tumors was built with the SVM (Support Vector Machine) algorithm. Classification errors with and without the comparative skin-versus-tumor features were analyzed, and a significant increase in recognition quality with the comparative features was demonstrated. Moreover, we analyzed two modes (manual and automatic) for ROI selection on tumor and healthy skin areas. We reached 91% sensitivity using the comparative features, in contrast with 77% sensitivity using only the "absolute" method. The specificity remained unchanged (94%) in both cases.
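    The three comparison metrics named above are easy to state in code; the sketch below computes them between two feature vectors. The vectors are random placeholders rather than the paper's Haar/LBP/histogram features.

```python
# Euclidean, cosine, and Tanimoto comparisons between tumour-ROI and skin-ROI features.
import numpy as np

def comparative_features(f_tumor, f_skin):
    f_tumor = np.asarray(f_tumor, dtype=float)
    f_skin = np.asarray(f_skin, dtype=float)
    euclidean = np.linalg.norm(f_tumor - f_skin)
    cosine = f_tumor @ f_skin / (np.linalg.norm(f_tumor) * np.linalg.norm(f_skin) + 1e-12)
    # Tanimoto coefficient for real-valued vectors: a.b / (|a|^2 + |b|^2 - a.b)
    tanimoto = f_tumor @ f_skin / (
        f_tumor @ f_tumor + f_skin @ f_skin - f_tumor @ f_skin + 1e-12)
    return {"euclidean": euclidean, "cosine": cosine, "tanimoto": tanimoto}

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    print(comparative_features(rng.random(32), rng.random(32)))
```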

  7. An Efficient Annotation of Search Results Based on Feature

    Directory of Open Access Journals (Sweden)

    A. Jebha

    2015-10-01

    Full Text Available With the increasing number of web databases, a major part of the deep web consists of database content. In several search engines, the encoded data in result pages returned from the web often comes from structured databases, referred to as Web databases (WDB). A result page returned from a WDB contains multiple search result records (SRR). Data units obtained from these databases are encoded into dynamic result pages for manual processing. In order to make these units machine processable, relevant information is extracted and data labels are assigned meaningfully. In this paper, feature ranking is proposed to extract the relevant information of extracted features from the WDB. Feature ranking is practical for enhancing the understanding of data and identifying relevant features. This research explores the performance of the feature ranking process by using linear support vector machines with various features of the WDB database for annotation of relevant results. Experimental results of the proposed system are better than those of earlier methods.
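    One common way to rank features with a linear SVM is to fit the model and use the absolute weight assigned to each feature as its relevance score; the sketch below shows this pattern with scikit-learn. The data, labels, and feature count are synthetic placeholders, not fields from real web databases.

```python
# Linear-SVM feature ranking sketch (synthetic data).
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)
X = rng.standard_normal((300, 12))            # 12 candidate features per record
y = (X[:, 2] + 0.5 * X[:, 7] > 0).astype(int) # relevance depends on features 2 and 7

model = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=5000))
model.fit(X, y)
weights = np.abs(model.named_steps["linearsvc"].coef_).ravel()
ranking = np.argsort(-weights)                # largest |weight| first
print("features ranked by relevance:", ranking)
```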

  8. Linear feature selection in texture analysis - A PLS based method

    DEFF Research Database (Denmark)

    Marques, Joselene; Igel, Christian; Lillholm, Martin

    2013-01-01

    We present a texture analysis methodology that combined uncommitted machine-learning techniques and partial least squares (PLS) in a fully automatic framework. Our approach introduces a robust PLS-based dimensionality reduction (DR) step to specifically address outliers and high-dimensional feature...
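    As a brief illustration of supervised PLS-based dimensionality reduction, the sketch below projects a high-dimensional texture feature matrix onto a few PLS components with scikit-learn. The data, component count, and binary target are assumptions for the example; the rest of the paper's pipeline is not reproduced.

```python
# PLS-based dimensionality reduction sketch (synthetic data).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(8)
X = rng.standard_normal((100, 500))           # high-dimensional texture features
y = rng.integers(0, 2, size=100).astype(float)

pls = PLSRegression(n_components=5)
pls.fit(X, y)
X_reduced = pls.transform(X)                  # (100, 5) supervised projection
print(X_reduced.shape)
```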

  9. Adaptive Feature Based Control of Compact Disk Players

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Stoustrup, Jakob; Vidal, Enrique Sanchez

    2005-01-01

    of the Compact Disc. The problem is to design servo controllers which are well suited for handling surface faults which disturb the position measurement and still react sufficiently against normal disturbances like mechanical shocks. In previous work of the same authors a feature based control scheme for CD...

  10. Feature-Based Statistical Analysis of Combustion Simulation Data

    Energy Technology Data Exchange (ETDEWEB)

    Bennett, J; Krishnamoorthy, V; Liu, S; Grout, R; Hawkes, E; Chen, J; Pascucci, V; Bremer, P T

    2011-11-18

    We present a new framework for feature-based statistical analysis of large-scale scientific data and demonstrate its effectiveness by analyzing features from Direct Numerical Simulations (DNS) of turbulent combustion. Turbulent flows are ubiquitous and account for transport and mixing processes in combustion, astrophysics, fusion, and climate modeling among other disciplines. They are also characterized by coherent structure or organized motion, i.e. nonlocal entities whose geometrical features can directly impact molecular mixing and reactive processes. While traditional multi-point statistics provide correlative information, they lack nonlocal structural information, and hence, fail to provide mechanistic causality information between organized fluid motion and mixing and reactive processes. Hence, it is of great interest to capture and track flow features and their statistics together with their correlation with relevant scalar quantities, e.g. temperature or species concentrations. In our approach we encode the set of all possible flow features by pre-computing merge trees augmented with attributes, such as statistical moments of various scalar fields, e.g. temperature, as well as length-scales computed via spectral analysis. The computation is performed in an efficient streaming manner in a pre-processing step and results in a collection of meta-data that is orders of magnitude smaller than the original simulation data. This meta-data is sufficient to support a fully flexible and interactive analysis of the features, allowing for arbitrary thresholds, providing per-feature statistics, and creating various global diagnostics such as Cumulative Density Functions (CDFs), histograms, or time-series. We combine the analysis with a rendering of the features in a linked-view browser that enables scientists to interactively explore, visualize, and analyze the equivalent of one terabyte of simulation data. We highlight the utility of this new framework for combustion

  11. Sequence-based classification using discriminatory motif feature selection.

    Directory of Open Access Journals (Sweden)

    Hao Xiong

    Full Text Available Most existing methods for sequence-based classification use exhaustive feature generation, employing, for example, all k-mer patterns. The motivation behind such (enumerative) approaches is to minimize the potential for overlooking important features. However, there are shortcomings to this strategy. First, practical constraints limit the scope of exhaustive feature generation to patterns of length ≤ k, such that potentially important, longer (> k) predictors are not considered. Second, features so generated exhibit strong dependencies, which can complicate understanding of derived classification rules. Third, and most importantly, numerous irrelevant features are created. These concerns can compromise prediction and interpretation. While remedies have been proposed, they tend to be problem-specific and not broadly applicable. Here, we develop a generally applicable methodology, and an attendant software pipeline, that is predicated on discriminatory motif finding. In addition to the traditional training and validation partitions, our framework entails a third level of data partitioning, a discovery partition. A discriminatory motif finder is used on sequences and associated class labels in the discovery partition to yield a (small) set of features. These features are then used as inputs to a classifier in the training partition. Finally, performance assessment occurs on the validation partition. Important attributes of our approach are its modularity (any discriminatory motif finder and any classifier can be deployed) and its universality (all data, including sequences that are unaligned and/or of unequal length, can be accommodated). We illustrate our approach on two nucleosome occupancy datasets and a protein solubility dataset, previously analyzed using enumerative feature generation. Our method achieves excellent performance results, with and without optimization of classifier tuning parameters. A Python pipeline implementing the approach is

  12. Complex chromosome rearrangement in a child with microcephaly, dysmorphic facial features and mosaicism for a terminal deletion del(18)(q21.32-qter) investigated by FISH and array-CGH: Case report

    Directory of Open Access Journals (Sweden)

    Kokotas Haris

    2008-11-01

    Full Text Available We report on a 7 years and 4 months old Greek boy with mild microcephaly and dysmorphic facial features. He was a sociable child with maxillary hypoplasia, epicanthal folds, upslanting palpebral fissures with long eyelashes, and hypertelorism. His ears were prominent and dysmorphic, he had a long philtrum and a high arched palate. His weight was 17 kg (25th percentile) and his height 120 cm (50th percentile). High resolution chromosome analysis identified in 50% of the cells a normal male karyotype, and in 50% of the cells one chromosome 18 showed a terminal deletion from 18q21.32. Molecular cytogenetic investigation confirmed a del(18)(q21.32-qter) in the one chromosome 18, but furthermore revealed the presence of a duplication in q21.2 in the other chromosome 18. The case is discussed concerning comparable previously reported cases and the possible mechanisms of formation.

  13. Image Recommendation Algorithm Using Feature-Based Collaborative Filtering

    Science.gov (United States)

    Kim, Deok-Hwan

    As the multimedia contents market continues its rapid expansion, the amount of image contents used in mobile phone services, digital libraries, and catalog service is increasing remarkably. In spite of this rapid growth, users experience high levels of frustration when searching for the desired image. Even though new images are profitable to the service providers, traditional collaborative filtering methods cannot recommend them. To solve this problem, in this paper, we propose feature-based collaborative filtering (FBCF) method to reflect the user's most recent preference by representing his purchase sequence in the visual feature space. The proposed approach represents the images that have been purchased in the past as the feature clusters in the multi-dimensional feature space and then selects neighbors by using an inter-cluster distance function between their feature clusters. Various experiments using real image data demonstrate that the proposed approach provides a higher quality recommendation and better performance than do typical collaborative filtering and content-based filtering techniques.

  14. Digital video steganalysis using motion vector recovery-based features.

    Science.gov (United States)

    Deng, Yu; Wu, Yunjie; Zhou, Linna

    2012-07-10

    As a novel digital video steganography, the motion vector (MV)-based steganographic algorithm leverages the MVs as the information carriers to hide the secret messages. The existing steganalyzers based on the statistical characteristics of the spatial/frequency coefficients of the video frames cannot attack the MV-based steganography. In order to detect the presence of information hidden in the MVs of video streams, we design a novel MV recovery algorithm and propose the calibration distance histogram-based statistical features for steganalysis. The support vector machine (SVM) is trained with the proposed features and used as the steganalyzer. Experimental results demonstrate that the proposed steganalyzer can effectively detect the presence of hidden messages and outperform others by the significant improvements in detection accuracy even with low embedding rates.

  15. Ear Recognition Based on Gabor Features and KFDA

    OpenAIRE

    2014-01-01

    We propose an ear recognition system based on 2D ear images which includes three stages: ear enrollment, feature extraction, and ear recognition. Ear enrollment includes ear detection and ear normalization. The ear detection approach based on improved Adaboost algorithm detects the ear part under complex background using two steps: offline cascaded classifier training and online ear detection. Then Active Shape Model is applied to segment the ear part and normalize all the ear images to the s...

  16. 基于Gabor小波与LBP直方图序列的人脸年龄估计%Age Estimation of Facial Images Based on Gabor Wavelet and Histogram Sequence of LBP

    Institute of Scientific and Technical Information of China (English)

    黄兵; 郭继昌

    2012-01-01

    A method for age estimation of facial images is proposed based on the combination of Gabor wavelets and the histogram sequence of the local binary pattern (LBP). The facial images are first filtered by multi-orientation and multi-scale Gabor filters, and Gabor magnitude maps (GMMs) are extracted. The local neighborhood patterns of the GMMs are then encoded by the LBP operator, and the resulting maps are divided into several sub-blocks whose histogram sequences describe the face. To further reduce the dimensionality of the facial features, principal component analysis (PCA) is applied to the histogram sequences. Finally, a leave-one-person-out (LOPO) scheme with support vector regression (SVR) is used to train and test on the face age database. Experimental results show that the method can estimate the age of human faces quickly and effectively.
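    The back end of such a pipeline (blockwise LBP histogram sequences, PCA compression, and SVR of age) can be sketched as follows. The Gabor magnitude-map stage and the LOPO protocol are omitted, and the block grid, LBP parameters, and synthetic images and ages are assumptions for illustration.

```python
# Blockwise LBP histogram sequence + PCA + SVR sketch (synthetic data).
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.decomposition import PCA
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline

def lbp_histogram_sequence(img, grid=(4, 4), P=8, R=1.0):
    img8 = (img * 255).astype(np.uint8)          # assumes image values in [0, 1]
    codes = local_binary_pattern(img8, P, R, method="uniform")
    h, w = codes.shape
    bh, bw = h // grid[0], w // grid[1]
    hists = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = codes[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            hist, _ = np.histogram(block, bins=P + 2, range=(0, P + 2), density=True)
            hists.append(hist)
    return np.concatenate(hists)

if __name__ == "__main__":
    rng = np.random.default_rng(9)
    faces = rng.random((40, 64, 64))             # synthetic stand-in face images
    ages = rng.uniform(15, 70, size=40)          # synthetic ages
    X = np.array([lbp_histogram_sequence(f) for f in faces])
    model = make_pipeline(PCA(n_components=20), SVR(kernel="rbf", C=10.0))
    model.fit(X, ages)
    print(model.predict(X[:3]))
```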

  17. Feature Learning Based Random Walk for Liver Segmentation

    Science.gov (United States)

    Zheng, Yongchang; Ai, Danni; Zhang, Pan; Gao, Yefei; Xia, Likun; Du, Shunda; Sang, Xinting; Yang, Jian

    2016-01-01

    Liver segmentation is a significant processing technique for computer-assisted diagnosis. This method has attracted considerable attention and achieved effective results. However, liver segmentation using computed tomography (CT) images remains a challenging task because of the low contrast between the liver and adjacent organs. This paper proposes a feature-learning-based random walk method for liver segmentation using CT images. Four texture features were extracted and then classified to determine the classification probability corresponding to the test images. Seed points on the original test image were automatically selected and further used in the random walk (RW) algorithm to achieve comparable results to previous segmentation methods. PMID:27846217

  18. Spectral feature matching based on partial least squares

    Institute of Scientific and Technical Information of China (English)

    Weidong Yan; Zheng Tian; Lulu Pan; Mingtao Ding

    2009-01-01

    We investigate spectral approaches to the problem of point pattern matching, and present a spectral feature descriptor based on partial least squares (PLS). Given the keypoints of two images, we define position similarity matrices for each image and extract spectral features from the matrices by PLS, which indicate the geometric distribution and inner relationships of the keypoints. Keypoint matching is then done by bipartite graph matching. Experiments on both synthetic and real-world data corroborate the robustness and invariance of the algorithm.

  19. The relationships between processing facial identity, emotional expression, facial speech, and gaze direction during development.

    Science.gov (United States)

    Spangler, Sibylle M; Schwarzer, Gudrun; Korell, Monika; Maier-Karius, Johanna

    2010-01-01

    Four experiments were conducted with 5- to 11-year-olds and adults to investigate whether facial identity, facial speech, emotional expression, and gaze direction are processed independently of or in interaction with one another. In a computer-based, speeded sorting task, participants sorted faces according to facial identity while disregarding facial speech, emotional expression, and gaze direction or, alternatively, according to facial speech, emotional expression, and gaze direction while disregarding facial identity. Reaction times showed that children and adults were able to direct their attention selectively to facial identity despite variations of other kinds of face information, but when sorting according to facial speech and emotional expression, they were unable to ignore facial identity. In contrast, gaze direction could be processed independently of facial identity in all age groups. Apart from shorter reaction times and fewer classification errors, no substantial change in processing facial information was found to be correlated with age. We conclude that adult-like face processing routes are employed from 5 years of age onward.

  20. An image segmentation based method for iris feature extraction

    Institute of Scientific and Technical Information of China (English)

    XU Guang-zhu; ZHANG Zai-feng; MA Yi-de

    2008-01-01

    In this article, the local anomalistic blocks such as crypts, furrows, and so on in the iris are initially used directly as iris features. A novel image segmentation method based on the intersecting cortical model (ICM) neural network was introduced to segment these anomalistic blocks. First, the normalized iris image was put into the ICM neural network after enhancement. Second, the iris features were segmented out perfectly and were output in binary image type by the ICM neural network. Finally, the fourth output pulse image produced by the ICM neural network was chosen as the iris code for the convenience of real-time processing. To estimate the performance of the presented method, an iris recognition platform was produced and the Hamming Distance between two iris codes was computed to measure the dissimilarity between them. The experimental results on the CASIA v1.0 and Bath iris image databases show that the proposed iris feature extraction algorithm has promising potential in iris recognition.

  1. Digital signature systems based on smart card and fingerprint feature

    Institute of Scientific and Technical Information of China (English)

    You Lin; Xu Maozhi; Zheng Zhiming

    2007-01-01

    Two signature systems based on smart cards and fingerprint features are proposed. In one signature system, the cryptographic key is stored in the smart card and is only accessible when the signer's extracted fingerprint features match his stored template. To resist tampering on a public channel, the user's message and the signed message are encrypted by the signer's public key and the user's public key, respectively. In the other signature system, the keys are generated by combining the signer's fingerprint features, check bits, and a memorable key, and there are no matching process and no keys stored on the smart card. Additionally, there is generally more than one public key in this system, that is, there exist some pseudo public keys in addition to the real one.

  2. Improved MFCC-Based Feature for Robust Speaker Identification

    Institute of Scientific and Technical Information of China (English)

    WU Zunjing; CAO Zhigang

    2005-01-01

    The Mel-frequency cepstral coefficient (MFCC) is the most widely used feature in speech and speaker recognition. However, MFCC is very sensitive to noise interference, which tends to drastically degrade the performance of recognition systems because of the mismatches between training and testing. In this paper, the logarithmic transformation in the standard MFCC analysis is replaced by a combined function to improve the noisy sensitivity. The proposed feature extraction process is also combined with speech enhancement methods, such as spectral subtraction and median-filter to further suppress the noise. Experiments show that the proposed robust MFCC-based feature significantly reduces the recognition error rate over a wide signal-to-noise ratio range.

  3. A novel robot visual homing method based on SIFT features.

    Science.gov (United States)

    Zhu, Qidan; Liu, Chuanjia; Cai, Chengtao

    2015-10-14

    Warping is an effective visual homing method for robot local navigation. However, the performance of the warping method can be greatly influenced by the changes of the environment in a real scene, thus resulting in lower accuracy. In order to solve the above problem and to get higher homing precision, a novel robot visual homing algorithm is proposed by combining SIFT (scale-invariant feature transform) features with the warping method. The algorithm is novel in using SIFT features as landmarks instead of the pixels in the horizon region of the panoramic image. In addition, to further improve the matching accuracy of landmarks in the homing algorithm, a novel mismatching elimination algorithm, based on the distribution characteristics of landmarks in the catadioptric panoramic image, is proposed. Experiments on image databases and on a real scene confirm the effectiveness of the proposed method.

  4. Features Extraction for Object Detection Based on Interest Point

    Directory of Open Access Journals (Sweden)

    Amin Mohamed Ahsan

    2013-05-01

    Full Text Available In computer vision, object detection is an essential step for further processes such as object tracking, analysis, and so on. In the same context, extracted features play an important role in detecting the object correctly. In this paper we present a method to extract local features based on interest points, which are used to detect key points within an image; then a histogram of gradients (HOG) is computed for the region surrounding each point. The proposed method uses the speeded-up robust features (SURF) method as the interest point detector and excludes its descriptor. The new descriptor is computed using the HOG method. The proposed method thus gains the advantages of both approaches. To evaluate the proposed method, we used the well-known Caltech101 dataset. The initial results are encouraging in spite of using a small amount of data for training.

  5. SPEECH/MUSIC CLASSIFICATION USING WAVELET BASED FEATURE EXTRACTION TECHNIQUES

    Directory of Open Access Journals (Sweden)

    Thiruvengatanadhan Ramalingam

    2014-01-01

    Full Text Available Audio classification is a fundamental step in handling the rapid growth of audio data volume. Due to the increasing size of multimedia sources, speech/music classification is one of the most important issues for multimedia information retrieval. In this work a speech/music discrimination system is developed which utilizes the Discrete Wavelet Transform (DWT) as the acoustic feature. Multi-resolution analysis is a significant statistical way to extract features from the input signal, and in this study a method is deployed to model the extracted wavelet features. Support Vector Machines (SVM) are based on the principle of structural risk minimization. SVM is applied to classify audio into the classes speech and music by learning from training data. The proposed method then extends the application of Gaussian Mixture Models (GMM) to estimate the probability density function using maximum likelihood decision methods. The system shows significant results with an accuracy of 94.5%.
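    A rough sketch of DWT-based audio features with an SVM discriminator is given below, assuming a Daubechies wavelet and simple per-subband statistics as the descriptor; the audio signals are synthetic stand-ins and the GMM modelling stage from the paper is not shown.

```python
# DWT subband statistics + SVM speech/music sketch (synthetic signals).
import numpy as np
import pywt
from sklearn.svm import SVC

def dwt_features(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    feats = []
    for c in coeffs:
        feats.extend([np.mean(np.abs(c)), np.std(c), np.sum(c ** 2) / len(c)])
    return np.array(feats)

if __name__ == "__main__":
    rng = np.random.default_rng(12)
    t = np.linspace(0, 1, 8000)
    music = [np.sin(2 * np.pi * 440 * t) + 0.1 * rng.standard_normal(t.size)
             for _ in range(20)]                      # tonal, "music-like" signals
    speech = [rng.standard_normal(t.size) * np.abs(np.sin(2 * np.pi * 3 * t))
              for _ in range(20)]                     # modulated noise, "speech-like"
    X = np.array([dwt_features(s) for s in music + speech])
    y = np.array([0] * 20 + [1] * 20)
    clf = SVC(kernel="rbf").fit(X, y)
    print("training accuracy:", clf.score(X, y))
```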

  6. Efficient Identification Using a Prime-Feature-Based Technique

    DEFF Research Database (Denmark)

    Hussain, Dil Muhammad Akbar; Haq, Shaiq A.; Valente, Andrea

    2011-01-01

    Identification of authorized train drivers through biometrics is a growing area of interest in locomotive radio remote control systems. The existing technique of password authentication is not very reliable, and potentially unauthorized personnel may also operate the system on behalf of the operator.... A fingerprint identification system, implemented on PC/104 based real-time systems, can accurately identify the operator. Traditionally, the uniqueness of a fingerprint is determined by the overall pattern of ridges and valleys as well as the local ridge anomalies, e.g., a ridge bifurcation or a ridge ending... in this paper. The technique involves identifying the most prominent feature of the fingerprint and searching only for that feature in the database to expedite the search process. The proposed architecture provides an efficient matching process, and the indexing feature used for identification is unique.

  7. DWT Based Fingerprint Recognition using Non Minutiae Features

    CERN Document Server

    R., Shashi Kumar D; Chhootaray, R K; Pattanaik, Sabyasachi

    2011-01-01

    Forensic applications like criminal investigations, terrorist identification and National security issues require a strong fingerprint data base and efficient identification system. In this paper we propose DWT based Fingerprint Recognition using Non Minutiae (DWTFR) algorithm. Fingerprint image is decomposed into multi resolution sub bands of LL, LH, HL and HH by applying 3 level DWT. The Dominant local orientation angle θ and Coherence are computed on LL band only. The Centre Area Features and Edge Parameters are determined on each DWT level by considering all four sub bands. The comparison of test fingerprint with database fingerprint is decided based on the Euclidean Distance of all the features. It is observed that the values of FAR, FRR and TSR are improved compared to the existing algorithm.

  8. DWT Based Fingerprint Recognition using Non Minutiae Features

    Directory of Open Access Journals (Sweden)

    Shashi Kumar D R

    2011-03-01

    Full Text Available Forensic applications like criminal investigations, terrorist identification and National security issues require a strong fingerprint data base and efficient identification system. In this paper we propose DWT based Fingerprint Recognition using Non Minutiae (DWTFR) algorithm. Fingerprint image is decomposed into multi resolution sub bands of LL, LH, HL and HH by applying 3 level DWT. The Dominant local orientation angle θ and Coherence are computed on LL band only. The Centre Area Features and Edge Parameters are determined on each DWT level by considering all four sub bands. The comparison of test fingerprint with database fingerprint is decided based on the Euclidean Distance of all the features. It is observed that the values of FAR, FRR and TSR are improved compared to the existing algorithm.

  9. A de novo interstitial deletion of 8p11.2 including ANK1 identified in a patient with spherocytosis, psychomotor developmental delay, and distinctive facial features.

    Science.gov (United States)

    Miya, Kazushi; Shimojima, Keiko; Sugawara, Midori; Shimada, Shino; Tsuri, Hiroyuki; Harai-Tanaka, Tomomi; Nakaoka, Sachiko; Kanegane, Hirokazu; Miyawaki, Toshio; Yamamoto, Toshiyuki

    2012-09-10

    The contiguous gene syndrome involving 8p11.2 is recognized as a combined phenotype of both Kallmann syndrome and hereditary spherocytosis, because the genes responsible for these 2 clinical entities, the fibroblast growth factor receptor 1 (FGFR1) and ankyrin 1 (ANK1) genes, respectively, are located in this region within a distance of 3.2Mb. We identified a 3.7Mb deletion of 8p11.2 in a 19-month-old female patient with hereditary spherocytosis. The identified deletion included ANK1, but not FGFR1, which is consistent with the absence of any phenotype or laboratory findings of Kallmann syndrome. Compared with the previous studies, the deletion identified in this study was located on the proximal end of 8p, indicating a pure interstitial deletion of 8p11.21. This patient exhibited mild developmental delay and distinctive facial findings in addition to hereditary spherocytosis. Thus, some of the genes included in the deleted region would be related to these symptoms.

  10. Deformation-based freeform feature reconstruction in reverse engineering

    Institute of Scientific and Technical Information of China (English)

    Qing WANG; Jiang-xiong LI; Ying-lin KE

    2008-01-01

    For reconstructing a freeform feature from a point cloud, a deformation-based method is proposed in this paper. The freeform feature consists of a secondary surface and a blending surface. The secondary surface plays a role in substituting a local region of a given primary surface. The blending surface acts as a bridge to smoothly connect the unchanged region of the primary surface with the secondary surface. The secondary surface is generated by surface deformation subjected to line constraints, i.e., character lines and limiting lines, rather than designed by conventional methods. The lines are used to represent the underlying information of the freeform feature in the point cloud, where the character lines depict the feature's shape, and the limiting lines determine its location and orientation. The configuration of the character lines and the extraction of the limiting lines are discussed in detail. The blending surface is designed by the traditional modeling method, whose intrinsic parameters are recovered from the point cloud through a series of steps, namely, point cloud slicing, circle fitting, and regression analysis. The proposed method is used not only to effectively and efficiently reconstruct the freeform feature, but also to modify it by manipulating the line constraints. Typical examples are given to verify our method.

  11. SVM-based glioma grading. Optimization by feature reduction analysis

    Energy Technology Data Exchange (ETDEWEB)

    Zoellner, Frank G.; Schad, Lothar R. [University Medical Center Mannheim, Heidelberg Univ., Mannheim (Germany). Computer Assisted Clinical Medicine; Emblem, Kyrre E. [Massachusetts General Hospital, Charlestown, A.A. Martinos Center for Biomedical Imaging, Boston MA (United States). Dept. of Radiology; Harvard Medical School, Boston, MA (United States); Oslo Univ. Hospital (Norway). The Intervention Center

    2012-11-01

    We investigated the predictive power of feature reduction analysis approaches in support vector machine (SVM)-based classification of glioma grade. In 101 untreated glioma patients, three analytic approaches were evaluated to derive an optimal reduction in features; (i) Pearson's correlation coefficients (PCC), (ii) principal component analysis (PCA) and (iii) independent component analysis (ICA). Tumor grading was performed using a previously reported SVM approach including whole-tumor cerebral blood volume (CBV) histograms and patient age. Best classification accuracy was found using PCA at 85% (sensitivity = 89%, specificity = 84%) when reducing the feature vector from 101 (100-bins rCBV histogram + age) to 3 principal components. In comparison, classification accuracy by PCC was 82% (89%, 77%, 2 dimensions) and 79% by ICA (87%, 75%, 9 dimensions). For improved speed (up to 30%) and simplicity, feature reduction by all three methods provided similar classification accuracy to literature values (~87%) while reducing the number of features by up to 98%. (orig.)
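    The feature-reduction-plus-SVM pipeline can be sketched as below, where a 100-bin histogram plus age is compressed by PCA before SVM classification. All data are synthetic placeholders rather than patient data, and the scaler, kernel, and cross-validation setup are assumptions for the example.

```python
# PCA feature reduction + SVM classification sketch (synthetic stand-in data).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(13)
histograms = rng.random((101, 100))                 # stand-in normalized rCBV histograms
age = rng.uniform(20, 80, size=(101, 1))
X = np.hstack([histograms, age])                    # 101 features per patient
y = rng.integers(0, 2, size=101)                    # stand-in low- vs high-grade labels

model = make_pipeline(StandardScaler(), PCA(n_components=3), SVC(kernel="rbf"))
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```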

  12. Hybrid edge and feature-based single-image superresolution

    Science.gov (United States)

    Islam, Mohammad Moinul; Islam, Mohammed Nazrul; Asari, Vijayan K.; Karim, Mohammad A.

    2016-07-01

    A neighborhood-dependent component feature learning method for regression analysis in single-image superresolution is presented. Given a low-resolution input, the method uses a directional Fourier phase feature component to adaptively learn the regression kernel based on local covariance to estimate the high-resolution image. The unique feature of the proposed method is that it uses image features to learn about the local covariance from geometric similarity between the low-resolution image and its high-resolution counterpart. For each patch in the neighborhood, we estimate four directional variances to adapt the interpolated pixels. This gives us edge information and Fourier phase gives features, which are combined to interpolate using kernel regression. In order to compare quantitatively with other state-of-the-art techniques, root-mean-square error and measure mean-square similarity are computed for the example images, and experimental results show that the proposed algorithm outperforms similar techniques available in the literature, especially at higher resolution scales.

  13. Surgical-Allogeneic Facial Reconstruction: Facial Transplants

    OpenAIRE

    Marcelo Coelho Goiato; Daniela Micheline Dos Santos; Lisiane Cristina Bannwart; Marcela Filié Haddad; Leonardo Viana Pereira; Aljomar José Vechiato Filho

    2014-01-01

    Several factors including cancer, malformations and traumas may cause large facial mutilation. These functional and aesthetic deformities negatively affect the psychological perspectives and quality of life of the mutilated patient. Conventional treatments are prone to fail aesthetically and functionally. The recent introduction of the composite tissue allotransplantation (CTA), which uses transplanted facial tissues of healthy donors to recover the damaged or non-existent facial tissue of mu...

  14. Three-Dimensional Accuracy of Facial Scan for Facial Deformities in Clinics: A New Evaluation Method for Facial Scanner Accuracy.

    Science.gov (United States)

    Zhao, Yi-Jiao; Xiong, Yu-Xue; Wang, Yong

    2017-01-01

    In this study, the practical accuracy (PA) of optical facial scanners for facial deformity patients in oral clinic was evaluated. Ten patients with a variety of facial deformities from oral clinical were included in the study. For each patient, a three-dimensional (3D) face model was acquired, via a high-accuracy industrial "line-laser" scanner (Faro), as the reference model and two test models were obtained, via a "stereophotography" (3dMD) and a "structured light" facial scanner (FaceScan) separately. Registration based on the iterative closest point (ICP) algorithm was executed to overlap the test models to reference models, and "3D error" as a new measurement indicator calculated by reverse engineering software (Geomagic Studio) was used to evaluate the 3D global and partial (upper, middle, and lower parts of face) PA of each facial scanner. The respective 3D accuracy of stereophotography and structured light facial scanners obtained for facial deformities was 0.58±0.11 mm and 0.57±0.07 mm. The 3D accuracy of different facial partitions was inconsistent; the middle face had the best performance. Although the PA of two facial scanners was lower than their nominal accuracy (NA), they all met the requirement for oral clinic use.
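    A rough sketch of the "3D error" idea follows: once the test scan has been registered to the reference scan (ICP is assumed to have been applied already), the error can be summarised by nearest-neighbour distances from test vertices to the reference surface points. The point clouds below are synthetic and the millimetre scale is only illustrative.

```python
# Nearest-neighbour surface error after registration (synthetic point clouds).
import numpy as np
from scipy.spatial import cKDTree

def mean_surface_error(test_points, reference_points):
    tree = cKDTree(reference_points)
    dists, _ = tree.query(test_points, k=1)
    return dists.mean(), dists.max()

if __name__ == "__main__":
    rng = np.random.default_rng(10)
    reference = rng.random((5000, 3)) * 100.0                        # mm-scale reference model
    test = reference + rng.normal(scale=0.5, size=reference.shape)   # already-registered test scan
    mean_e, max_e = mean_surface_error(test, reference)
    print(f"mean error {mean_e:.2f} mm, max error {max_e:.2f} mm")
```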

  15. Three-Dimensional Accuracy of Facial Scan for Facial Deformities in Clinics: A New Evaluation Method for Facial Scanner Accuracy

    Science.gov (United States)

    Zhao, Yi-jiao; Xiong, Yu-xue; Wang, Yong

    2017-01-01

    In this study, the practical accuracy (PA) of optical facial scanners for facial deformity patients in oral clinic was evaluated. Ten patients with a variety of facial deformities from oral clinical were included in the study. For each patient, a three-dimensional (3D) face model was acquired, via a high-accuracy industrial “line-laser” scanner (Faro), as the reference model and two test models were obtained, via a “stereophotography” (3dMD) and a “structured light” facial scanner (FaceScan) separately. Registration based on the iterative closest point (ICP) algorithm was executed to overlap the test models to reference models, and “3D error” as a new measurement indicator calculated by reverse engineering software (Geomagic Studio) was used to evaluate the 3D global and partial (upper, middle, and lower parts of face) PA of each facial scanner. The respective 3D accuracy of stereophotography and structured light facial scanners obtained for facial deformities was 0.58±0.11 mm and 0.57±0.07 mm. The 3D accuracy of different facial partitions was inconsistent; the middle face had the best performance. Although the PA of two facial scanners was lower than their nominal accuracy (NA), they all met the requirement for oral clinic use. PMID:28056044

  16. [Feature extraction for breast cancer data based on geometric algebra theory and feature selection using differential evolution].

    Science.gov (United States)

    Li, Jing; Hong, Wenxue

    2014-12-01

    Feature extraction and feature selection are important issues in pattern recognition. Based on the geometric algebra representation of vectors, a new feature extraction method using the blade coefficients of geometric algebra was proposed in this study. At the same time, an improved differential evolution (DE) feature selection method was proposed to address the resulting high-dimensionality issue. Simple linear discriminant analysis was used as the classifier. The 10-fold cross-validation (10 CV) classification accuracy on the public breast cancer biomedical dataset was more than 96%, which proved superior to that of the original features and the traditional feature extraction method.

  17. Facial feature descriptor using hybrid projection entropy in multi-scale transform domain.%多尺度变换域内混合投影熵的人脸特征描述

    Institute of Scientific and Technical Information of China (English)

    黄源源; 李建平

    2011-01-01

    This paper proposes a new facial image feature description method. The method uses the Contourlet transform to obtain the low-frequency sub-band and divides the sub-band image into several appropriate non-overlapping blocks so that local image distortion affects the recognition result less. It uses a hybrid projection function and image entropy to extract the features and construct the hybrid projection feature vector. Experimental results on the ORL, Yale, and CMU PIE face databases demonstrate that the new method is competitive.

  18. Video segmentation using multiple features based on EM algorithm

    Institute of Scientific and Technical Information of China (English)

    张风超; 杨杰; 刘尔琦

    2004-01-01

    Object-based video segmentation is an important issue for many multimedia applications. A video segmentation method based on the EM algorithm is proposed. We consider video segmentation as an unsupervised classification problem and apply the EM algorithm to obtain the maximum-likelihood estimation of the Gaussian model parameters for model-based segmentation. We simultaneously combine multiple features (motion, color) within a maximum likelihood framework to obtain accurate segmentation results. We also use the temporal consistency among video frames to improve the speed of the EM algorithm. Experimental results on typical MPEG-4 sequences and real scene sequences show that our method has attractive accuracy and robustness.
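    The EM idea can be illustrated with scikit-learn's Gaussian mixture model: per-pixel feature vectors combining colour and motion are clustered by EM, and the component labels form the segmentation. The frames below are synthetic, the number of components is an assumption, and the paper's temporal-consistency speed-up is not shown.

```python
# EM (Gaussian mixture) segmentation on combined colour + motion features (synthetic frame).
import numpy as np
from sklearn.mixture import GaussianMixture

def segment_frame(color, motion, n_segments=2):
    """color: (h, w, 3) array, motion: (h, w) magnitude; returns (h, w) labels."""
    h, w, _ = color.shape
    features = np.column_stack([color.reshape(-1, 3), motion.reshape(-1, 1)])
    gmm = GaussianMixture(n_components=n_segments, covariance_type="full",
                          random_state=0)
    labels = gmm.fit_predict(features)
    return labels.reshape(h, w)

if __name__ == "__main__":
    rng = np.random.default_rng(11)
    color = rng.random((48, 64, 3))
    motion = rng.random((48, 64))
    motion[10:30, 20:40] += 2.0           # a synthetic "moving object" region
    print(np.bincount(segment_frame(color, motion).ravel()))
```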

  19. 基于拓扑知觉理论的人脸表情识别方法%Facial Expression Recognition Method Based on Topological Perception Theory

    Institute of Scientific and Technical Information of China (English)

    王晓峰; 张丽君

    2012-01-01

    In traditional computer vision, low-level tasks are widely considered to be independent, bottom-up processes, which leads to low image recognition rates. This paper proposes a facial expression recognition method based on topological perception theory. The method exploits the topological invariance of the human face to extract the facial topological contour, combines the extracted features with principal component analysis (PCA) as large-scale facial feature information, applies the global-precedence (large-range-first) principle to the facial expression recognition algorithm, and designs an RBF + Adaboost multi-layer classifier. Experimental results show that this method can improve the facial expression recognition rate.

  20. Features fusion based approach for handwritten Gujarati character recognition

    Directory of Open Access Journals (Sweden)

    Ankit Sharma

    2017-02-01

    Full Text Available Handwritten character recognition is a challenging area of research. Many research activities in the area of character recognition have already been carried out for Indian languages such as Hindi, Bangla, Kannada, Tamil and Telugu. A literature review on handwritten character recognition indicates that, in comparison with other Indian scripts, research activities on Gujarati handwritten character recognition are comparatively few. This paper aims to bring Gujarati character recognition to attention. Recognition of isolated Gujarati handwritten characters is proposed using three different kinds of features and their fusion. Chain code based, zone based and projection profile based features are utilized as individual features. One of the significant contributions of the proposed work is the generation of a large and representative dataset of 88,000 handwritten Gujarati characters. Experiments are carried out on this developed dataset. Artificial Neural Network (ANN), Support Vector Machine (SVM) and Naive Bayes (NB) classifier based methods are implemented for handwritten Gujarati character recognition. Experimental results show substantial enhancement over the state of the art and validate our proposals.
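    Two of the hand-crafted descriptors mentioned above (zone densities and projection profiles) and their fusion by concatenation can be sketched in a few lines; the chain-code feature and the ANN/SVM/NB classifiers are not reproduced, and the binary glyph below is a synthetic stand-in.

```python
# Zone-density and projection-profile features fused by concatenation (synthetic glyph).
import numpy as np

def zone_features(glyph, grid=(4, 4)):
    h, w = glyph.shape
    bh, bw = h // grid[0], w // grid[1]
    return np.array([glyph[i*bh:(i+1)*bh, j*bw:(j+1)*bw].mean()
                     for i in range(grid[0]) for j in range(grid[1])])

def projection_profiles(glyph):
    return np.concatenate([glyph.sum(axis=0) / glyph.shape[0],   # vertical profile
                           glyph.sum(axis=1) / glyph.shape[1]])  # horizontal profile

def fused_features(glyph):
    return np.concatenate([zone_features(glyph), projection_profiles(glyph)])

if __name__ == "__main__":
    rng = np.random.default_rng(14)
    glyph = (rng.random((32, 32)) > 0.6).astype(float)   # stand-in binary character image
    print(fused_features(glyph).shape)                    # (16 + 64,) = (80,)
```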

  1. In search of Leonardo: computer-based facial image analysis of Renaissance artworks for identifying Leonardo as subject

    Science.gov (United States)

    Tyler, Christopher W.; Smith, William A. P.; Stork, David G.

    2012-03-01

    One of the enduring mysteries in the history of the Renaissance is the adult appearance of the archetypical "Renaissance Man," Leonardo da Vinci. His only acknowledged self-portrait is from an advanced age, and various candidate images of younger men are difficult to assess given the absence of documentary evidence. One clue about Leonardo's appearance comes from the remark of the contemporary historian, Vasari, that the sculpture of David by Leonardo's master, Andrea del Verrocchio, was based on the appearance of Leonardo when he was an apprentice. Taking a cue from this statement, we suggest that the more mature sculpture of St. Thomas, also by Verrocchio, might also have been a portrait of Leonardo. We tested the possibility that Leonardo was the subject for Verrocchio's sculpture by a novel computational technique for the comparison of three-dimensional facial configurations. Based on quantitative measures of similarities, we also assess whether another pair of candidate two-dimensional images are plausibly attributable as being portraits of Leonardo as a young adult. Our results are consistent with the claim that Leonardo is indeed the subject in these works, but we need comparisons with images in a larger corpus of candidate artworks before our results achieve statistical significance.

  2. Facial Expression Recognition System based on Gabor filter%基于Gabor滤波器的面部表情识别系统

    Institute of Scientific and Technical Information of China (English)

    宋小双

    2016-01-01

    Facial expression recognition has potential applications in different aspects of day-to-day life that are not yet realized due to the absence of effective expression recognition techniques. With the spread of computerization, computer-based facial recognition has gradually become popular. In this paper, MATLAB is used as a development tool for the study of facial expressions. Sub-sampling and normalization are selected for preprocessing the original expression images, and the locations of the facial features are found. Gabor wavelets are then used to filter the preprocessed images, the Euclidean distances of the filtered images are computed, and finally the nearest neighbor method is used to find the closest class and identify the type of emotion corresponding to the expression image.

  3. Biosensor method and system based on feature vector extraction

    Science.gov (United States)

    Greenbaum, Elias [Knoxville, TN]; Rodriguez, Jr., Miguel; Qi, Hairong [Knoxville, TN]; Wang, Xiaoling [San Jose, CA]

    2012-04-17

    A method of biosensor-based detection of toxins comprises the steps of providing at least one time-dependent control signal generated by a biosensor in a gas or liquid medium, and obtaining a time-dependent biosensor signal from the biosensor in the gas or liquid medium to be monitored or analyzed for the presence of one or more toxins selected from chemical, biological or radiological agents. The time-dependent biosensor signal is processed to obtain a plurality of feature vectors using at least one of amplitude statistics and a time-frequency analysis. At least one parameter relating to toxicity of the gas or liquid medium is then determined from the feature vectors based on reference to the control signal.

  4. The positioning algorithm based on feature variance of billet character

    Science.gov (United States)

    Yi, Jiansong; Hong, Hanyu; Shi, Yu; Chen, Hongyang

    2015-12-01

    In recognizing steel billets on a production line, the key problem is determining the position of the billet within complex scenes. To solve this problem, this paper presents a positioning algorithm based on the feature variance of the billet characters. Using a recursive largest intra-cluster variance method combined with multilevel filtering, the billet characters are segmented completely from the complex scene. Since each steel billet carries three rows of characters, we can determine whether the connected regions that satisfy the feature-variance condition lie on a straight line, and thereby accurately locate the billet. Experimental results show that the proposed method is competitive with other methods in locating the characters and also reduces the running time, providing a better basis for subsequent character recognition.
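
    The "largest intra-cluster variance" segmentation reads like a variance-maximizing (Otsu-style) threshold applied after multilevel filtering; the sketch below pairs such a threshold with a simple collinearity test on candidate region centroids, echoing the row structure of billet characters. Both functions and their tolerances are assumptions, not the paper's implementation.

      import numpy as np

      def variance_threshold(gray):
          """Pick the 8-bit threshold that maximizes the between-class variance."""
          hist, _ = np.histogram(gray, bins=256, range=(0, 256))
          p = hist / hist.sum()
          best_t, best_var = 0, 0.0
          for t in range(1, 256):
              w0, w1 = p[:t].sum(), p[t:].sum()
              if w0 == 0 or w1 == 0:
                  continue
              m0 = (np.arange(t) * p[:t]).sum() / w0
              m1 = (np.arange(t, 256) * p[t:]).sum() / w1
              var = w0 * w1 * (m0 - m1) ** 2
              if var > best_var:
                  best_t, best_var = t, var
          return best_t

      def roughly_collinear(centroids, tol=2.0):
          """Fit a line through (x, y) centroids and accept if residuals stay within tol pixels."""
          pts = np.asarray(centroids, dtype=float)
          a, b = np.polyfit(pts[:, 0], pts[:, 1], 1)
          return np.max(np.abs(pts[:, 1] - (a * pts[:, 0] + b))) < tol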

  5. A smile can reveal your age: enabling facial dynamics in age estimation

    NARCIS (Netherlands)

    Dibeklioğlu, H.; Gevers, T.; Salah, A.A.; Valenti, R.; Babaguchi, N.; Aizawa, K.; Smith, J.

    2012-01-01

    Estimation of a person's age from the facial image has many applications, ranging from biometrics and access control to cosmetics and entertainment. Many image-based methods have been proposed for this problem. In this paper, we propose a method for the use of dynamic features in age estimation, and

  6. Anatomical considerations to prevent facial nerve injury.

    Science.gov (United States)

    Roostaeian, Jason; Rohrich, Rod J; Stuzin, James M

    2015-05-01

    Injury to the facial nerve during a face lift is a relatively rare but serious complication. A large body of literature has been dedicated toward bettering the understanding of the anatomical course of the facial nerve and the relative danger zones. Most of these prior reports, however, have focused on identifying the location of facial nerve branches based on their trajectory mostly in two dimensions and rarely in three dimensions. Unfortunately, the exact location of the facial nerve relative to palpable or visible facial landmarks is quite variable. Although the precise location of facial nerve branches is variable, its relationship to soft-tissue planes is relatively constant. The focus of this report is to improve understanding of facial soft-tissue anatomy so that safe planes of dissection during surgical undermining may be identified for each branch of the facial nerve. Certain anatomical locations more prone to injury and high-risk patient parameters are further emphasized to help minimize the risk of facial nerve injury during rhytidectomy.

  7. Collaborative Tracking of Image Features Based on Projective Invariance

    Science.gov (United States)

    Jiang, Jinwei

    -mode sensors for improving the flexibility and robustness of the system. From the experimental results during three field tests of the LASOIS system, we observed that most of the errors in the image processing algorithm are caused by incorrect feature tracking. This dissertation addresses the feature tracking problem in image sequences acquired from cameras. Despite many alternatives to the feature tracking problem, the iterative least-squares solution of the optical flow equation has been the most popular approach in the field. This dissertation attempts to leverage these former efforts to enhance feature tracking methods by introducing a view-geometric constraint to the tracking problem, which provides collaboration among features. In contrast to alternative geometry-based methods, the proposed approach provides an online solution to optical flow estimation in a collaborative fashion by exploiting Horn and Schunck flow estimation regularized by view-geometric constraints. The proposed collaborative tracker estimates the motion of a feature based on the geometry of the scene and on how the other features are moving. As an alternative to this approach, a new closed-form solution to tracking that combines image appearance with view geometry is also introduced. We particularly use invariants in the projective coordinates and conjecture that the traditional appearance-based solution can be significantly improved using view geometry. The geometric constraint is introduced by defining a new optical flow equation which exploits the scene geometry obtained from a set of tracked features. At the end of each tracking loop the quality of the tracked features is judged using both appearance similarity and geometric consistency. Our experiments demonstrate robust tracking performance even when the features are occluded or undergo appearance changes due to projective deformation of the template. The proposed collaborative tracking method is also tested in the visual navigation

  8. Sparse coding based feature representation method for remote sensing images

    Science.gov (United States)

    Oguslu, Ender

    In this dissertation, we study a sparse coding based feature representation method for the classification of multispectral and hyperspectral images (HSI). The existing feature representation systems based on the sparse signal model are computationally expensive, requiring the solution of a convex optimization problem to learn a dictionary. A sparse coding feature representation framework for the classification of HSI is presented that alleviates the complexity of sparse coding through sub-band construction, dictionary learning, and encoding steps. In the framework, we construct the dictionary from the sub-bands extracted from the spectral representation of a pixel. In the encoding step, we utilize a soft-threshold function to obtain sparse feature representations for HSI. Experimental results showed that a randomly selected dictionary could be as effective as a dictionary learned from optimization. The new representation usually has a very high dimensionality requiring a lot of computational resources. In addition, the spatial information of the HSI data has not been included in the representation. Thus, we modify the framework by incorporating the spatial information of the HSI pixels and reducing the dimension of the new sparse representations. The enhanced model, called sparse coding based dense feature representation (SC-DFR), is integrated with linear support vector machine (SVM) and composite kernels SVM (CKSVM) classifiers to discriminate different types of land cover. We evaluated the proposed algorithm on three well-known HSI datasets and compared our method to four recently developed classification methods: SVM, CKSVM, simultaneous orthogonal matching pursuit (SOMP) and image fusion and recursive filtering (IFRF). The results from the experiments showed that the proposed method can achieve better overall and average classification accuracies with a much more compact representation leading to more efficient sparse models for HSI classification. To further
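
    The soft-threshold encoding step lends itself to a very small sketch: project a pixel's sub-band vector onto the dictionary atoms and shrink the responses toward zero. The dictionary here is random (which the abstract reports can work as well as a learned one), and the shrinkage value is an assumed placeholder.

      import numpy as np

      def soft_threshold_encode(x, dictionary, lam=0.1):
          """x: (d,) sub-band vector; dictionary: (k, d) rows of unit-norm atoms."""
          responses = dictionary @ x                      # correlation with each atom
          return np.sign(responses) * np.maximum(np.abs(responses) - lam, 0.0)

      rng = np.random.default_rng(0)
      D = rng.normal(size=(128, 32))                      # random dictionary
      D /= np.linalg.norm(D, axis=1, keepdims=True)
      code = soft_threshold_encode(rng.normal(size=32), D)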

  9. Feature Based Stereo Matching Using Two-Step Expansion

    Directory of Open Access Journals (Sweden)

    Liqiang Wang

    2014-01-01

    Full Text Available This paper proposes a novel feature-based stereo matching method that produces a dense disparity map through two different expansion phases. It finds denser point correspondences than existing seed-growing algorithms and performs well in both short- and wide-baseline situations. The method assumes that, within each image segment corresponding to a 3D surface, the pixel coordinates satisfy a 1D projective transformation along the horizontal axis. First, a state-of-the-art feature matching method is used to obtain sparse support points, and an image segmentation-based prior is employed to assist the first regional expansion. Second, the first-step expansion finds more feature correspondences within each uniform region from the initial support points, based on the invariance of the cross ratio under 1D projective transformations. To obtain enough point correspondences, a regular seed-growing algorithm is used as the second-step expansion, producing a quasi-dense disparity map. Finally, two different methods are used to obtain a dense disparity map from the quasi-dense pixel correspondences. Experimental results show the effectiveness of our method.
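
    The 1D projective invariant used in the first expansion step is the cross ratio: it is preserved by any 1D projective transformation, so a candidate correspondence can be checked against three support points in each image. The sketch below is illustrative only; the tolerance and point format are assumptions.

      def cross_ratio(a, b, c, d):
          """Cross ratio of four scalar coordinates on a line."""
          return ((a - c) * (b - d)) / ((a - d) * (b - c))

      def consistent(left_support, right_support, xl, xr, tol=1e-2):
          """Accept candidate match (xl, xr) if its cross ratio with three support points agrees."""
          (l1, l2, l3), (r1, r2, r3) = left_support, right_support
          return abs(cross_ratio(l1, l2, l3, xl) - cross_ratio(r1, r2, r3, xr)) < tol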

  10. Validation of Underwater Sensor Package Using Feature Based SLAM.

    Science.gov (United States)

    Cain, Christopher; Leonessa, Alexander

    2016-03-17

    Robotic vehicles working in new, unexplored environments must be able to locate themselves while constructing a picture of the objects that could act as obstacles preventing them from completing their desired tasks. In enclosed environments, underwater range sensors based on acoustics suffer performance issues due to reflections. Additionally, their relatively high cost makes them less than ideal for use on low-cost vehicles designed for underwater operation. In this paper we propose a sensor package composed of a downward-facing camera, which is used to perform feature tracking based visual odometry, and a custom vision-based two-dimensional rangefinder that can be used on low-cost underwater unmanned vehicles. In order to examine the performance of this sensor package in a SLAM framework, experimental tests are performed using an unmanned ground vehicle and two feature based SLAM algorithms, the extended Kalman filter based approach and the Rao-Blackwellized, particle filter based approach, to validate the sensor package.

  11. Validation of Underwater Sensor Package Using Feature Based SLAM

    Directory of Open Access Journals (Sweden)

    Christopher Cain

    2016-03-01

    Full Text Available Robotic vehicles working in new, unexplored environments must be able to locate themselves while constructing a picture of the objects that could act as obstacles preventing them from completing their desired tasks. In enclosed environments, underwater range sensors based on acoustics suffer performance issues due to reflections. Additionally, their relatively high cost makes them less than ideal for use on low-cost vehicles designed for underwater operation. In this paper we propose a sensor package composed of a downward-facing camera, which is used to perform feature tracking based visual odometry, and a custom vision-based two-dimensional rangefinder that can be used on low-cost underwater unmanned vehicles. In order to examine the performance of this sensor package in a SLAM framework, experimental tests are performed using an unmanned ground vehicle and two feature based SLAM algorithms, the extended Kalman filter based approach and the Rao-Blackwellized, particle filter based approach, to validate the sensor package.

  12. Intratemporal Hemangiomas Involving the Facial Nerve

    Science.gov (United States)

    Bhatia, Sanjaya; Karmarkar, Sandeep; Calabrese, V.; Landolfi, Mauro; Taibah, Abdelkader; Russo, Alessandra; Mazzoni, Antonio; Sanna, Mario

    1995-01-01

    Intratemporal vascular tumors involving the facial nerve are rare benign lesions. Because of their variable clinical features, they are often misdiagnosed preoperatively. This study presents a series of 21 patients with such lesions managed from 1977 to 1994. Facial nerve dysfunction was the most common complaint, present in 60% of the cases, followed by hearing loss, present in 40% of cases. High-resolution computed tomography, magnetic resonance imaging with gadolinium, and a high index of clinical suspicion are required for preoperative diagnosis of these lesions. Early surgical resection of these tumors permits acceptable return of facial nerve function in many patients. PMID:17170963

  13. Feature-Based versus Category-Based Induction with Uncertain Categories

    Science.gov (United States)

    Griffiths, Oren; Hayes, Brett K.; Newell, Ben R.

    2012-01-01

    Previous research has suggested that when feature inferences have to be made about an instance whose category membership is uncertain, feature-based inductive reasoning is used to the exclusion of category-based induction. These results contrast with the observation that people can and do use category-based induction when category membership is…

  14. Nonlinear feature identification of impedance-based structural health monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Rutherford, A. C. (Amanda C.); Park, G. H. (Gyu Hae); Sohn, H. (Hoon); Farrar, C. R. (Charles R.)

    2004-01-01

    The impedance-based structural health monitoring technique, which utilizes electromechanical coupling properties of piezoelectric materials, has shown feasibility for use in a variety of structural health monitoring applications. Relying on high-frequency local excitations (typically > 30 kHz), this technique is very sensitive to minor changes in structural integrity in the near field of piezoelectric sensors. Several damage-sensitive features have been identified and used in conjunction with impedance methods. Most of these methods, however, are limited by the assumption that the structure behaves linearly. This paper presents the use of experimentally identified nonlinear features, combined with impedance methods, for structural health monitoring. Their applicability to damage detection in various frequency ranges is demonstrated using actual impedance signals measured from a portal frame structure. The performance of the nonlinear feature is compared with that of conventional impedance methods. This paper reinforces the utility of nonlinear features in structural health monitoring and suggests that their varying sensitivity in different frequency ranges may be leveraged for certain applications.

  15. Estimating stellar atmospheric parameters based on Lasso features

    Science.gov (United States)

    Liu, Chuan-Xing; Zhang, Pei-Ai; Lu, Yu

    2014-04-01

    With the rapid development of large scale sky surveys like the Sloan Digital Sky Survey (SDSS), GAIA and LAMOST (Guoshoujing telescope), stellar spectra can be obtained on an ever-increasing scale. Therefore, it is necessary to estimate stellar atmospheric parameters such as Teff, log g and [Fe/H] automatically to achieve the scientific goals and make full use of the potential value of these observations. Feature selection plays a key role in the automatic measurement of atmospheric parameters. We propose to use the least absolute shrinkage and selection operator (Lasso) algorithm to select features from stellar spectra. Feature selection can reduce redundancy in spectra, alleviate the influence of noise, improve calculation speed and enhance the robustness of the estimation system. Based on the extracted features, stellar atmospheric parameters are estimated by the support vector regression model. Three typical schemes are evaluated on spectral data from both the ELODIE library and SDSS. Experimental results demonstrate the potential of the approach. In addition, results show that our method is stable when applied to different spectra.
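
    A hedged sketch of the two-stage scheme in scikit-learn terms: Lasso selects informative spectral fluxes, then support vector regression predicts a parameter such as Teff from the selected pixels. The hyperparameters, array shapes and function names are assumptions, not the paper's settings.

      import numpy as np
      from sklearn.linear_model import Lasso
      from sklearn.svm import SVR

      def fit_parameter_estimator(spectra, teff, alpha=0.001):
          """spectra: (n_stars, n_pixels) flux matrix; teff: (n_stars,) parameter labels."""
          lasso = Lasso(alpha=alpha, max_iter=10000).fit(spectra, teff)
          selected = np.flatnonzero(lasso.coef_)          # retained wavelength pixels
          svr = SVR(kernel='rbf', C=10.0).fit(spectra[:, selected], teff)
          return selected, svr

      def predict_parameter(selected, svr, new_spectra):
          return svr.predict(new_spectra[:, selected])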

  16. Voronoi-Based Curvature and Feature Estimation from Point Clouds.

    Science.gov (United States)

    Mérigot, Quentin; Ovsjanikov, Maks; Guibas, Leonidas

    2011-06-01

    We present an efficient and robust method for extracting curvature information, sharp features, and normal directions of a piecewise smooth surface from its point cloud sampling in a unified framework. Our method is integral in nature and uses convolved covariance matrices of Voronoi cells of the point cloud which makes it provably robust in the presence of noise. We show that these matrices contain information related to curvature in the smooth parts of the surface, and information about the directions and angles of sharp edges around the features of a piecewise-smooth surface. Our method is applicable in both two and three dimensions, and can be easily parallelized, making it possible to process arbitrarily large point clouds, which was a challenge for Voronoi-based methods. In addition, we describe a Monte-Carlo version of our method, which is applicable in any dimension. We illustrate the correctness of both principal curvature information and feature extraction in the presence of varying levels of noise and sampling density on a variety of models. As a sample application, we use our feature detection method to segment point cloud samplings of piecewise-smooth surfaces.

  17. Iris-based medical analysis by geometric deformation features.

    Science.gov (United States)

    Ma, Lin; Zhang, D; Li, Naimin; Cai, Yan; Zuo, Wangmeng; Wang, Kuanguan

    2013-01-01

    Iris analysis studies the relationship between human health and changes in the anatomy of the iris. Apart from the fact that iris recognition focuses on modeling the overall structure of the iris, iris diagnosis emphasizes the detecting and analyzing of local variations in the characteristics of irises. This paper focuses on studying the geometrical structure changes in irises that are caused by gastrointestinal diseases, and on measuring the observable deformations in the geometrical structures of irises that are related to roundness, diameter and other geometric forms of the pupil and the collarette. Pupil and collarette based features are defined and extracted. A series of experiments are implemented on our experimental pathological iris database, including manual clustering of both normal and pathological iris images, manual classification by non-specialists, manual classification by individuals with a medical background, classification ability verification for the proposed features, and disease recognition by applying the proposed features. The results prove the effectiveness and clinical diagnostic significance of the proposed features and a reliable recognition performance for automatic disease diagnosis. Our research results offer a novel systematic perspective for iridology studies and promote the progress of both theoretical and practical work in iris diagnosis.
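
    A minimal sketch of geometric deformation features of the kind described (roundness and diameter of the pupil or collarette), computed from a closed boundary contour. The contour format (an N x 2 array of x, y points) and the particular formulas are assumptions for illustration.

      import numpy as np

      def geometric_features(contour):
          """Roundness, equivalent-circle diameter and area of a closed contour."""
          pts = np.asarray(contour, dtype=float)
          seg = np.diff(np.vstack([pts, pts[:1]]), axis=0)
          perimeter = np.sum(np.hypot(seg[:, 0], seg[:, 1]))
          x, y = pts[:, 0], pts[:, 1]
          area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))  # shoelace formula
          roundness = 4.0 * np.pi * area / perimeter**2   # 1.0 for a perfect circle
          diameter = 2.0 * np.sqrt(area / np.pi)
          return {"roundness": roundness, "diameter": diameter, "area": area}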

  18. Dogs Evaluate Threatening Facial Expressions by Their Biological Validity--Evidence from Gazing Patterns.

    Directory of Open Access Journals (Sweden)

    Sanni Somppi

    Full Text Available Appropriate response to companions' emotional signals is important for all social creatures. The emotional expressions of humans and non-human animals have analogies in their form and function, suggesting shared evolutionary roots, but very little is known about how animals other than primates view and process facial expressions. In primates, threat-related facial expressions evoke exceptional viewing patterns compared with neutral or positive stimuli. Here, we explore if domestic dogs (Canis familiaris) have such an attentional bias toward threatening social stimuli and whether observed emotional expressions affect dogs' gaze fixation distribution among the facial features (eyes, midface and mouth). We recorded the voluntary eye gaze of 31 domestic dogs during viewing of facial photographs of humans and dogs with three emotional expressions (threatening, pleasant and neutral). We found that dogs' gaze fixations spread systematically among facial features. The distribution of fixations was altered by the seen expression, but eyes were the most probable targets of the first fixations and gathered longer looking durations than mouth regardless of the viewed expression. The examination of the inner facial features as a whole revealed more pronounced scanning differences among expressions. This suggests that dogs do not base their perception of facial expressions on the viewing of single structures, but the interpretation of the composition formed by eyes, midface and mouth. Dogs evaluated social threat rapidly and this evaluation led to attentional bias, which was dependent on the depicted species: threatening conspecifics' faces evoked heightened attention but threatening human faces instead an avoidance response. We propose that threatening signals carrying differential biological validity are processed via distinctive neurocognitive pathways. Both of these mechanisms may have an adaptive significance for domestic dogs. The findings provide a novel

  19. Dogs Evaluate Threatening Facial Expressions by Their Biological Validity--Evidence from Gazing Patterns.

    Science.gov (United States)

    Somppi, Sanni; Törnqvist, Heini; Kujala, Miiamaaria V; Hänninen, Laura; Krause, Christina M; Vainio, Outi

    2016-01-01

    Appropriate response to companions' emotional signals is important for all social creatures. The emotional expressions of humans and non-human animals have analogies in their form and function, suggesting shared evolutionary roots, but very little is known about how animals other than primates view and process facial expressions. In primates, threat-related facial expressions evoke exceptional viewing patterns compared with neutral or positive stimuli. Here, we explore if domestic dogs (Canis familiaris) have such an attentional bias toward threatening social stimuli and whether observed emotional expressions affect dogs' gaze fixation distribution among the facial features (eyes, midface and mouth). We recorded the voluntary eye gaze of 31 domestic dogs during viewing of facial photographs of humans and dogs with three emotional expressions (threatening, pleasant and neutral). We found that dogs' gaze fixations spread systematically among facial features. The distribution of fixations was altered by the seen expression, but eyes were the most probable targets of the first fixations and gathered longer looking durations than mouth regardless of the viewed expression. The examination of the inner facial features as a whole revealed more pronounced scanning differences among expressions. This suggests that dogs do not base their perception of facial expressions on the viewing of single structures, but the interpretation of the composition formed by eyes, midface and mouth. Dogs evaluated social threat rapidly and this evaluation led to attentional bias, which was dependent on the depicted species: threatening conspecifics' faces evoked heightened attention but threatening human faces instead an avoidance response. We propose that threatening signals carrying differential biological validity are processed via distinctive neurocognitive pathways. Both of these mechanisms may have an adaptive significance for domestic dogs. The findings provide a novel perspective on

  20. Multiresolution image fusion scheme based on fuzzy region feature

    Institute of Scientific and Technical Information of China (English)

    LIU Gang; JING Zhong-liang; SUN Shao-yuan

    2006-01-01

    This paper proposes a novel region-based image fusion scheme built on multiresolution analysis. The low-frequency band of the image's multiresolution representation is segmented into important regions, sub-important regions and background regions. Features of each region are used to determine the region's degree of membership in the multiresolution representation, and thus to construct the multiresolution representation of the fusion result. The final fused image is obtained by applying the inverse multiresolution transform. Experiments showed that the proposed method performs better than existing image fusion methods.

  1. Cephalometric study of the correlation of cranial base anatomy with the facial pattern and apical bases

    Directory of Open Access Journals (Sweden)

    Marcelo Calvo de Araújo

    2008-08-01

    Full Text Available AIM: This cross-sectional study was conducted to make a cephalometric evaluation of the correlation of cranial base anatomy with the facial pattern and apical bases. METHODS: 88 lateral teleradiographs of young white Brazilians with a mean age of 10.3 years were used. The Ricketts VERT index was used to determine the facial pattern, and the sample was distributed as follows: 37 in group M (mesofacial), 34 in group D (dolichofacial) and 17 in group B (brachyfacial). The anatomic drawing, demarcation of points, line and plane tracings, and linear and angular measurements were done manually. The cranial base measurements used were S-N, N.S.Ba and S-N.Po-Or, and the apical base measurements were S.N.A, S.N.B and A.N.B. RESULTS AND CONCLUSION: It was concluded that, in the correlation between the cranial base and the facial pattern, there was a significant association between the N.S.Ba variable and the VERT index. In the correlation between the cranial base and the apical bases, there were significant associations between N.S.Ba and the variables S.N.A and S.N.B, and between S-N.Po-Or and the variables S.N.A and S.N.B.

  2. A voxel-based morphometry study of gray matter correlates of facial emotion recognition in bipolar disorder.

    Science.gov (United States)

    Neves, Maila de Castro L; Albuquerque, Maicon Rodrigues; Malloy-Diniz, Leandro; Nicolato, Rodrigo; Silva Neves, Fernando; de Souza-Duran, Fábio Luis; Busatto, Geraldo; Corrêa, Humberto

    2015-08-30

    Impaired facial emotion recognition (FER) is one of the many cognitive deficits reported in bipolar disorder (BD) patients. The aim of this study was to investigate neuroanatomical correlates of FER impairments in BD type I (BD-I). Participants comprised 21 euthymic BD-I patients without Axis I DSM IV-TR comorbidities and 21 healthy controls who were assessed using magnetic resonance imaging and the Penn Emotion Recognition Test (ER40). Preprocessing of images used DARTEL (diffeomorphic anatomical registration through exponentiated Lie algebra) for optimized voxel-based morphometry in SPM8. Compared with healthy subjects, BD-I patients performed poorly on the ER40 and had reduced gray matter volume (GMV) in the left orbitofrontal cortex, superior portion of the temporal pole and insula. In the BD-I group, the statistical maps indicated a direct correlation between FER on the ER40 and right middle cingulate gyrus GMV. Our findings are consistent with previous studies regarding the overlap of multiple brain networks of social cognition and BD neurobiology, particularly components of the anterior-limbic neural network.

  3. Freestyle Local Perforator Flaps for Facial Reconstruction

    OpenAIRE

    Jun Yong Lee; Ji Min Kim; Ho Kwon; Sung-No Jung; Hyung Sup Shim; Sang Wha Kim

    2015-01-01

    For the successful reconstruction of facial defects, various perforator flaps have been used in single-stage surgery, where tissues are moved to adjacent defect sites. Our group successfully performed perforator flap surgery on 17 patients with small to moderate facial defects that affected the functional and aesthetic features of their faces. Of four complicated cases, three developed venous congestion, which resolved in the subacute postoperative period, and one patient with partial necrosi...

  4. Facial age affects emotional expression decoding

    OpenAIRE

    2014-01-01

    Facial expressions convey important information on emotional states of our interaction partners. However, in interactions between younger and older adults, there is evidence for a reduced ability to accurately decode emotional facial expressions. Previous studies have often followed up this phenomenon by examining the effect of the observers' age. However, decoding emotional faces is also likely to be influenced by stimulus features, and age-related changes in the face such as wrinkles and fo...

  5. Facial age affects emotional expression decoding

    OpenAIRE

    2014-01-01

    Facial expressions convey important information on emotional states of our interaction partners. However, in interactions between younger and older adults, there is evidence for a reduced ability to accurately decode emotional facial expressions. Previous studies have often followed up this phenomenon by examining the effect of the observers’ age. However, decoding emotional faces is also likely to be influenced by stimulus features, and age-related changes in the face such as wrinkles and fol...

  6. Facial paralysis in children.

    Science.gov (United States)

    Reddy, Sashank; Redett, Richard

    2015-04-01

    Facial paralysis can have devastating physical and psychosocial consequences. These are particularly severe in children in whom loss of emotional expressiveness can impair social development and integration. The etiologies of facial paralysis, prospects for spontaneous recovery, and functions requiring restoration differ in children as compared with adults. Here we review contemporary management of facial paralysis with a focus on special considerations for pediatric patients.

  7. Data Clustering Analysis Based on Wavelet Feature Extraction

    Institute of Scientific and Technical Information of China (English)

    QIAN Yuntao; TANG Yuanyan

    2003-01-01

    A novel wavelet-based data clustering method is presented in this paper, which includes wavelet feature extraction and a cluster growing algorithm. The wavelet transform can provide rich and diversified information for representing the global and local inherent structures of a dataset; it is therefore a very powerful tool for clustering feature extraction. As an unsupervised classification, the target of clustering analysis depends on the specific clustering criteria. Several criteria that should be considered for a general-purpose clustering algorithm are proposed, and a cluster growing algorithm is constructed to connect the clustering criteria with the wavelet features. Compared with other popular clustering methods, our approach provides multi-resolution clustering results, needs few prior parameters, correctly deals with irregularly shaped clusters, and is insensitive to noise and outliers. As this wavelet-based clustering method is aimed at solving two-dimensional data clustering problems, for high-dimensional datasets the self-organizing map and U-matrix method are applied to transform them into two-dimensional Euclidean space, so that high-dimensional data clustering analysis can also be performed. Results on some simulated data and standard test data are reported to illustrate the power of our method.

  8. Global Enhancement but Local Suppression in Feature-based Attention.

    Science.gov (United States)

    Forschack, Norman; Andersen, Søren K; Müller, Matthias M

    2017-04-01

    A key property of feature-based attention is global facilitation of the attended feature throughout the visual field. Previously, we presented superimposed red and blue randomly moving dot kinematograms (RDKs) flickering at a different frequency each to elicit frequency-specific steady-state visual evoked potentials (SSVEPs) that allowed us to analyze neural dynamics in early visual cortex when participants shifted attention to one of the two colors. Results showed amplification of the attended and suppression of the unattended color as measured by SSVEP amplitudes. Here, we tested whether the suppression of the unattended color also operates globally. To this end, we presented superimposed flickering red and blue RDKs in the center of a screen and a red and blue RDK in the left and right periphery, respectively, also flickering at different frequencies. Participants shifted attention to one color of the superimposed RDKs in the center to discriminate coherent motion events in the attended from the unattended color RDK, whereas the peripheral RDKs were task irrelevant. SSVEP amplitudes elicited by the centrally presented RDKs confirmed the previous findings of amplification and suppression. For peripherally located RDKs, we found the expected SSVEP amplitude increase, relative to precue baseline when color matched the one of the centrally attended RDK. We found no reduction in SSVEP amplitude relative to precue baseline, when the peripheral color matched the unattended one of the central RDK, indicating that, while facilitation in feature-based attention operates globally, suppression seems to be linked to the location of focused attention.

  9. Face Recognition Algorithms Based on Transformed Shape Features

    Directory of Open Access Journals (Sweden)

    Sambhunath Biswas

    2012-05-01

    Full Text Available Human face recognition is a challenging task, especially under illumination and pose variations. In this paper we examine the effectiveness of two simple algorithms, based on coiflet packet and Radon transforms, for recognizing human faces from databases of still gray-level images under varying illumination and pose. Both algorithms convert 2-D gray-level training face images into their respective depth maps, or physical shape, which are subsequently transformed by the coiflet packet and Radon transforms to compute energies used as features. Experiments show that such transformed shape features are robust to illumination and pose variations. With the features extracted, training classes are optimally separated through linear discriminant analysis (LDA), while test face images are classified with a k-NN classifier based on the L1 norm and Mahalanobis distance measures. The proposed algorithms are then tested on face images that differ in illumination, expression or pose, obtained from three databases, namely the ORL, Yale and Essex-Grimace databases. The results are compared with two different existing algorithms, and performance using Daubechies wavelets is also examined. The proposed coiflet packet and Radon transform based algorithms perform well, especially under different illumination conditions and pose variation, and the comparison shows they are superior.
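
    A hedged sketch of the Radon-transform branch: projection energies as features, LDA to separate the training classes, and a k-NN classifier using the L1 (Manhattan) distance. The angle set, the choice of k and the helper names are assumptions rather than the paper's exact configuration.

      import numpy as np
      from skimage.transform import radon
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.neighbors import KNeighborsClassifier

      def radon_energy_features(img, n_angles=18):
          theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
          sinogram = radon(img.astype(float), theta=theta, circle=False)
          return (sinogram ** 2).sum(axis=0)              # energy per projection angle

      def train(face_images, labels):
          X = np.array([radon_energy_features(im) for im in face_images])
          lda = LinearDiscriminantAnalysis().fit(X, labels)
          knn = KNeighborsClassifier(n_neighbors=1, metric='manhattan')
          knn.fit(lda.transform(X), labels)
          return lda, knn

      def classify(lda, knn, img):
          return knn.predict(lda.transform(radon_energy_features(img)[None, :]))[0]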

  10. Motion feature extraction scheme for content-based video retrieval

    Science.gov (United States)

    Wu, Chuan; He, Yuwen; Zhao, Li; Zhong, Yuzhuo

    2001-12-01

    This paper proposes a scheme for extracting global motion and object trajectories in a video shot for content-based video retrieval. Motion is the key feature representing the temporal information of videos, and it is more objective and consistent than other features such as color and texture. Efficient motion feature extraction is therefore an important step for content-based video retrieval. Some approaches have been proposed to extract camera motion and motion activity in video sequences, but when dealing with object tracking, algorithms typically assume that the object region in the frames is already known. In this paper, a complete picture of the motion information in a video shot is obtained by automatically analyzing the motion of the background and the foreground separately. A six-parameter affine model is used for the background motion, and a fast and robust global motion estimation algorithm is developed to estimate its parameters. The object region is obtained by global motion compensation between two consecutive frames. The center of the object region is then calculated and tracked to get the object motion trajectory in the video sequence. Global motion and object trajectory are described with MPEG-7 parametric motion and motion trajectory descriptors, and valid similarity measures are defined for the two descriptors. Experimental results indicate that our proposed scheme is reliable and efficient.
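
    A hedged sketch of fitting the six-parameter affine background-motion model to point correspondences by least squares; the robust estimation and the full compensation procedure of the paper are omitted, and the input format is an assumption.

      import numpy as np

      def estimate_affine(prev_pts, next_pts):
          """prev_pts, next_pts: (N, 2) matched (x, y) positions.
          Returns A (2x2) and b (2,) with [x', y'] = A @ [x, y] + b."""
          p = np.asarray(prev_pts, dtype=float)
          q = np.asarray(next_pts, dtype=float)
          M = np.hstack([p, np.ones((len(p), 1))])        # (N, 3) design matrix
          params, *_ = np.linalg.lstsq(M, q, rcond=None)  # (3, 2) solution
          return params[:2].T, params[2]

      def predicted_background(prev_pts, A, b):
          """Background positions predicted by the global model; large residuals against the
          actual next frame indicate foreground (object) pixels."""
          return np.asarray(prev_pts, dtype=float) @ A.T + b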

  11. Advances in face detection and facial image analysis

    CERN Document Server

    Celebi, M; Smolka, Bogdan

    2016-01-01

    This book presents the state-of-the-art in face detection and analysis. It outlines new research directions, including in particular psychology-based facial dynamics recognition, aimed at various applications such as behavior analysis, deception detection, and diagnosis of various psychological disorders. Topics of interest include face and facial landmark detection, face recognition, facial expression and emotion analysis, facial dynamics analysis, face classification, identification, and clustering, and gaze direction and head pose estimation, as well as applications of face analysis.

  12. Surgical-allogeneic facial reconstruction: facial transplants.

    Directory of Open Access Journals (Sweden)

    Marcelo Coelho Goiato

    2014-12-01

    Full Text Available Several factors, including cancer, malformations and trauma, may cause extensive facial mutilation. These functional and aesthetic deformities negatively affect the psychological well-being and quality of life of the mutilated patient. Conventional treatments are prone to fail aesthetically and functionally. The recent introduction of composite tissue allotransplantation (CTA), which uses facial tissues transplanted from healthy donors to restore the damaged or missing facial tissue of mutilated patients, has produced better clinical results. Therefore, the present study aims to conduct a literature review on the relevance and effectiveness of facial transplants in mutilated subjects. It was observed that facial transplants restored both the aesthetics and function of these patients and consequently improved their quality of life.

  13. [Rehabilitation of facial paralysis].

    Science.gov (United States)

    Martin, F

    2015-10-01

    Rehabilitation takes an important part in the treatment of facial paralysis, especially when these are severe. It aims to lead the recovery of motor activity and prevent or reduce sequelae like synkinesis or spasms. It is preferable that it be proposed early in order to set up a treatment plan based on the results of the assessment, sometimes coupled with an electromyography. In case of surgery, preoperative work is recommended, especially in case of hypoglossofacial anastomosis or lengthening temporalis myoplasty (LTM). Our proposal is to present an original technique to enhance the sensorimotor loop and the cortical control of movement, especially when using botulinum toxin and after surgery.

  14. Modification of evidence theory based on feature extraction

    Institute of Scientific and Technical Information of China (English)

    DU Feng; SHI Wen-kang; DENG Yong

    2005-01-01

    Although evidence theory has been widely used in information fusion owing to its effectiveness in uncertainty reasoning, classical Dempster-Shafer (DS) evidence theory produces counter-intuitive results when highly conflicting information exists. Many modification methods have been developed, which fall into two categories: modifying the combination rule or modifying the evidence sources. To make such modifications more reasonable and more effective, this paper first analyzes some typical existing modification methods in detail, and then extracts the intrinsic features of the evidence sources using evidence distance theory. Based on the extracted features, two modified schemes, corresponding to the two modification ideas, are proposed. Numerical examples demonstrate the good performance of both schemes when combining evidence sources with highly conflicting information.
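
    A small sketch of classical Dempster-Shafer combination over a toy frame of discernment, showing the conflict term whose handling motivates the modifications discussed above. Mass functions are represented as dicts from frozensets to masses; this illustrates the standard rule, not the paper's modified schemes.

      def dempster_combine(m1, m2):
          """Combine two mass functions; returns the combined masses and the conflict."""
          combined, conflict = {}, 0.0
          for A, wa in m1.items():
              for B, wb in m2.items():
                  inter = A & B
                  if inter:
                      combined[inter] = combined.get(inter, 0.0) + wa * wb
                  else:
                      conflict += wa * wb               # mass falling on the empty set
          if conflict >= 1.0:
              raise ValueError("total conflict: sources cannot be combined")
          return {A: w / (1.0 - conflict) for A, w in combined.items()}, conflict

      # Highly conflicting sources expose the counter-intuitive behavior noted above:
      m1 = {frozenset({'a'}): 0.9, frozenset({'b'}): 0.1}
      m2 = {frozenset({'c'}): 0.9, frozenset({'b'}): 0.1}
      print(dempster_combine(m1, m2))                     # nearly all belief ends up on 'b'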

  15. Clinical gait data analysis based on Spatio-Temporal features

    CERN Document Server

    Katiyar, Rohit

    2010-01-01

    Analysing human gait has attracted considerable interest in recent computer vision research. So far, however, contributions to this topic have dealt exclusively with the tasks of person identification or activity recognition. In this paper, we consider a different application for gait analysis and examine its use as a means of deducing the physical well-being of people. The proposed method is based on transforming the joint motion trajectories using wavelets to extract spatio-temporal features, which are then fed to a vector quantiser, a self-organising map, to classify the walking patterns of individuals with and without pathology. We show that the proposed algorithm extracts features that successfully discriminate between individuals with and without locomotion impairment.

  16. Features-Based Deisotoping Method for Tandem Mass Spectra

    Directory of Open Access Journals (Sweden)

    Zheng Yuan

    2011-01-01

    Full Text Available For high-resolution tandem mass spectra, the determination of monoisotopic masses of fragment ions plays a key role in the subsequent peptide and protein identification. In this paper, we present a new algorithm for deisotoping the bottom-up spectra. Isotopic-cluster graphs are constructed to describe the relationship between all possible isotopic clusters. Based on the relationship in isotopic-cluster graphs, each possible isotopic cluster is assessed with a score function, which is built by combining nonintensity and intensity features of fragment ions. The non-intensity features are used to prevent fragment ions with low intensity from being removed. Dynamic programming is adopted to find the highest score path with the most reliable isotopic clusters. The experimental results have shown that the average Mascot scores and F-scores of identified peptides from spectra processed by our deisotoping method are greater than those by YADA and MS-Deconv software.

  17. Semantic Feature Based Arabic Opinion Mining Using Ontology

    Directory of Open Access Journals (Sweden)

    Abdullah M. Alkadri

    2016-05-01

    Full Text Available With the increase of opinionated reviews on the web, automatically analyzing and extracting knowledge from those reviews has become very important, since doing so manually is a challenging task. Opinion mining is a text mining discipline that automatically performs such a task. Most research in this field has focused on English texts, with very little work on the Arabic language; this scarcity stems from the many challenges Arabic presents. The aim of this paper is to develop a novel semantic feature-based opinion mining framework for Arabic reviews. The framework utilizes the semantics of ontologies and lexicons to identify opinion features and their polarity. Experiments showed that the proposed framework achieved a good level of performance when compared against manually collected test data.

  18. Comparative Study on the Structure of Set Phrases Containing Facial Feature Words in Chinese and Russian

    Institute of Scientific and Technical Information of China (English)

    祁国江

    2015-01-01

    Studying the characteristics of set phrases is one of the most important tasks of modern lexicology and phraseology. Through classification and the analysis of examples, the structural similarities and differences of set phrases containing facial-feature words in Russian and Chinese are compared. Such set phrases in both languages generally consist of two or more components, but Russian has accompanying words while Chinese does not; Russian phrases show more variants, whereas Chinese ones show fewer; and Russian allows more optional components, while Chinese phrases are generally fixed. The research has theoretical value in that it can enrich and complement Chinese and Russian lexicology, provide guidance for Russian newspaper-reading instruction and translation practice, serve as a reference for dictionary compilation, and be applied to the teaching of Russian vocabulary.

  19. Human facial neural activities and gesture recognition for machine-interfacing applications.

    Science.gov (United States)

    Hamedi, M; Salleh, Sh-Hussain; Tan, T S; Ismail, K; Ali, J; Dee-Uam, C; Pavaganun, C; Yupapin, P P

    2011-01-01

    The authors present a new method of recognizing different human facial gestures through their neural activities and muscle movements, which can be used in machine-interfacing applications. Human-machine interface (HMI) technology uses human neural activities as input controllers for the machine. Recently, much work has been done on the specific application of facial electromyography (EMG)-based HMI, which has used limited and fixed numbers of facial gestures. In this work, a multipurpose interface is suggested that can support 2-11 control commands applicable to various HMI systems. The significance of this work is finding the most accurate facial gestures for any application with a maximum of eleven control commands. EMG signals for eleven facial gestures are recorded from ten volunteers. The detected EMG signals are passed through a band-pass filter and root mean square features are extracted. Combinations of gestures, with different numbers of gestures per group, are formed from the recorded facial gestures. All combinations are then trained and classified by a fuzzy c-means classifier, and the combination with the highest recognition accuracy in each group is chosen. An average accuracy above 90% for the chosen combinations demonstrates their suitability as command controllers.
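
    A hedged sketch of the processing chain: band-pass filter a windowed facial EMG recording, take a root-mean-square feature per channel, and group the feature vectors with a minimal fuzzy c-means loop. The filter band, window handling and fuzzifier are assumptions, and the tiny c-means implementation below stands in for the classifier used in the study.

      import numpy as np
      from scipy.signal import butter, filtfilt

      def rms_features(emg, fs=1000.0, low=20.0, high=450.0):
          """emg: (n_channels, n_samples) window. Returns one RMS value per channel."""
          b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype='bandpass')
          filtered = filtfilt(b, a, emg, axis=1)
          return np.sqrt(np.mean(filtered ** 2, axis=1))

      def fuzzy_cmeans(X, c=3, m=2.0, iters=100, seed=0):
          """X: (n_samples, n_features). Returns cluster centers and the membership matrix."""
          rng = np.random.default_rng(seed)
          U = rng.random((len(X), c))
          U /= U.sum(axis=1, keepdims=True)
          for _ in range(iters):
              W = U ** m
              centers = (W.T @ X) / W.sum(axis=0)[:, None]
              d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
              U = 1.0 / d ** (2.0 / (m - 1.0))
              U /= U.sum(axis=1, keepdims=True)
          return centers, U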

  20. Peripheral facial weakness (Bell's palsy).

    Science.gov (United States)

    Basić-Kes, Vanja; Dobrota, Vesna Dermanović; Cesarik, Marijan; Matovina, Lucija Zadro; Madzar, Zrinko; Zavoreo, Iris; Demarin, Vida

    2013-06-01

    Peripheral facial weakness is facial nerve damage that results in muscle weakness on one side of the face. It may be idiopathic (Bell's palsy) or may have a detectable cause. Almost 80% of peripheral facial weakness cases are primary; the rest are secondary. The most frequent causes of secondary peripheral facial weakness are systemic viral infections, trauma, surgery, diabetes, local infections, tumor, immune disorders, drugs, and degenerative diseases of the central nervous system. The diagnosis relies upon the presence of typical signs and symptoms, blood chemistry tests, cerebrospinal fluid investigations, nerve conduction studies and neuroimaging methods (cerebral MRI, x-ray of the skull and mastoid). Treatment of secondary peripheral facial weakness is based on therapy for the underlying disorder, unlike the treatment of Bell's palsy, which remains controversial due to the lack of large, randomized, controlled, prospective studies. There are some indications that steroids or antiviral agents are beneficial, but there are also studies that show no beneficial effect. Additional treatments include eye protection, physiotherapy, acupuncture, botulinum toxin, or surgery. Bell's palsy has a benign prognosis, with complete recovery in about 80% of patients; 15% experience some degree of permanent nerve damage, and severe consequences remain in 5% of patients.