WorldWideScience

Sample records for based facial feature

  1. Facial symmetry assessment based on geometric features

    Science.gov (United States)

    Xu, Guoping; Cao, Hanqiang

    2015-12-01

    Face image symmetry is an important factor affecting the accuracy of automatic face recognition. Selecting highly symmetrical face images can improve recognition performance. In this paper, we propose a novel facial symmetry evaluation scheme based on geometric features, including the centroid, singular values, the in-plane rotation angle of the face, and the structural similarity index (SSIM). First, we calculate the value of each of the four features according to its corresponding formula. Then, we use a fuzzy logic algorithm to integrate the four feature values into a single number that represents the facial symmetry. The proposed method is efficient and can adapt to different recognition methods. Experimental results demonstrate its effectiveness in improving the robustness of face detection and recognition.
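
    The half-face comparison at the heart of such a symmetry score can be sketched as follows. This is an illustrative sketch only, assuming a single-window SSIM over grayscale arrays; the function names are invented for the example and this is not the authors' implementation, which additionally combines centroid, singular-value, and rotation-angle cues via fuzzy logic.

```python
import numpy as np

def ssim_global(a, b, L=255.0):
    """Simplified single-window SSIM between two equal-sized grayscale patches."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))

def symmetry_score(face):
    """SSIM between the left half and the mirrored right half of a face image."""
    h, w = face.shape
    half = w // 2
    left = face[:, :half]
    right_mirrored = face[:, w - half:][:, ::-1]
    return ssim_global(left.astype(float), right_mirrored.astype(float))

sym = np.tile(np.array([10., 50., 50., 10.]), (4, 1))   # mirror-symmetric rows
asym = np.arange(16, dtype=float).reshape(4, 4)         # no mirror symmetry
print(symmetry_score(sym), symmetry_score(asym))
```

    A perfectly mirror-symmetric image scores 1.0; any asymmetry lowers the score.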

  2. Facial Features for Template Matching Based Face Recognition

    Directory of Open Access Journals (Sweden)

    Chai T. Yuen

    2009-01-01

    Full Text Available Problem statement: Template matching has been a conventional method for object detection, especially for facial feature detection, since the early stages of face recognition research. The presence of a moustache or beard has long affected the performance of feature detection and face recognition systems. Approach: The proposed algorithm aimed to reduce the effect of beards and moustaches on facial feature detection and to introduce facial-feature-based template matching as the classification method. An automated algorithm for a face recognition system based on detected facial features, the irises and mouth, was developed. First, the face region was located using skin color information. Next, the algorithm computed the costs for each pair of iris candidates from intensity valleys as references for iris selection. For mouth detection, a color space method was used to locate the lips region, image processing methods were applied to eliminate unwanted noise, and a corner detection technique refined the exact location of the mouth. Finally, template matching was used to classify faces based on the extracted features. Results: The proposed method showed a better feature detection rate (iris = 93.06%, mouth = 95.83%) than the conventional method. Template matching achieved a recognition rate of 86.11% with acceptable processing time (0.36 sec). Conclusion: The results indicate that the elimination of moustaches and beards did not affect the performance of facial feature detection. The proposed feature-based template matching significantly improved the processing time of this method in face recognition research.
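
    The template-matching classification step can be illustrated with zero-mean normalized cross-correlation (NCC), a standard similarity measure for template matching. The abstract does not specify which measure the authors used, so this sketch is an assumption, with invented names and toy data.

```python
import numpy as np

def ncc(patch, template):
    """Zero-mean normalized cross-correlation between two equal-sized patches."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return (p * t).sum() / denom if denom > 0 else 0.0

def match_template(image, template):
    """Slide the template over the image; return the best (row, col) and score."""
    ih, iw = image.shape
    th, tw = template.shape
    best, best_pos = -1.0, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            s = ncc(image[r:r + th, c:c + tw], template)
            if s > best:
                best, best_pos = s, (r, c)
    return best_pos, best

rng = np.random.default_rng(0)
img = rng.random((12, 12))
tmpl = img[4:7, 5:8].copy()      # template cut from a known location
pos, score = match_template(img, tmpl)
print(pos, score)                # the template is found where it was cut
```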

  3. Frame-Based Facial Expression Recognition Using Geometrical Features

    OpenAIRE

    Anwar Saeed; Ayoub Al-Hamadi; Robert Niese; Moftah Elzobi

    2014-01-01

    To make human-computer interaction (HCI) as good as human-human interaction, an efficient approach to human emotion recognition is required. These emotions can be inferred from several modalities such as facial expression, hand gesture, acoustic data, and biophysiological data. In this paper, we address the frame-based perception of the universal human facial expressions (happiness, surprise, anger, disgust, fear, and sadness), with the help of several geometrical featur...

  4. Sequential Clustering based Facial Feature Extraction Method for Automatic Creation of Facial Models from Orthogonal Views

    CERN Document Server

    Ghahari, Alireza

    2009-01-01

    Multiview 3D face modeling has attracted increasing attention recently and has become one of the potential avenues in future video systems. We aim at more reliable and robust automatic feature extraction and natural 3D feature construction from 2D features detected on a pair of frontal- and profile-view face images. We propose several heuristic algorithms to minimize possible errors introduced by the prevalent imperfect orthogonality and inconsistent luminance. In our approach, we first extract the 2D features that are visible to both cameras in both views. Then, we estimate the coordinates of the features in the hidden profile view based on the visible features extracted in the two orthogonal views. Finally, based on the coordinates of the extracted features, we deform a 3D generic model to perform the desired 3D clone modeling. The present study demonstrates the suitability of the resulting facial models for practical applications like face recognition and facial animation.

  5. Facial Feature Extraction Method Based on Coefficients of Variances

    Institute of Scientific and Technical Information of China (English)

    Feng-Xi Song; David Zhang; Cai-Kou Chen; Jing-Yu Yang

    2007-01-01

    Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) are two popular feature extraction techniques in the statistical pattern recognition field. Due to the small-sample-size problem, LDA cannot be directly applied to appearance-based face recognition tasks. As a consequence, many LDA-based facial feature extraction techniques have been proposed to deal with this problem. The Nullspace Method is one of the most effective among them. The Nullspace Method tries to find a set of discriminant vectors that maximize the between-class scatter in the null space of the within-class scatter matrix. The calculation of its discriminant vectors involves performing singular value decomposition on a high-dimensional matrix, which is generally memory- and time-consuming. Borrowing the key idea of the Nullspace Method and the concept of the coefficient of variance from statistical analysis, in this paper we present a novel facial feature extraction method, Discriminant based on Coefficient of Variance (DCV). Experimental results on the FERET and AR face image databases demonstrate that DCV is a promising technique in comparison with Eigenfaces, the Nullspace Method, and other state-of-the-art facial feature extraction methods.
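
    The coefficient-of-variance idea can be illustrated with a minimal feature-scoring sketch: score each feature by its standard deviation relative to its mean and keep the highest-scoring ones. This toy stand-in is not the authors' DCV method, which constructs discriminant vectors in the null space of the within-class scatter matrix; the names and data below are invented.

```python
import numpy as np

def coefficient_of_variation(X):
    """Per-feature coefficient of variation: std / |mean| (ddof=1)."""
    mean = X.mean(axis=0)
    std = X.std(axis=0, ddof=1)
    return std / np.abs(mean)

def select_features(X, k):
    """Indices of the k features with the largest coefficient of variation."""
    cv = coefficient_of_variation(X)
    return np.argsort(cv)[::-1][:k]

X = np.array([[10., 100.,  5.],
              [12., 102., 50.],
              [11.,  98.,  5.]])
# The third feature varies most relative to its mean, so it is selected first.
print(select_features(X, 1))
```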

  6. Face detection and facial feature localization using notch based templates

    International Nuclear Information System (INIS)

    We present real-time detection of faces from video with facial feature localization, together with an algorithm capable of differentiating between face and non-face patterns. The need for face detection and facial feature localization arises in various applications of computer vision, so a great deal of research is dedicated to finding a real-time solution. The algorithm should remain simple enough to run in real time, yet it should not compromise on the challenges encountered during the detection and localization phases: it must be invariant to scale, translation, and (+-45 degree) rotation transformations. The proposed system contains two parts: visual guidance and face/non-face classification. The visual guidance phase uses the fusion of motion and color cues to classify skin color. A morphological operation with a union-structure component-labeling algorithm extracts contiguous regions. Scale normalization is applied by the nearest-neighbor interpolation method to avoid the effect of different scales. Using the aspect ratio of width to height, a Region of Interest (ROI) is obtained and then passed to the face/non-face classifier. Notch (Gaussian) based templates/filters are used to find circular darker regions in the ROI. The classified face region is handed over to the facial feature localization phase, which uses a YCbCr eye/lip mask for facial feature localization. The empirical results show an accuracy of 90% on five different videos with 1000 face/non-face patterns, and the processing rate of the proposed algorithm is 15 frames/sec. (author)
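
    The skin-color classification step of such a pipeline is often done by thresholding the chrominance channels in YCbCr space. A minimal sketch follows; the Cb/Cr ranges used here are common literature values, not taken from this paper, and the function names are illustrative.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """ITU-R BT.601 full-range RGB -> YCbCr conversion."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def skin_mask(rgb, cb_range=(77, 127), cr_range=(133, 173)):
    """Boolean mask of skin-colored pixels; the thresholds are an assumption."""
    ycbcr = rgb_to_ycbcr(rgb.astype(float))
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))

img = np.array([[[224, 172, 138],        # a typical skin tone
                 [  0, 255,   0]]],      # pure green
               dtype=np.uint8)
print(skin_mask(img))   # skin pixel accepted, green pixel rejected
```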

  7. Recognition of facial expressions based on salient geometric features and support vector machines

    OpenAIRE

    Ghimire, Deepak; Lee, Joonwhoan; Li, Ze-Nian; Jeong, Sunghwan

    2016-01-01

    Facial expressions convey nonverbal cues which play an important role in interpersonal relations, and are widely used in the behavioral interpretation of emotions, cognitive science, and social interactions. In this paper we analyze different ways of representing geometric features and present a fully automatic facial expression recognition (FER) system using salient geometric features. In the geometric feature-based FER approach, the first important step is to initialize and track a dense set of facial p...

  8. The relative salience of facial features when differentiating faces based on an interference paradigm

    OpenAIRE

    Ruiz-Soler, Marcos; Salvador Beltrán, Francesc

    2012-01-01

    Research on face recognition and social judgment usually addresses the manipulation of facial features (eyes, nose, mouth, etc.). Using a procedure based on a Stroop-like task, Montepare and Opeyo (J Nonverbal Behav 26(1):43-59, 2002) established a hierarchy of the relative salience of cues based on facial attributes when differentiating faces. Using the same perceptual interference task, we established a hierarchy of facial features. Twenty-three participants (13 men and 10 women) volunteere...

  9. A spatiotemporal feature-based approach for facial expression recognition from depth video

    Science.gov (United States)

    Uddin, Md. Zia

    2015-07-01

    In this paper, a novel spatiotemporal feature-based method is proposed to recognize facial expressions from depth video. Independent Component Analysis (ICA) spatial features of the depth faces of facial expressions are first augmented with optical flow motion features. Then, the augmented features are enhanced by Fisher Linear Discriminant Analysis (FLDA) to make them robust. The features are then modeled with Hidden Markov Models (HMMs) representing the different facial expressions, which are later used to recognize the appropriate expression in a test depth video. The experimental results show the superior performance of the proposed approach over conventional methods.

  10. Facial Composite System Using Real Facial Features

    Directory of Open Access Journals (Sweden)

    Duchovičová Soňa

    2014-12-01

    Full Text Available Facial feature points identification plays an important role in many facial image applications, like face detection, face recognition, facial expression classification, etc. This paper describes the early stages of the research in the field of evolving a facial composite, primarily the main steps of face detection and facial features extraction. Technological issues are identified and possible strategies to solve some of the problems are proposed.

  11. Facial Composite System Using Real Facial Features

    OpenAIRE

    Duchovičová Soňa; Zahradníková Barbora; Schreiber Peter

    2014-01-01

    Facial feature points identification plays an important role in many facial image applications, like face detection, face recognition, facial expression classification, etc. This paper describes the early stages of the research in the field of evolving a facial composite, primarily the main steps of face detection and facial features extraction. Technological issues are identified and possible strategies to solve some of the problems are proposed.

  12. Live facial feature extraction

    Institute of Scientific and Technical Information of China (English)

    ZHAO JieYu

    2008-01-01

    Precise facial feature extraction is essential to high-level face recognition and expression analysis. This paper presents a novel method for the real-time geometric facial feature extraction from live video. In this paper, the input image is viewed as a weighted graph. The segmentation of the pixels corresponding to the edges of facial components of the mouth, eyes, brows, and nose is implemented by means of random walks on the weighted graph. The graph has an 8-connected lattice structure and the weight value associated with each edge reflects the likelihood that a random walker will cross that edge. The random walks simulate an anisotropic diffusion process that filters out the noise while preserving the facial expression pixels. The seeds for the segmentation are obtained from a color and motion detector. The segmented facial pixels are represented with linked lists in the original geometric form and grouped into different parts corresponding to facial components. For the convenience of implementing high-level vision, the geometric description of facial component pixels is further decomposed into shape and registration information. Shape is defined as the geometric information that is invariant under the registration transformation, such as translation, rotation, and isotropic scale. Statistical shape analysis is carried out to capture global facial features where the Procrustes shape distance measure is adopted. A Bayesian approach is used to incorporate high-level prior knowledge of face structure. Experimental results show that the proposed method is capable of real-time extraction of precise geometric facial features from live video. The feature extraction is robust against the illumination changes, scale variation, head rotations, and hand interference.
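
    The Procrustes shape distance mentioned for the statistical shape analysis can be sketched as a minimal orthogonal-Procrustes computation over corresponding 2D landmark sets: remove translation and scale, solve for the optimal rotation via SVD, and measure what remains. The function names and toy shapes are illustrative, not from the paper.

```python
import numpy as np

def procrustes_distance(X, Y):
    """Shape distance after removing translation, scale, and rotation.
    X, Y: (n_points, 2) landmark arrays in point-to-point correspondence."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    X = X / np.linalg.norm(X)          # Frobenius-normalize away scale
    Y = Y / np.linalg.norm(Y)
    # Optimal rotation via SVD of the cross-covariance (orthogonal Procrustes)
    u, s, vt = np.linalg.svd(Y.T @ X)
    R = u @ vt
    return np.linalg.norm(X - Y @ R)

square = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
theta = np.pi / 6
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
same_shape = (square @ rot.T) * 3.0 + 5.0          # rotated, scaled, translated
squashed = square * np.array([1.0, 0.2])           # a genuinely different shape
print(procrustes_distance(square, same_shape), procrustes_distance(square, squashed))
```

    A similarity-transformed copy of a shape has distance near zero, while a deformed shape does not, which is exactly the invariance the abstract's shape definition requires.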

  13. Age Estimation Based on AAM and 2D-DCT Features of Facial Images

    Directory of Open Access Journals (Sweden)

    Asuman Günay

    2015-02-01

    Full Text Available This paper proposes a novel age estimation method, Global and Local feAture based Age estiMation (GLAAM), relying on global and local features of facial images. Global features are obtained with Active Appearance Models (AAM). Local features are extracted with regional 2D-DCT (2-dimensional Discrete Cosine Transform) of normalized facial images. GLAAM consists of the following modules: face normalization, global feature extraction with AAM, local feature extraction with 2D-DCT, dimensionality reduction by means of Principal Component Analysis (PCA), and age estimation with multiple linear regression. Experiments have shown that GLAAM outperforms many methods previously applied to the FG-NET database.
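
    The local 2D-DCT feature stage followed by multiple linear regression can be sketched as follows: take the low-frequency (top-left) DCT coefficients of each normalized patch as features and fit ages by least squares. This is a toy sketch on synthetic data, not the GLAAM pipeline (AAM and PCA stages are omitted); all names and the 8x8 patch size are assumptions.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0] /= np.sqrt(2.0)
    return m

def dct2_features(img, keep=4):
    """2D-DCT of a grayscale patch; keep the top-left keep x keep
    low-frequency coefficients as the feature vector."""
    D = dct_matrix(img.shape[0])
    E = dct_matrix(img.shape[1])
    coeffs = D @ img @ E.T
    return coeffs[:keep, :keep].ravel()

rng = np.random.default_rng(1)
faces = rng.random((20, 8, 8))                        # toy "normalized face" patches
ages = np.array([f.mean() * 60 + 20 for f in faces])  # synthetic age labels
X = np.stack([dct2_features(f) for f in faces])

# Multiple linear regression by least squares, as in the paper's final stage
A = np.hstack([X, np.ones((len(X), 1))])
w, *_ = np.linalg.lstsq(A, ages, rcond=None)
pred = A @ w
print(np.abs(pred - ages).mean())
```

    The synthetic ages depend linearly on the DC coefficient, so the regression recovers them almost exactly; real age labels would of course leave a residual.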

  14. Facial expression recognition based on fused Feature of PCA and LDP

    Science.gov (United States)

    Yi, Zhang; Mao, Hou-lin; Luo, Yuan

    2014-11-01

    Facial expression recognition is an important part of the study of man-machine interaction. Principal component analysis (PCA) is an extraction method based on statistical features drawn from the global grayscale features of the whole image, but global grayscale features are environmentally sensitive. In order to recognize facial expressions accurately, a method fusing principal component analysis and the local direction pattern (LDP) is introduced in this paper. First, PCA extracts the global features of the whole grayscale image; LDP extracts the local grayscale texture features of the mouth and eye regions, which contribute most to facial expression recognition, to complement the global grayscale features of PCA. Then we adopt a Support Vector Machine (SVM) classifier for expression classification. Experimental results demonstrate that this method classifies different expressions more effectively and achieves a higher recognition rate than the traditional method.
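
    LDP codes are commonly computed from the responses of the eight Kirsch edge masks, setting one bit for each of the k strongest responses. The sketch below follows that common formulation; the exact LDP variant in the paper may differ, and the patch and names are invented for illustration.

```python
import numpy as np

# Eight Kirsch edge masks (one per compass direction)
KIRSCH = [np.array(m, dtype=float) for m in [
    [[-3, -3,  5], [-3, 0,  5], [-3, -3,  5]],
    [[-3,  5,  5], [-3, 0,  5], [-3, -3, -3]],
    [[ 5,  5,  5], [-3, 0, -3], [-3, -3, -3]],
    [[ 5,  5, -3], [ 5, 0, -3], [-3, -3, -3]],
    [[ 5, -3, -3], [ 5, 0, -3], [ 5, -3, -3]],
    [[-3, -3, -3], [ 5, 0, -3], [ 5,  5, -3]],
    [[-3, -3, -3], [-3, 0, -3], [ 5,  5,  5]],
    [[-3, -3, -3], [-3, 0,  5], [-3,  5,  5]],
]]

def ldp_code(patch, k=3):
    """LDP code of a 3x3 patch: set one bit for each of the k strongest
    absolute Kirsch responses."""
    responses = np.array([np.abs((patch * m).sum()) for m in KIRSCH])
    top = np.argsort(responses)[::-1][:k]
    code = 0
    for t in top:
        code |= 1 << int(t)
    return code

patch = np.array([[10., 10., 200.],
                  [10., 10., 200.],
                  [10., 10., 200.]])   # strong vertical edge on the right
code = ldp_code(patch)
print(format(code, '08b'))
```

    A histogram of such codes over the mouth and eye regions would then form the local texture features that complement the PCA features.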

  15. A Classifier Model based on the Features Quantitative Analysis for Facial Expression Recognition

    Directory of Open Access Journals (Sweden)

    Amir Jamshidnezhad

    2011-01-01

    Full Text Available In recent decades computer technology has developed considerably in the use of intelligent systems for classification. The development of HCI systems is highly dependent on an accurate understanding of emotions. However, facial expressions are difficult to classify with mathematical models because of their natural variability. In this paper, quantitative analysis is used to find the most effective feature movements between the selected facial feature points. The features are therefore extracted not only on the basis of psychological studies, but also with quantitative methods, to raise the accuracy of recognition. In this model, fuzzy logic and a genetic algorithm are used to classify facial expressions. The genetic algorithm is an exclusive attribute of the proposed model and is used for tuning membership functions and increasing the accuracy.

  16. Facial expression recognition based on local region specific features and support vector machines

    OpenAIRE

    Ghimire, Deepak; Jeong, Sunghwan; Lee, Joonwhoan; Park, Sang Hyun

    2016-01-01

    Facial expressions are one of the most powerful, natural and immediate means for human beings to communicate their emotions and intentions. Recognition of facial expressions has many applications including human-computer interaction, cognitive science, human emotion analysis, personality development, etc. In this paper, we propose a new method for the recognition of facial expressions from a single image frame that uses a combination of appearance and geometric features with support vector machines ...

  17. Tracking facial features with occlusions

    Institute of Scientific and Technical Information of China (English)

    MARKIN Evgeny; PRAKASH Edmond C.

    2006-01-01

    Facial expression recognition consists of determining what kind of emotional content is presented in a human face. The problem presents a complex area for exploration, since it encompasses face acquisition, facial feature tracking, and facial expression classification. Facial feature tracking is of the most interest here. The Active Appearance Model (AAM) enables accurate tracking of facial features in real time, but does not handle occlusions and self-occlusions. In this paper we propose a solution to improve the accuracy of the fitting technique. The idea is to include occluded images in the AAM training data. We demonstrate the results by running experiments using a gradient descent algorithm for fitting the AAM. Our experiments show that fitting with occluded training data improves the fitting quality of the algorithm.

  18. A Classification Method of Normal and Overweight Females Based on Facial Features for Automated Medical Applications

    Directory of Open Access Journals (Sweden)

    Bum Ju Lee

    2012-01-01

    Full Text Available Obesity and overweight have become serious public health problems worldwide. Obesity and abdominal obesity are associated with type 2 diabetes, cardiovascular diseases, and metabolic syndrome. In this paper, we first suggest a method of predicting normal and overweight females according to body mass index (BMI) based on facial features. A total of 688 subjects participated in this study. We obtained an area under the ROC curve (AUC) value of 0.861 and a kappa value of 0.521 in the Female: 21-40 (females aged 21-40 years) group, and an AUC value of 0.76 and a kappa value of 0.401 in the Female: 41-60 (females aged 41-60 years) group. In both groups, we found many features showing statistically significant differences between normal and overweight subjects by using an independent two-sample t-test. We demonstrated that it is possible to predict BMI status using facial characteristics. Our results provide useful information for studies of obesity and facial characteristics, and may provide useful clues in the development of applications for alternative diagnosis of obesity in remote healthcare.
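
    The per-feature screening described above, an independent two-sample t-test between the normal and overweight groups, can be sketched as follows on synthetic data. The data, significance level, and function names are invented for illustration; only the statistical test itself matches the abstract.

```python
import numpy as np
from scipy import stats

def screen_features(X_group_a, X_group_b, alpha=0.05):
    """Independent two-sample t-test per facial feature; return the indices
    of features whose group means differ significantly."""
    keep = []
    for j in range(X_group_a.shape[1]):
        t, p = stats.ttest_ind(X_group_a[:, j], X_group_b[:, j])
        if p < alpha:
            keep.append(j)
    return keep

rng = np.random.default_rng(7)
# Toy data: feature 0 differs strongly between groups, feature 1 does not.
normal = np.column_stack([rng.normal(0.0, 1.0, 200), rng.normal(5.0, 1.0, 200)])
over   = np.column_stack([rng.normal(2.0, 1.0, 200), rng.normal(5.0, 1.0, 200)])
print(screen_features(normal, over))   # feature 0 should be selected
```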

  19. Facial expression recognition with facial parts based sparse representation classifier

    Science.gov (United States)

    Zhi, Ruicong; Ruan, Qiuqi

    2009-10-01

    Facial expressions play an important role in human communication. The understanding of facial expressions is a basic requirement in the development of next-generation human-computer interaction systems. Research shows that the intrinsic facial features always hide in low-dimensional facial subspaces. This paper presents a facial-parts-based facial expression recognition system with a sparse representation classifier. The sparse representation classifier exploits sparse representation to select face features and classify facial expressions. The sparse solution is obtained by solving an l1-norm minimization problem with a linear-combination equality constraint. Experimental results show that sparse representation is efficient for facial expression recognition, and that the sparse representation classifier obtains much higher recognition accuracies than the other compared methods.
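
    The sparse representation classifier (SRC) idea can be sketched as: code the test sample over a dictionary of all training samples with an l1-regularized solver, then assign the class whose coefficients reconstruct the sample with the smallest residual. The abstract does not name its l1 solver, so the simple iterative soft-thresholding (ISTA) loop below is a stand-in; all names and the toy data are assumptions.

```python
import numpy as np

def ista(A, y, lam=0.01, iters=500):
    """Minimize 0.5*||A x - y||^2 + lam*||x||_1 by iterative soft-thresholding
    (a simple stand-in for the paper's l1-norm solver)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - (A.T @ (A @ x - y)) / L    # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

def src_classify(train, labels, y):
    """SRC: code y over all training samples, then pick the class whose
    coefficients give the smallest reconstruction residual."""
    A = train / np.linalg.norm(train, axis=0)   # column-normalized dictionary
    x = ista(A, y)
    labels = np.array(labels)
    best, best_res = None, np.inf
    for c in set(labels.tolist()):
        xc = np.where(labels == c, x, 0.0)      # keep only class-c coefficients
        res = np.linalg.norm(y - A @ xc)
        if res < best_res:
            best, best_res = c, res
    return best

rng = np.random.default_rng(3)
base0 = rng.normal(size=20)
base1 = rng.normal(size=20)
train = np.column_stack([base0 + 0.05 * rng.normal(size=20) for _ in range(5)] +
                        [base1 + 0.05 * rng.normal(size=20) for _ in range(5)])
labels = [0] * 5 + [1] * 5
probe = base1 + 0.05 * rng.normal(size=20)      # noisy sample of class 1
print(src_classify(train, labels, probe))
```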

  20. Automatic Facial Expression Recognition Using Features of Salient Facial Patches

    OpenAIRE

    Happy, S L; Routray, Aurobinda

    2015-01-01

    Extraction of discriminative features from salient facial patches plays a vital role in effective facial expression recognition. The accurate detection of facial landmarks improves the localization of the salient patches on face images. This paper proposes a novel framework for expression recognition by using appearance features of selected facial patches. A few prominent facial patches, depending on the position of facial landmarks, are extracted which are active during emotion elicitation. ...

  1. Automatic facial feature extraction and expression recognition based on neural network

    CERN Document Server

    Khandait, S P; Khandait, P D

    2012-01-01

    In this paper, an approach is presented for automatic facial feature extraction from a still frontal posed image and for classification and recognition of the facial expression, and hence the emotion and mood, of a person. A feed-forward back-propagation neural network is used as a classifier for classifying the expressions of a supplied face into seven basic categories: surprise, neutral, sad, disgust, fear, happy and angry. For face portion segmentation and localization, morphological image processing operations are used. Permanent facial features like the eyebrows, eyes, mouth and nose are extracted using the SUSAN edge detection operator, facial geometry, and edge projection analysis. Experiments carried out on the JAFFE facial expression database give 100% accuracy on the training set and 95.26% accuracy on the test set.

  2. Geometric Feature-Based Facial Expression Recognition in Image Sequences Using Multi-Class AdaBoost and Support Vector Machines

    OpenAIRE

    Joonwhoan Lee; Deepak Ghimire

    2013-01-01

    Facial expressions are widely used in the behavioral interpretation of emotions, cognitive science, and social interactions. In this paper, we present a novel method for fully automatic facial expression recognition in facial image sequences. As the facial expression evolves over time, facial landmarks are automatically tracked in consecutive video frames, using displacements based on elastic bunch graph matching displacement estimation. Feature vectors from individual landmarks, as well as pa...

  3. An Efficient Multimodal 2D + 3D Feature-based Approach to Automatic Facial Expression Recognition

    KAUST Repository

    Li, Huibin

    2015-07-29

    We present a fully automatic multimodal 2D + 3D feature-based facial expression recognition approach and demonstrate its performance on the BU-3DFE database. Our approach combines multi-order gradient-based local texture and shape descriptors in order to achieve efficiency and robustness. First, a large set of fiducial facial landmarks of 2D face images along with their 3D face scans are localized using a novel algorithm, namely the incremental Parallel Cascade of Linear Regression (iPar-CLR). Then, a novel Histogram of Second Order Gradients (HSOG) based local image descriptor, in conjunction with the widely used first-order gradient based SIFT descriptor, is used to describe the local texture around each 2D landmark. Similarly, the local geometry around each 3D landmark is described by two novel local shape descriptors constructed using the first-order and the second-order surface differential geometry quantities, i.e., the Histogram of mesh Gradients (meshHOG) and the Histogram of mesh Shape index (curvature quantization, meshHOS). Finally, the Support Vector Machine (SVM) based recognition results of all 2D and 3D descriptors are fused at both feature level and score level to further improve the accuracy. Comprehensive experimental results demonstrate that there are impressive complementary characteristics between the 2D and 3D descriptors. We use the BU-3DFE benchmark to compare our approach to state-of-the-art methods. Our multimodal feature-based approach outperforms the others by achieving an average recognition accuracy of 86.32%. Moreover, good generalization ability is shown on the Bosphorus database.

  4. Facial Expression Recognition Using 3D Facial Feature Distances

    OpenAIRE

    Soyel, Hamit; Hasan DEMIREL

    2008-01-01

    In this chapter we have shown that a probabilistic neural network classifier can be used for the 3D analysis of facial expressions without relying on all 84 facial features or an error-prone face pose normalization stage. Face deformation as well as facial muscle contraction and expansion are important indicators of facial expression, and by using only 11 facial feature points and the symmetry of the human face, we are able to extract enough information from a face image. Our results sho...

  5. Automatic facial feature extraction and expression recognition based on neural network

    OpenAIRE

    Khandait, S. P.; Thool, R. C.; Khandait, P. D.

    2012-01-01

    In this paper, an approach is presented for automatic facial feature extraction from a still frontal posed image and for classification and recognition of the facial expression, and hence the emotion and mood, of a person. A feed-forward back-propagation neural network is used as a classifier for classifying the expressions of a supplied face into seven basic categories: surprise, neutral, sad, disgust, fear, happy and angry. For face portion segmentation and localization, morphological image...

  6. Unifying Geometric Features and Facial Action Units for Improved Performance of Facial Expression Analysis

    OpenAIRE

    Ghayoumi, Mehdi; Bansal, Arvind K.

    2016-01-01

    Previous approaches to modeling and analyzing facial expressions use three different techniques: facial action units, geometric features, and graph-based modelling. However, previous approaches have treated these techniques separately, even though there is an interrelationship between them. Facial expression analysis is significantly improved by utilizing the mappings between the major geometric features involved in facial expressions and the subset of facial action units whose presence or...

  7. Tracking subtle stereotypes of children with trisomy 21: from facial-feature-based to implicit stereotyping.

    Directory of Open Access Journals (Sweden)

    Claire Enea-Drapeau

    Full Text Available BACKGROUND: Stigmatization is one of the greatest obstacles to the successful integration of people with Trisomy 21 (T21, or Down syndrome), the most frequent genetic disorder associated with intellectual disability. Research on attitudes and stereotypes toward these people still focuses on explicit measures subject to social-desirability biases, and neglects how variability in facial stigmata influences attitudes and stereotyping. METHODOLOGY/PRINCIPAL FINDINGS: The participants were 165 adults including 55 young adult students, 55 non-student adults, and 55 professional caregivers working with intellectually disabled persons. They were faced with implicit association tests (IAT), a well-known technique whereby response latency is used to capture the relative strength with which some groups of people--here photographed faces of typically developing children and children with T21--are automatically (without conscious awareness) associated with positive versus negative attributes in memory. Each participant also rated the same photographed faces (consciously accessible evaluations). We provide the first evidence that the positive bias typically found in explicit judgments of children with T21 is smaller for those whose facial features are highly characteristic of this disorder, compared to their counterparts with less distinctive features and to typically developing children. We also show that this bias can coexist with negative evaluations at the implicit level (with large effect sizes), even among professional caregivers. CONCLUSION: These findings support recent models of feature-based stereotyping, and more importantly show how crucial it is to go beyond explicit evaluations to estimate the true extent of stigmatization of intellectually disabled people.

  8. Self-Learning Facial Emotional Feature Selection Based on Rough Set Theory

    OpenAIRE

    Hao Kong; Guoyin Wang; Yong Yang

    2009-01-01

    Emotion recognition is very important for human-computer intelligent interaction. It is generally performed on facial or audio information using artificial neural networks, fuzzy sets, support vector machines, hidden Markov models, and so forth. Although some progress has already been made in emotion recognition, several unsolved issues still exist. For example, it is still an open problem which features are the most important for emotion recognition. It is a subject that has seldom been studied in compu...

  9. Geometric Feature-Based Facial Expression Recognition in Image Sequences Using Multi-Class AdaBoost and Support Vector Machines

    Directory of Open Access Journals (Sweden)

    Joonwhoan Lee

    2013-06-01

    Full Text Available Facial expressions are widely used in the behavioral interpretation of emotions, cognitive science, and social interactions. In this paper, we present a novel method for fully automatic facial expression recognition in facial image sequences. As the facial expression evolves over time, facial landmarks are automatically tracked in consecutive video frames, using displacements based on elastic bunch graph matching displacement estimation. Feature vectors from individual landmarks, as well as pairs of landmarks, are extracted from the tracking results and normalized with respect to the first frame in the sequence. The prototypical expression sequence for each class of facial expression is formed by taking the median of the landmark tracking results from the training facial expression sequences. Multi-class AdaBoost with a dynamic time warping similarity distance between the feature vector of the input facial expression and the prototypical facial expression is used as a weak classifier to select the subset of discriminative feature vectors. Finally, two methods for facial expression recognition are presented: multi-class AdaBoost with dynamic time warping, and a support vector machine on the boosted feature vectors. Results on the Cohn-Kanade (CK+) facial expression database show recognition accuracies of 95.17% and 97.35% using multi-class AdaBoost and support vector machines, respectively.
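
    The dynamic time warping similarity distance between an input landmark-feature sequence and a prototypical sequence can be sketched with the standard DTW recurrence. This is a generic textbook implementation, not the authors' code; the toy sequences and names are invented.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two feature sequences
    (rows are per-frame feature vectors)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible warping-path moves
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

slow = np.array([[0.], [0.], [1.], [1.], [2.], [2.]])   # same shape, half speed
fast = np.array([[0.], [1.], [2.]])
shifted = fast + 10.0                                    # genuinely different values
print(dtw_distance(slow, fast), dtw_distance(slow, shifted))
```

    DTW aligns the slow and fast versions of the same expression trajectory at zero cost, while a different trajectory remains far away, which is what makes it usable as a similarity distance inside the boosted weak classifiers.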

  10. Emotion Recognition based on 2D-3D Facial Feature Extraction from Color Image Sequences

    Directory of Open Access Journals (Sweden)

    Robert Niese

    2010-10-01

    Full Text Available In modern human computer interaction systems, emotion recognition from video is becoming an imperative feature. In this work we propose a new method for automatic recognition of facial expressions related to categories of basic emotions from image data. Our method incorporates a series of image processing, low-level 3D computer vision and pattern recognition techniques. For image feature extraction, color and gradient information is used. Further, in terms of 3D processing, camera models are applied along with an initial registration step, in which person-specific face models are automatically built from stereo. Based on these face models, geometric feature measures are computed and normalized using photogrammetric techniques. For recognition, this normalization leads to minimal mixing between different emotion classes, which are determined with an artificial neural network classifier. Our framework achieves robust and superior classification results, also across a variety of head poses with resulting perspective foreshortening and changing face size. Results are presented for domestic and publicly available databases.

  11. Artificial Neural Networks and Gene Expression Programing based age estimation using facial features

    Directory of Open Access Journals (Sweden)

    Baddrud Z. Laskar

    2015-10-01

    Full Text Available This work is about estimating human age automatically through analysis of facial images, which has many real-world applications. Due to prompt advances in the fields of machine vision, facial image processing, and computer graphics, automatic age estimation from faces is one of the dominant topics these days. This is due to widespread real-world applications in biometrics, security, surveillance, control, forensic art, entertainment, online customer management and support, along with cosmetology. As it is difficult to estimate the exact age, this system estimates a certain range of ages: four classes are used to assign a person's data to one of the different age groups. The uniqueness of this study is the use of two technologies, Artificial Neural Networks (ANN) and Gene Expression Programming (GEP), to estimate the age and then compare the results. New methodologies like GEP have been explored here and significant results were found. The dataset has been developed to provide more efficient results by superior preprocessing methods. The proposed approach has been developed, tested and trained using both methods. A public data set, FG-NET, was used to test the system. The quality of the proposed system for age estimation using facial features is shown by broad experiments on the available FG-NET database.
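
    The abstract states that four classes separate subjects into age groups but does not give the group boundaries. A minimal sketch of the binning step, with hypothetical boundaries (an assumption, not the authors' values):

    ```python
    def age_group(age, boundaries=(15, 30, 50)):
        """Map an exact age to one of four age-group labels 0..3.

        The boundary values are hypothetical; the paper's actual
        four-way split is not stated in the abstract.
        """
        for label, upper in enumerate(boundaries):
            if age <= upper:
                return label
        return len(boundaries)  # oldest group
    ```

    A classifier (ANN or GEP) would then be trained to predict these labels from facial features rather than the exact age.
    
    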

  12. Facial Expression Recognition using Entropy and Brightness Features

    OpenAIRE

    Khan, Rizwan Ahmed; Meyer, Alexandre; Konik, Hubert; Bouakaz, Saïda

    2011-01-01

    International audience This paper proposes a novel framework for universal facial expression recognition. The framework is based on two sets of features extracted from the face image: entropy and brightness. First, saliency maps are obtained by a state-of-the-art saliency detection algorithm, i.e. "frequency-tuned salient region detection". Then only localized salient facial regions from the saliency maps are processed to extract entropy and brightness features. To validate the performance of sali...

  13. ROI Segmentation for Feature Extraction from Human Facial Images

    Directory of Open Access Journals (Sweden)

    Surbhi

    2012-04-01

    Full Text Available Human Computer Interaction (HCI) is the biggest goal of computer vision researchers. Features from different facial images can provide very deep knowledge about the activities performed by different facial movements. In this paper we present a technique for feature extraction from various regions of interest with the help of a skin color segmentation technique, thresholding, and a knowledge-based technique for face recognition.
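
    The abstract names skin color segmentation and thresholding but not the thresholds themselves. A minimal sketch using the classic explicit RGB skin rule (an assumed stand-in, not the authors' values):

    ```python
    def is_skin_rgb(r, g, b):
        """Classic explicit RGB skin-colour rule (uniform daylight variant);
        a stand-in for the paper's unspecified segmentation thresholds."""
        return (r > 95 and g > 40 and b > 20
                and max(r, g, b) - min(r, g, b) > 15
                and abs(r - g) > 15 and r > g and r > b)

    def skin_mask(image):
        """Binary ROI mask for an image given as nested lists of (r, g, b)."""
        return [[1 if is_skin_rgb(*px) else 0 for px in row] for row in image]
    ```

    Connected regions of the resulting mask would then serve as face-candidate ROIs for feature extraction.
    
    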

  14. Facial Expression Geometrical Feature Extraction Based on Chain Code

    Institute of Scientific and Technical Information of China (English)

    张庆; 代锐; 朱雪莹; 韦穗

    2012-01-01

    Existing facial expression feature extraction algorithms yield low recognition rates. To address this, this paper proposes a facial expression geometric feature extraction algorithm based on chain codes. Starting from feature points located by an active shape model, the method applies circular chain-code encoding to the coordinates of the located feature points on facial targets to extract geometric facial expression features. Experimental results show that, compared with the classical LBP expression feature method, the recognition rate of the algorithm improves by about 10%.
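
    The circular chain coding of landmark positions can be sketched with the standard 8-direction Freeman code, assuming unit steps between consecutive points (the paper's exact encoding is not given in the abstract):

    ```python
    # 8-direction Freeman chain code: 0=E, 1=NE, 2=N, 3=NW, 4=W, 5=SW, 6=S, 7=SE
    DIRECTIONS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
                  (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

    def chain_code(points):
        """Encode a closed sequence of (x, y) landmark points as Freeman
        chain codes between consecutive points (unit steps assumed)."""
        codes = []
        for (x0, y0), (x1, y1) in zip(points, points[1:] + points[:1]):
            dx = (x1 > x0) - (x1 < x0)  # sign of the step in x
            dy = (y1 > y0) - (y1 < y0)  # sign of the step in y
            codes.append(DIRECTIONS[(dx, dy)])
        return codes
    ```

    The resulting code sequence is a compact, translation-invariant description of the landmark contour's shape.
    
    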

  15. Detection of Facial Features in Scale-Space

    Directory of Open Access Journals (Sweden)

    P. Hosten

    2007-01-01

    Full Text Available This paper presents a new approach to the detection of facial features. A scale adapted Harris Corner detector is used to find interest points in scale-space. These points are described by the SIFT descriptor. Thus invariance with respect to image scale, rotation and illumination is obtained. Applying a Karhunen-Loeve transform reduces the dimensionality of the feature space. In the training process these features are clustered by the k-means algorithm, followed by a cluster analysis to find the most distinctive clusters, which represent facial features in feature space. Finally, a classifier based on the nearest neighbor approach is used to decide whether the features obtained from the interest points are facial features or not. 
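
    The final nearest-neighbour decision over the distinctive clusters can be sketched as follows; the distance threshold for rejecting points far from every centre is an assumption, since the abstract only states a nearest-neighbour classifier:

    ```python
    import math

    def is_facial_feature(descriptor, centers, facial_labels, threshold):
        """Nearest-neighbour decision over cluster centres: accept a
        descriptor as a facial feature if its closest centre belongs to
        one of the distinctive 'facial' clusters and lies within a
        distance threshold (the threshold-based rejection is assumed)."""
        best = min(range(len(centers)),
                   key=lambda i: math.dist(descriptor, centers[i]))
        return (best in facial_labels
                and math.dist(descriptor, centers[best]) <= threshold)
    ```

    In the paper's pipeline the descriptors would be dimensionality-reduced SIFT vectors and the centres the k-means clusters found in training.
    
    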

  16. Extraction of Facial Features from Color Images

    Directory of Open Access Journals (Sweden)

    J. Pavlovicova

    2008-09-01

    Full Text Available In this paper, a method for localization and extraction of faces and characteristic facial features such as eyes, mouth and face boundaries from color image data is proposed. This approach exploits color properties of human skin to localize image regions that are face candidates. Facial feature extraction is performed only on preselected face-candidate regions. Likewise, for eye and mouth localization, color information and the local contrast around the eyes are used. The ellipse of the face boundary is determined using a gradient image and the Hough transform. The algorithm was tested on the FERET image database.

  17. Odor valence linearly modulates attractiveness, but not age assessment, of invariant facial features in a memory-based rating task.

    Directory of Open Access Journals (Sweden)

    Janina Seubert

    Full Text Available Scented cosmetic products are used across cultures as a way to favorably influence one's appearance. While crossmodal effects of odor valence on perceived attractiveness of facial features have been demonstrated experimentally, it is unknown whether they represent a phenomenon specific to affective processing. In this experiment, we presented odors in the context of a face battery with systematic feature manipulations during a speeded response task. Modulatory effects of linear increases of odor valence were investigated by juxtaposing subsequent memory-based rating tasks: one predominantly affective (attractiveness) and a second, cognitive (age). The linear modulation pattern observed for attractiveness was consistent with additive effects of face and odor appraisal. Effects of odor valence on age perception were not linearly modulated and may be the result of cognitive interference. Affective and cognitive processing of faces thus appear to differ in their susceptibility to modulation by odors, likely as a result of privileged access of olfactory stimuli to affective brain networks. These results are critically discussed with respect to potential biases introduced by the preceding speeded response task.

  18. Towards the automation of forensic facial individualisation: Comparing forensic to non forensic eyebrow features

    OpenAIRE

    Zeinstra, Chris; Veldhuis, Raymond; Spreeuwers, Luuk

    2014-01-01

    The Facial Identification Scientific Working Group (FISWG) publishes recommendations regarding one-to-one facial comparisons. At this moment a draft version of a facial image comparison feature list for morphological analysis has been published. This feature list is based on casework experience by forensic facial examiners. This paper investigates whether the performance of the FISWG eyebrow feature set can be considered as being "state-of-the-art". We compare the recognition performance of o...

  19. Dynamic Facial Fatigue Recognition Based on Independent Features Fusion

    Institute of Scientific and Technical Information of China (English)

    孙艳丰; 卢冰; 王立春

    2013-01-01

    To improve facial fatigue state recognition, a facial fatigue feature representation method fusing global and local features is proposed. The method combines the discrete cosine transform (DCT), independent component analysis (ICA) and the Gabor transform, and obtains the final facial fatigue feature representation by fusing global independent DCT features with local dynamic Gabor features. Experiments based on a previously self-built fatigue image sequence database show that the fatigue features extracted by this method are more discriminative.

  20. A Novel Feature Extraction Technique for Facial Expression Recognition

    Directory of Open Access Journals (Sweden)

    Mohammad Shahidul Islam

    2013-01-01

    Full Text Available This paper presents a new technique to extract a light-invariant local feature for facial expression recognition. It is not only robust to monotonic gray-scale changes caused by light variations but also very simple to perform, which makes it possible to analyze images in challenging real-time settings. The local feature for a pixel is computed by finding the direction of the neighboring pixel with a particular rank, in terms of gray-scale value, among all the neighboring pixels. When eight neighboring pixels are considered, the direction of the neighboring pixel with the second minimum of the gray-scale intensity yields the best performance for facial expression recognition in our experiment. The facial expression classification in the experiment was performed using a support vector machine on the CK+ dataset. The average recognition rate achieved is 90.1 ± 3.8%, which is better than other previous local-feature-based methods for facial expression analysis. The experimental results show that the proposed feature extraction technique is fast, accurate and efficient for facial expression recognition.
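
    The described local feature, the direction of the neighbour holding a given intensity rank, can be sketched for a single 3x3 patch; the clockwise direction layout is an assumed convention, not necessarily the authors':

    ```python
    def rank_direction_code(patch, rank=1):
        """Local feature of the centre pixel of a 3x3 patch: the index
        (0..7, clockwise from the top-left neighbour) of the neighbour
        whose grey value has the given rank among all 8 neighbours.
        rank=1 selects the second minimum, the best performer reported
        in the paper."""
        coords = [(0, 0), (0, 1), (0, 2), (1, 2),
                  (2, 2), (2, 1), (2, 0), (1, 0)]  # assumed clockwise order
        values = [patch[r][c] for r, c in coords]
        order = sorted(range(8), key=lambda i: values[i])
        return order[rank]  # direction of the rank-th smallest neighbour
    ```

    Applied at every pixel, the codes form a direction map that is invariant to any monotonic change of the grey scale, since only the ranking of neighbours matters.
    
    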

  1. Effects of Bariatric Surgery on Facial Features

    OpenAIRE

    Papoian, Vardan; Mardirossian, Vartan; Hess, Donald Thomas; Spiegel, Jeffrey H.

    2015-01-01

    Background Bariatric surgeries performed in the USA have increased twelve-fold in the past two decades. The effects of rapid weight loss on facial features have not been previously studied. We hypothesized that bariatric surgery would mimic the effects of aging, thus giving the patient an older and less attractive appearance. Methods Consecutive patients were enrolled from the bariatric surgical clinic at our institution. Pre and post weight loss photographs were taken and used to generate two su...

  2. Face Recognition Based on Explicit Facial Features by Fusion Construction Method

    Institute of Scientific and Technical Information of China (English)

    杨飞; 苏剑波

    2012-01-01

    In the current research on face recognition, facial geometric features have not been fully utilized. Thus, the importance of geometric features in face recognition is explicated, and a novel technique for facial geometric feature extraction is proposed. Then a facial explicit feature is constructed based on the fusion of geometric and texture information, and a corresponding face recognition method using these features is given. This novel face recognition method not only possesses some advantages over the popular subspace methods based on statistical learning, but can also be a complement to the latter. Experiments demonstrate that the extracted features and the corresponding face recognition algorithm are robust to facial expression and environmental illumination variations.

  3. Frontal Facial Pose Recognition Using a Discriminant Splitting Feature Extraction Procedure

    OpenAIRE

    Marras, Ioannis; Nikolaidis, Nikos; Pitas, Ioannis

    2011-01-01

    Frontal facial pose recognition deals with classifying facial images into two classes: frontal and non-frontal. Recognition of frontal poses is required as a preprocessing step for face analysis algorithms (e.g. face or facial expression recognition) that can operate only on frontal views. A novel frontal facial pose recognition technique that is based on discriminant image splitting for feature extraction is presented in this paper. Spatially homogeneous and discriminant regions for each faci...

  4. Identification based on facial parts

    Directory of Open Access Journals (Sweden)

    Stevanov Zorica

    2007-01-01

    Full Text Available Two opposing views dominate the face identification literature, one suggesting that the face is processed as a whole and another suggesting analysis based on parts. Our research tried to establish which of these two is the dominant strategy, and our results fell in the direction of analysis based on parts. The faces were covered with a mask and the participants uncovered different parts, one at a time, in an attempt to identify a person. Already at the level of a single facial feature, such as the mouth or an eye and the top of the nose, some observers were able to establish the identity of a familiar face. Identification is exceptionally successful when a small assembly of facial parts is visible, such as an eye, eyebrow and the top of the nose. Some facial parts are not very informative on their own but do enhance recognition when given as part of such an assembly. A novel finding here is the importance of the top of the nose for face identification. Additionally, observers have a preference toward the left side of the face. Typically subjects view the elements in the following order: left eye, left eyebrow, right eye, lips, region between the eyes, right eyebrow, region between the eyebrows, left cheek, right cheek. When observers are not in a position to see the eyes, eyebrows or top of the nose, they go for the lips first and then the region between the eyebrows, region between the eyes, left cheek, right cheek and finally the chin.

  5. Fusing Facial Features for Face Recognition

    OpenAIRE

    Jamal Ahmad Dargham; Ali Chekima; Ervin Gubin Moung

    2012-01-01

    Face recognition is an important biometric method because of its potential applications in many fields, such as access control, surveillance, and human-computer interaction. In this paper, a face recognition system that fuses the outputs of three face recognition systems based on Gabor jets is presented. The first system uses the magnitude, the second uses the phase, and the third uses the phase-weighted magnitude of the jets. The jets are generated from facial landmarks selected using three ...

  6. Facial Expression Recognition Using New Feature Extraction Algorithm

    OpenAIRE

    Huang, Hung-Fu; Tai, Shen-Chuan

    2012-01-01

    This paper proposes a method for facial expression recognition. Facial feature vectors are generated from keypoint descriptors using Speeded-Up Robust Features. Each facial feature vector is then normalized and the probability density function descriptor is generated. The distance between two probability density function descriptors is calculated using the Kullback-Leibler divergence. A mathematical equation is employed to select certain practicable probability density function descriptors for...
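
    The distance between two probability-density-function descriptors can be sketched as a plain Kullback-Leibler divergence between normalized histograms; the epsilon guard against empty bins is an implementation assumption:

    ```python
    import math

    def kl_divergence(p, q, eps=1e-12):
        """Kullback-Leibler divergence D(p || q) between two normalised
        histograms (PDF descriptors). eps guards against zero bins in q."""
        return sum(pi * math.log((pi + eps) / (qi + eps))
                   for pi, qi in zip(p, q) if pi > 0)
    ```

    Note that the divergence is asymmetric: D(p||q) generally differs from D(q||p), which matters when one descriptor serves as the reference model.
    
    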

  7. Improving Recognition and Identification of Facial Areas Involved in Non-verbal Communication by Feature Selection

    OpenAIRE

    Sheerman-Chase, T; Ong, E-J; Pugeault, N; Bowden, R.

    2013-01-01

    Meaningful Non-Verbal Communication (NVC) signals can be recognised by facial deformations based on video tracking. However, the geometric features previously used contain a significant amount of redundant or irrelevant information. A feature selection method is described for selecting a subset of features that improves performance and allows for the identification and visualisation of facial areas involved in NVC. The feature selection is based on a sequential backward elimination of features ...

  8. A Cloud Model-based Approach for Facial Expression Synthesis

    Directory of Open Access Journals (Sweden)

    Juebo Wu

    2011-04-01

    Full Text Available The process of synthesizing human facial expression features often involves fuzziness, randomness and certain correlations between them in image data. Using the advantages of the cloud model, this paper presents new approaches and applications for the comprehensive analysis of human facial expression synthesis, in order to realize rapid and effective facial expression processing in analysis and application. It gives a comprehensive analysis of the fuzziness and randomness of facial expression features and the relationship between them based on the cloud model, including a new method of facial expression synthesis under uncertainty. It proposes a method of facial expression feature synthesis by cloud model, using the three numerical characteristics (Expectation, Entropy and Hyper Entropy) as the features and concepts of facial expression, with their fuzziness, randomness and relevance. Through these three numerical characteristics, it introduces the framework of facial expression synthesis and the detailed procedures based on the cloud model. It puts forward the synthesis method and gives its concrete realization and implementation process. The facial expressions after synthesis can express different expressions for one person and can meet a variety of demands for facial expression. The experimental results show that the proposed method is feasible and effective in facial expression synthesis.
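
    The three numerical characteristics (Expectation, Entropy, Hyper Entropy) can be estimated from feature samples with a standard backward cloud generator; this sketch uses the common first-order absolute-moment estimator, which may differ from the authors' exact procedure:

    ```python
    import math
    from statistics import mean, variance

    def backward_cloud(samples):
        """Backward cloud generator: estimate the cloud model's three
        numerical characteristics (Ex, En, He) from a list of samples.
        Uses the first-order absolute central moment for En (a common
        estimator choice, assumed here)."""
        ex = mean(samples)                                       # Expectation
        en = math.sqrt(math.pi / 2) * mean(abs(x - ex) for x in samples)  # Entropy
        he = math.sqrt(max(variance(samples) - en * en, 0.0))    # Hyper Entropy
        return ex, en, he
    ```

    A forward cloud generator would then draw synthetic feature values from a normal distribution whose standard deviation is itself drawn around En with spread He, reproducing both the fuzziness and the randomness the abstract describes.
    
    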

  9. Wavelet based approach for facial expression recognition

    Directory of Open Access Journals (Sweden)

    Zaenal Abidin

    2015-03-01

    Full Text Available Facial expression recognition is one of the most active fields of research. Many facial expression recognition methods have been developed and implemented. Neural networks (NNs) have the capability to undertake such pattern recognition tasks. The key factor in the use of NNs is their characteristics: they are capable of learning and generalization, non-linear mapping, and parallel computation. Backpropagation neural networks (BPNNs) are the most commonly used approach. In this study, BPNNs were used as a classifier to categorize facial expression images into seven classes of expression: anger, disgust, fear, happiness, sadness, neutral and surprise. For feature extraction, three discrete wavelet transforms were used to decompose images, namely the Haar wavelet, Daubechies (4) wavelet and Coiflet (1) wavelet. To analyze the proposed method, a facial expression recognition system was built. The proposed method was tested on static images from the JAFFE database.
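
    A single level of the Haar decomposition used for feature extraction can be sketched directly as block averages and differences (unnormalized; library filters such as PyWavelets' differ by a constant factor):

    ```python
    def haar_dwt2(image):
        """Single-level 2D Haar wavelet transform of an even-sized
        greyscale image given as nested lists. Returns the approximation
        sub-band plus horizontal, vertical and diagonal detail sub-bands."""
        h, w = len(image) // 2, len(image[0]) // 2
        LL = [[0.0] * w for _ in range(h)]
        H = [[0.0] * w for _ in range(h)]
        V = [[0.0] * w for _ in range(h)]
        D = [[0.0] * w for _ in range(h)]
        for i in range(h):
            for j in range(w):
                a = image[2 * i][2 * j]          # 2x2 block of pixels
                b = image[2 * i][2 * j + 1]
                c = image[2 * i + 1][2 * j]
                d = image[2 * i + 1][2 * j + 1]
                LL[i][j] = (a + b + c + d) / 4   # approximation
                H[i][j] = (a - b + c - d) / 4    # horizontal detail
                V[i][j] = (a + b - c - d) / 4    # vertical detail
                D[i][j] = (a - b - c + d) / 4    # diagonal detail
        return LL, H, V, D
    ```

    The approximation sub-band (or a further decomposition of it) typically supplies the compact feature vector fed to the BPNN classifier.
    
    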

  10. Perceived Attractiveness, Facial Features, and African Self-Consciousness.

    Science.gov (United States)

    Chambers, John W., Jr.; And Others

    1994-01-01

    Investigated relationships between perceived attractiveness, facial features, and African self-consciousness (ASC) among 149 African American college students. As predicted, high ASC subjects used more positive adjectives in descriptions of strong African facial features than did medium or low ASC subjects. Results are discussed in the context of…

  11. Improved Facial-Feature Detection for AVSP via Unsupervised Clustering and Discriminant Analysis

    Directory of Open Access Journals (Sweden)

    Lucey Simon

    2003-01-01

    Full Text Available An integral part of any audio-visual speech processing (AVSP) system is the front-end visual system that detects facial-features (e.g., eyes and mouth) pertinent to the task of visual speech processing. The ability of this front-end system to not only locate, but also give a confidence measure that the facial-feature is present in the image, directly affects the ability of any subsequent post-processing task such as speech or speaker recognition. With these issues in mind, this paper presents a framework for a facial-feature detection system suitable for use in an AVSP system, but whose basic framework is useful for any application requiring frontal facial-feature detection. A novel approach for facial-feature detection is presented, based on an appearance paradigm. This approach, based on intraclass unsupervised clustering and discriminant analysis, displays improved detection performance over conventional techniques.

  12. Improved Facial-Feature Detection for AVSP via Unsupervised Clustering and Discriminant Analysis

    Science.gov (United States)

    Lucey, Simon; Sridharan, Sridha; Chandran, Vinod

    2003-12-01

    An integral part of any audio-visual speech processing (AVSP) system is the front-end visual system that detects facial-features (e.g., eyes and mouth) pertinent to the task of visual speech processing. The ability of this front-end system to not only locate, but also give a confidence measure that the facial-feature is present in the image, directly affects the ability of any subsequent post-processing task such as speech or speaker recognition. With these issues in mind, this paper presents a framework for a facial-feature detection system suitable for use in an AVSP system, but whose basic framework is useful for any application requiring frontal facial-feature detection. A novel approach for facial-feature detection is presented, based on an appearance paradigm. This approach, based on intraclass unsupervised clustering and discriminant analysis, displays improved detection performance over conventional techniques.

  13. LBP and SIFT based facial expression recognition

    Science.gov (United States)

    Sumer, Omer; Gunes, Ece O.

    2015-02-01

    This study compares the performance of local binary patterns (LBP) and the scale invariant feature transform (SIFT) with support vector machines (SVM) in automatic classification of discrete facial expressions. Facial expression recognition is a multiclass classification problem in which seven classes: happiness, anger, sadness, disgust, surprise, fear and contempt are classified. Using SIFT feature vectors and a linear SVM, 93.1% mean accuracy is acquired on the CK+ database. On the other hand, the performance of the LBP-based classifier with a linear SVM is reported on SFEW using the strictly person independent (SPI) protocol. Seven-class mean accuracy on SFEW is 59.76%. Experiments on both databases showed that LBP features can be used in a fairly descriptive way if good localization of facial points and a sound partitioning strategy are followed.
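
    The basic 8-neighbour LBP code underlying the LBP features can be sketched for one pixel; the clockwise bit ordering used here is one common convention, not necessarily the authors':

    ```python
    def lbp_code(patch):
        """Basic 8-neighbour LBP code for the centre pixel of a 3x3 patch:
        each neighbour whose value is >= the centre contributes one bit,
        taken clockwise from the top-left neighbour."""
        center = patch[1][1]
        coords = [(0, 0), (0, 1), (0, 2), (1, 2),
                  (2, 2), (2, 1), (2, 0), (1, 0)]
        code = 0
        for bit, (r, c) in enumerate(coords):
            if patch[r][c] >= center:
                code |= 1 << bit
        return code
    ```

    Histograms of these codes over a grid of face regions, concatenated, form the descriptor passed to the SVM; the partitioning of that grid is the "partitioning strategy" the abstract refers to.
    
    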

  14. Facial Expression Recognition in the Wild using Rich Deep Features

    OpenAIRE

    Karali, Abubakrelsedik; Bassiouny, Ahmad; El-Saban, Motaz

    2016-01-01

    Facial Expression Recognition is an active area of research in computer vision with a wide range of applications. Several approaches have been developed to solve this problem for different benchmark datasets. However, Facial Expression Recognition in the wild remains an area where much work is still needed to serve real-world applications. To this end, in this paper we present a novel approach towards facial expression recognition. We fuse rich deep features with domain knowledge through enco...

  15. Facial Expression Recognition Based on Feature Regions Automatic Segmentation

    Institute of Scientific and Technical Information of China (English)

    张腾飞; 闵锐; 王保云

    2011-01-01

    To improve 3D facial expression feature region segmentation, an automatic feature region segmentation method is presented. Facial feature points are detected by projection and curvature calculation and are used as the basis for automatic segmentation of facial expression feature regions. To obtain richer facial expression information, the Facial Action Coding System (FACS) coding rules are introduced to extend the extracted feature matrix, and facial expressions are recognized by combining classifiers. Experimental results on samples from a 3D facial expression database show that the method is effective, with a high recognition rate.

  16. Facial expression recognition based on CBP-TOP features

    Institute of Scientific and Technical Information of China (English)

    朱勇; 詹永照

    2011-01-01

    To effectively extract facial expression information in the space-time domain, this paper proposes a novel approach for facial expression recognition based on CBP-TOP (centralized binary patterns from three orthogonal panels) features and an SVM classifier. In this method, the original image sequences are first preprocessed, including face detection, image cropping and size normalization. Features are then extracted from image blocks using the CBP-TOP operator. Finally, six expressions are recognized with a support vector machine classifier. The experimental results show that this method can extract motion features and dynamic texture information of image sequences more effectively and raises the accuracy of expression recognition. Compared with VLBP (volume local binary pattern) features, CBP-TOP achieves both a higher recognition rate and faster recognition.

  17. Regression-based Multi-View Facial Expression Recognition

    NARCIS (Netherlands)

    Rudovic, Ognjen; Patras, Ioannis; Pantic, Maja

    2010-01-01

    We present a regression-based scheme for multi-view facial expression recognition based on 2-D geometric features. We address the problem by mapping facial points (e.g. mouth corners) from non-frontal to frontal view where further recognition of the expressions can be performed using a state-of-the-

  18. Facial features influence the categorization of female sexual orientation.

    Science.gov (United States)

    Tskhay, Konstantin O; Feriozzo, Melissa M; Rule, Nicholas O

    2013-01-01

    Social categorization is a rapid and automatic process, and people rely on various facial cues to accurately categorize each other into social groups. Recently, studies have demonstrated that people integrate different cues to arrive at accurate impressions of others' sexual orientations. The amount of perceptual information available to perceivers could affect these categorizations, however. Here, we found that, as visual information decreased from full faces to internal facial features to just pairs of eyes, so did the accuracy of judging women's sexual orientation. Even so, accuracy remained significantly greater than chance across all conditions. More importantly, participants' response bias varied significantly depending on the facial feature judged. Perceivers were significantly more likely to consider that a target may be lesbian as they viewed less of the faces. Thus, although facial features may be continuously integrated in person construal, they can differentially affect how people see each other. PMID:24494440

  19. Fusing facial feature recognition algorithm based on kernel canonical correlation analysis

    Institute of Scientific and Technical Information of China (English)

    王大伟; 陈浩; 王延杰

    2009-01-01

    A new fusing facial feature recognition algorithm based on kernel canonical correlation analysis (kernel CCA) is proposed to map image data into a feature space and improve classification accuracy. In this approach, the image data are first mapped through a kernel function, and features are then extracted along the row and column directions. The algorithm simplifies computation by avoiding decomposition of the mapped matrix and obtains more discriminative features. Experiments on the OTCBVS visible/infrared face database of Ohio State University show that the algorithm outperforms other CCA-based facial recognition methods, with a recognition accuracy rate of more than 90%. In addition, it is robust to common face recognition problems such as uneven illumination and expression variation.

  20. Face recognition using improved-LDA with facial combined feature

    Institute of Scientific and Technical Information of China (English)

    Dake Zhou; Xin Yang; Ningsong Peng

    2005-01-01

    Face recognition subjected to various conditions is a challenging task. This paper presents a combined-feature improved Fisher classifier method for face recognition. Both holistic and local facial information are used for face representation. In addition, improved linear discriminant analysis (I-LDA) is employed for good generalization capability. Experiments show that the method is not only robust to moderate changes of illumination, pose and facial expression but also superior to traditional methods such as eigenfaces and Fisherfaces.
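
    A one-dimensional Fisher criterion illustrates the discriminative principle behind (improved) LDA: maximize between-class separation relative to within-class scatter. This is a sketch of the criterion only, not the paper's I-LDA:

    ```python
    from statistics import mean, pvariance

    def fisher_ratio(class_a, class_b):
        """Fisher discriminant ratio of a single scalar feature over two
        classes: squared mean separation over total within-class variance.
        Features with a larger ratio are more useful to an LDA classifier."""
        m_a, m_b = mean(class_a), mean(class_b)
        return (m_a - m_b) ** 2 / (pvariance(class_a) + pvariance(class_b))
    ```

    Full LDA generalizes this ratio to projections of multivariate features, choosing the projection that maximizes it; improved variants mainly address the small-sample singularity of the within-class scatter matrix.
    
    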

  1. Concealing the Level-3 features of Fingerprint in a Facial Image

    Directory of Open Access Journals (Sweden)

    R. Seshadri

    2010-11-01

    Full Text Available Biometrics identifies an individual based on their physical, chemical and behavioral characteristics. Biometrics is increasingly being used for authentication and protection purposes, and this has generated considerable interest from many parts of the information technology community. In this paper we propose facial image watermarking methods that can embed fingerprint level-3 feature information into host facial images. This scheme has the advantage that, in addition to facial matching, the fingerprint level-3 features recovered during decoding can be used to establish authentication. The proposed system conceals vital identifying information about a human being while protecting it from attackers.
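
    The abstract does not specify the watermarking scheme; purely as an illustration, a least-significant-bit embedding of feature bits into host pixels (an assumed technique, not the authors') might look like:

    ```python
    def embed_bits_lsb(pixels, bits):
        """Embed a bit list into the least-significant bits of a flat
        list of 8-bit pixel values, changing each pixel by at most 1."""
        out = list(pixels)
        for i, bit in enumerate(bits):
            out[i] = (out[i] & ~1) | bit  # clear LSB, then set it to the bit
        return out

    def extract_bits_lsb(pixels, n):
        """Recover the first n embedded bits from the watermarked pixels."""
        return [p & 1 for p in pixels[:n]]
    ```

    Here the bit list would be a serialized encoding of the level-3 fingerprint features; in practice a robust scheme (e.g. transform-domain embedding) would be preferred over plain LSB.
    
    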

  2. Facial Affect Recognition Using Regularized Discriminant Analysis-Based Algorithms

    OpenAIRE

    Lee Chien-Cheng; Huang Shin-Sheng; Shih Cheng-Yuan

    2010-01-01

    This paper presents a novel and effective method for facial expression recognition including happiness, disgust, fear, anger, sadness, surprise, and neutral state. The proposed method utilizes a regularized discriminant analysis-based boosting algorithm (RDAB) with effective Gabor features to recognize the facial expressions. Entropy criterion is applied to select the effective Gabor feature which is a subset of informative and nonredundant Gabor features. The proposed RDAB algorithm uses RD...

  3. Which 3D Geometric Facial Features Give Up Your Identity ?

    OpenAIRE

    Ballihi, Lahoucine; Boulbaba, Ben Amor; Daoudi, Mohamed; Srivastava, Anuj; Aboutajdine, Driss

    2012-01-01

    International audience The 3D face recognition literature has many papers that represent facial shapes as collections of curves of different kinds (level-curves, iso-level curves, radial curves, profiles, geodesic polarization, iso-depth lines, iso-stripes, etc.). In contrast with holistic approaches, which match faces based on whole surfaces, curve-based parametrization allows local analysis of facial shapes. This, in turn, facilitates handling of pose variations (pr...

  4. Geometric Feature Based Face-Sketch Recognition

    OpenAIRE

    Pramanik, Sourav; Bhattacharjee, Debotosh

    2013-01-01

    This paper presents a novel facial sketch image or face-sketch recognition approach based on facial feature extraction. To recognize a face-sketch, we have concentrated on a set of geometric face features like eyes, nose, eyebrows, lips, etc. and their length and width ratios, because photos and sketches are difficult to match directly as they belong to two different modalities. In this system, first the facial features/components from training images are extracted, then ratios of length, width, a...

  5. Detection of Facial Features in Scale-Space

    OpenAIRE

    P. Hosten; M. Asbach

    2007-01-01

    This paper presents a new approach to the detection of facial features. A scale adapted Harris Corner detector is used to find interest points in scale-space. These points are described by the SIFT descriptor. Thus invariance with respect to image scale, rotation and illumination is obtained. Applying a Karhunen-Loeve transform reduces the dimensionality of the feature space. In the training process these features are clustered by the k-means algorithm, followed by a cluster analysis to find ...
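
The clustering stage of this pipeline (interest points → SIFT descriptors → k-means) can be sketched with plain Lloyd iterations. This is an illustrative sketch only: the toy 2-D "descriptors" and k = 2 are assumptions standing in for real 128-D SIFT vectors and a learned cluster count.

```python
import numpy as np

# Toy descriptor sets: two well-separated blobs stand in for descriptor
# clusters that would come from SIFT features of facial regions.
rng = np.random.default_rng(4)
desc = np.vstack([rng.normal(0.0, 0.3, size=(15, 2)),   # cluster A
                  rng.normal(3.0, 0.3, size=(15, 2))])  # cluster B

k = 2
centers = desc[[0, 15]].copy()           # one seed point from each blob
for _ in range(10):                      # Lloyd iterations
    # squared distance of every descriptor to every center
    d2 = ((desc[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    labels = d2.argmin(axis=1)           # assign to nearest center
    centers = np.array([desc[labels == j].mean(axis=0) for j in range(k)])
```

A follow-up cluster analysis, as in the paper, would then inspect which clusters correspond to facial features.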

  6. FACIAL EXPRESSION RECOGNITION BASED ON EDGE DETECTION

    OpenAIRE

    Chen, Xiaoming; Cheng, Wushan

    2015-01-01

    Over the last two decades, advances in computer vision and pattern recognition have opened the door to automatic facial expression recognition systems[1]. This paper uses the Canny edge detection method for facial expression recognition. The image is first transformed to another color space, and the human face is identified and located. Next, the edges of the eyes and mouth are extracted as features. Finally, we judge the facial expression after comparing wi...
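
The gradient step that Canny edge detection builds on can be sketched with Sobel derivatives; the synthetic image and the simple magnitude threshold below are assumptions for illustration (full Canny adds Gaussian smoothing, non-maximum suppression and hysteresis).

```python
import numpy as np

# A bright square on a dark background stands in for a face crop.
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0

kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)  # Sobel x
ky = kx.T                                                         # Sobel y

def conv2_valid(a, k):
    """Small dependency-free 'valid' 2-D convolution."""
    kf = k[::-1, ::-1]                   # flip kernel for true convolution
    h, w = k.shape
    out = np.zeros((a.shape[0] - h + 1, a.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(a[i:i + h, j:j + w] * kf)
    return out

gx = conv2_valid(img, kx)
gy = conv2_valid(img, ky)
mag = np.hypot(gx, gy)                   # gradient (edge-strength) map
edges = mag > 0.5 * mag.max()            # crude stand-in for hysteresis
```

The flat interior of the square produces zero gradient; only its boundary survives the threshold, which is the behavior the eye- and mouth-edge extraction relies on.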

  7. Extracted facial feature of racial closely related faces

    Science.gov (United States)

    Liewchavalit, Chalothorn; Akiba, Masakazu; Kanno, Tsuneo; Nagao, Tomoharu

    2010-02-01

    Human faces contain a great deal of demographic information such as identity, gender, age, race and emotion. People perceive these pieces of information and use them as important clues in social interaction. Race perception is considered one of the most delicate and sensitive parts of face perception. There is much research on image-based race recognition, but most of it focuses on major race groups such as Caucasoid, Negroid and Mongoloid. This paper focuses on how people classify races within a racially closely related group. As a sample of such a group, we chose Japanese and Thai faces to represent the difference between Northern and Southern Mongoloid. Three psychological experiments were performed to study the strategies of face perception in race classification. The results suggest that race perception is an ability that can be learned. Eyes and eyebrows attract the most attention, and the eyes are a significant factor in race perception. Principal Component Analysis (PCA) was performed to extract facial features of the sample race groups. Extracted race features of texture and shape were used to synthesize faces. The results suggest that racial features rely on detailed texture rather than on shape. This work is fundamental research on race perception, essential for the establishment of a human-like race recognition system.
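
The PCA step used here to extract texture/shape features can be sketched with an SVD. The random matrix below is an assumption standing in for flattened, aligned face images; the component count is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
faces = rng.normal(size=(20, 64))        # 20 hypothetical 8x8 face crops

mean_face = faces.mean(axis=0)
centered = faces - mean_face
# Right singular vectors of the centered data are the principal axes
# ("eigenfaces" in the texture case).
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
components = Vt[:5]                      # keep the 5 strongest components
features = centered @ components.T       # faces projected to 5-D features
```

Synthesizing a face, as in the study, amounts to `mean_face + weights @ components` for some chosen weight vector.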

  8. Invariant facial feature extraction using biologically inspired strategies

    Science.gov (United States)

    Du, Xing; Gong, Weiguo

    2011-12-01

    In this paper, a feature extraction model for face recognition is proposed. This model is constructed by implementing three biologically inspired strategies, namely a hierarchical network, a learning mechanism of the V1 simple cells, and a data-driven attention mechanism. The hierarchical network emulates the functions of the V1 cortex to progressively extract facial features invariant to illumination, expression, slight pose change, and variations caused by local transformation of facial parts. In the network, filters that account for the local structures of the face are derived through the learning mechanism and used for the invariant feature extraction. The attention mechanism computes a saliency map for the face, and enhances the salient regions of the invariant features to further improve the performance. Experiments on the FERET and AR face databases show that the proposed model boosts the recognition accuracy effectively.

  9. Research of facial feature extraction technology based on Trace transform

    Institute of Scientific and Technical Information of China (English)

    王景中; 王国庆; 伍淳华; 王龙

    2012-01-01

    To improve the stability and discriminability of facial features, this paper proposes a facial feature extraction algorithm based on the Trace transform, establishing a new facial representation. Several different functionals are combined to process the preprocessed face image, producing a Trace feature vector for that image. Experimental results on the ORL face database show that the extracted features remain stable for the same person across different expressions and lighting conditions, while retaining a high ability to discriminate between different persons' face images, so the algorithm is feasible in practical applications of face recognition.

  10. Facial features and social attractiveness: preferences of Bosnian female students

    Directory of Open Access Journals (Sweden)

    Nina Bosankić

    2015-09-01

    Full Text Available This research aimed at testing the multiple fitness hypothesis of attraction, investigating the relationship between male facial characteristics and female students' reported readiness to engage in various social relations. A total of 27 male photos were evaluated on five dimensions on a seven-point Likert-type scale ranging from -3 to 3 by a convenience sample of 90 female students of the University of Sarajevo. The dimensions were: desirable to date – not desirable to date; desirable to marry – not desirable to marry; desirable to have sex with – not desirable to have sex with; desirable to be a friend – not desirable to be a friend; attractive – not attractive. Facial metric measurements of features such as the distance between the eyes and the width and height of the smile were performed using AutoCAD. The results indicate that only smile width positively correlates with the desirability of establishing friendship, whilst none of the other characteristics correlates with any of the other dimensions. This leads to the conclusion that the motivation to establish various social relations cannot be reduced to mere physical appearance, mainly facial features; many other variables remain to be investigated.

  11. Extraction of Facial Feature Points Using Cumulative Histogram

    Directory of Open Access Journals (Sweden)

    Sushil Kumar Paul

    2012-01-01

    Full Text Available This paper proposes a novel adaptive algorithm to automatically extract facial feature points such as eyebrow corners, eye corners, nostrils, nose tip, and mouth corners in frontal-view faces, based on a cumulative histogram approach with varying threshold values. First, the method adopts the Viola-Jones face detector to locate and crop the face region in an image. Following the structure of the human face, six relevant regions are then cropped from the face image: right eyebrow, left eyebrow, right eye, left eye, nose, and mouth. The histogram of each cropped region is computed, and its cumulative histogram is thresholded at varying values to create a new filtered image in an adaptive way. The connected component of the interest area in each filtered image indicates the respective feature region. A simple linear search algorithm for the eyebrow, eye and mouth images and a contour algorithm for the nose image are applied to extract the desired corner points automatically. The method was tested on the large BioID frontal face database under different illuminations, expressions and lighting conditions, and achieved an average success rate of 95.27%.
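
The cumulative-histogram thresholding idea can be sketched in a few lines; the random crop and the 15% fraction below are assumptions standing in for a real eye/mouth region and the paper's adaptively varied thresholds.

```python
import numpy as np

rng = np.random.default_rng(1)
region = rng.integers(0, 256, size=(30, 40))   # stand-in for an eye crop

hist = np.bincount(region.ravel(), minlength=256)
cum = np.cumsum(hist) / region.size            # cumulative histogram in [0, 1]
fraction = 0.15                                # keep the darkest ~15% of pixels
threshold = int(np.searchsorted(cum, fraction))
binary = region <= threshold                   # candidate feature pixels
```

Repeating this for several fractions yields the series of filtered images from which the connected components are taken.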

  12. Extraction of Facial Feature Points Using Cumulative Histogram

    CERN Document Server

    Paul, Sushil Kumar; Bouakaz, Saida

    2012-01-01

    This paper proposes a novel adaptive algorithm to automatically extract facial feature points such as eyebrow corners, eye corners, nostrils, nose tip, and mouth corners in frontal-view faces, based on a cumulative histogram approach with varying threshold values. First, the method adopts the Viola-Jones face detector to locate and crop the face region in an image. Following the structure of the human face, six relevant regions are then cropped from the face image: right eyebrow, left eyebrow, right eye, left eye, nose, and mouth. The histogram of each cropped region is computed, and its cumulative histogram is thresholded at varying values to create a new filtered image in an adaptive way. The connected component of the interest area in each filtered image indicates the respective feature region. A simple linear search algorithm for the eyebrow, eye and mouth images and a contour algorithm for nos...

  13. Intelligent Video Facial Recognition Technology Based on Feature Fusion for Rail Transportation

    Institute of Scientific and Technical Information of China (English)

    沈海燕; 史宏; 冯云梅

    2011-01-01

    Considering the passenger-flow characteristics and application requirements of rail transportation, this paper studies global feature extraction, upper-face feature extraction and T-zone feature extraction, after describing in detail the image pre-processing, face-space construction and eigenface recognition method. Recognition based on global features identifies a person from the overall facial shape and is easily affected by expressions, poses and occlusions. Recognition based on the upper-face and T-zone features proves superior to the global approach in handling expressions and occlusions. The three kinds of features are therefore fused with respective weights, and a facial recognition method based on multi-feature fusion is proposed. Pilot application of the proposed method at some stations on the Beijing-Shanghai high-speed line shows that it effectively improves facial recognition accuracy.

  14. Facial Feature Extraction Using Frequency Map Series in PCNN

    Directory of Open Access Journals (Sweden)

    Rencan Nie

    2016-01-01

    Full Text Available Pulse coupled neural network (PCNN) has been widely used in image processing. The 3D binary map series (BMS) generated by PCNN effectively describes image feature information such as edges and regional distribution, so BMS can be treated as the basis for extracting a 1D oscillation time series (OTS) for an image. However, traditional methods using BMS considered neither the correlation of the binary sequences in BMS nor the spatial structure of each map. By further processing BMS, a novel facial feature extraction method is proposed. Firstly, considering the correlation among maps in BMS, a method is put forward to transform BMS into a frequency map series (FMS); this lessens the influence of non-continuous feature regions in binary images on OTS-BMS. Then, by computing the 2D entropy of every map in FMS, the 3D FMS is transformed into a 1D OTS (OTS-FMS), which has good geometric invariance for facial images and contains the spatial structure information of the image. Finally, the standard Euclidean distance is used to measure distances between OTS-FMS. Experimental results verify the effectiveness of OTS-FMS in facial recognition, showing better recognition performance than other feature extraction methods.
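
The entropy step that condenses each frequency map into one number can be sketched with a plain Shannon entropy over gray-level counts; this is a stand-in, since the paper's exact 2D-entropy definition (which also incorporates neighborhood structure) is not reproduced here.

```python
import numpy as np

def shannon_entropy(m):
    """Shannon entropy (bits) of the level distribution of an integer map."""
    counts = np.bincount(m.ravel())
    p = counts[counts > 0] / m.size
    return float(-np.sum(p * np.log2(p)))

uniform_map = np.arange(16).reshape(4, 4)    # 16 distinct levels once each
flat_map = np.zeros((4, 4), dtype=int)       # a single level everywhere
```

A map with evenly spread levels gives maximal entropy (log2 of the level count), a constant map gives zero, so the resulting 1D series reflects how structured each frequency map is.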

  15. Automatic Facial Expression Recognition Based on Hybrid Approach

    Directory of Open Access Journals (Sweden)

    Ali K. K. Bermani

    2012-12-01

    Full Text Available The topic of automatic recognition of facial expressions attracted many researchers late in the last century and has drawn greatly increased interest in the past few years. Several techniques have emerged to improve recognition efficiency by addressing problems in face detection and feature extraction for expression recognition. This paper proposes an automatic system for facial expression recognition with a hybrid feature extraction phase that combines holistic and analytic approaches, extracting 307 facial expression features (19 geometric features, 288 appearance features). Expression recognition is performed using a radial basis function (RBF) artificial neural network to recognize the six basic emotions (anger, fear, disgust, happiness, surprise, sadness) in addition to the neutral state. The system achieved a recognition rate of 97.08% on a person-dependent database and 93.98% on a person-independent one.
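
The RBF classifier family used here can be sketched as Gaussian activations over fixed centers with output weights fit by least squares. The toy 2-D data, center choice and width are assumptions; the paper trains on its 307-dimensional hybrid features.

```python
import numpy as np

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(-2.0, 0.5, size=(20, 2)),   # class 0
               rng.normal(+2.0, 0.5, size=(20, 2))])  # class 1
y = np.array([0] * 20 + [1] * 20, dtype=float)

centers = X[::5]                         # 8 centers picked from the data
sigma = 1.0

def rbf_features(Z):
    """Gaussian RBF activations of each sample against each center."""
    d2 = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

Phi = rbf_features(X)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # output-layer weights
pred = (rbf_features(X) @ w > 0.5).astype(float)
accuracy = (pred == y).mean()
```

A real system would use one output unit per emotion class rather than this binary toy.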

  16. A Novel Feature Extraction Technique for Facial Expression Recognition

    OpenAIRE

    Mohammad Shahidul Islam; Surapong Auwatanamongkol

    2013-01-01

    This paper presents a new technique to extract the light invariant local feature for facial expression recognition. It is not only robust to monotonic gray-scale changes caused by light variations but also very simple to perform which makes it possible for analyzing images in challenging real-time settings. The local feature for a pixel is computed by finding the direction of the neighboring of the pixel with the particular rank in term of its gray scale value among all the neighboring pixels...

  17. Facial expression recognition based on differential texture features

    Institute of Scientific and Technical Information of China (English)

    夏海英; 徐鲁辉

    2015-01-01

    Considering the complex-background problem in automatic facial expression recognition, this paper proposes a new method, facial expression recognition based on differential texture features, which to some extent masks the differences between individual faces while retaining expression information. First, a standard reference face model is selected, with 55 landmark points reasonably distributed over the eyes, nose, mouth and the expression-rich outer contour. Delaunay triangulation is then used to obtain the relative position information of these points. For a facial expression image, the 55 landmarks are first located with an Active Shape Model (ASM); using the relative position information from the triangulation, texture mapping warps the image onto the standard reference model, so that neutral images (faces without expression information) and non-neutral images (the six basic expressions) are mapped into frames of the same size. Finally, their difference image is taken as the expression feature, called the DT (differential texture) feature. Experiments on mixed samples from the JAFFE and CK facial expression databases show that the proposed method achieves good recognition rates for the six basic expressions, outperforms the traditional Gabor and LBP feature methods, and can be extended to expression recognition in dynamic images.
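
Once both images are warped to the same reference frame, the DT feature reduces to a pixel-wise difference; the tiny arrays below are assumptions standing in for the warped neutral and expressive faces.

```python
import numpy as np

neutral = np.full((4, 4), 100.0)          # warped neutral face (toy)
smiling = neutral.copy()
smiling[2:, 1:3] += 40.0                  # "mouth" region changes intensity

dt_feature = smiling - neutral            # differential texture feature
```

Identity-specific texture cancels in the subtraction; only expression-driven changes remain, which is the point of the DT representation.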

  18. FACIAL EXPRESSION CLASSIFICATION WITH HAAR FEATURES, GEOMETRIC FEATURES AND CUBIC BÉZIER CURVES

    OpenAIRE

    Kandemir, Rembiye; Özmen, Gonca

    2013-01-01

    Facial expressions are nonverbal communication channels for interacting with other people. Computer recognition of human emotions based on facial expression is an interesting and difficult problem. In this study, images were analyzed based on facial expressions in an attempt to identify different emotions, such as smile, surprise, sadness, fear, disgust, anger and neutral. In practice, the Viola-Jones face detector, which uses the AdaBoost algorithm, was used to find the location of the face. Haar filt...
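
Haar-like features of the Viola-Jones kind are differences of rectangle sums, each computable in four lookups from an integral image; the small synthetic image below is an assumption for illustration.

```python
import numpy as np

img = np.arange(36, dtype=float).reshape(6, 6)

# Integral image padded with a zero top row/left column for easy indexing:
# ii[r, c] = sum of img[:r, :c].
ii = np.zeros((7, 7))
ii[1:, 1:] = img.cumsum(0).cumsum(1)

def rect_sum(r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] via four integral-image lookups."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

# Two-rectangle (left minus right) Haar-like feature over img[1:5, 1:5]:
haar = rect_sum(1, 1, 5, 3) - rect_sum(1, 3, 5, 5)
```

AdaBoost then selects the few such features whose thresholded responses best separate faces (or expressions) from non-faces.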

  19. Prediction of Mortality Based on Facial Characteristics.

    Science.gov (United States)

    Delorme, Arnaud; Pierce, Alan; Michel, Leena; Radin, Dean

    2016-01-01

    Recent studies have shown that characteristics of the face contain a wealth of information about health, age and chronic clinical conditions. Such studies involve objective measurement of facial features correlated with historical health information. But some individuals also claim to be adept at gauging mortality based on a glance at a person's photograph. To test this claim, we invited 12 such individuals to see if they could determine if a person was alive or dead based solely on a brief examination of facial photographs. All photos used in the experiment were transformed into a uniform gray scale and then counterbalanced across eight categories: gender, age, gaze direction, glasses, head position, smile, hair color, and image resolution. Participants examined 404 photographs displayed on a computer monitor, one photo at a time, each shown for a maximum of 8 s. Half of the individuals in the photos were deceased, and half were alive at the time the experiment was conducted. Participants were asked to press a button if they thought the person in a photo was living or deceased. Overall mean accuracy on this task was 53.8%, where 50% was expected by chance (p ...). ... clairvoyance warrants further investigation. PMID:27242466
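
The record's own significance figure is lost to truncation, but the chance comparison is simple arithmetic: 53.8% of 404 guesses is about 217 hits, and an exact two-tailed binomial test against p = 0.5 shows how such an accuracy is assessed. This is a back-of-envelope illustration, not the paper's reported statistic (the authors' analysis may differ, e.g. per-participant tests).

```python
from math import comb

n = 404                       # photographs judged
k = round(0.538 * n)          # ~217 correct responses implied by 53.8%

def binom_two_tailed(n, k, p=0.5):
    """Exact two-tailed binomial p-value: P(|X - np| >= |k - np|)."""
    mean = n * p
    dev = abs(k - mean)
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(n + 1)
               if abs(i - mean) >= dev)

pval = binom_two_tailed(n, k)
```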

  20. Facial Expression Feature Extraction Based on the J-divergence Entropy of IMF

    Institute of Scientific and Technical Information of China (English)

    李茹; 张建伟

    2016-01-01

    Facial expression recognition uses computer technology, image processing and machine vision to perform feature extraction, modeling and expression classification on facial expression images or image sequences, so that a computer program can infer a person's psychological state from facial expression information. It is mainly divided into three stages: face detection, expression feature extraction and expression classification. Feature extraction and selection is the key step, and its quality directly affects the classification results. This paper proposes a facial expression feature extraction method based on the energy entropy of IMF analytic signals, applying the Hilbert-Huang transform to facial expression recognition. First, the Radon transform is applied to the expression image to obtain a facial expression signal. Empirical mode decomposition (EMD) of this signal yields a series of intrinsic mode functions (IMFs), and the Hilbert transform of each IMF gives its analytic signal, from which the instantaneous amplitude and instantaneous frequency are computed. The IMFs and the amplitudes of their analytic signals are taken as candidate feature vectors, and their energy discriminant entropy is computed; features with small within-class and large between-class discriminant entropy are selected as feature vectors for expression classification. PCA is used to reduce the dimensionality of the selected features, and a support vector machine (SVM) classifies the two expression classes.
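
The analytic-signal step can be sketched with an FFT-based Hilbert transform: zeroing negative frequencies (and doubling positive ones) yields the analytic signal, whose magnitude is the instantaneous amplitude. EMD itself is omitted here; a pure tone stands in for one IMF.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT (same construction as scipy.signal.hilbert)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

t = np.linspace(0.0, 1.0, 256, endpoint=False)
imf = np.cos(2 * np.pi * 8 * t)            # a pure tone as a toy "IMF"
amp = np.abs(analytic_signal(imf))         # instantaneous amplitude
```

For a pure tone the instantaneous amplitude is constant; the energy entropy of `amp**2` would then feed the feature selection described above.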

  1. Men's preference for women's facial features: testing homogamy and the paternity uncertainty hypothesis.

    Directory of Open Access Journals (Sweden)

    Jeanne Bovet

    Full Text Available Male mate choice might be based on both absolute and relative strategies. Cues of female attractiveness are thus likely to reflect both fitness and reproductive potential, as well as compatibility with particular male phenotypes. In humans, absolute cues of fertility and indices of favorable developmental stability are generally associated with increased women's attractiveness. However, why men exhibit variable preferences remains less studied. Male mate choice might be influenced by uncertainty of paternity, a selective factor in species where the survival of the offspring depends on postnatal paternal care. For instance, in humans, a man might prefer a woman with recessive traits, thereby increasing the probability that his paternal traits will be visible in the child and ensuring paternity. Alternatively, attractiveness is hypothesized to be driven by self-resembling features (homogamy), which would reduce outbreeding depression. These hypotheses have been simultaneously evaluated for various facial traits using both real and artificial facial stimuli. The predicted preferences were then compared to realized mate choices using facial pictures from couples with at least 1 child. No evidence was found to support the paternity uncertainty hypothesis, as recessive features were not preferred by male raters. Conversely, preferences for self-resembling mates were found for several facial traits (hair and eye color, chin dimple, and thickness of lips and eyebrows). Moreover, realized homogamy for facial traits was also found in a sample of long-term mates. The advantages of homogamy in evolutionary terms are discussed.

  2. Facial Affect Recognition Using Regularized Discriminant Analysis-Based Algorithms

    Directory of Open Access Journals (Sweden)

    Cheng-Yuan Shih

    2010-01-01

    Full Text Available This paper presents a novel and effective method for facial expression recognition including happiness, disgust, fear, anger, sadness, surprise, and neutral state. The proposed method utilizes a regularized discriminant analysis-based boosting algorithm (RDAB with effective Gabor features to recognize the facial expressions. Entropy criterion is applied to select the effective Gabor feature which is a subset of informative and nonredundant Gabor features. The proposed RDAB algorithm uses RDA as a learner in the boosting algorithm. The RDA combines strengths of linear discriminant analysis (LDA and quadratic discriminant analysis (QDA. It solves the small sample size and ill-posed problems suffered from QDA and LDA through a regularization technique. Additionally, this study uses the particle swarm optimization (PSO algorithm to estimate optimal parameters in RDA. Experiment results demonstrate that our approach can accurately and robustly recognize facial expressions.
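
The core of RDA can be sketched numerically: with fewer samples than dimensions the class covariance is singular (QDA is ill-posed), and shrinking it toward the pooled covariance and a scaled identity restores invertibility. The data, `lam` and `gamma` below are assumed values; the paper tunes such regularization parameters with PSO.

```python
import numpy as np

rng = np.random.default_rng(2)
X1 = rng.normal(0.0, 1.0, size=(4, 5))   # class 1: n < d, singular covariance
X2 = rng.normal(1.0, 1.0, size=(4, 5))   # class 2

S1 = np.cov(X1, rowvar=False)
S2 = np.cov(X2, rowvar=False)
pooled = 0.5 * (S1 + S2)

lam, gamma = 0.7, 0.1
# Shrink toward the pooled covariance (LDA direction), then toward the
# scaled identity, as in regularized discriminant analysis.
S_lam = (1 - lam) * S1 + lam * pooled
S_reg = (1 - gamma) * S_lam + gamma * (np.trace(S_lam) / 5) * np.eye(5)
```

`S_reg` can be inverted safely inside the quadratic discriminant, which is what lets RDA serve as a weak learner on small per-class samples.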

  3. An Improved Method of feature extraction technique for Facial Expression Recognition using Adaboost Neural Network

    OpenAIRE

    Aruna Bhadu; Dr. Vijay Kumar; Mr. Hardayal Singh Shekhawat; Rajbala Tokas

    2012-01-01

    The objective of this research is a comparative study of different feature extraction techniques for facial expression recognition, and to develop an algorithm for feature extraction using an AdaBoost classifier to reduce the generalization error and improve performance by achieving a high recognition rate. For facial feature extraction, two different techniques are followed: Discrete Cosine Transform and Wavelet Transform. Upon extraction of the facial expression information the feature vector is given t...

  4. Facial Expression Recognition Based on Feature Point Vector and Texture Deformation Energy Parameters

    Institute of Scientific and Technical Information of China (English)

    易积政; 毛峡; Ishizuka Mitsuru; 薛雨丽

    2013-01-01

    Facial expression recognition is a popular and difficult research field in human-computer interaction. In order to effectively remove the differences in expression features caused by individual differences, this paper first presents a feature point distance ratio coefficient based on feature point vectors, then introduces the concept of texture deformation energy parameters, and finally fuses the two into a new expression feature for facial expression recognition. The proposed method is tested on the Cohn-Kanade database and the BHU facial expression database, and the experimental results show that its recognition rates increase by 4.5% and 3.9%, respectively, compared with existing methods.
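
The distance-ratio idea can be illustrated in a toy form: ratios of landmark distances are invariant to uniform scaling of the face, which removes part of the individual size difference. The landmark coordinates below are made up for illustration.

```python
import numpy as np

pts = np.array([[30.0, 40.0],   # left eye corner
                [70.0, 40.0],   # right eye corner
                [40.0, 80.0],   # left mouth corner
                [60.0, 80.0]])  # right mouth corner

def dist(a, b):
    return float(np.linalg.norm(a - b))

eye_width = dist(pts[0], pts[1])
mouth_width = dist(pts[2], pts[3])
ratio = mouth_width / eye_width            # scale-invariant descriptor

scaled = pts * 2.5                         # same face in a larger image
ratio_scaled = dist(scaled[2], scaled[3]) / dist(scaled[0], scaled[1])
```

The scaled face yields the same ratio, so the descriptor tracks expression-driven geometry rather than face size.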

  5. Contribution of Facial Feature Dimensions and Velocity Parameters on Particle Inhalability

    OpenAIRE

    Anthony, T. Renée

    2010-01-01

    To examine whether the actual dimensions of human facial features are important to the development of a low-velocity inhalable particulate mass sampling criterion, this study evaluated the effect of facial feature dimensions (nose and lips) on estimates of aspiration efficiency of inhalable particles using computational fluid dynamics modeling over a range of indoor air and breathing velocities. Fluid flow and particle transport around four humanoid forms with different facial feature dimensi...

  6. Implicit binding of facial features during change blindness.

    Directory of Open Access Journals (Sweden)

    Pessi Lyyra

    Full Text Available Change blindness refers to the inability to detect visual changes if introduced together with an eye-movement, blink, flash of light, or with distracting stimuli. Evidence of implicit detection of changed visual features during change blindness has been reported in a number of studies using both behavioral and neurophysiological measurements. However, it is not known whether implicit detection occurs only at the level of single features or whether complex organizations of features can be implicitly detected as well. We tested this in adult humans using intact and scrambled versions of schematic faces as stimuli in a change blindness paradigm while recording event-related potentials (ERPs. An enlargement of the face-sensitive N170 ERP component was observed at the right temporal electrode site to changes from scrambled to intact faces, even if the participants were not consciously able to report such changes (change blindness. Similarly, the disintegration of an intact face to scrambled features resulted in attenuated N170 responses during change blindness. Other ERP deflections were modulated by changes, but unlike the N170 component, they were indifferent to the direction of the change. The bidirectional modulation of the N170 component during change blindness suggests that implicit change detection can also occur at the level of complex features in the case of facial stimuli.

  7. Implicit binding of facial features during change blindness.

    Science.gov (United States)

    Lyyra, Pessi; Mäkelä, Hanna; Hietanen, Jari K; Astikainen, Piia

    2014-01-01

    Change blindness refers to the inability to detect visual changes if introduced together with an eye-movement, blink, flash of light, or with distracting stimuli. Evidence of implicit detection of changed visual features during change blindness has been reported in a number of studies using both behavioral and neurophysiological measurements. However, it is not known whether implicit detection occurs only at the level of single features or whether complex organizations of features can be implicitly detected as well. We tested this in adult humans using intact and scrambled versions of schematic faces as stimuli in a change blindness paradigm while recording event-related potentials (ERPs). An enlargement of the face-sensitive N170 ERP component was observed at the right temporal electrode site to changes from scrambled to intact faces, even if the participants were not consciously able to report such changes (change blindness). Similarly, the disintegration of an intact face to scrambled features resulted in attenuated N170 responses during change blindness. Other ERP deflections were modulated by changes, but unlike the N170 component, they were indifferent to the direction of the change. The bidirectional modulation of the N170 component during change blindness suggests that implicit change detection can also occur at the level of complex features in the case of facial stimuli. PMID:24498165

  8. Alexithymic features and automatic amygdala reactivity to facial emotion.

    Science.gov (United States)

    Kugel, Harald; Eichmann, Mischa; Dannlowski, Udo; Ohrmann, Patricia; Bauer, Jochen; Arolt, Volker; Heindel, Walter; Suslow, Thomas

    2008-04-11

    Alexithymic individuals have difficulties in identifying and verbalizing their emotions. The amygdala is known to play a central role in processing emotion stimuli and in generating emotional experience. In the present study automatic amygdala reactivity to facial emotion was investigated as a function of alexithymia (as assessed by the 20-Item Toronto Alexithymia Scale). The Beck-Depression Inventory (BDI) and the State-Trait-Anxiety Inventory (STAI) were administered to measure participants' depressivity and trait anxiety. During 3T fMRI scanning, pictures of faces bearing sad, happy, and neutral expressions masked by neutral faces were presented to 21 healthy volunteers. The amygdala was selected as the region of interest (ROI) and voxel values of the ROI were extracted, summarized by mean and tested among the different conditions. A detection task was applied to assess participants' awareness of the masked emotional faces shown in the fMRI experiment. Masked sad and happy facial emotions led to greater right amygdala activation than masked neutral faces. The alexithymia feature difficulties identifying feelings was negatively correlated with the neural response of the right amygdala to masked sad faces, even when controlling for depressivity and anxiety. Reduced automatic amygdala responsivity may contribute to problems in identifying one's emotions in everyday life. Low spontaneous reactivity of the amygdala to sad faces could implicate less engagement in the encoding of negative emotional stimuli. PMID:18314269

  9. Facial attractiveness: evolutionary based research

    OpenAIRE

    Little, Anthony C.; Jones, Benedict C.; DeBruine, Lisa M

    2011-01-01

    Face preferences affect a diverse range of critical social outcomes, from mate choices and decisions about platonic relationships to hiring decisions and decisions about social exchange. Firstly, we review the facial characteristics that influence attractiveness judgements of faces (e.g. symmetry, sexually dimorphic shape cues, averageness, skin colour/texture and cues to personality) and then review several important sources of individual differences in face preferences (e.g. hormone levels ...

  10. An Improved AAM Method for Extracting Human Facial Features

    Directory of Open Access Journals (Sweden)

    Tao Zhou

    2012-01-01

    Full Text Available Active appearance model (AAM) is a statistically parametrical model, widely used to extract human facial features and for recognition. However, the intensity values used in the original AAM cannot provide enough image texture information, which can lead to larger errors or fitting failures. In order to overcome these defects and improve the fitting performance of the AAM, an improved texture representation is proposed in this paper. Firstly, a translation-invariant wavelet transform is performed on the face images, and the image structure is then represented using a measure obtained by fusing the low-frequency coefficients with edge intensity. Experimental results show that the improved algorithm increases the accuracy of AAM fitting and captures more information about edge and texture structures.

  11. Microanatomy and Histological Features of Central Myelin in the Root Exit Zone of Facial Nerve

    OpenAIRE

    Yee, Gi-Taek; Yoo, Chan-Jong; Han, Seong-Rok; Choi, Chan-Young

    2014-01-01

    Objective The aim of this study was to evaluate the microanatomy and histological features of the central myelin in the root exit zone of the facial nerve. Methods Forty facial nerves with brain stem were obtained from 20 formalin-fixed cadavers. Seventeen facial nerves were damaged during preparation, so 23 root entry zones (REZ) of facial nerves could be examined. The length of the medial REZ, from the detachment point of the facial nerve at the brain stem to the transitional area, and the thickness of the glial mem...

  12. Facial attractiveness: evolutionary based research.

    Science.gov (United States)

    Little, Anthony C; Jones, Benedict C; DeBruine, Lisa M

    2011-06-12

    Face preferences affect a diverse range of critical social outcomes, from mate choices and decisions about platonic relationships to hiring decisions and decisions about social exchange. Firstly, we review the facial characteristics that influence attractiveness judgements of faces (e.g. symmetry, sexually dimorphic shape cues, averageness, skin colour/texture and cues to personality) and then review several important sources of individual differences in face preferences (e.g. hormone levels and fertility, own attractiveness and personality, visual experience, familiarity and imprinting, social learning). The research relating to these issues highlights flexible, sophisticated systems that support and promote adaptive responses to faces that appear to function to maximize the benefits of both our mate choices and more general decisions about other types of social partners. PMID:21536551

  13. Interpretation of appearance: the effect of facial features on first impressions and personality

    DEFF Research Database (Denmark)

    Wolffhechel, Karin Marie Brandt; Fagertun, Jens; Jacobsen, Ulrik Plesner; Majewski, Wiktor; Hemmingsen, Astrid Sofie; Larsen, Catrine Lohmann; Lorentzen, Sofie Katrine; Jarmer, Hanne Østergaard

    2014-01-01

    Appearance is known to influence social interactions, which in turn could potentially influence personality development. In this study we focus on discovering the relationship between self-reported personality traits, first impressions and facial characteristics. The results reveal that several personality traits can be read above chance from a face, and that facial features influence first impressions. Despite the former, our prediction model fails to reliably infer personality traits from either facial features or first impressions. First impressions, however, could be inferred more reliably from facial features. We have generated artificial, extreme faces visualising the characteristics having an effect on first impressions for several traits. Conclusively, we find a relationship between first impressions, some personality traits and facial features and consolidate that people on average assess...

  14. Robust Facial Feature Tracking Using Shape-Constrained Multi-Resolution Selected Linear Predictors.

    OpenAIRE

    Ong, EJ; Bowden, R.

    2011-01-01

    This paper proposes a learnt data-driven approach for accurate, real-time tracking of facial features using only intensity information, a non-trivial task since the face is a highly deformable object with large textural variations and motion in certain regions. The framework proposed here largely avoids the need for a priori design of feature trackers by automatically identifying the optimal visual support required for tracking a single facial feature point. This is essentially equivalen...

  15. Facial biometrics based on 2D vector geometry

    Science.gov (United States)

    Malek, Obaidul; Venetsanopoulos, Anastasios; Androutsos, Dimitrios

    2014-05-01

    The main challenge of facial biometrics is its robustness and ability to adapt to changes in position, orientation, facial expression, and illumination effects. This research addresses the predominant deficiencies in this regard and systematically investigates a facial authentication system in the Euclidean domain. In the proposed method, Euclidean geometry in 2D vector space is constructed for feature extraction and authentication. In particular, each assigned point of the candidates' biometric features is considered to be a 2D geometrical coordinate in the Euclidean vector space. Algebraic shapes of the extracted candidate features are also computed and compared. The proposed authentication method is tested on images from the public "Put Face Database". The performance of the proposed method is evaluated based on the Correct Recognition (CRR), False Acceptance (FAR), and False Rejection (FRR) rates. The theoretical foundation of the proposed method along with the experimental results is also presented in this paper. The experimental results demonstrate the effectiveness of the proposed method.
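
    The evaluation rates named in this record (FAR and FRR) are straightforward to compute once genuine and impostor similarity scores are available. The sketch below uses entirely hypothetical scores and a hypothetical decision threshold; it is an illustration of the metrics, not the paper's evaluation protocol.

```python
# Sketch of FAR/FRR in a verification setting: a comparison pair is
# accepted when its similarity score meets or exceeds a threshold.

def far_frr(genuine_scores, impostor_scores, threshold):
    """False Acceptance Rate and False Rejection Rate at a threshold."""
    fa = sum(1 for s in impostor_scores if s >= threshold)  # impostors accepted
    fr = sum(1 for s in genuine_scores if s < threshold)    # genuines rejected
    return fa / len(impostor_scores), fr / len(genuine_scores)

genuine = [0.91, 0.85, 0.78, 0.60, 0.95]   # hypothetical same-person scores
impostor = [0.30, 0.55, 0.72, 0.20, 0.40]  # hypothetical different-person scores

far, frr = far_frr(genuine, impostor, threshold=0.7)
print(far, frr)  # 0.2 0.2
```

    Sweeping the threshold trades FAR against FRR; the operating point where the two curves cross is the commonly reported equal error rate.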

  16. Facial animation on an anatomy-based hierarchical face model

    Science.gov (United States)

    Zhang, Yu; Prakash, Edmond C.; Sung, Eric

    2003-04-01

    In this paper we propose a new hierarchical 3D facial model based on anatomical knowledge that provides high fidelity for realistic facial expression animation. Like a real human face, the facial model has a hierarchical biomechanical structure, incorporating a physically-based approximation to facial skin tissue, a set of anatomically-motivated facial muscle actuators, and the underlying skull structure. The deformable skin model has a multi-layer structure to approximate different types of soft tissue. It takes into account the nonlinear stress-strain relationship of the skin and the fact that soft tissue is almost incompressible. Different types of muscle models have been developed to simulate the distribution of muscle force on the skin due to muscle contraction. Owing to the presence of the skull model, our facial model achieves both more accurate facial deformation and consideration of facial anatomy during the interactive definition of the facial muscles. Under muscular force, the deformation of the facial skin is evaluated using numerical integration of the governing dynamic equations. The dynamic facial animation algorithm runs at an interactive rate, generating flexible and realistic facial expressions.

  17. Extraction of Subject-Specific Facial Expression Categories and Generation of Facial Expression Feature Space using Self-Mapping

    Directory of Open Access Journals (Sweden)

    Masaki Ishii

    2008-06-01

    Full Text Available This paper proposes a generation method of a subject-specific Facial Expression Map (FEMap using the Self-Organizing Maps (SOM of unsupervised learning and Counter Propagation Networks (CPN of supervised learning together. The proposed method consists of two steps. In the first step, the topological change of a face pattern in the expressional process of facial expression is learned hierarchically using the SOM of a narrow mapping space, and the number of subject-specific facial expression categories and the representative images of each category are extracted. Psychological significance based on the neutral and six basic emotions (anger, sadness, disgust, happiness, surprise, and fear is assigned to each extracted category. In the latter step, the categories and the representative images described above are learned using the CPN of a large mapping space, and a category map that expresses the topological characteristics of facial expression is generated. This paper defines this category map as an FEMap. Experimental results for six subjects show that the proposed method can generate a subject-specific FEMap based on the topological characteristics of facial expression appearing on face images.
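
    The first step above rests on the SOM's competitive learning. The following toy sketch of a 1D SOM is illustrative only and is not the authors' configuration: the grid size, toy 2D inputs, learning rate, and neighborhood schedule are all assumptions.

```python
import math
import random

# Minimal 1D Self-Organizing Map: competitive learning that maps
# input vectors onto a small grid of nodes, with a Gaussian
# neighborhood that shrinks over training.

def train_som(data, n_nodes=4, epochs=50, lr=0.3, seed=0):
    rng = random.Random(seed)
    dim = len(data[0])
    # Random initial node weights.
    nodes = [[rng.random() for _ in range(dim)] for _ in range(n_nodes)]
    for epoch in range(epochs):
        radius = max(1.0 * (1 - epoch / epochs), 0.5)  # shrinking neighborhood
        for x in data:
            # Best-matching unit: node closest to the input.
            bmu = min(range(n_nodes),
                      key=lambda i: sum((w - v) ** 2 for w, v in zip(nodes[i], x)))
            for i in range(n_nodes):
                # Gaussian neighborhood on the 1D grid.
                h = math.exp(-((i - bmu) ** 2) / (2 * radius ** 2))
                nodes[i] = [w + lr * h * (v - w) for w, v in zip(nodes[i], x)]
    return nodes

# Two well-separated toy clusters; distinct nodes settle near each,
# which is the mechanism behind category extraction from face patterns.
data = [[0.1, 0.1], [0.15, 0.05], [0.9, 0.9], [0.95, 0.85]]
nodes = train_som(data)
```

    In the paper's setting the inputs would be face-image patterns rather than 2D points, and the winning nodes define the subject-specific expression categories.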

  18. Effect of Different Occlusion on Facial Expressions Recognition

    OpenAIRE

    Ankita Vyas; Ramchand Hablani

    2014-01-01

    Occlusions around facial parts complicate the task of recognizing facial expressions from facial images. We propose a facial expression recognition method based on local facial regions, which provides a better recognition rate in the presence of facial occlusions. The proposed method uses the Uniform Local Binary Pattern as a feature extractor, which extracts discriminative features from some important parts of the facial image. Feature vectors are classified using the simplest classifier th...

  19. Perceptually Valid Facial Expressions for Character-Based Applications

    Directory of Open Access Journals (Sweden)

    Ali Arya

    2009-01-01

    Full Text Available This paper addresses the problem of creating facial expressions of mixed emotions in a perceptually valid way. The research has been done in the context of “game-like” health and education applications aimed at studying social competency and facial expression awareness in autistic children as well as native language learning, but the results can be applied to many other applications, such as games that need dynamic facial expressions or tools for automating the creation of facial animations. Most existing methods for creating facial expressions of mixed emotions use operations like averaging to create the combined effect of two universal emotions. Such methods may be mathematically justifiable but are not necessarily valid from a perceptual point of view. The research reported here starts with user experiments aimed at understanding how people combine facial actions to express mixed emotions, and how viewers perceive a set of facial actions in terms of underlying emotions. Using the results of these experiments and a three-dimensional emotion model, we associate facial actions with dimensions and regions in the emotion space, and create a facial expression based on the location of the mixed emotion in the three-dimensional space. We call these regionalized facial actions “facial expression units.”

  20. Facial Features: What Women Perceive as Attractive and What Men Consider Attractive.

    Science.gov (United States)

    Muñoz-Reyes, José Antonio; Iglesias-Julios, Marta; Pita, Miguel; Turiegano, Enrique

    2015-01-01

    Attractiveness plays an important role in social exchange and in the ability to attract potential mates, especially for women. Several facial traits have been described as reliable indicators of attractiveness in women, but very few studies consider the influence of several measurements simultaneously. In addition, most studies consider just one of two assessments to directly measure attractiveness: either self-evaluation or men's ratings. We explored the relationship between these two estimators of attractiveness and a set of facial traits in a sample of 266 young Spanish women. These traits are: facial fluctuating asymmetry, facial averageness, facial sexual dimorphism, and facial maturity. We made use of the advantage of having recently developed methodologies that enabled us to measure these variables in real faces. We also controlled for three other widely used variables: age, body mass index and waist-to-hip ratio. The inclusion of many different variables allowed us to detect any possible interaction between the features described that could affect attractiveness perception. Our results show that facial fluctuating asymmetry is related both to self-perceived and male-rated attractiveness. Other facial traits are related only to one direct attractiveness measurement: facial averageness and facial maturity only affect men's ratings. Unmodified faces are closer to natural stimuli than are manipulated photographs, and therefore our results support the importance of employing unmodified faces to analyse the factors affecting attractiveness. We also discuss the relatively low equivalence between self-perceived and male-rated attractiveness and how various anthropometric traits are relevant to them in different ways. Finally, we highlight the need to perform integrated-variable studies to fully understand female attractiveness. PMID:26161954

  1. Facial Features: What Women Perceive as Attractive and What Men Consider Attractive.

    Directory of Open Access Journals (Sweden)

    José Antonio Muñoz-Reyes

    Full Text Available Attractiveness plays an important role in social exchange and in the ability to attract potential mates, especially for women. Several facial traits have been described as reliable indicators of attractiveness in women, but very few studies consider the influence of several measurements simultaneously. In addition, most studies consider just one of two assessments to directly measure attractiveness: either self-evaluation or men's ratings. We explored the relationship between these two estimators of attractiveness and a set of facial traits in a sample of 266 young Spanish women. These traits are: facial fluctuating asymmetry, facial averageness, facial sexual dimorphism, and facial maturity. We made use of the advantage of having recently developed methodologies that enabled us to measure these variables in real faces. We also controlled for three other widely used variables: age, body mass index and waist-to-hip ratio. The inclusion of many different variables allowed us to detect any possible interaction between the features described that could affect attractiveness perception. Our results show that facial fluctuating asymmetry is related both to self-perceived and male-rated attractiveness. Other facial traits are related only to one direct attractiveness measurement: facial averageness and facial maturity only affect men's ratings. Unmodified faces are closer to natural stimuli than are manipulated photographs, and therefore our results support the importance of employing unmodified faces to analyse the factors affecting attractiveness. We also discuss the relatively low equivalence between self-perceived and male-rated attractiveness and how various anthropometric traits are relevant to them in different ways. Finally, we highlight the need to perform integrated-variable studies to fully understand female attractiveness.

  2. Scattered Data Processing Approach Based on Optical Facial Motion Capture

    OpenAIRE

    Qiang Zhang; Xiaoying Liang; Xiaopeng Wei

    2013-01-01

    In recent years, animation reconstruction of facial expressions has become a popular research field in computer science and motion capture-based facial expression reconstruction is now emerging in this field. Based on the facial motion data obtained using a passive optical motion capture system, we propose a scattered data processing approach, which aims to solve the common problems of missing data and noise. To recover missing data, given the nonlinear relationships among neighbors with the ...

  3. FACIAL FEATURE EXTRACTION USING STATISTICAL QUANTITIES OF CURVE COEFFICIENTS

    Directory of Open Access Journals (Sweden)

    SHREEJA R,

    2010-10-01

    Full Text Available Face recognition technology involves obtaining the identity of a person by comparing the captured image with the stored images in a database. In today's world, face recognition has relevance in many day-to-day applications. A large number of organizations make use of various biometric techniques for applications such as employee sign-in and access to secure systems. Unlike other biometric techniques, it has the advantage that it can be performed without the active participation of the person, so it can be used extensively for crime investigation. For identifying faces, the preliminary phase is to obtain the features of the faces which are crucial for recognition. In this paper, a feature extraction method based on the statistical descriptors of curvelet coefficients is proposed. The curvelet transform is a multiscale pyramid with many directions and positions at each length scale, and needle-shaped elements at fine scales. It is an extension of the wavelet transform, originally designed to represent edges and other singularities along curves much more efficiently than traditional wavelet transforms. After the curvelet transform, the low-frequency and high-frequency coefficients are obtained in matrix form. The former contain the approximation of the face images, and we call them curve faces; the latter contain the detail information. The low-frequency coefficients, i.e. the curve faces, contain the most significant information of the faces and are crucial for recognition. However, the curvelet transform yields a large number of coefficients, and using all of them would make the recognition system complex, so only the important features that are sufficient for recognition are to be extracted. In this paper, a method of extracting the required features from the coefficients obtained after the curvelet transform is discussed.
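
    The idea of summarizing transform coefficients by statistical descriptors rather than using them all can be sketched as follows. The coefficient block below is a hypothetical stand-in for a low-frequency curvelet band; the choice of descriptors (mean, standard deviation, energy) is a common convention, not necessarily the paper's exact set.

```python
import math

# Summarize a block of transform coefficients by a few statistics
# instead of keeping every coefficient, shrinking the feature vector.

def block_descriptors(coeffs):
    """Mean, standard deviation, and energy of a coefficient block."""
    flat = [c for row in coeffs for c in row]
    n = len(flat)
    mean = sum(flat) / n
    std = math.sqrt(sum((c - mean) ** 2 for c in flat) / n)
    energy = sum(c * c for c in flat)
    return mean, std, energy

low_freq = [[4.0, 2.0],
            [2.0, 4.0]]
print(block_descriptors(low_freq))  # (3.0, 1.0, 40.0)
```

    Concatenating such descriptors over all sub-bands gives a compact face signature whose length does not grow with image size.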

  4. Faces in-between: evaluations reflect the interplay of facial features and task-dependent fluency.

    Science.gov (United States)

    Winkielman, Piotr; Olszanowski, Michal; Gola, Mateusz

    2015-04-01

    Facial features influence social evaluations. For example, faces are rated as more attractive and trustworthy when they have more smiling features and also more female features. However, the influence of facial features on evaluations should be qualified by the affective consequences of fluency (cognitive ease) with which such features are processed. Further, fluency (along with its affective consequences) should depend on whether the current task highlights conflict between specific features. Four experiments are presented. In 3 experiments, participants saw faces varying in expressions ranging from pure anger, through mixed expression, to pure happiness. Perceivers first categorized faces either on a control dimension, or an emotional dimension (angry/happy). Thus, the emotional categorization task made "pure" expressions fluent and "mixed" expressions disfluent. Next, participants made social evaluations. Results show that after emotional categorization, but not control categorization, targets with mixed expressions are relatively devalued. Further, this effect is mediated by categorization disfluency. Additional data from facial electromyography reveal that on a basic physiological level, affective devaluation of mixed expressions is driven by their objective ambiguity. The fourth experiment shows that the relative devaluation of mixed faces that vary in gender ambiguity requires a gender categorization task. Overall, these studies highlight that the impact of facial features on evaluation is qualified by their fluency, and that the fluency of features is a function of the current task. The discussion highlights the implications of these findings for research on emotional reactions to ambiguity. PMID:25642724

  5. Facial Expression Recognition

    OpenAIRE

    Neeta Sarode; Prof. Shalini Bhatia

    2010-01-01

    Facial expression analysis is rapidly becoming an area of intense interest in computer science and human-computer interaction design communities. The most expressive way humans display emotions is through facial expressions. In this paper a method is implemented using 2D appearance-based local approach for the extraction of intransient facial features and recognition of four facial expressions. The algorithm implements Radial Symmetry Transform and further uses edge projection analysis for fe...

  6. Image Based Hair Segmentation Algorithm for the Application of Automatic Facial Caricature Synthesis

    OpenAIRE

    2014-01-01

    Hair is a salient feature of the human face region and one of the important cues for face analysis. Accurate detection and representation of the hair region is one of the key components for automatic synthesis of human facial caricatures. In this paper, an automatic hair detection algorithm for the application of automatic facial caricature synthesis based on a single image is proposed. Firstly, hair regions in training images are labeled manually and then the hair position prior distributions an...

  7. Facial expression analysis using LBP features. Computer Engineering and Applications, 2011, 47(2): 149-152.

    Institute of Scientific and Technical Information of China (English)

    刘伟锋; 李树娟; 王延江

    2011-01-01

    In order to effectively extract facial expression features, a novel feature extraction approach for facial expression recognition based on the Local Binary Pattern (LBP) is proposed in this paper. Firstly, the grey level of the facial expression images is normalized with the average-variance method. By integral projection, critical facial feature points such as the eyebrows, eyes, nose and mouth are located, and the sub-region containing each facial component is partitioned. Each sub-region is then divided into several blocks, and the block-wise LBP histograms of each sub-region are extracted as the facial expression features. To validate the proposed method, experiments are conducted on the JAFFE (Japanese Female Facial Expression) database. The results illustrate that the proposed method is effective in representing facial expression features.
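
    The LBP operator at the core of this method can be illustrated on a single 3x3 patch. Neighbour ordering and bit weighting vary between implementations; the clockwise ordering below is one common convention and is not necessarily the one used in the paper.

```python
# LBP code of one pixel: threshold the 8 neighbours against the
# centre value and pack the resulting bits into one byte. Block-wise
# histograms of these codes then serve as the expression feature.

def lbp_code(patch):
    """LBP code of the centre pixel of a 3x3 patch."""
    c = patch[1][1]
    # Neighbours clockwise from the top-left corner.
    neighbours = [patch[0][0], patch[0][1], patch[0][2],
                  patch[1][2], patch[2][2], patch[2][1],
                  patch[2][0], patch[1][0]]
    return sum((1 << i) for i, p in enumerate(neighbours) if p >= c)

patch = [[6, 5, 2],
         [7, 6, 1],
         [9, 8, 7]]
print(lbp_code(patch))  # neighbours >= 6 set bits 0, 4, 5, 6, 7 -> 241
```

    Sliding this over a sub-region and histogramming the codes yields the per-block descriptors that the method concatenates into its final feature vector.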

  8. What are we really priming? Cue-based versus category-based processing of facial stimuli.

    Science.gov (United States)

    Livingston, Robert W; Brewer, Marilynn B

    2002-01-01

    Results from 5 experiments provide converging evidence that automatic evaluation of faces in sequential priming paradigms reflects affective responses to phenotypic features per se rather than evaluation of the racial categories to which the faces belong. Experiment 1 demonstrates that African American facial primes with racially prototypic physical features facilitate more automatic negative evaluations than do other Black faces that are unambiguously categorizable as African American but have less prototypic features. Experiments 2, 3, and 4 further support the hypothesis that these differences reflect direct affective responses to physical features rather than differential categorization. Experiment 5 shows that automatic responses to facial primes correlate with cue-based but not category-based explicit measures of prejudice. Overall, these results suggest the existence of 2 distinct types of prejudice. PMID:11811634

  9. Binary pattern flavored feature extractors for Facial Expression Recognition: An overview

    DEFF Research Database (Denmark)

    Kristensen, Rasmus Lyngby; Tan, Zheng-Hua; Ma, Zhanyu;

    2015-01-01

    This paper conducts a survey of modern binary pattern flavored feature extractors applied to the Facial Expression Recognition (FER) problem. In total, 26 different feature extractors are included, of which six are selected for in-depth description. In addition, the paper unifies important FER terminology, describes open challenges, and provides recommendations for the scientific evaluation of FER systems. Lastly, it studies the facial expression recognition accuracy and blur invariance of the Local Frequency Descriptor. The paper seeks to bring together disjointed studies, and the main contribution is...

  10. Improved Facial-Feature Detection for AVSP via Unsupervised Clustering and Discriminant Analysis

    OpenAIRE

    Simon Lucey; Sridha Sridharan; Vinod Chandran

    2003-01-01

    An integral part of any audio-visual speech processing (AVSP) system is the front-end visual system that detects facial-features (e.g., eyes and mouth) pertinent to the task of visual speech processing. The ability of this front-end system to not only locate, but also give a confidence measure that the facial-feature is present in the image, directly affects the ability of any subsequent post-processing task such as speech or speaker recognition. With these issues in mind, this paper presents...

  12. Vascular Ehlers-Danlos syndrome without the characteristic facial features: a case report.

    Science.gov (United States)

    Inokuchi, Ryota; Kurata, Hideaki; Endo, Kiyoshi; Kitsuta, Yoichi; Nakajima, Susumu; Hatamochi, Atsushi; Yahagi, Naoki

    2014-12-01

    As a type of Ehlers-Danlos syndrome (EDS), vascular EDS (vEDS) is typified by a number of characteristic facial features (eg, large eyes, small chin, sunken cheeks, thin nose and lips, lobeless ears). However, vEDS does not typically display hypermobility of the large joints and skin hyperextensibility, which are features typical of the more common forms of EDS. Thus, colonic perforation or aneurysm rupture may be the first presentation of the disease. Because both complications are associated with a reduced life expectancy for individuals with this condition, an awareness of the clinical features of vEDS is important. Here, we describe the treatment of vEDS lacking the characteristic facial attributes in a 24-year-old healthy man who presented to the emergency room with abdominal pain. Enhanced computed tomography revealed diverticula and perforation in the sigmoid colon. The perforated lesion of the sigmoid colon was removed, and the Hartmann procedure was performed. During the surgery, control of bleeding was required because of vascular fragility. Subsequent molecular and genetic analysis was performed based on the suspected diagnosis of vEDS. These analyses revealed reduced type III collagen synthesis in cultured skin fibroblasts and identified a previously undocumented mutation in the gene for the α1 chain of type III collagen, confirming the diagnosis of vEDS. After eliciting a detailed medical history, we learned that his mother had a history of extensive bruising since childhood and idiopathic hematothorax. Both were prescribed oral celiprolol. One year after admission, the patient was free of recurrent perforation. This case illustrates that an awareness of the clinical characteristics of vEDS and of the family history is important because of the high mortality from this condition even in young people. Importantly, genetic assays could help in determining the surgical procedure and offer benefits to relatives, since this condition is inherited in an autosomal dominant manner.

  13. Effects of face feature and contour crowding in facial expression adaptation.

    Science.gov (United States)

    Liu, Pan; Montaser-Kouhsari, Leila; Xu, Hong

    2014-12-01

    Prolonged exposure to a visual stimulus, such as a happy face, biases the perception of a subsequently presented neutral face toward a sad percept, the well-known face adaptation. Face adaptation is affected by the visibility or awareness of the adapting face. However, whether it is affected by the discriminability of the adapting face is largely unknown. In the current study, we used crowding to manipulate the discriminability of the adapting face and tested its effect on face adaptation. Instead of presenting flanking faces near the target face, we shortened the distance between facial features (internal feature crowding) and reduced the size of the face contour (external contour crowding) to introduce crowding. In our first experiment, we asked whether internal feature crowding or external contour crowding is more effective in inducing a crowding effect. We found that combining internal feature and external contour crowding, but not either of them alone, induced a significant crowding effect. In Experiment 2, we went further to investigate its effect on adaptation. We found that both internal feature crowding and external contour crowding significantly reduced the facial expression aftereffect (FEA). However, we did not find a significant correlation between the discriminability of the adapting face and its FEA. Interestingly, we found a significant correlation between the discriminabilities of the adapting and test faces. Experiment 3 found that the reduced adaptation aftereffect under combined crowding by the external face contour and the internal facial features cannot be decomposed linearly into the effects of the face contour and the facial features. This suggests a nonlinear integration between facial features and face contour in face adaptation. PMID:25449164

  14. A kind of Face Recognition Method Based on CCA Feature Information Fusion

    OpenAIRE

    Li-Xia NIU; Li, Guo

    2013-01-01

    In order to obtain more local facial features, a sub-image face recognition method based on RS-Sp CCA feature information fusion is proposed in this paper. Local facial features are sampled from sub-images, and CCA is used to fuse the global facial features with the sampled local features, so that the global image features can be fully exploited to construct many different kinds of component classifiers. Experimental analyses are then made on databases of 3 ...

  15. Robust facial expression recognition algorithm based on local metric learning

    Science.gov (United States)

    Jiang, Bin; Jia, Kebin

    2016-01-01

    In facial expression recognition tasks, different facial expressions are often confused with each other. Motivated by the fact that a learned metric can significantly improve the accuracy of classification, a facial expression recognition algorithm based on local metric learning is proposed. First, k-nearest neighbors of the given testing sample are determined from the total training data. Second, chunklets are selected from the k-nearest neighbors. Finally, the optimal transformation matrix is computed by maximizing the total variance between different chunklets and minimizing the total variance of instances in the same chunklet. The proposed algorithm can find the suitable distance metric for every testing sample and improve the performance on facial expression recognition. Furthermore, the proposed algorithm can be used for vector-based and matrix-based facial expression recognition. Experimental results demonstrate that the proposed algorithm could achieve higher recognition rates and be more robust than baseline algorithms on the JAFFE, CK, and RaFD databases.
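
    The first two steps of the described algorithm, k-nearest-neighbour selection followed by chunklet formation, can be sketched as follows. The final transformation-matrix optimization is omitted, and the data, labels, and `k` below are hypothetical toy values.

```python
# Find the k nearest training samples to a test sample, then group
# them by class label into "chunklets" (same-label subsets), the
# inputs to the subsequent local metric-learning step.

def knn_chunklets(train, labels, test, k):
    """Return the k nearest training samples grouped by label."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    order = sorted(range(len(train)), key=lambda i: dist(train[i], test))[:k]
    chunklets = {}
    for i in order:
        chunklets.setdefault(labels[i], []).append(train[i])
    return chunklets

train = [[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1], [5.0, 5.0]]
labels = ["neutral", "neutral", "happy", "happy", "angry"]
print(knn_chunklets(train, labels, [0.95, 1.0], k=3))
# {'happy': [[1.0, 1.0], [0.9, 1.1]], 'neutral': [[0.1, 0.2]]}
```

    Because the chunklets are rebuilt around every test sample, the learned metric is local: each test face gets a distance function tuned to its own neighbourhood.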

  16. Automated Facial Expression Recognition Using Gradient-Based Ternary Texture Patterns

    Directory of Open Access Journals (Sweden)

    Faisal Ahmed

    2013-01-01

    Full Text Available Recognition of human expressions from facial images is an interesting research area that has received increasing attention in recent years. A robust and effective facial feature descriptor is the key to designing a successful expression recognition system. Although much progress has been made, deriving a face feature descriptor that performs consistently under changing environments is still a difficult and challenging task. In this paper, we present the gradient local ternary pattern (GLTP), a discriminative local texture feature for representing facial expressions. The proposed GLTP operator encodes the local texture of an image by computing the gradient magnitudes of the local neighborhood and quantizing those values into three discrimination levels. The location and occurrence information of the resulting micropatterns is then used as the face feature descriptor. The performance of the proposed method has been evaluated for the person-independent facial expression recognition task. Experiments with prototypic expression images from the Cohn-Kanade (CK) face expression database validate that the GLTP feature descriptor can effectively encode facial texture and thus achieves better recognition performance than some well-known appearance-based facial features.
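
    The three-level quantization at the heart of GLTP can be illustrated in isolation. The tolerance threshold `t` and the gradient values below are hypothetical; the full descriptor would additionally encode the resulting pattern positionally and histogram it per face region.

```python
# Quantize the gradient magnitudes of a pixel's neighbourhood into
# three levels (-1, 0, +1) relative to the centre value, using a
# tolerance band of width t around the centre.

def ternary_levels(center, neighbours, t):
    """Quantize each neighbour into -1/0/+1 relative to the centre."""
    levels = []
    for g in neighbours:
        if g >= center + t:
            levels.append(1)
        elif g <= center - t:
            levels.append(-1)
        else:
            levels.append(0)
    return levels

grad_center = 10.0
grad_neighbours = [14.0, 9.0, 3.0, 10.5, 17.0, 6.0, 11.0, 2.0]
print(ternary_levels(grad_center, grad_neighbours, t=3.0))
# [1, 0, -1, 0, 1, -1, 0, -1]
```

    The tolerance band makes the code stable under small gradient fluctuations, which is the main advantage ternary patterns have over plain binary thresholding.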

  17. FACIAL EXPRESSION RECOGNITION BASED ON WAPA AND OEPA FASTICA

    OpenAIRE

    Humayra Binte Ali; Powers, David M. W

    2014-01-01

    Face is one of the most important biometric traits because of its uniqueness and robustness. For this reason, researchers from many diverse fields, such as security, psychology, image processing, and computer vision, have taken up research on face detection as well as facial expression recognition. Subspace learning methods work very well for recognizing facial features. Among subspace learning techniques, PCA, ICA, and NMF are the most prominent. In this work, our main focus is on Indepe...

  18. Recovering Faces from Memory: The Distracting Influence of External Facial Features

    Science.gov (United States)

    Frowd, Charlie D.; Skelton, Faye; Atherton, Chris; Pitchford, Melanie; Hepton, Gemma; Holden, Laura; McIntyre, Alex H.; Hancock, Peter J. B.

    2012-01-01

    Recognition memory for unfamiliar faces is facilitated when contextual cues (e.g., head pose, background environment, hair and clothing) are consistent between study and test. By contrast, inconsistencies in external features, especially hair, promote errors in unfamiliar face-matching tasks. For the construction of facial composites, as carried…

  19. Implicit binding of facial features during change blindness

    OpenAIRE

    Pessi Lyyra; Hanna Mäkelä; Hietanen, Jari K.; Piia Astikainen

    2014-01-01

    Change blindness refers to the inability to detect visual changes if introduced together with an eye-movement, blink, flash of light, or with distracting stimuli. Evidence of implicit detection of changed visual features during change blindness has been reported in a number of studies using both behavioral and neurophysiological measurements. However, it is not known whether implicit detection occurs only at the level of single features or whether complex organizations of f...

  20. Detection of Human Head Direction Based on Facial Normal Algorithm

    Directory of Open Access Journals (Sweden)

    Lam Thanh Hien

    2015-01-01

    Full Text Available Many scholars worldwide have devoted special effort to the search for advanced approaches to efficiently estimate human head direction, which has been successfully applied in numerous applications such as human-computer interaction, teleconferencing, virtual reality, and 3D audio rendering. However, one of the existing shortcomings in the current literature is the violation of some ideal assumptions in practice. Hence, this paper proposes a novel algorithm based on the normal of the human face to recognize head direction by optimizing a 3D face model combined with the facial normal model. In our experiments, a computational program was developed based on the proposed algorithm and integrated with a surveillance system to alert drivers to drowsiness. The program takes data from either video or a webcam, automatically identifies the critical points of facial features based on an analysis of the major components of the face, closely monitors the slant angle of the head, and raises an alarm whenever the driver dozes off. Our empirical experiments show that the proposed algorithm works effectively in real time and provides highly accurate results.
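The slant-angle monitoring this record describes depends on estimating the facial normal. A minimal sketch of that geometric step, assuming three non-collinear 3D facial landmarks are already available (the specific landmark choice here is hypothetical, not the paper's):

```python
import numpy as np

def face_normal(p_left_eye, p_right_eye, p_chin):
    """Unit normal of the plane through three facial landmarks."""
    a = np.asarray(p_right_eye, float) - np.asarray(p_left_eye, float)
    b = np.asarray(p_chin, float) - np.asarray(p_left_eye, float)
    n = np.cross(a, b)
    return n / np.linalg.norm(n)

def slant_angle_deg(normal, camera_axis=(0.0, 0.0, 1.0)):
    """Angle between the facial normal and the camera axis, in degrees.
    A frontal face gives 0; a drooping head gives a growing slant angle."""
    c = np.asarray(camera_axis, float)
    cosang = abs(np.dot(normal, c)) / np.linalg.norm(c)
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))
```

A drowsiness monitor of the kind described would raise an alarm once the slant angle stays above some threshold for long enough.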

  1. Facial Expression Recognition Based on Local Binary Patterns and Kernel Discriminant Isomap

    Directory of Open Access Journals (Sweden)

    Xiaoming Zhao

    2011-10-01

    Full Text Available Facial expression recognition is an interesting and challenging subject. Considering the nonlinear manifold structure of facial images, a new kernel-based manifold learning method, called kernel discriminant isometric mapping (KDIsomap), is proposed. KDIsomap aims to nonlinearly extract the discriminant information by maximizing the interclass scatter while minimizing the intraclass scatter in a reproducing kernel Hilbert space. KDIsomap is used to perform nonlinear dimensionality reduction on the extracted local binary pattern (LBP) facial features and produce low-dimensional discriminant embedded data representations with striking performance improvements on facial expression recognition tasks. The nearest neighbor classifier with the Euclidean metric is used for facial expression classification. Facial expression recognition experiments are performed on two popular facial expression databases, i.e., the JAFFE database and the Cohn-Kanade database. Experimental results indicate that KDIsomap obtains the best accuracy of 81.59% on the JAFFE database, and 94.88% on the Cohn-Kanade database. KDIsomap outperforms the other methods used, such as principal component analysis (PCA), linear discriminant analysis (LDA), kernel principal component analysis (KPCA), kernel linear discriminant analysis (KLDA), as well as kernel isometric mapping (KIsomap).
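The LBP features that feed the KDIsomap pipeline can be sketched in their basic 8-neighbour form with per-cell histograms (the 4x4 grid below is an assumption, not the paper's setting):

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbour LBP: each neighbour is thresholded against the
    centre pixel and the resulting bits form an 8-bit code per pixel."""
    img = np.asarray(img, dtype=float)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = img[1:-1, 1:-1]
    code = np.zeros(centre.shape, dtype=np.int32)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= ((nb >= centre).astype(np.int32) << bit)
    return code

def lbp_histogram(img, grid=(4, 4)):
    """Spatially enhanced descriptor: concatenate per-cell code histograms."""
    code = lbp_image(img)
    rows = np.array_split(np.arange(code.shape[0]), grid[0])
    cols = np.array_split(np.arange(code.shape[1]), grid[1])
    hists = [np.bincount(code[np.ix_(r, c)].ravel(), minlength=256)
             for r in rows for c in cols]
    return np.concatenate(hists)
```

In the paper these high-dimensional histograms are then mapped to a low-dimensional discriminant embedding by KDIsomap before 1-NN classification.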

  2. Newborns' Face Recognition: Role of Inner and Outer Facial Features

    Science.gov (United States)

    Turati, Chiara; Macchi Cassia, Viola; Simion, Francesca; Leo, Irene

    2006-01-01

    Existing data indicate that newborns are able to recognize individual faces, but little is known about what perceptual cues drive this ability. The current study showed that either the inner or outer features of the face can act as sufficient cues for newborns' face recognition (Experiment 1), but the outer part of the face enjoys an advantage…

  3. De Novo Mutation in ABCC9 Causes Hypertrichosis Acromegaloid Facial Features Disorder.

    Science.gov (United States)

    Afifi, Hanan H; Abdel-Hamid, Mohamed S; Eid, Maha M; Mostafa, Inas S; Abdel-Salam, Ghada M H

    2016-01-01

    A 13-year-old Egyptian girl with generalized hypertrichosis, gingival hyperplasia, coarse facial appearance, no cardiovascular or skeletal anomalies, keloid formation, and multiple labial frenula was referred to our clinic for counseling. Molecular analysis of the ABCC9 gene showed a de novo missense mutation located in exon 27, which has been described previously with Cantu syndrome. An overlap between Cantu syndrome, acromegaloid facial syndrome, and hypertrichosis acromegaloid facial features disorder is apparent at the phenotypic and molecular levels. The patient reported here gives further evidence that these syndromes are an expression of the ABCC9-related disorders, ranging from hypertrichosis and acromegaloid facies to the severe end of Cantu syndrome. PMID:26871653

  4. FACIAL EXPRESSION RECOGNITION BASED ON WAPA AND OEPA FASTICA

    Directory of Open Access Journals (Sweden)

    Humayra Binte Ali

    2014-05-01

    Full Text Available Face is one of the most important biometric traits for its uniqueness and robustness. For this reason, researchers from many diversified fields, such as security, psychology, image processing, and computer vision, have started to research face detection as well as facial expression recognition. Subspace learning methods work very well for recognizing facial features. Among subspace learning techniques, PCA, ICA, and NMF are the most prominent topics. In this work, our main focus is on Independent Component Analysis (ICA). Among the several architectures of ICA, we used the FastICA and LS-ICA algorithms. We applied FastICA to whole faces and to different facial parts to analyze the influence of each part on the basic facial expressions. Our extended algorithms, WAPA-FastICA and OEPA-FastICA, are discussed in the proposed-algorithm section. Locally Salient ICA is implemented on the whole face using 8x8 windows to find the more prominent facial features for facial expression. The experiments show that our proposed OEPA-FastICA and WAPA-FastICA outperform the existing prevalent Whole-FastICA and LS-ICA methods.

  5. Facial Expression Recognition Based on WAPA and OEPA Fastica

    Directory of Open Access Journals (Sweden)

    Humayra Binte Ali

    2014-06-01

    Full Text Available Face is one of the most important biometric traits for its uniqueness and robustness. For this reason, researchers from many diversified fields, such as security, psychology, image processing, and computer vision, have started to research face detection as well as facial expression recognition. Subspace learning methods work very well for recognizing facial features. Among subspace learning techniques, PCA, ICA, and NMF are the most prominent topics. In this work, our main focus is on Independent Component Analysis (ICA). Among the several architectures of ICA, we used the FastICA and LS-ICA algorithms. We applied FastICA to whole faces and to different facial parts to analyze the influence of each part on the basic facial expressions. Our extended algorithms, WAPA-FastICA and OEPA-FastICA, are discussed in the proposed-algorithm section. Locally Salient ICA is implemented on the whole face using 8x8 windows to find the more prominent facial features for facial expression. The experiments show that our proposed OEPA-FastICA and WAPA-FastICA outperform the existing prevalent Whole-FastICA and LS-ICA methods.

  6. A kind of Face Recognition Method Based on CCA Feature Information Fusion

    Directory of Open Access Journals (Sweden)

    Li-Xia NIU

    2013-10-01

    Full Text Available In order to obtain more local facial features, a sub-image face recognition method based on RS-Sp CCA feature information fusion is proposed in this paper. The method samples the local facial features of sub-images and uses CCA to fuse the global facial features with the sampled local features, so that the global image features can be fully used to construct many different kinds of component classifiers. Experiments were then conducted on three standard facial data sets. The results show that the sub-image face recognition method based on RS-Sp CCA feature information fusion is better than the simple feature sampling method and the feature fusion method, and that it can efficiently improve the face recognition rate.

  7. Does my face FIT?: a face image task reveals structure and distortions of facial feature representation.

    Directory of Open Access Journals (Sweden)

    Christina T Fuentes

    Full Text Available Despite extensive research on face perception, few studies have investigated individuals' knowledge about the physical features of their own face. In this study, 50 participants indicated the location of key features of their own face, relative to an anchor point corresponding to the tip of the nose, and the results were compared to the true location of the same individual's features from a standardised photograph. Horizontal and vertical errors were analysed separately. An overall bias to underestimate vertical distances revealed a distorted face representation, with reduced face height. Factor analyses were used to identify separable subconfigurations of facial features with correlated localisation errors. Independent representations of upper and lower facial features emerged from the data pattern. The major source of variation across individuals was in representation of face shape, with a spectrum from tall/thin to short/wide representation. Visual identification of one's own face is excellent, and facial features are routinely used for establishing personal identity. However, our results show that spatial knowledge of one's own face is remarkably poor, suggesting that face representation may not contribute strongly to self-awareness.

  8. Feature Fusion Algorithm for Multimodal Emotion Recognition from Speech and Facial Expression Signal

    Directory of Open Access Journals (Sweden)

    Han Zhiyan

    2016-01-01

    Full Text Available In order to overcome the limitations of single-mode emotion recognition, this paper describes a novel multimodal emotion recognition algorithm that takes the speech signal and the facial expression signal as its research subjects. First, the speech features and facial expression features are fused, sample sets are obtained by sampling with replacement, and classifiers are trained with a BP neural network (BPNN). Second, the difference between two classifiers is measured by a double-error difference selection strategy. Finally, the final recognition result is obtained by the majority voting rule. Experiments show that the method improves the accuracy of emotion recognition by giving full play to the advantages of decision-level and feature-level fusion, bringing the whole fusion process closer to human emotion recognition, with a recognition rate of 90.4%.
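The two combination steps the record names, feature-level fusion and the majority voting rule, can be sketched as follows; the classifiers themselves (BPNNs in the paper) are abstracted away, and concatenation is assumed as the feature-fusion operation:

```python
import numpy as np

def fuse_features(speech_feats, face_feats):
    """Feature-level fusion by concatenating the two modality vectors
    (one common choice; the paper's exact fusion may differ)."""
    return np.concatenate([np.asarray(speech_feats, float),
                           np.asarray(face_feats, float)], axis=-1)

def majority_vote(predictions):
    """Combine per-classifier label predictions, shaped
    (n_classifiers, n_samples), by the majority voting rule.
    Ties go to the smallest label."""
    preds = np.asarray(predictions)
    out = np.empty(preds.shape[1], dtype=preds.dtype)
    for j in range(preds.shape[1]):
        labels, counts = np.unique(preds[:, j], return_counts=True)
        out[j] = labels[np.argmax(counts)]
    return out
```

Each bootstrap-trained classifier would vote on a fused feature vector, and the most frequent label becomes the final decision.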

  9. 3D facial expression recognition using maximum relevance minimum redundancy geometrical features

    Science.gov (United States)

    Rabiu, Habibu; Saripan, M. Iqbal; Mashohor, Syamsiah; Marhaban, Mohd Hamiruce

    2012-12-01

    In recent years, facial expression recognition (FER) has become an attractive research area which, besides the fundamental challenges it poses, finds application in areas such as human-computer interaction, clinical psychology, lie detection, pain assessment, and neurology. Generally, approaches to FER consist of three main steps: face detection, feature extraction and expression recognition. The recognition accuracy of FER hinges immensely on the relevance of the selected features in representing the target expressions. In this article, we present a person- and gender-independent 3D facial expression recognition method using maximum relevance minimum redundancy geometrical features. The aim is to detect a compact set of features that sufficiently represents the most discriminative features between the target classes. A multi-class one-against-one SVM classifier was employed to recognize the seven facial expressions: neutral, happy, sad, angry, fear, disgust, and surprise. An average recognition accuracy of 92.2% was recorded. Furthermore, inter-database homogeneity was investigated between two independent databases, BU-3DFE and UPM-3DFE; the results showed a strong homogeneity between the two databases.
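The feature-selection step can be illustrated with a greedy max-relevance min-redundancy loop. Here absolute Pearson correlation stands in for the mutual-information terms of mRMR proper, which is a simplification for the sketch:

```python
import numpy as np

def mrmr_select(X, y, k):
    """Greedily pick k columns of X: start from the most relevant feature,
    then repeatedly add the feature maximizing relevance-to-y minus mean
    redundancy with the already-selected set."""
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    n_feat = X.shape[1]
    relevance = np.array([abs(np.corrcoef(X[:, j], y)[0, 1])
                          for j in range(n_feat)])
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(n_feat):
            if j in selected:
                continue
            redundancy = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                                  for s in selected])
            score = relevance[j] - redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected
```

The effect is visible on toy data: a duplicate of an already-chosen feature is skipped in favour of a less redundant, moderately relevant one.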

  10. Facial action detection using block-based pyramid appearance descriptors

    OpenAIRE

    Jiang, Bihan; Valstar, Michel F.; Pantic, Maja

    2012-01-01

    Facial expression is one of the most important non-verbal behavioural cues in social signals. Constructing an effective face representation from images is an essential step for successful facial behaviour analysis. Most existing face descriptors operate on a single scale and do not leverage coarse-vs.-fine schemes such as image pyramids. In this work, we propose the sparse appearance descriptors Block-based Pyramid Local Binary Pattern (B-PLBP) and Block-based Pyramid Local Phase Quantisati...

  11. A Method for Head-shoulder Segmentation and Human Facial Feature Positioning

    Institute of Scientific and Technical Information of China (English)

    1998-01-01

    This paper proposes a method of head-shoulder segmentation and human facial feature localization for videotelephone applications. Utilizing the multi-resolution processing characteristic of human eyes and analyzing the edge information of a single frame in different frequency bands, this method can automatically perform head-shoulder segmentation and locate the facial feature regions (eyes, mouth, etc.) with rather high precision and simple, fast computation. This method therefore makes automatic 3-D model adaptation and 3-D motion estimation possible. However, it may fail when processing practical images with a complex background; it is then preferable to use some prior knowledge and multi-frame joint processing.

  12. Facial expression recognition based on image Euclidean distance-supervised neighborhood preserving embedding

    Science.gov (United States)

    Chen, Li; Li, Yingjie; Li, Haibin

    2014-11-01

    High-dimensional data often lie on a relatively low-dimensional manifold, while the nonlinear geometry of that manifold is often embedded in the similarities between the data points. These similar structures are captured effectively by Neighborhood Preserving Embedding (NPE). But NPE, as an unsupervised method, cannot utilize class information to guide the procedure of nonlinear dimensionality reduction; it ignores the geometrical structure information of local data points and the spatial information of pixels, which leads to classification failures. To address this problem, a feature extraction method based on Image Euclidean Distance-Supervised NPE (IED-SNPE) is proposed and applied to facial expression recognition. Firstly, it employs Image Euclidean Distance (IED) to characterize the dissimilarity of data points. Then the neighborhood graph of the input data is constructed according to this dissimilarity between data points. Finally, it fuses the prior nonlinear facial expression manifold of facial expression images with class-label information to extract discriminative features for expression recognition. In classification experiments on the JAFFE facial expression database, IED-SNPE was used for feature extraction and compared with NPE, SNPE, and IED-NPE. The results reveal that IED-SNPE not only preserves the local structure of the expression manifold well but also explicitly considers the spatial relationships among pixels in the images. It thus excels NPE in feature extraction and is highly competitive with well-known feature extraction methods.
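The Image Euclidean Distance named above replaces the identity metric of ordinary Euclidean distance with a matrix G whose entries decay with the spatial distance between pixel positions, d²(x, y) = (x − y)ᵀ G (x − y), so small image shifts cost little. A small sketch, where the Gaussian kernel width `sigma` is a free parameter:

```python
import numpy as np

def ied_matrix(h, w, sigma=1.0):
    """Metric matrix G for Image Euclidean Distance: G[i, j] is a Gaussian
    of the spatial distance between pixel positions i and j."""
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2)) / (2.0 * np.pi * sigma ** 2)

def image_euclidean_distance(a, b, G=None, sigma=1.0):
    """IED between two equal-sized grayscale images."""
    a = np.asarray(a, float)
    b = np.asarray(b, float)
    if G is None:
        G = ied_matrix(*a.shape, sigma=sigma)
    d = (a - b).ravel()
    return float(np.sqrt(d @ G @ d))
```

Unlike plain Euclidean distance, a one-pixel shift of a bright spot yields a smaller IED than moving it far away, which is the deformation tolerance the paper exploits when building the neighborhood graph.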

  13. Facial contour deformity correction with microvascular flaps based on the 3-dimentional template and facial moulage

    Directory of Open Access Journals (Sweden)

    Dinesh Kadam

    2013-01-01

    Full Text Available Introduction: Facial contour deformities present with varied aetiologies and degrees of severity. Accurate assessment, selecting a suitable tissue and sculpturing it to fill the defect is challenging and largely subjective. Objective assessment with imaging and software is not always feasible, and preparing a template is complicated. A three-dimensional (3D) wax template pre-fabricated over the facial moulage aids surgeons in fulfilling these tasks. Severe deformities demand a stable vascular tissue for an acceptable outcome. Materials and Methods: We present a review of eight consecutive patients who underwent augmentation of facial contour defects with free flaps between June 2005 and January 2011. A de-epithelialised free anterolateral thigh (ALT) flap was used in three patients, radial artery forearm and fibula osteocutaneous flaps in two each, and a groin flap in one. A 3D wax template was fabricated by augmenting the deformity on a facial moulage. It was utilised to select the flap, to determine the exact dimensions and to sculpture the flap intraoperatively. Ancillary procedures such as genioplasty, rhinoplasty and coloboma correction were performed. Results: The average age at presentation was 25 years, the average disease-free interval was 5.5 years, and all flaps survived. The mean follow-up period was 21.75 months. The correction was aesthetically acceptable and was maintained without any recurrence or atrophy. Conclusion: The 3D wax template on a facial moulage is a simple, inexpensive and precise objective tool. It provides an accurate guide for the planning and execution of flap reconstruction. The selection of the flap is based on the type and extent of the defect. The superiority of vascularised free tissue is well known, and the ALT flap offers a versatile option for correcting varying degrees of deformity. Ancillary procedures improve the overall aesthetic outcome, and minor flap touch-up procedures are generally required.

  14. Scattered Data Processing Approach Based on Optical Facial Motion Capture

    Directory of Open Access Journals (Sweden)

    Qiang Zhang

    2013-01-01

    Full Text Available In recent years, animation reconstruction of facial expressions has become a popular research field in computer science, and motion capture-based facial expression reconstruction is now emerging in this field. Based on facial motion data obtained using a passive optical motion capture system, we propose a scattered data processing approach that aims to solve the common problems of missing data and noise. To recover missing data, given the nonlinear relationships among the neighbors of the current missing marker, we propose an improved version of a previous method, in which we use the motion of three muscles rather than one to recover the missing data. To reduce noise, we first apply preprocessing to eliminate impulsive noise, before our proposed third-order quasi-uniform B-spline-based fitting method is used to reduce the remaining noise. Our experiments showed that the principles underlying this method are simple and straightforward, and it delivered acceptable precision during reconstruction.
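The noise-reduction step rests on least-squares fitting of a cubic (third-order) B-spline to each marker trajectory. A rough 1-D sketch of that idea, with the paper's quasi-uniform knot handling simplified to a uniform spline:

```python
import numpy as np

def bspline3_basis(t):
    """Uniform cubic B-spline blending functions at parameter t in [0, 1]."""
    return np.array([
        (1 - t) ** 3 / 6.0,
        (3 * t ** 3 - 6 * t ** 2 + 4) / 6.0,
        (-3 * t ** 3 + 3 * t ** 2 + 3 * t + 1) / 6.0,
        t ** 3 / 6.0,
    ])

def bspline3_fit(y, n_ctrl):
    """Least-squares fit of a uniform cubic B-spline with n_ctrl control
    points to the 1-D samples y; returns the smoothed curve."""
    y = np.asarray(y, float)
    m = len(y)
    A = np.zeros((m, n_ctrl))
    for i in range(m):
        # map sample i onto one of the n_ctrl - 3 spline segments
        u = i / (m - 1) * (n_ctrl - 3)
        seg = min(int(u), n_ctrl - 4)
        t = u - seg
        A[i, seg:seg + 4] = bspline3_basis(t)
    ctrl, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A @ ctrl
```

Because the spline has far fewer degrees of freedom than the raw samples, most of the zero-mean noise is averaged out while the underlying motion is preserved.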

  15. A novel human-machine interface based on recognition of multi-channel facial bioelectric signals

    International Nuclear Information System (INIS)

    Full text: This paper presents a novel human-machine interface that allows disabled people to interact with assistive systems for a better quality of life. It is based on multichannel forehead bioelectric signals acquired by placing three pairs of electrodes (physical channels) on the Frontalis and Temporalis facial muscles. The acquired signals are passed through a parallel filter bank to explore three different sub-bands related to the facial electromyogram, electrooculogram and electroencephalogram. Root mean square features of the bioelectric signals, analyzed within non-overlapping 256 ms windows, were extracted. The subtractive fuzzy c-means clustering method (SFCM) was applied to segment the feature space and generate initial fuzzy-based Takagi-Sugeno rules. Then, an adaptive neuro-fuzzy inference system was exploited to tune the premise and consequence parameters of the extracted SFCM rules. The average classifier discrimination ratio for eight different facial gestures (smiling, frowning, pulling up the left/right lip corner, and eye movement left/right/up/down) is between 93.04% and 96.99%, depending on the combination and fusion of logical features. Experimental results show that the proposed interface has a high degree of accuracy and robustness in discriminating these 8 fundamental facial gestures. Some potential further capabilities of our approach in human-machine interfaces are also discussed. (author)

  16. Enhancement of the Adaptive Shape Variants Average Values by Using Eight Movement Directions for Multi-Features Detection of Facial Sketch

    Directory of Open Access Journals (Sweden)

    Arif Muntasa

    2012-04-01

    Full Text Available This paper aims to detect multiple features of a facial sketch using a novel approach. The detection of multiple facial sketch features has been studied by several researchers, but mainly with frontal face sketches as object samples. In fact, detecting facial sketch features at an angle is very important for assisting police in describing a criminal’s face when the face appears only at a certain angle. Integration of maximum line gradient value enhancement and level set methods was implemented to detect facial feature sketches tilted at angles of up to 15 degrees. However, these methods tend to move towards non-features when there is a lot of graffiti around the shape. To overcome this weakness, the author proposes a novel approach that moves the shape by adding a parameter to control the movement, based on enhancement of the adaptive shape variants' average values with 8 movement directions. The experimental results show that the proposed method can improve the detection accuracy up to 92.74%.

  17. Application of LBP information of feature-points in facial expression recognition

    Institute of Scientific and Technical Information of China (English)

    刘伟锋; 王延江

    2009-01-01

    A facial expression recognition method based on the Local Binary Pattern (LBP) information of feature points is proposed. First, the LBP feature in facial expression recognition is analysed. Then, feature points around the eyes in the upper face and around the mouth in the lower face, which hold rich expression information, are selected, and the LBP information of each feature point's neighbourhood is computed as the expression feature for recognition. Experimental results show that the proposed method does not require face pre-registration and that the feature-point LBP information is more favourable for facial expression recognition than traditional LBP features.

  18. Facial Sketch Synthesis Using 2D Direct Combined Model-Based Face-Specific Markov Network.

    Science.gov (United States)

    Tu, Ching-Ting; Chan, Yu-Hsien; Chen, Yi-Chung

    2016-08-01

    A facial sketch synthesis system is proposed, featuring a 2D direct combined model (2DDCM)-based face-specific Markov network. In contrast to the existing facial sketch synthesis systems, the proposed scheme aims to synthesize sketches, which reproduce the unique drawing style of a particular artist, where this drawing style is learned from a data set consisting of a large number of image/sketch pairwise training samples. The synthesis system comprises three modules, namely, a global module, a local module, and an enhancement module. The global module applies a 2DDCM approach to synthesize the global facial geometry and texture of the input image. The detailed texture is then added to the synthesized sketch in a local patch-based manner using a parametric 2DDCM model and a non-parametric Markov random field (MRF) network. Notably, the MRF approach gives the synthesized results an appearance more consistent with the drawing style of the training samples, while the 2DDCM approach enables the synthesis of outcomes with a more derivative style. As a result, the similarity between the synthesized sketches and the input images is greatly improved. Finally, a post-processing operation is performed to enhance the shadowed regions of the synthesized image by adding strong lines or curves to emphasize the lighting conditions. The experimental results confirm that the synthesized facial images are in good qualitative and quantitative agreement with the input images as well as the ground-truth sketches provided by the same artist. The representing power of the proposed framework is demonstrated by synthesizing facial sketches from input images with a wide variety of facial poses, lighting conditions, and races even when such images are not included in the training data set. Moreover, the practical applicability of the proposed framework is demonstrated by means of automatic facial recognition tests. PMID:27244737

  19. A novel three-dimensional smile analysis based on dynamic evaluation of facial curve contour

    OpenAIRE

    Yi Lin; Han Lin; Qiuping Lin; Jinxin Zhang; Ping Zhu; Yao Lu; Zhi Zhao; Jiahong Lv; Mln Kyeong Lee; Yue Xu

    2016-01-01

    The influence of three-dimensional facial contour and of dynamic evaluation decoding on factors of smile esthetics is essential for facial beauty improvement. However, the kinematic features of the facial smile contour and the contributions of the soft tissue and underlying skeleton are uncharted. Here, the cheekbone-maxilla contour and the nasolabial fold were combined into a “smile contour” that delineates the overall facial topography emerging prominently in smiling. We screened out the stable and un...

  20. Local Illumination Normalization and Facial Feature Point Selection for Robust Face Recognition

    Directory of Open Access Journals (Sweden)

    Song HAN

    2013-03-01

    Full Text Available Face recognition systems must be robust to variation in factors such as facial expression, illumination, head pose and aging. In particular, robustness against illumination variation is one of the most important problems to be solved for the practical use of face recognition systems. Gabor wavelets are widely used in face detection and recognition because they make it possible to simulate the function of the human visual system. In this paper, we propose a method for extracting Gabor wavelet features that is stable under variation of local illumination, and we show experimental results demonstrating its effectiveness.
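A minimal sketch of Gabor-wavelet feature extraction of the kind described, using real-valued kernels at four orientations; all parameter values (kernel size, sigma, wavelength, aspect ratio) are assumptions for illustration:

```python
import numpy as np

def gabor_kernel(ksize=15, sigma=3.0, theta=0.0, lambd=6.0, gamma=0.5):
    """Real Gabor kernel: a Gaussian envelope modulating a cosine carrier
    oriented at angle theta with wavelength lambd."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return (np.exp(-(xr ** 2 + gamma ** 2 * yr ** 2) / (2 * sigma ** 2))
            * np.cos(2 * np.pi * xr / lambd))

def gabor_features(img, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Correlate the image with one kernel per orientation (valid mode)
    and return the mean absolute response per orientation."""
    img = np.asarray(img, float)
    feats = []
    for th in thetas:
        k = gabor_kernel(theta=th)
        kh, kw = k.shape
        H, W = img.shape
        resp = np.zeros((H - kh + 1, W - kw + 1))
        for i in range(resp.shape[0]):
            for j in range(resp.shape[1]):
                resp[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
        feats.append(np.abs(resp).mean())
    return np.array(feats)
```

The orientation selectivity is easy to check: an image of vertical stripes whose period matches the carrier wavelength responds far more strongly to the 0-degree kernel than to the 90-degree one.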

  1. 3D Facial Similarity Measure Based on Geodesic Network and Curvatures

    Directory of Open Access Journals (Sweden)

    Junli Zhao

    2014-01-01

    Full Text Available Automated 3D facial similarity measurement is a challenging and valuable research topic in anthropology and computer graphics. It is widely used in various fields, such as criminal investigation, kinship confirmation, and face recognition. This paper proposes a 3D facial similarity measure based on a combination of geodesic and curvature features. Firstly, a geodesic network is generated for each face, with geodesics and iso-geodesics determined, and these network points are adopted as the correspondence across face models. Then, four curvature-related metrics, namely the mean curvature, Gaussian curvature, shape index, and curvedness, are computed for each network point using a weighted average of its neighborhood points. Finally, correlation coefficients according to these metrics are computed as the similarity measures between two 3D face models. Experiments on 3D facial models of different persons and on different 3D facial models of the same person were conducted and compared with a subjective face-similarity study. The results show that the geodesic network plays an important role in 3D facial similarity measurement. The similarity measure defined by the shape index is basically consistent with humans' subjective evaluation, and it can measure 3D face similarity more objectively than the other indices.
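Two of the steps in this record, the shape index computed from principal curvatures and the correlation-coefficient similarity over corresponding network points, can be sketched as follows (one common shape-index convention is assumed; the paper may use a different scaling):

```python
import numpy as np

def shape_index(k1, k2):
    """Shape index from principal curvatures with k1 >= k2, mapped to
    [-1, 1]: +1 for a spherical cap, 0 for a saddle, -1 for a cup.
    arctan2 handles the umbilic case k1 == k2 without division by zero."""
    k1 = np.asarray(k1, float)
    k2 = np.asarray(k2, float)
    return (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)

def facial_similarity(si_a, si_b):
    """Similarity of two faces as the Pearson correlation of their
    shape-index values sampled at corresponding geodesic-network points."""
    return float(np.corrcoef(si_a, si_b)[0, 1])
```

Identical shape-index samples give a correlation of 1, and unrelated faces drift toward 0, which matches the way the paper turns per-point curvature metrics into a single similarity score.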

  2. Facial expression recognition using biologically inspired features and SVM

    Institute of Scientific and Technical Information of China (English)

    穆国旺; 王阳; 郭蔚

    2014-01-01

    C1 features are introduced to facial expression recognition for static images, and a new expression recognition algorithm based on biologically inspired features (BIFs) and SVM is proposed. C1 features of the facial images are extracted, the PCA+LDA method is used to reduce the dimensionality of these features, and an SVM is used for classification of the expression. Experimental results on the JAFFE and Extended Cohn-Kanade (CK+) facial expression databases show that the algorithm achieves a high recognition rate and is an effective facial expression recognition method.

  3. Facial Expression Recognition based on Independent Component Analysis

    OpenAIRE

    XiaoHui Guo; Xiao Zhang; Chao Deng; Jianyu Wei

    2013-01-01

    As an important part of artificial intelligence and pattern recognition, facial expression recognition has drawn much attention recently and numerous methods have been proposed. Feature extraction is the most important part which directly affects the final recognition results. Independent component analysis (ICA) is a subspace analysis method, which is also a novel statistical technique in signal processing and machine learning that aims at finding linear projections of the data that maximize...

  4. Ultrasonographic evaluation of fetal facial anatomy (Ⅰ): ultrasonographic features of normal fetal face in vitro study

    Institute of Scientific and Technical Information of China (English)

    李胜利; 陈琮瑛; 刘菊玲; 欧阳淑媛

    2004-01-01

    Background: Because skills in scanning the normal fetal facial structures and recognizing their corresponding ultrasonic features are often lacking, misdiagnoses frequently occur. We therefore studied the appearance features and improved the display skills for fetal facial anatomy in order to provide a basis for prenatal diagnosis. Methods: Twenty fetuses with normal facial anatomy, from labor induced because of malformations other than facial anomalies, were immersed in a water bath and scanned ultrasonographically on coronal, sagittal and transverse planes to define the ultrasonic image features of normal anatomy. The coronal and sagittal planes obtained from the submandibular triangle were used in particular for displaying the soft and hard palate. Results: Facial anatomic structures of the fetus can be clearly displayed through the three routine orthogonal planes. However, the soft and hard palate can be displayed only on the planes obtained from the submandibular triangle. Conclusions: The superficial soft tissues and deep bony structures of the fetal face can be recognized and evaluated by routine ultrasonographic images, which is a reliable prenatal diagnostic technique for evaluating fetal facial anatomy. The soft and hard palate can be well demonstrated by the submandibular triangle approach.

  5. Fixation to features and neural processing of facial expressions in a gender discrimination task.

    Science.gov (United States)

    Neath, Karly N; Itier, Roxane J

    2015-10-01

    Early face encoding, as reflected by the N170 ERP component, is sensitive to fixation to the eyes. We investigated whether this sensitivity varies with facial expressions of emotion and can also be seen on other ERP components such as the P1 and EPN. Using eye-tracking to manipulate fixation on facial features, we found the N170 to be the only eye-sensitive component, and this was true for fearful, happy and neutral faces. A different effect of fixation to features was seen for the earlier P1 that likely reflected general sensitivity to face position. An early effect of emotion (∼120 ms) for happy faces was seen at occipital sites and was sustained until ∼350 ms post-stimulus. For fearful faces, an early effect was seen around 80 ms, followed by a later effect from ∼150 ms until ∼300 ms at lateral posterior sites. Results suggest that in this emotion-irrelevant gender discrimination task, processing of fearful and happy expressions occurred early and largely independently of the eye-sensitivity indexed by the N170. Processing of the two emotions involved different underlying brain networks active at different times. PMID:26277653

  6. Feature Extraction based Face Recognition, Gender and Age Classification

    OpenAIRE

    Venugopal K R2; L M Patnaik; Ramesha K; K B Raja

    2010-01-01

    Face recognition systems for personal identification normally attain good accuracy with large training sets. In this paper, we propose a Feature Extraction based Face Recognition, Gender and Age Classification (FEBFRGAC) algorithm that requires only small training sets and yields good results even with one image per person. The process involves three stages: pre-processing, feature extraction and classification. The geometric features of facial images like eyes, nose, mouth etc. are loc...

  7. Using Computers for Assessment of Facial Features and Recognition of Anatomical Variants that Result in Unfavorable Rhinoplasty Outcomes

    Directory of Open Access Journals (Sweden)

    Tarik Ozkul

    2008-04-01

    Full Text Available Rhinoplasty and facial plastic surgery are among the most frequently performed surgical procedures in the world. Although the underlying anatomical features of the nose and face are very well known, performing a successful facial surgery requires not only surgical skill but also aesthetic talent from the surgeon. Sculpting facial features surgically in correct proportions to arrive at an aesthetically pleasing result is highly difficult. To further complicate the matter, some patients may have anatomical features that affect the rhinoplasty outcome negatively. If they go undetected, these anatomical variants jeopardize the surgery, causing unexpected rhinoplasty outcomes. In this study, a model is developed with the aid of artificial intelligence tools, which analyses the facial features of the patient from a photograph and generates an index of "appropriateness" of the facial features and an index of the existence of anatomical variants that affect rhinoplasty negatively. The software tool developed is intended to detect the variants and warn the surgeon before the surgery. Another purpose of the tool is to generate an objective score to assess the outcome of the surgery.

  8. Data-driven facial animation based on manifold Bayesian regression

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Driving facial animation based on tens of tracked markers is a challenging task due to the complex topology and to the non-rigid nature of human faces. We propose a solution named manifold Bayesian regression. First a novel distance metric, the geodesic manifold distance, is introduced to replace the Euclidean distance. The problem of facial animation can be formulated as a sparse warping kernels regression problem, in which the geodesic manifold distance is used for modelling the topology and discontinuities of the face models. The geodesic manifold distance can be adopted in traditional regression methods, e.g. radial basis functions without much tuning. We put facial animation into the framework of Bayesian regression. Bayesian approaches provide an elegant way of dealing with noise and uncertainty. After the covariance matrix is properly modulated, Hybrid Monte Carlo is used to approximate the integration of probabilities and get deformation results. The experimental results showed that our algorithm can robustly produce facial animation with large motions and complex face models.
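
    The core idea, replacing Euclidean distance with a geodesic distance inside a warping kernel, can be sketched with a shortest-path distance over a marker graph and a simple kernel-weighted average (a simplification of the paper's Bayesian regression; the toy chain graph and the Nadaraya-Watson estimator are assumptions for illustration):

```python
import heapq
import math

def dijkstra(adj, src):
    # Geodesic (shortest-path) distances from src over a weighted graph.
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, math.inf):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def rbf_interpolate(adj, known, query, gamma=1.0):
    # Kernel-weighted average of known marker values, with an RBF kernel
    # evaluated on geodesic rather than Euclidean distance.
    dist = dijkstra(adj, query)
    num = den = 0.0
    for node, value in known.items():
        w = math.exp(-gamma * dist[node] ** 2)
        num += w * value
        den += w
    return num / den

# Toy chain of marker nodes; geodesic distance follows the mesh topology
# rather than cutting straight across discontinuities such as the lips.
adj = {
    0: [(1, 1.0)], 1: [(0, 1.0), (2, 1.0)],
    2: [(1, 1.0), (3, 1.0)], 3: [(2, 1.0), (4, 1.0)],
    4: [(3, 1.0)],
}
known = {0: 0.0, 4: 4.0}  # displacements at two tracked markers
print(rbf_interpolate(adj, known, 2))
```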

  9. Gender Recognition Based on Sift Features

    CERN Document Server

    Yousefi, Sahar

    2011-01-01

    This paper proposes a robust approach for face detection and gender classification in color images. Previous research on gender recognition requires a computationally expensive and time-consuming pre-processing step in which face images are aligned so that facial landmarks such as the eyes, nose, lips and chin are placed in uniform locations in the image. In this paper, a novel technique based on mathematical analysis is presented in three stages that eliminates the alignment step. First, a new color-based face detection method is presented, with better results and more robustness in complex backgrounds. Next, features that are invariant to affine transformations are extracted from each face using the scale invariant feature transform (SIFT) method. To evaluate the performance of the proposed algorithm, experiments have been conducted by employing an SVM classifier on a database of face images which contains 500 images from distinct people with an equal ratio of males and females.

  10. A stable biologically motivated learning mechanism for visual feature extraction to handle facial categorization.

    Directory of Open Access Journals (Sweden)

    Karim Rajaei

    Full Text Available The brain mechanism of extracting visual features for recognizing various objects has consistently been a controversial issue in computational models of object recognition. To extract visual features, we introduce a new, biologically motivated model for facial categorization, which is an extension of the Hubel and Wiesel simple-to-complex cell hierarchy. To address the synaptic stability versus plasticity dilemma, we apply Adaptive Resonance Theory (ART) for extracting informative intermediate-level visual features during the learning process, which also makes this model stable against the destruction of previously learned information while learning new information. Such a mechanism has been suggested to be embedded within known laminar microcircuits of the cerebral cortex. To reveal the strength of the proposed visual feature learning mechanism, we show that when we use this mechanism in the training process of a well-known biologically motivated object recognition model (the HMAX model), it performs better than the HMAX model in face/non-face classification tasks. Furthermore, we demonstrate that our proposed mechanism is capable of following similar trends in performance as humans in a psychophysical experiment using a face versus non-face rapid categorization task.

  11. Efficient Web-based Facial Recognition System Employing 2DHOG

    CERN Document Server

    Abdelwahab, Moataz M; Yousry, Islam

    2012-01-01

    In this paper, a system for facial recognition to identify missing and found people in Hajj and Umrah is described as a web portal. Explicitly, we present a novel algorithm for recognition and classification of facial images based on applying 2DPCA to a 2D representation of the histogram of oriented gradients (2D-HOG), which maintains the spatial relation between pixels of the input images. This algorithm allows a compact representation of the images which reduces the computational complexity and the storage requirements, while maintaining the highest reported recognition accuracy. This makes the method suitable for use with very large datasets. A large dataset was collected for people in Hajj. Experimental results employing the ORL, UMIST, JAFFE, and HAJJ datasets confirm these excellent properties.
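
    The 2DPCA step operates directly on image matrices rather than on flattened vectors: it accumulates an image scatter matrix and projects each image onto its dominant eigenvectors. A minimal sketch under that reading (the tiny two-pixel-wide "images" and the power-iteration eigensolver are illustrative, not the paper's implementation, and the 2D-HOG stage is omitted):

```python
def transpose(m):
    return [list(row) for row in zip(*m)]

def matmul(a, b):
    cols = transpose(b)
    return [[sum(x * y for x, y in zip(row, col)) for col in cols] for row in a]

def power_iteration(m, steps=100):
    # Dominant eigenvector of a small symmetric matrix.
    v = [1.0] * len(m)
    for _ in range(steps):
        w = [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

def image_scatter(images):
    # 2DPCA scatter G = mean over images of (A - mean)^T (A - mean);
    # images are treated as matrices and never flattened to vectors.
    n_imgs, n_rows, n_cols = len(images), len(images[0]), len(images[0][0])
    mean = [[sum(img[r][c] for img in images) / n_imgs for c in range(n_cols)]
            for r in range(n_rows)]
    g = [[0.0] * n_cols for _ in range(n_cols)]
    for img in images:
        d = [[img[r][c] - mean[r][c] for c in range(n_cols)] for r in range(n_rows)]
        dtd = matmul(transpose(d), d)
        for i in range(n_cols):
            for j in range(n_cols):
                g[i][j] += dtd[i][j] / n_imgs
    return g

# Two tiny "images" whose variation lies entirely in the first column.
images = [[[1.0, 0.0], [1.0, 0.0]],
          [[-1.0, 0.0], [-1.0, 0.0]]]
axis = power_iteration(image_scatter(images))
print(axis)  # projection axis aligned with the varying column
```

Projecting each image matrix onto the top few such axes yields the compact representation the abstract refers to.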

  12. MRI-based diagnostic imaging of the intratemporal facial nerve

    International Nuclear Information System (INIS)

    Detailed imaging of the five sections of the full intratemporal course of the facial nerve can be achieved by MRI using thin tomographic section techniques and surface coils. Contrast media are required for tomographic imaging of pathological processes. Established methods are available for diagnostic evaluation of cerebellopontine angle tumors and chronic Bell's palsy, as well as hemifacial spasms. A method still under discussion is MRI for diagnostic evaluation of Bell's palsy in the presence of fractures of the petrous bone, when blood volumes in the petrous bone make evaluation even more difficult. MRI-based diagnostic evaluation of idiopathic facial paralysis is currently subject to change. Its usual application cannot be recommended for routine evaluation at present. However, a quantitative analysis of contrast medium uptake of the nerve may be an approach to improve the prognostic value of MRI in acute phases of Bell's palsy. (orig./CB)

  13. Facial expression recognition based on improved DAGSVM

    Science.gov (United States)

    Luo, Yuan; Cui, Ye; Zhang, Yi

    2014-11-01

    To address the cumulative error caused by the random ordering of classifiers in traditional DAGSVM (Directed Acyclic Graph Support Vector Machine) classification, this paper presents an improved DAGSVM expression recognition method. The method uses the between-class distance and the standard deviation as the measure for ordering classifiers, which minimizes the error rate in the upper levels of the classification structure. At the same time, expression features are extracted by combining the discrete cosine transform (DCT) with local binary patterns (LBP), and these features are the input to the improved DAGSVM classifier for recognition. Experimental results show that, compared with other multi-class support vector machine methods, the improved DAGSVM classifier achieves a higher recognition rate. When used on an intelligent wheelchair platform, experiments show that the method also has better robustness.
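
    The LBP half of the feature extraction can be sketched in a few lines (a minimal 8-neighbour LBP on a toy array; the block partitioning and the DCT stage described in the abstract are omitted):

```python
def lbp_code(img, r, c):
    # 8-neighbour Local Binary Pattern: threshold each neighbour at the
    # centre value and read the resulting bits clockwise from the top-left.
    centre = img[r][c]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= centre:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    # 256-bin histogram over all interior pixels; expression pipelines
    # typically concatenate such histograms over image blocks.
    hist = [0] * 256
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            hist[lbp_code(img, r, c)] += 1
    return hist

img = [[10, 10, 10],
       [10, 20, 10],
       [10, 10, 10]]
print(lbp_code(img, 1, 1))  # all neighbours darker than the centre -> 0
```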

  14. Facial expression recognition on a people-dependent personal facial expression space (PFES)

    Science.gov (United States)

    Chandrasiri, N. P.; Park, Min Chul; Naemura, Takeshi; Harashima, Hiroshi

    2000-04-01

    In this paper, a person-specific facial expression recognition method based on a Personal Facial Expression Space (PFES) is presented. Multidimensional scaling maps facial images to points in the lower-dimensional PFES. The PFES reflects the individuality of facial expressions, as it is based on peak-instant facial expression images of a specific person. In constructing the PFES for a person, his/her whole normalized facial image is considered as a single pattern without block segmentation, and differences of 2-D DCT coefficients from the neutral facial image of the same person are used as features. Therefore, in the early part of the paper, separation characteristics of facial expressions in the frequency domain are analyzed using a still facial image database which consists of neutral, smile, anger, surprise and sadness facial images for each of 60 Japanese males (300 facial images). Results show that facial expression categories are well separated in the low-frequency domain. The PFES is constructed using multidimensional scaling by taking these low-frequency differences of 2-D DCT coefficients as features. On the PFES, the trajectory of a facial image sequence of a person can be calculated in real time. Based on this trajectory, facial expressions can be recognized. Experimental results show the effectiveness of this method.
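
    The 2-D DCT features this method builds on can be sketched with a naive DCT-II; the paper keeps only the low-frequency coefficients and uses their differences from the neutral face (the flat 4x4 block below is illustrative):

```python
import math

def dct2(block):
    # Naive orthonormal 2-D DCT-II of a square block. The paper's features
    # are differences of such coefficients between an expression image and
    # the same person's neutral face, restricted to low frequencies.
    n = len(block)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos(math.pi * (2 * x + 1) * u / (2 * n))
                          * math.cos(math.pi * (2 * y + 1) * v / (2 * n)))
            cu = math.sqrt(1 / n) if u == 0 else math.sqrt(2 / n)
            cv = math.sqrt(1 / n) if v == 0 else math.sqrt(2 / n)
            out[u][v] = cu * cv * s
    return out

flat = [[5.0] * 4 for _ in range(4)]
coeffs = dct2(flat)
print(round(coeffs[0][0], 4))  # DC term carries all the energy of a flat block
```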

  15. Feature-Based Attention and Feature-Based Expectation.

    Science.gov (United States)

    Summerfield, Christopher; Egner, Tobias

    2016-06-01

    Foreknowledge of target stimulus features improves visual search performance as a result of 'feature-based attention' (FBA). Recent studies have reported that 'feature-based expectation' (FBE) also heightens decision sensitivity. Superficially, it appears that the latter work has simply rediscovered (and relabeled) the effects of FBA. However, this is not the case. Here we explain why. PMID:27079632

  16. An Experimental Investigation about the Integration of Facial Dynamics in Video-Based Face Recognition

    OpenAIRE

    Hadid, Abdenour; Pietikäinen, Matti

    2005-01-01

    Recent psychological and neural studies indicate that when people talk their changing facial expressions and head movements provide a dynamic cue for recognition. Therefore, both fixed facial features and dynamic personal characteristics are used in the human visual system (HVS) to recognize faces. However, most automatic recognition systems use only the static information as it is unclear how the dynamic cue can be integrated and exploited. The few works attempting to combine facial structur...

  17. Content-based Image Retrieval Using Constrained Independent Component Analysis: Facial Image Retrieval Based on Compound Queries

    OpenAIRE

    Kim, Tae-Seong; Ahmed, Bilal

    2008-01-01

    In this work, we have proposed a new technique of facial image retrieval based on constrained ICA. Our technique requires no offline learning, pre-processing, or feature extraction. The system has been designed so that none of the user-provided information is lost, and in turn more semantically accurate images can be retrieved. As our future work, we would like to test the system in other domains such as the retrieval of chest x-rays and CT images.

  18. Perceptually Valid Facial Expressions for Character-Based Applications

    OpenAIRE

    Ali Arya; Steve DiPaola; Avi Parush

    2009-01-01

    This paper addresses the problem of creating facial expression of mixed emotions in a perceptually valid way. The research has been done in the context of a “game-like” health and education applications aimed at studying social competency and facial expression awareness in autistic children as well as native language learning, but the results can be applied to many other applications such as games with need for dynamic facial expressions or tools for automating the creation of facial animatio...

  19. Facial Expression Recognition Based on Principal Component Analysis (PCA)

    OpenAIRE

    2014-01-01

    Facial expression recognition is a visual task that can be performed without discomfort to the subject, and it is a rapidly growing area of computer research. Many applications and programs use facial expressions to evaluate human character, judgment, feelings, and viewpoint. Recognizing facial expressions is a hard task due to several circumstances such as facial occlusions, face shape, illumination, face color, etc. This paper presents a PCA methodology to ...
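
    A PCA feature extractor of the kind this abstract describes can be sketched with a covariance matrix and power iteration for the first principal component (a generic sketch, not the paper's implementation; the 2-D toy data stands in for flattened face images):

```python
def pca_first_component(data, steps=200):
    # First principal component via power iteration on the sample covariance.
    n, d = len(data), len(data[0])
    mean = [sum(row[i] for row in data) / n for i in range(d)]
    centred = [[row[i] - mean[i] for i in range(d)] for row in data]
    cov = [[sum(r[i] * r[j] for r in centred) / n for j in range(d)]
           for i in range(d)]
    v = [1.0] * d
    for _ in range(steps):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Points spread mainly along y = x; the first component is roughly (0.71, 0.71).
data = [[1, 1], [2, 2.1], [3, 2.9], [4, 4.05], [0, -0.1]]
print(pca_first_component(data))
```

Projecting images onto the top few such components gives the compact "eigenface"-style features that PCA-based expression recognizers classify.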

  20. Facial Image Analysis Based on Local Binary Patterns: A Survey

    NARCIS (Netherlands)

    Huang, D.; Shan, C.; Ardebilian, M.; Chen, L.

    2011-01-01

    Facial image analysis, including face detection, face recognition, facial expression analysis, facial demographic classification, and so on, is an important and interesting research topic in the computer vision and image processing area, which has many important applications such as human-computer

  1. Facial Video based Detection of Physical Fatigue for Maximal Muscle Activity

    DEFF Research Database (Denmark)

    Haque, Mohammad Ahsanul; Irani, Ramin; Nasrollahi, Kamal; Moeslund, Thomas B.

    2016-01-01

    Physical fatigue reveals the health condition of a person at, for example, a health checkup, fitness assessment or rehabilitation training. This paper presents an efficient noncontact system for detecting non-localized physical fatigue from maximal muscle activity using facial videos acquired in a realistic environment with natural lighting, where subjects were allowed to voluntarily move their head, change their facial expression, and vary their pose. The proposed method utilizes a facial feature point tracking method that combines 'Good features to track' and a 'Supervised descent method' to address...

  2. Automated Facial Expression Recognition Using Gradient-Based Ternary Texture Patterns

    OpenAIRE

    Faisal Ahmed; Emam Hossain

    2013-01-01

    Recognition of human expressions from facial images is an interesting research area, which has received increasing attention in recent years. A robust and effective facial feature descriptor is the key to designing a successful expression recognition system. Although much progress has been made, deriving a face feature descriptor that can perform consistently under changing environments is still a difficult and challenging task. In this paper, we present the gradient local ternary pattern (G...

  3. Personality Trait and Facial Expression Filter-Based Brain-Computer Interface

    OpenAIRE

    Seongah Chin; Chung-Yeon Lee

    2013-01-01

    In this paper, we present technical approaches that bridge the gap in the research related to the use of brain‐computer interfaces for entertainment and facial expressions. Such facial expressions that reflect an individual’s personal traits can be used to better realize artificial facial expressions in a gaming environment based on a brain‐computer interface. First, an emotion extraction filter is introduced in order to classify emotions on the basis of the users’ brain signals in real time....

  4. MaqFACS: A Muscle-Based Facial Movement Coding System for the Rhesus Macaque

    OpenAIRE

    Parr, L. A.; Waller, B.M.; Burrows, A.M.; Gothard, K.M.; Vick, S.J.

    2010-01-01

    Over 125 years ago, Charles Darwin suggested that the only way to fully understand the form and function of human facial expression was to make comparisons to other species. Nevertheless, it has been only recently that facial expressions in humans and related primate species have been compared using systematic, anatomically-based techniques. Through this approach, large scale evolutionary and phylogenetic analyses of facial expressions, including their homology, can now be addressed. Here, th...

  5. Hierarchical Spatio-Temporal Probabilistic Graphical Model with Multiple Feature Fusion for Binary Facial Attribute Classification in Real-World Face Videos.

    Science.gov (United States)

    Demirkus, Meltem; Precup, Doina; Clark, James J; Arbel, Tal

    2016-06-01

    Recent literature shows that facial attributes, i.e., contextual facial information, can be beneficial for improving the performance of real-world applications, such as face verification, face recognition, and image search. Examples of face attributes include gender, skin color, facial hair, etc. How to robustly obtain these facial attributes (traits) is still an open problem, especially in the presence of the challenges of real-world environments: non-uniform illumination conditions, arbitrary occlusions, motion blur and background clutter. What makes this problem even more difficult is the enormous variability presented by the same subject, due to arbitrary face scales, head poses, and facial expressions. In this paper, we focus on the problem of facial trait classification in real-world face videos. We have developed a fully automatic hierarchical and probabilistic framework that models the collective set of frame class distributions and feature spatial information over a video sequence. The experiments are conducted on a large real-world face video database that we have collected, labelled and made publicly available. The proposed method is flexible enough to be applied to any facial classification problem. Experiments on the large, real-world video database McGillFaces [1] of 18,000 video frames reveal that the proposed framework outperforms alternative approaches by up to 16.96% and 10.13% for the facial attributes of gender and facial hair, respectively. PMID:26415152

  6. Facial action detection using block-based pyramid appearance descriptors

    NARCIS (Netherlands)

    Jiang, Bihan; Valstar, Michel F.; Pantic, Maja

    2012-01-01

    Facial expression is one of the most important non-verbal behavioural cues in social signals. Constructing an effective face representation from images is an essential step for successful facial behaviour analysis. Most existing face descriptors operate on the same scale, and do not leverage coarse

  7. Quantitative assessment of the facial features of a Mexican population dataset.

    Science.gov (United States)

    Farrera, Arodi; García-Velasco, Maria; Villanueva, Maria

    2016-05-01

    The present study describes the morphological variation of a large database of facial photographs. The database comprises frontal (386 females, 764 males) and lateral (312 females, 666 males) images of Mexican individuals aged 14-69 years that were obtained under controlled conditions. We used geometric morphometric methods and multivariate statistics to describe the phenotypic variation within the dataset as well as the variation across sex and age groups. In addition, we explored the correlation between facial traits in both views. We found a spectrum of variation that encompasses broad and narrow faces. In frontal view, narrow faces are associated with a longer nose, a thinner upper lip, a shorter lower face and a longer upper face than broader faces. In lateral view, antero-posteriorly shortened faces are associated with a longer profile and a shortened helix compared with longer faces. Sexual dimorphism is found in all age groups except for individuals above 39 years old in lateral view. Likewise, age-related changes are significant for both sexes, except for females above 29 years old in both views. Finally, we observed that the pattern of covariation between views differs in males and females, mainly in the thickness of the upper lip and the angle of the facial profile and the auricle. The results of this study could contribute to forensic practice as a complement for the construction of biological profiles, for example, to improve facial reconstruction procedures. PMID:27017173
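
    Geometric morphometric pipelines of this kind typically begin with Procrustes superimposition, which removes location, scale and rotation from landmark configurations so that only shape remains. A minimal 2-D sketch (the square of landmarks is illustrative; the paper's actual landmark scheme is not specified here):

```python
import math

def centre_scale(pts):
    # Remove location and size: translate to the centroid and scale to
    # unit centroid size (standard geometric-morphometrics normalisation).
    n = len(pts)
    cx = sum(p[0] for p in pts) / n
    cy = sum(p[1] for p in pts) / n
    moved = [(x - cx, y - cy) for x, y in pts]
    size = math.sqrt(sum(x * x + y * y for x, y in moved))
    return [(x / size, y / size) for x, y in moved]

def procrustes_align(ref, shape):
    # Rotate the normalised shape to minimise squared distance to the
    # normalised reference; in 2-D the optimal angle has a closed form.
    a, b = centre_scale(ref), centre_scale(shape)
    num = sum(ay * bx - ax * by for (ax, ay), (bx, by) in zip(a, b))
    den = sum(ax * bx + ay * by for (ax, ay), (bx, by) in zip(a, b))
    t = math.atan2(num, den)
    c, s = math.cos(t), math.sin(t)
    return [(c * x - s * y, s * x + c * y) for x, y in b]

# A square of landmarks, and the same square rotated, scaled and shifted.
ref = [(0, 0), (1, 0), (1, 1), (0, 1)]
th = math.radians(30)
shape = [(2 * (x * math.cos(th) - y * math.sin(th)) + 5,
          2 * (x * math.sin(th) + y * math.cos(th)) + 3) for x, y in ref]
aligned = procrustes_align(ref, shape)
residual = sum((ax - bx) ** 2 + (ay - by) ** 2
               for (ax, ay), (bx, by) in zip(centre_scale(ref), aligned))
print(residual)  # near zero: the similarity transform has been removed
```

After superimposition, the aligned coordinates can be fed to multivariate statistics exactly as the study describes.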

  8. Fingerprints, Iris and DNA Features based Multimodal Systems: A Review

    Directory of Open Access Journals (Sweden)

    Prakash Chandra Srivastava

    2013-01-01

    Full Text Available Biometric systems are alternatives to traditional identification systems. This paper provides an overview of single-feature and multiple-feature biometric systems, including the performance of physiological characteristics (such as fingerprint, hand geometry, head recognition, iris, retina, face recognition, DNA recognition, palm prints, heartbeat, finger veins, palates, etc.) and behavioral characteristics (such as body language, facial expression, signature verification, speech recognition, gait signature, etc.). Multimodal systems based on fingerprint, iris image, and DNA features and their performances are analyzed in terms of security, reliability, accuracy, and long-term stability. The strengths and weaknesses of the various multiple-feature biometric approaches published so far are analyzed. Directions of future research work for robust personal identification are outlined.

  9. On Facial Expression Recognition Based on SKLLE and SVM

    Institute of Scientific and Technical Information of China (English)

    晏勇

    2014-01-01

    In order to extract facial expression image features efficiently and to reduce the dimension of feature vectors, a novel dimension reduction and classification methodology based on supervised kernel locally linear embedding (SKLLE) and support vector machine (SVM) has been proposed. Nonlinear manifold structure information and label information are used to reduce dimension and extract low-dimensional embedding features for facial expression recognition. A support vector machine is used as the classifier instead of the K-nearest neighbor (KNN) classifier. Experiments with the JAFFE facial expression image database and the Cohn-Kanade AU-Coded facial expression database show that this method can reduce dimension effectively and achieve a relatively high recognition rate, improving the performance of facial expression recognition.

  10. Facial artery flaps in facial oncoplastic reconstruction.

    Science.gov (United States)

    Fabrizio, Tommaso

    2013-10-01

    The face is one of the most common sites for cutaneous cancer; it is well known that the face is the site of more than 50% of skin cancers. Nowadays, the principles of modern "oncoplasty" recommend the complete excision of the cancer and reconstruction that respects the cosmetic features of the face, in terms of good color, softness, and texture of the flaps utilized in cancer repair. The oncological and cosmetic results of facial reconstruction are strictly linked, and the modern plastic and reconstructive surgeon must respect both oncological and cosmetic aspects. For that reason, the best solution in facial cancer repair is the use of locoregional flaps based on the tributary vessels of the facial artery. Depending on the dimensions of the recipient area to repair, the retroangular flap (RAF) or the submental flap can be used. This article illustrates a very large, long-term case series dedicated to these flaps. PMID:24037925

  11. Appearance-based human gesture recognition using multimodal features for human computer interaction

    Science.gov (United States)

    Luo, Dan; Gao, Hua; Ekenel, Hazim Kemal; Ohya, Jun

    2011-03-01

    The use of gesture as a natural interface plays a crucial role in achieving intelligent Human Computer Interaction (HCI). Human gestures include different components of visual actions, such as motion of the hands, facial expression, and torso, to convey meaning. So far, in the field of gesture recognition, most previous work has focused on the manual component of gestures. In this paper, we present an appearance-based multimodal gesture recognition framework, which combines different groups of features, such as facial expression features and hand motion features, extracted from image frames captured by a single web camera. We consider 12 classes of human gestures with facial expressions, covering neutral, negative and positive meanings, from American Sign Language (ASL). We combine the features at two levels by employing two fusion strategies. At the feature level, an early feature combination is performed by concatenating and weighting different feature groups, and LDA is used to choose the most discriminative elements by projecting the features onto a discriminative expression space. The second strategy is applied at the decision level, where weighted decisions from single modalities are fused at a later stage. A condensation-based algorithm is adopted for classification. We collected a data set with three to seven recording sessions and conducted experiments with the combination techniques. Experimental results showed that facial analysis improves hand gesture recognition, and that decision-level fusion performs better than feature-level fusion.
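
    The two fusion strategies can be sketched in a few lines; the weights, class names and score values below are illustrative, and the LDA projection and condensation-based classifier from the paper are omitted:

```python
def feature_fusion(face_feats, hand_feats, w_face=0.5, w_hand=0.5):
    # Early (feature-level) fusion: weight each modality's feature vector
    # and concatenate into a single vector for one classifier.
    return [w_face * f for f in face_feats] + [w_hand * h for h in hand_feats]

def decision_fusion(scores_face, scores_hand, w_face=0.4, w_hand=0.6):
    # Late (decision-level) fusion: combine per-class scores produced by
    # separate classifiers, then pick the best fused class.
    fused = {cls: w_face * scores_face[cls] + w_hand * scores_hand[cls]
             for cls in scores_face}
    return max(fused, key=fused.get)

vec = feature_fusion([0.2, 0.9], [0.1, 0.4, 0.7])
print(vec)
label = decision_fusion({"positive": 0.6, "negative": 0.4},
                        {"positive": 0.3, "negative": 0.7})
print(label)
```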

  12. Cosmetics as a feature of the extended human phenotype: modulation of the perception of biologically important facial signals.

    Directory of Open Access Journals (Sweden)

    Nancy L Etcoff

    Full Text Available Research on the perception of faces has focused on the size, shape, and configuration of inherited features or the biological phenotype, and largely ignored the effects of adornment, or the extended phenotype. Research on the evolution of signaling has shown that animals frequently alter visual features, including color cues, to attract, intimidate or protect themselves from conspecifics. Humans engage in conscious manipulation of visual signals using cultural tools in real time rather than genetic changes over evolutionary time. Here, we investigate one tool, the use of color cosmetics. In two studies, we asked viewers to rate the same female faces with or without color cosmetics, and we varied the style of makeup from minimal (natural, to moderate (professional, to dramatic (glamorous. Each look provided increasing luminance contrast between the facial features and surrounding skin. Faces were shown for 250 ms or for unlimited inspection time, and subjects rated them for attractiveness, competence, likeability and trustworthiness. At 250 ms, cosmetics had significant positive effects on all outcomes. Length of inspection time did not change the effect for competence or attractiveness. However, with longer inspection time, the effect of cosmetics on likability and trust varied by specific makeup looks, indicating that cosmetics could impact automatic and deliberative judgments differently. The results suggest that cosmetics can create supernormal facial stimuli, and that one way they may do so is by exaggerating cues to sexual dimorphism. 
Our results provide evidence that judgments of facial trustworthiness and attractiveness are at least partially separable, that beauty has a significant positive effect on judgment of competence, a universal dimension of social cognition, but has a more nuanced effect on the other universal dimension of social warmth, and that the extended phenotype significantly influences perception of biologically important
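
    The luminance-contrast manipulation described above can be quantified with, for example, Michelson contrast (the choice of metric and the luminance values below are assumptions for illustration, not taken from the study):

```python
def michelson_contrast(feature_luminance, skin_luminance):
    # Michelson contrast (Lmax - Lmin) / (Lmax + Lmin) between a facial
    # feature (e.g. the lips) and the surrounding skin; heavier makeup
    # "looks" increase this value.
    lo, hi = sorted((feature_luminance, skin_luminance))
    return (hi - lo) / (hi + lo)

# Hypothetical luminance values on a 0-255 scale: bare lips vs. skin,
# then the same lips darkened by lipstick.
print(round(michelson_contrast(120, 160), 3))  # natural look
print(round(michelson_contrast(60, 160), 3))   # dramatic look, higher contrast
```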

  13. Cosmetics as a feature of the extended human phenotype: modulation of the perception of biologically important facial signals.

    Science.gov (United States)

    Etcoff, Nancy L; Stock, Shannon; Haley, Lauren E; Vickery, Sarah A; House, David M

    2011-01-01

  14. Cosmetics as a Feature of the Extended Human Phenotype: Modulation of the Perception of Biologically Important Facial Signals

    Science.gov (United States)

    Etcoff, Nancy L.; Stock, Shannon; Haley, Lauren E.; Vickery, Sarah A.; House, David M.

    2011-01-01

    Research on the perception of faces has focused on the size, shape, and configuration of inherited features or the biological phenotype, and largely ignored the effects of adornment, or the extended phenotype. Research on the evolution of signaling has shown that animals frequently alter visual features, including color cues, to attract, intimidate or protect themselves from conspecifics. Humans engage in conscious manipulation of visual signals using cultural tools in real time rather than genetic changes over evolutionary time. Here, we investigate one tool, the use of color cosmetics. In two studies, we asked viewers to rate the same female faces with or without color cosmetics, and we varied the style of makeup from minimal (natural), to moderate (professional), to dramatic (glamorous). Each look provided increasing luminance contrast between the facial features and surrounding skin. Faces were shown for 250 ms or for unlimited inspection time, and subjects rated them for attractiveness, competence, likeability and trustworthiness. At 250 ms, cosmetics had significant positive effects on all outcomes. Length of inspection time did not change the effect for competence or attractiveness. However, with longer inspection time, the effect of cosmetics on likability and trust varied by specific makeup looks, indicating that cosmetics could impact automatic and deliberative judgments differently. The results suggest that cosmetics can create supernormal facial stimuli, and that one way they may do so is by exaggerating cues to sexual dimorphism. 
Our results provide evidence that judgments of facial trustworthiness and attractiveness are at least partially separable, that beauty has a significant positive effect on judgment of competence, a universal dimension of social cognition, but has a more nuanced effect on the other universal dimension of social warmth, and that the extended phenotype significantly influences perception of biologically important signals at first

  15. Information based universal feature extraction

    Science.gov (United States)

    Amiri, Mohammad; Brause, Rüdiger

    2015-02-01

    In many real-world image-based pattern recognition tasks, the extraction and usage of task-relevant features are the most crucial part of the diagnosis. In the standard approach, they mostly remain task-specific, although humans who perform such a task always use the same image features, trained in early childhood. It seems that universal feature sets exist, but they have not yet been systematically found. In our contribution, we tried to find those universal image feature sets that are valuable for most image-related tasks. In our approach, we trained a neural network on natural and non-natural images of objects and background, using a Shannon information-based algorithm and learning constraints. The goal was to extract those features that give the most valuable information for the classification of visual objects and hand-written digits. This gives a good start and performance increase for all other image learning tasks, implementing a transfer learning approach. As a result, we found that we could indeed extract features which are valid in all three kinds of tasks.
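    As a rough illustration of the Shannon-information criterion named above, a histogram-based mutual-information score can rank features by how much class information they carry. The binning scheme and toy data below are illustrative assumptions, not the authors' network-based algorithm:

```python
import numpy as np

def mutual_information(feature, labels, bins=8):
    """Plug-in estimate of I(feature; label) in bits from a joint histogram.
    A rough sketch of a Shannon-information criterion; the binning scheme
    is an illustrative assumption."""
    n_classes = len(np.unique(labels))
    joint, _, _ = np.histogram2d(feature, labels, bins=(bins, n_classes))
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal over feature bins
    py = pxy.sum(axis=0, keepdims=True)   # marginal over classes
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 1000)
informative = labels + 0.3 * rng.standard_normal(1000)  # carries class info
noise = rng.standard_normal(1000)                       # independent of class
```

    A feature correlated with the class label scores markedly higher than pure noise under this measure, which is the ranking behaviour a universal feature selector needs.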

  16. Robust Facial Expression Recognition via Compressive Sensing

    OpenAIRE

    Shiqing Zhang; Xiaoming Zhao; Bicheng Lei

    2012-01-01

    Recently, compressive sensing (CS) has attracted increasing attention in the areas of signal processing, computer vision and pattern recognition. In this paper, a new method based on the CS theory is presented for robust facial expression recognition. The CS theory is used to construct a sparse representation classifier (SRC). The effectiveness and robustness of the SRC method are investigated on clean and occluded facial expression images. Three typical facial features, i.e., the raw pixels, ...
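    The SRC idea can be sketched as follows: represent a test sample over each class's training columns and pick the class with the smallest reconstruction residual. The sketch below substitutes ordinary least squares for the l1-minimization the CS theory prescribes, so it is an assumption-laden simplification, not the paper's method:

```python
import numpy as np

def src_classify(train, train_labels, test_sample):
    """Assign the class whose training columns reconstruct the test sample
    with the smallest residual. A least-squares simplification of the
    sparse representation classifier (SRC); the real SRC solves an
    l1-minimization over the whole training dictionary."""
    best_class, best_res = None, np.inf
    for c in np.unique(train_labels):
        A = train[:, train_labels == c]                    # class-c columns
        coef, *_ = np.linalg.lstsq(A, test_sample, rcond=None)
        res = np.linalg.norm(test_sample - A @ coef)
        if res < best_res:
            best_class, best_res = c, res
    return best_class

# toy dictionary: class 0 spans the first two axes, class 1 the last two
train = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
train_labels = np.array([0, 0, 1, 1])
```

    A test vector lying in the span of one class's columns is reconstructed with near-zero residual by that class and a large residual by the other, which drives the classification.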

  17. A genome-wide association scan in admixed Latin Americans identifies loci influencing facial and scalp hair features

    OpenAIRE

    Adhikari, Kaustubh; Fontanil, Tania; Cal, Santiago; Mendoza-Revilla, Javier; Fuentes-Guajardo, Macarena; Chacón-Duque, Juan-Camilo; Al-Saadi, Farah; Johansson, Jeanette A.; Quinto-Sanchez, Mirsha; Acuña-Alonzo, Victor; Jaramillo, Claudia; Arias, William; Barquera Lozano, Rodrigo; Macín Pérez, Gastón; Gómez-Valdés, Jorge

    2016-01-01

    We report a genome-wide association scan in over 6,000 Latin Americans for features of scalp hair (shape, colour, greying, balding) and facial hair (beard thickness, monobrow, eyebrow thickness). We found 18 signals of association reaching genome-wide significance (P values 5 × 10−8 to 3 × 10−119), including 10 novel associations. These include novel loci for scalp hair shape and balding, and the first reported loci for hair greying, monobrow, eyebrow and beard thickness. A newly identifi...

  19. Spontaneous Subtle Expression Detection and Recognition based on Facial Strain

    OpenAIRE

    Liong, Sze-Teng; See, John; Phan, Raphael Chung-Wei; Oh, Yee-Hui; Ngo, Anh Cat Le; Wong, KokSheik; Tan, Su-Wei

    2016-01-01

    Optical strain is an extension of optical flow that is capable of quantifying subtle changes on faces and representing the minute facial motion intensities at the pixel level. This is computationally essential for the relatively new field of spontaneous micro-expression, where subtle expressions can be technically challenging to pinpoint. In this paper, we present a novel method for detecting and recognizing micro-expressions by utilizing facial optical strain magnitudes to construct optical ...
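    Optical strain as described above is the symmetric part of the gradient of a dense optical-flow field; a minimal per-pixel magnitude computation (assuming the flow components (u, v) have already been estimated) might look like:

```python
import numpy as np

def optical_strain_magnitude(u, v):
    """Per-pixel optical strain magnitude from a dense flow field (u, v).
    Strain is the symmetric part of the flow gradient; this is a minimal
    sketch of the quantity the abstract describes."""
    du_dy, du_dx = np.gradient(u)          # rows are y, columns are x
    dv_dy, dv_dx = np.gradient(v)
    exx, eyy = du_dx, dv_dy                # normal strain components
    exy = 0.5 * (du_dy + dv_dx)            # shear strain component
    return np.sqrt(exx**2 + eyy**2 + 2.0 * exy**2)
```

    A uniform translation (constant flow) yields zero strain everywhere, while a stretching flow yields a positive magnitude; this is what lets strain isolate subtle facial deformation from rigid head motion.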

  20. Local Illumination Normalization and Facial Feature Point Selection for Robust Face Recognition

    OpenAIRE

    Han, Song; Jinsong KIM; Cholhun KIM; Jo, Jongchol

    2013-01-01

    Face recognition systems must be robust to variations in factors such as facial expression, illumination, head pose and aging. In particular, robustness against illumination variation is one of the most important problems to be solved for the practical use of face recognition systems. The Gabor wavelet is widely used in face detection and recognition because it makes it possible to simulate the function of the human visual system. In this paper, we propose a method for extracting Gab...

  1. CBFS: high performance feature selection algorithm based on feature clearness.

    Directory of Open Access Journals (Sweden)

    Minseok Seo

    Full Text Available BACKGROUND: The goal of feature selection is to select useful features and simultaneously exclude garbage features from a given dataset for classification purposes. This is expected to bring a reduction in processing time and an improvement in classification accuracy. METHODOLOGY: In this study, we devised a new feature selection algorithm (CBFS) based on the clearness of features. Feature clearness expresses separability among classes in a feature. Highly clear features contribute towards obtaining high classification accuracy. CScore is a measure of the clearness of each feature, based on how tightly samples cluster around the centroids of their classes in that feature. We also suggest combining CBFS with other algorithms to improve classification accuracy. CONCLUSIONS/SIGNIFICANCE: From the experiments we confirm that CBFS outperforms state-of-the-art feature selection algorithms, including FeaLect. CBFS can be applied to microarray gene selection, text categorization, and image classification.
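    The exact CScore formula is not reproduced in the abstract; a Fisher-ratio style stand-in for "feature clearness" (separation of class means relative to within-class spread) can be sketched as:

```python
import numpy as np

def clearness_score(feature, labels):
    """Fisher-ratio stand-in for CScore: variance of the class means divided
    by the mean within-class variance. High values mean classes are well
    separated along this feature. (Assumption: not the exact CBFS formula.)"""
    classes = np.unique(labels)
    means = np.array([feature[labels == c].mean() for c in classes])
    within = np.mean([feature[labels == c].var() for c in classes])
    return means.var() / (within + 1e-12)
```

    Ranking features by such a score and keeping the top scorers is the essence of clearness-based selection: a feature that cleanly separates the classes outranks an uninformative one.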

  2. What's in a face? Mentalizing in borderline personality disorder based on dynamically changing facial expressions.

    Science.gov (United States)

    Lowyck, Benedicte; Luyten, Patrick; Vanwalleghem, Dominique; Vermote, Rudi; Mayes, Linda C; Crowley, Michael J

    2016-01-01

    The mentalization-based approach to borderline personality disorder (BPD) argues that impairments in mentalizing are a key feature of BPD. Most previous research in this area has concentrated on potential impairments in facial emotion recognition in BPD patients. However, these studies have yielded inconsistent results, which may be attributable to methodological differences. This study aimed to address several limitations of previous studies by investigating different parameters involved in emotion recognition in BPD patients using a novel, 2-step dynamically changing facial expression paradigm, taking into account the possible influence of mood, psychotropic medication, and trauma exposure. Twenty-two BPD patients and 22 matched normal controls completed this paradigm. Parameters assessed were accuracy of emotion recognition, reaction time (RT), and level of confidence, both for first and full response and for correct and incorrect responses. Results showed (a) that BPD patients were as accurate in their first, but less accurate in their full emotion recognition than normal controls, (b) a trend for BPD patients to respond more slowly than normal controls, and (c) no significant difference in overall level of confidence between BPD patients and normal controls. Mood and psychotropic medication did not influence these results. Exposure to trauma in BPD patients, however, was negatively related to accuracy at full expression. Although further research is needed, results suggest no general emotion-recognition deficit in BPD patients using a dynamically changing facial expression paradigm, except for a subgroup of BPD patients with marked trauma who become less accurate when they have to rely more on controlled, reflective processes. PMID:26461044

  3. Multiple features extraction using Gabor wavelet transformation, Fisher faces and integrated SVM with application to facial expression recognition

    Institute of Scientific and Technical Information of China (English)

    黄永明; 章国宝; 董飞; 达飞鹏

    2011-01-01

    For static gray-level expression databases, this paper proposes an expression recognition algorithm based on multi-level classification with multiple facial expression features. First, local Gabor wavelet transforms are applied at selected facial feature points. To speed up feature extraction, an improved elastic graph matching algorithm is used to extract the effective face region of the image; geometric features are extracted from this region, and statistical features are obtained via the Fisherfaces method. The geometric features, combined with a corresponding first-level integrated SVM, perform the initial classification, and the Fisherfaces features with a corresponding second-level integrated SVM perform the final classification. Experiments on the JAFFE and Cohn-Kanade expression databases show that, compared with single features, the method achieves a higher expression recognition rate and stronger robustness.

  4. Suitable models for face geometry normalization in facial expression recognition

    Science.gov (United States)

    Sadeghi, Hamid; Raie, Abolghasem A.

    2015-01-01

    Recently, facial expression recognition has attracted much attention in machine vision research because of its various applications. Accordingly, many facial expression recognition systems have been proposed. However, the majority of existing systems suffer from a critical problem: geometric variability. It directly affects the performance of geometric feature-based facial expression recognition approaches, and it is a crucial challenge for appearance feature-based techniques. This variability appears in both neutral faces and facial expressions. Appropriate face geometry normalization can improve the accuracy of any facial expression recognition system. Therefore, this paper proposes different geometric models or shapes for normalization. Face geometry normalization removes the geometric variability of facial images and, consequently, appearance feature extraction methods can be accurately utilized to represent facial images. Thus, some expression-based geometric models are proposed for facial image normalization. Next, local binary patterns and local phase quantization are used for appearance feature extraction. A combination of an effective geometric normalization with accurate appearance representations results in more than a 4% accuracy improvement compared to several state-of-the-art methods in facial expression recognition. Moreover, utilizing the model of facial expressions with larger mouth and eye region sizes gives higher accuracy due to the importance of these regions in facial expression.
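    For reference, the local binary pattern descriptor named above can be computed in its basic 8-neighbour form as below; this minimal sketch omits the uniform-pattern mapping and circular interpolation that production systems typically add:

```python
import numpy as np

def lbp_8_1(img):
    """Basic 8-neighbour LBP codes for the interior pixels of a grayscale
    image: each neighbour >= centre contributes one bit of the code."""
    h, w = img.shape
    centre = img[1:-1, 1:-1]
    # clockwise neighbourhood offsets starting at the top-left pixel
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(centre, dtype=int)
    for bit, (dy, dx) in enumerate(shifts):
        nbr = img[1 + dy: h - 1 + dy, 1 + dx: w - 1 + dx]
        code |= (nbr >= centre).astype(int) << bit
    return code
```

    Histograms of these codes over image patches form the appearance feature vector; geometric normalization matters precisely because the codes are computed over fixed pixel neighbourhoods.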

  5. Facial Nerve Palsy: An Unusual Presenting Feature of Small Cell Lung Cancer

    Directory of Open Access Journals (Sweden)

    Ozcan Yildiz

    2011-01-01

    Full Text Available Lung cancer is the second most common type of cancer in the world and is the most common cause of cancer-related death in men and women; it is responsible for 1.3 million deaths annually worldwide. It can metastasize to any organ. The most common site of metastasis in the head and neck region is the brain; however, it can also metastasize to the oral cavity, gingiva, tongue, parotid gland and lymph nodes. This article reports a case of small cell lung cancer presenting with metastasis to the facial nerve.

  6. 3D facial expression recognition based on histograms of surface differential quantities

    KAUST Repository

    Li, Huibin

    2011-01-01

    3D face models accurately capture facial surfaces, making it possible for precise description of facial activities. In this paper, we present a novel mesh-based method for 3D facial expression recognition using two local shape descriptors. To characterize shape information of the local neighborhood of facial landmarks, we calculate the weighted statistical distributions of surface differential quantities, including histogram of mesh gradient (HoG) and histogram of shape index (HoS). Normal cycle theory based curvature estimation method is employed on 3D face models along with the common cubic fitting curvature estimation method for the purpose of comparison. Based on the basic fact that different expressions involve different local shape deformations, the SVM classifier with both linear and RBF kernels outperforms the state of the art results on the subset of the BU-3DFE database with the same experimental setting. © 2011 Springer-Verlag.
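    The histogram of shape index (HoS) builds on Koenderink's shape index of the principal curvatures; a minimal sketch is below. Sign conventions vary across papers, and the curvature estimation itself (normal cycle theory or cubic fitting in the abstract) is assumed already done:

```python
import numpy as np

def shape_index(k1, k2):
    """Koenderink-style shape index from principal curvatures with k1 >= k2,
    mapped to [-1, 1] (cup = -1, dome = +1 under this sign convention)."""
    return (2.0 / np.pi) * np.arctan2(np.asarray(k1) + np.asarray(k2),
                                      np.asarray(k1) - np.asarray(k2))

def histogram_of_shape_index(k1, k2, bins=8):
    """Normalized histogram of shape-index values over a neighbourhood --
    the 'HoS' descriptor named in the abstract, minus the weighting."""
    s = shape_index(k1, k2)
    hist, _ = np.histogram(s, bins=bins, range=(-1.0, 1.0))
    return hist / hist.sum()
```

    Pooling such histograms over landmark neighbourhoods gives a fixed-length local shape descriptor that an SVM can consume.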

  7. The Effects of Transformed Gender Facial Features on Face Preference of College Students: Based on the Test of Computer Graphics and Eye Movement Tracks

    Institute of Scientific and Technical Information of China (English)

    温芳芳; 佐斌

    2012-01-01

    Using computer graphics and eye tracking, we examined the effect of sexual dimorphism cues on face preference. Experiment 1 found that, whether non-face cues were masked or not, feminized male faces were rated as more attractive and trustworthy under the perceived-masculinity and original-photograph conditions; under the sexual dimorphism technique, masculinized male faces were more attractive and trustworthy when non-face cues were unmasked. Experiment 2 showed that subjects' mean pupil size was larger and fixation counts were higher for male faces than for female faces, while the first fixation time was shorter; both the first fixation time and the first fixation duration were longer for masculinized faces than for feminized ones. Perceived facial attractiveness can influence people's social interactions with one another, including mate selection, intimate relationships, hiring decisions, and voting behavior. People evaluate faces on multiple trait dimensions such as attractiveness and trustworthiness, both of which are affected by facial masculinity or femininity cues. However, studies manipulating the computer graphics of sexual dimorphism on facial attractiveness have yielded inconsistent results: some found that feminine facial features in male faces were more attractive than masculine ones, some found that women prefer masculine male faces, and still others found that women preferred femininity in male faces. The current study used computer graphics and an eye tracker to assess the effect of dimorphic cues on the perception of facial attractiveness among Chinese college students through two experiments. Experiment 1 assessed women's perceptions of the attractiveness and trustworthiness of men's faces under the condition of either perceived masculinity vs. femininity or sexual dimorphism. Results showed that, when non-face cues (e.g., hairstyle) were masked, women perceived femininity in men's faces as more attractive and trustworthy than masculinity. However, in the sexual dimorphism condition in which the non-face cues were not masked, women found masculinity in men's faces more attractive and

  8. Facial Emotion Recognition Using Context Based Multimodal Approach

    Directory of Open Access Journals (Sweden)

    Priya Metri

    2011-12-01

    Full Text Available Emotions play a crucial role in person-to-person interaction. In recent years, there has been a growing interest in improving all aspects of interaction between humans and computers. The ability to understand human emotions is desirable for the computer in several applications, especially by observing facial expressions. This paper explores ways of human-computer interaction that enable the computer to be more aware of the user’s emotional expressions. We present an approach for emotion recognition from facial expression, hand and body posture. Our model uses a multimodal emotion recognition system in which we use two different models, one for facial expression recognition and one for hand and body posture recognition, and then combine the results of both classifiers using a third classifier which gives the resulting emotion. The multimodal system gives more accurate results than a unimodal or bimodal system.
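    The abstract's fusion step trains a third classifier on the outputs of the two modality classifiers; the sketch below replaces that trained combiner with a fixed weighted average of class probabilities, which is a simpler, assumption-laden stand-in (the weights are illustrative):

```python
import numpy as np

def fuse(face_probs, posture_probs, weights=(0.6, 0.4)):
    """Late fusion by a fixed weighted average of two modality classifiers'
    class-probability vectors; returns the winning class index. The paper
    trains a third classifier instead -- this average is a stand-in."""
    fused = (weights[0] * np.asarray(face_probs)
             + weights[1] * np.asarray(posture_probs))
    return int(np.argmax(fused))
```

    For instance, when the face model strongly favours one emotion and the posture model mildly favours another, the fused decision follows the stronger, higher-weighted evidence.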

  9. A genome-wide association scan in admixed Latin Americans identifies loci influencing facial and scalp hair features.

    Science.gov (United States)

    Adhikari, Kaustubh; Fontanil, Tania; Cal, Santiago; Mendoza-Revilla, Javier; Fuentes-Guajardo, Macarena; Chacón-Duque, Juan-Camilo; Al-Saadi, Farah; Johansson, Jeanette A; Quinto-Sanchez, Mirsha; Acuña-Alonzo, Victor; Jaramillo, Claudia; Arias, William; Barquera Lozano, Rodrigo; Macín Pérez, Gastón; Gómez-Valdés, Jorge; Villamil-Ramírez, Hugo; Hunemeier, Tábita; Ramallo, Virginia; Silva de Cerqueira, Caio C; Hurtado, Malena; Villegas, Valeria; Granja, Vanessa; Gallo, Carla; Poletti, Giovanni; Schuler-Faccini, Lavinia; Salzano, Francisco M; Bortolini, Maria-Cátira; Canizales-Quinteros, Samuel; Rothhammer, Francisco; Bedoya, Gabriel; Gonzalez-José, Rolando; Headon, Denis; López-Otín, Carlos; Tobin, Desmond J; Balding, David; Ruiz-Linares, Andrés

    2016-01-01

    We report a genome-wide association scan in over 6,000 Latin Americans for features of scalp hair (shape, colour, greying, balding) and facial hair (beard thickness, monobrow, eyebrow thickness). We found 18 signals of association reaching genome-wide significance (P values 5 × 10−8 to 3 × 10−119), including 10 novel associations. These include novel loci for scalp hair shape and balding, and the first reported loci for hair greying, monobrow, eyebrow and beard thickness. A newly identified locus influencing hair shape includes a Q30R substitution in the Protease Serine S1 family member 53 (PRSS53). We demonstrate that this enzyme is highly expressed in the hair follicle, especially the inner root sheath, and that the Q30R substitution affects enzyme processing and secretion. The genome regions associated with hair features are enriched for signals of selection, consistent with proposals regarding the evolution of human hair. PMID:26926045

  10. Evolutionary Computational Method of Facial Expression Analysis for Content-based Video Retrieval using 2-Dimensional Cellular Automata

    CERN Document Server

    Geetha, P

    2010-01-01

    In this paper, Deterministic Cellular Automata (DCA) based video shot classification and retrieval is proposed. The deterministic 2D cellular automata model captures human facial expressions, both spontaneous and posed. The determinism stems from the fact that the facial muscle actions are standardized by the encodings of the Facial Action Coding System (FACS) and Action Units (AUs). Based on these encodings, we generate the set of evolutionary update rules of the DCA for each facial expression. We consider a Person-Independent Facial Expression Space (PIFES) to analyze facial expressions based on Partitioned 2D Cellular Automata, which capture the dynamics of facial expressions and classify the shots based on them. The target video shot is retrieved by comparing the expression obtained for the query frame's face with the key face expressions in the database video. When consecutive key face expressions in the database are highly similar to the query frame's face, the key faces are use...
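    The FACS-derived update rules themselves are not given in the abstract, but the deterministic synchronous update scheme of a 2D cellular automaton can be sketched generically; the Game-of-Life rule in the test is purely illustrative, standing in for an expression-specific rule set:

```python
import numpy as np

def ca_step(grid, rule):
    """One synchronous update of a deterministic 2D cellular automaton:
    each cell's next state is rule(state, live_neighbour_count) over its
    8-neighbourhood, with zero padding at the border. An AU-derived rule
    set would be plugged in as `rule`."""
    h, w = grid.shape
    padded = np.pad(grid, 1)
    neigh = sum(padded[1 + dy: 1 + dy + h, 1 + dx: 1 + dx + w]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0))
    return np.vectorize(rule)(grid, neigh)
```

    With the Game-of-Life rule, for example, a vertical triple of live cells flips to a horizontal triple in one step, showing how local rules evolve global patterns deterministically.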

  11. Feature Extraction based Face Recognition, Gender and Age Classification

    Directory of Open Access Journals (Sweden)

    Venugopal K R

    2010-01-01

    Full Text Available The face recognition system with large training sets for personal identification normally attains good accuracy. In this paper, we propose a Feature Extraction based Face Recognition, Gender and Age Classification (FEBFRGAC) algorithm with only small training sets, and it yields good results even with one image per person. This process involves three stages: pre-processing, feature extraction and classification. The geometric features of facial images like eyes, nose, mouth etc. are located by using the Canny edge operator, and face recognition is performed. Based on the texture and shape information, gender and age classification is done using posteriori class probability and an artificial neural network, respectively. It is observed that the face recognition accuracy is 100%, and the gender and age classification accuracies are around 98% and 94%, respectively.
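    The posteriori-class-probability step for gender can be sketched with per-class Gaussian likelihoods over a single feature; the one-dimensional feature and the Gaussian assumption are simplifications of the paper's texture-and-shape pipeline:

```python
import numpy as np

def posterior(x, class_stats, priors):
    """Posterior class probabilities for a scalar feature x under per-class
    Gaussian likelihoods N(mean, std) -- Bayes' rule in its simplest form
    (the real system uses texture and shape feature vectors)."""
    like = np.array([np.exp(-(x - m) ** 2 / (2 * s ** 2)) / (s * np.sqrt(2 * np.pi))
                     for m, s in class_stats])
    post = like * np.asarray(priors, dtype=float)
    return post / post.sum()
```

    The class with the highest posterior is reported; a feature value near one class's mean and far from the other's yields a near-certain decision.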

  12. Facial expression discrimination varies with presentation time but not with fixation on features: a backward masking study using eye-tracking.

    Science.gov (United States)

    Neath, Karly N; Itier, Roxane J

    2014-01-01

    The current study investigated the effects of presentation time and fixation to expression-specific diagnostic features on emotion discrimination performance, in a backward masking task. While no differences were found when stimuli were presented for 16.67 ms, differences between facial emotions emerged beyond the happy-superiority effect at presentation times as early as 50 ms. Happy expressions were best discriminated, followed by neutral and disgusted, then surprised, and finally fearful expressions presented for 50 and 100 ms. While performance was not improved by the use of expression-specific diagnostic facial features, performance increased with presentation time for all emotions. Results support the idea of an integration of facial features (holistic processing) varying as a function of emotion and presentation time. PMID:23879672

  13. Multi-Layer Sparse Representation for Weighted LBP-Patches Based Facial Expression Recognition

    OpenAIRE

    Qi Jia; Xinkai Gao; He Guo; Zhongxuan Luo; Yi Wang

    2015-01-01

    In this paper, a novel facial expression recognition method based on sparse representation is proposed. Most contemporary facial expression recognition systems suffer from limited ability to handle image nuisances such as low resolution and noise. Especially for low-intensity expressions, most of the existing training methods have quite low recognition rates. Motivated by sparse representation, the problem can be solved by finding the sparse coefficients of the test image over the whole training set...

  14. Facial Expression Recognition Based on Local Binary Patterns and Kernel Discriminant Isomap

    OpenAIRE

    Xiaoming Zhao; Shiqing Zhang

    2011-01-01

    Facial expression recognition is an interesting and challenging subject. Considering the nonlinear manifold structure of facial images, a new kernel-based manifold learning method, called kernel discriminant isometric mapping (KDIsomap), is proposed. KDIsomap aims to nonlinearly extract the discriminant information by maximizing the interclass scatter while minimizing the intraclass scatter in a reproducing kernel Hilbert space. KDIsomap is used to perform nonlinear dimensionality reduction o...

  15. Features Based Text Similarity Detection

    CERN Document Server

    Kent, Chow Kok

    2010-01-01

    As the Internet helps us cross cultural borders by providing diverse information, the issue of plagiarism is bound to arise. As a result, plagiarism detection becomes more demanding in overcoming this issue. Different plagiarism detection tools have been developed based on various detection techniques. Nowadays, the fingerprint matching technique plays an important role in those detection tools. However, in handling large articles, the fingerprint matching technique has some weaknesses, especially in space and time consumption. In this paper, we propose a new approach to detect plagiarism which integrates the fingerprint matching technique with four key features to assist in the detection process. These proposed features are capable of choosing the main points or key sentences in the articles to be compared. The selected sentences then undergo the fingerprint matching process in order to detect the similarity between the sentences. Hence, time and space usage for the comparison process is r...
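    A minimal fingerprint-matching primitive of the kind the abstract builds on hashes overlapping k-grams of normalized text and compares the resulting sets. The choice of k = 5 and MD5 are arbitrary here, and full systems add winnowing to subsample the hashes:

```python
import hashlib

def fingerprints(text, k=5):
    """Hash every overlapping k-gram of the case- and whitespace-normalized
    text; the set of hashes is the document's fingerprint."""
    text = "".join(text.lower().split())
    return {hashlib.md5(text[i:i + k].encode()).hexdigest()
            for i in range(len(text) - k + 1)}

def similarity(a, b):
    """Jaccard overlap of the two fingerprint sets."""
    fa, fb = fingerprints(a), fingerprints(b)
    return len(fa & fb) / max(1, len(fa | fb))
```

    Restricting fingerprinting to the key sentences selected by the proposed features, rather than the whole article, is what reduces the time and space cost the abstract discusses.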

  16. Multi-Layer Sparse Representation for Weighted LBP-Patches Based Facial Expression Recognition

    Directory of Open Access Journals (Sweden)

    Qi Jia

    2015-03-01

    Full Text Available In this paper, a novel facial expression recognition method based on sparse representation is proposed. Most contemporary facial expression recognition systems suffer from limited ability to handle image nuisances such as low resolution and noise. Especially for low-intensity expressions, most of the existing training methods have quite low recognition rates. Motivated by sparse representation, the problem can be solved by finding the sparse coefficients of the test image over the whole training set. Deriving an effective facial representation from original face images is a vital step for successful facial expression recognition. We evaluate a facial representation based on weighted local binary patterns, and the Fisher separation criterion is used to calculate the weights of patches. A multi-layer sparse representation framework is proposed for multi-intensity facial expression recognition, especially for low-intensity expressions and noisy expressions in reality, which is a critical problem but seldom addressed in the existing works. To this end, several experiments based on low-resolution and multi-intensity expressions are carried out. Promising results on publicly available databases demonstrate the potential of the proposed approach.
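    Once per-patch Fisher separation scores are available, the weighting scheme described above amounts to scaling each patch's LBP histogram before concatenation. A sketch, with the Fisher scores assumed precomputed from labelled training patches:

```python
import numpy as np

def weighted_patch_descriptor(patch_histograms, fisher_scores):
    """Scale each patch's LBP histogram by its normalized Fisher separation
    score and concatenate into one descriptor. The scores are assumed given;
    computing them requires labelled training patches."""
    w = np.asarray(fisher_scores, dtype=float)
    w = w / w.sum()
    return np.concatenate([wi * np.asarray(h, dtype=float)
                           for wi, h in zip(w, patch_histograms)])
```

    Patches that separate expressions well (e.g. mouth and eye regions) thus dominate the descriptor, while uninformative patches are attenuated.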

  17. Multi-layer sparse representation for weighted LBP-patches based facial expression recognition.

    Science.gov (United States)

    Jia, Qi; Gao, Xinkai; Guo, He; Luo, Zhongxuan; Wang, Yi

    2015-01-01

    In this paper, a novel facial expression recognition method based on sparse representation is proposed. Most contemporary facial expression recognition systems suffer from limited ability to handle image nuisances such as low resolution and noise. Especially for low-intensity expressions, most of the existing training methods have quite low recognition rates. Motivated by sparse representation, the problem can be solved by finding the sparse coefficients of the test image over the whole training set. Deriving an effective facial representation from original face images is a vital step for successful facial expression recognition. We evaluate a facial representation based on weighted local binary patterns, and the Fisher separation criterion is used to calculate the weights of patches. A multi-layer sparse representation framework is proposed for multi-intensity facial expression recognition, especially for low-intensity expressions and noisy expressions in reality, which is a critical problem but seldom addressed in the existing works. To this end, several experiments based on low-resolution and multi-intensity expressions are carried out. Promising results on publicly available databases demonstrate the potential of the proposed approach. PMID:25808772

  18. Face recognition by weighted fusion of facial features

    Institute of Scientific and Technical Information of China (English)

    孙劲光; 孟凡宇

    2015-01-01

    The accuracy of face recognition is low under unconstrained conditions. To solve this problem, we propose a new feature-weighted fusion method (DLWF+) based on deep learning. First, we divide facial feature points into five regions (left eye, right eye, nose, mouth, and chin) using an active shape model, and then sample the facial components corresponding to those feature points. A corresponding deep belief network (DBN) is then trained on these regional samples to obtain optimal network parameters. The five regional sampling regions and the entire facial image are then input into corresponding neural networks to adjust the network weights and complete the construction of the sub-networks. Finally, using softmax regression, we obtain six similarity vectors for the different components. These six similarity vectors form a similarity matrix, which is multiplied by a weight vector to derive the final recognition result. Recognition accuracy was 97% and 91.63% on the ORL and WFL face databases, respectively. Compared with traditional recognition algorithms such as SVM, DBN, PCA, and FIP+LDA, recognition rates on both databases were improved under both constrained and unconstrained conditions. On the basis of
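    The final fusion step described above, multiplying a matrix of per-region softmax similarity vectors by a weight vector, can be sketched as follows (the weights and raw scores here are illustrative assumptions):

```python
import numpy as np

def softmax(z):
    z = np.asarray(z, dtype=float)
    e = np.exp(z - z.max())                 # numerically stable softmax
    return e / e.sum()

def fuse_similarities(similarity_rows, region_weights):
    """Stack the per-region softmax similarity vectors into a similarity
    matrix (regions x identities) and multiply by the region weight vector,
    as the abstract describes; the weight values are assumptions."""
    S = np.vstack([softmax(row) for row in similarity_rows])
    return np.asarray(region_weights, dtype=float) @ S
```

    The identity with the largest fused score is reported; regions given larger weights (e.g. the eyes) contribute more to the decision.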

  19. Facial Expression Analysis

    NARCIS (Netherlands)

    Pantic, Maja; Li, S.; Jain, A.

    2009-01-01

    Facial expression recognition is a process performed by humans or computers, which consists of: 1. Locating faces in the scene (e.g., in an image; this step is also referred to as face detection), 2. Extracting facial features from the detected face region (e.g., detecting the shape of facial components ...

  20. Facial Emotion Recognition Using Context Based Multimodal Approach

    OpenAIRE

    Priya Metri; Jayshree Ghorpade; Ayesha Butalia

    2011-01-01

    Emotions play a crucial role in person-to-person interaction. In recent years, there has been a growing interest in improving all aspects of interaction between humans and computers. The ability to understand human emotions is desirable for the computer in several applications, especially by observing facial expressions. This paper explores ways of human-computer interaction that enable the computer to be more aware of the user’s emotional expressions. We present an approach for the emotion re...

  1. Pain assessment in severe demented elderly based on facial expression

    OpenAIRE

    Leysens, Greet; Noben, Annelies; De Maesschalck, Lieven

    2010-01-01

    Introduction: Pain is an important and underestimated aspect in elderly people with dementia, especially when their communication skills deteriorate. Moreover, the risk of undertreatment increases with the progression of dementia, despite the increasing pharmacological possibilities and interest in pain. Facial expression can be considered as a reflection of the real, authentic pain experience. Elderly people with cognitive limitations are less socially inhibited from expressing pain nonverbally. Therefore ...

  2. A novel three-dimensional smile analysis based on dynamic evaluation of facial curve contour

    Science.gov (United States)

    Lin, Yi; Lin, Han; Lin, Qiuping; Zhang, Jinxin; Zhu, Ping; Lu, Yao; Zhao, Zhi; Lv, Jiahong; Lee, Mln Kyeong; Xu, Yue

    2016-02-01

    The influence of three-dimensional facial contour and of dynamic evaluation on the factors of smile esthetics is essential for facial beauty improvement. However, the kinematic features of the facial smile contour and the respective contributions of the soft tissue and the underlying skeleton are uncharted. Here, the cheekbone-maxilla contour and nasolabial fold were combined into a “smile contour” delineating the overall facial topography that emerges prominently in smiling. We screened out the stable and unstable points on the smile contour using facial motion capture and curve fitting, before analyzing the correlation between the soft tissue coordinates and the hard tissue counterparts of the screened points. Our findings suggest that the mouth corner region was the most mobile area characterizing the smile expression, while the other areas remained relatively stable. Therefore, the perioral area should be evaluated dynamically, while the static assessment outcome of the other parts of the smile contour contributes partially to their dynamic esthetics. Moreover, different from the end piece, the morphologies of the zygomatic area and the superior part of the nasolabial crease were determined largely by the skeleton at rest, implying that the latter can be altered by orthopedic or orthodontic correction and the former better improved by cosmetic procedures to improve the beauty of the smile.

  3. Limitations in 4-Year-Old Children's Sensitivity to the Spacing among Facial Features

    Science.gov (United States)

    Mondloch, Catherine J.; Thomson, Kendra

    2008-01-01

    Four-year-olds' sensitivity to differences among faces in the spacing of features was tested under 4 task conditions: judging distinctiveness when the external contour was visible and when it was occluded, simultaneous match-to-sample, and recognizing the face of a friend. In each task, the foil differed only in the spacing of features, and…

  4. Features of INIS data base

    International Nuclear Information System (INIS)

    The on-line service for the INIS atomic energy literature file was started by JOIS on January 5, 1984. The service is offered throughout Japan through cooperation between the Japan Information Center of Science and Technology, which operates the on-line system, and the Japan Atomic Energy Research Institute, which maintains the database. On the occasion of the start of the service, the features of the INIS database are outlined from the user's viewpoint. The International Nuclear Information System is an information system supported by the IAEA member countries and international organizations; it offers computer-processed secondary information, in English, covering the world's literature on the peaceful uses of atomic energy, and distributes the full text of selected literature on microfilm. INIS has been in full-scale operation since 1973, and 70 countries and 14 international organizations now participate. As of the end of 1983, about 810,000 secondary records had been collected. The JOIS service covers records from January 1976 onward. The organization of INIS, the scope of the INIS file, the items used in primary and secondary retrieval, answer output, and experience with the JAERI-INIS on-line retrieval system are described. (Kako, I.)

  5. Hepatitis Diagnosis Using Facial Color Image

    Science.gov (United States)

    Liu, Mingjia; Guo, Zhenhua

    Facial color diagnosis is an important diagnostic method in traditional Chinese medicine (TCM). However, due to its qualitative, subjective and experience-based nature, traditional facial color diagnosis has very limited application in clinical medicine. To circumvent the subjective and qualitative problems of facial color diagnosis in TCM, in this paper we present a novel computer-aided facial color diagnosis method (CAFCDM). The method has three parts: a face image database, an image preprocessing module and a diagnosis engine. The face image database was built from a group of 116 patients affected by 2 kinds of liver disease and 29 healthy volunteers. Quantitative color features are extracted from the facial images using popular digital image processing techniques. A KNN classifier is then employed to model the relationship between the quantitative color features and the diseases. The results show that the method can properly identify three groups: healthy, severe hepatitis with jaundice and severe hepatitis without jaundice, with accuracy higher than 73%.
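
    The KNN step, assigning a new quantitative facial color feature to the majority class among its nearest labeled examples, can be sketched in plain Python (the mean-RGB features, labels, and k below are invented for illustration, not the paper's data):

    ```python
    import math
    from collections import Counter

    def knn_classify(train, labels, query, k=3):
        """Classify `query` by majority vote among its k nearest training samples."""
        dists = sorted(
            (math.dist(x, query), y) for x, y in zip(train, labels)
        )
        votes = Counter(y for _, y in dists[:k])
        return votes.most_common(1)[0][0]

    # Invented mean-RGB facial color features for the three groups.
    train = [(200, 160, 150), (205, 165, 155),   # healthy
             (210, 190, 120), (215, 195, 125),   # severe hepatitis with jaundice
             (180, 150, 140), (185, 155, 145)]   # severe hepatitis without jaundice
    labels = ["healthy", "healthy", "jaundice", "jaundice",
              "no_jaundice", "no_jaundice"]

    print(knn_classify(train, labels, (207, 167, 153)))  # -> healthy
    ```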

  6. An optimized ERP brain-computer interface based on facial expression changes

    Science.gov (United States)

    Jin, Jing; Daly, Ian; Zhang, Yu; Wang, Xingyu; Cichocki, Andrzej

    2014-06-01

    Objective. Interferences from spatially adjacent non-target stimuli are known to evoke event-related potentials (ERPs) during non-target flashes and, therefore, lead to false positives. This phenomenon was commonly seen in visual attention-based brain-computer interfaces (BCIs) using conspicuous stimuli and is known to adversely affect the performance of BCI systems. Although users try to focus on the target stimulus, they cannot help but be affected by conspicuous changes of the stimuli (such as flashes or presenting images) which were adjacent to the target stimulus. Furthermore, subjects have reported that conspicuous stimuli made them tired and annoyed. In view of this, the aim of this study was to reduce adjacent interference, annoyance and fatigue using a new stimulus presentation pattern based upon facial expression changes. Our goal was not to design a new pattern which could evoke larger ERPs than the face pattern, but to design a new pattern which could reduce adjacent interference, annoyance and fatigue, and evoke ERPs as good as those observed during the face pattern. Approach. Positive facial expressions could be changed to negative facial expressions by minor changes to the original facial image. Although the changes are minor, the contrast is big enough to evoke strong ERPs. In this paper, a facial expression change pattern between positive and negative facial expressions was used to attempt to minimize interference effects. This was compared against two different conditions, a shuffled pattern containing the same shapes and colours as the facial expression change pattern, but without the semantic content associated with a change in expression, and a face versus no face pattern. Comparisons were made in terms of classification accuracy and information transfer rate as well as user supplied subjective measures. Main results. The results showed that interferences from adjacent stimuli, annoyance and the fatigue experienced by the subjects could be

  7. A small-world network model of facial emotion recognition.

    Science.gov (United States)

    Takehara, Takuma; Ochiai, Fumio; Suzuki, Naoto

    2016-01-01

    Various models have been proposed to increase understanding of the cognitive basis of facial emotions. Despite those efforts, interactions between facial emotions have received minimal attention. If collective behaviours relating to each facial emotion in the comprehensive cognitive system could be assumed, specific facial emotion relationship patterns might emerge. In this study, we demonstrate that the frameworks of complex networks can effectively capture those patterns. We generate 81 facial emotion images (6 prototypes and 75 morphs) and then ask participants to rate degrees of similarity in 3240 facial emotion pairs in a paired comparison task. A facial emotion network constructed on the basis of similarity clearly forms a small-world network, which features an extremely short average network distance and close connectivity. Further, even if two facial emotions have opposing valences, they are connected within only two steps. In addition, we show that intermediary morphs are crucial for maintaining full network integration, whereas prototypes are not at all important. These results suggest the existence of collective behaviours in the cognitive systems of facial emotions and also describe why people can efficiently recognize facial emotions in terms of information transmission and propagation. For comparison, we construct three simulated networks--one based on the categorical model, one based on the dimensional model, and one random network. The results reveal that small-world connectivity in facial emotion networks is apparently different from those networks, suggesting that a small-world network is the most suitable model for capturing the cognitive basis of facial emotions. PMID:26315136
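
    The two statistics that define a small-world network here, a short average network distance and close connectivity (high clustering), can be computed for any undirected graph with a short BFS-based sketch; the ring-lattice toy graph below is illustrative, not the authors' facial emotion network:

    ```python
    from collections import deque

    def avg_path_length(adj):
        """Mean shortest-path distance over all node pairs (connected graph)."""
        nodes = list(adj)
        total = pairs = 0
        for s in nodes:
            dist = {s: 0}
            q = deque([s])
            while q:                      # plain BFS from each source
                u = q.popleft()
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        q.append(v)
            for t in nodes:
                if t != s:
                    total += dist[t]
                    pairs += 1
        return total / pairs

    def clustering(adj):
        """Mean local clustering coefficient (fraction of linked neighbor pairs)."""
        coeffs = []
        for u, nbrs in adj.items():
            k = len(nbrs)
            if k < 2:
                coeffs.append(0.0)
                continue
            links = sum(1 for v in nbrs for w in nbrs if v < w and w in adj[v])
            coeffs.append(2 * links / (k * (k - 1)))
        return sum(coeffs) / len(coeffs)

    # Ring lattice of 8 nodes, each linked to its 2 nearest neighbors per side.
    adj = {i: {(i - 2) % 8, (i - 1) % 8, (i + 1) % 8, (i + 2) % 8}
           for i in range(8)}
    print(round(avg_path_length(adj), 3), clustering(adj))  # -> 1.429 0.5
    ```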

  8. Personality Trait and Facial Expression Filter-Based Brain-Computer Interface

    Directory of Open Access Journals (Sweden)

    Seongah Chin

    2013-02-01

    Full Text Available In this paper, we present technical approaches that bridge the gap in the research related to the use of brain‐computer interfaces for entertainment and facial expressions. Such facial expressions that reflect an individual’s personal traits can be used to better realize artificial facial expressions in a gaming environment based on a brain‐computer interface. First, an emotion extraction filter is introduced in order to classify emotions on the basis of the users’ brain signals in real time. Next, a personality trait filter is defined to classify extrovert and introvert types, which manifest as five traits: very extrovert, extrovert, medium, introvert and very introvert. In addition, facial expressions derived from expression rates are obtained by an extrovert‐introvert fuzzy model through its defuzzification process. Finally, we confirm this validation via an analysis of the variance of the personality trait filter, a k‐fold cross validation of the emotion extraction filter, an accuracy analysis, a user study of facial synthesis and a test case game.
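
    The defuzzification step of the extrovert-introvert fuzzy model can be illustrated with the common centroid (center-of-gravity) method; the axis and membership values below are invented, not taken from the paper's model:

    ```python
    def centroid_defuzzify(xs, memberships):
        """Centroid (center-of-gravity) defuzzification over sampled points."""
        num = sum(x * m for x, m in zip(xs, memberships))
        den = sum(memberships)
        return num / den if den else 0.0

    # Personality score axis 0..100 sampled in steps of 10; an aggregated fuzzy
    # output leaning toward the 'extrovert' end (invented membership values).
    xs = list(range(0, 101, 10))
    mu = [0.0, 0.0, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0, 0.8, 0.4, 0.1]

    print(round(centroid_defuzzify(xs, mu), 2))  # crisp expression-rate score
    ```

    The crisp output (about 64 on this invented scale) would then index into one of the five trait bands (very extrovert ... very introvert).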

  9. Feature selection of facial displays for detection of non verbal communication in natural conversation

    OpenAIRE

    Sheerman-Chase T.; Ong E.-J.; Bowden R.

    2009-01-01

    Recognition of human communication has previously focused on deliberately acted emotions or in structured or artificial social contexts. This makes the result hard to apply to realistic social situations. This paper describes the recording of spontaneous human communication in a specific and common social situation: conversation between two people. The clips are then annotated by multiple observers to reduce individual variations in interpretation of social signals. Temporal and static featur...

  10. Feature-based Image Sequence Compression Coding

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A novel compression method for video teleconference applications is presented. Semantic-based coding based on human image features is realized, where human features are adopted as parameters. Model-based coding and the concept of vector coding are combined with work on image feature extraction to obtain the result.

  11. Facial Expression Recognition Based on LBP and SVM Decision Tree

    Institute of Scientific and Technical Information of China (English)

    李扬; 郭海礁

    2014-01-01

    To improve the recognition rate of facial expression recognition, a facial expression recognition algorithm combining LBP and an SVM decision tree is proposed. First, the facial expression image is converted into an LBP feature spectrum using the LBP algorithm; the LBP feature spectrum is then converted into an LBP histogram feature sequence; finally, classification and recognition of facial expressions are completed by the SVM decision tree algorithm. Experiments on the JAFFE facial expression database demonstrate the effectiveness of the proposed method.
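
    The LBP step, thresholding each pixel's 8 neighbors against the center and histogramming the resulting codes into a feature sequence, can be sketched as follows (a minimal unweighted 3x3 LBP on a toy image, not necessarily the exact variant used in the paper):

    ```python
    def lbp_code(img, r, c):
        """8-neighbor LBP code, clockwise from top-left: bit=1 if neighbor >= center."""
        center = img[r][c]
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                   (1, 1), (1, 0), (1, -1), (0, -1)]
        code = 0
        for dr, dc in offsets:
            code = (code << 1) | (1 if img[r + dr][c + dc] >= center else 0)
        return code

    def lbp_histogram(img):
        """256-bin histogram of LBP codes over all interior pixels."""
        hist = [0] * 256
        for r in range(1, len(img) - 1):
            for c in range(1, len(img[0]) - 1):
                hist[lbp_code(img, r, c)] += 1
        return hist

    # Toy 4x4 gray-scale patch; only the 4 interior pixels get a code.
    img = [
        [10, 10, 10, 10],
        [10, 50, 20, 10],
        [10, 20, 50, 10],
        [10, 10, 10, 10],
    ]
    print(lbp_code(img, 1, 1), sum(lbp_histogram(img)))  # -> 8 4
    ```

    In practice the image is tiled into regions and the per-region histograms are concatenated into the feature sequence fed to the classifier.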

  12. Avoiding occlusal derangement in facial fractures: An evidence based approach

    Directory of Open Access Journals (Sweden)

    Derick Mendonca

    2013-01-01

    Full Text Available Facial fractures with occlusal derangement describe any fracture which directly or indirectly affects the occlusal relationship. Such fractures include dento-alveolar fractures in the maxilla and mandible, midface fractures (Le Fort I, II, and III) and mandible fractures of the symphysis, parasymphysis, body, angle, and condyle. In some of these fractures, the fracture line runs through the dento-alveolar component, whereas in others the fracture line is remote from the occlusal plane but nevertheless alters the occlusion. The complications that can ensue from the management of maxillofacial fractures are predominantly iatrogenic, and therefore can be avoided if adequate care is exercised by the operating surgeon. This paper does not emphasize complications arising from any particular technique in the management of maxillofacial fractures but rather discusses complications in general, irrespective of the technique used.

  13. A voxel-based lesion study on facial emotion recognition after penetrating brain injury

    OpenAIRE

    Dal Monte, Olga; Krueger, Frank; Solomon, Jeffrey M.; Schintu, Selene; Knutson, Kristine M.; Strenziok, Maren; Pardini, Matteo; Leopold, Anne; Raymont, Vanessa; Grafman, Jordan

    2012-01-01

    The ability to read emotions in the face of another person is an important social skill that can be impaired in subjects with traumatic brain injury (TBI). To determine the brain regions that modulate facial emotion recognition, we conducted a whole-brain analysis using a well-validated facial emotion recognition task and voxel-based lesion symptom mapping (VLSM) in a large sample of patients with focal penetrating TBIs (pTBIs). Our results revealed that individuals with pTBI performed signif...

  14. Face Recognition from Still Images to Video Sequences: A Local-Feature-Based Framework

    OpenAIRE

    Shaokang Chen; Sandra Mau; Harandi, Mehrtash T.; Conrad Sanderson; Abbas Bigdeli; Lovell, Brian C.

    2011-01-01

    Although automatic face recognition has shown success for high-quality images under controlled conditions, it is hard to attain similar levels of performance for video-based recognition. In this paper we describe recent advances in a project being undertaken to trial and develop advanced surveillance systems for public safety. We propose a local-facial-feature-based framework for both still-image and video-based face recognition. The evaluation is performed on a still image d...

  15. Invariant facial feature extraction method with biologically-like mechanism

    Institute of Scientific and Technical Information of China (English)

    杜兴; 龚卫国; 张睿

    2011-01-01

    A biologically inspired invariant facial feature extraction method is proposed to improve the face recognition rate obtained with subspace-based algorithms. A two-layer hierarchical network, constructed according to the information-processing procedure of the primary visual cortex (V1), extracts invariant features from the face image. The first layer, which simulates the function of V1 simple cells, learns a group of V1-simple-cell-like filters using sparse coding and employs these filters to extract a set of illumination-insensitive features from the face image. The second layer, which simulates the function of V1 complex cells, merges the output of the first layer over neighborhoods of positions and scales using a local maximum operation, so as to obtain facial features robust to illumination, expression, slight pose change, and local facial detail variations. The obtained invariant features replace the original face image as the input to a subspace algorithm, improving face recognition performance. Experiments on the FERET and ORL face databases show that, compared with applying subspace algorithms directly to the image, the proposed method increases the recognition rate by 4.95%-20.35%.
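
    The second layer's local-maximum merge over neighborhoods of positions is the familiar max-pooling operation; a minimal sketch over non-overlapping 2x2 position neighborhoods follows (the scale-neighborhood merge is analogous, and the filter-response map is invented):

    ```python
    def max_pool_2x2(fmap):
        """Local-maximum merge over non-overlapping 2x2 position neighborhoods."""
        rows, cols = len(fmap), len(fmap[0])
        return [
            [max(fmap[r][c], fmap[r][c + 1], fmap[r + 1][c], fmap[r + 1][c + 1])
             for c in range(0, cols - 1, 2)]
            for r in range(0, rows - 1, 2)
        ]

    # Hypothetical filter-response map from the first (simple-cell-like) layer.
    fmap = [
        [0.1, 0.8, 0.2, 0.0],
        [0.3, 0.4, 0.9, 0.1],
        [0.0, 0.2, 0.5, 0.7],
        [0.6, 0.1, 0.3, 0.2],
    ]
    print(max_pool_2x2(fmap))  # -> [[0.8, 0.9], [0.6, 0.7]]
    ```

    Keeping only each neighborhood's strongest response is what buys the tolerance to small shifts and local detail changes described above.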

  16. Histopathologic and Ultrastructural Features of Gold Thread Implanted in the Skin for Facial Rejuvenation.

    Science.gov (United States)

    Moulonguet, Isabelle; Arnaud, Eric; Plantier, Françoise; da Costa, Patrick; Zaleski, Stéphane

    2015-10-01

    The authors report the histopathologic and ultrastructural features of gold threads, which were implanted in the cheek subcutis of a 77-year-old woman 10 years ago. These particles did not give rise to any adverse reactions and were fortuitously discovered by the surgeon during a facelift. Histopathology showed a nonpolarizing exogenous material consisting of black oval structures surrounded by a capsule of fibrosis and by a discrete inflammatory reaction with a few giant cells. In some cases, only a long fibrous tract surrounded by a moderate mononucleate infiltrate was observed. The wires were characterized with scanning electron microscopy, and X-ray microanalysis revealed a specific peak at 2.2 keV representative of gold that was absent in the control skin sample. As this value is specific for gold, it confirms the presence of the metal in the patient's skin. The histopathologic appearance of gold threads is particularly distinctive and easily recognizable by dermatopathologists. PMID:25321089

  17. Compression of color facial images using feature correction two-stage vector quantization.

    Science.gov (United States)

    Huang, J; Wang, Y

    1999-01-01

    A feature correction two-stage vector quantization (FC2VQ) algorithm was previously developed to compress gray-scale photo identification (ID) pictures. This algorithm is extended to color images in this work. Three options are compared, which apply the FC2VQ algorithm in RGB, YCbCr, and Karhunen-Loeve transform (KLT) color spaces, respectively. The RGB-FC2VQ algorithm is found to yield better image quality than KLT-FC2VQ or YCbCr-FC2VQ at similar bit rates. With the RGB-FC2VQ algorithm, a 128 x 128 24-b color ID image (49,152 bytes) can be compressed down to about 500 bytes with satisfactory quality. When the codeword indices are further compressed losslessly using a first order Huffman coder, this size is further reduced to about 450 bytes. PMID:18262869
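
    Two-stage vector quantization, the backbone of the FC2VQ scheme, encodes a vector with a coarse codebook and then quantizes the residual with a second codebook. A toy sketch under invented two-dimensional codebooks (the feature-correction stage and the Huffman coding of the indices are omitted):

    ```python
    def nearest(codebook, vec):
        """Index of the codeword closest to vec in squared Euclidean distance."""
        return min(range(len(codebook)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(codebook[i], vec)))

    def two_stage_encode(vec, cb1, cb2):
        """Stage 1 quantizes vec coarsely; stage 2 quantizes the residual."""
        i = nearest(cb1, vec)
        residual = [a - b for a, b in zip(vec, cb1[i])]
        j = nearest(cb2, residual)
        return i, j

    def two_stage_decode(i, j, cb1, cb2):
        """Reconstruction is the sum of the two selected codewords."""
        return [a + b for a, b in zip(cb1[i], cb2[j])]

    cb1 = [[0.0, 0.0], [10.0, 10.0]]            # coarse codebook (invented)
    cb2 = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]  # residual codebook (invented)

    i, j = two_stage_encode([10.9, 10.1], cb1, cb2)
    print(i, j, two_stage_decode(i, j, cb1, cb2))  # -> 1 1 [11.0, 10.0]
    ```

    Only the two small indices (i, j) are transmitted, which is why the pair of small codebooks compresses far better than one codebook of equivalent precision.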

  18. Feature Selection Based on Confidence Machine

    OpenAIRE

    Liu, Chang; Xu, Yi

    2014-01-01

    In machine learning and pattern recognition, feature selection has been a hot topic in the literature. Unsupervised feature selection is challenging due to the absence of labels, which would otherwise supply the relevant information. How to define an appropriate metric is the key to feature selection. We propose a filter method for unsupervised feature selection which is based on the Confidence Machine. The Confidence Machine offers an estimation of confidence in a feature's reliability. In this paper, we provide...

  19. Robust Facial Expression Recognition via Compressive Sensing

    Directory of Open Access Journals (Sweden)

    Shiqing Zhang

    2012-03-01

    Full Text Available Recently, compressive sensing (CS has attracted increasing attention in the areas of signal processing, computer vision and pattern recognition. In this paper, a new method based on the CS theory is presented for robust facial expression recognition. The CS theory is used to construct a sparse representation classifier (SRC. The effectiveness and robustness of the SRC method is investigated on clean and occluded facial expression images. Three typical facial features, i.e., the raw pixels, Gabor wavelets representation and local binary patterns (LBP, are extracted to evaluate the performance of the SRC method. Compared with the nearest neighbor (NN, linear support vector machines (SVM and the nearest subspace (NS, experimental results on the popular Cohn-Kanade facial expression database demonstrate that the SRC method obtains better performance and stronger robustness to corruption and occlusion on robust facial expression recognition tasks.

  20. Lighted display devices for producing static or animated visual displays, including animated facial features

    Science.gov (United States)

    Heilbron, Valerie J; Clem, Paul G; Cook, Adam Wade

    2014-02-11

    An illuminated display device with a base member with a plurality of cavities therein. Illumination devices illuminate the cavities and emit light through an opening of the cavities in a pattern, and a speaker can emit sounds in synchronization with the pattern. A panel with translucent portions can overly the base member and the cavities. An animated talking character can have an animated mouth cavity complex with multiple predetermined mouth lighting configurations simulative of human utterances. The cavities can be open, or optical waveguide material or positive members can be disposed therein. Reflective material can enhance internal reflectance and light emission.

  1. Infrared-based blink-detecting glasses for facial pacing: toward a bionic blink.

    Science.gov (United States)

    Frigerio, Alice; Hadlock, Tessa A; Murray, Elizabeth H; Heaton, James T

    2014-01-01

    Importance: Facial paralysis remains one of the most challenging conditions to effectively manage, often causing life-altering deficits in both function and appearance. Facial rehabilitation via pacing and robotic technology has great yet unmet potential. A critical first step toward reanimating symmetrical facial movement in cases of unilateral paralysis is the detection of healthy movement to use as a trigger for stimulated movement. Objective: To test a blink detection system that can be attached to standard eyeglasses and used as part of a closed-loop facial pacing system. Design, Setting, and Participants: Standard safety glasses were equipped with an infrared (IR) emitter-detector unit, oriented horizontally across the palpebral fissure, creating a monitored IR beam that became interrupted when the eyelids closed; the glasses were tested on 24 healthy volunteers from a tertiary care facial nerve center community. Main Outcomes and Measures: Video-quantified blinking was compared with both IR sensor signal magnitude and rate of change in healthy participants with their gaze in repose, while they shifted their gaze from central to far-peripheral positions, and during the production of particular facial expressions. Results: Blink detection based on signal magnitude achieved 100% sensitivity in forward gaze but generated false detections on downward gaze. Calculations of peak rate of signal change (first derivative) typically distinguished blinks from gaze-related eyelid movements. During forward gaze, 87% of detected blink events were true positives, 11% were false positives, and 2% were false negatives. Of the 11% false positives, 6% were associated with partial eyelid closures. During gaze changes, false blink detection occurred 6% of the time during lateral eye movements, 10% during upward movements, 47% during downward movements, and 6% for movements from an upward or downward gaze back to the primary gaze. Facial expressions
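
    The rate-of-change criterion described above, counting a blink when the IR signal drops faster than a threshold while ignoring slow gaze-related declines, can be sketched on a synthetic eyelid signal (all sample values and the threshold are invented, not the article's sensor data):

    ```python
    def detect_blinks(signal, rate_threshold):
        """Count blinks as sharp drops in the IR signal: the first difference
        falling below -rate_threshold, ignoring samples inside the same drop."""
        blinks = 0
        in_drop = False
        for prev, cur in zip(signal, signal[1:]):
            falling = (cur - prev) < -rate_threshold
            if falling and not in_drop:
                blinks += 1
            in_drop = falling
        return blinks

    # Synthetic beam signal: high = beam uninterrupted (eye open); two fast
    # drops (blinks) and one slow decline (downward gaze) that must not count.
    signal = ([10] * 5 + [2, 10]            # fast blink
              + [10] * 4 + [2, 2, 10]       # second blink (two-sample closure)
              + [10, 9, 8, 7, 6, 5])        # slow gaze-related decline
    print(detect_blinks(signal, rate_threshold=4))  # -> 2
    ```

    A pure magnitude threshold would also fire on the slow decline at the end, which is exactly the downward-gaze false-positive mode the derivative criterion avoids.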

  2. Survey on Sparse Coded Features for Content Based Face Image Retrieval

    OpenAIRE

    Johnvictor, D.; Selvavinayagam, G.

    2014-01-01

    Content based image retrieval is a technique which uses the visual contents of an image to search for images in large-scale image databases according to users' interests. This paper provides a comprehensive survey of recent technology used in the area of content based face image retrieval. Nowadays, as digital devices and photo-sharing sites gain popularity, large numbers of human face photos are available in databases. Multiple types of facial features are used to represent discriminability on large scale hu...

  3. Triplication of 16p12.1p12.3 associated with developmental and growth delay and distinctive facial features.

    Science.gov (United States)

    Nimmo, Graeme A M; Guerin, Andrea; Badilla-Porras, Ramses; Stavropoulos, Dimitri J; Yoon, Grace; Carter, Melissa T

    2016-03-01

    The 16p12 region is particularly prone to genomic disorders due to the large number of low copy repeats [Martin et al., 2004; Nature 432:988-994]. We report two unrelated patients with de novo triplication of 16p12.1p12.3 who had developmental delay and similar facial features. Patient 1 is a 4-year-old male with a congenital heart anomaly, bilateral cryptorchidism, chronic constipation, and developmental delay. Patient 2 is a 12-year-old female with prenatally diagnosed hydronephrosis, hepatobiliary disease, failure to thrive, and developmental delay. Distinctive facial features common to both patients include short palpebral fissures, bulbous nose, thin upper vermilion border, apparently low-set ears, and large ear lobes. We compare the clinical manifestations of our patients with a previously reported patient with triplication of 16p12.2. © 2015 Wiley Periodicals, Inc. PMID:26647099

  4. Comparison of facial features of DiGeorge syndrome (DGS) due to deletion 10p13-10pter with DGS due to 22q11 deletion

    Energy Technology Data Exchange (ETDEWEB)

    Goodship, J.; Lynch, S.; Brown, J. [Univ. of Newcastle, Tyne (United Kingdom)] [and others]

    1994-09-01

    DiGeorge syndrome (DGS) is a congenital anomaly consisting of cardiac defects, aplasia or hypoplasia of the thymus and parathyroid glands, and dysmorphic facial features. The majority of DGS cases have a submicroscopic deletion within chromosome 22q11. However, there have been a number of reports of DGS in association with other chromosomal abnormalities, including four cases with chromosome 10p deletions. We describe a further 10p deletion case and suggest that the facial features in children with DGS due to deletions of 10p are different from those associated with chromosome 22 deletions. The propositus was born at 39 weeks gestation to unrelated Caucasian parents, birth weight 2580 g (10th centile), and was noted to be dysmorphic and cyanosed shortly after birth. The main dysmorphic facial features were a broad nasal bridge with very short palpebral fissures. Echocardiography revealed a large subaortic VSD and overriding aorta. She had a low ionised calcium and a low parathyroid hormone level. T cell subsets and PHA response were normal. Abdominal ultrasound showed duplex kidneys, and on further investigation she was found to have reflux and raised plasma creatinine. She had an anteriorly placed anus. Her karyotype was 46,XX,-10,+der(10)t(3;10)(p23;p13)mat. The dysmorphic facial features in this baby are strikingly similar to those noted by Bridgeman and Butler in a child with DGS as the result of a 10p deletion, and distinct from the face seen in children with DiGeorge syndrome resulting from interstitial chromosome 22 deletions.

  5. Facial expression discrimination varies with presentation time but not with fixation on features: A backward masking study using eye-tracking

    OpenAIRE

    Neath, Karly N.; Itier, Roxane J.

    2013-01-01

    The current study investigated the effects of presentation time and fixation to expression-specific diagnostic features on emotion discrimination performance, in a backward masking task. While no differences were found when stimuli were presented for 16.67 ms, differences between facial emotions emerged beyond the happy-superiority effect at presentation times as early as 50 ms. Happy expressions were best discriminated, followed by neutral and disgusted, then surprised, and finally fearful e...

  6. Rough set-based feature selection method

    Institute of Scientific and Technical Information of China (English)

    ZHAN Yanmei; ZENG Xiangyang; SUN Jincai

    2005-01-01

    A new feature selection method is proposed based on the discernibility matrix of rough set theory. The main idea of this method is that the most effective feature, if used for classification, can distinguish the largest number of samples belonging to different classes. Experiments are performed using this method to select relevant features for artificial datasets and real-world datasets. Results show that the proposed selection method can correctly select all the relevant features of the artificial datasets while drastically reducing the number of features. In addition, when this method is used to select classification features of real-world underwater targets, the number of classification features after selection drops to 20% of the original feature set, and the classification accuracy increases by about 6% using the dataset after feature selection.
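
    The core idea, preferring the feature that distinguishes the largest number of sample pairs belonging to different classes, can be sketched with a direct discernibility count (the toy discrete dataset below is invented, not the underwater-target data of the paper):

    ```python
    from itertools import combinations

    def discernibility_counts(samples, labels):
        """For each feature, count the cross-class sample pairs it distinguishes."""
        n_features = len(samples[0])
        counts = [0] * n_features
        for (xi, yi), (xj, yj) in combinations(zip(samples, labels), 2):
            if yi == yj:
                continue  # only pairs from different classes matter
            for f in range(n_features):
                if xi[f] != xj[f]:
                    counts[f] += 1
        return counts

    def best_feature(samples, labels):
        """Feature index distinguishing the most cross-class pairs."""
        counts = discernibility_counts(samples, labels)
        return max(range(len(counts)), key=counts.__getitem__)

    # Toy discrete dataset: feature 0 separates the classes perfectly,
    # feature 1 is noisy, feature 2 is constant (useless).
    samples = [(0, 1, 5), (0, 0, 5), (1, 1, 5), (1, 0, 5)]
    labels = ["a", "a", "b", "b"]
    print(discernibility_counts(samples, labels), best_feature(samples, labels))
    # -> [4, 2, 0] 0
    ```

    A full reduct computation would repeat this greedily on the not-yet-discerned pairs; the sketch shows only the single-feature ranking step.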

  7. Color Facial Discriminate Feature Extraction and Recognition

    Institute of Scientific and Technical Information of China (English)

    儒林

    2012-01-01

    Current popular face recognition algorithms share a common characteristic: they first convert the original color image to a gray-scale image and then perform class discrimination using feature extraction and recognition algorithms designed for gray-scale images. In practice, the conversion from color to gray-scale uses a simple fixed combination of color coefficients, which cannot reflect the relative importance of the three RGB components. According to the color composition of facial images, this paper seeks an optimal combination coefficient that retains the most color information in the image, obtained by extracting and analyzing features of the R, G, and B components of color facial images; PCA is then applied to the combined component. Finally, extensive experiments on the internationally used AR standard color face database verify the effectiveness of the proposed method.
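
    Projecting each RGB pixel onto a coefficient vector is the whole conversion step; the sketch below contrasts the fixed ITU-R BT.601 luminance weights with a hypothetical learned coefficient set (the paper's actual discriminant-derived coefficients are not reproduced here):

    ```python
    def to_gray(rgb_pixels, coeffs):
        """Project each (R, G, B) pixel onto a coefficient vector."""
        r_w, g_w, b_w = coeffs
        return [r_w * r + g_w * g + b_w * b for r, g, b in rgb_pixels]

    pixels = [(200, 120, 80), (60, 180, 90)]  # two invented face-skin pixels

    standard = (0.299, 0.587, 0.114)   # fixed BT.601 luminance weights
    learned = (0.55, 0.30, 0.15)       # hypothetical discriminant-derived weights

    print([round(v, 1) for v in to_gray(pixels, standard)])
    print([round(v, 1) for v in to_gray(pixels, learned)])
    ```

    A learned weighting can stretch the gray-level separation between classes of pixels that the fixed luminance weights map to nearly identical values.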

  8. Spontaneous Facial Expression Recognition Based on RGB-D Dynamic Sequences

    Institute of Scientific and Technical Information of China (English)

    邵洁; 董楠

    2015-01-01

    Different from traditional facial expression recognition methods based on 2D static images, a spontaneous facial expression recognition algorithm is proposed for RGB-D image sequences. After preprocessing for image alignment and normalization, 4D spatio-temporal texture data are extracted as dynamic features. Slow Feature Analysis is then applied to detect the apex of the expression, and a 3D facial geometric model of the apex image is built and used as the static feature. These two kinds of features are combined, reduced in dimensionality by PCA, and finally trained and classified with Conditional Random Fields. Extensive experiments on the BU-4DFE facial expression database verify that the algorithm not only outperforms traditional static facial expression recognition methods and many other dynamic facial expression recognition methods, but also recognizes spontaneously displayed expressions automatically, which makes further practical applications possible.

  9. Autosomal recessive spastic tetraplegia caused by AP4M1 and AP4B1 gene mutation: expansion of the facial and neuroimaging features.

    Science.gov (United States)

    Tüysüz, Beyhan; Bilguvar, Kaya; Koçer, Naci; Yalçınkaya, Cengiz; Çağlayan, Okay; Gül, Ece; Sahin, Sezgin; Çomu, Sinan; Günel, Murat

    2014-07-01

    Adaptor protein complex-4 (AP4) is a component of intracellular transportation of proteins, which is thought to have a unique role in neurons. Recently, mutations affecting all four subunits of AP4 (AP4M1, AP4E1, AP4S1, and AP4B1) have been found to cause a similar autosomal recessive phenotype consisting of tetraplegic cerebral palsy and intellectual disability. The aim of this study was to analyze AP4 genes in three new families with this phenotype, and to discuss their clinical findings with an emphasis on neuroimaging and facial features. Using homozygosity mapping followed by whole-exome sequencing, we identified two novel homozygous mutations in AP4M1 and a homozygous deletion in AP4B1 in three pairs of siblings. Spastic tetraplegia, microcephaly, severe intellectual disability, limited speech, and stereotypic laughter were common findings in our patients. All patients also had similar facial features consisting of a coarse and hypotonic face, bitemporal narrowing, bulbous nose with broad nasal ridge, and short philtrum, which were not described in patients with AP4M1 and AP4B1 mutations previously. The patients presented here and previously with AP4M1, AP4B1, and AP4E1 mutations shared brain abnormalities including asymmetrical ventriculomegaly, thin splenium of the corpus callosum, and reduced white matter volume. The patients also had hippocampal globoid formation and thin hippocampus. In conclusion, disorders due to mutations in the AP4 complex have similar neurological, facial, and cranial imaging findings. Thus, these four genes encoding AP4 subunits should be screened in patients with autosomal recessive spastic tetraplegic cerebral palsy, severe intellectual disability, and stereotypic laughter, especially with the described facial and cranial MRI features. PMID:24700674

  10. Initial assessment of facial nerve paralysis based on motion analysis using an optical flow method.

    Science.gov (United States)

    Samsudin, Wan Syahirah W; Sundaraj, Kenneth; Ahmad, Amirozi; Salleh, Hasriah

    2016-03-14

    An initial assessment method that can classify as well as categorize the severity of paralysis into one of six levels according to the House-Brackmann (HB) system based on facial landmarks motion using an Optical Flow (OF) algorithm is proposed. The desired landmarks were obtained from the video recordings of 5 normal and 3 Bell's Palsy subjects and tracked using the Kanade-Lucas-Tomasi (KLT) method. A new scoring system based on the motion analysis using area measurement is proposed. This scoring system uses the individual scores from the facial exercises and grades the paralysis based on the HB system. The proposed method has obtained promising results and may play a pivotal role towards improved rehabilitation programs for patients. PMID:26578273
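
    The landmark-motion core of the method above is the Lucas-Kanade least-squares solve, which also underlies the KLT tracker. A minimal single-point sketch on a synthetic pair of frames follows; the function name, window size, and test blob are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def lucas_kanade(f1, f2, y, x, win=7):
    """Estimate the (dy, dx) displacement of point (y, x) between frames f1
    and f2 with the classic Lucas-Kanade least-squares solve over a window."""
    h = win // 2
    # Take a slightly larger patch so gradients can be trimmed at the border.
    p1 = f1[y - h - 1:y + h + 2, x - h - 1:x + h + 2].astype(float)
    p2 = f2[y - h - 1:y + h + 2, x - h - 1:x + h + 2].astype(float)
    Iy, Ix = np.gradient(p1)          # spatial gradients
    It = p2 - p1                      # temporal gradient
    Iy, Ix, It = (a[1:-1, 1:-1].ravel() for a in (Iy, Ix, It))
    A = np.stack([Iy, Ix], axis=1)
    v, *_ = np.linalg.lstsq(A, -It, rcond=None)
    return v                          # (dy, dx)

# Synthetic test: a Gaussian blob translated by (0.3, 0.6) pixels.
yy, xx = np.mgrid[0:64, 0:64].astype(float)
blob = lambda cy, cx: np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 20.0)
f1, f2 = blob(32, 32), blob(32.3, 32.6)
dy, dx = lucas_kanade(f1, f2, 30, 30)
print(round(dy, 2), round(dx, 2))
```

    For small displacements the recovered (dy, dx) should approximate the true shift, which is why the method works for per-frame landmark tracking.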

  11. Feature-based sentiment analysis with ontologies

    OpenAIRE

    Taner, Berk

    2011-01-01

    Sentiment analysis is a topic that many researchers work on. In recent years, new research directions under sentiment analysis appeared. Feature-based sentiment analysis is one such topic that deals not only with finding sentiment in a sentence but providing a more detailed analysis on a given domain. In the beginning researchers focused on commercial products and manually generated list of features for a product. Then they tried to generate a feature-based approach to attach sentiments to th...

  12. A Genetic Algorithm-Based Feature Selection

    Directory of Open Access Journals (Sweden)

    Babatunde Oluleye

    2014-07-01

    Full Text Available This article details the exploration and application of a Genetic Algorithm (GA) for feature selection. In particular, a binary GA was used for dimensionality reduction to enhance the performance of the concerned classifiers. In this work, one hundred (100) features were extracted from the set of images found in the Flavia dataset (a publicly available dataset). The extracted features are Zernike Moments (ZM), Fourier Descriptors (FD), Legendre Moments (LM), Hu 7 Moments (Hu7M), Texture Properties (TP) and Geometrical Properties (GP). The main contributions of this article are (1) detailed documentation of the GA Toolbox in MATLAB and (2) the development of a GA-based feature selector using a novel fitness function (kNN-based classification error) which enabled the GA to obtain a combinatorial set of features giving rise to optimal accuracy. The results obtained were compared with various feature selectors from the WEKA software and proved better in many respects, notably in classification accuracy.
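
    The GA-plus-kNN loop the article describes can be sketched roughly as follows, using synthetic data in place of the Flavia features. All names, GA settings, and the data generator here are illustrative assumptions, not the article's MATLAB GA Toolbox configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: 60 samples, 10 features, only the first 3 carry class signal.
y = np.repeat([0, 1], 30)
X = rng.normal(size=(60, 10))
X[:, :3] += y[:, None] * 2.5

def knn_error(mask):
    """Leave-one-out 1-NN error on the feature subset selected by `mask`."""
    if not mask.any():
        return 1.0
    Z = X[:, mask]
    D = np.linalg.norm(Z[:, None] - Z[None, :], axis=2)
    np.fill_diagonal(D, np.inf)            # exclude each sample itself
    return np.mean(y[D.argmin(axis=1)] != y)

def ga_select(pop_size=20, gens=15, p_mut=0.05):
    """Binary GA: tournament selection, uniform crossover, bit-flip mutation,
    with the kNN classification error as the fitness to minimise."""
    pop = rng.integers(0, 2, size=(pop_size, X.shape[1])).astype(bool)
    for _ in range(gens):
        fit = np.array([knn_error(ind) for ind in pop])
        new = []
        for _ in range(pop_size):
            i, j = rng.choice(pop_size, 2, replace=False)
            a = pop[i] if fit[i] <= fit[j] else pop[j]
            k, l = rng.choice(pop_size, 2, replace=False)
            b = pop[k] if fit[k] <= fit[l] else pop[l]
            cross = rng.random(X.shape[1]) < 0.5
            child = np.where(cross, a, b)
            child ^= rng.random(X.shape[1]) < p_mut   # bit-flip mutation
            new.append(child)
        pop = np.array(new)
    fit = np.array([knn_error(ind) for ind in pop])
    return pop[fit.argmin()], fit.min()

best_mask, best_err = ga_select()
print(best_mask.astype(int), best_err)
```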

  13. Controversies in Contemporary Facial Reanimation.

    Science.gov (United States)

    Kim, Leslie; Byrne, Patrick J

    2016-08-01

    Facial palsy is a devastating condition with profound functional, aesthetic, and psychosocial implications. Although the complexity of facial expression and intricate synergy of facial mimetic muscles are difficult to restore, the goal of management is to reestablish facial symmetry and movement. Facial reanimation surgery requires an individualized treatment approach based on the cause, pattern, and duration of facial palsy while considering patient age, comorbidities, motivation, and goals. Contemporary reconstructive options include a spectrum of static and dynamic procedures. Controversies in the evaluation of patients with facial palsy, timing of intervention, and management decisions for dynamic smile reanimation are discussed. PMID:27400842

  14. Dynamic Model of Facial Expression Recognition based on Eigen-face Approach

    OpenAIRE

    Bajaj, Nikunj; Routray, Aurobinda; Happy, S L

    2013-01-01

    Emotions are the best way of communicating information, and sometimes they carry more information than words. Recently, there has been a huge interest in automatic recognition of human emotion because of its wide spread application in security, surveillance, marketing, advertisement, and human-computer interaction. To communicate with a computer in a natural way, it will be desirable to use more natural modes of human communication based on voice, gestures and facial expressions. In this paper, a h...

  15. Facial Expressions Recognition Using Eigenspaces

    OpenAIRE

    Senthil Ragavan Valayapalayam Kittusamy; Venkatesh Chakrapani

    2012-01-01

    A challenging research topic is to make Computer Systems recognize facial expressions from the face image. A method of facial expression recognition, based on Eigenspaces, is presented in this study. Here, the authors recognize the user's facial expressions from the input images, using a method that was customized from eigenface recognition. Evaluation was done for this method in terms of identification correctness using two different Facial Expressions databases, Cohn-Kanade facial exp...

  16. 结合FSVM和KNN的人脸表情识别%Facial Expression Recognition Based on FSVM and KNN

    Institute of Scientific and Technical Information of China (English)

    王小虎; 黄银珍; 张石清

    2013-01-01

    To improve recognition accuracy, a new approach for facial expression recognition based on the Fuzzy Support Vector Machine (FSVM) and K-Nearest Neighbor (KNN) is presented in this paper. First, features of the static facial expression image are extracted by Principal Component Analysis (PCA); then the algorithm adaptively divides the regions to be classified into different region types according to their degree of discriminability and, drawing on the respective strengths of FSVM and KNN, switches the classification method according to region type. Experiments show that the proposed algorithm achieves good recognition accuracy while reducing computational complexity.

  17. 3-D-CT reconstructions in fractures of the skull base and facial skeleton

    International Nuclear Information System (INIS)

    3-D reconstructions of the skull base, temporal bone, and skull fractures were compared to 2-D CT to evaluate their diagnostic value in traumatized patients. 38 patients with 22 fractures of the facial skeleton (orbita, zygomatic, Le Fort), 12 temporal bone fractures, and 4 skull fractures were investigated. Subjective grading was performed by two physicians (ENT/RAD) with respect to quality, diagnostic validity and estimated clinical impact. The average image validity and quality were graded good. In the temporal bone the average information supplied by 3-D was of inferior value; here, the lack of information regarding the inner ear structures was responsible for the lack of clinical impact. In fractures of the facial skeleton and the skull base, good to very good image quality was seen and clinical relevance was high. 3-D CT is capable of demonstrating fractures, which is of little value in the temporal bone, but of high value in the skull base and the facial skeleton, especially if surfaces are involved or fragments are displaced. (orig.)

  18. [Hemodynamic features assessment in submental and facial arteries in patients with early atherosclerotic disease of brachycephalic arteries].

    Science.gov (United States)

    Nadtochiĭ, A G; Grudianov, A I; Avraamova, T V

    2014-01-01

    Ultrasonic duplex scanning was used to assess the hemodynamics in the submental and facial arteries of patients with early signs of atherosclerotic changes in the brachiocephalic arteries and periodontal pathology of different stages, with the aim of improving the prevention of periodontal diseases through the prevention of vascular diseases. It was established that the influence of risk factors is more important than the age of the patients. PMID:25588335

  19. Intensity-based registration and fusion of thermal and visual facial-images

    Science.gov (United States)

    Arslan, Musa Serdar; Elbakaray, Mohamed I.; Reza, Shamim; Iftekharuddin, Khan M.

    2012-10-01

    Fusion of images from different modalities provides information that cannot be obtained by viewing the images separately and consecutively. Automatic fusion of thermal and visual images is of great interest in defense and medical applications. In this study, we implemented automatic intensity-based illumination, translation and scale invariant registration of deformable objects in thermal and visual images by maximization of a similarity measure such as generalized correlation ratio. This method was originally used to register ultrasound (US) and magnetic resonance images (MRI) successfully. In our current work, we propose a major modification to the original algorithm by investigating appropriate information content in the input data. The registration of facial thermal and visual images in this algorithm is achieved by maximization of the similarity measure between the input images in the appropriate image channel. The algorithm is tested using real facial images with illumination, scale, and translation variations and the results show acceptable accuracy.

  20. Texton Based Shape Features on Local Binary Pattern for Age Classification

    Directory of Open Access Journals (Sweden)

    V.Vijaya Kumar

    2012-07-01

    Full Text Available Classification and recognition of objects is of interest to many researchers. Shape is a significant feature of objects and it plays a crucial role in image classification and recognition. The present paper assumes that the features that drastically affect the adulthood classification system are the shape features (SF) of the face. Based on this, the present paper proposes a new technique of adulthood classification by extracting feature parameters of the face on Integrated Texton-based LBP (IT-LBP) images. The present paper evaluates LBP features on facial images. On LBP Texton images, complex shape features are evaluated on facial images for a precise age classification. LBP is a local texture operator with low computational complexity and low sensitivity to changes in illumination. Textons are considered as texture shape primitives which are located with certain placement rules. The proposed shape features represent emergent patterns showing a common property all over the image. The experimental evidence on the FGnet aging database clearly indicates the significance and accuracy of the proposed classification method over the other existing methods.
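
    The LBP operator mentioned above reduces to a few lines, which is why its computational complexity is low. A minimal 8-neighbour sketch follows (a plain LBP, not the paper's integrated Texton variant); the shift-invariance check at the end illustrates the low sensitivity to illumination changes.

```python
import numpy as np

def lbp_8(img):
    """Basic 8-neighbour LBP code for each interior pixel of a gray image.

    Each neighbour >= centre contributes one bit, so codes fall in 0..255.
    """
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

gray = np.array([[5, 5, 5, 5],
                 [5, 9, 1, 5],
                 [5, 5, 5, 5]], dtype=np.uint8)
codes = lbp_8(gray)
hist = np.bincount(codes.ravel(), minlength=256)  # the texture descriptor
print(codes)
```

    The histogram of codes over an image region is what is typically fed to a classifier; adding a constant brightness offset leaves the codes unchanged.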

  1. Facial Expression Recognition Techniques Based on Bilinear Model%基于双线性模型的人脸表情识别技术

    Institute of Scientific and Technical Information of China (English)

    徐欢

    2014-01-01

    Aiming at problems in current facial expression recognition, and based on data from the 3D expression database BU-3DFE, we study point-cloud alignment of 3D facial expression data, build bilinear models from the aligned data, and improve the recognition algorithms based on the bilinear model to form new recognition and classification algorithms. The goal is to reduce the share of the computation devoted to identity features in the original algorithm, minimize the influence of identity features on the overall expression recognition process, improve facial expression recognition results, and ultimately achieve highly robust 3D facial expression recognition.

  2. Three-dimensional, Full-sized, Silicone-based, Facial Replicas for Teaching Outcome Measures in Acne

    OpenAIRE

    Tan, Jerry K. L.; Tang, Jing

    2010-01-01

    Background: The scientific integrity of outcome measurements is dependent upon reproducibility and accuracy. In acne assessments, there is no current gold standard for accuracy in lesion counting and global grading. Purpose: The purpose of this study was to create facial acne replicas for use in acne training and for evaluation of rater accuracy. Methods: Two full-sized, three-dimensional, silicone-based, facial replicas with predetermined acne lesion type and number were created. Their teach...

  3. Ontology Based Feature Driven Development Life Cycle

    Directory of Open Access Journals (Sweden)

    Farheen Siddiqui

    2012-01-01

    Full Text Available The upcoming technology support for the semantic web promises fresh directions for the Software Engineering community. The semantic web also has its roots in knowledge engineering, which provokes software engineers to look for applications of ontologies throughout the Software Engineering lifecycle. The internal components of a semantic web application are "light weight" and may be of lower quality standards than the externally visible modules. In fact, the internal components are generated from the external (ontological) component. That is why agile development approaches such as feature driven development are suitable for the development of such internal components. As yet there is no particular procedure that describes the role of ontology in FDD processes. Therefore we propose an ontology-based feature driven development for semantic web applications that can be used from application model development to feature design and implementation. Features are precisely defined in the OWL-based domain model. The transition from the OWL-based domain model to the feature list is directly defined in transformation rules. On the other hand, the ontology-based overall model can be easily validated through automated tools. Advantages of ontology-based feature driven development are also discussed.

  4. Rheology-based facial animation realistic face model

    Institute of Scientific and Technical Information of China (English)

    ZENG Dan; PEI Li

    2009-01-01

    This paper presents a rheology-based approach to animate a realistic face model. The dynamic and biorheological characteristics of the force member (muscles) and the stressed member (face) are considered. The stressed face can be modeled as viscoelastic bodies with Hooke bodies and Newton bodies connected in a composite series-parallel manner. Then, the stress-strain relationship is derived, and the constitutive equations are established. Using these constitutive equations, the face model can be animated with the force generated by the muscles. Experimental results show that this method can realistically simulate the mechanical properties and motion characteristics of the human face, and the performance of this method is satisfactory.
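
    The abstract does not spell out the exact series-parallel arrangement, so as a minimal illustration of its two building blocks, here is a single Kelvin-Voigt element (one Hooke spring in parallel with one Newton dashpot) integrated under constant stress and checked against its closed-form creep response. All parameter values are illustrative.

```python
import numpy as np

# Kelvin-Voigt element (Hooke spring E parallel to Newton dashpot eta):
#   sigma = E * eps + eta * d(eps)/dt
# Under a constant stress sigma0 the strain creeps as
#   eps(t) = (sigma0 / E) * (1 - exp(-E * t / eta)).
E, eta, sigma0, dt = 2.0, 5.0, 1.0, 1e-3
t = np.arange(0, 5, dt)
eps = np.zeros_like(t)
for k in range(1, len(t)):                      # explicit Euler integration
    deps = (sigma0 - E * eps[k - 1]) / eta
    eps[k] = eps[k - 1] + dt * deps
closed = (sigma0 / E) * (1 - np.exp(-E * t / eta))
print(round(eps[-1], 3), round(float(np.abs(eps - closed).max()), 5))
```

    Chaining several such elements in series and parallel gives the composite viscoelastic behavior the paper builds its constitutive equations on.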

  5. Facial Action Unit Recognition under Incomplete Data Based on Multi-label Learning with Missing Labels

    KAUST Repository

    Li, Yongqiang

    2016-07-07

    Facial action unit (AU) recognition has been applied in a wide range of fields, and has attracted great attention in the past two decades. Most existing works on AU recognition assumed that the complete label assignment for each training image is available, which is often not the case in practice. Labeling AUs is an expensive and time-consuming process. Moreover, due to AU ambiguity and subjective difference, some AUs are difficult to label reliably and confidently. Many AU recognition works try to train the classifier for each AU independently, which is of high computation cost and ignores the dependency among different AUs. In this work, we formulate AU recognition under incomplete data as a multi-label learning with missing labels (MLML) problem. Most existing MLML methods usually employ the same features for all classes. However, we find this setting is unreasonable in AU recognition, as the occurrence of different AUs produces changes of skin surface displacement or face appearance in different face regions. If shared features are used for all AUs, much noise will be involved due to the occurrence of other AUs, so the changes caused by the specific AUs cannot be clearly highlighted, leading to performance degradation. Instead, we propose to extract the most discriminative features for each AU individually, which are learned by the supervised learning method. The learned features are further embedded into the instance-level label smoothness term of our model, which also includes the label consistency and the class-level label smoothness. Both a global solution using st-cut and an approximated solution using conjugate gradient (CG) descent are provided. Experiments on both posed and spontaneous facial expression databases demonstrate the superiority of the proposed method in comparison with several state-of-the-art works.

  6. Multiresolution Feature Based Fractional Power Polynomial Kernel Fisher Discriminant Model for Face Recognition

    OpenAIRE

    Dattatray V. Jadhav; Jayant V. Kulkarni; Raghunath S. Holambe

    2008-01-01

    This paper presents a technique for face recognition which uses the wavelet transform to derive desirable facial features. Three-level decompositions are used to form the pyramidal multiresolution features to cope with the variations due to illumination and facial expression changes. The fractional power polynomial kernel maps the input data into an implicit feature space with a nonlinear mapping. Being linear in the feature space, but nonlinear in the input space, the kernel is capable of deriving ...

  7. 基于多核学习的画像画风的识别%Drawing Style Recognition of Facial Sketch Based on Multiple Kernel Learning

    Institute of Scientific and Technical Information of China (English)

    张铭津; 李洁; 王楠楠

    2015-01-01

    The drawing style recognition of facial sketches is widely used for painting authentication and criminal investigation. A drawing style recognition algorithm for facial sketches based on multiple kernel learning is presented. First, following the way art critics identify the drawing style of a sketch from how its parts are handled, five parts are extracted from the facial sketch: the face, left eye, right eye, nose and mouth. Then, reflecting how artists read the lights and shadows on a face and the various uses of the pencil, a gray histogram feature, gray moment feature, speeded-up robust feature and multiscale local binary pattern feature are extracted from each part. Finally, the different parts and features are fused by multiple kernel learning to classify the drawing styles of facial sketches. Experimental results demonstrate that the proposed algorithm performs well and obtains higher recognition rates.

  8. Adaptive norm-based coding of facial identity.

    Science.gov (United States)

    Rhodes, Gillian; Jeffery, Linda

    2006-09-01

    Identification of a face is facilitated by adapting to its computationally opposite identity, suggesting that the average face functions as a norm for coding identity [Leopold, D. A., O'Toole, A. J., Vetter, T., & Blanz, V. (2001). Prototype-referenced shape encoding revealed by high-level aftereffects. Nature Neuroscience, 4, 89-94; Leopold, D. A., Rhodes, G., Müller, K. -M., & Jeffery, L. (2005). The dynamics of visual adaptation to faces. Proceedings of the Royal Society of London, Series B, 272, 897-904]. Crucially, this interpretation requires that the aftereffect is selective for the opposite identity, but this has not been convincingly demonstrated. We demonstrate such selectivity, observing a larger aftereffect for opposite than non-opposite adapt-test pairs that are matched on perceptual contrast (dissimilarity). Component identities were also harder to detect in morphs of opposite than non-opposite face pairs. We propose an adaptive norm-based coding model of face identity. PMID:16647736

  9. Silicone based artificial skin for humanoid facial expressions

    Science.gov (United States)

    Tadesse, Yonas; Moore, David; Thayer, Nick; Priya, Shashank

    2009-03-01

    Artificial skin materials were synthesized using platinum-cured silicone elastomeric material (Reynolds Advanced Materials Inc.) as the base consisting of mainly polyorganosiloxanes, amorphous silica and platinum-siloxane complex compounds. Systematic incorporation of porosity in this material was found to lower the force required to deform the skin in axial direction. In this study, we utilized foaming agents comprising of sodium bicarbonate and dilute form of acetic acid for modifying the polymeric chain and introducing the porosity. Experimental determination of functional relationship between the concentration of foaming agent, slacker and non-reactive silicone fluid and that of force - deformation behavior was conducted. Tensile testing of material showed a local parabolic relationship between the concentrations of foaming agents used (per milliliter of siloxane compound) and strain. This data can be used to optimize the amount of additives in platinum cured silicone to obtain desired force - displacement characteristics. Addition of "silicone thinner" and "slacker" showed a monotonically increasing strain behavior. A mathematical model was developed to arrive at the performance metrics of artificial skin.

  10. Facial Expression Recognition Based on RGB-D%基于RGB-D的人脸表情识别研究

    Institute of Scientific and Technical Information of China (English)

    吴会霞; 陶青川; 龚雪友

    2016-01-01

    To address the low recognition accuracy of two-dimensional facial expression recognition under complex or poor lighting conditions, a facial expression recognition algorithm based on RGB-D multi-classifier fusion is proposed. The algorithm first extracts LPQ, Gabor, LBP and HOG features from the color information (Y, Cr, Q) and the depth information (D) of the image, applies linear dimensionality reduction (PCA) and feature-space transformation (LDA) to the high-dimensional features, then obtains a weak classifier for each expression with nearest-neighbor classification, weights the weak classifiers with the AdaBoost algorithm to generate a strong classifier, and finally fuses the multiple classifiers with a Bayes scheme and reports the average recognition rate. On the CurtinFaces and KinectFaceDB facial expression databases, which contain complex lighting variations, the algorithm reaches an average recognition rate of up to 98.80%. The results show that, compared with expression recognition on color images alone, fusing depth information improves the recognition rate of facial expression recognition markedly and has practical application value.

  11. Texture feature based liver lesion classification

    Science.gov (United States)

    Doron, Yeela; Mayer-Wolf, Nitzan; Diamant, Idit; Greenspan, Hayit

    2014-03-01

    Liver lesion classification is a difficult clinical task. Computerized analysis can support clinical workflow by enabling more objective and reproducible evaluation. In this paper, we evaluate the contribution of several types of texture features for a computer-aided diagnostic (CAD) system which automatically classifies liver lesions from CT images. Based on the assumption that liver lesions of various classes differ in their texture characteristics, a variety of texture features were examined as lesion descriptors. Although texture features are often used for this task, there is currently a lack of detailed research focusing on the comparison across different texture features, or their combinations, on a given dataset. In this work we investigated the performance of Gray Level Co-occurrence Matrix (GLCM), Local Binary Patterns (LBP), Gabor, gray level intensity values and Gabor-based LBP (GLBP), where the features are obtained from a given lesion's region of interest (ROI). For the classification module, SVM and KNN classifiers were examined. Using a single type of texture feature, the best result of 91% accuracy was obtained with Gabor filtering and SVM classification. Combination of Gabor, LBP and Intensity features improved the results to a final accuracy of 97%.
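
    As a rough illustration of the first descriptor listed, a GLCM for a single displacement and the Haralick-style statistics read from it can be computed as follows. The toy ROI, the single (dy, dx) offset restricted to non-negative values, and the three statistics are illustrative choices, not the paper's exact configuration.

```python
import numpy as np

def glcm(img, dy, dx, levels):
    """Gray Level Co-occurrence Matrix for one non-negative displacement
    (dy, dx), normalised to a joint probability table."""
    h, w = img.shape
    a = img[:h - dy, :w - dx]          # reference pixels
    b = img[dy:, dx:]                  # their displaced neighbours
    m = np.zeros((levels, levels))
    np.add.at(m, (a.ravel(), b.ravel()), 1)
    return m / m.sum()

roi = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
P = glcm(roi, 0, 1, levels=4)
# Classic Haralick statistics read off the normalised matrix:
i, j = np.indices(P.shape)
contrast = ((i - j) ** 2 * P).sum()
energy = (P ** 2).sum()
homogeneity = (P / (1.0 + (i - j) ** 2)).sum()
print(round(contrast, 3), round(energy, 3), round(homogeneity, 3))
```

    In a CAD pipeline, several displacements and statistics would be concatenated into the feature vector handed to the SVM or KNN classifier.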

  12. NEURO FUZZY MODEL FOR FACE RECOGNITION WITH CURVELET BASED FEATURE IMAGE

    OpenAIRE

    SHREEJA R,; KHUSHALI DEULKAR,; SHALINI BHATIA

    2011-01-01

    A facial recognition system is a computer application for automatically identifying or verifying a person from a digital image or a video frame from a video source. One of the ways to do this is by comparing selected facial features from the image and a facial database. It is typically used in security systems and can be compared to other biometric techniques such as fingerprint or iris recognition systems. Every face has approximately 80 nodal points like (Distance between the eyes, Width of...

  13. Facial emotion recognition system for autistic children: a feasible study based on FPGA implementation.

    Science.gov (United States)

    Smitha, K G; Vinod, A P

    2015-11-01

    Children with autism spectrum disorder have difficulty in understanding the emotional and mental states from the facial expressions of the people they interact with. The inability to understand other people's emotions will hinder their interpersonal communication. Though many facial emotion recognition algorithms have been proposed in the literature, they are mainly intended for processing by a personal computer, which limits their usability in on-the-move applications where portability is desired. The portability of the system will ensure ease of use and real-time emotion recognition, which will aid immediate feedback while communicating with caretakers. Principal component analysis (PCA) has been identified as the least complex feature extraction algorithm to be implemented in hardware. In this paper, we present a detailed study of serial and parallel implementations of PCA in order to identify the most feasible method for the realization of a portable emotion detector for autistic children. The proposed emotion recognizer architectures are implemented on a Virtex 7 XC7VX330T FFG1761-3 FPGA. We achieved 82.3% detection accuracy for a word length of 8 bits. PMID:26239162
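
    A toy sketch of why word length matters for a hardware PCA datapath: project a vector onto an eigenface-style basis exactly, then with both operands rounded to an 8-bit fixed-point grid. The quantization scheme, scale factor, and toy data here are illustrative assumptions, not the paper's FPGA design.

```python
import numpy as np

def quantize(a, bits, scale):
    """Round to a signed fixed-point grid of the given word length."""
    q = np.clip(np.round(a * scale), -(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
    return q / scale

rng = np.random.default_rng(2)
faces = rng.random((20, 64))            # toy flattened face vectors
mean = faces.mean(axis=0)
U, s, Vt = np.linalg.svd(faces - mean, full_matrices=False)
W = Vt[:8]                              # top-8 principal-component basis

x = faces[0] - mean
exact = W @ x
# Emulate an 8-bit datapath: quantize basis and input before the MAC loop.
approx = quantize(W, 8, 64) @ quantize(x, 8, 64)
err = np.abs(exact - approx).max()
print(round(err, 4))
```

    The projection error stays small at 8 bits here, consistent with the idea that a short word length can be enough for acceptable recognition accuracy.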

  14. Implementation of an Improved Facial Recognition Algorithm in a Web based Learning System

    Directory of Open Access Journals (Sweden)

    Adeolu Olabode Afolabi

    2012-11-01

    Full Text Available The study focuses on proffering a solution to some identified data-insecurity problems in software development, using a Web-based learning system as a test bed, through the development of a hybrid crypto-biometric security system and the use of an enhanced eigen-based facial recognition algorithm. The methodology implements an optimized principal component analysis eigen facial recognition algorithm for black faces using MATLAB. A comparative analysis of the performance of the optimized principal component analysis (OPCA) and principal component analysis (PCA) was carried out, and OPCA was found to perform better than PCA. A Web-based learning system was also developed as a test bed for the crypto-biometric system, using the Hypertext Pre-processor (PHP) scripting language for the Web-based pages and Asynchronous JavaScript and XML (AJAX). With this work, a prototype for a secured Web-based learning infrastructure, its contextual framework, and an optimized principal component analysis algorithm for black face recognition evolve as contributions to knowledge; hence it will foster the indigenization of electronic learning technology and adequately address the related challenges of system security in terms of confidentiality and integrity.

  15. Face recognition via edge-based Gabor feature representation for plastic surgery-altered images

    Science.gov (United States)

    Chude-Olisah, Chollette C.; Sulong, Ghazali; Chude-Okonkwo, Uche A. K.; Hashim, Siti Z. M.

    2014-12-01

    Plastic surgery procedures on the face introduce skin texture variations between images of the same person (intra-subject), thereby making the task of face recognition more difficult than in normal scenario. Usually, in contemporary face recognition systems, the original gray-level face image is used as input to the Gabor descriptor, which translates to encoding some texture properties of the face image. The texture-encoding process significantly degrades the performance of such systems in the case of plastic surgery due to the presence of surgically induced intra-subject variations. Based on the proposition that the shape of significant facial components such as eyes, nose, eyebrow, and mouth remains unchanged after plastic surgery, this paper employs an edge-based Gabor feature representation approach for the recognition of surgically altered face images. We use the edge information, which is dependent on the shapes of the significant facial components, to address the plastic surgery-induced texture variation problems. To ensure that the significant facial components represent useful edge information with little or no false edges, a simple illumination normalization technique is proposed for preprocessing. Gabor wavelet is applied to the edge image to accentuate on the uniqueness of the significant facial components for discriminating among different subjects. The performance of the proposed method is evaluated on the Georgia Tech (GT) and the Labeled Faces in the Wild (LFW) databases with illumination and expression problems, and the plastic surgery database with texture changes. Results show that the proposed edge-based Gabor feature representation approach is robust against plastic surgery-induced face variations amidst expression and illumination problems and outperforms the existing plastic surgery face recognition methods reported in the literature.
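
    The edge-then-Gabor pipeline described above can be sketched as follows: a Sobel gradient magnitude stands in for the edge image, and a single real Gabor kernel responds strongly when its orientation matches an edge. The kernel parameters, synthetic step-edge image, and function names are illustrative, not the paper's configuration.

```python
import numpy as np

def gabor_kernel(ksize, sigma, theta, lam, gamma=0.5):
    """Real part of a Gabor filter: a Gaussian envelope times a cosine carrier
    oriented at angle `theta` with wavelength `lam`."""
    h = ksize // 2
    y, x = np.mgrid[-h:h + 1, -h:h + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
    return g * np.cos(2 * np.pi * xr / lam)

def sobel_edges(img):
    """Gradient magnitude from Sobel derivatives: an 'edge image' that can be
    fed to the Gabor descriptor instead of raw gray levels."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    pad = np.pad(img.astype(float), 1, mode="edge")
    gx = sum(kx[i, j] * pad[i:i + img.shape[0], j:j + img.shape[1]]
             for i in range(3) for j in range(3))
    gy = sum(kx.T[i, j] * pad[i:i + img.shape[0], j:j + img.shape[1]]
             for i in range(3) for j in range(3))
    return np.hypot(gx, gy)

# A vertical step edge responds strongly to a vertically-oriented Gabor.
img = np.zeros((32, 32))
img[:, 16:] = 1.0
edges = sobel_edges(img)
k = gabor_kernel(9, sigma=3.0, theta=0.0, lam=6.0)
patch = edges[12:21, 12:21]
response = float((patch * k).sum())
print(edges.max(), round(response, 2))
```

    Because the edge image depends on component shape rather than skin texture, responses like this one are less disturbed by surgically induced texture change.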

  16. Comparative analysis of the anterior and posterior length and deflection angle of the cranial base, in individuals with facial Pattern I, II and III

    Directory of Open Access Journals (Sweden)

    Guilherme Thiesen

    2013-02-01

Full Text Available OBJECTIVE: This study evaluated the variations in the anterior cranial base (S-N), posterior cranial base (S-Ba) and deflection of the cranial base (SNBa) among three different facial patterns (Pattern I, II and III). METHOD: A sample of 60 lateral cephalometric radiographs of Brazilian Caucasian patients, of both genders, between 8 and 17 years of age was selected. The sample was divided into 3 groups (Pattern I, II and III) of 20 individuals each. The inclusion criteria for each group were the ANB angle, Wits appraisal and the facial profile angle (G'.Sn.Pg'). To compare the mean values (SNBa, S-N, S-Ba) obtained for each group, the ANOVA test and Scheffé's post-hoc test were applied. RESULTS AND CONCLUSIONS: There was no statistically significant difference in the deflection angle of the cranial base among the different facial patterns (Patterns I, II and III). There was no significant difference in the measures of the anterior and posterior cranial base between facial Patterns I and II. The mean values for S-Ba were lower in facial Pattern III, with a statistically significant difference. The mean values of S-N in facial Pattern III were also reduced, but without a statistically significant difference. This trend of lower values in the cranial base measurements would explain the maxillary deficiency and/or mandibular prognathism features that characterize facial Pattern III.

  17. Facial Reconstruction and Rehabilitation.

    Science.gov (United States)

    Guntinas-Lichius, Orlando; Genther, Dane J; Byrne, Patrick J

    2016-01-01

Extracranial infiltration of the facial nerve by salivary gland tumors is the most frequent cause of facial palsy secondary to malignancy. Nevertheless, facial palsy related to salivary gland cancer is uncommon. Therefore, reconstructive facial reanimation surgery is not a routine undertaking for most head and neck surgeons. The primary aims of facial reanimation are to restore tone, symmetry, and movement to the paralyzed face. Such restoration should improve the patient's objective motor function and subjective quality of life. The surgical procedures for facial reanimation rely heavily on long-established techniques, but many advances and improvements have been made in recent years. In the past, published experiences on strategies for optimizing functional outcomes in facial paralysis patients were primarily based on small case series and described a wide variety of surgical techniques. However, in recent years, larger series have been published from high-volume centers with significant and specialized experience in surgical and nonsurgical reanimation of the paralyzed face that have informed modern treatment. This chapter reviews the most important diagnostic methods used for the evaluation of facial paralysis to optimize the planning of each individual's treatment and discusses surgical and nonsurgical techniques for facial rehabilitation based on the contemporary literature. PMID:27093062

  18. Texture Classification Based on Texton Features

    Directory of Open Access Journals (Sweden)

    U Ravi Babu

    2012-08-01

Full Text Available Texture analysis plays an important role in the interpretation, understanding and recognition of terrain, biomedical and microscopic images. To achieve high classification accuracy, the present paper proposes a new method based on textons. Every texture analysis method depends on how well the selected texture features characterize the image. Whenever a new texture feature is derived, it must be tested for whether it classifies textures precisely. Not only the texture features themselves but also the way in which they are applied is significant for precise and accurate texture classification and analysis. The present paper proposes a new texton-based method for efficient, rotationally invariant texture classification. The proposed Texton Features (TF) evaluate the relationship between the values of neighboring pixels. The proposed classification algorithm evaluates histogram-based techniques on TF for precise classification. Experimental results on various stone textures indicate the efficacy of the proposed method when compared to other methods.
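The histogram-based classification step described above can be sketched as nearest-neighbour matching of normalized texton-label histograms. The sketch below assumes the per-pixel texton labels have already been computed (random stand-ins here), and uses the chi-square distance, a common choice for comparing texture histograms; the paper's actual TF computation differs.

```python
import numpy as np

def texton_histogram(labels, n_textons):
    """Normalized histogram of per-pixel texton labels (labels assumed precomputed)."""
    h = np.bincount(labels.ravel(), minlength=n_textons).astype(float)
    return h / h.sum()

def chi2(p, q, eps=1e-10):
    """Chi-square distance between two normalized histograms."""
    return 0.5 * np.sum((p - q) ** 2 / (p + q + eps))

def classify(query, references):
    """Nearest-neighbour class label by histogram distance."""
    return min(references, key=lambda name: chi2(query, references[name]))

rng = np.random.default_rng(0)
stone = texton_histogram(rng.integers(0, 4, (16, 16)), 8)  # mass on textons 0-3
brick = texton_histogram(rng.integers(4, 8, (16, 16)), 8)  # mass on textons 4-7
query = texton_histogram(rng.integers(0, 4, (16, 16)), 8)  # drawn like "stone"
label = classify(query, {"stone": stone, "brick": brick})
```

Because the query's texton distribution overlaps the "stone" reference and not the "brick" one, the nearest-neighbour rule assigns it to "stone".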

  19. Facial Expression Recognition Based on the Difference Image Method

    Institute of Scientific and Technical Information of China (English)

    2013-01-01

    This paper discusses static-image feature extraction methods for facial expressions and presents a facial expression recognition method based on the difference image. Feature points are located from the difference image, and MATLAB toolboxes are used to fit the feature points in order to find the changes in the feature regions. Experiments verify the feasibility of the method.

  20. Statistical feature extraction based iris recognition system

    Indian Academy of Sciences (India)

    ATUL BANSAL; RAVINDER AGARWAL; R K SHARMA

    2016-05-01

Iris recognition systems have been proposed by numerous researchers using different feature extraction techniques for accurate and reliable biometric authentication. In this paper, a statistical feature extraction technique based on correlation between adjacent pixels has been proposed and implemented. A Hamming distance based metric has been used for matching. Performance of the proposed iris recognition system (IRS) has been measured by recording false acceptance rate (FAR) and false rejection rate (FRR) at different thresholds in the distance metric. System performance has been evaluated by computing statistical features along two directions, namely, the radial direction of the circular iris region and the angular direction extending from pupil to sclera. Experiments have also been conducted to study the effect of the number of statistical parameters on FAR and FRR. Results obtained from the experiments based on different sets of statistical features of iris images show that there is a significant improvement in equal error rate (EER) when the number of statistical parameters for feature extraction is increased from three to six. Further, it has also been found that increasing radial/angular resolution, with normalization in place, improves EER for the proposed iris recognition system.
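The Hamming-distance matching step can be sketched directly: two binary iris codes are compared by the fraction of disagreeing bits, and a threshold on that fraction sets the FAR/FRR trade-off. The code below is a generic illustration with random bit strings standing in for real iris codes; the 0.32 threshold is an illustrative choice, not the paper's operating point.

```python
import numpy as np

def hamming_distance(code_a, code_b, mask=None):
    """Fraction of disagreeing bits between two binary iris codes.
    Bits flagged invalid (eyelids, reflections) can be excluded via `mask`."""
    code_a, code_b = np.asarray(code_a, bool), np.asarray(code_b, bool)
    valid = np.ones_like(code_a) if mask is None else np.asarray(mask, bool)
    return np.count_nonzero((code_a ^ code_b) & valid) / np.count_nonzero(valid)

def decide(distance, threshold=0.32):
    """Accept as the same iris when the distance is below the operating threshold.
    Lowering the threshold trades FAR (false accepts) against FRR (false rejects)."""
    return distance < threshold

rng = np.random.default_rng(1)
enrolled = rng.integers(0, 2, 2048)
same_eye = enrolled.copy()
flip = rng.choice(2048, 100, replace=False)
same_eye[flip] ^= 1                        # a genuine re-capture: ~5% of bits differ
impostor = rng.integers(0, 2, 2048)        # an independent code: ~50% of bits differ

d_genuine = hamming_distance(enrolled, same_eye)
d_impostor = hamming_distance(enrolled, impostor)
```

Sweeping the threshold and recording accept/reject rates over many genuine and impostor pairs is exactly how the FAR/FRR curves (and hence the EER) in the abstract are obtained.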

  1. Facial Expression Recognition Using SVM Classifier

    OpenAIRE

    Vasanth P.C.; Nataraj. K. R

    2015-01-01

Facial feature tracking and facial action recognition from image sequences have attracted great attention in the computer vision field. Computational facial expression analysis is a challenging research topic in computer vision. It is required by many applications such as human-computer interaction, computer graphics animation and automatic facial expression recognition. In recent years, plenty of computer vision techniques have been developed to track or recognize facial activities in three levels...

  2. Nablus mask-like facial syndrome

    DEFF Research Database (Denmark)

    Allanson, Judith; Smith, Amanda; Hare, Heather; Albrecht, Beate; Bijlsma, Emilia; Dallapiccola, Bruno; Donti, Emilio; Fitzpatrick, David; Isidor, Bertrand; Lachlan, Katherine; Le Caignec, Cedric; Prontera, Paolo; Raas-Rothschild, Annick; Rogaia, Daniela; van Bon, Bregje; Aradhya, Swaroop; Crocker, Susan F; Jarinova, Olga; McGowan-Jordan, Jean; Boycott, Kym; Bulman, Dennis; Fagerberg, Christina

    Nablus mask-like facial syndrome (NMLFS) has many distinctive phenotypic features, particularly tight glistening skin with reduced facial expression, blepharophimosis, telecanthus, bulky nasal tip, abnormal external ear architecture, upswept frontal hairline, and sparse eyebrows. Over the last few...... heterozygous deletions significantly overlapping the region associated with NMLFS. Notably, while one mother and child were said to have mild tightening of facial skin, none of these individuals exhibited reduced facial expression or the classical facial phenotype of NMLFS. These findings indicate that...

  3. Feature Selection Based on Mutual Correlation

    Czech Academy of Sciences Publication Activity Database

    Haindl, Michal; Somol, Petr; Ververidis, D.; Kotropoulos, C.

    2006-01-01

Roč. 19, č. 4225 (2006), s. 569-577. ISSN 0302-9743. [Iberoamerican Congress on Pattern Recognition. CIARP 2006 /11./. Cancun, 14.11.2006-17.11.2006] R&D Projects: GA AV ČR 1ET400750407; GA MŠk 1M0572; GA AV ČR IAA2075302 EU Projects: European Commission(XE) 507752 - MUSCLE Institutional research plan: CEZ:AV0Z10750506 Keywords: feature selection Subject RIV: BD - Theory of Information Impact factor: 0.402, year: 2005 http://library.utia.cas.cz/separaty/historie/haindl-feature selection based on mutual correlation.pdf

  4. Heartbeat Rate Measurement from Facial Video

    DEFF Research Database (Denmark)

    Haque, Mohammad Ahsanul; Irani, Ramin; Nasrollahi, Kamal; Moeslund, Thomas B.

    2016-01-01

    combining a ‘Good feature to track’ and a ‘Supervised descent method’ in order to overcome the limitations of currently available facial video based HR measuring systems. Such limitations include, e.g., unrealistic restriction of the subject’s movement and artificial lighting during data capture. A face...

  5. An Age Estimation Method Based on Facial Images

    Institute of Scientific and Technical Information of China (English)

    罗佳佳; 蔡超

    2012-01-01

    Research on age estimation has a significant impact on human-computer interaction. In this paper, an age estimation method based on facial images is proposed. The method establishes a face anthropometry template based on craniofacial growth pattern theory to obtain facial geometric proportion features, extracts texture features of local facial areas using a fractional differential approach, and combines these two kinds of features to form personal age feature vectors. Machine learning methods such as clustering are used to train an age-feature knowledge matrix; during estimation, this knowledge matrix votes on the estimated age of the input facial image. Experimental results show that the estimation error is small and the classification accuracy is close to human judgment.

  6. Multi Feature Content Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    Rajshree S. Dubey,

    2010-09-01

Full Text Available There are a number of prevailing methods for image mining. This paper combines the features of four techniques, i.e., color histogram, color moments, texture, and the Edge Histogram Descriptor. The nature of the image is basically based on the human perception of the image, while the machine interpretation of the image is based on the contours and surfaces of the image. The study of image mining is a very challenging task because it involves pattern recognition, which is a very important tool for machine vision systems. A combination of four feature extraction methods is used, namely color histogram, color moments, texture, and the Edge Histogram Descriptor, and there is provision to add new features in the future for better retrieval efficiency. In this paper the four techniques are combined: the Euclidean distances for each of the features are calculated, added, and averaged. The user interface is provided by MATLAB. The image properties analyzed in this work are obtained using computer vision and image processing algorithms: for color, the histograms of images are computed; for texture, co-occurrence matrix based entropy, energy, etc., are calculated; and for edge density, the Edge Histogram Descriptor (EHD) is found. For retrieval of images, the averages of the four techniques' distances are taken and the resultant images are retrieved.
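The retrieval rule described above, averaging per-feature Euclidean distances, can be sketched in a few lines. The feature names and random stand-in vectors below are hypothetical; the paper's actual descriptors (histograms, moments, co-occurrence statistics, EHD) would take their place.

```python
import numpy as np

# Hypothetical per-feature vectors for a query and a small database.  In the
# paper the four features are colour histogram, colour moments, texture
# (co-occurrence statistics) and the Edge Histogram Descriptor.
FEATURES = ("hist", "moments", "texture", "ehd")

def combined_distance(query, entry):
    """Average of the per-feature Euclidean distances, as the abstract describes."""
    dists = [np.linalg.norm(query[f] - entry[f]) for f in FEATURES]
    return sum(dists) / len(dists)

def retrieve(query, database, k=2):
    """Return the k database keys closest to the query under the averaged distance."""
    ranked = sorted(database, key=lambda name: combined_distance(query, database[name]))
    return ranked[:k]

rng = np.random.default_rng(2)
def rand_entry(offset=0.0):
    return {f: rng.random(8) + offset for f in FEATURES}

query = {f: np.zeros(8) for f in FEATURES}
db = {"near": rand_entry(0.0), "far": rand_entry(5.0), "mid": rand_entry(2.0)}
top = retrieve(query, db)
```

One design caveat worth noting: averaging raw Euclidean distances only behaves well if the per-feature distances are on comparable scales; otherwise the feature with the largest numeric range dominates the average, so some normalisation is usually applied first.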

  7. Persistent facial pain conditions

    DEFF Research Database (Denmark)

    Forssell, Heli; Alstergren, Per; Bakke, Merete;

    2016-01-01

Persistent facial pains, especially temporomandibular disorders (TMD), are common conditions. As dentists are responsible for the treatment of most of these disorders, up-to-date knowledge on the latest advances in the field is essential for successful diagnosis and management. The review covers...... TMD, and different neuropathic or putative neuropathic facial pains such as persistent idiopathic facial pain and atypical odontalgia, trigeminal neuralgia and painful posttraumatic trigeminal neuropathy. The article presents an overview of TMD pain as a biopsychosocial condition, its prevalence......, clinical features, consequences, central and peripheral mechanisms, diagnostic criteria (DC/TMD), and principles of management. For each of the neuropathic facial pain entities, the definitions, prevalence, clinical features, and diagnostics are described. The current understanding of the pathophysiology...

  8. An object-oriented feature-based design system face-based detection of feature interactions

    International Nuclear Information System (INIS)

This paper presents an object-oriented, feature-based design system which supports the integration of design and manufacture by ensuring that part descriptions fully account for any feature interactions. Manufacturing information is extracted from the feature descriptions in the form of volumes and Tool Access Directions (TADs). When features interact, both volumes and TADs are updated. This methodology has been demonstrated by developing a prototype system in which ACIS attributes are used to record feature information within the data structure of the solid model. The system is implemented in the C++ programming language and embedded in a menu-driven X-windows user interface to the ACIS 3D Toolkit. (author)

  9. Facial melanoses: Indian perspective

    Directory of Open Access Journals (Sweden)

    Neena Khanna

    2011-01-01

Full Text Available Facial melanoses (FM) are a common presentation in Indian patients, causing cosmetic disfigurement with considerable psychological impact. Some of the well-defined causes of FM include melasma, Riehl's melanosis, lichen planus pigmentosus, erythema dyschromicum perstans (EDP), erythrosis, and poikiloderma of Civatte, but there is considerable overlap in features amongst these clinical entities. The etiology in most cases is unknown, but some factors are implicated, such as UV radiation in melasma, exposure to chemicals in EDP, and exposure to allergens in Riehl's melanosis. Diagnosis is generally based on clinical features. The treatment of FM includes removal of aggravating factors, vigorous photoprotection, and some form of active pigment reduction, either with topical agents or physical modes of treatment. Topical agents include hydroquinone (HQ), which is the most commonly used agent, often in combination with retinoic acid, corticosteroids, azelaic acid, kojic acid, and glycolic acid. Chemical peels are important modalities of physical therapy; other forms include lasers and dermabrasion.

  10. The classification of facial emotions: a computer-based taxonomic approach.

    Science.gov (United States)

    Pilowsky, I; Katsikitis, M

    1994-01-01

This study investigated whether the six 'fundamental' expressions of emotion each have configurational properties which would result in their being grouped into classes by a classification program. Twenty-three actors posed the six 'fundamental' emotions of happiness, surprise, fear, disgust, anger and sadness, and a neutral expression. Still images of these videotaped expressions were digitised and distance measures between facial landmark points were obtained. These measures were subjected to a numerical taxonomy procedure which generated five classes. Class 1 contained almost 70% of the happiness expressions. In Class 2 the majority of expressions were of surprise. Each of Classes 3, 4 and 5 consisted of mixtures of emotions. Class 5, however, was distinguished from all other classes by the complete absence of happiness expressions. The typical facial appearance of members of each class is described (based on distance measures). These findings support the salience of happiness among emotional expressions and may have implications for our understanding of the brain's function in the early development of the human infant as a social organism. PMID:8151051

  11. Competence Judgments Based on Facial Appearance Are Better Predictors of American Elections Than of Korean Elections.

    Science.gov (United States)

    Na, Jinkyung; Kim, Seunghee; Oh, Hyewon; Choi, Incheol; O'Toole, Alice

    2015-07-01

    Competence judgments based on facial appearance predict election results in Western countries, which indicates that these inferences contribute to decisions with social and political consequence. Because trait inferences are less pronounced in Asian cultures, such competence judgments should predict Asian election results less accurately than they do Western elections. In the study reported here, we compared Koreans' and Americans' competence judgments from face-to-trait inferences for candidates in U.S. Senate and state gubernatorial elections and Korean Assembly elections. Perceived competence was a far better predictor of the outcomes of real elections held in the United States than of elections held in Korea. When deciding which of two candidates to vote for in hypothetical elections, however, Koreans and Americans both voted on the basis of perceived competence inferred from facial appearance. Combining actual and hypothetical election results, we conclude that for Koreans, competence judgments from face-to-trait inferences are critical in voting only when other information is unavailable. However, in the United States, such competence judgments are substantially important, even in the presence of other information. PMID:25956912

  12. SVM Based Recognition of Facial Expressions Used In Indian Sign Language

    OpenAIRE

    Daleesha M Viswanathan; Sumam Mary Idicula

    2015-01-01

In sign language systems, facial expressions are an intrinsic component that usually accompanies hand gestures. The facial expressions modify or change the meaning of a hand gesture into a statement or a question, or improve the meaning and understanding of hand gestures. The scientific literature available in Indian Sign Language (ISL) on facial expression recognition is scanty. Contrary to American Sign Language (ASL), head movements are less conspicuous in ISL and the answers to questions...

  13. FEATURE EXTRACTION FOR EMG BASED PROSTHESES CONTROL

    Directory of Open Access Journals (Sweden)

    R. Aishwarya

    2013-01-01

Full Text Available The control of a prosthetic limb would be more effective if it is based on surface electromyogram (SEMG) signals from remnant muscles. The analysis of SEMG signals depends on a number of factors, such as amplitude as well as time- and frequency-domain properties. Time series analysis using an autoregressive (AR) model and mean frequency, which is tolerant to white Gaussian noise, are used as feature extraction techniques. The EMG histogram is used as another feature vector and was seen to give more distinct classification. The work was done with an SEMG dataset obtained from the NINAPRO database, a resource for the biorobotics community. Eight classes of hand movements (hand open, hand close, wrist extension, wrist flexion, pointing index, ulnar deviation, thumbs up, thumb opposite to little finger) are taken into consideration and feature vectors are extracted. The feature vectors can be given to an artificial neural network for further classification in controlling the prosthetic arm, which is not dealt with in this paper.
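The EMG histogram feature mentioned above can be sketched in a few lines: the window's samples are binned into a fixed number of amplitude bins, and the normalised counts form the feature vector. The bin count, the amplitude range, and the synthetic "rest" and "contraction" windows below are all illustrative assumptions, not the paper's settings.

```python
import numpy as np

def emg_histogram(signal, bins=9, limit=None):
    """EMG histogram feature: counts of samples falling into `bins` equal-width
    amplitude bins over [-limit, +limit] (limit defaults to the max |amplitude|)."""
    limit = limit or np.max(np.abs(signal))
    hist, _ = np.histogram(signal, bins=bins, range=(-limit, limit))
    return hist / len(signal)   # normalise so windows of any length compare

# Two toy windows: a low-activity "rest" window and a high-activity contraction.
rng = np.random.default_rng(3)
rest = 0.1 * rng.standard_normal(1000)
contraction = 1.0 * rng.standard_normal(1000)

f_rest = emg_histogram(rest, limit=3.0)
f_contraction = emg_histogram(contraction, limit=3.0)
```

At rest nearly all samples fall into the central bin, while a contraction spreads mass into the outer bins; this difference in shape is what makes the histogram a discriminative feature for movement classification.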

  14. BROAD PHONEME CLASSIFICATION USING SIGNAL BASED FEATURES

    Directory of Open Access Journals (Sweden)

    Deekshitha G

    2014-12-01

Full Text Available Speech is the most efficient and popular means of human communication. Speech is produced as a sequence of phonemes, and phoneme recognition is the first step performed by an automatic speech recognition system. State-of-the-art recognizers use mel-frequency cepstral coefficient (MFCC) features derived through short-time analysis, for which the recognition accuracy is limited. Instead, broad phoneme classification is achieved here using features derived directly from the speech at the signal level itself. Broad phoneme classes include vowels, nasals, fricatives, stops, approximants and silence. The features identified as useful for broad phoneme classification are the voiced/unvoiced decision, zero crossing rate (ZCR), short-time energy, most dominant frequency, energy in the most dominant frequency, spectral flatness measure and the first three formants. Features derived from short-time frames of training speech are used to train a multilayer feedforward neural network based classifier with manually marked class labels as output, and classification accuracy is then tested. Later this broad phoneme classifier is used for broad syllable structure prediction, which is useful for applications such as automatic speech recognition and automatic language identification.
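Two of the signal-level features listed above, zero crossing rate and short-time energy, can be sketched directly; together they already separate vowel-like frames (low ZCR, high energy) from fricative-like frames (high ZCR, low energy). The synthetic frames below are illustrative stand-ins for real speech.

```python
import numpy as np

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose sign changes."""
    signs = np.sign(frame)
    return np.count_nonzero(signs[1:] != signs[:-1]) / (len(frame) - 1)

def short_time_energy(frame):
    """Mean squared amplitude of the frame."""
    return float(np.mean(frame ** 2))

sr = 8000
t = np.arange(512) / sr                                  # a 64 ms frame
voiced_like = np.sin(2 * np.pi * 120 * t)                # low-frequency, periodic
fricative_like = np.sign(np.sin(2 * np.pi * 3000 * t)) * 0.1  # rapid sign changes, low energy

# Fricatives/unvoiced sounds tend to have high ZCR and low energy; vowels the opposite.
```

Stacking several such frame-level measurements into one vector per frame gives exactly the kind of input the abstract feeds to its feedforward neural network classifier.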

  15. Arabic writer identification based on diacritic's features

    Science.gov (United States)

    Maliki, Makki; Al-Jawad, Naseer; Jassim, Sabah A.

    2012-06-01

Natural languages like Arabic, Kurdish, Farsi (Persian), Urdu, and other similar languages have many features that make them different from languages with Latin script. One of these important features is diacritics. These diacritics are classified as: compulsory, like dots, which are used to identify/differentiate letters; and optional, like short vowels, which are used to emphasize consonants. Most indigenous and well-trained writers often do not use all or some of this second class of diacritics, and expert readers can infer their presence from the context of the written text. In this paper, we investigate the use of diacritic shapes and other characteristics as parameters of feature vectors for Arabic writer identification/verification. Segmentation techniques are used to extract the diacritics-based feature vectors from examples of Arabic handwritten text. The results of an evaluation test, carried out on an in-house database of 50 writers, are presented, and the viability of using diacritics for writer recognition is demonstrated.

  16. Face Recognition from Still Images to Video Sequences: A Local-Feature-Based Framework

    Directory of Open Access Journals (Sweden)

    Chen Shaokang

    2011-01-01

Full Text Available Although automatic face recognition has shown success for high-quality images under controlled conditions, for video-based recognition it is hard to attain similar levels of performance. We describe in this paper recent advances in a project being undertaken to trial and develop advanced surveillance systems for public safety. We propose a local facial feature based framework for both still-image and video-based face recognition. The evaluation is performed on a still image dataset (LFW) and a video sequence dataset (MOBIO) to compare 4 methods of operation on features: feature averaging (Avg-Feature), the Mutual Subspace Method (MSM), Manifold-to-Manifold Distance (MMD), and the Affine Hull Method (AHM), as well as 4 methods of operation on distances, on 3 different features. The experimental results show that the Multi-region Histogram (MRH) feature is more discriminative for face recognition compared to Local Binary Patterns (LBP) and raw pixel intensity. Under the limitation of a small number of images available per person, feature averaging is more reliable than MSM, MMD, and AHM, and is much faster. Thus, our proposed framework, averaging the MRH feature, is more suitable for CCTV surveillance systems with constraints on the number of images and the speed of processing.
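The best-performing configuration reported above, feature averaging, is simple enough to sketch: per-frame feature vectors from a video are collapsed into one vector by averaging, and two videos are compared by a similarity between their averaged vectors. The sketch below uses random stand-ins for MRH-like features and cosine similarity as the comparison; both are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def avg_feature(frame_features):
    """Avg-Feature: collapse per-frame feature vectors of a video into one vector."""
    return np.mean(frame_features, axis=0)

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(4)
identity_a = rng.random(64)          # stand-in for person A's "true" MRH-like feature
identity_b = rng.random(64)          # a different person

# Noisy per-frame observations: two clips of person A, one clip of person B.
clip1 = identity_a + 0.3 * rng.standard_normal((20, 64))
clip2 = identity_a + 0.3 * rng.standard_normal((20, 64))
clip_b = identity_b + 0.3 * rng.standard_normal((20, 64))

same = cosine_similarity(avg_feature(clip1), avg_feature(clip2))
diff = cosine_similarity(avg_feature(clip1), avg_feature(clip_b))
```

Averaging suppresses per-frame noise (its standard deviation shrinks with the square root of the number of frames), which is one plausible reason the paper finds Avg-Feature reliable when only a few frames per person are available, besides it being far cheaper than subspace or manifold comparisons.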

  17. Techniques in Facial Expression Recognition

    OpenAIRE

    Avinash Prakash Pandhare; Umesh Balkrishna Chavan

    2016-01-01

    Facial expression recognition is gaining widespread importance as the applications related to Human – Computer interactions are increasing. This paper mentions various techniques and approaches that have been used in the field of facial expression recognition. Facial expression recognition takes place in various stages and these stages have been implemented by various approaches. Viola and Jones for face detection, Gabor filters for feature extraction, SVM classifiers for classifi...

  18. Facial Expression Recognition Based on Gabor Wavelet Transform

    Institute of Scientific and Technical Information of China (English)

    王甫龙; 薄华

    2012-01-01

    In order to enable computers to recognize facial expressions better, a facial expression recognition method based on the Gabor wavelet transform is discussed. First, a static grey-level image containing expression information is pre-processed: the pure facial expression region is identified and normalized in size and grey scale. The two-dimensional Gabor transform is then used for feature extraction, and the fast PCA method described in this paper reduces the dimensionality of the Gabor features. In the resulting low-dimensional space, the Fisher criterion (FLD) is used to extract the features useful for classification. Finally, an SVM classifier sorts the facial expressions. Experimental results show that, compared with conventional methods, this method identifies expressions faster, meets real-time requirements, is robust, and achieves a higher recognition rate.
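The dimensionality-reduction step in the pipeline above (Gabor features → fast PCA → FLD → SVM) can be sketched with an SVD-based PCA in NumPy. This is a generic PCA projection, not the paper's specific fast-PCA algorithm, and the feature matrix below is random stand-in data rather than real Gabor responses.

```python
import numpy as np

def pca_reduce(X, k):
    """Project the rows of X onto the top-k principal components (SVD-based PCA)."""
    Xc = X - X.mean(axis=0)                      # centre the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k]                 # scores and the component basis

# Pretend these rows are high-dimensional Gabor feature vectors of face images.
rng = np.random.default_rng(5)
gabor_feats = rng.random((40, 200))
reduced, components = pca_reduce(gabor_feats, k=10)
# `reduced` (40x10) is what would then be passed to the Fisher/FLD step and the SVM.
```

SVD orders the components by explained variance, so the first column of `reduced` carries the most variance; cutting at `k` keeps the directions most likely to matter downstream while shrinking the classifier's input.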

  19. Down syndrome detection from facial photographs using machine learning techniques

    Science.gov (United States)

    Zhao, Qian; Rosenbaum, Kenneth; Sze, Raymond; Zand, Dina; Summar, Marshall; Linguraru, Marius George

    2013-02-01

Down syndrome is the most commonly occurring chromosomal condition; one in every 691 babies in the United States is born with it. Patients with Down syndrome have an increased risk for heart defects, respiratory and hearing problems, and early detection of the syndrome is fundamental for managing the disease. Clinically, facial appearance is an important indicator in diagnosing Down syndrome and it paves the way for computer-aided diagnosis based on facial image analysis. In this study, we propose a novel method to detect Down syndrome using photography for computer-assisted image-based facial dysmorphology. Geometric features based on facial anatomical landmarks, local texture features based on the Contourlet transform and local binary pattern are investigated to represent facial characteristics. Then a support vector machine classifier is used to discriminate normal and abnormal cases; accuracy, precision and recall are used to evaluate the method. The comparison among the geometric, local texture and combined features was performed using leave-one-out validation. Our method achieved 97.92% accuracy with high precision and recall for the combined features; the detection results were higher than using only geometric or texture features. The promising results indicate that our method has the potential for automated assessment for Down syndrome from simple, noninvasive imaging data.

  20. SVM Based Recognition of Facial Expressions Used In Indian Sign Language

    Directory of Open Access Journals (Sweden)

    Daleesha M Viswanathan

    2015-02-01

Full Text Available In sign language systems, facial expressions are an intrinsic component that usually accompanies hand gestures. The facial expressions modify or change the meaning of a hand gesture into a statement or a question, or improve the meaning and understanding of hand gestures. The scientific literature available in Indian Sign Language (ISL) on facial expression recognition is scanty. Contrary to American Sign Language (ASL), head movements are less conspicuous in ISL and the answers to questions such as yes or no are signed by hand. The purpose of this paper is to present our work in recognizing facial expression changes in isolated ISL sentences. Facial gesture patterns result in changes of skin texture by forming wrinkles and furrows. The Gabor wavelet method is well known for capturing subtle textural changes on surfaces. Therefore, a unique approach was developed to model facial expression changes with Gabor wavelet parameters chosen from partitioned face areas. These parameters were incorporated with a Euclidean distance measure. A multi-class SVM classifier was used in this recognition system to identify facial expressions in isolated facial expression sequences in ISL. An accuracy of 92.12% was achieved by our proposed system.

  1. Cosmetics as a Feature of the Extended Human Phenotype: Modulation of the Perception of Biologically Important Facial Signals

    OpenAIRE

    Nancy L Etcoff; Shannon Stock; Haley, Lauren E.; Vickery, Sarah A.; House, David M.

    2011-01-01

    Research on the perception of faces has focused on the size, shape, and configuration of inherited features or the biological phenotype, and largely ignored the effects of adornment, or the extended phenotype. Research on the evolution of signaling has shown that animals frequently alter visual features, including color cues, to attract, intimidate or protect themselves from conspecifics. Humans engage in conscious manipulation of visual signals using cultural tools in real time rather than g...

  2. Facial Transplantation.

    Science.gov (United States)

    Russo, Jack E; Genden, Eric M

    2016-08-01

    Reconstruction of severe facial deformities poses a unique surgical challenge: restoring the aesthetic form and function of the face. Facial transplantation has emerged over the last decade as an option for reconstruction of these defects in carefully selected patients. As the world experience with facial transplantation grows, debate remains regarding whether such a highly technical, resource-intensive procedure is warranted, all to improve quality of life but not necessarily prolong it. This article reviews the current state of facial transplantation with focus on the current controversies and challenges, with particular attention to issues of technique, immunology, and ethics. PMID:27400850

  3. Research on a Method of Facial Expression Recognition Based on Curvelet Transform and SVM

    Institute of Scientific and Technical Information of China (English)

    薄璐; 周菊香

    2013-01-01

    In this paper, the curvelet transform is applied to facial expression recognition, and a method combining the curvelet transform with an SVM is introduced. During expression feature extraction, principal component analysis is used to reduce the dimensionality of the coefficient features obtained from curvelet decomposition. Experiments conducted on the JAFFE and Cohn-Kanade expression databases show that the method can effectively identify facial expressions; compared with other methods, the proposed method achieves a clearly better average recognition rate.

  4. Fast Facial Detection by Depth Map Analysis

    OpenAIRE

    Ming-Yuan Shieh; Tsung-Min Hsieh

    2013-01-01

    In order to obtain correct facial recognition results, one needs to adopt appropriate facial detection techniques. Moreover, the effects of facial detection are usually affected by environmental conditions such as background, illumination, and the complexity of objects. In this paper, the proposed facial detection scheme, which is based on depth map analysis, aims to improve the effectiveness of facial detection and recognition under different environmental illumination conditions. The pro...

  5. Dominant Local Binary Pattern Based Face Feature Selection and Detection

    Directory of Open Access Journals (Sweden)

    Kavitha.T

    2010-04-01

    Full Text Available Face detection plays a major role in biometrics. Feature selection is a problem of formidable complexity. This paper proposes a novel approach to extracting face features for face detection. The LBP features can be extracted faster in a single scan through the raw image and lie in a lower-dimensional space, whilst still retaining facial information efficiently. The LBP features are robust to low-resolution images. The dominant local binary pattern (DLBP) is used to extract features accurately. A number of trainable methods are emerging in empirical practice due to their effectiveness. The proposed method is a trainable system for selecting face features from over-complete dictionaries of image measurements. After the feature selection procedure is completed, an SVM classifier is used for face detection. The main advantage of this proposal is that it is trained on a very small training set, while the classifier is used to increase the selection accuracy. This is advantageous not only for facilitating the data-gathering stage but, more importantly, for limiting the training time. The CBCL frontal faces dataset is used for training and validation.
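The single-scan LBP extraction mentioned above can be illustrated with the basic 3x3 operator; a minimal plain-Python sketch (the standard LBP, not the paper's DLBP variant):

```python
def lbp_code(img, r, c):
    """Basic 3x3 LBP: compare the 8 neighbours of pixel (r, c) to the
    centre; each neighbour >= centre contributes one bit."""
    center = img[r][c]
    # clockwise from the top-left neighbour
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= center:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """256-bin histogram of LBP codes over all interior pixels --
    the texture descriptor typically fed to a classifier."""
    hist = [0] * 256
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            hist[lbp_code(img, r, c)] += 1
    return hist

img = [[10, 20, 30],
       [40, 50, 60],
       [70, 80, 90]]
print(lbp_code(img, 1, 1))  # 120
```

A dominant-pattern scheme such as DLBP would then keep only the most frequently occurring codes of this histogram.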

  6. Using Kinect for real-time emotion recognition via facial expressions

    Institute of Scientific and Technical Information of China (English)

    Qi-rong MAO; Xin-yu PAN; Yong-zhao ZHAN; Xiang-jun SHEN

    2015-01-01

    Emotion recognition via facial expressions (ERFE) has attracted a great deal of interest with recent advances in artificial intelligence and pattern recognition. Most studies are based on 2D images, and their methods are usually computationally expensive. In this paper, we propose a real-time emotion recognition approach based on both 2D and 3D facial expression features captured by Kinect sensors. To capture the deformation of the 3D mesh during facial expression, we combine the features of animation units (AUs) and feature point positions (FPPs) tracked by Kinect. A fusion algorithm based on improved emotional profiles (IEPs) and maximum confidence is proposed to recognize emotions with these real-time facial expression features. Experiments on both an emotion dataset and a real-time video show the superior performance of our method.

  7. A hybrid features based image matching algorithm

    Science.gov (United States)

    Tu, Zhenbiao; Lin, Tao; Sun, Xiao; Dou, Hao; Ming, Delie

    2015-12-01

    In this paper, we present a novel image matching method to find the correspondences between two sets of image interest points. The proposed method is based on a revised third-order tensor graph matching method and introduces an energy function that takes four kinds of energy terms into account. The third-order tensor method can hardly deal with situations where the number of interest points is huge. To deal with this problem, we use a potential matching set and a vote mechanism to decompose the matching task into several sub-tasks. Moreover, the third-order tensor method sometimes finds only a local optimum solution. We therefore use a clustering method to divide the feature points into groups and sample feature triangles only between different groups, which makes it much easier for the algorithm to find the global optimum solution. Experiments on different image databases show that our method obtains correct matching results with relatively high efficiency.

  8. Facial Expression Recognition via Non-Negative Least-Squares Sparse Coding

    Directory of Open Access Journals (Sweden)

    Ying Chen

    2014-05-01

    Full Text Available Sparse coding is an active research subject in signal processing, computer vision, and pattern recognition. A novel method of facial expression recognition via non-negative least squares (NNLS) sparse coding is presented in this paper. The NNLS sparse coding is used to form a facial expression classifier. To evaluate the performance of the presented method, local binary patterns (LBP) and raw pixels are extracted for facial feature representation. Facial expression recognition experiments are conducted on the Japanese Female Facial Expression (JAFFE) database. Compared with other widely used methods such as linear support vector machines (SVM), the sparse representation-based classifier (SRC), the nearest subspace classifier (NSC), K-nearest neighbor (KNN) and radial basis function neural networks (RBFNN), the experimental results indicate that the presented NNLS method performs better on facial expression recognition tasks.
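The classifier described above codes a test sample over a dictionary of training samples under a non-negativity constraint, then assigns the class whose coefficients give the smallest reconstruction residual. A minimal sketch follows; the projected-gradient NNLS solver is a simple stand-in for whatever dedicated solver the paper uses, and the toy dictionary is invented for illustration:

```python
import numpy as np

def nnls_pg(A, b, iters=2000):
    """Non-negative least squares: min ||Ax - b||^2 s.t. x >= 0,
    via projected gradient descent with step 1/L."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1/Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = np.maximum(x - step * grad, 0.0)     # gradient step, then project onto x >= 0
    return x

def nnls_classify(D, labels, y):
    """Code y over dictionary D (columns = training samples) and assign
    the class whose coefficients reconstruct y with smallest residual."""
    x = nnls_pg(D, y)
    residuals = {}
    for cls in set(labels):
        mask = np.array([l == cls for l in labels])
        xc = np.where(mask, x, 0.0)              # zero out other classes' coefficients
        residuals[cls] = np.linalg.norm(D @ xc - y)
    return min(residuals, key=residuals.get)

# toy dictionary: one training sample per class
D = np.array([[1.0, 0.0],
              [0.0, 1.0]])
y = np.array([0.9, 0.1])
print(nnls_classify(D, [0, 1], y))  # 0
```

In the paper's setting the dictionary columns would be LBP or raw-pixel vectors of the training faces.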

  9. Predicting facial characteristics from complex polygenic variations

    DEFF Research Database (Denmark)

    Fagertun, Jens; Wolffhechel, Karin Marie Brandt; Pers, Tune;

    2015-01-01

    traits in a linear regression. We show in this proof-of-concept study for facial trait prediction from genome-wide SNP data that some facial characteristics can be modeled by genetic information: facial width, eyebrow width, distance between eyes, and features involving mouth shape are predicted with...

  10. Facial Schwannoma

    Directory of Open Access Journals (Sweden)

    Mohammadtaghi Khorsandi Ashtiani

    2005-06-01

    Full Text Available Background: Facial schwannoma is a rare tumor arising from any part of the nerve. Possible symptoms are partial or complete facial weakness, hearing loss, a visible mass in the ear, otorrhea, loss of taste, rarely pain, and sometimes no symptoms at all. Patients should undergo a complete neurotologic history and examination with documentation of facial and auditory function, especially CT scan or MRI. Surgery is the only treatment option, although the decision of when to remove a facial schwannoma in the presence of normal facial function is difficult. Case: A 19-year-old girl with all of the above symptoms on the right side, except loss of taste, was diagnosed with facial schwannoma after full examination and audiometric and radiological tests. She underwent surgery, and at follow-up facial function was mostly restored. Conclusion: The need for careful assessment of patients with Bell's palsy cannot be overemphasized. In spite of negative results, if any suspicion remains, total facial nerve exploration is necessary.

  11. An Improved Surface Simplification Method for Facial Expression Animation Based on Homogeneous Coordinate Transformation Matrix and Maximum Shape Operator

    OpenAIRE

    Juin-Ling Tseng

    2016-01-01

    Facial animation is one of the most popular 3D animation topics researched in recent years. However, when using facial animation, a 3D facial animation model has to be stored. This 3D facial animation model requires many triangles to accurately describe and demonstrate facial expression animation because the face often presents a number of different expressions. Consequently, the costs associated with facial animation have increased rapidly. In an effort to reduce storage costs, researchers h...

  12. Empirical mode decomposition-based facial pose estimation inside video sequences

    Science.gov (United States)

    Qing, Chunmei; Jiang, Jianmin; Yang, Zhijing

    2010-03-01

    We describe a new pose-estimation algorithm that integrates the strengths of both empirical mode decomposition (EMD) and mutual information. While mutual information is exploited to measure the similarity between facial images to estimate poses, EMD is exploited to decompose input facial images into a number of intrinsic mode function (IMF) components, which redistribute the effects of noise, expression changes, and illumination variations such that, when the input facial image is described by the selected IMF components, all the negative effects can be minimized. Extensive experiments were carried out in comparison to existing representative techniques, and the results show that the proposed algorithm achieves better pose-estimation performance with robustness to noise corruption, illumination variation, and facial expressions.
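The mutual-information similarity measure used above can be estimated from a joint intensity histogram of the two images; a minimal numpy illustration (the bin count and random test images are arbitrary assumptions, not the paper's settings):

```python
import numpy as np

def mutual_information(a, b, bins=8):
    """Mutual information (in nats) between two equally sized grayscale
    images, estimated from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of image b
    nz = pxy > 0                          # skip empty cells to avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
img = rng.random((32, 32))
noise = rng.random((32, 32))
# an image is maximally informative about itself, nearly independent of noise
print(mutual_information(img, img) > mutual_information(img, noise))  # True
```

For pose estimation, the candidate pose whose reference image maximizes this score against the input would be selected.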

  13. Cosmetics alter biologically-based factors of beauty: evidence from facial contrast.

    Science.gov (United States)

    Jones, Alex L; Russell, Richard; Ward, Robert

    2015-01-01

    The use of cosmetics by women seems to consistently increase their attractiveness. What factors of attractiveness do cosmetics alter to achieve this? Facial contrast is a known cue to sexual dimorphism and youth, and cosmetics exaggerate sexual dimorphisms in facial contrast. Here, we demonstrate that the luminance contrast pattern of the eyes and eyebrows is consistently sexually dimorphic across a large sample of faces, with females possessing lower brow contrasts than males, and greater eye contrast than males. Red-green and yellow-blue color contrasts were not found to differ consistently between the sexes. We also show that women use cosmetics not only to exaggerate sexual dimorphisms of brow and eye contrasts, but also to increase contrasts that decline with age. These findings refine the notion of facial contrast, and demonstrate how cosmetics can increase attractiveness by manipulating factors of beauty associated with facial contrast. PMID:25725411
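The luminance contrast of a feature such as the brow against the surrounding skin can be expressed as a Michelson-style ratio of mean luminances; a minimal sketch (the exact formula and the example values are assumptions, not taken from the paper):

```python
def luminance_contrast(feature_lum, surround_lum):
    """Michelson-style luminance contrast between a facial feature
    (e.g. brow or eye region) and the surrounding skin:
    (Ls - Lf) / (Ls + Lf), where inputs are mean luminances.
    Higher values mean a darker feature against lighter skin."""
    return (surround_lum - feature_lum) / (surround_lum + feature_lum)

# hypothetical mean luminances (0-255): a brow darker than surrounding skin
print(round(luminance_contrast(60.0, 180.0), 3))  # 0.5
```

Darkening the brow with cosmetics lowers `feature_lum` and so raises this contrast, which is the kind of manipulation the study measures.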

  14. Facial Expression Recognition Based on Improved LTP and Sparse Representation

    Institute of Scientific and Technical Information of China (English)

    李立赛; 应自炉

    2015-01-01

    In order to improve the facial expression recognition rate in practical applications, an improved local ternary patterns (ILTP) algorithm is proposed on the basis of the local ternary patterns (LTP) algorithm and combined with a sparse representation-based classifier (SRC) to form a new algorithm for facial expression recognition. Facial expression features are extracted by the ILTP algorithm and then used as the input of the SRC to complete facial expression classification. Experimental results on the JAFFE database show that the new algorithm achieves a facial expression recognition rate of 70.48% and is highly feasible.
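The LTP coding that ILTP builds on replaces LBP's binary comparison with a three-way comparison controlled by a tolerance t; a minimal sketch of standard LTP (not the paper's improved variant):

```python
def ltp_codes(img, r, c, t=5):
    """Local ternary pattern at pixel (r, c): each neighbour is coded
    +1 / 0 / -1 depending on whether it lies above, within, or below a
    tolerance band of width t around the centre, then split into the
    usual upper and lower binary patterns."""
    center = img[r][c]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    upper = lower = 0
    for bit, (dr, dc) in enumerate(offsets):
        v = img[r + dr][c + dc]
        if v >= center + t:
            upper |= 1 << bit        # ternary +1 -> bit in the upper pattern
        elif v <= center - t:
            lower |= 1 << bit        # ternary -1 -> bit in the lower pattern
    return upper, lower

img = [[48, 50, 52],
       [58, 50, 42],
       [50, 55, 45]]
print(ltp_codes(img, 1, 1, t=5))  # (160, 24)
```

Histograms of the upper and lower patterns, concatenated over image blocks, form the feature vector handed to the SRC.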

  15. A New Method of Facial Expression Recognition Based on SPE Plus SVM

    Science.gov (United States)

    Ying, Zilu; Huang, Mingwei; Wang, Zhen; Wang, Zhewei

    A novel method of facial expression recognition (FER) is presented, which uses stochastic proximity embedding (SPE) for data dimension reduction and a support vector machine (SVM) for expression classification. The proposed algorithm is applied to the Japanese Female Facial Expression (JAFFE) database for FER, and better performance is obtained compared with traditional algorithms such as PCA and LDA. The results further prove the effectiveness of the proposed algorithm.

  16. Facial expression recognition based on wavelet transformed MCBP and WEF

    Institute of Scientific and Technical Information of China (English)

    胡敏; 陈杏; 王晓华; 许良凤; 李瑞

    2012-01-01

    The existing multi-scale centralized binary patterns (MCBP) obtain multi-scale features by changing the radius of the CBP operator on the original image; as the operator radius increases, the computational cost grows rapidly. To deal with this problem, a facial expression recognition method based on wavelet-transformed MCBP (WMCBP) is developed in this paper: CBP transforms are applied to the feature regions of the two low-frequency images produced by wavelet decomposition, yielding a sequence of multi-level local CBP histogram features. The method not only obtains more accurate multi-scale information but also greatly reduces the computational complexity. Furthermore, weighted wavelet energy features (WWEF) are introduced to further improve the expression recognition rate. Experiments on the JAFFE facial expression database show that these two types of features are complementary to some extent, and fusing them enhances the performance of WMCBP in facial expression recognition without noticeably increasing the computation.

  17. Facial biometry by stimulating salient singularity masks

    OpenAIRE

    Lefebvre, Grégoire; Garcia, Christophe

    2007-01-01

    We present a novel approach for face recognition based on salient singularity descriptors. The automatic feature extraction is performed thanks to a salient point detector, and the singularity information selection is performed by a SOM region-based structuring. The spatial singularity distribution is preserved in order to activate specific neuron maps and the local salient signature stimuli reveals the individual identity. This proposed method appears to be particularly robust to facial expr...

  18. Multispectral image fusion based on fractal features

    Science.gov (United States)

    Tian, Jie; Chen, Jie; Zhang, Chunhua

    2004-01-01

    Imagery sensors have become an indispensable part of detection and recognition systems and are widely used in surveillance, navigation, control, and guidance. However, different imagery sensors rely on different imaging mechanisms, work within different ranges of the spectrum, perform different functions, and have different operating requirements. It is therefore impractical to accomplish detection or recognition with a single imagery sensor across different circumstances, backgrounds, and targets. Fortunately, multi-sensor image fusion emerged as an important route to solving this problem, and image fusion has become one of the main technical routines used to detect and recognize objects in images. Since loss of information is unavoidable during the fusion process, a central concern of image fusion is how to preserve useful information to the utmost; that is, fusion schemes should be designed to avoid losing useful information and to preserve the features helpful to detection. In consideration of these issues, and of the fact that most detection problems amount to distinguishing man-made objects from natural background, this paper proposes a fractal-based multi-spectral fusion algorithm aimed at recognizing battlefield targets against complicated backgrounds. According to this algorithm, source images are first orthogonally decomposed by wavelet transform, and then fractal-based detection is applied to each decomposed image. At this step, natural background and man-made targets are distinguished using fractal models that imitate natural objects well. Special fusion operators are employed when fusing areas that contain man-made targets, so that useful information is preserved and target features are accentuated. 
The final fused image is reconstructed from the

  19. Automatic Facial Expression Recognition and Operator Functional State

    Science.gov (United States)

    Blanson, Nina

    2011-01-01

    The prevalence of human error in safety-critical occupations remains a major challenge to mission success despite increasing automation in control processes. Although various methods have been proposed to prevent incidences of human error, none of these have been developed to employ the detection and regulation of Operator Functional State (OFS), or the optimal condition of the operator while performing a task, in work environments due to drawbacks such as obtrusiveness and impracticality. A video-based system with the ability to infer an individual's emotional state from facial feature patterning mitigates some of the problems associated with other methods of detecting OFS, like obtrusiveness and impracticality in integration with the mission environment. This paper explores the utility of facial expression recognition as a technology for inferring OFS by first expounding on the intricacies of OFS and the scientific background behind emotion and its relationship with an individual's state. Then, descriptions of the feedback loop and the emotion protocols proposed for the facial recognition program are explained. A basic version of the facial expression recognition program uses Haar classifiers and OpenCV libraries to automatically locate key facial landmarks during a live video stream. Various methods of creating facial expression recognition software are reviewed to guide future extensions of the program. The paper concludes with an examination of the steps necessary in the research of emotion and recommendations for the creation of an automatic facial expression recognition program for use in real-time, safety-critical missions.
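The Haar classifiers mentioned above rest on Haar-like rectangle features computed in constant time from an integral image; a minimal library-free sketch of that underlying mechanism (an illustration of the technique, not the program's OpenCV code):

```python
def integral_image(img):
    """Summed-area table: ii[r][c] = sum of img over rows < r, cols < c."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for r in range(h):
        row_sum = 0
        for c in range(w):
            row_sum += img[r][c]
            ii[r + 1][c + 1] = ii[r][c + 1] + row_sum
    return ii

def rect_sum(ii, r, c, h, w):
    """Sum of the h x w rectangle with top-left corner (r, c), in O(1)."""
    return ii[r + h][c + w] - ii[r][c + w] - ii[r + h][c] + ii[r][c]

def haar_two_rect(ii, r, c, h, w):
    """Two-rectangle Haar-like feature: left half minus right half
    (responds to vertical edges such as the side of the nose)."""
    half = w // 2
    return rect_sum(ii, r, c, h, half) - rect_sum(ii, r, c + half, h, half)

img = [[1, 1, 9, 9],
       [1, 1, 9, 9]]
ii = integral_image(img)
print(haar_two_rect(ii, 0, 0, 2, 4))  # 4 - 36 = -32
```

A cascade classifier thresholds many such features at successive stages to accept or reject each candidate face window.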

  20. Automatic Facial Expression Recognition and Operator Functional State

    Science.gov (United States)

    Blanson, Nina

    2012-01-01

    The prevalence of human error in safety-critical occupations remains a major challenge to mission success despite increasing automation in control processes. Although various methods have been proposed to prevent incidences of human error, none of these have been developed to employ the detection and regulation of Operator Functional State (OFS), or the optimal condition of the operator while performing a task, in work environments due to drawbacks such as obtrusiveness and impracticality. A video-based system with the ability to infer an individual's emotional state from facial feature patterning mitigates some of the problems associated with other methods of detecting OFS, like obtrusiveness and impracticality in integration with the mission environment. This paper explores the utility of facial expression recognition as a technology for inferring OFS by first expounding on the intricacies of OFS and the scientific background behind emotion and its relationship with an individual's state. Then, descriptions of the feedback loop and the emotion protocols proposed for the facial recognition program are explained. A basic version of the facial expression recognition program uses Haar classifiers and OpenCV libraries to automatically locate key facial landmarks during a live video stream. Various methods of creating facial expression recognition software are reviewed to guide future extensions of the program. The paper concludes with an examination of the steps necessary in the research of emotion and recommendations for the creation of an automatic facial expression recognition program for use in real-time, safety-critical missions

  1. Facial Expressions with Some Mixed Expressions Recognition Using Neural Networks

    Directory of Open Access Journals (Sweden)

    Dr.R.Parthasarathi

    2011-01-01

    Full Text Available Facial feature extraction is the essential step of facial expression recognition, and automatic facial expression evaluation has wide applicability. The facial feature vectors important for expression analysis are analyzed. The extracted feature vectors are used as input training vectors for the NN, while PCA is used for dimensionality reduction. The experimental results show that the method is effective both for dimension reduction and for good recognition performance in comparison with other proposed methods.

  2. Facial Expressions with Some Mixed Expressions Recognition Using Neural Networks

    OpenAIRE

    Dr.R.Parthasarathi; V.Lokeswar Reddy,; K.Vishnuthej,; G.Vishnu Vandan

    2011-01-01

    Facial feature extraction is the essential step of facial expression recognition, and automatic facial expression evaluation has wide applicability. The facial feature vectors important for expression analysis are analyzed. The extracted feature vectors are used as input training vectors for the NN, while PCA is used for dimensionality reduction. The method is effective for both dimension reduction and good recognition performance in comparison with other p...

  3. 3D animation of facial plastic surgery based on computer graphics

    Science.gov (United States)

    Zhang, Zonghua; Zhao, Yan

    2013-12-01

    More and more people, especially women, desire to be more beautiful than ever. To some extent this has become possible: plastic surgery of the face was feasible by the early 20th century and even earlier, when doctors were simply treating facial war injuries. However, the post-operative result is not always satisfying, since patients cannot see any animation of the outcome beforehand. In this paper, by combining facial plastic surgery and computer graphics, a novel method of simulating the post-operative appearance is presented, demonstrating the modified face from different viewpoints. The 3D human face data are obtained using 3D fringe pattern imaging systems and CT imaging systems and then converted into the STL (STereo Lithography) file format. An STL file is made up of small 3D triangular primitives, and the triangular mesh can be reconstructed using a hash function. The frontmost triangular meshes in depth are selected by a ray-casting technique. Mesh deformation during simulation is based on this front triangular mesh and deforms the region of interest rather than individual control points. Experiments on a face model show that the proposed 3D facial plastic surgery animation can effectively demonstrate the simulated post-operative appearance.

  4. Facial Data Field

    Institute of Scientific and Technical Information of China (English)

    WANG Shuliang; YUAN Hanning; CAO Baohua; WANG Dakui

    2015-01-01

    Expressional face recognition is a challenge in computer vision for complex expressions. The facial data field is proposed to recognize expressions. Fundamentals of the methodology of face recognition upon data fields are presented, followed by the technical algorithms: normalizing faces, generating the facial data field, extracting feature points in partitions, assigning weights, and recognizing faces. A case study with the JAFFE database verifies the approach. Results indicate that the proposed method is suitable and effective for expressional face recognition, with a whole-average recognition rate of up to 94.3%. In conclusion, the data field is considered a valuable alternative for pattern recognition.

  5. FACIAL GEOMETRIC BEAUTY SCORE BASED ON SEMI-SUPERVISED REGRESSION LEARNING

    Institute of Scientific and Technical Information of China (English)

    戴礼青; 金忠; 孙明明

    2015-01-01

    Based on the rapid development of facial aesthetics, we mainly study the definition of facial geometric features, the normalization of geometric features, and the contribution of geometric features to judging whether a face is beautiful. First, we define a facial geometric beauty score function; then, combining manifold learning with semi-supervised learning, we use semi-supervised regression on manifolds to learn the geometric beauty scores of faces. In order to highlight the geometric features, we also verify the relationship between facial expression and geometric beauty scores. Compared with K-nearest neighbor (KNN), support vector machine (SVM), and C4.5 decision tree classification methods, the validity and feasibility of the proposed method are demonstrated experimentally.

  6. Fingerprint Feature Extraction Based on Macroscopic Curvature

    Institute of Scientific and Technical Information of China (English)

    Zhang Xiong; He Gui-ming; Zhang Yun

    2003-01-01

    In the Automatic Fingerprint Identification System (AFIS), extracting fingerprint features is very important. The local curvature of fingerprint ridges is irregular, which makes it difficult to effectively extract curve features that describe a fingerprint. This article proposes a novel algorithm that uses information from a few nearby fingerprint ridges to extract a new characteristic describing the curvature feature of the fingerprint. Experimental results show the algorithm is feasible, and the characteristics it extracts clearly reveal the inner macroscopic curve properties of the fingerprint. The results also show that this kind of characteristic is robust to noise and pollution.

  7. Fingerprint Feature Extraction Based on Macroscopic Curvature

    Institute of Scientific and Technical Information of China (English)

    Zhang Xiong; He Gui-Ming; et al.

    2003-01-01

    In the Automatic Fingerprint Identification System (AFIS), extracting fingerprint features is very important. The local curvature of fingerprint ridges is irregular, which makes it difficult to effectively extract curve features that describe a fingerprint. This article proposes a novel algorithm that uses information from a few nearby fingerprint ridges to extract a new characteristic describing the curvature feature of the fingerprint. Experimental results show the algorithm is feasible, and the characteristics it extracts clearly reveal the inner macroscopic curve properties of the fingerprint. The results also show that this kind of characteristic is robust to noise and pollution.

  8. Facial trauma

    Science.gov (United States)

    Maxillofacial injury; Midface trauma; Facial injury; LeFort injuries ... Kellman RM. Maxillofacial trauma. In: Flint PW, Haughey BH, Lund LJ, et al, eds. Cummings Otolaryngology: Head & Neck Surgery . 6th ed. Philadelphia, PA: ...

  9. Facial trauma

    Science.gov (United States)

    Kellman RM. Maxillofacial trauma. In: Flint PW, Haughey BH, Lund LJ, et al, eds. Cummings Otolaryngology: Head & Neck Surgery . 6th ed. Philadelphia, PA: Elsevier Saunders; 2015:chap 23. Mayersak RJ. Facial trauma. In: Marx JA, Hockberger RS, ...

  10. Clustering Based Feature Learning on Variable Stars

    CERN Document Server

    Mackenzie, Cristóbal; Protopapas, Pavlos

    2016-01-01

    The success of automatic classification of variable stars strongly depends on the lightcurve representation. Usually, lightcurves are represented as a vector of many statistical descriptors designed by astronomers, called features. These descriptors commonly demand significant computational power to calculate, require substantial research effort to develop, and do not guarantee good performance on the final classification task. Today, lightcurve representation is not entirely automatic; algorithms that extract lightcurve features are designed by humans and must be manually tuned for every survey. The vast amounts of data that will be generated in future surveys like LSST mean astronomers must develop analysis pipelines that are both scalable and automated. Recently, substantial efforts have been made in the machine learning community to develop methods that dispense with expert-designed and manually tuned features in favor of features that are automatically learned from data. In this work we present what is, to our ...

  11. Innovations in individual feature history management - The significance of feature-based temporal model

    Science.gov (United States)

    Choi, J.; Seong, J.C.; Kim, B.; Usery, E.L.

    2008-01-01

    A feature relies on three dimensions (space, theme, and time) for its representation. Even though spatiotemporal models have been proposed, they have principally focused on the spatial changes of a feature. In this paper, a feature-based temporal model is proposed to represent changes of both space and theme independently. The proposed model modifies the ISO's temporal schema and adds a new explicit temporal relationship structure that stores the temporal topological relationship together with the ISO's temporal primitives of a feature, in order to keep track of feature history. The explicit temporal relationship can enhance query performance on feature history by removing topological comparison during query processing. Further, a prototype system has been developed to test the proposed feature-based temporal model by querying land parcel history in Athens, Georgia. The result of temporal queries on individual feature history shows the efficiency of the explicit temporal relationship structure. © Springer Science+Business Media, LLC 2007.

  12. A Framework for Real-Time Face and Facial Feature Tracking using Optical Flow Pre-estimation and Template Tracking

    CERN Document Server

    Gast, E R

    2011-01-01

    This work presents a framework for tracking head movements and capturing the movements of the mouth and both eyebrows in real time. We present a head tracker that combines an optical flow tracker and a template-based tracker. The estimate from the optical flow head tracker is used as the starting point for the template tracker, which fine-tunes the head estimate. This approach, together with re-updating the optical flow points, prevents the head tracker from drifting; combined with our switching scheme, it makes the tracker very robust against fast movement and motion blur. We also propose a way to reduce the influence of partial occlusion of the head: in both the optical flow and the template-based tracker we identify and exclude occluded points.
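The template tracker's fine-tuning step amounts to searching for the offset that maximizes similarity between a stored template and candidate patches; a minimal 1-D sketch using normalized cross-correlation (an assumed similarity measure, since the abstract does not name one):

```python
import math

def ncc(patch, template):
    """Normalised cross-correlation between two equally sized patches
    (flat lists); 1.0 is a perfect linear match, 0.0 is returned for
    degenerate (constant) patches."""
    n = len(patch)
    mp = sum(patch) / n
    mt = sum(template) / n
    num = sum((p - mp) * (t - mt) for p, t in zip(patch, template))
    dp = math.sqrt(sum((p - mp) ** 2 for p in patch))
    dt = math.sqrt(sum((t - mt) ** 2 for t in template))
    if dp == 0 or dt == 0:
        return 0.0
    return num / (dp * dt)

def best_match(signal, template):
    """Slide the template along a 1-D signal and return the offset with
    the highest NCC -- the 1-D core of a template-matching search."""
    w = len(template)
    scores = [ncc(signal[i:i + w], template)
              for i in range(len(signal) - w + 1)]
    return max(range(len(scores)), key=scores.__getitem__)

signal = [0, 0, 1, 3, 1, 0, 0, 0]
template = [1, 3, 1]
print(best_match(signal, template))  # 2
```

In 2-D tracking the same search runs over a window of candidate positions around the optical-flow estimate.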

  13. [Plant Spectral Discrimination Based on Phenological Features].

    Science.gov (United States)

    Zhang, Lei; Zhao, Jian-long; Jia, Kun; Li, Xiao-song

    2015-10-01

    Spectral analysis plays a significant role in plant characteristic identification and mechanism recognition. Many papers have been published on absorption features in the spectra of chlorophyll and moisture, the vegetation red-edge effect, spectral profile feature extraction, spectral profile conversion, and the impact of leaf structure and chemical composition on spectra. However, fewer studies have addressed the spectral changes caused by seasonal changes in plant life form, chlorophyll, and leaf area index. This paper reports spectral observations of 11 plants of various life forms, leaf structures and sizes, and phenological characteristics: deciduous forest with broad vertical leaves, evergreen needleleaf forest, deciduous needleleaf forest, deciduous forest with broad flat leaves, large-leaf high shrub, small-leaf high shrub, deciduous forest with small broad leaves, short shrub, meadow, steppe, and grass. Field spectral data were collected with an SVC-HR768 instrument (Spectra Vista, USA) covering 350-2500 nm at a spectral resolution of 1-4 nm. NDVI and the maximum spectral absorption depths in the green and red bands were measured after continuum-removal processing; the mean, amplitude, and gradient of these features over the seasonal-change profile were analyzed, and the separability of plant spectral features in the growth period and the maturation period was compared. The paper presents a method for calculating the separability of vegetation spectra that considers feature-space distances, and this index is applied to vegetation discrimination. The results show that spectral features during the plant growth period are easier to distinguish than those during the maturation period: comparing the same features, plant separability in the growth period is 3 points higher than in the maturation period. The overall separability of vegetation
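
The two spectral features named in the abstract, NDVI and continuum-removed absorption depth, can be sketched as below. The reflectance values, band positions, and absorption window are illustrative placeholders, not the paper's data.

```python
# Hedged sketch of two features from the abstract: NDVI and the maximum
# absorption depth after continuum removal. All numbers are illustrative.

def ndvi(red, nir):
    """Normalized Difference Vegetation Index from red/NIR reflectance."""
    return (nir - red) / (nir + red)

def absorption_depth(wavelengths, reflectance):
    """Maximum band depth after continuum removal: the continuum is the
    straight line joining the window's endpoints; depth = 1 - R/Rc."""
    w0, w1 = wavelengths[0], wavelengths[-1]
    r0, r1 = reflectance[0], reflectance[-1]
    depths = []
    for w, r in zip(wavelengths, reflectance):
        rc = r0 + (r1 - r0) * (w - w0) / (w1 - w0)  # continuum line value
        depths.append(1.0 - r / rc)
    return max(depths)

# Illustrative red-band absorption window (wavelengths in nm)
wl = [550, 600, 650, 680, 700, 750]
refl = [0.12, 0.09, 0.06, 0.04, 0.10, 0.45]
print(round(ndvi(0.05, 0.45), 3))   # high NDVI typical of green vegetation
print(round(absorption_depth(wl, refl), 3))
```

The depth is zero at the window endpoints by construction, so the maximum picks out the strongest absorption inside the window.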

  14. Heartbeat Signal from Facial Video for Biometric Recognition

    DEFF Research Database (Denmark)

    Haque, Mohammad Ahsanul; Nasrollahi, Kamal; Moeslund, Thomas B.

    2015-01-01

    , and blood volume pressure provide the possibility of extracting heartbeat signal from facial video instead of using obtrusive ECG or PCG sensors in the body. This paper proposes the Heartbeat Signal from Facial Video (HSFV) as a new biometric trait for human identity recognition, for the first time to...... the best of our knowledge. Feature extraction from the HSFV is accomplished by employing Radon transform on a waterfall model of the replicated HSFV. The pairwise Minkowski distances are obtained from the Radon image as the features. The authentication is accomplished by a decision tree based...

  15. Perceived sexual orientation based on vocal and facial stimuli is linked to self-rated sexual orientation in Czech men.

    Directory of Open Access Journals (Sweden)

    Jaroslava Varella Valentova

    Full Text Available Previous research has shown that lay people can accurately assess male sexual orientation based on limited information, such as face, voice, or behavioral display. Gender-atypical traits are thought to serve as cues to sexual orientation. We investigated the presumed mechanisms of sexual orientation attribution using a standardized set of facial and vocal stimuli of Czech men. Both types of stimuli were rated for sexual orientation and masculinity-femininity by non-student heterosexual women and homosexual men. Our data showed that by evaluating vocal stimuli both women and homosexual men can judge sexual orientation of the target men in agreement with their self-reported sexual orientation. Nevertheless, only homosexual men accurately attributed sexual orientation of the two groups from facial images. Interestingly, facial images of homosexual targets were rated as more masculine than heterosexual targets. This indicates that attributions of sexual orientation are affected by stereotyped association between femininity and male homosexuality; however, reliance on such cues can lead to frequent misjudgments as was the case with the female raters. Although our study is based on a community sample recruited in a non-English speaking country, the results are generally consistent with the previous research and thus corroborate the validity of sexual orientation attributions.

  16. A genome-wide association study identifies five loci influencing facial morphology in Europeans.

    Directory of Open Access Journals (Sweden)

    Fan Liu

    2012-09-01

    Full Text Available Inter-individual variation in facial shape is one of the most noticeable phenotypes in humans, and it is clearly under genetic regulation; however, almost nothing is known about the genetic basis of normal human facial morphology. We therefore conducted a genome-wide association study for facial shape phenotypes in multiple discovery and replication cohorts, considering almost ten thousand individuals of European descent from several countries. Phenotyping of facial shape features was based on landmark data obtained from three-dimensional head magnetic resonance images (MRIs and two-dimensional portrait images. We identified five independent genetic loci associated with different facial phenotypes, suggesting the involvement of five candidate genes--PRDM16, PAX3, TP63, C5orf50, and COL17A1--in the determination of the human face. Three of them have been implicated previously in vertebrate craniofacial development and disease, and the remaining two genes potentially represent novel players in the molecular networks governing facial development. Our finding at PAX3 influencing the position of the nasion replicates a recent GWAS of facial features. In addition to the reported GWA findings, we established links between common DNA variants previously associated with NSCL/P at 2p21, 8q24, 13q31, and 17q22 and normal facial-shape variations based on a candidate gene approach. Overall our study implies that DNA variants in genes essential for craniofacial development contribute with relatively small effect size to the spectrum of normal variation in human facial morphology. This observation has important consequences for future studies aiming to identify more genes involved in the human facial morphology, as well as for potential applications of DNA prediction of facial shape such as in future forensic applications.

  17. Melanoma detection algorithm based on feature fusion.

    Science.gov (United States)

    Barata, Catarina; Emre Celebi, M; Marques, Jorge S

    2015-08-01

    A Computer-Aided Diagnosis (CAD) system for melanoma diagnosis usually makes use of different types of features to characterize the lesions. The features are often combined into a single vector that can belong to a high-dimensional space (early fusion). However, it is not clear whether this is the optimal strategy, and work in other fields has shown that early fusion has some limitations. In this work, we address this issue and investigate the best approach to combining different features, comparing early and late fusion. Experiments carried out on the PH2 (single-source) and EDRA (multi-source) datasets show that late fusion performs better, leading to classification scores of Sensitivity = 98% and Specificity = 90% (PH2) and Sensitivity = 83% and Specificity = 76% (EDRA). PMID:26736837
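
The early-vs-late fusion distinction the abstract compares can be illustrated with a toy nearest-centroid classifier; the two "feature types" stand in for e.g. color and texture descriptors, and the synthetic data and majority-vote combiner are assumptions, not the paper's actual pipeline.

```python
# Toy illustration of early fusion (concatenate feature vectors) versus
# late fusion (classify per feature type, then combine decisions).

def centroid(rows):
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def nearest_centroid(train, labels, x):
    cents = {c: centroid([t for t, l in zip(train, labels) if l == c])
             for c in set(labels)}
    return min(cents, key=lambda c: dist(cents[c], x))

color = [[0.1, 0.2], [0.2, 0.1], [0.8, 0.9], [0.9, 0.8]]
texture = [[1.0], [0.9], [0.1], [0.2]]
y = [0, 0, 1, 1]

# Early fusion: one concatenated vector per sample.
early_train = [c + t for c, t in zip(color, texture)]
pred_early = nearest_centroid(early_train, y, [0.15, 0.15, 0.95])

# Late fusion: one decision per feature type, combined by majority vote
# (real systems often fuse posterior scores instead).
votes = [nearest_centroid(color, y, [0.15, 0.15]),
         nearest_centroid(texture, y, [0.95])]
pred_late = max(set(votes), key=votes.count)
print(pred_early, pred_late)
```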

  18. A New Color Facial Identification Feature Extraction Method and Automatic Identification

    Institute of Scientific and Technical Information of China (English)

    高燕; 明曙军; 刘永俊

    2011-01-01

    Face recognition has achieved some success, and its algorithms are constantly being improved. Noting that traditional linear analysis methods all rely on an average sample, this paper proposes face recognition based on intermediate samples. This method effectively removes the influence of interfering samples on the average sample. Combining this with color face recognition, the paper proposes color facial identification feature extraction and automatic identification based on intermediate samples. Finally, extensive experiments on the internationally used AR standard color face database verify the effectiveness of the proposed method.

  19. Analytical Features: A Knowledge-Based Approach to Audio Feature Generation

    Directory of Open Access Journals (Sweden)

    Pachet François

    2009-01-01

    Full Text Available We present a feature generation system designed to create audio features for supervised classification tasks. The main contribution to feature generation studies is the notion of analytical features (AFs, a construct designed to support the representation of knowledge about audio signal processing. We describe the most important aspects of AFs, in particular their dimensional type system, on which are based pattern-based random generators, heuristics, and rewriting rules. We show how AFs generalize or improve previous approaches used in feature generation. We report on several projects using AFs for difficult audio classification tasks, demonstrating their advantage over standard audio features. More generally, we propose analytical features as a paradigm to bring raw signals into the world of symbolic computation.

  20. Face Puzzle – Two new video-based tasks for measuring explicit and implicit aspects of facial emotion recognition

    Directory of Open Access Journals (Sweden)

    Dorit eKliemann

    2013-06-01

    Full Text Available Recognizing others' emotional states is crucial for effective social interaction. While most facial emotion recognition tasks use explicit prompts that trigger consciously controlled processing, emotional faces are almost exclusively processed implicitly in real life. Recent attempts in social cognition suggest a dual process perspective, whereby explicit and implicit processes largely operate independently. However, due to differences in methodology the direct comparison of implicit and explicit social cognition has remained a challenge. Here, we introduce a new tool to comparably measure implicit and explicit processing aspects comprising basic and complex emotions in facial expressions. We developed two video-based tasks with similar answer formats to assess performance in respective facial emotion recognition processes: Face Puzzle, implicit and explicit. To assess the tasks' sensitivity to atypical social cognition and to infer interrelationship patterns between explicit and implicit processes in typical and atypical development, we included healthy adults (NT, n = 24) and adults with autism spectrum disorder (ASD, n = 24). Item analyses yielded good reliability of the new tasks. Group-specific results indicated sensitivity to subtle social impairments in high-functioning ASD. Correlation analyses with established implicit and explicit socio-cognitive measures were further in favor of the tasks' external validity. Between group comparisons provide first hints of differential relations between implicit and explicit aspects of facial emotion recognition processes in healthy compared to ASD participants. In addition, an increased magnitude of between group differences in the implicit task was found for a speed-accuracy composite measure. The new Face Puzzle tool thus provides two new tasks to separately assess explicit and implicit social functioning, for instance, to measure subtle impairments as well as potential improvements due to social

  1. Face puzzle-two new video-based tasks for measuring explicit and implicit aspects of facial emotion recognition.

    Science.gov (United States)

    Kliemann, Dorit; Rosenblau, Gabriela; Bölte, Sven; Heekeren, Hauke R; Dziobek, Isabel

    2013-01-01

    Recognizing others' emotional states is crucial for effective social interaction. While most facial emotion recognition tasks use explicit prompts that trigger consciously controlled processing, emotional faces are almost exclusively processed implicitly in real life. Recent attempts in social cognition suggest a dual process perspective, whereby explicit and implicit processes largely operate independently. However, due to differences in methodology the direct comparison of implicit and explicit social cognition has remained a challenge. Here, we introduce a new tool to comparably measure implicit and explicit processing aspects comprising basic and complex emotions in facial expressions. We developed two video-based tasks with similar answer formats to assess performance in respective facial emotion recognition processes: Face Puzzle, implicit and explicit. To assess the tasks' sensitivity to atypical social cognition and to infer interrelationship patterns between explicit and implicit processes in typical and atypical development, we included healthy adults (NT, n = 24) and adults with autism spectrum disorder (ASD, n = 24). Item analyses yielded good reliability of the new tasks. Group-specific results indicated sensitivity to subtle social impairments in high-functioning ASD. Correlation analyses with established implicit and explicit socio-cognitive measures were further in favor of the tasks' external validity. Between group comparisons provide first hints of differential relations between implicit and explicit aspects of facial emotion recognition processes in healthy compared to ASD participants. In addition, an increased magnitude of between group differences in the implicit task was found for a speed-accuracy composite measure. The new Face Puzzle tool thus provides two new tasks to separately assess explicit and implicit social functioning, for instance, to measure subtle impairments as well as potential improvements due to social cognitive

  2. Feature-based Ontology Mapping from an Information Receivers’ Viewpoint

    DEFF Research Database (Denmark)

    Glückstad, Fumiko Kano; Mørup, Morten

    This paper compares four algorithms for computing feature-based similarities between concepts respectively possessing a distinctive set of features. The eventual purpose of comparing these feature-based similarity algorithms is to identify a candidate term in a Target Language (TL) that can optim...

  3. Facial Sports Injuries

    Science.gov (United States)

    ... Facial sports injuries ... should receive immediate medical attention. Prevention of facial sports injuries ... the best way to treat facial sports injuries ...

  4. Content-Based Image Retrieval Using Multiple Features

    OpenAIRE

    Zhang, Chi; Huang, Lei

    2014-01-01

    Algorithms for Content-Based Image Retrieval (CBIR) have been well developed along with the explosion of information. These algorithms are mainly distinguished by the features used to describe the image content. In this paper, algorithms based on color and texture features for image retrieval are presented. A Color Coherence Vector based image retrieval algorithm was also attempted during implementation, but the best result is generated from the algorithms tha...
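
A color-feature retrieval step of the kind the abstract describes can be sketched with histogram intersection; the abstract does not specify the exact similarity measure, so histogram intersection, the tiny quantized "images", and the image names here are illustrative stand-ins.

```python
# Toy color-histogram retrieval: images are lists of quantized color-bin
# indices; similarity is histogram intersection. All data are synthetic.

def histogram(pixels, bins=4):
    """Normalized color histogram over quantized bin indices."""
    h = [0] * bins
    for p in pixels:
        h[p] += 1
    total = len(pixels)
    return [c / total for c in h]

def intersection(h1, h2):
    """Histogram intersection similarity in [0, 1]."""
    return sum(min(a, b) for a, b in zip(h1, h2))

query = histogram([0, 0, 1, 2, 2, 2, 3, 3])
candidates = {"img_a": histogram([0, 0, 1, 1, 2, 2, 3, 3]),
              "img_b": histogram([3, 3, 3, 3, 3, 3, 0, 1])}
best = max(candidates, key=lambda k: intersection(query, candidates[k]))
print(best)
```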

  5. Multiresolution Feature Based Fractional Power Polynomial Kernel Fisher Discriminant Model for Face Recognition

    Directory of Open Access Journals (Sweden)

    Dattatray V. Jadhav

    2008-05-01

    Full Text Available This paper presents a technique for face recognition which uses the wavelet transform to derive desirable facial features. Three-level decompositions are used to form pyramidal multiresolution features to cope with variations due to illumination and facial expression changes. The fractional power polynomial kernel maps the input data into an implicit feature space with a nonlinear mapping. Being linear in the feature space but nonlinear in the input space, the kernel is capable of deriving low-dimensional features that incorporate higher-order statistics. Linear Discriminant Analysis is applied to the kernel-mapped multiresolution feature data. The effectiveness of this Wavelet Kernel Fisher Classifier algorithm is compared with different existing popular face recognition algorithms using the FERET, ORL, Yale, and YaleB databases. This algorithm performs better than some of the existing popular algorithms.
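
The fractional power polynomial kernel named above is simple to state: k(x, y) = (x · y)^d with 0 < d < 1. A minimal sketch, with an illustrative exponent and feature vectors:

```python
# Sketch of the fractional power polynomial kernel: k(x, y) = (x . y)^d
# for 0 < d < 1. The exponent and inputs below are illustrative; this
# assumes non-negative dot products, as with non-negative wavelet-energy
# features.

def fpp_kernel(x, y, d=0.8):
    """Fractional power polynomial kernel value for two feature vectors."""
    dot = sum(a * b for a, b in zip(x, y))
    return dot ** d

print(round(fpp_kernel([1.0, 2.0], [2.0, 1.0]), 4))
```

Because the exponent is fractional, the kernel is nonlinear in the input space while remaining an inner product in an implicit feature space, which is what lets the subsequent Fisher discriminant stay linear there.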

  6. Feature Selection with Neighborhood Entropy-Based Cooperative Game Theory

    Directory of Open Access Journals (Sweden)

    Kai Zeng

    2014-01-01

    Full Text Available Feature selection plays an important role in machine learning and data mining. In recent years, various feature measurements have been proposed to select significant features from high-dimensional datasets. However, most traditional feature selection methods ignore features that have strong classification ability as a group but are weak as individuals. To deal with this problem, we redefine the redundancy, interdependence, and independence of features by using neighborhood entropy. The neighborhood entropy-based feature contribution is then proposed under the framework of cooperative games. The evaluative criteria of features can be formalized as the product of contribution and other classical feature measures. Finally, the proposed method is tested on several UCI datasets. The results show that the neighborhood entropy-based cooperative game theory model (NECGT) yields better performance than classical ones.

  7. Facial blindsight

    Directory of Open Access Journals (Sweden)

    Marco eSolcà

    2015-09-01

    Full Text Available Blindsight denotes unconscious residual visual capacities in the context of an inability to consciously recollect or identify visual information. It has been described for color and shape discrimination, movement or facial emotion recognition. The present study investigates a patient suffering from cortical blindness whilst maintaining select residual abilities in face detection. Our patient presented the capacity to distinguish between jumbled/normal faces, known/unknown faces or famous people’s categories although he failed to explicitly recognize or describe them. Conversely, performance was at chance level when asked to categorize non-facial stimuli. Our results provide clinical evidence for the notion that some aspects of facial processing can occur without perceptual awareness, possibly using direct tracts from the thalamus to associative visual cortex, bypassing the primary visual cortex.

  8. Facial blindsight.

    Science.gov (United States)

    Solcà, Marco; Guggisberg, Adrian G; Schnider, Armin; Leemann, Béatrice

    2015-01-01

    Blindsight denotes unconscious residual visual capacities in the context of an inability to consciously recollect or identify visual information. It has been described for color and shape discrimination, movement or facial emotion recognition. The present study investigates a patient suffering from cortical blindness whilst maintaining select residual abilities in face detection. Our patient presented the capacity to distinguish between jumbled/normal faces, known/unknown faces or famous people's categories although he failed to explicitly recognize or describe them. Conversely, performance was at chance level when asked to categorize non-facial stimuli. Our results provide clinical evidence for the notion that some aspects of facial processing can occur without perceptual awareness, possibly using direct tracts from the thalamus to associative visual cortex, bypassing the primary visual cortex. PMID:26483655

  9. Geometrically Invariant Watermarking Scheme Based on Local Feature Points

    Directory of Open Access Journals (Sweden)

    Jing Li

    2012-06-01

    Full Text Available Based on local invariant feature points and the cross-ratio principle, this paper presents a feature-point-based image watermarking scheme that is robust to geometric attacks and some signal processing. It extracts local invariant feature points from the image using an improved scale-invariant feature transform (SIFT) algorithm. Using these points as vertices, it constructs quadrilaterals to serve as local feature regions, and the watermark is inserted into these local feature regions repeatedly. To obtain stable local regions, it adjusts the number and distribution of the extracted feature points. In every chosen local feature region, the locations for embedding watermark bits are decided based on the cross ratio of four collinear points, which is invariant to projective transformation. Watermark bits are embedded by quantization modulation, in which the quantization step is computed from the given PSNR. Experimental results show that the proposed method can withstand many geometric attacks as well as compound attacks combining them.
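
The projective invariance the scheme relies on can be checked numerically: the cross ratio of four collinear points is unchanged by a projective transform. The particular 1-D projective map below is an arbitrary illustrative choice.

```python
# Sketch of cross-ratio invariance under a 1-D projective transform.

def cross_ratio(a, b, c, d):
    """Cross ratio (AC/BC) / (AD/BD) of four collinear points, given as
    1-D coordinates along their common line."""
    return ((c - a) / (c - b)) / ((d - a) / (d - b))

def projective(x, m=2.0, n=1.0, p=0.5, q=3.0):
    """Illustrative 1-D projective transform x -> (m*x + n) / (p*x + q)."""
    return (m * x + n) / (p * x + q)

pts = [0.0, 1.0, 2.0, 4.0]
before = cross_ratio(*pts)
after = cross_ratio(*[projective(x) for x in pts])
print(round(before, 6), round(after, 6))  # identical before and after
```

This is why embedding locations chosen by cross ratio survive projective distortions of the watermarked image.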

  10. Histological image classification using biologically interpretable shape-based features

    International Nuclear Information System (INIS)

    Automatic cancer diagnostic systems based on histological image classification are important for improving therapeutic decisions. Previous studies propose textural and morphological features for such systems. These features capture patterns in histological images that are useful for both cancer grading and subtyping. However, because many of these features lack a clear biological interpretation, pathologists may be reluctant to adopt these features for clinical diagnosis. We examine the utility of biologically interpretable shape-based features for classification of histological renal tumor images. Using Fourier shape descriptors, we extract shape-based features that capture the distribution of stain-enhanced cellular and tissue structures in each image and evaluate these features using a multi-class prediction model. We compare the predictive performance of the shape-based diagnostic model to that of traditional models, i.e., using textural, morphological and topological features. The shape-based model, with an average accuracy of 77%, outperforms or complements traditional models. We identify the most informative shapes for each renal tumor subtype from the top-selected features. Results suggest that these shapes are not only accurate diagnostic features, but also correlate with known biological characteristics of renal tumors. Shape-based analysis of histological renal tumor images accurately classifies disease subtypes and reveals biologically insightful discriminatory features. This method for shape-based analysis can be extended to other histological datasets to aid pathologists in diagnostic and therapeutic decisions
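
The Fourier shape descriptors mentioned above can be sketched as follows: a closed contour is treated as a complex signal and its DFT magnitudes, normalized and with the DC term skipped, describe the shape. The square contour and the number of coefficients kept are illustrative, not the paper's configuration.

```python
# Hedged sketch of Fourier shape descriptors for a closed contour.

import cmath

def fourier_descriptors(points, k=4):
    """Return k normalized |DFT| coefficients of an (x, y) contour."""
    z = [complex(x, y) for x, y in points]
    n = len(z)
    coeffs = []
    for f in range(1, k + 1):  # skip the DC term -> translation invariance
        c = sum(z[t] * cmath.exp(-2j * cmath.pi * f * t / n) for t in range(n))
        coeffs.append(abs(c))
    base = coeffs[0] if coeffs[0] else 1.0
    return [c / base for c in coeffs]  # normalize -> scale invariance

square = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2), (0, 1)]
fd = fourier_descriptors(square)
print([round(v, 3) for v in fd])
```

Taking magnitudes discards the phase, which also makes the descriptor independent of the contour's starting point and orientation.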

  11. Deletion of 4.4 Mb at 2q33.2q33.3 May Cause Growth Deficiency in a Patient with Mental Retardation, Facial Dysmorphic Features and Speech Delay.

    Science.gov (United States)

    Papoulidis, Ioannis; Paspaliaris, Vassilis; Papageorgiou, Elena; Siomou, Elissavet; Dagklis, Themistoklis; Sotiriou, Sotirios; Thomaidis, Loretta; Manolakos, Emmanouil

    2015-01-01

    A patient with a rare interstitial deletion of chromosomal band 2q33.2q33.3 is described. The clinical features resembled the 2q33.1 microdeletion syndrome (Glass syndrome), including mental retardation, facial dysmorphism, high-arched narrow palate, growth deficiency, and speech delay. The chromosomal aberration was characterized by whole genome BAC aCGH. A comparison of the current patient and Glass syndrome features revealed that this case displayed a relatively mild phenotype. Overall, it is suggested that the deleted region of 2q33 causative for Glass syndrome may be larger than initially suggested. PMID:25925190

  12. DWT-Based Feature Extraction from ECG Signal

    Directory of Open Access Journals (Sweden)

    V.K.Srivastava

    2013-01-01

    Full Text Available The electrocardiogram is used to measure the rate and regularity of heartbeats and to detect any irregularity in the heart. An ECG translates the heart's electrical activity into a wave line on paper or screen. For the feature extraction and classification task we use the discrete wavelet transform (DWT): since the wavelet transform is a two-dimensional time-scale processing method, it is suitable for non-stationary ECG signals (given adequate scale values and shifting in time). The data are then analyzed and classified using a neuro-fuzzy system, a hybrid of artificial neural networks and fuzzy logic.
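
A one-level Haar transform is the simplest instance of the DWT named above and shows the approximation/detail split; the signal is synthetic, and real ECG work would typically use a wavelet such as db4 over several decomposition levels.

```python
# Illustrative one-level Haar DWT: splits a signal into approximation
# (low-pass) and detail (high-pass) coefficients. Signal is synthetic.

def haar_dwt(signal):
    """One-level Haar DWT; returns (approximation, detail) lists.
    Assumes an even-length input."""
    s = 2 ** 0.5
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail

sig = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
a, d = haar_dwt(sig)
print([round(x, 3) for x in a], [round(x, 3) for x in d])
```

The transform is orthonormal, so the signal's energy is preserved exactly across the two coefficient sets, which is what makes coefficient energies usable as classification features.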

  13. Facial Expression Recognition Based on Error-Correcting Output Coding

    Institute of Scientific and Technical Information of China (English)

    余棉水; 朱岸青; 解晓萌

    2014-01-01

    Multi-class classification has long been a hot topic in pattern recognition. This paper proposes a multi-class classifier algorithm based on Error-Correcting Output Coding (ECOC) and Support Vector Machines (SVM). An error-correcting output coding matrix is designed according to communication coding theory; following this matrix, several mutually independent binary SVMs are constructed and fused into a single multi-class classifier. To verify the classifier's effectiveness, Gabor wavelets are used to extract facial expression features, two-dimensional principal component analysis (2DPCA) is applied to reduce the dimensionality of the extracted features, and the classifier is used for facial expression recognition. Experimental results show that the method effectively improves the facial expression recognition rate and is highly robust.
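
The ECOC decoding step can be sketched as below: each class gets a codeword, each bit position corresponds to one binary classifier, and the predicted class is the codeword nearest (in Hamming distance) to the classifiers' joint output. The code matrix, emotion labels, and bit outputs are illustrative; the binary SVMs are abstracted to their output bits.

```python
# Toy ECOC decoder: nearest codeword under Hamming distance.

CODES = {                      # class -> codeword (one bit per binary classifier)
    "anger":    [1, 1, 1, 0, 0],
    "joy":      [1, 0, 0, 1, 1],
    "sadness":  [0, 1, 0, 1, 0],
    "surprise": [0, 0, 1, 0, 1],
}

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def decode(outputs):
    """Map the binary classifiers' joint output to the nearest codeword."""
    return min(CODES, key=lambda c: hamming(CODES[c], outputs))

# One classifier flipped a bit; the code still recovers the right class.
print(decode([1, 0, 1, 1, 1]))
```

The error-correcting property comes from codeword separation: with minimum Hamming distance m between codewords, up to floor((m - 1) / 2) wrong binary decisions can be tolerated.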

  14. Reconocimiento facial

    OpenAIRE

    Urtiaga Abad, Juan Alfonso

    2014-01-01

    This project addresses one of the most challenging fields of artificial intelligence: facial recognition. Something as simple for people as recognizing a familiar face translates into complex algorithms and thousands of data items processed in a matter of seconds. The project begins with a survey of the state of the art of the various facial recognition techniques, from the most widely used and proven, such as PCA and LDA, to experimental techniques that use ...

  15. Multinomial logistic regression-based feature selection for hyperspectral data

    Science.gov (United States)

    Pal, Mahesh

    2012-02-01

    This paper evaluates the performance of three feature selection methods based on multinomial logistic regression, and compares the performance of the best multinomial logistic regression-based feature selection approach with the support vector machine based recursive feature elimination approach. Two hyperspectral datasets were used, one consisting of 65 features (DAIS data) and the other of 185 features (AVIRIS data). Results suggest that between 10 and 15 features selected by the multinomial logistic regression-based feature selection approach proposed by Cawley and Talbot achieve a significant improvement in classification accuracy in comparison to the use of all the features of the DAIS and AVIRIS datasets. In addition to the improved performance, the Cawley and Talbot approach does not require any user-defined parameter, thus avoiding the need for a model selection stage. In comparison, the other two multinomial logistic regression-based feature selection approaches require one user-defined parameter and do not perform as well as the Cawley and Talbot approach in terms of (i) the number of features required to achieve classification accuracy comparable to that achieved using the full dataset, and (ii) the classification accuracy achieved by the selected features. The Cawley and Talbot approach was also found to be computationally more efficient than the SVM-RFE technique, though both use the same number of selected features to achieve an equal or even higher level of accuracy than that achieved with the full hyperspectral datasets.

  16. Classifying Chimpanzee Facial Expressions Using Muscle Action

    OpenAIRE

    Parr, Lisa A.; Bridget M Waller; Vick, Sarah J.; Bard, Kim A.

    2007-01-01

    The Chimpanzee Facial Action Coding System (ChimpFACS) is an objective, standardized observational tool for measuring facial movement in chimpanzees based on the well-known human Facial Action Coding System (FACS; P. Ekman & W. V. Friesen, 1978). This tool enables direct structural comparisons of facial expressions between humans and chimpanzees in terms of their common underlying musculature. Here the authors provide data on the first application of the ChimpFACS to validate existing categor...

  17. Simultaneous Channel and Feature Selection of Fused EEG Features Based on Sparse Group Lasso

    Directory of Open Access Journals (Sweden)

    Jin-Jia Wang

    2015-01-01

    Full Text Available Feature extraction and classification of EEG signals are core parts of brain computer interfaces (BCIs). Due to the high dimension of the EEG feature vector, an effective feature selection algorithm has become an integral part of research studies. In this paper, we present a new method based on a wrapped Sparse Group Lasso for channel and feature selection of fused EEG signals. The high-dimensional fused features are first obtained; these include the power spectrum, time-domain statistics, AR model, and wavelet coefficient features extracted from the preprocessed EEG signals. The wrapped channel and feature selection method is then applied, which uses a logistic regression model with a Sparse Group Lasso penalty function. The model is fitted on the training data, and parameter estimation is obtained by a modified blockwise coordinate descent and coordinate gradient descent method. The best parameters and feature subset are selected using 10-fold cross-validation. Finally, the test data are classified using the trained model. Compared with existing channel and feature selection methods, results show that the proposed method is more suitable, more stable, and faster for high-dimensional feature fusion. It can simultaneously achieve channel and feature selection with a lower error rate. The test accuracy on data from the international BCI Competition IV reached 84.72%.
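
The Sparse Group Lasso penalty itself is compact enough to sketch: an L1 term drives within-group sparsity and a sum of group L2 norms drives whole-group (here, whole-channel) sparsity. The weights, grouping, and regularization constants below are illustrative.

```python
# Sketch of the Sparse Group Lasso penalty: lam1 * ||w||_1 plus
# lam2 * sum of per-group L2 norms. All numbers are illustrative.

def sparse_group_lasso_penalty(w, groups, lam1=0.1, lam2=0.1):
    """Penalty for weights w with features partitioned into index groups."""
    l1 = sum(abs(x) for x in w)                                   # sparsity within groups
    l2 = sum(sum(w[i] ** 2 for i in g) ** 0.5 for g in groups)    # group sparsity
    return lam1 * l1 + lam2 * l2

w = [0.0, 0.0, 0.5, -0.5, 1.0]    # first group (channel) zeroed out entirely
groups = [[0, 1], [2, 3], [4]]    # features grouped by EEG channel
print(round(sparse_group_lasso_penalty(w, groups), 4))
```

A zeroed group contributes nothing to either term, which is why minimizing this penalty can drop whole channels while still zeroing individual features inside the channels that remain.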

  18. Feature selection for splice site prediction: A new method using EDA-based feature ranking

    Directory of Open Access Journals (Sweden)

    Rouzé Pierre

    2004-05-01

    Full Text Available Abstract Background The identification of relevant biological features in large and complex datasets is an important step towards gaining insight into the processes underlying the data. Other advantages of feature selection include the ability of the classification system to attain good or even better solutions using a restricted subset of features, and a faster classification. Thus, robust methods for fast feature selection are of key importance in extracting knowledge from complex biological data. Results In this paper we present a novel method for feature subset selection applied to splice site prediction, based on estimation of distribution algorithms, a more general framework of genetic algorithms. From the estimated distribution of the algorithm, a feature ranking is derived. Afterwards this ranking is used to iteratively discard features. We apply this technique to the problem of splice site prediction, and show how it can be used to gain insight into the underlying biological process of splicing. Conclusion We show that this technique proves to be more robust than the traditional use of estimation of distribution algorithms for feature selection: instead of returning a single best subset of features (as they normally do), this method provides a dynamical view of the feature selection process, like the traditional sequential wrapper methods. However, the method is faster than the traditional techniques, and scales better to datasets described by a large number of features.
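
The idea of reading a feature ranking off an EDA's estimated distribution can be sketched with a PBIL-style toy. Everything here is an assumption for illustration: the fitness function, the "relevant" feature set, and all hyperparameters; the paper's actual EDA and splice-site classifier are far richer.

```python
# Toy PBIL-style EDA: maintain per-feature selection probabilities, pull
# them toward the fittest sampled subset each iteration, then rank
# features by their final probabilities. Fitness is a made-up stand-in.

import random

def eda_feature_ranking(n_features, fitness, iters=60, pop=30, lr=0.2, seed=0):
    rng = random.Random(seed)
    p = [0.5] * n_features                  # P(select feature i)
    for _ in range(iters):
        popn = [[1 if rng.random() < pi else 0 for pi in p] for _ in range(pop)]
        best = max(popn, key=fitness)       # fittest subset this generation
        p = [(1 - lr) * pi + lr * bi for pi, bi in zip(p, best)]
    order = sorted(range(n_features), key=lambda i: -p[i])
    return order, p

relevant = {0, 3}                           # hypothetical informative features
fit = lambda mask: sum(mask[i] for i in relevant) - 0.1 * sum(mask)
order, p = eda_feature_ranking(6, fit)
print(order[:2])                            # the two top-ranked features
```

The ranking, rather than a single best subset, is then available for the iterative discarding step the abstract describes.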

  19. Application of data fusion in computer facial recognition

    Directory of Open Access Journals (Sweden)

    Wang Ai Qiang

    2013-11-01

    Full Text Available The recognition rate of any single recognition method is low in computer facial recognition. We propose a new confluent facial recognition method using data fusion technology: a variety of recognition algorithms are combined to form a fusion-based face recognition system that improves the recognition rate in several ways. Data fusion is considered at three levels: data-level fusion, feature-level fusion, and decision-level fusion. The data layer uses a simple weighted-average algorithm, which is easy to implement; an artificial neural network algorithm was selected for the feature layer and a fuzzy reasoning algorithm for the decision layer. Finally, we compared the approach with the BP neural network algorithm on the MATLAB experimental platform. The results show that the recognition rate is greatly improved after adopting data fusion technology in computer facial recognition.
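
The data-level step named above, a simple weighted average, can be sketched in a few lines; the sensor values and weights are illustrative.

```python
# Minimal sketch of data-level fusion by normalized weighted averaging
# of co-registered measurements from several sources. Numbers are
# illustrative, e.g. the same pixel intensity reported by three sensors.

def weighted_average(values, weights):
    """Fuse raw measurements by a normalized weighted average."""
    total = sum(weights)
    return sum(v * w for v, w in zip(values, weights)) / total

print(weighted_average([0.80, 0.70, 0.90], [0.5, 0.3, 0.2]))
```

Normalizing by the weight sum keeps the fused value in the same range as the inputs regardless of how the weights are scaled.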

  20. Study on Isomerous CAD Model Exchange Based on Feature

    Institute of Scientific and Technical Information of China (English)

    SHAO Xiaodong; CHEN Feng; XU Chenguang

    2006-01-01

    A feature-based model-exchange method between isomerous CAD systems is put forward in this paper. In this method, CAD model information is accessed at both the feature and geometry levels and converted according to standard feature operations. The feature information, including the feature tree, dimensions, and constraints, which would be lost in traditional data conversion, is converted completely from the source CAD system to the destination one, along with the geometry. The transferred model can therefore be edited through feature operations, which cannot be achieved with a general model-exchange interface.

  1. CONSTRUCTION AND MODIFICATION OF FLEXIBLE FEATURE-BASED MODELS

    Institute of Scientific and Technical Information of China (English)

    1999-01-01

    A new approach is proposed to generate flexible feature-based models (FFBM), which can be modified dynamically. A BRep/CSFG/FRG hybrid scheme is used to describe FFBM, in which BRep explicitly defines the model, the CSFG (constructive solid-feature geometry) tree records the feature-based modelling procedure, and the FRG (feature relation graph) reflects different kinds of relationships among features. Topological operators with local retrievability are designed to implement feature addition, which is traced in detail by a topological operation list (TOL). As a result, FFBM can be modified directly in the system database. Related features' chain reactions and variable topologies are supported in design modification, after which the product information adhering to features will not be lost. Further, a feature can be modified as rapidly as it was added.

  2. Automatic Part Primitive Feature Identification Based on Faceted Models

    Directory of Open Access Journals (Sweden)

    Muizuddin Azka

    2012-09-01

    Full Text Available Feature recognition technology has been developed along with the integration of CAD/CAPP/CAM. Automatic feature detection based on faceted models is expected to speed up manufacturing process design activities, such as selecting the tool to be used or the machining process required for a variety of different features. This research focuses on the detection of the primitive features present in a part. This is done by slicing the part and grouping adjacent facets. The type of each feature is identified by simply evaluating the normal vector directions of each facet group. In order to identify features on the various planes of a part, the planes are rotated, one at a time, to be parallel with the reference plane. The results showed that this method can identify primitive features automatically and accurately in all planes of the tested part, covering pocket, cylindrical and profile features.

  3. INTEGRATED EXPRESSIONAL AND COLOR INVARIANT FACIAL RECOGNITION SCHEME FOR HUMAN BIOMETRIC SYSTEM

    Directory of Open Access Journals (Sweden)

    M.Punithavalli

    2013-09-01

    Full Text Available In many practical applications such as biometrics, video surveillance and human-computer interaction, face recognition plays a major role. Previous works focused on recognizing and enhancing biometric systems based on facial components. In this work, we build an Integrated Expressional and Color Invariant Facial Recognition scheme for human biometric recognition, suited to public-participation areas with different security-provisioning needs. First, the features of the face are identified and processed using a Bayes classifier with RGB and HSV color bands. Second, psychological emotional variances are identified and linked to the respective human facial expressions based on the facial action coding system. Finally, an integrated expressional and color invariant facial recognition scheme is proposed for varied conditions of illumination, pose, transformation, etc. These conditions on the color invariant model suit easy and more efficient biometric recognition in the public domain and in highly confidential security zones. The integration is derived from genetic operations on the color and expression components of the facial feature system. Experimental evaluation is planned to be done with public face databases (DBs) such as CMU-PIE, Color FERET, XM2VTSDB, SCface, and FRGC 2.0 to estimate the performance of the proposed integrated expressional facial and color invariant recognition scheme (IEFCIRS). Performance evaluation is done based on constraints like recognition rate, security and evaluation time.

  4. Robust speech features representation based on computational auditory model

    Institute of Scientific and Technical Information of China (English)

    LU Xugang; JIA Chuan; DANG Jianwu

    2004-01-01

    A speech signal processing and feature extraction method based on a computational auditory model is proposed. The computational model is based on psychological and physiological knowledge and digital signal processing methods. For each stage of the hearing perception system, there is a corresponding computational model to simulate its function, and features at different levels are extracted at each stage. A further processing step for the primary auditory spectrum, based on lateral inhibition, is proposed to extract much more robust speech features. All these features can be regarded as internal representations of the speech stimulus in the hearing system. Robust speech recognition experiments are conducted to test the robustness of the features. Results show that the representations based on the proposed computational auditory model are robust representations of speech signals.
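
    A minimal sketch of lateral-inhibition sharpening of an auditory spectrum, assuming a simple symmetric two-neighbour inhibition kernel with half-wave rectification (the paper's exact kernel is not given here):

    ```python
    def lateral_inhibition(spectrum, alpha=0.3):
        """Subtract a fraction of each channel's two neighbours, with
        half-wave rectification; missing edge neighbours count as zero."""
        n = len(spectrum)
        out = []
        for i in range(n):
            left = spectrum[i - 1] if i > 0 else 0.0
            right = spectrum[i + 1] if i < n - 1 else 0.0
            out.append(max(0.0, spectrum[i] - alpha * (left + right)))
        return out

    spec = [1.0, 1.0, 4.0, 1.0, 1.0]
    # The spectral peak is sharpened relative to its flanks:
    print(lateral_inhibition(spec))   # -> [0.7, 0.0, 3.4, 0.0, 0.7]
    ```

    The effect is the one the abstract relies on: spectral peaks are emphasized and smooth backgrounds suppressed, which makes the resulting features less sensitive to broadband noise.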

  5. Features Fusion Based on FLD for Face Recognition

    OpenAIRE

    Changjun Zhou; Qiang Zhang; Xiaopeng Wei; Ziqi Wei

    2010-01-01

    In this paper, we introduce a feature fusion method for face recognition based on Fisher's Linear Discriminant (FLD). The method extracts features by employing two-dimensional principal component analysis (2DPCA) and Gabor wavelets, and then fuses the features extracted by each with FLD. As a holistic feature extraction method, 2DPCA performs dimensionality reduction on the input dataset while retaining the characteristics of the dataset that contribute most to its variance by elimin...

  6. Accurate Image Retrieval Algorithm Based on Color and Texture Feature

    Directory of Open Access Journals (Sweden)

    Chunlai Yan

    2013-06-01

    Full Text Available Content-Based Image Retrieval (CBIR) is one of the most active hot spots in the current research field of multimedia retrieval. Based on the description and extraction of the visual content (features) of an image, CBIR aims to find images that contain the specified content (features) in an image database. In this paper, several key technologies of CBIR, e.g. the extraction of the color and texture features of the image, as well as similarity measures, are investigated. On the basis of the theoretical research, an image retrieval system based on color and texture features is designed. In this system, the weighted color feature based on HSV space is adopted as the color feature vector; four features of the co-occurrence matrix, namely energy, entropy, inertia quadrature and correlation, are used to construct texture vectors; and the Euclidean distance is employed as the similarity measure. Experimental results show that this CBIR system is efficient in image retrieval.
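
    The texture side of this pipeline can be sketched as follows: a normalized gray-level co-occurrence matrix for a single horizontal offset, followed by the four features named above. The 4-level toy image is illustrative, and the formulas are the standard Haralick-style definitions rather than necessarily the paper's exact variants:

    ```python
    import math

    def glcm(img, levels):
        """Normalized gray-level co-occurrence matrix for offset (0, 1)."""
        P = [[0.0] * levels for _ in range(levels)]
        total = 0
        for row in img:
            for a, b in zip(row, row[1:]):
                P[a][b] += 1
                total += 1
        return [[v / total for v in row] for row in P]

    def glcm_features(P):
        levels = len(P)
        cells = [(i, j) for i in range(levels) for j in range(levels)]
        energy = sum(P[i][j] ** 2 for i, j in cells)
        entropy = -sum(P[i][j] * math.log2(P[i][j])
                       for i, j in cells if P[i][j] > 0)
        inertia = sum((i - j) ** 2 * P[i][j] for i, j in cells)
        mu_i = sum(i * P[i][j] for i, j in cells)
        mu_j = sum(j * P[i][j] for i, j in cells)
        var_i = sum((i - mu_i) ** 2 * P[i][j] for i, j in cells)
        var_j = sum((j - mu_j) ** 2 * P[i][j] for i, j in cells)
        corr = sum((i - mu_i) * (j - mu_j) * P[i][j] for i, j in cells)
        corr /= math.sqrt(var_i * var_j) if var_i * var_j > 0 else 1.0
        return energy, entropy, inertia, corr

    img = [[0, 0, 1, 1],
           [0, 0, 1, 1],
           [0, 2, 2, 2],
           [2, 2, 3, 3]]
    feats = glcm_features(glcm(img, levels=4))
    print(feats)
    ```

    Query and database images would each be summarized by such a vector (concatenated with the weighted HSV color feature), with retrieval ranked by Euclidean distance between vectors.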

  7. Facial melanoses: Indian perspective

    OpenAIRE

    Neena Khanna; Seemab Rasool

    2011-01-01

    Facial melanoses (FM) are a common presentation in Indian patients, causing cosmetic disfigurement with considerable psychological impact. Some of the well-defined causes of FM include melasma, Riehl's melanosis, lichen planus pigmentosus, erythema dyschromicum perstans (EDP), erythrosis, and poikiloderma of Civatte, but there is considerable overlap in features amongst these clinical entities. The etiology of most of these conditions is unknown, but some factors such as UV radiation in melasma, exposure...

  8. Multifinger Feature Level Fusion Based Fingerprint Identification

    OpenAIRE

    Praveen N; Tessamma Thomas

    2012-01-01

    Fingerprint-based authentication systems are among the most cost-effective biometric authentication techniques employed for personal identification. As the database population increases, fast identification/recognition algorithms with high accuracy are required. Accuracy can be increased using multimodal evidence collected from multiple biometric traits. In this work, consecutive fingerprint images are taken, global singularities are located using directional field strength and their local orient...

  9. High Dimensional Data Clustering Using Fast Cluster Based Feature Selection

    Directory of Open Access Journals (Sweden)

    Karthikeyan.P

    2014-03-01

    Full Text Available Feature selection involves identifying a subset of the most useful features that produces results compatible with those of the original entire set of features. A feature selection algorithm may be evaluated from both the efficiency and effectiveness points of view. While efficiency concerns the time required to find a subset of features, effectiveness is related to the quality of the subset of features. Based on these criteria, a fast clustering-based feature selection algorithm (FAST) is proposed and experimentally evaluated in this paper. The FAST algorithm works in two steps. In the first step, features are divided into clusters using graph-theoretic clustering methods. In the second step, the most representative feature that is strongly related to the target classes is selected from each cluster to form the subset of features. Features in different clusters are relatively independent, so the clustering-based strategy of FAST has a high probability of producing a subset of useful and independent features. To ensure the efficiency of FAST, we adopt the efficient minimum-spanning-tree (MST) clustering method based on Kruskal's algorithm. The efficiency and effectiveness of the FAST algorithm are evaluated through an empirical study.
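
    The MST-based clustering step can be sketched with Kruskal's algorithm and a union-find structure. As a simplification, absolute Pearson correlation stands in for the feature-similarity measure (the abstract does not name one), and the sketch stops at the clusters; FAST would then pick one representative feature per cluster:

    ```python
    from itertools import combinations

    def pearson(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        vx = sum((a - mx) ** 2 for a in x)
        vy = sum((b - my) ** 2 for b in y)
        return cov / (vx * vy) ** 0.5 if vx * vy > 0 else 0.0

    def mst_feature_clusters(features, cut=0.5):
        """Build Kruskal's MST over features (edge weight = 1 - |corr|),
        drop MST edges heavier than `cut`, and return the components."""
        n = len(features)
        edges = sorted((1.0 - abs(pearson(features[i], features[j])), i, j)
                       for i, j in combinations(range(n), 2))
        parent = list(range(n))
        def find(a):
            while parent[a] != a:
                parent[a] = parent[parent[a]]
                a = parent[a]
            return a
        kept = []
        for w, i, j in edges:              # Kruskal: lightest non-cycle edges
            ri, rj = find(i), find(j)
            if ri != rj:
                parent[ri] = rj
                if w <= cut:               # keep only strong-similarity edges
                    kept.append((i, j))
        comp = list(range(n))              # re-derive components from kept edges
        def croot(a):
            while comp[a] != a:
                a = comp[a]
            return a
        for i, j in kept:
            comp[croot(i)] = croot(j)
        groups = {}
        for i in range(n):
            groups.setdefault(croot(i), []).append(i)
        return sorted(groups.values())

    # Features 0-2 are linearly dependent on each other; feature 3 is unrelated.
    feats = [[1, 2, 3, 4], [2, 4, 6, 8], [4, 3, 2, 1], [1, -1, -1, 1]]
    clusters = mst_feature_clusters(feats)
    print(clusters)   # -> [[0, 1, 2], [3]]
    ```

    The MST keeps the cost near O(n² log n) in the number of features, which is what lets FAST scale to high-dimensional data.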

  10. Segmentation-Based PolSAR Image Classification Using Visual Features: RHLBP and Color Features

    Directory of Open Access Journals (Sweden)

    Jian Cheng

    2015-05-01

    Full Text Available A segmentation-based fully-polarimetric synthetic aperture radar (PolSAR) image classification method that incorporates texture features and color features is designed and implemented. This method is based on a framework that conjunctively uses statistical region merging (SRM) for segmentation and a support vector machine (SVM) for classification. In the segmentation step, we propose an improved local binary pattern (LBP) operator named the regional homogeneity local binary pattern (RHLBP) to guarantee regional homogeneity in PolSAR images. In the classification step, color features extracted from false-color images are applied to improve the classification accuracy. The RHLBP operator and color features can provide discriminative information to separate those pixels and regions with similar polarimetric features which are from different classes. Extensive experimental comparisons with conventional methods on L-band PolSAR data demonstrate the effectiveness of our proposed method for PolSAR image classification.
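
    RHLBP extends the standard local binary pattern; the base 8-neighbour LBP operator that it builds on can be sketched as follows (the image values are illustrative):

    ```python
    def lbp_code(img, r, c):
        """Standard 8-neighbour local binary pattern code at pixel (r, c)."""
        center = img[r][c]
        # Neighbours in clockwise order starting at the top-left.
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                   (1, 1), (1, 0), (1, -1), (0, -1)]
        code = 0
        for bit, (dr, dc) in enumerate(offsets):
            if img[r + dr][c + dc] >= center:
                code |= 1 << bit
        return code

    img = [[6, 5, 2],
           [7, 6, 1],
           [9, 8, 7]]
    print(lbp_code(img, 1, 1))   # -> 241
    ```

    A histogram of such codes over a region is the usual LBP texture descriptor; the RHLBP variant additionally constrains the codes to respect regional homogeneity, per the abstract.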

  11. Face Recognition Using Particle Swarm Optimization-Based Selected Features

    Directory of Open Access Journals (Sweden)

    Rabab M. Ramadan

    2009-06-01

    Full Text Available Feature selection (FS) is a global optimization problem in machine learning, which reduces the number of features, removes irrelevant, noisy and redundant data, and results in acceptable recognition accuracy. It is the most important step affecting the performance of a pattern recognition system. This paper presents a novel feature selection algorithm based on particle swarm optimization (PSO). PSO is a computational paradigm based on the idea of collaborative behavior inspired by the social behavior of bird flocking or fish schooling. The algorithm is applied to coefficients extracted by two feature extraction techniques: the discrete cosine transform (DCT) and the discrete wavelet transform (DWT). The proposed PSO-based feature selection algorithm is utilized to search the feature space for the optimal feature subset, where features are carefully selected according to a well-defined discrimination criterion. Evolution is driven by a fitness function defined in terms of maximizing the class separation (scatter index). The classifier performance and the length of the selected feature vector are considered for performance evaluation using the ORL face database. Experimental results show that the PSO-based feature selection algorithm generates excellent recognition results with a minimal set of selected features.
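
    A compact sketch of binary PSO for feature selection, using the common sigmoid transfer function to map velocities to bit probabilities. A toy fitness stands in for the paper's scatter-index criterion; the "good" feature set and the size penalty are hypothetical:

    ```python
    import math
    import random

    random.seed(2)

    GOOD = {1, 4, 6}                      # hypothetical useful features

    def fitness(mask):
        # Toy stand-in for the scatter-index criterion: reward useful
        # features, mildly penalize subset length.
        return sum(mask[i] for i in GOOD) - 0.1 * sum(mask)

    def binary_pso(fitness, n_bits, n_particles=20, n_iter=40,
                   w=0.7, c1=1.5, c2=1.5):
        pos = [[random.randint(0, 1) for _ in range(n_bits)]
               for _ in range(n_particles)]
        vel = [[0.0] * n_bits for _ in range(n_particles)]
        pbest = [p[:] for p in pos]
        pbest_f = [fitness(p) for p in pos]
        g = max(range(n_particles), key=lambda k: pbest_f[k])
        gbest, gbest_f = pbest[g][:], pbest_f[g]
        for _ in range(n_iter):
            for k in range(n_particles):
                for d in range(n_bits):
                    r1, r2 = random.random(), random.random()
                    vel[k][d] = (w * vel[k][d]
                                 + c1 * r1 * (pbest[k][d] - pos[k][d])
                                 + c2 * r2 * (gbest[d] - pos[k][d]))
                    # Sigmoid transfer: velocity -> probability of a 1 bit.
                    prob = 1.0 / (1.0 + math.exp(-vel[k][d]))
                    pos[k][d] = 1 if random.random() < prob else 0
                f = fitness(pos[k])
                if f > pbest_f[k]:
                    pbest[k], pbest_f[k] = pos[k][:], f
                    if f > gbest_f:
                        gbest, gbest_f = pos[k][:], f
        return gbest, gbest_f

    best, best_f = binary_pso(fitness, n_bits=8)
    print(best, best_f)
    ```

    In the paper's setting, each bit would select one DCT or DWT coefficient, and the fitness would be the scatter index computed on the selected coefficients.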

  12. Feature Selection for Neural Network Based Stock Prediction

    Science.gov (United States)

    Sugunnasil, Prompong; Somhom, Samerkae

    We propose a new methodology of feature selection for stock movement prediction. The methodology is based on finding the features which minimize the correlation relation function. We first produce all combinations of features and evaluate each of them using our evaluation function. We search through the generated set with a hill-climbing approach. The self-organizing-map-based stock prediction model is utilized as the prediction method. We conduct the experiment on data sets of the Microsoft Corporation, General Electric Co. and Ford Motor Co. The results show that our feature selection method can improve the efficiency of neural-network-based stock prediction.

  13. Gender Classification Based on Geometry Features of Palm Image

    OpenAIRE

    Ming Wu; Yubo Yuan

    2014-01-01

    This paper presents a novel gender classification method based on geometry features of palm images which is simple, fast, and easy to handle. This gender classification method comprises two main parts. The first is feature extraction by image processing; the other is a classification system with a polynomial smooth support vector machine (PSSVM). A total of 180 palm images were collected from 30 persons to verify the validity of the proposed gender classi...

  14. Optimized features selection for gender classification using optimization algorithms

    OpenAIRE

    KHAN, Sajid Ali; Nazir, Muhammad; RIAZ, Naveed

    2013-01-01

    Optimized feature selection is an important task in gender classification. The optimized features not only reduce the dimensionality, but also reduce the error rate. In this paper, we propose a technique for the extraction of facial features using both appearance-based and geometric-based feature extraction methods. The extracted features are then optimized using particle swarm optimization (PSO) and the bee algorithm. The geometric-based features are optimized by PSO with ensem...

  15. Facial image of Biblical Jews from Israel.

    Science.gov (United States)

    Kobyliansky, E; Balueva, T; Veselovskaya, E; Arensburg, B

    2008-06-01

    The present report deals with reconstructing the facial shapes of ancient inhabitants of Israel based on their cranial remains. The skulls of a male from the Hellenistic period and a female from the Roman period have been reconstructed. They were restored using the most recently developed programs in anthropological facial reconstruction, especially that of the Institute of Ethnology and Anthropology of the Russian Academy of Sciences (Balueva & Veselovskaya 2004). The basic craniometrical measurements of the two skulls were measured according to Martin & Saller (1957) and compared to the data from three ancient populations of Israel described by Arensburg et al. (1980): that of the Hellenistic period dating from 332 to 37 B.C., that of the Roman period, from 37 B.C. to 324 C.E., and that of the Byzantine period that continued until the Arab conquest in 640 C.E. Most of this osteological material was excavated in the Jordan River and the Dead Sea areas. A sample from the XVIIth century Jews from Prague (Matiegka 1926) was also used for osteometrical comparisons. The present study will characterize not only the osteological morphology of the material, but also the facial appearance of ancient inhabitants of Israel. From an anthropometric point of view, the two skulls studied here definitely belong to the same sample from the Hellenistic, Roman, and Byzantine populations of Israel as well as from Jews from Prague. Based on its facial reconstruction, the male skull may belong to the large Mediterranean group that inhabited this area from historic to modern times. The female skull also exhibits all the Mediterranean features but, in addition, probably some equatorial (African) mixture manifested by the shape of the reconstructed nose and the facial prognathism. PMID:18712157

  16. Fingerprint image segmentation based on multi-features histogram analysis

    Science.gov (United States)

    Wang, Peng; Zhang, Youguang

    2007-11-01

    An effective fingerprint image segmentation method based on multi-feature histogram analysis is presented. We extract a new feature and combine it with three other features to segment fingerprints. Two of these four features are each the reciprocal of one of the other two, so the features are divided into two groups. The histograms of these two features are calculated respectively to determine which feature group is used to segment the target fingerprint. The features can also divide fingerprints into two classes, of high and low quality. Experimental results show that our algorithm can classify foreground and background effectively with lower computational cost, and that it can also reduce the number of pseudo-minutiae detected and improve the performance of AFIS.

  17. Ear Biometrics Based on Geometrical Feature Extraction

    OpenAIRE

    Choraś, Micha

    2005-01-01

    Biometric identification methods have proved to be very efficient, more natural and easier for users than traditional methods of human identification. In fact, only biometric methods truly identify humans, not the keys and cards they possess or the passwords they should remember. The future of biometrics will surely lead to systems based on image analysis, as the data acquisition is very simple and requires only cameras, scanners or sensors. More importantly, such methods could be passive, which means that t...

  18. Comparison of Feature Based Fingerspelling Recognition Algorithms

    OpenAIRE

    Ghasemzadeh, Aman

    2012-01-01

    ABSTRACT: Sign language is a manual language which uses hand gestures instead of sounds. These gestures are produced by combining hand shapes with the orientation and movement of the hands. Sign language is not international, and it has been defined with the intention of communicating with deaf people. In sign language, two major types of communication are considered. The first is based on a word-sign vocabulary, where common words are defined by body language. The second, which is also known as finger...

  19. Unsupervised Feature Selection Based on the Distribution of Features Attributed to Imbalanced Data Sets

    Directory of Open Access Journals (Sweden)

    Mina Alibeigi, Sattar Hashemi & Ali Hamzeh

    2011-04-01

    Full Text Available Since dealing with high dimensional data is computationally complex and sometimes even intractable, recently several feature reduction methods have been developed to reduce the dimensionality of the data in order to simplify the calculation analysis in various applications such as text categorization, signal processing, image retrieval and gene expression, among many others. Among feature reduction techniques, feature selection is one of the most popular methods due to the preservation of the original meaning of the features. However, most current feature selection methods do not perform well when fed imbalanced data sets, which are pervasive in real-world applications. In this paper, we propose a new unsupervised feature selection method suited to imbalanced data sets, which removes redundant features from the original feature space based on the distribution of features. To show the effectiveness of the proposed method, popular feature selection methods have been implemented and compared. Experimental results on several imbalanced data sets, derived from the UCI repository database, illustrate the effectiveness of the proposed method in comparison with rival methods in terms of both the AUC and F1 performance measures of 1-Nearest Neighbor and Naïve Bayes classifiers and the percentage of selected features.

  20. Facial expression (mood) recognition from facial images using committee neural networks

    OpenAIRE

    Hariharan SI; Reddy Narender P; Kulkarni Saket S

    2009-01-01

    Abstract Background Facial expressions are important in facilitating human communication and interactions. Also, they are used as an important tool in behavioural studies and in medical rehabilitation. Facial image based mood detection techniques may provide a fast and practical approach for non-invasive mood detection. The purpose of the present study was to develop an intelligent system for facial image based expression classification using committee neural networks. Methods Several facial ...

  1. Methods to quantify soft-tissue based facial growth and treatment outcomes in children: a systematic review.

    Directory of Open Access Journals (Sweden)

    Sander Brons

    Full Text Available CONTEXT: Technological advancements have led craniofacial researchers and clinicians into the era of three-dimensional digital imaging for quantitative evaluation of craniofacial growth and treatment outcomes. OBJECTIVE: To give an overview of soft-tissue based methods for quantitative longitudinal assessment of facial dimensions in children until six years of age, and to assess the reliability of these methods in studies with good methodological quality. DATA SOURCE: PubMed, EMBASE, Cochrane Library, Web of Science, Scopus and CINAHL were searched. A hand search was performed to check for additional relevant studies. STUDY SELECTION: Primary publications on facial growth and treatment outcomes in children younger than six years of age were included. DATA EXTRACTION: Independent data extraction by two observers. A quality assessment instrument was used to determine the methodological quality. Methods used in studies with good methodological quality were assessed for reliability, expressed as the magnitude of the measurement error and the correlation coefficient between repeated measurements. RESULTS: In total, 47 studies were included, describing 4 methods: 2D x-ray cephalometry; 2D photography; anthropometry; and 3D imaging techniques (surface laser scanning, stereophotogrammetry and cone beam computed tomography). In general the measurement error was below 1 mm and 1°, and correlation coefficients ranged from 0.65 to 1.0. CONCLUSION: Various methods have been shown to be reliable. However, at present stereophotogrammetry seems to be the best 3D method for quantitative longitudinal assessment of facial dimensions in children until six years of age, due to its millisecond-fast image capture, archival capabilities, high resolution and lack of exposure to ionizing radiation.

  2. Novel dynamic Bayesian networks for facial action element recognition and understanding

    Science.gov (United States)

    Zhao, Wei; Park, Jeong-Seon; Choi, Dong-You; Lee, Sang-Woong

    2011-12-01

    In daily life, language is an important tool of communication between people. Besides language, facial action can also provide a great amount of information. Therefore, facial action recognition has become a popular research topic in the field of human-computer interaction (HCI). However, facial action recognition is quite a challenging task due to its complexity. In a literal sense, there are thousands of facial muscular movements, many of which have very subtle differences. Moreover, muscular movements always occur simultaneously when the pose is changed. To address this problem, we first build a fully automatic facial points detection system based on a local Gabor filter bank and principal component analysis. Then, novel dynamic Bayesian networks are proposed to perform facial action recognition using the junction tree algorithm over a limited number of feature points. In order to evaluate the proposed method, we have used the Korean face database for model training. For testing, we used the CUbiC FacePix, facial expressions and emotion database, Japanese female facial expression database, and our own database. Our experimental results clearly demonstrate the feasibility of the proposed approach.

  3. Feature selection using feature dissimilarity measure and density-based clustering: application to biological data.

    Science.gov (United States)

    Sengupta, Debarka; Aich, Indranil; Bandyopadhyay, Sanghamitra

    2015-10-01

    Reduction of dimensionality has emerged as a routine process in modelling complex biological systems. A large number of feature selection techniques have been reported in the literature to improve model performance in terms of accuracy and speed. In the present article an unsupervised feature selection technique is proposed, using maximum information compression index as the dissimilarity measure and the well-known density-based cluster identification technique DBSCAN for identifying the largest natural group of dissimilar features. The algorithm is fast and less sensitive to the user-supplied parameters. Moreover, the method automatically determines the required number of features and identifies them. We used the proposed method for reducing dimensionality of a number of benchmark data sets of varying sizes. Its performance was also extensively compared with some other well-known feature selection methods. PMID:26564974

  4. Feature selection using feature dissimilarity measure and density-based clustering: Application to biological data

    Indian Academy of Sciences (India)

    Debarka Sengupta; Indranil Aich; Sanghamitra Bandyopadhyay

    2015-10-01

    Reduction of dimensionality has emerged as a routine process in modelling complex biological systems. A large number of feature selection techniques have been reported in the literature to improve model performance in terms of accuracy and speed. In the present article an unsupervised feature selection technique is proposed, using maximum information compression index as the dissimilarity measure and the well-known density-based cluster identification technique DBSCAN for identifying the largest natural group of dissimilar features. The algorithm is fast and less sensitive to the user-supplied parameters. Moreover, the method automatically determines the required number of features and identifies them. We used the proposed method for reducing dimensionality of a number of benchmark data sets of varying sizes. Its performance was also extensively compared with some other well-known feature selection methods.
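
    The maximum information compression index for a feature pair is the smaller eigenvalue of their 2x2 covariance matrix, which has a closed form; DBSCAN is then run over the resulting pairwise dissimilarity matrix (omitted here). A minimal sketch of the measure itself, with illustrative feature vectors:

    ```python
    import math

    def mici(x, y):
        """Maximum information compression index: the smaller eigenvalue of
        the 2x2 covariance matrix of (x, y). It is 0 iff the two features
        are linearly dependent, so larger values mean more dissimilarity."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        vx = sum((a - mx) ** 2 for a in x) / n
        vy = sum((b - my) ** 2 for b in y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
        tr, det = vx + vy, vx * vy - cov * cov
        return (tr - math.sqrt(tr * tr - 4.0 * det)) / 2.0

    x = [1.0, 2.0, 3.0, 4.0]
    print(round(mici(x, [2.0 * v for v in x]), 6))     # -> 0.0 (dependent pair)
    print(round(mici(x, [1.0, -1.0, 1.0, -1.0]), 6))   # -> 0.609612
    ```

    Because MICI vanishes exactly when one feature is a linear function of the other, redundant features cluster tightly under this dissimilarity, which is what the DBSCAN step exploits.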

  5. Age group and gender recognition from human facial images

    OpenAIRE

    Shewaye, Tizita Nesibu

    2013-01-01

    This work presents an automatic human gender and age group recognition system based on human facial images. It makes an extensive experiment with raw pixel intensity features and Discrete Cosine Transform (DCT) coefficient features, with Principal Component Analysis and k-Nearest Neighbor classification, to identify the best recognition approach. The final results show that approaches using DCT coefficients outperform their counterparts, resulting in a 99% correct gender recognition rate and 6...

  6. Mutual information-based feature selection for radiomics

    Science.gov (United States)

    Oubel, Estanislao; Beaumont, Hubert; Iannessi, Antoine

    2016-03-01

    Background The extraction and analysis of image features (radiomics) is a promising field in the precision medicine era, with applications to prognosis, prediction, and quantification of response to treatment. In this work, we present a mutual information-based method for quantifying the reproducibility of features, a necessary step for qualification before their inclusion in big data systems. Materials and Methods Ten patients with Non-Small Cell Lung Cancer (NSCLC) lesions were followed over time (7 time points on average) with Computed Tomography (CT). Five observers segmented lesions using a semi-automatic method, and 27 features describing shape and intensity distribution were extracted. Inter-observer reproducibility was assessed by computing the multi-information (MI) of feature changes over time and the variability of global extrema. Results The highest MI values were obtained for volume-based features (VBF). The lesion mass (M), surface-to-volume ratio (SVR) and volume (V) presented statistically significantly higher values of MI than the rest of the features. Within the same VBF group, SVR also showed the lowest variability of extrema. The correlation coefficient (CC) of feature values was unable to differentiate between features. Conclusions MI allowed three features (M, SVR, and V) to be discriminated from the rest in a statistically significant manner. This result is consistent with the order obtained when sorting features by increasing extrema variability. MI is a promising alternative for selecting features to be considered as surrogate biomarkers in a precision medicine context.
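
    The core quantity can be illustrated with a plug-in mutual information estimate between two observers' discretized feature changes. The sequences below are illustrative; the paper computes multi-information jointly over five observers rather than this pairwise special case:

    ```python
    import math
    from collections import Counter

    def mutual_information(xs, ys):
        """Plug-in mutual information estimate (in bits) for two discrete
        sequences of equal length."""
        n = len(xs)
        px, py = Counter(xs), Counter(ys)
        pxy = Counter(zip(xs, ys))
        mi = 0.0
        for (a, b), c in pxy.items():
            p_ab = c / n
            mi += p_ab * math.log2(p_ab / ((px[a] / n) * (py[b] / n)))
        return mi

    # Two observers reporting identical feature changes: MI reaches the
    # full entropy of the signal (1 bit here).
    obs1 = ['up', 'up', 'down', 'down', 'up', 'down']
    obs2 = list(obs1)
    print(mutual_information(obs1, obs2))   # -> 1.0
    # A weakly related sequence yields a much lower value.
    rand = ['up', 'down', 'up', 'down', 'up', 'down']
    print(mutual_information(obs1, rand))
    ```

    High MI across observers thus flags a feature whose longitudinal behaviour is reproducible regardless of who performed the segmentation.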

  7. Until they have faces: the ethics of facial allograft transplantation

    OpenAIRE

    Agich, G; Siemionow, M

    2005-01-01

    The ethical discussion of facial allograft transplantation (FAT) for severe facial deformity, popularly known as facial transplantation, has been one-sided and sensationalistic. It is based on film and fiction rather than science and clinical experience. Based on our experience in developing the first IRB-approved protocol for FAT, we critically discuss the problems with this discussion, which overlooks the plight of individuals with severe facial deformities. We discuss why FAT for facial de...

  8. Individual discriminative face recognition models based on subsets of features

    DEFF Research Database (Denmark)

    Clemmensen, Line Katrine Harder; Gomez, David Delgado; Ersbøll, Bjarne Kjær

    2007-01-01

    face recognition problem. The elastic net model is able to select a subset of features with low computational effort compared to other state-of-the-art feature selection methods. Furthermore, the fact that the number of features usually is larger than the number of images in the database makes feature selection techniques such as forward selection or lasso regression inadequate. In the experimental section, the performance of the elastic net model is compared with geometrical and color-based algorithms widely used in face recognition, such as Procrustes nearest neighbor, Eigenfaces, or Fisher...

  9. A novel facial expression recognition method based on semantic knowledge of analytical hierarchy process

    Institute of Scientific and Technical Information of China (English)

    胡步发; 黄银成; 陈炳兴

    2011-01-01

    At present, there are intrinsic differences between machine recognition of facial expressions and human perception, which affect the precision of facial expression recognition. In order to reduce the semantic gap between the low-level visual features of face images and their high-level semantics, a novel facial expression recognition method based on semantic knowledge of the analytical hierarchy process (AHP) is presented. The analytical hierarchy process is adopted to describe the high-level semantics of the face images in the training set, which are then used to establish semantic feature vectors. In the low-level visual feature extraction stage, a second-order PCA (principal component analysis) method is proposed to extract the texture features of face images. In the recognition stage, only the low-level visual features of the input face image are used, and a k-nearest neighbor method combined with the semantic features from the training stage classifies the facial expressions. The proposed method combines low-level visual features with high-level semantic features, reducing the semantic gap between them. Experiments are conducted on the Japanese Female Facial Expression (JAFFE) database and an overall recognition rate of 93.92% is achieved. Theoretical analysis and experimental results both show that the proposed method has a higher recognition rate.

  10. Cholesteatoma causing facial paralysis

    Directory of Open Access Journals (Sweden)

    José Ricardo Gurgel Testa

    2003-10-01

    blood supply or production of neurotoxic substances secreted from either the cholesteatoma matrix or bacteria enclosed in the tumor. AIM: To evaluate the incidence, clinical features and treatment of facial palsy due to cholesteatoma. STUDY DESIGN: Retrospective clinical study. MATERIAL AND METHOD: Retrospective study of 10 cases of facial paralysis due to cholesteatoma, selected through a survey of 206 decompressions of the facial nerve for various aetiologies performed over the last 10 years at UNIFESP-EPM. RESULTS: The incidence of facial paralysis due to cholesteatoma in this study was 4.85%, with female predominance (60%). The average age of the patients was 39 years. The duration and severity of the facial palsy, together with the extension of the lesion, were important for the functional recovery of the facial nerve. CONCLUSION: An early surgical approach is necessary in these cases to restore nerve function adequately. When disruption or intense fibrous replacement of the facial nerve occurs, nerve grafting (greater auricular/sural nerves) and/or hypoglossal-facial anastomosis may be suggested.

  11. Movie Review Classification and Feature based Summarization of Movie Reviews

    Directory of Open Access Journals (Sweden)

    Sabeeha Mohammed Basheer, Syed Farook

    2013-07-01

    Full Text Available Sentiment classification and feature based summarization are essential steps in the classification and summarization of movie reviews. The movie review classification is based on sentiment classification, and condensed descriptions of movie reviews are generated by the feature based summarization. Experiments are conducted to identify the best machine learning based sentiment classification approach. Latent Semantic Analysis and Latent Dirichlet Allocation were compared to identify features, which in turn affect the summary size. The focus of the system design is on classification accuracy and system response time.

  12. Feature based and feature free textual CBR: a comparison in spam filtering

    OpenAIRE

    Delany, Sarah Jane; Bridge, Derek

    2006-01-01

    Spam filtering is a text classification task to which Case-Based Reasoning (CBR) has been successfully applied. We describe the ECUE system, which classifies emails using a feature-based form of textual CBR. Then, we describe an alternative way to compute the distances between cases in a feature-free fashion, using a distance measure based on text compression. This distance measure has the advantages of having no set-up costs and being resilient to concept drift. We report an empirical compar...

  13. Facial Expression Classification Based on Multi Artificial Neural Network and Two Dimensional Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Le Hoang Thai

    2011-05-01

    Full Text Available Facial expression classification is a kind of image classification and it has received much attention in recent years. There are many approaches to this problem that aim to increase classification efficiency. One well-known approach proceeds as follows: first, project the image into different spaces; second, classify the image into a responsive class in each of these spaces; and last, combine the above classification results into the final result. The advantage of this approach is that it reflects the full and multiform nature of the classified image. In this paper, we use 2D-PCA and its variants to project the pattern or image into different spaces with different grouping strategies. Then we develop a model which combines many neural networks for the last step. This model evaluates the reliability of each space and gives the final classification conclusion. Our model links many neural networks together, so we call it Multi Artificial Neural Network (MANN). We apply our proposed model to the 6 basic facial expressions on the JAFFE database, consisting of 213 images posed by 10 Japanese female models.
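
The 2D-PCA projection step used above can be sketched in a few lines (a minimal illustration on random matrices standing in for face images; the image size and number of retained axes are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
images = rng.standard_normal((10, 32, 32))   # 10 toy "face" images

# 2D-PCA works on image matrices directly, without flattening to vectors
mean_img = images.mean(axis=0)
centered = images - mean_img
# image scatter matrix: the average of A^T A over the centered images
G = np.mean([a.T @ a for a in centered], axis=0)

eigvals, eigvecs = np.linalg.eigh(G)         # ascending eigenvalues
d = 5
X = eigvecs[:, ::-1][:, :d]                  # top-d eigenvectors as projection axes
features = np.stack([a @ X for a in images]) # each image -> a 32 x 5 feature matrix
```

Each image is reduced to a small feature matrix rather than a long vector, which is what makes 2D-PCA cheaper than classical PCA on flattened images.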

  14. A spiking neural network based cortex-like mechanism and application to facial expression recognition.

    Science.gov (United States)

    Fu, Si-Yao; Yang, Guo-Sheng; Kuai, Xin-Kai

    2012-01-01

    In this paper, we present a quantitative, highly structured cortex-simulated model, which can be simply described as feedforward, hierarchical simulation of ventral stream of visual cortex using biologically plausible, computationally convenient spiking neural network system. The motivation comes directly from recent pioneering works on detailed functional decomposition analysis of the feedforward pathway of the ventral stream of visual cortex and developments on artificial spiking neural networks (SNNs). By combining the logical structure of the cortical hierarchy and computing power of the spiking neuron model, a practical framework has been presented. As a proof of principle, we demonstrate our system on several facial expression recognition tasks. The proposed cortical-like feedforward hierarchy framework has the merit of capability of dealing with complicated pattern recognition problems, suggesting that, by combining the cognitive models with modern neurocomputational approaches, the neurosystematic approach to the study of cortex-like mechanism has the potential to extend our knowledge of brain mechanisms underlying the cognitive analysis and to advance theoretical models of how we recognize face or, more specifically, perceive other people's facial expression in a rich, dynamic, and complex environment, providing a new starting point for improved models of visual cortex-like mechanism. PMID:23193391

  15. A Spiking Neural Network Based Cortex-Like Mechanism and Application to Facial Expression Recognition

    Directory of Open Access Journals (Sweden)

    Si-Yao Fu

    2012-01-01

    Full Text Available In this paper, we present a quantitative, highly structured cortex-simulated model, which can be simply described as feedforward, hierarchical simulation of ventral stream of visual cortex using biologically plausible, computationally convenient spiking neural network system. The motivation comes directly from recent pioneering works on detailed functional decomposition analysis of the feedforward pathway of the ventral stream of visual cortex and developments on artificial spiking neural networks (SNNs. By combining the logical structure of the cortical hierarchy and computing power of the spiking neuron model, a practical framework has been presented. As a proof of principle, we demonstrate our system on several facial expression recognition tasks. The proposed cortical-like feedforward hierarchy framework has the merit of capability of dealing with complicated pattern recognition problems, suggesting that, by combining the cognitive models with modern neurocomputational approaches, the neurosystematic approach to the study of cortex-like mechanism has the potential to extend our knowledge of brain mechanisms underlying the cognitive analysis and to advance theoretical models of how we recognize face or, more specifically, perceive other people’s facial expression in a rich, dynamic, and complex environment, providing a new starting point for improved models of visual cortex-like mechanism.

  16. Acoustic Event Detection Based on MRMR Selected Feature Vectors

    OpenAIRE

    VOZARIKOVA Eva; Juhar, Jozef; CIZMAR Anton

    2012-01-01

    This paper is focused on the detection of potentially dangerous acoustic events such as gun shots and breaking glass in the urban environment. Various feature extraction methods can be used for representing the sound in a detection system based on Hidden Markov Models of acoustic events. Mel-frequency cepstral coefficients, low-level descriptors defined in the MPEG-7 standard, and other time and spectral features were considered in the system. For the selection of the final subset of features Mi...

  17. Feature Learning based Deep Supervised Hashing with Pairwise Labels

    OpenAIRE

    Li, Wu-Jun; Wang, Sheng; Kang, Wang-Cheng

    2015-01-01

    Recent years have witnessed wide application of hashing for large-scale image retrieval. However, most existing hashing methods are based on hand-crafted features which might not be optimally compatible with the hashing procedure. Recently, deep hashing methods have been proposed to perform simultaneous feature learning and hash-code learning with deep neural networks, which have shown better performance than traditional hashing methods with hand-crafted features. Most of these deep hashing m...

  18. Image Retrieval Based on Content Using Color Feature

    OpenAIRE

    Afifi, Ahmed J.; Wesam M. Ashour

    2012-01-01

    Content-based image retrieval from large resources has become an area of wide interest in many applications. In this paper we present a CBIR system that uses the Ranklet Transform and the color feature as a visual feature to represent the images. The Ranklet Transform is proposed as a preprocessing step to make the image invariant to rotation and to any image enhancement operations. To speed up retrieval, images are clustered according to their features using the k-means clustering algorithm.
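
The color-feature-plus-clustering idea above can be sketched as follows (a toy illustration with random images, a joint RGB histogram as the color feature, and a minimal k-means loop; the bin count and k are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
imgs = rng.integers(0, 256, size=(20, 8, 8, 3))  # 20 tiny random RGB images

def color_histogram(img, bins=4):
    # quantize each channel into `bins` levels and build a joint, normalized histogram
    q = (img // (256 // bins)).reshape(-1, 3)
    idx = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]
    h = np.bincount(idx, minlength=bins ** 3).astype(float)
    return h / h.sum()

feats = np.array([color_histogram(im) for im in imgs])  # 20 x 64 feature matrix

def kmeans(X, k, iters=20, seed=0):
    r = np.random.default_rng(seed)
    centers = X[r.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # assign each image to its nearest center, then recompute centers
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

labels, centers = kmeans(feats, k=3)
```

At query time only the cluster nearest to the query's histogram needs to be searched, which is where the speed-up comes from.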

  19. A Facial Expression Recognition Method Based on Deep Learning

    Institute of Scientific and Technical Information of China (English)

    王剑云; 李小霞

    2015-01-01

    To address the problem that traditional facial expression recognition methods are not robust, we propose an algorithm based on deep learning. First, we train two sparse auto-encoders at two scales, and the parameters of the hidden layer yield a series of convolutional kernels, which we use to extract first-layer features. Then we obtain second-layer features through max-pooling operators, which improves the invariance of the features. Finally, we parallelize seven four-layer neural networks to accomplish the recognition task. The experimental results show that this deep neural network structure performs robustly on the facial expression recognition task when the identity information of the test samples does not appear in the training samples.
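
The small-shift invariance contributed by the pooling step can be illustrated with a short sketch (a minimal example, not the paper's network; the map size and pooling window are assumptions):

```python
import numpy as np

def max_pool(fmap, size=2):
    """Non-overlapping max-pooling over size x size windows."""
    h, w = fmap.shape
    h2, w2 = h // size, w // size
    return fmap[:h2 * size, :w2 * size].reshape(h2, size, w2, size).max(axis=(1, 3))

fmap = np.zeros((8, 8))
fmap[2, 2] = 1.0                      # a single strong activation
shifted = np.roll(fmap, 1, axis=1)    # the same activation, one pixel to the right

pooled_a = max_pool(fmap)
pooled_b = max_pool(shifted)          # identical: the shift stays inside one window
```

Because both activations fall into the same 2x2 window, the pooled maps are equal, which is exactly the translation tolerance the abstract refers to.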

  20. Feature-based multiresolution techniques for product design

    Institute of Scientific and Technical Information of China (English)

    LEE Sang Hun; LEE Kunwoo

    2006-01-01

    3D computer-aided design (CAD) systems based on the feature-based solid modelling technique have become widespread and are used for product design. However, when part models associated with features are used in various downstream applications, simplified models at various levels of detail (LODs) are frequently more desirable than the full details of the parts. In particular, the need for feature-based multiresolution representation of a solid model, representing an object at multiple LODs in units of features, is increasing for engineering tasks. One challenge is to generate valid models at various LODs after an arbitrary rearrangement of features using a certain LOD criterion, because composite Boolean operations consisting of union and subtraction are not commutative. The other challenges are to devise a proper topological framework for multiresolution representation, to suggest more reasonable LOD criteria, and to extend applications. This paper surveys the recent research on these issues.

  1. Facial Expression Recognition Algorithm Based on Gabor Wavelet Automatic Segmentation and SVM

    Institute of Scientific and Technical Information of China (English)

    陈亚雄

    2011-01-01

    A facial expression recognition algorithm based on Gabor wavelets and SVM is proposed for static images containing expression information. Mathematical morphology combined with integral projection is adopted to locate the eyebrow and eye regions, and the mean value computed within a template is employed to locate the mouth region, so that the expression sub-regions are segmented automatically. Features of the segmented expression sub-regions are extracted by the Gabor wavelet transform, and effective Gabor expression features are then selected by Fisher linear discriminant (FLD) analysis to reduce the dimensionality and redundancy of the features. The features are sent to a support vector machine (SVM) to classify the different expressions. The algorithm was tested on the Japanese female facial expression database and achieved high recognition accuracy, verifying its feasibility.
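
A Gabor filter of the kind used in the feature extraction step can be generated directly (a minimal sketch; the kernel size, wavelength, and orientation count are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def gabor_kernel(ksize=15, sigma=3.0, theta=0.0, lam=6.0, gamma=0.5):
    """Real part of a 2-D Gabor filter with orientation theta and wavelength lam."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates by theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    gaussian = np.exp(-(xr ** 2 + gamma ** 2 * yr ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * xr / lam)        # sinusoidal carrier along xr
    return gaussian * carrier

# a small filter bank over 4 orientations, as is typical for expression features
bank = [gabor_kernel(theta=t) for t in np.linspace(0, np.pi, 4, endpoint=False)]
```

Convolving a face region with each kernel in the bank yields orientation-selective texture responses; the FLD step in the abstract then compresses those responses.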

  2. Facial expression recognition based on lifting wavelet and FLD

    Institute of Scientific and Technical Information of China (English)

    董玉龙; 姜威

    2012-01-01

    A new facial expression feature extraction method based on the lifting wavelet and FLD is presented. The lifting wavelet transform operates entirely in the time-space domain and has multi-resolution characteristics, so it is advantageous for extracting the detail features of an image; it is also fast and easy to implement. The results show that the whole feature, made up of the low-frequency and high-frequency components together, contains the main expression features of the expression image. The Fisher linear discriminant (FLD) is then used to extract features from the lifting-wavelet-processed images, and the k-nearest neighbor method is used for classification. Experiments on the JAFFE database show a recognition rate of 94.3% with a recognition time of only 2.9 s, proving the method to be fast and effective.
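
The FLD-plus-k-NN pipeline can be sketched for two classes (a toy illustration with synthetic feature vectors standing in for wavelet coefficients; the dimensions and class means are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
# two toy classes of 4-D "wavelet" feature vectors with separated means
c0 = rng.standard_normal((30, 4)) + np.array([2.0, 0, 0, 0])
c1 = rng.standard_normal((30, 4)) + np.array([-2.0, 0, 0, 0])

m0, m1 = c0.mean(axis=0), c1.mean(axis=0)
# within-class scatter (unnormalized covariance of each class, summed)
Sw = np.cov(c0.T) * (len(c0) - 1) + np.cov(c1.T) * (len(c1) - 1)
w = np.linalg.solve(Sw, m0 - m1)    # Fisher discriminant direction
w /= np.linalg.norm(w)

X = np.vstack([c0, c1]) @ w         # 1-D discriminant features
y = np.array([0] * 30 + [1] * 30)

def knn_predict(train_x, train_y, x, k=3):
    # majority vote among the k nearest projected training samples
    idx = np.argsort(np.abs(train_x - x))[:k]
    return np.bincount(train_y[idx]).argmax()

pred = knn_predict(X, y, c0[0] @ w)
```

FLD reduces each feature vector to a single discriminant coordinate, after which k-NN classification is a simple one-dimensional nearest-neighbor vote.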

  3. Feature-based attention enhances performance by increasing response gain.

    Science.gov (United States)

    Herrmann, Katrin; Heeger, David J; Carrasco, Marisa

    2012-12-01

    Covert spatial attention can increase contrast sensitivity either by changes in contrast gain or by changes in response gain, depending on the size of the attention field and the size of the stimulus (Herrmann et al., 2010), as predicted by the normalization model of attention (Reynolds & Heeger, 2009). For feature-based attention, unlike spatial attention, the model predicts only changes in response gain, regardless of whether the featural extent of the attention field is small or large. To test this prediction, we measured the contrast dependence of feature-based attention. Observers performed an orientation-discrimination task on a spatial array of grating patches. The spatial locations of the gratings were varied randomly so that observers could not attend to specific locations. Feature-based attention was manipulated with a 75% valid and 25% invalid pre-cue, and the featural extent of the attention field was manipulated by introducing uncertainty about the upcoming grating orientation. Performance accuracy was better for valid than for invalid pre-cues, consistent with a change in response gain, when the featural extent of the attention field was small (low uncertainty) or when it was large (high uncertainty) relative to the featural extent of the stimulus. These results for feature-based attention clearly differ from results of analogous experiments with spatial attention, yet both support key predictions of the normalization model of attention. PMID:22580017

  4. Gender differences in the neural network of facial mimicry of smiles – An rTMS study

    OpenAIRE

    Korb, Sebastian; Malsert, Jennifer; Rochas, Vincent; Rihs, Tonia; Rieger, Sebastian Walter; Schwab, Samir; Niedenthal, Paula M.; Grandjean, Didier Maurice

    2015-01-01

    Under theories of embodied emotion, exposure to a facial expression triggers facial mimicry. Facial feedback is then used to recognize and judge the perceived expression. However, the neural bases of facial mimicry and of the use of facial feedback remain poorly understood. Furthermore, gender differences in facial mimicry and emotion recognition suggest that different neural substrates might accompany the production of facial mimicry, and the processing of facial feedback, in men and women. ...

  5. Facial Recognition Technology: An analysis with scope in India

    CERN Document Server

    Thorat, S B; Dandale, Jyoti P

    2010-01-01

    A facial recognition system is a computer application for automatically identifying or verifying a person from a digital image or a video frame from a video source. One way to do this is by comparing selected facial features from the image with a facial database. It is typically used in security systems and can be compared to other biometrics such as fingerprint or eye iris recognition systems. In this paper we focus on 3-D facial recognition systems and biometric facial recognition systems. We critique facial recognition systems, discussing their effectiveness and weaknesses. This paper also introduces the scope of recognition systems in India.

  6. Facial Recognition Technology: An analysis with scope in India

    Directory of Open Access Journals (Sweden)

    S.B.Thorat

    2010-04-01

    Full Text Available A facial recognition system is a computer application for automatically identifying or verifying a person from a digital image or a video frame from a video source. One way to do this is by comparing selected facial features from the image with a facial database. It is typically used in security systems and can be compared to other biometrics such as fingerprint or eye iris recognition systems. In this paper we focus on 3-D facial recognition systems and biometric facial recognition systems. We critique facial recognition systems, discussing their effectiveness and weaknesses. This paper also introduces the scope of recognition systems in India.

  7. Facial Expression Recognition in Nonvisual Imagery

    Science.gov (United States)

    Olague, Gustavo; Hammoud, Riad; Trujillo, Leonardo; Hernández, Benjamín; Romero, Eva

    This chapter presents two novel approaches that allow computer vision applications to perform human facial expression recognition (FER). From a problem standpoint, we focus on FER beyond the human visual spectrum, in long-wave infrared imagery, thus allowing us to offer illumination-independent solutions to this important human-computer interaction problem. From a methodological standpoint, we introduce two different feature extraction techniques: a principal component analysis-based approach with automatic feature selection and one based on texture information selected by an evolutionary algorithm. In the former, facial features are selected based on interest point clusters, and classification is carried out using eigenfeature information; in the latter, an evolutionary-based learning algorithm searches for optimal regions of interest and texture features based on classification accuracy. Both of these approaches use a support vector machine-committee for classification. Results show effective performance for both techniques, from which we can conclude that thermal imagery contains worthwhile information for the FER problem beyond the human visual spectrum.

  8. Multi-features Based Approach for Moving Shadow Detection

    Institute of Scientific and Technical Information of China (English)

    ZHOU Ning; ZHOU Man-li; XU Yi-ping; FANG Bao-hong

    2004-01-01

    In video-based surveillance applications, moving shadows can affect the correct localization and detection of moving objects. This paper presents a method for shadow detection and suppression used for moving visual object detection. The major novelty of the shadow suppression is the integration of several features, including a photometric invariant color feature, a motion edge feature, and a spatial feature. By modifying the processing of falsely detected shadows, the average detection rate of moving objects reaches above 90% in tests on the Hall-Monitor sequence.

  9. Features of underwater echo extraction based on signal sparse decomposition

    Institute of Scientific and Technical Information of China (English)

    YANG Bo; BU Yinyong; ZHAO Haiming

    2012-01-01

    In order to better realize sound echo recognition of underwater materials with heavily uneven surfaces, a feature extraction method based on the theory of signal sparse decomposition is proposed. Instead of the common time-frequency dictionary, sets of training echo samples are used directly as the dictionary to realize echo sparse decomposition under L1 optimization and to extract a kind of energy feature of the echo. Experiments on three kinds of bottom materials, including cobalt crust, show that the Fisher distribution of this method is superior to that of edge features and of Singular Value Decomposition (SVD) features in the wavelet domain, meaning that a much better classification result for underwater bottom materials can be obtained with the proposed energy features than with the other two. It is concluded that using echo samples as a dictionary is feasible and that the class information of the echo introduced by this dictionary helps to obtain better echo features.
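
Sparse decomposition over a dictionary of raw training samples can be sketched with a simple iterative soft-thresholding (ISTA) loop for the L1 problem (a toy illustration; the dictionary here is random rather than real echo data, and the penalty weight is an assumption):

```python
import numpy as np

rng = np.random.default_rng(4)
# dictionary: 12 "training echo samples" of length 64, used directly as atoms
D = rng.standard_normal((64, 12))
D /= np.linalg.norm(D, axis=0)
x_true = np.zeros(12)
x_true[[2, 7]] = [1.5, -0.8]
s = D @ x_true                       # synthetic echo built from two atoms

# ISTA for min_x ||D x - s||^2 / 2 + lam * ||x||_1
lam = 0.01
L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the gradient
x = np.zeros(12)
for _ in range(500):
    g = x - (D.T @ (D @ x - s)) / L  # gradient step
    x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold

energy = x ** 2                      # per-atom energy feature of the echo
```

The per-atom energies concentrate on the dictionary entries that actually generated the echo, which is the class-bearing information the abstract exploits.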

  10. Content Based Image Retrieval by Multi Features using Image Blocks

    Directory of Open Access Journals (Sweden)

    Arpita Mathur

    2013-12-01

    Full Text Available Content based image retrieval (CBIR) is an effective method of retrieving images from large image resources. CBIR is a technique in which images are indexed by extracting their low level features, like color, texture, shape, and spatial location. Effective and efficient feature extraction mechanisms are required to improve existing CBIR performance. This paper presents a novel approach to a CBIR system in which higher retrieval efficiency is achieved by combining information from the image features color, shape and texture. The color feature is extracted using color histograms for image blocks, the Canny edge detection algorithm is used for the shape feature, and block-wise HSB extraction is used for texture feature extraction. The feature set of the query image is compared with the feature set of each image in the database. The experiments show that fusing multiple features gives better retrieval results than the approach used by Rao et al. This paper presents a comparative study of the performance of the two different approaches to a CBIR system in which the image features color, shape and texture are used.

  11. Opinion mining feature-level using Naive Bayes and feature extraction based analysis dependencies

    Science.gov (United States)

    Sanda, Regi; Baizal, Z. K. Abdurahman; Nhita, Fhira

    2015-12-01

    Development of the internet and technology has had a major impact and provided a new kind of business called e-commerce. Many e-commerce sites provide convenience in transactions, and consumers can also provide reviews or opinions on products they have purchased. These opinions can be used by both consumers and producers: consumers to learn the advantages and disadvantages of particular features of a product, and producers to analyse their own strengths and weaknesses as well as those of competitors' products. With so many opinions, a method is needed by which the reader can grasp the point of the opinions as a whole. The idea emerged from review summarization, which summarizes the overall opinion based on the sentiments and features it contains. In this study, the domain that is the main focus is digital cameras. This research consisted of four steps: 1) giving the system the knowledge to recognize the semantic orientation of an opinion; 2) identifying the features of the product; 3) identifying whether an opinion is positive or negative; 4) summarizing the result. This research discusses methods such as Naïve Bayes for sentiment classification, a feature extraction algorithm based on dependency analysis, which is one of the tools in Natural Language Processing (NLP), and a knowledge-based dictionary, which is useful for handling implicit features. The end result of the research is a summary that contains a set of consumer reviews organized by feature and sentiment. With the proposed method, the accuracy of sentiment classification is 81.2% for positive test data and 80.2% for negative test data, and the accuracy of feature extraction reaches 90.3%.
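
The Naive Bayes sentiment step can be sketched from scratch (a toy illustration with a hypothetical four-review corpus; the reviews, vocabulary, and Laplace smoothing choice are assumptions, not the paper's data):

```python
import numpy as np

# tiny hypothetical camera-review corpus, labelled 1 = positive, 0 = negative
docs = ["great zoom sharp lens", "love the battery great photos",
        "poor battery blurry photos", "terrible lens poor zoom"]
labels = np.array([1, 1, 0, 0])

vocab = sorted({w for d in docs for w in d.split()})

def bow(d):
    # bag-of-words count vector over the fixed vocabulary
    v = np.zeros(len(vocab))
    for w in d.split():
        if w in vocab:              # ignore out-of-vocabulary words
            v[vocab.index(w)] += 1
    return v

X = np.array([bow(d) for d in docs])

# multinomial Naive Bayes with Laplace (add-one) smoothing
priors = np.array([(labels == c).mean() for c in (0, 1)])
cond = np.array([(X[labels == c].sum(0) + 1) / (X[labels == c].sum() + len(vocab))
                 for c in (0, 1)])

def predict(d):
    v = bow(d)
    logp = np.log(priors) + (v * np.log(cond)).sum(axis=1)
    return int(np.argmax(logp))
```

For example, `predict("great battery")` classifies as positive because "great" occurs only in positive training reviews, outweighing the shared word "battery".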

  12. Facial Erythema of Rosacea - Aetiology, Different Pathophysiologies and Treatment Options.

    Science.gov (United States)

    Steinhoff, Martin; Schmelz, Martin; Schauber, Jürgen

    2016-06-15

    Rosacea is a common chronic skin condition that displays a broad diversity of clinical manifestations. Although the pathophysiological mechanisms of the four subtypes are not completely elucidated, the key elements often present are augmented immune responses of the innate and adaptive immune system, and neurovascular dysregulation. The most common primary feature of all cutaneous subtypes of rosacea is transient or persistent facial erythema. Perilesional erythema of papules or pustules is based on the sustained vasodilation and plasma extravasation induced by the inflammatory infiltrates. In contrast, transient erythema has rapid kinetics induced by trigger factors independent of papules or pustules. Amongst the current treatments for facial erythema of rosacea, only the selective α2-adrenergic receptor agonist brimonidine 0.33% topical gel (Mirvaso®) is approved. This review aims to discuss the potential causes, different pathophysiologies and current treatment options to address the unmet medical needs of patients with facial erythema of rosacea. PMID:26714888

  13. Feature selection gait-based gender classification under different circumstances

    Science.gov (United States)

    Sabir, Azhin; Al-Jawad, Naseer; Jassim, Sabah

    2014-05-01

    This paper proposes gender classification based on human gait features and investigates the problem of two variations, clothing (wearing coats) and carrying a bag, in addition to the normal gait sequence. The feature vectors in the proposed system are constructed after applying the wavelet transform. Three different sets of features are proposed in this method. The first, spatio-temporal distances, deals with the distances between different parts of the human body (such as the feet, knees, hands, height and shoulders) during one gait cycle. The second and third feature sets are constructed from the approximation and non-approximation coefficients of the human body, respectively. To extract these two feature sets, we divided the human body into upper and lower parts based on the golden ratio proportion. In this paper, we have adopted a statistical method for constructing the feature vector from the above sets. The dimension of the constructed feature vector is reduced using the Fisher score as a feature selection method to optimize its discriminating significance. Finally, k-nearest neighbor is applied as the classification method. Experimental results demonstrate that our approach provides a more realistic scenario and relatively better performance compared with existing approaches.
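
The Fisher-score selection step can be sketched as follows (a toy illustration on synthetic "gait" features; the matrix size and the injected class separation are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
# toy gait feature matrix: 40 samples x 6 features, 2 classes (gender)
y = np.array([0] * 20 + [1] * 20)
X = rng.standard_normal((40, 6))
X[y == 1, 0] += 3.0                  # make feature 0 strongly discriminative

def fisher_score(X, y):
    # ratio of between-class scatter to within-class scatter, per feature
    classes = np.unique(y)
    mu = X.mean(axis=0)
    scores = []
    for j in range(X.shape[1]):
        num = sum((y == c).sum() * (X[y == c, j].mean() - mu[j]) ** 2 for c in classes)
        den = sum((y == c).sum() * X[y == c, j].var() for c in classes)
        scores.append(num / den)
    return np.array(scores)

scores = fisher_score(X, y)
top2 = np.argsort(scores)[::-1][:2]  # keep the two most discriminative features
```

Features whose class means differ most relative to their within-class spread get the highest scores, so only those survive into the k-NN stage.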

  14. An Effective Combined Feature For Web Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    H.M.R.B Herath

    2015-08-01

    Full Text Available Technology advances, the emergence of large scale multimedia applications and the revolution of the World Wide Web have changed the world into a digital age. Anybody can use their mobile phone to take a photo at any time anywhere and upload that image to ever growing image databases. Development of effective techniques for visual and multimedia retrieval systems is one of the most challenging and important directions of future research. This paper proposes an effective combined feature for web based image retrieval. Frequently used colour and texture features are explored in order to develop a combined feature for this purpose. Three widely used colour features (colour moments, colour coherence vector and colour correlogram) and three texture features (grey level co-occurrence matrix, Tamura features and Gabor filter) were analyzed for their performance. Precision and recall were used to evaluate the performance of each of these techniques. By comparing precision and recall values, the methods that performed best were taken and combined to form a hybrid feature. The developed combined feature was evaluated by developing a web based CBIR system. A web crawler was used to first crawl through web sites; images found in those sites were downloaded, and the combined feature representation technique was used to extract image features. The test results indicated that this web system can be used to index web images with the combined feature representation schema and to find similar images. Random image retrievals using the web system show that the combined feature can be used to retrieve images belonging to the general image domain. Accuracy of the retrieval is notably high for natural images like outdoor scenes, images of flowers, etc. Also, images which have a similar colour and texture distribution were retrieved as similar even though the images belonged to different semantic categories. This can be ideal for an artist who wants
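
The precision and recall evaluation used above reduces to a few lines per query (a minimal sketch over hypothetical image-id sets):

```python
def precision_recall(retrieved, relevant):
    """Precision and recall for one query, over sets of image ids."""
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# hypothetical query: 4 images retrieved, 3 images actually relevant
p, r = precision_recall({1, 2, 3, 4}, {2, 4, 9})
```

Here 2 of the 4 retrieved images are relevant (precision 0.5) and 2 of the 3 relevant images were found (recall 2/3); comparing these values per feature is how the best-performing features were chosen for the hybrid.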

  15. Deep Feature-based Face Detection on Mobile Devices

    OpenAIRE

    Sarkar, Sayantan; Patel, Vishal M.; Chellappa, Rama

    2016-01-01

    We propose a deep feature-based face detector for mobile devices to detect user's face acquired by the front facing camera. The proposed method is able to detect faces in images containing extreme pose and illumination variations as well as partial faces. The main challenge in developing deep feature-based algorithms for mobile devices is the constrained nature of the mobile platform and the non-availability of CUDA enabled GPUs on such devices. Our implementation takes into account the speci...

  16. Facial myokymia as a presenting symptom of vestibular schwannoma.

    Directory of Open Access Journals (Sweden)

    Joseph B

    2002-07-01

    Full Text Available Facial myokymia is a rare presenting feature of a vestibular schwannoma. We present a 48-year-old woman with a large right vestibular schwannoma who presented with facial myokymia. It is postulated that facial myokymia might be due to a defect in the motor axons of the 7th nerve or due to brainstem compression by the tumor.

  17. Facial Scar Revision: Understanding Facial Scar Treatment

    Science.gov (United States)


  18. Sympathicotomy for isolated facial blushing

    DEFF Research Database (Denmark)

    Licht, Peter Bjørn; Pilegaard, Hans K; Ladegaard, Lars

    2012-01-01

    Background. Facial blushing is one of the most peculiar of human expressions. The pathophysiology is unclear, and the prevalence is unknown. Thoracoscopic sympathectomy may cure the symptom and is increasingly used in patients with isolated facial blushing. The evidence base for the optimal level of targeting the sympathetic chain is limited to retrospective case studies. We present a randomized clinical trial. Methods. 100 patients were randomized (web-based, single-blinded) to rib-oriented (R2 or R2-R3) sympathicotomy for isolated facial blushing at two university hospitals during a 6-year … between R2 and R2-R3 sympathicotomy for isolated facial blushing. Both were effective, and QOL increased significantly. Despite very frequent side effects, the vast majority of patients were satisfied. Surprisingly, many patients experienced mild recurrent symptoms within the first year; this should

  19. Facial Actions Tracking for Expression Cloning Based on Mixture Edge Appearance Models

    Institute of Scientific and Technical Information of China (English)

    刘正宏; 汪晓妍

    2011-01-01

    Facial feature tracking plays an important role in interactive entertainment, such as expression cloning. Online-learning methods such as online appearance models have achieved good results in tracking, as they have a strong ability to adapt to variations. However, most previous works use only raw intensity to build observation models, which is very sensitive to illumination and expression changes. In this paper, a real-time, fully automatic facial feature tracking approach using a local-structure-based mixture observation model is presented. A 3D parameterized model is used to model the face and facial actions, and a weak perspective projection method is used to model head pose. WSF mixture appearance models are built from shape-free patches based on non-linear normalized edge strength. Experimental results demonstrate that edge-strength measures in observation modeling and adaptive mixture learning improve the accuracy and robustness of tracking.

  20. Automatic Facial Measurements for Quantitative Analysis of Rhinoplasty

    Directory of Open Access Journals (Sweden)

    Mousa Shamsi

    2007-08-01

    Full Text Available Proposing automated algorithms for quantitative analysis of facial images based on facial features may assist surgeons in validating the success of nose surgery in an objective and reproducible manner. In this paper, we develop automatic procedures for quantitative analysis of the rhinoplasty operation based on several standard linear and spatial features. The main processing steps include image enhancement, correction of varying illumination effects, automatic facial skin detection, automatic feature extraction, facial measurements and surgery analysis. For quantitative analysis of nose surgery, we randomly selected 100 patients from the database provided by the ENT division of Imam Hospital, Tehran, Iran. The frontal and profile images of these patients before and after rhinoplasty were available for experiments. For statistical analysis, two clinical parameters, i.e., the nasolabial angle and the nasal projection ratio, are computed. The mean and standard deviation of the nasolabial angle by manual measurement of a specialist were 95.98° (±9.58°) and 111.02° (±10.07°) before and after nose surgery, respectively. The proposed algorithm automatically computed this parameter as 94.12° (±8.86°) and 109.65° (±8.86°) before and after nose surgery. In addition, the proposed algorithm automatically computed the nasal projection by Goode's method as 0.584 (±0.0491) and 0.537 (±0.066) before and after nose surgery, respectively. Meanwhile, this parameter was manually measured by a specialist as 0.576 (±0.052) and 0.537 (±0.077) before and after nose surgery, respectively. The results of the proposed facial skin segmentation and feature detection algorithms, and the estimated values for the above two clinical parameters on the mentioned dataset, indicate that the techniques are applicable in the common clinical practice of nose surgery.

  1. Spatiotemporal Features for Asynchronous Event-based Data

    Directory of Open Access Journals (Sweden)

    Xavier eLagorce

    2015-02-01

    Full Text Available Bio-inspired asynchronous event-based vision sensors are currently introducing a paradigm shift in visual information processing. These new sensors rely on a stimulus-driven principle of light acquisition similar to biological retinas. They are event-driven and fully asynchronous, thereby reducing redundancy and encoding exact times of input signal changes, leading to a very precise temporal resolution. Approaches for higher-level computer vision often rely on the reliable detection of features in visual frames, but similar definitions of features for the novel dynamic and event-based visual input representation of silicon retinas have so far been lacking. This article addresses the problem of learning and recognizing features for event-based vision sensors, which capture properties of truly spatiotemporal volumes of sparse visual event information. A novel computational architecture for learning and encoding spatiotemporal features is introduced based on a set of predictive recurrent reservoir networks, competing via winner-take-all selection. Features are learned in an unsupervised manner from real-world input recorded with event-based vision sensors. It is shown that the networks in the architecture learn distinct and task-specific dynamic visual features, and can predict their trajectories over time.

  2. Does Facial Amimia Impact the Recognition of Facial Emotions? An EMG Study in Parkinson's Disease.

    Science.gov (United States)

    Argaud, Soizic; Delplanque, Sylvain; Houvenaghel, Jean-François; Auffret, Manon; Duprez, Joan; Vérin, Marc; Grandjean, Didier; Sauleau, Paul

    2016-01-01

    According to embodied simulation theory, understanding other people's emotions is fostered by facial mimicry. However, studies assessing the effect of facial mimicry on the recognition of emotion are still controversial. In Parkinson's disease (PD), one of the most distinctive clinical features is facial amimia, a reduction in facial expressiveness, but patients also show emotional disturbances. The present study used the pathological model of PD to examine the role of facial mimicry on emotion recognition by investigating EMG responses in PD patients during a facial emotion recognition task (anger, joy, neutral). Our results evidenced a significant decrease in facial mimicry for joy in PD, essentially linked to the absence of reaction of the zygomaticus major and the orbicularis oculi muscles in response to happy avatars, whereas facial mimicry for expressions of anger was relatively preserved. We also confirmed that PD patients were less accurate in recognizing positive and neutral facial expressions and highlighted a beneficial effect of facial mimicry on the recognition of emotion. We thus provide additional arguments for embodied simulation theory suggesting that facial mimicry is a potential lever for therapeutic actions in PD even if it seems not to be necessarily required in recognizing emotion as such. PMID:27467393

  3. Dynamic Facial Prosthetics for Sufferers of Facial Paralysis

    Directory of Open Access Journals (Sweden)

    Fergal Coulter

    2011-10-01

    Full Text Available Background. This paper discusses the various methods and materials for the fabrication of active artificial facial muscles. The primary use for these will be the reanimation of paralysed or atrophied muscles in sufferers of non-recoverable unilateral facial paralysis. Method. The prosthetic solution described in this paper is based on sensing muscle motion of the contralateral healthy muscles and replicating that motion across a patient's paralysed side of the face, via solid state and thin film actuators. The development of this facial prosthetic device focused on recreating a varying-intensity smile, with emphasis on timing, displacement and the appearance of the wrinkles and folds that commonly appear around the nose and eyes during the expression. An animatronic face was constructed with actuations being made to a silicone representation of the musculature, using multiple shape-memory alloy cascades. Alongside the artificial muscle physical prototype, a facial expression recognition software system was constructed. This forms the basis of an automated calibration and reconfiguration system for the artificial muscles following implantation, so as to suit the implantee's unique physiognomy. Results. An animatronic model face with silicone musculature was designed and built to evaluate the performance of shape-memory alloy artificial muscles, their power control circuitry and software control systems. A dual facial motion sensing system was designed to allow real-time control over the model: a piezoresistive flex sensor to measure physical motion, and a computer vision system to evaluate real-to-artificial muscle performance. Analysis of various facial expressions in real subjects was made, which gives useful data upon which to base the system's parameter limits. Conclusion. The system performed well, and the various strengths and shortcomings of the materials and methods are reviewed and considered for the next research phase, when new polymer-based artificial muscles are constructed.

  4. Korean Facial Emotion Recognition Tasks for Schizophrenia Research

    OpenAIRE

    Bahk, Yong-Chun; Jang, Seon-Keong; Lee, Jee Ye; Choi, Kee-Hong

    2015-01-01

    Objective Despite the fact that facial emotion recognition (FER) tasks using Western faces should be applied with caution to non-Western participants or patients, there are few psychometrically sound and validated FER tasks featuring Easterners' facial expressions of emotion. Thus, we aimed to develop and establish the psychometric properties of the Korean Facial Emotion Identification Task (K-FEIT) and the Korean Facial Emotion Discrimination Task (K-FEDT) for individuals with schizophrenia...

  5. NEURO FUZZY MODEL FOR FACE RECOGNITION WITH CURVELET BASED FEATURE IMAGE

    Directory of Open Access Journals (Sweden)

    SHREEJA R,

    2011-06-01

    Full Text Available A facial recognition system is a computer application for automatically identifying or verifying a person from a digital image or a video frame from a video source. One way to do this is by comparing selected facial features from the image with a facial database. It is typically used in security systems and can be compared to other biometric techniques such as fingerprint or iris recognition systems. Every face has approximately 80 nodal points (the distance between the eyes, the width of the nose, etc.). A basic face recognition system captures the sample, extracts features, compares templates and performs matching. In this paper two methods of face recognition are compared: neural networks and the neuro-fuzzy method. For both, the curvelet transform is used for feature extraction. The feature vector is formed by extracting statistical quantities of the curvelet coefficients. From the statistical results it is concluded that the neuro-fuzzy method is the better technique for face recognition compared to the neural network.

  6. SVM-based glioma grading. Optimization by feature reduction analysis

    International Nuclear Information System (INIS)

    We investigated the predictive power of feature reduction analysis approaches in support vector machine (SVM)-based classification of glioma grade. In 101 untreated glioma patients, three analytic approaches were evaluated to derive an optimal reduction in features: (i) Pearson's correlation coefficients (PCC), (ii) principal component analysis (PCA) and (iii) independent component analysis (ICA). Tumor grading was performed using a previously reported SVM approach including whole-tumor cerebral blood volume (CBV) histograms and patient age. The best classification accuracy was found using PCA at 85% (sensitivity = 89%, specificity = 84%) when reducing the feature vector from 101 (100-bin rCBV histogram + age) to 3 principal components. In comparison, classification accuracy by PCC was 82% (89%, 77%, 2 dimensions) and 79% by ICA (87%, 75%, 9 dimensions). For improved speed (up to 30%) and simplicity, feature reduction by all three methods provided similar classification accuracy to literature values (≈87%) while reducing the number of features by up to 98%. (orig.)
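    As a rough illustration of the feature-reduction step described above (a sketch, not the study's actual pipeline), a plain-numpy PCA can project a 101-dimensional feature vector (a 100-bin histogram plus age) onto its first three principal components; the random matrix below merely stands in for the patients' rCBV data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the study's feature vectors: 101 patients, each with a
# 100-bin rCBV histogram plus age (101 features in total).
X = rng.random((101, 101))

def pca_reduce(X, n_components):
    """Project feature vectors onto their first principal components."""
    Xc = X - X.mean(axis=0)              # center each feature
    # SVD of the centered data: rows of Vt are the principal axes.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T    # per-sample PC scores
    explained = (S**2) / (S**2).sum()    # variance ratio per component
    return scores, explained

scores, explained = pca_reduce(X, 3)
print(scores.shape)   # (101, 3): 101 features reduced to 3 principal components
```

The reduced `scores` matrix is what a downstream SVM would then be trained on.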

  7. EMOTION ANALYSIS OF SONGS BASED ON LYRICAL AND AUDIO FEATURES

    Directory of Open Access Journals (Sweden)

    Adit Jamdar

    2015-05-01

    Full Text Available In this paper, a method is proposed to detect the emotion of a song based on its lyrical and audio features. Lyrical features are generated by segmentation of lyrics during the process of data extraction. ANEW and WordNet knowledge is then incorporated to compute Valence and Arousal values. In addition to this, linguistic association rules are applied to ensure that the issue of ambiguity is properly addressed. Audio features are used to supplement the lyrical ones and include attributes like energy, tempo, and danceability. These features are extracted from The Echo Nest, a widely used music intelligence platform. Construction of training and test sets is done on the basis of social tags extracted from the last.fm website. The classification is done by applying feature weighting and stepwise threshold reduction on the k-Nearest Neighbors algorithm to provide fuzziness in the classification.
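    A minimal sketch of feature-weighted k-NN classification in the spirit of the paper; the (valence, arousal) points, mood labels, and weights below are invented for illustration, and the stepwise threshold reduction step is omitted:

```python
import math
from collections import Counter

def weighted_knn(train, query, k=3, weights=(1.0, 0.5)):
    """k-NN with per-feature weights, as a stand-in for the paper's
    feature-weighted classifier on (valence, arousal) descriptors."""
    def dist(a, b):
        return math.sqrt(sum(w * (x - y) ** 2 for w, x, y in zip(weights, a, b)))
    # Take the k closest labelled points and let them vote.
    neighbours = sorted(train, key=lambda item: dist(item[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Hypothetical (valence, arousal) training points with mood labels.
songs = [((0.9, 0.8), "happy"), ((0.8, 0.6), "happy"),
         ((0.2, 0.3), "sad"),   ((0.1, 0.2), "sad")]
print(weighted_knn(songs, (0.85, 0.7)))   # → happy
```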

  8. Recognition of Facial Expressions using Local Binary Patterns of Important Facial Parts

    OpenAIRE

    Ramchand Hablani; Narendra Chaudhari; Sanjay Tanwani

    2013-01-01

    Facial Expression Recognition is one of the exciting and challenging fields; it has important applications in many areas such as data-driven animation, human computer interaction and robotics. Extracting effective features from the human face is an important step for successful facial expression recognition. In this paper we have evaluated Local Binary Patterns of some important parts of the human face, for person-independent as well as person-dependent facial expression recognition. Extensive exp...
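    A basic LBP operator of the kind evaluated in such work can be sketched in numpy; the 8x8 random patch below stands in for a cropped facial part (eye or mouth region), and the neighbour ordering is one common convention, not necessarily the authors':

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour Local Binary Pattern codes for the interior pixels."""
    c = gray[1:-1, 1:-1]
    # Clockwise neighbour offsets starting at the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = gray[1 + dy:gray.shape[0] - 1 + dy,
                         1 + dx:gray.shape[1] - 1 + dx]
        # Set this bit where the neighbour is >= the centre pixel.
        codes |= ((neighbour >= c).astype(np.uint8) << bit)
    return codes

rng = np.random.default_rng(1)
patch = rng.integers(0, 256, size=(8, 8))         # stand-in facial region
codes = lbp_image(patch)
hist = np.bincount(codes.ravel(), minlength=256)  # 256-bin LBP histogram
print(codes.shape, hist.sum())                    # (6, 6) 36
```

    Concatenating such per-region histograms yields the feature vector a classifier would consume.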

  9. Maximized Posteriori Attributes Selection from Facial Salient Landmarks for Face Recognition

    CERN Document Server

    Gupta, Phalguni; Sing, Jamuna Kanta; Tistarelli, Massimo

    2010-01-01

    This paper presents a robust and dynamic face recognition technique based on the extraction and matching of devised probabilistic graphs drawn on SIFT features related to independent face areas. The face matching strategy is based on matching individual salient facial graph characterized by SIFT features as connected to facial landmarks such as the eyes and the mouth. In order to reduce the face matching errors, the Dempster-Shafer decision theory is applied to fuse the individual matching scores obtained from each pair of salient facial features. The proposed algorithm is evaluated with the ORL and the IITK face databases. The experimental results demonstrate the effectiveness and potential of the proposed face recognition technique also in case of partially occluded faces.
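    The Dempster-Shafer fusion step mentioned above can be sketched in a few lines; the mass values below for the eye and mouth matching scores are hypothetical, and the two-hypothesis frame {match, nonmatch} is an illustrative simplification of the paper's setup:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for mass functions keyed by
    frozenset focal elements; used here to fuse two matching scores."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb        # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: masses cannot be combined")
    # Normalise by the non-conflicting mass.
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

M, N = frozenset({"match"}), frozenset({"nonmatch"})
theta = M | N   # frame of discernment (full uncertainty)
# Hypothetical masses from the eye and mouth graph-matching scores.
m_eyes  = {M: 0.7, N: 0.1, theta: 0.2}
m_mouth = {M: 0.6, N: 0.2, theta: 0.2}
fused = dempster_combine(m_eyes, m_mouth)
print(round(fused[M], 3))   # → 0.85: fused belief in a match
```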

  10. Integrated Low-Rank-Based Discriminative Feature Learning for Recognition.

    Science.gov (United States)

    Zhou, Pan; Lin, Zhouchen; Zhang, Chao

    2016-05-01

    Feature learning plays a central role in pattern recognition. In recent years, many representation-based feature learning methods have been proposed and have achieved great success in many applications. However, these methods perform feature learning and subsequent classification in two separate steps, which may not be optimal for recognition tasks. In this paper, we present a supervised low-rank-based approach for learning discriminative features. By integrating latent low-rank representation (LatLRR) with a ridge regression-based classifier, our approach combines feature learning with classification, so that the regulated classification error is minimized. In this way, the extracted features are more discriminative for the recognition tasks. Our approach benefits from a recent discovery on the closed-form solutions to noiseless LatLRR. When there is noise, a robust Principal Component Analysis (PCA)-based denoising step can be added as preprocessing. When the scale of a problem is large, we utilize a fast randomized algorithm to speed up the computation of robust PCA. Extensive experimental results demonstrate the effectiveness and robustness of our method. PMID:26080387

  11. Facial Expression Recognition Based on Combination of Difference Image and Gabor Wavelet

    Institute of Scientific and Technical Information of China (English)

    丁志起; 赵晖

    2011-01-01

    In this paper we introduce a facial expression feature extraction algorithm that combines difference images and the Gabor wavelet transform, and use support vector machines (SVM) to recognise facial expressions. For a given static grey image containing facial expression information, pre-processing is executed first; the expression sub-regions, including the eyes and the mouth, are cut from the face to obtain their difference images. We then extract Gabor feature vectors of the difference images, employ downsampling to reduce the dimensionality of the feature vectors, and normalise the treated data; finally SVM is used to classify the facial expression. This combined method has been compared with a recognition method that extracts Gabor features only from the expression sub-regions, and the results indicate that the combined method has better recognition performance.
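    To illustrate the kind of Gabor feature extraction described (a sketch, not the authors' implementation), the following builds a small bank of real Gabor filters and summarizes their responses over a stand-in difference image; the kernel size, wavelength, and orientations are all assumptions:

```python
import numpy as np

def gabor_kernel(size=9, sigma=2.0, theta=0.0, lam=4.0, psi=0.0):
    """Real part of a 2-D Gabor filter (Gaussian envelope x cosine carrier)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return (np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
            * np.cos(2 * np.pi * xr / lam + psi))

def gabor_features(img, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Mean/std of the 'valid' Gabor response per orientation."""
    feats = []
    for theta in thetas:
        k = gabor_kernel(theta=theta)
        kh, kw = k.shape
        # Plain valid-mode cross-correlation; small enough to do directly.
        resp = np.array([[np.sum(img[i:i + kh, j:j + kw] * k)
                          for j in range(img.shape[1] - kw + 1)]
                         for i in range(img.shape[0] - kh + 1)])
        feats += [resp.mean(), resp.std()]
    return np.array(feats)

rng = np.random.default_rng(2)
diff_img = rng.random((16, 16))   # stand-in for an eye-region difference image
vec = gabor_features(diff_img)
print(vec.shape)                  # (8,): mean and std for 4 orientations
```

    In the paper's pipeline, such responses would be downsampled and normalised before the SVM stage.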

  12. Linear feature selection in texture analysis - A PLS based method

    DEFF Research Database (Denmark)

    Marques, Joselene; Igel, Christian; Lillholm, Martin;

    2013-01-01

    We present a texture analysis methodology that combines uncommitted machine-learning techniques and partial least squares (PLS) in a fully automatic framework. Our approach introduces a robust PLS-based dimensionality reduction (DR) step to specifically address outliers and high-dimensional feature spaces. Given the limited number of samples, the data were evaluated using Monte Carlo cross validation (CV). The developed DR method demonstrated consistency in selecting a relatively homogeneous set of features across the CV iterations. Per CV group, a median of 19% of the original features was selected and, considering all CV groups, the methods selected 36% of the original features available. The diagnosis evaluation reached a generalization area under the ROC curve of 0.92, which was higher than established cartilage-based markers known to relate to OA diagnosis.
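    The PLS-based feature selection idea can be illustrated with the first PLS weight vector, which for a single response reduces (via the first NIPALS step) to the covariance-like quantity X^T y; the toy data below are constructed so that feature 2 is the informative one, and none of this reproduces the paper's actual framework:

```python
import numpy as np

def pls_feature_ranking(X, y):
    """Rank features by the magnitude of the first PLS weight vector
    (for a single response, the first NIPALS weight is w ∝ X^T y)."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    w = Xc.T @ yc
    w /= np.linalg.norm(w)
    return np.argsort(-np.abs(w)), w

# Deterministic toy design: feature 2 drives the response by construction.
t = np.arange(8.0)
X = np.column_stack([np.cos(t), np.sin(t), t])
y = 2.0 * X[:, 2]
order, w = pls_feature_ranking(X, y)
print(order[0])   # → 2: the informative feature is ranked first
```

    A DR step would then keep only the top-ranked fraction of features before classification.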

  13. Video Retrieval: An Adaptive Novel Feature Based Approach for Movies

    Directory of Open Access Journals (Sweden)

    Viral B. Thakar

    2013-03-01

    Full Text Available Video Retrieval is a field where many techniques and methods have been proposed and have claimed to perform reliably on videos such as broadcasts of news and sports events. As a movie contains a large amount of visual information varying in a random manner, it requires a highly robust algorithm for automatic shot boundary detection as well as retrieval. In this paper, we describe a new adaptive approach for shot boundary detection which is able to detect not only abrupt transitions like hard cuts but also special effects like wipes, fades, and dissolves in different movies. To partition a movie video into shots and retrieve them, many metrics have been constructed to measure the similarity among video frames based on all the available video features. However, too many features will reduce the efficiency of the shot boundary detection. Therefore, it is necessary to perform feature reduction for every decision. For this purpose we follow a minimum-features-based algorithm.

  14. Image feature extraction based multiple ant colonies cooperation

    Science.gov (United States)

    Zhang, Zhilong; Yang, Weiping; Li, Jicheng

    2015-05-01

    This paper presents a novel image feature extraction algorithm based on multiple ant colonies cooperation. First, a low resolution version of the input image is created using a Gaussian pyramid algorithm, and two ant colonies are spread on the source image and the low resolution image respectively. The ant colony on the low resolution image uses phase congruency as its inspiration information, while the ant colony on the source image uses gradient magnitude as its inspiration information. These two ant colonies cooperate to extract salient image features by sharing the same pheromone matrix. After the optimization process, image features are detected by thresholding the pheromone matrix. Since gradient magnitude and phase congruency of the input image are used as inspiration information for the ant colonies, our algorithm shows higher intelligence and is capable of acquiring more complete and meaningful image features than other, simpler edge detectors.

  15. Visible and infrared image registration based on visual salient features

    Science.gov (United States)

    Wu, Feihong; Wang, Bingjian; Yi, Xiang; Li, Min; Hao, Jingya; Qin, Hanlin; Zhou, Huixin

    2015-09-01

    In order to improve the precision of visible and infrared (VIS/IR) image registration, an image registration method based on visual salient (VS) features is presented. First, a VS feature detector based on a modified visual attention model is presented to extract VS points. Because the iterative, within-feature competition method used in visual attention models is time-consuming, an alternative fast visual salient (FVS) feature detector is proposed to make VS features more efficient. Then, a descriptor-rearranging (DR) strategy is adopted to describe feature points. This strategy combines information from both the IR image and its negative image to overcome the contrast reversal problem between VIS and IR images, making it easier to find the corresponding points in VIS/IR image pairs. Experiments show that both VS and FVS detectors have higher repeatability scores than the scale invariant feature transform in the cases of blurring, brightness change, JPEG compression, noise, and viewpoint, except for large scale changes. The combination of the VS detector and the DR registration strategy can achieve precise image registration, but it is time-consuming. The combination of the FVS detector and the DR registration strategy can also reach a good registration of VIS/IR images, but in a shorter time.

  16. The Phase Spectra Based Feature for Robust Speech Recognition

    Directory of Open Access Journals (Sweden)

    Abbasian ALI

    2009-07-01

    Full Text Available Speech recognition in adverse environments is one of the major issues in automatic speech recognition nowadays. Most current speech recognition systems are highly efficient in ideal environments, but their performance degrades severely when they are applied in real environments because of noise-affected speech. In this paper a new feature representation based on phase spectra and Perceptual Linear Prediction (PLP) is suggested, which can be used for robust speech recognition. It is shown that these new features can improve the performance of speech recognition not only in clean conditions but also at various noise levels, when compared to PLP features.
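    A sketch of a phase-spectrum front end of the kind the paper combines with PLP; the synthetic frame, sampling rate, and FFT size below are assumptions, and the PLP side of the pipeline is omitted entirely:

```python
import numpy as np

def phase_spectrum_features(frame, n_fft=64):
    """Unwrapped phase spectrum of one windowed speech frame; a stand-in
    for a phase-based feature front end."""
    windowed = frame * np.hamming(len(frame))  # taper to reduce leakage
    spectrum = np.fft.rfft(windowed, n=n_fft)
    phase = np.unwrap(np.angle(spectrum))      # remove 2*pi discontinuities
    return phase

t = np.arange(64) / 8000.0                 # 8 kHz sampling, hypothetical
frame = np.sin(2 * np.pi * 440.0 * t)      # synthetic 440 Hz "speech" frame
feat = phase_spectrum_features(frame)
print(feat.shape)   # (33,): n_fft // 2 + 1 phase values per frame
```

    Per-frame vectors like this would be stacked across a whole utterance before recognition.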

  17. Development of a manufacturing feature-based design system

    OpenAIRE

    Hoque, A.S.M. Mojahidul

    2010-01-01

    Traditional CAD systems are based on the serial approach of the product development cycle: the design process is not integrated with other activities and thus it can not provide information for subsequent phases of product development. In order to eliminate this problem, many modern CAD systems allow the composition of designs from building blocks of higher level of abstraction called features. Although features used in current systems tend to be named after manufacturing processes, they do n...

  18. Feature Learning for Fingerprint-Based Positioning in Indoor Environment

    OpenAIRE

    Zengwei Zheng; Yuanyi Chen; Tao He; Lin Sun; Dan Chen

    2015-01-01

    Recent years have witnessed a growing interest in using Wi-Fi received signal strength for indoor fingerprint-based positioning. However, previous studies of this problem have primarily faced two main challenges. One is that the positioning fingerprint feature based on received signal strength is unstable due to heterogeneous devices and dynamic environment status, which greatly degrades positioning accuracy. Another is that some improved positioning fingerprint features will suffer the curse of dimensionality...

  19. Feature-based attention enhances performance by increasing response gain

    OpenAIRE

    Herrmann, Katrin; Heeger, David J.; Carrasco, Marisa

    2012-01-01

    Covert spatial attention can increase contrast sensitivity either by changes in contrast gain or by changes in response gain, depending on the size of the attention field and the size of the stimulus (Herrmann, Montaser-Kouhsari, Carrasco, & Heeger, 2010), as predicted by the normalization model of attention (Reynolds & Heeger, 2009). For feature-based attention, unlike spatial attention, the model predicts only changes in response gain, regardless of whether the featural extent of the attent...

  20. Geometry-based SAR curvilinear feature selection for damage detection

    OpenAIRE

    Brett, P.T.B.,; Guida, R.

    2012-01-01

    Bright curvilinear features in Synthetic Aperture Radar (SAR) images arising from the geometry of urban structures have been successfully used for estimating urban earthquake damage, using single pre- and post-event high resolution amplitude SAR images. In this paper, further automation of the process of selecting candidate curvilinear features for change detection is proposed, based on a model selection using priors derived from idealised building geometry. The technique is demonstrated usin...

  1. Prototype Theory Based Feature Representation for PolSAR Images

    OpenAIRE

    Huang Xiaojing; Yang Xiangli; Huang Pingping; Yang Wen

    2016-01-01

    This study presents a new feature representation approach for Polarimetric Synthetic Aperture Radar (PolSAR) image based on prototype theory. First, multiple prototype sets are generated using prototype theory. Then, regularized logistic regression is used to predict similarities between a test sample and each prototype set. Finally, the PolSAR image feature representation is obtained by ensemble projection. Experimental results of an unsupervised classification of PolSAR images show that our...

  2. Three-Dimensional Facial Adaptation for MPEG-4 Talking Heads

    Directory of Open Access Journals (Sweden)

    Nikos Grammalidis

    2002-10-01

    Full Text Available This paper studies a new method for three-dimensional (3D) facial model adaptation and its integration into a text-to-speech (TTS) system. The 3D facial adaptation requires a set of two orthogonal views of the user's face with a number of feature points located on both views. Based on the correspondences of the feature points' positions, a generic face model is deformed nonrigidly, treating every facial part as a separate entity. A cylindrical texture map is then built from the two image views. The generated head models are compared to corresponding models obtained by the commonly used adaptation method that utilizes 3D radial basis functions. The generated 3D models are integrated into a talking head system, which consists of two distinct parts: a multilingual text-to-speech sub-system and an MPEG-4 compliant facial animation sub-system. Support for the Greek language has been added, while preserving lip and speech synchronization.

  3. Incorporating Feature-Based Annotations into Automatically Generated Knowledge Representations

    Science.gov (United States)

    Lumb, L. I.; Lederman, J. I.; Aldridge, K. D.

    2006-12-01

    Earth Science Markup Language (ESML) is efficient and effective in representing scientific data in an XML-based formalism. However, features of the data being represented are not accounted for in ESML. Such features might derive from events (e.g., a gap in data collection due to instrument servicing), identifications (e.g., a scientifically interesting area/volume in an image), or some other source. In order to account for features in an ESML context, we consider them from the perspective of annotation, i.e., the addition of information to existing documents without changing the originals. Although it is possible to extend ESML to incorporate feature-based annotations internally (e.g., by extending the XML schema for ESML), there are a number of complicating factors that we identify. Rather than pursuing the ESML-extension approach, we focus on an external representation for feature-based annotations via the XML Pointer Language (XPointer). In previous work (Lumb & Aldridge, HPCS 2006, IEEE, doi:10.1109/HPCS.2006.26), we have shown that it is possible to extract relationships from ESML-based representations and capture the results in the Resource Description Framework (RDF). Thus we explore and report on this same requirement for XPointer-based annotations of ESML representations. As in our past efforts, the Global Geodynamics Project (GGP) allows us to illustrate this approach for introducing annotations into automatically generated knowledge representations with a real-world example.

  4. MRI-based diagnostic imaging of the intratemporal facial nerve

    Energy Technology Data Exchange (ETDEWEB)

    Kress, B.; Baehren, W. [Bundeswehrkrankenhaus Ulm (Germany). Abt. fuer Radiologie

    2001-07-01

    Detailed imaging of the five sections of the full intratemporal course of the facial nerve can be achieved by MRI using thin-section techniques and surface coils. Contrast media are required for tomographic imaging of pathological processes. Established methods are available for the diagnostic evaluation of cerebellopontine angle tumors and chronic Bell's palsy, as well as hemifacial spasm. Still under discussion is MRI for the diagnostic evaluation of facial palsy in the presence of fractures of the petrous bone, where blood in the petrous bone makes evaluation even more difficult. MRI-based diagnostic evaluation of idiopathic facial paralysis is currently subject to change; in its usual form it cannot be recommended for routine evaluation at present. However, a quantitative analysis of the contrast medium uptake of the nerve may be an approach to improve the prognostic value of MRI in the acute phase of Bell's palsy. (orig./CB)

  5. Facial transplantation.

    Science.gov (United States)

    Siemionow, Maria; Kulahci, Yalcin

    2007-11-01

    The face has functional and aesthetic importance. It represents the most identifiable aspect of an individual's physical being. Its role in a person's identity and ability to communicate can therefore not be overstated. The face also plays an important role in certain functional needs such as speech, communicative competence, eye protection, and emotional expressiveness. The latter function bears significant social and psychological import, because two thirds of our communication takes place through nonverbal facial expressions. Accordingly, the significance of reconstruction of the face is indisputable. Yet despite application of meticulous techniques and the development of innovative approaches, full functional and aesthetic reconstruction of the face remains challenging. This is because optimal reconstruction of specialized units of the face have to address both the functional and aesthetic roles of the face. PMID:20567679

  6. Facial Recognition

    Directory of Open Access Journals (Sweden)

    Mihalache Sergiu

    2014-05-01

    Full Text Available During their lifetime, people learn to recognize the thousands of faces they interact with. Face perception refers to an individual's understanding and interpretation of the face, particularly the human face, especially in relation to the associated information processing in the brain. The proportions and expressions of the human face are important for identifying origin, emotional tendencies, health qualities, and some social information. From birth, faces are important in an individual's social interaction. Face perception is very complex, as the recognition of facial expressions involves extensive and diverse areas of the brain. Our main goal is to present specialized studies of human faces, and also to highlight the importance of attractiveness in their retention. We will see that there are many factors that influence face recognition.

  7. Facial Orf

    Directory of Open Access Journals (Sweden)

    Enver Turan

    2012-06-01

    Full Text Available Orf, also known as ecthyma contagiosum or contagious pustular dermatitis, is a zoonotic viral disease caused by direct or indirect contact of damaged skin with infected animals. The causative agent is an epitheliotropic DNA virus of the Parapoxvirus family that affects sheep, goats and some other domestic or wild ruminants. A patient presented to our clinic with two nodular lesions on his face after contact with the raw meat of ruminants, and differential diagnoses other than ecthyma contagiosum were eliminated by punch biopsy. Although orf lesions are usually found as solitary lesions on the hands and fingers, they have rarely been reported on the face, nostrils, tongue, eyelids and perianal region. Orf can present as an atypical lesion or as multiple lesions. A thirty-six-year-old male patient, who had two facial orf lesions after contact with sheep, is presented because of the unusual location and multiplicity of the lesions.

  8. Vision Based Reconstruction Multi-Clouds of Scale Invariant Feature Transform Features for Indoor Navigation

    OpenAIRE

    Abbas M. Ali; Md J. Nordin

    2009-01-01

    Problem statement: Navigation for visually impaired people needs more approaches to its open problems, especially in image-based navigation methods. Approach: This study introduces a new electronic-cane approach for navigation through the environment, forming multiple clouds of SIFT features for the scene objects in the environment. Results: The system gives an efficient localization within the weighted topological graph. Instead of building a...

  9. Feature-based attentional modulation of orientation perception in somatosensation

    Directory of Open Access Journals (Sweden)

    Meike Annika Schweisfurth

    2014-07-01

    Full Text Available In a reaction time study of human tactile orientation detection, the effects of spatial attention and feature-based attention were investigated. Subjects had to give speeded responses to target orientations (parallel and orthogonal to the finger axis) in a random stream of oblique tactile distractor orientations presented to their index and ring fingers. Before each block of trials, subjects received a tactile cue at one finger. By manipulating the validity of this cue with respect to its location and orientation (feature), we provided an incentive for subjects to attend spatially to the cued location and only there to the cued orientation. Subjects showed quicker responses to parallel compared to orthogonal targets, pointing to an orientation anisotropy in sensory processing. Also, faster reaction times were observed in location-matched trials, i.e., when targets appeared on the cued finger, representing a perceptual benefit of spatial attention. Most importantly, reaction times were shorter to orientations matching the cue, both at the cued and at the uncued location, documenting a global enhancement of tactile sensation by feature-based attention. This is the first report of a perceptual benefit of feature-based attention outside the spatial focus of attention in somatosensory perception. The similarity to effects of feature-based attention in visual perception supports the notion of matching attentional mechanisms across sensory domains.

  10. Syntactic and Sentence Feature Based Hybrid Approach for Text Summarization

    Directory of Open Access Journals (Sweden)

    D.Y. Sakhare

    2014-02-01

    Full Text Available Recently, there has been significant research in automatic text summarization using feature-based techniques, most of which employ a soft computing technique. Making use of the syntactic structure of sentences, however, has not been widely applied because it is difficult to handle in the summarization process. On the other hand, feature-based techniques in the literature have shown efficient results. Combining syntactic structure with feature-based techniques should therefore smooth the summarization process so that efficiency can be achieved. With the intention of combining the two, we present a text summarization approach that combines features and the syntactic structure of sentences. Two neural networks are trained, one on the feature scores and one on the syntactic structure of sentences, and their outputs are combined by a weighted average to obtain a score for each sentence. The experimentation is carried out on the DUC 2002 dataset for various compression ratios. The proposed approach achieved an F-measure of 80% at a 50% compression ratio, which is better than existing techniques.

  11. A Facial Expression Classification System Integrating Canny, Principal Component Analysis and Artificial Neural Network

    CERN Document Server

    Thai, Le Hoang; Hai, Tran Son

    2011-01-01

    Facial Expression Classification has been an interesting research problem in recent years, and there are many methods to solve it. In this research, we propose a novel approach using Canny edge detection, Principal Component Analysis (PCA) and an Artificial Neural Network (ANN). First, in the preprocessing phase, we use Canny for local region detection in facial images. Each local region's features are then represented using PCA. Finally, an ANN is applied for Facial Expression Classification. We apply the proposed method (Canny_PCA_ANN) to the recognition of six basic facial expressions on the JAFFE database, which consists of 213 images posed by 10 Japanese female models. The experimental results show the feasibility of the proposed method.
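
    The PCA step in such a pipeline can be illustrated with a minimal numpy sketch. This is an illustration of the general technique, not the authors' implementation; `region_vectors` stands in for hypothetical flattened local-region edge maps.

```python
import numpy as np

def pca_features(region_vectors, n_components=8):
    """Project flattened region vectors onto their top principal components."""
    X = np.asarray(region_vectors, dtype=float)
    mean = X.mean(axis=0)
    Xc = X - mean                           # center the data
    # SVD of the centered data: rows of Vt are the principal directions
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]
    return Xc @ components.T, components, mean

# toy example: 10 samples of 16-dimensional "edge" vectors
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 16))
feats, comps, mean = pca_features(X, n_components=4)
print(feats.shape)  # (10, 4)
```

    The low-dimensional `feats` would then be fed to a classifier such as the ANN mentioned in the abstract.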

  12. Iris Recognition System Based on Feature Level Fusion

    Directory of Open Access Journals (Sweden)

    Dr. S. R. Ganorkar

    2013-11-01

    Full Text Available Multibiometric systems utilize the evidence presented by multiple biometric sources (e.g., face and fingerprint, multiple fingers of a single user, multiple matchers, etc.) in order to determine or verify the identity of an individual. Information from multiple sources can be consolidated at several distinct levels. Fusing two different biometric traits is difficult because (i) the feature sets of multiple modalities may be incompatible (e.g., minutiae sets of fingerprints and eigen-coefficients of faces); (ii) the relationship between the feature spaces of different biometric systems may not be known; (iii) concatenating two feature vectors may result in a vector of very large dimensionality, leading to the 'curse of dimensionality' problem, huge storage requirements, and different processing algorithms. Moreover, multiple images of a single biometric trait do not show much variation. In this paper, we therefore present an efficient technique of feature-based fusion in a multimodal system where the left eye and right eye are used as input. Iris recognition basically comprises iris localization, feature extraction, and identification. The algorithm uses Canny edge detection to identify the inner and outer boundaries of the iris. The image is then fed to a Gabor wavelet transform to extract features, and matching is finally done using an indexing algorithm. The analysis indicates that the proposed technique can lead to substantial improvement in performance.
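
    The feature-level fusion step can be illustrated as follows. This is a minimal sketch of the general idea (min-max normalization of each modality's vector before concatenation, so neither dominates), with illustrative names rather than the paper's algorithm.

```python
import numpy as np

def fuse_features(left_eye_vec, right_eye_vec):
    """Normalize each iris feature vector, then concatenate (feature-level fusion)."""
    def minmax(v):
        v = np.asarray(v, dtype=float)
        span = v.max() - v.min()
        return (v - v.min()) / span if span > 0 else np.zeros_like(v)
    return np.concatenate([minmax(left_eye_vec), minmax(right_eye_vec)])

# toy vectors on very different scales fuse into one comparable vector
fused = fuse_features([0.2, 0.8, 0.5], [10.0, 30.0, 20.0])
print(fused)  # both halves now lie in [0, 1]
```

    Normalization keeps the concatenated vector comparable across modalities, which is one standard mitigation of the incompatible-feature-set problem listed in the abstract.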

  13. Facial morphology and obstructive sleep apnea

    Science.gov (United States)

    Capistrano, Anderson; Cordeiro, Aldir; Capelozza, Leopoldino; Almeida, Veridiana Correia; Silva, Priscila Izabela de Castro e; Martinez, Sandra; de Almeida-Pedrin, Renata Rodrigues

    2015-01-01

    Objective: This study aimed at assessing the relationship between facial morphological patterns (I, II, III, Long Face and Short Face) as well as facial types (brachyfacial, mesofacial and dolichofacial) and obstructive sleep apnea (OSA) in patients attending a center specialized in sleep disorders. Methods: Frontal, lateral and smile photographs of 252 patients (157 men and 95 women), randomly selected from a polysomnography clinic, with mean age of 40.62 years, were evaluated. In order to obtain a diagnosis of facial morphology, the sample was sent to three professors of Orthodontics trained to classify patients' faces according to five patterns, as follows: 1) Pattern I; 2) Pattern II; 3) Pattern III; 4) Long facial pattern; 5) Short facial pattern. Intraexaminer agreement was assessed by means of the Kappa index. The professors ranked patients' facial type based on a facial index that considers the proportion between facial width and height. Results: The multiple linear regression model showed that, compared to Pattern I, Pattern II worsened the apnea-hypopnea index (AHI) by 6.98 episodes, whereas Pattern III patients had an index 11.45 episodes lower than Pattern II patients. As for facial type, brachyfacial patients had a mean AHI of 22.34, while dolichofacial patients had a statistically significantly lower index of 10.52. Conclusion: Patients' facial morphology influences OSA. Pattern II and brachyfacial patients had greater AHI, while Pattern III patients showed a lower index. PMID:26691971

  14. Magnetoencephalographic study on facial movements

    Directory of Open Access Journals (Sweden)

    Kensaku eMiki

    2014-07-01

    Full Text Available In this review, we introduce three of our studies focusing on facial movements. In the first study, we examined the temporal characteristics of neural responses elicited by viewing mouth movements, and assessed differences between the responses to mouth opening and closing movements and an averting-eyes condition. Our results showed that the occipitotemporal area, the human MT/V5 homologue, was active in the perception of both mouth and eye motions. Viewing mouth and eye movements did not elicit significantly different activity in the occipitotemporal area, which indicates that the movement of facial parts may be processed in the same manner, unlike motion in general. In the second study, we investigated whether early activity in the occipitotemporal region evoked by eye movements was influenced by a face contour and/or features such as the mouth. Our results revealed specific information processing for eye movements in the occipitotemporal region, and this activity was significantly influenced by whether the movements appeared with the facial contour and/or features, in other words, whether the eyes moved, even if the movement itself was the same. In the third study, we examined the effects of inverting the facial contour (hair and chin) and features (eyes, nose, and mouth) on processing for static and dynamic face perception. Our results showed the following: (1) in static face perception, activity in the right fusiform area was affected more by the inversion of features, while that in the left fusiform area was affected more by a disruption in the spatial relationship between the contour and features; and (2) in dynamic face perception, activity in the right occipitotemporal area was affected by the inversion of the facial contour.

  15. A flower image retrieval method based on ROI feature

    Institute of Scientific and Technical Information of China (English)

    洪安祥; 陈刚; 李均利; 池哲儒; 张亶

    2004-01-01

    Flower image retrieval is a very important step for computer-aided plant species recognition. In this paper, we propose an efficient segmentation method based on color clustering and domain knowledge to extract flower regions from flower images. For flower retrieval, we use the color histogram of a flower region to characterize the color features of the flower, and two shape-based feature sets, Centroid-Contour Distance (CCD) and Angle Code Histogram (ACH), to characterize the shape features of the flower contour. Experimental results showed that our flower region extraction method based on color clustering and domain knowledge can produce accurate flower regions. Flower retrieval results on a database of 885 flower images collected from 14 plant species showed that our Region-of-Interest (ROI) based retrieval approach using both color and shape features can perform better than a method based on the global color histogram proposed by Swain and Ballard (1991) and a method based on domain knowledge-driven segmentation and color names proposed by Das et al. (1999).
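
    The Centroid-Contour Distance (CCD) descriptor mentioned above can be sketched in a few lines of numpy. This is a minimal illustration of the idea, not the authors' exact implementation.

```python
import numpy as np

def ccd_descriptor(contour):
    """Centroid-Contour Distance: distance from the shape centroid to each
    contour point, normalized by the maximum for scale invariance."""
    pts = np.asarray(contour, dtype=float)
    centroid = pts.mean(axis=0)
    d = np.linalg.norm(pts - centroid, axis=1)
    return d / d.max()

# a square contour: every corner is equidistant from the centroid
square = [[0, 0], [1, 0], [1, 1], [0, 1]]
desc = ccd_descriptor(square)
print(desc)  # [1. 1. 1. 1.]
```

    In practice the contour would be resampled at fixed angular or arc-length steps so that descriptors of different flowers are comparable.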

  17. Automatic processing of facial affects in patients with borderline personality disorder: associations with symptomatology and comorbid disorders

    OpenAIRE

    Donges, Uta-Susan; Dukalski, Bibiana; Kersting, Anette; Suslow, Thomas

    2015-01-01

    Background Instability of affects and interpersonal relations are important features of borderline personality disorder (BPD). Interpersonal problems of individuals suffering from BPD might develop based on abnormalities in the processing of facial affects and high sensitivity to negative affective expressions. The aims of the present study were to examine automatic evaluative shifts and latencies as a function of masked facial affects in patients with BPD compared to healthy individuals. As ...

  18. Object Analysis of Human Emotions by Contourlets and GLCM Features

    Directory of Open Access Journals (Sweden)

    R. Suresh

    2014-08-01

    Full Text Available Facial expression is one of the most significant ways for human beings to express intention, emotion and other nonverbal messages. A computerized human emotion recognition system based on the Contourlet transform is proposed. To analyze the presented study, seven kinds of human emotion, namely anger, fear, happiness, surprise, sadness, disgust and neutral, are taken into account in facial images. The emotional face images are represented by the Contourlet transform, which decomposes the images into directional sub-bands at multiple levels. Features are extracted from the obtained sub-bands and stored for further analysis. In addition, texture features from the Gray Level Co-occurrence Matrix (GLCM) are extracted and fused with the contourlet features to obtain higher recognition accuracy. To recognize the facial expressions, a K Nearest Neighbor (KNN) classifier assigns the input facial image to one of the seven analyzed expressions, and over 90% accuracy is achieved.
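
    A gray-level co-occurrence matrix and one derived texture feature (contrast) can be computed with plain numpy. The following is a minimal sketch for a single horizontal pixel offset, not the paper's full feature set.

```python
import numpy as np

def glcm(img, levels=4):
    """Gray-level co-occurrence matrix for a horizontal (0, 1) pixel offset."""
    img = np.asarray(img)
    M = np.zeros((levels, levels), dtype=float)
    for i, j in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        M[i, j] += 1
    M /= M.sum()                        # normalize to joint probabilities
    return M

def contrast(M):
    """GLCM contrast: sum over (i, j) of p(i, j) * (i - j)**2."""
    idx = np.arange(M.shape[0])
    di = idx[:, None] - idx[None, :]
    return float((M * di ** 2).sum())

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
M = glcm(img)
print(contrast(M))
```

    Other GLCM properties (energy, homogeneity, correlation) are computed from the same matrix; libraries such as scikit-image provide them ready-made.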

  19. Remote Sensing Image Feature Extracting Based Multiple Ant Colonies Cooperation

    Directory of Open Access Journals (Sweden)

    Zhang Zhi-long

    2014-02-01

    Full Text Available This paper presents a novel feature extraction method for remote sensing imagery based on the cooperation of multiple ant colonies. First, a multiresolution expression of the input remote sensing imagery is created, and two different ant colonies are spread on images of different resolutions. The ant colony in the low-resolution image uses phase congruency as its inspiration information, whereas that in the high-resolution image uses gradient magnitude. The two ant colonies cooperate to detect features in the image by sharing the same pheromone matrix. Finally, the image features are extracted by thresholding the pheromone matrix. Because a substantial amount of information in the input image is used as inspiration information for the ant colonies, the proposed method acquires more complete and meaningful image features than simple edge detectors do.

  20. [Facial femalization in transgenders].

    Science.gov (United States)

    Yahalom, R; Blinder, D; Nadel, S

    2015-07-01

    Transsexualism is a gender identity disorder in which there is a strong desire to live and be accepted as a member of the opposite sex. In male-to-female transsexuals with strong masculine facial features, facial feminization surgery is performed as part of the gender reassignment. A strong association between femininity and attractiveness has been attributed to the upper third of the face and the interplay of the glabellar prominence of the forehead. Studies have shown that a certain lower jaw shape is characteristic of males, with special attention to the strong square mandibular angle and chin, and suggest that the attractive female jaw is smaller, with more rounded mandibular angles and a pointed chin. Other studies have shown that feminization of the forehead through cranioplasty has the most significant impact in determining the perceived gender of a patient. Facial feminization surgeries are procedures aimed at changing the features of the male face to those of a female face. These include contouring of the forehead, brow lift, mandible angle reduction, genioplasty, rhinoplasty and a variety of soft tissue adjustments. In our maxillofacial surgery department at the Sheba Medical Center we perform forehead reshaping combined with a brow lift and, in the same surgery, mandibular and chin reshaping to match the remodeled upper third of the face. The forehead reshaping is done by cranioplasty with additional reduction of the glabellar area by burring of the frontal bone. After reducing the frontal bossing around the superior orbital rims, we manage the soft tissue to achieve the brow lift. The mandibular reshaping is performed via an intraoral approach and includes contouring of the angles by osteotomy to a more rounded shape (rather than the masculine square angles), as well as reshaping of the bone in the chin area to make it more pointed, by removing the lateral parts of the chin and, in some cases, also performing genioplasty reduction by AP osteotomy. PMID

  1. Multi Local Feature Selection Using Genetic Algorithm For Face Identification

    Directory of Open Access Journals (Sweden)

    Dzulkifli Mohamad

    2007-02-01

    Full Text Available Face recognition is a biometric authentication method that has become more significant and relevant in recent years. It is becoming a more mature technology that has been employed in many large-scale systems such as the Visa Information System, surveillance access control, and multimedia search engines. Generally, there are three categories of approaches for recognition, namely global facial feature, local facial feature, and hybrid feature. Although the global facial-feature approach is the most researched area, it is still plagued with many difficulties and drawbacks due to factors such as face orientation, illumination, and the presence of foreign objects. This paper presents an improved offline face recognition algorithm based on a multi-local-feature selection approach for grayscale images. The approach taken in this work consists of five stages, namely face detection, facial feature (eyes, nose and mouth) extraction, moment generation, facial feature classification, and face identification. These stages were applied to 3065 images from three distinct facial databases, namely ORL, Yale, and AR. The experimental results show that recognition rates of more than 89% have been achieved, as compared to other global-feature and local-feature approaches. The results also reveal that the technique is robust and invariant to translation, orientation, and scaling.

  2. Portable Facial Recognition Jukebox Using Fisherfaces (Frj)

    OpenAIRE

    Richard Mo; Adnan Shaout

    2016-01-01

    A portable real-time facial recognition system that is able to play personalized music based on the identified person’s preferences was developed. The system is called Portable Facial Recognition Jukebox Using Fisherfaces (FRJ). Raspberry Pi was used as the hardware platform for its relatively low cost and ease of use. This system uses the OpenCV open source library to implement the computer vision Fisherfaces facial recognition algorithms, and uses the Simple DirectMedia Layer (SDL) library ...

  3. Do facial expressions develop before birth?

    Directory of Open Access Journals (Sweden)

    Nadja Reissland

    Full Text Available BACKGROUND: Fetal facial development is essential not only for postnatal bonding between parents and child, but also theoretically for the study of the origins of affect. However, how such movements become coordinated is poorly understood. 4-D ultrasound visualisation allows an objective coding of fetal facial movements. METHODOLOGY/FINDINGS: Based on research using facial muscle movements to code recognisable facial expressions in adults and adapted for infants, we defined two distinct fetal facial movements, namely "cry-face-gestalt" and "laughter-gestalt," both made up of up to 7 distinct facial movements. In this conceptual study, two healthy fetuses were then scanned at different gestational ages in the second and third trimester. We observed that the number and complexity of simultaneous movements increased with gestational age. Thus, between 24 and 35 weeks the mean number of co-occurrences of 3 or more facial movements increased from 7% to 69%. Recognisable facial expressions were also observed to develop. Between 24 and 35 weeks the number of co-occurrences of 3 or more movements making up a "cry-face gestalt" facial movement increased from 0% to 42%. Similarly the number of co-occurrences of 3 or more facial movements combining to a "laughter-face gestalt" increased from 0% to 35%. These changes over age were all highly significant. SIGNIFICANCE: This research provides the first evidence of developmental progression from individual unrelated facial movements toward fetal facial gestalts. We propose that there is considerable potential of this method for assessing fetal development: subsequent discrimination of normal and abnormal fetal facial development might identify health problems in utero.

  4. Facial expression recognition using angle-related information from facial meshes

    Czech Academy of Sciences Publication Activity Database

    Vretos, N.; Solachidis, V.; Somol, Petr; Pitas, I.

    Lausanne, Switzerland: EURASIP, 2008, s. 1-5. [16th European Signal Processing Conference (EUSIPCO-2008). Lausanne (CH), 25.08.2008-29.08.2008] R&D Projects: GA MŠk 1M0572; GA ČR GA102/08/0593 EU Projects: European Commission(XE) 507752 - MUSCLE Institutional research plan: CEZ:AV0Z10750506 Keywords : facial expression * facial meshes * recognition * feature selection Subject RIV: BD - Theory of Information http://library.utia.cas.cz/separaty/2008/RO/somol-facial expression recognition using angle-related information from facial meshes.pdf

  5. Content Based Video Retrieval using trajectory and Velocity features

    Directory of Open Access Journals (Sweden)

    Dr. S. D. Sawarkar

    2012-09-01

    Full Text Available The Internet forms today's largest source of information, containing a high density of multimedia objects whose content is often semantically related. The identification of relevant media objects in such a vast collection poses a major problem that is studied in the area of multimedia information retrieval. Before the emergence of content-based retrieval, media were annotated with text, allowing the media to be accessed by text-based searching based on the classification of subject or semantics. In typical content-based retrieval systems, the contents of the media in the database are extracted and described by multi-dimensional feature vectors, also called descriptors. To retrieve the desired data, users submit query examples to the retrieval system, which represents these examples as feature vectors. The distances (i.e., similarities) between the feature vectors of the query example and those of the media in the feature dataset are then computed and ranked. Retrieval is conducted by applying an indexing scheme that provides an efficient way to search the video database. Finally, the system ranks the search results and returns the top results that are most similar to the query examples. A content-based retrieval system therefore has four aspects: feature extraction and representation, feature dimensionality reduction, indexing, and query specification. With the search engine developed here, the user can initiate a retrieval procedure with a better chance of finding the desired content.
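
    The distance computation and ranking step described above can be sketched with cosine similarity over descriptor vectors. This is a minimal illustration; the actual system may use other distance measures and an indexing scheme rather than a linear scan.

```python
import numpy as np

def rank_by_similarity(query, database, top_k=3):
    """Rank database descriptors by cosine similarity to the query descriptor."""
    q = np.asarray(query, dtype=float)
    D = np.asarray(database, dtype=float)
    sims = D @ q / (np.linalg.norm(D, axis=1) * np.linalg.norm(q))
    order = np.argsort(-sims)           # highest similarity first
    return order[:top_k], sims[order[:top_k]]

# toy 2-D descriptors; index 0 points almost exactly along the query
db = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]
idx, scores = rank_by_similarity([1.0, 0.1], db, top_k=2)
print(idx, scores)
```

    The returned indices identify the top-k most similar media objects, which is exactly the "rank and return the top search results" step in the abstract.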

  6. Robust feature extraction for character recognition based on binary images

    Science.gov (United States)

    Wang, Lijun; Zhang, Li; Xing, Yuxiang; Wang, Zhiming; Gao, Hewei

    2006-01-01

    Optical Character Recognition (OCR) is a classical research field and has become one of the most successful applications in the area of pattern recognition. Feature extraction is a key step in the process of OCR. This paper presents three algorithms for feature extraction based on binary images: the Lattice with Distance Transform (DTL), Stroke Density (SD) and Co-occurrence Matrix (CM). The DTL algorithm improves the robustness of the lattice feature by using a distance transform to increase the separation between foreground and background and thus reduce the influence of stroke boundaries. The SD and CM algorithms extract robust stroke features based on the fact that humans recognize characters according to strokes, including their length and orientation. SD reflects the quantized stroke information, including length and orientation; CM reflects the length and orientation of a contour. Together, SD and CM sufficiently describe strokes. Since these three groups of feature vectors complement each other in describing characters, we integrate them and adopt a hierarchical algorithm to achieve optimal performance. Our methods are tested on the USPS (United States Postal Service) database and the Vehicle License Plate Number Pictures Database (VLNPD). Experimental results show that the methods achieve high recognition rates at reasonable average running times. Under similar conditions, we also compared our results to the box method proposed by Hannmandlu [18]; our methods demonstrated better efficiency.
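
    The distance transform underlying the DTL feature can be illustrated with a brute-force numpy sketch: each pixel gets its Euclidean distance to the nearest foreground (stroke) pixel. This is fine for small images; production code would use a linear-time algorithm such as the one in scipy.

```python
import numpy as np

def distance_transform(binary):
    """Brute-force Euclidean distance from each pixel to the nearest foreground pixel."""
    binary = np.asarray(binary, dtype=bool)
    fg = np.argwhere(binary)            # coordinates of foreground pixels
    out = np.zeros(binary.shape, dtype=float)
    for r in range(binary.shape[0]):
        for c in range(binary.shape[1]):
            d = np.hypot(fg[:, 0] - r, fg[:, 1] - c)
            out[r, c] = d.min()
    return out

img = np.array([[1, 0, 0],
                [0, 0, 0],
                [0, 0, 1]])
dt = distance_transform(img)
print(dt)
```

    Sampling such a distance map on a lattice yields features that vary smoothly near stroke boundaries, which is the robustness property the DTL algorithm exploits.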

  7. Intact Rapid Facial Mimicry as well as Generally Reduced Mimic Responses in Stable Schizophrenia Patients

    Science.gov (United States)

    Chechko, Natalya; Pagel, Alena; Otte, Ellen; Koch, Iring; Habel, Ute

    2016-01-01

    Spontaneous emotional expressions (rapid facial mimicry) perform both emotional and social functions. In the current study, we sought to test whether there were deficits in automatic mimic responses to emotional facial expressions in 15 patients with stable schizophrenia compared to 15 controls. In a perception-action interference paradigm (the Simon task; first experiment), and in the context of a dual-task paradigm (second experiment), the task-relevant stimulus feature was the gender of a face, which, however, displayed a smiling or frowning expression (task-irrelevant stimulus feature). We measured the electromyographical activity in the corrugator supercilii and zygomaticus major muscle regions in response to either compatible or incompatible stimuli (i.e., when the required response did or did not correspond to the depicted facial expression). The compatibility effect based on interactions between the implicit processing of a task-irrelevant emotional facial expression and the conscious production of an emotional facial expression did not differ between the groups. In stable patients, in spite of a reduced mimic reaction, we observed an intact capacity to respond spontaneously to facial emotional stimuli. PMID:27303335

  8. Image mosaic method based on SIFT features of line segment.

    Science.gov (United States)

    Zhu, Jun; Ren, Mingwu

    2014-01-01

    This paper proposes a novel image mosaic method based on the SIFT (Scale Invariant Feature Transform) features of line segments, aiming to handle incidental scaling, rotation, changes in lighting conditions, and so on between two images in the panoramic image mosaic process. The method first uses the Harris corner detection operator to detect key points. Second, it constructs directed line segments, describes them with the SIFT feature, and matches those directed segments to acquire rough point matches. Finally, RANSAC is used to eliminate wrong pairs in order to accomplish the image mosaic. Results from experiments on four pairs of images show that our method is strongly robust to resolution, lighting, rotation, and scaling. PMID:24511326
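
    The RANSAC step that eliminates wrong pairs can be illustrated with a deliberately simplified motion model, a pure 2D translation instead of the full homography used for mosaicking. This is a minimal sketch of the technique, not the paper's implementation.

```python
import numpy as np

def ransac_translation(src, dst, iters=100, tol=1.0, seed=0):
    """Estimate a 2D translation from point matches, rejecting outlier pairs."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    rng = np.random.default_rng(seed)
    best_t, best_inliers = None, np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(src))      # one match fixes a translation hypothesis
        t = dst[i] - src[i]
        inliers = np.linalg.norm(dst - (src + t), axis=1) < tol
        if inliers.sum() > best_inliers.sum():
            best_t, best_inliers = t, inliers
    # refine the estimate using all inliers of the best hypothesis
    best_t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
    return best_t, best_inliers

src = np.array([[0, 0], [1, 0], [0, 1], [5, 5]])
dst = src + np.array([2.0, 3.0])
dst[3] = [40.0, -7.0]                   # one gross mismatch among the pairs
t, inl = ransac_translation(src, dst)
print(t, inl)
```

    With a homography the hypothesis would be fitted from four matches instead of one, but the hypothesize-score-refine loop is the same.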

  10. GPR-Based Landmine Detection and Identification Using Multiple Features

    Directory of Open Access Journals (Sweden)

    Kwang Hee Ko

    2012-01-01

    Full Text Available This paper presents a method to identify landmines in various burial conditions. A ground-penetrating radar is used to generate a data set, which is then processed to reduce the ground effect and noise and obtain landmine signals. Principal components and Fourier coefficients of the landmine signals are computed and used as features of each landmine for detection and identification. A database is constructed from the features of various types of landmines and ground conditions, including different levels of moisture, types of ground, and burial depths of the landmines. Detection and identification are performed by searching for the features in the database. For a robust decision, the counting method and the Mahalanobis distance-based likelihood ratio test are employed. Four landmines, differing in size and material, are considered as examples that demonstrate the efficiency of the proposed method for detecting and identifying landmines.
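
    The Mahalanobis distance used in the likelihood ratio test can be sketched as follows. This is a minimal illustration with synthetic data: the same Euclidean offset scores very differently once per-feature variance is taken into account.

```python
import numpy as np

def mahalanobis(x, samples):
    """Mahalanobis distance of x from the distribution estimated from samples."""
    X = np.asarray(samples, dtype=float)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)       # sample covariance of the features
    diff = np.asarray(x, dtype=float) - mu
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

# synthetic 2-D features: second dimension is five times noisier
rng = np.random.default_rng(1)
samples = rng.normal(loc=[0.0, 0.0], scale=[1.0, 5.0], size=(500, 2))
d_along_noisy = mahalanobis([0.0, 5.0], samples)   # offset along the noisy axis
d_along_tight = mahalanobis([5.0, 0.0], samples)   # same offset, tight axis
print(d_along_noisy, d_along_tight)
```

    An offset along the low-variance axis yields a much larger distance, which is why Mahalanobis-based tests are better suited than Euclidean ones when features (moisture, soil type, depth) vary on different scales.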

  11. A Distributed Feature-based Environment for Collaborative Design

    Directory of Open Access Journals (Sweden)

    Wei-Dong Li

    2003-02-01

    Full Text Available This paper presents a client/server design environment based on 3D feature-based modelling and Java technologies to enable design information to be shared efficiently among members within a design team. In this environment, design tasks and clients are organised through working sessions generated and maintained by a collaborative server. The information from an individual design client during a design process is updated and broadcast to other clients in the same session through an event-driven and call-back mechanism. The downstream manufacturing analysis modules can be wrapped as agents and plugged into the open environment to support the design activities. At the server side, a feature-feature relationship is established and maintained to filter the varied information of a working part, so as to facilitate efficient information update during the design process.

  12. Expression Recognition Based on Variant Sampling Method and Gabor Features

    Institute of Scientific and Technical Information of China (English)

    徐洁; 章毓晋

    2011-01-01

This paper presents a facial expression recognition system that combines several sampling schemes with local Gabor filters at different scales, with features selected by Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). The sampling schemes greatly reduce the time and memory needed for feature extraction and classification while improving recognition rates. Sampling along the vertical direction of the face performs best, indicating that this direction carries more expression information. Experiments also show that, after the Gabor transform, the principal expression features are concentrated yet redundant across scales and orientations, so a small-scale, all-orientation filter bank achieves better recognition.
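A multi-scale, multi-orientation Gabor filter bank of the kind this abstract describes can be sketched in NumPy; the kernel size, wavelengths, and orientation count below are illustrative choices, not the paper's settings:

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a 2D Gabor filter: a cosine carrier under a Gaussian window."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)      # rotate coordinates by theta
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

def gabor_bank(size=15, wavelengths=(4, 8), n_orient=4):
    """One kernel per (scale, orientation) pair."""
    return [gabor_kernel(size, w, k * np.pi / n_orient, w / 2)
            for w in wavelengths for k in range(n_orient)]
```

Convolving a face image with each kernel and concatenating the responses yields the raw Gabor feature vector that PCA + LDA would then compress.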

  13. Areal Feature Matching Algorithm Based on Spatial Similarity

    Directory of Open Access Journals (Sweden)

    HAO Yan-ling

    2016-02-01

Full Text Available Features that represent the same real-world entity usually differ between maps from disparate sources, so identifying or matching them is crucial to map compilation, and areal features account for much of a map's representation. Motivated by the way the eye identifies the same entity by integrating known information, an areal feature matching algorithm based on spatial similarity is proposed in this paper. Treating the areal feature as a whole entity, a total similarity is obtained by combining positional similarity, shape similarity, and size similarity with a weighted average. The matching entities are then selected according to the maximum total similarity. The algorithm locates an areal feature by its shape-center point in order to calculate positional similarity. Shape similarity is given by a shape-describing function, which keeps its precision from being affected by interference and avoids the loss of shape information. The size of an areal feature is measured by the area it covers. Test results show the stability and reliability of the proposed algorithm.
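The weighted-average combination of positional, shape, and size similarity can be written down directly; the weights below are illustrative, since the abstract does not fix them:

```python
import numpy as np

def total_similarity(pos_sim, shape_sim, size_sim, weights=(0.4, 0.4, 0.2)):
    """Weighted average of the three component similarities (each in [0, 1])."""
    sims = np.array([pos_sim, shape_sim, size_sim])
    w = np.array(weights, float)
    return float(sims @ w / w.sum())

def best_match(candidates):
    """candidates: dict id -> (pos, shape, size) similarities.
    Returns the candidate with the maximum total similarity."""
    return max(candidates, key=lambda c: total_similarity(*candidates[c]))
```

Matching then reduces to evaluating every candidate pairing and keeping the argmax, as the abstract's "maximum total similarity" rule states.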

  14. Feature-based attention across saccades and immediate postsaccadic selection.

    Science.gov (United States)

    Eymond, Cécile; Cavanagh, Patrick; Collins, Thérèse

    2016-07-01

    Before each eye movement, attentional resources are drawn to the saccade goal. This saccade-related attention is known to be spatial in nature, and in this study we asked whether it also evokes any feature selectivity that is maintained across the saccade. After a saccade toward a colored target, participants performed a postsaccadic feature search on an array displayed at landing. The saccade target either had the same color as the search target in the postsaccadic array (congruent trials) or a different color (incongruent or neutral trials). Our results show that the color of the saccade target did not prime the subsequent feature search. This suggests that "landmark search", the process of searching for the saccade target once the eye lands (Deubel in Visual Cognition, 11, 173-202, 2004), may not involve the attentional mechanisms that underlie feature search. We also analyzed intertrial effects and observed priming of pop-out (Maljkovic & Nakayama in Memory & Cognition, 22, 657-672, 1994) for the postsaccadic feature search: the detection of the color singleton became faster when its color was repeated on successive trials. However, search performance revealed no effect of congruency between the saccade and search targets, either within or across trials, suggesting that the priming of pop-out is specific to target repetitions within the same task and is not seen for repetitions across tasks. Our results support a dissociation between feature-based attention and the attentional mechanisms associated with eye movement programming. PMID:27084700

  15. An Efficient Annotation of Search Results Based on Feature

    Directory of Open Access Journals (Sweden)

    A. Jebha

    2015-10-01

Full Text Available With the growing number of web databases, a major part of the deep web consists of structured databases. In many search engines, the encoded data in the returned result pages comes from structured databases, referred to as Web databases (WDBs), and a result page returned from a WDB contains multiple search result records (SRRs). The data units obtained from these databases are encoded into dynamic result pages intended for manual reading. To make these units machine-processable, the relevant information must be extracted and the data assigned meaningful labels. In this paper, feature ranking is proposed to extract the relevant information from WDB features. Feature ranking is a practical way to improve understanding of the data and to identify relevant features. This research explores the performance of the feature-ranking process by using linear support vector machines with various WDB features to annotate relevant results. Experimental results show that the proposed system performs better than earlier methods.

  16. PCA-HOG symmetrical feature based diseased cell detection

    Science.gov (United States)

    Wan, Min-jie

    2016-04-01

A histogram of oriented gradients (HOG) feature is applied to diseased cell detection, allowing diseased cells in high-resolution tissue images to be detected rapidly, accurately, and efficiently. First, motivated by the symmetry of cellular forms, a new HOG symmetrical feature based on the traditional HOG feature is proposed to suit cell detection. Second, because the high dimension of the traditional HOG feature demands substantial memory and long runtimes in practical applications, a classical dimension reduction method, principal component analysis (PCA), is used to reduce the dimension of the high-dimensional HOG descriptor. This greatly increases computational speed while keeping detection accuracy within a proper range. Third, a support vector machine (SVM) classifier is trained with the PCA-HOG symmetrical features proposed above. Finally, practical tissue images are detected and analyzed by the SVM classifier. To verify the effectiveness of this new algorithm, it was applied to diseased cell detection on a sample of 200 H&E (hematoxylin & eosin) stained high-resolution histopathological images collected from 20 breast cancer patients. The experiment shows an average processing rate of 25 frames per second and a detection accuracy of 92.1%.
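The PCA step applied to the high-dimensional HOG descriptors can be sketched with an SVD-based PCA in NumPy; this is a generic sketch, not the authors' implementation:

```python
import numpy as np

def pca_fit(X, k):
    """Return (mean, components) for projecting rows of X onto the top-k PCs."""
    mean = X.mean(axis=0)
    # SVD of the centered data; rows of Vt are the principal directions
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def pca_transform(X, mean, components):
    """Project descriptors into the k-dimensional PCA subspace."""
    return (X - mean) @ components.T
```

The reduced vectors, rather than the raw HOG descriptors, are what the SVM classifier would be trained on.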

  17. Efficient Identification Using a Prime-Feature-Based Technique

    DEFF Research Database (Denmark)

    Hussain, Dil Muhammad Akbar; Haq, Shaiq A.; Valente, Andrea

    2011-01-01

    , which are called minutiae points. Designing a reliable automatic fingerprint matching algorithm for minimal platform is quite challenging. In real-time systems, efficiency of the matching algorithm is of utmost importance. To achieve this goal, a prime-feature-based indexing algorithm is proposed in...

  18. Feature-Based Statistical Analysis of Combustion Simulation Data

    Energy Technology Data Exchange (ETDEWEB)

    Bennett, J; Krishnamoorthy, V; Liu, S; Grout, R; Hawkes, E; Chen, J; Pascucci, V; Bremer, P T

    2011-11-18

We present a new framework for feature-based statistical analysis of large-scale scientific data and demonstrate its effectiveness by analyzing features from Direct Numerical Simulations (DNS) of turbulent combustion. Turbulent flows are ubiquitous and account for transport and mixing processes in combustion, astrophysics, fusion, and climate modeling among other disciplines. They are also characterized by coherent structure or organized motion, i.e. nonlocal entities whose geometrical features can directly impact molecular mixing and reactive processes. While traditional multi-point statistics provide correlative information, they lack nonlocal structural information, and hence, fail to provide mechanistic causality information between organized fluid motion and mixing and reactive processes. Hence, it is of great interest to capture and track flow features and their statistics together with their correlation with relevant scalar quantities, e.g. temperature or species concentrations. In our approach we encode the set of all possible flow features by pre-computing merge trees augmented with attributes, such as statistical moments of various scalar fields, e.g. temperature, as well as length-scales computed via spectral analysis. The computation is performed in an efficient streaming manner in a pre-processing step and results in a collection of meta-data that is orders of magnitude smaller than the original simulation data. This meta-data is sufficient to support a fully flexible and interactive analysis of the features, allowing for arbitrary thresholds, providing per-feature statistics, and creating various global diagnostics such as Cumulative Density Functions (CDFs), histograms, or time-series. We combine the analysis with a rendering of the features in a linked-view browser that enables scientists to interactively explore, visualize, and analyze the equivalent of one terabyte of simulation data. We highlight the utility of this new framework for combustion

  19. Facial Recognition in a Group-Living Cichlid Fish

    OpenAIRE

    Masanori Kohda; Lyndon Alexander Jordan; Takashi Hotta; Naoya Kosaka; Kenji Karino; Hirokazu Tanaka; Masami Taniyama; Tomohiro Takeyama

    2015-01-01

    The theoretical underpinnings of the mechanisms of sociality, e.g. territoriality, hierarchy, and reciprocity, are based on assumptions of individual recognition. While behavioural evidence suggests individual recognition is widespread, the cues that animals use to recognise individuals are established in only a handful of systems. Here, we use digital models to demonstrate that facial features are the visual cue used for individual recognition in the social fish Neolamprologus pulcher. Focal...

  20. Complex chromosome rearrangement in a child with microcephaly, dysmorphic facial features and mosaicism for a terminal deletion del(18)(q21.32-qter) investigated by FISH and array-CGH: Case report

    Directory of Open Access Journals (Sweden)

    Kokotas Haris

    2008-11-01

Full Text Available Abstract We report on a 7-year and 4-month-old Greek boy with mild microcephaly and dysmorphic facial features. He was a sociable child with maxillary hypoplasia, epicanthal folds, upslanting palpebral fissures with long eyelashes, and hypertelorism. His ears were prominent and dysmorphic, and he had a long philtrum and a high arched palate. His weight was 17 kg (25th percentile) and his height 120 cm (50th percentile). High-resolution chromosome analysis identified a normal male karyotype in 50% of the cells, while in the other 50% one chromosome 18 showed a terminal deletion from 18q21.32. Molecular cytogenetic investigation confirmed a del(18)(q21.32-qter) in the one chromosome 18, and furthermore revealed a duplication at q21.2 in the other chromosome 18. The case is discussed with respect to comparable previously reported cases and the possible mechanisms of formation.

  1. Digital video steganalysis using motion vector recovery-based features.

    Science.gov (United States)

    Deng, Yu; Wu, Yunjie; Zhou, Linna

    2012-07-10

    As a novel digital video steganography, the motion vector (MV)-based steganographic algorithm leverages the MVs as the information carriers to hide the secret messages. The existing steganalyzers based on the statistical characteristics of the spatial/frequency coefficients of the video frames cannot attack the MV-based steganography. In order to detect the presence of information hidden in the MVs of video streams, we design a novel MV recovery algorithm and propose the calibration distance histogram-based statistical features for steganalysis. The support vector machine (SVM) is trained with the proposed features and used as the steganalyzer. Experimental results demonstrate that the proposed steganalyzer can effectively detect the presence of hidden messages and outperform others by the significant improvements in detection accuracy even with low embedding rates. PMID:22781241

  2. Image Recommendation Algorithm Using Feature-Based Collaborative Filtering

    Science.gov (United States)

    Kim, Deok-Hwan

    As the multimedia contents market continues its rapid expansion, the amount of image contents used in mobile phone services, digital libraries, and catalog service is increasing remarkably. In spite of this rapid growth, users experience high levels of frustration when searching for the desired image. Even though new images are profitable to the service providers, traditional collaborative filtering methods cannot recommend them. To solve this problem, in this paper, we propose feature-based collaborative filtering (FBCF) method to reflect the user's most recent preference by representing his purchase sequence in the visual feature space. The proposed approach represents the images that have been purchased in the past as the feature clusters in the multi-dimensional feature space and then selects neighbors by using an inter-cluster distance function between their feature clusters. Various experiments using real image data demonstrate that the proposed approach provides a higher quality recommendation and better performance than do typical collaborative filtering and content-based filtering techniques.

  3. A Hybrid method of face detection based on Feature Extraction using PIFR and Feature Optimization using TLBO

    Directory of Open Access Journals (Sweden)

    Kapil Verma

    2016-01-01

Full Text Available In this paper we propose a face detection method based on feature selection and feature optimization. Current research in biometric security uses feature optimization to improve face detection techniques. A face presents three kinds of features: skin color, texture, and the shape and size of the face, of which skin color and texture are the most important. The proposed detection technique uses the texture features of the face image, extracted with a partial image feature extraction function, a promising approach to shape feature analysis. For feature selection and optimization we use multi-objective TLBO, a population-based search technique, with two constraint functions governing the selection and optimization processes. The face image database is passed through the partial feature extractor, which yields the texture features of each face image. For performance evaluation, the proposed algorithm was implemented in MATLAB 7.8.0 on the Google face image database, with the hit-and-miss ratio used for numerical analysis. Our empirical evaluation shows better prediction results in comparison with the PIFR method of face detection.

  4. Ear Recognition Based on Gabor Features and KFDA

    OpenAIRE

    Li Yuan; Zhichun Mu

    2014-01-01

    We propose an ear recognition system based on 2D ear images which includes three stages: ear enrollment, feature extraction, and ear recognition. Ear enrollment includes ear detection and ear normalization. The ear detection approach based on improved Adaboost algorithm detects the ear part under complex background using two steps: offline cascaded classifier training and online ear detection. Then Active Shape Model is applied to segment the ear part and normalize all the ear images to the s...

  5. Facial Orf

    Directory of Open Access Journals (Sweden)

    Enver Turan

    2012-06-01

Full Text Available Orf, also known as ecthyma contagiosum or contagious pustular dermatitis, is a zoonotic viral disease caused by direct or indirect contact of damaged skin with infected animals. The causative agent is an epitheliotropic DNA virus of the genus Parapoxvirus that affects sheep, goats, and some other domestic or wild ruminants. A patient presented to our clinic with two nodular lesions on his face after contact with raw ruminant meat, and diagnoses other than ecthyma contagiosum were excluded by punch biopsy. Although orf lesions are usually solitary and found on the hands and fingers, they have rarely been reported on the face, nostrils, tongue, eyelids, and perianal region, and they can present as atypical or multiple lesions. A thirty-six-year-old male patient who developed two facial orf lesions after contact with sheep is presented because of the unusual location and multiplicity of the lesions. (Turk J Dermatol 2012; 6: 58-60)

  6. Feature-based ATR performance method for FLIR

    Science.gov (United States)

    Doria, David M.

    2004-09-01

    A performance model for FLIR automatic target recognition is discussed. Key aspects of this model are that (a) relationships between sensor optical resolution, sampling, noise and estimated P(ID) are implicitly defined, (b) premised on the use of the particular features that are used, the analysis of the matching structure leads to an explicit "shape similarity" measure between targets, (c) the notion of "shape" includes both internal signature attributes and external contour information; (d) the values of this shape measure can be measured for both true and false target models using combined CAD rendering, sensor models, and features, (e) in addition to the P(ID), the system also is able to predict the probability of declaration P(Declare|Target) for a given true target, (f) the system is able to predict the probability of false declaration for a given confuser or confuser to target similarity specification, (g) M (with M greater than or equal to 2) class problems are able to be handled, and (h) the diagonals along confusion matrices can be estimated directly using this approach. The model relies on analysis of performance of a particular type of shape-based features, with the goal of developing explicit relationships from low level features through high level model matching. Based on the predicted densities of the ensemble of features, the system approximates an expression for the likelihood of the observed features under noisy conditions with a given sensor, conditioned on the target type, aspect, and range. Using some engineering approximations that relate to the distance transform-type method of matching that is analyzed, a tractable form of a non-unique correspondence based approximate likelihood expression is obtained, which can be used to estimate bounds on the performance of similar sensor/ATR systems that rely on these features. Such an approach could also be applied to other phenomenologies, such as synthetic aperture radar, using an appropriate low

  7. Commercial Shot Classification Based on Multiple Features Combination

    Science.gov (United States)

    Liu, Nan; Zhao, Yao; Zhu, Zhenfeng; Ni, Rongrong

    This paper presents a commercial shot classification scheme combining well-designed visual and textual features to automatically detect TV commercials. To identify the inherent difference between commercials and general programs, a special mid-level textual descriptor is proposed, aiming to capture the spatio-temporal properties of the video texts typical of commercials. In addition, we introduce an ensemble-learning based combination method, named Co-AdaBoost, to interactively exploit the intrinsic relations between the visual and textual features employed.

  8. Facial Expression Recognition Method Based on Topological Perception Theory

    Institute of Scientific and Technical Information of China (English)

    王晓峰; 张丽君

    2012-01-01

In the traditional computer vision field, low-level tasks are widely treated as autonomous, bottom-up processes, which leads to low image recognition rates. This paper proposes a facial expression recognition method based on topological perception theory. The method exploits the topological invariance of the human face to extract the facial outline, combines the extracted features with principal component analysis (PCA) to form large-scale facial feature information, applies the large-range-first (global precedence) principle to the expression recognition algorithm, and designs an RBF + AdaBoost multi-layer classifier. Experimental results show that the method improves the facial expression recognition rate.

  9. Extraction of Eyes for Facial Expression Identification of Students

    OpenAIRE

    G.Sofia,; DR. M. MOHAMED SATHIK

    2010-01-01

Facial expressions play an essential role in communication during social interactions with other human beings, delivering rich information about their emotions. Facial expression analysis has a wide range of applications in areas such as psychology, animation, interactive games, image retrieval, and image understanding. Selecting the relevant features and ignoring the unimportant ones is the key step in a facial expression recognition system. Here, we propose an efficient method for identifyi...

  10. Mammogram Analysis Based on Pixel Intensity Mean Features

    Directory of Open Access Journals (Sweden)

    B. Santhi

    2012-01-01

Full Text Available Problem statement: In recent years, Computer Aided Diagnosis (CAD) has become very useful for the detection of breast cancer, and mammography is an efficient tool for breast cancer diagnosis. A computer-based diagnosis and classification system can reduce unnecessary biopsies. Approach: This study investigates a new approach to the classification of mammogram images based on pixel intensity mean features. The proposed method for classifying normal and abnormal (cancerous) patterns is a two-step process. The first step is feature extraction: intensity-based features are extracted from the digital mammograms. The second step is classification, differentiating between normal and abnormal patterns; artificial neural networks are used to classify the data. Experimental evaluation is performed on the Digital Database for Screening Mammography (DDSM) benchmark database. Results and Conclusion: Experiments verify that the proposed pixel intensity mean features improve the accuracy of the classification. The proposed CAD system achieves a classification accuracy of 98%.
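One simple way to realize pixel-intensity-mean features is a block-wise mean over a grid; the grid layout below is an assumption for illustration, since the abstract does not specify the exact feature arrangement:

```python
import numpy as np

def block_mean_features(img, grid=(4, 4)):
    """Split a grayscale image into a grid of blocks and return each block's
    mean intensity as a flat feature vector."""
    h, w = img.shape
    gh, gw = grid
    feats = [img[i * h // gh:(i + 1) * h // gh,
                 j * w // gw:(j + 1) * w // gw].mean()
             for i in range(gh) for j in range(gw)]
    return np.array(feats)
```

The resulting low-dimensional vector is the kind of input a small feed-forward neural network can classify as normal vs. abnormal.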

  11. In search of Leonardo: computer-based facial image analysis of Renaissance artworks for identifying Leonardo as subject

    Science.gov (United States)

    Tyler, Christopher W.; Smith, William A. P.; Stork, David G.

    2012-03-01

One of the enduring mysteries in the history of the Renaissance is the adult appearance of the archetypical "Renaissance Man," Leonardo da Vinci. His only acknowledged self-portrait is from an advanced age, and various candidate images of younger men are difficult to assess given the absence of documentary evidence. One clue about Leonardo's appearance comes from the remark of the contemporary historian Vasari that the sculpture of David by Leonardo's master, Andrea del Verrocchio, was based on the appearance of Leonardo when he was an apprentice. Taking a cue from this statement, we suggest that the more mature sculpture of St. Thomas, also by Verrocchio, might also be a portrait of Leonardo. We tested the possibility that Leonardo was the subject of Verrocchio's sculpture using a novel computational technique for comparing three-dimensional facial configurations. Based on quantitative measures of similarity, we also assess whether another pair of candidate two-dimensional images is plausibly attributable as portraits of Leonardo as a young adult. Our results are consistent with the claim that Leonardo is indeed the subject in these works, but comparisons with larger corpora of candidate artworks are needed before our results achieve statistical significance.

  12. Facial Recognition Technology: An analysis with scope in India

    OpenAIRE

    S.B. Thorat; Nayak, S. K.; Jyoti P Dandale

    2010-01-01

A facial recognition system is a computer application for automatically identifying or verifying a person from a digital image or a video frame from a video source. One way to do this is by comparing selected facial features from the image with a facial database. It is typically used in security systems and can be compared to other biometrics such as fingerprint or iris recognition systems. In this paper we focus on 3-D facial recognition systems and biometric facial recognition sy...

  13. SPEECH/MUSIC CLASSIFICATION USING WAVELET BASED FEATURE EXTRACTION TECHNIQUES

    Directory of Open Access Journals (Sweden)

    Thiruvengatanadhan Ramalingam

    2014-01-01

Full Text Available Audio classification is a fundamental step in coping with the rapid growth of audio data. Given the increasing size of multimedia sources, speech/music classification is one of the most important issues in multimedia information retrieval. In this work a speech/music discrimination system is developed that uses the Discrete Wavelet Transform (DWT) as the acoustic feature. Multiresolution analysis is a significant statistical way to extract features from an input signal, and in this study a method is deployed to model the extracted wavelet features. Support Vector Machines (SVMs), which are based on the principle of structural risk minimization, are applied to classify audio into the classes speech and music by learning from training data. The proposed method then extends the application of Gaussian Mixture Models (GMMs) to estimate the probability density function using maximum likelihood decision methods. The system shows significant results with an accuracy of 94.5%.
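The DWT feature-extraction stage can be sketched with a hand-rolled Haar wavelet transform; the per-level detail-energy descriptor below is an illustrative choice, not necessarily the authors' feature set:

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Haar DWT: approximation and detail coefficients."""
    x = np.asarray(signal, float)
    if len(x) % 2:
        x = x[:-1]                      # drop the odd tail sample
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def dwt_features(signal, levels=3):
    """Mean absolute detail energy per level: a compact frame descriptor."""
    feats, a = [], np.asarray(signal, float)
    for _ in range(levels):
        a, d = haar_dwt(a)
        feats.append(float(np.mean(np.abs(d))))
    return np.array(feats)
```

Because the Haar transform is orthonormal, one level conserves energy, and a constant signal yields zero detail at every level: both are quick correctness checks.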

  14. Digital signature systems based on smart card and fingerprint feature

    Institute of Scientific and Technical Information of China (English)

    You Lin; Xu Maozhi; Zheng Zhiming

    2007-01-01

Two signature systems based on smart cards and fingerprint features are proposed. In one signature system, the cryptographic key is stored on the smart card and is only accessible when the signer's extracted fingerprint features match the stored template. To resist tampering on a public channel, the user's message and the signed message are encrypted with the signer's public key and the user's public key, respectively. In the other signature system, the keys are generated by combining the signer's fingerprint features, check bits, and a memorable key; there is no matching process and no key stored on the smart card. Additionally, this system generally has more than one public key, that is, there exist some pseudo public keys in addition to the real one.

  15. An image segmentation based method for iris feature extraction

    Institute of Scientific and Technical Information of China (English)

    XU Guang-zhu; ZHANG Zai-feng; MA Yi-de

    2008-01-01

In this article, the local anomalistic blocks in the iris, such as crypts and furrows, are used directly as iris features. A novel image segmentation method based on an intersecting cortical model (ICM) neural network is introduced to segment these anomalistic blocks. First, the normalized iris image is fed into the ICM neural network after enhancement. Second, the iris features are segmented out and output as a binary image by the ICM neural network. Finally, the fourth output pulse image produced by the ICM neural network is chosen as the iris code for the convenience of real-time processing. To estimate the performance of the presented method, an iris recognition platform was built and the Hamming distance between two iris codes was computed to measure the dissimilarity between them. The experimental results on the CASIA v1.0 and Bath iris image databases show that the proposed iris feature extraction algorithm has promising potential in iris recognition.
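The Hamming-distance comparison between two binary iris codes can be sketched in a few lines; the decision threshold below is illustrative, not a value from the article:

```python
import numpy as np

def hamming_distance(code_a, code_b):
    """Normalized Hamming distance between two equal-length binary iris codes."""
    a = np.asarray(code_a, bool)
    b = np.asarray(code_b, bool)
    return float(np.count_nonzero(a ^ b)) / a.size

def is_same_iris(code_a, code_b, threshold=0.32):
    # threshold is a hypothetical operating point for illustration
    return hamming_distance(code_a, code_b) < threshold
```

Identical codes give distance 0, complementary codes give 1, and genuine/impostor comparisons fall somewhere in between, which is what the decision threshold separates.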

  16. Features Extraction for Object Detection Based on Interest Point

    Directory of Open Access Journals (Sweden)

    Amin Mohamed Ahsan

    2013-05-01

Full Text Available In computer vision, object detection is an essential step for further processes such as object tracking and analysis. In the same context, extracted features play an important role in detecting objects correctly. In this paper we present a method to extract local features based on interest points: key points are detected within an image, and a histogram of oriented gradients (HOG) is then computed for the region surrounding each point. The proposed method uses the speeded-up robust features (SURF) detector as the interest-point detector and discards its descriptor; the new descriptor is computed with the HOG method, so the proposed method gains the advantages of both. To evaluate the proposed method, we used the well-known Caltech101 dataset. The initial results are encouraging despite the small amount of training data.

  17. DWT Based Fingerprint Recognition using Non Minutiae Features

    CERN Document Server

    R., Shashi Kumar D; Chhootaray, R K; Pattanaik, Sabyasachi

    2011-01-01

Forensic applications such as criminal investigation, terrorist identification, and national security require a strong fingerprint database and an efficient identification system. In this paper we propose a DWT-based fingerprint recognition algorithm using non-minutiae features (DWTFR). The fingerprint image is decomposed into the multiresolution sub-bands LL, LH, HL, and HH by applying a 3-level DWT. The dominant local orientation angle θ and the coherence are computed on the LL band only. The centre area features and edge parameters are determined at each DWT level by considering all four sub-bands. The comparison of a test fingerprint with a database fingerprint is decided by the Euclidean distance over all the features. The values of FAR, FRR, and TSR are observed to improve compared to the existing algorithm.
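The final Euclidean-distance comparison of a test fingerprint against the database can be sketched as a nearest-neighbour lookup over concatenated feature vectors; the names and data are hypothetical:

```python
import numpy as np

def match_fingerprint(query, database):
    """Return the database key whose feature vector is nearest (Euclidean)
    to the query, along with that distance.

    database: dict id -> 1D feature vector of the same length as query."""
    dists = {k: float(np.linalg.norm(query - v)) for k, v in database.items()}
    best = min(dists, key=dists.get)
    return best, dists[best]
```

A threshold on the returned distance would then decide acceptance (affecting FAR/FRR), while the identity of the nearest entry drives the recognition rate (TSR).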

  18. Short stature, digit anomalies and dysmorphic facial features are associated with the duplication of miR-17 ~ 92 cluster.

    Science.gov (United States)

    Hemmat, Morteza; Rumple, Melissa J; Mahon, Loretta W; Strom, Charles M; Anguiano, Arturo; Talai, Maryam; Nguyen, Bryant; Boyar, Fatih Z

    2014-01-01

    MicroRNAs (miRNAs) are key regulators of gene expression, playing important roles in development, homeostasis, and disease. Recent experimental evidence indicates that mutation or deregulation of the MIR17HG gene (miR-17 ~ 92 cluster) contributes to the pathogenesis of a variety of human diseases, including cancer and congenital developmental defects. We report on a 9-year-old boy who presented with developmental delay, autism spectrum disorder, short stature, mild macrocephaly, lower facial weakness, hypertelorism, downward-slanting palpebral fissures, brachydactyly, and clinodactyly. SNP-microarray analysis revealed a 516 kb microduplication at 13q31.3 involving the entire MIR17HG gene, which encodes the miR-17 ~ 92 polycistronic miRNA cluster, and the first five exons of the GPC5 gene. A family study confirmed that the microduplication was maternally inherited by the proband and one of his five half-brothers; digit and other skeletal anomalies were exclusive to the family members harboring the microduplication. This case represents the smallest microduplication reported to date at 13q31.3 and provides evidence supporting the important role of miR-17 ~ 92 gene dosage in normal growth and skeletal development. We postulate that any dosage abnormality of MIR17HG, either deletion or duplication, is sufficient to disrupt the skeletal developmental pathway, with variable outcomes ranging from growth retardation to overgrowth. PMID:24739087

  19. Facial Expression Recognition System based on Gabor filter

    Institute of Scientific and Technical Information of China (English)

    宋小双

    2016-01-01

    Facial expression recognition has potential applications in many aspects of daily life that remain unrealized due to the absence of effective recognition techniques. With increasing computerization, computer-based facial recognition has become widespread. This paper uses MATLAB as the development tool to study facial expressions. Sub-sampling and normalization are applied to the original expression image as pre-processing, and the locations of the facial features are found. A Gabor wavelet filter is then applied to the pre-processed image, Euclidean distances are computed on the filtered images, and finally a nearest-neighbour classifier finds the closest class, identifying the emotion type corresponding to the expression image.
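The Gabor filtering step can be sketched by constructing the kernels directly. The snippet below builds the real part of a 2D Gabor filter (Gaussian envelope times an oriented cosine carrier) and an 8-orientation bank; the size, wavelength and sigma values are illustrative, not the paper's.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a 2D Gabor filter: Gaussian envelope x oriented cosine wave."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates by theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

# A bank of 8 orientations, as commonly used for expression features:
bank = [gabor_kernel(31, wavelength=8, theta=k * np.pi / 8, sigma=5) for k in range(8)]
```

Each kernel responds strongly to edges perpendicular to its orientation; convolving a face image with the whole bank yields the multi-orientation feature maps that the distances are computed on.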

  20. Content-based retrieval of remote sensed images using a feature-based approach

    Science.gov (United States)

    Vellaikal, Asha; Dao, Son; Kuo, C.-C. Jay

    1995-01-01

    A feature-based representation model for content-based retrieval from a remote sensed image database is described in this work. The representation is formed by clustering spatially local pixels, and the cluster features are used to process several types of queries which are expected to occur frequently in the context of remote sensed image retrieval. Preliminary experimental results show that the feature-based representation provides a very promising tool for content-based access.

  1. Video segmentation using multiple features based on EM algorithm

    Institute of Scientific and Technical Information of China (English)

    张风超; 杨杰; 刘尔琦

    2004-01-01

    Object-based video segmentation is an important issue for many multimedia applications. A video segmentation method based on the EM algorithm is proposed. We treat video segmentation as an unsupervised classification problem and apply the EM algorithm to obtain maximum-likelihood estimates of the Gaussian model parameters for model-based segmentation. We simultaneously combine multiple features (motion, color) within a maximum-likelihood framework to obtain accurate segmentation results. We also use the temporal consistency among video frames to speed up the EM algorithm. Experimental results on typical MPEG-4 sequences and real scene sequences show that our method has attractive accuracy and robustness.
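The EM estimation of Gaussian model parameters described above can be sketched for spherical Gaussians over per-pixel feature vectors (e.g. concatenated motion and colour channels). A minimal NumPy version, with a crude deterministic initialisation rather than whatever scheme the paper uses:

```python
import numpy as np

def em_gmm(X, k=2, iters=50):
    """EM for a spherical Gaussian mixture over feature vectors X of shape (n, d)."""
    n, d = X.shape
    # crude deterministic initialisation: split points along the first feature
    order = np.argsort(X[:, 0])
    mu = np.array([X[chunk].mean(0) for chunk in np.array_split(order, k)])
    var = np.full(k, X.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities under each spherical Gaussian
        d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)          # (n, k)
        logp = -0.5 * d2 / var - 0.5 * d * np.log(var) + np.log(pi)
        logp -= logp.max(1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(1, keepdims=True)
        # M-step: maximum-likelihood parameter updates
        nk = r.sum(0) + 1e-12
        mu = (r.T @ X) / nk[:, None]
        d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)
        var = (r * d2).sum(0) / (d * nk) + 1e-9
        pi = nk / n
    return mu, var, pi, r

# Two well-separated synthetic "motion + colour" clusters are recovered:
np.random.seed(1)
X = np.vstack([np.random.randn(200, 3), np.random.randn(200, 3) + 6])
mu, var, pi, r = em_gmm(X)
```

The responsibilities `r` play the role of soft per-pixel segment labels; hard labels follow from `r.argmax(1)`.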

  2. Hyperspectral image classifier based on beach spectral feature

    International Nuclear Information System (INIS)

    The seashore, and coral banks in particular, is sensitive to human activities and environmental change. A multispectral image, with its coarse spectral resolution, is ill-suited to identifying subtle spectral distinctions between various beaches. In contrast, a hyperspectral image with narrow, consecutive channels increases our capability to retrieve minor spectral features, which suits the identification and classification of surface materials on the shore. This paper uses airborne hyperspectral data, together with ground spectral data, to study the beaches of Qingdao. The image data first underwent pre-processing to deal with noise, radiometric inconsistency and distortion. Subsequently, the reflectance spectrum, the derivative spectrum and the spectral absorption features of the beach surface were inspected in search of diagnostic features, and spectral indices specific to the seashore environment were developed. According to expert decisions based on image spectra, the beaches were ultimately classified into sand beach, rock beach, vegetation beach, mud beach, bare land and water. In situ reflectance spectra surveyed with a GER1500 field spectrometer validated the classification product. In conclusion, the expert-decision classification approach based on feature spectra is proved feasible for beaches.

  3. Facial expression recognition algorithm based on local Gabor wavelet automatic segmentation

    Institute of Scientific and Technical Information of China (English)

    刘姗姗; 王玲

    2009-01-01

    A local Gabor wavelet facial expression recognition algorithm based on automatic segmentation of still grayscale images containing facial expression information is introduced. First, mathematical morphology combined with integral projection is used to locate the brow and eye regions, and the mouth region is located by computing a template mean, which segments the expression sub-regions automatically. Second, features of the segmented sub-regions are extracted by Gabor wavelet transformation and then selected by Fisher Linear Discriminant (FLD) analysis, removing the redundancy and correlation among expression features. Finally the features are sent to a Support Vector Machine (SVM) to classify the expressions. The algorithm was tested on the Japanese female facial expression database; it is fully automatic and easy to implement, and the experiments verify its feasibility.
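The FLD selection step can be illustrated for the two-class case: the Fisher direction maximizes between-class separation relative to within-class scatter. A NumPy sketch on synthetic Gabor-like feature vectors (the ridge term and the data are illustrative):

```python
import numpy as np

def fisher_lda(X, y):
    """Fisher discriminant direction for two classes of feature vectors."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(0), X1.mean(0)
    # within-class scatter (a small ridge keeps the solve well conditioned)
    Sw = (X0 - m0).T @ (X0 - m0) + (X1 - m1).T @ (X1 - m1)
    w = np.linalg.solve(Sw + 1e-6 * np.eye(X.shape[1]), m1 - m0)
    return w / np.linalg.norm(w)

# Synthetic "Gabor features": the classes differ only in the first dimension.
np.random.seed(0)
X = np.vstack([np.random.randn(100, 4), np.random.randn(100, 4) + [3, 0, 0, 0]])
y = np.repeat([0, 1], 100)
w = fisher_lda(X, y)    # points mostly along the discriminative first feature
```

Projecting the high-dimensional Gabor responses onto such directions is what removes the redundant, non-discriminative components before the SVM stage.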

  4. Smartphone-based heart-rate measurement using facial images and a spatiotemporal alpha-trimmed mean filter.

    Science.gov (United States)

    Lee, J-S; Lin, K-W; Syue, J-L

    2016-04-29

    Currently, cardiovascular disease affects a relatively high proportion of the world's population. Thus, developing simple and effective methods for monitoring patients with cardiovascular disease is critical for research. Monitoring the heart rate of patients is a relatively simple and effective method for managing patients with this condition. For patients, the desired heart rate monitoring equipment should be portable, instantaneous, and accurate. Because smartphones have become the most prevalent mobile device, we utilized this technology as a platform for developing a novel heart-rate measurement system. Catering to the phenomenon of people using the front camera of their smartphones as a mirror, the proposed system was designed to analyze facial-image sequences captured using the front camera. A spatiotemporal alpha-trimmed mean filter was developed to estimate a user's heart rate quickly and accurately. The experimental results show that in addition to achieving these objectives, the developed system outperforms a similar personal computer-based system. In addition, the system performs effectively even when users are wearing glasses. Hence, the proposed system demonstrates practical value for people who must monitor their heart rate daily. PMID:27177107
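The paper's exact spatiotemporal filter is not specified here, but the alpha-trimmed mean itself is standard: sort a window, discard the extremes, average the rest. A sketch applying it along a frame-wise brightness trace, where it suppresses a motion-artefact spike that a plain mean would smear:

```python
import numpy as np

def alpha_trimmed_mean(x, alpha=0.4):
    """Mean of x after discarding the alpha/2 fraction of smallest and largest samples."""
    x = np.sort(np.asarray(x, float))
    k = int(len(x) * alpha / 2)
    return x[k:len(x) - k].mean()

def smooth_signal(signal, win=9, alpha=0.4):
    """Slide an alpha-trimmed mean over a frame-wise brightness signal."""
    pad = win // 2
    padded = np.pad(signal, pad, mode='edge')
    return np.array([alpha_trimmed_mean(padded[i:i + win], alpha)
                     for i in range(len(signal))])

# A pulse-like trace corrupted by one motion artefact: the trimmed mean rejects it.
t = np.arange(100)
pulse = np.sin(2 * np.pi * 1.2 * t / 30.0)
pulse[50] += 10.0                                  # outlier frame
clean = smooth_signal(pulse)
```

With alpha = 0 this degenerates to a moving average; with alpha approaching 1 it becomes a median filter, so the parameter trades noise suppression against robustness to outliers.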

  5. Facial Cosmetic Surgery

    Science.gov (United States)

    ... and Soft Tissue Surgery Dental and Soft Tissue Surgery Oral and facial surgeons surgically treat the soft tissues ...

  6. Biosensor method and system based on feature vector extraction

    Science.gov (United States)

    Greenbaum, Elias; Rodriguez, Jr., Miguel; Qi, Hairong; Wang, Xiaoling

    2012-04-17

    A method of biosensor-based detection of toxins comprises the steps of providing at least one time-dependent control signal generated by a biosensor in a gas or liquid medium, and obtaining a time-dependent biosensor signal from the biosensor in the gas or liquid medium to be monitored or analyzed for the presence of one or more toxins selected from chemical, biological or radiological agents. The time-dependent biosensor signal is processed to obtain a plurality of feature vectors using at least one of amplitude statistics and a time-frequency analysis. At least one parameter relating to toxicity of the gas or liquid medium is then determined from the feature vectors based on reference to the control signal.
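The amplitude-statistics half of the feature-vector step can be sketched as follows; the five statistics chosen below are illustrative, not the patent's exact set.

```python
import numpy as np

def feature_vector(sig):
    """Five amplitude statistics of a time-dependent biosensor signal."""
    sig = np.asarray(sig, float)
    mu, sd = sig.mean(), sig.std()
    z = (sig - mu) / (sd + 1e-12)
    return np.array([
        mu,
        sd,
        (z ** 3).mean(),                 # skewness
        (z ** 4).mean() - 3.0,           # excess kurtosis
        np.abs(np.diff(sig)).mean(),     # mean absolute first difference
    ])

# A toxin-induced perturbation shifts the feature vector away from the control:
x = np.linspace(0, 20, 500)
control = np.sin(x)
perturbed = control + 0.5 * (x > 10)     # step change after "exposure"
deviation = np.linalg.norm(feature_vector(perturbed) - feature_vector(control))
```

Thresholding a distance like `deviation` against the control-signal feature vector is one simple way to derive the toxicity-related parameter the claim describes; the patent also allows time-frequency features in place of amplitude statistics.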

  7. Automated annual cropland mapping using knowledge-based temporal features

    Science.gov (United States)

    Waldner, François; Canto, Guadalupe Sepulcre; Defourny, Pierre

    2015-12-01

    Global, timely, accurate and cost-effective cropland mapping is a prerequisite for reliable crop condition monitoring. This article presents a simple and comprehensive methodology capable of meeting the requirements of operational cropland mapping by proposing (1) five knowledge-based temporal features that remain stable over time, (2) a cleaning method that discards misleading pixels from a baseline land cover map and (3) a classifier that delivers high-accuracy cropland maps (> 80%). This was demonstrated over four contrasting agrosystems in Argentina, Belgium, China and Ukraine. It was found that the quality and accuracy of the baseline affect the certainty of the classification more than the classification output itself. In addition, it was shown that interpolation of the knowledge-based features increases the stability of the classifier, allowing its re-use from year to year without recalibration. Hence, the method shows potential for application at larger scales as well as for delivering cropland maps in near real time.

  8. Collaborative Tracking of Image Features Based on Projective Invariance

    Science.gov (United States)

    Jiang, Jinwei

    -mode sensors for improving the flexibility and robustness of the system. From the experimental results of three field tests of the LASOIS system, we observed that most of the errors in the image processing algorithm are caused by incorrect feature tracking. This dissertation addresses the problem of tracking features in image sequences acquired from cameras. Despite many alternatives, the iterative least-squares solution of the optical flow equation has been the most popular approach in the field. This dissertation builds on those efforts to enhance feature tracking by introducing a view-geometric constraint into the tracking problem, which provides collaboration among features. In contrast to alternative geometry-based methods, the proposed approach provides an online solution to optical flow estimation in a collaborative fashion, exploiting Horn and Schunck flow estimation regularized by view-geometric constraints. The proposed collaborative tracker estimates the motion of a feature based on the geometry of the scene and how the other features are moving. As an alternative, a new closed-form solution to tracking that combines image appearance with view geometry is also introduced. In particular, we use invariants in projective coordinates and conjecture that the traditional appearance-based solution can be significantly improved using view geometry. The geometric constraint is introduced by defining a new optical flow equation that exploits the scene geometry from a set of tracked features. At the end of each tracking loop, the quality of the tracked features is judged using both appearance similarity and geometric consistency. Our experiments demonstrate robust tracking performance even when the features are occluded or undergo appearance changes due to projective deformation of the template. 
The proposed collaborative tracking method is also tested in the visual navigation
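The "iterative least squares solution solving the optical flow equation" that the dissertation builds on is, in its simplest form, a Lucas-Kanade-style solve. A non-iterative NumPy sketch on a smooth synthetic image with a known sub-pixel translation (no geometric constraint, which is the dissertation's own addition):

```python
import numpy as np

def lk_flow(prev, curr):
    """Single least-squares solve of the optical flow constraint Ix*u + Iy*v = -It."""
    gy, gx = np.gradient(prev)
    A = np.stack([gx.ravel(), gy.ravel()], axis=1)
    b = -(curr - prev).ravel()
    u, v = np.linalg.solve(A.T @ A, A.T @ b)
    return u, v

# A smooth analytic image translated by a known sub-pixel amount:
y, x = np.mgrid[0:64, 0:64].astype(float)
f = lambda xx, yy: np.sin(xx / 6.0) + np.cos(yy / 5.0)
prev = f(x, y)
curr = f(x - 0.3, y - 0.2)               # content moved right 0.3 px, down 0.2 px
u, v = lk_flow(prev, curr)               # close to (0.3, 0.2)
```

In practice this solve is done per feature window and iterated with warping; the collaborative tracker additionally couples the windows through the scene geometry instead of solving each one independently.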

  9. Oro-facial-digital syndrome type II with otolaryngological manifestations

    Directory of Open Access Journals (Sweden)

    A Havle

    2015-01-01

    Full Text Available We present a case of oro-facial-digital syndrome type II (Mohr's syndrome), which is characterized by malformations of the oral cavity, face and digits. The facial and oral features include tongue nodules, cleft or high-arched palate, missing teeth, a broad nose and cleft lip. The digital features include clinodactyly, polydactyly, syndactyly, brachydactyly and duplication of the hallux.

  10. Clustering based on Random Graph Model embedding Vertex Features

    OpenAIRE

    Zanghi, Hugo; Volant, Stevenn; Ambroise, Christophe

    2009-01-01

    Large datasets with interactions between objects are common to numerous scientific fields (e.g. social science, the internet, biology). The interactions naturally define a graph, and a common way to explore or summarize such a dataset is graph clustering. Most techniques for clustering graph vertices use only the topology of connections, ignoring the information in the vertex features. In this paper, we provide a clustering algorithm exploiting both types of data, based on a statistical model with l...

  11. Evaluation of feature-based methods for automated network orientation

    OpenAIRE

    Apollonio, F I; Ballabeni, A.; M. Gaiani; F. Remondino

    2014-01-01

    Every day new tools and algorithms for automated image processing and 3D reconstruction become available, making it possible to process large networks of unoriented and markerless images and deliver sparse 3D point clouds in reasonable processing time. In this paper we evaluate some feature-based methods used to automatically extract the tie points necessary for calibration and orientation procedures, in order to better understand their performance for 3D reconstruction...

  12. Validation of Underwater Sensor Package Using Feature Based SLAM.

    Science.gov (United States)

    Cain, Christopher; Leonessa, Alexander

    2016-01-01

    Robotic vehicles working in new, unexplored environments must be able to locate themselves in the environment while constructing a picture of the objects that could act as obstacles preventing the vehicles from completing their desired tasks. In enclosed environments, underwater range sensors based on acoustics suffer performance issues due to reflections. Additionally, their relatively high cost makes them less than ideal for use on low-cost vehicles designed for underwater operation. In this paper we propose a sensor package composed of a downward-facing camera, used to perform feature-tracking-based visual odometry, and a custom vision-based two-dimensional rangefinder that can be used on low-cost underwater unmanned vehicles. To examine the performance of this sensor package in a SLAM framework, experimental tests are performed using an unmanned ground vehicle and two feature-based SLAM algorithms, the extended Kalman filter based approach and the Rao-Blackwellized particle filter based approach, to validate the sensor package. PMID:26999142

  13. Validation of Underwater Sensor Package Using Feature Based SLAM

    Directory of Open Access Journals (Sweden)

    Christopher Cain

    2016-03-01

    Full Text Available Robotic vehicles working in new, unexplored environments must be able to locate themselves in the environment while constructing a picture of the objects that could act as obstacles preventing the vehicles from completing their desired tasks. In enclosed environments, underwater range sensors based on acoustics suffer performance issues due to reflections. Additionally, their relatively high cost makes them less than ideal for use on low-cost vehicles designed for underwater operation. In this paper we propose a sensor package composed of a downward-facing camera, used to perform feature-tracking-based visual odometry, and a custom vision-based two-dimensional rangefinder that can be used on low-cost underwater unmanned vehicles. To examine the performance of this sensor package in a SLAM framework, experimental tests are performed using an unmanned ground vehicle and two feature-based SLAM algorithms, the extended Kalman filter based approach and the Rao-Blackwellized particle filter based approach, to validate the sensor package.


  15. Biological Computation Indexes of Brain Oscillations in Unattended Facial Expression Processing Based on Event-Related Synchronization/Desynchronization.

    Science.gov (United States)

    Yu, Bo; Ma, Lin; Li, Haifeng; Zhao, Lun; Bo, Hongjian; Wang, Xunda

    2016-01-01

    Estimation of human emotions from electroencephalogram (EEG) signals plays a vital role in affective brain-computer interfaces (BCI). The present study investigated the event-related synchronization (ERS) and event-related desynchronization (ERD) of typical brain oscillations in processing facial expressions under a non-attentional condition. The results show that the lower-frequency bands are mainly used to update facial expressions and to distinguish deviant stimuli from standard ones, whereas the higher-frequency bands are relevant to the automatic processing of different facial expressions. Accordingly, we relate each brain oscillation to the processing of unattended facial expressions through ERD and ERS measures. This research first reveals the contribution of each frequency band to the comprehension of facial expressions in the pre-attentive stage. It also provides evidence that participants have emotional experiences under a non-attentional condition. Therefore, the user's emotional state under a non-attentional condition can be recognized in real time from the ERD/ERS computation indexes of the different frequency bands, which can be used in affective BCI to provide the user with more natural and friendly interaction. PMID:27471545
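ERD/ERS is conventionally quantified as the percentage band-power change from a baseline interval to the event interval, with negative values indicating desynchronization. A NumPy sketch of that classic computation (the study's exact pipeline will differ in detail):

```python
import numpy as np

def band_power(sig, fs, lo, hi):
    """Power of sig in the [lo, hi) Hz band via the FFT periodogram."""
    freqs = np.fft.rfftfreq(len(sig), 1 / fs)
    psd = np.abs(np.fft.rfft(sig)) ** 2
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum()

def erd_ers(baseline, event, fs, band=(8, 13)):
    """ERD/ERS %: relative band-power change from baseline to event interval.
    Negative = desynchronization (ERD), positive = synchronization (ERS)."""
    p_base = band_power(baseline, fs, *band)
    p_event = band_power(event, fs, *band)
    return 100.0 * (p_event - p_base) / p_base

fs = 250
t = np.arange(fs) / fs
baseline = np.sin(2 * np.pi * 10 * t)            # strong 10 Hz alpha at rest
event = 0.5 * np.sin(2 * np.pi * 10 * t)         # alpha amplitude halves -> ERD
change = erd_ers(baseline, event, fs)
```

Halving the amplitude quarters the power, so `change` lands near -75%; repeating this per band (delta through gamma) gives the per-oscillation indexes the study relates to expression processing.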


  17. HBS: a novel biometric feature based on heartbeat morphology.

    Science.gov (United States)

    Islam, Md Saiful; Alajlan, Naif; Bazi, Yakoub; Hichri, Haikel S

    2012-05-01

    In this paper, a new feature named heartbeat shape (HBS) is proposed for ECG-based biometrics. HBS is computed from the morphology of segmented heartbeats. Computation of the feature involves three basic steps: 1) resampling and normalization of a heartbeat; 2) reduction of matching error; and 3) shift invariant transformation. In order to construct both gallery and probe templates, a few consecutive heartbeats which could be captured in a reasonably short period of time are required. Thus, the identification and verification methods become efficient. We have tested the proposed feature independently on two publicly available databases with 76 and 26 subjects, respectively, for identification and verification. The second database contains several subjects having clinically proven cardiac irregularities (atrial premature contraction arrhythmia). Experiments on these two databases yielded high identification accuracy (98% and 99.85%, respectively) and low verification equal error rate (1.88% and 0.38%, respectively). These results were obtained by using templates constructed from five consecutive heartbeats only. This feature compresses the original ECG signal significantly to be useful for efficient communication and access of information in telecardiology scenarios. PMID:22361664
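The three HBS steps map naturally onto a short sketch: resample to a fixed length, z-normalize, then apply a shift-invariant transform. The FFT magnitude used below is one possible shift-invariant choice, not necessarily the paper's, and the matching-error-reduction step is omitted.

```python
import numpy as np

def hbs_template(beat, n=128):
    """HBS-style template: resample, z-normalize, shift-invariant FFT magnitude.
    (The paper's matching-error-reduction step is omitted from this sketch.)"""
    beat = np.asarray(beat, float)
    # 1) resample the heartbeat to a fixed length, then z-normalize its amplitude
    grid = np.linspace(0, len(beat) - 1, n)
    z = np.interp(grid, np.arange(len(beat)), beat)
    z = (z - z.mean()) / (z.std() + 1e-12)
    # 3) the magnitude spectrum is invariant to (circular) shifts of the beat
    return np.abs(np.fft.rfft(z))

# A toy QRS-like bump and a misaligned copy yield nearly identical templates:
beat = np.exp(-0.5 * ((np.arange(200) - 100) / 8.0) ** 2)
t1 = hbs_template(beat)
t2 = hbs_template(np.roll(beat, 15))
```

Shift invariance is what lets gallery and probe templates be compared without precise R-peak alignment, which matters for beats segmented under arrhythmia.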

  18. Feature Selection Model Based Content Analysis for Combating Web Spam

    Directory of Open Access Journals (Sweden)

    Shipra Mittal

    2016-04-01

    Full Text Available With the increasing growth of the Internet and the World Wide Web, information retrieval (IR) has attracted much attention in recent years. Quick, accurate and high-quality information mining is the core concern of successful search companies. Likewise, spammers try to manipulate IR systems to fulfil their stealthy needs. Spamdexing (also known as web spamming) is one of the spamming techniques of adversarial IR, allowing users to exploit the ranking of specific documents in the search engine result page (SERP). Spammers take advantage of different features of the web indexing system for notorious motives. Suitable machine learning approaches can be useful in the analysis of spam patterns and in automated spam detection. This paper examines content-based features of web documents and discusses the potential of feature selection (FS) in upcoming studies to combat web spam. The objective of feature selection is to select the salient features to improve prediction performance and to understand the underlying data generation techniques. A publicly available web data set, namely WEBSPAM-UK2007, is used for all evaluations.

  19. Image recognition of diseased rice seeds based on color feature

    Science.gov (United States)

    Cheng, Fang; Ying, Yibin

    2004-11-01

    The objective of this research is to develop a digital image analysis algorithm for the detection of diseased rice seeds based on color features. The rice seeds used for this study comprised five varieties: Jinyou402, Shanyou10, Zhongyou207, Jiayou99 and IIyou3207. Images of rice seeds were acquired with a color machine vision system. Each original RGB image was converted to HSV color space and preprocessed so that the seed region showed its hue values while the background pixels were set to zero. The hue values were scaled to vary from 0.0 to 1.0. Six color features were then extracted and evaluated for their contribution to seed classification. Determined using the block method, the mean hue value shows the strongest discriminative ability. The Parzen window method was used to estimate the probability density distribution, and a threshold on mean hue was drawn to separate normal seeds from diseased seeds; the average accuracy on the test data set is 95% for Jinyou402. The hue histogram feature was then extracted for diseased seeds and partitioned into two clusters of spot-diseased and severely diseased seeds. The desired results were achieved when the two centroid locations were used to discriminate the disease degree. Combining the two features of mean hue and histogram, all seeds could be classified as normal, spot-diseased or severely diseased. Finally, the algorithm was applied to all five varieties to test its adaptability.
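The mean-hue thresholding at the core of the method can be sketched directly; the threshold value and the toy colours below are illustrative, not the calibrated values from the paper.

```python
import numpy as np

def rgb_to_hue(rgb):
    """Hue channel in [0, 1) for an RGB image with channels scaled to 0..1."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mx, mn = rgb.max(-1), rgb.min(-1)
    diff = mx - mn + 1e-12
    h = np.where(mx == r, ((g - b) / diff) % 6,
        np.where(mx == g, (b - r) / diff + 2,
                          (r - g) / diff + 4))
    return h / 6.0

def classify_seed(rgb, mask, threshold=0.12):
    """Mean hue over the seed mask; hues below the threshold read as brownish."""
    mean_hue = rgb_to_hue(rgb)[mask].mean()
    return ('diseased' if mean_hue < threshold else 'normal'), mean_hue

# Toy patches: a yellowish (healthy) seed vs a brownish (diseased) one.
healthy = np.full((8, 8, 3), [0.8, 0.7, 0.2])      # hue near 0.14
diseased = np.full((8, 8, 3), [0.5, 0.25, 0.1])    # hue near 0.06
mask = np.ones((8, 8), bool)
```

In the paper the threshold is not hand-picked but derived from the Parzen-window density estimate of the mean-hue distributions of the two classes.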

  20. Iris-based medical analysis by geometric deformation features.

    Science.gov (United States)

    Ma, Lin; Zhang, D; Li, Naimin; Cai, Yan; Zuo, Wangmeng; Wang, Kuanguan

    2013-01-01

    Iris analysis studies the relationship between human health and changes in the anatomy of the iris. Whereas iris recognition focuses on modeling the overall structure of the iris, iris diagnosis emphasizes detecting and analyzing local variations in the characteristics of irises. This paper focuses on the geometrical structure changes in irises that are caused by gastrointestinal diseases, and on measuring the observable deformations related to the roundness, diameter and other geometric forms of the pupil and the collarette. Pupil- and collarette-based features are defined and extracted. A series of experiments is implemented on our experimental pathological iris database, including manual clustering of both normal and pathological iris images, manual classification by non-specialists, manual classification by individuals with a medical background, verification of the classification ability of the proposed features, and disease recognition by applying the proposed features. The results prove the effectiveness and clinical diagnostic significance of the proposed features and a reliable recognition performance for automatic disease diagnosis. Our research results offer a novel systematic perspective for iridology studies and promote progress in both theoretical and practical work on iris diagnosis. PMID:23144041
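Roundness measurements of this kind are typically isoperimetric ratios: 4*pi*A/P^2 equals 1 for a perfect circle and falls as the pupil or collarette contour deforms. A NumPy sketch on polygonal contours (the paper's exact feature definitions may differ):

```python
import numpy as np

def roundness(points):
    """Isoperimetric roundness 4*pi*A/P**2 of a closed contour (1.0 for a circle)."""
    x, y = points[:, 0], points[:, 1]
    # shoelace area and chord-length perimeter of the closed polygon
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    perim = np.hypot(np.diff(x, append=x[0]), np.diff(y, append=y[0])).sum()
    return 4 * np.pi * area / perim ** 2

theta = np.linspace(0, 2 * np.pi, 400, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta)], 1)        # ideal pupil contour
ellipse = np.stack([2 * np.cos(theta), np.sin(theta)], 1)   # deformed pupil
```

Tracking how this ratio departs from 1 over the segmented pupil and collarette boundaries gives a simple scalar deformation feature of the kind the paper builds its classifiers on.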