WorldWideScience

Sample records for based facial feature

  1. Nonparametric Facial Feature Localization Using Segment-Based Eigenfeatures

    Directory of Open Access Journals (Sweden)

    Hyun-Chul Choi

    2016-01-01

    Full Text Available We present a nonparametric facial feature localization method using relative directional information between regularly sampled image segments and facial feature points. Instead of using any iterative parameter optimization technique or search algorithm, our method finds the location of facial feature points through a weighted concentration of the directional vectors originating from the image segments and pointing to the expected facial feature positions. Each directional vector is calculated as a linear combination of eigendirectional vectors, which are obtained by a principal component analysis of training facial segments in the feature space of the histogram of oriented gradients (HOG). Our method finds facial feature points quickly and accurately, since it utilizes statistical reasoning over all the training data without needing to extract local patterns at the estimated positions of facial features, run any iterative parameter optimization algorithm, or apply any search algorithm. In addition, we can reduce the storage size of the trained model by controlling the energy-preserving level of the HOG pattern space.
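
    As a rough illustration of the voting idea described above, the sketch below maps each image segment's HOG descriptor through a PCA basis ("eigendirectional" vectors) to a predicted offset toward a feature point, then accumulates votes from all segments and picks the peak. All data, dimensions and the least-squares mapping are illustrative stand-ins, not the authors' implementation.

```python
# Hedged sketch: eigendirection-style voting for a facial feature point.
# Synthetic data throughout; hog_dim, segment layout and k are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_segments, hog_dim = 200, 64, 36

# Training data: HOG descriptor of each segment and the offset (dx, dy) from the
# segment centre to the target feature point (e.g. an eye corner).
train_hog = rng.normal(size=(n_train * n_segments, hog_dim))
train_dir = rng.normal(size=(n_train * n_segments, 2))

# PCA of HOG space; the number of retained components ("energy preserving level")
# controls the stored model size.
mean_hog = train_hog.mean(axis=0)
_, _, Vt = np.linalg.svd(train_hog - mean_hog, full_matrices=False)
k = 10
basis = Vt[:k]                                            # (k, hog_dim)

# Linear map from PCA coefficients to directional vectors, fitted by least squares.
coeffs = (train_hog - mean_hog) @ basis.T                 # (N, k)
W, *_ = np.linalg.lstsq(coeffs, train_dir, rcond=None)    # (k, 2)

def vote_for_feature(seg_hogs, seg_centres, grid_shape=(128, 128)):
    """Accumulate directional votes from all segments and return the peak (x, y)."""
    votes = np.zeros(grid_shape)
    pred_dirs = ((seg_hogs - mean_hog) @ basis.T) @ W     # predicted offsets
    targets = np.round(seg_centres + pred_dirs).astype(int)
    for x, y in targets:
        if 0 <= y < grid_shape[0] and 0 <= x < grid_shape[1]:
            votes[y, x] += 1                              # weighted concentration
    return np.unravel_index(votes.argmax(), grid_shape)[::-1]

# Toy query: 64 segments sampled on a regular grid of a 128x128 face image.
seg_centres = np.stack(np.meshgrid(np.arange(8, 128, 16),
                                   np.arange(8, 128, 16)), -1).reshape(-1, 2)
print(vote_for_feature(rng.normal(size=(64, hog_dim)), seg_centres))
```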

  2. Likelihood Ratio-Based Detection of Facial Features

    NARCIS (Netherlands)

    Bazen, A.M.; Veldhuis, Raymond N.J.; Croonen, Gerrie H.

    One of the first steps in face recognition, after image acquisition, is registration. A simple but effective technique of registration is to align facial features, such as eyes, nose and mouth, as well as possible to a standard face. This requires an accurate automatic estimate of the locations of

  3. Facial expression recognition in the wild based on multimodal texture features

    Science.gov (United States)

    Sun, Bo; Li, Liandong; Zhou, Guoyan; He, Jun

    2016-11-01

    Facial expression recognition in the wild is a very challenging task. We describe our work on static and continuous facial expression recognition in the wild. We evaluate the recognition results of gray deep features and color deep features, and explore the fusion of multimodal texture features. For continuous facial expression recognition, we design two temporal-spatial dense scale-invariant feature transform (SIFT) features and combine multimodal features to recognize expressions from image sequences. For static facial expression recognition based on video frames, we extract dense SIFT and several deep convolutional neural network (CNN) features, including our proposed CNN architecture. We train linear support vector machine and partial least squares classifiers for these kinds of features on the static facial expression in the wild (SFEW) and acted facial expression in the wild (AFEW) datasets, and we propose a fusion network to combine all the extracted features at decision level. Our final results are 56.32% on the SFEW test set and 50.67% on the AFEW validation set, which are much better than the baseline recognition rates of 35.96% and 36.08%.
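
    As a minimal sketch of the decision-level fusion mentioned above, the snippet below trains one linear SVM per feature type (standing in for dense SIFT and CNN features) and combines their per-class decision scores with a weighted sum. The data and fusion weights are synthetic assumptions; the paper's fusion network is not reproduced.

```python
# Hedged sketch: decision-level fusion of per-feature-type SVM scores.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
n, n_classes = 300, 7
y = rng.integers(0, n_classes, n)
features = {"dense_sift": rng.normal(size=(n, 128)) + 0.1 * y[:, None],
            "cnn": rng.normal(size=(n, 256)) + 0.1 * y[:, None]}

# One classifier per feature type, trained on the first 200 samples.
clfs = {name: LinearSVC().fit(X[:200], y[:200]) for name, X in features.items()}

# Decision-level fusion: weighted sum of one-vs-rest scores, argmax over classes.
weights = {"dense_sift": 0.4, "cnn": 0.6}           # assumed fusion weights
fused = sum(weights[name] * clfs[name].decision_function(features[name][200:])
            for name in features)
pred = fused.argmax(axis=1)
print("fused accuracy on held-out toy data:", (pred == y[200:]).mean())
```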

  4. An Algorithm Based on the Self-Organized Maps for the Classification of Facial Features

    Directory of Open Access Journals (Sweden)

    Gheorghe Gîlcă

    2015-12-01

    Full Text Available This paper deals with an algorithm based on Self-Organized Map (SOM) networks which classifies facial features. The proposed algorithm can categorize the facial features defined by the input variables: eyebrows, mouth, and eyelids into a map of their grouping. The grouping map is based on calculating the distance between each input vector and each neuron of the output layer, the neuron with the minimum distance being declared the winner. The network structure consists of two levels: the first level contains three input vectors, each having forty-one values, while the second level contains the SOM competitive network, which consists of 100 neurons. The proposed system can classify facial features quickly and easily using the proposed algorithm based on SOMs.
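
    A minimal numpy sketch of the winner-takes-all step described above: each 41-value input vector is compared with every neuron's weight vector and the neuron at minimum Euclidean distance is declared the winner. The map size, learning rate and the simplified update without a neighborhood function are illustrative assumptions.

```python
# Hedged sketch: winner selection in a 10x10 SOM over 41-value feature vectors.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, dim = 100, 41
weights = rng.random((n_neurons, dim))

def winner(x):
    """Index of the neuron whose weight vector is closest to input x."""
    return int(np.argmin(np.linalg.norm(weights - x, axis=1)))

def train_step(x, lr=0.1):
    """Pull the winning neuron toward the input (neighborhood update omitted)."""
    w = winner(x)
    weights[w] += lr * (x - weights[w])
    return w

for x in rng.random((500, dim)):        # toy training loop
    train_step(x)
print("winning neuron for a new input:", winner(rng.random(dim)))
```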

  5. Tracking subtle stereotypes of children with trisomy 21: from facial-feature-based to implicit stereotyping.

    Science.gov (United States)

    Enea-Drapeau, Claire; Carlier, Michèle; Huguet, Pascal

    2012-01-01

    Stigmatization is one of the greatest obstacles to the successful integration of people with Trisomy 21 (T21 or Down syndrome), the most frequent genetic disorder associated with intellectual disability. Research on attitudes and stereotypes toward these people still focuses on explicit measures subjected to social-desirability biases, and neglects how variability in facial stigmata influences attitudes and stereotyping. The participants were 165 adults including 55 young adult students, 55 non-student adults, and 55 professional caregivers working with intellectually disabled persons. They were faced with implicit association tests (IAT), a well-known technique whereby response latency is used to capture the relative strength with which some groups of people--here photographed faces of typically developing children and children with T21--are automatically (without conscious awareness) associated with positive versus negative attributes in memory. Each participant also rated the same photographed faces (consciously accessible evaluations). We provide the first evidence that the positive bias typically found in explicit judgments of children with T21 is smaller for those whose facial features are highly characteristic of this disorder, compared to their counterparts with less distinctive features and to typically developing children. We also show that this bias can coexist with negative evaluations at the implicit level (with large effect sizes), even among professional caregivers. These findings support recent models of feature-based stereotyping, and more importantly show how crucial it is to go beyond explicit evaluations to estimate the true extent of stigmatization of intellectually disabled people.
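
    The record above relies on IAT response latencies. As an illustration of how such latencies are commonly condensed into an implicit-bias index, the sketch below computes the widely used D score (difference of mean latencies between incompatible and compatible blocks divided by the pooled standard deviation); this is a standard scoring convention and not necessarily the exact procedure used in the study.

```python
# Hedged sketch of a basic IAT D score; toy latencies in milliseconds.
import numpy as np

def iat_d_score(compatible_rt, incompatible_rt):
    """D = (mean incompatible - mean compatible) / pooled SD of all latencies."""
    compatible_rt = np.asarray(compatible_rt, dtype=float)
    incompatible_rt = np.asarray(incompatible_rt, dtype=float)
    pooled_sd = np.concatenate([compatible_rt, incompatible_rt]).std(ddof=1)
    return float((incompatible_rt.mean() - compatible_rt.mean()) / pooled_sd)

# Slower responses in the incompatible block indicate a stronger implicit association.
print(iat_d_score([650, 700, 640, 690], [820, 790, 860, 810]))
```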

  6. An Efficient Multimodal 2D + 3D Feature-based Approach to Automatic Facial Expression Recognition

    KAUST Repository

    Li, Huibin

    2015-07-29

    We present a fully automatic multimodal 2D + 3D feature-based facial expression recognition approach and demonstrate its performance on the BU-3DFE database. Our approach combines multi-order gradient-based local texture and shape descriptors in order to achieve efficiency and robustness. First, a large set of fiducial facial landmarks in the 2D face images and their corresponding 3D face scans are localized using a novel algorithm, namely the incremental Parallel Cascade of Linear Regression (iPar-CLR). Then, a novel Histogram of Second Order Gradients (HSOG) local image descriptor is used in conjunction with the widely used first-order gradient-based SIFT descriptor to describe the local texture around each 2D landmark. Similarly, the local geometry around each 3D landmark is described by two novel local shape descriptors constructed using first-order and second-order surface differential geometry quantities, i.e., the Histogram of mesh Gradients (meshHOG) and the Histogram of mesh Shape index (curvature quantization, meshHOS). Finally, the Support Vector Machine (SVM) based recognition results of all 2D and 3D descriptors are fused at both feature level and score level to further improve the accuracy. Comprehensive experimental results demonstrate that there are impressive complementary characteristics between the 2D and 3D descriptors. We use the BU-3DFE benchmark to compare our approach to state-of-the-art ones. Our multimodal feature-based approach outperforms the others by achieving an average recognition accuracy of 86.32%. Moreover, good generalization ability is shown on the Bosphorus database.

  7. Tracking subtle stereotypes of children with trisomy 21: from facial-feature-based to implicit stereotyping.

    Directory of Open Access Journals (Sweden)

    Claire Enea-Drapeau

    Full Text Available BACKGROUND: Stigmatization is one of the greatest obstacles to the successful integration of people with Trisomy 21 (T21 or Down syndrome), the most frequent genetic disorder associated with intellectual disability. Research on attitudes and stereotypes toward these people still focuses on explicit measures subjected to social-desirability biases, and neglects how variability in facial stigmata influences attitudes and stereotyping. METHODOLOGY/PRINCIPAL FINDINGS: The participants were 165 adults including 55 young adult students, 55 non-student adults, and 55 professional caregivers working with intellectually disabled persons. They were faced with implicit association tests (IAT), a well-known technique whereby response latency is used to capture the relative strength with which some groups of people--here photographed faces of typically developing children and children with T21--are automatically (without conscious awareness) associated with positive versus negative attributes in memory. Each participant also rated the same photographed faces (consciously accessible evaluations). We provide the first evidence that the positive bias typically found in explicit judgments of children with T21 is smaller for those whose facial features are highly characteristic of this disorder, compared to their counterparts with less distinctive features and to typically developing children. We also show that this bias can coexist with negative evaluations at the implicit level (with large effect sizes), even among professional caregivers. CONCLUSION: These findings support recent models of feature-based stereotyping, and more importantly show how crucial it is to go beyond explicit evaluations to estimate the true extent of stigmatization of intellectually disabled people.

  8. Local binary pattern variants-based adaptive texture features analysis for posed and nonposed facial expression recognition

    Science.gov (United States)

    Sultana, Maryam; Bhatti, Naeem; Javed, Sajid; Jung, Soon Ki

    2017-09-01

    Facial expression recognition (FER) is an important task for various computer vision applications. The task becomes challenging when it requires the detection and encoding of macro- and micropatterns of facial expressions. We present a two-stage texture feature extraction framework based on local binary pattern (LBP) variants and evaluate its significance in recognizing posed and nonposed facial expressions. We focus on the parametric limitations of the LBP variants and investigate their effects on optimal FER. The size of the local neighborhood is an important parameter of the LBP technique when extracting it from images. To make the LBP adaptive, we exploit the granulometric information of the facial images to find the local neighborhood size for the extraction of center-symmetric LBP (CS-LBP) features. Our two-stage texture representations consist of an LBP variant and the adaptive CS-LBP features. Among the presented two-stage texture feature extractions, the binarized statistical image features combined with the adaptive CS-LBP features were found to give high FER rates. Evaluation shows that the adaptive texture features perform better than the nonadaptive features and are competitive with other state-of-the-art approaches.
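
    As a small illustration of the CS-LBP feature used in the second stage above, the sketch below compares the four center-symmetric pixel pairs of an 8-neighborhood, yielding a 4-bit code per pixel and a 16-bin histogram. The adaptive part of the paper chooses the neighborhood size from granulometric information; here the radius is a fixed assumption.

```python
# Hedged sketch: center-symmetric LBP (CS-LBP) codes and histogram.
import numpy as np

def cs_lbp(img, radius=1, threshold=0.0):
    """CS-LBP code image for a grayscale float array (border pixels skipped)."""
    img = np.asarray(img, dtype=float)
    r = radius
    offsets = [(-r, -r), (-r, 0), (-r, r), (0, r)]     # 4 center-symmetric pairs
    h, w = img.shape[0] - 2 * r, img.shape[1] - 2 * r
    codes = np.zeros((h, w), dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        a = img[r + dy:r + dy + h, r + dx:r + dx + w]  # neighbor
        b = img[r - dy:r - dy + h, r - dx:r - dx + w]  # diametrically opposite neighbor
        codes |= ((a - b) > threshold).astype(int) << bit
    return codes

toy = np.random.default_rng(0).random((32, 32))
hist = np.bincount(cs_lbp(toy).ravel(), minlength=16)  # 16-bin CS-LBP histogram
print(hist)
```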

  9. Artificial Neural Networks and Gene Expression Programing based age estimation using facial features

    Directory of Open Access Journals (Sweden)

    Baddrud Z. Laskar

    2015-10-01

    Full Text Available This work is about estimating human age automatically through the analysis of facial images, which has many real-world applications. Due to rapid advances in the fields of machine vision, facial image processing, and computer graphics, automatic age estimation from face images has become one of the dominant research topics. This is due to widespread real-world applications in areas such as biometrics, security, surveillance, control, forensic art, entertainment, online customer management and support, along with cosmetology. As it is difficult to estimate the exact age, this system estimates a certain range of ages. Four sets of classifications have been used to assign a person's data to one of the different age groups. The uniqueness of this study is the use of two technologies, i.e., Artificial Neural Networks (ANN) and Gene Expression Programing (GEP), to estimate the age and then compare the results. New methodologies like Gene Expression Programing (GEP) have been explored here and significant results were found. The dataset has been developed to provide more efficient results by superior preprocessing methods. The proposed approach has been developed, trained and tested using both methods. A public dataset, FG-NET, was used to test the system. The quality of the proposed system for age estimation using facial features is shown by broad experiments on the available FG-NET database.

  10. Human Amygdala Tracks a Feature-Based Valence Signal Embedded within the Facial Expression of Surprise.

    Science.gov (United States)

    Kim, M Justin; Mattek, Alison M; Bennett, Randi H; Solomon, Kimberly M; Shin, Jin; Whalen, Paul J

    2017-09-27

    Human amygdala function has been traditionally associated with processing the affective valence (negative vs positive) of an emotionally charged event, especially those that signal fear or threat. However, this account of human amygdala function can be explained by alternative views, which posit that the amygdala might be tuned to either (1) general emotional arousal (activation vs deactivation) or (2) specific emotion categories (fear vs happy). Delineating the pure effects of valence independent of arousal or emotion category is a challenging task, given that these variables naturally covary under many circumstances. To circumvent this issue and test the sensitivity of the human amygdala to valence values specifically, we measured the dimension of valence within the single facial expression category of surprise. Given the inherent valence ambiguity of this category, we show that surprised expression exemplars are attributed valence and arousal values that are uniquely and naturally uncorrelated. We then present fMRI data from both sexes, showing that the amygdala tracks these consensus valence values. Finally, we provide evidence that these valence values are linked to specific visual features of the mouth region, isolating the signal by which the amygdala detects this valence information. SIGNIFICANCE STATEMENT There is an open question as to whether human amygdala function tracks the valence value of cues in the environment, as opposed to either a more general emotional arousal value or a more specific emotion category distinction. Here, we demonstrate the utility of surprised facial expressions because exemplars within this emotion category take on valence values spanning the dimension of bipolar valence (positive to negative) at a consistent level of emotional arousal. Functional neuroimaging data showed that amygdala responses tracked the valence of surprised facial expressions, unconfounded by arousal. Furthermore, a machine learning classifier identified

  11. Robust Feature Detection for Facial Expression Recognition

    Directory of Open Access Journals (Sweden)

    Spiros Ioannou

    2007-07-01

    Full Text Available This paper presents a robust and adaptable facial feature extraction system used for facial expression recognition in human-computer interaction (HCI) environments. Such environments are usually uncontrolled in terms of lighting and color quality, as well as human expressivity and movement; as a result, using a single feature extraction technique may fail in some parts of a video sequence, while performing well in others. The proposed system is based on a multicue feature extraction and fusion technique, which provides MPEG-4-compatible features assorted with a confidence measure. This confidence measure is used to pinpoint cases where detection of individual features may be wrong and reduce their contribution to the training phase or their importance in deducing the observed facial expression, while the fusion process ensures that the final result regarding the features will be based on the extraction technique that performed better given the particular lighting or color conditions. Real data and results are presented, involving both extreme and intermediate expression/emotional states, obtained within the sensitive artificial listener HCI environment that was generated in the framework of related European projects.

  12. Robust Feature Detection for Facial Expression Recognition

    Directory of Open Access Journals (Sweden)

    Ioannou Spiros

    2007-01-01

    Full Text Available This paper presents a robust and adaptable facial feature extraction system used for facial expression recognition in human-computer interaction (HCI) environments. Such environments are usually uncontrolled in terms of lighting and color quality, as well as human expressivity and movement; as a result, using a single feature extraction technique may fail in some parts of a video sequence, while performing well in others. The proposed system is based on a multicue feature extraction and fusion technique, which provides MPEG-4-compatible features assorted with a confidence measure. This confidence measure is used to pinpoint cases where detection of individual features may be wrong and reduce their contribution to the training phase or their importance in deducing the observed facial expression, while the fusion process ensures that the final result regarding the features will be based on the extraction technique that performed better given the particular lighting or color conditions. Real data and results are presented, involving both extreme and intermediate expression/emotional states, obtained within the sensitive artificial listener HCI environment that was generated in the framework of related European projects.

  13. The Research of the Facial Expression Recognition Method for Human-Computer Interaction Based on the Gabor Features of the Key Regions

    Directory of Open Access Journals (Sweden)

    Zhan Qun

    2014-08-01

    Full Text Available Because the Gabor features of the global face image are easily affected by interference, a facial expression recognition method based on applying the Gabor transform to key regions of the face image is discussed. The facial feature locations are obtained with an active shape model, and the Gabor features of the local areas around the key points related to expression are extracted. On this basis, PCA is utilized to reduce the dimensionality of the Gabor features. Finally, facial expression recognition is realized with a support vector machine. Compared with the Gabor features of the global face image, experimental results demonstrate that the Gabor features of the key regions of the face image can effectively increase the accuracy of facial expression recognition.
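
    As an illustrative sketch of this pipeline, the snippet below pools Gabor magnitude responses in small patches around a few facial landmarks, reduces them with PCA and classifies with an SVM. The landmark positions (normally obtained from an active shape model), filter parameters and data are synthetic assumptions.

```python
# Hedged sketch: key-region Gabor features -> PCA -> SVM.
import numpy as np
from skimage.filters import gabor
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def key_region_gabor(img, landmarks, patch=8,
                     thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Mean Gabor magnitude in a patch around each landmark, for each orientation."""
    feats = []
    for theta in thetas:
        real, imag = gabor(img, frequency=0.3, theta=theta)
        mag = np.hypot(real, imag)
        for (y, x) in landmarks:
            feats.append(mag[y - patch:y + patch, x - patch:x + patch].mean())
    return np.array(feats)

# Toy data: 60 "face images" with fixed landmark positions (eyes, mouth corners).
landmarks = [(20, 20), (20, 44), (48, 24), (48, 40)]
X = np.array([key_region_gabor(rng.random((64, 64)), landmarks) for _ in range(60)])
y = rng.integers(0, 6, 60)                     # 6 expression classes (random labels)

Xp = PCA(n_components=8).fit_transform(X)      # dimensionality reduction
clf = SVC(kernel="rbf").fit(Xp[:40], y[:40])
print("toy accuracy:", clf.score(Xp[40:], y[40:]))
```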

  14. Assessment for facial nerve paralysis based on facial asymmetry.

    Science.gov (United States)

    Anping, Song; Guoliang, Xu; Xuehai, Ding; Jiaxin, Song; Gang, Xu; Wu, Zhang

    2017-12-01

    Facial nerve paralysis (FNP) is a loss of facial movement due to facial nerve damage, which leads to significant physical pain and abnormal function in patients. Traditional FNP grading methods are based solely on the clinician's judgment and are time-consuming and subjective. Hence, an accurate, quantitative and objective method of evaluating FNP is proposed for constructing a standard system, which will be an invaluable tool for clinicians who treat patients with FNP. In this paper, we introduce a novel method for the quantitative assessment of FNP which combines an effective facial landmark estimation (FLE) algorithm and facial asymmetrical features (FAF) computed from facial movement images. The facial landmarks can be detected automatically and accurately using FLE. The FAF is based on the angles of key facial landmark connections and the mirror degree of multiple regions of the human face. Our method provides a significant contribution as it describes the displacement of facial organs and the changes in facial organ exposure while performing facial movements. Experiments show that our method is effective, accurate and convenient in practice, which is beneficial to FNP diagnosis and personalized rehabilitation therapy for each patient.
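
    To make the asymmetry idea more concrete, the sketch below mirrors right-side landmarks about the facial midline and scores the mismatch with the corresponding left-side landmarks; a paralyzed side would typically increase this score during facial movements. The exact angle- and region-based features of the paper are not reproduced, and the landmark pairs are illustrative assumptions.

```python
# Hedged sketch: a simple landmark-based facial asymmetry score.
import numpy as np

def asymmetry_score(landmarks, left_idx, right_idx, midline_x):
    """Mean distance between left landmarks and the mirror image of right landmarks."""
    pts = np.asarray(landmarks, dtype=float)
    left, right = pts[list(left_idx)], pts[list(right_idx)]
    mirrored = right.copy()
    mirrored[:, 0] = 2 * midline_x - mirrored[:, 0]   # reflect about the midline
    return float(np.linalg.norm(left - mirrored, axis=1).mean())

# Toy 6-point configuration: eye corners and mouth corners, one mouth corner drooping.
pts = [(30, 40), (70, 40),        # outer eye corners (left, right)
       (40, 42), (60, 42),        # inner eye corners
       (38, 70), (62, 78)]        # mouth corners, right side drooping
print(asymmetry_score(pts, left_idx=(0, 2, 4), right_idx=(1, 3, 5), midline_x=50))
```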

  15. Facial Expression Recognition Based on Facial Motion Patterns

    Directory of Open Access Journals (Sweden)

    Leila Farmohammadi

    2015-08-01

    Full Text Available Facial expression is one of the most powerful and direct mediums through which human beings communicate their feelings to other individuals. In recent years, many studies have been carried out on facial expression analysis. With developments in machine vision and artificial intelligence, facial expression recognition is considered a key technique in human-computer interaction and is applied in natural interaction between humans and computers, machine vision and psycho-medical therapy. In this paper, we develop a new method to recognize facial expressions based on detecting the differences between expressions and consequently assigning a unique pattern to each expression. By analyzing the image with a sliding neighborhood window, the recognition is estimated locally. The features are extracted as local binary features, and according to changes at the window points, facial points acquire a directional motion for each facial expression. Using the motion of points across all facial expressions and establishing a ranking system, we delete additional motion points that respectively decrease and increase the ranking size and strength. Classification is performed with the nearest-neighbor rule. The results of experiments on the full Cohn-Kanade dataset demonstrate that our proposed algorithm, compared with previous methods (a hierarchical algorithm combined with several features, morphological methods, and geometrical algorithms), has better performance and higher reliability.

  16. Detection of Facial Features in Scale-Space

    Directory of Open Access Journals (Sweden)

    P. Hosten

    2007-01-01

    Full Text Available This paper presents a new approach to the detection of facial features. A scale adapted Harris Corner detector is used to find interest points in scale-space. These points are described by the SIFT descriptor. Thus invariance with respect to image scale, rotation and illumination is obtained. Applying a Karhunen-Loeve transform reduces the dimensionality of the feature space. In the training process these features are clustered by the k-means algorithm, followed by a cluster analysis to find the most distinctive clusters, which represent facial features in feature space. Finally, a classifier based on the nearest neighbor approach is used to decide whether the features obtained from the interest points are facial features or not. 
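
    A rough sketch of this pipeline using off-the-shelf components is given below: SIFT keypoints and descriptors (which already combine a scale-space detector with the SIFT descriptor), PCA for dimensionality reduction, k-means clustering of training descriptors, and a nearest-cluster decision. The Harris-specific detector and the cluster-analysis step of the paper are simplified away, the image is a random stand-in, and an OpenCV build that includes SIFT is assumed.

```python
# Hedged sketch: interest points -> SIFT descriptors -> PCA -> k-means -> nearest cluster.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
img = (rng.random((128, 128)) * 255).astype(np.uint8)       # stand-in training image

sift = cv2.SIFT_create()
kps, desc = sift.detectAndCompute(img, None)
if desc is not None and len(desc) >= 5:
    desc_low = PCA(n_components=5).fit_transform(desc)       # Karhunen-Loeve style reduction
    clusters = KMeans(n_clusters=5, n_init=10).fit(desc_low) # candidate feature clusters
    # Nearest-neighbor style decision: the distance from a new descriptor to the closest
    # cluster centre could be thresholded to accept or reject it as a facial feature.
    query = desc_low[0]
    d = np.linalg.norm(clusters.cluster_centers_ - query, axis=1)
    print("closest cluster:", int(d.argmin()), "distance:", float(d.min()))
```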

  17. Extraction of Facial Features from Color Images

    Directory of Open Access Journals (Sweden)

    J. Pavlovicova

    2008-09-01

    Full Text Available In this paper, a method for the localization and extraction of faces and characteristic facial features such as eyes, mouth and face boundaries from color image data is proposed. This approach exploits the color properties of human skin to localize image regions that are face candidates. Facial feature extraction is performed only on preselected face-candidate regions. For eye and mouth localization, color information and local contrast around the eyes are used. The ellipse of the face boundary is determined using the gradient image and the Hough transform. The algorithm was tested on the FERET image database.
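
    A minimal sketch of the skin-color step described above: convert the image to YCbCr and threshold the chrominance channels with commonly cited skin bounds (the exact thresholds are assumptions, not the paper's values); connected components of the mask would then serve as face candidates.

```python
# Hedged sketch: skin-color mask as the first step of face-candidate localization.
import numpy as np
import cv2

def skin_mask(bgr_img):
    """Binary mask of likely skin pixels in a BGR image (YCrCb chrominance bounds)."""
    ycrcb = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2YCrCb)
    cr, cb = ycrcb[..., 1], ycrcb[..., 2]
    return ((cr > 133) & (cr < 173) & (cb > 77) & (cb < 127)).astype(np.uint8)

# Toy image: a uniform skin-like BGR color.
toy = np.full((64, 64, 3), (120, 150, 200), dtype=np.uint8)
print("skin pixel fraction:", float(skin_mask(toy).mean()))
```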

  18. Odor valence linearly modulates attractiveness, but not age assessment, of invariant facial features in a memory-based rating task.

    Science.gov (United States)

    Seubert, Janina; Gregory, Kristen M; Chamberland, Jessica; Dessirier, Jean-Marc; Lundström, Johan N

    2014-01-01

    Scented cosmetic products are used across cultures as a way to favorably influence one's appearance. While crossmodal effects of odor valence on perceived attractiveness of facial features have been demonstrated experimentally, it is unknown whether they represent a phenomenon specific to affective processing. In this experiment, we presented odors in the context of a face battery with systematic feature manipulations during a speeded response task. Modulatory effects of linear increases of odor valence were investigated by juxtaposing subsequent memory-based ratings tasks--one predominantly affective (attractiveness) and a second, cognitive (age). The linear modulation pattern observed for attractiveness was consistent with additive effects of face and odor appraisal. Effects of odor valence on age perception were not linearly modulated and may be the result of cognitive interference. Affective and cognitive processing of faces thus appear to differ in their susceptibility to modulation by odors, likely as a result of privileged access of olfactory stimuli to affective brain networks. These results are critically discussed with respect to potential biases introduced by the preceding speeded response task.

  19. Towards the automation of forensic facial individualisation: Comparing forensic to non forensic eyebrow features

    NARCIS (Netherlands)

    Zeinstra, Christopher Gerard; Veldhuis, Raymond N.J.; Spreeuwers, Lieuwe Jan

    2014-01-01

    The Facial Identification Scientific Working Group (FISWG) publishes recommendations regarding one-to-one facial comparisons. At this moment a draft version of a facial image comparison feature list for morphological analysis has been published. This feature list is based on casework experience by

  20. Facial and Ocular Features of Marfan Syndrome

    Directory of Open Access Journals (Sweden)

    Juan C. Leoni

    2014-10-01

    Full Text Available Marfan syndrome is the most common inherited disorder of connective tissue affecting multiple organ systems. Identification of the facial, ocular and skeletal features should prompt referral for aortic imaging since sudden death by aortic dissection and rupture remains a major cause of death in patients with unrecognized Marfan syndrome. Echocardiography is recommended as the initial imaging test, and once a dilated aortic root is identified magnetic resonance or computed tomography should be done to assess the entire aorta. Prophylactic aortic root replacement is safe and has been demonstrated to improve life expectancy in patients with Marfan syndrome. Medical therapy for Marfan syndrome includes the use of beta blockers in older children and adults with an enlarged aorta. Addition of angiotensin receptor antagonists has been shown to slow the progression of aortic root dilation compared to beta blockers alone. Lifelong and regular follow up in a center for specialized care is important for patients with Marfan syndrome. We present a case of a patient with clinical features of Marfan syndrome and discuss possible therapeutic interventions for her dilated aorta.

  1. Model-based coding of facial images based on facial muscle motion through isodensity maps

    Science.gov (United States)

    So, Ikken; Nakamura, Osamu; Minami, Toshi

    1991-11-01

    A model-based coding system has come under serious consideration for the next generation of image coding schemes, aimed at greater efficiency in TV telephone and TV conference systems. In this model-based coding system, the sender's model image is transmitted and stored at the receiving side before the start of the conversation. During the conversation, feature points are extracted from the facial image of the sender and are transmitted to the receiver. The facial expression of the sender is reconstructed from the received feature points and a wireframe model constructed at the receiving side. However, the conventional methods have the following problems: (1) Extreme changes of the gray level, such as in wrinkles caused by a change of expression, cannot be reconstructed at the receiving side. (2) Extraction of stable feature points from facial images with irregular features such as spectacles or facial hair is very difficult. To cope with the first problem, a new algorithm based on isodensity lines, which can represent detailed changes in expression by density correction, has already been proposed and good results obtained. As for the second problem, we propose in this paper a new algorithm to reconstruct facial images by transmitting other feature points extracted from isodensity maps.
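
    To make the isodensity idea concrete, the sketch below splits the gray-level range into bands and produces one binary "isodensity" map per band, from which feature points could then be extracted; the band count and image are illustrative assumptions.

```python
# Hedged sketch: decompose a grayscale image into isodensity (gray-level band) maps.
import numpy as np

def isodensity_maps(gray, n_bands=8):
    """Return a list of binary maps, one per gray-level band."""
    gray = np.asarray(gray, dtype=float)
    edges = np.linspace(gray.min(), gray.max(), n_bands + 1)
    maps = []
    for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        upper = gray <= hi if i == n_bands - 1 else gray < hi   # last band is inclusive
        maps.append(((gray >= lo) & upper).astype(np.uint8))
    return maps

toy_face = np.random.default_rng(0).integers(0, 256, (64, 64))
print([int(m.sum()) for m in isodensity_maps(toy_face)])        # pixel count per band
```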

  2. Regression-based Multi-View Facial Expression Recognition

    NARCIS (Netherlands)

    Rudovic, Ognjen; Patras, Ioannis; Pantic, Maja

    2010-01-01

    We present a regression-based scheme for multi-view facial expression recognition based on 2-D geometric features. We address the problem by mapping facial points (e.g. mouth corners) from non-frontal to frontal view where further recognition of the expressions can be performed using a

  3. Prediction of Mortality Based on Facial Characteristics

    OpenAIRE

    Delorme, Arnaud; Pierce, Alan; Michel, Leena; Radin, Dean

    2016-01-01

    Recent studies have shown that characteristics of the face contain a wealth of information about health, age and chronic clinical conditions. Such studies involve objective measurement of facial features correlated with historical health information. But some individuals also claim to be adept at gauging mortality based on a glance at a person’s photograph. To test this claim, we invited 12 such individuals to see if they could determine if a person was alive or dead based solely on a brief e...

  4. Prediction of mortality based on facial characteristics

    OpenAIRE

    Arnaud Delorme; Arnaud Delorme; Alan Pierce; Leena Michel; Dean Radin

    2016-01-01

    Recent studies have shown that characteristics of the face contain a wealth of information about health, age and chronic clinical conditions. Such studies involve objective measurement of facial features correlated with historical health information. But some individuals also claim to be adept at gauging mortality based on a glance at a person’s photograph. To test this claim, we invited 12 such individuals to see if they could determine if a person was alive or dead based solely on a brief ...

  5. Vascular Ehlers-Danlos Syndrome Without the Characteristic Facial Features

    Science.gov (United States)

    Inokuchi, Ryota; Kurata, Hideaki; Endo, Kiyoshi; Kitsuta, Yoichi; Nakajima, Susumu; Hatamochi, Atsushi; Yahagi, Naoki

    2014-01-01

    Abstract As a type of Ehlers-Danlos syndrome (EDS), vascular EDS (vEDS) is typified by a number of characteristic facial features (eg, large eyes, small chin, sunken cheeks, thin nose and lips, lobeless ears). However, vEDS does not typically display hypermobility of the large joints and skin hyperextensibility, which are features typical of the more common forms of EDS. Thus, colonic perforation or aneurysm rupture may be the first presentation of the disease. Because both complications are associated with a reduced life expectancy for individuals with this condition, an awareness of the clinical features of vEDS is important. Here, we describe the treatment of vEDS lacking the characteristic facial attributes in a 24-year-old healthy man who presented to the emergency room with abdominal pain. Enhanced computed tomography revealed diverticula and perforation in the sigmoid colon. The perforated lesion of the sigmoid colon was removed, and the Hartmann procedure was performed. During the surgery, control of bleeding was required because of vascular fragility. Subsequent molecular and genetic analysis was performed based on the suspected diagnosis of vEDS. These analyses revealed reduced type III collagen synthesis in cultured skin fibroblasts and identified a previously undocumented mutation in the gene for the α1 chain of type III collagen, confirming the diagnosis of vEDS. After eliciting a detailed medical profile, we learned his mother had a history of extensive bruising since childhood and idiopathic hematothorax. Both were prescribed oral celiprolol. One year after admission, the patient was free of recurrent perforation. This case illustrates that awareness of the clinical characteristics of vEDS and the family history is important because of the high mortality from this condition even in young people. Importantly, genetic assays could help in determining the surgical procedure and offer benefits to relatives, since this condition is inherited in an autosomal dominant manner.

  6. Extracted facial feature of racial closely related faces

    Science.gov (United States)

    Liewchavalit, Chalothorn; Akiba, Masakazu; Kanno, Tsuneo; Nagao, Tomoharu

    2010-02-01

    Human faces contain a lot of demographic information such as identity, gender, age, race and emotion. Human beings can perceive these pieces of information and use them as important clues in social interaction with other people. Race perception is considered one of the most delicate and sensitive parts of face perception. There is much research concerning image-based race recognition, but most of it focuses on major race groups such as Caucasoid, Negroid and Mongoloid. This paper focuses on how people classify the race of racially closely related groups. As a sample of a racially closely related group, we chose Japanese and Thai faces to represent the difference between Northern and Southern Mongoloid. Three psychological experiments were performed to study the strategies of face perception in race classification. As a result of the psychological experiments, it can be suggested that race perception is an ability that can be learned. Eyes and eyebrows attract the most attention, and the eyes are a significant factor in race perception. Principal Component Analysis (PCA) was performed to extract facial features of the sample race groups. The extracted race features of texture and shape were used to synthesize faces. The results suggest that racial features rely on detailed texture rather than on shape. This research is fundamental work on race perception, which is essential for the establishment of a human-like race recognition system.

  7. Facial features matching using a virtual structuring element

    NARCIS (Netherlands)

    Valenti, R.; Sebe, N.; Gevers, T.

    2008-01-01

    Face analysis in a real-world environment is a complex task as it should deal with challenging problems such as pose variations, illumination changes and complex backgrounds. The use of active appearance models for facial features detection is often successful in restricted environments, but the

  8. Variation of facial features among three African populations: Body height match analyses.

    Science.gov (United States)

    Taura, M G; Adamu, L H; Gudaji, A

    2017-01-01

    Body height is one of the variables that show a correlation with facial craniometry. Here we seek to discriminate three populations (Nigerians, Ugandans and Kenyans) using facial craniometry based on different categories of body height in adult males. A total of 513 individuals comprising 234 Nigerians, 169 Ugandans and 110 Kenyans with a mean age of 25.27, s=5.13 (18-40 years) participated. Paired and unpaired facial features were measured using direct craniometry. Multivariate and stepwise discriminant function analyses were used for differentiation of the three populations. The results showed significant overall facial differences among the three populations in all the body height categories. Skull height, total facial height, outer canthal distance, exophthalmometry, right ear width and nasal length were significantly different among the three populations irrespective of body height category. Other variables were sensitive to body height. Stepwise discriminant function analyses included a maximum of six variables for better discrimination between the three populations. The single best discriminator of the groups was total facial height; however, for body height >1.70 m the single best discriminator was nasal length. Most of the variables contributed to function 1, which therefore discriminated better than function 2. In conclusion, adult body height, in addition to other factors such as age, sex and ethnicity, should be considered when making decisions based on facial craniometry. However, not all the facial linear dimensions were sensitive to body height. Copyright © 2016 Elsevier GmbH. All rights reserved.

  9. Facial expression recognition based on improved deep belief networks

    Science.gov (United States)

    Wu, Yao; Qiu, Weigen

    2017-08-01

    In order to improve the robustness of facial expression recognition, a method of facial expression recognition based on Local Binary Patterns (LBP) combined with improved deep belief networks (DBNs) is proposed. This method uses LBP to extract features and then uses the improved deep belief networks as the detector and classifier of the LBP features. The combination of LBP and improved deep belief networks is thus realized for facial expression recognition. On the JAFFE (Japanese Female Facial Expression) database, the recognition rate is significantly improved.

  10. Facial features and social attractiveness: preferences of Bosnian female students

    Directory of Open Access Journals (Sweden)

    Nina Bosankić

    2015-09-01

    Full Text Available This research aimed at testing the multiple fitness hypothesis of attraction by investigating the relationship between male facial characteristics and female students' reported readiness to engage in various social relations. A total of 27 male photos were evaluated on five dimensions on a seven-point Likert-type scale ranging from -3 to 3 by a convenience sample of 90 female students of the University of Sarajevo. The dimensions were: desirable to date – not desirable to date; desirable to marry – not desirable to marry; desirable to have sex with – not desirable to have sex with; desirable to be a friend – not desirable to be a friend; attractive – not attractive. Facial metric measurements of facial features such as the distance between the eyes and the width and height of the smile were performed using AutoCAD. The results indicate that only smile width positively correlates with the desirability of establishing friendship, whilst none of the other characteristics correlate with any of the other dimensions. This leads to the conclusion that motivation to establish various social relations cannot be reduced to mere physical appearance, mainly facial features, but involves many other variables yet to be investigated.

  11. Orientations for the successful categorization of facial expressions and their link with facial features.

    Science.gov (United States)

    Duncan, Justin; Gosselin, Frédéric; Cobarro, Charlène; Dugas, Gabrielle; Blais, Caroline; Fiset, Daniel

    2017-12-01

    Horizontal information was recently suggested to be crucial for face identification. In the present paper, we expand on this finding and investigate the role of orientations for all the basic facial expressions and neutrality. To this end, we developed orientation bubbles to quantify utilization of the orientation spectrum by the visual system in a facial expression categorization task. We first validated the procedure in Experiment 1 with a simple plaid-detection task. In Experiment 2, we used orientation bubbles to reveal the diagnostic-i.e., task relevant-orientations for the basic facial expressions and neutrality. Overall, we found that horizontal information was highly diagnostic for expressions-surprise excepted. We also found that utilization of horizontal information strongly predicted performance level in this task. Despite the recent surge of research on horizontals, the link with local features remains unexplored. We were thus also interested in investigating this link. In Experiment 3, location bubbles were used to reveal the diagnostic features for the basic facial expressions. Crucially, Experiments 2 and 3 were run in parallel on the same participants, in an interleaved fashion. This way, we were able to correlate individual orientation and local diagnostic profiles. Our results indicate that individual differences in horizontal tuning are best predicted by utilization of the eyes.

  12. Facial Feature Extraction Using Frequency Map Series in PCNN

    Directory of Open Access Journals (Sweden)

    Rencan Nie

    2016-01-01

    Full Text Available Pulse coupled neural network (PCNN) has been widely used in image processing. The 3D binary map series (BMS) generated by PCNN effectively describes image feature information such as edges and regional distribution, so BMS can be treated as the basis for extracting a 1D oscillation time series (OTS) for an image. However, traditional methods using BMS did not consider the correlation of the binary sequences in BMS or the spatial structure of each map. By further processing BMS, a novel facial feature extraction method is proposed. Firstly, considering the correlation among maps in BMS, a method is put forward to transform BMS into a frequency map series (FMS); this lessens the influence of noncontinuous feature regions in binary images on OTS-BMS. Then, by computing the 2D entropy for every map in FMS, the 3D FMS is transformed into a 1D OTS (OTS-FMS), which has good geometric invariance for the facial image and contains the spatial structure information of the image. Finally, in analyzing the OTS-FMS, the standard Euclidean distance is used to measure distances between OTS-FMS. Experimental results verify the effectiveness of OTS-FMS in facial recognition, showing better recognition performance than other feature extraction methods.
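
    The map-to-scalar step described above can be illustrated as follows: each map in the series is condensed to its 2D entropy, computed from the joint distribution of pixel value and local-neighborhood mean, and the sequence of entropies forms the 1D oscillation time series. The PCNN stage that would normally generate the map series is replaced here by random stand-in maps, and the entropy definition is a common one rather than necessarily the paper's exact formulation.

```python
# Hedged sketch: 2D entropy per map -> 1D oscillation time series (OTS).
import numpy as np
from scipy.ndimage import uniform_filter

def entropy_2d(binary_map, bins=8):
    """2D entropy from the joint histogram of (pixel value, 3x3 local mean)."""
    local_mean = uniform_filter(binary_map.astype(float), size=3)
    hist, _, _ = np.histogram2d(binary_map.ravel().astype(float),
                                local_mean.ravel(), bins=[2, bins])
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
map_series = [(rng.random((64, 64)) > t).astype(np.uint8)
              for t in np.linspace(0.1, 0.9, 20)]           # stand-in for PCNN output
ots = np.array([entropy_2d(m) for m in map_series])         # 1D signature of the image
print(ots.round(3))
```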

  13. Facial expression identification using 3D geometric features from Microsoft Kinect device

    Science.gov (United States)

    Han, Dongxu; Al Jawad, Naseer; Du, Hongbo

    2016-05-01

    Facial expression identification is an important part of face recognition and closely related to emotion detection from face images. Various solutions have been proposed in the past using different types of cameras and features. The Microsoft Kinect device has been widely used for multimedia interactions. More recently, the device has been increasingly deployed for supporting scientific investigations. This paper explores the effectiveness of using the device in identifying emotional facial expressions such as surprise, smile, sad, etc. and evaluates the usefulness of 3D data points on a face mesh structure obtained from the Kinect device. We present a distance-based geometric feature component that is derived from the distances between points on the face mesh and selected reference points in a single frame. The feature components extracted across a sequence of frames starting and ending with a neutral expression represent a whole expression. The feature vector eliminates the need for complex face orientation correction, simplifying the feature extraction process and making it more efficient. We applied a kNN classifier that exploits a feature-component-based similarity measure following the principle of dynamic time warping to determine the closest neighbors. Preliminary tests on a small-scale database of different facial expressions show the promise of the newly developed features and the usefulness of the Kinect device in facial expression identification.
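
    A hedged sketch of the distance-based feature and sequence matching described above: per frame, distances from selected mesh points to a reference point (e.g. the nose tip) form the feature vector, and two frame sequences are compared with a simple dynamic-time-warping cost that a kNN classifier could use. The point indices and data are synthetic assumptions, not Kinect output.

```python
# Hedged sketch: per-frame distance features and a basic DTW sequence distance.
import numpy as np

def frame_features(points_3d, ref_idx, feature_idx):
    """Distances from chosen mesh points to a reference point in one frame."""
    ref = points_3d[ref_idx]
    return np.linalg.norm(points_3d[list(feature_idx)] - ref, axis=1)

def dtw_distance(seq_a, seq_b):
    """Classic O(n*m) dynamic time warping between two sequences of feature vectors."""
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

rng = np.random.default_rng(0)
# Two toy expression sequences: frames of 20 mesh points with 3D coordinates each.
seq1 = [frame_features(rng.random((20, 3)), 0, range(1, 8)) for _ in range(10)]
seq2 = [frame_features(rng.random((20, 3)), 0, range(1, 8)) for _ in range(12)]
print("DTW distance between sequences:", round(dtw_distance(seq1, seq2), 3))
```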

  14. Neural network based facial recognition system

    Science.gov (United States)

    Luebbers, Paul G.; Uwechue, Okechukwu A.; Pandya, Abhijit S.

    1994-03-01

    Researchers have for many years tried to develop machine recognition systems using video images of the human face as the input, with limited success. This paper presents a technique for recognizing individuals based on facial features using a novel multi-layer neural network architecture called `PWRNET'. We envision a real-time version of this technique being used for high-security applications. Two systems are proposed. One involves taking a grayscale video image and using it directly; the other involves decomposing the grayscale image into a series of binary images using the isodensity regions of the image. Isodensity regions are the areas within an image where the intensity is within a certain range. The binary image is produced by setting the pixels inside this intensity range to one and the rest of the pixels in the image to zero. Features based on moments are subsequently extracted from these grayscale images. These features are then used for classification of the image. The classification is accomplished using an artificial neural network called `PWRNET', which produces a polynomial expression of the trained network. There is one neural network for each individual to be identified, with an output value indicating either positive or negative identification. A detailed development of the design is presented, and identification for a small population of individuals is presented. It is shown that the system is effective for variations in both scale and translation, which are considered to be reasonable variations for this type of facial identification.

  15. Auto zoom crop from face detection and facial features

    Science.gov (United States)

    Ptucha, Raymond; Rhoda, David; Mittelstaedt, Brian

    2013-02-01

    The automatic recomposition of a digital photograph to a more pleasing composition or alternate aspect ratio is a very powerful concept. The human face is arguably one of the most frequently photographed and important subjects. Although evidence suggests only a minority of photos contain faces, the vast majority of images used in consumer photobooks contain faces. Face detection and facial understanding algorithms are becoming ubiquitous to the computational photography community and facial features have a dominating influence on both aesthetic and compositional properties of the displayed image. We introduce a fully automatic recomposition algorithm, capable of zooming in to a more pleasing composition, re-trimming to alternate aspect ratios, or a combination thereof. We use facial bounding boxes, input and output aspect ratios, along with derived composition rules to introduce a facecrop algorithm with superior performance to more complex saliency or region of interest detection algorithms. We further introduce sophisticated facial understanding rules to improve user satisfaction further. We demonstrate through psychophysical studies the improved subjective quality of our method compared to state-of-the-art techniques.
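
    To illustrate the kind of rule-based recomposition described above, the sketch below computes a crop window from a face bounding box, a requested aspect ratio and a simple rule that keeps the eyes near the upper third of the frame; the zoom factor and placement rule are illustrative assumptions, not the paper's composition rules.

```python
# Hedged sketch: compute a face-centred crop at a target aspect ratio.
def face_crop(img_w, img_h, face, target_aspect, zoom=2.5):
    """face = (x, y, w, h); returns an integer (x0, y0, x1, y1) crop window."""
    fx, fy, fw, fh = face
    crop_h = min(img_h, fh * zoom)           # crop height is roughly zoom x face height
    crop_w = min(img_w, crop_h * target_aspect)
    crop_h = crop_w / target_aspect          # re-impose the requested aspect ratio
    cx = fx + fw / 2.0
    top = fy + 0.4 * fh - crop_h / 3.0       # put the eye line near the upper third
    x0 = min(max(cx - crop_w / 2.0, 0), img_w - crop_w)
    y0 = min(max(top, 0), img_h - crop_h)
    return (round(x0), round(y0), round(x0 + crop_w), round(y0 + crop_h))

# 4000x3000 photo with a detected face at (1800, 900, 400, 400), recomposed to a square.
print(face_crop(4000, 3000, (1800, 900, 400, 400), target_aspect=1.0))
```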

  16. A Micro-GA Embedded PSO Feature Selection Approach to Intelligent Facial Emotion Recognition.

    Science.gov (United States)

    Mistry, Kamlesh; Zhang, Li; Neoh, Siew Chin; Lim, Chee Peng; Fielding, Ben

    2017-06-01

    This paper proposes a facial expression recognition system using evolutionary particle swarm optimization (PSO)-based feature optimization. The system first employs modified local binary patterns, which conduct horizontal and vertical neighborhood pixel comparison, to generate a discriminative initial facial representation. Then, a PSO variant embedded with the concept of a micro genetic algorithm (mGA), called mGA-embedded PSO, is proposed to perform feature optimization. It incorporates a nonreplaceable memory, a small-population secondary swarm, a new velocity updating strategy, a subdimension-based in-depth local facial feature search, and a cooperation of local exploitation and global exploration search mechanism to mitigate the premature convergence problem of conventional PSO. Multiple classifiers are used for recognizing seven facial expressions. Based on a comprehensive study using within- and cross-domain images from the extended Cohn Kanade and MMI benchmark databases, respectively, the empirical results indicate that our proposed system outperforms other state-of-the-art PSO variants, conventional PSO, classical GA, and other related facial expression recognition models reported in the literature by a significant margin.

  17. Joint Facial Action Unit Detection and Feature Fusion: A Multi-conditional Learning Approach.

    Science.gov (United States)

    Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja

    2016-10-05

    Automated analysis of facial expressions can benefit many domains, from marketing to clinical diagnosis of neurodevelopmental disorders. Facial expressions are typically encoded as a combination of facial muscle activations, i.e., action units. Depending on context, these action units co-occur in specific patterns, and rarely in isolation. Yet, most existing methods for automatic action unit detection fail to exploit dependencies among them, and the corresponding facial features. To address this, we propose a novel multi-conditional latent variable model for simultaneous fusion of facial features and joint action unit detection. Specifically, the proposed model performs feature fusion in a generative fashion via a low-dimensional shared subspace, while simultaneously performing action unit detection using a discriminative classification approach. We show that by combining the merits of both approaches, the proposed methodology outperforms existing purely discriminative/generative methods for the target task. To reduce the number of parameters, and avoid overfitting, a novel Bayesian learning approach based on Monte Carlo sampling is proposed, to integrate out the shared subspace. We validate the proposed method on posed and spontaneous data from three publicly available datasets (CK+, DISFA and Shoulder-pain), and show that both feature fusion and joint learning of action units leads to improved performance compared to the state-of-the-art methods for the task.

  18. Resorcinarene-Based Facial Glycosides

    DEFF Research Database (Denmark)

    Hussain, Hazrat; Du, Yang; Tikhonova, Elena

    2017-01-01

    chains are facially segregated from the carbohydrate head groups. Of these facial amphiphiles, two RGAs (RGA-C11 and RGA-C13) conferred markedly enhanced stability to four tested membrane proteins compared to a gold-standard conventional detergent. The relatively high water solubility and micellar...

  19. Featural processing in recognition of emotional facial expressions.

    Science.gov (United States)

    Beaudry, Olivia; Roy-Charland, Annie; Perron, Melanie; Cormier, Isabelle; Tapp, Roxane

    2014-04-01

    The present study aimed to clarify the role played by the eye/brow and mouth areas in the recognition of the six basic emotions. In Experiment 1, accuracy was examined while participants viewed partial and full facial expressions; in Experiment 2, participants viewed full facial expressions while their eye movements were recorded. Recognition rates were consistent with previous research: happiness was highest and fear was lowest. The mouth and eye/brow areas were not equally important for the recognition of all emotions. More precisely, while the mouth was revealed to be important in the recognition of happiness and the eye/brow area of sadness, results are not as consistent for the other emotions. In Experiment 2, consistent with previous studies, the eyes/brows were fixated for longer periods than the mouth for all emotions. Again, variations occurred as a function of the emotions, the mouth having an important role in happiness and the eyes/brows in sadness. The general pattern of results for the other four emotions was inconsistent between the experiments as well as across different measures. The complexity of the results suggests that the recognition process of emotional facial expressions cannot be reduced to a simple feature processing or holistic processing for all emotions.

  20. Men's preference for women's facial features: testing homogamy and the paternity uncertainty hypothesis.

    Science.gov (United States)

    Bovet, Jeanne; Barthes, Julien; Durand, Valérie; Raymond, Michel; Alvergne, Alexandra

    2012-01-01

    Male mate choice might be based on both absolute and relative strategies. Cues of female attractiveness are thus likely to reflect both fitness and reproductive potential, as well as compatibility with particular male phenotypes. In humans, absolute clues of fertility and indices of favorable developmental stability are generally associated with increased women's attractiveness. However, why men exhibit variable preferences remains less studied. Male mate choice might be influenced by uncertainty of paternity, a selective factor in species where the survival of the offspring depends on postnatal paternal care. For instance, in humans, a man might prefer a woman with recessive traits, thereby increasing the probability that his paternal traits will be visible in the child and ensuring paternity. Alternatively, attractiveness is hypothesized to be driven by self-resembling features (homogamy), which would reduce outbreeding depression. These hypotheses have been simultaneously evaluated for various facial traits using both real and artificial facial stimuli. The predicted preferences were then compared to realized mate choices using facial pictures from couples with at least 1 child. No evidence was found to support the paternity uncertainty hypothesis, as recessive features were not preferred by male raters. Conversely, preferences for self-resembling mates were found for several facial traits (hair and eye color, chin dimple, and thickness of lips and eyebrows). Moreover, realized homogamy for facial traits was also found in a sample of long-term mates. The advantages of homogamy in evolutionary terms are discussed.

  1. Men's preference for women's facial features: testing homogamy and the paternity uncertainty hypothesis.

    Directory of Open Access Journals (Sweden)

    Jeanne Bovet

    Full Text Available Male mate choice might be based on both absolute and relative strategies. Cues of female attractiveness are thus likely to reflect both fitness and reproductive potential, as well as compatibility with particular male phenotypes. In humans, absolute clues of fertility and indices of favorable developmental stability are generally associated with increased women's attractiveness. However, why men exhibit variable preferences remains less studied. Male mate choice might be influenced by uncertainty of paternity, a selective factor in species where the survival of the offspring depends on postnatal paternal care. For instance, in humans, a man might prefer a woman with recessive traits, thereby increasing the probability that his paternal traits will be visible in the child and ensuring paternity. Alternatively, attractiveness is hypothesized to be driven by self-resembling features (homogamy), which would reduce outbreeding depression. These hypotheses have been simultaneously evaluated for various facial traits using both real and artificial facial stimuli. The predicted preferences were then compared to realized mate choices using facial pictures from couples with at least 1 child. No evidence was found to support the paternity uncertainty hypothesis, as recessive features were not preferred by male raters. Conversely, preferences for self-resembling mates were found for several facial traits (hair and eye color, chin dimple, and thickness of lips and eyebrows). Moreover, realized homogamy for facial traits was also found in a sample of long-term mates. The advantages of homogamy in evolutionary terms are discussed.

  2. Extraction and representation of common feature from uncertain facial expressions with cloud model.

    Science.gov (United States)

    Wang, Shuliang; Chi, Hehua; Yuan, Hanning; Geng, Jing

    2017-12-01

    Human facial expressions are a key ingredient for conveying an individual's innate emotions in communication. However, the variation of facial expressions affects the reliable identification of human emotions. In this paper, we present a cloud model to extract facial features for representing human emotion. First, the uncertainties in facial expressions are analyzed in the context of the cloud model. The feature extraction and representation algorithm is established on cloud generators. With the forward cloud generator, as many facial expression images as desired can be regenerated to visually represent the three extracted features, and each feature plays a different role. The effectiveness of the computing model is tested on the Japanese Female Facial Expression database. Three common features are extracted from seven facial expression images. Finally, concluding remarks are given.
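
    The core primitive of the cloud model mentioned above is the forward normal cloud generator, which turns three numerical characteristics (Ex: expectation, En: entropy, He: hyper-entropy) into cloud drops with membership degrees; a minimal sketch follows, with parameter values that are illustrative rather than taken from the paper.

```python
# Hedged sketch: forward normal cloud generator producing (drop, membership) pairs.
import numpy as np

def forward_cloud(Ex, En, He, n_drops=1000, seed=0):
    """Generate cloud drops for a normal cloud with characteristics (Ex, En, He)."""
    rng = np.random.default_rng(seed)
    En_prime = rng.normal(En, He, n_drops)            # per-drop entropy sample
    x = rng.normal(Ex, np.abs(En_prime))              # drop position
    mu = np.exp(-(x - Ex) ** 2 / (2.0 * En_prime ** 2 + 1e-12))  # membership degree
    return x, mu

# Example: a facial-feature value concentrated around 0.6 with moderate fuzziness.
x, mu = forward_cloud(Ex=0.6, En=0.1, He=0.02)
print("mean drop:", round(float(x.mean()), 3), "mean membership:", round(float(mu.mean()), 3))
```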

  3. Millennial Filipino Student Engagement Analyzer Using Facial Feature Classification

    Science.gov (United States)

    Manseras, R.; Eugenio, F.; Palaoag, T.

    2018-03-01

Millennials are on everyone's lips and are a target market of various companies nowadays. In the Philippines, they comprise one third of the total population and most of them are still in school. Having a good education system is important for this generation to prepare them for better careers. A good education system means having quality instruction as one of the input component indicators. In a classroom environment, teachers use facial features to measure the affect state of the class. Emerging technologies like affective computing are among today's trends for improving the delivery of quality instruction. Together with computer vision, affective computing can be used to analyze the affect states of students and improve quality instruction delivery. This paper proposes a system for classifying student engagement using facial features. Identifying the affect state, specifically Millennial Filipino student engagement, is one of the main priorities of every educator, and this directed the authors to develop a tool to assess engagement percentage. A multiple face detection framework using the Face API was employed to detect as many student faces as possible and gauge the current engagement percentage of the whole class. A binary classifier model using a Support Vector Machine (SVM) was primarily set in the conceptual framework of this study. To achieve the best accuracy with this model, SVM was compared with two of the most widely used binary classifiers. Results show that SVM bested Random Forest and Naive Bayesian algorithms in most of the experiments on the different test datasets.

  4. Facial Affect Recognition Using Regularized Discriminant Analysis-Based Algorithms

    Directory of Open Access Journals (Sweden)

    Cheng-Yuan Shih

    2010-01-01

Full Text Available This paper presents a novel and effective method for facial expression recognition including happiness, disgust, fear, anger, sadness, surprise, and the neutral state. The proposed method utilizes a regularized discriminant analysis-based boosting algorithm (RDAB) with effective Gabor features to recognize the facial expressions. An entropy criterion is applied to select effective Gabor features, that is, a subset of informative and nonredundant Gabor features. The proposed RDAB algorithm uses RDA as a learner in the boosting algorithm. RDA combines the strengths of linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA); it solves the small-sample-size and ill-posed problems suffered by QDA and LDA through a regularization technique. Additionally, this study uses the particle swarm optimization (PSO) algorithm to estimate optimal parameters in RDA. Experimental results demonstrate that our approach can accurately and robustly recognize facial expressions.
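For readers unfamiliar with the regularization step in RDA, the sketch below shows how a class covariance estimate can be interpolated between the QDA and LDA estimates and shrunk toward a scaled identity; the two parameters follow the standard lambda/gamma formulation and correspond to the quantities tuned with PSO in the paper. This is a generic illustration, not the authors' implementation.

```python
import numpy as np

def rda_covariance(sigma_k, sigma_pooled, lam, gamma):
    """Regularized class covariance used by RDA.

    lam   : 0 -> pure QDA (class covariance), 1 -> pure LDA (pooled covariance)
    gamma : additional shrinkage toward a scaled identity matrix
    """
    d = sigma_k.shape[0]
    # Interpolate between class-specific (QDA) and pooled (LDA) covariance.
    sigma_lam = (1.0 - lam) * sigma_k + lam * sigma_pooled
    # Shrink toward (trace/d) * I to stabilize ill-posed, small-sample estimates.
    return (1.0 - gamma) * sigma_lam + gamma * (np.trace(sigma_lam) / d) * np.eye(d)
```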

  5. Pain detection from facial images using unsupervised feature learning approach.

    Science.gov (United States)

    Kharghanian, Reza; Peiravi, Ali; Moradi, Farshad

    2016-08-01

In this paper a new method for continuous pain detection is proposed. One approach to detecting the presence of pain is to process images of the face. It has been reported that the expression of pain on the face can be detected using Action Units (AUs); in this approach, each action unit must be detected separately and the results then combined through a linear expression. Pain can also be detected directly from a painful face. There are different methods to extract features of both shape and appearance; shape and appearance features must be extracted separately and then used to train a classifier. Here, a hierarchical unsupervised feature learning approach is proposed in order to extract the features needed for pain detection from facial images. In this work, features are extracted using a convolutional deep belief network (CDBN). The extracted features capture different properties of painful images such as head movements and shape and appearance information. The proposed model was tested on the publicly available UNBC-McMaster Shoulder Pain Archive Database and we achieved close to 95% for the area-under-ROC-curve metric, which compares favorably with other reported results.

  6. Hybrid facial image feature extraction and recognition for non-invasive chronic fatigue syndrome diagnosis.

    Science.gov (United States)

    Chen, Yunhua; Liu, Weijian; Zhang, Ling; Yan, Mingyu; Zeng, Yanjun

    2015-09-01

Due to the absence of reliable biochemical markers, the diagnosis of chronic fatigue syndrome (CFS) currently relies mainly on clinical symptoms and on the experience and skill of the doctor. To improve objectivity and reduce work intensity, a hybrid facial feature is proposed. First, several kinds of appearance features are identified in different facial regions according to clinical observations of traditional Chinese medicine experts, including vertical striped wrinkles on the forehead, puffiness of the lower eyelid, the skin colour of the cheeks, nose and lips, and the shape of the mouth corner. Afterwards, such features are extracted and systematically combined to form a hybrid feature. We divide the face into several regions based on twelve active appearance model (AAM) feature points and ten straight lines across them. Then, Gabor wavelet filtering, CIELab color components, threshold-based segmentation and curve fitting are applied to extract features, and the Gabor features are reduced by a manifold-preserving projection method. Finally, an AdaBoost-based score-level fusion of the multi-modal features is performed after classification of each feature. Although the subjects involved in this trial are exclusively Chinese, the method achieves an average accuracy of 89.04% on the training set and 88.32% on the testing set based on K-fold cross-validation. In addition, the method also possesses desirable sensitivity and specificity for CFS prediction. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Prediction of mortality based on facial characteristics

    Directory of Open Access Journals (Sweden)

    Arnaud Delorme

    2016-05-01

Full Text Available Recent studies have shown that characteristics of the face contain a wealth of information about health, age and chronic clinical conditions. Such studies involve objective measurement of facial features correlated with historical health information. But some individuals also claim to be adept at gauging mortality based on a glance at a person's photograph. To test this claim, we invited 12 such individuals to see if they could determine whether a person was alive or dead based solely on a brief examination of facial photographs. All photos used in the experiment were transformed into a uniform gray scale and then counterbalanced across eight categories: gender, age, gaze direction, glasses, head position, smile, hair color, and image resolution. Participants examined 404 photographs displayed on a computer monitor, one photo at a time, each shown for a maximum of 8 seconds. Half of the individuals in the photos were deceased, and half were alive at the time the experiment was conducted. Participants were asked to press a button to indicate whether they thought the person in a photo was living or deceased. Overall mean accuracy on this task was 53.8%, where 50% was expected by chance (p < 0.004, two-tail). Statistically significant accuracy was independently obtained in 5 of the 12 participants. We also collected 32-channel electrophysiological recordings and observed a robust difference between images of deceased individuals correctly vs. incorrectly classified in the early event-related potential at 100 ms post-stimulus onset. Our results support claims of individuals who report that some as-yet unknown features of the face predict mortality. The results are also compatible with claims about clairvoyance and warrant further investigation.

  8. Facial expression recognition based on improved local ternary pattern and stacked auto-encoder

    Science.gov (United States)

    Wu, Yao; Qiu, Weigen

    2017-08-01

In order to enhance the robustness of facial expression recognition, we propose a method of facial expression recognition based on an improved Local Ternary Pattern (LTP) combined with a Stacked Auto-Encoder (SAE). The method extracts features with the improved LTP and then uses an improved deep belief network as the detector and classifier of the extracted LTP features. The combination of the improved LTP and the improved deep network is thus realized for facial expression recognition. The recognition rate on the CK+ database is improved significantly.

  9. An Improved AAM Method for Extracting Human Facial Features

    Directory of Open Access Journals (Sweden)

    Tao Zhou

    2012-01-01

Full Text Available The active appearance model (AAM) is a statistical parametric model that is widely used for facial feature extraction and recognition. However, the intensity values used in the original AAM cannot provide enough information about image texture, which leads to larger errors or fitting failures. In order to overcome these defects and improve the fitting performance of the AAM, an improved texture representation is proposed in this paper. First, a translation-invariant wavelet transform is performed on the face images, and the image structure is then represented using a measure obtained by fusing the low-frequency coefficients with edge intensity. Experimental results show that the improved algorithm increases the accuracy of AAM fitting and captures more information about edge and texture structure.

  10. An extensive analysis of various texture feature extractors to detect Diabetes Mellitus using facial specific regions.

    Science.gov (United States)

    Shu, Ting; Zhang, Bob; Yan Tang, Yuan

    2017-04-01

Researchers have recently discovered that Diabetes Mellitus can be detected through non-invasive computerized methods. However, the focus has been on facial block color features. In this paper, we extensively study the effects of texture features extracted from facial specific regions at detecting Diabetes Mellitus using eight texture extractors. The eight methods come from four texture feature families: (1) the statistical texture feature family: Image Gray-scale Histogram, Gray-Level Co-occurrence Matrix, and Local Binary Pattern; (2) the structural texture feature family: Voronoi Tessellation; (3) the signal processing based texture feature family: Gaussian, Steerable, and Gabor filters; and (4) the model based texture feature family: Markov Random Field. In order to determine the most appropriate extractor with optimal parameter(s), various parameter settings of each extractor are evaluated experimentally. For each extractor, the same dataset (284 Diabetes Mellitus and 231 Healthy samples), classifiers (k-Nearest Neighbors and Support Vector Machines), and validation method (10-fold cross validation) are used. According to the experiments, the first and third families achieved a better outcome at detecting Diabetes Mellitus than the other two. The best texture feature extractor for Diabetes Mellitus detection is the Image Gray-scale Histogram with bin number=256, obtaining an accuracy of 99.02%, a sensitivity of 99.64%, and a specificity of 98.26% by using SVM. Copyright © 2017 Elsevier Ltd. All rights reserved.
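A minimal sketch of the best-performing configuration reported above, a 256-bin gray-scale histogram classified with an SVM under 10-fold cross-validation; the placeholder arrays stand in for the cropped facial-region images and diagnosis labels, which are assumptions here.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def grayscale_histogram(image, bins=256):
    """Normalized gray-level histogram of one facial region (uint8 image)."""
    hist, _ = np.histogram(image.ravel(), bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)

# Placeholder data: `regions` would be cropped facial-region images and
# `labels` the Diabetes Mellitus / Healthy ground truth (assumed names).
regions = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(40)]
labels = np.array([0, 1] * 20)

X = np.array([grayscale_histogram(r) for r in regions])
scores = cross_val_score(SVC(kernel="rbf"), X, labels, cv=10)
print("10-fold CV accuracy: %.3f" % scores.mean())
```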

  11. Sensorineural Deafness, Distinctive Facial Features and Abnormal Cranial Bones

    Science.gov (United States)

    Gad, Alona; Laurino, Mercy; Maravilla, Kenneth R.; Matsushita, Mark; Raskind, Wendy H.

    2008-01-01

The Waardenburg syndromes (WS) account for approximately 2% of congenital sensorineural deafness. This heterogeneous group of diseases can currently be categorized into four major subtypes (WS types 1-4) on the basis of characteristic clinical features. Multiple genes have been implicated in WS, and mutations in some genes can cause more than one WS subtype. In addition to eye, hair and skin pigmentary abnormalities, dystopia canthorum and a broad nasal bridge are seen in WS type 1. Mutations in the PAX3 gene are responsible for the condition in the majority of these patients. In addition, mutations in PAX3 have been found in WS type 3, which is distinguished by musculoskeletal abnormalities, and in a family with a rare subtype of WS, craniofacial-deafness-hand syndrome (CDHS), characterized by dysmorphic facial features, hand abnormalities, and absent or hypoplastic nasal and wrist bones. Here we describe a woman who shares some, but not all, features of WS type 3 and CDHS, and who also has abnormal cranial bones. All sinuses were hypoplastic, and the cochleae were small. No sequence alteration in PAX3 was found. These observations broaden the clinical range of WS and suggest there may be genetic heterogeneity even within the CDHS subtype. PMID:18553554

  12. Facial Asymmetry-Based Age Group Estimation: Role in Recognizing Age-Separated Face Images.

    Science.gov (United States)

    Sajid, Muhammad; Taj, Imtiaz Ahmad; Bajwa, Usama Ijaz; Ratyal, Naeem Iqbal

    2018-04-23

Face recognition aims to establish the identity of a person based on facial characteristics. On the other hand, age group estimation is the automatic calculation of an individual's age range based on facial features. Recognizing age-separated face images is still a challenging research problem due to complex aging processes involving different types of facial tissues, skin, fat, muscles, and bones. Certain holistic and local facial features are used to recognize age-separated face images. However, most of the existing methods recognize face images without incorporating the knowledge learned from age group estimation. In this paper, we propose an age-assisted face recognition approach to handle aging variations. Inspired by the observation that facial asymmetry is an age-dependent intrinsic facial feature, we first use asymmetric facial dimensions to estimate the age group of a given face image. Deeply learned asymmetric facial features are then extracted for face recognition using a deep convolutional neural network (dCNN). Finally, we integrate the knowledge learned from the age group estimation into the face recognition algorithm using the same dCNN. This integration results in a significant improvement in the overall performance compared to using the face recognition algorithm alone. The experimental results on two large facial aging datasets, the MORPH and FERET sets, show that the proposed age-group-estimation-based face recognition approach yields superior performance compared to some existing state-of-the-art methods. © 2018 American Academy of Forensic Sciences.

  13. Interpretation of appearance: the effect of facial features on first impressions and personality

    DEFF Research Database (Denmark)

    Wolffhechel, Karin Marie Brandt; Fagertun, Jens; Jacobsen, Ulrik Plesner

    2014-01-01

Appearance is known to influence social interactions, which in turn could potentially influence personality development. In this study we focus on discovering the relationship between self-reported personality traits, first impressions and facial characteristics. The results reveal that several personality traits can be read above chance from a face, and that facial features influence first impressions. Despite the former, our prediction model fails to reliably infer personality traits from either facial features or first impressions. First impressions, however, could be inferred more reliably from facial features. We have generated artificial, extreme faces visualising the characteristics having an effect on first impressions for several traits. Conclusively, we find a relationship between first impressions, some personality traits and facial features and consolidate that people on average assess...

  14. A View of the Therapy for Bell's Palsy Based on Molecular Biological Analyses of Facial Muscles.

    Science.gov (United States)

    Moriyama, Hiroshi; Mitsukawa, Nobuyuki; Itoh, Masahiro; Otsuka, Naruhito

    2017-12-01

Details regarding the molecular biological features of Bell's palsy have not been widely reported in textbooks. We genetically analyzed facial muscles and clarified these points. We performed genetic analysis of facial muscle specimens from Japanese patients with severe (House-Brackmann facial nerve grading system V) and moderate (House-Brackmann facial nerve grading system III) dysfunction due to Bell's palsy. Microarray analysis of gene expression was performed using specimens from the healthy and affected sides, and gene expression was compared. Changes in gene expression were defined by the affected side/healthy side expression ratio (>1.5, with a corresponding lower threshold for down-regulation). Gene expression in Bell's palsy changed with the degree of facial nerve palsy; in particular, genes in the muscle, neuron, and energy categories tended to fluctuate with the degree of facial nerve palsy. It is expected that this study will aid in the development of new treatments and diagnostic/prognostic markers based on the severity of facial nerve palsy.

  15. Ophthalmic profile and systemic features of pediatric facial nerve palsy.

    Science.gov (United States)

    Patil-Chhablani, Preeti; Murthy, Sowmya; Swaminathan, Meenakshi

    2015-12-01

Facial nerve palsy (FNP) occurs less frequently in children as compared to adults but most cases are secondary to an identifiable cause. These children may have a variety of ocular and systemic features associated with the palsy and need detailed ophthalmic and systemic evaluation. This was a retrospective chart review of all the cases of FNP below the age of 16 years, presenting to a tertiary ophthalmic hospital over the period of 9 years, from January 2000 to December 2008. A total of 22 patients were included in the study. The average age at presentation was 6.08 years (range, 4 months to 16 years). Only one patient (4.54%) had bilateral FNP and 21 cases (95.45%) had unilateral FNP. Seventeen patients (77.27%) had congenital palsy and of these, five patients had a syndromic association, three had birth trauma and nine patients had idiopathic palsy. Five patients (22.72%) had an acquired palsy; of these, two had a traumatic cause and one patient each had neoplastic origin of the palsy, iatrogenic palsy after surgery for hemangioma and idiopathic palsy. Three patients had ipsilateral sixth nerve palsy, two children were diagnosed to have Moebius syndrome, one child had an ipsilateral Duane's syndrome with ipsilateral hearing loss. Corneal involvement was seen in eight patients (36.36%). Amblyopia was seen in ten patients (45.45%). Neuroimaging studies showed evidence of trauma, posterior fossa cysts, pontine gliosis and neoplasms such as a chloroma. Systemic associations included hemifacial macrosomia, oculovertebral malformations, Dandy-Walker syndrome, Moebius syndrome and cerebral palsy. FNP in children can have a number of underlying causes, some of which may be life threatening. It can also result in serious ocular complications including corneal perforation and severe amblyopia. These children require a multifaceted approach to their care.

  16. 3D facial geometric features for constrained local model

    NARCIS (Netherlands)

    Cheng, Shiyang; Zafeiriou, Stefanos; Asthana, Ashish; Asthana, Akshay; Pantic, Maja

    2014-01-01

    We propose a 3D Constrained Local Model framework for deformable face alignment in depth image. Our framework exploits the intrinsic 3D geometric information in depth data by utilizing robust histogram-based 3D geometric features that are based on normal vectors. In addition, we demonstrate the

  17. Injectable facial fillers: imaging features, complications, and diagnostic pitfalls at MRI and PET CT.

    Science.gov (United States)

    Mundada, Pravin; Kohler, Romain; Boudabbous, Sana; Toutous Trellu, Laurence; Platon, Alexandra; Becker, Minerva

    2017-12-01

Injectable fillers are widely used for facial rejuvenation and for correction of disabling volumetric fat loss in HIV-associated facial lipoatrophy, Romberg disease, and post-traumatic facial disfigurement. The purpose of this article is to acquaint the reader with the anatomy of the facial fat compartments, as well as with the properties and key imaging features of commonly used facial fillers, filler-related complications, interpretation pitfalls, and dermatologic conditions mimicking filler-related complications. The distribution of facial fillers is characteristic and depends on the anatomy of the superficial fat compartments. Silicone has signature MRI features and calcium hydroxyapatite has characteristic calcifications, whereas other injectable fillers have overlapping imaging features. Most fillers (hyaluronic acid, collagen, and polyalkylimide-polyacrylamide hydrogels) have signal intensity patterns compatible with high water content. On PET-CT, most fillers show physiologically high FDG uptake, which should not be confused with pathology. Abscess, cellulitis, non-inflammatory nodules, and foreign body granulomas are the most common filler-related complications, and imaging can help in the differential diagnosis. Diffusion weighted imaging helps in detecting a malignant lesion masked by injected facial fillers. Awareness of the imaging features of facial fillers and their complications helps to avoid misinterpretation of MRI and PET-CT scans and facilitates therapeutic decisions in unclear clinical cases. • Facial fillers are common incidental findings on MRI and PET-CT scans. • They have a characteristic appearance and typical anatomic distribution. • Although considered safe, facial filler injections are associated with several complications. • As they may mask malignancy, knowledge of typical imaging features is mandatory. • MRI is a problem-solving tool for unclear cases.

  18. Clustering Based Approximation in Facial Image Retrieval

    OpenAIRE

    R.Pitchaiah

    2016-01-01

The web search engine returns a great many images ranked by the keywords extracted from the surrounding text. Existing object recognition techniques train classification models from human-labelled training images or attempt to infer the correlations/probabilities between images and annotated keywords. Although efficient in supporting the mining of similar-looking facial image results using weakly labelled ones, the learning phase of the above cluster-based c...

  19. [Facial diplegia as the presenting feature of Lyme disease].

    Science.gov (United States)

    Lesourd, A; Ngo, S; Sauvêtre, G; Héron, F; Levesque, H; Marie, I

    2015-05-01

Diagnosis of neuroborreliosis may be difficult. Neuroborreliosis mainly results in lymphocytic meningitis and in meningoradiculitis (67-83% of cases). We report the case of a patient who developed sudden facial diplegia, revealing neuroborreliosis proven by positive blood and cerebrospinal fluid serology. The patient had no previous history of tick bite or erythema migrans. The patient was given ceftriaxone therapy (2 g/day for 21 days), leading to resolution of all clinical symptoms. Our report underscores that neuroborreliosis should be considered in patients exhibiting facial diplegia; thus, Lyme serology should be performed systematically in these patients. Early management is crucial, before the onset of late-stage neurological manifestations, which can lead to disabling sequelae despite antibiotic therapy. Copyright © 2014 Société nationale française de médecine interne (SNFMI). Published by Elsevier SAS. All rights reserved.

  20. A Novel Survey Based on Multiethnic Facial Semantic Web

    OpenAIRE

    LI Zedong; DUAN Xiaodong; ZHANG Qingling

    2013-01-01

The face includes a number of facial features that vary across ethnic minorities. First, according to the correlations among the shape semantics of face parts, a multiethnic facial semantic web is proposed. It represents both the relationships shared within the same minority and the differences between different minorities. Second, the multiethnic facial semantic web is reduced according to the correlations between the parts of the face. The reduced semantic web can maintain most available in...

  1. EMG-based facial gesture recognition through versatile elliptic basis function neural network.

    Science.gov (United States)

    Hamedi, Mahyar; Salleh, Sh-Hussain; Astaraki, Mehdi; Noor, Alias Mohd

    2013-07-17

Recently, the recognition of different facial gestures using facial neuromuscular activities has been proposed for human machine interfacing applications. Facial electromyogram (EMG) analysis is a complicated field in biomedical signal processing where accuracy and low computational cost are significant concerns. In this paper, a very fast versatile elliptic basis function neural network (VEBFNN) was proposed to classify different facial gestures. The effectiveness of different facial EMG time-domain features was also explored to identify the most discriminating ones. In this study, EMGs of ten facial gestures were recorded from ten subjects using three pairs of surface electrodes in a bipolar configuration. The signals were filtered and segmented into distinct portions prior to feature extraction. Ten different time-domain features, namely, Integrated EMG, Mean Absolute Value, Mean Absolute Value Slope, Maximum Peak Value, Root Mean Square, Simple Square Integral, Variance, Mean Value, Wave Length, and Sign Slope Changes were extracted from the EMGs. The statistical relationships between these features were investigated by the Mutual Information measure. Feature combinations including two to ten single features were then formed based on the feature rankings given by the Minimum-Redundancy-Maximum-Relevance (MRMR) and Recognition Accuracy (RA) criteria. In the last step, VEBFNN was employed to classify the facial gestures. The effectiveness of single features as well as the feature sets on the system performance was examined by considering the two major metrics, recognition accuracy and training time. Finally, the proposed classifier was assessed and compared with conventional methods, support vector machines and the multilayer perceptron neural network. The average classification results showed that the best performance for recognizing facial gestures among all single/multi-features was achieved by Maximum Peak Value with 87.1% accuracy. Moreover, the results proved a
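A small sketch of how a few of the listed time-domain descriptors can be computed on one pre-segmented EMG window (Mean Absolute Value, Root Mean Square, Wave Length, Sign Slope Changes); the slope-change threshold and the synthetic window are illustrative assumptions rather than settings from the paper.

```python
import numpy as np

def emg_time_domain_features(x, ssc_threshold=1e-3):
    """Compute a few classic time-domain features of one EMG segment `x`."""
    x = np.asarray(x, dtype=float)
    dx = np.diff(x)
    mav = np.mean(np.abs(x))                      # Mean Absolute Value
    rms = np.sqrt(np.mean(x ** 2))                # Root Mean Square
    wl = np.sum(np.abs(dx))                       # Wave Length
    # Sign Slope Changes: the slope reverses and the larger increment exceeds a threshold.
    ssc = np.sum((dx[:-1] * dx[1:] < 0) &
                 (np.maximum(np.abs(dx[:-1]), np.abs(dx[1:])) > ssc_threshold))
    return {"MAV": mav, "RMS": rms, "WL": wl, "SSC": int(ssc)}

segment = np.random.randn(2000) * 0.05            # placeholder EMG window
print(emg_time_domain_features(segment))
```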

  2. The Importance of Facial Features and Their Spatial Organization for Attractiveness is Modulated by Gender

    Directory of Open Access Journals (Sweden)

    D Gill

    2011-04-01

Full Text Available Many studies suggest that facial attractiveness signals mate quality. Fewer studies argue that the preference criteria emerge as a by-product of cortical processes. One way or the other, preference criteria should not necessarily be identical between female and male observers, because either their preferences may have different evolutionary roles or they may even be due to known differences in visuospatial skills and brain function lateralization (i.e., an advantage favoring males' ability to determine spatial relations despite distracting information). The goal of this study was to assess sex differences in face attractiveness judgments by estimating the importance of facial features and their spatial organization. To this end, semipartial correlations were measured between intact-face preferences and preferences based on specific facial parts (eyes, nose, mouth, and hairstyle) or preferences based more on configuration (as reflected by low spatial frequency images). The results show strategy modulations by both the observer's and the face's gender. In general, the association between intact-face preferences and parts-based preferences was significantly higher for female compared with male participants. For female faces, males' preferences were more strongly associated with their low spatial frequency preferences than were those of females. The two genders' strategies were more similar when judging male faces, and males performed more criteria modifications across face gender. The similarities between the sexes regarding male faces are in line with previous studies showing that men assign greater importance to attractiveness. Moreover, the results may suggest that men adjust their strategy to assess the danger of other males as potential rivals for mates.

  3. Binary pattern flavored feature extractors for Facial Expression Recognition: An overview

    DEFF Research Database (Denmark)

    Kristensen, Rasmus Lyngby; Tan, Zheng-Hua; Ma, Zhanyu

    2015-01-01

This paper conducts a survey of modern binary pattern flavored feature extractors applied to the Facial Expression Recognition (FER) problem. In total, 26 different feature extractors are included, of which six are selected for in-depth description. In addition, the paper unifies important FER te...

  4. Autonomous facial recognition system inspired by human visual system based logarithmical image visualization technique

    Science.gov (United States)

    Wan, Qianwen; Panetta, Karen; Agaian, Sos

    2017-05-01

Autonomous facial recognition systems are widely used in real-life applications, such as homeland border security, law enforcement identification and authentication, and video-based surveillance analysis. Issues like low image quality, non-uniform illumination, and variations in pose and facial expression can impair the performance of recognition systems. To address the non-uniform illumination challenge, we present a novel robust autonomous facial recognition system based on the so-called logarithmical image visualization technique, which is inspired by the human visual system. In this paper, the proposed method, for the first time, utilizes the logarithmical image visualization technique coupled with the local binary pattern to perform discriminative feature extraction for a facial recognition system. The Yale database, the Yale-B database and the ATT database are used for accuracy and efficiency testing in computer simulations. The extensive computer simulations demonstrate the method's efficiency, accuracy, and robustness to illumination variation for facial recognition.

  5. The importance of internal facial features in learning new faces.

    Science.gov (United States)

    Longmore, Christopher A; Liu, Chang Hong; Young, Andrew W

    2015-01-01

    For familiar faces, the internal features (eyes, nose, and mouth) are known to be differentially salient for recognition compared to external features such as hairstyle. Two experiments are reported that investigate how this internal feature advantage accrues as a face becomes familiar. In Experiment 1, we tested the contribution of internal and external features to the ability to generalize from a single studied photograph to different views of the same face. A recognition advantage for the internal features over the external features was found after a change of viewpoint, whereas there was no internal feature advantage when the same image was used at study and test. In Experiment 2, we removed the most salient external feature (hairstyle) from studied photographs and looked at how this affected generalization to a novel viewpoint. Removing the hair from images of the face assisted generalization to novel viewpoints, and this was especially the case when photographs showing more than one viewpoint were studied. The results suggest that the internal features play an important role in the generalization between different images of an individual's face by enabling the viewer to detect the common identity-diagnostic elements across non-identical instances of the face.

  6. Facial-paralysis diagnostic system based on 3D reconstruction

    Science.gov (United States)

    Khairunnisaa, Aida; Basah, Shafriza Nisha; Yazid, Haniza; Basri, Hassrizal Hassan; Yaacob, Sazali; Chin, Lim Chee

    2015-05-01

The diagnostic process for facial paralysis requires qualitative assessment for classification and treatment planning. This results in inconsistent assessments that can potentially affect treatment planning. We developed a facial-paralysis diagnostic system based on 3D reconstruction of RGB and depth data using a standard structured-light camera (Kinect 360) and an implementation of Active Appearance Models (AAM). We also propose a quantitative assessment for facial paralysis based on a triangular model. In this paper, we report on the design and development process, including preliminary experimental results. Our preliminary experimental results demonstrate the feasibility of our quantitative assessment system for diagnosing facial paralysis.

  7. Active AU Based Patch Weighting for Facial Expression Recognition

    Directory of Open Access Journals (Sweden)

    Weicheng Xie

    2017-01-01

Full Text Available Facial expression has many applications in human-computer interaction. Although feature extraction and selection have been well studied, the specificity of each expression variation is not fully explored in state-of-the-art works. In this work, the problem of multiclass expression recognition is converted into triplet-wise expression recognition. For each expression triplet, a new feature optimization model based on action unit (AU) weighting and patch weight optimization is proposed to represent the specificity of the expression triplet. A sparse-representation-based approach is then proposed to detect the active AUs of the testing sample for better generalization. The algorithm achieved competitive accuracies of 89.67% and 94.09% on the JAFFE and Cohn-Kanade (CK+) databases, respectively. Better cross-database performance has also been observed.

  8. Recovering Faces from Memory: The Distracting Influence of External Facial Features

    Science.gov (United States)

    Frowd, Charlie D.; Skelton, Faye; Atherton, Chris; Pitchford, Melanie; Hepton, Gemma; Holden, Laura; McIntyre, Alex H.; Hancock, Peter J. B.

    2012-01-01

    Recognition memory for unfamiliar faces is facilitated when contextual cues (e.g., head pose, background environment, hair and clothing) are consistent between study and test. By contrast, inconsistencies in external features, especially hair, promote errors in unfamiliar face-matching tasks. For the construction of facial composites, as carried…

  9. Facial Expression Recognition with Fusion Features Extracted from Salient Facial Areas

    Directory of Open Access Journals (Sweden)

    Yanpeng Liu

    2017-03-01

Full Text Available In the pattern recognition domain, deep architectures are currently widely used and have achieved fine results. However, these deep architectures make particular demands, especially in terms of their requirement for big datasets and GPUs. Aiming to gain better results without deep networks, we propose a simplified algorithm framework using fusion features extracted from the salient areas of faces. Furthermore, the proposed algorithm has achieved a better result than some deep architectures. For extracting more effective features, this paper first defines the salient areas on the faces. The salient areas of the same location in different faces are normalized to the same size; therefore, more similar features can be extracted from different subjects. LBP and HOG features are extracted from the salient areas, the dimensions of the fusion features are reduced by Principal Component Analysis (PCA), and several classifiers are applied to classify the six basic expressions at once. This paper proposes a salient-area determination method that compares peak expression frames with neutral faces. It also proposes and applies the idea of normalizing the salient areas to align the specific areas that express the different expressions; as a result, the salient areas found in different subjects are of the same size. In addition, a gamma correction method is applied to the LBP features for the first time in our algorithm framework, which improves the recognition rates significantly. By applying this algorithm framework, our research has achieved state-of-the-art performance on the CK+ database and the JAFFE database.
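A condensed sketch of the fusion pipeline described above: LBP histograms and HOG vectors are concatenated per salient patch, reduced with PCA, and classified; the patch size, parameter values, and placeholder data are assumptions, and the gamma-correction step is omitted for brevity.

```python
import numpy as np
from skimage.feature import local_binary_pattern, hog
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def fused_descriptor(patch, lbp_points=8, lbp_radius=1):
    """Concatenate an LBP histogram and a HOG vector for one salient patch."""
    lbp = local_binary_pattern(patch, lbp_points, lbp_radius, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=lbp_points + 2,
                               range=(0, lbp_points + 2), density=True)
    hog_vec = hog(patch, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2), feature_vector=True)
    return np.concatenate([lbp_hist, hog_vec])

# Placeholder salient patches (e.g., normalized eye/mouth regions) and labels
# for the six basic expressions; real data loading is assumed.
patches = [(np.random.rand(48, 48) * 255).astype(np.uint8) for _ in range(60)]
labels = np.tile(np.arange(6), 10)

X = np.array([fused_descriptor(p) for p in patches])
X_reduced = PCA(n_components=30).fit_transform(X)   # dimensionality reduction
clf = SVC(kernel="linear").fit(X_reduced, labels)
```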

  10. Automated Facial Expression Recognition Using Gradient-Based Ternary Texture Patterns

    Directory of Open Access Journals (Sweden)

    Faisal Ahmed

    2013-01-01

Full Text Available Recognition of human expressions from facial images is an interesting research area, which has received increasing attention in recent years. A robust and effective facial feature descriptor is the key to designing a successful expression recognition system. Although much progress has been made, deriving a face feature descriptor that can perform consistently under changing environments is still a difficult and challenging task. In this paper, we present the gradient local ternary pattern (GLTP), a discriminative local texture feature for representing facial expression. The proposed GLTP operator encodes the local texture of an image by computing the gradient magnitudes of the local neighborhood and quantizing those values into three discrimination levels. The location and occurrence information of the resulting micropatterns is then used as the face feature descriptor. The performance of the proposed method has been evaluated for the person-independent face expression recognition task. Experiments with prototypic expression images from the Cohn-Kanade (CK) face expression database validate that the GLTP feature descriptor can effectively encode the facial texture and thus achieves better recognition performance than some well-known appearance-based facial features.
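A rough sketch of the ternary coding idea described above: gradient magnitudes in each 3x3 neighborhood are compared with the center value using a tolerance t and split into the usual upper and lower binary patterns; the Sobel gradient and the tolerance value are assumptions for illustration, not the exact GLTP definition from the paper.

```python
import numpy as np
from scipy import ndimage

def gradient_ternary_codes(image, t=10.0):
    """Encode each pixel's 3x3 gradient-magnitude neighborhood as ternary codes."""
    gx = ndimage.sobel(image.astype(float), axis=1)
    gy = ndimage.sobel(image.astype(float), axis=0)
    grad = np.hypot(gx, gy)                       # gradient magnitude map

    h, w = grad.shape
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]  # 8 neighbors, clockwise
    upper = np.zeros((h - 2, w - 2), dtype=np.int32)
    lower = np.zeros((h - 2, w - 2), dtype=np.int32)
    center = grad[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = grad[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        upper |= (neighbor > center + t).astype(np.int32) << bit   # ternary +1
        lower |= (neighbor < center - t).astype(np.int32) << bit   # ternary -1
    # Histograms of the upper and lower codes form the final descriptor.
    return upper, lower

upper_codes, lower_codes = gradient_ternary_codes(np.random.rand(64, 64) * 255)
```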

  11. Support vector machine-based facial-expression recognition method combining shape and appearance

    Science.gov (United States)

    Han, Eun Jung; Kang, Byung Jun; Park, Kang Ryoung; Lee, Sangyoun

    2010-11-01

Facial expression recognition can be widely used for various applications, such as emotion-based human-machine interaction, intelligent robot interfaces, face recognition robust to expression variation, etc. Previous studies have been classified as either shape- or appearance-based recognition. The shape-based method has the disadvantage that the individual variance of facial feature points exists irrespective of similar expressions, which can cause a reduction of the recognition accuracy. The appearance-based method has a limitation in that the textural information of the face is very sensitive to variations in illumination. To overcome these problems, a new facial-expression recognition method is proposed, which combines both shape and appearance information, based on the support vector machine (SVM). This research is novel in the following three ways as compared to previous works. First, the facial feature points are automatically detected by using an active appearance model. From these, the shape-based recognition is performed by using the ratios between the facial feature points based on the facial-action coding system. Second, the SVM, which is trained to recognize the same and different expression classes, is proposed to combine two matching scores obtained from the shape- and appearance-based recognitions. Finally, a single SVM is trained to discriminate four different expressions: neutral, smile, anger, and scream. By determining the expression of the input facial image whose SVM output is at a minimum, the accuracy of the expression recognition is much enhanced. The experimental results showed that the recognition accuracy of the proposed method was better than that of previous research and other fusion methods.

  12. De Novo Mutation in ABCC9 Causes Hypertrichosis Acromegaloid Facial Features Disorder.

    Science.gov (United States)

    Afifi, Hanan H; Abdel-Hamid, Mohamed S; Eid, Maha M; Mostafa, Inas S; Abdel-Salam, Ghada M H

    2016-01-01

    A 13-year-old Egyptian girl with generalized hypertrichosis, gingival hyperplasia, coarse facial appearance, no cardiovascular or skeletal anomalies, keloid formation, and multiple labial frenula was referred to our clinic for counseling. Molecular analysis of the ABCC9 gene showed a de novo missense mutation located in exon 27, which has been described previously with Cantu syndrome. An overlap between Cantu syndrome, acromegaloid facial syndrome, and hypertrichosis acromegaloid facial features disorder is apparent at the phenotypic and molecular levels. The patient reported here gives further evidence that these syndromes are an expression of the ABCC9-related disorders, ranging from hypertrichosis and acromegaloid facies to the severe end of Cantu syndrome. © 2016 Wiley Periodicals, Inc.

  13. The Association of Quantitative Facial Color Features with Cold Pattern in Traditional East Asian Medicine

    Directory of Open Access Journals (Sweden)

    Sujeong Mun

    2017-01-01

Full Text Available Introduction. Facial diagnosis is a major component of the diagnostic method in traditional East Asian medicine. We investigated the association of quantitative facial color features with cold pattern using a fully automated facial color parameterization system. Methods. The facial color parameters of 64 participants were obtained from digital photographs using an automatic color correction and color parameter calculation system. Cold pattern severity was evaluated using a questionnaire. Results. The a* values of the whole face, lower cheek, and chin were negatively associated with the cold pattern score (CPS) (whole face: B=-1.048, P=0.021; lower cheek: B=-0.494, P=0.007; chin: B=-0.640, P=0.031), while the b* value of the lower cheek was positively associated with the CPS (B=0.234, P=0.019). The a* values of the whole face were significantly correlated with specific cold pattern symptoms including cold abdomen (partial ρ=-0.354, P<0.01) and cold sensation in the body (partial ρ=-0.255, P<0.05). Conclusions. a* values of the whole face were negatively associated with the CPS, indicating that individuals with increased levels of cold pattern had paler faces. These findings suggest that objective facial diagnosis has utility for pattern identification.
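The a* and b* parameters discussed above come from the CIELAB color space; a minimal sketch of extracting the mean L*, a*, b* values of one facial region is shown below, with the color-corrected photograph and the region mask represented by placeholders.

```python
import numpy as np
from skimage import color

def mean_lab_of_region(rgb_image, mask):
    """Mean CIELAB L*, a*, b* over a facial region given by a boolean mask.

    `rgb_image` is a float RGB image in [0, 1]; `mask` marks the region
    (e.g., lower cheek) produced by an assumed facial segmentation step.
    """
    lab = color.rgb2lab(rgb_image)
    region = lab[mask]                 # N x 3 array of L*, a*, b* values
    return region.mean(axis=0)

face = np.random.rand(128, 128, 3)     # placeholder color-corrected face photo
cheek_mask = np.zeros((128, 128), dtype=bool)
cheek_mask[80:110, 30:60] = True       # placeholder lower-cheek region
L_star, a_star, b_star = mean_lab_of_region(face, cheek_mask)
```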

  14. Feature Fusion Algorithm for Multimodal Emotion Recognition from Speech and Facial Expression Signal

    Directory of Open Access Journals (Sweden)

    Han Zhiyan

    2016-01-01

Full Text Available In order to overcome the limitations of single-mode emotion recognition, this paper describes a novel multimodal emotion recognition algorithm that takes the speech signal and the facial expression signal as research subjects. First, the speech and facial expression features are fused, sample sets are obtained by sampling with replacement, and classifiers are trained with a BP neural network (BPNN). Second, the difference between two classifiers is measured with a double-error-difference selection strategy. Finally, the final recognition result is obtained by the majority voting rule. Experiments show that the method improves the accuracy of emotion recognition by exploiting the advantages of both decision-level and feature-level fusion, bringing the whole fusion process closer to human emotion recognition, with a recognition rate of 90.4%.
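A compact sketch of the ensemble idea described above, training several neural-network classifiers on bootstrap resamples of fused feature vectors (sampling with replacement) and fusing their decisions by majority voting; scikit-learn's MLPClassifier stands in for the BP neural network, and the data and class count are placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Placeholder fused feature vectors (speech + facial features concatenated).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))
y = rng.integers(0, 4, size=200)        # four assumed emotion classes

# Train several BP-style networks on bootstrap resamples (sampling with replacement).
ensemble = []
for _ in range(5):
    idx = rng.integers(0, len(X), size=len(X))
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
    ensemble.append(clf.fit(X[idx], y[idx]))

# Majority-voting fusion of the ensemble's decisions.
votes = np.stack([clf.predict(X) for clf in ensemble])
fused = np.array([np.bincount(col).argmax() for col in votes.T])
```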

  15. 3D facial expression recognition using maximum relevance minimum redundancy geometrical features

    Science.gov (United States)

    Rabiu, Habibu; Saripan, M. Iqbal; Mashohor, Syamsiah; Marhaban, Mohd Hamiruce

    2012-12-01

In recent years, facial expression recognition (FER) has become an attractive research area which, besides the fundamental challenges it poses, finds application in areas such as human-computer interaction, clinical psychology, lie detection, pain assessment, and neurology. Generally, approaches to FER consist of three main steps: face detection, feature extraction and expression recognition. The recognition accuracy of FER hinges immensely on the relevance of the selected features in representing the target expressions. In this article, we present a person- and gender-independent 3D facial expression recognition method using maximum relevance minimum redundancy geometrical features. The aim is to detect a compact set of features that sufficiently represents the most discriminative features between the target classes. A multi-class one-against-one SVM classifier was employed to recognize the seven facial expressions: neutral, happy, sad, angry, fear, disgust, and surprise. An average recognition accuracy of 92.2% was recorded. Furthermore, inter-database homogeneity was investigated between two independent databases, the BU-3DFE and UPM-3DFE; the results showed a strong homogeneity between the two databases.
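A simple greedy sketch of maximum-relevance minimum-redundancy feature selection as used above, scoring candidate features by their mutual information with the class minus their mean mutual information with already-selected features; the placeholder geometric features and the use of scikit-learn's estimators are assumptions.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression

def mrmr_select(X, y, k):
    """Greedy max-relevance / min-redundancy selection of k feature indices."""
    relevance = mutual_info_classif(X, y)          # MI of each feature with the class
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best_j, best_score = None, -np.inf
        for j in range(X.shape[1]):
            if j in selected:
                continue
            # Redundancy: mean MI between candidate j and already-selected features.
            redundancy = np.mean(
                [mutual_info_regression(X[:, [s]], X[:, j])[0] for s in selected])
            score = relevance[j] - redundancy
            if score > best_score:
                best_j, best_score = j, score
        selected.append(best_j)
    return selected

# Placeholder geometric features (e.g., 3D landmark distances) and expression labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(150, 20))
y = rng.integers(0, 7, size=150)
print(mrmr_select(X, y, k=5))
```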

  16. Facial Expression Recognition from Video Sequences Based on Spatial-Temporal Motion Local Binary Pattern and Gabor Multiorientation Fusion Histogram

    Directory of Open Access Journals (Sweden)

    Lei Zhao

    2017-01-01

Full Text Available This paper proposes a novel framework for facial expression analysis using dynamic and static information in video sequences. First, based on an incremental formulation, a discriminative deformable face alignment method is adapted to locate facial points, correct in-plane head rotation, and separate the facial region from the background. Then, a spatial-temporal motion local binary pattern (LBP) feature is extracted and integrated with a Gabor multiorientation fusion histogram to give descriptors that reflect the static and dynamic texture information of facial expressions. Finally, a one-versus-one strategy based multiclass support vector machine (SVM) classifier is applied to classify the facial expressions. Experiments on the Cohn-Kanade (CK+) facial expression dataset illustrate that the integrated framework outperforms methods using single descriptors. Compared with other state-of-the-art methods on the CK+, MMI, and Oulu-CASIA VIS datasets, our proposed framework performs better.

  17. Facial Image Compression Based on Structured Codebooks in Overcomplete Domain

    Directory of Open Access Journals (Sweden)

    Vila-Forcén JE

    2006-01-01

Full Text Available We advocate a facial image compression technique within the scope of the distributed source coding framework. The novelty of the proposed approach is twofold: image compression is considered from the position of source coding with side information and, contrary to existing scenarios where the side information is given explicitly, the side information is created based on a deterministic approximation of the local image features. We consider an image in the overcomplete transform domain as a realization of a random source with a structured codebook of symbols where each symbol represents a particular edge shape. Due to the partial availability of the side information at both encoder and decoder, we treat our problem as a modification of the Berger-Flynn-Gray problem and investigate a possible gain over the solutions when side information is either unavailable or available only at the decoder. Finally, the paper presents a practical image compression algorithm for facial images based on our concept that demonstrates superior performance in the very-low-bit-rate regime.

  18. A Multi-Layer Fusion-Based Facial Expression Recognition Approach with Optimal Weighted AUs

    Directory of Open Access Journals (Sweden)

    Xibin Jia

    2017-01-01

Full Text Available Affective computing is an increasingly important outgrowth of Artificial Intelligence, which is intended to deal with rich and subjective human communication. In view of the complexity of affective expression, discriminative feature extraction and corresponding high-performance classifier selection are still a big challenge. Specific features/classifiers display different performance on different datasets. There is currently no consensus in the literature that any expression feature or classifier is always good in all cases. Although recent deep learning algorithms, which learn deep features instead of relying on manual construction, have appeared in expression recognition research, the limited number of training samples is still an obstacle to practical application. In this paper, we aim to find an effective solution based on a fusion and association learning strategy with typical manual features and classifiers. Taking these typical features and classifiers in the facial expression area as a basis, we fully analyse their fusion performance. Meanwhile, to emphasize the major attributions of affective computing, we select facial-expression-related Action Units (AUs) as basic components. In addition, we employ association rules to mine the relationships between AUs and facial expressions. Based on a comprehensive analysis from different perspectives, we propose a novel facial expression recognition approach that uses multiple features and multiple classifiers embedded into a stacking framework based on AUs. Extensive experiments on two public datasets show that our proposed multi-layer fusion system based on optimal AU weighting gains dramatic improvements in facial expression recognition in comparison to individual features/classifiers and some state-of-the-art methods, including a recent deep-learning-based expression recognition method.
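As a generic illustration of the stacking framework mentioned above, the sketch below stacks two base classifiers on AU-style feature vectors with a logistic-regression meta-learner; the AU features, class count, and choice of base classifiers are placeholders, and the paper's optimal AU weighting is not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

# Placeholder AU-based feature vectors and expression labels; real AU detection
# and the paper's AU weighting are assumed to have happened upstream.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 17))          # e.g., 17 AU activation scores per face
y = rng.integers(0, 6, size=300)        # six basic expressions

# Stacking: base classifiers feed a meta-classifier via out-of-fold predictions.
stack = StackingClassifier(
    estimators=[("svm", SVC(probability=True)),
                ("rf", RandomForestClassifier(n_estimators=100))],
    final_estimator=LogisticRegression(max_iter=1000),
    stack_method="predict_proba",
)
stack.fit(X, y)
```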

  19. Facial Feature Tracking Using Efficient Particle Filter and Active Appearance Model

    Directory of Open Access Journals (Sweden)

    Durkhyun Cho

    2014-09-01

Full Text Available For natural human-robot interaction, the location and shape of facial features in a real environment must be identified. One robust method to track facial features is by using a particle filter and the active appearance model. However, the processing speed of this method is too slow for utilization in practice. In order to improve the efficiency of the method, we propose two ideas: (1) changing the number of particles situationally, and (2) switching the prediction model depending upon the degree of the importance of each particle using a combination strategy and a clustering strategy. Experimental results show that the proposed method is about four times faster than the conventional method using a particle filter and the active appearance model, without any loss of performance.

  20. Robust facial landmark detection based on initializing multiple poses

    Directory of Open Access Journals (Sweden)

    Xin Chai

    2016-10-01

Full Text Available For robot systems, robust facial landmark detection is the first and critical step for face-based human identification and facial expression recognition. In recent years, the cascaded-regression-based method has achieved excellent performance in facial landmark detection. Nevertheless, it still has certain weaknesses, such as high sensitivity to the initialization. To address this problem, regression based on multiple initializations is established in a unified model; face shapes are then estimated independently according to these initializations. With a ranking strategy, the best estimate is selected as the final output. Moreover, a face shape model based on restricted Boltzmann machines is built as a constraint to improve the robustness of ranking. Experiments on three challenging datasets demonstrate the effectiveness of the proposed facial landmark detection method against state-of-the-art methods.

  1. Robust Facial Feature Tracking Using Shape-Constrained Multiresolution-Selected Linear Predictors.

    Science.gov (United States)

    Ong, Eng-Jon; Bowden, Richard

    2011-09-01

This paper proposes a learned data-driven approach for accurate, real-time tracking of facial features using only intensity information. The task of automatic facial feature tracking is nontrivial since the face is a highly deformable object with large textural variations and motion in certain regions. Existing works attempt to address these problems by either limiting themselves to tracking feature points with strong and unique visual cues (e.g., mouth and eye corners) or by incorporating a priori information that needs to be manually designed (e.g., selecting points for a shape model). The framework proposed here largely avoids the need for such restrictions by automatically identifying the optimal visual support required for tracking a single facial feature point. This automatic identification of the visual context required for tracking allows the proposed method to potentially track any point on the face. Tracking is achieved via linear predictors which provide a fast and effective method for mapping pixel intensities into tracked feature position displacements. Building upon the simplicity and strengths of linear predictors, a more robust biased linear predictor is introduced. Multiple linear predictors are then grouped into a rigid flock to further increase robustness. To improve tracking accuracy, a novel probabilistic selection method is used to identify relevant visual areas for tracking a feature point. These selected flocks are then combined into a hierarchical multiresolution LP model. Finally, we also exploit a simple shape constraint for correcting the occasional tracking failure of a minority of feature points. Experimental results show that this method performs more robustly and accurately than AAMs, with minimal training examples on example sequences that range from SD quality to YouTube quality. Additionally, an analysis of the visual support consistency across different subjects is also provided.
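The core of a linear predictor as described above is a learned linear map from support-pixel intensity differences to feature-point displacements; the sketch below trains one such predictor from synthetic perturbations using least squares. The support-point layout, perturbation range, and sampling scheme are illustrative assumptions, not the paper's exact training procedure.

```python
import numpy as np

def train_linear_predictor(image, point, support_offsets, n_samples=300,
                           max_disp=5, rng=None):
    """Learn a linear map from support-pixel intensity differences to displacements."""
    rng = np.random.default_rng(rng)
    point = np.asarray(point, dtype=float)

    def sample(p):
        # Read the intensities of the support pixels around position p.
        ys = np.clip(p[0] + support_offsets[:, 0], 0, image.shape[0] - 1).astype(int)
        xs = np.clip(p[1] + support_offsets[:, 1], 0, image.shape[1] - 1).astype(int)
        return image[ys, xs].astype(float)

    ref = sample(point)
    D, T = [], []
    for _ in range(n_samples):
        disp = rng.uniform(-max_disp, max_disp, size=2)
        D.append(sample(point + disp) - ref)   # observed intensity difference...
        T.append(-disp)                        # ...maps back to the correcting displacement
    H, *_ = np.linalg.lstsq(np.array(D), np.array(T), rcond=None)
    return H, ref

# Example usage with an assumed grayscale face image and one feature point.
img = (np.random.rand(120, 120) * 255).astype(float)
offsets = np.array([(dy, dx) for dy in range(-4, 5, 2) for dx in range(-4, 5, 2)])
H, ref = train_linear_predictor(img, (60, 60), offsets)
# At tracking time: displacement estimate = (current_support - ref) @ H
```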

  2. Predicting tooth color from facial features and gender: results from a white elderly cohort.

    Science.gov (United States)

    Hassel, Alexander J; Nitschke, Ina; Dreyhaupt, Jens; Wegener, Ina; Rammelsberg, Peter; Hassel, Jessica C

    2008-02-01

    Clinicians providing edentulous patients with complete dentures are often confronted with the problem of not knowing the patient's natural tooth color. It would be valuable to be able to determine this from other facial features. The purpose of this study was to assess the possibility of predicting tooth color in the elderly from hair and eye color, facial skin complexion, and gender. The lightness (L*), chroma (C*), and hue (h*) of the color of 541 natural teeth were measured for a white study population (94 subjects, 75 to 77 years old, 55.3% male) by means of a single measurement with a clinically applicable spectrophotometer. Hair and eye color and facial skin complexion were recorded in categories. Mixed-effects regression models were calculated for each L*, C*, and h* value with hair and eye color, facial skin complexion, and gender as independent variables (alpha=.05). Only gender and hair color in univariate analysis and, additionally, eye color in multivariate analysis, were significant predictors of tooth color. Higher L* values (lighter color) were associated with lighter eye color and with female gender. The C* value was lower (less saturated) for women. More yellow/green than yellow/red h* values were associated with hair colors other than black and with female gender. However, the parameter estimates of the variables were rather low. Determination of tooth color from hair and eye color and from gender in the white elderly was only partially possible.

  3. Facial Expression Recognition Based on TensorFlow Platform

    Directory of Open Access Journals (Sweden)

    Xia Xiao-Ling

    2017-01-01

Full Text Available Facial expression recognition has a wide range of applications in human-machine interaction, pattern recognition, image understanding, machine vision and other fields. In recent years, it has gradually become a hot research topic. However, different people have different ways of expressing their emotions, and under the influence of brightness, background and other factors, there are some difficulties in facial expression recognition. In this paper, based on the Inception-v3 model of the TensorFlow platform, we use transfer learning techniques to retrain on a facial expression dataset (the Extended Cohn-Kanade dataset), which maintains recognition accuracy while greatly reducing the training time.
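A minimal sketch of the transfer-learning setup described above, reusing an ImageNet-pretrained Inception-v3 and retraining only a new classification head with the Keras API of TensorFlow; the dataset pipeline and the class count for the Extended Cohn-Kanade data are assumptions, and the paper's exact retraining script is not reproduced.

```python
import tensorflow as tf

# Load Inception-v3 pretrained on ImageNet, without the classification head.
base = tf.keras.applications.InceptionV3(weights="imagenet",
                                          include_top=False, pooling="avg")
base.trainable = False                       # keep the pretrained features fixed

# New head for the expression classes (7 classes assumed here for CK+).
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(7, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# `train_ds` is an assumed tf.data.Dataset of (image, label) pairs resized to
# 299x299 and preprocessed with tf.keras.applications.inception_v3.preprocess_input.
# model.fit(train_ds, epochs=5)
```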

  4. Ring 2 chromosome associated with failure to thrive, microcephaly and dysmorphic facial features.

    Science.gov (United States)

    López-Uriarte, Arelí; Quintero-Rivera, Fabiola; de la Fuente Cortez, Beatriz; Puente, Viviana Gómez; Campos, María Del Roble Velazco; de Villarreal, Laura E Martínez

    2013-10-15

We report here a child with a ring chromosome 2 [r(2)] associated with failure to thrive, microcephaly and dysmorphic features. The chromosomal aberration was defined by chromosome microarray analysis, revealing two small deletions at 2p25.3 (139 kb) and 2q37.3 (147 kb). We describe the clinical phenotype, using a conventional approach and molecular cytogenetics, of a male with a history of prenatal intrauterine growth restriction (IUGR), failure to thrive, microcephaly and dysmorphic facial features. The phenotype is very similar to that reported in other clinical cases with ring chromosome 2. © 2013 Elsevier B.V. All rights reserved.

  5. Mirror on the wall: a study of women's perception of facial features as they age.

    Science.gov (United States)

    Sezgin, Billur; Findikcioglu, Kemal; Kaya, Basar; Sibar, Serhat; Yavuzer, Reha

    2012-05-01

    Facial aesthetic treatments are among the most popular cosmetic procedures worldwide, but the factors that motivate women to change their facial appearance are not fully understood. The authors examine the relationships among the facial areas on which women focus most as they age, women's general self-perception, and the effect of their personal focus on "beauty points" on their perception of other women's faces. In this prospective study, 200 women who presented to a cosmetic surgery outpatient clinic for consultation between December 2009 and February 2010 completed a questionnaire. The 200 participants were grouped by age: 20-29 years, 30-39, 40-49, and 50 or older (50 women in each group). They were asked which part of their face they focus on most when looking in the mirror, which part they notice most in other women (of different age groups), what they like/dislike most about their own face, and whether they wished to change any facial feature. A positive correlation was found between women's focal points and the areas they dislike or desire to change. Younger women focused mainly on their nose and skin, while older women focused on their periorbital area and jawline. Women focus on their personal focal points when looking at other women in their 20s and 30s, but not when looking at older women. Women presenting for cosmetic surgery consultation focus on the areas that they dislike most, which leads to a desire to change those features. The plastic surgeon must fully understand patients' expectations to select appropriate candidates and maximize satisfaction with the outcomes.

  6. Facial contour deformity correction with microvascular flaps based on the 3-dimentional template and facial moulage

    Directory of Open Access Journals (Sweden)

    Dinesh Kadam

    2013-01-01

    Full Text Available Introduction: Facial contour deformities present with varied aetiologies and degrees of severity. Accurate assessment, selecting a suitable tissue and sculpturing it to fill the defect is challenging and largely subjective. Objective assessment with imaging and software is not always feasible, and preparing a template is complicated. A three-dimensional (3D) wax template pre-fabricated over the facial moulage aids surgeons in fulfilling these tasks. Severe deformities demand a stable vascular tissue for an acceptable outcome. Materials and Methods: We present a review of eight consecutive patients who underwent augmentation of facial contour defects with free flaps between June 2005 and January 2011. A de-epithelialised free anterolateral thigh (ALT) flap was used in three patients, radial artery forearm and fibula osteocutaneous flaps in two each, and a groin flap in one patient. A 3D wax template was fabricated by augmenting the deformity on the facial moulage. It was utilised to select the flap, to determine the exact dimensions and to sculpture the flap intraoperatively. Ancillary procedures such as genioplasty, rhinoplasty and coloboma correction were performed. Results: The average age at presentation was 25 years, the average disease-free interval was 5.5 years, and all flaps survived. The mean follow-up period was 21.75 months. The correction was aesthetically acceptable and was maintained without any recurrence or atrophy. Conclusion: The 3D wax template on facial moulage is a simple, inexpensive and precise objective tool. It provides an accurate guide for the planning and execution of flap reconstruction. The selection of the flap is based on the type and extent of the defect. The superiority of vascularised free tissue is well known, and the ALT flap offers a versatile option for correcting varying degrees of deformity. Ancillary procedures improve the overall aesthetic outcomes, and minor flap touch-up procedures are generally required.

  7. Enhancement of the Adaptive Shape Variants Average Values by Using Eight Movement Directions for Multi-Features Detection of Facial Sketch

    Directory of Open Access Journals (Sweden)

    Arif Muntasa

    2013-09-01

    Full Text Available This paper aims to detect multiple features of a facial sketch using a novel approach. The detection of multiple facial sketch features has been conducted by several researchers, but they mainly considered frontal face sketches as object samples. In fact, detecting multiple features of a facial sketch at a certain angle is very important for assisting police in describing a criminal's face when the face only appears at a certain angle. Integration of maximum line gradient value enhancement and the level set method was implemented to detect facial sketch features with tilt angles up to 15 degrees. However, these methods tend to move towards non-feature regions when there is a lot of graffiti around the shape. To overcome this weakness, the author proposes a novel approach that moves the shape by adding a parameter to control the movement, based on enhancement of the adaptive shape variants average values with 8 movement directions. The experimental results show that the proposed method can improve the detection accuracy up to 92.74%.

  8. Enhancement of the Adaptive Shape Variants Average Values by Using Eight Movement Directions for Multi-Features Detection of Facial Sketch

    Directory of Open Access Journals (Sweden)

    Arif Muntasa

    2012-04-01

    Full Text Available This paper aims to detect multiple features of a facial sketch using a novel approach. The detection of multiple facial sketch features has been conducted by several researchers, but they mainly considered frontal face sketches as object samples. In fact, detecting multiple features of a facial sketch at a certain angle is very important for assisting police in describing a criminal's face when the face only appears at a certain angle. Integration of maximum line gradient value enhancement and the level set method was implemented to detect facial sketch features with tilt angles up to 15 degrees. However, these methods tend to move towards non-feature regions when there is a lot of graffiti around the shape. To overcome this weakness, the author proposes a novel approach that moves the shape by adding a parameter to control the movement, based on enhancement of the adaptive shape variants average values with 8 movement directions. The experimental results show that the proposed method can improve the detection accuracy up to 92.74%.

  9. Long-term assessment of facial features and functions needing more attention in treatment of Treacher Collins syndrome.

    Science.gov (United States)

    Plomp, Raul G; Versnel, Sarah L; van Lieshout, Manouk J S; Poublon, Rene M L; Mathijssen, Irene M J

    2013-08-01

    This study aimed to determine which facial features and functions need more attention during surgical treatment of Treacher Collins syndrome (TCS) in the long term. A cross-sectional cohort study was conducted to compare 23 TCS patients with 206 controls (all ≥18 years) regarding satisfaction with their face. The adjusted Body Cathexis Scale was used to determine satisfaction with the appearance of the different facial features and functions. Desire for further treatment of these items was questioned. For each patient an overview was made of all facial operations performed, the affected facial features and the objective severity of the facial deformities. Patients were least satisfied with the appearance of the ears, facial profile and eyelids and with the functions hearing and nasal patency. Functional deficits of the face are shown to be as important as the facial appearance. Particularly nasal patency and hearing are frequently impaired and require routine screening and treatment from intake onwards. Furthermore, correction of ear deformities and midface hypoplasia should be offered and performed more frequently. Residual deformity and dissatisfaction remain a problem, especially in reconstructed eyelids. II. Copyright © 2013 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  10. Neighbors Based Discriminative Feature Difference Learning for Kinship Verification

    DEFF Research Database (Denmark)

    Duan, Xiaodong; Tan, Zheng-Hua

    2015-01-01

    In this paper, we present a discriminative feature difference learning method for facial image based kinship verification. To transform the feature difference of an image pair to be discriminative for kinship verification, a linear transformation matrix for the feature difference between an image pair...... databases show that the proposed method combined with an SVM classification method outperforms or is comparable to state-of-the-art kinship verification methods. © Springer International Publishing AG, Part of Springer Science+Business Media...

  11. Laying eyes on headlights: eye movements suggest facial features in cars.

    Science.gov (United States)

    Windhager, Sonja; Hutzler, Florian; Carbon, Claus-Christian; Oberzaucher, Elisabeth; Schaefer, Katrin; Thorstensen, Truls; Leder, Helmut; Grammer, Karl

    2010-09-01

    Humans' proneness to see faces even in inanimate structures such as cars has long been noticed, yet empirical evidence is scarce. To examine this tendency of anthropomorphism, participants were asked to compare specific features (such as the eyes) of a face and a car front presented next to each other. Eye movement patterns indicated on which visual information participants relied to solve the task and clearly revealed the perception of facial features in cars, such as headlights as eyes or grille as nose. Most importantly, a predominance of headlights was found in attracting and guiding people's gaze irrespective of the feature they were asked to compare--equivalent to the role of the eyes during face perception. This response to abstract configurations is interpreted as an adaptive bias of the respective inherent mechanism for face perception and is evolutionarily reasonable with regard to a "better safe than sorry" strategy.

  12. A novel human--machine interface based on recognition of multi-channel facial bioelectric signals.

    Science.gov (United States)

    Mohammad Rezazadeh, Iman; Firoozabadi, S Mohammad; Hu, Huosheng; Hashemi Golpayegani, S Mohammad Reza

    2011-12-01

    This paper presents a novel human-machine interface for disabled people to interact with assistive systems for a better quality of life. It is based on multi-channel forehead bioelectric signals acquired by placing three pairs of electrodes (physical channels) on the Frontalis and Temporalis facial muscles. The acquired signals are passed through a parallel filter bank to explore three different sub-bands related to facial electromyogram, electrooculogram and electroencephalogram. The root mean square features of the bioelectric signals, analyzed within non-overlapping 256 ms windows, were extracted. The subtractive fuzzy c-means clustering method (SFCM) was applied to segment the feature space and generate initial fuzzy-based Takagi-Sugeno rules. Then, an adaptive neuro-fuzzy inference system is exploited to tune the premise and consequence parameters of the extracted SFCM rules. The average classifier discriminating ratio for eight different facial gestures (smiling, frowning, pulling up the left/right lip corner, eye movement to left/right/up/down) is between 93.04% and 96.99% according to different combinations and fusions of logical features. Experimental results show that the proposed interface has a high degree of accuracy and robustness for discrimination of 8 fundamental facial gestures. Some potential and further capabilities of our approach in human-machine interfaces are also discussed.
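
    A minimal sketch of the windowed root-mean-square feature extraction described above is shown below; the sampling rate, array shapes and the synthetic stand-in signal are illustrative assumptions.

    ```python
    # Sketch: root-mean-square (RMS) features over non-overlapping 256 ms windows
    # for a multi-channel forehead signal. The signal is assumed to be shaped
    # (n_samples, n_channels); the sampling rate is illustrative.
    import numpy as np

    def rms_features(signal, fs, window_ms=256):
        """Return an array of shape (n_windows, n_channels) of RMS values."""
        win = int(fs * window_ms / 1000)            # samples per window
        n_windows = signal.shape[0] // win          # drop the incomplete tail
        trimmed = signal[:n_windows * win].reshape(n_windows, win, -1)
        return np.sqrt((trimmed ** 2).mean(axis=1))

    # Example with a synthetic 3-channel signal sampled at 1 kHz:
    fs = 1000
    demo = np.random.randn(10 * fs, 3)              # 10 s of noise, stand-in data
    features = rms_features(demo, fs)
    print(features.shape)                           # (39, 3): one RMS vector per window
    ```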

  13. A novel human-machine interface based on recognition of multi-channel facial bioelectric signals

    International Nuclear Information System (INIS)

    Razazadeh, Iman Mohammad; Firoozabadi, S. Mohammad; Golpayegani, S.M.R.H.; Hu, H.

    2011-01-01

    Full text: This paper presents a novel human-machine interface for disabled people to interact with assistive systems for a better quality of life. It is based on multichannel forehead bioelectric signals acquired by placing three pairs of electrodes (physical channels) on the Frontalis and Temporalis facial muscles. The acquired signals are passed through a parallel filter bank to explore three different sub-bands related to facial electromyogram, electrooculogram and electroencephalogram. The root mean square features of the bioelectric signals, analyzed within non-overlapping 256 ms windows, were extracted. The subtractive fuzzy c-means clustering method (SFCM) was applied to segment the feature space and generate initial fuzzy-based Takagi-Sugeno rules. Then, an adaptive neuro-fuzzy inference system is exploited to tune the premise and consequence parameters of the extracted SFCM rules. The average classifier discriminating ratio for eight different facial gestures (smiling, frowning, pulling up the left/right lip corner, eye movement to left/right/up/down) is between 93.04% and 96.99% according to different combinations and fusions of logical features. Experimental results show that the proposed interface has a high degree of accuracy and robustness for discrimination of 8 fundamental facial gestures. Some potential and further capabilities of our approach in human-machine interfaces are also discussed. (author)

  14. Scattered Data Processing Approach Based on Optical Facial Motion Capture

    Directory of Open Access Journals (Sweden)

    Qiang Zhang

    2013-01-01

    Full Text Available In recent years, animation reconstruction of facial expressions has become a popular research field in computer science, and motion capture-based facial expression reconstruction is now emerging in this field. Based on the facial motion data obtained using a passive optical motion capture system, we propose a scattered data processing approach, which aims to solve the common problems of missing data and noise. To recover missing data, given the nonlinear relationships among neighbors of the current missing marker, we propose an improved version of a previous method, where we use the motion of three muscles rather than one to recover the missing data. To reduce the noise, we initially apply preprocessing to eliminate impulsive noise, before our proposed third-order quasi-uniform B-spline-based fitting method is used to reduce the remaining noise. Our experiments showed that the principles that underlie this method are simple and straightforward, and it delivered acceptable precision during reconstruction.
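
    A simplified sketch of cubic B-spline smoothing of one marker coordinate is shown below as a stand-in for the quasi-uniform B-spline fitting described above; the trajectory is synthetic demo data and the smoothing parameter is an assumption.

    ```python
    # Sketch: smoothing one coordinate of a noisy marker trajectory with a cubic
    # (third-order) B-spline fit. This is a simplified stand-in, not the paper's
    # exact quasi-uniform formulation; the trajectory here is synthetic demo data.
    import numpy as np
    from scipy.interpolate import splrep, splev

    t = np.linspace(0.0, 2.0, 200)                     # time stamps (s)
    clean = 0.5 * np.sin(2 * np.pi * 1.5 * t)          # underlying marker motion
    noisy = clean + np.random.normal(scale=0.02, size=t.shape)

    # k=3 gives a cubic B-spline; s controls the amount of smoothing.
    tck = splrep(t, noisy, k=3, s=len(t) * 0.02 ** 2)
    smoothed = splev(t, tck)

    print(float(np.abs(smoothed - clean).mean()))      # residual error vs. the clean signal
    ```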

  15. Fixation to features and neural processing of facial expressions in a gender discrimination task.

    Science.gov (United States)

    Neath, Karly N; Itier, Roxane J

    2015-10-01

    Early face encoding, as reflected by the N170 ERP component, is sensitive to fixation to the eyes. We investigated whether this sensitivity varies with facial expressions of emotion and can also be seen on other ERP components such as the P1 and EPN. Using eye-tracking to manipulate fixation on facial features, we found the N170 to be the only eye-sensitive component, and this was true for fearful, happy and neutral faces. A different effect of fixation to features was seen for the earlier P1 that likely reflected general sensitivity to face position. An early effect of emotion (∼120 ms) for happy faces was seen at occipital sites and was sustained until ∼350 ms post-stimulus. For fearful faces, an early effect was seen around 80 ms followed by a later effect appearing at ∼150 ms until ∼300 ms at lateral posterior sites. Results suggest that in this emotion-irrelevant gender discrimination task, processing of fearful and happy expressions occurred early and largely independently of the eye-sensitivity indexed by the N170. Processing of the two emotions involved different underlying brain networks active at different times. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. High-resolution computed tomographic features of the stapedius muscle and facial nerve in chronic otitis media.

    Science.gov (United States)

    Fang, Yanqing; Meyer, Jacob; Chen, Bing

    2013-08-01

    To improve preoperative recognition of the morphologic features of the stapedius muscle and facial nerve in cases of chronic otitis media by providing a systemized description using temporal bone high-resolution computed tomography (HRCT). Retrospective review of HRCT scans from 212 patients. Tertiary hospital affiliated to Fudan University. Men and women undergoing surgery for chronic otitis media. No preference for demographics or side presenting otitis media. Therapeutic surgery. Location and morphology of the stapedius muscle and facial nerve. The stapedius muscle was encountered in 90.5% of axial (n = 181) and 87% of coronal sections (n = 174), and differences between sides and genders were not significant (p > 0.05). Five categories of anomalies or pathologic features were identified in axial layers, and 3 categories were identified in coronal layers. Two axial and 2 coronal CT planes were found to be especially significant in imaging the facial nerve and its morphology (p < 0.001), whereas axial planes were more apt to show stapedius muscle features. Other pathologic features were also observed significantly more often from specific CT imaging planes. The presence of the stapedius muscle and the morphologic relationship between the stapedius muscle and the facial nerve vary between different observation areas, and some CT planes provide more useful information than others. The imaging planes outlined in this study can be used to systematically and correctly identify certain facial nerve and stapedius muscle features and clarify unfamiliar pathologic anatomy in preoperative planning.

  17. Using Computers for Assessment of Facial Features and Recognition of Anatomical Variants that Result in Unfavorable Rhinoplasty Outcomes

    Directory of Open Access Journals (Sweden)

    Tarik Ozkul

    2008-04-01

    Full Text Available Rhinoplasty and facial plastic surgery are among the most frequently performed surgical procedures in the world. Although the underlying anatomical features of the nose and face are very well known, performing a successful facial surgery requires not only surgical skill but also aesthetic talent from the surgeon. Sculpting facial features surgically in correct proportions to end up with an aesthetically pleasing result is highly difficult. To further complicate the matter, some patients may have anatomical features which affect the rhinoplasty outcome negatively. If they go undetected, these anatomical variants jeopardize the surgery, causing unexpected rhinoplasty outcomes. In this study, a model is developed with the aid of artificial intelligence tools, which analyses the facial features of the patient from a photograph and generates an index of "appropriateness" of the facial features and an index of the existence of anatomical variants that affect rhinoplasty negatively. The software tool developed is intended to detect the variants and warn the surgeon before the surgery. Another purpose of the tool is to generate an objective score to assess the outcome of the surgery.

  18. 3D Facial Similarity Measure Based on Geodesic Network and Curvatures

    Directory of Open Access Journals (Sweden)

    Junli Zhao

    2014-01-01

    Full Text Available Automated 3D facial similarity measurement is a challenging and valuable research topic in anthropology and computer graphics. It is widely used in various fields, such as criminal investigation, kinship confirmation, and face recognition. This paper proposes a 3D facial similarity measure method based on a combination of geodesic and curvature features. Firstly, a geodesic network is generated for each face, with geodesics and iso-geodesics determined, and these network points are adopted as the correspondence across face models. Then, four metrics associated with curvatures, that is, the mean curvature, Gaussian curvature, shape index, and curvedness, are computed for each network point by using a weighted average of its neighborhood points. Finally, correlation coefficients according to these metrics are computed, respectively, as the similarity measures between two 3D face models. Experiments on different persons' 3D facial models and on different 3D facial models of the same person are implemented and compared with a subjective face similarity study. The results show that the geodesic network plays an important role in the 3D facial similarity measure. The similarity measure defined by the shape index is basically consistent with humans' subjective evaluation, and it can measure 3D face similarity more objectively than the other indices.
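
    A small numpy sketch of the four curvature metrics and a correlation-based similarity is given below; it assumes the principal curvatures at the network points have already been estimated elsewhere, and the sign of the shape index depends on the chosen surface normal orientation.

    ```python
    # Sketch: the four curvature-based metrics named above, computed from the two
    # principal curvatures k1 >= k2 at each geodesic-network point, followed by a
    # correlation-based similarity between two faces.
    import numpy as np

    def curvature_metrics(k1, k2):
        """k1, k2: arrays of principal curvatures (k1 >= k2) at the network points."""
        mean_curv = (k1 + k2) / 2.0
        gauss_curv = k1 * k2
        # Koenderink shape index in [-1, 1]; arctan2 handles the umbilic case k1 == k2.
        shape_index = (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)
        curvedness = np.sqrt((k1 ** 2 + k2 ** 2) / 2.0)
        return mean_curv, gauss_curv, shape_index, curvedness

    def similarity(metric_a, metric_b):
        """Pearson correlation between corresponding network points of two faces."""
        return float(np.corrcoef(metric_a, metric_b)[0, 1])
    ```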

  19. Analysis of differences between Western and East-Asian faces based on facial region segmentation and PCA for facial expression recognition

    Science.gov (United States)

    Benitez-Garcia, Gibran; Nakamura, Tomoaki; Kaneko, Masahide

    2017-01-01

    Darwin was the first to assert that facial expressions are innate and universal, recognized across all cultures. However, some recent cross-cultural studies have questioned this assumed universality. Therefore, this paper presents an analysis of the differences between Western and East-Asian faces for the six basic expressions (anger, disgust, fear, happiness, sadness and surprise), focused on three individual facial regions: eyes-eyebrows, nose and mouth. The analysis is conducted by applying PCA to two feature extraction methods: appearance-based, using the pixel intensities of facial parts, and geometric-based, using 125 feature points from the face. Both methods are evaluated using 4 standard databases for both racial groups, and the results are compared with a cross-cultural human study involving 20 participants. Our analysis reveals that differences between Westerners and East-Asians exist mainly in the eyes-eyebrows and mouth regions for expressions of fear and disgust, respectively. This work presents important findings for a better design of automatic facial expression recognition systems based on the differences between the two racial groups.

  20. Influence of skin ageing features on Chinese women's perception of facial age and attractiveness.

    Science.gov (United States)

    Porcheron, A; Latreille, J; Jdid, R; Tschachler, E; Morizot, F

    2014-08-01

    Ageing leads to characteristic changes in the appearance of facial skin. Among these changes, we can distinguish the skin topographic cues (skin sagging and wrinkles), the dark spots and the dark circles around the eyes. Although skin changes are similar in Caucasian and Chinese faces, the age of occurrence and the severity of age-related features differ between the two populations. Little is known about how the ageing of skin influences the perception of female faces in Chinese women. The aim of this study is to evaluate the contribution of the different age-related skin features to the perception of age and attractiveness in Chinese women. Facial images of Caucasian women and Chinese women in their 60s were manipulated separately to reduce the following skin features: (i) skin sagging and wrinkles, (ii) dark spots and (iii) dark circles. Finally, all signs were reduced simultaneously (iv). Female Chinese participants were asked to estimate the age difference between the modified and original images and evaluate the attractiveness of modified and original faces. Chinese women perceived the Chinese faces as younger after the manipulation of dark spots than after the reduction in wrinkles/sagging, whereas they perceived the Caucasian faces as the youngest after the manipulation of wrinkles/sagging. Interestingly, Chinese women evaluated faces with reduced dark spots as being the most attractive whatever the origin of the face. The manipulation of dark circles contributed to the Caucasian and Chinese faces being perceived as younger and more attractive than the original faces, although the effect was less pronounced than for the two other types of manipulation. This is the first study to have examined the influence of various age-related skin features on the facial age and attractiveness perception of Chinese women. The results highlight different contributions of dark spots, sagging/wrinkles and dark circles to their perception of Chinese and Caucasian faces.

  1. Emotion recognition based on facial components

    Indian Academy of Sciences (India)

    P ITHAYA RANI

    2018-03-28

    Mar 28, 2018 ... time and memory, to convolve face images with a bank of Gabor filters to ... over, the LBP is sensitive to noise because the point fea- ... ences of noise. In addition, it encodes the comparative sizes of the central region with locally neighbouring regions into a binary code as in an LBP feature (see figure 2).
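
    For reference, a minimal sketch of computing a standard uniform LBP code image and its histogram with scikit-image is shown below; it illustrates the binary neighbourhood encoding mentioned in the snippet, not the specific descriptor proposed in the paper.

    ```python
    # Sketch: uniform LBP codes and their histogram, usable as a texture feature
    # vector for a face region (standard LBP, not the paper's variant).
    import numpy as np
    from skimage import data
    from skimage.feature import local_binary_pattern

    image = data.camera()                         # any greyscale face crop would do
    P, R = 8, 1                                   # 8 neighbours on a radius-1 circle
    lbp = local_binary_pattern(image, P, R, method="uniform")

    n_bins = P + 2                                # uniform patterns yield P + 2 codes
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    print(hist)
    ```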

  2. Facial expression recognition based on weber local descriptor and sparse representation

    Science.gov (United States)

    Ouyang, Yan

    2018-03-01

    Automatic facial expression recognition has been one of the research hotspots in the area of computer vision for nearly ten years. During the decade, many state-of-the-art methods have been proposed which achieve very high recognition accuracy on face images without any interference. Nowadays, many researchers have begun to tackle the task of classifying facial expression images with corruptions and occlusions, and the Sparse Representation based Classification framework has been widely used because it is robust to corruptions and occlusions. Therefore, this paper proposes a novel facial expression recognition method based on the Weber local descriptor (WLD) and sparse representation. The method includes three parts: firstly, the face images are divided into many local patches; then, the WLD histograms of each patch are extracted; finally, all the WLD histogram features are concatenated into a vector and combined with SRC to classify the facial expressions. The experimental results on the Cohn-Kanade database show that the proposed method is robust to occlusions and corruptions.
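
    A compact sketch of the sparse-representation classification step is given below, written with scikit-learn's Lasso as the sparse coder; the dictionary, labels and regularisation value are illustrative assumptions, and the WLD feature extraction is assumed to have been done beforehand.

    ```python
    # Sketch: sparse-representation-based classification (SRC). A test feature
    # vector is coded as a sparse combination of all training vectors and assigned
    # to the class with the smallest class-wise reconstruction residual.
    import numpy as np
    from sklearn.linear_model import Lasso

    def src_classify(D, labels, y, alpha=0.01):
        """D: (n_features, n_train) dictionary of unit-normalised training columns,
        labels: (n_train,) class labels, y: (n_features,) test feature vector."""
        coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        coder.fit(D, y)                                 # sparse code x with y ~= D @ x
        x = coder.coef_
        residuals = {}
        for c in np.unique(labels):
            xc = np.where(labels == c, x, 0.0)          # keep only class-c coefficients
            residuals[c] = np.linalg.norm(y - D @ xc)
        return min(residuals, key=residuals.get)        # class with smallest residual
    ```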

  3. Exposure to Sodium Valproate during Pregnancy: Facial Features and Signs of Autism.

    Science.gov (United States)

    Stadelmaier, Rachel; Nasri, Hanah; Deutsch, Curtis K; Bauman, Margaret; Hunt, Anne; Stodgell, Christopher J; Adams, Jane; Holmes, Lewis B

    2017-08-15

    Valproic acid (VPA) is the most teratogenic anticonvulsant drug in clinical use today. Children exposed prenatally to VPA have previously been shown to have dysmorphic craniofacial features, identified subjectively but not by anthropometric methods. Exposure to VPA has also been associated with an increased frequency of autism spectrum disorder (ASD). An increased cephalic index (the ratio of the cranial lateral width to the cranial anterior-posterior length) has been observed in children with ASD. Forty-seven children exposed to VPA during the first trimester of pregnancy were evaluated for dysmorphic facial features, identified subjectively and by measurements. Each VPA-exposed child was evaluated for ASD using the Social Communication Questionnaire, Autism Diagnostic Interview-Revised, and Autism Diagnostic Observation Schedule. The same physical examination was carried out on an unexposed comparison group of 126 children. The unexposed children also had testing for cognitive performance by the Wechsler Intelligence Scale for Children. Several dysmorphic craniofacial features, including telecanthus, wide philtrum, and increased length of the upper lip were identified subjectively. Anthropometric measurements confirmed the increased intercanthal distance and documented additional findings, including an increased cephalic index and decreased head circumference/height index. There were no differences between the craniofacial features of VPA-exposed children with and without ASD. An increased frequency of dysmorphic craniofacial features was identified in children exposed to VPA during the first trimester of pregnancy. The most consistent finding was a larger cephalic index, which indicates a disproportion of increased width of the skull relative to the shortened anterior-posterior length. Birth Defects Research 109:1134-1143, 2017. © 2017 Wiley Periodicals, Inc.

  4. Extreme Facial Expressions Classification Based on Reality Parameters

    Science.gov (United States)

    Rahim, Mohd Shafry Mohd; Rad, Abdolvahab Ehsani; Rehman, Amjad; Altameem, Ayman

    2014-09-01

    Extreme expressions are a type of emotional expression stimulated by strong emotion; an example of such an extreme expression is one accompanied by tears. To be able to reproduce these features, additional elements such as a fluid mechanism (particle system) and physics techniques such as smoothed particle hydrodynamics (SPH) are introduced. The fusion of facial animation with SPH exhibits promising results. Accordingly, the proposed fluid technique combined with facial animation is the core of this research for producing complex expressions, such as laughing, smiling, crying (the emergence of tears) or sadness intensifying into strong crying, as an extreme expression classification of what happens on the human face in some cases.

  5. Image Based Hair Segmentation Algorithm for the Application of Automatic Facial Caricature Synthesis

    Directory of Open Access Journals (Sweden)

    Yehu Shen

    2014-01-01

    Full Text Available Hair is a salient feature of the human face region and one of the important cues for face analysis. Accurate detection and presentation of the hair region is one of the key components for automatic synthesis of human facial caricatures. In this paper, an automatic hair detection algorithm for the application of automatic facial caricature synthesis based on a single image is proposed. Firstly, hair regions in training images are labeled manually, and the hair position prior distributions and hair color likelihood distribution function are estimated from these labels efficiently. Secondly, the energy function of the test image is constructed according to the estimated prior distributions of hair location and hair color likelihood. This energy function is then optimized using the graph cuts technique and an initial hair region is obtained. Finally, the K-means algorithm and image postprocessing techniques are applied to the initial hair region so that the final hair region can be segmented precisely. Experimental results show that the average processing time for each image is about 280 ms and the average hair region detection accuracy is above 90%. The proposed algorithm is applied to a facial caricature synthesis system. Experiments proved that with our proposed hair segmentation algorithm the facial caricatures are vivid and satisfying.
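
    As an illustration of the K-means refinement step, a hedged sketch follows; the mask representation, cluster count and the rule for keeping clusters are assumptions, not the paper's exact procedure.

    ```python
    # Sketch: refining an initial hair mask by clustering the pixel colours inside
    # it with K-means and keeping the clusters closest to a hair-colour estimate.
    import numpy as np
    from sklearn.cluster import KMeans

    def refine_hair_mask(image_rgb, init_mask, hair_color, n_clusters=3):
        """image_rgb: (H, W, 3) float array, init_mask: (H, W) bool array (e.g. from graph cuts),
        hair_color: length-3 RGB estimate of the dominant hair colour."""
        coords = np.argwhere(init_mask)                      # (N, 2) pixel coordinates
        pixels = image_rgb[init_mask]                        # (N, 3) colours inside the mask
        km = KMeans(n_clusters=n_clusters, n_init=10).fit(pixels)

        # Keep clusters whose centre is close to the estimated hair colour.
        dists = np.linalg.norm(km.cluster_centers_ - np.asarray(hair_color), axis=1)
        keep = dists <= np.median(dists)

        refined = np.zeros_like(init_mask)
        sel = keep[km.labels_]                               # per-pixel keep decision
        refined[coords[sel, 0], coords[sel, 1]] = True
        return refined
    ```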

  6. Image Based Hair Segmentation Algorithm for the Application of Automatic Facial Caricature Synthesis

    Science.gov (United States)

    Peng, Zhenyun; Zhang, Yaohui

    2014-01-01

    Hair is a salient feature of the human face region and one of the important cues for face analysis. Accurate detection and presentation of the hair region is one of the key components for automatic synthesis of human facial caricatures. In this paper, an automatic hair detection algorithm for the application of automatic facial caricature synthesis based on a single image is proposed. Firstly, hair regions in training images are labeled manually, and the hair position prior distributions and hair color likelihood distribution function are estimated from these labels efficiently. Secondly, the energy function of the test image is constructed according to the estimated prior distributions of hair location and hair color likelihood. This energy function is then optimized using the graph cuts technique and an initial hair region is obtained. Finally, the K-means algorithm and image postprocessing techniques are applied to the initial hair region so that the final hair region can be segmented precisely. Experimental results show that the average processing time for each image is about 280 ms and the average hair region detection accuracy is above 90%. The proposed algorithm is applied to a facial caricature synthesis system. Experiments proved that with our proposed hair segmentation algorithm the facial caricatures are vivid and satisfying. PMID:24592182

  7. Confidence-Based Feature Acquisition

    Science.gov (United States)

    Wagstaff, Kiri L.; desJardins, Marie; MacGlashan, James

    2010-01-01

    Confidence-based Feature Acquisition (CFA) is a novel, supervised learning method for acquiring missing feature values when there is missing data at both training (learning) and test (deployment) time. To train a machine learning classifier, data is encoded with a series of input features describing each item. In some applications, the training data may have missing values for some of the features, which can be acquired at a given cost. A relevant JPL example is that of the Mars rover exploration in which the features are obtained from a variety of different instruments, with different power consumption and integration time costs. The challenge is to decide which features will lead to increased classification performance and are therefore worth acquiring (paying the cost). To solve this problem, CFA, which is made up of two algorithms (CFA-train and CFA-predict), has been designed to greedily minimize total acquisition cost (during training and testing) while aiming for a specific accuracy level (specified as a confidence threshold). With this method, it is assumed that there is a nonempty subset of features that are free; that is, every instance in the data set includes these features initially for zero cost. It is also assumed that the feature acquisition (FA) cost associated with each feature is known in advance, and that the FA cost for a given feature is the same for all instances. Finally, CFA requires that the base-level classifiers produce not only a classification, but also a confidence (or posterior probability).
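
    A simplified, hypothetical test-time loop in the spirit of CFA-predict is sketched below using scikit-learn; the cost table, confidence threshold and mean-imputation of unacquired features are illustrative assumptions rather than the published algorithm.

    ```python
    # Sketch: greedy feature acquisition at prediction time. Starting from the
    # free features only, the cheapest unacquired feature is added until the
    # classifier's confidence (max posterior probability) reaches a threshold.
    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier

    X, y = load_iris(return_X_y=True)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    train_means = X.mean(axis=0)

    costs = {0: 0.0, 1: 0.0, 2: 5.0, 3: 8.0}        # features 0-1 free; costs hypothetical

    def predict_with_acquisition(x_full, threshold=0.9):
        acquired = {i for i, c in costs.items() if c == 0.0}
        while True:
            idx = sorted(acquired)
            x = train_means.copy()                  # unacquired features imputed with means
            x[idx] = x_full[idx]                    # only acquired values are real
            proba = clf.predict_proba(x.reshape(1, -1))[0]
            if proba.max() >= threshold or len(acquired) == len(costs):
                return int(proba.argmax()), idx, float(proba.max())
            remaining = sorted(set(costs) - acquired, key=costs.get)
            acquired.add(remaining[0])              # greedily buy the cheapest feature

    print(predict_with_acquisition(X[0]))
    ```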

  8. MRI-based diagnostic imaging of the intratemporal facial nerve

    International Nuclear Information System (INIS)

    Kress, B.; Baehren, W.

    2001-01-01

    Detailed imaging of the five sections of the full intratemporal course of the facial nerve can be achieved by MRI using thin tomographic section techniques and surface coils. Contrast media are required for tomographic imaging of pathological processes. Established methods are available for diagnostic evaluation of cerebellopontine angle tumors and chronic Bell's palsy, as well as hemifacial spasms. A method still under discussion is MRI for diagnostic evaluation of Bell's palsy in the presence of fractures of the petrous bone, when blood volumes in the petrous bone make evaluation even more difficult. MRI-based diagnostic evaluation of idiopathic facial paralysis is currently subject to change. Its usual application cannot be recommended for routine evaluation at present. However, a quantitative analysis of contrast medium uptake of the nerve may be an approach to improve the prognostic value of MRI in acute phases of Bell's palsy. (orig./CB) [de

  9. Likelihood Ratio Based Mixed Resolution Facial Comparison

    NARCIS (Netherlands)

    Peng, Y.; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.

    2015-01-01

    In this paper, we propose a novel method for low-resolution face recognition. It is especially useful for a common situation in forensic search where faces of low resolution, e.g. on surveillance footage or in a crowd, must be compared to a high-resolution reference. This method is based on the

  10. Multimodal recognition based on face and ear using local feature

    Science.gov (United States)

    Yang, Ruyin; Mu, Zhichun; Chen, Long; Fan, Tingyu

    2017-06-01

    The pose issue, which may cause the loss of useful information, has always been a bottleneck in face and ear recognition. To address this problem, we propose a multimodal recognition approach based on face and ear using local features, which is robust to large facial pose variations in unconstrained scenes. A deep learning method is used for facial pose estimation, and a well-trained Faster R-CNN is used to detect and segment the regions of the face and ear. Then we propose a weighted region-based recognition method to deal with the local features. The proposed method achieves state-of-the-art recognition performance, especially when the images are affected by pose variations and random occlusion in unconstrained scenes.

  11. Facial Image Analysis Based on Local Binary Patterns: A Survey

    NARCIS (Netherlands)

    Huang, D.; Shan, C.; Ardebilian, M.; Chen, L.

    2011-01-01

    Facial image analysis, including face detection, face recognition, facial expression analysis, facial demographic classification, and so on, is an important and interesting research topic in the computer vision and image processing area, which has many important applications such as human-computer

  12. Influence of skin ageing features on Chinese women's perception of facial age and attractiveness

    Science.gov (United States)

    Porcheron, A; Latreille, J; Jdid, R; Tschachler, E; Morizot, F

    2014-01-01

    Objectives Ageing leads to characteristic changes in the appearance of facial skin. Among these changes, we can distinguish the skin topographic cues (skin sagging and wrinkles), the dark spots and the dark circles around the eyes. Although skin changes are similar in Caucasian and Chinese faces, the age of occurrence and the severity of age-related features differ between the two populations. Little is known about how the ageing of skin influences the perception of female faces in Chinese women. The aim of this study is to evaluate the contribution of the different age-related skin features to the perception of age and attractiveness in Chinese women. Methods Facial images of Caucasian women and Chinese women in their 60s were manipulated separately to reduce the following skin features: (i) skin sagging and wrinkles, (ii) dark spots and (iii) dark circles. Finally, all signs were reduced simultaneously (iv). Female Chinese participants were asked to estimate the age difference between the modified and original images and evaluate the attractiveness of modified and original faces. Results Chinese women perceived the Chinese faces as younger after the manipulation of dark spots than after the reduction in wrinkles/sagging, whereas they perceived the Caucasian faces as the youngest after the manipulation of wrinkles/sagging. Interestingly, Chinese women evaluated faces with reduced dark spots as being the most attractive whatever the origin of the face. The manipulation of dark circles contributed to the Caucasian and Chinese faces being perceived as younger and more attractive than the original faces, although the effect was less pronounced than for the two other types of manipulation. Conclusion This is the first study to have examined the influence of various age-related skin features on the facial age and attractiveness perception of Chinese women. The results highlight different contributions of dark spots, sagging/wrinkles and dark circles to their perception

  13. CBFS: high performance feature selection algorithm based on feature clearness.

    Directory of Open Access Journals (Sweden)

    Minseok Seo

    Full Text Available BACKGROUND: The goal of feature selection is to select useful features and simultaneously exclude garbage features from a given dataset for classification purposes. This is expected to reduce processing time and improve classification accuracy. METHODOLOGY: In this study, we devised a new feature selection algorithm (CBFS) based on the clearness of features. Feature clearness expresses the separability among classes in a feature. Highly clear features contribute towards obtaining high classification accuracy. CScore is a measure that scores the clearness of each feature, based on how well samples cluster around the class centroids in that feature. We also suggest combining CBFS with other algorithms to improve classification accuracy. CONCLUSIONS/SIGNIFICANCE: From the experiments, we confirm that CBFS outperforms up-to-date feature selection algorithms, including FeaLect. CBFS can be applied to microarray gene selection, text categorization, and image classification.
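
    A small sketch of a per-feature class-separability score follows as an illustrative analogue of feature "clearness"; this is not the paper's CScore definition, and the score and dataset used here are assumptions for demonstration.

    ```python
    # Sketch: a simple per-feature class-separability score (spread of class
    # centroids divided by average within-class spread) and top-k selection.
    import numpy as np
    from sklearn.datasets import load_iris

    def separability_scores(X, y):
        classes = np.unique(y)
        centroids = np.array([X[y == c].mean(axis=0) for c in classes])
        within = np.array([X[y == c].std(axis=0) for c in classes]).mean(axis=0)
        between = centroids.std(axis=0)
        return between / (within + 1e-12)           # higher = clearer feature

    X, y = load_iris(return_X_y=True)
    scores = separability_scores(X, y)
    top_k = np.argsort(scores)[::-1][:2]             # indices of the two clearest features
    print(scores, top_k)
    ```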

  14. Prediction of Facial Profile Based on Morphometric Measurements and Profile Characteristics of Permanent Maxillary Central Incisor Teeth

    Directory of Open Access Journals (Sweden)

    N Raghavendra

    2015-01-01

    Full Text Available The computation of facial profile from dental morphometrics has been a subject of great interest in forensic odontology. The use of teeth to draw a profile and facial features is valuable in times of mass disasters when body remains are unavailable due to extreme destruction. This study aims to identify and evaluate applicable parameters in the permanent maxillary central incisors and the face of an individual. A correlation of these parameters establishes a mathematical equation that further charts a tooth-facial profile table. Thirty soft and hard tissue landmarks on the face in the frontal and the lateral profiles (using standardized photographs) and seven landmarks on the facial/labial surface of the clinical crown of the permanent maxillary central incisor (using casts of the maxilla) were identified for the study. Based on these, a set of eight horizontal and seven vertical parameters on the face and four parameters on the tooth were created for the assessment. Internal and external correlations between the two were carried out and statistically analyzed. A logistic regression was made to predict the probability of the parameters most likely to be reproduced in the creation of the facial profile, based on tooth morphometrics. The results indicated a definite correlation between the facial and the tooth parameters. Among the multiple parameters, a definite correlation in the horizontal dimension could be established between the mouth width and the mesiodistal width (MDW) of the tooth. In the vertical dimension, a definite relationship existed between the crown height of the tooth and the width of the midface (zygoma-mandible). There exist divergences in the correlation of tooth and facial parameters.

  15. The Neural Dynamics of Facial Identity Processing: Insights from EEG-Based Pattern Analysis and Image Reconstruction.

    Science.gov (United States)

    Nemrodov, Dan; Niemeier, Matthias; Patel, Ashutosh; Nestor, Adrian

    2018-01-01

    Uncovering the neural dynamics of facial identity processing along with its representational basis outlines a major endeavor in the study of visual processing. To this end, here, we record human electroencephalography (EEG) data associated with viewing face stimuli; then, we exploit spatiotemporal EEG information to determine the neural correlates of facial identity representations and to reconstruct the appearance of the corresponding stimuli. Our findings indicate that multiple temporal intervals support: facial identity classification, face space estimation, visual feature extraction and image reconstruction. In particular, we note that both classification and reconstruction accuracy peak in the proximity of the N170 component. Further, aggregate data from a larger interval (50-650 ms after stimulus onset) support robust reconstruction results, consistent with the availability of distinct visual information over time. Thus, theoretically, our findings shed light on the time course of face processing while, methodologically, they demonstrate the feasibility of EEG-based image reconstruction.

  16. Chromosome 22q11.2 Deletion Syndrome Presenting as Adult Onset Hypoparathyroidism: Clues to Diagnosis from Dysmorphic Facial Features

    Directory of Open Access Journals (Sweden)

    Sira Korpaisarn

    2013-01-01

    Full Text Available We report a 26-year-old Thai man who presented with hypoparathyroidism in adulthood. He had no history of cardiac disease or recurrent infection. His subtle dysmorphic facial features and mild intellectual impairment raised suspicion for chromosome 22q11.2 deletion syndrome. The diagnosis was confirmed by fluorescence in situ hybridization, which found a microdeletion in the 22q11.2 region. The characteristic facial appearance can lead to clinical suspicion of this syndrome. The case report emphasizes that this syndrome is not uncommon and presents with remarkable variability in the severity and extent of expression. Accurate diagnosis is important for genetic counseling and long-term health supervision by a multidisciplinary team.

  17. Recognition of 3D facial expression dynamics

    NARCIS (Netherlands)

    Sandbach, G.; Zafeiriou, S.; Pantic, Maja; Rueckert, D.

    2012-01-01

    In this paper we propose a method that exploits 3D motion-based features between frames of 3D facial geometry sequences for dynamic facial expression recognition. An expressive sequence is modelled to contain an onset followed by an apex and an offset. Feature selection methods are applied in order

  18. Utility of optical facial feature and arm movement tracking systems to enable text communication in critically ill patients who cannot otherwise communicate.

    Science.gov (United States)

    Muthuswamy, M B; Thomas, B N; Williams, D; Dingley, J

    2014-09-01

    Patients recovering from critical illness, especially those with critical illness related neuropathy, myopathy, or burns to the face, arms and hands, are often unable to communicate by writing, speech (due to tracheostomy) or lip reading. This may frustrate both patient and staff. Two low cost movement tracking systems, based around a laptop webcam and a laser/optical gaming system sensor, were utilised as control inputs for on-screen text creation software, and both were evaluated as communication tools in volunteers. Two methods were used to control an on-screen cursor to create short sentences via an on-screen keyboard: (i) webcam-based facial feature tracking, (ii) arm movement tracking by a laser/camera gaming sensor and modified software. 16 volunteers with simulated tracheostomy and bandaged arms to simulate communication via gross movements of a burned limb communicated 3 standard messages using each system (total 48 per system) in random sequence. Ten and 13 minor typographical errors occurred with each system respectively; however, all messages were comprehensible. Speed of sentence formation ranged from 58 to 120 s with the facial feature tracking system, and 60-160 s with the arm movement tracking system. The average speed of sentence formation was 81 s (range 58-120) and 104 s (range 60-160) for the facial feature and arm tracking systems, respectively. Both approaches are potentially useful communication aids for patients in general and burns critical care units who cannot communicate by conventional means, due to the nature of their injuries. Copyright © 2014 Elsevier Ltd and ISBI. All rights reserved.

  19. Hockey-related facial injuries: a population-based analysis.

    Science.gov (United States)

    Lawrence, Lauren A; Svider, Peter F; Raza, Syed N; Zuliani, Giancarlo; Carron, Michael A; Folbe, Adam J

    2015-03-01

    Recognition of the potentially severe sequelae arising from inadequate facial protection has facilitated sustained efforts to increase the use of protective visors in recent decades. Our objective was to characterize nationwide trends among patients presenting to emergency departments (ED) for facial injuries sustained while playing ice hockey. The National Electronic Injury Surveillance System was searched for hockey-related facial injuries, with analysis for incidence; age and gender; and specific injury diagnoses, mechanisms, and facial locations. There were an estimated 93,444 ED visits for hockey-related facial injuries from 2003 to 2012. The number of annual ED visits declined by 43.8% from 2003 to 2012. A total of 90.6% of patients were male; and the peak age of injury was 17 years. Lacerations were the most common form of facial injury (81.5% of patients) across all age groups. Contusions/abrasions and fractures followed in frequency, with fractures increasing with advancing age. The overall incidence of ED visits due to facial injuries from ice hockey has significantly decreased over the last decade, concurrent with increased societal use of facial protective equipment. Nonetheless, facial hockey injuries facilitate a significant number of ED visits among both adults and children; thus, the knowledge of demographic-specific trends described in this analysis is relevant for physicians involved in the management of facial trauma. These findings reinforce the need to educate individuals who play hockey about the importance of appropriate facial protection. © 2014 The American Laryngological, Rhinological and Otological Society, Inc.

  20. Cosmetics as a feature of the extended human phenotype: modulation of the perception of biologically important facial signals.

    Science.gov (United States)

    Etcoff, Nancy L; Stock, Shannon; Haley, Lauren E; Vickery, Sarah A; House, David M

    2011-01-01

    Research on the perception of faces has focused on the size, shape, and configuration of inherited features or the biological phenotype, and largely ignored the effects of adornment, or the extended phenotype. Research on the evolution of signaling has shown that animals frequently alter visual features, including color cues, to attract, intimidate or protect themselves from conspecifics. Humans engage in conscious manipulation of visual signals using cultural tools in real time rather than genetic changes over evolutionary time. Here, we investigate one tool, the use of color cosmetics. In two studies, we asked viewers to rate the same female faces with or without color cosmetics, and we varied the style of makeup from minimal (natural), to moderate (professional), to dramatic (glamorous). Each look provided increasing luminance contrast between the facial features and surrounding skin. Faces were shown for 250 ms or for unlimited inspection time, and subjects rated them for attractiveness, competence, likeability and trustworthiness. At 250 ms, cosmetics had significant positive effects on all outcomes. Length of inspection time did not change the effect for competence or attractiveness. However, with longer inspection time, the effect of cosmetics on likability and trust varied by specific makeup looks, indicating that cosmetics could impact automatic and deliberative judgments differently. The results suggest that cosmetics can create supernormal facial stimuli, and that one way they may do so is by exaggerating cues to sexual dimorphism. Our results provide evidence that judgments of facial trustworthiness and attractiveness are at least partially separable, that beauty has a significant positive effect on judgment of competence, a universal dimension of social cognition, but has a more nuanced effect on the other universal dimension of social warmth, and that the extended phenotype significantly influences perception of biologically important signals at first

  1. Cosmetics as a feature of the extended human phenotype: modulation of the perception of biologically important facial signals.

    Directory of Open Access Journals (Sweden)

    Nancy L Etcoff

    Full Text Available Research on the perception of faces has focused on the size, shape, and configuration of inherited features or the biological phenotype, and largely ignored the effects of adornment, or the extended phenotype. Research on the evolution of signaling has shown that animals frequently alter visual features, including color cues, to attract, intimidate or protect themselves from conspecifics. Humans engage in conscious manipulation of visual signals using cultural tools in real time rather than genetic changes over evolutionary time. Here, we investigate one tool, the use of color cosmetics. In two studies, we asked viewers to rate the same female faces with or without color cosmetics, and we varied the style of makeup from minimal (natural), to moderate (professional), to dramatic (glamorous). Each look provided increasing luminance contrast between the facial features and surrounding skin. Faces were shown for 250 ms or for unlimited inspection time, and subjects rated them for attractiveness, competence, likeability and trustworthiness. At 250 ms, cosmetics had significant positive effects on all outcomes. Length of inspection time did not change the effect for competence or attractiveness. However, with longer inspection time, the effect of cosmetics on likability and trust varied by specific makeup looks, indicating that cosmetics could impact automatic and deliberative judgments differently. The results suggest that cosmetics can create supernormal facial stimuli, and that one way they may do so is by exaggerating cues to sexual dimorphism. Our results provide evidence that judgments of facial trustworthiness and attractiveness are at least partially separable, that beauty has a significant positive effect on judgment of competence, a universal dimension of social cognition, but has a more nuanced effect on the other universal dimension of social warmth, and that the extended phenotype significantly influences perception of biologically important

  2. Cosmetics as a Feature of the Extended Human Phenotype: Modulation of the Perception of Biologically Important Facial Signals

    Science.gov (United States)

    Etcoff, Nancy L.; Stock, Shannon; Haley, Lauren E.; Vickery, Sarah A.; House, David M.

    2011-01-01

    Research on the perception of faces has focused on the size, shape, and configuration of inherited features or the biological phenotype, and largely ignored the effects of adornment, or the extended phenotype. Research on the evolution of signaling has shown that animals frequently alter visual features, including color cues, to attract, intimidate or protect themselves from conspecifics. Humans engage in conscious manipulation of visual signals using cultural tools in real time rather than genetic changes over evolutionary time. Here, we investigate one tool, the use of color cosmetics. In two studies, we asked viewers to rate the same female faces with or without color cosmetics, and we varied the style of makeup from minimal (natural), to moderate (professional), to dramatic (glamorous). Each look provided increasing luminance contrast between the facial features and surrounding skin. Faces were shown for 250 ms or for unlimited inspection time, and subjects rated them for attractiveness, competence, likeability and trustworthiness. At 250 ms, cosmetics had significant positive effects on all outcomes. Length of inspection time did not change the effect for competence or attractiveness. However, with longer inspection time, the effect of cosmetics on likability and trust varied by specific makeup looks, indicating that cosmetics could impact automatic and deliberative judgments differently. The results suggest that cosmetics can create supernormal facial stimuli, and that one way they may do so is by exaggerating cues to sexual dimorphism. Our results provide evidence that judgments of facial trustworthiness and attractiveness are at least partially separable, that beauty has a significant positive effect on judgment of competence, a universal dimension of social cognition, but has a more nuanced effect on the other universal dimension of social warmth, and that the extended phenotype significantly influences perception of biologically important signals at first

  3. A Brief Review of Facial Emotion Recognition Based on Visual Information

    Directory of Open Access Journals (Sweden)

    Byoung Chul Ko

    2018-01-01

    Full Text Available Facial emotion recognition (FER) is an important topic in the fields of computer vision and artificial intelligence owing to its significant academic and commercial potential. Although FER can be conducted using multiple sensors, this review focuses on studies that exclusively use facial images, because visual expressions are one of the main information channels in interpersonal communication. This paper provides a brief review of researches in the field of FER conducted over the past decades. First, conventional FER approaches are described along with a summary of the representative categories of FER systems and their main algorithms. Deep-learning-based FER approaches using deep networks enabling “end-to-end” learning are then presented. This review also focuses on an up-to-date hybrid deep-learning approach combining a convolutional neural network (CNN) for the spatial features of an individual frame and long short-term memory (LSTM) for temporal features of consecutive frames. In the later part of this paper, a brief review of publicly available evaluation metrics is given, and a comparison with benchmark results, which are a standard for a quantitative comparison of FER researches, is described. This review can serve as a brief guidebook to newcomers in the field of FER, providing basic knowledge and a general understanding of the latest state-of-the-art studies, as well as to experienced researchers looking for productive directions for future work.

  4. A Brief Review of Facial Emotion Recognition Based on Visual Information.

    Science.gov (United States)

    Ko, Byoung Chul

    2018-01-30

    Facial emotion recognition (FER) is an important topic in the fields of computer vision and artificial intelligence owing to its significant academic and commercial potential. Although FER can be conducted using multiple sensors, this review focuses on studies that exclusively use facial images, because visual expressions are one of the main information channels in interpersonal communication. This paper provides a brief review of researches in the field of FER conducted over the past decades. First, conventional FER approaches are described along with a summary of the representative categories of FER systems and their main algorithms. Deep-learning-based FER approaches using deep networks enabling "end-to-end" learning are then presented. This review also focuses on an up-to-date hybrid deep-learning approach combining a convolutional neural network (CNN) for the spatial features of an individual frame and long short-term memory (LSTM) for temporal features of consecutive frames. In the later part of this paper, a brief review of publicly available evaluation metrics is given, and a comparison with benchmark results, which are a standard for a quantitative comparison of FER researches, is described. This review can serve as a brief guidebook to newcomers in the field of FER, providing basic knowledge and a general understanding of the latest state-of-the-art studies, as well as to experienced researchers looking for productive directions for future work.
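
    A minimal sketch of the CNN-plus-LSTM hybrid summarized above, assuming frame sequences are already cropped to fixed-size face images; the layer sizes, sequence length, and seven-class output below are illustrative choices rather than the architecture of any particular paper.

        # Minimal CNN + LSTM sketch for sequence-based FER (illustrative sizes only).
        import torch
        import torch.nn as nn

        class CnnLstmFER(nn.Module):
            def __init__(self, num_classes=7, feat_dim=64, hidden=128):
                super().__init__()
                # Small per-frame CNN: spatial features of each face image.
                self.cnn = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim),
                )
                # LSTM: temporal dynamics over the features of consecutive frames.
                self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
                self.head = nn.Linear(hidden, num_classes)

            def forward(self, x):                    # x: (batch, time, 1, H, W)
                b, t = x.shape[:2]
                f = self.cnn(x.flatten(0, 1))        # (batch*time, feat_dim)
                out, _ = self.lstm(f.view(b, t, -1))
                return self.head(out[:, -1])         # classify from the last time step

        logits = CnnLstmFER()(torch.randn(2, 10, 1, 48, 48))   # -> shape (2, 7)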

  5. Markerless 3D facial motion capture system

    Science.gov (United States)

    Hwang, Youngkyoo; Kim, Jung-Bae; Feng, Xuetao; Bang, Won-Chul; Rhee, Taehyun; Kim, James D. K.; Kim, ChangYeong

    2012-03-01

    We propose a novel markerless 3D facial motion capture system using only one common camera. The system is simple and makes it easy to transfer a user's facial expressions into a virtual world. It robustly tracks facial feature points in the presence of head movements and estimates highly accurate 3D point locations. We designed novel approaches to the following: first, for precise 3D head motion tracking, we applied 3D constraints from a 3D face model to a conventional 2D feature point tracking approach, the Active Appearance Model (AAM). Second, to handle a user's varied expressions, we built generic 2D face models from around 5000 images and from 3D shape data covering symmetric and asymmetric facial expressions. Last, for accurate facial expression cloning, we introduced a manifold space that transfers the low-dimensional 2D feature points to high-dimensional 3D points. The manifold space is defined by eleven facial expression bases.

  6. Facial Nerve Palsy: An Unusual Presenting Feature of Small Cell Lung Cancer

    Directory of Open Access Journals (Sweden)

    Ozcan Yildiz

    2011-01-01

    Full Text Available Lung cancer is the second most common type of cancer in the world and is the most common cause of cancer-related death in men and women; it is responsible for 1.3 million deaths annually worldwide. It can metastasize to any organ. The most common site of metastasis in the head and neck region is the brain; however, it can also metastasize to the oral cavity, gingiva, tongue, parotid gland and lymph nodes. This article reports a case of small cell lung cancer presenting with metastasis to the facial nerve.

  7. GENDER RECOGNITION BASED ON SIFT FEATURES

    OpenAIRE

    Sahar Yousefi; Morteza Zahedi

    2011-01-01

    This paper proposes a robust approach for face detection and gender classification in color images. Previous research on gender recognition assumes a computationally expensive and time-consuming alignment pre-processing step, in which face images are aligned so that facial landmarks such as the eyes, nose, lips, and chin are placed in uniform locations in the image. In this paper, a novel technique based on mathematical analysis is presented in three stages that eliminates align...

  8. Facial Emotion Recognition Using Context Based Multimodal Approach

    Directory of Open Access Journals (Sweden)

    Priya Metri

    2011-12-01

    Full Text Available Emotions play a crucial role in person-to-person interaction. In recent years, there has been growing interest in improving all aspects of interaction between humans and computers. The ability to understand human emotions, especially by observing facial expressions, is desirable for the computer in several applications. This paper explores ways of human-computer interaction that enable the computer to be more aware of the user's emotional expressions; we present an approach for emotion recognition from facial expression, hand posture, and body posture. Our model uses a multimodal emotion recognition system with two different models, one for facial expression recognition and one for hand and body posture recognition, and then combines the results of both classifiers using a third classifier that gives the resulting emotion. The multimodal system gives more accurate results than a single-modality or bimodal system.
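
    A minimal decision-level fusion sketch in the spirit of the approach above: two base classifiers (standing in for the facial-expression model and the hand/body-posture model) pass their class probabilities to a third, combining classifier. The toy features, classifier choices, and train/test split are placeholders, not the authors' models.

        # Decision-level fusion sketch: two per-modality classifiers plus a combiner.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X_face = rng.normal(size=(200, 10))          # toy facial-expression features
        X_body = rng.normal(size=(200, 6))           # toy hand/body-posture features
        y = rng.integers(0, 3, size=200)             # three toy emotion classes

        face_clf = SVC(probability=True).fit(X_face[:150], y[:150])
        body_clf = SVC(probability=True).fit(X_body[:150], y[:150])

        def fused(Xf, Xb):
            # Stack the per-modality class probabilities as input to the combiner.
            return np.hstack([face_clf.predict_proba(Xf), body_clf.predict_proba(Xb)])

        combiner = LogisticRegression(max_iter=1000).fit(fused(X_face[:150], X_body[:150]), y[:150])
        predictions = combiner.predict(fused(X_face[150:], X_body[150:]))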

  9. 3D facial expression recognition based on histograms of surface differential quantities

    KAUST Repository

    Li, Huibin

    2011-01-01

    3D face models accurately capture facial surfaces, making it possible for precise description of facial activities. In this paper, we present a novel mesh-based method for 3D facial expression recognition using two local shape descriptors. To characterize shape information of the local neighborhood of facial landmarks, we calculate the weighted statistical distributions of surface differential quantities, including histogram of mesh gradient (HoG) and histogram of shape index (HoS). Normal cycle theory based curvature estimation method is employed on 3D face models along with the common cubic fitting curvature estimation method for the purpose of comparison. Based on the basic fact that different expressions involve different local shape deformations, the SVM classifier with both linear and RBF kernels outperforms the state of the art results on the subset of the BU-3DFE database with the same experimental setting. © 2011 Springer-Verlag.
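
    A small sketch of one of the two descriptors named above, the histogram of shape index (HoS): shape index values are computed from principal curvatures around a landmark and binned into a normalized histogram that feeds an SVM. The per-vertex curvatures are assumed to come from an upstream mesh-processing step, and the bin count and toy data are illustrative.

        # Histogram-of-shape-index (HoS) sketch for a landmark neighborhood, plus an SVM.
        import numpy as np
        from sklearn.svm import SVC

        def shape_index(k1, k2):
            # Shape index in [0, 1] from principal curvatures, assuming k1 >= k2 per vertex.
            return 0.5 - np.arctan2(k1 + k2, k1 - k2) / np.pi

        def hos_descriptor(k1, k2, bins=8, weights=None):
            # Weighted, normalized histogram of shape index over a landmark's neighborhood.
            h, _ = np.histogram(shape_index(k1, k2), bins=bins, range=(0.0, 1.0), weights=weights)
            return h / max(h.sum(), 1e-12)

        # Toy usage: random curvature pairs stand in for per-vertex mesh estimates.
        rng = np.random.default_rng(1)
        X = np.array([hos_descriptor(rng.normal(size=200) + 1.0, rng.normal(size=200) - 1.0)
                      for _ in range(60)])
        y = rng.integers(0, 6, size=60)              # six prototypical expressions
        clf = SVC(kernel="rbf").fit(X, y)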

  10. A model based method for automatic facial expression recognition

    NARCIS (Netherlands)

    Kuilenburg, H. van; Wiering, M.A.; Uyl, M. den

    2006-01-01

    Automatic facial expression recognition is a research topic with interesting applications in the field of human-computer interaction, psychology and product marketing. The classification accuracy for an automatic system which uses static images as input is however largely limited by the image

  11. Review of research in feature based design

    NARCIS (Netherlands)

    Salomons, O.W.; van Houten, Frederikus J.A.M.; Kals, H.J.J.

    1993-01-01

    Research in feature-based design is reviewed. Feature-based design is regarded as a key factor towards CAD/CAPP integration from a process planning point of view. From a design point of view, feature-based design offers possibilities for supporting the design process better than current CAD systems

  12. Multi-Layer Sparse Representation for Weighted LBP-Patches Based Facial Expression Recognition

    Directory of Open Access Journals (Sweden)

    Qi Jia

    2015-03-01

    Full Text Available In this paper, a novel facial expression recognition method based on sparse representation is proposed. Most contemporary facial expression recognition systems suffer from limited ability to handle image nuisances such as low resolution and noise. Especially for low-intensity expressions, most of the existing training methods have quite low recognition rates. Motivated by sparse representation, the problem can be solved by finding the sparse coefficients of the test image over the whole training set. Deriving an effective facial representation from original face images is a vital step for successful facial expression recognition. We evaluate a facial representation based on weighted local binary patterns, and the Fisher separation criterion is used to calculate the weights of the patches. A multi-layer sparse representation framework is proposed for multi-intensity facial expression recognition, especially for low-intensity expressions and noisy expressions in reality, which is a critical problem but seldom addressed in the existing works. To this end, several experiments based on low-resolution and multi-intensity expressions are carried out. Promising results on publicly available databases demonstrate the potential of the proposed approach.
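
    A compact sparse-representation-classification sketch in the spirit of the framework above: a test descriptor is coded as a sparse combination of the training descriptors (Lasso is used here as a stand-in solver for the L1 problem), and the class with the smallest reconstruction residual wins. The LBP extraction and Fisher-criterion patch weighting are assumed to have happened upstream.

        # Sparse representation classification (SRC) sketch over precomputed descriptors.
        import numpy as np
        from sklearn.linear_model import Lasso

        def src_predict(train_X, train_y, test_x, alpha=0.01):
            # Code test_x over the dictionary of training samples, then compare class residuals.
            A = train_X.T                                       # columns = training samples
            coef = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000).fit(A, test_x).coef_
            residuals = {}
            for c in np.unique(train_y):
                coef_c = np.where(train_y == c, coef, 0.0)      # keep only class-c coefficients
                residuals[c] = np.linalg.norm(test_x - A @ coef_c)
            return min(residuals, key=residuals.get)            # smallest residual wins

        rng = np.random.default_rng(2)
        train_X = rng.normal(size=(90, 59))                     # e.g. 59-bin uniform-LBP descriptors
        train_y = np.repeat(np.arange(6), 15)                   # six expression classes
        print(src_predict(train_X, train_y, train_X[3] + 0.05 * rng.normal(size=59)))   # -> class 0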

  13. Emotion Index of Cover Song Music Video Clips based on Facial Expression Recognition

    DEFF Research Database (Denmark)

    Kavallakis, George; Vidakis, Nikolaos; Triantafyllidis, Georgios

    2017-01-01

    This paper presents a scheme of creating an emotion index of cover song music video clips by recognizing and classifying facial expressions of the artist in the video. More specifically, it fuses effective and robust algorithms which are employed for expression recognition, along with the use...... of a neural network system using the features extracted by the SIFT algorithm. Also we support the need of this fusion of different expression recognition algorithms, because of the way that emotions are linked to facial expressions in music video clips....

  14. Feature Selection Based on Confidence Machine

    OpenAIRE

    Liu, Chang; Xu, Yi

    2014-01-01

    In machine learning and pattern recognition, feature selection has been a hot topic in the literature. Unsupervised feature selection is challenging due to the absence of labels that would otherwise supply the relevant information. How to define an appropriate metric is the key question for feature selection. We propose a filter method for unsupervised feature selection which is based on the Confidence Machine. The Confidence Machine offers an estimate of confidence in a feature's reliability. In this paper, we provide...

  15. Two patients with intellectual disability, overlapping facial features, and overlapping deletions in 6p25.1p24.3

    NARCIS (Netherlands)

    Kuipers, B.C.; Vulto-van Silfhout, A.T.; Marcelis, C.L.M.; Pfundt, R.P.; Leeuw, N. de; Vries, B. de

    2013-01-01

    The clinical and molecular characterizations of two patients with a 1.4 Mb overlapping deletion in the 6p25.1p24.3 region are reported. In addition to the mild intellectual disability, they shared feeding problems in infancy and several dysmorphic facial features including a prominent forehead,

  16. Facial Expression Recognition

    NARCIS (Netherlands)

    Pantic, Maja; Li, S.; Jain, A.

    2009-01-01

    Facial expression recognition is a process performed by humans or computers, which consists of: 1. Locating faces in the scene (e.g., in an image; this step is also referred to as face detection), 2. Extracting facial features from the detected face region (e.g., detecting the shape of facial

  17. Ellis–van Creveld syndrome with facial dysmorphic features in an ...

    African Journals Online (AJOL)

    Ellis–van Creveld syndrome (EVC) is a chondroectodermal dysplasia. The tetrad of cardinal features includes disproportionate dwarfism, bilateral postaxial polydactyl of hands, hidrotic ectodermal dysplasia, and congenital cardiac malformations. This rare condition is inherited as an autosomal recessive trait with variable ...

  18. A novel three-dimensional smile analysis based on dynamic evaluation of facial curve contour

    Science.gov (United States)

    Lin, Yi; Lin, Han; Lin, Qiuping; Zhang, Jinxin; Zhu, Ping; Lu, Yao; Zhao, Zhi; Lv, Jiahong; Lee, Mln Kyeong; Xu, Yue

    2016-02-01

    Decoding how the three-dimensional facial contour and its dynamic behavior influence smile esthetics is essential for facial beauty improvement. However, the kinematic features of the facial smile contour and the relative contributions of the soft tissue and underlying skeleton are uncharted. Here, the cheekbone-maxilla contour and nasolabial fold were combined into a “smile contour” delineating the overall facial topography that emerges prominently in smiling. We screened out the stable and unstable points on the smile contour using facial motion capture and curve fitting, before analyzing the correlation between the soft tissue coordinates of the screened points and their hard tissue counterparts. Our findings suggest that the mouth corner region was the most mobile area characterizing the smile expression, while the other areas remained relatively stable. Therefore, the perioral area should be evaluated dynamically, while the static assessment of the other parts of the smile contour contributes only partially to their dynamic esthetics. Moreover, unlike the end piece, the morphologies of the zygomatic area and the superior part of the nasolabial crease were determined largely by the skeleton at rest, implying that the latter can be altered by orthopedic or orthodontic correction while the former is better improved by cosmetic procedures to enhance the beauty of the smile.

  19. An optimized ERP brain-computer interface based on facial expression changes

    Science.gov (United States)

    Jin, Jing; Daly, Ian; Zhang, Yu; Wang, Xingyu; Cichocki, Andrzej

    2014-06-01

    Objective. Interferences from spatially adjacent non-target stimuli are known to evoke event-related potentials (ERPs) during non-target flashes and, therefore, lead to false positives. This phenomenon was commonly seen in visual attention-based brain-computer interfaces (BCIs) using conspicuous stimuli and is known to adversely affect the performance of BCI systems. Although users try to focus on the target stimulus, they cannot help but be affected by conspicuous changes of the stimuli (such as flashes or presenting images) which were adjacent to the target stimulus. Furthermore, subjects have reported that conspicuous stimuli made them tired and annoyed. In view of this, the aim of this study was to reduce adjacent interference, annoyance and fatigue using a new stimulus presentation pattern based upon facial expression changes. Our goal was not to design a new pattern which could evoke larger ERPs than the face pattern, but to design a new pattern which could reduce adjacent interference, annoyance and fatigue, and evoke ERPs as good as those observed during the face pattern. Approach. Positive facial expressions could be changed to negative facial expressions by minor changes to the original facial image. Although the changes are minor, the contrast is big enough to evoke strong ERPs. In this paper, a facial expression change pattern between positive and negative facial expressions was used to attempt to minimize interference effects. This was compared against two different conditions, a shuffled pattern containing the same shapes and colours as the facial expression change pattern, but without the semantic content associated with a change in expression, and a face versus no face pattern. Comparisons were made in terms of classification accuracy and information transfer rate as well as user supplied subjective measures. Main results. The results showed that interferences from adjacent stimuli, annoyance and the fatigue experienced by the subjects could be

  20. HUMAN IDENTIFICATION BASED ON EXTRACTED GAIT FEATURES

    OpenAIRE

    Hu Ng; Hau-Lee Ton; Wooi-Haw Tan; Timothy Tzen-Vun Yap; Pei-Fen Chong; Junaidi Abdullah

    2011-01-01

    This paper presents a human identification system based on automatically extracted gait features. The proposed approach consists of three parts: extraction of human gait features from an enhanced human silhouette, a smoothing process on the extracted gait features, and classification by three techniques: fuzzy k-nearest neighbour, linear discriminant analysis, and linear support vector machine. The gait features extracted are height, width, crotch height, step-size of the human silhouett...

  1. Feature Extraction Based on Decision Boundaries

    Science.gov (United States)

    Lee, Chulhee; Landgrebe, David A.

    1993-01-01

    In this paper, a novel approach to feature extraction for classification is proposed based directly on the decision boundaries. We note that feature extraction is equivalent to retaining informative features or eliminating redundant features; thus, the terms 'discriminantly informative feature' and 'discriminantly redundant feature' are first defined relative to feature extraction for classification. Next, it is shown how discriminantly redundant features and discriminantly informative features are related to decision boundaries. A novel characteristic of the proposed method arises by noting that usually only a portion of the decision boundary is effective in discriminating between classes, and the concept of the effective decision boundary is therefore introduced. Next, a procedure to extract discriminantly informative features based on a decision boundary is proposed. The proposed feature extraction algorithm has several desirable properties: (1) It predicts the minimum number of features necessary to achieve the same classification accuracy as in the original space for a given pattern recognition problem; and (2) it finds the necessary feature vectors. The proposed algorithm does not deteriorate under the circumstances of equal class means or equal class covariances as some previous algorithms do. Experiments show that the performance of the proposed algorithm compares favorably with those of previous algorithms.
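
    A rough numerical sketch of the decision-boundary idea above, under simplifying assumptions: boundary points are located by bisection between opposite-class samples, boundary normals are estimated by finite differences of a trained classifier's decision function, and the eigenvectors of the averaged outer products of those normals (an effective decision boundary feature matrix) give the extracted feature directions. The two-Gaussian data and the QDA classifier are placeholders, not the paper's experimental setup.

        # Decision-boundary feature extraction sketch (numerical, simplified).
        import numpy as np
        from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

        rng = np.random.default_rng(3)
        X = np.vstack([rng.normal(0.0, 1.0, (200, 4)), rng.normal([2, 0, 0, 0], 1.0, (200, 4))])
        y = np.repeat([0, 1], 200)
        clf = QuadraticDiscriminantAnalysis().fit(X, y)
        g = lambda p: clf.decision_function(p.reshape(1, -1))[0]     # signed discriminant value

        def boundary_point(a, b, iters=30):
            # Bisection along the segment a-b for a point where the discriminant is ~0.
            lo, hi = 0.0, 1.0
            for _ in range(iters):
                mid = 0.5 * (lo + hi)
                same_side = np.sign(g(a + mid * (b - a))) == np.sign(g(a))
                lo, hi = (mid, hi) if same_side else (lo, mid)
            return a + 0.5 * (lo + hi) * (b - a)

        def unit_normal(p, eps=1e-4):
            grad = np.array([(g(p + eps * e) - g(p - eps * e)) / (2 * eps) for e in np.eye(p.size)])
            return grad / np.linalg.norm(grad)

        neg = X[:200][clf.decision_function(X[:200]) < 0]            # correctly classified class 0
        pos = X[200:][clf.decision_function(X[200:]) > 0]            # correctly classified class 1
        pairs = rng.integers(0, min(len(neg), len(pos)), size=(50, 2))
        normals = np.array([unit_normal(boundary_point(neg[i], pos[j])) for i, j in pairs])
        edbfm = normals.T @ normals / len(normals)                   # averaged outer products
        eigval, eigvec = np.linalg.eigh(edbfm)
        print(np.round(eigval[::-1], 3))             # large eigenvalues mark informative directions
        features = eigvec[:, ::-1]                   # columns ordered by decreasing eigenvalue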

  2. Relationship of maxilla to cranial base in different facial types–a cephalometric evaluation

    Science.gov (United States)

    Rana, Tarun; Khanna, Rohit; Tikku, Tripti; Sachan, Kiran

    2012-01-01

    Background Many conflicting opinions have been put forth in the dental literature concerning the maxilla and its relationship to the craniofacial complex. In view of this fact, this cephalometric study was conducted to determine the relationship of the maxilla to the cranial base in different facial types. Materials and Methods The sample consists of 120 pretreatment lateral cephalograms, which were categorized into three groups: normodivergent, hypodivergent, and hyperdivergent. Each group consists of 20 males and 20 females. Descriptive statistics for 11 variables were calculated. Results and Conclusion The results of this study imply that in hyperdivergent subjects the sagittal maxillary base size was smaller and the upper posterior facial height (UPFH) was increased in comparison to hypodivergent and normodivergent subjects. Upper posterior facial height has a positive correlation with anterior facial height. The posterior maxillary position in relation to the cranial base increases with an increase in the cranial flexural angle in hypodivergent subjects and vice versa in hyperdivergent subjects. Upper posterior facial height decreases with an increase in the cranial flexural angle in hypodivergent subjects and vice versa in hyperdivergent subjects. PMID:25756029

  3. Heartbeat Rate Measurement from Facial Video

    DEFF Research Database (Denmark)

    Haque, Mohammad Ahsanul; Irani, Ramin; Nasrollahi, Kamal

    2016-01-01

    Heartbeat Rate (HR) reveals a person’s health condition. This paper presents an effective system for measuring HR from facial videos acquired in a more realistic environment than the testing environment of current systems. The proposed method utilizes a facial feature point tracking method...... by combining a ‘Good feature to track’ and a ‘Supervised descent method’ in order to overcome the limitations of currently available facial video based HR measuring systems. Such limitations include, e.g., unrealistic restriction of the subject’s movement and artificial lighting during data capture. A face...
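
    A minimal sketch of the signal-processing step implied above: given the vertical trajectory of one tracked facial point (or any pulse-related 1-D signal), the heart rate is taken as the dominant frequency within a plausible human HR band. The feature-point tracking itself ('good features to track' plus the supervised descent method) is assumed to happen elsewhere, and the band limits are illustrative.

        # Heart-rate estimation sketch from a 1-D facial-motion signal via FFT peak picking.
        import numpy as np

        def estimate_hr_bpm(signal, fps, lo_hz=0.7, hi_hz=3.0):
            # Return the dominant frequency (in beats per minute) inside the HR band.
            x = np.asarray(signal, dtype=float)
            x = (x - x.mean()) * np.hanning(len(x))      # remove offset, taper the ends
            freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
            spectrum = np.abs(np.fft.rfft(x))
            band = (freqs >= lo_hz) & (freqs <= hi_hz)   # roughly 42-180 bpm
            return 60.0 * freqs[band][np.argmax(spectrum[band])]

        # Toy check: a 1.2 Hz oscillation (72 bpm) plus noise, sampled at 30 fps for 20 s.
        t = np.arange(0, 20, 1.0 / 30.0)
        trajectory = 0.2 * np.sin(2 * np.pi * 1.2 * t) + 0.05 * np.random.randn(len(t))
        print(round(estimate_hr_bpm(trajectory, fps=30)))            # ~72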

  4. Severe growth deficiency, microcephaly, intellectual disability, and characteristic facial features are due to a homozygous QARS mutation.

    Science.gov (United States)

    Leshinsky-Silver, Esther; Ling, Jiqiang; Wu, Jiang; Vinkler, Chana; Yosovich, Keren; Bahar, Sarit; Yanoov-Sharav, Miri; Lerman-Sagie, Tally; Lev, Dorit

    2017-07-01

    Glutaminyl tRNA synthase is highly expressed in the developing fetal human brain. Mutations in the glutaminyl-tRNA synthetase (QARS) gene have been reported in patients with progressive microcephaly, cerebral-cerebellar atrophy, and intractable seizures. We have previously reported a new recessive syndrome of severe linear growth retardation, poor weight gain, microcephaly, characteristic facial features, cutaneous syndactyly of the toes, high myopia, and intellectual disability in two sisters of Ashkenazi-Jewish origin (Eur J Med Genet 2014;57(6):288-92). Homozygosity mapping and whole exome sequencing revealed a homozygous missense (V476I) mutation in the QARS gene, located in the catalytic domain. The patient's fibroblasts demonstrated markedly reduced QARS amino acylation activity in vitro. Furthermore, the same homozygous mutation was found in an unrelated girl of Ashkenazi origin with the same phenotype. The clinical presentation of our patients differs from the original QARS-associated syndrome in the severe postnatal growth failure, absence of epilepsy, and minor MRI findings, thus further expanding the phenotypic spectrum of the glutaminyl-tRNA synthetase deficiency syndromes.

  5. Avoiding occlusal derangement in facial fractures: An evidence based approach

    Directory of Open Access Journals (Sweden)

    Derick Mendonca

    2013-01-01

    Full Text Available Facial fractures with occlusal derangement describe any fracture which directly or indirectly affects the occlusal relationship. Such fractures include dento-alveolar fractures in the maxilla and mandible, midface fractures - Le fort I, II, III and mandible fractures of the symphysis, parasymphysis, body, angle, and condyle. In some of these fractures, the fracture line runs through the dento-alveolar component whereas in others the fracture line is remote from the occlusal plane nevertheless altering the occlusion. The complications that could ensue from the management of maxillofacial fractures are predominantly iatrogenic, and therefore can be avoided if adequate care is exercised by the operating surgeon. This paper does not emphasize on complications arising from any particular technique in the management of maxillofacial fractures but rather discusses complications in general, irrespective of the technique used.

  6. A small-world network model of facial emotion recognition.

    Science.gov (United States)

    Takehara, Takuma; Ochiai, Fumio; Suzuki, Naoto

    2016-01-01

    Various models have been proposed to increase understanding of the cognitive basis of facial emotions. Despite those efforts, interactions between facial emotions have received minimal attention. If collective behaviours relating to each facial emotion in the comprehensive cognitive system could be assumed, specific facial emotion relationship patterns might emerge. In this study, we demonstrate that the frameworks of complex networks can effectively capture those patterns. We generate 81 facial emotion images (6 prototypes and 75 morphs) and then ask participants to rate degrees of similarity in 3240 facial emotion pairs in a paired comparison task. A facial emotion network constructed on the basis of similarity clearly forms a small-world network, which features an extremely short average network distance and close connectivity. Further, even if two facial emotions have opposing valences, they are connected within only two steps. In addition, we show that intermediary morphs are crucial for maintaining full network integration, whereas prototypes are not at all important. These results suggest the existence of collective behaviours in the cognitive systems of facial emotions and also describe why people can efficiently recognize facial emotions in terms of information transmission and propagation. For comparison, we construct three simulated networks--one based on the categorical model, one based on the dimensional model, and one random network. The results reveal that small-world connectivity in facial emotion networks is apparently different from those networks, suggesting that a small-world network is the most suitable model for capturing the cognitive basis of facial emotions.
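
    A brief sketch of the type of analysis described above: turn a pairwise similarity matrix over facial-emotion images into a graph by connecting the most similar pairs, then compare its clustering coefficient and average shortest path length against a random graph of equal size and density. The similarity matrix below is synthetic and the 90th-percentile threshold is an illustrative choice.

        # Small-world check sketch: similarity matrix -> graph -> clustering and path length.
        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(4)
        n = 81                                             # e.g. 6 prototypes + 75 morphs
        sim = rng.random((n, n))
        sim = (sim + sim.T) / 2.0                          # symmetric toy similarity matrix

        G = nx.Graph()
        G.add_nodes_from(range(n))
        threshold = np.quantile(sim[np.triu_indices(n, 1)], 0.9)   # keep the top 10% of pairs
        for i in range(n):
            for j in range(i + 1, n):
                if sim[i, j] >= threshold:
                    G.add_edge(i, j)

        R = nx.gnm_random_graph(n, G.number_of_edges(), seed=0)    # density-matched random graph
        if nx.is_connected(G) and nx.is_connected(R):
            # Small-world signature: clustering well above the random graph with similar path length
            # (visible with real similarity data; the uniform toy matrix behaves like the baseline).
            print("C:", nx.average_clustering(G), "vs", nx.average_clustering(R))
            print("L:", nx.average_shortest_path_length(G), "vs", nx.average_shortest_path_length(R))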

  7. Personality Trait and Facial Expression Filter-Based Brain-Computer Interface

    Directory of Open Access Journals (Sweden)

    Seongah Chin

    2013-02-01

    Full Text Available In this paper, we present technical approaches that bridge the gap in the research related to the use of brain-computer interfaces for entertainment and facial expressions. Such facial expressions that reflect an individual's personal traits can be used to better realize artificial facial expressions in a gaming environment based on a brain-computer interface. First, an emotion extraction filter is introduced in order to classify emotions on the basis of the users' brain signals in real time. Next, a personality trait filter is defined to classify extrovert and introvert types, which manifest as five traits: very extrovert, extrovert, medium, introvert and very introvert. In addition, facial expressions derived from expression rates are obtained by an extrovert-introvert fuzzy model through its defuzzification process. Finally, we confirm this validation via an analysis of the variance of the personality trait filter, a k-fold cross validation of the emotion extraction filter, an accuracy analysis, a user study of facial synthesis and a test case game.

  8. Experience-based human perception of facial expressions in Barbary macaques (Macaca sylvanus)

    Science.gov (United States)

    Levy, Xandria; Meints, Kerstin; Majolo, Bonaventura

    2017-01-01

    Background Facial expressions convey key cues of human emotions, and may also be important for interspecies interactions. The universality hypothesis suggests that six basic emotions (anger, disgust, fear, happiness, sadness, and surprise) should be expressed by similar facial expressions in close phylogenetic species such as humans and nonhuman primates. However, some facial expressions have been shown to differ in meaning between humans and nonhuman primates like macaques. This ambiguity in signalling emotion can lead to an increased risk of aggression and injuries for both humans and animals. This raises serious concerns for activities such as wildlife tourism where humans closely interact with wild animals. Understanding what factors (i.e., experience and type of emotion) affect ability to recognise emotional state of nonhuman primates, based on their facial expressions, can enable us to test the validity of the universality hypothesis, as well as reduce the risk of aggression and potential injuries in wildlife tourism. Methods The present study investigated whether different levels of experience of Barbary macaques, Macaca sylvanus, affect the ability to correctly assess different facial expressions related to aggressive, distressed, friendly or neutral states, using an online questionnaire. Participants’ level of experience was defined as either: (1) naïve: never worked with nonhuman primates and never or rarely encountered live Barbary macaques; (2) exposed: shown pictures of the different Barbary macaques’ facial expressions along with the description and the corresponding emotion prior to undertaking the questionnaire; (3) expert: worked with Barbary macaques for at least two months. Results Experience with Barbary macaques was associated with better performance in judging their emotional state. Simple exposure to pictures of macaques’ facial expressions improved the ability of inexperienced participants to better discriminate neutral and distressed

  9. Statistical feature extraction based iris recognition system

    Indian Academy of Sciences (India)

    Atul Bansal

    Abstract. Iris recognition systems have been proposed by numerous researchers using different feature extraction techniques for accurate and reliable biometric authentication. In this paper, a statistical feature extraction technique based on correlation between adjacent pixels has been proposed and implemented. Ham-.

  10. Facial orientation and facial shape in extant great apes: a geometric morphometric analysis of covariation.

    Science.gov (United States)

    Neaux, Dimitri; Guy, Franck; Gilissen, Emmanuel; Coudyzer, Walter; Vignaud, Patrick; Ducrocq, Stéphane

    2013-01-01

    The organization of the bony face is complex, its morphology being influenced in part by the rest of the cranium. Characterizing the facial morphological variation and craniofacial covariation patterns in extant hominids is fundamental to the understanding of their evolutionary history. Numerous studies on hominid facial shape have proposed hypotheses concerning the relationship between the anterior facial shape, facial block orientation and basicranial flexion. In this study we test these hypotheses in a sample of adult specimens belonging to three extant hominid genera (Homo, Pan and Gorilla). Intraspecific variation and covariation patterns are analyzed using geometric morphometric methods and multivariate statistics, such as partial least squares on three-dimensional landmark coordinates. Our results indicate significant intraspecific covariation between facial shape, facial block orientation and basicranial flexion. Hominids share similar characteristics in the relationship between anterior facial shape and facial block orientation. Modern humans exhibit a specific pattern in the covariation between anterior facial shape and basicranial flexion. This peculiar feature underscores the role of modern humans' highly-flexed basicranium in the overall integration of the cranium. Furthermore, our results are consistent with the hypothesis of a relationship between the reduction of the value of the cranial base angle and a downward rotation of the facial block in modern humans, and to a lesser extent in chimpanzees.
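
    A short sketch of the two-block partial least squares step mentioned above, using scikit-learn's PLSCanonical on two toy blocks that stand in for (already superimposed) facial and basicranial landmark coordinates; the landmark counts and the simulated shared signal are placeholders.

        # Two-block PLS sketch for shape covariation between toy landmark blocks.
        import numpy as np
        from sklearn.cross_decomposition import PLSCanonical

        rng = np.random.default_rng(5)
        n = 60                                                    # specimens
        face_block = rng.normal(size=(n, 3 * 10))                 # 10 facial landmarks x (x, y, z)
        shared = face_block[:, :5] @ rng.normal(size=(5, 3 * 6))  # inject some covariation
        base_block = shared + 0.5 * rng.normal(size=(n, 3 * 6))   # 6 basicranial landmarks

        pls = PLSCanonical(n_components=2).fit(face_block, base_block)
        face_scores, base_scores = pls.transform(face_block, base_block)
        # The correlation of paired scores on the first PLS axis summarizes the covariation.
        print(np.corrcoef(face_scores[:, 0], base_scores[:, 0])[0, 1])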

  11. Generation of facial expressions from emotion using a fuzzy rule based system

    NARCIS (Netherlands)

    Bui, T.D.; Heylen, Dirk K.J.; Poel, Mannes; Nijholt, Antinus; Stumptner, Markus; Corbett, Dan; Brooks, Mike

    2001-01-01

    We propose a fuzzy rule-based system to map representations of the emotional state of an animated agent onto muscle contraction values for the appropriate facial expressions. Our implementation pays special attention to the way in which continuous changes in the intensity of emotions can be
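
    A toy sketch of the general mechanism described above: fuzzify an emotion intensity, fire a couple of rules that map it to a muscle-contraction level, and defuzzify by centroid. The membership functions, the single happiness-to-contraction mapping, and all numbers are invented for illustration and are not the authors' rule base.

        # Tiny Mamdani-style fuzzy sketch: emotion intensity -> muscle contraction value.
        import numpy as np

        def tri(x, a, b, c):
            # Triangular membership function over x with corners a, b, c.
            return np.maximum(np.minimum((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

        def contraction_from_happiness(intensity):
            # Fuzzify the input (intensity in [0, 1]) into "low" and "high" happiness.
            low, high = tri(intensity, -0.1, 0.0, 0.6), tri(intensity, 0.4, 1.0, 1.1)
            u = np.linspace(0.0, 1.0, 101)                   # contraction universe
            # Rules: IF happiness is low THEN contraction is small;
            #        IF happiness is high THEN contraction is large.
            aggregated = np.maximum(np.minimum(low, tri(u, 0.0, 0.1, 0.4)),
                                    np.minimum(high, tri(u, 0.5, 0.9, 1.0)))
            return float((u * aggregated).sum() / (aggregated.sum() + 1e-12))   # centroid

        print(contraction_from_happiness(0.8))               # fairly strong (toy) contraction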

  12. Spatio-Temporal Pain Recognition in CNN-based Super-Resolved Facial Images

    DEFF Research Database (Denmark)

    Bellantonio, Marco; Haque, Mohammad Ahsanul; Rodriguez, Pau

    2017-01-01

    Automatic pain detection is a long expected solution to a prevalent medical problem of pain management. This is more relevant when the subject of pain is young children or patients with limited ability to communicate about their pain experience. Computer vision-based analysis of facial pain expre...

  13. Adaptive metric learning with deep neural networks for video-based facial expression recognition

    Science.gov (United States)

    Liu, Xiaofeng; Ge, Yubin; Yang, Chao; Jia, Ping

    2018-01-01

    Video-based facial expression recognition has become increasingly important for many real-world applications. Although numerous efforts have been made for the single-sequence case, balancing the complex distribution of intra- and interclass variations between sequences has remained a great difficulty in this area. We propose the adaptive (N+M)-tuplet clusters loss function and optimize it together with the softmax loss in the training phase. The variations introduced by personal attributes are alleviated using similarity measurements over multiple samples in the feature space, with far fewer comparisons than conventional deep metric learning approaches, which makes the metric calculations feasible for large-data applications (e.g., videos). Both the spatial and temporal relations are explored by a unified framework that consists of an Inception-ResNet network with long short-term memory and a two-branch fully connected layer structure. Our proposed method has been evaluated on three well-known databases, and the experimental results show that it outperforms many state-of-the-art approaches.
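
    As a rough illustration of the deep-metric-learning ingredient above, the sketch below implements a plain triplet-style margin loss in PyTorch rather than the paper's (N+M)-tuplet clusters loss; the embedding dimension, margin, and the random embeddings are placeholders for the outputs of the Inception-ResNet/LSTM network.

        # Plain triplet-margin metric-learning loss sketch (a simplified stand-in).
        import torch
        import torch.nn.functional as F

        def triplet_margin_loss(anchor, positive, negative, margin=0.2):
            # Pull same-expression embeddings together, push different-expression ones apart.
            d_pos = F.pairwise_distance(anchor, positive)
            d_neg = F.pairwise_distance(anchor, negative)
            return torch.clamp(d_pos - d_neg + margin, min=0.0).mean()

        embed = lambda k: F.normalize(torch.randn(k, 128), dim=1)   # stand-in embeddings
        metric_loss = triplet_margin_loss(embed(8), embed(8), embed(8))
        # In training, this term would be optimized jointly with a softmax (cross-entropy) loss.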

  14. Human emotion detector based on genetic algorithm using lip features

    Science.gov (United States)

    Brown, Terrence; Fetanat, Gholamreza; Homaifar, Abdollah; Tsou, Brian; Mendoza-Schrock, Olga

    2010-04-01

    We predicted human emotion using a Genetic Algorithm (GA) based lip feature extractor from facial images to classify all seven universal emotions of fear, happiness, dislike, surprise, anger, sadness and neutrality. First, we isolated the mouth from the input images using special methods, such as Region of Interest (ROI) acquisition, grayscaling, histogram equalization, filtering, and edge detection. Next, the GA determined the optimal or near optimal ellipse parameters that circumvent and separate the mouth into upper and lower lips. The two ellipses then went through fitness calculation and were followed by training using a database of Japanese women's faces expressing all seven emotions. Finally, our proposed algorithm was tested using a published database consisting of emotions from several persons. The final results were then presented in confusion matrices. Our results showed an accuracy that varies from 20% to 60% for each of the seven emotions. The errors were mainly due to inaccuracies in the classification, and also due to the different expressions in the given emotion database. Detailed analysis of these errors pointed to the limitation of detecting emotion based on the lip features alone. Similar work [1] has been done in the literature for emotion detection in only one person, we have successfully extended our GA based solution to include several subjects.
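
    A compact genetic-algorithm sketch in the spirit of the lip-ellipse search above: a population of ellipse parameters evolves toward maximizing a fitness score, here simply how tightly given lip-edge points hug the candidate ellipse. The fitness definition, population sizes, and the synthetic edge points are invented stand-ins, not the paper's settings.

        # Toy GA sketch: evolve ellipse parameters (cx, cy, a, b) to fit lip-edge points.
        import numpy as np

        rng = np.random.default_rng(6)
        true = np.array([60.0, 40.0, 25.0, 10.0])                  # hidden "lip" ellipse
        ang = rng.uniform(0, 2 * np.pi, 150)
        edges = np.c_[true[0] + true[2] * np.cos(ang), true[1] + true[3] * np.sin(ang)]
        edges += rng.normal(0, 0.5, edges.shape)                   # noisy edge pixels

        def fitness(p):
            cx, cy, a, b = p
            r = ((edges[:, 0] - cx) / a) ** 2 + ((edges[:, 1] - cy) / b) ** 2
            return -np.mean(np.abs(r - 1.0))                       # 0 is a perfect fit

        pop = rng.uniform([30, 20, 5, 3], [90, 60, 40, 20], size=(60, 4))
        for _ in range(150):
            scores = np.array([fitness(p) for p in pop])
            parents = pop[np.argsort(scores)[-20:]]                # selection: keep the fittest
            pa = parents[rng.integers(0, 20, 40)]
            pb = parents[rng.integers(0, 20, 40)]
            kids = 0.5 * (pa + pb) + rng.normal(0, 0.5, (40, 4))   # blend crossover + mutation
            kids[:, 2:] = np.clip(kids[:, 2:], 1.0, None)          # keep the axes positive
            pop = np.vstack([parents, kids])
        print(np.round(pop[np.argmax([fitness(p) for p in pop])], 1))   # typically near `true`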

  15. Infrared-based blink-detecting glasses for facial pacing: toward a bionic blink.

    Science.gov (United States)

    Frigerio, Alice; Hadlock, Tessa A; Murray, Elizabeth H; Heaton, James T

    2014-01-01

    IMPORTANCE Facial paralysis remains one of the most challenging conditions to effectively manage, often causing life-altering deficits in both function and appearance. Facial rehabilitation via pacing and robotic technology has great yet unmet potential. A critical first step toward reanimating symmetrical facial movement in cases of unilateral paralysis is the detection of healthy movement to use as a trigger for stimulated movement. OBJECTIVE To test a blink detection system that can be attached to standard eyeglasses and used as part of a closed-loop facial pacing system. DESIGN, SETTING, AND PARTICIPANTS Standard safety glasses were equipped with an infrared (IR) emitter-detector unit, oriented horizontally across the palpebral fissure, creating a monitored IR beam that became interrupted when the eyelids closed, and were tested in 24 healthy volunteers from a tertiary care facial nerve center community. MAIN OUTCOMES AND MEASURES Video-quantified blinking was compared with both IR sensor signal magnitude and rate of change in healthy participants with their gaze in repose, while they shifted their gaze from central to far-peripheral positions, and during the production of particular facial expressions. RESULTS Blink detection based on signal magnitude achieved 100% sensitivity in forward gaze but generated false detections on downward gaze. Calculations of peak rate of signal change (first derivative) typically distinguished blinks from gaze-related eyelid movements. During forward gaze, 87% of detected blink events were true positives, 11% were false positives, and 2% were false negatives. Of the 11% false positives, 6% were associated with partial eyelid closures. During gaze changes, false blink detection occurred 6% of the time during lateral eye movements, 10% of the time during upward movements, 47% of the time during downward movements, and 6% of the time for movements from an upward or downward gaze back to the primary gaze. Facial expressions
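
    A simple sketch of the derivative-based trigger discussed above: blink candidates are flagged where the rate of change of the IR signal exceeds a threshold, which is what separates fast lid closures from slower gaze-related shifts. The threshold, refractory period, and synthetic signal are illustrative values, not the device's calibration.

        # Blink-detection sketch: flag fast drops in an IR eyelid signal via its first derivative.
        import numpy as np

        def detect_blinks(signal, fs, rate_threshold, refractory_s=0.2):
            # Return sample indices where the signal falls faster than rate_threshold (units/s).
            d = np.gradient(np.asarray(signal, dtype=float)) * fs   # derivative per second
            candidates = np.flatnonzero(d < -rate_threshold)        # steep negative slope = closure
            blinks, last = [], -np.inf
            for i in candidates:                                     # collapse clustered samples
                if i - last > refractory_s * fs:
                    blinks.append(int(i))
                last = i
            return blinks

        # Toy IR signal at 100 Hz: a steady beam with two brief interruptions (blinks).
        fs, x = 100, np.ones(500)
        x[120:130] = 0.2
        x[300:312] = 0.2
        print(detect_blinks(x, fs, rate_threshold=20.0))             # -> [119, 299]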

  16. Autosomal recessive spastic tetraplegia caused by AP4M1 and AP4B1 gene mutation: expansion of the facial and neuroimaging features.

    Science.gov (United States)

    Tüysüz, Beyhan; Bilguvar, Kaya; Koçer, Naci; Yalçınkaya, Cengiz; Çağlayan, Okay; Gül, Ece; Sahin, Sezgin; Çomu, Sinan; Günel, Murat

    2014-07-01

    Adaptor protein complex-4 (AP4) is a component of intracellular transportation of proteins, which is thought to have a unique role in neurons. Recently, mutations affecting all four subunits of AP4 (AP4M1, AP4E1, AP4S1, and AP4B1) have been found to cause similar autosomal recessive phenotype consisting of tetraplegic cerebral palsy and intellectual disability. The aim of this study was analyzing AP4 genes in three new families with this phenotype, and discussing their clinical findings with an emphasis on neuroimaging and facial features. Using homozygosity mapping followed by whole-exome sequencing, we identified two novel homozygous mutations in AP4M1 and a homozygous deletion in AP4B1 in three pairs of siblings. Spastic tetraplegia, microcephaly, severe intellectual disability, limited speech, and stereotypic laughter were common findings in our patients. All patients also had similar facial features consisting of coarse and hypotonic face, bitemporal narrowing, bulbous nose with broad nasal ridge, and short philtrum which were not described in patients with AP4M1 and AP4B1 mutations previously. The patients presented here and previously with AP4M1, AP4B1, and AP4E1 mutations shared brain abnormalities including asymmetrical ventriculomegaly, thin splenium of the corpus callosum, and reduced white matter volume. The patients also had hippocampal globoid formation and thin hippocampus. In conclusion, disorders due to mutations in AP4 complex have similar neurological, facial, and cranial imaging findings. Thus, these four genes encoding AP4 subunits should be screened in patients with autosomal recessive spastic tetraplegic cerebral palsy, severe intellectual disability, and stereotypic laughter, especially with the described facial and cranial MRI features. © 2014 Wiley Periodicals, Inc.

  17. Extended feature-fusion guidelines to improve image-based multi-modal biometrics

    CSIR Research Space (South Africa)

    Brown, Dane

    2016-09-01

    Full Text Available be used to help align the global features. These features can also be extracted from palmprints as they share many characteristics of the fingerprint. Facial texture patterns consist of global contour and pore features. Local features, known as facial... to classify different biometric modalities. Global and local features often require algorithms to improve their clarity and consistency over multiple samples. This is particularly the case with contours and pores in face images, principal lines in palmprint...

  18. Facial expression recognition and model-based regeneration for distance teaching

    Science.gov (United States)

    De Silva, Liyanage C.; Vinod, V. V.; Sengupta, Kuntal

    1998-12-01

    This paper presents a novel idea for a visual communication system that can support distance teaching over a network of computers. The authors' main focus is to enhance the quality of distance teaching by reducing the barrier between the teacher and the student that arises from the remote connection of the networked participants. The paper presents an effective way of improving the teacher-student communication link in an IT (Information Technology) based distance teaching scenario, using facial expression recognition results and global and local face motion detection results for both the teacher and the student. It presents a way of regenerating the facial images for the teacher-student down-link, which can enhance the teacher's facial expressions and also reduce the network traffic compared to usual video broadcasting scenarios. At the same time, it presents a way of representing a large volume of facial expression data from the whole student population (in the student-teacher up-link). This up-link representation helps the teacher receive instant feedback on his talk, as if he were delivering a face-to-face lecture. In conventional video tele-conferencing applications, this task is nearly impossible due to the huge volume of upward network traffic. The authors draw on several of their previously published results for most of the image processing components needed to complete such a system; some of the remaining system components are covered by ongoing work.

  19. Audiovisual laughter detection based on temporal features

    NARCIS (Netherlands)

    Petridis, Stavros; Nijholt, Antinus; Nijholt, A.; Pantic, M.; Pantic, Maja; Poel, Mannes; Poel, M.; Hondorp, G.H.W.

    2008-01-01

    Previous research on automatic laughter detection has mainly been focused on audio-based detection. In this study we present an audiovisual approach to distinguishing laughter from speech based on temporal features and we show that the integration of audio and visual information leads to improved

  20. Facial Position and Expression-Based Human-Computer Interface for Persons With Tetraplegia.

    Science.gov (United States)

    Bian, Zhen-Peng; Hou, Junhui; Chau, Lap-Pui; Magnenat-Thalmann, Nadia

    2016-05-01

    A human-computer interface (namely the Facial position and expression Mouse system, FM) for persons with tetraplegia based on a monocular infrared depth camera is presented in this paper. The nose position along with the mouth status (close/open) is detected by the proposed algorithm to control and navigate the cursor as computer user input. The algorithm is based on an improved Randomized Decision Tree, which is capable of detecting the facial information efficiently and accurately. A more comfortable user experience is achieved by mapping the nose motion to the cursor motion via a nonlinear function. The infrared depth camera enables the system to be independent of illumination and color changes both from the background and on the human face, which is a critical advantage over RGB camera-based options. Extensive experimental results show that the proposed system outperforms existing assistive technologies in terms of quantitative and qualitative assessments.
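
    A tiny sketch of the kind of nonlinear mapping mentioned above, turning small nose displacements into fine cursor motion and larger displacements into progressively faster motion; the dead zone, gain, and exponent are invented values rather than those of the cited system.

        # Nonlinear nose-motion to cursor-velocity mapping sketch (one axis).
        def cursor_velocity(dx_px, dead_zone=2.0, gain=1.5, power=1.7):
            # Map a nose displacement (pixels from rest) to a cursor speed (pixels per frame).
            if abs(dx_px) <= dead_zone:                        # ignore jitter near the rest position
                return 0.0
            speed = gain * (abs(dx_px) - dead_zone) ** power   # small moves fine, large moves fast
            return speed if dx_px > 0 else -speed

        for d in (1, 4, 10, 25):
            print(d, round(cursor_velocity(d), 1))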

  1. Occlusal and facial features in Amazon indigenous: An insight into the role of genetics and environment in the etiology dental malocclusion.

    Science.gov (United States)

    de Souza, Bento Sousa; Bichara, Livia Monteiro; Guerreiro, João Farias; Quintão, Cátia Cardoso Abdo; Normando, David

    2015-09-01

    Indigenous people of the Xingu river present a similar tooth wear pattern, practise exclusive breast-feeding, no pacifier use, and have a large intertribal genetic distance. To revisit the etiology of dental malocclusion features considering these population characteristics. Occlusion and facial features of five semi-isolated Amazon indigenous populations (n=351) were evaluated and compared to previously published data from urban Amazon people. Malocclusion prevalence ranged from 33.8% to 66.7%. Overall this prevalence is lower when compared to urban people mainly regarding posterior crossbite. A high intertribal diversity was found. The Arara-Laranjal village had a population with a normal face profile (98%) and a high rate of normal occlusion (66.2%), while another group from the same ethnicity presented a high prevalence of malocclusion, the highest occurrence of Class III malocclusion (32.6%) and long face (34.8%). In Pat-Krô village the population had the highest prevalence of Class II malocclusion (43.9%), convex profile (38.6%), increased overjet (36.8%) and deep bite (15.8%). Another village's population, from the same ethnicity, had a high frequency of anterior open bite (22.6%) and anterior crossbite (12.9%). The highest occurrence of bi-protrusion was found in the group with the lowest prevalence of dental crowding, and vice versa. Supported by previous genetic studies and given their similar environmental conditions, the high intertribal diversity of occlusal and facial features suggests that genetic factors contribute substantially to the morphology of occlusal and facial features in the indigenous groups studied. The low prevalence of posterior crossbite in the remote indigenous populations compared with urban populations may relate to prolonged breastfeeding and an absence of pacifiers in the indigenous groups. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Reconstruction of various perinasal defects using facial artery perforator-based nasolabial island flaps.

    Science.gov (United States)

    Yoon, Tae Ho; Yun, In Sik; Rha, Dong Kyun; Lee, Won Jai

    2013-11-01

    Classical flaps for perinasal defect reconstruction, such as forehead or nasolabial flaps, have some disadvantages involving limitations of the arc of rotation and two stages of surgery. However, a perforator-based flap is more versatile and allows freedom in flap design. We introduced our experience with reconstruction using a facial artery perforator-based propeller flap on the perinasal area. We describe the surgical differences between different defect subtypes. Between December 2005 and August 2013, 10 patients underwent perinasal reconstruction in which a facial artery perforator-based flap was used. We divided the perinasal defects into types A and B, according to location. The operative results, including flap size, arc of rotation, complications, and characteristics of the perforator were evaluated by retrospective chart review and photographic evaluation. Eight patients were male and 2 patients were female. Their mean age was 61 years (range, 35-75 years). The size of the flap ranged from 1 cm×1.5 cm to 3 cm×6 cm. Eight patients healed uneventfully, but 2 patients presented with mild flap congestion. However, these 2 patients healed by conservative management without any additional surgery. All of the flaps survived completely with aesthetically pleasing results. The facial artery perforator-based flap allowed for versatile customized flaps, and the donor site scar was concealed using the natural nasolabial fold.

  3. Toward a universal, automated facial measurement tool in facial reanimation.

    Science.gov (United States)

    Hadlock, Tessa A; Urban, Luke S

    2012-01-01

    To describe a highly quantitative facial function-measuring tool that yields accurate, objective measures of facial position in significantly less time than existing methods. Facial Assessment by Computer Evaluation (FACE) software was designed for facial analysis. Outputs report the static facial landmark positions and dynamic facial movements relevant in facial reanimation. Fifty individuals underwent facial movement analysis using Photoshop-based measurements and the new software; comparisons of agreement and efficiency were made. Comparisons were made between individuals with normal facial animation and patients with paralysis to gauge sensitivity to abnormal movements. Facial measurements were matched using FACE software and Photoshop-based measures at rest and during expressions. The automated assessments required significantly less time than Photoshop-based assessments.FACE measurements easily revealed differences between individuals with normal facial animation and patients with facial paralysis. FACE software produces accurate measurements of facial landmarks and facial movements and is sensitive to paralysis. Given its efficiency, it serves as a useful tool in the clinical setting for zonal facial movement analysis in comprehensive facial nerve rehabilitation programs.

  4. EMOTION RECOGNITION BASED ON THE ANALYSIS OF FACIAL EXPRESIONS: A SURVEY

    OpenAIRE

    ANDONI BERISTAIN; MANUEL GRAÑA

    2009-01-01

    Face expression recognition is an active area of research with several fields of applications, ranging from emotion recognition for advanced human computer interaction to avatar animation for the movie industry. This paper presents a review of the state-of-the-art emotion recognition based on the visual analysis of facial expressions. We cover the main technical approaches and discuss the issues related to the gathering of data for the validation of the proposed systems.

  5. Feature selection based classifier combination approach for ...

    Indian Academy of Sciences (India)

    2016-08-26

    Aug 26, 2016 ... Feature selection based classifier combination approach for handwritten Devanagari numeral recognition. Pratibha Singh Ajay Verma ... ensemble of classifiers. The main contribution of the proposed method is that it gives quite efficient results while utilizing only 10% of the patterns in the available dataset.

  6. A dynamic texture-based approach to recognition of facial actions and their temporal models.

    Science.gov (United States)

    Koelstra, Sander; Pantic, Maja; Patras, Ioannis

    2010-11-01

    In this work, we propose a dynamic texture-based approach to the recognition of facial Action Units (AUs, atomic facial gestures) and their temporal models (i.e., sequences of temporal segments: neutral, onset, apex, and offset) in near-frontal-view face videos. Two approaches to modeling the dynamics and the appearance in the face region of an input video are compared: an extended version of Motion History Images and a novel method based on Nonrigid Registration using Free-Form Deformations (FFDs). The extracted motion representation is used to derive motion orientation histogram descriptors in both the spatial and temporal domain. Per AU, a combination of discriminative, frame-based GentleBoost ensemble learners and dynamic, generative Hidden Markov Models detects the presence of the AU in question and its temporal segments in an input image sequence. When tested for recognition of all 27 lower and upper face AUs, occurring alone or in combination in 264 sequences from the MMI facial expression database, the proposed method achieved an average event recognition accuracy of 89.2 percent for the MHI method and 94.3 percent for the FFD method. The generalization performance of the FFD method has been tested using the Cohn-Kanade database. Finally, we also explored the performance on spontaneous expressions in the Sensitive Artificial Listener data set.
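
    A short sketch of the Motion History Image representation compared in the study above: pixels that move in the current frame are set to the maximum value and older motion decays linearly toward zero. The frame-differencing threshold, decay length, and the toy moving-block sequence are illustrative.

        # Motion History Image (MHI) sketch over a grayscale frame sequence.
        import numpy as np

        def motion_history(frames, diff_threshold=15, duration=20):
            # frames: sequence of equal-sized uint8 grayscale images; returns the final MHI.
            frames = [np.asarray(f, dtype=np.int16) for f in frames]
            mhi = np.zeros(frames[0].shape, dtype=np.float32)
            for prev, curr in zip(frames[:-1], frames[1:]):
                moving = np.abs(curr - prev) > diff_threshold        # binary motion mask
                mhi = np.where(moving, float(duration), np.maximum(mhi - 1.0, 0.0))
            return mhi / duration                                    # normalized to [0, 1]

        # Toy sequence: a bright block that shifts one pixel right per frame on a 32x32 canvas.
        sequence = []
        for t in range(10):
            frame = np.zeros((32, 32), dtype=np.uint8)
            frame[10:20, 5 + t:15 + t] = 200
            sequence.append(frame)
        mhi = motion_history(sequence)
        print(mhi.max(), round(float(mhi.mean()), 3))                # recent motion dominates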

  7. I care, even after the first impression: Facial appearance-based evaluations in healthcare context.

    Science.gov (United States)

    Mattarozzi, Katia; Colonnello, Valentina; De Gioia, Francesco; Todorov, Alexander

    2017-06-01

    Prior research has demonstrated that healthcare providers' implicit biases may contribute to healthcare disparities. Independent research in social psychology indicates that facial appearance-based evaluations affect social behavior in a variety of domains, influencing political, legal, and economic decisions. Whether and to what extent these evaluations influence approach behavior in healthcare contexts warrants research attention. Here we investigate the impact of facial appearance-based evaluations of trustworthiness on healthcare providers' caring inclination, and the moderating role of experience and information about the social identity of the faces. Novice and expert nurses rated their inclination to provide care when viewing photos of trustworthy-, neutral-, and untrustworthy-looking faces. To explore whether information about the target of care influences caring inclination, some participants were told that they would view patients' faces while others received no information about the faces. Both novice and expert nurses had higher caring inclination scores for trustworthy-than for untrustworthy-looking faces; however, experts had higher scores than novices for untrustworthy-looking faces. Regardless of a face's trustworthiness level, experts had higher caring inclination scores for patients than for unidentified individuals, while novices showed no differences. Facial appearance-based inferences can bias caring inclination in healthcare contexts. However, expert healthcare providers are less biased by these inferences and more sensitive to information about the target of care. These findings highlight the importance of promoting novice healthcare professionals' awareness of first impression biases. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Feature Selection Based on Mutual Correlation

    Czech Academy of Sciences Publication Activity Database

    Haindl, Michal; Somol, Petr; Ververidis, D.; Kotropoulos, C.

    2006-01-01

    Roč. 19, č. 4225 (2006), s. 569-577 ISSN 0302-9743. [Iberoamerican Congress on Pattern Recognition. CIARP 2006 /11./. Cancun, 14.11.2006-17.11.2006] R&D Projects: GA AV ČR 1ET400750407; GA MŠk 1M0572; GA AV ČR IAA2075302 EU Projects: European Commission(XE) 507752 - MUSCLE Institutional research plan: CEZ:AV0Z10750506 Keywords : feature selection Subject RIV: BD - Theory of Information Impact factor: 0.402, year: 2005 http://library.utia.cas.cz/separaty/historie/haindl-feature selection based on mutual correlation.pdf

  9. Facial-based ethnic recognition: insights from two closely related but ethnically distinct groups

    Directory of Open Access Journals (Sweden)

    S. P. Henzi

    2010-02-01

    Full Text Available Previous studies on facial recognition have considered widely separated populations, both geographically and culturally, making it hard to disentangle effects of familiarity with an ability to identify ethnic groups per se. We used data from a highly intermixed population of African peoples from South Africa to test whether individuals from nine different ethnic groups could correctly differentiate between facial images of two of these, the Tswana and Pedi. Individuals could not assign ethnicity better than expected by chance, and there was no significant difference between genders in accuracy of assignment. Interestingly, we observed a trend that individuals of mixed ethnic origin were better at assigning ethnicity to Pedi and Tswanas than individuals from less mixed backgrounds. This result supports the hypothesis that ethnic recognition is based on the visual

  10. Management of the Facial Nerve in Lateral Skull Base Surgery Analytic Retrospective study

    Directory of Open Access Journals (Sweden)

    Mohamed A. El Shazly

    2011-01-01

    Full Text Available Background Surgical approaches to the jugular foramen are often complex and lengthy procedures associated with significant morbidity based on the anatomic and tumor characteristics. In addition to the risk of intra-operative hemorrhage from vascular tumors, lower cranial nerve deficits are frequently increased after intra-operative manipulation. Accordingly, modifications in the surgical techniques have been developed to minimize these risks. Preoperative embolization and intra-operative ligation of the external carotid artery have decreased the intraoperative blood loss. Accurate identification and exposure of the cranial nerves extracranially allows for their preservation during tumor resection. The modification of facial nerve mobilization provides widened infratemporal exposure with less postoperative facial weakness. The ideal approach should enable complete, one-stage tumor resection with excellent infratemporal and posterior fossa exposure and would not aggravate or cause neurologic deficit. The aim of this study is to present our experience in handling jugular foramen lesions (mainly glomus jugulare) without the need for anterior facial nerve transposition. Methods In this series we present our experience in Kasr ElEini University hospital (Cairo, Egypt) in handling 36 patients with jugular foramen lesions over a period of 20 years where the previously mentioned preoperative and operative rules were followed. The clinical status, operative technique and postoperative care and outcome are detailed and analyzed in relation to the outcome. Results Complete cure without complications was achieved in four cases of congenital cholesteatoma and four cases with class B glomus. In advanced cases of glomus jugulare (28 patients; C and D stages), complete cure was achieved in 21 of them (75%). The operative complications were also related to this group of 28 patients, in the form of facial paralysis in 20 of them (55.6%) and symptomatic vagal

  11. Morphometric studies on the facial skeleton of humans and pongids based on CT-scans.

    Science.gov (United States)

    Schumacher, K U; Koppe, T; Fanghänel, J; Schumacher, G H; Nagai, H

    1994-10-01

    The changes of the skull that we can observe during anthropogenesis are reflected especially in the different skull proportions. We carried out metric measurements at the median level on 10 adult skulls each of humans, chimpanzees and gorillas, as well as 11 skulls of orangutans. All skulls were scanned with a CT at the median level. We measured the lines and angles of the scans, and the means and standard deviations were calculated. We carried out a correlation analysis to observe the relation of their characteristics. We showed that there is a relation between the length of the skull base and the facial length in all species. From the results of the correlation analysis, we can also conclude that a relation exists between the degree of prognathism and the different length measurements of the facial skeleton. We also found a bending of the facial skeleton in relation to the cranial base towards the ventral side, also known as klinorhynchy, in all observed species. The highest degree of klinorhynchy was found in humans and the lowest in orangutans. We will discuss the different definitions of the term klinorhynchy and their importance in the evolution of the hominoids.

  12. Face recognition via edge-based Gabor feature representation for plastic surgery-altered images

    Science.gov (United States)

    Chude-Olisah, Chollette C.; Sulong, Ghazali; Chude-Okonkwo, Uche A. K.; Hashim, Siti Z. M.

    2014-12-01

    Plastic surgery procedures on the face introduce skin texture variations between images of the same person (intra-subject), thereby making the task of face recognition more difficult than in the normal scenario. Usually, in contemporary face recognition systems, the original gray-level face image is used as input to the Gabor descriptor, which translates to encoding some texture properties of the face image. The texture-encoding process significantly degrades the performance of such systems in the case of plastic surgery due to the presence of surgically induced intra-subject variations. Based on the proposition that the shape of significant facial components such as eyes, nose, eyebrow, and mouth remains unchanged after plastic surgery, this paper employs an edge-based Gabor feature representation approach for the recognition of surgically altered face images. We use the edge information, which is dependent on the shapes of the significant facial components, to address the plastic surgery-induced texture variation problems. To ensure that the significant facial components represent useful edge information with little or no false edges, a simple illumination normalization technique is proposed for preprocessing. Gabor wavelet is applied to the edge image to accentuate the uniqueness of the significant facial components for discriminating among different subjects. The performance of the proposed method is evaluated on the Georgia Tech (GT) and the Labeled Faces in the Wild (LFW) databases with illumination and expression problems, and the plastic surgery database with texture changes. Results show that the proposed edge-based Gabor feature representation approach is robust against plastic surgery-induced face variations amidst expression and illumination problems and outperforms the existing plastic surgery face recognition methods reported in the literature.
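
    A minimal sketch of the general idea (edge extraction followed by Gabor filtering), assuming OpenCV and NumPy; the histogram-equalization preprocessing, Canny thresholds and Gabor parameters are illustrative placeholders, not the authors' settings:

        import cv2
        import numpy as np

        def edge_gabor_features(gray_face, ksize=31, sigma=4.0, lambd=10.0, gamma=0.5):
            # Simple illumination normalization (a stand-in for the paper's preprocessing).
            norm = cv2.equalizeHist(gray_face)
            # Edge map emphasising the shape of eyes, nose, eyebrows and mouth.
            edges = cv2.Canny(norm, 50, 150)
            feats = []
            for theta in np.arange(0, np.pi, np.pi / 8):  # 8 orientations
                kernel = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma)
                response = cv2.filter2D(edges.astype(np.float32), cv2.CV_32F, kernel)
                feats.append(np.abs(response).mean())     # pooled magnitude per orientation
            return np.array(feats)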

  13. Facial Action Unit Recognition under Incomplete Data Based on Multi-label Learning with Missing Labels

    KAUST Repository

    Li, Yongqiang

    2016-07-07

    Facial action unit (AU) recognition has been applied in a wide range of fields and has attracted great attention in the past two decades. Most existing works on AU recognition assume that the complete label assignment for each training image is available, which is often not the case in practice. Labeling AUs is an expensive and time-consuming process. Moreover, due to AU ambiguity and subjective differences, some AUs are difficult to label reliably and confidently. Many AU recognition works try to train the classifier for each AU independently, which has a high computation cost and ignores the dependency among different AUs. In this work, we formulate AU recognition under incomplete data as a multi-label learning with missing labels (MLML) problem. Most existing MLML methods usually employ the same features for all classes. However, we find this setting unreasonable for AU recognition, as the occurrence of different AUs produces changes in skin surface displacement or face appearance in different face regions. If the shared features are used for all AUs, much noise will be involved due to the occurrence of other AUs. Consequently, the changes of the specific AUs cannot be clearly highlighted, leading to performance degradation. Instead, we propose to extract the most discriminative features for each AU individually, which are learned by a supervised learning method. The learned features are further embedded into the instance-level label smoothness term of our model, which also includes the label consistency and the class-level label smoothness. Both a global solution using st-cut and an approximated solution using conjugate gradient (CG) descent are provided. Experiments on both posed and spontaneous facial expression databases demonstrate the superiority of the proposed method in comparison with several state-of-the-art works.

  14. A voxel-based lesion study on facial emotion recognition after penetrating brain injury

    Science.gov (United States)

    Dal Monte, Olga; Solomon, Jeffrey M.; Schintu, Selene; Knutson, Kristine M.; Strenziok, Maren; Pardini, Matteo; Leopold, Anne; Raymont, Vanessa; Grafman, Jordan

    2013-01-01

    The ability to read emotions in the face of another person is an important social skill that can be impaired in subjects with traumatic brain injury (TBI). To determine the brain regions that modulate facial emotion recognition, we conducted a whole-brain analysis using a well-validated facial emotion recognition task and voxel-based lesion symptom mapping (VLSM) in a large sample of patients with focal penetrating TBIs (pTBIs). Our results revealed that individuals with pTBI performed significantly worse than normal controls in recognizing unpleasant emotions. VLSM mapping results showed that impairment in facial emotion recognition was due to damage in a bilateral fronto-temporo-limbic network, including the medial prefrontal cortex (PFC), anterior cingulate cortex, left insula and temporal areas. Besides those common areas, damage to the bilateral and anterior regions of the PFC led to impairment in recognizing unpleasant emotions, whereas damage to the bilateral posterior PFC and left temporal areas led to impairment in recognizing pleasant emotions. Our findings add empirical evidence that the ability to read pleasant and unpleasant emotions in other people's faces is a complex process involving not only a common network that includes the bilateral fronto-temporo-limbic lobes, but also other regions depending on emotional valence. PMID:22496440

  15. Familiarity facilitates feature-based face processing

    Science.gov (United States)

    Wheeler, Kelsey G.; Cipolli, Carlo; Gobbini, M. Ida

    2017-01-01

    Recognition of personally familiar faces is remarkably efficient, effortless and robust. We asked if feature-based face processing facilitates detection of familiar faces by testing the effect of face inversion on a visual search task for familiar and unfamiliar faces. Because face inversion disrupts configural and holistic face processing, we hypothesized that inversion would diminish the familiarity advantage to the extent that it is mediated by such processing. Subjects detected personally familiar and stranger target faces in arrays of two, four, or six face images. Subjects showed significant facilitation of personally familiar face detection for both upright and inverted faces. The effect of familiarity on target absent trials, which involved only rejection of unfamiliar face distractors, suggests that familiarity facilitates rejection of unfamiliar distractors as well as detection of familiar targets. The preserved familiarity effect for inverted faces suggests that facilitation of face detection afforded by familiarity reflects mostly feature-based processes. PMID:28582439

  16. Towards cardinality-based service feature diagrams

    Directory of Open Access Journals (Sweden)

    Ghulam Mustafa Assad

    2015-03-01

    Full Text Available To provide efficient services to end-users it is essential to manage variability among services. Feature modelling is an important approach to managing the variability and commonalities of a system in a product line. Feature models are composed of feature diagrams. Service feature diagrams (an extended form of feature diagrams) changed the basic framework of feature diagrams by proposing new feature types and their relevance. Service feature diagrams provide selection rights for variable features. In this paper we argue that it is essential to put cardinalities on service feature diagrams. That is, the selection of features should be done under some constraints, to provide a lower and upper limit for the selection of features. The use of cardinalities on service feature diagrams reduces the types of features to half, while keeping the integrity of all features.

  17. FEATURE EXTRACTION FOR EMG BASED PROSTHESES CONTROL

    Directory of Open Access Journals (Sweden)

    R. Aishwarya

    2013-01-01

    Full Text Available The control of a prosthetic limb would be more effective if it were based on Surface Electromyogram (SEMG) signals from remnant muscles. The analysis of SEMG signals depends on a number of factors, such as amplitude as well as time- and frequency-domain properties. Time series analysis using an Auto Regressive (AR) model and the mean frequency, which is tolerant to white Gaussian noise, are used as feature extraction techniques. The EMG histogram is used as another feature vector, which was seen to give more distinct classification. The work was done with an SEMG dataset obtained from the NINAPRO DATABASE, a resource for the biorobotics community. Eight classes of hand movements (hand open, hand close, wrist extension, wrist flexion, pointing index, ulnar deviation, thumbs up, thumb opposite to little finger) are taken into consideration and feature vectors are extracted. The feature vectors can be given to an artificial neural network for further classification in controlling the prosthetic arm, which is not dealt with in this paper.
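
    A minimal NumPy sketch of the three feature types named above (AR coefficients, mean frequency, and amplitude histogram), assuming a 1-D array of SEMG samples; window length, AR order, sampling rate and bin count are illustrative assumptions rather than the paper's settings:

        import numpy as np

        def ar_coefficients(x, order=4):
            # Least-squares fit of x[n] ~ sum_k a_k * x[n-k].
            X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
            y = x[order:]
            coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
            return coeffs

        def mean_frequency(x, fs=2000.0):
            # Power-spectrum-weighted average frequency.
            spectrum = np.abs(np.fft.rfft(x)) ** 2
            freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
            return np.sum(freqs * spectrum) / np.sum(spectrum)

        def emg_histogram(x, bins=9):
            hist, _ = np.histogram(x, bins=bins)
            return hist / hist.sum()

        def feature_vector(window, fs=2000.0):
            # One feature vector per analysis window of SEMG samples.
            return np.concatenate([ar_coefficients(window),
                                   [mean_frequency(window, fs)],
                                   emg_histogram(window)])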

  18. A Feature-Based Structural Measure: An Image Similarity Measure for Face Recognition

    Directory of Open Access Journals (Sweden)

    Noor Abdalrazak Shnain

    2017-08-01

    Full Text Available Facial recognition is one of the most challenging and interesting problems within the field of computer vision and pattern recognition. During the last few years, it has gained special attention due to its importance in relation to current issues such as security, surveillance systems and forensic analysis. Despite this high level of attention to facial recognition, success is still limited by certain conditions; there is no method which gives reliable results in all situations. In this paper, we propose an efficient similarity index that resolves the shortcomings of the existing measures of feature and structural similarity. This measure, called the Feature-Based Structural Measure (FSM), combines the best features of the well-known SSIM (structural similarity index measure) and FSIM (feature similarity index measure) approaches, striking a balance between performance for similar and dissimilar images of human faces. In addition to the statistical structural properties provided by SSIM, edge detection is incorporated in FSM as a distinctive structural feature. Its performance is tested for a wide range of PSNR (peak signal-to-noise ratio), using the ORL (Olivetti Research Laboratory, now AT&T Laboratory Cambridge) and FEI (Faculty of Industrial Engineering, São Bernardo do Campo, São Paulo, Brazil) databases. The proposed measure is tested under conditions of Gaussian noise; simulation results show that the proposed FSM outperforms the well-known SSIM and FSIM approaches in its efficiency of similarity detection and recognition of human faces.
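
    A rough sketch of the idea only, not the published FSM formula: blend a structural similarity score on raw intensities with one computed on edge maps, using scikit-image; the Sobel edge detector and the weighting parameter alpha are assumptions for illustration:

        import numpy as np
        from skimage.filters import sobel
        from skimage.metrics import structural_similarity as ssim

        def fsm_like_similarity(img1, img2, alpha=0.5):
            # img1, img2: grayscale images of the same shape.
            img1 = img1.astype(float)
            img2 = img2.astype(float)
            s_intensity = ssim(img1, img2, data_range=img1.max() - img1.min())
            # Edge maps contribute a distinctive structural component.
            e1, e2 = sobel(img1), sobel(img2)
            s_edges = ssim(e1, e2, data_range=max(e1.max() - e1.min(), 1e-8))
            return alpha * s_intensity + (1.0 - alpha) * s_edges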

  19. Memory Driven Feature-Based Design

    Science.gov (United States)

    1993-01-01

    ...memory, measures of similarity, and the question of how to manage remembering and recollecting on the basis of similarity [18]. There is a large body... The work is also influenced by the Dynamic Memory ideas of Schank [20], by the episodic memory ideas of Kolodner [21], and by the case-based planning approach... (DTIC report AD-A264 697, WL-TR-93-4021, "Memory Driven Feature-Based Design", Y.H. Pao, F.L. Merat, G.M. Radack, Case Western Reserve University, Electrical ..., 1993)

  20. Modified kernel-based nonlinear feature extraction.

    Energy Technology Data Exchange (ETDEWEB)

    Ma, J. (Junshui); Perkins, S. J. (Simon J.); Theiler, J. P. (James P.); Ahalt, S. (Stanley)

    2002-01-01

    Feature Extraction (FE) techniques are widely used in many applications to pre-process data in order to reduce the complexity of subsequent processes. A group of kernel-based nonlinear FE (KFE) algorithms has attracted much attention due to their high performance. However, a serious limitation that is inherent in these algorithms -- the maximal number of features extracted by them is limited by the number of classes involved -- dramatically degrades their flexibility. Here we propose a modified version of those KFE algorithms (MKFE). This algorithm is developed from a special form of scatter-matrix, whose rank is not determined by the number of classes involved, and thus breaks the inherent limitation in those KFE algorithms. Experimental results suggest that the MKFE algorithm is especially useful when the training set is small.

  1. Facial emotion recognition system for autistic children: a feasible study based on FPGA implementation.

    Science.gov (United States)

    Smitha, K G; Vinod, A P

    2015-11-01

    Children with autism spectrum disorder have difficulty in understanding the emotional and mental states from the facial expressions of the people they interact with. The inability to understand other people's emotions will hinder their interpersonal communication. Though many facial emotion recognition algorithms have been proposed in the literature, they are mainly intended for processing by a personal computer, which limits their usability in on-the-move applications where portability is desired. The portability of the system will ensure ease of use and real-time emotion recognition, which will aid immediate feedback while communicating with caretakers. Principal component analysis (PCA) has been identified as the least complex feature extraction algorithm to be implemented in hardware. In this paper, we present a detailed study of serial and parallel implementations of PCA in order to identify the most feasible method for realization of a portable emotion detector for autistic children. The proposed emotion recognizer architectures are implemented on a Virtex 7 XC7VX330T FFG1761-3 FPGA. We achieved 82.3% detection accuracy for a word length of 8 bits.
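
    For software reference only (the paper targets an FPGA implementation), a minimal NumPy sketch of PCA-based feature extraction of the kind described above; the number of components is an assumed parameter:

        import numpy as np

        def pca_train(face_vectors, n_components=20):
            # face_vectors: (n_samples, n_pixels) array of flattened face images.
            mean = face_vectors.mean(axis=0)
            centered = face_vectors - mean
            # Rows of vt are the principal directions (eigenfaces).
            _, _, vt = np.linalg.svd(centered, full_matrices=False)
            return mean, vt[:n_components]

        def pca_project(face_vector, mean, components):
            # Low-dimensional feature vector used by the downstream classifier.
            return components @ (face_vector - mean)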

  2. Qualified matching feature collection for feature point-based copy-move forgery detection

    Science.gov (United States)

    Yu, Liyang; Han, Qi; Niu, Xiamu

    2015-03-01

    The feature matching step plays a critical role during the copy-move forgery detection procedure. However, when several highly similar features simultaneously exist in the feature space, current feature matching methods will miss a considerable number of genuine matching feature pairs. To this end, we propose a clustering-based method to collect qualified matching features for the feature point-based methods. The proposed method can collect far more genuine matching features than existing methods do, and thus significantly improve the detection performance, especially for multiple pasting cases. Experimental results confirm the efficacy of the proposed method.

  3. Improvements on a simple muscle-based 3D face for realistic facial expressions

    NARCIS (Netherlands)

    Bui, T.D.; Heylen, Dirk K.J.; Nijholt, Antinus; Badler, N.; Thalmann, D.

    2003-01-01

    Facial expressions play an important role in face-to-face communication. With the development of personal computers capable of rendering high quality graphics, computer facial animation has produced more and more realistic facial expressions to enrich human-computer communication. In this paper, we

  4. Comparative analysis of the anterior and posterior length and deflection angle of the cranial base, in individuals with facial Pattern I, II and III

    Directory of Open Access Journals (Sweden)

    Guilherme Thiesen

    2013-02-01

    Full Text Available OBJECTIVE: This study evaluated the variations in the anterior cranial base (S-N), posterior cranial base (S-Ba) and deflection of the cranial base (SNBa) among three different facial patterns (Pattern I, II and III). METHOD: A sample of 60 lateral cephalometric radiographs of Brazilian Caucasian patients, of both genders, between 8 and 17 years of age was selected. The sample was divided into 3 groups (Pattern I, II and III) of 20 individuals each. The inclusion criteria for each group were the ANB angle, Wits appraisal and the facial profile angle (G'.Sn.Pg'). To compare the mean values (SNBa, S-N, S-Ba) obtained for each group, the ANOVA test and Scheffé's post-hoc test were applied. RESULTS AND CONCLUSIONS: There was no statistically significant difference in the deflection angle of the cranial base among the different facial patterns (Patterns I, II and III). There was no significant difference in the measures of the anterior and posterior cranial base between facial Patterns I and II. The mean values for S-Ba were lower in facial Pattern III, with a statistically significant difference. The mean values of S-N in facial Pattern III were also reduced, but without a statistically significant difference. This trend of lower values in the cranial base measurements would explain the maxillary deficiency and/or mandibular prognathism features that characterize facial Pattern III.

  5. Hirschsprung disease, microcephaly, mental retardation, and characteristic facial features: delineation of a new syndrome and identification of a locus at chromosome 2q22-q23.

    Science.gov (United States)

    Mowat, D R; Croaker, G D; Cass, D T; Kerr, B A; Chaitow, J; Adès, L C; Chia, N L; Wilson, M J

    1998-01-01

    We have identified six children with a distinctive facial phenotype in association with mental retardation (MR), microcephaly, and short stature, four of whom presented with Hirschsprung (HSCR) disease in the neonatal period. HSCR was diagnosed in a further child at the age of 3 years after investigation for severe chronic constipation and another child, identified as sharing the same facial phenotype, had chronic constipation, but did not have HSCR. One of our patients has an interstitial deletion of chromosome 2, del(2)(q21q23). These children strongly resemble the patient reported by Lurie et al with HSCR and dysmorphic features associated with del(2)(q22q23). All patients have been isolated cases, suggesting a contiguous gene syndrome or a dominant single gene disorder involving a locus for HSCR located at 2q22-q23. Review of published reports suggests that there is significant phenotypic and genetic heterogeneity within the group of patients with HSCR, MR, and microcephaly. In particular, our patients appear to have a separate disorder from Goldberg-Shprintzen syndrome, for which autosomal recessive inheritance has been proposed because of sib recurrence and consanguinity in some families. PMID:9719364

  6. Quantitative Evaluation of the Efficiency of Facial Bio-potential Signals Based on Forehead Three-Channel Electrode Placement for Facial Gesture Recognition Applicable in a Human-Machine Interface

    Directory of Open Access Journals (Sweden)

    Iman Mohammad Rezazadeh

    2010-06-01

    Full Text Available Introduction: Today, facial bio-potential signals are employed in many human-machine interface applications for enhancing and empowering the rehabilitation process. The main point in achieving that goal is to record appropriate bioelectric signals from the human face by placing and configuring electrodes over it in the right way. In this paper, a heuristic geometrical position and configuration of the electrodes is proposed for improving the quality of the acquired signals and consequently enhancing the performance of the facial gesture classifier. Materials and Methods: Investigation and evaluation of the electrodes' proper geometrical position and configuration can be performed using two methods: clinical and modeling. In the clinical method, the electrodes are placed in predefined positions and the elicited signals are then processed. The performance of the method is evaluated based on the results obtained. On the other hand, in the modeling approach, the quality of the recorded signals and their information content are evaluated only by modeling and simulation. In this paper, both methods have been utilized together. First, suitable electrode positions and configuration were proposed and evaluated by modeling and simulation. Then, the experiment was performed with a predefined protocol on 7 healthy subjects to validate the simulation results. Here, the recorded signals were passed through parallel Butterworth filter banks to obtain facial EMG, EOG and EEG signals, and the RMS features of each 256 ms time slot were extracted. By using the power of Subtractive Fuzzy C-Means (SFCM), 8 different facial gestures (including smiling, frowning, pulling up the left and right lip corners, and left/right/up and down movements of the eyes) were discriminated. Results: According to the three-channel electrode configuration derived from modeling of the dipole effects on the surface electrodes and by employing the SFCM classifier, an average 94

  7. Spoofing detection on facial images recognition using LBP and GLCM combination

    Science.gov (United States)

    Sthevanie, F.; Ramadhani, K. N.

    2018-03-01

    The challenge for facial-based security systems is how to detect facial image falsification such as facial image spoofing. Spoofing occurs when someone tries to pretend to be a registered user to obtain illegal access and gain advantage from the protected system. This research implements a facial image spoofing detection method by analyzing image texture. The proposed method for texture analysis combines the Local Binary Pattern (LBP) and Gray Level Co-occurrence Matrix (GLCM) methods. The experimental results show that spoofing detection using the LBP and GLCM combination achieves a high detection rate compared to that of using only the LBP feature or the GLCM feature.
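
    A minimal sketch of an LBP plus GLCM feature combination of the kind described above, assuming scikit-image (0.19 or later, where the GLCM functions are spelled graycomatrix/graycoprops) and a 2-D uint8 face image; all parameters are illustrative, not the authors' settings:

        import numpy as np
        from skimage.feature import local_binary_pattern, graycomatrix, graycoprops

        def lbp_glcm_features(gray_face):
            # gray_face: 2-D uint8 image of the face region.
            # LBP texture histogram (uniform patterns, 8 neighbours, radius 1).
            lbp = local_binary_pattern(gray_face, P=8, R=1, method="uniform")
            lbp_hist, _ = np.histogram(lbp, bins=np.arange(0, 11), density=True)

            # GLCM statistics over four orientations at distance 1.
            glcm = graycomatrix(gray_face, distances=[1],
                                angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                                levels=256, symmetric=True, normed=True)
            glcm_feats = np.hstack([graycoprops(glcm, prop).ravel()
                                    for prop in ("contrast", "homogeneity",
                                                 "energy", "correlation")])
            return np.hstack([lbp_hist, glcm_feats])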

  8. Facial Expression Recognition Using SVM Classifier

    Directory of Open Access Journals (Sweden)

    Vasanth P.C.

    2015-03-01

    Full Text Available Facial feature tracking and facial action recognition from image sequences have attracted great attention in the computer vision field. Computational facial expression analysis is a challenging research topic in computer vision. It is required by many applications such as human-computer interaction, computer graphic animation and automatic facial expression recognition. In recent years, plenty of computer vision techniques have been developed to track or recognize facial activities at three levels. First, at the bottom level, facial feature tracking, which usually detects and tracks prominent landmarks surrounding facial components (i.e., mouth, eyebrow, etc.), captures the detailed face shape information. Second, facial action recognition, i.e., recognition of the facial action units (AUs) defined in FACS, tries to recognize some meaningful facial activities (i.e., lid tightener, eyebrow raiser, etc.). At the top level, facial expression analysis attempts to recognize facial expressions that represent human emotion states. In the proposed algorithm, the eyes and mouth are first detected; features of the eyes and mouth are extracted using a Gabor filter and Local Binary Pattern (LBP), and PCA is used to reduce the dimensions of the features. Finally an SVM is used for classification of expressions and facial action units.
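
    A minimal sketch of such a pipeline (Gabor and LBP features from eye/mouth regions, PCA for dimensionality reduction, SVM for classification), assuming OpenCV, scikit-image and scikit-learn; region sizes, filter parameters and the RBF kernel are assumptions for illustration:

        import numpy as np
        import cv2
        from skimage.feature import local_binary_pattern
        from sklearn.decomposition import PCA
        from sklearn.svm import SVC
        from sklearn.pipeline import make_pipeline

        def region_features(region):
            # region: 2-D uint8 crop of an eye or mouth area (all crops the same size).
            gabor = cv2.getGaborKernel((21, 21), 4.0, 0, 10.0, 0.5)
            gabor_resp = cv2.filter2D(region.astype(np.float32), cv2.CV_32F, gabor)
            lbp = local_binary_pattern(region, P=8, R=1, method="uniform")
            lbp_hist, _ = np.histogram(lbp, bins=np.arange(0, 11), density=True)
            return np.hstack([gabor_resp.ravel(), lbp_hist])

        def expression_classifier(regions, labels, n_components=50):
            # PCA reduces feature dimensionality before the SVM, as in the abstract.
            X = np.vstack([region_features(r) for r in regions])
            clf = make_pipeline(PCA(n_components=n_components), SVC(kernel="rbf"))
            clf.fit(X, labels)
            return clf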

  9. Feature-Based Nonlocal Polarimetric SAR Filtering

    Directory of Open Access Journals (Sweden)

    Xiaoli Xing

    2017-10-01

    Full Text Available Polarimetric synthetic aperture radar (PolSAR) images are inherently contaminated by multiplicative speckle noise, which complicates image interpretation and image analyses. To reduce the speckle effect, several adaptive speckle filters have been developed based on the weighted average of similarity measures commonly depending on the model or probability distribution, which are often affected by the distribution parameters and modeling texture components. In this paper, a novel filtering method introduces the coefficient of variation (CV) and Pauli basis (PB) to measure the similarity, and the two features are combined within the framework of nonlocal mean filtering. The CV is used to describe the complexity of various scenes and distinguish scene heterogeneity; moreover, the Pauli basis is able to express the polarimetric information in PolSAR image processing. The proposed filtering combines the CV and Pauli basis to improve the estimation accuracy of the similarity weights. Then, the similarity of the features is deduced according to the test statistic. Subsequently, the filtering proceeds by using the nonlocal weighted estimation. The performance of the proposed filter is tested with simulated images and real PolSAR images, which were acquired by the AIRSAR and ESAR systems. The qualitative and quantitative experiments indicate the validity of the proposed method in comparison with widely-used despeckling methods.

  10. Underwater Object Segmentation Based on Optical Features

    Directory of Open Access Journals (Sweden)

    Zhe Chen

    2018-01-01

    Full Text Available Underwater optical environments are seriously affected by various optical inputs, such as artificial light, sky light, and ambient scattered light. The latter two can block underwater object segmentation tasks, since they inhibit the emergence of objects of interest and distort image information, while artificial light can contribute to segmentation. Artificial light often focuses on the object of interest, and, therefore, we can initially identify the region of target objects if the collimation of artificial light is recognized. Based on this concept, we propose an optical feature extraction, calculation, and decision method to identify the collimated region of artificial light as a candidate object region. Then, the second phase employs a level set method to segment the objects of interest within the candidate region. This two-phase structure largely removes background noise and highlights the outline of underwater objects. We test the performance of the method with diverse underwater datasets, demonstrating that it outperforms previous methods.

  11. Feature-driven model-based segmentation

    Science.gov (United States)

    Qazi, Arish A.; Kim, John; Jaffray, David A.; Pekar, Vladimir

    2011-03-01

    The accurate delineation of anatomical structures is required in many medical image analysis applications. One example is radiation therapy planning (RTP), where traditional manual delineation is tedious, labor intensive, and can require hours of a clinician's valuable time. The majority of automated segmentation methods in RTP belong to either model-based or atlas-based approaches. One substantial limitation of model-based segmentation is that its accuracy may be restricted by uncertainties in image content, specifically when segmenting low-contrast anatomical structures, e.g. soft tissue organs in computed tomography images. In this paper, we introduce a non-parametric feature enhancement filter which replaces raw intensity image data with a high-level probabilistic map which guides the deformable model to reliably segment low-contrast regions. The method is evaluated by segmenting the submandibular and parotid glands in the head and neck region and comparing the results to manual segmentations in terms of volume overlap. Quantitative results show that we are in overall good agreement with expert segmentations, achieving volume overlap of up to 80%. Qualitatively, we demonstrate that we are able to segment low-contrast regions, which otherwise are difficult to delineate with deformable models relying on distinct object boundaries in the original image data.

  12. Inversion improves the recognition of facial expression in thatcherized images.

    Science.gov (United States)

    Psalta, Lilia; Andrews, Timothy J

    2014-01-01

    The Thatcher illusion provides a compelling example of the face inversion effect. However, the marked effect of inversion in the Thatcher illusion contrasts to other studies that report only a small effect of inversion on the recognition of facial expressions. To address this discrepancy, we compared the effects of inversion and thatcherization on the recognition of facial expressions. We found that inversion of normal faces caused only a small reduction in the recognition of facial expressions. In contrast, local inversion of facial features in upright thatcherized faces resulted in a much larger reduction in the recognition of facial expressions. Paradoxically, inversion of thatcherized faces caused a relative increase in the recognition of facial expressions. Together, these results suggest that different processes explain the effects of inversion on the recognition of facial expressions and on the perception of the Thatcher illusion. The grotesque perception of thatcherized images is based on a more orientation-sensitive representation of the face. In contrast, the recognition of facial expression is dependent on a more orientation-insensitive representation. A similar pattern of results was evident when only the mouth or eye region was visible. These findings demonstrate that a key component of the Thatcher illusion is to be found in orientation-specific encoding of the features of the face.

  13. Facial paralysis

    Science.gov (United States)

    ... develops slowly. Symptoms can include headaches, seizures, or hearing loss. In newborns, facial paralysis may be caused by ... may refer you to a physical, speech, or occupational therapist. If facial paralysis from Bell palsy lasts ...

  14. Persistent facial pain conditions

    DEFF Research Database (Denmark)

    Forssell, Heli; Alstergren, Per; Bakke, Merete

    2016-01-01

    Persistent facial pains, especially temporomandibular disorders (TMD), are common conditions. As dentists are responsible for the treatment of most of these disorders, up-to-date knowledge on the latest advances in the field is essential for successful diagnosis and management. The review covers ... TMD, and different neuropathic or putative neuropathic facial pains such as persistent idiopathic facial pain and atypical odontalgia, trigeminal neuralgia and painful posttraumatic trigeminal neuropathy. The article presents an overview of TMD pain as a biopsychosocial condition, its prevalence ..., clinical features, consequences, central and peripheral mechanisms, diagnostic criteria (DC/TMD), and principles of management. For each of the neuropathic facial pain entities, the definitions, prevalence, clinical features, and diagnostics are described. The current understanding of the pathophysiology

  15. Structural features of facial skull and maxillary sinuses as predictors of complications in endodontic treatment of teeth of upper jaw

    Directory of Open Access Journals (Sweden)

    Lepilin A.M.

    2012-09-01

    Full Text Available Endodontic treatment is considered to be one of the most common procedures in modern dentistry, which also leads to an increase in complications. Objective: to establish the anthropometric characteristics of the structure of the facial skull and maxillary sinus that determine the development of complications of endodontic treatment of the upper jaw. Materials and methods. Measurements were performed on 105 three-dimensional CT scans of the head; 75 were in the control group, and 30 cases had foreign bodies in the maxillary sinuses on CT. Results. We established a correlation between the obtained anthropometric parameters, such as the height and width of the face, and the type of maxillary sinus pneumatization; we also studied the critical thickness of the bone plate over the tooth root, which is the main predisposing factor in the development of complications. Conclusion. It is possible to form risk groups according to the structural type of the facial skeleton, for additional studies before further endodontic interventions, which may reduce their frequency.

  16. Facial melanoses: Indian perspective

    Directory of Open Access Journals (Sweden)

    Neena Khanna

    2011-01-01

    Full Text Available Facial melanoses (FM) are a common presentation in Indian patients, causing cosmetic disfigurement with considerable psychological impact. Some of the well defined causes of FM include melasma, Riehl's melanosis, Lichen planus pigmentosus, erythema dyschromicum perstans (EDP), erythrosis, and poikiloderma of Civatte. But there is considerable overlap in features amongst the clinical entities. Etiology in most of the causes is unknown, but some factors such as UV radiation in melasma, exposure to chemicals in EDP, exposure to allergens in Riehl's melanosis are implicated. Diagnosis is generally based on clinical features. The treatment of FM includes removal of aggravating factors, vigorous photoprotection, and some form of active pigment reduction either with topical agents or physical modes of treatment. Topical agents include hydroquinone (HQ), which is the most commonly used agent, often in combination with retinoic acid, corticosteroids, azelaic acid, kojic acid, and glycolic acid. Chemical peels are important modalities of physical therapy; other forms include lasers and dermabrasion.

  17. Evaluating visibility of age spot and freckle based on simulated spectral reflectance distribution and facial color image

    Science.gov (United States)

    Hirose, Misa; Toyota, Saori; Tsumura, Norimichi

    2018-02-01

    In this research, we evaluate the visibility of age spots and freckles while changing the blood volume, based on simulated spectral reflectance distributions and actual facial color images, and compare these results. First, we generate three types of spatial distributions of age spots and freckles in patch-like images based on the simulated spectral reflectance. The spectral reflectance is simulated using a Monte Carlo simulation of light transport in multi-layered tissue. Next, we reconstruct the facial color image while changing the blood volume. We acquire the concentration distributions of the melanin, hemoglobin and shading components by applying independent component analysis to a facial color image. We reproduce images using the obtained melanin and shading concentrations and the changed hemoglobin concentration. Finally, we evaluate the visibility of pigmentations using the simulated spectral reflectance distribution and the facial color images. In the result for the simulated spectral reflectance distribution, we found that the visibility became lower as the blood volume increased. However, from the result for the facial color images we can see that a specific blood volume reduces the visibility of the actual pigmentations.
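
    A minimal sketch of the pigment-separation step under the common assumption that melanin- and hemoglobin-like components can be obtained by independent component analysis of the log-RGB (optical density) pixels; this uses scikit-learn's FastICA and is not the authors' exact code, and which extracted source corresponds to which pigment is not fixed by ICA:

        import numpy as np
        from sklearn.decomposition import FastICA

        def skin_components(rgb_image):
            # rgb_image: (H, W, 3) uint8 facial image.
            h, w, _ = rgb_image.shape
            # Optical density (negative log reflectance), flattened to pixels x 3.
            density = -np.log(rgb_image.reshape(-1, 3).astype(float) / 255.0 + 1e-6)
            ica = FastICA(n_components=2, random_state=0)
            sources = ica.fit_transform(density)   # candidate pigment concentration maps
            # Note: source order and sign are ambiguous and must be resolved separately.
            comp_a = sources[:, 0].reshape(h, w)
            comp_b = sources[:, 1].reshape(h, w)
            return comp_a, comp_b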

  18. A PCA-Based method for determining craniofacial relationship and sexual dimorphism of facial shapes.

    Science.gov (United States)

    Shui, Wuyang; Zhou, Mingquan; Maddock, Steve; He, Taiping; Wang, Xingce; Deng, Qingqiong

    2017-11-01

    Previous studies have used principal component analysis (PCA) to investigate the craniofacial relationship, as well as sex determination using facial factors. However, few studies have investigated the extent to which the choice of principal components (PCs) affects the analysis of craniofacial relationship and sexual dimorphism. In this paper, we propose a PCA-based method for visual and quantitative analysis, using 140 samples of 3D heads (70 male and 70 female), produced from computed tomography (CT) images. There are two parts to the method. First, skull and facial landmarks are manually marked to guide the model's registration so that dense corresponding vertices occupy the same relative position in every sample. Statistical shape spaces of the skull and face in dense corresponding vertices are constructed using PCA. Variations in these vertices, captured in every principal component (PC), are visualized to observe shape variability. The correlations of skull- and face-based PC scores are analysed, and linear regression is used to fit the craniofacial relationship. We compute the PC coefficients of a face based on this craniofacial relationship and the PC scores of a skull, and apply the coefficients to estimate a 3D face for the skull. To evaluate the accuracy of the computed craniofacial relationship, the mean and standard deviation of every vertex between the two models are computed, where these models are reconstructed using real PC scores and coefficients. Second, each PC in facial space is analysed for sex determination, for which support vector machines (SVMs) are used. We examined the correlation between PCs and sex, and explored the extent to which the choice of PCs affects the expression of sexual dimorphism. Our results suggest that skull- and face-based PCs can be used to describe the craniofacial relationship and that the accuracy of the method can be improved by using an increased number of face-based PCs. The results show that the accuracy of
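
    A minimal sketch of the craniofacial mapping described above (linear regression from skull PC scores to face PC scores, then reconstruction of a face shape from its PCA model), assuming scikit-learn and row-aligned arrays of flattened vertex coordinates; the number of PCs is an assumed parameter, not the study's choice:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression

        def fit_craniofacial_model(skull_shapes, face_shapes, n_pcs=20):
            # skull_shapes, face_shapes: (n_samples, 3 * n_vertices), corresponding rows.
            skull_pca = PCA(n_components=n_pcs).fit(skull_shapes)
            face_pca = PCA(n_components=n_pcs).fit(face_shapes)
            reg = LinearRegression().fit(skull_pca.transform(skull_shapes),
                                         face_pca.transform(face_shapes))
            return skull_pca, face_pca, reg

        def estimate_face(skull_shape, skull_pca, face_pca, reg):
            # Predict face PC scores from skull PC scores, then rebuild the 3D face.
            face_scores = reg.predict(skull_pca.transform(skull_shape[None, :]))
            return face_pca.inverse_transform(face_scores)[0]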

  19. Perceived age of facial features is a significant diagnosis criterion for age-related carotid atherosclerosis in Japanese subjects: J-SHIPP study.

    Science.gov (United States)

    Kido, Miwako; Kohara, Katsuhiko; Miyawaki, Saori; Tabara, Yasuharu; Igase, Michiya; Miki, Tetsuro

    2012-10-01

    Vascular aging is known to be a major determinant of life expectancy. Recently, perceived age was reported to be a better predictor for mortality than chronological age. Based on these findings, we investigated whether or not perceived age was related to atherosclerosis in a general population. The participants were 273 individuals aged ≥ 50 years who participated in the Skin-doc in Anti-Aging Doc program. Facial photos were taken under a shadowless lamp from three directions (antero-posterior, and 60° right and left oblique projection) using a high-resolution digital camera. Perceived age was assessed either by 19 professional nurses in the geriatric ward or using facial identification program software. Carotid intima-media thickness (IMT), radial augmentation index (AI) and brachial-ankle pulse wave velocity (baPWV) were measured as indices for atherosclerosis. The perceived age difference (expressed as the difference between perceived age and chronological age), when estimated either by nurses or software, was significantly and negatively associated with chronological age. Subjects who were evaluated by nurses to be younger than their chronological age had significantly lower carotid IMT after adjustment for chronological age. Conversely, carotid IMT was an independent and negative determinant of looking young, as perceived by nurses. Similar observations were also made between perceived age using facial identification software and carotid IMT. Radial AI and baPWV were not associated with perceived age. These findings show that carotid atherosclerosis is related to perceived age. This association might underlie previous findings showing that perceived age predicts life expectancy. © 2012 Japan Geriatrics Society.

  20. Multispectral image fusion based on fractal features

    Science.gov (United States)

    Tian, Jie; Chen, Jie; Zhang, Chunhua

    2004-01-01

    Imagery sensors have become an indispensable part of detection and recognition systems. They are widely used in the fields of surveillance, navigation, control and guidance, etc. However, different imagery sensors depend on diverse imaging mechanisms and work within diverse ranges of the spectrum. They also perform diverse functions and have diverse circumstance requirements. So it is impractical to accomplish the task of detection or recognition with a single imagery sensor under the conditions of different circumstances, different backgrounds and different targets. Fortunately, the multi-sensor image fusion technique has emerged as an important route to solve this problem. So image fusion has been one of the main technical routes used to detect and recognize objects from images. However, loss of information is unavoidable during the fusion process, so how to preserve the useful information to the utmost is always a very important issue in image fusion. That is to say, before designing fusion schemes it should be taken into account how to avoid the loss of useful information or how to preserve the features helpful to detection. In consideration of these issues and the fact that most detection problems are actually about distinguishing man-made objects from natural background, a fractal-based multi-spectral fusion algorithm is proposed in this paper, aiming at the recognition of battlefield targets in complicated backgrounds. According to this algorithm, source images are first orthogonally decomposed according to wavelet transform theory, and then fractal-based detection is applied to each decomposed image. At this step, natural background and man-made targets are distinguished by use of fractal models that can well imitate natural objects. Special fusion operators are employed during the fusion of areas that contain man-made targets so that useful information can be preserved and features of targets can be emphasized. The final fused image is reconstructed from the

  1. Age estimation by facial analysis based on applications available for smartphones.

    Science.gov (United States)

    Rezende Machado, A L; Dezem, T U; Bruni, A T; Alves da Silva, R H

    2017-12-01

    Forensic Dentistry has an important role in human identification cases and, among the analyses that can be performed, age estimation has an important value in establishing an anthropological profile. Modern technology offers new mechanisms for age estimation, such as software apps based on special algorithms, in which facial assessment is not affected by personal knowledge or by cultural and personal experience. This research evaluated the use of two different apps, "How Old Do I Look? - Age Camera" and "How Old Am I? - Age Camera, Do You Look Like in Selfie Face Pic?", for age estimation analysis in a sample of 100 people (50 females and 50 males). Univariate and multivariate statistical methods were used to evaluate the data. Good reliability was seen when the apps were used for the male volunteers. However, for females, no equivalence was found between the real age and the estimated age. These applications presented satisfactory results as an auxiliary method for male images.

  2. Feature based sliding window technique for face recognition

    Science.gov (United States)

    Javed, Muhammad Younus; Mohsin, Syed Maajid; Anjum, Muhammad Almas

    2010-02-01

    Human beings are commonly identified by biometric schemes, which are concerned with identifying individuals by their unique physical characteristics. Passwords and personal identification numbers have been used to identify people for years. The disadvantages of these schemes are that they may be used by someone else or easily forgotten. Keeping these problems in view, biometric approaches such as face recognition, fingerprint, iris/retina and voice recognition have been developed, which provide a far better solution when identifying individuals. A number of methods have been developed for face recognition. This paper illustrates the employment of Gabor filters for extracting facial features by constructing a sliding window frame. Classification is done by assigning to the unknown image the class label of the database image whose features are most similar. The proposed system gives a recognition rate of 96%, which is better than many of the similar techniques being used for face recognition.
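
    A minimal sketch of Gabor features pooled over a sliding window frame with nearest-neighbour matching against a gallery, assuming OpenCV and NumPy; the window size, step, filter parameters and Euclidean matching rule are illustrative assumptions, not the paper's exact settings:

        import numpy as np
        import cv2

        def gabor_window_features(gray_face, win=16, step=16):
            # Bank of Gabor kernels at four orientations.
            kernels = [cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5)
                       for theta in np.arange(0, np.pi, np.pi / 4)]
            responses = [np.abs(cv2.filter2D(gray_face.astype(np.float32),
                                             cv2.CV_32F, k)) for k in kernels]
            feats = []
            h, w = gray_face.shape
            for y in range(0, h - win + 1, step):      # slide the window frame
                for x in range(0, w - win + 1, step):
                    feats.extend(r[y:y + win, x:x + win].mean() for r in responses)
            return np.array(feats)

        def identify(probe_feats, gallery_feats, gallery_labels):
            # Assign the label of the gallery image with the most similar features.
            dists = np.linalg.norm(gallery_feats - probe_feats, axis=1)
            return gallery_labels[int(np.argmin(dists))]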

  3. Nablus mask-like facial syndrome

    DEFF Research Database (Denmark)

    Allanson, Judith; Smith, Amanda; Hare, Heather

    2012-01-01

    Nablus mask-like facial syndrome (NMLFS) has many distinctive phenotypic features, particularly tight glistening skin with reduced facial expression, blepharophimosis, telecanthus, bulky nasal tip, abnormal external ear architecture, upswept frontal hairline, and sparse eyebrows. Over the last fe...

  4. Statistical feature extraction based iris recognition system

    Indian Academy of Sciences (India)

    Atul Bansal

    features obtained from one's face [1], finger [2], voice [3] and/or iris [4, 5]. Iris recognition systems are widely used in high security areas. A number of researchers have proposed various algorithms for feature extraction. A little work [6, 7] however, has been reported using statistical techniques directly on pixel values in order to ...

  5. CONTENT BASED IMAGE RETRIEVAL USING COLOR, SHAPE AND TEXTURE FEATURES

    OpenAIRE

    K. NARESH BABU; SAKE POTHALAIAH; Dr. K ASHOK BABU

    2010-01-01

    Content-based image retrieval (CBIR) is an important research area for manipulating large image databases and archives. Extraction of invariant features is the basis of CBIR. This paper focuses on the problem of texture, color and shape feature extraction. Using just one type of feature information for comparing images may cause more inaccuracy than using several features. Therefore many image retrieval systems use multiple types of feature information, such as color, shape and other features. W...

  6. The review and results of different methods for facial recognition

    Science.gov (United States)

    Le, Yifan

    2017-09-01

    In recent years, facial recognition has drawn much attention due to its wide potential applications. As a unique technology in biometric identification, facial recognition represents a significant improvement since it can be operated without the cooperation of the people under detection. Hence, facial recognition can be applied to defense systems, medical detection, human behavior understanding, etc. Several theories and methods have been established to make progress in facial recognition: (1) a novel two-stage facial landmark localization method is proposed which achieves more accurate facial localization on a specific database; (2) a statistical face frontalization method is proposed which outperforms state-of-the-art methods for face landmark localization; (3) a general facial landmark detection algorithm is proposed to handle images with severe occlusion and images with large head poses; (4) three methods are proposed for face alignment, including a shape-augmented regression method, a pose-indexed multi-view method, and a learning-based method via regressing local binary features. The aim of this paper is to analyze previous work on different aspects of facial recognition, focusing on concrete methods and performance under various databases. In addition, some improvement measures and suggestions for potential applications will be put forward.

  7. Facial symmetry in unilateral cleft lip and palate following alar base augmentation with bone graft: a three-dimensional assessment.

    Science.gov (United States)

    Devlin, Mark F; Ray, Arup; Raine, Peter; Bowman, Adrian; Ayoub, Ashraf F

    2007-07-01

    The aim of this study was to assess the outcome of bone grafting using a corticocancellous block of iliac crest to reconstruct the support for the deformed, volume-deficient alar base in treated patients with unilateral cleft lip and palate (UCLP). The main outcome measured was nasal symmetry. This was a prospective study using a noninvasive three-dimensional stereophotogrammetry system (C3D) to assess the position of the alar base. Images were captured immediately preoperatively and at 6 months following the augmentation of the alar base with a block of bone graft. These images were used to calculate facial symmetry scores, which were compared using a two-sample Student's t test to assess the efficacy of the surgical method in reducing facial/nasal asymmetry. This investigation was conducted on 18 patients, with one patient failing to attend follow-up. The results for 17 patients are presented. Facial symmetry scores improved significantly following the insertion of the bone graft at the deficient alar base (p=0.005). 3D stereophotogrammetry is a noninvasive, accurate, and archivable method of assessing facial form and surgical change. Nasal symmetry can be quantified and measured reliably with this tool. Bone grafting to the alar base region of treated UCLP patients with volume deficiency produces improvement in nasal symmetry.

  8. Coupled Gaussian Process Regression for pose-invariant facial expression recognition

    NARCIS (Netherlands)

    Rudovic, Ognjen; Patras, Ioannis; Pantic, Maja; Daniilidis, Kostas; Maragos, Petros; Paragios, Nikos

    2010-01-01

    We present a novel framework for the recognition of facial expressions at arbitrary poses that is based on 2D geometric features. We address the problem by first mapping the 2D locations of landmark points of facial expressions in non-frontal poses to the corresponding locations in the frontal pose.

  9. Tensor Rank Preserving Discriminant Analysis for Facial Recognition.

    Science.gov (United States)

    Tao, Dapeng; Guo, Yanan; Li, Yaotang; Gao, Xinbo

    2017-10-12

    Facial recognition, one of the basic topics in computer vision and pattern recognition, has received substantial attention in recent years. However, in traditional facial recognition algorithms, the facial images are reshaped into a long vector, thereby losing part of the original spatial constraints of each pixel. In this paper, a new tensor-based feature extraction algorithm termed tensor rank preserving discriminant analysis (TRPDA) for facial image recognition is proposed; the proposed method involves two stages: in the first stage, the low-dimensional tensor subspace of the original input tensor samples is obtained; in the second stage, discriminative locality alignment is utilized to obtain the ultimate vector feature representation for subsequent facial recognition. On the one hand, the proposed TRPDA algorithm fully utilizes the natural structure of the input samples, and it applies an optimization criterion that can directly handle the tensor spectral analysis problem, thereby decreasing the computation cost compared with traditional tensor-based feature selection algorithms. On the other hand, the proposed TRPDA algorithm extracts features by finding a tensor subspace that preserves most of the rank order information of the intra-class input samples. Experiments on three facial databases are performed here to determine the effectiveness of the proposed TRPDA algorithm.

  10. Variable developmental delays and characteristic facial features-A novel 7p22.3p22.2 microdeletion syndrome?

    Science.gov (United States)

    Yu, Andrea C; Zambrano, Regina M; Cristian, Ingrid; Price, Sue; Bernhard, Birgitta; Zucker, Marc; Venkateswaran, Sunita; McGowan-Jordan, Jean; Armour, Christine M

    2017-06-01

    Isolated 7p22.3p22.2 deletions are rarely described, with only two reports in the literature. Most other reported cases either involve a much larger region of the 7p arm or have an additional copy number variation. Here, we report five patients with overlapping microdeletions at 7p22.3p22.2. The patients presented with variable developmental delays, exhibiting relative weaknesses in expressive language skills and relative strengths in gross and fine motor skills. The most consistent facial features seen in these patients included a broad nasal root, a prominent forehead, a prominent glabella and arched eyebrows. Additional variable features amongst the patients included microcephaly, metopic ridging or craniosynostosis, cleft palate, cardiac defects, and mild hypotonia. Although the patients' deletions varied in size, there was a 0.47 Mb region of overlap which contained 7 OMIM genes: EIP3B, CHST12, LFNG, BRAT1, TTYH3, AMZ1, and GNA12. We propose that monosomy of this region represents a novel microdeletion syndrome. We recommend that individuals with 7p22.3p22.2 deletions should receive a developmental assessment and a thorough cardiac exam, with consideration of an echocardiogram, as part of their initial evaluation. © 2017 Wiley Periodicals, Inc.

  11. Realistic Facial Expression of Virtual Human Based on Color, Sweat, and Tears Effects

    Directory of Open Access Journals (Sweden)

    Mohammed Hazim Alkawaz

    2014-01-01

    Full Text Available Generating extreme appearances such as sweating when scared, tears when happy (crying), and blushing (in anger and happiness) is the key issue in achieving high quality facial animation. The effects of sweat, tears, and colors are integrated into a single animation model to create realistic facial expressions of a 3D avatar. The physical properties of muscles, emotions, or the fluid properties with sweating and tears initiators are incorporated. The action units (AUs) of the facial action coding system are merged with autonomous AUs to create expressions including sadness, anger with blushing, happiness with blushing, and fear. Fluid effects such as sweat and tears are simulated using the particle system and smoothed-particle hydrodynamics (SPH) methods, which are combined with the facial animation technique to produce complex facial expressions. The effects of oxygenation on the facial skin color appearance are measured using the pulse oximeter system and the 3D skin analyzer. The result shows that virtual human facial expression is enhanced by mimicking actual sweating and tears simulations for all extreme expressions. The proposed method contributes to the development of the facial animation and game industries as well as computer graphics.

  12. Down syndrome detection from facial photographs using machine learning techniques

    Science.gov (United States)

    Zhao, Qian; Rosenbaum, Kenneth; Sze, Raymond; Zand, Dina; Summar, Marshall; Linguraru, Marius George

    2013-02-01

    Down syndrome is the most commonly occurring chromosomal condition; one in every 691 babies in the United States is born with it. Patients with Down syndrome have an increased risk of heart defects, respiratory and hearing problems, and the early detection of the syndrome is fundamental for managing the disease. Clinically, facial appearance is an important indicator in diagnosing Down syndrome, and it paves the way for computer-aided diagnosis based on facial image analysis. In this study, we propose a novel method to detect Down syndrome using photography for computer-assisted, image-based facial dysmorphology. Geometric features based on facial anatomical landmarks, and local texture features based on the Contourlet transform and local binary patterns, are investigated to represent facial characteristics. Then a support vector machine classifier is used to discriminate normal and abnormal cases; accuracy, precision and recall are used to evaluate the method. The comparison among the geometric, local texture and combined features was performed using leave-one-out validation. Our method achieved 97.92% accuracy with high precision and recall for the combined features; the detection results were higher than using only geometric or texture features. The promising results indicate that our method has the potential for automated assessment of Down syndrome from simple, noninvasive imaging data.
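
    A minimal sketch in the spirit of the pipeline above, combining geometric landmark features with LBP texture and an SVM; the pairwise-distance geometric features, LBP parameters and RBF kernel are illustrative assumptions, and the Contourlet features are omitted here:

        import numpy as np
        from skimage.feature import local_binary_pattern
        from sklearn.svm import SVC

        def geometric_features(landmarks):
            # landmarks: (n_points, 2) facial anatomical landmark coordinates.
            d = np.linalg.norm(landmarks[:, None, :] - landmarks[None, :, :], axis=-1)
            iu = np.triu_indices(len(landmarks), k=1)
            return d[iu] / (d[iu].max() + 1e-8)        # scale-normalised pairwise distances

        def texture_features(gray_face):
            # gray_face: 2-D uint8 face image.
            lbp = local_binary_pattern(gray_face, P=8, R=1, method="uniform")
            hist, _ = np.histogram(lbp, bins=np.arange(0, 11), density=True)
            return hist

        def train_detector(faces, landmark_sets, labels):
            X = np.vstack([np.hstack([geometric_features(lm), texture_features(f)])
                           for f, lm in zip(faces, landmark_sets)])
            return SVC(kernel="rbf", probability=True).fit(X, labels)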

  13. Palmprint Based Verification System Using SURF Features

    Science.gov (United States)

    Srinivas, Badrinath G.; Gupta, Phalguni

    This paper describes the design and development of a prototype of a robust biometric system for verification. The system uses features extracted from the human hand with the Speeded Up Robust Features (SURF) operator. The hand image is acquired using a low-cost scanner. The extracted palmprint region is robust to hand translation and rotation on the scanner. The system is tested on the IITK database of 200 images and the PolyU database of 7751 images. The system is found to be robust with respect to translation and rotation. It has an FAR of 0.02%, an FRR of 0.01% and an accuracy of 99.98%, and can be a suitable system for civilian applications and high-security environments.
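    A minimal sketch of SURF-based matching between two palmprint images with OpenCV is shown below, assuming the opencv-contrib build (SURF lives in `cv2.xfeatures2d` and may be patent-restricted in some builds). The ratio test and the acceptance threshold are illustrative assumptions, not the parameters of the described system.

    ```python
    # Hedged sketch of SURF-based palmprint matching with OpenCV (opencv-contrib).
    # This illustrates the general idea, not the authors' system; the threshold and
    # file names are hypothetical.
    import cv2

    def surf_match_score(img_path_a, img_path_b, hessian_threshold=400):
        surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian_threshold)
        img_a = cv2.imread(img_path_a, cv2.IMREAD_GRAYSCALE)
        img_b = cv2.imread(img_path_b, cv2.IMREAD_GRAYSCALE)
        kp_a, des_a = surf.detectAndCompute(img_a, None)
        kp_b, des_b = surf.detectAndCompute(img_b, None)
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        matches = matcher.knnMatch(des_a, des_b, k=2)
        # Lowe's ratio test to keep only distinctive matches.
        good = [m for m, n in matches if m.distance < 0.7 * n.distance]
        return len(good)

    # Verification decision: accept if the number of good matches exceeds a
    # threshold tuned on a development set (hypothetical value here).
    def verify(img_a, img_b, threshold=25):
        return surf_match_score(img_a, img_b) >= threshold
    ```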

  14. Analytical Features: A Knowledge-Based Approach to Audio Feature Generation

    Directory of Open Access Journals (Sweden)

    Pachet François

    2009-01-01

    Full Text Available We present a feature generation system designed to create audio features for supervised classification tasks. The main contribution to feature generation studies is the notion of analytical features (AFs, a construct designed to support the representation of knowledge about audio signal processing. We describe the most important aspects of AFs, in particular their dimensional type system, on which are based pattern-based random generators, heuristics, and rewriting rules. We show how AFs generalize or improve previous approaches used in feature generation. We report on several projects using AFs for difficult audio classification tasks, demonstrating their advantage over standard audio features. More generally, we propose analytical features as a paradigm to bring raw signals into the world of symbolic computation.

  15. Statistical feature extraction based iris recognition system

    Indian Academy of Sciences (India)

    Performance of the proposed iris recognition system (IRS) has been measured by recording false acceptance rate (FAR) and false rejection rate (FRR) at different thresholds in the distance metric. System performance has been evaluated by computing statistical features along two directions, namely, radial direction of ...

  16. Surface characterization based upon significant topographic features

    Energy Technology Data Exchange (ETDEWEB)

    Blanc, J; Grime, D; Blateyron, F, E-mail: fblateyron@digitalsurf.fr [Digital Surf, 16 rue Lavoisier, F-25000 Besancon (France)

    2011-08-19

    Watershed segmentation and Wolf pruning, as defined in ISO 25178-2, allow the detection of significant features on surfaces and their characterization in terms of dimension, area, volume, curvature, shape or morphology. These new tools provide a robust way to specify functional surfaces.

  17. Surface characterization based upon significant topographic features

    International Nuclear Information System (INIS)

    Blanc, J; Grime, D; Blateyron, F

    2011-01-01

    Watershed segmentation and Wolf pruning, as defined in ISO 25178-2, allow the detection of significant features on surfaces and their characterization in terms of dimension, area, volume, curvature, shape or morphology. These new tools provide a robust way to specify functional surfaces.

  18. Facial trauma

    Science.gov (United States)

    Maxillofacial injury; Midface trauma; Facial injury; LeFort injuries ... Hockberger RS, Walls RM, eds. Rosen's Emergency Medicine: Concepts and Clinical Practice . 8th ed. Philadelphia, PA: Elsevier ...

  19. Feature Selection with Neighborhood Entropy-Based Cooperative Game Theory

    Directory of Open Access Journals (Sweden)

    Kai Zeng

    2014-01-01

    Full Text Available Feature selection plays an important role in machine learning and data mining. In recent years, various feature measurements have been proposed to select significant features from high-dimensional datasets. However, most traditional feature selection methods ignore features which have strong classification ability as a group but are weak as individuals. To deal with this problem, we redefine the redundancy, interdependence, and independence of features by using neighborhood entropy. Then the neighborhood entropy-based feature contribution is proposed under the framework of cooperative games. The evaluative criteria of features can be formalized as the product of contribution and other classical feature measures. Finally, the proposed method is tested on several UCI datasets. The results show that the neighborhood entropy-based cooperative game theory model (NECGT) yields better performance than classical ones.

  20. Modified wind chill temperatures determined by a whole body thermoregulation model and human-based facial convective coefficients

    Science.gov (United States)

    Shabat, Yael Ben; Shitzer, Avraham; Fiala, Dusan

    2014-08-01

    Wind chill equivalent temperatures (WCETs) were estimated by a modified version of Fiala's whole-body thermoregulation model of a clothed person. Facial convective heat exchange coefficients, applied in the computations concurrently with environmental radiation effects, were taken from a recently derived human-based correlation. Apart from these, the analysis followed the methodology used in the derivation of the currently used wind chill charts. WCET values were summarized by a fitted equation (not reproduced here). Results indicate consistently lower estimated facial skin temperatures, and consequently higher WCETs, than those listed in the literature and used by the North American weather services. Calculated dynamic facial skin temperatures were additionally applied in the estimation of probabilities for the occurrence of risks of frostbite. Predicted weather combinations for "Practically no risk of frostbite for most people," defined as less than 5 % risk at wind speeds above 40 km h-1, were shown to occur at air temperatures above -10 °C, compared to the currently published air temperature of -15 °C. At air temperatures below -35 °C, the presently calculated weather combination of 40 km h-1/-35 °C, at which the risk of incurring frostbite in less than 2 min begins, is less conservative than the published combination of 60 km h-1/-40 °C. The present results introduce a fundamentally improved scientific basis for estimating facial skin temperatures, wind chill temperatures and frostbite risk probabilities over those currently practiced.

  1. Lack of Support for the Association between Facial Shape and Aggression: A Reappraisal Based on a Worldwide Population Genetics Perspective

    Science.gov (United States)

    Gómez-Valdés, Jorge; Hünemeier, Tábita; Quinto-Sánchez, Mirsha; Paschetta, Carolina; de Azevedo, Soledad; González, Marina F.; Martínez-Abadías, Neus; Esparza, Mireia; Pucciarelli, Héctor M.; Salzano, Francisco M.; Bau, Claiton H. D.; Bortolini, Maria Cátira; González-José, Rolando

    2013-01-01

    Antisocial and criminal behaviors are multifactorial traits whose interpretation relies on multiple disciplines. Since these interpretations may have social, moral and legal implications, a constant review of the evidence is necessary before any scientific claim is considered as truth. A recent study proposed that men with wider faces relative to facial height (fWHR) are more likely to develop unethical behaviour mediated by a psychological sense of power. This research was based on reports suggesting that sexual dimorphism and selection would be responsible for a correlation between fWHR and aggression. Here we show that 4,960 individuals from 94 modern human populations belonging to a vast array of genetic and cultural contexts do not display significant amounts of fWHR sexual dimorphism. Further analyses using populations with associated ethnographical records, as well as samples of male prisoners of the Mexico City Federal Penitentiary condemned for crimes of variable levels of inter-personal aggression (homicide, robbery, and minor faults), did not provide significant evidence that populations/individuals with higher levels of bellicosity, aggressive behaviour, or power-mediated behaviour display greater fWHR. Finally, a regression analysis of fWHR on individual fitness showed no significant correlation between this facial trait and reproductive success. Overall, our results suggest that facial attributes are poor predictors of aggressive behaviour, or at least, that sexual selection was too weak to leave a signal on patterns of between- and within-sex and population facial variation. PMID:23326328

  2. Cosmetics alter biologically-based factors of beauty: evidence from facial contrast.

    Science.gov (United States)

    Jones, Alex L; Russell, Richard; Ward, Robert

    2015-02-28

    The use of cosmetics by women seems to consistently increase their attractiveness. What factors of attractiveness do cosmetics alter to achieve this? Facial contrast is a known cue to sexual dimorphism and youth, and cosmetics exaggerate sexual dimorphisms in facial contrast. Here, we demonstrate that the luminance contrast pattern of the eyes and eyebrows is consistently sexually dimorphic across a large sample of faces, with females possessing lower brow contrasts than males, and greater eye contrast than males. Red-green and yellow-blue color contrasts were not found to differ consistently between the sexes. We also show that women use cosmetics not only to exaggerate sexual dimorphisms of brow and eye contrasts, but also to increase contrasts that decline with age. These findings refine the notion of facial contrast, and demonstrate how cosmetics can increase attractiveness by manipulating factors of beauty associated with facial contrast.

  3. Graphical matching rules for cardinality based service feature diagrams

    Directory of Open Access Journals (Sweden)

    Faiza Kanwal

    2017-03-01

    Full Text Available To provide efficient services to end-users, variability and commonality among the features of a product line are a challenge for industrialists and researchers. Feature modeling provides great service in dealing with variability and commonality among the features of a product line. Cardinality based service feature diagrams changed the basic framework of service feature diagrams by adding constraints to them, which makes service specifications more flexible; moreover, apart from varying in selection, third-party services may also have to be customizable. To control variability, cardinality based service feature diagrams provide high-level visual notations. For specifying variability, the use of cardinality based service feature diagrams raises the problem of matching a required feature diagram against the set of provided diagrams.

  4. Cognitive penetrability and emotion recognition in human facial expressions

    Directory of Open Access Journals (Sweden)

    Francesco eMarchi

    2015-06-01

    Full Text Available Do our background beliefs, desires, and mental images influence our perceptual experience of the emotions of others? In this paper, we will address the possibility of cognitive penetration of perceptual experience in the domain of social cognition. In particular, we focus on emotion recognition based on the visual experience of facial expressions. After introducing the current debate on cognitive penetration, we review examples of perceptual adaptation for facial expressions of emotion. This evidence supports the idea that facial expressions are perceptually processed as wholes. That is, the perceptual system integrates lower-level facial features, such as eyebrow orientation, mouth angle etc., into facial compounds. We then present additional experimental evidence showing that in some cases, emotion recognition on the basis of facial expression is sensitive to and modified by the background knowledge of the subject. We argue that such sensitivity is best explained as a difference in the visual experience of the facial expression, not just as a modification of the judgment based on this experience. The difference in experience is characterized as the result of the interference of background knowledge with the perceptual integration process for faces. Thus, according to the best explanation, we have to accept cognitive penetration in some cases of emotion recognition. Finally, we highlight a recent model of social vision in order to propose a mechanism for cognitive penetration used in the face-based recognition of emotion.

  5. Histological image classification using biologically interpretable shape-based features

    International Nuclear Information System (INIS)

    Kothari, Sonal; Phan, John H; Young, Andrew N; Wang, May D

    2013-01-01

    Automatic cancer diagnostic systems based on histological image classification are important for improving therapeutic decisions. Previous studies propose textural and morphological features for such systems. These features capture patterns in histological images that are useful for both cancer grading and subtyping. However, because many of these features lack a clear biological interpretation, pathologists may be reluctant to adopt these features for clinical diagnosis. We examine the utility of biologically interpretable shape-based features for classification of histological renal tumor images. Using Fourier shape descriptors, we extract shape-based features that capture the distribution of stain-enhanced cellular and tissue structures in each image and evaluate these features using a multi-class prediction model. We compare the predictive performance of the shape-based diagnostic model to that of traditional models, i.e., using textural, morphological and topological features. The shape-based model, with an average accuracy of 77%, outperforms or complements traditional models. We identify the most informative shapes for each renal tumor subtype from the top-selected features. Results suggest that these shapes are not only accurate diagnostic features, but also correlate with known biological characteristics of renal tumors. Shape-based analysis of histological renal tumor images accurately classifies disease subtypes and reveals biologically insightful discriminatory features. This method for shape-based analysis can be extended to other histological datasets to aid pathologists in diagnostic and therapeutic decisions
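    The following sketch shows one common way to compute Fourier shape descriptors from a closed contour, in the spirit of the shape-based features described above; the contour extraction and the averaging over structures are assumptions for illustration, not the authors' exact procedure.

    ```python
    # Hedged sketch: Fourier shape descriptors of a closed contour, as one way to
    # realize the shape-based features described above (not the authors' code).
    import numpy as np

    def fourier_shape_descriptor(contour_xy, n_coeffs=16):
        """contour_xy: (N, 2) array of boundary points of one segmented structure."""
        z = contour_xy[:, 0] + 1j * contour_xy[:, 1]   # complex boundary signal
        coeffs = np.fft.fft(z)
        coeffs[0] = 0.0                                # drop DC term -> translation invariance
        mags = np.abs(coeffs)
        if mags[1] > 0:
            mags = mags / mags[1]                      # scale invariance
        # Keep the first few low-frequency magnitudes (start-point/rotation insensitive).
        return mags[1:n_coeffs + 1]

    # An image-level feature could then be, e.g., the mean descriptor over all
    # stain-enhanced structures segmented in the image (assumption, for illustration).
    def image_feature(contours, n_coeffs=16):
        return np.mean([fourier_shape_descriptor(c, n_coeffs) for c in contours], axis=0)
    ```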

  6. Facial Schwannoma

    Directory of Open Access Journals (Sweden)

    Mohammadtaghi Khorsandi Ashtiani

    2005-06-01

    Full Text Available Background: Facial schwannoma is a rare tumor arising from any part of the nerve. Probable symptoms are partial or complete facial weakness, hearing loss, a visible mass in the ear, otorrhea, loss of taste, rarely pain, and sometimes no symptoms at all. Patients should undergo a complete neurotologic history and examination with documentation of facial and auditory function, especially CT scan or MRI. Surgery is the only treatment option, although the decision of when to remove a facial schwannoma in the presence of normal facial function is difficult. Case: A 19-year-old girl with all of the above symptoms on the right side, except loss of taste, was diagnosed with facial schwannoma after full examination and audiometric and radiological tests. She underwent surgery. At follow-up, facial function was mostly restored. Conclusion: The need for careful assessment of patients with Bell's palsy cannot be overemphasized. In spite of negative results, if any suspicion remains, total facial nerve exploration is necessary.

  7. Fashion Evaluation Method for Clothing Recommendation Based on Weak Appearance Feature

    Directory of Open Access Journals (Sweden)

    Yan Zhang

    2017-01-01

    Full Text Available With rapidly rising living standards, people have gradually developed greater shopping enthusiasm and an increasing demand for garments. Nowadays, an increasing number of people pursue fashion. However, facing too many types of garment, consumers need to try them on repeatedly, which is somewhat time- and energy-consuming. Besides, it is difficult for merchants to grasp the real-time demand of consumers. Hence, there is not enough cohesiveness between consumer information and merchants. Thus, a novel fashion evaluation method based on weak appearance features is proposed in this paper. First of all, an image database is established and three aspects of weak appearance features are put forward to characterize the fashion level. Furthermore, the weak appearance features are extracted according to the characters' facial feature localization method. Last but not least, consumers' fashion level can be classified with a support vector machine, and the classification is verified with the hierarchical analysis method. The experimental results show that consumers' fashion level can be accurately described based on the indexes of weak appearance features and the approach has high application value for the clothing recommendation system.

  8. Simultaneous Channel and Feature Selection of Fused EEG Features Based on Sparse Group Lasso

    Directory of Open Access Journals (Sweden)

    Jin-Jia Wang

    2015-01-01

    Full Text Available Feature extraction and classification of EEG signals are core parts of brain computer interfaces (BCIs). Due to the high dimension of the EEG feature vector, an effective feature selection algorithm has become an integral part of research studies. In this paper, we present a new method based on a wrapped Sparse Group Lasso for channel and feature selection of fused EEG signals. The high-dimensional fused features are first obtained; they include the power spectrum, time-domain statistics, AR model, and wavelet coefficient features extracted from the preprocessed EEG signals. The wrapped channel and feature selection method is then applied, which uses a logistic regression model with a Sparse Group Lasso penalty function. The model is fitted on the training data, and parameter estimation is obtained by modified blockwise coordinate descent and coordinate gradient descent methods. The best parameters and feature subset are selected by using 10-fold cross-validation. Finally, the test data is classified using the trained model. Compared with existing channel and feature selection methods, results show that the proposed method is more suitable, more stable, and faster for high-dimensional feature fusion. It can simultaneously achieve channel and feature selection with a lower error rate. The test accuracy on the data used from international BCI Competition IV reached 84.72%.

  9. Features and characteristics of problem based learning

    OpenAIRE

    Eser Ceker; Fezile Ozdamli

    2016-01-01

    Throughout the years, there appears to be an increase in Problem Based Learning applications in education; and Problem Based Learning related research areas. The main aim of this research is to underline the fundamentals (basic elements) of Problem Based Learning, investigate the dimensions of research approached to PBL oriented areas (with a look for the latest technology supported tools of Problem Based Learning). This research showed that the most researched characteristics of PBL are; tea...

  10. Features and Characteristics of Problem Based Learning

    Science.gov (United States)

    Ceker, Eser; Ozdamli, Fezile

    2016-01-01

    Throughout the years, there appears to be an increase in Problem Based Learning applications in education; and Problem Based Learning related research areas. The main aim of this research is to underline the fundamentals (basic elements) of Problem Based Learning, investigate the dimensions of research approached to PBL oriented areas (with a look…

  11. EEG feature selection method based on decision tree.

    Science.gov (United States)

    Duan, Lijuan; Ge, Hui; Ma, Wei; Miao, Jun

    2015-01-01

    This paper aims to solve the automated feature selection problem in brain computer interfaces (BCI). In order to automate the feature selection process, we proposed a novel EEG feature selection method based on a decision tree (DT). During electroencephalogram (EEG) signal processing, a feature extraction method based on principal component analysis (PCA) was used, and the selection process based on a decision tree was performed by searching the feature space and automatically selecting optimal features. Considering that EEG signals are a series of non-linear signals, a generalized linear classifier named support vector machine (SVM) was chosen. In order to test the validity of the proposed method, we applied the EEG feature selection method based on a decision tree to BCI Competition II dataset Ia, and the experiment showed encouraging results.
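    A hedged sketch of this kind of pipeline with scikit-learn is given below: PCA for feature extraction, a decision tree's impurity-based importances for selection, and an SVM for classification. The component counts are arbitrary placeholders and the code is illustrative rather than a reproduction of the paper's method.

    ```python
    # Hedged sketch of the described pipeline: PCA feature extraction, decision-tree
    # based selection of informative components, and an SVM classifier (illustrative
    # only; the original work used BCI Competition II dataset Ia).
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.svm import SVC

    def fit_eeg_pipeline(X_train, y_train, n_components=20, n_selected=8):
        pca = PCA(n_components=n_components).fit(X_train)
        Z = pca.transform(X_train)
        # Use decision-tree impurity-based importances to pick the top components.
        tree = DecisionTreeClassifier(random_state=0).fit(Z, y_train)
        selected = np.argsort(tree.feature_importances_)[::-1][:n_selected]
        clf = SVC(kernel="rbf", gamma="scale").fit(Z[:, selected], y_train)
        return pca, selected, clf

    def predict_eeg(pca, selected, clf, X_test):
        return clf.predict(pca.transform(X_test)[:, selected])
    ```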

  12. Facial Expression Recognition via Non-Negative Least-Squares Sparse Coding

    Directory of Open Access Journals (Sweden)

    Ying Chen

    2014-05-01

    Full Text Available Sparse coding is an active research subject in signal processing, computer vision, and pattern recognition. A novel method of facial expression recognition via non-negative least squares (NNLS) sparse coding is presented in this paper. The NNLS sparse coding is used to form a facial expression classifier. To test the performance of the presented method, local binary patterns (LBP) and raw pixels are extracted for facial feature representation. Facial expression recognition experiments are conducted on the Japanese Female Facial Expression (JAFFE) database. Compared with other widely used methods such as linear support vector machines (SVM), the sparse representation-based classifier (SRC), the nearest subspace classifier (NSC), K-nearest neighbors (KNN) and radial basis function neural networks (RBFNN), the experimental results indicate that the presented NNLS method performs better on facial expression recognition tasks.
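    The core idea can be sketched with `scipy.optimize.nnls`: a test sample is coded as a non-negative combination of all training samples, and the predicted class is the one whose samples reconstruct it with the smallest residual. This is an illustrative reading of an NNLS sparse-coding classifier, not the authors' code.

    ```python
    # Hedged sketch of a non-negative least-squares (NNLS) sparse-coding classifier:
    # a test sample is coded over the training samples with non-negative weights and
    # assigned to the class whose samples reconstruct it best. Illustrative only.
    import numpy as np
    from scipy.optimize import nnls

    def nnls_classify(X_train, y_train, x_test):
        """X_train: (n_samples, n_features); y_train: class labels; x_test: (n_features,)"""
        D = X_train.T                       # dictionary: one column per training sample
        w, _ = nnls(D, x_test)              # non-negative coding coefficients
        best_class, best_residual = None, np.inf
        for c in np.unique(y_train):
            w_c = np.where(y_train == c, w, 0.0)     # keep only class-c coefficients
            residual = np.linalg.norm(x_test - D @ w_c)
            if residual < best_residual:
                best_class, best_residual = c, residual
        return best_class
    ```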

  13. Linear regression-based feature selection for microarray data classification.

    Science.gov (United States)

    Abid Hasan, Md; Hasan, Md Kamrul; Abdul Mottalib, M

    2015-01-01

    Predicting the class of gene expression profiles helps improve the diagnosis and treatment of diseases. Analysing huge gene expression data, otherwise known as microarray data, is complicated due to its high dimensionality. Hence traditional classifiers do not perform well where the number of features far exceeds the number of samples. A good set of features helps classifiers to classify the dataset efficiently. Moreover, a manageable set of features is also desirable for the biologist for further analysis. In this paper, we have proposed a linear regression-based feature selection method for selecting discriminative features. Our main focus is to classify the dataset more accurately using fewer features than other traditional feature selection methods. Our method has been compared with several other methods, and in almost every case the classification accuracy is higher while using fewer features than the other popular feature selection methods.
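    As a rough illustration of regression-driven feature scoring (one plausible reading, not necessarily the paper's exact criterion), the sketch below fits a univariate linear regression of the numeric-coded class label on each gene and ranks genes by the resulting per-feature R^2.

    ```python
    # Hedged sketch: score each gene by how well a simple linear regression on that
    # single feature predicts the (numeric-coded) class label, then keep the top k.
    # This illustrates the general idea only, not the paper's exact criterion.
    import numpy as np

    def regression_scores(X, y):
        """X: (n_samples, n_features) expression matrix; y: numeric class labels."""
        y = y - y.mean()
        sst = (y ** 2).sum()                                  # assumes labels vary
        scores = np.empty(X.shape[1])
        for j in range(X.shape[1]):
            x = X[:, j] - X[:, j].mean()
            denom = (x ** 2).sum()
            beta = (x @ y) / denom if denom > 0 else 0.0      # univariate OLS slope
            residual = y - beta * x
            scores[j] = 1.0 - (residual ** 2).sum() / sst     # per-feature R^2
        return scores

    def select_top_k(X, y, k=50):
        return np.argsort(regression_scores(X, y))[::-1][:k]
    ```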

  14. Feature selection based classifier combination approach for ...

    Indian Academy of Sciences (India)

    based classifier combination is the simplest method in which final decision is that class for which maximum (greater than N/2) participating classifier vote, where N is the number of classifiers. 3.2b Decision templates: The method based on decision template, (Kuncheva et al 2001) firstly creates DT for each class using ...

  15. Feature selection based classifier combination approach for ...

    Indian Academy of Sciences (India)

    3.2c Dempster-Shafer rule based classifier combination: The Dempster–Shafer (DS) method is based on the evidence theory proposed by Glenn Shafer as a way to represent cognitive knowledge. Here the probability is obtained using a belief function instead of the Bayesian distribution. Probability values are assigned to a ...

  16. 3D animation of facial plastic surgery based on computer graphics

    Science.gov (United States)

    Zhang, Zonghua; Zhao, Yan

    2013-12-01

    More and more people, especially women, desire to be more beautiful than ever. To some extent this has become possible because facial plastic surgery was already practiced in the early 20th century and even earlier, when doctors dealt with facial war injuries. However, the post-operative result is not always satisfying, since patients cannot see an animation of the outcome beforehand. In this paper, by combining facial plastic surgery and computer graphics, a novel method for simulating the post-operative appearance is presented to demonstrate the modified face from different viewpoints. The 3D human face data are obtained by using 3D fringe pattern imaging systems and CT imaging systems and then converted into the STL (STereo Lithography) file format. An STL file is made up of small 3D triangular primitives. The triangular mesh can be reconstructed by using a hash function. The front-most triangles in depth must be picked out of the mesh by a ray-casting technique. Mesh deformation is based on this front triangular mesh in the process of simulation, and it deforms the area of interest rather than control points. Experiments on a face model show that the proposed 3D facial plastic surgery animation can effectively demonstrate the simulated post-operative appearance.

  17. Superpixel-Based Feature for Aerial Image Scene Recognition

    Directory of Open Access Journals (Sweden)

    Hongguang Li

    2018-01-01

    Full Text Available Image scene recognition is a core technology for many aerial remote sensing applications. Different landforms are inputted as different scenes in aerial imaging, and all landform information is regarded as valuable for aerial image scene recognition. However, the conventional features of the Bag-of-Words model are designed using local points or other related information and thus are unable to fully describe landform areas. This limitation cannot be ignored when the aim is to ensure accurate aerial scene recognition. A novel superpixel-based feature is proposed in this study to characterize aerial image scenes. Then, based on the proposed feature, a scene recognition method of the Bag-of-Words model for aerial imaging is designed. The proposed superpixel-based feature that utilizes landform information establishes top-task superpixel extraction of landforms to bottom-task expression of feature vectors. This characterization technique comprises the following steps: simple linear iterative clustering based superpixel segmentation, adaptive filter bank construction, Lie group-based feature quantification, and visual saliency model-based feature weighting. Experiments of image scene recognition are carried out using real image data captured by an unmanned aerial vehicle (UAV). The recognition accuracy of the proposed superpixel-based feature is 95.1%, which is higher than those of scene recognition algorithms based on other local features.
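    The superpixel-extraction step can be sketched with scikit-image's SLIC implementation, as below; only a simple mean-color descriptor per superpixel is computed here, and the paper's filter bank, Lie-group quantification and saliency weighting are not reproduced.

    ```python
    # Hedged sketch of the first step only: SLIC superpixel segmentation and a simple
    # per-superpixel descriptor (mean color) for an RGB aerial image.
    import numpy as np
    from skimage.segmentation import slic
    from skimage.io import imread

    def superpixel_descriptors(image_path, n_segments=300, compactness=10.0):
        image = imread(image_path)                               # aerial RGB image
        segments = slic(image, n_segments=n_segments, compactness=compactness)
        descriptors = []
        for label in np.unique(segments):
            mask = segments == label
            descriptors.append(image[mask].mean(axis=0))          # mean color per superpixel
        return segments, np.array(descriptors)                    # one row per superpixel
    ```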

  18. Segmentation-Based PolSAR Image Classification Using Visual Features: RHLBP and Color Features

    Directory of Open Access Journals (Sweden)

    Jian Cheng

    2015-05-01

    Full Text Available A segmentation-based fully-polarimetric synthetic aperture radar (PolSAR) image classification method that incorporates texture features and color features is designed and implemented. This method is based on the framework that conjunctively uses statistical region merging (SRM) for segmentation and support vector machine (SVM) for classification. In the segmentation step, we propose an improved local binary pattern (LBP) operator named the regional homogeneity local binary pattern (RHLBP) to guarantee the regional homogeneity in PolSAR images. In the classification step, the color features extracted from false color images are applied to improve the classification accuracy. The RHLBP operator and color features can provide discriminative information to separate those pixels and regions with similar polarimetric features, which are from different classes. Extensive experimental comparison results with conventional methods on L-band PolSAR data demonstrate the effectiveness of our proposed method for PolSAR image classification.

  19. Classical Music Clustering Based on Acoustic Features

    OpenAIRE

    Wang, Xindi; Haque, Syed Arefinul

    2017-01-01

    In this paper we cluster 330 classical music pieces collected from the MusicNet database based on their musical note sequences. We use shingling and chord trajectory matrices to create a signature for each music piece and perform spectral clustering to find the clusters. Based on different resolutions, the output clusters distinctly indicate compositions from different classical music eras and different composing styles of the musicians.
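    A minimal sketch of the clustering step with scikit-learn is shown below, assuming each piece has already been reduced to a signature vector; the cosine-similarity affinity is an assumption for illustration, not necessarily the similarity used by the authors.

    ```python
    # Hedged sketch: spectral clustering of music pieces from pairwise similarities
    # between their signatures (e.g. shingle/chord-trajectory vectors). Illustrative
    # only; the signature construction itself is not reproduced.
    import numpy as np
    from sklearn.cluster import SpectralClustering
    from sklearn.metrics.pairwise import cosine_similarity

    def cluster_pieces(signatures, n_clusters=5):
        """signatures: (n_pieces, d) array, one signature vector per music piece."""
        affinity = cosine_similarity(signatures)
        affinity = np.clip(affinity, 0.0, None)      # spectral clustering needs non-negative affinities
        model = SpectralClustering(n_clusters=n_clusters, affinity="precomputed", random_state=0)
        return model.fit_predict(affinity)
    ```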

  20. Automated Analysis of Facial Cues from Videos as a Potential Method for Differentiating Stress and Boredom of Players in Games

    Directory of Open Access Journals (Sweden)

    Fernando Bevilacqua

    2018-01-01

    Full Text Available Facial analysis is a promising approach to detect emotions of players unobtrusively; however approaches are commonly evaluated in contexts not related to games or facial cues are derived from models not designed for analysis of emotions during interactions with games. We present a method for automated analysis of facial cues from videos as a potential tool for detecting stress and boredom of players behaving naturally while playing games. Computer vision is used to automatically and unobtrusively extract 7 facial features aimed at detecting the activity of a set of facial muscles. Features are mainly based on the Euclidean distance of facial landmarks and do not rely on predefined facial expressions, training of a model, or the use of facial standards. An empirical evaluation was conducted on video recordings of an experiment involving games as emotion elicitation sources. Results show statistically significant differences in the values of facial features during boring and stressful periods of gameplay for 5 of the 7 features. We believe our approach is more user-tailored, convenient, and better suited for contexts involving games.

  1. Individual discriminative face recognition models based on subsets of features

    DEFF Research Database (Denmark)

    Clemmensen, Line Katrine Harder; Gomez, David Delgado; Ersbøll, Bjarne Kjær

    2007-01-01

    The accuracy of data classification methods depends considerably on the data representation and on the selected features. In this work, the elastic net model selection is used to identify meaningful and important features in face recognition. Modelling the characteristics which distinguish one person from another using only subsets of features will both decrease the computational cost and increase the generalization capacity of the face recognition algorithm. Moreover, identifying which are the features that better discriminate between persons will also provide a deeper understanding of the face recognition problem. The elastic net model is able to select a subset of features with low computational effort compared to other state-of-the-art feature selection methods. Furthermore, the fact that the number of features usually is larger than the number of images in the data base makes feature...
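    A hedged sketch of elastic-net-based feature selection for one person, using scikit-learn's penalized logistic regression in a one-vs-rest setting, is given below; the solver and hyper-parameters are assumptions, and this is not the authors' implementation.

    ```python
    # Hedged sketch: an elastic-net penalized logistic model per person, keeping the
    # features with non-zero coefficients as that person's discriminative subset.
    # Illustrative only; solver and hyper-parameters are assumptions.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def person_feature_subset(X, y, person_id, C=1.0, l1_ratio=0.5):
        """X: (n_images, n_features); y: person labels. One-vs-rest model for one person."""
        target = (y == person_id).astype(int)
        model = LogisticRegression(penalty="elasticnet", solver="saga",
                                   l1_ratio=l1_ratio, C=C, max_iter=5000)
        model.fit(X, target)
        return np.flatnonzero(model.coef_[0])        # indices of selected features
    ```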

  2. A signal-detection-based diagnostic-feature-detection model of eyewitness identification.

    Science.gov (United States)

    Wixted, John T; Mickes, Laura

    2014-04-01

    The theoretical understanding of eyewitness identifications made from a police lineup has long been guided by the distinction between absolute and relative decision strategies. In addition, the accuracy of identifications associated with different eyewitness memory procedures has long been evaluated using measures like the diagnosticity ratio (the correct identification rate divided by the false identification rate). Framed in terms of signal-detection theory, both the absolute/relative distinction and the diagnosticity ratio are mainly relevant to response bias while remaining silent about the key issue of diagnostic accuracy, or discriminability (i.e., the ability to tell the difference between innocent and guilty suspects in a lineup). Here, we propose a signal-detection-based model of eyewitness identification, one that encourages the use of (and helps to conceptualize) receiver operating characteristic (ROC) analysis to measure discriminability. Recent ROC analyses indicate that the simultaneous presentation of faces in a lineup yields higher discriminability than the presentation of faces in isolation, and we propose a diagnostic feature-detection hypothesis to account for that result. According to this hypothesis, the simultaneous presentation of faces allows the eyewitness to appreciate that certain facial features (viz., those that are shared by everyone in the lineup) are non-diagnostic of guilt. To the extent that those non-diagnostic features are discounted in favor of potentially more diagnostic features, the ability to discriminate innocent from guilty suspects will be enhanced.

  3. Graph-based unsupervised feature selection and multiview ...

    Indian Academy of Sciences (India)

    Home; Journals; Journal of Biosciences; Volume 40; Issue 4. Graph-based unsupervised feature selection and multiview clustering for microarray data. Tripti Swarnkar Pabitra Mitra ... Keywords. Biological functional enrichment; clustering; explorative data analysis; feature selection; gene selection; graph-based learning.

  4. Automatic Facial Expression Recognition and Operator Functional State

    Science.gov (United States)

    Blanson, Nina

    2012-01-01

    The prevalence of human error in safety-critical occupations remains a major challenge to mission success despite increasing automation in control processes. Although various methods have been proposed to prevent incidences of human error, none of these have been developed to employ the detection and regulation of Operator Functional State (OFS), or the optimal condition of the operator while performing a task, in work environments due to drawbacks such as obtrusiveness and impracticality. A video-based system with the ability to infer an individual's emotional state from facial feature patterning mitigates some of the problems associated with other methods of detecting OFS, like obtrusiveness and impracticality in integration with the mission environment. This paper explores the utility of facial expression recognition as a technology for inferring OFS by first expounding on the intricacies of OFS and the scientific background behind emotion and its relationship with an individual's state. Then, descriptions of the feedback loop and the emotion protocols proposed for the facial recognition program are explained. A basic version of the facial expression recognition program uses Haar classifiers and OpenCV libraries to automatically locate key facial landmarks during a live video stream. Various methods of creating facial expression recognition software are reviewed to guide future extensions of the program. The paper concludes with an examination of the steps necessary in the research of emotion and recommendations for the creation of an automatic facial expression recognition program for use in real-time, safety-critical missions.
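    A basic version of the landmark-locating step described above can be sketched with OpenCV's stock Haar cascades on a live video stream, as below; this is only a starting point and does not include the emotion-inference logic.

    ```python
    # Hedged sketch: locating a face and eyes in a live video stream with OpenCV Haar
    # cascades, as a minimal version of the detection step described above. The
    # cascade files are the stock ones distributed with OpenCV.
    import cv2

    face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

    cap = cv2.VideoCapture(0)                 # default webcam
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5):
            cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
            roi = gray[y:y + h, x:x + w]
            for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi):
                cv2.rectangle(frame, (x + ex, y + ey), (x + ex + ew, y + ey + eh), (0, 255, 0), 2)
        cv2.imshow("landmarks", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
    ```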

  5. Readability assessment of internet-based patient education materials related to facial fractures.

    Science.gov (United States)

    Sanghvi, Saurin; Cherla, Deepa V; Shukla, Pratik A; Eloy, Jean Anderson

    2012-09-01

    Various professional societies, clinical practices, hospitals, and health care-related Web sites provide Internet-based patient education material (IPEMs) to the general public. However, this information may be written above the 6th-grade reading level recommended by the US Department of Health and Human Services. The purpose of this study is to assess the readability of facial fracture (FF)-related IPEMs and compare readability levels of IPEMs provided by four sources: professional societies, clinical practices, hospitals, and miscellaneous sources. Analysis of IPEMs on FFs available on Google.com. The readability of 41 FF-related IPEMs was assessed with four readability indices: Flesch-Kincaid Grade Level (FKGL), Flesch Reading Ease Score (FRES), Simple Measure of Gobbledygook (SMOG), and Gunning Frequency of Gobbledygook (Gunning FOG). Averages were evaluated against national recommendations and between each source using analysis of variance and t tests. Only 4.9% of IPEMs were written at or below the 6th-grade reading level, based on FKGL. The mean readability scores were: FRES 54.10, FKGL 9.89, SMOG 12.73, and Gunning FOG 12.98, translating into FF-related IPEMs being written at a "difficult" writing level, which is above the level of reading understanding of the average American adult. IPEMs related to FFs are written above the recommended 6th-grade reading level. Consequently, this information would be difficult to understand by the average US patient. Copyright © 2012 The American Laryngological, Rhinological, and Otological Society, Inc.
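    For reference, two of the cited indices can be computed from word, sentence, and syllable counts using their standard published formulas, as in the sketch below; the syllable counter is a rough vowel-group heuristic, not the one used by the study's readability tools.

    ```python
    # Hedged sketch: the standard published formulas for two of the indices used above
    # (Flesch-Kincaid Grade Level and Flesch Reading Ease), with a rough vowel-group
    # heuristic for syllable counting.
    import re

    def count_syllables(word):
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def readability(text):
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        n_words = max(1, len(words))
        n_syllables = sum(count_syllables(w) for w in words)
        fkgl = 0.39 * (n_words / sentences) + 11.8 * (n_syllables / n_words) - 15.59
        fres = 206.835 - 1.015 * (n_words / sentences) - 84.6 * (n_syllables / n_words)
        return {"FKGL": fkgl, "FRES": fres}

    # Example: a grade level above about 6 would exceed the recommended reading level.
    print(readability("The zygomatic arch was fractured. Surgical reduction was performed."))
    ```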

  6. Facial Fractures.

    Science.gov (United States)

    Ghosh, Rajarshi; Gopalkrishnan, Kulandaswamy

    2018-01-29

    The aim of this study is to retrospectively analyze the incidence of facial fractures along with age, gender predilection, etiology, commonest site, associated dental injuries, and any complications in patients operated on in the Craniofacial Unit of SDM College of Dental Sciences and Hospital. This retrospective study was conducted at the Department of OMFS, SDM College of Dental Sciences, Dharwad from January 2003 to December 2013. Data were recorded for the cause of injury, age and gender distribution, frequency and type of injury, localization and frequency of soft tissue injuries, dentoalveolar trauma, facial bone fractures, complications, concomitant injuries, and different treatment protocols. All data were analyzed statistically using the chi-squared test. A total of 1146 patients reported to our unit with facial fractures during these 10 years. Males accounted for a higher frequency of facial fractures (88.8%). The mandible was the commonest bone to be fractured among all the facial bones (71.2%). Maxillary central incisors were the most common teeth to be injured (33.8%) and avulsion was the most common type of injury (44.6%). The commonest postoperative complication was plate infection (11%) leading to plate removal. Other injuries associated with facial fractures were rib fractures, head injuries, upper and lower limb fractures, etc.; among these, rib fractures were seen most frequently (21.6%). This study was performed to compare the different etiologic factors leading to diverse facial fracture patterns. Statistical analysis of these records clarified the relationship of facial fractures with gender, age, associated comorbidities, etc.

  7. [Facial burns].

    Science.gov (United States)

    Müller, F E

    1984-01-01

    Deep partial and full thickness facial burns require early skin grafting. Pressure face masks and local steroids reduce hypertrophic scarring. Split skin grafts and Z-plasties are used for early reconstructive surgery. Definitive reconstructive work should be undertaken only after softening of the scar tissue. For this period, full thickness skin grafts and local flaps are preferred. Special regional problems require skilled plastic surgery. Reconstructive surgery is the most essential part of the rehabilitation of severe facial burns.

  8. Mutual information-based feature selection for radiomics

    Science.gov (United States)

    Oubel, Estanislao; Beaumont, Hubert; Iannessi, Antoine

    2016-03-01

    Background The extraction and analysis of image features (radiomics) is a promising field in the precision medicine era, with applications to prognosis, prediction, and quantification of response to treatment. In this work, we present a mutual information-based method for quantifying the reproducibility of features, a necessary step for qualification before their inclusion in big data systems. Materials and Methods Ten patients with Non-Small Cell Lung Cancer (NSCLC) lesions were followed over time (7 time points on average) with Computed Tomography (CT). Five observers segmented lesions by using a semi-automatic method, and 27 features describing shape and intensity distribution were extracted. Inter-observer reproducibility was assessed by computing the multi-information (MI) of feature changes over time, and the variability of global extrema. Results The highest MI values were obtained for volume-based features (VBF). The lesion mass (M), surface to volume ratio (SVR) and volume (V) presented statistically significantly higher values of MI than the rest of the features. Within the same VBF group, SVR also showed the lowest variability of extrema. The correlation coefficient (CC) of feature values was unable to differentiate between features. Conclusions MI allowed three features (M, SVR, and V) to be discriminated from the rest in a statistically significant manner. This result is consistent with the order obtained when sorting features by increasing values of extrema variability. MI is a promising alternative for selecting features to be considered as surrogate biomarkers in a precision medicine context.
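    A simplified, pairwise stand-in for this reproducibility idea is sketched below: estimating the mutual information between two observers' measurements of the same feature over time with scikit-learn. The paper's multi-information over five observers is a generalization that is not reproduced here.

    ```python
    # Hedged sketch: a simplified, pairwise stand-in for the reproducibility idea above,
    # estimating mutual information between two observers' measurements of the same
    # feature over time with scikit-learn. Estimates on very few time points are noisy.
    import numpy as np
    from sklearn.feature_selection import mutual_info_regression

    def pairwise_feature_mi(values_obs_a, values_obs_b):
        """values_obs_*: 1-D arrays of a feature's change over time, one per observer."""
        x = np.asarray(values_obs_a).reshape(-1, 1)
        y = np.asarray(values_obs_b)
        return mutual_info_regression(x, y, random_state=0)[0]   # in nats

    # Features whose inter-observer MI is consistently high (e.g. volume, mass,
    # surface-to-volume ratio in the study) would be preferred as reproducible.
    ```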

  9. Perceived Sexual Orientation Based on Vocal and Facial Stimuli Is Linked to Self-Rated Sexual Orientation in Czech Men

    Science.gov (United States)

    Valentova, Jaroslava Varella; Havlíček, Jan

    2013-01-01

    Previous research has shown that lay people can accurately assess male sexual orientation based on limited information, such as face, voice, or behavioral display. Gender-atypical traits are thought to serve as cues to sexual orientation. We investigated the presumed mechanisms of sexual orientation attribution using a standardized set of facial and vocal stimuli of Czech men. Both types of stimuli were rated for sexual orientation and masculinity-femininity by non-student heterosexual women and homosexual men. Our data showed that by evaluating vocal stimuli both women and homosexual men can judge sexual orientation of the target men in agreement with their self-reported sexual orientation. Nevertheless, only homosexual men accurately attributed sexual orientation of the two groups from facial images. Interestingly, facial images of homosexual targets were rated as more masculine than heterosexual targets. This indicates that attributions of sexual orientation are affected by stereotyped association between femininity and male homosexuality; however, reliance on such cues can lead to frequent misjudgments as was the case with the female raters. Although our study is based on a community sample recruited in a non-English speaking country, the results are generally consistent with the previous research and thus corroborate the validity of sexual orientation attributions. PMID:24358180

  10. Perceived sexual orientation based on vocal and facial stimuli is linked to self-rated sexual orientation in Czech men.

    Directory of Open Access Journals (Sweden)

    Jaroslava Varella Valentova

    Full Text Available Previous research has shown that lay people can accurately assess male sexual orientation based on limited information, such as face, voice, or behavioral display. Gender-atypical traits are thought to serve as cues to sexual orientation. We investigated the presumed mechanisms of sexual orientation attribution using a standardized set of facial and vocal stimuli of Czech men. Both types of stimuli were rated for sexual orientation and masculinity-femininity by non-student heterosexual women and homosexual men. Our data showed that by evaluating vocal stimuli both women and homosexual men can judge sexual orientation of the target men in agreement with their self-reported sexual orientation. Nevertheless, only homosexual men accurately attributed sexual orientation of the two groups from facial images. Interestingly, facial images of homosexual targets were rated as more masculine than heterosexual targets. This indicates that attributions of sexual orientation are affected by stereotyped association between femininity and male homosexuality; however, reliance on such cues can lead to frequent misjudgments as was the case with the female raters. Although our study is based on a community sample recruited in a non-English speaking country, the results are generally consistent with the previous research and thus corroborate the validity of sexual orientation attributions.

  11. A Real-Time Interactive System for Facial Makeup of Peking Opera

    Science.gov (United States)

    Cai, Feilong; Yu, Jinhui

    In this paper we present a real-time interactive system for making facial makeup of Peking Opera. First, we analyze the process of drawing facial makeup and the characteristics of the patterns used in it, and then construct an SVG pattern bank based on local features like eye, nose, mouth, etc. Next, we pick some SVG patterns from the pattern bank and compose them into a new facial makeup. We offer a vector-based free form deformation (FFD) tool to edit patterns and, based on the editing, our system automatically creates texture maps for a template head model. Finally, the facial makeup is rendered on the 3D head model in real time. Our system offers flexibility in designing and synthesizing various 3D facial makeup. Potential applications of the system include decoration design, digital museum exhibition and education of Peking Opera.

  12. Moment feature based fast feature extraction algorithm for moving object detection using aerial images.

    Directory of Open Access Journals (Sweden)

    A F M Saifuddin Saif

    Full Text Available Fast and computationally less complex feature extraction for moving object detection using aerial images from unmanned aerial vehicles (UAVs) remains an elusive goal in the field of computer vision research. The types of features used in current studies concerning moving object detection are typically chosen based on improving detection rate rather than on providing fast and computationally less complex feature extraction methods. Because moving object detection using aerial images from UAVs involves motion as seen from a certain altitude, effective and fast feature extraction is a vital issue for optimum detection performance. This research proposes a two-layer bucket approach based on a new feature extraction algorithm referred to as the moment-based feature extraction algorithm (MFEA). Because a moment represents the coherent intensity of pixels and motion estimation is a motion pixel intensity measurement, this research used this relation to develop the proposed algorithm. The experimental results reveal the successful performance of the proposed MFEA algorithm and the proposed methodology.

  13. Moment feature based fast feature extraction algorithm for moving object detection using aerial images.

    Science.gov (United States)

    Saif, A F M Saifuddin; Prabuwono, Anton Satria; Mahayuddin, Zainal Rasyid

    2015-01-01

    Fast and computationally less complex feature extraction for moving object detection using aerial images from unmanned aerial vehicles (UAVs) remains as an elusive goal in the field of computer vision research. The types of features used in current studies concerning moving object detection are typically chosen based on improving detection rate rather than on providing fast and computationally less complex feature extraction methods. Because moving object detection using aerial images from UAVs involves motion as seen from a certain altitude, effective and fast feature extraction is a vital issue for optimum detection performance. This research proposes a two-layer bucket approach based on a new feature extraction algorithm referred to as the moment-based feature extraction algorithm (MFEA). Because a moment represents the coherent intensity of pixels and motion estimation is a motion pixel intensity measurement, this research used this relation to develop the proposed algorithm. The experimental results reveal the successful performance of the proposed MFEA algorithm and the proposed methodology.
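    As a rough illustration of the kind of quantity a moment-based approach builds on, the sketch below computes raw, central, and Hu moments of a frame-difference map with OpenCV; this is not the MFEA itself, and the feature list is an arbitrary selection.

    ```python
    # Hedged illustration: image moments of a frame-difference map, computed with
    # OpenCV. The MFEA itself is not reproduced; this only shows the kind of
    # moment-based quantity such an approach builds on.
    import cv2
    import numpy as np

    def motion_moment_features(prev_frame_gray, curr_frame_gray):
        diff = cv2.absdiff(curr_frame_gray, prev_frame_gray)     # motion pixel intensities
        m = cv2.moments(diff)
        raw_and_central = [m["m00"], m["m10"], m["m01"], m["mu20"], m["mu02"], m["mu11"]]
        hu = cv2.HuMoments(m).flatten()                           # scale/rotation invariant set
        return np.concatenate([raw_and_central, hu])
    ```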

  14. Deletion of 4.4 Mb at 2q33.2q33.3 May Cause Growth Deficiency in a Patient with Mental Retardation, Facial Dysmorphic Features and Speech Delay.

    Science.gov (United States)

    Papoulidis, Ioannis; Paspaliaris, Vassilis; Papageorgiou, Elena; Siomou, Elissavet; Dagklis, Themistoklis; Sotiriou, Sotirios; Thomaidis, Loretta; Manolakos, Emmanouil

    2015-01-01

    A patient with a rare interstitial deletion of chromosomal band 2q33.2q33.3 is described. The clinical features resembled the 2q33.1 microdeletion syndrome (Glass syndrome), including mental retardation, facial dysmorphism, high-arched narrow palate, growth deficiency, and speech delay. The chromosomal aberration was characterized by whole genome BAC aCGH. A comparison of the current patient and Glass syndrome features revealed that this case displayed a relatively mild phenotype. Overall, it is suggested that the deleted region of 2q33 causative for Glass syndrome may be larger than initially suggested. © 2015 S. Karger AG, Basel.

  15. Facial Video based Detection of Physical Fatigue for Maximal Muscle Activity

    DEFF Research Database (Denmark)

    Haque, Mohammad Ahsanul; Irani, Ramin; Nasrollahi, Kamal

    2016-01-01

    Physical fatigue reveals the health condition of a person at, for example, a health checkup, fitness assessment or rehabilitation training. This paper presents an efficient noncontact system for detecting non-localized physical fatigue from maximal muscle activity using facial videos acquired

  16. Face Puzzle – Two new video-based tasks for measuring explicit and implicit aspects of facial emotion recognition

    Directory of Open Access Journals (Sweden)

    Dorit eKliemann

    2013-06-01

    Full Text Available Recognizing others' emotional states is crucial for effective social interaction. While most facial emotion recognition tasks use explicit prompts that trigger consciously controlled processing, emotional faces are almost exclusively processed implicitly in real life. Recent attempts in social cognition suggest a dual process perspective, whereby explicit and implicit processes largely operate independently. However, due to differences in methodology the direct comparison of implicit and explicit social cognition has remained a challenge. Here, we introduce a new tool to comparably measure implicit and explicit processing aspects comprising basic and complex emotions in facial expressions. We developed two video-based tasks with similar answer formats to assess performance in respective facial emotion recognition processes: Face Puzzle, implicit and explicit. To assess the tasks' sensitivity to atypical social cognition and to infer interrelationship patterns between explicit and implicit processes in typical and atypical development, we included healthy adults (NT, n = 24) and adults with autism spectrum disorder (ASD, n = 24). Item analyses yielded good reliability of the new tasks. Group-specific results indicated sensitivity to subtle social impairments in high-functioning ASD. Correlation analyses with established implicit and explicit socio-cognitive measures were further in favor of the tasks' external validity. Between-group comparisons provide first hints of differential relations between implicit and explicit aspects of facial emotion recognition processes in healthy compared to ASD participants. In addition, an increased magnitude of between-group differences in the implicit task was found for a speed-accuracy composite measure. The new Face Puzzle tool thus provides two new tasks to separately assess explicit and implicit social functioning, for instance, to measure subtle impairments as well as potential improvements due to social

  17. SAR Target Recognition with Feature Fusion Based on Stacked Autoencoder

    Directory of Open Access Journals (Sweden)

    Kang Miao

    2017-04-01

    Full Text Available A feature fusion algorithm based on a Stacked AutoEncoder (SAE) for Synthetic Aperture Radar (SAR) imagery is proposed in this paper. Firstly, 25 baseline features and Three-Patch Local Binary Patterns (TPLBP) features are extracted. Then, the features are combined in series and fed into the SAE network, which is trained by a greedy layer-wise method. Finally, the softmax classifier is employed to fine-tune the SAE network for better fusion performance. Additionally, the Gabor texture features of SAR images are extracted, and the fusion experiments between different features are carried out. The results show that the baseline features and TPLBP features have low redundancy and high complementarity, which makes the fused feature more discriminative. Compared with SAR target recognition algorithms based on SAE or CNN (Convolutional Neural Network), the proposed method simplifies the network structure and increases the recognition accuracy and efficiency. Ten-class SAR target recognition on the MSTAR dataset achieved a classification accuracy of up to 95.88%, which verifies the effectiveness of the presented algorithm.

  18. Infrared vehicle recognition using unsupervised feature learning based on K-feature

    Science.gov (United States)

    Lin, Jin; Tan, Yihua; Xia, Haijiao; Tian, Jinwen

    2018-02-01

    Subject to the complex battlefield environment, it is difficult to establish a complete knowledge base in practical applications of vehicle recognition algorithms. Infrared vehicle recognition is always difficult and challenging, and it plays an important role in remote sensing. In this paper we propose a new unsupervised feature learning method based on the K-feature to recognize vehicles in infrared images. First, we use a saliency-based target detection algorithm on the initial image. Then, unsupervised feature learning based on the K-feature, which is generated by a K-means clustering algorithm that learns a visual dictionary from a large number of unlabeled samples, is applied to suppress false alarms and improve accuracy. Finally, the vehicle recognition result is obtained after some post-processing. Extensive experiments demonstrate that the proposed method achieves satisfactory effectiveness and robustness for vehicle recognition in infrared images under complex backgrounds, and also improves reliability.
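    The dictionary-learning idea can be sketched as K-means on patches from unlabeled images followed by a histogram encoding of a candidate region, as below; this follows the general spirit of the K-feature description and is not the paper's exact formulation.

    ```python
    # Hedged sketch of dictionary learning by K-means on unlabeled image patches and
    # histogram encoding of a candidate region, in the spirit of the K-feature idea
    # (not the paper's exact formulation). Images are 2-D grayscale numpy arrays.
    import numpy as np
    from sklearn.cluster import MiniBatchKMeans

    def extract_patches(image, patch=8, stride=4):
        h, w = image.shape
        return np.array([image[i:i + patch, j:j + patch].ravel()
                         for i in range(0, h - patch + 1, stride)
                         for j in range(0, w - patch + 1, stride)], dtype=np.float32)

    def learn_dictionary(unlabeled_images, n_atoms=64):
        patches = np.vstack([extract_patches(img) for img in unlabeled_images])
        return MiniBatchKMeans(n_clusters=n_atoms, random_state=0).fit(patches)

    def encode_region(region, dictionary):
        assignments = dictionary.predict(extract_patches(region))
        hist = np.bincount(assignments, minlength=dictionary.n_clusters).astype(float)
        return hist / max(1.0, hist.sum())            # normalized histogram over dictionary atoms
    ```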

  19. Edge and line feature extraction based on covariance models

    NARCIS (Netherlands)

    van der Heijden, Ferdinand

    Image segmentation based on contour extraction usually involves three stages of image operations: feature extraction, edge detection and edge linking. This paper is devoted to the first stage: a method to design feature extractors used to detect edges from noisy and/or blurred images.

  20. Effect of Feature Dimensionality on Object-based Land Cover ...

    African Journals Online (AJOL)

    Geographic object-based image analysis (GEOBIA) allows the easy integration of such additional features into the classification process. This paper compares the performance of three supervised classifiers in a GEOBIA environment as an increasing number of object features are included as classification input.

  1. A Genome-Wide Association Study Identifies Five Loci Influencing Facial Morphology in Europeans

    Science.gov (United States)

    Liu, Fan; van der Lijn, Fedde; Schurmann, Claudia; Zhu, Gu; Chakravarty, M. Mallar; Hysi, Pirro G.; Wollstein, Andreas; Lao, Oscar; de Bruijne, Marleen; Ikram, M. Arfan; van der Lugt, Aad; Rivadeneira, Fernando; Uitterlinden, André G.; Hofman, Albert; Niessen, Wiro J.; Homuth, Georg; de Zubicaray, Greig; McMahon, Katie L.; Thompson, Paul M.; Daboul, Amro; Puls, Ralf; Hegenscheid, Katrin; Bevan, Liisa; Pausova, Zdenka; Medland, Sarah E.; Montgomery, Grant W.; Wright, Margaret J.; Wicking, Carol; Boehringer, Stefan; Spector, Timothy D.; Paus, Tomáš; Martin, Nicholas G.; Biffar, Reiner; Kayser, Manfred

    2012-01-01

    Inter-individual variation in facial shape is one of the most noticeable phenotypes in humans, and it is clearly under genetic regulation; however, almost nothing is known about the genetic basis of normal human facial morphology. We therefore conducted a genome-wide association study for facial shape phenotypes in multiple discovery and replication cohorts, considering almost ten thousand individuals of European descent from several countries. Phenotyping of facial shape features was based on landmark data obtained from three-dimensional head magnetic resonance images (MRIs) and two-dimensional portrait images. We identified five independent genetic loci associated with different facial phenotypes, suggesting the involvement of five candidate genes—PRDM16, PAX3, TP63, C5orf50, and COL17A1—in the determination of the human face. Three of them have been implicated previously in vertebrate craniofacial development and disease, and the remaining two genes potentially represent novel players in the molecular networks governing facial development. Our finding at PAX3 influencing the position of the nasion replicates a recent GWAS of facial features. In addition to the reported GWA findings, we established links between common DNA variants previously associated with NSCL/P at 2p21, 8q24, 13q31, and 17q22 and normal facial-shape variations based on a candidate gene approach. Overall our study implies that DNA variants in genes essential for craniofacial development contribute with relatively small effect size to the spectrum of normal variation in human facial morphology. This observation has important consequences for future studies aiming to identify more genes involved in the human facial morphology, as well as for potential applications of DNA prediction of facial shape such as in future forensic applications. PMID:23028347

  2. A genome-wide association study identifies five loci influencing facial morphology in Europeans.

    Directory of Open Access Journals (Sweden)

    Fan Liu

    2012-09-01

    Full Text Available Inter-individual variation in facial shape is one of the most noticeable phenotypes in humans, and it is clearly under genetic regulation; however, almost nothing is known about the genetic basis of normal human facial morphology. We therefore conducted a genome-wide association study for facial shape phenotypes in multiple discovery and replication cohorts, considering almost ten thousand individuals of European descent from several countries. Phenotyping of facial shape features was based on landmark data obtained from three-dimensional head magnetic resonance images (MRIs and two-dimensional portrait images. We identified five independent genetic loci associated with different facial phenotypes, suggesting the involvement of five candidate genes--PRDM16, PAX3, TP63, C5orf50, and COL17A1--in the determination of the human face. Three of them have been implicated previously in vertebrate craniofacial development and disease, and the remaining two genes potentially represent novel players in the molecular networks governing facial development. Our finding at PAX3 influencing the position of the nasion replicates a recent GWAS of facial features. In addition to the reported GWA findings, we established links between common DNA variants previously associated with NSCL/P at 2p21, 8q24, 13q31, and 17q22 and normal facial-shape variations based on a candidate gene approach. Overall our study implies that DNA variants in genes essential for craniofacial development contribute with relatively small effect size to the spectrum of normal variation in human facial morphology. This observation has important consequences for future studies aiming to identify more genes involved in the human facial morphology, as well as for potential applications of DNA prediction of facial shape such as in future forensic applications.

  3. WORD BASED TAMIL SPEECH RECOGNITION USING TEMPORAL FEATURE BASED SEGMENTATION

    Directory of Open Access Journals (Sweden)

    A. Akila

    2015-05-01

    Full Text Available A speech recognition system requires segmentation of the speech waveform into fundamental acoustic units. Segmentation is the process of decomposing the speech signal into smaller units. Speech segmentation can be done using wavelets, fuzzy methods, Artificial Neural Networks and Hidden Markov Models. It breaks a continuous stream of sound into basic units such as words, phonemes or syllables that can be recognized. Segmentation can also be used to distinguish different types of audio signals in large amounts of audio data, which is often referred to as audio classification. Speech segmentation methods can be divided into two categories, blind segmentation and aided segmentation, depending on whether the algorithm uses previous knowledge of the data to process the speech. The major issues with connected speech recognition algorithms are that the vocabulary grows with the variation in word combinations in the connected speech, and that the complexity of finding the best match for a given test pattern increases accordingly. To overcome these issues, the connected speech has to be segmented into words using attributes of the speech signal. A methodology using the temporal feature Short-Term Energy is proposed and compared with an existing Dynamic Thresholding segmentation algorithm that uses the spectrogram image of the connected speech for segmentation.
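
    As a rough illustration of the kind of temporal-feature segmenter described above, the sketch below computes a Short-Term Energy contour and marks word-like regions where the energy exceeds a fraction of its peak. The frame length, hop size, threshold rule and the synthetic two-burst signal are assumptions for illustration, not parameters from the paper.

        import numpy as np

        def short_term_energy(signal, frame_len=400, hop=160):
            # Energy of each overlapping analysis frame.
            frames = [signal[i:i + frame_len]
                      for i in range(0, len(signal) - frame_len + 1, hop)]
            return np.array([np.sum(f.astype(np.float64) ** 2) for f in frames])

        def segment_words(energy, rel_threshold=0.1):
            # Mark frames as speech when their energy exceeds a fraction of the peak,
            # then merge consecutive speech frames into (start_frame, end_frame) runs.
            speech = energy > rel_threshold * energy.max()
            segments, start = [], None
            for i, active in enumerate(speech):
                if active and start is None:
                    start = i
                elif not active and start is not None:
                    segments.append((start, i - 1))
                    start = None
            if start is not None:
                segments.append((start, len(speech) - 1))
            return segments

        if __name__ == "__main__":
            fs = 16000
            t = np.arange(2 * fs) / fs
            # Synthetic "two-word" signal: two tone bursts separated by silence.
            sig = np.sin(2 * np.pi * 440 * t) * ((t < 0.6) | (t > 1.2))
            print(segment_words(short_term_energy(sig)))   # roughly one run per burst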

  4. Predicting facial characteristics from complex polygenic variations

    DEFF Research Database (Denmark)

    Fagertun, Jens; Wolffhechel, Karin Marie Brandt; Pers, Tune

    2015-01-01

    Research into the importance of the human genome in the context of facial appearance is receiving increasing attention and has led to the detection of several Single Nucleotide Polymorphisms (SNPs) of importance. In this work we attempt a holistic approach predicting facial characteristics from ... traits in a linear regression. We show in this proof-of-concept study for facial trait prediction from genome-wide SNP data that some facial characteristics can be modeled by genetic information: facial width, eyebrow width, distance between eyes, and features involving mouth shape are predicted ...

  5. Opinion mining feature-level using Naive Bayes and feature extraction based analysis dependencies

    Science.gov (United States)

    Sanda, Regi; Baizal, Z. K. Abdurahman; Nhita, Fhira

    2015-12-01

    The development of the internet and related technology has had a major impact and has given rise to a new kind of business called e-commerce. Many e-commerce sites provide convenient transactions, and consumers can also post reviews or opinions on the products they purchase. These opinions can be used by both consumers and producers: consumers learn the advantages and disadvantages of particular features of a product, while producers can analyse the strengths and weaknesses of their own products as well as those of competitors. The large number of opinions calls for a method that lets the reader grasp the point of the opinions as a whole. The idea comes from review summarization, which summarizes the overall opinion based on the sentiments and features it contains. In this study, the main domain of focus is digital cameras. The research consists of four steps: 1) giving the system the knowledge to recognize the semantic orientation of an opinion; 2) identifying the features of the product; 3) identifying whether an opinion is positive or negative; 4) summarizing the results. The methods discussed include Naïve Bayes for sentiment classification; a feature extraction algorithm based on dependency analysis, which is one of the tools in Natural Language Processing (NLP); and a knowledge-based dictionary, which is useful for handling implicit features. The end result of the research is a summary that contains a collection of consumer reviews organized by feature and sentiment. With the proposed method, the accuracy of sentiment classification is 81.2 % for positive test data and 80.2 % for negative test data, and the accuracy of feature extraction reaches 90.3 %.
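
    The sentiment classification step described above is a standard supervised text classification task. A minimal sketch of a Naïve Bayes sentiment classifier over bag-of-words counts, using scikit-learn, is shown below; the toy reviews and labels are invented for illustration and do not come from the study's digital-camera corpus.

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.pipeline import make_pipeline

        # Toy labeled reviews standing in for the digital-camera review corpus.
        reviews = [
            "the lens is sharp and the battery lasts long",
            "great zoom and excellent picture quality",
            "the battery dies quickly and the screen is dim",
            "terrible autofocus and blurry pictures",
        ]
        labels = ["positive", "positive", "negative", "negative"]

        model = make_pipeline(CountVectorizer(), MultinomialNB())
        model.fit(reviews, labels)

        print(model.predict(["excellent picture quality but a dim screen"]))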

  6. An Effective Combined Feature For Web Based Image Retrieval

    Directory of Open Access Journals (Sweden)

    H.M.R.B Herath

    2015-08-01

    Full Text Available Abstract Technological advances, the emergence of large-scale multimedia applications and the revolution of the World Wide Web have turned the world into a digital age. Anybody can use a mobile phone to take a photo at any time, anywhere, and upload that image to ever-growing image databases. The development of effective techniques for visual and multimedia retrieval systems is one of the most challenging and important directions of future research. This paper proposes an effective combined feature for web-based image retrieval. Frequently used colour and texture features are explored in order to develop a combined feature for this purpose. Three widely used colour features (Colour moments, Colour coherence vector and Colour Correlogram) and three texture features (Grey Level Co-occurrence matrix, Tamura features and Gabor filter) were analyzed for their performance. Precision and Recall were used to evaluate the performance of each of these techniques. By comparing precision and recall values, the methods that performed best were taken and combined to form a hybrid feature. The developed combined feature was evaluated by building a web-based CBIR system. A web crawler was used to first crawl through Web sites; images found in those sites were downloaded, and the combined feature representation technique was used to extract image features. The test results indicated that this web system can be used to index web images with the combined feature representation schema and to find similar images. Random image retrievals using the web system show that the combined feature can be used to retrieve images belonging to the general image domain. Retrieval accuracy is notably high for natural images such as outdoor scenes, images of flowers, etc. Images with a similar colour and texture distribution were also retrieved as similar even though they belonged to different semantic categories. This can be ideal for an artist who wants ...
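
    Of the colour descriptors listed above, colour moments are the simplest to compute: the mean, standard deviation and skewness of each channel give a nine-dimensional feature per image. The sketch below is a generic NumPy implementation under those usual definitions; it is not taken from the paper, and the random image is only a placeholder.

        import numpy as np

        def color_moments(image):
            # First three colour moments (mean, standard deviation, skewness)
            # per channel of an H x W x 3 image, giving a 9-dimensional feature.
            feats = []
            for c in range(image.shape[2]):
                channel = image[..., c].astype(np.float64).ravel()
                mean = channel.mean()
                std = channel.std()
                skew = np.cbrt(np.mean((channel - mean) ** 3))  # signed cube root of the third moment
                feats.extend([mean, std, skew])
            return np.array(feats)

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            img = rng.integers(0, 256, size=(64, 64, 3))        # stand-in for a crawled web image
            print(color_moments(img))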

  7. Facial Sports Injuries

    Science.gov (United States)

    ... Marketplace Find an ENT Doctor Near You Facial Sports Injuries Facial Sports Injuries Patient Health Information News ... should receive immediate medical attention. Prevention Of Facial Sports Injuries The best way to treat facial sports ...

  8. Linear feature selection in texture analysis - A PLS based method

    DEFF Research Database (Denmark)

    Marques, Joselene; Igel, Christian; Lillholm, Martin

    2013-01-01

    We present a texture analysis methodology that combined uncommitted machine-learning techniques and partial least square (PLS) in a fully automatic framework. Our approach introduces a robust PLS-based dimensionality reduction (DR) step to specifically address outliers and high-dimensional feature......, which first applied a PLS regression to rank the features and then defined the best number of features to retain in the model by an iterative learning phase. The outliers in the dataset, that could inflate the number of selected features, were eliminated by a pre-processing step. To cope...... and considering all CV groups, the methods selected 36 % of the original features available. The diagnosis evaluation reached a generalization area-under-the-ROC curve of 0.92, which was higher than established cartilage-based markers known to relate to OA diagnosis....
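
    The core of the method above is ranking texture features with a PLS regression and retaining only the top of the ranking. The sketch below shows PLS-based feature ranking with scikit-learn on synthetic data; the number of latent components and the use of absolute regression coefficients as the ranking score are illustrative assumptions, not the authors' exact procedure.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(0)
        X = rng.standard_normal((80, 20))            # 80 samples, 20 texture features
        y = X[:, 3] - 2 * X[:, 7] + 0.1 * rng.standard_normal(80)   # two informative features

        pls = PLSRegression(n_components=2).fit(X, y)
        weights = np.abs(np.ravel(pls.coef_))        # one weight per original feature
        ranking = np.argsort(weights)[::-1]          # most informative features first
        print(ranking[:5])                           # features 7 and 3 should lead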

  9. Feature selection for fMRI-based deception detection

    Science.gov (United States)

    Jin, Bo; Strasburger, Alvin; Laken, Steven J; Kozel, F Andrew; Johnson, Kevin A; George, Mark S; Lu, Xinghua

    2009-01-01

    Background Functional magnetic resonance imaging (fMRI) is a technology used to detect brain activity. Patterns of brain activation have been utilized as biomarkers for various neuropsychiatric applications. Detecting deception based on the pattern of brain activation characterized with fMRI is getting attention – with machine learning algorithms being applied to this field in recent years. The high dimensionality of fMRI data makes it a difficult task to directly utilize the original data as input for classification algorithms in detecting deception. In this paper, we investigated the procedures of feature selection to enhance fMRI-based deception detection. Results We used the t-statistic map derived from the statistical parametric mapping analysis of fMRI signals to construct features that reflect brain activation patterns. We subsequently investigated various feature selection methods including an ensemble method to identify discriminative features to detect deception. Using 124 features selected from a set of 65,166 original features as inputs for a support vector machine classifier, our results indicate that feature selection significantly enhanced the classification accuracy of the support vector machine in comparison to the models trained using all features and dimension reduction based models. Furthermore, the selected features are shown to form anatomic clusters within brain regions, which supports the hypothesis that specific brain regions may play a role during deception processes. Conclusion Feature selection not only enhances classification accuracy in fMRI-based deception detection but also provides support for the biological hypothesis that brain activities in certain regions of the brain are important for discrimination of deception. PMID:19761569
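
    The pipeline described above, univariate filtering of a very high-dimensional feature set followed by a linear support vector machine, can be mimicked with scikit-learn in a few lines. The sketch below uses ANOVA F-scores (for two classes, equivalent to squared two-sample t-statistics) as the univariate filter; the synthetic data, the number of retained features and the cross-validation setup are assumptions, not the study's protocol. Keeping the selection step inside the pipeline ensures it is re-fit within each cross-validation fold, avoiding selection bias.

        import numpy as np
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.standard_normal((60, 5000))          # 60 scans, 5000 voxel-level features
        y = np.repeat([0, 1], 30)                    # truthful vs. deceptive condition labels
        X[y == 1, :20] += 1.0                        # only the first 20 features carry signal

        clf = make_pipeline(SelectKBest(f_classif, k=50), SVC(kernel="linear"))
        print(cross_val_score(clf, X, y, cv=5).mean())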

  10. Facial blindsight

    Directory of Open Access Journals (Sweden)

    Marco eSolcà

    2015-09-01

    Full Text Available Blindsight denotes unconscious residual visual capacities in the context of an inability to consciously recollect or identify visual information. It has been described for color and shape discrimination, movement or facial emotion recognition. The present study investigates a patient suffering from cortical blindness whilst maintaining select residual abilities in face detection. Our patient presented the capacity to distinguish between jumbled/normal faces, known/unknown faces or famous people’s categories although he failed to explicitly recognize or describe them. Conversely, performance was at chance level when asked to categorize non-facial stimuli. Our results provide clinical evidence for the notion that some aspects of facial processing can occur without perceptual awareness, possibly using direct tracts from the thalamus to associative visual cortex, bypassing the primary visual cortex.

  11. Augmented reality-based self-facial modeling to promote the emotional expression and social skills of adolescents with autism spectrum disorders.

    Science.gov (United States)

    Chen, Chien-Hsu; Lee, I-Jui; Lin, Ling-Yi

    2014-11-08

    Autism spectrum disorders (ASD) are characterized by a reduced ability to understand the emotions of other people; this ability involves recognizing facial expressions. This study assessed the possibility of enabling three adolescents with ASD to become aware of facial expressions observed in situations in a school setting simulated using augmented reality (AR) technology. The AR system provided three-dimensional (3-D) animations of six basic facial expressions overlaid on participant faces to facilitate practicing emotional judgments and social skills. Based on the multiple baseline design across subjects, the data indicated that AR intervention can improve the appropriate recognition and response to facial emotional expressions seen in the situational task. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. Feature-based Ontology Mapping from an Information Receivers’ Viewpoint

    DEFF Research Database (Denmark)

    Glückstad, Fumiko Kano; Mørup, Morten

    2012-01-01

    This paper compares four algorithms for computing feature-based similarities between concepts respectively possessing a distinctive set of features. The eventual purpose of comparing these feature-based similarity algorithms is to identify a candidate term in a Target Language (TL) that can...... optimally convey the original meaning of a culturally-specific Source Language (SL) concept to a TL audience by aligning two culturally-dependent domain-specific ontologies. The results indicate that the Bayesian Model of Generalization [1] performs best, not only for identifying candidate translation terms...

  13. Rejuvenecimiento facial

    Directory of Open Access Journals (Sweden)

    L. Daniel Jacubovsky, Dr.

    2010-01-01

    Full Text Available Facial aging is a process that is unique and particular to each individual and is governed above all by his or her genetic load. The facelift is a complex technique, developed in our specialty since the beginning of the century, to reverse the main signs of this process. The secondary factors that bear on facial aging are numerous, and for that reason the rhytidectomies or cervicofacial lifts that have been described have sought to correct the physiognomic changes of aging by working, as described, in all the tissue planes involved. This surgery therefore demands thorough knowledge of the surgical anatomy, skill and experience in order to reduce complications, surgical stigmata and secondary revisions. Facial rhytidectomy has evolved toward a simpler procedure, with shorter incisions and less extensive dissections. Muscular suspensions have varied in their execution, and the vectors of lift and of skin resection are crucial to the aesthetic results of cervicofacial surgery. Today these vectors exert a more vertical traction. The correction of laxity is accompanied by an interest in restoring volume to the surface of the face, especially the middle third. Surgical rejuvenation techniques, especially the facelift, require planning for each patient. Techniques adjunct to the lift, such as blepharoplasty, mentoplasty, neck liposuction, facial implants and others, have also evolved positively toward lower risk and better aesthetic outcomes.

  14. Reconocimiento facial

    OpenAIRE

    Urtiaga Abad, Juan Alfonso

    2014-01-01

    This project deals with one of the most challenging fields of artificial intelligence: facial recognition. Something as simple for people as recognizing a familiar face translates into complex algorithms and thousands of data points processed in a matter of seconds. The project begins with a study of the state of the art of the various facial recognition techniques, from the most widely used and proven ones, such as PCA and LDA, to experimental techniques that use ...

  15. Facial Resemblance Exaggerates Sex-Specific Jealousy-Based Decisions

    Directory of Open Access Journals (Sweden)

    Steven M. Platek

    2007-01-01

    Full Text Available Sex differences in reaction to a romantic partner's infidelity are well documented and are hypothesized to be attributable to sex-specific jealousy mechanisms which are utilized to solve adaptive problems associated with the risk of extra-pair copulation. Males, because of the risk of cuckoldry, become more upset by sexual infidelity, while females, because of the loss of resources and biparental investment, tend to become more distressed by emotional infidelity. However, the degree to which these sex-specific reactions to jealousy interact with cues to kin is completely unknown. Here we investigated the interaction of facial resemblance with decisions about sex-specific jealousy scenarios. Fifty-nine volunteers were asked to imagine that two different people (represented by facial composites) informed them about their romantic partner's sexual or emotional infidelity. Consistent with previous research, males ranked sexual infidelity scenarios as most upsetting and females ranked emotional infidelity scenarios most upsetting. However, when information about the infidelity was provided by a face that resembled the subject, sex-specific reactions to jealousy were exaggerated. This finding highlights the use of facial resemblance as a putative self-referent phenotypic matching cue that impacts trusting behavior in sexual contexts.

  16. Spatiotemporal Features for Asynchronous Event-based Data

    Directory of Open Access Journals (Sweden)

    Xavier eLagorce

    2015-02-01

    Full Text Available Bio-inspired asynchronous event-based vision sensors are currently introducing a paradigm shift in visual information processing. These new sensors rely on a stimulus-driven principle of light acquisition similar to biological retinas. They are event-driven and fully asynchronous, thereby reducing redundancy and encoding exact times of input signal changes, leading to a very precise temporal resolution. Approaches for higher-level computer vision often rely on the reliable detection of features in visual frames, but similar definitions of features for the novel dynamic and event-based visual input representation of silicon retinas have so far been lacking. This article addresses the problem of learning and recognizing features for event-based vision sensors, which capture properties of truly spatiotemporal volumes of sparse visual event information. A novel computational architecture for learning and encoding spatiotemporal features is introduced based on a set of predictive recurrent reservoir networks, competing via winner-take-all selection. Features are learned in an unsupervised manner from real-world input recorded with event-based vision sensors. It is shown that the networks in the architecture learn distinct and task-specific dynamic visual features, and can predict their trajectories over time.

  17. Research on Forest Flame Recognition Algorithm Based on Image Feature

    Science.gov (United States)

    Wang, Z.; Liu, P.; Cui, T.

    2017-09-01

    In recent years, fire recognition based on image features has become a hotspot in fire monitoring. However, due to the complexity of the forest environment, the accuracy of forest fire recognition based on image features is low. For this reason, this paper proposes a feature extraction algorithm based on the YCrCb color space and K-means clustering. First, the paper prepares and analyzes the color characteristics of a large number of forest fire image samples. Using the K-means clustering algorithm, a forest flame model is obtained by comparing the two commonly used color spaces, and the suspected flame area is discriminated and extracted. The experimental results show that the extraction accuracy of the flame area based on the YCrCb color model is higher than that of the HSI color model, that the method can be applied to forest fire identification in different scenes, and that it is feasible in practice.

  18. RESEARCH ON FOREST FLAME RECOGNITION ALGORITHM BASED ON IMAGE FEATURE

    Directory of Open Access Journals (Sweden)

    Z. Wang

    2017-09-01

    Full Text Available In recent years, fire recognition based on image features has become a hotspot in fire monitoring. However, due to the complexity of the forest environment, the accuracy of forest fire recognition based on image features is low. For this reason, this paper proposes a feature extraction algorithm based on the YCrCb color space and K-means clustering. First, the paper prepares and analyzes the color characteristics of a large number of forest fire image samples. Using the K-means clustering algorithm, a forest flame model is obtained by comparing the two commonly used color spaces, and the suspected flame area is discriminated and extracted. The experimental results show that the extraction accuracy of the flame area based on the YCrCb color model is higher than that of the HSI color model, that the method can be applied to forest fire identification in different scenes, and that it is feasible in practice.
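
    A minimal sketch of the kind of colour-space rule discussed in the two records above: convert an image to YCrCb with OpenCV and keep bright, strongly red-chromatic pixels as the suspected flame area. The specific thresholds and the synthetic test image are assumptions for illustration, not the values learned by the K-means model in the paper.

        import cv2
        import numpy as np

        def candidate_flame_mask(bgr_image, cr_min=150, cb_max=120):
            # Convert to YCrCb and keep bright, strongly red-chromatic pixels,
            # the usual heuristic behind YCrCb-based flame detection.
            ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
            y, cr, cb = cv2.split(ycrcb)
            mask = (cr > cr_min) & (cb < cb_max) & (y > y.mean())
            return mask.astype(np.uint8) * 255

        if __name__ == "__main__":
            img = np.zeros((64, 64, 3), dtype=np.uint8)
            img[20:40, 20:40] = (0, 80, 255)       # BGR: a saturated red-orange "flame" patch
            mask = candidate_flame_mask(img)
            print(int(mask.sum() / 255), "candidate flame pixels")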

  19. Collaborative Filtering Fusing Label Features Based on SDAE

    DEFF Research Database (Denmark)

    Huo, Huan; Liu, Xiufeng; Zheng, Deyuan

    2017-01-01

    Collaborative filtering (CF) is successfully applied to recommendation systems by digging out the latent features of users and items. However, conventional CF-based models usually suffer from the sparsity of rating matrices, which degrades recommendation performance. To address this sparsity problem, auxiliary information such as labels is utilized. Another approach to recommendation is the content-based model, which cannot be directly integrated with a CF-based model due to its inherent characteristics. Considering that deep learning algorithms are capable of extracting deep latent features ..., this paper applies a Stacked Denoising Auto Encoder (SDAE) to the content-based model and proposes the DLCF (Deep Learning for Collaborative Filtering) algorithm by combining it with a CF-based model that fuses label features. Experiments on real-world data sets show that DLCF can largely overcome the sparsity problem ...

  20. SVM-based glioma grading. Optimization by feature reduction analysis

    International Nuclear Information System (INIS)

    Zoellner, Frank G.; Schad, Lothar R.; Emblem, Kyrre E.; Harvard Medical School, Boston, MA; Oslo Univ. Hospital

    2012-01-01

    We investigated the predictive power of feature reduction analysis approaches in support vector machine (SVM)-based classification of glioma grade. In 101 untreated glioma patients, three analytic approaches were evaluated to derive an optimal reduction in features: (i) Pearson's correlation coefficients (PCC), (ii) principal component analysis (PCA) and (iii) independent component analysis (ICA). Tumor grading was performed using a previously reported SVM approach including whole-tumor cerebral blood volume (CBV) histograms and patient age. Best classification accuracy was found using PCA at 85% (sensitivity = 89%, specificity = 84%) when reducing the feature vector from 101 (100-bins rCBV histogram + age) to 3 principal components. In comparison, classification accuracy by PCC was 82% (89%, 77%, 2 dimensions) and 79% by ICA (87%, 75%, 9 dimensions). For improved speed (up to 30%) and simplicity, feature reduction by all three methods provided similar classification accuracy to literature values (≈87%) while reducing the number of features by up to 98%. (orig.)
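
    The PCA-plus-SVM pipeline described above is straightforward to express with scikit-learn. The sketch below reduces 101-dimensional inputs (standing in for the 100-bin rCBV histogram plus age) to three principal components before a kernel SVM; the synthetic data, the RBF kernel choice and the cross-validation setup are assumptions for illustration, not the study's exact configuration.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.standard_normal((101, 101))      # 101 patients, 100-bin histogram + age
        y = rng.integers(0, 2, size=101)         # low-grade vs. high-grade labels
        X[y == 1, :10] += 0.8                    # inject a weak class difference

        clf = make_pipeline(StandardScaler(), PCA(n_components=3), SVC(kernel="rbf"))
        print("CV accuracy: %.2f" % cross_val_score(clf, X, y, cv=5).mean())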

  1. An emotion-based facial expression word activates laughter module in the human brain: a functional magnetic resonance imaging study.

    Science.gov (United States)

    Osaka, Naoyuki; Osaka, Mariko; Kondo, Hirohito; Morishita, Masanao; Fukuyama, Hidenao; Shibasaki, Hiroshi

    2003-04-10

    We report an fMRI experiment demonstrating that visualization of onomatopoeia, an emotion-based facial expression word, highly suggestive of laughter, heard by the ear, significantly activates both the extrastriate visual cortex near the inferior occipital gyrus and the premotor (PM)/supplementary motor area (SMA) in the superior frontal gyrus while non-onomatopoeic words under the same task that did not imply laughter do not activate these areas in humans. We tested the specific hypothesis that an activation in extrastriate visual cortex and PM/SMA would be modulated by image formation of onomatopoeia implying laughter and found the hypothesis to be true. Copyright 2003 Elsevier Science Ireland Ltd.

  2. Feature-based tolerancing for intelligent inspection process definition

    International Nuclear Information System (INIS)

    Brown, C.W.

    1993-07-01

    This paper describes a feature-based tolerancing capability that complements a geometric solid model with an explicit representation of conventional and geometric tolerances. This capability is focused on supporting an intelligent inspection process definition system. The feature-based tolerance model's benefits include advancing complete product definition initiatives (e.g., STEP -- Standard for Exchange of Product model data), supplying computer-integrated manufacturing applications (e.g., generative process planning and automated part programming) with product definition information, and assisting in the solution of measurement performance issues. A feature-based tolerance information model was developed based upon the notion of a feature's toleranceable aspects and describes an object-oriented scheme for representing and relating tolerance features, tolerances, and datum reference frames. For easy incorporation, the tolerance feature entities are interconnected with STEP solid model entities. This schema will explicitly represent the tolerance specification for mechanical products, support advanced dimensional measurement applications, and assist in tolerance-related methods divergence issues.

  3. Concealing a shiny facial skin appearance by an Aerogel-based formula. In vitro and in vivo studies.

    Science.gov (United States)

    Cassin, G; Diridollou, S; Flament, F; Adam, A S; Pierre, P; Colomb, L; Morancais, J L; Qiu, H

    2018-02-01

    To explore, in vitro and in vivo, the potential interest of an Aerogel-based formula in concealing a naturally shiny facial skin. In vitro, various formulae and ingredients were applied as a thin film onto contrast plates and studied by measuring the shine induced following pump spraying of a mixture of oleic acid and mineral water as a sebum/sweat mix model. In such a test, an Aerogel ingredient led to very positive results. In vivo, two different formulae with various concentrations of Aerogel were randomly tested on one half of the face vs. the bare side in Chinese women, under provocative environmental conditions known to enhance facial shine. These conditions comprised normal activity during a hot and highly humid summer period followed - or not - by a hammam session. Both studies included comparative evaluations using a half-face procedure (treated/untreated or vehicle). In the first case, evaluations were carried out quantitatively, whereas the second was based on quantitative self-evaluations from standardized full-face photographs. RESULTS: In vitro, the tested Aerogel, incorporated at 1% or 2% concentration in a common O/W cosmetic emulsion, shows an immediate light scattering effect, thereby masking shine. This effect appears to be of much higher amplitude than that of two other tested particulate ingredients (Talc and Perlite). A noticeable remanence of the anti-shine effect was confirmed in vivo under extreme conditions. The latter was self-perceived by all participants in the second study. This result is likely related to the super hydrophobic behaviour of the Aerogel. As a cosmetic ingredient, this new Aerogel appears to be highly promising for concealing facial skin shine, a source of complaint for many consumers living in hot and humid regions. © 2017 Society of Cosmetic Scientists and the Société Française de Cosmétologie.

  4. 3-D FEATURE-BASED MATCHING BY RSTG APPROACH

    Directory of Open Access Journals (Sweden)

    J.-J. Jaw

    2012-07-01

    Full Text Available 3-D feature matching is the essential kernel in a fully automated feature-based LiDAR point cloud registration. After feasible procedures of feature acquisition, connecting corresponding features in different data frames is imperative to be solved. The objective addressed in this paper is developing an approach coined RSTG to retrieve corresponding counterparts of unsorted multiple 3-D features extracted from sets of LiDAR point clouds. RSTG stands for the four major processes, "Rotation alignment"; "Scale estimation"; "Translation alignment" and "Geometric check," strategically formulated towards finding out matching solution with high efficiency and leading to accomplishing the 3-D similarity transformation among all sets. The workable types of features to RSTG comprise points, lines, planes and clustered point groups. Each type of features can be employed exclusively or combined with others, if sufficiently supplied, throughout the matching scheme. The paper gives a detailed description of the matching methodology and discusses on the matching effects based on the statistical assessment which revealed that the RSTG approach reached an average matching rate of success up to 93% with around 6.6% of statistical type 1 error. Notably, statistical type 2 error, the critical indicator of matching reliability, was kept 0% throughout all the experiments.
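
    Once corresponding 3-D features have been matched, the rotation, scale and translation making up the final 3-D similarity transformation can be estimated in closed form from matched points. The sketch below uses the standard SVD-based least-squares (Umeyama-style) solution on synthetic correspondences; it is a generic illustration of that last step, not the authors' own estimator, and the point counts and transform parameters are arbitrary.

        import numpy as np

        def similarity_transform(src, dst):
            # Least-squares scale s, rotation R and translation t with dst ≈ s * R @ src + t.
            mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
            src_c, dst_c = src - mu_src, dst - mu_dst
            cov = dst_c.T @ src_c / len(src)
            U, S, Vt = np.linalg.svd(cov)
            d = np.sign(np.linalg.det(U @ Vt))                # guard against reflections
            D = np.diag([1.0, 1.0, d])
            R = U @ D @ Vt
            scale = np.trace(np.diag(S) @ D) / np.mean(np.sum(src_c ** 2, axis=1))
            t = mu_dst - scale * R @ mu_src
            return scale, R, t

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            src = rng.standard_normal((30, 3))                # e.g. matched LiDAR feature points
            angle = np.deg2rad(30)
            R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                               [np.sin(angle),  np.cos(angle), 0.0],
                               [0.0, 0.0, 1.0]])
            dst = 1.7 * src @ R_true.T + np.array([5.0, -2.0, 0.3])
            s, R, t = similarity_transform(src, dst)
            print(round(s, 3), np.round(t, 3))                # recovers the scale and translation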

  5. Some Aspects of Facial Nerve Paralysis

    African Journals Online (AJOL)

    1973-01-06

    Jan 6, 1973 ... births. Facial palsy at birth must be differentiated from agenesis of facial muscles. Trauma: fractures of the base of the skull; facial injuries; penetrating injury of middle ear; and altitude paralysis. Neurologic causes: Landry-Guillain-Barre ascending paralysis; multiple sclerosis; myasthenia gravis; opercular.

  6. Adaptive Feature Based Control of Compact Disk Players

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Stoustrup, Jakob; Vidal, Enrique Sanchez

    2005-01-01

    Many have experienced the problem that their Compact Disc players have difficulties playing Compact Discs with surface faults like scratches and fingerprints. The cause of this is due to the two servo control loops used to keep the Optical Pick-up Unit focused and radially on the information track ... -players playing CDs with surface fault is derived and described. This feature based control scheme uses precomputed base to remove the surface fault influence from the position measurements. In this paper an adaptive version of the feature based control scheme is proposed and described. This adaptive scheme can ... result that the adaptive scheme clearly adapts better to the given faults compared with the non-adaptive version of the feature based control scheme.

  7. Facial soft tissue thickness in North Indian adult population

    Directory of Open Access Journals (Sweden)

    Tanushri Saxena

    2012-01-01

    Full Text Available Objectives: Forensic facial reconstruction is an attempt to reproduce a likeness of the facial features of an individual, based on characteristics of the skull, for the purpose of individual identification. The aim of this study was to determine the soft tissue thickness values of individuals of the Bareilly population, Uttar Pradesh, India, and to evaluate whether these values can help in forensic identification. Study design: A total of 40 individuals (19 males, 21 females) were evaluated using spiral computed tomographic (CT) scans with 2 mm slice thickness in axial sections, and soft tissue thicknesses were measured at seven midfacial anthropological facial landmarks. Results: It was found that facial soft tissue thickness values decreased with age. Soft tissue thickness values were lower in females than in males, except at the ramus region. Differences between the left and right values within individuals were found to be not significant. Conclusion: Soft tissue thickness values are an important factor in facial reconstruction and also help in the forensic identification of an individual. CT scans give a good representation of these values and hence are considered an important tool in facial reconstruction. This study was conducted in a North Indian population, and further studies with larger sample sizes can surely add to the data regarding soft tissue thicknesses.

  8. Attractiveness of facial averageness and symmetry in non-western cultures: in search of biologically based standards of beauty.

    Science.gov (United States)

    Rhodes, G; Yoshikawa, S; Clark, A; Lee, K; McKay, R; Akamatsu, S

    2001-01-01

    Averageness and symmetry are attractive in Western faces and are good candidates for biologically based standards of beauty. A hallmark of such standards is that they are shared across cultures. We examined whether facial averageness and symmetry are attractive in non-Western cultures. Increasing the averageness of individual faces, by warping those faces towards an averaged composite of the same race and sex, increased the attractiveness of both Chinese (experiment 1) and Japanese (experiment 2) faces, for Chinese and Japanese participants, respectively. Decreasing averageness by moving the faces away from an average shape decreased attractiveness. We also manipulated the symmetry of Japanese faces by blending each original face with its mirror image to create perfectly symmetric versions. Japanese raters preferred the perfectly symmetric versions to the original faces (experiment 2). These findings show that preferences for facial averageness and symmetry are not restricted to Western cultures, consistent with the view that they are biologically based. Interestingly, it made little difference whether averageness was manipulated by using own-race or other-race averaged composites and there was no preference for own-race averaged composites over other-race or mixed-race composites (experiment 1). We discuss the implications of these results for understanding what makes average faces attractive. We also discuss some limitations of our studies, and consider other lines of converging evidence that may help determine whether preferences for average and symmetric faces are biologically based.
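
    The symmetry manipulation described above (blending each face with its mirror image) can be illustrated with a few lines of OpenCV, taking a literal pixel-level reading of that operation; the original study's stimulus construction may have involved additional alignment or warping, and the placeholder image below is random data, so this is only a sketch.

        import cv2
        import numpy as np

        def symmetric_version(face_bgr):
            # Blend a face image with its horizontal mirror image in equal parts
            # to produce a (pixelwise) perfectly symmetric version.
            mirrored = cv2.flip(face_bgr, 1)              # flip around the vertical axis
            return cv2.addWeighted(face_bgr, 0.5, mirrored, 0.5, 0)

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            face = rng.integers(0, 256, size=(128, 128, 3), dtype=np.uint8)  # placeholder image
            sym = symmetric_version(face)
            # The result equals its own mirror image up to rounding, i.e. it is symmetric.
            print(np.abs(sym.astype(int) - cv2.flip(sym, 1).astype(int)).max())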

  9. Using PSO-Based Hierarchical Feature Selection Algorithm

    Directory of Open Access Journals (Sweden)

    Zhiwei Ji

    2014-01-01

    Full Text Available Hepatocellular carcinoma (HCC) is one of the most common malignant tumors. Clinical symptoms attributable to HCC are usually absent, so the best therapeutic opportunities are often missed. Traditional Chinese Medicine (TCM) plays an active role in the diagnosis and treatment of HCC. In this paper, we propose a particle swarm optimization-based hierarchical feature selection (PSOHFS) model to infer potential syndromes for the diagnosis of HCC. Firstly, the hierarchical feature representation is developed as a three-layer tree. The clinical symptoms and the positive score of a patient are the leaf nodes and the root of the tree, respectively, while each syndrome feature on the middle layer is extracted from a group of symptoms. Secondly, an improved PSO-based algorithm is applied in a new reduced feature space to search for an optimal syndrome subset. Based on the result of feature selection, the causal relationships of symptoms and syndromes are inferred via Bayesian networks. In our experiment, 147 symptoms were aggregated into 27 groups and 27 syndrome features were extracted. The proposed approach discovered 24 syndromes which clearly improved the diagnosis accuracy. Finally, the Bayesian approach was applied to represent the causal relationships at both the symptom and syndrome levels. The results show that our computational model can facilitate the clinical diagnosis of HCC.

  10. Graph-based unsupervised feature selection and multiview ...

    Indian Academy of Sciences (India)

    2015-09-28

    Sep 28, 2015 ... Biological functional enrichment; clustering; explorative data analysis; feature selection; gene selection; graph-based learning. Published online: 28 September 2015. RFGS: random forest gene selection; SVST: support vector sampling technique; SOM: self-organizing map; GUFS: proposed graph-based ...

  11. AGE CLASSIFICATION BASED ON FEATURES EXTRACTED FROM THIRD ORDER NEIGHBORHOOD LOCAL BINARY PATTERN

    Directory of Open Access Journals (Sweden)

    Pullela S.V.V.S.R. Kumar

    2014-11-01

    Full Text Available The present paper extends the work carried out by Kumar et al. [10] on the Third-order Neighbourhood LBP (TN-LBP) and derives an approach that estimates pattern trends on the outer cell of the TN-LBP. The paper observes that the TN-LBP forms two types of V-patterns on its outer cell, i.e. Outer Right V-Patterns (ORVP) and Outer Left V-Patterns (OLVP). The ORVP and OLVP of the TN-LBP consist of 5 pixels each. The paper derives Grey Level Co-occurrence Matrix (GLCM) features based on the LBP values of the ORVP and OLVP. This GLCM is named ORLVP-GLCM (Outer cell Right and Left V-Patterns GLCM), and from it four features are evaluated to classify humans into child (0 to 12 years), young (13 to 30 years), middle aged (31 to 50 years) and senior adult (above 60 years) groups. The proposed method is tested on FGNET, GOOGLE and scanned facial images and the results are compared with existing methods. The results demonstrate the efficiency of the proposed method over the existing methods.

  12. Art or Science? An Evidence-Based Approach to Human Facial Beauty a Quantitative Analysis Towards an Informed Clinical Aesthetic Practice.

    Science.gov (United States)

    Harrar, Harpal; Myers, Simon; Ghanem, Ali M

    2018-02-01

    Patients often seek guidance from the aesthetic practitioners regarding treatments to enhance their 'beauty'. Is there a science behind the art of assessment and if so is it measurable? Through the centuries, this question has challenged scholars, artists and surgeons. This study aims to undertake a review of the evidence behind quantitative facial measurements in assessing beauty to help the practitioner in everyday aesthetic practice. A Medline, Embase search for beauty, facial features and quantitative analysis was undertaken. Inclusion criteria were studies on adults, and exclusions included studies undertaken for dental, cleft lip, oncology, burns or reconstructive surgeries. The abstracts and papers were appraised, and further studies excluded that were considered inappropriate. The data were extracted using a standardised table. The final dataset was appraised in accordance with the PRISMA checklist and Holland and Rees' critique tools. Of the 1253 studies screened, 1139 were excluded from abstracts and a further 70 excluded from full text articles. The remaining 44 were assessed qualitatively and quantitatively. It became evident that the datasets were not comparable. Nevertheless, common themes were obvious, and these were summarised. Despite measures of the beauty of individual components to the sum of all the parts, such as symmetry and the golden ratio, we are yet far from establishing what truly constitutes quantitative beauty. Perhaps beauty is truly in the 'eyes of the beholder' (and perhaps in the eyes of the subject too). This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .

  13. Methods to Quantify Soft-Tissue Based Facial Growth and Treatment Outcomes in Children: A Systematic Review

    Science.gov (United States)

    Brons, Sander; van Beusichem, Machteld E.; Bronkhorst, Ewald M.; Draaisma, Jos; Bergé, Stefaan J.; Maal, Thomas J.; Kuijpers-Jagtman, Anne Marie

    2012-01-01

    Context Technological advancements have led craniofacial researchers and clinicians into the era of three-dimensional digital imaging for quantitative evaluation of craniofacial growth and treatment outcomes. Objective To give an overview of soft-tissue based methods for quantitative longitudinal assessment of facial dimensions in children until six years of age and to assess the reliability of these methods in studies with good methodological quality. Data Source PubMed, EMBASE, Cochrane Library, Web of Science, Scopus and CINAHL were searched. A hand search was performed to check for additional relevant studies. Study Selection Primary publications on facial growth and treatment outcomes in children younger than six years of age were included. Data Extraction Independent data extraction by two observers. A quality assessment instrument was used to determine the methodological quality. Methods, used in studies with good methodological quality, were assessed for reliability expressed as the magnitude of the measurement error and the correlation coefficient between repeated measurements. Results In total, 47 studies were included describing 4 methods: 2D x-ray cephalometry; 2D photography; anthropometry; 3D imaging techniques (surface laser scanning, stereophotogrammetry and cone beam computed tomography). In general the measurement error was below 1 mm and 1° and correlation coefficients range from 0.65 to 1.0. Conclusion Various methods have shown to be reliable. However, at present stereophotogrammetry seems to be the best 3D method for quantitative longitudinal assessment of facial dimensions in children until six years of age due to its millisecond fast image capture, archival capabilities, high resolution and no exposure to ionizing radiation. PMID:22879898

  14. Feature-based attentional modulation of orientation perception in somatosensation

    Directory of Open Access Journals (Sweden)

    Meike Annika Schweisfurth

    2014-07-01

    Full Text Available In a reaction time study of human tactile orientation detection the effects of spatial attention and feature-based attention were investigated. Subjects had to give speeded responses to target orientations (parallel and orthogonal to the finger axis in a random stream of oblique tactile distractor orientations presented to their index and ring fingers. Before each block of trials, subjects received a tactile cue at one finger. By manipulating the validity of this cue with respect to its location and orientation (feature, we provided an incentive to subjects to attend spatially to the cued location and only there to the cued orientation. Subjects showed quicker responses to parallel compared to orthogonal targets, pointing to an orientation anisotropy in sensory processing. Also, faster reaction times were observed in location-matched trials, i.e. when targets appeared on the cued finger, representing a perceptual benefit of spatial attention. Most importantly, reaction times were shorter to orientations matching the cue, both at the cued and at the uncued location, documenting a global enhancement of tactile sensation by feature-based attention. This is the first report of a perceptual benefit of feature-based attention outside the spatial focus of attention in somatosensory perception. The similarity to effects of feature-based attention in visual perception supports the notion of matching attentional mechanisms across sensory domains.

  15. A biometric identification system based on eigenpalm and eigenfinger features.

    Science.gov (United States)

    Ribaric, Slobodan; Fratric, Ivan

    2005-11-01

    This paper presents a multimodal biometric identification system based on the features of the human hand. We describe a new biometric approach to personal identification using eigenfinger and eigenpalm features, with fusion applied at the matching-score level. The identification process can be divided into the following phases: capturing the image; preprocessing; extracting and normalizing the palm and strip-like finger subimages; extracting the eigenpalm and eigenfinger features based on the K-L transform; matching and fusion; and, finally, a decision based on the (k, l)-NN classifier and thresholding. The system was tested on a database of 237 people (1,820 hand images). The experimental results showed the effectiveness of the system in terms of the recognition rate (100 percent), the equal error rate (EER = 0.58 percent), and the total error rate (TER = 0.72 percent).
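
    The "eigenpalm" and "eigenfinger" features mentioned above are Karhunen-Loève (PCA) projections of the normalized palm and finger subimages. A minimal sketch of that projection step with scikit-learn, followed by nearest-neighbour matching of a probe against enrolled templates, is shown below; the random images, the number of components and the Euclidean matching score are illustrative assumptions rather than the paper's exact settings.

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(0)
        # Stand-ins for normalized grayscale palm subimages, one flattened row per image.
        palms = rng.random((200, 64 * 64))

        pca = PCA(n_components=20).fit(palms)     # the principal axes are the "eigenpalms"
        templates = pca.transform(palms)          # K-L (PCA) feature vector per enrolled image

        # Identification by nearest-neighbour matching in the eigenpalm feature space.
        probe = pca.transform(palms[:1])
        distances = np.linalg.norm(templates - probe, axis=1)
        print(int(np.argmin(distances)))          # index of the best-matching template (0 here)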

  16. Feature-based component model for design of embedded systems

    Science.gov (United States)

    Zha, Xuan Fang; Sriram, Ram D.

    2004-11-01

    An embedded system is a hybrid of hardware and software, which combines software's flexibility and hardware real-time performance. Embedded systems can be considered as assemblies of hardware and software components. An Open Embedded System Model (OESM) is currently being developed at NIST to provide a standard representation and exchange protocol for embedded systems and system-level design, simulation, and testing information. This paper proposes an approach to representing an embedded system feature-based model in OESM, i.e., Open Embedded System Feature Model (OESFM), addressing models of embedded system artifacts, embedded system components, embedded system features, and embedded system configuration/assembly. The approach provides an object-oriented UML (Unified Modeling Language) representation for the embedded system feature model and defines an extension to the NIST Core Product Model. The model provides a feature-based component framework allowing the designer to develop a virtual embedded system prototype through assembling virtual components. The framework not only provides a formal precise model of the embedded system prototype but also offers the possibility of designing variation of prototypes whose members are derived by changing certain virtual components with different features. A case study example is discussed to illustrate the embedded system model.

  17. Automatic seamless image mosaic method based on SIFT features

    Science.gov (United States)

    Liu, Meiying; Wen, Desheng

    2017-02-01

    An automatic seamless image mosaic method based on SIFT features is proposed. First, the scale-invariant feature extraction algorithm SIFT is used for feature extraction and matching, which achieves sub-pixel precision for feature extraction. Then, the transformation matrix H is computed with an improved PROSAC algorithm; compared with the RANSAC algorithm, the computational efficiency is improved and more inliers are obtained. The transformation matrix H is then refined with the LM algorithm. Finally, the image mosaic is completed with a smoothing algorithm. The method runs automatically and avoids the disadvantages of traditional image mosaic methods under different scale and illumination conditions. Experimental results show that the mosaic effect is very good and the algorithm is stable. It is of high value in practice.
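
    A minimal OpenCV sketch of the SIFT-plus-homography pipeline outlined above is given below. It uses OpenCV's plain RANSAC estimator in place of the improved PROSAC and LM refinement described in the record; the ratio test, reprojection threshold and output canvas size are arbitrary, and the image paths are placeholders. SIFT also requires an OpenCV build that includes it.

        import cv2
        import numpy as np

        def stitch_pair(img1, img2, ratio=0.75):
            # Estimate a homography from SIFT matches and warp img2 into img1's frame.
            sift = cv2.SIFT_create()               # needs an OpenCV build with SIFT support
            k1, d1 = sift.detectAndCompute(img1, None)
            k2, d2 = sift.detectAndCompute(img2, None)
            matcher = cv2.BFMatcher(cv2.NORM_L2)
            good = [m for m, n in matcher.knnMatch(d2, d1, k=2)
                    if m.distance < ratio * n.distance]
            if len(good) < 4:
                raise RuntimeError("not enough matches to estimate a homography")
            src = np.float32([k2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
            dst = np.float32([k1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
            h, w = img1.shape[:2]
            return cv2.warpPerspective(img2, H, (2 * w, h))

        # Usage (file names are placeholders):
        #   mosaic = stitch_pair(cv2.imread("left.jpg"), cv2.imread("right.jpg"))
        #   cv2.imwrite("mosaic.jpg", mosaic)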

  18. Feature extraction for deep neural networks based on decision boundaries

    Science.gov (United States)

    Woo, Seongyoun; Lee, Chulhee

    2017-05-01

    Feature extraction is a process used to reduce data dimensions using various transforms while preserving the discriminant characteristics of the original data. Feature extraction has been an important issue in pattern recognition since it can reduce the computational complexity and provide a simplified classifier. In particular, linear feature extraction has been widely used. This method applies a linear transform to the original data to reduce the data dimensions. The decision boundary feature extraction method (DBFE) retains only informative directions for discriminating among the classes. DBFE has been applied to various parametric and non-parametric classifiers, which include the Gaussian maximum likelihood classifier (GML), the k-nearest neighbor classifier, support vector machines (SVM) and neural networks. In this paper, we apply DBFE to deep neural networks. This algorithm is based on the nonparametric version of DBFE, which was developed for neural networks. Experimental results with the UCI database show improved classification accuracy with reduced dimensionality.

  19. Eye movement identification based on accumulated time feature

    Science.gov (United States)

    Guo, Baobao; Wu, Qiang; Sun, Jiande; Yan, Hua

    2017-06-01

    Eye movement is a new kind of feature for biometric recognition, and it has many advantages compared with other features such as fingerprints, faces, and irises. It is not only a static characteristic but also a combination of brain activity and muscle behavior, which makes it effective for preventing spoofing attacks. In addition, eye movements can be incorporated, together with faces, irises and other features recorded from the face region, into multimodal systems. In this paper, we conduct an exploratory study on eye movement identification, based on the eye movement datasets provided by Komogortsev et al. in 2011, with different classification methods. The durations of saccades and fixations are extracted from the eye movement data as the eye movement features. Furthermore, a performance analysis was conducted for different classification methods, such as BP, RBF, Elman and SVM classifiers, in order to provide a reference for future research in this field.
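
    Saccade and fixation times of the kind used as features above are commonly obtained with a simple velocity-threshold (I-VT) segmentation of the gaze trace. The sketch below is a generic illustration of that idea on a synthetic trace; the sampling rate, velocity threshold and units are assumptions and are not taken from the record.

        import numpy as np

        def fixation_saccade_durations(gaze_xy, fs=250.0, vel_threshold=30.0):
            # Split a gaze trace into fixation and saccade samples with a simple
            # velocity threshold (I-VT) and return the total time spent in each state.
            velocity = np.linalg.norm(np.diff(gaze_xy, axis=0), axis=1) * fs   # deg/s if input is in degrees
            is_saccade = velocity > vel_threshold
            saccade_time = is_saccade.sum() / fs
            fixation_time = (~is_saccade).sum() / fs
            return fixation_time, saccade_time

        if __name__ == "__main__":
            fs = 250.0
            fix1 = np.tile([10.0, 10.0], (100, 1))                 # 0.4 s fixation
            sacc = np.linspace([10.0, 10.0], [20.0, 10.0], 10)     # fast 10-degree jump
            fix2 = np.tile([20.0, 10.0], (150, 1))                 # 0.6 s fixation
            trace = np.vstack([fix1, sacc, fix2]) + 0.01 * np.random.default_rng(0).standard_normal((260, 2))
            print(fixation_saccade_durations(trace, fs))           # roughly (1.0 s, 0.04 s)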

  20. Missile placement analysis based on improved SURF feature matching algorithm

    Science.gov (United States)

    Yang, Kaida; Zhao, Wenjie; Li, Dejun; Gong, Xiran; Sheng, Qian

    2015-03-01

    Precise battle damage assessment using video images to analyze missile placement is a new study area. The article proposes an improved speeded-up robust features algorithm, named restricted speeded-up robust features, which combines the combat application of TV-command-guided missiles with the characteristics of video images. Its restrictions are mainly reflected in two aspects: the first is to restrict the extraction area of feature points; the second is to restrict the number of feature points. The process of missile placement analysis based on video images was designed, and a video splicing process and random sample consensus purification were achieved. The RSURF algorithm is shown to have good real-time performance while guaranteeing accuracy.

  1. Automatic Human Facial Expression Recognition Based on Integrated Classifier From Monocular Video with Uncalibrated Camera

    Directory of Open Access Journals (Sweden)

    Yu Tao

    2017-01-01

    Full Text Available An automatic recognition framework for human facial expressions from a monocular video with an uncalibrated camera is proposed. The expression characteristics are first acquired from a kind of deformable template, similar to a facial muscle distribution. After associated regularization, the time sequences from the trait changes in space-time under complete expressional production are then arranged line by line in a matrix. Next, the matrix dimensionality is reduced by a method of manifold learning of neighborhood-preserving embedding. Finally, the refined matrix containing the expression trait information is recognized by a classifier that integrates the hidden conditional random field (HCRF and support vector machine (SVM. In an experiment using the Cohn–Kanade database, the proposed method showed a comparatively higher recognition rate than the individual HCRF or SVM methods in direct recognition from two-dimensional human face traits. Moreover, the proposed method was shown to be more robust than the typical Kotsia method because the former contains more structural characteristics of the data to be classified in space-time

  2. Fault Features Extraction and Identification based Rolling Bearing Fault Diagnosis

    International Nuclear Information System (INIS)

    Qin, B; Sun, G D; Zhang L Y; Wang J G; HU, J

    2017-01-01

    For a fault classification model based on the extreme learning machine (ELM), the diagnosis accuracy and stability for rolling bearings are greatly influenced by a critical parameter, the number of nodes in the hidden layer of the ELM. An adaptive adjustment strategy based on variational mode decomposition, permutation entropy, and the kernel extreme learning machine is proposed to determine this tunable parameter. First, the vibration signals are measured and then decomposed into different fault feature modes by variational mode decomposition. The fault features of each mode are then formed into a high-dimensional feature vector set based on permutation entropy. Second, the ELM output function is expressed through the inner product of a Gaussian kernel function to adaptively determine the number of hidden layer nodes. Finally, the high-dimensional feature vector set is used as the input to establish the kernel ELM rolling bearing fault classification model, and the classification and identification of different fault states of rolling bearings are carried out. In comparison with fault classification methods based on the support vector machine and the ELM, the experimental results show that the proposed method has higher classification accuracy and better generalization ability. (paper)
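
    Permutation entropy, used above to turn each decomposed mode into a feature, measures how evenly the ordinal patterns of consecutive samples are distributed. The sketch below is a generic NumPy implementation under the usual definition; the embedding order, delay and test signals are arbitrary choices, not values from the paper. A smooth signal yields a low value and white noise a value close to 1.

        import math
        from collections import Counter

        import numpy as np

        def permutation_entropy(signal, order=3, delay=1):
            # Normalized permutation entropy of a 1-D signal (0 = fully regular, 1 = random).
            patterns = Counter()
            n = len(signal) - (order - 1) * delay
            for i in range(n):
                window = signal[i:i + order * delay:delay]
                patterns[tuple(np.argsort(window))] += 1     # ordinal pattern of the window
            probs = np.array(list(patterns.values()), dtype=float) / n
            entropy = -np.sum(probs * np.log2(probs))
            return entropy / math.log2(math.factorial(order))

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            t = np.linspace(0, 1, 2000)
            smooth = np.sin(2 * np.pi * 5 * t)               # regular signal -> low entropy
            noise = rng.standard_normal(2000)                # white noise   -> entropy near 1
            print(round(permutation_entropy(smooth), 3), round(permutation_entropy(noise), 3))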

  3. Separable mechanisms underlying global feature-based attention.

    Science.gov (United States)

    Bondarenko, Rowena; Boehler, Carsten N; Stoppel, Christian M; Heinze, Hans-Jochen; Schoenfeld, Mircea A; Hopf, Jens-Max

    2012-10-31

    Feature-based attention is known to operate in a spatially global manner, in that the selection of attended features is not bound to the spatial focus of attention. Here we used electromagnetic recordings in human observers to characterize the spatiotemporal signature of such global selection of an orientation feature. Observers performed a simple orientation-discrimination task while ignoring task-irrelevant orientation probes outside the focus of attention. We observed that global feature-based selection, indexed by the brain response to unattended orientation probes, is composed of separable functional components. One such component reflects global selection based on the similarity of the probe with task-relevant orientation values ("template matching"), which is followed by a component reflecting selection based on the similarity of the probe with the orientation value under discrimination in the focus of attention ("discrimination matching"). Importantly, template matching occurs at ∼150 ms after stimulus onset, ∼80 ms before the onset of discrimination matching. Moreover, source activity underlying template matching and discrimination matching was found to originate from ventral extrastriate cortex, with the former being generated in more anterolateral and the latter in more posteromedial parts, suggesting template matching to occur in visual cortex higher up in the visual processing hierarchy than discrimination matching. We take these observations to indicate that the population-level signature of global feature-based selection reflects a sequence of hierarchically ordered operations in extrastriate visual cortex, in which the selection based on task relevance has temporal priority over the selection based on the sensory similarity between input representations.

  4. Auditory-model based robust feature selection for speech recognition.

    Science.gov (United States)

    Koniaris, Christos; Kuropatwinski, Marcin; Kleijn, W Bastiaan

    2010-02-01

    It is shown that robust dimension-reduction of a feature set for speech recognition can be based on a model of the human auditory system. Whereas conventional methods optimize classification performance, the proposed method exploits knowledge implicit in the auditory periphery, inheriting its robustness. Features are selected to maximize the similarity of the Euclidean geometry of the feature domain and the perceptual domain. Recognition experiments using mel-frequency cepstral coefficients (MFCCs) confirm the effectiveness of the approach, which does not require labeled training data. For noisy data the method outperforms commonly used discriminant-analysis based dimension-reduction methods that rely on labeling. The results indicate that selecting MFCCs in their natural order results in subsets with good performance.

  5. A Distributed Feature-based Environment for Collaborative Design

    Directory of Open Access Journals (Sweden)

    Wei-Dong Li

    2003-02-01

    Full Text Available This paper presents a client/server design environment based on 3D feature-based modelling and Java technologies to enable design information to be shared efficiently among members within a design team. In this environment, design tasks and clients are organised through working sessions generated and maintained by a collaborative server. The information from an individual design client during a design process is updated and broadcast to other clients in the same session through an event-driven and call-back mechanism. The downstream manufacturing analysis modules can be wrapped as agents and plugged into the open environment to support the design activities. At the server side, a feature-feature relationship is established and maintained to filter the varied information of a working part, so as to facilitate efficient information update during the design process.

  6. Dependence of the appearance-based perception of criminality, suggestibility, and trustworthiness on the level of pixelation of facial images.

    Science.gov (United States)

    Nurmoja, Merle; Eamets, Triin; Härma, Hanne-Loore; Bachmann, Talis

    2012-10-01

    While the dependence of face identification on the level of pixelation of facial images has been well studied, similar research on face-based trait perception is underdeveloped. Because depiction formats used for hiding individual identity in visual media and in evidential material recorded by surveillance cameras often consist of pixelized images, knowing the effects of pixelation on person perception has practical relevance. Here, the results of two experiments are presented showing the effect of facial image pixelation on the perception of criminality, trustworthiness, and suggestibility. It appears that individuals (N = 46, M age = 21.5 yr., SD = 3.1 for criminality ratings; N = 94, M age = 27.4 yr., SD = 10.1 for other ratings) are able to discriminate facial cues indicative of these perceived traits even at a coarse level of image pixelation (10-12 pixels per face horizontally), and that discriminability increases as the coarseness of pixelation decreases. Perceived criminality and trustworthiness appear to be better conveyed by the pixelized images than perceived suggestibility.

  7. [Facial injections of hyaluronic acid-based fillers for malformations. Preliminary study regarding scar tissue improvement and cosmetic betterment].

    Science.gov (United States)

    Franchi, G; Neiva-Vaz, C; Picard, A; Vazquez, M-P

    2018-02-02

    Cross-linked hyaluronic acid-based fillers have gained rapid acceptance for treating facial wrinkles, deep tissue folds and sunken areas due to aging. This study evaluates, in addition to their space-filling properties, their secondary effects on softness and elasticity following injection of three commercially available cross-linked hyaluronic acid-based fillers (15 mg/mL, 17.5 mg/mL and 20 mg/mL) in patients presenting with congenital or acquired facial malformations. We started injecting cross-linked hyaluronic acid-based filler gels in these cases in 2013; we performed 46 injection sessions in 32 patients, aged 13 to 32 years. Clinical assessment was performed by the patients themselves and by a plastic surgeon, 15 days after the injections and again 6-18 months later. Cross-linked hyaluronic acid-based fillers offered very subtle cosmetic results and supplemented surgery with a very high level of patient satisfaction. When injected into fibrosis, the first session enhanced softness and elasticity; the second session enhanced the volume. Cross-linked hyaluronic acid-based fillers fill sunken areas and improve the softness and elasticity of scar tissues. In addition to their well-understood space-filling function, the authors demonstrate that, as a secondary effect, cross-linked hyaluronic acid-based fillers improve the softness and elasticity of scarring tissues. Many experimental studies support our observations, showing that cross-linked hyaluronic acid stimulates the production of several extracellular matrix components, including dermal collagen and elastin. Copyright © 2018 Elsevier Masson SAS. All rights reserved.

  8. An expert botanical feature extraction technique based on phenetic features for identifying plant species.

    Directory of Open Access Journals (Sweden)

    Hoshang Kolivand

    Full Text Available In this paper, we present a new method to recognise the leaf type and identify plant species using phenetic parts of the leaf: lobes, apex and base detection. Most research in this area focuses on popular features such as shape, colour, vein, and texture, which consume large amounts of computational processing and are not efficient, especially on the Acer database with its highly complex leaf structures. This paper focuses on phenetic parts of the leaf, which increases accuracy. Detection of the local maxima and local minima is performed using the Centroid Contour Distance for every boundary point, with north and south regions used to recognise the apex and base. Digital morphology is used to measure the leaf shape and the leaf margin. The Centroid Contour Gradient is presented to extract the curvature of the leaf apex and base. We analyse 32 leaf images of tropical plants and evaluate the method on two different datasets, Flavia and Acer. The best accuracies obtained are 94.76% and 82.6%, respectively. Experimental results show the effectiveness of the proposed technique without relying on the commonly used features with high computational cost.
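
    As an illustration of the Centroid Contour Distance idea, the following hedged sketch computes the distance from the shape centroid to every boundary point and marks local extrema as apex/lobe candidates; the function name and parameters are assumptions, and a synthetic contour stands in for a real leaf boundary.

        import numpy as np
        from scipy.signal import argrelextrema

        def centroid_contour_distance(contour):
            """Distance from the shape centroid to every boundary point (CCD sketch).

            `contour` is an (N, 2) array of boundary coordinates, e.g. from cv2.findContours.
            """
            contour = np.asarray(contour, dtype=float)
            centroid = contour.mean(axis=0)
            return np.linalg.norm(contour - centroid, axis=1)

        # Local maxima of the CCD curve suggest lobe tips / apex candidates,
        # local minima suggest the sinuses between lobes.
        contour = np.array([[np.cos(t) * (1 + 0.3 * np.cos(5 * t)),
                             np.sin(t) * (1 + 0.3 * np.cos(5 * t))]
                            for t in np.linspace(0, 2 * np.pi, 360)])
        ccd = centroid_contour_distance(contour)
        maxima = argrelextrema(ccd, np.greater, order=5)[0]
        minima = argrelextrema(ccd, np.less, order=5)[0]
        print(len(maxima), len(minima))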

  9. An expert botanical feature extraction technique based on phenetic features for identifying plant species.

    Science.gov (United States)

    Kolivand, Hoshang; Fern, Bong Mei; Rahim, Mohd Shafry Mohd; Sulong, Ghazali; Baker, Thar; Tully, David

    2018-01-01

    In this paper, we present a new method to recognise the leaf type and identify plant species using phenetic parts of the leaf: lobes, apex and base detection. Most research in this area focuses on popular features such as shape, colour, vein, and texture, which consume large amounts of computational processing and are not efficient, especially on the Acer database with its highly complex leaf structures. This paper focuses on phenetic parts of the leaf, which increases accuracy. Detection of the local maxima and local minima is performed using the Centroid Contour Distance for every boundary point, with north and south regions used to recognise the apex and base. Digital morphology is used to measure the leaf shape and the leaf margin. The Centroid Contour Gradient is presented to extract the curvature of the leaf apex and base. We analyse 32 leaf images of tropical plants and evaluate the method on two different datasets, Flavia and Acer. The best accuracies obtained are 94.76% and 82.6%, respectively. Experimental results show the effectiveness of the proposed technique without relying on the commonly used features with high computational cost.

  10. An expert botanical feature extraction technique based on phenetic features for identifying plant species

    Science.gov (United States)

    Fern, Bong Mei; Rahim, Mohd Shafry Mohd; Sulong, Ghazali; Baker, Thar; Tully, David

    2018-01-01

    In this paper, we present a new method to recognise the leaf type and identify plant species using phenetic parts of the leaf: lobes, apex and base detection. Most research in this area focuses on popular features such as shape, colour, vein, and texture, which consume large amounts of computational processing and are not efficient, especially on the Acer database with its highly complex leaf structures. This paper focuses on phenetic parts of the leaf, which increases accuracy. Detection of the local maxima and local minima is performed using the Centroid Contour Distance for every boundary point, with north and south regions used to recognise the apex and base. Digital morphology is used to measure the leaf shape and the leaf margin. The Centroid Contour Gradient is presented to extract the curvature of the leaf apex and base. We analyse 32 leaf images of tropical plants and evaluate the method on two different datasets, Flavia and Acer. The best accuracies obtained are 94.76% and 82.6%, respectively. Experimental results show the effectiveness of the proposed technique without relying on the commonly used features with high computational cost. PMID:29420568

  11. Sequence-based classification using discriminatory motif feature selection.

    Directory of Open Access Journals (Sweden)

    Hao Xiong

    Full Text Available Most existing methods for sequence-based classification use exhaustive feature generation, employing, for example, all k-mer patterns. The motivation behind such (enumerative) approaches is to minimize the potential for overlooking important features. However, there are shortcomings to this strategy. First, practical constraints limit the scope of exhaustive feature generation to patterns of length ≤ k, such that potentially important, longer (> k) predictors are not considered. Second, features so generated exhibit strong dependencies, which can complicate understanding of derived classification rules. Third, and most importantly, numerous irrelevant features are created. These concerns can compromise prediction and interpretation. While remedies have been proposed, they tend to be problem-specific and not broadly applicable. Here, we develop a generally applicable methodology, and an attendant software pipeline, that is predicated on discriminatory motif finding. In addition to the traditional training and validation partitions, our framework entails a third level of data partitioning, a discovery partition. A discriminatory motif finder is used on sequences and associated class labels in the discovery partition to yield a (small) set of features. These features are then used as inputs to a classifier in the training partition. Finally, performance assessment occurs on the validation partition. Important attributes of our approach are its modularity (any discriminatory motif finder and any classifier can be deployed) and its universality (all data, including sequences that are unaligned and/or of unequal length, can be accommodated). We illustrate our approach on two nucleosome occupancy datasets and a protein solubility dataset, previously analyzed using enumerative feature generation. Our method achieves excellent performance results, with and without optimization of classifier tuning parameters. A Python pipeline implementing the approach is

  12. Feature-Based Statistical Analysis of Combustion Simulation Data

    Energy Technology Data Exchange (ETDEWEB)

    Bennett, J; Krishnamoorthy, V; Liu, S; Grout, R; Hawkes, E; Chen, J; Pascucci, V; Bremer, P T

    2011-11-18

    We present a new framework for feature-based statistical analysis of large-scale scientific data and demonstrate its effectiveness by analyzing features from Direct Numerical Simulations (DNS) of turbulent combustion. Turbulent flows are ubiquitous and account for transport and mixing processes in combustion, astrophysics, fusion, and climate modeling among other disciplines. They are also characterized by coherent structure or organized motion, i.e. nonlocal entities whose geometrical features can directly impact molecular mixing and reactive processes. While traditional multi-point statistics provide correlative information, they lack nonlocal structural information, and hence, fail to provide mechanistic causality information between organized fluid motion and mixing and reactive processes. Hence, it is of great interest to capture and track flow features and their statistics together with their correlation with relevant scalar quantities, e.g. temperature or species concentrations. In our approach we encode the set of all possible flow features by pre-computing merge trees augmented with attributes, such as statistical moments of various scalar fields, e.g. temperature, as well as length-scales computed via spectral analysis. The computation is performed in an efficient streaming manner in a pre-processing step and results in a collection of meta-data that is orders of magnitude smaller than the original simulation data. This meta-data is sufficient to support a fully flexible and interactive analysis of the features, allowing for arbitrary thresholds, providing per-feature statistics, and creating various global diagnostics such as Cumulative Density Functions (CDFs), histograms, or time-series. We combine the analysis with a rendering of the features in a linked-view browser that enables scientists to interactively explore, visualize, and analyze the equivalent of one terabyte of simulation data. We highlight the utility of this new framework for combustion
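
    The hedged sketch below is not the merge-tree implementation described above; it only illustrates the general notion of per-feature statistics by thresholding a toy scalar field, labeling connected regions with SciPy, and computing a statistic per region. The threshold and field are assumptions for illustration.

        import numpy as np
        from scipy import ndimage

        # Toy 2-D scalar field standing in for one timestep of a simulation variable.
        rng = np.random.default_rng(1)
        field = ndimage.gaussian_filter(rng.standard_normal((256, 256)), sigma=8)

        # "Features" here are simply connected regions above a threshold; the paper's
        # merge trees make such queries possible for arbitrary thresholds without recomputation.
        threshold = 0.05
        labels, n_features = ndimage.label(field > threshold)
        idx = range(1, n_features + 1)
        sizes = ndimage.sum(np.ones_like(field), labels, index=idx)   # area of each feature
        means = ndimage.mean(field, labels, index=idx)                # mean value per feature
        print(n_features, np.max(sizes), np.round(np.mean(means), 3))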

  13. Feature-based morphometry: discovering group-related anatomical patterns.

    Science.gov (United States)

    Toews, Matthew; Wells, William; Collins, D Louis; Arbel, Tal

    2010-02-01

    This paper presents feature-based morphometry (FBM), a new fully data-driven technique for discovering patterns of group-related anatomical structure in volumetric imagery. In contrast to most morphometry methods which assume one-to-one correspondence between subjects, FBM explicitly aims to identify distinctive anatomical patterns that may only be present in subsets of subjects, due to disease or anatomical variability. The image is modeled as a collage of generic, localized image features that need not be present in all subjects. Scale-space theory is applied to analyze image features at the characteristic scale of underlying anatomical structures, instead of at arbitrary scales such as global or voxel-level. A probabilistic model describes features in terms of their appearance, geometry, and relationship to subject groups, and is automatically learned from a set of subject images and group labels. Features resulting from learning correspond to group-related anatomical structures that can potentially be used as image biomarkers of disease or as a basis for computer-aided diagnosis. The relationship between features and groups is quantified by the likelihood of feature occurrence within a specific group vs. the rest of the population, and feature significance is quantified in terms of the false discovery rate. Experiments validate FBM clinically in the analysis of normal (NC) and Alzheimer's (AD) brain images using the freely available OASIS database. FBM automatically identifies known structural differences between NC and AD subjects in a fully data-driven fashion, and an equal error classification rate of 0.80 is achieved for subjects aged 60-80 years exhibiting mild AD (CDR=1). Copyright (c) 2009 Elsevier Inc. All rights reserved.

  14. Colesteatoma causando paralisia facial Cholesteatoma causing facial paralysis

    Directory of Open Access Journals (Sweden)

    José Ricardo Gurgel Testa

    2003-10-01

    blood supply or production of neurotoxic substances secreted from either the cholesteatoma matrix or bacteria enclosed in the tumor. AIM: To evaluate the incidence, clinical features and treatment of facial palsy due to cholesteatoma. STUDY DESIGN: Clinical retrospective. MATERIAL AND METHOD: Retrospective study of 10 cases of facial paralysis due to cholesteatoma, selected through a survey of 206 decompressions of the facial nerve due to various aetiologies performed over the last 10 years at UNIFESP-EPM. RESULTS: The incidence of facial paralysis due to cholesteatoma in this study was 4.85%, with female predominance (60%). The average age of the patients was 39 years. The duration and severity of the facial palsy, together with the extension of the lesion, were important for the functional recovery of the facial nerve. CONCLUSION: An early surgical approach is necessary in these cases to restore nerve function more adequately. When disruption or intense fibrous replacement of the facial nerve occurs, nerve grafting (greater auricular/sural nerves) and/or hypoglossal-facial anastomosis may be suggested.

  15. Effect of Feature Dimensionality on Object-based Land Cover ...

    African Journals Online (AJOL)

    Myburgh, G, Mnr

    Effect of Feature Dimensionality on Object-based Land Cover Classification: A Comparison of Three .... Argialas, 2008), GEOBIA is generally more sensitive to the Hughes effect when statistical classifiers are used. Support .... Area, asymmetry, border length, compactness, density, length, length/width (22), main direction, ...

  16. Image Recommendation Algorithm Using Feature-Based Collaborative Filtering

    Science.gov (United States)

    Kim, Deok-Hwan

    As the multimedia contents market continues its rapid expansion, the amount of image contents used in mobile phone services, digital libraries, and catalog service is increasing remarkably. In spite of this rapid growth, users experience high levels of frustration when searching for the desired image. Even though new images are profitable to the service providers, traditional collaborative filtering methods cannot recommend them. To solve this problem, in this paper, we propose feature-based collaborative filtering (FBCF) method to reflect the user's most recent preference by representing his purchase sequence in the visual feature space. The proposed approach represents the images that have been purchased in the past as the feature clusters in the multi-dimensional feature space and then selects neighbors by using an inter-cluster distance function between their feature clusters. Various experiments using real image data demonstrate that the proposed approach provides a higher quality recommendation and better performance than do typical collaborative filtering and content-based filtering techniques.
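
    A minimal sketch of the idea, assuming scikit-learn and synthetic feature vectors: each user's purchased-image features are summarized by cluster centroids, and users are compared by an inter-cluster distance. The particular distance used here is an assumption for illustration, not necessarily the paper's definition.

        import numpy as np
        from sklearn.cluster import KMeans

        def user_clusters(feature_vectors, k=3):
            """Cluster one user's purchased-image features into k centroids."""
            km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(feature_vectors)
            return km.cluster_centers_

        def inter_cluster_distance(centers_a, centers_b):
            """Average nearest-centroid distance between two users' cluster sets."""
            d = np.linalg.norm(centers_a[:, None, :] - centers_b[None, :, :], axis=2)
            return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

        rng = np.random.default_rng(0)
        user_a = user_clusters(rng.normal(0.0, 1.0, (40, 16)))
        user_b = user_clusters(rng.normal(0.5, 1.0, (35, 16)))
        print(inter_cluster_distance(user_a, user_b))  # smaller = more similar neighbours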

  17. Digital video steganalysis using motion vector recovery-based features.

    Science.gov (United States)

    Deng, Yu; Wu, Yunjie; Zhou, Linna

    2012-07-10

    As a novel digital video steganography, the motion vector (MV)-based steganographic algorithm leverages the MVs as the information carriers to hide the secret messages. The existing steganalyzers based on the statistical characteristics of the spatial/frequency coefficients of the video frames cannot attack the MV-based steganography. In order to detect the presence of information hidden in the MVs of video streams, we design a novel MV recovery algorithm and propose the calibration distance histogram-based statistical features for steganalysis. The support vector machine (SVM) is trained with the proposed features and used as the steganalyzer. Experimental results demonstrate that the proposed steganalyzer can effectively detect the presence of hidden messages and outperform others by the significant improvements in detection accuracy even with low embedding rates.

  18. An opinion formation based binary optimization approach for feature selection

    Science.gov (United States)

    Hamedmoghadam, Homayoun; Jalili, Mahdi; Yu, Xinghuo

    2018-02-01

    This paper proposes a novel optimization method based on opinion formation in complex network systems. The proposed optimization technique mimics the human-human interaction mechanism based on a mathematical model derived from the social sciences. Our method encodes a subset of selected features as the opinion of an artificial agent and simulates the opinion formation process among a population of agents to solve the feature selection problem. The agents interact through an underlying interaction network structure and reach consensus in their opinions while finding better solutions to the problem. A number of mechanisms are employed to avoid getting trapped in local minima. We compare the performance of the proposed method with a number of classical population-based optimization methods and a state-of-the-art opinion formation based method. Our experiments on a number of high-dimensional datasets show that the proposed algorithm outperforms the others.

  19. Semantics of cardinality-based service feature diagrams based on linear logic

    Directory of Open Access Journals (Sweden)

    Ghulam Mustafa Assad

    2015-12-01

    Full Text Available To provide efficient services to end-users, it is essential to manage variability among services. Feature modeling is an important approach to manage the variability and commonalities of a system in a product line. Feature models are composed of feature diagrams. Service feature diagrams (an extended form of feature diagrams) introduce some new notations beyond classical feature diagrams and provide selection rights for variable features. In our previous work, we introduced cardinalities for the selection of features from a service feature diagram, which we call cardinality-based service feature diagrams (CSFD). In this paper, we provide semantics for CSFDs. These semantics are backed by the formal calculus of linear logic. We provide rules to interpret CSFDs as linear logic formulas. Our results show that the linear formulas of CSFDs give the same results as expected from the CSFDs.

  20. Annotation-based feature extraction from sets of SBML models.

    Science.gov (United States)

    Alm, Rebekka; Waltemath, Dagmar; Wolfien, Markus; Wolkenhauer, Olaf; Henkel, Ron

    2015-01-01

    Model repositories such as BioModels Database provide computational models of biological systems for the scientific community. These models contain rich semantic annotations that link model entities to concepts in well-established bio-ontologies such as Gene Ontology. Consequently, thematically similar models are likely to share similar annotations. Based on this assumption, we argue that semantic annotations are a suitable tool to characterize sets of models. These characteristics improve model classification, allow additional features to be identified for model retrieval tasks, and enable the comparison of sets of models. In this paper we discuss four methods for annotation-based feature extraction from model sets. We tested all methods on sets of models in SBML format which were composed from BioModels Database. To characterize each of these sets, we analyzed and extracted concepts from three frequently used ontologies, namely Gene Ontology, ChEBI and SBO. We find that three out of the four methods are suitable to determine characteristic features for arbitrary sets of models: the selected features vary depending on the underlying model set, and they are also specific to the chosen model set. We show that the identified features map to concepts that are higher up in the hierarchy of the ontologies than the concepts used for model annotations. Our analysis also reveals that the information content of concepts in ontologies and their usage for model annotation do not correlate. Annotation-based feature extraction enables the comparison of model sets, as opposed to existing methods for model-to-keyword comparison, or model-to-model comparison.

  1. Individual discriminative face recognition models based on subsets of features

    DEFF Research Database (Denmark)

    Clemmensen, Line Katrine Harder; Gomez, David Delgado; Ersbøll, Bjarne Kjær

    2007-01-01

    The accuracy of data classification methods depends considerably on the data representation and on the selected features. In this work, the elastic net model selection is used to identify meaningful and important features in face recognition. Modelling the characteristics which distinguish one...... selection techniques such as forward selection or lasso regression become inadequate. In the experimental section, the performance of the elastic net model is compared with geometrical and color based algorithms widely used in face recognition such as Procrustes nearest neighbor, Eigenfaces, or Fisher...

  2. Feature Recognition of Froth Images Based on Energy Distribution Characteristics

    Directory of Open Access Journals (Sweden)

    WU Yanpeng

    2014-09-01

    Full Text Available This paper proposes an algorithm for determining froth image features based on amplitude-spectrum energy statistics, applying the Fast Fourier Transform to analyze the energy distribution of various-sized froth. The proposed algorithm has been used to perform a froth feature analysis of froth images from an alumina flotation processing site, and the results show that the consistency rate reaches 98.1% and the usability rate 94.2%; with its good robustness and high efficiency, the algorithm is well suited to flotation processing state recognition.
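
    The following sketch, assuming NumPy and a synthetic image, illustrates amplitude-spectrum energy statistics of the kind described: the 2-D FFT magnitude is accumulated in radial frequency bands, whose relative energies reflect the froth size distribution. The band count and normalization are assumptions, not the paper's exact statistics.

        import numpy as np

        def spectrum_energy_features(gray_image, n_rings=4):
            """Relative amplitude-spectrum energy in radial frequency bands (sketch)."""
            spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray_image)))
            h, w = gray_image.shape
            yy, xx = np.mgrid[0:h, 0:w]
            radius = np.hypot(yy - h / 2.0, xx - w / 2.0)
            edges = np.linspace(0.0, radius.max() + 1e-9, n_rings + 1)
            energy = np.array([np.sum(spectrum[(radius >= lo) & (radius < hi)] ** 2)
                               for lo, hi in zip(edges[:-1], edges[1:])])
            return energy / energy.sum()   # low bands ~ coarse froth, high bands ~ fine froth

        rng = np.random.default_rng(2)
        print(spectrum_energy_features(rng.random((128, 128))))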

  3. Influence of gravity upon some facial signs.

    Science.gov (United States)

    Flament, F; Bazin, R; Piot, B

    2015-06-01

    Facial clinical signs and their integration are the basis of the perception that others have of us, notably the age they imagine we are. Objective measurement of facial modifications in motion, before and after application of a skin regimen, is essential for extending our capacity to describe efficacy in facial dynamics. Quantifying facial modifications with respect to gravity will allow us to address the 'control' of facial shape in daily activities. Standardized photographs of the faces of 30 Caucasian female subjects of various ages (24-73 years) were successively taken in upright and supine positions within a short time interval. All these pictures were then reframed - avoiding any bias due to facial features when evaluating a single sign - for clinical rating of several facial signs by trained experts using published standardized photographic scales. For all subjects, the supine position increased facial width but not height, giving a fuller appearance to the face. More importantly, the supine position changed the severity of facial ageing features (e.g. wrinkles) compared to the upright position, and whether these features were attenuated or exacerbated depended on their facial location. The supine position mostly modifies signs of the lower half of the face, whereas those of the upper half appear unchanged or slightly accentuated. These changes appear much more marked in the older groups, where some deep labial folds almost vanish. These alterations decreased the perceived ages of the subjects by an average of 3.8 years. Although preliminary, this study suggests that a 90° rotation of the facial skin with respect to gravity induces rapid rearrangements, among which changes in tensional forces within and across the face, motility of interstitial free water in the underlying skin tissue and/or alterations of facial Langer lines likely play a significant role. © 2015 Society of Cosmetic Scientists and the Société Fran

  4. Fuzzy based finger vein recognition with rotation invariant feature matching

    Science.gov (United States)

    Ezhilmaran, D.; Joseph, Rose Bindu

    2017-11-01

    Finger vein recognition is a promising biometric with commercial applications that has been explored widely in recent years. In this paper, a finger vein recognition system is proposed that uses rotation-invariant feature descriptors for matching after enhancing the finger vein images with an interval type-2 fuzzy method. SIFT features are extracted and matched using a matching score based on Euclidean distance. The rotation invariance of the proposed method is verified experimentally, and the results are compared with SURF matching and minutiae matching. It is seen that rotation invariance holds and that poor image quality issues are handled efficiently by the designed finger vein recognition system. The experiments underline the robustness and reliability of the interval type-2 fuzzy enhancement and SIFT feature matching.
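
    A hedged OpenCV sketch of the SIFT extraction and Euclidean-distance matching stage is shown below; the file paths are placeholders, the interval type-2 fuzzy enhancement step is omitted, and the ratio-test score is only one plausible way to form a matching score, not necessarily the paper's.

        import cv2

        # Grayscale finger-vein images (paths are placeholders); enhancement is omitted here.
        img1 = cv2.imread("vein_enrolled.png", cv2.IMREAD_GRAYSCALE)
        img2 = cv2.imread("vein_probe.png", cv2.IMREAD_GRAYSCALE)

        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img1, None)
        kp2, des2 = sift.detectAndCompute(img2, None)

        # Brute-force matching with Euclidean (L2) distance and Lowe's ratio test.
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        matches = matcher.knnMatch(des1, des2, k=2)
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]

        # A simple matching score: the fraction of keypoints with a good match.
        score = len(good) / max(1, min(len(kp1), len(kp2)))
        print(score)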

  5. Tool Wear Feature Extraction Based on Hilbert Marginal Spectrum

    Science.gov (United States)

    Guan, Shan; Song, Weijie; Pang, Hongyang

    2017-09-01

    In the metal cutting process, the signal contains a wealth of tool wear state information. An analysis and feature extraction method for tool wear signals based on the Hilbert marginal spectrum is proposed. Firstly, the tool wear signal was decomposed by the empirical mode decomposition algorithm, and the intrinsic mode functions containing the main information were selected using the correlation coefficient and the variance contribution rate. Secondly, the Hilbert transform was applied to the main intrinsic mode functions, yielding the Hilbert time-frequency spectrum and the Hilbert marginal spectrum. Finally, amplitude-domain indexes were extracted from the Hilbert marginal spectrum to form the recognition feature vector of the tool wear state. The research results show that the extracted features can effectively characterize the different wear states of the tool, which provides a basis for monitoring tool wear condition.
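
    The sketch below, assuming the PyEMD package for empirical mode decomposition and SciPy's Hilbert transform, shows one plausible way to accumulate a Hilbert marginal spectrum on a synthetic signal and take amplitude-domain indexes from it; the IMF screening by correlation coefficient and variance contribution rate is omitted, and the chosen indexes are illustrative.

        import numpy as np
        from scipy.signal import hilbert
        from PyEMD import EMD   # from the "EMD-signal" package (assumed available)

        fs = 1000.0
        t = np.arange(0.0, 1.0, 1.0 / fs)
        signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

        imfs = EMD().emd(signal)                      # empirical mode decomposition

        # Accumulate instantaneous amplitude per frequency bin -> Hilbert marginal spectrum.
        freq_bins = np.linspace(0.0, fs / 2.0, 256)
        marginal = np.zeros_like(freq_bins)
        for imf in imfs:
            analytic = hilbert(imf)
            amplitude = np.abs(analytic)
            inst_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2.0 * np.pi)
            idx = np.clip(np.digitize(inst_freq, freq_bins), 0, len(freq_bins) - 1)
            np.add.at(marginal, idx, amplitude[1:])

        # Amplitude-domain indexes over the marginal spectrum as candidate wear features.
        features = [marginal.mean(), marginal.std(), marginal.max(), int(marginal.argmax())]
        print(features)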

  6. Efficient Identification Using a Prime-Feature-Based Technique

    DEFF Research Database (Denmark)

    Hussain, Dil Muhammad Akbar; Haq, Shaiq A.; Valente, Andrea

    2011-01-01

    Identification of authorized train drivers through biometrics is a growing area of interest in locomotive radio remote control systems. The existing technique of password authentication is not very reliable, and potentially unauthorized personnel may operate the system on behalf of the operator..... A fingerprint identification system, implemented on PC/104 based real-time systems, can accurately identify the operator. Traditionally, the uniqueness of a fingerprint is determined by the overall pattern of ridges and valleys as well as the local ridge anomalies, e.g., a ridge bifurcation or a ridge ending...... in this paper. The technique involves identifying the most prominent feature of the fingerprint and searching only for that feature in the database to expedite the search process. The proposed architecture provides an efficient matching process, and the indexing feature used for identification is unique....

  7. A Method to Measure the Bracelet Based on Feature Energy

    Science.gov (United States)

    Liu, Hongmin; Li, Lu; Wang, Zhiheng; Huo, Zhanqiang

    2017-12-01

    To measure the bracelet automatically, a novel method based on feature energy is proposed. Firstly, the morphological method is utilized to preprocess the image, and the contour consisting of a concentric circle is extracted. Then, a feature energy function, which is related to the distances from a pixel to the edge points, is defined taking into account the geometric properties of the concentric circle. The input image is subsequently transformed into the feature energy distribution map (FEDM) by computing the feature energy of each pixel. The center of the concentric circle is then located by detecting the maximum on the FEDM; meanwhile, the radii of the concentric circle are determined according to the feature energy function of the center pixel. Finally, with the use of a calibration template, the internal diameter and thickness of the bracelet are measured. The experimental results show that the proposed method can measure the true sizes of the bracelet accurately, and offers simplicity, directness and robustness compared to existing methods.

  8. SVM-based glioma grading. Optimization by feature reduction analysis

    Energy Technology Data Exchange (ETDEWEB)

    Zoellner, Frank G.; Schad, Lothar R. [University Medical Center Mannheim, Heidelberg Univ., Mannheim (Germany). Computer Assisted Clinical Medicine]; Emblem, Kyrre E. [Massachusetts General Hospital, Charlestown, A.A. Martinos Center for Biomedical Imaging, Boston MA (United States). Dept. of Radiology; Harvard Medical School, Boston, MA (United States); Oslo Univ. Hospital (Norway). The Intervention Center]

    2012-11-01

    We investigated the predictive power of feature reduction analysis approaches in support vector machine (SVM)-based classification of glioma grade. In 101 untreated glioma patients, three analytic approaches were evaluated to derive an optimal reduction in features: (i) Pearson's correlation coefficients (PCC), (ii) principal component analysis (PCA) and (iii) independent component analysis (ICA). Tumor grading was performed using a previously reported SVM approach including whole-tumor cerebral blood volume (CBV) histograms and patient age. The best classification accuracy was found using PCA at 85% (sensitivity = 89%, specificity = 84%) when reducing the feature vector from 101 (100-bin rCBV histogram + age) to 3 principal components. In comparison, classification accuracy by PCC was 82% (89%, 77%, 2 dimensions) and 79% by ICA (87%, 75%, 9 dimensions). For improved speed (up to 30%) and simplicity, feature reduction by all three methods provided classification accuracy similar to literature values (∼87%) while reducing the number of features by up to 98%.
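
    A minimal scikit-learn sketch of the PCA-plus-SVM pipeline on synthetic stand-in data is given below; the feature layout (100-bin rCBV histogram plus age) and the three retained components follow the abstract, but the data, kernel, and scaling choices are assumptions.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        # Synthetic stand-in: 101 patients, 100-bin rCBV histogram + age = 101 features.
        rng = np.random.default_rng(0)
        X = rng.random((101, 101))
        y = rng.integers(0, 2, size=101)        # 0 = low grade, 1 = high grade

        model = make_pipeline(StandardScaler(),
                              PCA(n_components=3),   # reduce 101 features to 3 components
                              SVC(kernel="rbf"))
        print(cross_val_score(model, X, y, cv=5).mean())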

  9. Persistent idiopathic facial pain.

    Science.gov (United States)

    Benoliel, Rafael; Gaul, Charly

    2017-06-01

    Background Persistent idiopathic facial pain (PIFP) is a chronic disorder recurring daily for more than two hours per day over more than three months, in the absence of clinical neurological deficit. PIFP is the current terminology for Atypical Facial Pain and is characterized by daily or near daily pain that is initially confined but may subsequently spread. Pain cannot be attributed to any pathological process, although traumatic neuropathic mechanisms are suspected. When present intraorally, PIFP has been termed 'Atypical Odontalgia', and this entity is discussed in a separate article in this special issue. PIFP is often a difficult but important differential diagnosis among chronic facial pain syndromes. Aim To summarize current knowledge on diagnostic criteria, differential diagnosis, pathophysiology and management of PIFP. Methods We present a narrative review reporting current literature and personal experience. Additionally, we discuss and differentiate the common differential diagnoses associated with PIFP including traumatic trigeminal neuropathies, regional myofascial pain, atypical neurovascular pains and atypical trigeminal neuropathic pains. Results and conclusion The underlying pathophysiology in PIFP is still enigmatic, however neuropathic mechanisms may be relevant. PIFP needs interdisciplinary collaboration to rule out and manage secondary causes, psychiatric comorbidities and other facial pain syndromes, particularly trigeminal neuralgia. Burden of disease and psychiatric comorbidity screening is recommended at an early stage of disease, and should be addressed in the management plan. Future research is needed to establish clear diagnostic criteria and treatment strategies based on clinical findings and individual pathophysiology.

  10. [Feature extraction for breast cancer data based on geometric algebra theory and feature selection using differential evolution].

    Science.gov (United States)

    Li, Jing; Hong, Wenxue

    2014-12-01

    Feature extraction and feature selection are important issues in pattern recognition. Based on the geometric algebra representation of vectors, a new feature extraction method using the blade coefficients of geometric algebra was proposed in this study. At the same time, an improved differential evolution (DE) feature selection method was proposed to address the resulting high-dimensionality issue. Simple linear discriminant analysis was used as the classifier. The 10-fold cross-validation (10 CV) classification accuracy on a public breast cancer biomedical dataset was more than 96%, superior to that of the original features and a traditional feature extraction method.
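
    The hedged sketch below illustrates differential-evolution-based feature selection using SciPy and scikit-learn on the public breast cancer dataset: a real-valued vector is thresholded into a feature mask and scored by LDA cross-validation error. It does not include the geometric algebra blade-coefficient extraction, and the DE settings are assumptions.

        import numpy as np
        from scipy.optimize import differential_evolution
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score
        from sklearn.datasets import load_breast_cancer

        X, y = load_breast_cancer(return_X_y=True)

        def objective(weights):
            mask = weights > 0.5                # threshold a real vector into a feature mask
            if not mask.any():
                return 1.0
            acc = cross_val_score(LinearDiscriminantAnalysis(), X[:, mask], y, cv=5).mean()
            return 1.0 - acc                    # DE minimizes, so use the error rate

        result = differential_evolution(objective, bounds=[(0, 1)] * X.shape[1],
                                        maxiter=10, popsize=8, seed=0, polish=False)
        selected = np.flatnonzero(result.x > 0.5)
        print(len(selected), 1.0 - result.fun)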

  11. Features fusion based approach for handwritten Gujarati character recognition

    Directory of Open Access Journals (Sweden)

    Ankit Sharma

    2017-02-01

    Full Text Available Handwritten character recognition is a challenging area of research. Much research on character recognition has already been done for Indian languages such as Hindi, Bangla, Kannada, Tamil and Telugu. A literature review on handwritten character recognition indicates that, in comparison with other Indian scripts, research on Gujarati handwritten character recognition is very limited. This paper aims to bring Gujarati character recognition to attention. Recognition of isolated Gujarati handwritten characters is proposed using three different kinds of features and their fusion. Chain code based, zone based and projection profile based features are utilized as individual features. One of the significant contributions of the proposed work is the generation of a large and representative dataset of 88,000 handwritten Gujarati characters. Experiments are carried out on this developed dataset. Artificial Neural Network (ANN), Support Vector Machine (SVM) and Naive Bayes (NB) classifier based methods are implemented for handwritten Gujarati character recognition. Experimental results show substantial enhancement over the state of the art and validate our proposals.
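
    As an example of one of the three feature families, the sketch below computes projection-profile features from a binarized glyph; the resampling length and normalization are assumptions, and the chain code and zone features are not shown.

        import numpy as np

        def projection_profile_features(binary_glyph, n_bins=16):
            """Row and column ink-density profiles of a binarized character image."""
            glyph = np.asarray(binary_glyph, dtype=float)

            def resample(profile):
                # Fixed-length resampling so glyphs of any size give comparable vectors.
                return np.interp(np.linspace(0, len(profile) - 1, n_bins),
                                 np.arange(len(profile)), profile)

            horizontal = glyph.sum(axis=1)   # ink per row
            vertical = glyph.sum(axis=0)     # ink per column
            feature = np.concatenate([resample(horizontal), resample(vertical)])
            return feature / (feature.max() + 1e-9)

        glyph = np.zeros((32, 32))
        glyph[8:24, 14:18] = 1               # toy vertical stroke
        print(projection_profile_features(glyph).shape)   # (32,) -> input to ANN/SVM/NB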

  12. The Relationships between Processing Facial Identity, Emotional Expression, Facial Speech, and Gaze Direction during Development

    Science.gov (United States)

    Spangler, Sibylle M.; Schwarzer, Gudrun; Korell, Monika; Maier-Karius, Johanna

    2010-01-01

    Four experiments were conducted with 5- to 11-year-olds and adults to investigate whether facial identity, facial speech, emotional expression, and gaze direction are processed independently of or in interaction with one another. In a computer-based, speeded sorting task, participants sorted faces according to facial identity while disregarding…

  13. Facial nerve repair after operative injury: Impact of timing on hypoglossal-facial nerve graft outcomes.

    Science.gov (United States)

    Yawn, Robert J; Wright, Harry V; Francis, David O; Stephan, Scott; Bennett, Marc L

    Reanimation of facial paralysis is a complex problem with multiple treatment options. One option is hypoglossal-facial nerve grafting, which can be performed in the immediate postoperative period after nerve transection, or in a delayed setting after skull base surgery when the nerve is anatomically intact but function is poor. The purpose of this study is to investigate the effect of the timing of hypoglossal-facial grafting on functional outcome. A retrospective case series from a single tertiary otologic referral center was performed, identifying 60 patients with facial nerve injury following cerebellopontine angle tumor extirpation. Patients underwent hypoglossal-facial nerve anastomosis following facial nerve injury. Facial nerve function was measured using the House-Brackmann facial nerve grading system at a median follow-up interval of 18 months. Multivariate logistic regression analysis was used to determine how time to hypoglossal-facial nerve grafting affected the odds of achieving a House-Brackmann grade of ≤3. Patients who underwent acute hypoglossal-facial anastomotic repair (0-14 days from injury) were more likely to achieve House-Brackmann grade ≤3 compared to those who had delayed repair (OR 4.97, 95% CI 1.5-16.9, p=0.01). Early hypoglossal-facial anastomotic repair after acute facial nerve injury is associated with better long-term facial function outcomes and should be considered in the management algorithm. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. Latent Trees for Estimating Intensity of Facial Action Units

    NARCIS (Netherlands)

    Kaltwang, Sebastian; Todorovic, Sinisa; Pantic, Maja

    This paper is about estimating intensity levels of Facial Action Units (FAUs) in videos as an important step toward interpreting facial expressions. As input features, we use locations of facial landmark points detected in video frames. To address uncertainty of input, we formulate a generative

  15. A statistical method for 2D facial landmarking

    NARCIS (Netherlands)

    Dibeklioğlu, H.; Salah, A.A.; Gevers, T.

    2012-01-01

    Many facial-analysis approaches rely on robust and accurate automatic facial landmarking to correctly function. In this paper, we describe a statistical method for automatic facial-landmark localization. Our landmarking relies on a parsimonious mixture model of Gabor wavelet features, computed in

  16. Production ready feature recognition based automatic group technology part coding

    Energy Technology Data Exchange (ETDEWEB)

    Ames, A.L.

    1990-01-01

    During the past four years, a feature recognition based expert system for automatically performing group technology part coding from solid model data has been under development. The system has become a production quality tool, capable of quickly generating the geometry-based portions of a part code with no human intervention. It has been tested on over 200 solid models, half of which are models of production Sandia designs. Its performance rivals that of humans performing the same task, often surpassing them in speed and uniformity. The feature recognition capability developed for part coding is being extended to support other applications, such as manufacturability analysis, automatic decomposition (for finite element meshing and machining), and assembly planning. Initial surveys of these applications indicate that the current capability will provide a strong basis for other applications and that extensions toward more global geometric reasoning and tighter coupling with solid modeler functionality will be necessary.

  17. Biosensor method and system based on feature vector extraction

    Science.gov (United States)

    Greenbaum, Elias [Knoxville, TN]; Rodriguez, Jr., Miguel; Qi, Hairong [Knoxville, TN]; Wang, Xiaoling [San Jose, CA]

    2012-04-17

    A method of biosensor-based detection of toxins comprises the steps of providing at least one time-dependent control signal generated by a biosensor in a gas or liquid medium, and obtaining a time-dependent biosensor signal from the biosensor in the gas or liquid medium to be monitored or analyzed for the presence of one or more toxins selected from chemical, biological or radiological agents. The time-dependent biosensor signal is processed to obtain a plurality of feature vectors using at least one of amplitude statistics and a time-frequency analysis. At least one parameter relating to toxicity of the gas or liquid medium is then determined from the feature vectors based on reference to the control signal.
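
    A minimal sketch of the amplitude-statistics branch of the feature vector extraction, assuming NumPy and SciPy; the specific statistics chosen here are illustrative, and the time-frequency analysis branch is omitted.

        import numpy as np
        from scipy import stats

        def amplitude_feature_vector(signal):
            """Simple amplitude-statistics features from a time-dependent biosensor signal."""
            signal = np.asarray(signal, dtype=float)
            return np.array([
                signal.mean(),
                signal.std(),
                stats.skew(signal),
                stats.kurtosis(signal),
                signal.max() - signal.min(),          # peak-to-peak amplitude
            ])

        rng = np.random.default_rng(3)
        control = amplitude_feature_vector(rng.normal(1.0, 0.1, 5000))   # control signal
        sample = amplitude_feature_vector(rng.normal(0.7, 0.3, 5000))    # monitored medium
        print(np.abs(sample - control))               # deviation from the control signal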

  18. Collaborative Tracking of Image Features Based on Projective Invariance

    Science.gov (United States)

    Jiang, Jinwei

    -mode sensors for improving the flexibility and robustness of the system. From the experimental results during three field tests for the LASOIS system, we observed that most of the errors in the image processing algorithm are caused by the incorrect feature tracking. This dissertation addresses the feature tracking problem in image sequences acquired from cameras. Despite many alternatives to feature tracking problem, iterative least squares solution solving the optical flow equation has been the most popular approach used by many in the field. This dissertation attempts to leverage the former efforts to enhance feature tracking methods by introducing a view geometric constraint to the tracking problem, which provides collaboration among features. In contrast to alternative geometry based methods, the proposed approach provides an online solution to optical flow estimation in a collaborative fashion by exploiting Horn and Schunck flow estimation regularized by view geometric constraints. Proposed collaborative tracker estimates the motion of a feature based on the geometry of the scene and how the other features are moving. Alternative to this approach, a new closed form solution to tracking that combines the image appearance with the view geometry is also introduced. We particularly use invariants in the projective coordinates and conjecture that the traditional appearance solution can be significantly improved using view geometry. The geometric constraint is introduced by defining a new optical flow equation which exploits the scene geometry from a set drawn from tracked features. At the end of each tracking loop the quality of the tracked features is judged using both appearance similarity and geometric consistency. Our experiments demonstrate robust tracking performance even when the features are occluded or they undergo appearance changes due to projective deformation of the template. The proposed collaborative tracking method is also tested in the visual navigation
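
    For orientation, the hedged OpenCV sketch below shows the standard appearance-only pyramidal Lucas-Kanade feature tracking that the dissertation builds on; it does not implement the proposed collaborative, projective-invariance-constrained tracker, and the frame paths are placeholders.

        import cv2

        # Two consecutive grayscale frames (paths are placeholders).
        prev_frame = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
        next_frame = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

        # Detect corner features, then track them with pyramidal Lucas-Kanade optical flow.
        p0 = cv2.goodFeaturesToTrack(prev_frame, maxCorners=200,
                                     qualityLevel=0.01, minDistance=7)
        p1, status, err = cv2.calcOpticalFlowPyrLK(prev_frame, next_frame, p0, None)

        # Keep only features whose tracks were found; a geometric consistency check
        # (as proposed above) would further prune or correct these matches.
        good_prev = p0[status.flatten() == 1]
        good_next = p1[status.flatten() == 1]
        print(len(good_next), "features tracked")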

  19. Multi Modal Face Recognition Using Block Based Curvelet Features

    OpenAIRE

    K, Jyothi; J, Prabhakar C.

    2014-01-01

    In this paper, we present a multimodal 2D+3D face recognition method using block-based curvelet features. The 3D surface of the face (depth map) is computed from the stereo face images using a stereo vision technique. Statistical measures such as mean, standard deviation, variance and entropy are extracted from each block of the curvelet subbands for both depth and intensity images independently. In order to compute the decision score, the KNN classifier is employed independently for both intensity an...

  20. Facial Displays Are Tools for Social Influence.

    Science.gov (United States)

    Crivelli, Carlos; Fridlund, Alan J

    2018-05-01

    Based on modern theories of signal evolution and animal communication, the behavioral ecology view of facial displays (BECV) reconceives our 'facial expressions of emotion' as social tools that serve as lead signs to contingent action in social negotiation. BECV offers an externalist, functionalist view of facial displays that is not bound to Western conceptions about either expressions or emotions. It easily accommodates recent findings of diversity in facial displays, their public context-dependency, and the curious but common occurrence of solitary facial behavior. Finally, BECV restores continuity of human facial behavior research with modern functional accounts of non-human communication, and provides a non-mentalistic account of facial displays well-suited to new developments in artificial intelligence and social robotics. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  1. A window-based time series feature extraction method.

    Science.gov (United States)

    Katircioglu-Öztürk, Deniz; Güvenir, H Altay; Ravens, Ursula; Baykal, Nazife

    2017-10-01

    This study proposes a robust similarity score-based time series feature extraction method termed Window-based Time series Feature ExtraCtion (WTC). Specifically, WTC generates domain-interpretable results and involves significantly low computational complexity, thereby rendering itself useful for densely sampled and populated time series datasets. In this study, WTC is applied to a proprietary action potential (AP) time series dataset on human cardiomyocytes and three precordial leads from a publicly available electrocardiogram (ECG) dataset. This is followed by comparing WTC in terms of predictive accuracy and computational complexity with the shapelet transform and the fast shapelet transform (an accelerated variant of the shapelet transform). The results indicate that WTC achieves a slightly higher classification performance with significantly lower execution time when compared to its shapelet-based alternatives. With respect to its interpretable features, WTC has the potential to enable medical experts to explore definitive common trends in novel datasets. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Effective and extensible feature extraction method using genetic algorithm-based frequency-domain feature search for epileptic EEG multiclassification.

    Science.gov (United States)

    Wen, Tingxi; Zhang, Zhongnan

    2017-05-01

    In this paper, a genetic algorithm-based frequency-domain feature search (GAFDS) method is proposed for the electroencephalogram (EEG) analysis of epilepsy. In this method, frequency-domain features are first searched and then combined with nonlinear features. Subsequently, these features are selected and optimized to classify EEG signals. The extracted features are analyzed experimentally. The features extracted by GAFDS show remarkable independence, and they are superior to the nonlinear features in terms of the ratio of interclass distance to intraclass distance. Moreover, the proposed feature search method can search for features of instantaneous frequency in a signal after the Hilbert transform. The classification results achieved using these features are reasonable; thus, GAFDS exhibits good extensibility. Multiple classical classifiers (i.e., k-nearest neighbor, linear discriminant analysis, decision tree, AdaBoost, multilayer perceptron, and Naïve Bayes) achieve satisfactory classification accuracies by using the features generated by the GAFDS method and the optimized feature selection. The accuracies for the 2-class and 3-class problems reach up to 99% and 97%, respectively. Results of several cross-validation experiments illustrate that GAFDS is effective in extracting effective features for EEG classification. Therefore, the proposed feature selection and optimization model can improve classification accuracy.

  3. Validation of Underwater Sensor Package Using Feature Based SLAM

    Directory of Open Access Journals (Sweden)

    Christopher Cain

    2016-03-01

    Full Text Available Robotic vehicles working in new, unexplored environments must be able to locate themselves in the environment while constructing a picture of the objects in the environment that could act as obstacles that would prevent the vehicles from completing their desired tasks. In enclosed environments, underwater range sensors based on acoustics suffer performance issues due to reflections. Additionally, their relatively high cost makes them less than ideal for usage on low cost vehicles designed to be used underwater. In this paper we propose a sensor package composed of a downward facing camera, which is used to perform feature tracking based visual odometry, and a custom vision-based two dimensional rangefinder that can be used on low cost underwater unmanned vehicles. In order to examine the performance of this sensor package in a SLAM framework, experimental tests are performed using an unmanned ground vehicle and two feature based SLAM algorithms, the extended Kalman filter based approach and the Rao-Blackwellized, particle filter based approach, to validate the sensor package.

  4. Validation of Underwater Sensor Package Using Feature Based SLAM.

    Science.gov (United States)

    Cain, Christopher; Leonessa, Alexander

    2016-03-17

    Robotic vehicles working in new, unexplored environments must be able to locate themselves in the environment while constructing a picture of the objects in the environment that could act as obstacles that would prevent the vehicles from completing their desired tasks. In enclosed environments, underwater range sensors based on acoustics suffer performance issues due to reflections. Additionally, their relatively high cost makes them less than ideal for usage on low cost vehicles designed to be used underwater. In this paper we propose a sensor package composed of a downward facing camera, which is used to perform feature tracking based visual odometry, and a custom vision-based two dimensional rangefinder that can be used on low cost underwater unmanned vehicles. In order to examine the performance of this sensor package in a SLAM framework, experimental tests are performed using an unmanned ground vehicle and two feature based SLAM algorithms, the extended Kalman filter based approach and the Rao-Blackwellized, particle filter based approach, to validate the sensor package.

  5. Validation of Underwater Sensor Package Using Feature Based SLAM

    Science.gov (United States)

    Cain, Christopher; Leonessa, Alexander

    2016-01-01

    Robotic vehicles working in new, unexplored environments must be able to locate themselves in the environment while constructing a picture of the objects in the environment that could act as obstacles that would prevent the vehicles from completing their desired tasks. In enclosed environments, underwater range sensors based on acoustics suffer performance issues due to reflections. Additionally, their relatively high cost makes them less than ideal for usage on low cost vehicles designed to be used underwater. In this paper we propose a sensor package composed of a downward facing camera, which is used to perform feature tracking based visual odometry, and a custom vision-based two dimensional rangefinder that can be used on low cost underwater unmanned vehicles. In order to examine the performance of this sensor package in a SLAM framework, experimental tests are performed using an unmanned ground vehicle and two feature based SLAM algorithms, the extended Kalman filter based approach and the Rao-Blackwellized, particle filter based approach, to validate the sensor package. PMID:26999142

  6. Prediction of postoperative facial swelling, pain and trismus following third molar surgery based on preoperative variables

    Science.gov (United States)

    de Souza-Santos, Jadson A.; Martins-Filho, Paulo R.; da Silva, Luiz C.; de Oliveira e Silva, Emanuel D.; Gomes, Ana C.

    2013-01-01

    Objective: This paper investigates the relationship between preoperative findings and short-term outcome in third molar surgery. Study design: A prospective study was carried out involving 80 patients who required 160 surgical extractions of impacted mandibular third molars between January 2009 and December 2010. All extractions were performed under local anesthesia by the same dental surgeon. Swelling and maximal inter-incisor distance were measured at 48 h and on the 7th day postoperatively. Mean visual analogue pain scores were determined at four different time periods. Results: One hundred eight (67.5%) of the 160 extractions were performed on male subjects and 52 (32.5%) on female subjects. The median age was 22.46 years. The amount of facial swelling varied depending on gender and operating time. Trismus varied depending on gender, operating time and tooth sectioning. The influence of age, gender and operating time varied depending on the pain evaluation period. The postoperative variables (swelling, trismus and pain) differ depending on the patients' characteristics (age, gender and body mass index). Moreover, surgery characteristics such as operating time and tooth sectioning were also associated with the postoperative variables. Key words: Third molar extraction, pain, swelling, trismus, postoperative findings, prediction. PMID:23229245

  7. Single-labelled music genre classification using content-based features

    CSIR Research Space (South Africa)

    Ajoodha, R

    2015-11-01

    Full Text Available In this paper we use content-based features to perform automatic classification of music pieces into genres. We categorise these features into four groups: features extracted from the Fourier transform’s magnitude spectrum, features designed...

  8. Novel Noninvasive Brain Disease Detection System Using a Facial Image Sensor

    Science.gov (United States)

    Shu, Ting; Zhang, Bob; Tang, Yuan Yan

    2017-01-01

    Brain disease, including any condition or disability that affects the brain, is fast becoming a leading cause of death. Traditional diagnostic methods for brain disease are time-consuming, inconvenient and not patient friendly. As more and more individuals undergo examinations to determine whether they suffer from any form of brain disease, developing noninvasive, efficient, and patient friendly detection systems will be beneficial. Therefore, in this paper, we propose a novel noninvasive brain disease detection system based on the analysis of facial colors. The system consists of four components. A facial image is first captured through a specialized sensor, where four facial key blocks are next located automatically from the various facial regions. Color features are extracted from each block to form a feature vector for classification via the Probabilistic Collaborative based Classifier. To thoroughly test the system and its performance, seven facial key block combinations were experimented with. The best result was achieved using the second facial key block, where it showed that the Probabilistic Collaborative based Classifier is the most suitable. The overall performance of the proposed system achieves an accuracy of 95%, a sensitivity of 94.33%, a specificity of 95.67%, and an average processing time (for one sample) of less than 1 min for brain disease detection. PMID:29292716
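
    The sketch below, using NumPy on a synthetic image, illustrates the kind of per-block color features the abstract describes (mean and standard deviation per channel of each facial key block); the block coordinates and the chosen statistics are assumptions, and the block localization and the Probabilistic Collaborative based Classifier are not shown.

        import numpy as np

        def block_color_features(image_rgb, blocks):
            """Mean and standard deviation of each RGB channel inside each facial key block.

            `blocks` is a list of (row, col, height, width) tuples located beforehand.
            """
            features = []
            for r, c, h, w in blocks:
                patch = image_rgb[r:r + h, c:c + w].reshape(-1, 3).astype(float)
                features.extend(patch.mean(axis=0))
                features.extend(patch.std(axis=0))
            return np.array(features)

        rng = np.random.default_rng(4)
        face = rng.integers(0, 256, size=(256, 256, 3))        # stand-in facial image
        blocks = [(60, 60, 32, 32), (60, 164, 32, 32),
                  (140, 112, 32, 32), (200, 112, 32, 32)]      # hypothetical key blocks
        print(block_color_features(face, blocks).shape)        # feature vector for the classifier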

  9. Novel Noninvasive Brain Disease Detection System Using a Facial Image Sensor

    Directory of Open Access Journals (Sweden)

    Ting Shu

    2017-12-01

    Full Text Available Brain disease including any conditions or disabilities that affect the brain is fast becoming a leading cause of death. The traditional diagnostic methods of brain disease are time-consuming, inconvenient and non-patient friendly. As more and more individuals undergo examinations to determine if they suffer from any form of brain disease, developing noninvasive, efficient, and patient friendly detection systems will be beneficial. Therefore, in this paper, we propose a novel noninvasive brain disease detection system based on the analysis of facial colors. The system consists of four components. A facial image is first captured through a specialized sensor, where four facial key blocks are next located automatically from the various facial regions. Color features are extracted from each block to form a feature vector for classification via the Probabilistic Collaborative based Classifier. To thoroughly test the system and its performance, seven facial key block combinations were experimented. The best result was achieved using the second facial key block, where it showed that the Probabilistic Collaborative based Classifier is the most suitable. The overall performance of the proposed system achieves an accuracy of 95%, a sensitivity of 94.33%, a specificity of 95.67%, and an average processing time (for one sample) of <1 min at brain disease detection.

  10. Novel Noninvasive Brain Disease Detection System Using a Facial Image Sensor.

    Science.gov (United States)

    Shu, Ting; Zhang, Bob; Tang, Yuan Yan

    2017-12-08

    Brain disease including any conditions or disabilities that affect the brain is fast becoming a leading cause of death. The traditional diagnostic methods of brain disease are time-consuming, inconvenient and non-patient friendly. As more and more individuals undergo examinations to determine if they suffer from any form of brain disease, developing noninvasive, efficient, and patient friendly detection systems will be beneficial. Therefore, in this paper, we propose a novel noninvasive brain disease detection system based on the analysis of facial colors. The system consists of four components. A facial image is first captured through a specialized sensor, where four facial key blocks are next located automatically from the various facial regions. Color features are extracted from each block to form a feature vector for classification via the Probabilistic Collaborative based Classifier. To thoroughly test the system and its performance, seven facial key block combinations were experimented. The best result was achieved using the second facial key block, where it showed that the Probabilistic Collaborative based Classifier is the most suitable. The overall performance of the proposed system achieves an accuracy of 95%, a sensitivity of 94.33%, a specificity of 95.67%, and an average processing time (for one sample) of <1 min at brain disease detection.

  11. Human age estimation framework using different facial parts

    OpenAIRE

    Mohamed Y. El Dib; Hoda M. Onsi

    2011-01-01

    Human age estimation from facial images has a wide range of real-world applications in human computer interaction (HCI). In this paper, we use the bio-inspired features (BIF) to analyze different facial parts: (a) eye wrinkles, (b) whole internal face (without forehead area) and (c) whole face (with forehead area) using different feature shape points. The analysis shows that eye wrinkles which cover 30% of the facial area contain the most important aging features compared to internal face and...

  12. Dictionary-Based Map Compression for Sparse Feature Maps

    Science.gov (United States)

    Tanaka, Kanji; Nagasaka, Tomomi

    Obtaining a compact representation of a large-size feature map built by mapper robots is a critical issue in recent mobile robotics. This “map compression” problem is explored in the paper from the novel perspective of dictionary-based data compression techniques. The primary contribution of the paper is the proposal of a dictionary-based map compression approach. A map compression system is presented that employs RANSAC map matching and sparse coding as building blocks. The effectiveness of the proposed techniques is investigated in terms of map compression ratio, compression speed, the retrieval performance of compressed/decompressed maps, and applications to Kolmogorov complexity.

  13. Forensic Facial Reconstruction: Relationship Between the Alar Cartilage and Piriform Aperture.

    Science.gov (United States)

    Strapasson, Raíssa Ananda Paim; Herrera, Lara Maria; Melani, Rodolfo Francisco Haltenhoff

    2017-11-01

    During forensic facial reconstruction, facial features may be predicted based on the parameters of the skull. This study evaluated the relationships between alar cartilage and piriform aperture and nose morphology and facial typology. Ninety-six cone beam computed tomography images of Brazilian subjects (49 males and 47 females) were used in this study. OsiriX software was used to perform the following measurements: nasal width, distance between alar base insertion points, lower width of the piriform aperture, and upper width of the piriform aperture. Nasal width was associated with the lower width of the piriform aperture, sex, skeletal vertical pattern of the face, and age. The current study contributes to the improvement of forensic facial guides by identifying the relationships between the alar cartilages and characteristics of the biological profile of members of a population that has been little studied thus far. © 2017 American Academy of Forensic Sciences.

  14. Dynamic Facial Prosthetics for Sufferers of Facial Paralysis

    Directory of Open Access Journals (Sweden)

    Fergal Coulter

    2011-10-01

    Full Text Available Background: This paper discusses the various methods and the materials for the fabrication of active artificial facial muscles. The primary use for these will be the reanimation of paralysed or atrophied muscles in sufferers of non-recoverable unilateral facial paralysis. Method: The prosthetic solution described in this paper is based on sensing muscle motion of the contralateral healthy muscles and replicating that motion across a patient’s paralysed side of the face, via solid state and thin film actuators. The development of this facial prosthetic device focused on recreating a varying intensity smile, with emphasis on timing, displacement and the appearance of the wrinkles and folds that commonly appear around the nose and eyes during the expression. An animatronic face was constructed with actuations being made to a silicone representation of the musculature, using multiple shape-memory alloy cascades. Alongside the artificial muscle physical prototype, a facial expression recognition software system was constructed. This forms the basis of an automated calibration and reconfiguration system for the artificial muscles following implantation, so as to suit the implantee’s unique physiognomy. Results: An animatronic model face with silicone musculature was designed and built to evaluate the performance of Shape Memory Alloy artificial muscles, their power control circuitry and software control systems. A dual facial motion sensing system was designed to allow real time control over the model – a piezoresistive flex sensor to measure physical motion, and a computer vision system to evaluate real to artificial muscle performance. Analysis of various facial expressions in real subjects was made, which gives useful data upon which to base the system’s parameter limits. Conclusion: The system performed well, and the various strengths and shortcomings of the materials and methods are reviewed and considered for the next research phase, when new polymer based artificial muscles are constructed.

  15. MRI-based diagnostic imaging of the intratemporal facial nerve; Die kernspintomographische Darstellung des intratemporalen N. facialis

    Energy Technology Data Exchange (ETDEWEB)

    Kress, B.; Baehren, W. [Bundeswehrkrankenhaus Ulm (Germany). Abt. fuer Radiologie

    2001-07-01

    Detailed imaging of the five sections of the full intratemporal course of the facial nerve can be achieved by MRI using thin-section techniques and surface coils. Contrast media are required for imaging of pathological processes. Established applications are the diagnostic evaluation of cerebellopontine angle tumors and chronic Bell's palsy, as well as hemifacial spasm. Still under discussion is MRI for documenting facial palsy in the presence of petrous bone fractures, where hemorrhage into the petrous bone makes evaluation even more difficult. MRI-based diagnostic evaluation of idiopathic facial paralysis is currently subject to change; in its usual form it cannot be recommended for routine evaluation at present. However, a quantitative analysis of contrast medium uptake by the nerve may be an approach to improve the prognostic value of MRI in acute phases of Bell's palsy. (orig./CB) [Translated from German] Detailed MRI depiction of the intratemporal course of the facial nerve, which consists of five sections, is achieved using thin-slice techniques and surface coils. Administration of contrast medium is necessary to depict pathological processes. The examination is established in the diagnosis of cerebellopontine angle tumors and of chronic facial palsy, as is the diagnosis of hemifacial spasm. The use of MRI to document facial palsy in petrous bone fractures is under discussion, since hemorrhages in the petrous bone complicate the assessment. MRI diagnosis of idiopathic facial palsy is in a state of change; in its conventional form it is not recommended for routine diagnostics. However, quantitative analysis of contrast medium uptake in the nerve could increase the prognostic value of MRI in the acute phase of Bell's palsy. (orig.)

  16. Does Facial Amimia Impact the Recognition of Facial Emotions? An EMG Study in Parkinson's Disease.

    Directory of Open Access Journals (Sweden)

    Soizic Argaud

    Full Text Available According to embodied simulation theory, understanding other people's emotions is fostered by facial mimicry. However, studies assessing the effect of facial mimicry on the recognition of emotion are still controversial. In Parkinson's disease (PD), one of the most distinctive clinical features is facial amimia, a reduction in facial expressiveness, but patients also show emotional disturbances. The present study used the pathological model of PD to examine the role of facial mimicry on emotion recognition by investigating EMG responses in PD patients during a facial emotion recognition task (anger, joy, neutral). Our results evidenced a significant decrease in facial mimicry for joy in PD, essentially linked to the absence of reaction of the zygomaticus major and the orbicularis oculi muscles in response to happy avatars, whereas facial mimicry for expressions of anger was relatively preserved. We also confirmed that PD patients were less accurate in recognizing positive and neutral facial expressions and highlighted a beneficial effect of facial mimicry on the recognition of emotion. We thus provide additional arguments for embodied simulation theory suggesting that facial mimicry is a potential lever for therapeutic actions in PD even if it seems not to be necessarily required in recognizing emotion as such.

  17. Facial Prototype Formation in Children.

    Science.gov (United States)

    Inn, Donald; And Others

    This study examined memory representation as it is exhibited in young children's formation of facial prototypes. In the first part of the study, researchers constructed images of faces using an Identikit that provided the features of hair, eyes, mouth, nose, and chin. Images were varied systematically. A series of these images, called exemplar…

  18. Local-Nearest-Neighbors-Based Feature Weighting for Gene Selection.

    Science.gov (United States)

    An, Shuai; Wang, Jun; Wei, Jinmao

    2017-06-07

    Selecting functional genes is essential for analyzing microarray data. Among many available feature (gene) selection approaches, the ones on the basis of the large margin nearest neighbor receive more attention due to their low computational costs and high accuracies in analyzing the high-dimensional data. Yet there still exist some problems that hamper the existing approaches in sifting real target genes, including selecting erroneous nearest neighbors, high sensitivity to irrelevant genes, and inappropriate evaluation criteria. Previous pioneer works have partly addressed some of the problems, but none of them are capable of solving these problems simultaneously. In this paper, we propose a new local-nearest-neighbors-based feature weighting approach to alleviate the above problems. The proposed approach is based on the trick of locally minimizing the within-class distances and maximizing the between-class distances with the k nearest neighbors rule. We further define a feature weight vector, and construct it by minimizing the cost function with a regularization term. The proposed approach can be applied naturally to the multi-class problems and does not require extra modification. Experimental results on the UCI and the open microarray data sets validate the effectiveness and efficiency of the new approach.
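
    The core idea described here — locally shrinking within-class distances and enlarging between-class distances under a k-nearest-neighbour rule — can be illustrated with a Relief-style weighting sketch. This is not the authors' cost function or regularization term; it only shows the flavour of nearest-neighbour-based feature (gene) weighting.

        import numpy as np

        def knn_feature_weights(X, y, k=3):
            """Relief-style weights: reward features on which a sample is farther
            from its k nearest 'misses' (other class) than from its k nearest
            'hits' (same class). Assumes every class has more than k samples."""
            X = np.asarray(X, dtype=float)
            y = np.asarray(y)
            n, d = X.shape
            w = np.zeros(d)
            for i in range(n):
                diff = np.abs(X - X[i])            # per-feature distances to all samples
                dist = diff.sum(axis=1)            # L1 distance to every sample
                dist[i] = np.inf                   # exclude the sample itself
                same = (y == y[i])
                hits = np.argsort(np.where(same, dist, np.inf))[:k]
                misses = np.argsort(np.where(~same, dist, np.inf))[:k]
                w += diff[misses].mean(axis=0) - diff[hits].mean(axis=0)
            w = np.clip(w, 0, None)
            return w / (w.sum() + 1e-12)           # normalized weights for ranking genes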

  19. Improved Fourier-based characterization of intracellular fractal features

    Science.gov (United States)

    Xylas, Joanna; Quinn, Kyle P.; Hunter, Martin; Georgakoudi, Irene

    2012-01-01

    A novel Fourier-based image analysis method for measuring fractal features is presented which can significantly reduce artifacts due to non-fractal edge effects. The technique is broadly applicable to the quantitative characterization of internal morphology (texture) of image features with well-defined borders. In this study, we explore the capacity of this method for quantitative assessment of intracellular fractal morphology of mitochondrial networks in images of normal and diseased (precancerous) epithelial tissues. Using a combination of simulated fractal images and endogenous two-photon excited fluorescence (TPEF) microscopy, our method is shown to more accurately characterize the exponent of the high-frequency power spectral density (PSD) of these images in the presence of artifacts that arise due to cellular and nuclear borders. PMID:23188308
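
    As a rough illustration of the quantity being characterized, the exponent of the high-frequency power spectral density can be estimated from a radially averaged 2-D Fourier spectrum, as sketched below; the edge-artifact correction that is the paper's actual contribution is not reproduced here.

        import numpy as np

        def psd_exponent(image, fmin_frac=0.1):
            """Slope of the log-log radially averaged 2-D power spectrum,
            fitted over the high-frequency band of a grayscale image."""
            img = np.asarray(image, dtype=float)
            img -= img.mean()
            psd2 = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
            h, w = img.shape
            yy, xx = np.indices((h, w))
            r = np.hypot(yy - h // 2, xx - w // 2).astype(int)
            counts = np.bincount(r.ravel())
            radial = np.bincount(r.ravel(), psd2.ravel()) / np.maximum(counts, 1)
            freqs = np.arange(radial.size)
            # keep only well-sampled radii in the high-frequency band
            keep = (freqs > max(1, int(fmin_frac * freqs.size))) \
                   & (freqs < min(h, w) // 2) & (counts > 0)
            slope, _ = np.polyfit(np.log(freqs[keep]),
                                  np.log(radial[keep] + 1e-20), 1)
            return slope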

  20. A Feature-Based Forensic Procedure for Splicing Forgeries Detection

    Directory of Open Access Journals (Sweden)

    Irene Amerini

    2015-01-01

    Full Text Available Nowadays, determining whether an image that appeared somewhere on the web or in a magazine is authentic has become crucial. Image forensics methods based on features have so far demonstrated to be very effective in detecting forgeries in which a portion of an image is cloned somewhere else onto the same image. However, such techniques cannot be adopted to deal with splicing attacks, that is, when the inserted portion comes from another picture that, usually, is no longer available for a feature-matching operation. In this paper, a procedure is shown in which these techniques can also be employed against splicing attacks by resorting to repositories of images available on the Internet, such as Google Images or TinEye Reverse Image Search. Experimental results are presented on some real case images retrieved on the Internet to demonstrate the capacity of the proposed procedure.

  1. Iris features-based heart disease diagnosis by computer vision

    Science.gov (United States)

    Nguchu, Benedictor A.; Li, Li

    2017-07-01

    The study takes advantage of several new breakthroughs in computer vision technology to develop a new mid-iris biomedical platform that processes iris images for early detection of heart disease. Guaranteeing early detection of heart disease opens the possibility of non-surgical treatment, as suggested by biomedical researchers and associated institutions. However, our observation is that a clinically practicable solution that is both sensitive and specific for early detection is still lacking. Because of this, the death rate among vulnerable patients keeps increasing; the delayed diagnostic procedures, inefficiency, and complications of available methods are further reasons for this problem. Therefore, this research proposes the novel IFB (Iris Features Based) method for diagnosis of premature and early-stage heart disease. The method incorporates computer vision and iridology to obtain a robust, non-contact, non-radioactive, and cost-effective diagnostic tool. The method analyzes abnormal inherent weakness in tissues and changes in color and patterns of the specific region of the iris that responds to impulses of the heart organ, as per the Bernard Jensen iris chart. These changes in the iris infer the presence of degenerative abnormalities in the heart. The changes are precisely detected and analyzed by the IFB method, which includes tensor-based gradients (TBG), multi-orientation Gabor filters (GF), textural oriented features (TOF), and speeded-up robust features (SURF). Kernel and multi-class support vector machine classifiers are used for classifying normal and pathological iris features. Experimental results demonstrated that the proposed method not only has better diagnostic performance, but also provides insight for early detection of other diseases.

  2. Three-Dimensional Facial Adaptation for MPEG-4 Talking Heads

    Directory of Open Access Journals (Sweden)

    Nikos Grammalidis

    2002-10-01

    Full Text Available This paper studies a new method for three-dimensional (3D) facial model adaptation and its integration into a text-to-speech (TTS) system. The 3D facial adaptation requires a set of two orthogonal views of the user's face with a number of feature points located on both views. Based on the correspondences of the feature points' positions, a generic face model is deformed nonrigidly, treating every facial part as a separate entity. A cylindrical texture map is then built from the two image views. The generated head models are compared to corresponding models obtained by the commonly used adaptation method that utilizes 3D radial basis functions. The generated 3D models are integrated into a talking head system, which consists of two distinct parts: a multilingual text-to-speech sub-system and an MPEG-4 compliant facial animation sub-system. Support for the Greek language has been added, while preserving lip and speech synchronization.

  3. Complex chromosome rearrangement in a child with microcephaly, dysmorphic facial features and mosaicism for a terminal deletion del(18)(q21.32-qter) investigated by FISH and array-CGH: Case report

    Directory of Open Access Journals (Sweden)

    Kokotas Haris

    2008-11-01

    Full Text Available Abstract We report on a 7 years and 4 months old Greek boy with mild microcephaly and dysmorphic facial features. He was a sociable child with maxillary hypoplasia, epicanthal folds, upslanting palpebral fissures with long eyelashes, and hypertelorism. His ears were prominent and dysmorphic, he had a long philtrum and a high arched palate. His weight was 17 kg (25th percentile) and his height 120 cm (50th percentile). High resolution chromosome analysis identified in 50% of the cells a normal male karyotype, and in 50% of the cells one chromosome 18 showed a terminal deletion from 18q21.32. Molecular cytogenetic investigation confirmed a del(18)(q21.32-qter) in the one chromosome 18, but furthermore revealed the presence of a duplication in q21.2 in the other chromosome 18. The case is discussed concerning comparable previously reported cases and the possible mechanisms of formation.

  4. Smart Images Search based on Visual Features Fusion

    International Nuclear Information System (INIS)

    Saad, M.H.

    2013-01-01

    Image search engines attempt to give fast and accurate access to the wide range of the huge amount images available on the Internet. There have been a number of efforts to build search engines based on the image content to enhance search results. Content-Based Image Retrieval (CBIR) systems have achieved a great interest since multimedia files, such as images and videos, have dramatically entered our lives throughout the last decade. CBIR allows automatically extracting target images according to objective visual contents of the image itself, for example its shapes, colors and textures to provide more accurate ranking of the results. The recent approaches of CBIR differ in terms of which image features are extracted to be used as image descriptors for matching process. This thesis proposes improvements of the efficiency and accuracy of CBIR systems by integrating different types of image features. This framework addresses efficient retrieval of images in large image collections. A comparative study between recent CBIR techniques is provided. According to this study, image features need to be integrated to provide more accurate description of image content and better image retrieval accuracy. In this context, this thesis presents new image retrieval approaches that provide more accurate retrieval accuracy than previous approaches. The first proposed image retrieval system uses color, texture and shape descriptors to form the global features vector. This approach integrates the YCbCr color histogram as a color descriptor, the modified Fourier descriptor as a shape descriptor and modified Edge Histogram as a texture descriptor in order to enhance the retrieval results. The second proposed approach integrates the global features vector, which is used in the first approach, with the SURF salient point technique as local feature. The nearest neighbor matching algorithm with a proposed similarity measure is applied to determine the final image rank. The second approach

  5. Magnetoencephalographic study on facial movements

    Directory of Open Access Journals (Sweden)

    Kensaku eMiki

    2014-07-01

    Full Text Available In this review, we introduced our three studies that focused on facial movements. In the first study, we examined the temporal characteristics of neural responses elicited by viewing mouth movements, and assessed differences between the responses to mouth opening and closing movements and an averting eyes condition. Our results showed that the occipitotemporal area, the human MT/V5 homologue, was active in the perception of both mouth and eye motions. Viewing mouth and eye movements did not elicit significantly different activity in the occipitotemporal area, which indicated that perception of the movement of facial parts may be processed in the same manner, and this is different from motion in general. In the second study, we investigated whether early activity in the occipitotemporal region evoked by eye movements was influenced by a face contour and/or features such as the mouth. Our results revealed specific information processing for eye movements in the occipitotemporal region, and this activity was significantly influenced by whether movements appeared with the facial contour and/or features, in other words, whether the eyes moved, even if the movement itself was the same. In the third study, we examined the effects of inverting the facial contour (hair and chin) and features (eyes, nose, and mouth) on processing for static and dynamic face perception. Our results showed the following: (1) In static face perception, activity in the right fusiform area was affected more by the inversion of features while that in the left fusiform area was affected more by a disruption in the spatial relationship between the contour and features, and (2) In dynamic face perception, activity in the right occipitotemporal area was affected by the inversion of the facial contour.

  6. Text feature extraction based on deep learning: a review.

    Science.gov (United States)

    Liang, Hong; Sun, Xiao; Sun, Yunlei; Gao, Yuan

    2017-01-01

    Selection of text feature items is a basic and important task for text mining and information retrieval. Traditional methods of feature extraction require handcrafted features, and hand-designing an effective feature is a lengthy process; aiming at new applications, deep learning makes it possible to acquire new effective feature representations from training data. As a new feature extraction method, deep learning has made achievements in text mining. The major difference between deep learning and conventional methods is that deep learning automatically learns features from big data instead of adopting handcrafted features, which mainly depend on the prior knowledge of designers and make it difficult to take advantage of big data. Deep learning can automatically learn feature representations from big data, with models containing millions of parameters. This paper first outlines the common methods used in text feature extraction, then expands on frequently used deep learning methods for text feature extraction and its applications, and forecasts the application of deep learning in feature extraction.

  7. Space-based RF signal classification using adaptive wavelet features

    Energy Technology Data Exchange (ETDEWEB)

    Caffrey, M.; Briles, S.

    1995-04-01

    RF signals are dispersed in frequency as they propagate through the ionosphere. For wide-band signals, this results in nonlinearly-chirped-frequency, transient signals in the VHF portion of the spectrum. This ionospheric dispersion provides a means of discriminating wide-band transients from other signals (e.g., continuous-wave carriers, burst communications, chirped-radar signals, etc.). The transient nature of these dispersed signals makes them candidates for wavelet feature selection. Rather than choosing a wavelet ad hoc, we adaptively compute an optimal mother wavelet via a neural network. Gaussian-weighted, linear frequency modulated (GLFM) wavelets are linearly combined by the network to generate our application-specific mother wavelet, which is optimized for its capacity to select features that discriminate between the dispersed signals and clutter (e.g., multiple continuous-wave carriers), not for its ability to represent the dispersed signal. The resulting mother wavelet is then used to extract features for a neural network classifier. The performance of the adaptive wavelet classifier is then compared to an FFT-based neural network classifier.

  8. Feature-based automatic color calibration for networked camera system

    Science.gov (United States)

    Yamamoto, Shoji; Taki, Keisuke; Tsumura, Norimichi; Nakaguchi, Toshiya; Miyake, Yoichi

    2011-01-01

    In this paper, we have developed a feature-based automatic color calibration using area-based detection and an adaptive nonlinear regression method. Simple chartless color matching is achieved by exploiting the image areas that overlap between cameras. Accurate detection of a common object is achieved by area-based detection that combines MSER with SIFT. Adaptive color calibration using the colors of the detected object is computed by a nonlinear regression method. This method can indicate the contribution of the object's colors to the color calibration, and automatic selection notification for the user is performed by this function. Experimental results show that the accuracy of the calibration improves gradually. The method appears suitable for practical multi-camera color calibration provided that enough samples are obtained.
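
    A rough sketch of the overall idea under simplifying assumptions: SIFT keypoint matches between two overlapping camera views supply corresponding colour samples, and per-channel polynomials are fitted as calibration curves. The MSER-based region detection and the adaptive nonlinear regression of the paper are simplified away, and the function name is illustrative.

        import cv2
        import numpy as np

        def color_transfer_curves(img_ref, img_src, ratio=0.75, degree=2):
            """Per-channel polynomials mapping img_src colours to img_ref colours,
            fitted on colours sampled at matched SIFT keypoints in the overlap.
            Assumes OpenCV >= 4.4 (SIFT in the main module) and enough matches."""
            sift = cv2.SIFT_create()
            g_ref = cv2.cvtColor(img_ref, cv2.COLOR_BGR2GRAY)
            g_src = cv2.cvtColor(img_src, cv2.COLOR_BGR2GRAY)
            k_ref, d_ref = sift.detectAndCompute(g_ref, None)
            k_src, d_src = sift.detectAndCompute(g_src, None)
            matches = cv2.BFMatcher().knnMatch(d_src, d_ref, k=2)
            good = [m for m, n in matches if m.distance < ratio * n.distance]
            curves = []
            for ch in range(3):                    # fit B, G, R channels independently
                src = np.asarray([img_src[int(k_src[m.queryIdx].pt[1]),
                                          int(k_src[m.queryIdx].pt[0]), ch]
                                  for m in good], dtype=float)
                ref = np.asarray([img_ref[int(k_ref[m.trainIdx].pt[1]),
                                          int(k_ref[m.trainIdx].pt[0]), ch]
                                  for m in good], dtype=float)
                curves.append(np.polyfit(src, ref, degree))
            return curves                          # apply with np.polyval per channel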

  9. Medical Image Fusion Based on Feature Extraction and Sparse Representation

    Directory of Open Access Journals (Sweden)

    Yin Fei

    2017-01-01

    Full Text Available As a novel multiscale geometric analysis tool, sparse representation has shown many advantages over the conventional image representation methods. However, the standard sparse representation does not take intrinsic structure and its time complexity into consideration. In this paper, a new fusion mechanism for multimodal medical images based on sparse representation and decision map is proposed to deal with these problems simultaneously. Three decision maps are designed including structure information map (SM) and energy information map (EM) as well as structure and energy map (SEM) to make the results reserve more energy and edge information. SM contains the local structure feature captured by the Laplacian of a Gaussian (LOG), and EM contains the energy and energy distribution feature detected by the mean square deviation. The decision map is added to the normal sparse representation based method to improve the speed of the algorithm. Proposed approach also improves the quality of the fused results by enhancing the contrast and reserving more structure and energy information from the source images. The experiment results of 36 groups of CT/MR, MR-T1/MR-T2, and CT/PET images demonstrate that the method based on SR and SEM outperforms five state-of-the-art methods.
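
    A small sketch of the two decision maps named in this abstract — a LoG-based structure map (SM) and a local-variance energy map (EM) — combined into a per-pixel choose-max fusion. The sparse-representation stage itself is omitted, so this shows only the decision-map part of the idea; window sizes are illustrative.

        import numpy as np
        from scipy.ndimage import gaussian_laplace, uniform_filter

        def structure_map(img, sigma=1.5):
            """Local structure strength from the Laplacian of Gaussian (LoG)."""
            return np.abs(gaussian_laplace(img.astype(float), sigma))

        def energy_map(img, size=7):
            """Local energy as the mean square deviation in a size x size window."""
            img = img.astype(float)
            mean = uniform_filter(img, size)
            return uniform_filter(img ** 2, size) - mean ** 2

        def fuse(img_a, img_b):
            """Pixel-wise selection: keep the source with stronger structure+energy."""
            score_a = structure_map(img_a) + energy_map(img_a)
            score_b = structure_map(img_b) + energy_map(img_b)
            return np.where(score_a >= score_b, img_a, img_b)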

  10. Proton Conductivity and Operational Features Of PBI-Based Membranes

    DEFF Research Database (Denmark)

    Qingfeng, Li; Jensen, Jens Oluf; Precht Noyé, Pernille

    2005-01-01

    As an approach to high temperature operation of PEMFCs, acid-doped PBI membranes are under active development. The membrane exhibits high proton conductivity under low water contents at temperatures up to 200°C. Mechanisms of proton conduction for the membranes have been proposed. Based on the membranes, fuel cell tests have been demonstrated. Operating features of the PBI cell include no humidification, high CO tolerance, better heat utilization and possible integration with fuel processing units. Issues for further development are also discussed.

  11. Directional wavelet based features for colonic polyp classification.

    Science.gov (United States)

    Wimmer, Georg; Tamaki, Toru; Tischendorf, J J W; Häfner, Michael; Yoshida, Shigeto; Tanaka, Shinji; Uhl, Andreas

    2016-07-01

    In this work, various wavelet based methods like the discrete wavelet transform, the dual-tree complex wavelet transform, the Gabor wavelet transform, curvelets, contourlets and shearlets are applied for the automated classification of colonic polyps. The methods are tested on 8 HD-endoscopic image databases, where each database is acquired using different imaging modalities (Pentax's i-Scan technology combined with or without staining the mucosa), 2 NBI high-magnification databases and one database with chromoscopy high-magnification images. To evaluate the suitability of the wavelet based methods with respect to the classification of colonic polyps, the classification performances of 3 wavelet transforms and the more recent curvelets, contourlets and shearlets are compared using a common framework. Wavelet transforms have already been applied often and successfully to the classification of colonic polyps, whereas curvelets, contourlets and shearlets have not been used for this purpose so far. We apply different feature extraction techniques to extract the information of the subbands of the wavelet based methods. Most of the 25 approaches in total were already published in different texture classification contexts. Thus, the aim is also to assess and compare their classification performance using a common framework. Three of the 25 approaches are novel. These three approaches extract Weibull features from the subbands of curvelets, contourlets and shearlets. Additionally, 5 state-of-the-art non-wavelet-based methods are applied to our databases so that we can compare their results with those of the wavelet based methods. It turned out that extracting Weibull distribution parameters from the subband coefficients generally leads to high classification results, especially for the dual-tree complex wavelet transform, the Gabor wavelet transform and the Shearlet transform. These three wavelet based transforms in combination with Weibull features even outperform the state
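
    One plausible reading of the Weibull-feature idea, sketched for the plain 2-D discrete wavelet transform: fit a two-parameter Weibull distribution to the absolute coefficients of every detail subband and use the shape/scale pairs as the feature vector. The curvelet, contourlet and shearlet variants would follow the same pattern with different transforms; the exact fitting procedure in the paper may differ.

        import numpy as np
        import pywt
        from scipy.stats import weibull_min

        def weibull_subband_features(img, wavelet="db4", levels=2):
            """Weibull shape/scale parameters of each detail subband's |coefficients|."""
            coeffs = pywt.wavedec2(np.asarray(img, dtype=float), wavelet, level=levels)
            feats = []
            for detail in coeffs[1:]:             # skip the approximation subband
                for band in detail:               # horizontal, vertical, diagonal
                    data = np.abs(band).ravel() + 1e-8
                    shape, _, scale = weibull_min.fit(data, floc=0)  # 2-parameter fit
                    feats.extend([shape, scale])
            return np.asarray(feats)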

  12. Facial Symmetry: An Illusion?

    Directory of Open Access Journals (Sweden)

    Naveen Reddy Admala

    2013-01-01

    Materials and methods: A sample of 120 patients (60 males and 60 females; mean age, 15 years; range, 16-22 years) who had received orthodontic clinical examination at AME's Dental College and Hospital were selected. Selection was made in such a way that the following malocclusions, with equal sex distribution, could be drawn from the patient database. Patients selected were classified into skeletal Class I (25 males and 25 females), Class II (25 males and 25 females) and Class III (10 males and 10 females) based on ANB angle. The number in each group was decided in advance and was also based on the number of patients with the following malocclusions reported to the department. Differences in length between distances from the points at which ear rods were inserted to the facial midline and the perpendicular distance from the soft-tissue menton to the facial midline were measured on a frontofacial photograph. Subjects with a discrepancy of more than three standard deviations of the measurement error were categorized as having left- or right-sided laterality. Results: Of subjects with facial asymmetry, 74.1% had a wider right hemiface, and 51.6% of those with chin deviation had left-sided laterality. These tendencies were independent of sex or skeletal jaw relationships. Conclusion: These results suggest that laterality in the normal asymmetry of the face, which is consistently found in humans, is likely to be a hereditary rather than an acquired trait.

  13. Feature representation and compression for content-based retrieval

    Science.gov (United States)

    Xie, Hua; Ortega, Antonio

    2000-12-01

    In semantic content-based image/video browsing and navigation systems, efficient mechanisms to represent and manage a large collection of digital images/videos are needed. Traditional keyword-based indexing describes the content of multimedia data through annotations such as text or keywords extracted manually by the user from a controlled vocabulary. This textual indexing technique lacks the flexibility of satisfying various kinds of queries requested by database users and also requires huge amount of work for updating the information. Current content-based retrieval systems often extract a set of features such as color, texture, shape motion, speed, and position from the raw multimedia data automatically and store them as content descriptors. This content-based metadata differs from text-based metadata in that it supports wider varieties of queries and can be extracted automatically, thus providing a promising approach for efficient database access and management. When the raw data volume grows very large, explicitly extracting the content-information and storing it as metadata along with the images will improve querying performance since metadata requires much less storage than the raw image data and thus will be easier to manipulate. In this paper we maintain that storing metadata together with images will enable effective information management and efficient remote query. We also show, using a texture classification example, that this side information can be compressed while guaranteeing that the desired query accuracy is satisfied. We argue that the compact representation of the image contents not only reduces significantly the storage and transmission rate requirement, but also facilitates certain types of queries. Algorithms are developed for optimized compression of this texture feature metadata given that the goal is to maximize the classification performance for a given rate budget.

  14. Facial morphology and obstructive sleep apnea

    Directory of Open Access Journals (Sweden)

    Anderson Capistrano

    2015-12-01

    Full Text Available Objective: This study aimed at assessing the relationship between facial morphological patterns (I, II, III, Long Face and Short Face) as well as facial types (brachyfacial, mesofacial and dolichofacial) and obstructive sleep apnea (OSA) in patients attending a center specialized in sleep disorders. Methods: Frontal, lateral and smile photographs of 252 patients (157 men and 95 women), randomly selected from a polysomnography clinic, with mean age of 40.62 years, were evaluated. In order to obtain diagnosis of facial morphology, the sample was sent to three professors of Orthodontics trained to classify patients' face according to five patterns, as follows: 1) Pattern I; 2) Pattern II; 3) Pattern III; 4) Long facial pattern; 5) Short facial pattern. Intraexaminer agreement was assessed by means of Kappa index. The professors ranked patients' facial type based on a facial index that considers the proportion between facial width and height. Results: The multiple linear regression model evinced that, when compared to Pattern I, Pattern II had the apnea and hypopnea index (AHI) worsened by 6.98 episodes. However, when Pattern II was compared to Pattern III patients, the index for the latter was 11.45 episodes lower. As for the facial type, brachyfacial patients had a mean AHI of 22.34, while dolichofacial patients had a statistically significantly lower index of 10.52. Conclusion: Patients' facial morphology influences OSA. Pattern II and brachyfacial patients had greater AHI, while Pattern III patients showed a lower index.

  15. Unsupervised Feature Selection Based on the Morisita Index

    Science.gov (United States)

    Golay, Jean; Kanevski, Mikhail

    2016-04-01

    Recent breakthroughs in technology have radically improved our ability to collect and store data. As a consequence, the size of datasets has been increasing rapidly both in terms of number of variables (or features) and number of instances. Since the mechanism of many phenomena is not well known, too many variables are sampled. A lot of them are redundant and contribute to the emergence of three major challenges in data mining: (1) the complexity of result interpretation, (2) the necessity to develop new methods and tools for data processing, (3) the possible reduction in the accuracy of learning algorithms because of the curse of dimensionality. This research deals with a new algorithm for selecting the smallest subset of features conveying all the information of a dataset (i.e. an algorithm for removing redundant features). It is a new version of the Fractal Dimensionality Reduction (FDR) algorithm [1] and it relies on two ideas: (a) In general, data lie on non-linear manifolds of much lower dimension than that of the spaces where they are embedded. (b) The situation described in (a) is partly due to redundant variables, since they do not contribute to increasing the dimension of manifolds, called Intrinsic Dimension (ID). The suggested algorithm implements these ideas by selecting only the variables influencing the data ID. Unlike the FDR algorithm, it resorts to a recently introduced ID estimator [2] based on the Morisita index of clustering and to a sequential forward search strategy. Consequently, in addition to its ability to capture non-linear dependences, it can deal with large datasets and its implementation is straightforward in any programming environment. Many real world case studies are considered. They are related to environmental pollution and renewable resources. References [1] C. Traina Jr., A.J.M. Traina, L. Wu, C. Faloutsos, Fast feature selection using fractal dimension, in: Proceedings of the XV Brazilian Symposium on Databases, SBBD, pp. 158

  16. In search of Leonardo: computer-based facial image analysis of Renaissance artworks for identifying Leonardo as subject

    Science.gov (United States)

    Tyler, Christopher W.; Smith, William A. P.; Stork, David G.

    2012-03-01

    One of the enduring mysteries in the history of the Renaissance is the adult appearance of the archetypical "Renaissance Man," Leonardo da Vinci. His only acknowledged self-portrait is from an advanced age, and various candidate images of younger men are difficult to assess given the absence of documentary evidence. One clue about Leonardo's appearance comes from the remark of the contemporary historian, Vasari, that the sculpture of David by Leonardo's master, Andrea del Verrocchio, was based on the appearance of Leonardo when he was an apprentice. Taking a cue from this statement, we suggest that the more mature sculpture of St. Thomas, also by Verrocchio, might likewise have been a portrait of Leonardo. We tested the possibility that Leonardo was the subject of Verrocchio's sculpture using a novel computational technique for the comparison of three-dimensional facial configurations. Based on quantitative measures of similarity, we also assess whether another pair of candidate two-dimensional images are plausibly attributable as portraits of Leonardo as a young adult. Our results are consistent with the claim that Leonardo is indeed the subject in these works, but we need comparisons with images in a larger corpus of candidate artworks before our results achieve statistical significance.

  17. Recurrent unilateral facial nerve palsy in a child with dehiscent facial nerve canal

    Directory of Open Access Journals (Sweden)

    Christopher Liu

    2016-12-01

    Full Text Available Objective: The dehiscent facial nerve canal has been well documented in histopathological studies of temporal bones as well as in clinical setting. We describe clinical and radiologic features of a child with recurrent facial nerve palsy and dehiscent facial nerve canal. Methods: Retrospective chart review. Results: A 5-year-old male was referred to the otolaryngology clinic for evaluation of recurrent acute otitis media and hearing loss. He also developed recurrent left peripheral FN palsy associated with episodes of bilateral acute otitis media. High resolution computed tomography of the temporal bones revealed incomplete bony coverage of the tympanic segment of the left facial nerve. Conclusions: Recurrent peripheral FN palsy may occur in children with recurrent acute otitis media in the presence of a dehiscent facial nerve canal. Facial nerve canal dehiscence should be considered in the differential diagnosis of children with recurrent peripheral FN palsy.

  18. Do Facial Expressions Develop before Birth?

    Science.gov (United States)

    Reissland, Nadja; Francis, Brian; Mason, James; Lincoln, Karen

    2011-01-01

    Background Fetal facial development is essential not only for postnatal bonding between parents and child, but also theoretically for the study of the origins of affect. However, how such movements become coordinated is poorly understood. 4-D ultrasound visualisation allows an objective coding of fetal facial movements. Methodology/Findings Based on research using facial muscle movements to code recognisable facial expressions in adults and adapted for infants, we defined two distinct fetal facial movements, namely “cry-face-gestalt” and “laughter-gestalt,” both made up of up to 7 distinct facial movements. In this conceptual study, two healthy fetuses were then scanned at different gestational ages in the second and third trimester. We observed that the number and complexity of simultaneous movements increased with gestational age. Thus, between 24 and 35 weeks the mean number of co-occurrences of 3 or more facial movements increased from 7% to 69%. Recognisable facial expressions were also observed to develop. Between 24 and 35 weeks the number of co-occurrences of 3 or more movements making up a “cry-face gestalt” facial movement increased from 0% to 42%. Similarly the number of co-occurrences of 3 or more facial movements combining to a “laughter-face gestalt” increased from 0% to 35%. These changes over age were all highly significant. Significance This research provides the first evidence of developmental progression from individual unrelated facial movements toward fetal facial gestalts. We propose that there is considerable potential of this method for assessing fetal development: Subsequent discrimination of normal and abnormal fetal facial development might identify health problems in utero. PMID:21904607

  19. Research of image retrieval technology based on color feature

    Science.gov (United States)

    Fu, Yanjun; Jiang, Guangyu; Chen, Fengying

    2009-10-01

    Recently, with the development of communication and computer technology and the improvement of storage technology and the capability of digital imaging equipment, more image resources are available to us than ever, so a solution for locating the desired image quickly and accurately is needed. The early method was to set up keywords for searching in the database, but this method becomes very difficult when far more pictures must be searched. In order to overcome the limitations of the traditional searching method, content-based image retrieval technology emerged, and it is now a hot research subject. Color image retrieval is an important part of it, and color is the most important feature for color image retrieval. Three key questions on how to make use of the color characteristic are discussed in the paper: the representation of color, the extraction of color features, and the measurement of similarity based on color. On this basis, the extraction of the color histogram feature is discussed in particular. Considering the advantages and disadvantages of the overall histogram and the partition histogram, a new method based on the partition-overall histogram is proposed. The basic idea is to divide the image space according to a certain strategy and then calculate the color histogram of each block as the color feature of that block. Users choose the blocks that contain important spatial information and confirm their weight values. The system calculates the distance between the corresponding blocks that the users chose; the other blocks are merged into partial overall histograms again, and the distance between them is also calculated. All the distances are then accumulated as the real distance between two pictures. The partition-overall histogram comprehensively utilizes the advantages of the two methods above: choosing blocks makes the feature contain more spatial information, which can improve performance; the distances between partition-overall histogram
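
    A condensed sketch of the partition-overall idea just described: per-block colour histograms on a regular grid, user-selected important blocks compared block-to-block, the remaining blocks merged into a single overall histogram, and all distances accumulated. The grid size, bin count, weights and the L1 distance are illustrative choices, not the paper's exact settings.

        import numpy as np

        def block_histograms(img, grid=4, bins=8):
            """Per-block colour histograms on a grid x grid partition of an RGB image."""
            h, w, _ = img.shape
            hists = {}
            for i in range(grid):
                for j in range(grid):
                    blk = img[i * h // grid:(i + 1) * h // grid,
                              j * w // grid:(j + 1) * w // grid]
                    hist, _ = np.histogramdd(blk.reshape(-1, 3), bins=bins,
                                             range=[(0, 256)] * 3)
                    hists[(i, j)] = hist.ravel() / hist.sum()
            return hists

        def partition_overall_distance(h1, h2, important, w_block=1.0):
            """Accumulate block distances for chosen blocks plus one merged-histogram
            distance for everything else (L1 distance throughout). Assumes at least
            one block is left outside `important`."""
            d = sum(w_block * np.abs(h1[b] - h2[b]).sum() for b in important)
            rest = [b for b in h1 if b not in important]
            m1 = np.mean([h1[b] for b in rest], axis=0)
            m2 = np.mean([h2[b] for b in rest], axis=0)
            return d + np.abs(m1 - m2).sum()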

  20. Vehicle Unsteady Dynamics Characteristics Based on Tire and Road Features

    Directory of Open Access Journals (Sweden)

    Bin Ma

    2013-01-01

    Full Text Available During automotive-related accidents, the tire and road play an important role in vehicle unsteady dynamics as they have a significant impact on the sliding friction. The calculation of the rubber viscoelastic energy loss modulus and the true contact area model is improved based on the true contact area and rubber viscoelastic theory. A 10 DOF full vehicle dynamic model that takes the kinetic sliding friction coefficient into account, with good accuracy and realism, is developed. A stability test is carried out to evaluate the effectiveness of the model, and simulation tests are done in MATLAB to analyze the impact of tire features and road self-affine characteristics on sport utility vehicle (SUV) unsteady dynamics under different weights. The findings show that it is of great significance to analyze SUV dynamics with different tires on different roads, which may provide useful insights into systematically resolving the explicit-implicit features of tire prints and designing active safety systems.

  1. Wavelet based feature extraction and visualization in hyperspectral tissue characterization.

    Science.gov (United States)

    Denstedt, Martin; Bjorgan, Asgeir; Milanič, Matija; Randeberg, Lise Lyngsnes

    2014-12-01

    Hyperspectral images of tissue contain extensive and complex information relevant for clinical applications. In this work, wavelet decomposition is explored for feature extraction from such data. Wavelet methods are simple and computationally effective, and can be implemented in real time. The aim of this study was to correlate results from wavelet decomposition in the spectral domain with physical parameters (tissue oxygenation, blood and melanin content). Wavelet decomposition was tested on Monte Carlo simulations, measurements of a tissue phantom and hyperspectral data from a human volunteer during an occlusion experiment. Reflectance spectra were decomposed, and the coefficients were correlated to tissue parameters. This approach was used to identify wavelet components that can be utilized to map levels of blood, melanin and oxygen saturation. The results show a statistically significant correlation between the tissue parameters and selected wavelet components. The tissue parameters could be mapped using a subset of the calculated components due to redundancy in spectral information. Vessel structures are well visualized. Wavelet analysis appears to be a promising tool for extraction of spectral features in skin. Future studies will aim at developing quantitative mapping of optical properties based on wavelet decomposition.
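
    A minimal sketch of the decomposition step, assuming one reflectance spectrum per pixel: a multilevel 1-D discrete wavelet transform groups coefficients by scale so that individual components can later be correlated with oxygenation, blood or melanin estimates. The wavelet family and level count are illustrative, not those used in the study.

        import numpy as np
        import pywt

        def spectral_wavelet_features(reflectance, wavelet="sym4", level=4):
            """Multilevel 1-D DWT of one reflectance spectrum, grouped by scale."""
            coeffs = pywt.wavedec(np.asarray(reflectance, dtype=float), wavelet,
                                  level=level)
            return {f"level_{i}": c for i, c in enumerate(coeffs)}

        # toy usage: a smooth synthetic "spectrum" with one absorption dip
        wl = np.linspace(450, 1000, 256)
        spectrum = 0.6 - 0.2 * np.exp(-((wl - 560) ** 2) / 400.0)
        features = spectral_wavelet_features(spectrum)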

  2. Unfakeable facial configurations affect strategic choices in trust games with or without information about past behavior.

    Directory of Open Access Journals (Sweden)

    Constantin Rezlescu

    Full Text Available Many human interactions are built on trust, so widespread confidence in first impressions generally favors individuals with trustworthy-looking appearances. However, few studies have explicitly examined: 1) the contribution of unfakeable facial features to trust-based decisions, and 2) how these cues are integrated with information about past behavior. Using highly controlled stimuli and an improved experimental procedure, we show that unfakeable facial features associated with the appearance of trustworthiness attract higher investments in trust games. The facial trustworthiness premium is large for decisions based solely on faces, with trustworthy identities attracting 42% more money (Study 1), and remains significant though reduced to 6% when reputational information is also available (Study 2). The face trustworthiness premium persists with real (rather than virtual) currency and when higher payoffs are at stake (Study 3). Our results demonstrate that cooperation may be affected not only by controllable appearance cues (e.g., clothing, facial expressions) as shown previously, but also by features that are impossible to mimic (e.g., individual facial structure). This unfakeable face trustworthiness effect is not limited to the rare situations where people lack any information about their partners, but survives in richer environments where relevant details about partner past behavior are available.

  3. Unfakeable facial configurations affect strategic choices in trust games with or without information about past behavior.

    Science.gov (United States)

    Rezlescu, Constantin; Duchaine, Brad; Olivola, Christopher Y; Chater, Nick

    2012-01-01

    Many human interactions are built on trust, so widespread confidence in first impressions generally favors individuals with trustworthy-looking appearances. However, few studies have explicitly examined: 1) the contribution of unfakeable facial features to trust-based decisions, and 2) how these cues are integrated with information about past behavior. Using highly controlled stimuli and an improved experimental procedure, we show that unfakeable facial features associated with the appearance of trustworthiness attract higher investments in trust games. The facial trustworthiness premium is large for decisions based solely on faces, with trustworthy identities attracting 42% more money (Study 1), and remains significant though reduced to 6% when reputational information is also available (Study 2). The face trustworthiness premium persists with real (rather than virtual) currency and when higher payoffs are at stake (Study 3). Our results demonstrate that cooperation may be affected not only by controllable appearance cues (e.g., clothing, facial expressions) as shown previously, but also by features that are impossible to mimic (e.g., individual facial structure). This unfakeable face trustworthiness effect is not limited to the rare situations where people lack any information about their partners, but survives in richer environments where relevant details about partner past behavior are available.

  4. Global Contrast Enhancement Based Image Forensics Using Statistical Features

    Directory of Open Access Journals (Sweden)

    Neetu Singh

    2017-01-01

    Full Text Available The evolution of modern cameras and mobile phones equipped with sophisticated image-editing software has revolutionized digital imaging. In the process of image editing, contrast enhancement is a very common technique for hiding visual traces of tampering. In our work, we employ the statistical distributions of the block variances and AC DCT coefficients of an image to detect global contrast enhancement. The variations in the statistical parameters of the block variance and AC DCT coefficient distributions for different degrees of contrast enhancement are used as features to detect contrast enhancement. An SVM classifier with 10-fold cross-validation is employed. An overall detection accuracy greater than 99% with a false rate of less than 2% has been achieved. The proposed method is novel and can be applied to uncompressed, previously JPEG-compressed and post-enhancement JPEG-compressed images with high accuracy. The proposed method does not employ the oft-repeated image-histogram-based approach.
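
    A sketch of the two feature families mentioned here — 8x8 block variances and AC DCT coefficients — reduced to simple histogram features that could feed an SVM. The bin counts and normalisation are illustrative; the exact statistical parameters and classifier training of the paper are not reproduced.

        import numpy as np
        from scipy.fftpack import dct

        def block_stats(img, size=8, bins=32):
            """Histograms of block variances and of AC DCT coefficient magnitudes
            computed over non-overlapping size x size blocks of a grayscale image."""
            img = np.asarray(img, dtype=float)
            h, w = img.shape[0] // size * size, img.shape[1] // size * size
            blocks = (img[:h, :w]
                      .reshape(h // size, size, w // size, size)
                      .swapaxes(1, 2)
                      .reshape(-1, size, size))
            variances = blocks.var(axis=(1, 2))
            ac = []
            for b in blocks:
                d = dct(dct(b, axis=0, norm="ortho"), axis=1, norm="ortho")
                d[0, 0] = 0.0                      # drop the DC term, keep the AC terms
                ac.append(np.abs(d).ravel())
            var_hist, _ = np.histogram(variances, bins=bins, density=True)
            ac_hist, _ = np.histogram(np.concatenate(ac), bins=bins, density=True)
            return np.concatenate([var_hist, ac_hist])   # feature vector for an SVM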

  5. Unsupervised Posture Modeling Based on Spatial-Temporal Movement Features

    Science.gov (United States)

    Yan, Chunjuan

    Traditional posture modeling for human action recognition is based on silhouette segmentation, which is subject to noise from illumination variation, posture occlusions and shadow interruptions. In this paper, we extract spatial-temporal movement features from human actions and adopt an unsupervised clustering method for salient posture learning. First, spatial-temporal interest points (STIPs) are extracted according to the properties of human movement; then a histogram of gradients is built to describe the distribution of STIPs in each frame for a single pose. In addition, the training samples are clustered by an unsupervised classification method, and the salient postures are modeled with a GMM using Expectation-Maximization (EM) estimation. The experimental results show that our method can effectively and accurately recognize human action postures.
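
    A compact sketch of the last two stages described above, assuming the STIP detection and per-frame HOG descriptors already exist: the descriptors are clustered without supervision by a Gaussian mixture fitted with EM, and each frame receives a salient-posture label. The component count and covariance type are illustrative.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def learn_salient_postures(frame_descriptors, n_postures=6, seed=0):
            """Fit a GMM (EM estimation) to per-frame pose descriptors and return the
            model plus the posture label assigned to every frame."""
            X = np.asarray(frame_descriptors, dtype=float)
            gmm = GaussianMixture(n_components=n_postures, covariance_type="diag",
                                  random_state=seed)
            labels = gmm.fit_predict(X)        # EM fit, then most likely component
            return gmm, labels

        # toy usage: random stand-ins for HOG-of-STIP descriptors of 200 frames
        rng = np.random.default_rng(0)
        gmm, labels = learn_salient_postures(rng.normal(size=(200, 36)))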

  6. 9 Mb familial duplication in chromosome band Xp22.2-22.13 associated with mental retardation, hypotonia and developmental delay, scoliosis, cardiovascular problems and mild dysmorphic facial features.

    Science.gov (United States)

    Sismani, Carolina; Anastasiadou, Violetta; Kousoulidou, Ludmila; Parkel, Sven; Koumbaris, George; Zilina, Olga; Bashiardes, Stavros; Spanou, Elena; Kurg, Ants; Patsalis, Philippos C

    2011-01-01

    We report on a family with syndromic X-linked mental retardation (XLMR) caused by an Xp22.2-22.13 duplication. This family consists of a carrier mother and daughter and four affected sons, presenting with mental retardation, developmental delay, cardiovascular problems and mild dysmorphic facial features. Female carriers have normal intelligence and some common clinical features, as well as different clinical abnormalities. Cytogenetic analysis of the mother showed an Xp22.2 duplication which was passed to all her offspring. Fluorescence In Situ Hybridization (FISH) using whole chromosome paint and Bacterial Artificial Chromosome (BAC) clones covering Xp22.12-Xp22.3 region, confirmed the X chromosome origin and the size of the duplication. Two different targeted microarray methodologies were used for breakpoint confirmation, resulting in the localization of the duplication to approximately 9.75-18.98 Mb. Detailed description of such rare duplications provides valuable data for the investigation of genetic disease etiology. Copyright © 2011 Elsevier Masson SAS. All rights reserved.

  7. Are Rich People Perceived as More Trustworthy? Perceived Socioeconomic Status Modulates Judgments of Trustworthiness and Trust Behavior Based on Facial Appearance

    Directory of Open Access Journals (Sweden)

    Yue Qi

    2018-04-01

    Full Text Available In the era of globalization, people meet strangers from different countries more often than ever. Previous research indicates that impressions of trustworthiness based on facial appearance play an important role in interpersonal cooperation behaviors. The current study examined whether additional information about socioeconomic status (SES), including national prosperity and individual monthly income, affects facial judgments and appearance-based trust decisions. Besides reproducing previous conclusions that trustworthy faces receive more money than untrustworthy faces, the present study showed that high-income individuals were judged as more trustworthy than low-income individuals, and also were given more money in a trust game. However, trust behaviors were not modulated by the nationality of the faces. The present research suggests that people are more likely to trust strangers with a high income, compared with individuals with a low income.

  8. Improving scale invariant feature transform-based descriptors with shape-color alliance robust feature

    Science.gov (United States)

    Wang, Rui; Zhu, Zhengdan; Zhang, Liang

    2015-05-01

    Constructing appropriate descriptors for interest points in image matching is a critical task in computer vision and pattern recognition. A method that extends the scale-invariant feature transform (SIFT) descriptor, called the shape-color alliance robust feature (SCARF) descriptor, is presented. To address the problem that SIFT is designed mainly for gray images and lacks global information about feature points, the proposed approach improves the SIFT descriptor by means of a concentric-rings model and integrates the color invariant space and shape context with SIFT to construct the SCARF descriptor. The SCARF method is more robust than conventional SIFT not only to color and photometric variations but also in measuring similarity as a global variation between two shapes. A comparative evaluation of different descriptors shows that the SCARF approach provides better results than four other state-of-the-art methods.
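    As a rough illustration of the general idea (not the SCARF implementation itself), one can append color statistics gathered over concentric rings around each keypoint to the plain SIFT vector; the ring radii, histogram bins, and input file name below are assumptions.

```python
# Rough sketch of the SCARF idea (not the authors' implementation): augment each
# SIFT descriptor with color statistics gathered over concentric rings around
# the keypoint, so the descriptor carries some global/color information.
import cv2
import numpy as np

def ring_color_hist(hsv, kp, radii=(8, 16, 24), bins=8):
    """Hue histograms over concentric rings centered on a keypoint."""
    h, w = hsv.shape[:2]
    x, y = int(round(kp.pt[0])), int(round(kp.pt[1]))
    yy, xx = np.mgrid[0:h, 0:w]
    dist = np.hypot(xx - x, yy - y)
    feats, inner = [], 0
    for r in radii:
        mask = (dist >= inner) & (dist < r)
        hist, _ = np.histogram(hsv[..., 0][mask], bins=bins, range=(0, 180))
        feats.append(hist / max(hist.sum(), 1))   # normalize each ring
        inner = r
    return np.concatenate(feats)

img = cv2.imread("example.jpg")               # hypothetical input image with keypoints
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

sift = cv2.SIFT_create()
kps, desc = sift.detectAndCompute(gray, None)

# Concatenate the 128-D SIFT vector with the ring color histograms.
augmented = np.hstack([desc, np.array([ring_color_hist(hsv, kp) for kp in kps])])
print(augmented.shape)
```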

  9. A Cross-Sectional Clinic-Based Study in Patients With Side-Locked Unilateral Headache and Facial Pain.

    Science.gov (United States)

    Prakash, Sanjay; Rathore, Chaturbhuj; Makwana, Prayag; Dave, Ankit

    2016-07-01

    To undertake an epidemiological evaluation of patients presenting with side-locked headache and facial pain in a tertiary neurology outpatient clinic. Side-locked unilateral headache and facial pain include a large number of primary and secondary headaches and cranial neuropathies. A diagnostic approach for patients presenting with strictly unilateral headaches is important, as many of these headache disorders respond to highly selective drugs. Epidemiological data may guide us in formulating a proper approach for such patients; however, the literature on strictly unilateral headache and facial pain is sparse. We prospectively recruited 307 consecutive adult patients (>18 years) with side-locked headache and facial pain presenting to a neurology outpatient clinic between July 2014 and December 2015. All patients underwent brain MRI and other investigations to identify secondary causes. Diagnoses were made jointly by at least two headache specialists. All patients were classified according to the International Classification of Headache Disorders, third edition (ICHD-3β). The mean age at the time of examination was 42.4 ± 13.6 years (range 18-80 years). Forty-eight percent of patients were male. Strictly unilateral headaches accounted for 19.2% of the headaches seen in the clinic. Headaches were classified as primary in 58%, secondary in 18%, and cranial neuropathies and other facial pain in 16% of patients. Five percent of patients could not be classified, and 3% were classified as per the Appendix section of the ICHD-3β. The prevalence of secondary headaches and painful cranial neuropathies increased with age. A total of 36 different diagnoses were made. Only two diseases (migraine and cluster headache) had a prevalence of more than 10%; the prevalence of 13 diseases varied between 6 and 9%, and the prevalence of the other 14 groups was ≤1%. Migraine was the most common diagnosis (15%). Cervicogenic headache

  10. Three-dimensional facial analyses of Indian and Malaysian women.

    Science.gov (United States)

    Kusugal, Preethi; Ruttonji, Zarir; Gowda, Roopa; Rajpurohit, Ladusingh; Lad, Pritam; Ritu

    2015-01-01

    Facial measurements serve as a valuable tool in treatment planning for maxillofacial rehabilitation, orthodontic treatment, and orthognathic surgery. The esthetic guidelines of the face are still based on the neoclassical canons used in ancient art. These canons are considered highly subjective, and there is ample evidence in the literature questioning whether they can be applied to modern populations. This study analyzed the facial features of Indian and Malaysian women using a three-dimensional (3D) scanner to determine the prevalence of the neoclassical facial esthetic canons in both groups. The study was carried out on 60 women aged 18-25 years, of whom 30 were Indian and 30 Malaysian. Sixteen facial measurements were taken using a noncontact 3D scanner. An unpaired t-test was used to compare facial measurements between Indian and Malaysian females, and a two-tailed Fisher exact test was used to determine the prevalence of the neoclassical canons. The orbital canon was present in 80% of Malaysian women but in only 16% of Indian women (P = 0.00013). About 43% of Malaysian women exhibited the orbitonasal canon (P = 0.0470), whereas the nasoaural canon was prevalent in 73% of Malaysian and 33% of Indian women (P = 0.0068). The orbital, orbitonasal, and nasoaural canons were more prevalent in Malaysian women. The facial profile, nasooral, and nasofacial canons were not seen in either group. Though some canons provide guidelines for esthetic analysis of the face, complete reliance on them is not justifiable.

  11. Object Recognition using Feature- and Color-Based Methods

    Science.gov (United States)

    Duong, Tuan; Duong, Vu; Stubberud, Allen

    2008-01-01

    An improved adaptive method of processing image data in an artificial neural network has been developed to enable automated, real-time recognition of possibly moving objects under changing (including suddenly changing) conditions of illumination and perspective. The method combines two prior object-recognition methods, one based on adaptive detection of shape features and one based on adaptive color segmentation, to enable recognition in situations in which either prior method by itself may be inadequate. The chosen prior feature-based method is known as adaptive principal-component analysis (APCA); the chosen prior color-based method is known as adaptive color segmentation (ACOSE). These methods are made to interact with each other in a closed-loop system to obtain an optimal solution of the object-recognition problem in a dynamic environment. One result of the interaction is to increase, beyond what would otherwise be possible, the accuracy of determining a region of interest (containing an object that one seeks to recognize) within an image. Another result is to provide a minimized adaptive step that can be used to update the results obtained by the two component methods when changes of color and apparent shape occur. The net effect is to enable the neural network to update its recognition output and improve its recognition capability via an adaptive learning sequence. In principle, the improved method could readily be implemented in integrated circuitry to make a compact, low-power, real-time object-recognition system. It has been proposed to demonstrate the feasibility of such a system by integrating a 256-by-256 active-pixel sensor with APCA, ACOSE, and neural processing circuitry on a single chip. It has been estimated that such a system on a chip would have a volume no larger than a few cubic centimeters, could operate at a rate as high as 1,000 frames per second, and would consume power on the order of milliwatts.

  12. Deletion of 11q12.3-11q13.1 in a patient with intellectual disability and childhood facial features resembling Cornelia de Lange syndrome.

    Science.gov (United States)

    Boyle, Martine Isabel; Jespersgaard, Cathrine; Nazaryan, Lusine; Ravn, Kirstine; Brøndum-Nielsen, Karen; Bisgaard, Anne-Marie; Tümer, Zeynep

    2015-11-01

    Deletions within 11q12.3-11q13.1 are very rare and to date only two cases have been described in the literature. In this study we describe a 23-year-old male patient with intellectual disability, behavioral problems, dysmorphic features, dysphagia, gastroesophageal reflux and skeletal abnormalities. Cornelia de Lange syndrome (CdLS, OMIM #122470; #300590; #610759; #300882; #614701) was suggested as a differential diagnosis in childhood although he lacked some of the features typical for this disorder. He does not have a mutation in any of the five known CdLS genes (NIPBL, SMC1A, SMC3, HDAC8, RAD21), but a 1.6 Mb deletion at chromosome region 11q12.3-11q13.1 was detected by chromosome microarray. The deletion contains several genes including PPP2R5B, which has been associated with intellectual disability and overgrowth; NRXN2, which has been associated with intellectual disability and autism spectrum disorder; and CDCA5, which is part of the cohesin pathway, as are all five known CdLS genes. It is therefore possible that deletion of CDCA5 may account for some of the CdLS-like features of the present case. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. Additivity of Feature-based and Symmetry-based Grouping Effects in Multiple Object Tracking

    Directory of Open Access Journals (Sweden)

    Chundi eWang

    2016-05-01

    Full Text Available Multiple object tracking (MOT is an attentional process wherein people track several moving targets among several distractors. Symmetry, an important indicator of regularity, is a general spatial pattern observed in natural and artificial scenes. According to the laws of perceptual organization proposed by Gestalt psychologists, regularity is a principle of perceptual grouping, like similarity and closure. A great deal of research has reported that feature-based similarity grouping (e.g., grouping based on color, size, or shape among targets in MOT tasks can improve tracking performance. However, no additive feature-based grouping effects have been reported where the tracked objects had two or more features; an additive effect refers to a greater grouping effect produced by grouping based on multiple cues instead of one. Can spatial symmetry produce a grouping effect similar to that of feature similarity in MOT tasks? Are the grouping effects based on symmetry and feature similarity additive? This study includes four experiments addressing these questions. The results of Experiments 1 and 2 demonstrated automatic symmetry-based grouping effects. More importantly, an additive grouping effect of symmetry and feature similarity was observed in Experiments 3 and 4. Our findings indicate that symmetry can produce an enhanced grouping effect in MOT and facilitate the grouping effect based on color or shape similarity. The where and what pathways might have played an important role in the additive grouping effect.

  14. A Motion-Based Feature for Event-Based Pattern Recognition.

    Science.gov (United States)

    Clady, Xavier; Maro, Jean-Matthieu; Barré, Sébastien; Benosman, Ryad B

    2016-01-01

    This paper introduces an event-based, luminance-free feature computed from the output of asynchronous event-based neuromorphic retinas. The feature maps the distribution of the optical flow along the contours of the moving objects in the visual scene into a matrix. Asynchronous event-based neuromorphic retinas are composed of autonomous pixels, each of them asynchronously generating "spiking" events that encode relative changes in the pixel's illumination at high temporal resolution. The optical flow is computed at each event and is integrated locally or globally into a grid defined over a speed-and-direction coordinate frame, using speed-tuned temporal kernels. The latter ensure that the resulting feature equitably represents the distribution of the normal motion along the current moving edges, whatever their respective dynamics. The usefulness and generality of the proposed feature are demonstrated in pattern recognition applications: local corner detection and global gesture recognition.
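    A minimal sketch of the pooling step, assuming per-event flow vectors are already available: accumulate them into a normalized direction-by-speed grid. The bin counts and speed cap are illustrative, and the speed-tuned temporal kernels of the paper are omitted.

```python
# Illustrative sketch (not the authors' code) of pooling per-event optical-flow
# vectors into a speed/direction grid, the core of the motion-based feature.
import numpy as np

def motion_feature(vx, vy, n_dir=16, n_speed=8, max_speed=10.0):
    """Histogram of flow vectors in a direction x speed coordinate frame."""
    speed = np.hypot(vx, vy)
    angle = np.mod(np.arctan2(vy, vx), 2 * np.pi)
    d_bin = np.minimum((angle / (2 * np.pi) * n_dir).astype(int), n_dir - 1)
    s_bin = np.minimum((speed / max_speed * n_speed).astype(int), n_speed - 1)
    grid = np.zeros((n_dir, n_speed))
    np.add.at(grid, (d_bin, s_bin), 1.0)
    return grid / max(grid.sum(), 1.0)   # normalize so different dynamics stay comparable

rng = np.random.default_rng(1)
vx, vy = rng.normal(size=1000), rng.normal(size=1000)   # stand-in flow per event
print(motion_feature(vx, vy).shape)
```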

  15. A smile can reveal your age: enabling facial dynamics in age estimation

    NARCIS (Netherlands)

    Dibeklioğlu, H.; Gevers, T.; Salah, A.A.; Valenti, R.

    2012-01-01

    Estimation of a person's age from the facial image has many applications, ranging from biometrics and access control to cosmetics and entertainment. Many image-based methods have been proposed for this problem. In this paper, we propose a method for the use of dynamic features in age estimation, and

  16. Facial assessments: identifying the suitable pathway to facial rejuvenation.

    Science.gov (United States)

    Weinkle, S

    2006-05-01

    There are now numerous ways in which a patient can rejuvenate their facial appearance, including various types of expensive, invasive, surgical procedures, and an ever increasing gamut of products that can be inserted or injected beneath the skin to restore a youthful look to the face. The importance of facial assessments in identifying the most suitable treatment option is discussed here. Before a patient commits to any one of these corrective options, it is the responsibility of the physician to conduct a thorough assessment of the patient's face. All of the facial characteristics should be examined closely: underlying bone and musculature, shape, proportion, and features including folds, wrinkles, fine lines, volume deficits and changes in pigmentation. The degree of ptosis in the facial tissues should be assessed by light palpation. Following assessment of the face, digital photographs should be taken of the patient's full face and profile, allowing the physician to indicate areas on a visual display that need correction; there are now computer programs which can 'morph' the features of a facial photograph, providing an approximation of the post-treatment result. Shape and proportion are neglected facets in the assessment of the face prior to corrective treatment: a treatment or technique which rejuvenates a 'thin' face may not work so successfully on a 'round' face, and vice versa. Most importantly, the physician should aim to understand the patient's objective and subjective perceptions of their face and ascertain the results desired by the patient before evaluating what can be achieved. Appropriate corrective options can then be discussed in detail, highlighting the risks, side effects, costs, invasiveness, logistics and anticipated outcomes of each. A comprehensive assessment of the patient's face allows the physician to formulate a regimen of treatments that will reach or exceed the expectations of the patient.

  17. Topological Embedding Feature Based Resource Allocation in Network Virtualization

    Directory of Open Access Journals (Sweden)

    Hongyan Cui

    2014-01-01

    Full Text Available Virtualization provides a powerful way to run multiple virtual networks on a shared substrate network, which requires accurate and efficient mathematical models. Virtual network embedding is a key challenge in network virtualization. In this paper, considering the degree of convergence when mapping a virtual network onto a substrate network, we propose a new embedding algorithm based on topology mapping convergence-degree, where convergence-degree refers to the adjacency degree of a virtual network's nodes when they are mapped onto a substrate network. The contributions of our method are as follows. Firstly, we map virtual nodes onto the substrate nodes with the maximum convergence-degree; simulation results show that the proposed algorithm largely enhances network utilization efficiency and decreases the complexity of the embedding problem. Secondly, we define the load balance rate to reflect the load balance of substrate links; the simulation results show that our algorithm achieves better load balance. Finally, based on the features of the star topology, we further improve our embedding algorithm and make it suitable for application to star topologies. Test results show that it outperforms previous approaches.

  18. Children and Facial Trauma

    Science.gov (United States)

    ... patient. It is important during treatment of facial fractures to be careful that the patient's facial appearance is minimally affected. Injuries to the teeth and surrounding dental structures: Isolated injuries to ...

  19. Deletion of 11q12.3-11q13.1 in a patient with intellectual disability and childhood facial features resembling Cornelia de Lange syndrome

    DEFF Research Database (Denmark)

    Boyle, Martine Isabel; Jespersgaard, Cathrine; Nazaryan, Lusine

    2015-01-01

    Deletions within 11q12.3-11q13.1 are very rare and to date only two cases have been described in the literature. In this study we describe a 23-year-old male patient with intellectual disability, behavioral problems, dysmorphic features, dysphagia, gastroesophageal reflux and skeletal abnormalities......), but a 1.6Mb deletion at chromosome region 11q12.3-11q13.1 was detected by chromosome microarray. The deletion contains several genes including PPP2R5B, which has been associated with intellectual disability and overgrowth; NRXN2, which has been associated with intellectual disability and autism spectrum...

  20. Grammar-based feature generation for time-series prediction

    CERN Document Server

    De Silva, Anthony Mihirana

    2015-01-01

    This book proposes a novel approach for time-series prediction using machine learning techniques with automatic feature generation. Application of machine learning techniques to predict time-series continues to attract considerable attention due to the difficulty of the prediction problems compounded by the non-linear and non-stationary nature of the real world time-series. The performance of machine learning techniques, among other things, depends on suitable engineering of features. This book proposes a systematic way for generating suitable features using context-free grammar. A number of feature selection criteria are investigated and a hybrid feature generation and selection algorithm using grammatical evolution is proposed. The book contains graphical illustrations to explain the feature generation process. The proposed approaches are demonstrated by predicting the closing price of major stock market indices, peak electricity load and net hourly foreign exchange client trade volume. The proposed method ...
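    A toy sketch of grammar-driven feature generation: a small context-free grammar is expanded at random into feature expressions over a time series. The grammar, operators, and depth limit below are illustrative assumptions, not the grammar used in the book, and no grammatical-evolution search is performed.

```python
# Toy sketch of grammar-based feature generation for a time series; the grammar
# and operators are illustrative placeholders.
import random
import numpy as np

GRAMMAR = {
    "<expr>": [["<expr>", "+", "<expr>"], ["<expr>", "*", "<expr>"], ["<term>"]],
    "<term>": [["lag(x, 1)"], ["lag(x, 5)"], ["np.convolve(x, np.ones(3)/3, 'same')"]],
}

def expand(symbol, depth=0):
    """Randomly expand a grammar symbol into a Python expression string."""
    if symbol not in GRAMMAR:
        return symbol
    rules = GRAMMAR[symbol] if depth < 3 else GRAMMAR[symbol][-1:]  # force termination
    return "".join(expand(s, depth + 1) for s in random.choice(rules))

def lag(x, k):
    """Shift the series by k steps, padding with the first value."""
    return np.concatenate([np.repeat(x[0], k), x[:-k]])

x = np.cumsum(np.random.randn(200))          # synthetic "closing price" series
feature_expr = expand("<expr>")
feature = eval(feature_expr)                 # evaluate the generated feature column
print(feature_expr, feature.shape)
```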

  1. Learning image descriptors for matching based on Haar features

    Science.gov (United States)

    Chen, L.; Rottensteiner, F.; Heipke, C.

    2014-08-01

    This paper presents a new, fast binary descriptor for image matching, learned from Haar features. Training uses AdaBoost; the weak learner is built on the response function of Haar features instead of histogram-type features. Each weak classifier is selected from a large pool of weak features. The selected features differ in feature type, scale, and position within the patch, and each has a corresponding threshold value for its weak classifier. In addition, to cope with the fact that in real matching dissimilar pairs are encountered much more often than similar pairs, cascaded classifiers are trained so that the training algorithm sees a large number of dissimilar patch pairs. The final trained output is a set of binary-valued vectors, namely descriptors, together with the corresponding weights and perceptron threshold of the strong classifier in every stage. We present preliminary results which serve as a proof of concept of the work.
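    A simplified sketch of the descriptor idea: threshold a handful of two-rectangle Haar responses, computed on an integral image of a patch, into bits. In the actual method the feature positions, thresholds, and weights come from AdaBoost training; here they are fixed by hand for illustration.

```python
# Simplified sketch: a binary descriptor from thresholded Haar-like responses.
import numpy as np

def box_sum(ii, x, y, w, h):
    """Sum of pixels in a rectangle using an integral image (ii has a zero border)."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_binary_descriptor(patch, features, thresholds):
    # Integral image with a zero row/column prepended.
    ii = np.pad(patch, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    bits = []
    for (x, y, w, h), t in zip(features, thresholds):
        # Two-rectangle Haar feature: left half minus right half.
        left = box_sum(ii, x, y, w // 2, h)
        right = box_sum(ii, x + w // 2, y, w // 2, h)
        bits.append(1 if (left - right) > t else 0)
    return np.array(bits, dtype=np.uint8)

patch = np.random.rand(32, 32)
features = [(0, 0, 16, 16), (8, 8, 16, 16), (4, 16, 24, 8)]   # hand-picked (x, y, w, h)
thresholds = [0.0, 0.0, 0.0]                                   # would come from AdaBoost
print(haar_binary_descriptor(patch, features, thresholds))
```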

  2. A brain-computer interface for potential non-verbal facial communication based on EEG signals related to specific emotions.

    Science.gov (United States)

    Kashihara, Koji

    2014-01-01

    Unlike assistive technology for verbal communication, the brain-machine or brain-computer interface (BMI/BCI) has not been established as a non-verbal communication tool for amyotrophic lateral sclerosis (ALS) patients. Face-to-face communication enables access to rich emotional information, but individuals suffering from neurological disorders, such as ALS and autism, may not express their emotions or communicate their negative feelings. Although emotions may be inferred by looking at facial expressions, emotional prediction for neutral faces necessitates advanced judgment. The process that underlies brain neuronal responses to neutral faces and causes emotional changes remains unknown. To address this problem, therefore, this study attempted to decode conditioned emotional reactions to neutral face stimuli. This direction was motivated by the assumption that if electroencephalogram (EEG) signals can be used to detect patients' emotional responses to specific inexpressive faces, the results could be incorporated into the design and development of BMI/BCI-based non-verbal communication tools. To these ends, this study investigated how a neutral face associated with a negative emotion modulates rapid central responses in face processing and then identified cortical activities. The conditioned neutral face-triggered event-related potentials that originated from the posterior temporal lobe statistically significantly changed during late face processing (600-700 ms) after stimulus, rather than in early face processing activities, such as P1 and N170 responses. Source localization revealed that the conditioned neutral faces increased activity in the right fusiform gyrus (FG). This study also developed an efficient method for detecting implicit negative emotional responses to specific faces by using EEG signals. A classification method based on a support vector machine enables the easy classification of neutral faces that trigger specific individual emotions. In

  3. Research of image matching algorithm based on local features

    Science.gov (United States)

    Sun, Wei

    2015-07-01

    To address the low efficiency of the SIFT algorithm when an exhaustive search is used to find the nearest and second-nearest neighbors of feature points, this paper introduces a K-D tree algorithm that indexes the feature points extracted from database images in a tree structure. At the same time, using the concept of weighted priority, the algorithm is further improved to enhance the efficiency of feature matching.
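    A minimal sketch of nearest and second-nearest neighbor matching of SIFT descriptors through a K-D tree (here scipy's cKDTree) with a ratio test; the weighted-priority refinement described in the abstract is not reproduced, and the descriptors are random stand-ins.

```python
# K-D tree matching of SIFT-like descriptors, replacing exhaustive search.
import numpy as np
from scipy.spatial import cKDTree

def match_descriptors(desc_query, desc_db, ratio=0.8):
    tree = cKDTree(desc_db)                       # index the database descriptors
    dists, idx = tree.query(desc_query, k=2)      # nearest and second-nearest neighbors
    good = dists[:, 0] < ratio * dists[:, 1]      # Lowe-style ratio test
    return np.flatnonzero(good), idx[good, 0]

rng = np.random.default_rng(2)
desc_db = rng.random((1000, 128)).astype(np.float32)     # stand-in SIFT vectors
desc_query = rng.random((200, 128)).astype(np.float32)
q_idx, db_idx = match_descriptors(desc_query, desc_db)
print(len(q_idx), "tentative matches")
```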

  4. Near-duplicate Video Detection Algorithm Based on Global GSP Feature and Local ScSIFT Feature Fusion

    Science.gov (United States)

    Luan, Xidao; Xie, Yuxiang; He, Jingmeng; Zhang, Lili; Li, Chen; Zhang, Xin

    2018-01-01

    The main problem with near-duplicate video detection is its high computational complexity and low efficiency. Near-duplicate video detection methods based on global features run fast but have low accuracy; conversely, methods based on local features are accurate but computationally heavy and time-consuming. Therefore, a near-duplicate video detection algorithm combining a global GSP feature and a local ScSIFT feature is proposed. Firstly, the video clips and the query clips are discretized into sets of key-frame sequences, and the temporal information is recorded at the same time. Secondly, by filtering the video clips on the global Gaussian-Scale pyramid feature, similar video clips are selected as candidates to ensure high recall. Then, combined with the temporal features of the keyframes, the candidate videos are further examined using the local ScSIFT feature to obtain higher precision. Experimental results show that the proposed algorithm improves the accuracy of near-duplicate video detection while preserving its timeliness.

  5. Bayesian Information Criterion Based Feature Filtering for the Fusion of Multiple Features in High-Spatial-Resolution Satellite Scene Classification

    Directory of Open Access Journals (Sweden)

    Da Lin

    2015-01-01

    Full Text Available This paper presents a novel classification method for high-spatial-resolution satellite scene classification, introducing a Bayesian information criterion (BIC-based feature filtering process to further eliminate ambiguous and redundant information between multiple features. Firstly, two diverse and complementary feature descriptors are extracted to characterize the satellite scene. Then, sparse canonical correlation analysis (SCCA with a penalty function is employed to fuse the extracted feature descriptors and simultaneously remove the ambiguities and redundancies between them. After that, a two-phase BIC-based feature filtering process is designed to further filter out redundant information. In the first phase, we gradually impose a constraint on the loadings via an iterative process, to prevent the sparse correlation from descending below a lower confidence limit of the approximated canonical correlation. In the second phase, the BIC is used to conduct the feature filtering, setting the smallest loading in absolute value to zero in each iteration for all features. Lastly, a support vector machine with a pyramid match kernel is applied to obtain the final result. Experimental results on high-spatial-resolution satellite scenes demonstrate that the suggested approach achieves satisfactory classification accuracy.

  6. A novel malformation complex of bilateral and symmetric preaxial radial ray-thumb aplasia and lower limb defects with minimal facial dysmorphic features: a case report and literature review.

    Science.gov (United States)

    Al Kaissi, Ali; Klaushofer, Klaus; Krebs, Alexander; Grill, Franz

    2008-10-24

    Radial hemimelia is a congenital abnormality characterised by the partial or complete absence of the radius. Longitudinal hemimelia denotes the absence of one or more bones along the preaxial (medial) or postaxial (lateral) side of the limb. Preaxial limb defects occur more frequently in combination with microtia, esophageal atresia, anorectal atresia, heart defects, unilateral kidney dysgenesis, and some axial skeletal defects. Postaxial acrofacial dysostoses are characterised by distinctive facies and postaxial limb deficiencies involving the fifth finger, metacarpals, ulna, fibula, and metatarsals. The patient is an 8-year-old boy with minimal craniofacial dysmorphic features but profound upper-limb defects: bilateral and symmetrical absence of the radii and thumbs. In addition, there was unilateral tibio-fibular hypoplasia (hemimelia) associated with hypoplasia of the terminal phalanges and malsegmentation of the upper thoracic vertebrae, effectively causing the development of thoracic kyphosis. In the typical form of preaxial acrofacial dysostosis, there are aberrations in the development of the first and second branchial arches and limb buds, and the craniofacial dysmorphic features are characteristic, such as micrognathia, zygomatic hypoplasia, cleft palate, and preaxial limb defects. The term acrofacial dysostosis (AFD) was introduced by Nager and de Reynier in 1948 to distinguish the condition from mandibulofacial dysostosis. Neither the facial features nor the limb defects in our patient appear to be typical of previously reported cases of AFD. Our patient expands the phenotype of the syndromic preaxial limb malformation complex and might represent a new syndromic entity of mild naso-maxillary malformation in connection with an axial and extra-axial malformation complex.

  7. Exon prediction method based on improved period-3 feature strategy

    Science.gov (United States)

    Chen, Gong; Dou, Xiao-Ming; Zhu, Xi-Fang

    2017-07-01

    To improve the accuracy of protein-coding region (exon) prediction, an exon prediction algorithm based on near period-3 features is proposed. The near period-3 clustering power spectra of exons and introns are extracted as template features, and the DNA sequence is divided into frames that are moved along the sequence. Prediction is performed by comparing each frame with the template features using a Euclidean distance with different weights. By varying the features, their number, the frame length, and the gene-sequence weights, and comparing with the period-3 algorithm, experimental results show that the prediction accuracy of the proposed algorithm is better than that of the period-3 algorithm.
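    The period-3 property that such methods exploit can be illustrated directly: in a sliding window, the power of the DFT component at frequency 1/3 tends to be larger inside exons than in introns. The window length, step, and random test sequence below are assumptions, and the weighting scheme of the paper is not reproduced.

```python
# Illustrative computation of the period-3 signal from binary base-indicator sequences.
import numpy as np

def period3_power(seq, win=351, step=3):
    """Spectral power at period 3, per sliding window."""
    bases = "ACGT"
    indicators = np.array([[1.0 if c == b else 0.0 for c in seq] for b in bases])
    k = win // 3                              # DFT bin corresponding to period 3
    powers = []
    for start in range(0, indicators.shape[1] - win + 1, step):
        w = indicators[:, start:start + win]
        spectra = np.fft.fft(w, axis=1)
        powers.append(float(np.sum(np.abs(spectra[:, k]) ** 2)))
    return np.array(powers)

rng = np.random.default_rng(3)
seq = "".join(rng.choice(list("ACGT"), size=2000))   # random stand-in DNA sequence
print(period3_power(seq)[:5])
```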

  8. Palm-vein classification based on principal orientation features.

    Directory of Open Access Journals (Sweden)

    Yujia Zhou

    Full Text Available Personal recognition using palm-vein patterns has emerged as a promising alternative for human recognition because of its uniqueness, stability, live body identification, flexibility, and difficulty to cheat. With the expanding application of palm-vein pattern recognition, the corresponding growth of the database has resulted in a long response time. To shorten the response time of identification, this paper proposes a simple and useful classification for palm-vein identification based on principal direction features. In the registration process, the Gaussian-Radon transform is adopted to extract the orientation matrix and then compute the principal direction of a palm-vein image based on the orientation matrix. The database can be classified into six bins based on the value of the principal direction. In the identification process, the principal direction of the test sample is first extracted to ascertain the corresponding bin. One-by-one matching with the training samples is then performed in the bin. To improve recognition efficiency while maintaining better recognition accuracy, two neighborhood bins of the corresponding bin are continuously searched to identify the input palm-vein image. Evaluation experiments are conducted on three different databases, namely, PolyU, CASIA, and the database of this study. Experimental results show that the searching range of one test sample in PolyU, CASIA and our database by the proposed method for palm-vein identification can be reduced to 14.29%, 14.50%, and 14.28%, with retrieval accuracy of 96.67%, 96.00%, and 97.71%, respectively. With 10,000 training samples in the database, the execution time of the identification process by the traditional method is 18.56 s, while that by the proposed approach is 3.16 s. The experimental results confirm that the proposed approach is more efficient than the traditional method, especially for a large database.
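    A sketch of the search-space reduction (our own simplification, not the paper's code): quantize each template's principal direction into one of six bins at registration, then match a query only against its bin and the two neighboring bins.

```python
# Principal-direction binning with neighbor-bin search for palm-vein identification.
import numpy as np

N_BINS = 6

def orientation_bin(theta):
    """Map a principal direction in [0, pi) to one of six bins."""
    return int(theta / np.pi * N_BINS) % N_BINS

def candidate_indices(query_theta, gallery_bins):
    b = orientation_bin(query_theta)
    neighborhood = {b, (b - 1) % N_BINS, (b + 1) % N_BINS}
    return [i for i, gb in enumerate(gallery_bins) if gb in neighborhood]

rng = np.random.default_rng(4)
gallery_thetas = rng.uniform(0, np.pi, size=10000)       # stand-in principal directions
gallery_bins = [orientation_bin(t) for t in gallery_thetas]
cands = candidate_indices(query_theta=1.0, gallery_bins=gallery_bins)
print(f"{len(cands)} of {len(gallery_bins)} templates need one-by-one matching")
```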

  9. The aesthetic unit principle of facial aging.

    Science.gov (United States)

    Tan, Susan L; Brandt, Michael G; Yeung, Jeffrey C; Doyle, Philip C; Moore, Corey C

    2015-01-01

    Within-rater reliability was found to be very good (r = 0.88). Our data support the hypothesis that facial aesthetic unit separation influences perceived facial youthfulness among photographs of women. The presence of facial aesthetic unit separation results in a less youthful appearance. Based on these empirical data, the concept of facial aesthetic unit separation appears to play a significant role in perceived facial aging.

  10. Biometric features and privacy : condemned, based upon your finger print

    NARCIS (Netherlands)

    Bullee, Jan-Willem; Veldhuis, Raymond N.J.

    What information is available in biometric features besides that needed for the biometric recognition process? What if a biometric feature contains Personally Identifiable Information? Will the whole biometric system become a threat to privacy? This paper is an attempt to quantify the link between

  11. Feature Surfaces in Symmetric Tensor Fields Based on Eigenvalue Manifold.

    Science.gov (United States)

    Palacios, Jonathan; Yeh, Harry; Wang, Wenping; Zhang, Yue; Laramee, Robert S; Sharma, Ritesh; Schultz, Thomas; Zhang, Eugene

    2016-03-01

    Three-dimensional symmetric tensor fields have a wide range of applications in solid and fluid mechanics. Recent advances in the (topological) analysis of 3D symmetric tensor fields focus on degenerate tensors which form curves. In this paper, we introduce a number of feature surfaces, such as neutral surfaces and traceless surfaces, into tensor field analysis, based on the notion of eigenvalue manifold. Neutral surfaces are the boundary between linear tensors and planar tensors, and the traceless surfaces are the boundary between tensors of positive traces and those of negative traces. Degenerate curves, neutral surfaces, and traceless surfaces together form a partition of the eigenvalue manifold, which provides a more complete tensor field analysis than degenerate curves alone. We also extract and visualize the isosurfaces of tensor modes, tensor isotropy, and tensor magnitude, which we have found useful for domain applications in fluid and solid mechanics. Extracting neutral and traceless surfaces using the Marching Tetrahedra method can cause the loss of geometric and topological details, which can lead to false physical interpretation. To robustly extract neutral surfaces and traceless surfaces, we develop a polynomial description of them which enables us to borrow techniques from algebraic surface extraction, a topic well-researched by the computer-aided design (CAD) community as well as the algebraic geometry community. In addition, we adapt the surface extraction technique, called A-patches, to improve the speed of finding degenerate curves. Finally, we apply our analysis to data from solid and fluid mechanics as well as scalar field analysis.

  12. Feature extraction algorithm for space targets based on fractal theory

    Science.gov (United States)

    Tian, Balin; Yuan, Jianping; Yue, Xiaokui; Ning, Xin

    2007-11-01

    In order to offer the potential of extending the life of satellites and reducing launch and operating costs, satellite servicing, including on-orbit repair, upgrading and refueling of spacecraft, is becoming much more frequent. Future space operations can be executed more economically and reliably using machine vision systems, which can meet the real-time and tracking-reliability requirements of image tracking for space surveillance systems. Machine vision has been applied to research on the relative pose of spacecraft, and feature extraction is the basis of relative-pose estimation. In this paper, a fractal-geometry-based edge extraction algorithm is presented that can be used for determining and tracking the relative pose of an observed satellite during proximity operations in a machine vision system. The method computes a fractal-dimension map of the gray-level image using the Differential Box-Counting (DBC) approach of fractal theory to suppress noise. After this, consecutive edges are detected using mathematical morphology. The validity of the proposed method is examined by processing and analyzing images of space targets. The edge extraction method not only extracts the outline of the target but also preserves the inner details. Meanwhile, edge extraction is processed only in the moving area, which greatly reduces computation. Simulation results compare edge detection using the presented method with other detection methods and indicate that the presented algorithm is a valid method for solving the relative-pose problem for spacecraft.
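    A compact sketch of Differential Box-Counting on a gray-level image, the fractal-dimension step mentioned above; the box sizes are illustrative and the per-pixel fractal-dimension map used for noise suppression is reduced here to a single global estimate.

```python
# Differential Box-Counting (DBC) estimate of the fractal dimension of an image.
import numpy as np

def dbc_fractal_dimension(img, box_sizes=(2, 4, 8, 16, 32)):
    img = img.astype(np.float64)
    M = min(img.shape)
    img = img[:M, :M]                       # square crop for simplicity
    G = img.max() + 1.0                     # number of gray levels
    log_nr, log_inv_r = [], []
    for s in box_sizes:
        h = G * s / M                       # box height along the intensity axis
        n_r = 0
        for i in range(0, M - M % s, s):
            for j in range(0, M - M % s, s):
                block = img[i:i + s, j:j + s]
                n_r += int(np.floor(block.max() / h) - np.floor(block.min() / h)) + 1
        log_nr.append(np.log(n_r))
        log_inv_r.append(np.log(M / s))     # log(1/r) with r = s/M
    # Fractal dimension is the slope of log(N_r) against log(1/r).
    D, _ = np.polyfit(log_inv_r, log_nr, 1)
    return D

img = (np.random.rand(128, 128) * 255).astype(np.uint8)   # stand-in gray-level image
print(dbc_fractal_dimension(img))
```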

  13. Novel Viologen Derivative Based Uranyl Coordination Polymers Featuring Photochromic Behaviors.

    Science.gov (United States)

    Hu, Kong-Qiu; Wu, Qun-Yan; Mei, Lei; Zhang, Xiao-Lin; Ma, Lei; Song, Gang; Chen, Di-Yun; Wang, Yi-Tong; Chai, Zhi-Fang; Shi, Wei-Qun

    2017-12-19

    A series of novel uranyl coordination polymers have been synthesized by hydrothermal reactions. Both complexes 1 and 2 possess two ipbp- ligands (H2ipbpCl = 1-(3,5-dicarboxyphenyl)-4,4'-bipyridinium chloride), one uranyl cation, and two coordinated water molecules, and can further extend to 2D networks through hydrogen bonding. In complex 1, two sets of equivalent nets are entangled together, resulting in a 2D + 2D → 3D polycatenated framework. In complex 2, the neighbouring equivalent nets interpenetrate each other, forming a twofold interpenetrated network. Complexes 3 and 4 are isomers, and both are constructed from (UO2)2(OH)2 dinuclear units, which are connected with four ipbp- ligands. The 3D structures of complexes 3 and 4 are similar along the b axis. Similar to other viologen-based coordination polymers, complexes 3 and 4 exhibit photochromic and thermochromic properties, which are rarely observed in actinide coordination polymers. Unlike the monotonous coordination mode in complexes 1-4, the ipbp- ligands feature a μ3-bridge through two kinds of coordination modes in complex 5. Notably, complex 5 presents a unique example in which the terminal pyridine nitrogen atom is involved in coordination. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. A Brain–Computer Interface for Potential Nonverbal Facial Communication Based on EEG Signals Related to Specific Emotions

    Directory of Open Access Journals (Sweden)

    Koji eKashihara

    2014-08-01

    Full Text Available Unlike assistive technology for verbal communication, the brain–machine or brain–computer interface (BMI/BCI has not been established as a nonverbal communication tool for amyotrophic lateral sclerosis (ALS patients. Face-to-face communication enables access to rich emotional information, but individuals suffering from neurological disorders, such as ALS and autism, may not express their emotions or communicate their negative feelings. Although emotions may be inferred by looking at facial expressions, emotional prediction for neutral faces necessitates advanced judgment. The process that underlies brain neuronal responses to neutral faces and causes emotional changes remains unknown. To address this problem, therefore, this study attempted to decode conditioned emotional reactions to neutral face stimuli. This direction was motivated by the assumption that if electroencephalogram (EEG signals can be used to detect patients’ emotional responses to specific inexpressive faces, the results could be incorporated into the design and development of BMI/BCI-based nonverbal communication tools. To these ends, this study investigated how a neutral face associated with a negative emotion modulates rapid central responses in face processing and then identified cortical activities. The conditioned neutral face-triggered event-related potentials that originated from the posterior temporal lobe statistically significantly changed during late face processing (600–700 ms after stimulus, rather than in early face processing activities, such as P1 and N170 responses. Source localization revealed that the conditioned neutral faces increased activity in the right fusiform gyrus. This study also developed an efficient method for detecting implicit negative emotional responses to specific faces by using EEG signals.

  15. Treatment and Prognosis of Facial Palsy on Ramsay Hunt Syndrome: Results Based on a Review of the Literature.

    Science.gov (United States)

    Monsanto, Rafael da Costa; Bittencourt, Aline Gomes; Bobato Neto, Natal José; Beilke, Silvia Carolina Almeida; Lorenzetti, Fabio Tadeu Moura; Salomone, Raquel

    2016-10-01

    Introduction  Ramsay Hunt syndrome is the second most common cause of facial palsy. Early and correct treatment should be provided to avoid complications, such as permanent facial nerve dysfunction. Objective  The objective of this study is to review the prognosis of facial palsy in Ramsay Hunt syndrome, considering the different treatments proposed in the literature. Data Synthesis  We read the abstracts of 78 studies; we selected 31 studies and read them in full, and retained 19 studies for appraisal. Among the 882 selected patients, 621 (70.4%) achieved a House-Brackmann (HB) score of I or II; 68% of the patients treated only with steroids achieved HB I or II, versus 70.5% of those treated with steroids plus antiviral agents. Among patients with complete facial palsy (grades V or VI), 51.4% recovered to grades I or II. The rate of complete recovery varied according to the steroid combined with acyclovir: 81.3% for methylprednisolone, 69.2% for prednisone, 61.4% for prednisolone, and 76.3% for hydrocortisone. Conclusions  Patients with Ramsay Hunt syndrome, when diagnosed and treated early, achieve high rates of complete recovery. The association of steroids and acyclovir is better than steroid monotherapy.

  16. Eagle's syndrome with facial palsy

    Directory of Open Access Journals (Sweden)

    Mohammed Al-Hashim

    2017-01-01

    Full Text Available Eagle's syndrome (ES is a rare disease in which the styloid process is elongated and compresses adjacent structures. We describe a rare presentation of ES in which the patient presented with facial palsy. Facial palsy as a presentation of ES is very rare; a review of the English literature revealed only one previously reported case. Our case is a 39-year-old male who presented with left facial palsy and also reported a 9-year history of the classical symptoms of ES. A computed tomography scan with three-dimensional reconstruction confirmed the diagnosis. He was started on conservative management but without significant improvement. Surgical intervention was offered, but the patient refused. It is important for otolaryngologists, dentists, and other specialists who deal with head and neck problems to be able to recognize ES despite its rarity. Although the patient responded to a treatment similar to that of Bell's palsy, the clinical features and imaging indicate that ES was most likely the cause of his facial palsy.

  17. Pattern Recognition by Dynamic Feature Analysis Based on PCA

    Directory of Open Access Journals (Sweden)

    Juliana Valencia-Aguirre

    2009-06-01

    Full Text Available Usually, in pattern recognition problems, we represent observations by means of measurements on appropriate variables of the data set; these measurements can be categorized as static and dynamic features. Static features are not always an accurate representation of the data, and many phenomena are better modeled by the dynamic changes of their measurements. The advantage of using this extended form (dynamic features is the inclusion of new information that allows a better representation of the object. Nevertheless, it is sometimes difficult to deal with dynamic features in a classification stage, because the associated computational cost can often be higher than for static features. For analyzing such representations, we use Principal Component Analysis (PCA, arranging the dynamic data in such a way that we can account for variations related to the intrinsic dynamics of the observations. The method therefore makes it possible to evaluate the dynamic information of the observations in a lower-dimensional feature space without decreasing accuracy. The algorithms were tested on real data to classify pathological speech versus normal voices, using PCA for dynamic feature selection as well.
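    A minimal sketch of the idea, under the assumption that each observation is a fixed-length trajectory of feature vectors: the trajectories are flattened and PCA projects them to a low-dimensional space before classification. The dimensions and data are synthetic placeholders.

```python
# PCA over flattened dynamic-feature trajectories.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
n_obs, n_frames, n_feats = 120, 40, 12
# Stand-in dynamic features: per-observation trajectories of 12 measures over 40 frames.
trajectories = rng.normal(size=(n_obs, n_frames, n_feats))

X = trajectories.reshape(n_obs, n_frames * n_feats)   # arrange the dynamics as one vector
pca = PCA(n_components=10)
X_low = pca.fit_transform(X)                           # low-dimensional representation

print(X_low.shape, pca.explained_variance_ratio_.sum())
```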

  18. Advances in face detection and facial image analysis

    CERN Document Server

    Celebi, M; Smolka, Bogdan

    2016-01-01

    This book presents the state-of-the-art in face detection and analysis. It outlines new research directions, including in particular psychology-based facial dynamics recognition, aimed at various applications such as behavior analysis, deception detection, and diagnosis of various psychological disorders. Topics of interest include face and facial landmark detection, face recognition, facial expression and emotion analysis, facial dynamics analysis, face classification, identification, and clustering, and gaze direction and head pose estimation, as well as applications of face analysis.

  19. Gender in Facial Representations: A Contrast-Based Study of Adaptation within and between the Sexes

    Science.gov (United States)

    Oruç, Ipek; Guo, Xiaoyue M.; Barton, Jason J. S.

    2011-01-01

    Face aftereffects are proving to be an effective means of examining the properties of face-specific processes in the human visual system. We examined the role of gender in the neural representation of faces using a contrast-based adaptation method. If faces of different genders share the same representational face space, then adaptation to a face of one gender should affect both same- and different-gender faces. Further, if these aftereffects differ in magnitude, this may indicate distinct gender-related factors in the organization of this face space. To control for a potential confound between physical similarity and gender, we used a Bayesian ideal observer and human discrimination data to construct a stimulus set in which pairs of different-gender faces were as dissimilar as same-gender pairs. We found that the recognition of both same-gender and different-gender faces was suppressed following a brief exposure of 100 ms. Moreover, recognition was more suppressed for test faces of a different gender than for those of the same gender as the adaptor, despite the equivalence in physical and psychophysical similarity. Our results suggest that male and female faces likely occupy the same face space, allowing transfer of aftereffects between the genders, but that there are special properties that emerge along gender-defining dimensions of this space. PMID:21267414

  20. Riparian erosion vulnerability model based on environmental features.

    Science.gov (United States)

    Botero-Acosta, Alejandra; Chu, Maria L; Guzman, Jorge A; Starks, Patrick J; Moriasi, Daniel N

    2017-12-01

    Riparian erosion is one of the major causes of sediment and contaminant load to streams, degradation of riparian wildlife habitats, and land loss hazards. Land and soil management practices are implemented as conservation and restoration measures to mitigate the environmental problems brought about by riparian erosion. This, however, requires the identification of areas vulnerable to soil erosion. Because of the complex interactions between the different mechanisms that govern soil erosion and the inherent uncertainties involved in quantifying these processes, assessing erosion vulnerability at the watershed scale is challenging. The main objective of this study was to develop a methodology to identify areas along the riparian zone that are susceptible to erosion. The methodology was developed by integrating the physically-based watershed model MIKE-SHE, to simulate water movement, and a habitat suitability model, MaxEnt, to quantify the probability of presence of elevation changes (i.e., erosion) across the watershed. The presence of elevation changes was estimated based on two LiDAR-based elevation datasets taken in 2009 and 2012. The changes in elevation were grouped into four categories: low (0.5 - 0.7 m), medium (0.7 - 1.0 m), high (1.0 - 1.7 m) and very high (1.7 - 5.9 m), considering each category as a studied "species". The categories' locations were then used as the "species location" map in MaxEnt. The environmental features used as constraints on the presence of erosion were land cover, soil, stream power index, overland flow, lateral inflow, and discharge. The modeling framework was evaluated in the Fort Cobb Reservoir Experimental watershed in south-central Oklahoma. Results showed that the most vulnerable areas for erosion were located at the upper riparian zones of the Cobb and Lake sub-watersheds. The main waterways of these sub-watersheds were also found to be prone to streambank erosion. Approximately 80% of the riparian zone (streambank

  1. Information Theory based Feature Selection for Customer Classification

    OpenAIRE

    Barraza, Néstor Rubén; Moro, Sergio; Ferreyra, Marcelo; de la Peña, Adolfo

    2016-01-01

    The application of Information Theory techniques to customer feature selection is analyzed. This method, usually called information gain, has been demonstrated to be simple and fast for feature selection. The important concept of mutual information, originally introduced to analyze and model a noisy channel, is used in order to measure relations between characteristics of given customers. An application to a bank customers data set of telemarketing calls for selling bank long-term deposits is s...
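    A small sketch of information-gain scoring for discrete customer features; the feature names and synthetic data are placeholders, not the bank telemarketing data set.

```python
# Information gain (mutual information) between discrete features and a class label.
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(feature, labels):
    """Entropy of the label minus its conditional entropy given the feature."""
    h_y = entropy(labels)
    h_y_given_x = 0.0
    for v in np.unique(feature):
        mask = feature == v
        h_y_given_x += mask.mean() * entropy(labels[mask])
    return h_y - h_y_given_x

rng = np.random.default_rng(6)
labels = rng.integers(0, 2, size=1000)                     # e.g., subscribed a deposit or not
age_group = rng.integers(0, 5, size=1000)                  # uninformative placeholder feature
contact = (labels + rng.integers(0, 2, size=1000)) % 3     # partially informative placeholder
for name, feat in [("age_group", age_group), ("contact", contact)]:
    print(name, round(information_gain(feat, labels), 4))
```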

  2. Cascaded face alignment via intimacy definition feature

    Science.gov (United States)

    Li, Hailiang; Lam, Kin-Man; Chiu, Man-Yau; Wu, Kangheng; Lei, Zhibin

    2017-09-01

    Recent years have witnessed the emerging popularity of regression-based face aligners, which directly learn mappings between facial appearance and shape-increment manifolds. We propose a random-forest based, cascaded regression model for face alignment by using a locally lightweight feature, namely intimacy definition feature. This feature is more discriminative than the pose-indexed feature, more efficient than the histogram of oriented gradients feature and the scale-invariant feature transform feature, and more compact than the local binary feature (LBF). Experimental validation of our algorithm shows that our approach achieves state-of-the-art performance when testing on some challenging datasets. Compared with the LBF-based algorithm, our method achieves about twice the speed, 20% improvement in terms of alignment accuracy and saves an order of magnitude on memory requirement.

  3. Feature-Based and String-Based Models for Predicting RNA-Protein Interaction

    Directory of Open Access Journals (Sweden)

    Donald Adjeroh

    2018-03-01

    Full Text Available In this work, we study two approaches for the problem of RNA-Protein Interaction (RPI. In the first approach, we use a feature-based technique by combining extracted features from both sequences and secondary structures. The feature-based approach enhanced the prediction accuracy as it included much more available information about the RNA-protein pairs. In the second approach, we apply search algorithms and data structures to extract effective string patterns for prediction of RPI, using both sequence information (protein and RNA sequences, and structure information (protein and RNA secondary structures. This led to different string-based models for predicting interacting RNA-protein pairs. We show results that demonstrate the effectiveness of the proposed approaches, including comparative results against leading state-of-the-art methods.

  4. Genetic determinants of facial clefting

    DEFF Research Database (Denmark)

    Jugessur, Astanand; Shi, Min; Gjessing, Håkon Kristian

    2009-01-01

    BACKGROUND: Facial clefts are common birth defects with a strong genetic component. To identify fetal genetic risk factors for clefting, 1536 SNPs in 357 candidate genes were genotyped in two population-based samples from Scandinavia (Norway: 562 case-parent and 592 control-parent triads; Denmark...

  5. Unsupervised Feature Learning With Winner-Takes-All Based STDP

    Directory of Open Access Journals (Sweden)

    Paul Ferré

    2018-04-01

    Full Text Available We present a novel strategy for unsupervised feature learning in image applications inspired by the Spike-Timing-Dependent-Plasticity (STDP biological learning rule. We show equivalence between rank order coding Leaky-Integrate-and-Fire neurons and ReLU artificial neurons when applied to non-temporal data. We apply this to images using rank-order coding, which allows us to perform a full network simulation with a single feed-forward pass using GPU hardware. Next we introduce a binary STDP learning rule compatible with training on batches of images. Two mechanisms to stabilize the training are also presented: a Winner-Takes-All (WTA framework which selects the most relevant patches to learn from along the spatial dimensions, and a simple feature-wise normalization as a homeostatic process. This learning process allows us to train multi-layer architectures of convolutional sparse features. We apply our method to extract features from the MNIST, ETH80, CIFAR-10, and STL-10 datasets and show that these features are relevant for classification. We finally compare these results with several other state-of-the-art unsupervised learning methods.

  6. Idiopathic ophthalmodynia and idiopathic rhinalgia: two topographic facial pain syndromes.

    Science.gov (United States)

    Pareja, Juan A; Cuadrado, María L; Porta-Etessam, Jesús; Fernández-de-las-Peñas, César; Gili, Pablo; Caminero, Ana B; Cebrián, José L

    2010-09-01

    To describe 2 topographic facial pain conditions with the pain clearly localized in the eye (idiopathic ophthalmodynia) or in the nose (idiopathic rhinalgia), and to propose their distinction from persistent idiopathic facial pain. Persistent idiopathic facial pain, burning mouth syndrome, atypical odontalgia, and facial arthromyalgia are idiopathic facial pain syndromes that have been separated according to topographical criteria. Still, some other facial pain syndromes might have been veiled under the broad term of persistent idiopathic facial pain. Through a 10-year period we have studied all patients referred to our neurological clinic because of facial pain of unknown etiology that might deviate from all well-characterized facial pain syndromes. In a group of patients we have identified 2 consistent clinical pictures with pain precisely located either in the eye (n=11) or in the nose (n=7). Clinical features resembled those of other localized idiopathic facial syndromes, the key differences relying on the topographic distribution of the pain. Both idiopathic ophthalmodynia and idiopathic rhinalgia seem specific pain syndromes with a distinctive location, and may deserve a nosologic status just as other focal pain syndromes of the face. Whether all such focal syndromes are topographic variants of persistent idiopathic facial pain or independent disorders remains a controversial issue.

  7. Facial emotion recognition and borderline personality pathology.

    Science.gov (United States)

    Meehan, Kevin B; De Panfilis, Chiara; Cain, Nicole M; Antonucci, Camilla; Soliani, Antonio; Clarkin, John F; Sambataro, Fabio

    2017-09-01

    The impact of borderline personality pathology on facial emotion recognition has been in dispute; with impaired, comparable, and enhanced accuracy found in high borderline personality groups. Discrepancies are likely driven by variations in facial emotion recognition tasks across studies (stimuli type/intensity) and heterogeneity in borderline personality pathology. This study evaluates facial emotion recognition for neutral and negative emotions (fear/sadness/disgust/anger) presented at varying intensities. Effortful control was evaluated as a moderator of facial emotion recognition in borderline personality. Non-clinical multicultural undergraduates (n = 132) completed a morphed facial emotion recognition task of neutral and negative emotional expressions across different intensities (100% Neutral; 25%/50%/75% Emotion) and self-reported borderline personality features and effortful control. Greater borderline personality features related to decreased accuracy in detecting neutral faces, but increased accuracy in detecting negative emotion faces, particularly at low-intensity thresholds. This pattern was moderated by effortful control; for individuals with low but not high effortful control, greater borderline personality features related to misattributions of emotion to neutral expressions, and enhanced detection of low-intensity emotional expressions. Individuals with high borderline personality features may therefore exhibit a bias toward detecting negative emotions that are not or barely present; however, good self-regulatory skills may protect against this potential social-cognitive vulnerability. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.

  8. Feature Fusion Based on Convolutional Neural Network for SAR ATR

    Directory of Open Access Journals (Sweden)

    Chen Shi-Qi

    2017-01-01

    Full Text Available Recent breakthroughs in algorithms related to deep convolutional neural networks (DCNN have stimulated the development of various signal processing approaches, among which the specific application of Automatic Target Recognition (ATR using Synthetic Aperture Radar (SAR data has attracted wide attention. Inspired by more efficient distributed training schemes such as the Inception architecture and residual networks, a new feature fusion structure that jointly exploits the merits of each is proposed to reduce the data dimensions and the complexity of computation. The procedure presented in this paper fuses a set of features extracted by the DCNN, making the representation of SAR images more distinguishable, and feeds the fused features to a trainable classifier. In particular, the results obtained on the 10-class benchmark data set demonstrate that the presented architecture achieves remarkable classification performance relative to current state-of-the-art methods.
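    A hedged sketch of the fusion idea (architecture details are ours, not the paper's): features from two convolutional branches are concatenated and passed to a trainable classifier.

```python
# Feature fusion by concatenating two convolutional branches before a classifier.
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        # Two lightweight branches standing in for different DCNN feature extractors.
        self.branch_a = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                      nn.AdaptiveAvgPool2d(1))
        self.branch_b = nn.Sequential(nn.Conv2d(1, 16, 5, padding=2), nn.ReLU(),
                                      nn.AdaptiveAvgPool2d(1))
        self.classifier = nn.Linear(32, n_classes)   # trainable classifier on fused features

    def forward(self, x):
        fa = torch.flatten(self.branch_a(x), 1)
        fb = torch.flatten(self.branch_b(x), 1)
        fused = torch.cat([fa, fb], dim=1)           # feature fusion by concatenation
        return self.classifier(fused)

model = FusionNet()
logits = model(torch.randn(4, 1, 64, 64))            # e.g., a batch of 64x64 SAR chips
print(logits.shape)                                   # torch.Size([4, 10])
```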

  9. Selecting protein families for environmental features based on manifold regularization.

    Science.gov (United States)

    Jiang, Xingpeng; Xu, Weiwei; Park, E K; Li, Guangrong

    2014-06-01

    Recently, statistics and machine learning methods have been developed to identify functional or taxonomic features associated with environmental conditions or physiological status. Proteins (or other functional and taxonomic entities) that are important for particular environmental features can potentially be used as biosensors. A major challenge is understanding how the distribution of protein and gene functions embodies the adaptation of microbial communities across environments and host habitats. In this paper, we propose a novel regularization method for linear regression to address this challenge. The approach is inspired by locally linear embedding (LLE) and we call it manifold-constrained regularization for linear regression (McRe). The novel regularization procedure also has potential to be used in solving other linear systems. We demonstrate the efficiency and the performance of the approach on both simulated and real data.
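    The abstract does not give the McRe formulation, but the general idea of a manifold-constrained regularization for linear regression can be sketched as below: a graph Laplacian built from sample neighbourhoods penalizes predictions that differ sharply between similar samples. The data, neighbourhood size, and penalty weights are all hypothetical.

    ```python
    # Generic manifold-regularised linear regression (illustration only, not McRe itself).
    import numpy as np
    from sklearn.neighbors import kneighbors_graph

    rng = np.random.default_rng(7)
    X = rng.standard_normal((80, 30))                    # hypothetical protein-family abundances
    y = X[:, 0] * 2.0 + rng.standard_normal(80) * 0.1    # hypothetical environmental variable

    # Graph Laplacian L = D - W from a symmetrised k-NN connectivity graph over the samples.
    W = kneighbors_graph(X, n_neighbors=5, mode="connectivity").toarray()
    W = np.maximum(W, W.T)
    L = np.diag(W.sum(axis=1)) - W

    lam, mu = 1.0, 1e-3                                  # manifold and ridge penalties
    # Closed form of: min_w ||y - Xw||^2 + lam * (Xw)^T L (Xw) + mu * ||w||^2
    A = X.T @ X + lam * X.T @ L @ X + mu * np.eye(X.shape[1])
    w = np.linalg.solve(A, X.T @ y)
    print("largest coefficients (candidate biosensor features):", np.argsort(np.abs(w))[::-1][:5])
    ```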

  10. A regression-based Kansei engineering system based on form feature lines for product form design

    Directory of Open Access Journals (Sweden)

    Yan Xiong

    2016-06-01

    Full Text Available When developing new products, it is important for a designer to understand users' perceptions and develop the product form accordingly. In order to establish the mapping between users' perceptions and product design features effectively, in this study we present a regression-based Kansei engineering system based on form feature lines for product form design. First, according to the characteristics of design concept representation, product form feature lines were defined as the product form features. Second, Kansei words were chosen to describe image perceptions toward the product samples. Then, multiple linear regression and support vector regression were used to construct models that predict users' image perceptions. Using mobile phones as experimental samples, Kansei prediction models were established based on the front-view form feature lines of the samples. The experimental results showed that both prediction models had good adaptability, but the prediction performance of the support vector regression model was better, so support vector regression is more suitable for form regression prediction. The results of the case study showed that the proposed method provides an effective means for designers to manipulate product features as a whole, and that it can optimize the Kansei model and improve its practical value.
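    A minimal sketch of the comparison described above, assuming the form feature lines have already been encoded as numeric design variables; the synthetic data and model settings below are placeholders rather than the study's own.

    ```python
    # Compare multiple linear regression and SVR for predicting a Kansei score.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.svm import SVR
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    X = rng.standard_normal((60, 8))                                # hypothetical form feature line encodings
    y = X @ rng.standard_normal(8) + 0.3 * rng.standard_normal(60)  # hypothetical Kansei scores

    for name, model in [("MLR", LinearRegression()),
                        ("SVR", SVR(kernel="rbf", C=10.0))]:
        r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
        print(f"{name}: mean cross-validated R^2 = {r2:.2f}")
    ```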

  11. Mutual information-based feature selection for low-cost BCIs based on motor imagery.

    Science.gov (United States)

    Schiatti, L; Faes, L; Tessadori, J; Barresi, G; Mattos, L

    2016-08-01

    In the present study a feature selection algorithm based on mutual information (MI) was applied to electroencephalographic (EEG) data acquired during three different motor imagery tasks from two datasets: Dataset I from BCI Competition IV, including full-scalp recordings from four subjects, and new data recorded from three subjects using the popular low-cost Emotiv EPOC EEG headset. The aim was to evaluate the optimal channels and band-power (BP) features for motor imagery task discrimination, in order to assess the feasibility of a portable, low-cost, motor imagery based Brain-Computer Interface (BCI) system. The minimal subset of features most relevant to the task description and least redundant to each other was determined, and the corresponding classification accuracy was assessed offline employing a linear support vector machine (SVM) in a 10-fold cross-validation scheme. The analysis was performed: (a) on the original full Dataset I from BCI Competition IV, (b) on a restricted channel set from Dataset I corresponding to the available Emotiv EPOC electrode locations, and (c) on data recorded with the EPOC system. Results from (a) showed that an offline classification accuracy above 80% can be reached using only 5 features. Limiting the analysis to EPOC channels caused a decrease in classification accuracy, although it still remained above chance level for both (b) and (c). A top accuracy of 70% was achieved using 2 optimal features. These results encourage further research towards the development of portable low-cost motor imagery-based BCI systems.
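    A simplified sketch of the pipeline described above, using scikit-learn: band-power features are ranked by mutual information with the task labels, a small subset is kept, and a linear SVM is evaluated with 10-fold cross-validation. The feature matrix here is synthetic, and the redundancy term of the original max-relevance/min-redundancy criterion is omitted.

    ```python
    # Mutual-information-based feature selection followed by SVM cross-validation.
    import numpy as np
    from sklearn.feature_selection import mutual_info_classif
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.standard_normal((120, 40))   # hypothetical band-power features (trials x features)
    y = rng.integers(0, 3, size=120)     # three motor imagery tasks

    # Rank features by relevance to the task labels (redundancy between features ignored here).
    mi = mutual_info_classif(X, y, random_state=0)
    top_k = np.argsort(mi)[::-1][:5]     # keep the 5 most relevant features

    clf = SVC(kernel="linear")
    acc = cross_val_score(clf, X[:, top_k], y, cv=10).mean()
    print(f"10-fold CV accuracy with {len(top_k)} features: {acc:.2f}")
    ```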

  12. Boosting feature selection for Neural Network based regression.

    Science.gov (United States)

    Bailly, Kevin; Milgram, Maurice

    2009-01-01

    The head pose estimation problem is well known to be a challenging task in computer vision and is a useful tool for several applications involving human-computer interaction. This problem can be stated as a regression one where the input is an image and the output is pan and tilt angles. Finding the optimal regression is a hard problem because of the high dimensionality of the input (number of image pixels) and the large variety of morphologies and illumination conditions. We propose a new method combining a boosting strategy for feature selection and a neural network for the regression. Potential features are a very large set of Haar-like wavelets, which are well known to be well adapted to face image processing. To achieve the feature selection, a new Fuzzy Functional Criterion (FFC) is introduced, which is able to evaluate the link between a feature and the output without any estimation of the joint probability density function, as is required for mutual information. The boosting strategy uses this criterion at each step: features are evaluated by the FFC using weights on examples computed from the error produced by the neural network trained at the previous step. Tests are carried out on the commonly used Pointing 04 database and compared with three state-of-the-art methods. We also evaluate the accuracy of the estimation on FacePix, a database with a high angular resolution. Our method compares favorably with a Convolutional Neural Network, which is well known to incorporate feature extraction in its first layers.

  13. The optimal extraction of feature algorithm based on KAZE

    Science.gov (United States)

    Yao, Zheyi; Gu, Guohua; Qian, Weixian; Wang, Pengcheng

    2015-10-01

    KAZE is a novel 2D feature extraction algorithm that operates in a nonlinear scale space. However, the computation of the nonlinear scale space and the construction of KAZE feature vectors are significantly more expensive than for SIFT and SURF. In this paper, the given image is used to build the nonlinear scale space up to a maximum evolution time through efficient Additive Operator Splitting (AOS) techniques and variable conductance diffusion. Adjusting the parameters improves the construction of the nonlinear scale space and simplifies the image conductivities for each scale level, reducing the computation. Points of interest are then detected as maxima of the scale-normalized determinant of the Hessian response in the nonlinear scale space. At the same time, the feature description is optimized with a wavelet transform method, which avoids the second Gaussian smoothing of the original KAZE features and distinctly reduces the complexity of the algorithm in the vector building and description steps. The dominant orientation is obtained, as in SURF, by summing the responses within a sliding circle segment covering an angle of π/3 in a circular area of radius 6σ, with a sampling step of size σ. Finally, extraction over a multidimensional patch at the given scale, centered on the point of interest and rotated to align its dominant orientation to a canonical direction, simplifies the feature description by reducing its dimensionality, as in the PCA-SIFT method. Even though the features are somewhat more expensive to compute than SIFT because of the construction of the nonlinear scale space, the contrast experiments show a step forward in detection, description, and application performance compared with SURF and the previous approaches.
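    For reference, a stock KAZE detector and descriptor are available in OpenCV; the sketch below detects keypoints and computes descriptors over the nonlinear scale space, although it does not include the wavelet-based optimizations proposed in this paper. The image path and parameter values are placeholders.

    ```python
    # Baseline KAZE keypoint detection and description with OpenCV.
    import cv2

    img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)   # hypothetical image path
    if img is None:
        raise FileNotFoundError("input.png not found")

    kaze = cv2.KAZE_create(threshold=0.001, nOctaves=4, nOctaveLayers=4)
    keypoints, descriptors = kaze.detectAndCompute(img, None)
    print(len(keypoints), "keypoints,", descriptors.shape[1], "descriptor dimensions")
    ```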

  14. Object detection based on improved color and scale invariant features

    Science.gov (United States)

    Chen, Mengyang; Men, Aidong; Fan, Peng; Yang, Bo

    2009-10-01

    A novel object detection method which combines color and scale invariant features is presented in this paper. The detection system mainly adopts the widely used framework of SIFT (Scale Invariant Feature Transform), which consists of both a keypoint detector and a descriptor. Although SIFT has some impressive advantages, it is not only computationally expensive but also ill-suited to color images, since it ignores color information. To overcome these drawbacks, we employ local color kernel histograms and Haar wavelet responses to enhance the descriptor's distinctiveness and computational efficiency. Extensive experimental evaluations show that the method has better robustness and lower computation costs.

  15. A Fourier-based textural feature extraction procedure

    Science.gov (United States)

    Stromberg, W. D.; Farr, T. G.

    1986-01-01

    A procedure is presented to discriminate and characterize regions of uniform image texture. The procedure utilizes textural features consisting of pixel-by-pixel estimates of the relative emphases of annular regions of the Fourier transform. The utility and derivation of the features are described through presentation of a theoretical justification of the concept followed by a heuristic extension to a real environment. Two examples are provided that validate the technique on synthetic images and demonstrate its applicability to the discrimination of geologic texture in a radar image of a tropical vegetated area.

  16. Genome-Wide Association Study Reveals Multiple Loci Influencing Normal Human Facial Morphology.

    Directory of Open Access Journals (Sweden)

    John R Shaffer

    2016-08-01

    Full Text Available Numerous lines of evidence point to a genetic basis for facial morphology in humans, yet little is known about how specific genetic variants relate to the phenotypic expression of many common facial features. We conducted genome-wide association meta-analyses of 20 quantitative facial measurements derived from the 3D surface images of 3118 healthy individuals of European ancestry belonging to two US cohorts. Analyses were performed on just under one million genotyped SNPs (Illumina OmniExpress+Exome v1.2 array) imputed to the 1000 Genomes reference panel (Phase 3). We observed genome-wide significant associations (p < 5 × 10⁻⁸) for cranial base width at 14q21.1 and 20q12, intercanthal width at 1p13.3 and Xq13.2, nasal width at 20p11.22, nasal ala length at 14q11.2, and upper facial depth at 11q22.1. Several genes in the associated regions are known to play roles in craniofacial development or in syndromes affecting the face: MAFB, PAX9, MIPOL1, ALX3, HDAC8, and PAX1. We also tested genotype-phenotype associations reported in two previous genome-wide studies and found evidence of replication for nasal ala length and SNPs in CACNA2D3 and PRDM16. These results provide further evidence that common variants in regions harboring genes of known craniofacial function contribute to normal variation in human facial features. Improved understanding of the genes associated with facial morphology in healthy individuals can provide insights into the pathways and mechanisms controlling normal and abnormal facial morphogenesis.

  17. Speech Emotion Feature Selection Method Based on Contribution Analysis Algorithm of Neural Network

    International Nuclear Information System (INIS)

    Wang Xiaojia; Mao Qirong; Zhan Yongzhao

    2008-01-01

    There are many emotion features. If all of these features are employed to recognize emotions, redundant features may exist. Furthermore, the recognition results are unsatisfactory and the cost of feature extraction is high. In this paper, a method to select speech emotion features based on the contribution analysis algorithm of a neural network (NN) is presented. The emotion features are selected from the 95 extracted features using the contribution analysis algorithm of the NN. Cluster analysis is applied to evaluate the effectiveness of the selected features, and the time of feature extraction is measured. Finally, the 24 selected emotion features are used to recognize six speech emotions. The experiments show that this method can improve the recognition rate and reduce the time required for feature extraction.

  18. Facial paresis in patients with mesial temporal sclerosis: clinical and quantitative MRI-based evidence of widespread disease.

    Science.gov (United States)

    Lin, Katia; Carrete, Henrique; Lin, Jaime; de Oliveira, Pedro Alessandro Leite; Caboclo, Luis Otávio Sales Ferreira; Sakamoto, Américo Ceiki; Yacubian, Elza Márcia Targas

    2007-08-01

    To assess the frequency and significance of facial paresis (FP) in a well-defined cohort of mesial temporal lobe epilepsy (MTLE) patients. One hundred consecutive patients with MRI findings consistent with mesial temporal sclerosis (MTS) and concordant electroclinical data underwent facial motor examination at rest, with voluntary expression, and with spontaneous smiling. Hippocampal, amygdaloid, and temporopolar (TP) volumetric measures were acquired. Thirty healthy subjects, matched according to age and sex, were taken as controls. Central-type FP was found in 46 patients. In 41 (89%) of 46, it was visualized at rest, with voluntary and emotional expression characterizing true facial motor paresis. In 33 (72%) of 46 patients, FP was contralateral to the side of MTS. By using a 2-SD cutoff from the mean of normal controls, we found reduction in TP volume ipsilateral to MTS in 61% of patients with FP and in 33% of those without (p = 0.01). Febrile seizures as initial precipitating injury (IPI) were observed in 34% of the patients and were classified as complex in 12 (26%) of 46 of those with FP and in five (9%) of 54 of those without (p = 0.02). The presence of FP was significantly associated with a shorter latent period and younger age at onset of habitual seizures, in particular, with secondarily generalized tonic-clonic seizures. Facial paresis is a reliable lateralizing sign in MTLE and was associated with history of complex febrile seizures as IPI, younger age at onset of disease, and atrophy of temporal pole ipsilateral to MTS, indicating more widespread disease.

  19. Level Sets and Voronoi based Feature Extraction from any Imagery

    DEFF Research Database (Denmark)

    Sharma, O.; Anton, François; Mioc, Darka

    2012-01-01

    imagery, and 2D/3D acoustic images (from hydrographic surveys). The application involving satellite imagery shown in this paper is coastline detection, but the methodology can be easily applied to feature extraction on any kind of imagery. A prototype application that is developed as part of this research...

  20. Microarray-based large scale detection of single feature ...

    Indian Academy of Sciences (India)

    2015-12-08

    Dec 8, 2015 ... Hybridization and data quality. Five genotypes, JKC703, JKC725, JKC737, JKC777 and JKC783, were used in the present study. To assess our microarray intensity data, the raw intensity data of only the PM probes/features of the cotton cultivars were log2 transformed and studied by density plots, and ...

  1. Mechanical design synthesis from sparse feature-based input

    Science.gov (United States)

    Burgett, Steve R.; Bush, Roger T.; Sastry, S. Shankar; Sequin, Carlo H.

    1995-05-01

    We are researching a new paradigm for CAD which aims to support the early stages of mechanical design well enough that designers are motivated to actually use the workstation as a conceptual design tool. At the heart of our approach is shape synthesis, the computer generation of part designs. The need for such automation arises from the fact that any mechanical part is defined by two kinds of geometry: features that are critical to its function (application features), and the material that merely fleshes out the rest of the part (bulk shape). Application features are most often associated with contact surfaces of the part, for example, a bore for a bearing or a mounting surface for a motor. They are the high-level entities in terms of which the designer reasons about the design. Bulk shape must obey certain constraints, such as noninterference with other parts, minimum allowable thickness of the part, etc., but is somewhat arbitrary. We are developing a system wherein the designer inputs the application features, along with topological constraints, degrees of freedom, and boundary volumes, then the bulk shapes of the parts are synthesized automatically. Overall economy is enhanced by reducing the amount of input necessary from the designer, by providing for more complete exploration of the design space, and by enhancing manufacturability and assemblability of the component parts. This paper presents the functional requirements of such a system, and discusses preliminary results.

  2. CLUSTERING-BASED FEATURE LEARNING ON VARIABLE STARS

    International Nuclear Information System (INIS)

    Mackenzie, Cristóbal; Pichara, Karim; Protopapas, Pavlos

    2016-01-01

    The success of automatic classification of variable stars depends strongly on the lightcurve representation. Usually, lightcurves are represented as a vector of many descriptors designed by astronomers called features. These descriptors are expensive in terms of computing, require substantial research effort to develop, and do not guarantee a good classification. Today, lightcurve representation is not entirely automatic; algorithms must be designed and manually tuned up for every survey. The amounts of data that will be generated in the future mean astronomers must develop scalable and automated analysis pipelines. In this work we present a feature learning algorithm designed for variable objects. Our method works by extracting a large number of lightcurve subsequences from a given set, which are then clustered to find common local patterns in the time series. Representatives of these common patterns are then used to transform lightcurves of a labeled set into a new representation that can be used to train a classifier. The proposed algorithm learns the features from both labeled and unlabeled lightcurves, overcoming the bias using only labeled data. We test our method on data sets from the Massive Compact Halo Object survey and the Optical Gravitational Lensing Experiment; the results show that our classification performance is as good as and in some cases better than the performance achieved using traditional statistical features, while the computational cost is significantly lower. With these promising results, we believe that our method constitutes a significant step toward the automation of the lightcurve classification pipeline
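    A compact sketch of the feature-learning idea described above: fixed-length subsequences are sampled from light curves, clustered into a dictionary of local patterns, and each light curve is re-encoded as a histogram over that dictionary. The window sizes, cluster count, and synthetic light curves are illustrative assumptions.

    ```python
    # Unsupervised bag-of-patterns representation for light curves.
    import numpy as np
    from sklearn.cluster import KMeans

    def subsequences(curve, window=20, step=5):
        return np.array([curve[i:i + window]
                         for i in range(0, len(curve) - window + 1, step)])

    rng = np.random.default_rng(2)
    lightcurves = [rng.standard_normal(200) for _ in range(50)]   # hypothetical (resampled) light curves

    # Learn a dictionary of local patterns from all labeled and unlabeled curves.
    all_subs = np.vstack([subsequences(c) for c in lightcurves])
    codebook = KMeans(n_clusters=32, n_init=10, random_state=0).fit(all_subs)

    # New representation: normalised histogram of pattern occurrences per curve.
    def encode(curve):
        labels = codebook.predict(subsequences(curve))
        return np.bincount(labels, minlength=32) / len(labels)

    features = np.array([encode(c) for c in lightcurves])
    print(features.shape)   # (n_curves, 32) feature vectors for a downstream classifier
    ```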

  3. Sequence-based feature prediction and annotation of proteins

    DEFF Research Database (Denmark)

    Juncker, Agnieszka; Jensen, Lars J.; Pierleoni, Andrea

    2009-01-01

    A recent trend in computational methods for annotation of protein function is that many prediction tools are combined in complex workflows and pipelines to facilitate the analysis of feature combinations, for example, the entire repertoire of kinase-binding motifs in the human proteome....

  4. Synthetic triphones from trajectory-based feature distributions

    CSIR Research Space (South Africa)

    Badenhorst, J

    2015-11-01

    Full Text Available level, and these models are then used to create features for unseen or rare triphones. We find that a fairly restricted model (piece-wise linear with three line segments per channel of a diphone transition) is able to represent training data quite...

  5. A Feature Fusion Based Forecasting Model for Financial Time Series

    Science.gov (United States)

    Guo, Zhiqiang; Wang, Huaiqing; Liu, Quan; Yang, Jie

    2014-01-01

    Predicting the stock market has become an increasingly interesting research area for both researchers and investors, and many prediction models have been proposed. In these models, feature selection techniques are used to pre-process the raw data and remove noise. In this paper, a prediction model is constructed to forecast stock market behavior with the aid of independent component analysis, canonical correlation analysis, and a support vector machine. First, two types of features are extracted from the historical closing prices and 39 technical variables by means of independent component analysis. Second, a canonical correlation analysis method is utilized to combine the two types of features and extract intrinsic features to improve the performance of the prediction model. Finally, a support vector machine is applied to forecast the next day's closing price. The proposed model is applied to the Shanghai stock market index and the Dow Jones index, and experimental results show that the proposed model performs better than two other similar models in terms of prediction. PMID:24971455
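    A rough sketch of the described pipeline using scikit-learn: independent component analysis on the technical variables, canonical correlation analysis to fuse the two feature sets, and a support vector regressor for the next-day price. All arrays below are synthetic placeholders, and the exact preprocessing of the original model is not reproduced.

    ```python
    # ICA + CCA feature fusion feeding an SVR price forecaster.
    import numpy as np
    from sklearn.decomposition import FastICA
    from sklearn.cross_decomposition import CCA
    from sklearn.svm import SVR

    rng = np.random.default_rng(3)
    prices = rng.standard_normal((300, 5))        # hypothetical price-derived features
    technical = rng.standard_normal((300, 39))    # hypothetical technical indicators
    target = rng.standard_normal(300)             # next-day closing price (placeholder)

    ics = FastICA(n_components=10, random_state=0).fit_transform(technical)

    # CCA projects both views into a shared space; concatenating the projections
    # gives the fused "intrinsic" feature set used for prediction.
    cca = CCA(n_components=4).fit(prices, ics)
    p_c, i_c = cca.transform(prices, ics)
    fused = np.hstack([p_c, i_c])

    model = SVR(kernel="rbf").fit(fused[:-1], target[:-1])
    print("predicted next closing value:", model.predict(fused[-1:]))
    ```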

  6. Feature-based engineering of compensations in web service environment

    DEFF Research Database (Denmark)

    Schaefer, Michael; Dolog, Peter

    2009-01-01

    In this paper, we introduce a product line approach for developing Web services with extended compensation capabilities. We adopt a feature modelling approach in order to describe variable and common compensation properties of Web service variants, as well as service consumer application...

  7. Comparison of features response in texture-based iris segmentation

    CSIR Research Space (South Africa)

    Bachoo, A

    2009-03-01

    Full Text Available and eyelashes that corrupt the iris region of interest. An accurate segmentation algorithm must localize and remove these noise components. Texture features are considered in this paper for describing iris and non-iris regions. These regions are classified using...

  8. A Feature Subtraction Method for Image Based Kinship Verification under Uncontrolled Environments

    DEFF Research Database (Denmark)

    Duan, Xiaodong; Tan, Zheng-Hua

    2015-01-01

    the feature distance between face image pairs with kinship and maximize the distance between non-kinship pairs. Based on the subtracted feature, the verification is realized through a simple Gaussian based distance comparison method. Experiments on two public databases show that the feature subtraction method...

  9. Pose and Expression Independent Facial Landmark Localization Using Dense-SURF and the Hausdorff Distance.

    Science.gov (United States)

    Sangineto, Enver

    2013-03-01

    We present an approach to automatic localization of facial feature points which deals with pose, expression, and identity variations combining 3D shape models with local image patch classification. The latter is performed by means of densely extracted SURF-like features, which we call DU-SURF, while the former is based on a multiclass version of the Hausdorff distance to address local classification errors and nonvisible points. The final system is able to localize facial points in real-world scenarios, dealing with out of plane head rotations, expression changes, and different lighting conditions. Extensive experimentation with the proposed method has been carried out showing the superiority of our approach with respect to other state-of-the-art systems. Finally, DU-SURF features have been compared with other modern features and we experimentally demonstrate their competitive classification accuracy and computational efficiency.

  10. Facial Recognition in a Group-Living Cichlid Fish

    Science.gov (United States)

    Kohda, Masanori; Jordan, Lyndon Alexander; Hotta, Takashi; Kosaka, Naoya; Karino, Kenji; Tanaka, Hirokazu; Taniyama, Masami; Takeyama, Tomohiro

    2015-01-01

    The theoretical underpinnings of the mechanisms of sociality, e.g. territoriality, hierarchy, and reciprocity, are based on assumptions of individual recognition. While behavioural evidence suggests individual recognition is widespread, the cues that animals use to recognise individuals are established in only a handful of systems. Here, we use digital models to demonstrate that facial features are the visual cue used for individual recognition in the social fish Neolamprologus pulcher. Focal fish were exposed to digital images showing four different combinations of familiar and unfamiliar face and body colorations. Focal fish attended to digital models with unfamiliar faces longer and from a further distance to the model than to models with familiar faces. These results strongly suggest that fish can distinguish individuals accurately using facial colour patterns. Our observations also suggest that fish are able to rapidly (≤ 0.5 sec) discriminate between familiar and unfamiliar individuals, a speed of recognition comparable to primates including humans. PMID:26605789

  11. A novel feature extraction approach for microarray data based on multi-algorithm fusion.

    Science.gov (United States)

    Jiang, Zhu; Xu, Rong

    2015-01-01

    Feature extraction is one of the most important and effective methods for reducing dimensionality in data mining, particularly with the emergence of high-dimensional data such as microarray gene expression data. Feature extraction for gene selection mainly serves two purposes. One is to identify certain disease-related genes. The other is to find a compact set of discriminative genes to build a pattern classifier with reduced complexity and improved generalization capabilities. Depending on the purpose of gene selection, two types of feature extraction algorithms, ranking-based feature extraction and set-based feature extraction, are employed in microarray gene expression data analysis. In ranking-based feature extraction, features are generally evaluated on an individual basis, without considering inter-relationships between features, while set-based feature extraction evaluates features based on their role in a feature set, taking into account dependencies between features. Just as with learning methods, feature extraction has a problem with generalization ability, that is, robustness. However, the issue of robustness is often overlooked in feature extraction. In order to improve the accuracy and robustness of feature extraction for microarray data, a novel approach based on multi-algorithm fusion is proposed. By fusing different types of feature extraction algorithms to select features from the sample set, the proposed approach is able to improve feature extraction performance. The new approach is tested on gene expression datasets including the Colon cancer, CNS, DLBCL, and Leukemia data. The testing results show that the performance of this algorithm is better than that of existing solutions.
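    One simple way to realize multi-algorithm fusion for gene selection is to average the per-gene ranks produced by several selectors, as sketched below; the actual fusion rule of the proposed approach may differ, and the expression matrix here is synthetic.

    ```python
    # Rank-averaging fusion of several feature selection algorithms.
    import numpy as np
    from sklearn.feature_selection import f_classif, mutual_info_classif
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(8)
    X = rng.standard_normal((62, 500))    # hypothetical microarray expression matrix
    y = rng.integers(0, 2, size=62)       # tumour vs normal labels

    scores = [
        f_classif(X, y)[0],                                        # ANOVA F statistic
        mutual_info_classif(X, y, random_state=0),                 # mutual information
        RandomForestClassifier(random_state=0).fit(X, y).feature_importances_,
    ]
    # Fuse by averaging per-algorithm ranks (rank 0 = best under each algorithm).
    ranks = np.mean([np.argsort(np.argsort(-s)) for s in scores], axis=0)
    selected = np.argsort(ranks)[:20]
    print("fused top-20 gene indices:", selected)
    ```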

  12. The face is not an empty canvas: how facial expressions interact with facial appearance.

    Science.gov (United States)

    Hess, Ursula; Adams, Reginald B; Kleck, Robert E

    2009-12-12

    Faces are not simply blank canvases upon which facial expressions write their emotional messages. In fact, facial appearance and facial movement are both important social signalling systems in their own right. We here provide multiple lines of evidence for the notion that the social signals derived from facial appearance on the one hand and facial movement on the other interact in a complex manner, sometimes reinforcing and sometimes contradicting one another. Faces provide information on who a person is. Sex, age, ethnicity, personality and other characteristics that can define a person and the social group the person belongs to can all be derived from the face alone. The present article argues that faces interact with the perception of emotion expressions because this information informs a decoder's expectations regarding an expresser's probable emotional reactions. Facial appearance also interacts more directly with the interpretation of facial movement because some of the features that are used to derive personality or sex information are also features that closely resemble certain emotional expressions, thereby enhancing or diluting the perceived strength of particular expressions.

  13. Feature-Based Analysis of Plasma-Based Particle Acceleration Data

    Energy Technology Data Exchange (ETDEWEB)

    Rubel, Oliver [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Geddes, Cameron G. R. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Chen, Min [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Cormier-Michel, Estelle [Tech-X Corp., Boulder, CO (United States); Bethel, E. Wes [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2014-02-01

    Plasma-based particle accelerators can produce and sustain thousands of times stronger acceleration fields than conventional particle accelerators, providing a potential solution to the problem of the growing size and cost of conventional particle accelerators. To facilitate scientific knowledge discovery from the ever growing collections of accelerator simulation data generated by accelerator physicists to investigate next-generation plasma-based particle accelerator designs, we describe a novel approach for automatic detection and classification of particle beams and beam substructures due to temporal differences in the acceleration process, here called acceleration features. The automatic feature detection in combination with a novel visualization tool for fast, intuitive, query-based exploration of acceleration features enables an effective top-down data exploration process, starting from a high-level, feature-based view down to the level of individual particles. We describe the application of our analysis in practice to analyze simulations of single pulse and dual and triple colliding pulse accelerator designs, and to study the formation and evolution of particle beams, to compare substructures of a beam and to investigate transverse particle loss.

  14. Train axle bearing fault detection using a feature selection scheme based multi-scale morphological filter

    Science.gov (United States)

    Li, Yifan; Liang, Xihui; Lin, Jianhui; Chen, Yuejian; Liu, Jianxin

    2018-02-01

    This paper presents a novel signal processing scheme, feature selection based multi-scale morphological filter (MMF), for train axle bearing fault detection. In this scheme, more than 30 feature indicators of vibration signals are calculated for axle bearings with different conditions and the features which can reflect fault characteristics more effectively and representatively are selected using the max-relevance and min-redundancy principle. Then, a filtering scale selection approach for MMF based on feature selection and grey relational analysis is proposed. The feature selection based MMF method is tested on diagnosis of artificially created damages of rolling bearings of railway trains. Experimental results show that the proposed method has a superior performance in extracting fault features of defective train axle bearings. In addition, comparisons are performed with the kurtosis criterion based MMF and the spectral kurtosis criterion based MMF. The proposed feature selection based MMF method outperforms these two methods in detection of train axle bearing faults.

  15. Brain Computed Tomography Compared with Facial 3-Dimensional Computed Tomography for Diagnosis of Facial Fractures.

    Science.gov (United States)

    Lee, Sun Hwa; Yun, Seong Jong; Ryu, Seokyong; Choi, Seoung Won; Kim, Hye Jin; Kang, Tae Kyug; Oh, Sung Chan; Cho, Suk Jin

    2017-05-01

    To compare the detection of facial fractures and radiation dose between brain computed tomography (CT) and facial 3-dimensional (3D) CT in pediatric patients who have experienced a trauma. Four hundred pediatric patients who experienced a trauma and underwent immediate brain CT and facial 3D CT between January 2016 and June 2016 were included in this retrospective study. Two reviewers independently analyzed and determined the presence of facial fractures in 8 anatomic regions based on brain CT and facial 3D CT over a 1-week interval. Suggested treatment decisions for facial fractures seen on brain CT and facial 3D CT were evaluated by one physician. The facial 3D CT scans, interpreted by a senior radiologist, were considered the reference standard. Diagnostic performance, radiation dose, and interobserver agreement of the CT scans were evaluated. Brain CT showed high sensitivity (94.1%-96.5%), high specificity (99.7%-100%), and high accuracy (98.8%-99.0%) for both reviewers, and performed as well as facial 3D CT (P ≥ .25). The suggested treatment decision did not differ between the brain CT and facial 3D CT findings. The agreements between the reference standard and the reviewers, and between reviewers 1 and 2, were excellent (k = 0.946-0.993). The mean effective radiation dose used in brain CT (3.6 mSv) was significantly lower than that in brain CT with facial 3D CT (5.5 mSv). Brain CT showed acceptable diagnostic performance and can be used as the first-line imaging tool in the workup of pediatric patients with suspected facial fractures. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Facial Transplantation Surgery Introduction

    OpenAIRE

    Eun, Seok-Chan

    2015-01-01

    Severely disfiguring facial injuries can have a devastating impact on the patient's quality of life. During the past decade, vascularized facial allotransplantation has progressed from an experimental possibility to a clinical reality in the fields of disease, trauma, and congenital malformations. This technique may now be considered a viable option for repairing complex craniofacial defects for which the results of autologous reconstruction remain suboptimal. Vascularized facial allotranspla...

  17. Shape based automated detection of pulmonary nodules with surface feature based false positive reduction

    International Nuclear Information System (INIS)

    Nomura, Y.; Itoh, H.; Masutani, Y.; Ohtomo, K.; Maeda, E.; Yoshikawa, T.; Hayashi, N.

    2007-01-01

    We propose a shape-based automated detection of pulmonary nodules with surface-feature-based false positive (FP) reduction. In the proposed system, FPs located inside vessel bifurcations are removed using the extracted surfaces of vessels and nodules. In a validation with 16 chest CT scans, we find that the proposed CAD system achieves 18.7 FPs/scan at 90% sensitivity, and 7.8 FPs/scan at 80% sensitivity. (orig.)

  18. Gamelan Music Onset Detection based on Spectral Features

    Directory of Open Access Journals (Sweden)

    Yoyon Kusnendar Suprapto

    2013-03-01

    Full Text Available This research detects onsets of percussive instruments by examining performance on the sound signals of gamelan instruments, one of the traditional music instrument families of Indonesia. Onsets play an important role in determining musical rhythmic structure, such as beat and tempo, and are highly required in many applications of music information retrieval. Four onset detection methods that employ spectral features, such as magnitude, phase, and the combination of both, are compared: phase slope (PS), weighted phase deviation (WPD), spectral flux (SF), and rectified complex domain (RCD). These features are extracted by representing the sound signals in the time-frequency domain using an overlapped short-time Fourier transform (STFT) with varying window lengths. Onset detection functions are processed through peak-picking using a dynamic threshold. The results showed that, with a suitable window length and dynamic threshold parameter setting, an F-measure greater than 0.80 can be obtained for certain methods.
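    A compact numpy sketch of one of the four compared detection functions, spectral flux with a dynamic (local-median) threshold; the window length, hop size, and threshold constants are illustrative rather than the values tuned in the study.

    ```python
    # Spectral-flux onset detection with a simple dynamic threshold.
    import numpy as np

    def spectral_flux_onsets(signal, sr, win=1024, hop=256, delta=1.5):
        window = np.hanning(win)
        frames = [signal[i:i + win] * window
                  for i in range(0, len(signal) - win, hop)]
        mags = np.abs(np.fft.rfft(frames, axis=1))
        # Half-wave rectified frame-to-frame increase of magnitude (spectral flux).
        flux = np.maximum(np.diff(mags, axis=0), 0.0).sum(axis=1)
        # Dynamic threshold: local median scaled by a constant.
        kernel = 16
        thresh = np.array([delta * np.median(flux[max(0, i - kernel):i + kernel])
                           for i in range(len(flux))])
        peaks = [i for i in range(1, len(flux) - 1)
                 if flux[i] > thresh[i] and flux[i] >= flux[i - 1] and flux[i] >= flux[i + 1]]
        return np.array(peaks) * hop / sr          # onset times in seconds

    sr = 22050
    t = np.arange(sr * 2) / sr
    toy = np.sin(2 * np.pi * 440 * t) * (t % 0.5 < 0.05)   # toy percussive bursts every 0.5 s
    print(spectral_flux_onsets(toy, sr))
    ```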

  19. Selection of a green manufacturing process based on CAD features

    OpenAIRE

    Gaha, Raoudha; Yannou, Bernard; Benamara, Abdelmajid

    2016-01-01

    International audience; Environmentally conscious manufacturing process (ECMP) has become an obligation to the environment and to the society itself, enforced primarily by governmental regulations and customer perspective on environmental issues. ECMP involves integrating environmental thinking into new product development. This is especially true in the computer-aided design (CAD) phase which is the last phase in the design process. At this stage, more than 80 % of choices are done. Feature ...

  20. Relational kernel-based grasping with numerical features

    OpenAIRE

    Antanas, Laura; Moreno, Plinio; De Raedt, Luc

    2015-01-01

    Object grasping is a key task in robot manipulation. Performing a grasp largely depends on the object properties and grasp constraints. This paper proposes a new statistical relational learning approach to recognize graspable points in object point clouds. We characterize each point with numerical shape features and represent each cloud as a (hyper-) graph by considering qualitative spatial relations between neighboring points. Further, we use kernels on graphs to exploit extended contextual ...

  1. GA Based Optimal Feature Extraction Method for Functional Data Classification

    OpenAIRE

    Jun Wan; Zehua Chen; Yingwu Chen; Zhidong Bai

    2010-01-01

    Classification is an interesting problem in functional data analysis (FDA), because many science and application problems end up as classification problems, such as recognition, prediction, control, decision making, management, etc. Owing to the high dimensionality and high correlation of functional data (FD), it is a key problem to extract features from FD while keeping its global characteristics, which strongly affects classification efficiency and precision. In this paper...

  2. Predicting couple therapy outcomes based on speech acoustic features.

    Directory of Open Access Journals (Sweden)

    Md Nasir

    Full Text Available Automated assessment and prediction of marital outcome in couples therapy is a challenging task but promises to be a potentially useful tool for clinical psychologists. Computational approaches for inferring therapy outcomes using observable behavioral information obtained from conversations between spouses offer objective means for understanding relationship dynamics. In this work, we explore whether the acoustics of the spoken interactions of clinically distressed spouses provide information towards assessment of therapy outcomes. The therapy outcome prediction task in this work includes detecting whether there was a relationship improvement or not (posed as a binary classification as well as discerning varying levels of improvement or decline in the relationship status (posed as a multiclass recognition task. We use each interlocutor's acoustic speech signal characteristics such as vocal intonation and intensity, both independently and in relation to one another, as cues for predicting the therapy outcome. We also compare prediction performance with one obtained via standardized behavioral codes characterizing the relationship dynamics provided by human experts as features for automated classification. Our experiments, using data from a longitudinal clinical study of couples in distressed relations, showed that predictions of relationship outcomes obtained directly from vocal acoustics are comparable or superior to those obtained using human-rated behavioral codes as prediction features. In addition, combining direct signal-derived features with manually coded behavioral features improved the prediction performance in most cases, indicating the complementarity of relevant information captured by humans and machine algorithms. Additionally, considering the vocal properties of the interlocutors in relation to one another, rather than in isolation, showed to be important for improving the automatic prediction. This finding supports the notion

  3. Cirrhosis Classification Based on Texture Classification of Random Features

    Directory of Open Access Journals (Sweden)

    Hui Liu

    2014-01-01

    Full Text Available Accurate staging of hepatic cirrhosis is important in investigating its cause and slowing down its effects. Computer-aided diagnosis (CAD) can provide doctors with an alternative second opinion and help them choose a specific treatment based on an accurate cirrhosis stage. MRI has many advantages, including high resolution for soft tissue, no radiation, and multiparameter imaging modalities. So, in this paper, multisequence MRIs, including T1-weighted, T2-weighted, arterial, portal venous, and equilibrium phase images, are applied. However, CAD does not yet meet the clinical needs of cirrhosis assessment, and few researchers are currently concerned with it. Cirrhosis is characterized by the presence of widespread fibrosis and regenerative nodules in the liver, leading to different texture patterns at different stages. Extracting texture features is therefore the primary task. Compared with typical gray-level co-occurrence matrix (GLCM) features, texture classification from random features provides an effective alternative; we adopt it and propose CCTCRF for three-class classification (normal, early, and middle-and-advanced stages). CCTCRF does not need strong assumptions beyond the sparse character of the image, contains sufficient texture information, involves a concise and effective process, and makes case decisions with high accuracy. Experimental results illustrate its satisfying performance, and it is also compared with a typical neural network using GLCM features.
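    For context, the baseline GLCM texture descriptors that the proposed random-feature approach is compared against can be computed with scikit-image as sketched below; the liver region of interest is assumed to have already been extracted from the multi-sequence MR images.

    ```python
    # Baseline GLCM texture descriptors for a liver region of interest.
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    rng = np.random.default_rng(4)
    roi = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # hypothetical liver ROI

    glcm = graycomatrix(roi, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    features = np.hstack([graycoprops(glcm, prop).ravel()
                          for prop in ("contrast", "homogeneity", "energy", "correlation")])
    print(features.shape)   # one texture feature vector per ROI, fed to the classifier
    ```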

  4. Research based on the SoPC platform of feature-based image registration

    Science.gov (United States)

    Shi, Yue-dong; Wang, Zhi-hui

    2015-12-01

    This paper focuses on implementing feature-based image registration on a System on a Programmable Chip (SoPC) hardware platform. We implement the image registration algorithm on an FPGA chip, in which the embedded soft-core processor Nios II speeds up the image processing system. In this way, image registration technology no longer depends on a PC and, consequently, can be used far more widely. The experimental results indicate that our system shows stable performance, particularly in the matching stage, where noise immunity is good and the feature points of the images show a reasonable distribution.

  5. [Facial tics and spasms].

    Science.gov (United States)

    Potgieser, Adriaan R E; van Dijk, J Marc C; Elting, Jan Willem J; de Koning-Tijssen, Marina A J

    2014-01-01

    Facial tics and spasms are socially incapacitating, but effective treatment is often available. The clinical picture is sufficient for distinguishing between the different diseases that cause this affliction. We describe three cases of patients with facial tics or spasms: one case of tics, which are familiar to many physicians; one case of blepharospasm; and one case of hemifacial spasm. We discuss the differential diagnosis and the treatment possibilities for facial tics and spasms. Early diagnosis and treatment are important because of the associated social incapacitation. Botulinum toxin should be considered as a treatment option for facial tics, and a curative neurosurgical intervention should be considered for hemifacial spasm.

  6. SVM Classifiers: The Objects Identification on the Base of Their Hyperspectral Features

    Directory of Open Access Journals (Sweden)

    Demidova Liliya

    2017-01-01

    Full Text Available The problem of identifying objects on the basis of their hyperspectral features is considered. We propose using SVM classifiers based on a modified PSO algorithm, adapted to the specifics of this identification problem. The results of identifying objects on the basis of their hyperspectral features using these SVM classifiers are presented.

  7. Binary pattern analysis for 3D facial action unit detection

    NARCIS (Netherlands)

    Sandbach, Georgia; Zafeiriou, Stefanos; Pantic, Maja

    2012-01-01

    In this paper we propose new binary pattern features for use in the problem of 3D facial action unit (AU) detection. Two representations of 3D facial geometries are employed, the depth map and the Azimuthal Projection Distance Image (APDI). To these the traditional Local Binary Pattern is applied,

  8. Oro-facial carcinoma in kaduna | Adeola | Nigerian Journal of ...

    African Journals Online (AJOL)

    Patients and Methods: A 5-year retrospective study of 211 patients with oro-facial cancers in the maxillo-facial unit of Ahmadu Bello University, Kaduna, was carried out. The demographic pattern, clinical features, histopathological findings and treatment modalities, as obtained from the patients' folders, were studied.

  9. Automatic brain MR image denoising based on texture feature-based artificial neural networks.

    Science.gov (United States)

    Chang, Yu-Ning; Chang, Herng-Hua

    2015-01-01

    Noise is one of the main sources of quality deterioration not only for visual inspection but also in computerized processing in brain magnetic resonance (MR) image analysis such as tissue classification, segmentation and registration. Accordingly, noise removal in brain MR images is important for a wide variety of subsequent processing applications. However, most existing denoising algorithms require laborious tuning of parameters that are often sensitive to specific image features and textures. Automation of these parameters through artificial intelligence techniques will be highly beneficial. In the present study, an artificial neural network associated with image texture feature analysis is proposed to establish a predictable parameter model and automate the denoising procedure. In the proposed approach, a total of 83 image attributes were extracted based on four categories: 1) Basic image statistics. 2) Gray-level co-occurrence matrix (GLCM). 3) Gray-level run-length matrix (GLRLM) and 4) Tamura texture features. To obtain the ranking of discrimination in these texture features, a paired-samples t-test was applied to each individual image feature computed in every image. Subsequently, the sequential forward selection (SFS) method was used to select the best texture features according to the ranking of discrimination. The selected optimal features were further incorporated into a back propagation neural network to establish a predictable parameter model. A wide variety of MR images with various scenarios were adopted to evaluate the performance of the proposed framework. Experimental results indicated that this new automation system accurately predicted the bilateral filtering parameters and effectively removed the noise in a number of MR images. Comparing to the manually tuned filtering process, our approach not only produced better denoised results but also saved significant processing time.

  10. Quantitative facial asymmetry: using three-dimensional photogrammetry to measure baseline facial surface symmetry.

    Science.gov (United States)

    Taylor, Helena O; Morrison, Clinton S; Linden, Olivia; Phillips, Benjamin; Chang, Johnny; Byrne, Margaret E; Sullivan, Stephen R; Forrest, Christopher R

    2014-01-01

    subjectively, can be easily and reproducibly measured using three-dimensional photogrammetry. The RMSD for facial asymmetry of healthy volunteers clusters at approximately 0.80 ± 0.24 mm. Patients with facial asymmetry due to a pathologic process can be differentiated from normative facial asymmetry based on their RMSDs.

  11. Conditional Mutual Information Based Feature Selection for Classification Task

    Czech Academy of Sciences Publication Activity Database

    Novovičová, Jana; Somol, Petr; Haindl, Michal; Pudil, Pavel

    2007-01-01

    Vol. 45, No. 4756 (2007), pp. 417-426 ISSN 0302-9743 R&D Projects: GA MŠk 1M0572; GA AV ČR IAA2075302 EU Projects: European Commission(XE) 507752 - MUSCLE Grant - others: GA MŠk(CZ) 2C06019 Institutional research plan: CEZ:AV0Z10750506 Keywords: pattern classification * feature selection * conditional mutual information * text categorization Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.402, year: 2005

  12. Corner Feature Extraction: Techniques for Landmark Based Navigation Systems

    OpenAIRE

    Namoshe, Molaletsa; Matsebe, Oudetse; Tlale, Nkgatho

    2010-01-01

    In this paper we discussed the results of an EKF SLAM using real data logged and computed offline. One of the most important parts of the SLAM process is to accurately map the environment the robot is exploring and to localize in it. Achieving this, however, depends on the precise acquisition of features extracted from the external sensor. We looked at corner detection methods and we proposed an improved version of the method discussed in section 2.1.1. It transpired that methods found in th...

  13. Three-Class Mammogram Classification Based on Descriptive CNN Features

    Directory of Open Access Journals (Sweden)

    M. Mohsin Jadoon

    2017-01-01

    Full Text Available In this paper, a novel classification technique for large data set of mammograms using a deep learning method is proposed. The proposed model targets a three-class classification study (normal, malignant, and benign cases). In our model we have presented two methods, namely, convolutional neural network-discrete wavelet (CNN-DW) and convolutional neural network-curvelet transform (CNN-CT). An augmented data set is generated by using mammogram patches. To enhance the contrast of mammogram images, the data set is filtered by contrast limited adaptive histogram equalization (CLAHE). In the CNN-DW method, enhanced mammogram images are decomposed as its four subbands by means of two-dimensional discrete wavelet transform (2D-DWT), while in the second method discrete curvelet transform (DCT) is used. In both methods, dense scale invariant feature (DSIFT) for all subbands is extracted. Input data matrix containing these subband features of all the mammogram patches is created that is processed as input to convolutional neural network (CNN). Softmax layer and support vector machine (SVM) layer are used to train CNN for classification. Proposed methods have been compared with existing methods in terms of accuracy rate, error rate, and various validation assessment measures. CNN-DW and CNN-CT have achieved accuracy rate of 81.83% and 83.74%, respectively. Simulation results clearly validate the significance and impact of our proposed model as compared to other well-known existing techniques.

  14. Three-Class Mammogram Classification Based on Descriptive CNN Features.

    Science.gov (United States)

    Jadoon, M Mohsin; Zhang, Qianni; Haq, Ihsan Ul; Butt, Sharjeel; Jadoon, Adeel

    2017-01-01

    In this paper, a novel classification technique for large data set of mammograms using a deep learning method is proposed. The proposed model targets a three-class classification study (normal, malignant, and benign cases). In our model we have presented two methods, namely, convolutional neural network-discrete wavelet (CNN-DW) and convolutional neural network-curvelet transform (CNN-CT). An augmented data set is generated by using mammogram patches. To enhance the contrast of mammogram images, the data set is filtered by contrast limited adaptive histogram equalization (CLAHE). In the CNN-DW method, enhanced mammogram images are decomposed as its four subbands by means of two-dimensional discrete wavelet transform (2D-DWT), while in the second method discrete curvelet transform (DCT) is used. In both methods, dense scale invariant feature (DSIFT) for all subbands is extracted. Input data matrix containing these subband features of all the mammogram patches is created that is processed as input to convolutional neural network (CNN). Softmax layer and support vector machine (SVM) layer are used to train CNN for classification. Proposed methods have been compared with existing methods in terms of accuracy rate, error rate, and various validation assessment measures. CNN-DW and CNN-CT have achieved accuracy rate of 81.83% and 83.74%, respectively. Simulation results clearly validate the significance and impact of our proposed model as compared to other well-known existing techniques.

  15. Part-based Pedestrian Detection and Feature-based Tracking for Driver Assistance

    DEFF Research Database (Denmark)

    Prioletti, Antonio; Møgelmose, Andreas; Grislieri, Paolo

    2013-01-01

    gained a special place among the different approaches presented. This paper presents a state-of-the-art pedestrian detection system based on a two-stage classifier. Candidates are extracted with a Haar cascade classifier trained with the Daimler Detection Benchmark data set and then validated through...... on a prototype vehicle and offers high performance in terms of several metrics, such as detection rate, false positives per hour, and frame rate. The novelty of this system relies on the combination of a HOG part-based approach, tracking based on a specific optimized feature, and porting on a real prototype....

  16. Wavelet-based segmentation and feature extraction of heart sounds for intelligent PDA-based phonocardiography.

    Science.gov (United States)

    Nazeran, H

    2007-01-01

    Many pathological conditions of the cardiovascular system cause murmurs and aberrations in heart sounds. Phonocardiography provides the clinician with a complementary tool to record the heart sounds heard during auscultation. The advancement of intracardiac phonocardiography combined with modern digital signal processing techniques has strongly renewed researchers' interest in studying heart sounds and murmurs. The aim of this work is to investigate the applicability of different spectral analysis methods to heart sound signals and explore their suitability for PDA-based implementation. Fourier transform (FT), short-time Fourier transform (STFT) and wavelet transform (WT) are used to perform spectral analysis on heart sounds. A segmentation algorithm based on Shannon energy is used to differentiate between first and second heart sounds. Then wavelet transform is deployed again to extract 64 features of heart sounds. The FT provides valuable frequency information but the timing information is lost during the transformation process. The STFT or spectrogram provides valuable time-frequency information but there is a trade-off between time and frequency resolution. Wavelet analysis, however, does not suffer from limitations of the STFT and provides adequate time and frequency resolution to accurately characterize the normal and pathological heart sounds. The results show that the wavelet-based segmentation algorithm is quite effective in localizing the important components of both normal and abnormal heart sounds. They also demonstrate that wavelet-based feature extraction provides suitable feature vectors which are clearly differentiable and useful for automatic classification of heart sounds.
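    A minimal numpy sketch of the Shannon-energy envelope used here to locate the first and second heart sounds before wavelet feature extraction; the frame length, hop size, and detection threshold are illustrative, and the phonocardiogram below is synthetic.

    ```python
    # Shannon-energy envelope for segmenting S1/S2 heart sounds.
    import numpy as np

    def shannon_energy_envelope(pcg, frame=400, hop=100):
        x = pcg / (np.max(np.abs(pcg)) + 1e-12)                  # normalise amplitude
        env = []
        for i in range(0, len(x) - frame, hop):
            seg = x[i:i + frame]
            se = -np.mean(seg ** 2 * np.log(seg ** 2 + 1e-12))   # average Shannon energy of the frame
            env.append(se)
        env = np.asarray(env)
        return (env - env.mean()) / (env.std() + 1e-12)          # zero-mean, unit-variance envelope

    # Candidate S1/S2 locations: frames whose envelope exceeds a fixed threshold.
    rng = np.random.default_rng(5)
    pcg = rng.standard_normal(8000) * 0.05
    pcg[1000:1400] += np.sin(np.linspace(0, 30 * np.pi, 400))    # synthetic "lub" burst
    envelope = shannon_energy_envelope(pcg)
    print(np.where(envelope > 1.0)[0])   # frame indices of candidate heart sounds
    ```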

  17. Facial talon cusps.

    LENUS (Irish Health Repository)

    McNamara, T

    1997-12-01

    This is a report of two patients with isolated facial talon cusps. One occurred on a permanent mandibular central incisor; the other on a permanent maxillary canine. The locations of these talon cusps suggest that the definition of a talon cusp should include teeth in addition to the incisor group and should be extended to include the facial aspect of teeth.

  18. A facial marker in facial wasting rehabilitation.

    Science.gov (United States)

    Rauso, Raffaele; Tartaro, Gianpaolo; Freda, Nicola; Rusciani, Antonio; Curinga, Giuseppe

    2012-02-01

    Facial lipoatrophy is one of the most distressing manifestations for HIV patients. It can be stigmatizing, severely affecting quality of life and self-esteem, and it may result in reduced antiretroviral adherence. Several filling techniques have been proposed for facial wasting restoration, with different outcomes. The aim of this study is to present a triangular area that is useful to fill in facial wasting rehabilitation. Twenty-eight HIV patients rehabilitated for facial wasting were enrolled in this study. Sixteen were rehabilitated with a non-resorbable filler and twelve with structural fat graft harvested from lipohypertrophied areas. A photographic pre-operative and post-operative evaluation was performed by the patients and by two plastic surgeons who were "blinded." The filled area, in both the patients rehabilitated with structural fat grafts and those treated with a non-resorbable filler, was a triangular area of depression identified between the nasolabial fold, the malar arch, and the line that connects these two anatomical landmarks. The cosmetic result was evaluated three months after the last filling procedure in the non-resorbable filler group and three months post-surgery in the structural fat graft group. The mean patient satisfaction score was 8.7 as assessed with a visual analogue scale. The mean score for blinded evaluators was 7.6. In this study the authors describe a triangular area of the face, between the nasolabial fold, the malar arch, and the line that connects these two anatomical landmarks, where a good aesthetic facial restoration in HIV patients with facial wasting may be achieved regardless of which filling technique is used.

  19. Uniform Local Binary Pattern Based Texture-Edge Feature for 3D Human Behavior Recognition

    OpenAIRE

    Ming, Yue; Wang, Guangchao; Fan, Chunxiao

    2015-01-01

    With the rapid development of 3D somatosensory technology, human behavior recognition has become an important research field. Human behavior feature analysis has evolved from traditional 2D features to 3D features. In order to improve the performance of human activity recognition, a human behavior recognition method is proposed, which is based on hybrid texture-edge local pattern coding feature extraction and the integration of RGB and depth video information. The paper mainly focuses on backg...
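    For readers unfamiliar with the texture descriptor named in the title, the following sketch computes a uniform local binary pattern histogram for a single gray (or depth) frame using scikit-image. It illustrates only the LBP building block, not the paper's hybrid texture-edge coding or its RGB-D fusion; the image size and LBP parameters are assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def uniform_lbp_histogram(gray_image, p=8, r=1):
    """Uniform LBP codes pooled into a normalized histogram (a texture descriptor)."""
    codes = local_binary_pattern(gray_image, P=p, R=r, method="uniform")
    n_bins = p + 2                              # uniform patterns plus one "non-uniform" bin
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = (rng.random((64, 64)) * 255).astype(np.uint8)   # stand-in for one frame
    print(uniform_lbp_histogram(frame))
```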

  20. Efficient Divide-And-Conquer Classification Based on Feature-Space Decomposition

    OpenAIRE

    Guo, Qi; Chen, Bo-Wei; Jiang, Feng; Ji, Xiangyang; Kung, Sun-Yuan

    2015-01-01

    This study presents a divide-and-conquer (DC) approach based on feature space decomposition for classification. When large-scale datasets are present, typical approaches usually employ truncated kernel methods on the feature space or DC approaches on the sample space. However, this does not guarantee separability between classes, owing to overfitting. To overcome such problems, this work proposes a novel DC approach on feature spaces consisting of three steps. Firstly, we divide the feature ...
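    The following toy sketch illustrates the general idea of divide-and-conquer on the feature space: the feature dimensions are split into blocks, one linear classifier is trained per block, and their decision scores are averaged. This is an assumption-laden simplification for illustration, not the three-step method proposed in the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def fit_feature_blocks(X, y, n_blocks=3):
    """Train one linear classifier per block of feature dimensions."""
    blocks = np.array_split(np.arange(X.shape[1]), n_blocks)
    models = [LogisticRegression(max_iter=1000).fit(X[:, b], y) for b in blocks]
    return blocks, models

def predict_feature_blocks(X, blocks, models):
    """Combine per-block decision scores by simple averaging."""
    scores = np.mean([m.decision_function(X[:, b]) for b, m in zip(blocks, models)], axis=0)
    return (scores > 0).astype(int)

if __name__ == "__main__":
    X, y = make_classification(n_samples=400, n_features=30, random_state=0)
    blocks, models = fit_feature_blocks(X, y)
    print("training accuracy:", (predict_feature_blocks(X, blocks, models) == y).mean())
```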

  1. Learner features in a New Corpus-based Swahili dictionary ...

    African Journals Online (AJOL)

    As far as traditionally published Swahili language dictionaries are concerned, throughout the long history of Swahili lexicography, most new dictionaries were based on their predecessors. Thus far the only innovative traditionally printed corpus-based dictionary has been published by Finnish scholars (Abdulla et al. 2002).

  2. Identification of Malassezia species in the facial lesions of Chinese seborrhoeic dermatitis patients based on DNA sequencing.

    Science.gov (United States)

    Lian, C-h; Shen, L-l; Gao, Q-y; Jiang, M; Zhao, Z-j; Zhao, J-j

    2014-12-01

    The genus Malassezia is important in the aetiology of facial seborrhoeic dermatitis (FSD), which is the most common clinical type. The purpose of this study was to analyse the distribution of Malassezia species in the facial lesions of Chinese seborrhoeic dermatitis (SD) patients and healthy individuals. Sixty-four isolates of Malassezia were isolated from FSD patients and 60 isolates from healthy individuals. Sequence analysis of the internal transcribed spacer (ITS) region was used to identify the isolates. The most frequently identified Malassezia species associated with FSD was M. furfur (76.56%), followed by M. sympodialis (12.50%) and M. japonica (9.38%). The most frequently isolated species in healthy individuals were M. furfur (61.67%), followed by M. sympodialis (25.00%), M. japonica (6.67%), M. globosa (3.33%), and M. obtusa (3.33%). Overall, our study revealed that while M. furfur is the predominant Malassezia species in Chinese SD patients, there is no significant difference in the distribution of Malassezia species between Chinese SD patients and healthy individuals. © 2014 Blackwell Verlag GmbH.

  3. [Improvement of rosacea treatment based on the morphological and functional features of the skin].

    Science.gov (United States)

    Tsiskarishvili, N V; Katsitadze, A G; Tsiskarishvili, Ts I

    2013-10-01

    Rosacea is a widespread disease, sometimes occurring with severe complications, that mainly affects the skin. Irrational and inadequate treatment leads to chronicity of the disease and psychosocial disadaptation of patients. Lately, there has been a clear upward trend in the number of patients in whom manifestations of impaired skin barrier function (of varying degrees of severity) are observed in the course of complex treatment; these patients need protection and restoration of the damaged stratum corneum. To study the function of the horny layer of the facial skin in patients with rosacea, we used the BIA skin analyzer (a bioimpedance analysis device that determines the moisture content, oiliness and softness of the skin within 6 seconds), and significant deviations from the norm (decreased moisture content and oiliness, increased roughness) were revealed. These changes were most pronounced in patients with steroid rosacea. To restore the skin barrier, the drug "Episofit A" (Laboratory of Evolutionary Dermatology, France) was used (1-2 times a day for 6 weeks). Treatment efficacy was evaluated every 2 weeks by means of a scale from 0 to 5 for dryness, erythema, peeling and subjective sensations. According to the results obtained, the use of Episofit A emulsion, especially against the background of long-term treatment with topical steroids, had a pronounced therapeutic effect. Thus, treating patients with consideration of the morphological and functional features of the facial skin helps to improve the results of traditional therapy, and the drug is a highly effective means of the new direction in skin care - corneotherapy - aimed at reconstructing and protecting the damaged stratum corneum.

  4. Specific Features of Intramolecular Proton Transfer Reaction in Schiff Bases

    Directory of Open Access Journals (Sweden)

    Aleksander Koll

    2003-06-01

    Full Text Available Abstract: The differences between intramolecular proton transfer in Mannich and Schiff bases are discussed. The tautomeric forms in equilibrium in the two types of molecules differ considerably. In Mannich bases, the equilibrium is between phenol and phenolate forms. In Schiff bases, each tautomer is strongly influenced by resonance between zwitterionic and keto structures. Despite the common opinion that the proton transfer forms in compounds with internal π-electronic coupling are mainly keto forms, it is shown in this work that in Schiff bases the content of the keto structure is slightly lower than that of the zwitterionic one. The almost equal participation of both forms leads to effective resonance between them and in this way stabilizes the intramolecular hydrogen bond.

  5. Sad Facial Expressions Increase Choice Blindness

    Directory of Open Access Journals (Sweden)

    Yajie Wang

    2018-01-01

    Full Text Available Previous studies have discovered a fascinating phenomenon known as choice blindness: individuals fail to detect mismatches between the face they choose and the face replaced by the experimenter. Although previous studies have reported a couple of factors that can modulate the magnitude of choice blindness, the potential effect of facial expression on choice blindness has not yet been explored. Using faces with sad and neutral expressions (Experiment 1) and faces with happy and neutral expressions (Experiment 2) in the classic choice blindness paradigm, the present study investigated the effects of facial expressions on choice blindness. The results showed that the detection rate was significantly lower for sad faces than for neutral faces, whereas no significant difference was observed between happy faces and neutral faces. An exploratory analysis of verbal reports found that participants who reported fewer facial features for sad (as compared to neutral) expressions also tended to show a lower detection rate for sad (as compared to neutral) faces. These findings indicate that sad facial expressions increase choice blindness, which might result from inhibition of further processing of detailed facial features by the less attractive sad expressions (as compared to neutral expressions).

  6. GAIN RATIO BASED FEATURE SELECTION METHOD FOR PRIVACY PRESERVATION

    Directory of Open Access Journals (Sweden)

    R. Praveena Priyadarsini

    2011-04-01

    Full Text Available Privacy preservation is a step in data mining that tries to safeguard sensitive information from unsanctioned disclosure, thereby protecting individual data records and their privacy. There are various privacy preservation techniques, such as k-anonymity, l-diversity, t-closeness, and data perturbation. In this paper, the k-anonymity privacy protection technique is applied to high-dimensional datasets such as Adult and Census. Since both datasets are high dimensional, a feature subset selection method, Gain Ratio, is applied; the attributes of the datasets are ranked, and low-ranking attributes are filtered out to form new reduced data subsets. The k-anonymization privacy preservation technique is then applied to the reduced datasets. The accuracy of the privacy-preserved reduced datasets and the original datasets is compared on two data mining tasks, classification and clustering, using the naïve Bayesian and k-means algorithms, respectively. Experimental results show that classification and clustering accuracy are comparable for the reduced k-anonymized datasets and the original datasets.
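    The gain ratio used for attribute ranking can be computed directly from entropy and split information. The sketch below shows one way to do this for a categorical attribute; the toy attribute and class labels are illustrative stand-ins, not the Adult or Census data.

```python
import numpy as np
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def gain_ratio(feature_values, labels):
    """Information gain of a categorical attribute divided by its split information."""
    h_before = entropy(labels)
    n = len(labels)
    h_after, split_info = 0.0, 0.0
    for v in set(feature_values):
        subset = [l for l, fv in zip(labels, feature_values) if fv == v]
        w = len(subset) / n
        h_after += w * entropy(subset)
        split_info -= w * np.log2(w)
    gain = h_before - h_after
    return gain / split_info if split_info > 0 else 0.0

if __name__ == "__main__":
    age_band = ["young", "young", "mid", "old", "old", "mid"]   # hypothetical attribute
    income = [">50K", "<=50K", ">50K", ">50K", "<=50K", "<=50K"]  # hypothetical class labels
    print(round(gain_ratio(age_band, income), 3))
```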

  7. A Study of Moment Based Features on Handwritten Digit Recognition

    Directory of Open Access Journals (Sweden)

    Pawan Kumar Singh

    2016-01-01

    Full Text Available Handwritten digit recognition plays a significant role in many user authentication applications in the modern world. Because handwritten digits vary in size, thickness, style, and orientation, these challenges must be addressed to resolve the problem. A lot of work has been done for various non-Indic scripts, particularly Roman, but for Indic scripts the research is limited. This paper presents a script-invariant handwritten digit recognition system for identifying digits written in five popular scripts of the Indian subcontinent, namely, Indo-Arabic, Bangla, Devanagari, Roman, and Telugu. A 130-element feature set, which is a combination of six different types of moments, namely, geometric moments, moment invariants, affine moment invariants, Legendre moments, Zernike moments, and complex moments, has been estimated for each digit sample. Finally, the technique is evaluated on the CMATER and MNIST databases using multiple classifiers and, after performing statistical significance tests, it is observed that the Multilayer Perceptron (MLP) classifier outperforms the others. Satisfactory recognition accuracies are attained for all five mentioned scripts.
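    As a small illustration of moment-based features, the sketch below computes raw geometric moments and the seven Hu moment invariants for one digit image with OpenCV. It covers only two of the six moment families combined in the paper's 130-element feature set, and the toy image is a stand-in for a real digit sample.

```python
import cv2
import numpy as np

def moment_features(gray_digit):
    """Raw geometric moments plus the seven Hu moment invariants for one digit image."""
    m = cv2.moments(gray_digit.astype(np.float32))
    hu = cv2.HuMoments(m).flatten()
    hu = -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)   # common log-scaling for comparability
    geometric = np.array([m["m00"], m["m10"], m["m01"], m["mu20"], m["mu02"], m["mu11"]])
    return np.concatenate([geometric, hu])

if __name__ == "__main__":
    img = np.zeros((28, 28), dtype=np.uint8)
    cv2.circle(img, (14, 14), 8, 255, -1)               # toy blob standing in for a digit
    print(moment_features(img).shape)                    # 13 values in this sketch
```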

  8. Facial emotion recognition in Parkinson's disease: A review and new hypotheses

    Science.gov (United States)

    Vérin, Marc; Sauleau, Paul; Grandjean, Didier

    2018-01-01

    Abstract Parkinson's disease is a neurodegenerative disorder classically characterized by motor symptoms. Among them, hypomimia affects facial expressiveness and social communication and has a highly negative impact on patients' and relatives' quality of life. Patients also frequently experience nonmotor symptoms, including emotional‐processing impairments, leading to difficulty in recognizing emotions from faces. Aside from its theoretical importance, understanding the disruption of facial emotion recognition in PD is crucial for improving quality of life for both patients and caregivers, as this impairment is associated with heightened interpersonal difficulties. However, studies assessing abilities in recognizing facial emotions in PD still report contradictory outcomes. The origins of this inconsistency are unclear, and several questions (regarding the role of dopamine replacement therapy or the possible consequences of hypomimia) remain unanswered. We therefore undertook a fresh review of relevant articles focusing on facial emotion recognition in PD to deepen current understanding of this nonmotor feature, exploring multiple significant potential confounding factors, both clinical and methodological, and discussing probable pathophysiological mechanisms. This led us to examine recent proposals about the role of basal ganglia‐based circuits in emotion and to consider the involvement of facial mimicry in this deficit from the perspective of embodied simulation theory. We believe our findings will inform clinical practice and increase fundamental knowledge, particularly in relation to potential embodied emotion impairment in PD. © 2018 The Authors. Movement Disorders published by Wiley Periodicals, Inc. on behalf of International Parkinson and Movement Disorder Society. PMID:29473661

  9. Not just another face in the crowd: society's perceptions of facial paralysis.

    Science.gov (United States)

    Ishii, Lisa; Godoy, Andres; Encarnacion, Carlos O; Byrne, Patrick J; Boahene, Kofi D O; Ishii, Masaru

    2012-03-01

    There is a paucity of data showing the perception penalty caused by facial paralysis. Our objective was to measure society's perception of facial paralysis on the characteristic of beauty. We hypothesized that patients with paralysis would be considered by society as less attractive than normals, a difference amplified by smiling. Randomized controlled experiment. Forty subjects viewed photographs of normal and paralyzed faces. They rated attractiveness, identified paralysis if present, its severity, and the feature most affected. There were significant differences in attractiveness scores for normal and paralyzed faces (Wilcoxon rank sum test, z = 16.912; P < .001), with paralyzed faces rated markedly less attractive than normal faces. Smiling increased attractiveness for normals (constant, 5.9; smile effect, 0.735; P < .001). The smile × paralysis interaction term was -0.892 (P < .001), but not significantly different from the smile term (χ²(1) = 0.87; P = .352). The random effects model showed an intersubject rating variability of 1.32. The attractiveness penalty imposed by facial paralysis is significant, with paralyzed faces considered markedly less attractive than normals. However, the ratings did not change significantly when patients smiled, despite the increased asymmetry that occurs through smiling. Observers were moderately good at identifying the presence of facial paralysis, but less good at distinguishing the side of involvement. These results have important implications for patient counseling and for the management of facial paralysis patients in an evidence-based manner. Copyright © 2011 The American Laryngological, Rhinological, and Otological Society, Inc.

  10. Facial skin follicular hyperkeratosis of patients with basal cell carcinoma

    Directory of Open Access Journals (Sweden)

    M. V. Zhuchkov

    2016-01-01

    Full Text Available This article provides a clinical observation of a paraneoplastic syndrome in a patient with basal cell carcinoma of the skin. The authors present the clinical features of paraneoplastic retentional follicular hyperkeratosis of the facial area, described here for the first time.

  11. A Mean-Shift-Based Feature Descriptor for Wide Baseline Stereo Matching

    Directory of Open Access Journals (Sweden)

    Yiwen Dou

    2015-01-01

    Full Text Available We propose a novel Mean-Shift-based feature descriptor construction approach for wide baseline stereo matching. Initially, the scale-invariant feature transform (SIFT) approach is used to extract relatively stable feature points. Each matched SIFT feature point needs a reasonable neighborhood range from which to choose a set of feature points. Subsequently, to select repeatable and highly robust feature points, Mean-Shift controls the corresponding feature scale. Finally, our approach is employed for depth image acquisition in wide baseline settings, and a Graph Cut algorithm optimizes the disparity information. Compared with existing methods such as SIFT, speeded-up robust features (SURF), and normalized cross-correlation (NCC), the presented approach has the advantages of higher robustness and accuracy. Experimental results on low-resolution images with weak feature description in wide baseline settings confirm the validity of our approach.
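    The first stage of such a pipeline, extracting and matching relatively stable SIFT feature points across a wide-baseline pair, can be sketched with OpenCV as below. The Mean-Shift scale control and the Graph Cut disparity optimization described in the abstract are not shown, and the image file names are placeholders.

```python
import cv2

def sift_matches(img_left, img_right, ratio=0.75):
    """SIFT keypoints matched across a wide-baseline pair with Lowe's ratio test."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img_left, None)
    kp2, des2 = sift.detectAndCompute(img_right, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < ratio * n.distance]
    return kp1, kp2, good

if __name__ == "__main__":
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical stereo pair
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
    if left is not None and right is not None:
        print(len(sift_matches(left, right)[2]), "tentative matches")
```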

  12. Bag-of-visual-words based feature extraction for SAR target classification

    Science.gov (United States)

    Amrani, Moussa; Chaib, Souleyman; Omara, Ibrahim; Jiang, Feng

    2017-07-01

    Feature extraction plays a key role in the classification performance of synthetic aperture radar automatic target recognition (SAR-ATR). It is crucial to choose appropriate features to train a classifier, which is a prerequisite. Inspired by the great success of Bag-of-Visual-Words (BoVW), we address the problem of feature extraction by proposing a novel feature extraction method for SAR target classification. First, Gabor-based features are adopted to extract features from the training SAR images. Second, a discriminative codebook is generated using the K-means clustering algorithm. Third, after feature encoding by computing the closest Euclidean distance, the targets are represented by a new robust bag of features. Finally, for target classification, a support vector machine (SVM) is used as a baseline classifier. Experiments on the Moving and Stationary Target Acquisition and Recognition (MSTAR) public release dataset are conducted, and the classification accuracy and time complexity results demonstrate that the proposed method outperforms the state-of-the-art methods.
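    A minimal Bag-of-Visual-Words pipeline along these lines, with a K-means codebook, hard-assignment histograms, and a linear SVM, might look like the following sketch. The random arrays stand in for Gabor descriptors extracted from SAR chips; the vocabulary size and descriptor dimension are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def build_codebook(descriptor_list, k=32, seed=0):
    """Cluster local descriptors from all training images into a visual vocabulary."""
    stacked = np.vstack(descriptor_list)
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit(stacked)

def bovw_histogram(descriptors, codebook):
    """Hard-assign each descriptor to its nearest visual word and count occurrences."""
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / (hist.sum() + 1e-12)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-ins for per-image local descriptors (e.g., Gabor features from patches).
    train_desc = [rng.random((50, 16)) + label for label in (0, 0, 1, 1)]
    labels = [0, 0, 1, 1]
    cb = build_codebook(train_desc)
    X = np.array([bovw_histogram(d, cb) for d in train_desc])
    clf = SVC(kernel="linear").fit(X, labels)
    print(clf.predict(X))
```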

  13. Facial Emotions Recognition using Gabor Transform and Facial Animation Parameters with Neural Networks

    Science.gov (United States)

    Harit, Aditya; Joshi, J. C., Col; Gupta, K. K.

    2018-03-01

    The paper proposes an automatic facial emotion recognition algorithm which comprises two main components: feature extraction and expression recognition. The algorithm uses a Gabor filter bank on fiducial points to find the facial expression features. The resulting magnitudes of the Gabor transforms, along with 14 chosen FAPs (Facial Animation Parameters), compose the feature space. There are two stages: the training phase and the recognition phase. In the training stage, the system classifies all training expressions of the six emotions considered into six classes, one for each emotion. In the recognition phase, it recognizes the emotion by applying the Gabor bank to a face image, finding the fiducial points, and feeding the resulting features to the trained neural architecture.
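    The feature-extraction component can be illustrated with a small Gabor filter bank whose magnitude responses are sampled at fiducial points. The kernel size, wavelengths, and fiducial coordinates below are assumptions, and the FAP features and neural classifier are not included.

```python
import cv2
import numpy as np

def gabor_bank_magnitudes(gray_face, points, n_orientations=4, n_scales=2):
    """Gabor filter-bank magnitude responses sampled at given fiducial points (row, col)."""
    features = []
    for s in range(n_scales):
        lambd = 8.0 * (s + 1)                       # assumed wavelengths
        for o in range(n_orientations):
            theta = o * np.pi / n_orientations
            kernel = cv2.getGaborKernel((21, 21), sigma=4.0, theta=theta,
                                        lambd=lambd, gamma=0.5, psi=0)
            response = cv2.filter2D(gray_face.astype(np.float32), cv2.CV_32F, kernel)
            features.extend(abs(response[r, c]) for r, c in points)
    return np.array(features)

if __name__ == "__main__":
    face = np.random.default_rng(0).random((128, 128)).astype(np.float32)
    fiducials = [(40, 50), (40, 78), (80, 64)]       # toy eye/eye/mouth positions
    print(gabor_bank_magnitudes(face, fiducials).shape)   # scales * orientations * points
```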

  14. A tool to ascertain taxonomic relatedness based on features derived ...

    Indian Academy of Sciences (India)

    MADHU

    placements of a new isolate based on phenotypic characteristics are now being supported by information preserved in the 16S rRNA gene. ... were extracted from the training data set of the 16S rDNA sequence, and was subjected to an artificial neural network ... variables to 275 principal components, accounting for 99%.

  15. Graph-based unsupervised feature selection and multiview ...

    Indian Academy of Sciences (India)

    2015-09-28

    Sep 28, 2015 ... llama.med.harvard.edu/funcassociate), a Web-based application which discovers properties enriched in lists of genes or proteins that emerge from large-scale experimen- tation (Berriz et al. 2009) is also used for biological significance measurement. Further, gene-card (http:// www.genecards.org/) (Safran ...

  16. Novel feature extraction method based on weight difference of weighted network for epileptic seizure detection.

    Science.gov (United States)

    Fenglin Wang; Qingfang Meng; Hong-Bo Xie; Yuehui Chen

    2014-01-01

    The extraction of classification features is the primary and core problem in all epileptic EEG detection algorithms, since it can seriously affect the performance of the detection algorithm. In this paper, a novel epileptic EEG feature extraction method based on a statistical parameter of a weighted complex network is proposed. The EEG signal is first transformed into a weighted network, and the weight differences of all the nodes in the network are analyzed. The sum of the top-quintile weight differences is then extracted as the classification feature. Finally, the extracted feature is applied to classify an epileptic EEG dataset. Experimental results show that single-feature classification based on the extracted feature obtains classification accuracy of up to 94.75%, which indicates that the extracted feature can distinguish ictal EEG from interictal EEG and has great potential for real-time epileptic seizure detection.
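    A hedged sketch of this kind of feature is given below: the EEG signal is turned into a small weighted network, node strengths are computed, and the sum of the top-quintile strength differences is returned as a single scalar feature. The network construction used here (segment-wise correlation) is an assumption for illustration; the paper defines its own weighted-network transform.

```python
import numpy as np

def weighted_network_feature(eeg, n_segments=20):
    """Sum of the top-quintile node-strength differences of a toy weighted network.

    The network construction (segment-wise correlation) is an illustrative assumption,
    not the transform used in the paper.
    """
    segments = np.array_split(eeg, n_segments)
    length = min(len(s) for s in segments)
    mat = np.vstack([s[:length] for s in segments])
    weights = np.abs(np.corrcoef(mat))               # edge weights between segments (nodes)
    np.fill_diagonal(weights, 0.0)
    strength = weights.sum(axis=1)                   # node strength = sum of incident weights
    diffs = np.sort(np.abs(np.diff(np.sort(strength))))[::-1]
    top = diffs[: max(1, len(diffs) // 5)]           # top quintile of strength differences
    return top.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    print(weighted_network_feature(rng.standard_normal(4000)))
```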

  17. Modeling first impressions from highly variable facial images.

    Science.gov (United States)

    Vernon, Richard J W; Sutherland, Clare A M; Young, Andrew W; Hartley, Tom

    2014-08-12

    First impressions of social traits, such as trustworthiness or dominance, are reliably perceived in faces, and despite their questionable validity they can have considerable real-world consequences. We sought to uncover the information driving such judgments, using an attribute-based approach. Attributes (physical facial features) were objectively measured from feature positions and colors in a database of highly variable "ambient" face photographs, and then used as input for a neural network to model factor dimensions (approachability, youthful-attractiveness, and dominance) thought to underlie social attributions. A linear model based on this approach was able to account for 58% of the variance in raters' impressions of previously unseen faces, and factor-attribute correlations could be used to rank attributes by their importance to each factor. Reversing this process, neural networks were then used to predict facial attributes and corresponding image properties from specific combinations of factor scores. In this way, the factors driving social trait impressions could be visualized as a series of computer-generated cartoon face-like images, depicting how attributes change along each dimension. This study shows that despite enormous variation in ambient images of faces, a substantial proportion of the variance in first impressions can be accounted for through linear changes in objectively defined features.
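    The modeling strategy, regressing a factor score onto objectively measured facial attributes and ranking attributes by their coefficients, can be sketched as below. The attribute matrix and factor scores here are synthetic placeholders; the study used attributes measured from ambient photographs and a neural network rather than this plain linear regression.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical attribute matrix: each row is one face, columns are measured
# physical attributes (e.g., mouth curvature, eyebrow height, skin brightness).
rng = np.random.default_rng(0)
attributes = rng.standard_normal((200, 10))
approachability = attributes @ rng.standard_normal(10) + 0.3 * rng.standard_normal(200)

model = LinearRegression().fit(attributes, approachability)
r2 = model.score(attributes, approachability)          # proportion of variance explained
ranking = np.argsort(-np.abs(model.coef_))              # attributes ranked by importance
print(round(r2, 2), ranking[:3])
```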

  18. Facial appearance affects science communication

    OpenAIRE

    Gheorghiu, AI; Callana, MJ; Skylark, William John

    2017-01-01

    First impressions based on facial appearance predict many important social outcomes. We investigated whether such impressions also influence the communication of scientific findings to lay audiences, a process that shapes public beliefs, opinion, and policy. First, we investigated the traits that engender interest in a scientist’s work, and those that create the impression of a “good scientist” who does high-quality research. Apparent competence and morality were positively related to both in...

  19. An anatomical study for localisation of zygomatic branch of facial nerve and masseteric nerve – an aid to nerve coaptation for facial reanimation surgery: A cadaver based study in Eastern India

    Directory of Open Access Journals (Sweden)

    Ratnadeep Poddar

    2017-01-01

    Full Text Available Context: In cases of chronic facial palsy where direct neurotisation is possible, the ipsilateral masseteric nerve is a very suitable motor donor. We have tried to specifically locate the masseteric nerve for this purpose. Aims: To describe an approach for localisation and exposure of both the zygomatic branch of the facial nerve and the nerve to masseter, with respect to a soft tissue reference point over the face. Settings and Design: Observational cross-sectional study, conducted on 12 fresh cadavers. Subjects and Methods: A curved incision was given, passing about 0.5 cm in front of the tragal cartilage. A reference point “R” was marked. The zygomatic branch of the facial nerve and the masseteric nerve were dissected out, and their specific locations were recorded from fixed reference points with the help of copper wire and slide callipers. Statistical Analysis Used: Central tendency measurements and unpaired “t” test. Results: The zygomatic branch of the facial nerve was located within a small circular area of radius 1 cm, the centre of which lies at a distance of 1.1 cm (±0.4 cm) in males and 0.2 cm (±0.1 cm) in females from the point “R”, in a vertical (coronal) plane. The nerve to masseter was noted to lie within a circular area of 1 cm radius, the centre of which was at a distance of 2.5 cm (±0.4 cm) and 1.7 cm (±0.2 cm) from R in male and female cadavers, respectively. Finally, the masseteric nerve's depth from the masseteric surface was found to be 1 cm (±0.1 cm) in males and 0.8 cm (±0.1 cm) in females. Conclusions: This novel approach can reduce the post-operative cosmetic morbidity and per-operative complications of facial reanimation surgery.

  20. Feature-Enhanced, Model-Based Sparse Aperture Imaging

    Science.gov (United States)

    2008-03-01

    information about angle-dependent scattering. Methods employing subaperture analysis and parametric models expect to find contiguous intervals in θ for ... transform, which is not a transform in the strict sense, but a method in image analysis for detecting straight lines in binary images [12], uses a ρ-θ ... We explore the application of a homotopy continuation-based method for sparse signal representation in overcomplete dictionaries. Our problem setup