WorldWideScience

Sample records for face recognition disorders

  1. [Face recognition in patients with autism spectrum disorders].

    Science.gov (United States)

    Kita, Yosuke; Inagaki, Masumi

    2012-07-01

    The present study aimed to review previous research on face recognition in patients with autism spectrum disorders (ASD). Face recognition is a key issue in the ASD research field because it can provide clues for elucidating the neural substrates responsible for the social impairment of these patients. Historically, behavioral studies have reported low performance and/or unique strategies of face recognition among ASD patients. However, the performance and strategies of ASD patients are comparable to those of control groups, depending on the experimental situation or developmental stage, suggesting that face recognition in ASD patients is not entirely impaired. Recent brain function studies, including event-related potential and functional magnetic resonance imaging studies, have investigated the cognitive process of face recognition in ASD patients and revealed impaired function in the brain's neural network comprising the fusiform gyrus and amygdala. This impaired function is potentially involved in the diminished preference for faces and in the atypical development of face recognition, contributing to the unstable behavioral characteristics observed in these patients. Additionally, face recognition in ASD patients has been examined from other perspectives, namely self-face recognition and facial emotion recognition. While the former topic is intimately linked to basic social abilities such as self-other discrimination, the latter is closely associated with mentalizing. Further research on face recognition in ASD patients should investigate the connection between behavioral and neurological specifics in these patients, by considering developmental changes and the clinical spectrum of ASD.

  2. Recognition of face and non-face stimuli in autistic spectrum disorder.

    Science.gov (United States)

    Arkush, Leo; Smith-Collins, Adam P R; Fiorentini, Chiara; Skuse, David H

    2013-12-01

    The ability to remember faces is critical for the development of social competence. From childhood to adulthood, we acquire a high level of expertise in the recognition of facial images, and neural processes become dedicated to sustaining competence. Many people with autism spectrum disorder (ASD) have poor face recognition memory; changes in hairstyle or other non-facial features in an otherwise familiar person affect their recollection skills. This observation implies that they may not use the configuration of the inner face to achieve memory competence, but bolster performance in other ways. We aimed to test this hypothesis by comparing the performance of a group of high-functioning unmedicated adolescents with ASD and a matched control group on a "surprise" face recognition memory task. We compared their memory for unfamiliar faces with their memory for images of houses. To evaluate the role played by peripheral cues in assisting recognition memory, we cropped both sets of pictures, retaining only the most salient central features. ASD adolescents had poorer recognition memory for faces than typical controls, but their recognition memory for houses was unimpaired. Cropping images of faces did not disproportionately influence their recall accuracy, relative to controls. House recognition skills (cropped and uncropped) were similar in both groups. In the ASD group only, performance on both sets of tasks was closely correlated, implying that memory for faces and other complex pictorial stimuli is achieved by domain-general (non-dedicated) cognitive mechanisms. Adolescents with ASD apparently do not use domain-specialized processing of inner facial cues to support face recognition memory.

  3. The Effect of Inversion on Face Recognition in Adults with Autism Spectrum Disorder

    Science.gov (United States)

    Hedley, Darren; Brewer, Neil; Young, Robyn

    2015-01-01

    Face identity recognition has widely been shown to be impaired in individuals with autism spectrum disorders (ASD). In this study we examined the influence of inversion on face recognition in 26 adults with ASD and 33 age and IQ matched controls. Participants completed a recognition test comprising upright and inverted faces. Participants with ASD…

  4. Face and Word Recognition Can Be Selectively Affected by Brain Injury or Developmental Disorders

    DEFF Research Database (Denmark)

    Robotham, Ro J.; Starrfelt, Randi

    2017-01-01

    […] face recognition deficit, and pure alexia, a selective word recognition deficit. Together, the patterns of impaired reading with preserved face recognition and impaired face recognition with preserved reading constitute a double dissociation. The existence of these selective deficits has been […] also have deficits in the other. The implications of this would be immense, with most textbooks in cognitive neuropsychology requiring drastic revisions. In order to evaluate the evidence for dissociations, we review studies that specifically investigate whether face or word recognition can […] be selectively affected by acquired brain injury or developmental disorders. We only include studies published since 2004, as comprehensive reviews of earlier studies are available. Most of the studies assess the supposedly preserved functions using sensitive measurements. We found convincing evidence […]

  5. Brief Report: Face-Specific Recognition Deficits in Young Children with Autism Spectrum Disorders

    Science.gov (United States)

    Bradshaw, Jessica; Shic, Frederick; Chawarska, Katarzyna

    2011-01-01

    This study used eyetracking to investigate the ability of young children with autism spectrum disorders (ASD) to recognize social (faces) and nonsocial (simple objects and complex block patterns) stimuli using the visual paired comparison (VPC) paradigm. Typically developing (TD) children showed evidence for recognition of faces and simple…

  6. Face Recognition in Children with a Pervasive Developmental Disorder Not Otherwise Specified.

    Science.gov (United States)

    Serra, M.; Althaus, M.; de Sonneville, L. M. J.; Stant, A. D.; Jackson, A. E.; Minderaa, R. B.

    2003-01-01

    A study investigated the accuracy and speed of face recognition in 26 children (ages 7-10) with Pervasive Developmental Disorder Not Otherwise Specified. The children needed almost as much time to recognize faces as they needed to recognize abstract patterns that were difficult to distinguish. (Contains references.)…

  7. Using eye movements as an index of implicit face recognition in autism spectrum disorder.

    Science.gov (United States)

    Hedley, Darren; Young, Robyn; Brewer, Neil

    2012-10-01

    Individuals with an autism spectrum disorder (ASD) typically show impairment on face recognition tasks. Performance has usually been assessed using overt, explicit recognition tasks. Here, a complementary method involving eye tracking was used to examine implicit face recognition in participants with ASD and in an intelligence quotient-matched non-ASD control group. Differences in eye movement indices between target and foil faces were used as an indicator of implicit face recognition. Explicit face recognition was assessed using old-new discrimination and reaction time measures. Stimuli were faces of studied (target) or unfamiliar (foil) persons. Target images at test were either identical to the images presented at study or altered by changing the lighting, pose, or by masking with visual noise. Participants with ASD performed worse than controls on the explicit recognition task. Eye movement-based measures, however, indicated that implicit recognition may not be affected to the same degree as explicit recognition. Autism Res 2012, 5: 363-379. © 2012 International Society for Autism Research, Wiley Periodicals, Inc.

  8. Impairments in Monkey and Human Face Recognition in 2-Year-Old Toddlers with Autism Spectrum Disorder and Developmental Delay

    Science.gov (United States)

    Chawarska, Katarzyna; Volkmar, Fred

    2007-01-01

    Face recognition impairments are well documented in older children with Autism Spectrum Disorders (ASD); however, the developmental course of the deficit is not clear. This study investigates the progressive specialization of face recognition skills in children with and without ASD. Experiment 1 examines human and monkey face recognition in…

  9. Face identity recognition in autism spectrum disorders: a review of behavioral studies.

    Science.gov (United States)

    Weigelt, Sarah; Koldewyn, Kami; Kanwisher, Nancy

    2012-03-01

    Face recognition--the ability to recognize a person from their facial appearance--is essential for normal social interaction. Face recognition deficits have been implicated in the most common disorder of social interaction: autism. Here we ask: is face identity recognition in fact impaired in people with autism? Reviewing behavioral studies we find no strong evidence for a qualitative difference in how facial identity is processed between those with and without autism: markers of typical face identity recognition, such as the face inversion effect, seem to be present in people with autism. However, quantitatively--i.e., how well facial identity is remembered or discriminated--people with autism perform worse than typical individuals. This impairment is particularly clear in face memory and in face perception tasks in which a delay intervenes between sample and test, and less so in tasks with no memory demand. Although some evidence suggests that this deficit may be specific to faces, further evidence on this question is necessary.

  10. Face recognition in children with a pervasive developmental disorder not otherwise specified

    NARCIS (Netherlands)

    Serra, M; Althaus, M; de Sonneville, LMJ; Stant, AD; Jackson, AE; Minderaa, RB

    2003-01-01

    This study investigates the accuracy and speed of face recognition in children with a Pervasive Developmental Disorder Not Otherwise Specified (PDDNOS; DSM-IV, American Psychiatric Association [APA], 1994). The study includes a clinical group of 26 nonretarded 7- to 10-year-old children with PDDNOS

  11. Emotion Recognition in Faces and the Use of Visual Context in Young People with High-Functioning Autism Spectrum Disorders

    Science.gov (United States)

    Wright, Barry; Clarke, Natalie; Jordan, Jo; Young, Andrew W.; Clarke, Paula; Miles, Jeremy; Nation, Kate; Clarke, Leesa; Williams, Christine

    2008-01-01

    We compared young people with high-functioning autism spectrum disorders (ASDs) with age, sex and IQ matched controls on emotion recognition of faces and pictorial context. Each participant completed two tests of emotion recognition. The first used Ekman series faces. The second used facial expressions in visual context. A control task involved…

  13. Face recognition deficits in autism spectrum disorders are both domain specific and process specific.

    Science.gov (United States)

    Weigelt, Sarah; Koldewyn, Kami; Kanwisher, Nancy

    2013-01-01

    Although many studies have reported face identity recognition deficits in autism spectrum disorders (ASD), two fundamental questions remain: 1) Is this deficit "process specific" for face memory in particular, or does it extend to perceptual discrimination of faces as well? And 2) Is the deficit "domain specific" for faces, or is it found more generally for other social or even nonsocial stimuli? The answers to these questions are important both for understanding the nature of autism and its developmental etiology, and for understanding the functional architecture of face processing in the typical brain. Here we show that children with ASD are impaired (compared to age and IQ-matched typical children) in face memory, but not face perception, demonstrating process specificity. Further, we find no deficit for either memory or perception of places or cars, indicating domain specificity. Importantly, we further show deficits in both the perception and memory of bodies, suggesting that the relevant domain of deficit may be social rather than specifically facial. These results provide a more precise characterization of the cognitive phenotype of autism and further indicate a functional dissociation between face memory and face perception.

  14. No Differences in Emotion Recognition Strategies in Children with Autism Spectrum Disorder: Evidence from Hybrid Faces

    Directory of Open Access Journals (Sweden)

    Kris Evers

    2014-01-01

    Emotion recognition problems are frequently reported in individuals with an autism spectrum disorder (ASD). However, this research area is characterized by inconsistent findings, with atypical emotion processing strategies possibly contributing to existing contradictions. In addition, an attenuated saliency of the eyes region is often demonstrated in ASD during face identity processing. We wanted to compare reliance on mouth versus eyes information in children with and without ASD, using hybrid facial expressions. A group of six-to-eight-year-old boys with ASD and an age- and intelligence-matched typically developing (TD) group without intellectual disability performed an emotion labelling task with hybrid facial expressions. Five static expressions were used: one neutral expression and four emotional expressions, namely, anger, fear, happiness, and sadness. Hybrid faces were created, consisting of an emotional face half (upper or lower face region) with the other face half showing a neutral expression. Results showed no emotion recognition problem in ASD. Moreover, we provided evidence for the existence of top- and bottom-emotions in children: correct identification of expressions mainly depends on information in the eyes (so-called top-emotions: happiness) or in the mouth region (so-called bottom-emotions: sadness, anger, and fear). No stronger reliance on mouth information was found in children with ASD.

  15. Delayed Face Recognition in Children and Adolescents with Autism Spectrum Disorders

    Directory of Open Access Journals (Sweden)

    Zahra Shahrivar

    2012-06-01

    Objective: Children with autism spectrum disorders (ASDs) have great problems in social interactions, including face recognition. There are many studies reporting deficits in face memory in individuals with ASDs. On the other hand, some studies indicate that this kind of memory is intact in this group. In the present study, delayed face recognition has been investigated in children and adolescents with ASDs compared to an age and sex matched typically developing group. Methods: In two sessions, the Benton Facial Recognition Test was administered to 15 children and adolescents with ASDs (high functioning autism and Asperger syndrome) and to 15 normal participants, ages 8-17 years. In the first condition, the long form of the Benton Facial Recognition Test was used without any delay. In the second session, this test was administered with a 15 second delay after one week. The reaction times and correct responses were measured in both conditions as the dependent variables. Results: Comparison of the reaction times and correct responses in the two groups revealed no significant difference in delayed and non-delayed conditions. Furthermore, no significant difference was observed between the two conditions in ASDs patients when comparing the variables. Although a significant correlation (p<0.05) was found between delayed and non-delayed conditions, it was not significant in the normal group. Moreover, data analysis revealed no significant difference between the two groups in the two conditions when IQ was considered as a covariate. Conclusion: In this study, it was found that the ability to recognize faces in simultaneous and delayed conditions is similar between adolescents with ASDs and their normal counterparts.

  16. Recognition disorders for famous faces and voices: a review of the literature and normative data of a new test battery.

    Science.gov (United States)

    Quaranta, Davide; Piccininni, Chiara; Carlesimo, Giovanni Augusto; Luzzi, Simona; Marra, Camillo; Papagno, Costanza; Trojano, Luigi; Gainotti, Guido

    2016-03-01

    Several anatomo-clinical investigations have shown that familiar face recognition disorders not due to high-level perceptual defects are often observed in patients with lesions of the right anterior temporal lobe (ATL). The meaning of these findings is, however, controversial, because some authors claim that these patients show pure instances of modality-specific 'associative prosopagnosia', whereas other authors maintain that in these patients voice recognition is also impaired and that these patients have a 'multimodal person recognition disorder'. To clarify the nature of famous face recognition disorders in patients affected by right ATL lesions, it is therefore very important to verify with formal tests whether these patients are able to recognize others by voice, but a direct comparison between the two modalities is hindered by the fact that voice recognition is more difficult than face recognition. To circumvent this difficulty, we constructed a test battery in which subjects were requested to recognize the same persons (well known at the national level) through their faces and voices, evaluating familiarity and identification processes. The present paper describes the 'Famous People Recognition Battery' and reports the normative data necessary to clarify the nature of person recognition disorders observed in patients affected by right ATL lesions.

  17. Handbook of Face Recognition

    CERN Document Server

    Li, Stan Z

    2011-01-01

    This highly anticipated new edition provides a comprehensive account of face recognition research and technology, spanning the full range of topics needed for designing operational face recognition systems. After a thorough introductory chapter, each of the following chapters focuses on a specific topic, reviewing background information, up-to-date techniques, and recent results, as well as offering challenges and future directions. Features: fully updated, revised and expanded, covering the entire spectrum of concepts, methods, and algorithms for automated face detection and recognition systems.

  18. Famous face recognition, face matching, and extraversion.

    Science.gov (United States)

    Lander, Karen; Poyarekar, Siddhi

    2015-01-01

    It has been previously established that extraverts who are skilled at interpersonal interaction perform significantly better than introverts on a face-specific recognition memory task. In our experiment we further investigate the relationship between extraversion and face recognition, focusing on famous face recognition and face matching. Results indicate that more extraverted individuals perform significantly better on an upright famous face recognition task and show significantly larger face inversion effects. However, our results did not find an effect of extraversion on face matching or inverted famous face recognition.

  19. Differential contribution of right and left temporo-occipital and anterior temporal lesions to face recognition disorders

    Directory of Open Access Journals (Sweden)

    Guido eGainotti

    2011-06-01

    In the study of prosopagnosia, several issues (such as the specific or non-specific manifestations of prosopagnosia, the unitary or non-unitary nature of this syndrome and the mechanisms underlying face recognition disorders) are still controversial. Two main sources of variance partially accounting for these controversies could be the qualitative differences between the face recognition disorders observed in patients with prevalent lesions of the right or left hemisphere and in those with lesions encroaching upon the temporo-occipital or the (right) anterior temporal cortex. Results of our review seem to confirm these suggestions. Indeed, they show that (a) the most specific forms of prosopagnosia are due to lesions of a right posterior network including the OFA and the FFA, whereas (b) the face identification defects observed in patients with left temporo-occipital lesions seem due to a semantic defect impeding access to person-specific semantic information from the visual modality. Furthermore, face recognition defects resulting from right anterior temporal lesions can usually be considered as part of a multimodal people recognition disorder. The implications of our review are, therefore, the following: (1) to consider the components of visual agnosia often observed in prosopagnosic patients with bilateral temporo-occipital lesions as part of a semantic defect, resulting from left-sided lesions (and not from prosopagnosia proper); (2) to systematically investigate voice recognition disorders in patients with right anterior temporal lesions to determine whether the face recognition defect should be considered a form of ‘associative prosopagnosia’ or a form of the ‘multimodal people recognition disorder’.

  20. Neural basis of distorted self-face recognition in social anxiety disorder

    Directory of Open Access Journals (Sweden)

    Min-Kyeong Kim

    2016-01-01

    Conclusion: Patients with SAD have a positive point of view of their own face and experience self-relevance for attractively transformed self-faces. This distorted cognition may be based on dysfunctions in the frontal and inferior parietal regions. The abnormal engagement of the fronto-parietal attentional network while processing face stimuli in non-social situations may be linked to distorted self-recognition in SAD.

  1. Using Computerized Games to Teach Face Recognition Skills to Children with Autism Spectrum Disorder: The "Let's Face It!" Program

    Science.gov (United States)

    Tanaka, James W.; Wolf, Julie M.; Klaiman, Cheryl; Koenig, Kathleen; Cockburn, Jeffrey; Herlihy, Lauren; Brown, Carla; Stahl, Sherin; Kaiser, Martha D.; Schultz, Robert T.

    2010-01-01

    Background: An emerging body of evidence indicates that relative to typically developing children, children with autism are selectively impaired in their ability to recognize facial identity. A critical question is whether face recognition skills can be enhanced through a direct training intervention. Methods: In a randomized clinical trial,…

  3. FACE RECOGNITION FROM FRONT-VIEW FACE

    Institute of Scientific and Technical Information of China (English)

    Wu Lifang; Shen Lansun

    2003-01-01

    This letter presents a face normalization algorithm based on a 2-D face model to recognize faces with varying poses from a front-view face. A 2-D face mesh model can be extracted from faces rotated to the left or right, and the corresponding front-view mesh model can be estimated according to the facial symmetry. Then, based on the inner relationship between the two mesh models, the normalized front-view face is formed by gray level mapping. Finally, face recognition is performed based on Principal Component Analysis (PCA). Experiments show that better face recognition performance is achieved in this way.

  4. Forensic Face Recognition: A Survey

    NARCIS (Netherlands)

    Ali, Tauseef; Spreeuwers, Luuk; Veldhuis, Raymond; Quaglia, Adamo; Epifano, Calogera M.

    2012-01-01

    The improvements of automatic face recognition during the last 2 decades have disclosed new applications like border control and camera surveillance. A new application field is forensic face recognition. Traditionally, face recognition by human experts has been used in forensics, but now there is a

  5. Successful Face Recognition Is Associated with Increased Prefrontal Cortex Activation in Autism Spectrum Disorder

    Science.gov (United States)

    Herrington, John D.; Riley, Meghan E.; Grupe, Daniel W.; Schultz, Robert T.

    2015-01-01

    This study examines whether deficits in visual information processing in autism-spectrum disorder (ASD) can be offset by the recruitment of brain structures involved in selective attention. During functional MRI, 12 children with ASD and 19 control participants completed a selective attention one-back task in which images of faces and houses were…

  6. Voice Recognition in Face-Blind Patients.

    Science.gov (United States)

    Liu, Ran R; Pancaroglu, Raika; Hills, Charlotte S; Duchaine, Brad; Barton, Jason J S

    2016-04-01

    Right or bilateral anterior temporal damage can impair face recognition, but whether this is an associative variant of prosopagnosia or part of a multimodal disorder of person recognition is an unsettled question, with implications for cognitive and neuroanatomic models of person recognition. We assessed voice perception and short-term recognition of recently heard voices in 10 subjects with impaired face recognition acquired after cerebral lesions. All 4 subjects with apperceptive prosopagnosia due to lesions limited to fusiform cortex had intact voice discrimination and recognition. One subject with bilateral fusiform and anterior temporal lesions had a combined apperceptive prosopagnosia and apperceptive phonagnosia, the first such described case. Deficits indicating a multimodal syndrome of person recognition were found only in 2 subjects with bilateral anterior temporal lesions. All 3 subjects with right anterior temporal lesions had normal voice perception and recognition, 2 of whom performed normally on perceptual discrimination of faces. This confirms that such lesions can cause a modality-specific associative prosopagnosia.

  7. Looking but Not Seeing: Atypical Visual Scanning and Recognition of Faces in 2 and 4-Year-Old Children with Autism Spectrum Disorder

    Science.gov (United States)

    Chawarska, Katarzyna; Shic, Frederick

    2009-01-01

    This study used eye-tracking to examine visual scanning and recognition of faces by 2- and 4-year-old children with autism spectrum disorder (ASD) (N = 44) and typically developing (TD) controls (N = 30). TD toddlers at both age levels scanned and recognized faces similarly. Toddlers with ASD looked increasingly away from faces with age,…

  8. [Comparative studies of face recognition].

    Science.gov (United States)

    Kawai, Nobuyuki

    2012-07-01

    Every human being is proficient in face recognition. However, the reason for and the manner in which humans have attained such an ability remain unknown. These questions can be best answered through comparative studies of face recognition in non-human animals. Studies in both primates and non-primates show that not only primates, but also non-primates possess the ability to extract information from their conspecifics and from human experimenters. Neural specialization for face recognition is shared with mammals in distant taxa, suggesting that face recognition evolved earlier than the emergence of mammals. A recent study indicated that a social insect, the golden paper wasp, can distinguish the faces of its conspecifics, whereas a closely related species, which has a less complex social lifestyle with just one queen ruling a nest of underlings, did not show strong face recognition for conspecifics. Social complexity and the need to differentiate between one another likely led humans to evolve their face recognition abilities.

  9. Study of Face Recognition Techniques

    Directory of Open Access Journals (Sweden)

    Sangeeta Kaushik

    2014-12-01

    A study of both face recognition and detection techniques is carried out using algorithms such as Principal Component Analysis (PCA), Kernel Principal Component Analysis (KPCA), Linear Discriminant Analysis (LDA) and Line Edge Map (LEM). These algorithms show different rates of accuracy under different conditions. The automatic recognition of human faces presents a challenge to the pattern recognition community. Typically, human faces differ in shape, with only minor similarity from person to person. Furthermore, changes in lighting conditions, facial expressions and pose variations further complicate face recognition, making it one of the difficult problems in pattern analysis.
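
    As a rough illustration of the extractors named above, the sketch below compares PCA, KPCA and LDA features with a 1-nearest-neighbour classifier. It is a minimal sketch assuming scikit-learn and its bundled Olivetti faces as a stand-in database (LEM is omitted); it is not the evaluation protocol of the surveyed paper.

    ```python
    # Hypothetical comparison of the PCA, KPCA and LDA feature extractors with a
    # 1-nearest-neighbour classifier on the Olivetti faces (40 subjects, 64x64 images).
    from sklearn.datasets import fetch_olivetti_faces
    from sklearn.decomposition import PCA, KernelPCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline

    faces = fetch_olivetti_faces()
    X_tr, X_te, y_tr, y_te = train_test_split(
        faces.data, faces.target, test_size=0.25, stratify=faces.target, random_state=0)

    extractors = {
        "PCA": PCA(n_components=100, whiten=True),
        "KPCA": KernelPCA(n_components=100, kernel="rbf", gamma=1e-3),
        "LDA": LinearDiscriminantAnalysis(n_components=39),  # at most n_classes - 1
    }
    for name, extractor in extractors.items():
        model = make_pipeline(extractor, KNeighborsClassifier(n_neighbors=1))
        model.fit(X_tr, y_tr)
        print(f"{name}: test accuracy = {model.score(X_te, y_te):.3f}")
    ```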

  10. Genetic specificity of face recognition.

    Science.gov (United States)

    Shakeshaft, Nicholas G; Plomin, Robert

    2015-10-13

    Specific cognitive abilities in diverse domains are typically found to be highly heritable and substantially correlated with general cognitive ability (g), both phenotypically and genetically. Recent twin studies have found the ability to memorize and recognize faces to be an exception, being similarly heritable but phenotypically substantially uncorrelated both with g and with general object recognition. However, the genetic relationships between face recognition and other abilities (the extent to which they share a common genetic etiology) cannot be determined from phenotypic associations. In this, to our knowledge, first study of the genetic associations between face recognition and other domains, 2,000 18- and 19-year-old United Kingdom twins completed tests assessing their face recognition, object recognition, and general cognitive abilities. Results confirmed the substantial heritability of face recognition (61%), and multivariate genetic analyses found that most of this genetic influence is unique and not shared with other cognitive abilities.

  11. Forensic Face Recognition: A Survey

    NARCIS (Netherlands)

    Ali, Tauseef; Veldhuis, Raymond; Spreeuwers, Luuk

    2010-01-01

    Besides a few papers that focus on the forensic aspects of automatic face recognition, there is not much published about it in contrast to the literature on developing new techniques and methodologies for biometric face recognition. In this report, we review forensic facial identification which is t

  12. Side-View Face Recognition

    NARCIS (Netherlands)

    Santemiz, Pinar; Spreeuwers, Luuk J.; Veldhuis, Raymond N.J.; Biggelaar , van den Olivier

    2011-01-01

    As a widely used biometrics, face recognition has many advantages such as being non-intrusive, natural and passive. On the other hand, in real-life scenarios with uncontrolled environment, pose variation up to side-view positions makes face recognition a challenging work. In this paper we discuss th

  13. Comparing Face Detection and Recognition Techniques

    OpenAIRE

    Korra, Jyothi

    2016-01-01

    This paper implements and compares different techniques for face detection and recognition. The first task is face detection, locating where the face is in the image; the second is face recognition, identifying the person. We study three techniques in this paper: face detection using a self-organizing map (SOM), face recognition by projection and nearest neighbor, and face recognition using SVM.

  14. Holistic processing predicts face recognition.

    Science.gov (United States)

    Richler, Jennifer J; Cheung, Olivia S; Gauthier, Isabel

    2011-04-01

    The concept of holistic processing is a cornerstone of face-recognition research. In the study reported here, we demonstrated that holistic processing predicts face-recognition abilities on the Cambridge Face Memory Test and on a perceptual face-identification task. Our findings validate a large body of work that relies on the assumption that holistic processing is related to face recognition. These findings also reconcile the study of face recognition with the perceptual-expertise work it inspired; such work links holistic processing of objects with people's ability to individuate them. Our results differ from those of a recent study showing no link between holistic processing and face recognition. This discrepancy can be attributed to the use in prior research of a popular but flawed measure of holistic processing. Our findings salvage the central role of holistic processing in face recognition and cast doubt on a subset of the face-perception literature that relies on a problematic measure of holistic processing.

  15. Face Recognition and Visual Search Strategies in Autism Spectrum Disorders: Amending and Extending a Recent Review by Weigelt et al.

    Directory of Open Access Journals (Sweden)

    Julia Tang

    The purpose of this review was to build upon a recent review by Weigelt et al. which examined visual search strategies and face identification between individuals with autism spectrum disorders (ASD) and typically developing peers. Seven databases, CINAHL Plus, EMBASE, ERIC, Medline, Proquest, PsychInfo and PubMed, were used to locate published scientific studies matching our inclusion criteria. A total of 28 articles not included in Weigelt et al. met criteria for inclusion into this systematic review. Of these 28 studies, 16 were available and met criteria at the time of the previous review, but were mistakenly excluded, and 12 were recently published. Weigelt et al. found quantitative, but not qualitative, differences in face identification in individuals with ASD. In contrast, the current systematic review found both qualitative and quantitative differences in face identification between individuals with and without ASD. There is a large inconsistency in findings across the eye tracking and neurobiological studies reviewed. Recommendations for future research in face recognition in ASD are discussed.

  16. Effective indexing for face recognition

    Science.gov (United States)

    Sochenkov, I.; Sochenkova, A.; Vokhmintsev, A.; Makovetskii, A.; Melnikov, A.

    2016-09-01

    Face recognition is one of the most important tasks in computer vision and pattern recognition, and it is useful for security systems. In some situations it is necessary to identify a person among many others; for such cases, this work presents a new approach to data indexing that provides fast retrieval in large image collections. Data indexing in this research consists of five steps. First, we detect the area containing the face; second, we align the face; then we detect the areas containing the eyes and eyebrows, the nose, and the mouth. After that we find key points in each area using different descriptors, and finally we index these descriptors with the help of a quantization procedure. An experimental analysis of this method is performed. This paper shows that the method performs at the level of state-of-the-art face recognition approaches while also returning results quickly, which is important for systems that provide safety.
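
    A minimal sketch of the final quantization-and-indexing step described above, assuming local descriptors per face have already been extracted. The k-means visual-word codebook and inverted file used here are illustrative choices, not necessarily the authors' exact quantizer.

    ```python
    # Illustrative quantization and inverted-file indexing of local face descriptors.
    # `descriptors_per_face` is a list of (n_i, d) arrays, one array per gallery face.
    from collections import defaultdict
    import numpy as np
    from sklearn.cluster import KMeans

    def build_index(descriptors_per_face, n_words=256, seed=0):
        all_desc = np.vstack(descriptors_per_face)
        codebook = KMeans(n_clusters=n_words, n_init=10, random_state=seed).fit(all_desc)
        inverted = defaultdict(set)                 # visual word id -> set of face ids
        for face_id, desc in enumerate(descriptors_per_face):
            for word in codebook.predict(desc):
                inverted[word].add(face_id)
        return codebook, inverted

    def query(codebook, inverted, probe_desc):
        votes = defaultdict(int)                    # face id -> number of shared words
        for word in codebook.predict(probe_desc):
            for face_id in inverted[word]:
                votes[face_id] += 1
        return sorted(votes, key=votes.get, reverse=True)   # candidates, best first
    ```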

  17. Optimizing Face Recognition Using PCA

    Directory of Open Access Journals (Sweden)

    Manal Abdullah

    2012-03-01

    Principal Component Analysis (PCA) is a classical feature extraction and data representation technique widely used in pattern recognition. It is one of the most successful techniques in face recognition, but it has the drawback of high computational cost, especially for large databases. This paper conducts a study to optimize the time complexity of PCA (eigenfaces) in a way that does not affect recognition performance. The authors minimize the number of participating eigenvectors, which consequently decreases the computation time. The recognition times of the original and the enhanced algorithm are compared. The performance of the original and the enhanced proposed algorithm is tested on the face94 face database. Experimental results show that the recognition time is reduced by 35% by applying the proposed enhanced algorithm. DET curves are used to illustrate the experimental results.
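
    The sketch below illustrates the trade-off the abstract describes, sweeping the number of retained eigenfaces and timing recognition. It is a minimal sketch assuming scikit-learn and the Olivetti faces rather than the face94 database used by the authors.

    ```python
    # Sweep the number of retained eigenfaces: fewer eigenvectors mean faster
    # projection and matching, at some cost in accuracy.
    import time
    from sklearn.datasets import fetch_olivetti_faces
    from sklearn.decomposition import PCA
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    faces = fetch_olivetti_faces()
    X_tr, X_te, y_tr, y_te = train_test_split(
        faces.data, faces.target, test_size=0.3, stratify=faces.target, random_state=1)

    for k in (10, 25, 50, 100, 200):
        pca = PCA(n_components=k).fit(X_tr)
        clf = KNeighborsClassifier(n_neighbors=1).fit(pca.transform(X_tr), y_tr)
        t0 = time.perf_counter()
        acc = clf.score(pca.transform(X_te), y_te)
        ms = (time.perf_counter() - t0) * 1000
        print(f"{k:3d} eigenfaces: accuracy {acc:.3f}, recognition time {ms:.1f} ms")
    ```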

  19. Similarity measures for face recognition

    CERN Document Server

    Vezzetti, Enrico

    2015-01-01

    Face recognition has several applications, including security (authentication and identification of device users and criminal suspects) and medicine (corrective surgery and diagnosis). Facial recognition programs rely on algorithms that can compare and compute the similarity between two sets of images. This eBook explains some of the similarity measures used in facial recognition systems in a single volume. Readers will learn about various measures including Minkowski distances, Mahalanobis distances, Hausdorff distances, and cosine-based distances, among other methods. The book also summarizes errors that may occur in face recognition methods. Computer scientists "facing face" and looking to select and test different methods of computing similarities will benefit from this book. The book is also a useful tool for students undertaking computer vision courses.

  20. Pilgrims Face Recognition Dataset -- HUFRD

    OpenAIRE

    Aly, Salah A.

    2012-01-01

    In this work, we define a new pilgrims face recognition dataset, called the HUFRD dataset. The newly developed dataset presents various pilgrims' images taken from outside the Holy Masjid El-Harram in Makkah during the 2011-2012 Hajj and Umrah seasons. This dataset will be used to test our developed facial recognition and detection algorithms, as well as to assess the missing and found recognition system \cite{crowdsensing}.

  1. Emotion-independent face recognition

    Science.gov (United States)

    De Silva, Liyanage C.; Esther, Kho G. P.

    2000-12-01

    Current face recognition techniques tend to work well when recognizing faces under small variations in lighting, facial expression and pose, but deteriorate under more extreme conditions. In this paper, a face recognition system to recognize faces of known individuals, despite variations in facial expression due to different emotions, is developed. The eigenface approach is used for feature extraction. Classification methods include Euclidean distance, back propagation neural network and generalized regression neural network. These methods yield 100% recognition accuracy when the training database is representative, containing one image representing the peak expression for each emotion of each person apart from the neutral expression. The feature vectors used for comparison in the Euclidean distance method and for training the neural network must be all the feature vectors of the training set. These results are obtained for a face database consisting of only four persons.
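
    A toy sketch of the Euclidean-distance matching stage mentioned above, assuming the eigenface feature vectors have already been computed; the vectors and identity labels below are made up for illustration.

    ```python
    # Euclidean-distance matching in eigenface feature space: a probe is assigned
    # the identity of the closest training feature vector.
    import numpy as np

    def classify_euclidean(train_features, train_labels, probe_feature):
        dists = np.linalg.norm(train_features - probe_feature, axis=1)
        return train_labels[int(np.argmin(dists))]

    # toy usage with made-up 3-D feature vectors
    train_features = np.array([[0.1, 0.9, 0.2], [0.8, 0.1, 0.3], [0.2, 0.8, 0.1]])
    train_labels = np.array(["person_a", "person_b", "person_a"])
    print(classify_euclidean(train_features, train_labels, np.array([0.15, 0.85, 0.15])))
    ```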

  2. Age-invariant face recognition.

    Science.gov (United States)

    Park, Unsang; Tong, Yiying; Jain, Anil K

    2010-05-01

    One of the challenges in automatic face recognition is to achieve temporal invariance. In other words, the goal is to come up with a representation and matching scheme that is robust to changes due to facial aging. Facial aging is a complex process that affects both the 3D shape of the face and its texture (e.g., wrinkles). These shape and texture changes degrade the performance of automatic face recognition systems. However, facial aging has not received substantial attention compared to other facial variations due to pose, lighting, and expression. We propose a 3D aging modeling technique and show how it can be used to compensate for age variations to improve face recognition performance. The aging modeling technique adapts view-invariant 3D face models to the given 2D face aging database. The proposed approach is evaluated on three different databases (i.e., FG-NET, MORPH, and BROWNS) using FaceVACS, a state-of-the-art commercial face recognition engine.

  3. Face Processing: Models For Recognition

    Science.gov (United States)

    Turk, Matthew A.; Pentland, Alexander P.

    1990-03-01

    The human ability to process faces is remarkable. We can identify perhaps thousands of faces learned throughout our lifetime and read facial expression to understand such subtle qualities as emotion. These skills are quite robust, despite sometimes large changes in the visual stimulus due to expression, aging, and distractions such as glasses or changes in hairstyle or facial hair. Computers which model and recognize faces will be useful in a variety of applications, including criminal identification, human-computer interface, and animation. We discuss models for representing faces and their applicability to the task of recognition, and present techniques for identifying faces and detecting eye blinks.
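
    A minimal from-scratch sketch of the eigenface representation this kind of model relies on, using the standard covariance trick; array shapes and parameter values are illustrative assumptions, not the paper's implementation.

    ```python
    # Eigenfaces from scratch: mean face, top eigenvectors of the training set, and
    # projection of a face into the resulting low-dimensional "face space".
    import numpy as np

    def eigenfaces(train_images, n_components=20):
        X = train_images.reshape(len(train_images), -1).astype(np.float64)
        mean_face = X.mean(axis=0)
        A = X - mean_face                              # centred data, one row per face
        # covariance trick: eigenvectors of the small A A^T give those of A^T A
        eigvals, small_vecs = np.linalg.eigh(A @ A.T)
        order = np.argsort(eigvals)[::-1][:n_components]
        components = (A.T @ small_vecs[:, order]).T    # (n_components, n_pixels)
        components /= np.linalg.norm(components, axis=1, keepdims=True)
        return mean_face, components

    def project(image, mean_face, components):
        return components @ (image.ravel().astype(np.float64) - mean_face)
    ```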

  4. Kernel learning algorithms for face recognition

    CERN Document Server

    Li, Jun-Bao; Pan, Jeng-Shyang

    2013-01-01

    Kernel Learning Algorithms for Face Recognition covers the framework of kernel-based face recognition. This book discusses advanced kernel learning algorithms and their application to face recognition. It also focuses on the theoretical derivation, the system framework and experiments involving kernel-based face recognition. Included within are algorithms of kernel-based face recognition, and also the feasibility of the kernel-based face recognition method. This book provides researchers in the pattern recognition and machine learning area with advanced face recognition methods and its new

  5. Multibiometrics for face recognition

    NARCIS (Netherlands)

    Veldhuis, Raymond; Deravi, Farzin; Tao, Qian

    2008-01-01

    Fusion is a popular practice to combine multiple sources of biometric information to achieve systems with greater performance and flexibility. In this paper various approaches to fusion within a multibiometrics context are considered and an application to the fusion of 2D and 3D face information is

  7. Automated Face Recognition System

    Science.gov (United States)

    1992-12-01

  8. Multithread Face Recognition in Cloud

    Directory of Open Access Journals (Sweden)

    Dakshina Ranjan Kisku

    2016-01-01

    Faces are highly challenging and dynamic objects that are employed as biometric evidence in identity verification. Recently, biometric systems have proven to be essential security tools, in which bulk matching of enrolled people and watch lists is performed every day. To facilitate this process, organizations with large computing facilities need to maintain these facilities. To minimize the burden of maintaining these costly facilities for enrollment and recognition, multinational companies can transfer this responsibility to third-party vendors who can maintain cloud computing infrastructures for recognition. In this paper, we showcase cloud computing-enabled face recognition, which utilizes PCA-characterized face instances and reduces the number of invariant SIFT points that are extracted from each face. To achieve high interclass and low intraclass variances, a set of six PCA-characterized face instances is computed on columns of each face image by varying the number of principal components. Extracted SIFT keypoints are fused using sum and max fusion rules. A novel cohort selection technique is applied to increase the total performance. The proposed protomodel is tested on the BioID and FEI face databases, and the efficacy of the system is proven based on the obtained results. We also compare the proposed method with other well-known methods.
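
    A small sketch of the sum- and max-rule score fusion mentioned in the abstract, assuming per-instance matcher scores are already available; the min-max normalization step is an assumed choice, not necessarily the authors'.

    ```python
    # Sum-rule and max-rule fusion of min-max normalized matcher scores.
    import numpy as np

    def minmax_normalize(scores):
        scores = np.asarray(scores, dtype=float)
        lo, hi = scores.min(), scores.max()
        return (scores - lo) / (hi - lo) if hi > lo else np.zeros_like(scores)

    def fuse(score_lists, rule="sum"):
        normalized = np.vstack([minmax_normalize(s) for s in score_lists])
        return normalized.sum(axis=0) if rule == "sum" else normalized.max(axis=0)

    # scores of one probe against four gallery identities from three face instances
    instance_scores = [[12, 40, 8, 5], [3, 25, 4, 2], [7, 30, 6, 6]]
    print(fuse(instance_scores, "sum").argmax())   # index of the best-matching identity
    ```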

  9. A Survey: Face Recognition Techniques

    Directory of Open Access Journals (Sweden)

    Muhammad Sharif

    2012-12-01

    In this study, existing face recognition techniques are surveyed along with their pros and cons. The most general methods include Eigenface (Eigenfeatures), Hidden Markov Model (HMM), geometric-based and template-matching approaches. The survey analyses these approaches with respect to the face representations they constitute, as discussed below. In the second phase of the survey, factors affecting recognition rates and processes are also discussed, along with the solutions provided by different authors.

  10. Reaction Time of Facial Affect Recognition in Asperger's Disorder for Cartoon and Real, Static and Moving Faces

    Science.gov (United States)

    Miyahara, Motohide; Bray, Anne; Tsujii, Masatsugu; Fujita, Chikako; Sugiyama, Toshiro

    2007-01-01

    This study used a choice reaction-time paradigm to test the perceived impairment of facial affect recognition in Asperger's disorder. Twenty teenagers with Asperger's disorder and 20 controls were compared with respect to the latency and accuracy of response to happy or disgusted facial expressions, presented in cartoon or real images and in…

  12. Face Recognition using Curvelet Transform

    CERN Document Server

    Cohen, Rami

    2011-01-01

    Face recognition has been studied extensively for more than 20 years now. Since the beginning of the 1990s the subject has become a major issue. This technology is used in many important real-world applications, such as video surveillance, smart cards, database security, internet and intranet access. This report reviews two recent algorithms for face recognition which take advantage of a relatively new multiscale geometric analysis tool, the Curvelet transform, for facial processing and feature extraction. This transform proves to be efficient especially due to its good ability to detect curves and lines, which characterize the human face. An algorithm which is based on the two algorithms mentioned above is proposed, and its performance is evaluated on three databases of faces: AT&T (ORL), Essex Grimace and Georgia-Tech. k-nearest neighbour (k-NN) and Support Vector Machine (SVM) classifiers are used, along with Principal Component Analysis (PCA) for dimensionality reduction. This algorithm shows good results, ...

  13. Face Recognition in Various Illuminations

    Directory of Open Access Journals (Sweden)

    Saurabh D. Parmar,

    2014-05-01

    Face Recognition (FR) under various illuminations is very challenging. A normalization technique is useful for removing dimness and shadow from the facial image, which reduces the effect of illumination variations while retaining the necessary information of the face. A robust local feature extractor, the gray-scale invariant texture operator called Local Binary Pattern (LBP), is used for feature extraction. A K-Nearest Neighbor classifier is utilized for classification and to match face images from the database. Experimental results were based on the Yale-B database with three different subcategories. The proposed method has been tested for robust face recognition under various illumination conditions. Extensive experiments show that the proposed system can achieve very encouraging performance in various illumination environments.
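
    A compact sketch of the LBP-plus-k-NN pipeline the abstract outlines, assuming scikit-image and scikit-learn; the illumination normalization step is omitted and the parameter choices are illustrative.

    ```python
    # Uniform-LBP histogram features with a k-nearest-neighbour classifier.
    import numpy as np
    from skimage.feature import local_binary_pattern
    from sklearn.neighbors import KNeighborsClassifier

    def lbp_histogram(gray_image, P=8, R=1):
        codes = local_binary_pattern(gray_image, P, R, method="uniform")
        hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
        return hist

    def train_lbp_knn(train_images, train_labels, k=1):
        features = np.array([lbp_histogram(img) for img in train_images])
        return KNeighborsClassifier(n_neighbors=k).fit(features, train_labels)

    # usage sketch: classifier = train_lbp_knn(images, labels)
    #               predicted = classifier.predict([lbp_histogram(test_image)])
    ```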

  14. Covert Face Recognition without Prosopagnosia

    Directory of Open Access Journals (Sweden)

    H. D. Ellis

    1993-01-01

    An experiment is reported where subjects were presented with familiar or unfamiliar faces for supraliminal durations or for durations individually assessed as being below the threshold for recognition. Their electrodermal responses to each stimulus were measured and the results showed higher peak amplitude skin conductance responses for familiar than for unfamiliar faces, regardless of whether they had been displayed supraliminally or subliminally. A parallel is drawn between elevated skin conductance responses to subliminal stimuli and findings of covert recognition of familiar faces in prosopagnosic patients, some of whom show increased electrodermal activity (EDA) to previously familiar faces. The supraliminal presentation data also served to replicate similar work by Tranel et al (1985). The results are considered alongside other data indicating the relation between non-conscious, “automatic” aspects of normal visual information processing and abilities which can be found to be preserved without awareness after brain injury.

  15. Face recognition using Krawtchouk moment

    Indian Academy of Sciences (India)

    J Sheeba Rani; D Devaraj

    2012-08-01

    Feature extraction is one of the important tasks in face recognition. Moments are widely used feature extractors due to their superior discriminatory power and geometrical invariance, and they generally capture the global features of the image. This paper proposes the Krawtchouk moment for feature extraction in a face recognition system; it has the ability to extract local features from any region of interest. The Krawtchouk moment is used to extract both local and global features of the face. The extracted features are fused using a summed normalized distance strategy. A nearest neighbour classifier is employed to classify the faces. The proposed method is tested using the ORL and Yale databases. Experimental results show that the proposed method is able to recognize images correctly, even if the images are corrupted with noise or exhibit changes in facial expression and tilt.

  16. Face recognition, a landmarks tale

    NARCIS (Netherlands)

    Beumer, Gerrit Maarten

    2009-01-01

    Face recognition is a technology that appeals to the imagination of many people. This is particularly reflected in the popularity of science-fiction films and forensic detective series such as CSI, CSI New York, CSI Miami, Bones and NCIS. Although these series tend to be set in the present, their a

  18. Towards automatic forensic face recognition

    NARCIS (Netherlands)

    Ali, Tauseef; Spreeuwers, Luuk; Veldhuis, Raymond

    2011-01-01

    In this paper we present a methodology and experimental results for evidence evaluation in the context of forensic face recognition. In forensic applications, the matching score (hereafter referred to as similarity score) from a biometric system must be represented as a Likelihood Ratio (LR). In our
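
    As a toy illustration of expressing a similarity score as a likelihood ratio, the sketch below fits Gaussians to simulated genuine and impostor score distributions; this is a simplified, assumed approach, not necessarily the score-to-LR calibration used by the authors.

    ```python
    # Toy score-to-likelihood-ratio calibration with Gaussian models of the
    # genuine (same person) and impostor (different persons) score distributions.
    import numpy as np
    from scipy.stats import norm

    def likelihood_ratio(score, genuine_scores, impostor_scores):
        g = norm(np.mean(genuine_scores), np.std(genuine_scores))
        i = norm(np.mean(impostor_scores), np.std(impostor_scores))
        return g.pdf(score) / i.pdf(score)

    rng = np.random.default_rng(0)
    genuine = rng.normal(0.8, 0.05, 500)     # simulated same-person similarity scores
    impostor = rng.normal(0.5, 0.10, 500)    # simulated different-person scores
    print(likelihood_ratio(0.75, genuine, impostor))   # LR > 1 favours "same person"
    ```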

  20. Embedded Face Detection and Recognition

    Directory of Open Access Journals (Sweden)

    Göksel Günlü

    2012-10-01

    The need to increase security in open or public spaces has in turn given rise to the requirement to monitor these spaces and analyse those images on-site and on-time. At this point, the use of smart cameras, whose popularity has been increasing, is one step ahead. With sensors and Digital Signal Processors (DSPs), smart cameras generate ad hoc results by analysing the numeric images transmitted from the sensor by means of a variety of image-processing algorithms. Since the images are not transmitted to a distant processing unit but rather are processed inside the camera, this does not necessitate high-bandwidth networks or high processor powered systems; the camera can instantaneously decide on the required access. Nonetheless, on account of restricted memory, processing power and overall power, image processing algorithms need to be developed and optimized for embedded processors. Among these algorithms, one of the most important is for face detection and recognition. A number of face detection and recognition methods have been proposed recently and many of these methods have been tested on general-purpose processors. In smart cameras, which are real-life applications of such methods, the widest use is on DSPs. In the present study, the Viola-Jones face detection method, which was reported to run faster on PCs, was optimized for DSPs; the face recognition method was combined with the developed sub-region and mask-based DCT (Discrete Cosine Transform). As the employed DSP is a fixed-point processor, the processes were performed with integers insofar as it was possible. To enable face recognition, the image was divided into sub-regions and from each sub-region the coefficients robust against disruptive elements, like facial expression, illumination, etc., were selected as the features. The discrimination of the selected features was enhanced via LDA (Linear Discriminant Analysis) and then employed for recognition. Thanks to its
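
    A desktop Python sketch (not fixed-point DSP code) of the two stages described above: Viola-Jones detection with OpenCV's stock Haar cascade, then DCT coefficients from face sub-regions as features. The file path, block size and number of kept coefficients are illustrative assumptions.

    ```python
    # Viola-Jones face detection (OpenCV Haar cascade) followed by block-DCT features.
    import cv2
    import numpy as np

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def detect_and_describe(image_path, block=8, coeffs_per_block=6):
        gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        features = []
        for (x, y, w, h) in faces:
            face = cv2.resize(gray[y:y + h, x:x + w], (64, 64)).astype(np.float32)
            vec = []
            for r in range(0, 64, block):          # 8x8 sub-regions of the face
                for c in range(0, 64, block):
                    dct = cv2.dct(face[r:r + block, c:c + block])
                    vec.extend(dct.flatten()[:coeffs_per_block])  # low-frequency terms
            features.append(np.array(vec))
        return faces, features
    ```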

  1. Individual differences in holistic processing predict face recognition ability.

    Science.gov (United States)

    Wang, Ruosi; Li, Jingguang; Fang, Huizhen; Tian, Moqian; Liu, Jia

    2012-02-01

    Why do some people recognize faces easily and others frequently make mistakes in recognizing faces? Classic behavioral work has shown that faces are processed in a distinctive holistic manner that is unlike the processing of objects. In the study reported here, we investigated whether individual differences in holistic face processing have a significant influence on face recognition. We found that the magnitude of face-specific recognition accuracy correlated with the extent to which participants processed faces holistically, as indexed by the composite-face effect and the whole-part effect. This association is due to face-specific processing in particular, not to a more general aspect of cognitive processing, such as general intelligence or global attention. This finding provides constraints on computational models of face recognition and may elucidate mechanisms underlying cognitive disorders, such as prosopagnosia and autism, that are associated with deficits in face recognition.

  2. Familiar Face Recognition in Children with Autism: The Differential Use of Inner and Outer Face Parts

    Science.gov (United States)

    Wilson, Rebecca; Pascalis, Olivier; Blades, Mark

    2007-01-01

    We investigated whether children with autistic spectrum disorders (ASD) have a deficit in recognising familiar faces. Children with ASD were given a forced choice familiar face recognition task with three conditions: full faces, inner face parts and outer face parts. Control groups were children with developmental delay (DD) and typically…

  3. Face Recognition in Uncontrolled Environment

    Directory of Open Access Journals (Sweden)

    Radhey Shyam

    2016-08-01

    Full Text Available This paper presents a novel method of facial image representation for face recognition in an uncontrolled environment. It is named augmented local binary patterns (A-LBP) and works on both uniform and non-uniform patterns. It replaces the central non-uniform pattern with the majority value of the neighbouring uniform patterns obtained after processing all neighbouring non-uniform patterns. These patterns are finally combined with the neighbouring uniform patterns in order to extract discriminatory information from the local descriptors. The experimental results indicate the effectiveness of the proposed method on face datasets in which the images are prone to extreme variations in illumination.
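
    A rough sketch of the A-LBP idea follows: non-uniform LBP codes are replaced by the majority of the uniform codes around them before a histogram descriptor is built. The neighbourhood size, the use of scikit-image's uniform LBP mapping and the histogram step are assumptions made for illustration, not the authors' exact formulation.

```python
# Sketch of the A-LBP idea: non-uniform LBP codes are replaced by the
# majority value of the uniform codes in their 3x3 neighbourhood.
import numpy as np
from skimage.feature import local_binary_pattern

def augmented_lbp(gray, P=8, R=1):
    codes = local_binary_pattern(gray, P, R, method='nri_uniform')
    n_uniform = P * (P - 1) + 2      # in the u2 mapping, the last label holds all non-uniform codes
    out = codes.copy()
    h, w = codes.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if codes[y, x] >= n_uniform:                 # non-uniform centre
                neigh = codes[y-1:y+2, x-1:x+2].ravel()
                uniform = neigh[neigh < n_uniform].astype(int)
                if uniform.size:
                    out[y, x] = np.bincount(uniform).argmax()   # majority vote
    hist, _ = np.histogram(out, bins=n_uniform + 1,
                           range=(0, n_uniform + 1), density=True)
    return hist                      # descriptor, to be compared with e.g. chi-square

gray = np.random.default_rng(1).integers(0, 256, (64, 64), dtype=np.uint8)
print(augmented_lbp(gray)[:10])
```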

  4. Recognition of Face and Emotional Facial Expressions in Autism

    Directory of Open Access Journals (Sweden)

    Muhammed Tayyib Kadak

    2013-03-01

    Full Text Available Autism is a genetically transmitted neurodevelopmental disorder characterized by severe and permanent deficits in many areas of interpersonal relations, such as communication, social interaction and emotional responsiveness. Patients with autism have deficits in face recognition, eye contact and the recognition of emotional expression. Both the recognition of faces and the recognition of emotional facial expressions rely on face processing. Structural and functional impairments in the fusiform gyrus, amygdala, superior temporal sulcus and other brain regions lead to deficits in the recognition of faces and facial emotion. Studies therefore suggest that face processing deficits result in problems in the areas of social interaction and emotion in autism. Studies revealed that children with autism had problems in recognizing facial expressions and used the mouth region more than the eye region. It was also shown that autistic patients interpreted ambiguous expressions as negative emotions. So far, deficits in various stages of face processing, such as gaze detection, face identity and recognition of emotional expression, have been identified in autism. Social interaction impairments in autistic spectrum disorders originate from face processing deficits during infancy, childhood and adolescence. Recognition of faces and of emotional facial expressions could be shaped either automatically, by orienting towards faces after birth, or by "learning" processes during developmental periods, such as identity and emotion processing. This article aims to review the neurobiological basis of face processing and of the recognition of emotional facial expressions during normal development and in autism.

  5. Age Dependent Face Recognition using Eigenface

    OpenAIRE

    Hlaing Htake Khaung Tin

    2013-01-01

    Face recognition is the most successful form of human surveillance. Face recognition technology, which is being used to improve human efficiency when recognizing faces, is one of the fastest growing fields in the biometric industry. In the first stage, age is classified into eleven categories that distinguish how old a person is. The second stage of the process is face recognition based on the predicted age. Age prediction has considerable potential applications in human comp...

  6. Comparison of face Recognition Algorithms on Dummy Faces

    Directory of Open Access Journals (Sweden)

    Aruni Singh

    2012-09-01

    Full Text Available In an age of rising crime, face recognition is enormously important in the contexts of computer vision, psychology, surveillance, fraud detection, pattern recognition, neural networks, content-based video processing, etc. The face is a non-intrusive, strong biometric for identification, and hence criminals always try to hide their facial features by artificial means such as plastic surgery, disguise and dummies. The availability of a comprehensive face database is crucial to testing the performance of face recognition algorithms. However, while existing publicly available face databases contain face images with a wide variety of poses, illumination, gestures and face occlusions, no dummy face database is available in the public domain. The contributions of this research paper are: (i) preparation of a dummy face database of 110 subjects; (ii) comparison of some texture-based, feature-based and holistic face recognition algorithms on that dummy face database; and (iii) critical analysis of these types of algorithms on the dummy face database.

  7. Covert face recognition relies on affective valence in congenital prosopagnosia.

    Science.gov (United States)

    Bate, Sarah; Haslam, Catherine; Jansari, Ashok; Hodgson, Timothy L

    2009-06-01

    Dominant accounts of covert recognition in prosopagnosia assume subthreshold activation of face representations created prior to onset of the disorder. Yet, such accounts cannot explain covert recognition in congenital prosopagnosia, where the impairment is present from birth. Alternatively, covert recognition may rely on affective valence, yet no study has explored this possibility. The current study addressed this issue in 3 individuals with congenital prosopagnosia, using measures of the scanpath to indicate recognition. Participants were asked to memorize 30 faces paired with descriptions of aggressive, nice, or neutral behaviours. In a later recognition test, eye movements were monitored while participants discriminated studied from novel faces. Sampling was reduced for studied-nice compared to studied-aggressive faces, and performance for studied-neutral and novel faces fell between these two conditions. This pattern of findings suggests that (a) positive emotion can facilitate processing in prosopagnosia, and (b) covert recognition may rely on emotional valence rather than familiarity.

  8. Face Recognition Based on Facial Features

    Directory of Open Access Journals (Sweden)

    Muhammad Sharif

    2012-08-01

    Full Text Available Over the last decade several different methods have been planned and developed for face recognition, one of the most stimulating areas of image processing. Face recognition processes have various applications in security systems and crime investigation systems. The study basically comprises three phases, i.e., face detection, facial feature extraction and face recognition. The first phase is the face detection process, where the region of interest, i.e., the feature region, is extracted. The second phase is feature extraction: face features, i.e., eyes, nose and lips, are extracted from the detected face area. The last module is the face recognition phase, which makes use of the extracted left eye for recognition by combining Eigenfeatures and Fisherfeatures.

  9. The improved relative entropy for face recognition

    Directory of Open Access Journals (Sweden)

    Zhang Qi Rong

    2016-01-01

    Full Text Available The relative entropy is least sensitive to noise. In this paper, we propose an improved relative entropy for face recognition (IRE). Experimental results on the CMU PIE and YALE B face databases show that the recognition rate of the IRE method is far higher than that of the LDA and LPP methods.
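
    The abstract does not specify how the relative entropy is "improved", so the sketch below only illustrates plain relative entropy (Kullback-Leibler divergence) between normalised grey-level histograms used as a face-matching score; the histogram size and smoothing constant are assumptions.

```python
# Plain relative entropy (Kullback-Leibler divergence) between normalised
# grey-level histograms, used here as a simple face-matching score.
# (The paper's "improved" variant is not specified in the abstract.)
import numpy as np

def grey_hist(img, bins=64, eps=1e-8):
    h, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = h.astype(float) + eps          # smoothing avoids log(0)
    return p / p.sum()

def relative_entropy(p, q):
    return float(np.sum(p * np.log(p / q)))   # D_KL(p || q)

rng = np.random.default_rng(2)
gallery = rng.integers(0, 256, (5, 64, 64))          # five enrolled faces
probe = gallery[3] + rng.normal(0, 2, (64, 64))      # noisy copy of face 3
scores = [relative_entropy(grey_hist(probe), grey_hist(g)) for g in gallery]
print("best match:", int(np.argmin(scores)))         # smallest divergence wins
```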

  10. Bayesian Face Recognition and Perceptual Narrowing in Face-Space

    Science.gov (United States)

    Balas, Benjamin

    2012-01-01

    During the first year of life, infants' face recognition abilities are subject to "perceptual narrowing", the end result of which is that observers lose the ability to distinguish previously discriminable faces (e.g. other-race faces) from one another. Perceptual narrowing has been reported for faces of different species and different races, in…

  11. Face Recognition in Real-world Images

    OpenAIRE

    Fontaine, Xavier; Achanta, Radhakrishna; Süsstrunk, Sabine

    2017-01-01

    Face recognition systems are designed to handle well-aligned images captured under controlled situations. However, real-world images present varying orientations, expressions, and illumination conditions. Traditional face recognition algorithms perform poorly on such images. In this paper we present a method for face recognition adapted to real-world conditions that can be trained using very few training examples and is computationally efficient. Our method consists of performing a novel align...

  12. Face Engagement during Infancy Predicts Later Face Recognition Ability in Younger Siblings of Children with Autism

    Science.gov (United States)

    de Klerk, Carina C. J. M.; Gliga, Teodora; Charman, Tony; Johnson, Mark H.

    2014-01-01

    Face recognition difficulties are frequently documented in children with autism spectrum disorders (ASD). It has been hypothesized that these difficulties result from a reduced interest in faces early in life, leading to decreased cortical specialization and atypical development of the neural circuitry for face processing. However, a recent study…

  13. Impaired processing of self-face recognition in anorexia nervosa.

    Science.gov (United States)

    Hirot, France; Lesage, Marine; Pedron, Lya; Meyer, Isabelle; Thomas, Pierre; Cottencin, Olivier; Guardia, Dewi

    2016-03-01

    Body image disturbances and massive weight loss are major clinical symptoms of anorexia nervosa (AN). The aim of the present study was to examine the influence of body changes and eating attitudes on self-face recognition ability in AN. Twenty-seven subjects suffering from AN and 27 control participants performed a self-face recognition task (SFRT). During the task, digital morphs between their own face and a gender-matched unfamiliar face were presented in a random sequence. Participants' self-face recognition failures, cognitive flexibility, body concern and eating habits were assessed with the Self-Face Recognition Questionnaire (SFRQ), Trail Making Test (TMT), Body Shape Questionnaire (BSQ) and Eating Disorder Inventory-2 (EDI-2), respectively. Subjects suffering from AN exhibited significantly greater difficulties than control participants in identifying their own face (p = 0.028). No significant difference was observed between the two groups for TMT (all p > 0.1, non-significant). Regarding predictors of self-face recognition skills, there was a negative correlation between SFRT and body mass index (p = 0.01) and a positive correlation between SFRQ and EDI-2 (p < 0.001) or BSQ (p < 0.001). Among factors involved, nutritional status and intensity of eating disorders could play a part in impaired self-face recognition.

  14. Traditional facial tattoos disrupt face recognition processes.

    Science.gov (United States)

    Buttle, Heather; East, Julie

    2010-01-01

    Factors that are important to successful face recognition, such as features, configuration, and pigmentation/reflectance, are all subject to change when a face has been engraved with ink markings. Here we show that the application of facial tattoos, in the form of spiral patterns (typically associated with the Maori tradition of a Moko), disrupts face recognition to a similar extent as face inversion, with recognition accuracy little better than chance performance (2AFC). These results indicate that facial tattoos can severely disrupt our ability to recognise a face that previously did not have the pattern.

  15. A Multi—View Face Recognition System

    Institute of Scientific and Technical Information of China (English)

    张永越; 彭振云; 等

    1997-01-01

    In many automatic face recognition systems, pose constraints are a key factor preventing practical application. In this paper a series of strategies is described to achieve a system that enables face recognition under varying pose. These approaches include multi-view face modeling, threshold-image-based face feature detection, affine-transformation-based face pose normalization and template-matching-based face identification. Combining all of these strategies, a pose-invariant face recognition system has been designed successfully. Using a 75 MHz Pentium PC, a database of 75 individuals with 15 images per person, and 225 test images with various poses, a very good recognition rate of 96.89% is obtained.

  16. The hierarchical brain network for face recognition.

    Science.gov (United States)

    Zhen, Zonglei; Fang, Huizhen; Liu, Jia

    2013-01-01

    Numerous functional magnetic resonance imaging (fMRI) studies have identified multiple cortical regions that are involved in face processing in the human brain. However, few studies have characterized the face-processing network as a functioning whole. In this study, we used fMRI to identify face-selective regions in the entire brain and then explore the hierarchical structure of the face-processing network by analyzing functional connectivity among these regions. We identified twenty-five regions mainly in the occipital, temporal and frontal cortex that showed a reliable response selective to faces (versus objects) across participants and across scan sessions. Furthermore, these regions were clustered into three relatively independent sub-networks in a face-recognition task on the basis of the strength of functional connectivity among them. The functionality of the sub-networks likely corresponds to the recognition of individual identity, retrieval of semantic knowledge and representation of emotional information. Interestingly, when the task was switched to object recognition from face recognition, the functional connectivity between the inferior occipital gyrus and the rest of the face-selective regions were significantly reduced, suggesting that this region may serve as an entry node in the face-processing network. In sum, our study provides empirical evidence for cognitive and neural models of face recognition and helps elucidate the neural mechanisms underlying face recognition at the network level.

  17. Face aftereffects predict individual differences in face recognition ability.

    Science.gov (United States)

    Dennett, Hugh W; McKone, Elinor; Edwards, Mark; Susilo, Tirta

    2012-01-01

    Face aftereffects are widely studied on the assumption that they provide a useful tool for investigating face-space coding of identity. However, a long-standing issue concerns the extent to which face aftereffects originate in face-level processes as opposed to earlier stages of visual processing. For example, some recent studies failed to find atypical face aftereffects in individuals with clinically poor face recognition. We show that in individuals within the normal range of face recognition abilities, there is an association between face memory ability and a figural face aftereffect that is argued to reflect the steepness of broadband-opponent neural response functions in underlying face-space. We further show that this correlation arises from face-level processing, by reporting results of tests of nonface memory and nonface aftereffects. We conclude that face aftereffects can tap high-level face-space, and that face-space coding differs in quality between individuals and contributes to face recognition ability.

  18. Fusing Facial Features for Face Recognition

    Directory of Open Access Journals (Sweden)

    Jamal Ahmad Dargham

    2012-06-01

    Full Text Available Face recognition is an important biometric method because of its potential applications in many fields, such as access control, surveillance, and human-computer interaction. In this paper, a face recognition system that fuses the outputs of three face recognition systems based on Gabor jets is presented. The first system uses the magnitude, the second uses the phase, and the third uses the phase-weighted magnitude of the jets. The jets are generated from facial landmarks selected using three selection methods. It was found that fusing the facial features gives a better recognition rate than any single feature used individually, regardless of the landmark selection method.
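
    A minimal sketch of Gabor jets with score-level fusion of magnitude and phase similarities is given below; the landmark positions, filter-bank parameters and the simple fusion weight are assumptions for illustration and do not reproduce the paper's three landmark-selection methods.

```python
# Sketch: Gabor jets at a few landmarks, with magnitude and phase similarities
# fused at score level (landmarks and filter parameters are illustrative).
import numpy as np
import cv2

def gabor_bank(ksize=21):
    """Pairs of even/odd Gabor kernels (4 orientations x 2 wavelengths)."""
    bank = []
    for theta in np.arange(0, np.pi, np.pi / 4):
        for lambd in (4.0, 8.0):
            even = cv2.getGaborKernel((ksize, ksize), 4.0, theta, lambd, 0.5, 0, ktype=cv2.CV_32F)
            odd = cv2.getGaborKernel((ksize, ksize), 4.0, theta, lambd, 0.5, np.pi / 2, ktype=cv2.CV_32F)
            bank.append((even, odd))
    return bank

def jet(img, point, bank):
    """Complex Gabor response (magnitude, phase) per filter at one landmark."""
    mags, phases = [], []
    for even, odd in bank:
        re = cv2.filter2D(img, cv2.CV_32F, even)[point]
        im = cv2.filter2D(img, cv2.CV_32F, odd)[point]
        mags.append(np.hypot(re, im))
        phases.append(np.arctan2(im, re))
    return np.array(mags), np.array(phases)

def similarity(img_a, img_b, landmarks, bank, w=0.5):
    mag_sims, phase_sims = [], []
    for p in landmarks:
        ma, pa = jet(img_a, p, bank)
        mb, pb = jet(img_b, p, bank)
        mag_sims.append(np.dot(ma, mb) / (np.linalg.norm(ma) * np.linalg.norm(mb) + 1e-8))
        phase_sims.append(np.mean(np.cos(pa - pb)))
    return w * np.mean(mag_sims) + (1 - w) * np.mean(phase_sims)   # score-level fusion

rng = np.random.default_rng(3)
a = rng.random((64, 64)).astype(np.float32)
b = (a + rng.normal(0, 0.05, a.shape)).astype(np.float32)
landmarks = [(20, 20), (20, 44), (40, 32)]   # assumed "eye" and "nose" positions
print("fused similarity:", round(float(similarity(a, b, landmarks, gabor_bank())), 3))
```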

  19. Discriminant Phase Component for Face Recognition

    Directory of Open Access Journals (Sweden)

    Naser Zaeri

    2012-01-01

    Full Text Available Numerous face recognition techniques have been developed owing to the growing number of real-world applications. Most current algorithms for face recognition involve a considerable amount of computation and hence cannot be used on devices constrained by limited speed and memory. In this paper, we propose a novel solution to the efficient face recognition problem for systems that have small memory capacities and demand fast performance. The new technique divides the face images into components and finds the discriminant phases of the Fourier transform of these components automatically using the sequential floating forward search method. A thorough study and comprehensive experiments relating time consumption to system performance are conducted on benchmark face image databases. Finally, the proposed technique is compared with other known methods and evaluated in terms of recognition rate and computational time, achieving a recognition rate of 98.5% with a computational time of 6.4 minutes for a database of 2360 images.

  20. Face recognition increases during saccade preparation.

    Directory of Open Access Journals (Sweden)

    Hai Lin

    Full Text Available Face perception is integral to the human perception system, as it underlies social interactions. Saccadic eye movements are frequently made to bring interesting visual information, such as faces, onto the fovea for detailed processing. Just before eye movement onset, the processing of some basic features, such as the orientation, of an object improves at the saccade landing point. Interestingly, there is also evidence that indicates faces are processed in early visual processing stages similarly to basic features. However, it is not known whether this early enhancement of processing includes face recognition. In this study, three experiments were performed to map the timing of face presentation to the beginning of the eye movement in order to evaluate pre-saccadic face recognition. Faces were found to be similarly processed as simple objects immediately prior to saccadic movements. Starting ∼ 120 ms before a saccade to a target face, independent of whether or not the face was surrounded by other faces, face recognition gradually improved and the critical spacing of crowding decreased as saccade onset approached. These results suggest that an upcoming saccade prepares the visual system for new information about faces at the saccade landing site and may reduce the background in a crowd to target the intended face. This indicates an important role of pre-saccadic eye movement signals in human face recognition.

  1. Face recognition increases during saccade preparation.

    Science.gov (United States)

    Lin, Hai; Rizak, Joshua D; Ma, Yuan-ye; Yang, Shang-chuan; Chen, Lin; Hu, Xin-tian

    2014-01-01

    Face perception is integral to the human perception system, as it underlies social interactions. Saccadic eye movements are frequently made to bring interesting visual information, such as faces, onto the fovea for detailed processing. Just before eye movement onset, the processing of some basic features, such as the orientation, of an object improves at the saccade landing point. Interestingly, there is also evidence that indicates faces are processed in early visual processing stages similarly to basic features. However, it is not known whether this early enhancement of processing includes face recognition. In this study, three experiments were performed to map the timing of face presentation to the beginning of the eye movement in order to evaluate pre-saccadic face recognition. Faces were found to be similarly processed as simple objects immediately prior to saccadic movements. Starting ∼ 120 ms before a saccade to a target face, independent of whether or not the face was surrounded by other faces, face recognition gradually improved and the critical spacing of crowding decreased as saccade onset approached. These results suggest that an upcoming saccade prepares the visual system for new information about faces at the saccade landing site and may reduce the background in a crowd to target the intended face. This indicates an important role of pre-saccadic eye movement signals in human face recognition.

  2. Real Time Implementation Of Face Recognition System

    Directory of Open Access Journals (Sweden)

    Megha Manchanda

    2014-10-01

    Full Text Available This paper proposes a face recognition method using PCA for real-time implementation. Nowadays security is gaining importance, as people must keep passwords in mind and carry cards. Such schemes, however, are becoming less secure and less practical, leading to an increasing interest in biometric techniques. Face recognition is among the most important subjects in biometric systems. It is very useful for security in particular and has been widely used and developed in many countries. This study aims to achieve face recognition by detecting a human face in real time, based on the Principal Component Analysis (PCA) algorithm.

  3. Face Recognition Using Kernel Discriminant Analysis

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Linear Discriminant Analysis (LDA) has demonstrated its success in face recognition. However, LDA has difficulty handling highly nonlinear problems, such as large changes in viewpoint and illumination in face recognition. To overcome these problems, we investigate Kernel Discriminant Analysis (KDA) for face recognition. This approach uses kernel functions to replace the dot products of the nonlinear mapping in the high-dimensional feature space, so that the nonlinear problem can be solved conveniently in the input space without explicit mapping. Two face databases are used to test the KDA approach. The results show that our approach outperforms the conventional PCA (Eigenface) and LDA (Fisherface) approaches.
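
    scikit-learn has no kernel discriminant analysis estimator, so the sketch below uses a common practical stand-in: an RBF Kernel PCA projection followed by LDA. This approximates, but is not identical to, the KDA formulation described above; the kernel choice, its parameters and the toy data are assumptions.

```python
# Approximating kernel discriminant analysis: an RBF Kernel PCA projection
# followed by LDA. A practical stand-in, not the exact KDA formulation.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(4)
X = rng.random((40, 32 * 32))                 # 40 flattened 32x32 face images
y = np.repeat(np.arange(8), 5)                # 8 identities, 5 images each

kda_like = make_pipeline(
    KernelPCA(n_components=20, kernel='rbf', gamma=1e-3),  # nonlinear mapping
    LinearDiscriminantAnalysis(),                          # linear separation in feature space
)
kda_like.fit(X, y)
print("training accuracy:", kda_like.score(X, y))
```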

  4. Robust video foreground segmentation and face recognition

    Institute of Scientific and Technical Information of China (English)

    GUAN Ye-peng

    2009-01-01

    Face recognition provides a natural visual interface for human computer interaction (HCI) applications. The process of face recognition, however, is inhibited by variations in the appearance of face images caused by changes in lighting, expression, viewpoint, aging and the introduction of occlusion. Although various algorithms have been presented for face recognition, it is still a very challenging topic. A novel approach to real-time face recognition for HCI is proposed in this paper. In view of the limits of popular approaches to foreground segmentation, wavelet multi-scale transform based background subtraction is developed to extract foreground objects. The optimal selection of the threshold is determined automatically, without requiring any complex supervised training or manual experimental calibration. A robust real-time face recognition algorithm is presented, which combines projection matrices without iteration and kernel Fisher discriminant analysis (KFDA) to overcome some difficulties in real face recognition. Superior performance of the proposed algorithm is demonstrated by comparison with other algorithms through experiments. The proposed algorithm can also be applied to the video image sequences of natural HCI.

  5. DWT BASED HMM FOR FACE RECOGNITION

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    A novel Discrete Wavelet Transform (DWT) based Hidden Markov Model (HMM) for face recognition is presented in this letter. To improve the accuracy of the HMM based face recognition algorithm, DWT is used to replace the Discrete Cosine Transform (DCT) for observation sequence extraction. Extensive experiments are conducted on two public databases and the results show that the proposed method can improve the accuracy significantly, especially when the face database is large and only a few training images are available.
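
    A hedged sketch of DWT-based observation sequences is shown below: each face is scanned in horizontal strips, the Haar approximation coefficients of each strip form one observation, and one Gaussian HMM per subject is trained with hmmlearn. The strip height, wavelet, number of states and toy data are assumptions, not the letter's exact configuration.

```python
# Sketch: DWT-based observation sequences for an HMM face recogniser.
# Each face is scanned top-to-bottom in overlapping horizontal strips; the
# Haar DWT approximation coefficients of each strip form one observation.
# One GaussianHMM per subject; a probe goes to the model with the highest
# log-likelihood. Strip height, wavelet and state count are assumptions.
import numpy as np
import pywt
from hmmlearn.hmm import GaussianHMM

def observations(img, strip=8):
    seq = []
    for y in range(0, img.shape[0] - strip + 1, strip // 2):
        cA, _ = pywt.dwt2(img[y:y + strip, :], 'haar')   # keep approximation band
        seq.append(cA.ravel())
    return np.array(seq)

rng = np.random.default_rng(5)
subjects = {s: [rng.random((32, 32)) + s for _ in range(3)] for s in range(3)}

models = {}
for s, faces in subjects.items():
    seqs = [observations(f) for f in faces]
    X = np.vstack(seqs)
    lengths = [len(q) for q in seqs]
    models[s] = GaussianHMM(n_components=4, covariance_type='diag', n_iter=20).fit(X, lengths)

probe = observations(subjects[1][0] + rng.normal(0, 0.01, (32, 32)))
scores = {s: m.score(probe) for s, m in models.items()}
print("recognised subject:", max(scores, key=scores.get))
```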

  6. Age Dependent Face Recognition using Eigenface

    Directory of Open Access Journals (Sweden)

    Hlaing Htake Khaung Tin

    2013-10-01

    Full Text Available Face recognition is the most successful form of human surveillance. Face recognition technology, which is being used to improve human efficiency when recognizing faces, is one of the fastest growing fields in the biometric industry. In the first stage, age is classified into eleven categories that distinguish how old a person is. The second stage of the process is face recognition based on the predicted age. Age prediction has considerable potential applications in human computer interaction and multimedia communication. This paper proposes an eigenface-based age estimation algorithm for estimating the age of an image from the database. Eigenfaces have proven to be a useful and robust cue for age prediction, age simulation, face recognition, localization and tracking. The scheme is based on an information theory approach that decomposes face images into a small set of characteristic feature images called eigenfaces, which may be thought of as the principal components of the initial training set of face images. The eigenface approach used in this scheme has advantages over other face recognition methods in its speed, simplicity, learning capability and robustness to small changes in the face image.
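
    A minimal eigenface sketch, assuming flattened grayscale images and nearest-neighbour matching in the PCA subspace, is shown below; it omits the age-classification stage described above and uses toy data purely for illustration.

```python
# Minimal eigenface sketch: PCA learns the "characteristic feature images",
# faces are projected into that space and matched by nearest neighbour.
# (The paper's separate age-classification stage is omitted here.)
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(6)
train = rng.random((30, 48 * 48))              # 30 flattened training faces
labels = np.repeat(np.arange(10), 3)           # 10 people, 3 images each

pca = PCA(n_components=15, whiten=True).fit(train)
gallery = pca.transform(train)                 # coordinates in eigenface space

probe = train[7] + rng.normal(0, 0.01, train[7].shape)
probe_coords = pca.transform(probe[None, :])
nearest = np.argmin(np.linalg.norm(gallery - probe_coords, axis=1))
print("predicted person:", labels[nearest])    # expected: labels[7]
```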

  7. Face recognition system and method using face pattern words and face pattern bytes

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, Yufeng

    2014-12-23

    The present invention provides a novel system and method for identifying individuals and for face recognition utilizing facial features for face identification. The system and method of the invention comprise creating facial features or face patterns called face pattern words and face pattern bytes for face identification. The invention also provides for pattern recognition for identification other than face recognition. The invention further provides a means for identifying individuals based on visible and/or thermal images of those individuals by utilizing computer software implemented by instructions on a computer or computer system and a computer readable medium containing instructions on a computer system for face recognition and identification.

  8. Extraversion predicts individual differences in face recognition.

    Science.gov (United States)

    Li, Jingguang; Tian, Moqian; Fang, Huizhen; Xu, Miao; Li, He; Liu, Jia

    2010-07-01

    In daily life, one of the most common social tasks we perform is to recognize faces. However, the relation between face recognition ability and social activities is largely unknown. Here we ask whether individuals with better social skills are also better at recognizing faces. We found that extraverts who have better social skills correctly recognized more faces than introverts. However, this advantage was absent when extraverts were asked to recognize non-social stimuli (e.g., flowers). In particular, the underlying facet that makes extraverts better face recognizers is the gregariousness facet that measures the degree of inter-personal interaction. In addition, the link between extraversion and face recognition ability was independent of general cognitive abilities. These findings provide the first evidence that links face recognition ability to our daily activity in social communication, supporting the hypothesis that extraverts are better at decoding social information than introverts.

  9. Contextual modulation of biases in face recognition.

    Directory of Open Access Journals (Sweden)

    Fatima Maria Felisberti

    Full Text Available BACKGROUND: The ability to recognize the faces of potential cooperators and cheaters is fundamental to social exchanges, given that cooperation for mutual benefit is expected. Studies addressing biases in face recognition have so far proved inconclusive, with reports of biases towards faces of cheaters, biases towards faces of cooperators, or no biases at all. This study attempts to uncover possible causes underlying such discrepancies. METHODOLOGY AND FINDINGS: Four experiments were designed to investigate biases in face recognition during social exchanges when behavioral descriptors (prosocial, antisocial or neutral) embedded in different scenarios were tagged to faces during memorization. Face recognition, measured as accuracy and response latency, was tested with modified yes-no, forced-choice and recall tasks (N = 174). An enhanced recognition of faces tagged with prosocial descriptors was observed when the encoding scenario involved financial transactions and the rules of the social contract were not explicit (experiments 1 and 2). Such bias was eliminated or attenuated by making participants explicitly aware of "cooperative", "cheating" and "neutral/indifferent" behaviors via a pre-test questionnaire and then adding such tags to behavioral descriptors (experiment 3). Further, in a social judgment scenario with descriptors of salient moral behaviors, recognition of antisocial and prosocial faces was similar, but significantly better than neutral faces (experiment 4). CONCLUSION: The results highlight the relevance of descriptors and scenarios of social exchange in face recognition, when the frequency of prosocial and antisocial individuals in a group is similar. Recognition biases towards prosocial faces emerged when descriptors did not state the rules of a social contract or the moral status of a behavior, and they point to the existence of broad and flexible cognitive abilities finely tuned to minor changes in social context.

  10. Face recognition performance of individuals with Asperger syndrome on the Cambridge Face Memory Test.

    Science.gov (United States)

    Hedley, Darren; Brewer, Neil; Young, Robyn

    2011-12-01

    Although face recognition deficits in individuals with Autism Spectrum Disorder (ASD), including Asperger syndrome (AS), are widely acknowledged, the empirical evidence is mixed. This in part reflects the failure to use standardized and psychometrically sound tests. We contrasted standardized face recognition scores on the Cambridge Face Memory Test (CFMT) for 34 individuals with AS with those for 42 IQ-matched non-ASD individuals, and age-standardized scores from a large Australian cohort. We also examined the influence of IQ, autistic traits, and negative affect on face recognition performance. Overall, participants with AS performed significantly worse on the CFMT than the non-ASD participants and when evaluated against standardized test norms. However, while 24% of participants with AS presented with severe face recognition impairment (>2 SDs below the mean), many individuals performed at or above the typical level for their age: 53% scored within +/- 1 SD of the mean and 9% demonstrated superior performance (>1 SD above the mean). Regression analysis provided no evidence that IQ, autistic traits, or negative affect significantly influenced face recognition: diagnostic group membership was the only significant predictor of face recognition performance. In sum, face recognition performance in ASD is on a continuum, but with average levels significantly below non-ASD levels of performance.

  11. Real-time, face recognition technology

    Energy Technology Data Exchange (ETDEWEB)

    Brady, S.

    1995-11-01

    The Institute for Scientific Computing Research (ISCR) at Lawrence Livermore National Laboratory recently developed the real-time, face recognition technology KEN. KEN uses novel imaging devices such as silicon retinas developed at Caltech or off-the-shelf CCD cameras to acquire images of a face and to compare them to a database of known faces in a robust fashion. The KEN-Online project makes that recognition technology accessible through the World Wide Web (WWW), an internet service that has recently seen explosive growth. A WWW client can submit face images, add them to the database of known faces and submit other pictures that the system tries to recognize. KEN-Online serves to evaluate the recognition technology and grow a large face database. KEN-Online includes the use of public domain tools such as mSQL for its name-database and perl scripts to assist the uploading of images.

  12. An Introduction to Face Recognition Technology

    Directory of Open Access Journals (Sweden)

    Shang-Hung Lin

    2000-01-01

    Full Text Available Recently face recognition has been attracting much attention in the network multimedia information access community. Areas such as network security, content indexing and retrieval, and video compression benefit from face recognition technology because "people" are the center of attention in a lot of video. Network access control via face recognition not only makes it virtually impossible for hackers to steal one's "password", but also increases the user-friendliness of human-computer interaction. Indexing and/or retrieving video data based on the appearances of particular persons will be useful for users such as news reporters, political scientists, and moviegoers. For videophone and teleconferencing applications, the assistance of face recognition also provides a more efficient coding scheme. In this paper, we give an introductory course on this new information processing technology. The paper shows the reader the generic framework for a face recognition system, and the variants that are frequently encountered by the face recognizer. Several famous face recognition algorithms, such as eigenfaces and neural networks, are also explained.

  13. Exemplar-based Face Recognition from Video

    DEFF Research Database (Denmark)

    Krüger, Volker; Zhou, Shaohua; Chellappa, Rama

    2005-01-01

    ...-temporal relations: This allows the system to use dynamics as well as to generate warnings when 'implausible' situations occur or to circumvent these altogether. We have studied the effectiveness of temporal integration for recognition purposes by using face recognition as an example problem. Face recognition...... is a prominent problem and has been studied more extensively than almost any other recognition problem. An observation is that face recognition works well in ideal conditions. If those conditions, however, are not met, then all present algorithms break down disgracefully. This problem appears to be general...... to all vision techniques that intend to extract visual information out of a low-SNR image. It is exactly a strength of cognitive systems that they are able to cope with non-ideal situations. In this chapter we will present a technique that allows us to integrate visual information over time and we......

  14. How fast is famous face recognition?

    Directory of Open Access Journals (Sweden)

    Gladys eBarragan-Jason

    2012-10-01

    Full Text Available The rapid recognition of familiar faces is crucial for social interactions. However, the actual speed with which recognition can be achieved remains largely unknown as most studies have been carried out without any speed constraints. Different paradigms have been used, leading to conflicting results, and although many authors suggest that face recognition is fast, the speed of face recognition has not been directly compared to fast visual tasks. In this study, we sought to overcome these limitations. Subjects performed three tasks: a familiarity categorization task (famous faces among unknown faces), a superordinate categorization task (human faces among animal ones) and a gender categorization task. All tasks were performed under speed constraints. The results show that, despite the use of speed constraints, subjects were slow when they had to categorize famous faces: minimum reaction time was 467 ms, which is 180 ms more than during superordinate categorization and 160 ms more than in the gender condition. Our results are compatible with a hierarchy of face processing from the superordinate level to the familiarity level. The processes taking place between detection and recognition need to be investigated in detail.

  15. [Face recognition in patients with schizophrenia].

    Science.gov (United States)

    Doi, Hirokazu; Shinohara, Kazuyuki

    2012-07-01

    It is well known that patients with schizophrenia show severe deficiencies in social communication skills. These deficiencies are believed to be partly derived from abnormalities in face recognition. However, the exact nature of these abnormalities exhibited by schizophrenic patients with respect to face recognition has yet to be clarified. In the present paper, we review the main findings on face recognition deficiencies in patients with schizophrenia, particularly focusing on abnormalities in the recognition of facial expression and gaze direction, which are the primary sources of information of others' mental states. The existing studies reveal that the abnormal recognition of facial expression and gaze direction in schizophrenic patients is attributable to impairments in both perceptual processing of visual stimuli, and cognitive-emotional responses to social information. Furthermore, schizophrenic patients show malfunctions in distributed neural regions, ranging from the fusiform gyrus recruited in the structural encoding of facial stimuli, to the amygdala which plays a primary role in the detection of the emotional significance of stimuli. These findings were obtained from research in patient groups with heterogeneous characteristics. Because previous studies have indicated that impairments in face recognition in schizophrenic patients might vary according to the types of symptoms, it is of primary importance to compare the nature of face recognition deficiencies and the impairments of underlying neural functions across sub-groups of patients.

  16. Face Recognition using Eigenfaces and Neural Networks

    Directory of Open Access Journals (Sweden)

    Mohamed Rizon

    2006-01-01

    Full Text Available In this study, we develop a computational model to identify the face of an unknown person by applying eigenfaces. Eigenfaces are applied to extract the basic features of human face images. The face images are then projected onto the eigenfaces to obtain unique feature vectors. These significant feature vectors can be used to identify an unknown face by using a backpropagation neural network that utilizes Euclidean distance for classification and recognition. The ORL database used for this investigation, consisting of 400 face images of 40 people, was used for learning. The eigenface computation, including an implementation of Jacobi's method for eigenvalues and eigenvectors, has been performed. The classification and recognition using the backpropagation neural network showed impressive positive results in classifying face images.

  17. A novel thermal face recognition approach using face pattern words

    Science.gov (United States)

    Zheng, Yufeng

    2010-04-01

    A reliable thermal face recognition system can enhance national security applications such as prevention of terrorism, surveillance, monitoring and tracking, especially at nighttime. The system can be applied at airports, customs or high-alert facilities (e.g., nuclear power plants) 24 hours a day. In this paper, we propose a novel face recognition approach utilizing thermal (long-wave infrared) face images that can automatically identify a subject at both daytime and nighttime. With a properly acquired thermal image (as a query image) in the monitoring zone, the following processes are employed: normalization and denoising, face detection, face alignment, face masking, Gabor wavelet transform, face pattern words (FPWs) creation, and face identification by similarity measure (Hamming distance). If eyeglasses are present on a subject's face, an eyeglasses mask is automatically extracted from the query face image and then applied to all compared FPWs (no further transforms are needed). A high identification rate (97.44% with Top-1 match) has been achieved on our preliminary face dataset (of 39 subjects) with the proposed approach, regardless of operating time and glasses-wearing condition.
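
    The abstract does not define the face pattern words precisely, so the following sketch simply thresholds Gabor responses into a bit string and compares two faces with an (optionally masked) Hamming distance; the filter settings, the threshold and the absence of the alignment and masking stages are assumptions.

```python
# Sketch: a binary "face pattern word" built by thresholding Gabor responses,
# compared with a masked Hamming distance (the exact FPW construction in the
# paper is not specified here; filter settings are assumptions).
import numpy as np
import cv2

def pattern_bits(img):
    bits = []
    for theta in np.arange(0, np.pi, np.pi / 4):
        k = cv2.getGaborKernel((15, 15), 3.0, theta, 6.0, 0.5, 0, ktype=cv2.CV_32F)
        resp = cv2.filter2D(img, cv2.CV_32F, k)
        bits.append((resp > resp.mean()).ravel())      # one bit per pixel per filter
    return np.concatenate(bits)

def hamming(a, b, mask=None):
    diff = a != b
    if mask is not None:                               # e.g. exclude an eyeglasses region
        diff = diff[mask]
    return diff.mean()

rng = np.random.default_rng(7)
enrolled = rng.random((64, 64)).astype(np.float32)
probe = (enrolled + rng.normal(0, 0.05, enrolled.shape)).astype(np.float32)
print("Hamming distance:",
      round(float(hamming(pattern_bits(probe), pattern_bits(enrolled))), 3))
```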

  18. WCTFR : WRAPPING CURVELET TRANSFORM BASED FACE RECOGNITION

    Directory of Open Access Journals (Sweden)

    Arunalatha J S

    2015-03-01

    Full Text Available The recognition of a person based on biological features is efficient compared with traditional knowledge-based recognition systems. In this paper we propose Wrapping Curvelet Transform based Face Recognition (WCTFR). The Wrapping Curvelet Transform (WCT) is applied to the face images of the database and to test images to derive coefficients. The obtained coefficient matrix is rearranged to form the WCT features of each image. The test image WCT features are compared with those of the database images using Euclidean Distance (ED) to compute the Equal Error Rate (EER) and True Success Rate (TSR). The proposed algorithm with WCT performs better than the Curvelet Transform algorithms used in [1], [10] and [11].

  19. Face Detection and Modeling for Recognition

    Science.gov (United States)

    2002-01-01

    Excerpt from the report's list of figures: facial components show the important role of hair and face outlines in human face recognition; caricatures of (a) Vincent Van Gogh, (b) Jim Carrey, (c) Arnold Schwarzenegger, (d) Einstein, (e) G. W. Bush, and (f) Bill Gates (images downloaded from [9], [10]).

  20. The Neuropsychology of Familiar Person Recognition from Face and Voice

    OpenAIRE

    2014-01-01

    Prosopagnosia has been considered for a long period of time as the most important and almost exclusive disorder in the recognition of familiar people. In recent years, however, this conviction has been undermined by the description of patients showing a concomitant defect in the recognition of familiar faces and voices as a consequence of lesions encroaching upon the right anterior temporal lobe (ATL). These new data have obliged researchers to reconsider on one hand the construct of ‘associa...

  1. Robust Face Recognition through Local Graph Matching

    Directory of Open Access Journals (Sweden)

    Ehsan Fazl-Ersi

    2007-09-01

    Full Text Available A novel face recognition method is proposed, in which face images are represented by a set of local labeled graphs, each containing information about the appearance and geometry of a 3-tuple of face feature points, extracted using the Local Feature Analysis (LFA) technique. Our method automatically learns a model set and builds a graph space for each individual. A two-stage method for optimal matching between the graphs extracted from a probe image and the trained model graphs is proposed. The recognition of each probe face image is performed by assigning it to the trained individual with the maximum number of references. Our approach achieves a perfect result on the ORL face set and an accuracy rate of 98.4% on the FERET face set, which shows the superiority of our method over all considered state-of-the-art methods.

  2. Face Recognition Based on Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Ali Javed

    2013-02-01

    Full Text Available The purpose of the proposed research work is to develop a computer system that can recognize a person by comparing the characteristics of the face to those of known individuals. The main focus is on frontal two-dimensional images taken in a controlled environment, i.e., with constant illumination and background. Other methods of identification and verification, such as iris or fingerprint scans, require high-quality and costly equipment, whereas face recognition requires only a normal camera providing a 2-D frontal image of the person to be used for recognition. The Principal Component Analysis technique has been used in the proposed face recognition system. The purpose is to compare the results of the technique under different conditions and to find the most efficient approach for developing a facial recognition system.

  3. RGB-D-T based Face Recognition

    DEFF Research Database (Denmark)

    Nikisins, Olegs; Nasrollahi, Kamal; Greitans, Modris

    2014-01-01

    Facial images are of critical importance in many real-world applications from gaming to surveillance. The current literature on facial image analysis, from face detection to face and facial expression recognition, is mainly based on either RGB, Depth (D), or both of these modalities. But......, such analyses have rarely included the Thermal (T) modality. This paper paves the way for performing such facial analyses using synchronized RGB-D-T facial images by introducing a database of 51 persons including facial images of different rotations, illuminations, and expressions. Furthermore, a face recognition...

  4. Direct Neighborhood Discriminant Analysis for Face Recognition

    Directory of Open Access Journals (Sweden)

    Miao Cheng

    2008-01-01

    Full Text Available Face recognition is a challenging problem in computer vision and pattern recognition. Recently, many local geometrical structure-based techniques have been presented to obtain low-dimensional representations of face images with enhanced discriminatory power. However, these methods suffer from the small sample size (SSS) problem or the high computational complexity of high-dimensional data. To overcome these problems, we propose a novel local manifold structure learning method for face recognition, named direct neighborhood discriminant analysis (DNDA), which separates nearby samples of different classes and preserves the local within-class geometry in two separate steps. In addition, the PCA preprocessing used to greatly reduce dimensionality is not needed in DNDA, avoiding loss of discriminative information. Experiments conducted on the ORL, Yale, and UMIST face databases show the effectiveness of the proposed method.

  5. Face-space: A unifying concept in face recognition research.

    Science.gov (United States)

    Valentine, Tim; Lewis, Michael B; Hills, Peter J

    2016-10-01

    The concept of a multidimensional psychological space, in which faces can be represented according to their perceived properties, is fundamental to the modern theorist in face processing. Yet the idea was not clearly expressed until 1991. The background that led to the development of face-space is explained, and its continuing influence on theories of face processing is discussed. Research that has explored the properties of the face-space and sought to understand caricature, including facial adaptation paradigms, is reviewed. Face-space as a theoretical framework for understanding the effect of ethnicity and the development of face recognition is evaluated. Finally, two applications of face-space in the forensic setting are discussed. From initially being presented as a model to explain distinctiveness, inversion, and the effect of ethnicity, face-space has become a central pillar in many aspects of face processing. It is currently being developed to help us understand adaptation effects with faces. While being in principle a simple concept, face-space has shaped, and continues to shape, our understanding of face perception.

  6. Robust multi-camera view face recognition

    CERN Document Server

    Kisku, Dakshina Ranjan; Gupta, Phalguni; Sing, Jamuna Kanta

    2010-01-01

    This paper presents multi-appearance fusion of Principal Component Analysis (PCA) and a generalization of Linear Discriminant Analysis (LDA) for a multi-camera-view offline face recognition (verification) system. The generalization of LDA has been extended to establish correlations between the face classes in the transformed representation; this is called the canonical covariate. The proposed system uses Gabor filter banks to characterize facial features by spatial frequency, spatial locality and orientation, in order to compensate for the variations in face instances that occur due to illumination, pose and facial expression changes. Convolution of the Gabor filter bank with face images produces Gabor face representations with high-dimensional feature vectors. PCA and canonical covariates are then applied to the Gabor face representations to reduce the high-dimensional feature spaces into low-dimensional Gabor eigenfaces and Gabor canonical faces. The reduced eigenface vector and canonical face vector are fused together usi...

  7. Face Recognition With Neural Networks

    Science.gov (United States)

    1992-12-01

    Excerpt from the report's references and text: Ninth Annual Cognitive Science Society Conference, 461-473 (1987); Damasio, Antonio R., "Prosopagnosia," Trends in Neuroscience, 8:132. ...is also supported by the work of J. C. Meadows and A. R. Damasio in their studies of individuals who have lost the ability to recognize faces.

  8. Self-face recognition in social context.

    Science.gov (United States)

    Sugiura, Motoaki; Sassa, Yuko; Jeong, Hyeonjeong; Wakusawa, Keisuke; Horie, Kaoru; Sato, Shigeru; Kawashima, Ryuta

    2012-06-01

    The concept of "social self" is often described as a representation of the self-reflected in the eyes or minds of others. Although the appearance of one's own face has substantial social significance for humans, neuroimaging studies have failed to link self-face recognition and the likely neural substrate of the social self, the medial prefrontal cortex (MPFC). We assumed that the social self is recruited during self-face recognition under a rich social context where multiple other faces are available for comparison of social values. Using functional magnetic resonance imaging (fMRI), we examined the modulation of neural responses to the faces of the self and of a close friend in a social context. We identified an enhanced response in the ventral MPFC and right occipitoparietal sulcus in the social context specifically for the self-face. Neural response in the right lateral parietal and inferior temporal cortices, previously claimed as self-face-specific, was unaffected for the self-face but unexpectedly enhanced for the friend's face in the social context. Self-face-specific activation in the pars triangularis of the inferior frontal gyrus, and self-face-specific reduction of activation in the left middle temporal gyrus and the right supramarginal gyrus, replicating a previous finding, were not subject to such modulation. Our results thus demonstrated the recruitment of a social self during self-face recognition in the social context. At least three brain networks for self-face-specific activation may be dissociated by different patterns of response-modulation in the social context, suggesting multiple dynamic self-other representations in the human brain.

  9. AN ILLUMINATION INVARIANT TEXTURE BASED FACE RECOGNITION

    Directory of Open Access Journals (Sweden)

    K. Meena

    2013-11-01

    Full Text Available Automatic face recognition remains an interesting but challenging open problem in computer vision. Poor illumination is considered one of the major issues, since illumination changes cause large variations in facial features. To resolve this, illumination normalization preprocessing techniques are employed in this paper to enhance the face recognition rate. Methods such as Histogram Equalization (HE), Gamma Intensity Correction (GIC), the normalization chain and Modified Homomorphic Filtering (MHF) are used for preprocessing. Owing to their great success, texture features are commonly used for face recognition, but these features are severely affected by lighting changes. Hence texture-based models Local Binary Pattern (LBP), Local Derivative Pattern (LDP), Local Texture Pattern (LTP) and Local Tetra Patterns (LTrPs) are evaluated under different lighting conditions. In this paper, an illumination-invariant face recognition technique is developed based on the fusion of illumination preprocessing with local texture descriptors. The performance has been evaluated using the YALE B and CMU-PIE databases containing more than 1500 images. The results demonstrate that MHF-based normalization gives a significant improvement in recognition rate for face images with large illumination variations.
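
    A hedged sketch of the illumination-normalisation chain (histogram equalisation, gamma intensity correction, and a simple homomorphic-style correction) applied before texture descriptors is given below; the gamma value and blur size are illustrative assumptions rather than the paper's settings.

```python
# Sketch of illumination normalisation applied before texture features:
# histogram equalisation (HE), gamma intensity correction (GIC), and a simple
# homomorphic-style correction (log image minus its low-pass component).
import numpy as np
import cv2

def normalise(gray, gamma=0.6, blur=31):
    he = cv2.equalizeHist(gray)                                    # HE
    gic = np.power(he / 255.0, gamma)                              # GIC
    log_img = np.log1p(gic)
    illum = cv2.GaussianBlur(log_img.astype(np.float32), (blur, blur), 0)
    homo = log_img - illum                                         # homomorphic-style detail layer
    homo = (homo - homo.min()) / (homo.max() - homo.min() + 1e-8)
    return (homo * 255).astype(np.uint8)

gray = np.random.default_rng(8).integers(0, 256, (64, 64), dtype=np.uint8)
out = normalise(gray)
print(out.shape, out.dtype)    # normalised image ready for LBP/LDP-style descriptors
```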

  10. Image Pixel Fusion for Human Face Recognition

    CERN Document Server

    Bhowmik, Mrinal Kanti; Nasipuri, Mita; Basu, Dipak Kumar; Kundu, Mahantapas

    2010-01-01

    In this paper we present a technique for fusion of optical and thermal face images based on image pixel fusion approach. Out of several factors, which affect face recognition performance in case of visual images, illumination changes are a significant factor that needs to be addressed. Thermal images are better in handling illumination conditions but not very consistent in capturing texture details of the faces. Other factors like sunglasses, beard, moustache etc also play active role in adding complicacies to the recognition process. Fusion of thermal and visual images is a solution to overcome the drawbacks present in the individual thermal and visual face images. Here fused images are projected into an eigenspace and the projected images are classified using a radial basis function (RBF) neural network and also by a multi-layer perceptron (MLP). In the experiments Object Tracking and Classification Beyond Visible Spectrum (OTCBVS) database benchmark for thermal and visual face images have been used. Compar...

  11. 3D face modeling, analysis and recognition

    CERN Document Server

    Daoudi, Mohamed; Veltkamp, Remco

    2013-01-01

    3D Face Modeling, Analysis and Recognition presents methodologies for analyzing shapes of facial surfaces, develops computational tools for analyzing 3D face data, and illustrates them using state-of-the-art applications. The methodologies chosen are based on efficient representations, metrics, comparisons, and classifications of features that are especially relevant in the context of 3D measurements of human faces. These frameworks have a long-term utility in face analysis, taking into account the anticipated improvements in data collection, data storage, processing speeds, and application s

  12. FaceID: A face detection and recognition system

    Energy Technology Data Exchange (ETDEWEB)

    Shah, M.B.; Rao, N.S.V.; Olman, V.; Uberbacher, E.C.; Mann, R.C.

    1996-12-31

    A face detection system that automatically locates faces in gray-level images is described. Also described is a system which matches a given face image with faces in a database. Face detection in an image is performed by template matching using templates derived from a selected set of normalized faces. Instead of using original gray-level images, vertical gradient images were calculated and used to make the system more robust against variations in lighting conditions and skin color. Faces of different sizes are detected by processing the image at several scales. Further, a coarse-to-fine strategy is used to speed up the processing, and a combination of whole-face and face-component templates is used to ensure low false detection rates. The input to the face recognition system is a normalized vertical gradient image of a face, which is compared against a database using a set of pretrained feedforward neural networks with a winner-take-all fuser. The training is performed by using an adaptation of the backpropagation algorithm. This system has been developed and tested using images from the FERET database and a set of images obtained from Rowley et al. and from Sung and Poggio.
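
    A small sketch of the detection idea, template matching on vertical-gradient images, is given below; the synthetic scene, the template cut from it and the acceptance threshold are purely illustrative assumptions, not the FaceID templates.

```python
# Sketch: template matching on vertical-gradient images, as in the detection
# stage described above. The template here is cut from the same synthetic
# image purely for illustration; the threshold is an assumption.
import numpy as np
import cv2

rng = np.random.default_rng(10)
scene = rng.integers(0, 256, (120, 160)).astype(np.uint8)
template_patch = scene[40:72, 60:92]                 # stand-in "face" template

def vertical_gradient(img):
    g = cv2.Sobel(img, cv2.CV_32F, dx=0, dy=1, ksize=3)   # vertical gradient
    return cv2.normalize(g, None, 0, 1, cv2.NORM_MINMAX)

score_map = cv2.matchTemplate(vertical_gradient(scene),
                              vertical_gradient(template_patch),
                              cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(score_map)
if max_val > 0.7:                                    # assumed acceptance threshold
    print("face-like region at", max_loc, "score", round(float(max_val), 2))
```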

  13. Parallel Architecture for Face Recognition using MPI

    Directory of Open Access Journals (Sweden)

    Dalia Shouman Ibrahim

    2017-01-01

    Face recognition applications are widely used in different fields such as security and computer vision. The recognition process should be done in real time to allow fast decisions. Principal Component Analysis (PCA) is considered a feature extraction technique and is widely used in facial recognition applications, projecting images into a new face space. PCA can reduce the dimensionality of the image. However, PCA consumes a lot of processing time due to its computationally intensive nature. Hence, this paper proposes two different parallel architectures to accelerate the training and testing phases of the PCA algorithm by exploiting the benefits of a distributed memory architecture. The experimental results show that the proposed architectures achieve linear speed-up and system scalability on different data sizes from the Facial Recognition Technology (FERET) database.
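
    The following is a rough sketch, not the paper's code, of how the PCA projection of probe images might be distributed over MPI ranks with mpi4py: the root rank builds the eigenface basis, broadcasts it, and scatters the probe set. The file names and number of components are assumptions for illustration.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:
    train = np.load("train_faces.npy")           # hypothetical (n_train, d) matrix of flattened faces
    probes = np.load("probe_faces.npy")          # hypothetical (n_probe, d) matrix
    mean = train.mean(axis=0)
    # Eigenfaces from the SVD of the centered training set (top 50 components assumed)
    _, _, vt = np.linalg.svd(train - mean, full_matrices=False)
    basis = vt[:50]
    gallery = (train - mean) @ basis.T           # projected gallery
    chunks = np.array_split(probes, size)
else:
    mean = basis = gallery = chunks = None

mean = comm.bcast(mean, root=0)
basis = comm.bcast(basis, root=0)
gallery = comm.bcast(gallery, root=0)
my_probes = comm.scatter(chunks, root=0)

# Each rank projects its share of probes and finds the nearest gallery entry independently.
proj = (my_probes - mean) @ basis.T
local_ids = [int(np.argmin(np.linalg.norm(gallery - p, axis=1))) for p in proj]
all_ids = comm.gather(local_ids, root=0)         # root collects the matches
```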

  14. Enhancing face recognition by image warping

    OpenAIRE

    García Bueno, Jorge

    2009-01-01

    This project has been developed as an improvement which could be added to current computer vision algorithms. It is based on the original idea proposed and published by Rob Jenkins and Mike Burton about the power of face averages in artificial recognition. The present project aims to create a new automated procedure applied for face recognition working with average images. Up to now, this algorithm has been used manually. With this study, the averaging and warping process will be done b...

  15. Influence of motion on face recognition.

    Science.gov (United States)

    Bonfiglio, Natale S; Manfredi, Valentina; Pessa, Eliano

    2012-02-01

    The influence of motion information and temporal associations on the recognition of non-familiar faces was investigated using two groups which performed a face recognition task. One group was presented with regular temporal sequences of face views designed to produce the impression of motion of the face rotating in depth, the other group with random sequences of the same views. In one condition, participants viewed the sequences of views in rapid succession with a negligible interstimulus interval (ISI). This condition was characterized by three different presentation times. In another condition, participants were presented with a sequence with a 1-sec ISI between the views. Regular sequences of views with a negligible ISI and a shorter presentation time were hypothesized to give rise to better recognition, related to a stronger impression of face rotation. Analysis of data from 45 participants showed a shorter presentation time was associated with significantly better accuracy on the recognition task; however, differences between performances associated with regular and random sequences were not significant.

  16. Probabilistic recognition of human faces from video

    DEFF Research Database (Denmark)

    Zhou, Saohua; Krüger, Volker; Chellappa, Rama

    2003-01-01

    Recognition of human faces using a gallery of still or video images and a probe set of videos is systematically investigated using a probabilistic framework. In still-to-video recognition, where the gallery consists of still images, a time series state space model is proposed to fuse temporal ... of the identity variable produces the recognition result. The model formulation is very general and it allows a variety of image representations and transformations. Experimental results using videos collected by NIST/USF and CMU illustrate the effectiveness of this approach for both still-to-video and video ... demonstrate that, due to the propagation of the identity variable over time, a degeneracy in the posterior probability of the identity variable is achieved to give improved recognition. The gallery is generalized to videos in order to realize video-to-video recognition. An exemplar-based learning strategy ...

  17. The Neuropsychology of Familiar Person Recognition from Face and Voice

    Directory of Open Access Journals (Sweden)

    Guido Gainotti

    2014-05-01

    Prosopagnosia has long been considered the most important and almost exclusive disorder in the recognition of familiar people. In recent years, however, this conviction has been undermined by the description of patients showing a concomitant defect in the recognition of familiar faces and voices as a consequence of lesions encroaching upon the right anterior temporal lobe (ATL). These new data have obliged researchers to reconsider, on one hand, the construct of 'associative prosopagnosia' and, on the other hand, current models of people recognition. A systematic review of the patterns of familiar people recognition disorders observed in patients with right and left ATL lesions has shown that in patients with right ATL lesions face familiarity feelings and the retrieval of person-specific semantic information from faces are selectively affected, whereas in patients with left ATL lesions the defect selectively concerns the naming of famous people. Furthermore, some patients with right ATL lesions and intact face familiarity feelings show a defect in the retrieval of person-specific semantic knowledge that is greater from faces than from names. These data are at variance with current models assuming: (a) that familiarity feelings are generated at the level of person identity nodes (PINs), where information processed by various sensory modalities converges, and (b) that PINs provide a modality-free gateway to a single semantic system, where information about people is stored in an amodal format. They suggest, on the contrary: (a) that familiarity feelings are generated at the level of modality-specific recognition units; (b) that face and voice recognition units are represented more in the right than in the left ATL; and (c) that the right ATL mainly stores person-specific information based on a convergence of perceptual information, whereas the left ATL represents verbally mediated person-specific information.

  18. Quest Hierarchy for Hyperspectral Face Recognition

    Science.gov (United States)

    2011-03-01

  19. Metacognition of emotional face recognition.

    Science.gov (United States)

    Kelly, Karen J; Metcalfe, Janet

    2011-08-01

    While humans are adept at recognizing emotional states conveyed by facial expressions, the current literature suggests that they lack accurate metacognitions about their performance in this domain. This finding comes from global trait-based questionnaires that assess the extent to which an individual perceives him or herself as empathic, as compared to other people. Those who rate themselves as empathically accurate are no better than others at recognizing emotions. Metacognition of emotion recognition can also be assessed using relative measures that evaluate how well a person thinks s/he has understood the emotion in a particular facial display as compared to other displays. While this is the most common method of metacognitive assessment of people's judgments of learning or their feelings of knowing, this kind of metacognition ("relative meta-accuracy") has not been studied within the domain of emotion. As well as asking for global metacognitive judgments, we asked people to provide relative, trial-by-trial prospective and retrospective judgments concerning whether they would be right or wrong in recognizing the expressions conveyed in particular facial displays. Our question was: Do people know when they will be correct in knowing what expression is conveyed, and do they know when they do not know? Although we, like others, found that global meta-accuracy was unpredictive of performance, relative meta-accuracy, given by the correlation between participants' trial-by-trial metacognitive judgments and performance on each item, was highly accurate both on the Mind in the Eyes task (Experiment 1) and on the Ekman Emotional Expression Multimorph task (Experiment 2).

  20. Face Detection and Face Recognition in Android Mobile Applications

    Directory of Open Access Journals (Sweden)

    Octavian DOSPINESCU

    2016-01-01

    The quality of the smartphone's camera enables us to capture high-quality pictures at a high resolution, so we can perform different types of recognition on these images. Face detection is one of these types of recognition that is very common in our society. We use it every day on Facebook to tag friends in our pictures. It is also used in video games together with the Kinect, or in security to allow access to private places only to authorized persons. These are just some examples of the uses of facial recognition, because in modern society, detection and facial recognition tend to surround us everywhere. The aim of this article is to create an application for smartphones that can recognize human faces. The main goal of this application is to grant access to certain areas or rooms only to certain authorized persons. For example, we can speak here of hospitals or educational institutions where there are rooms that only certain employees can enter. Of course, this type of application can cover a wide range of uses, such as helping people suffering from Alzheimer's to recognize the people they loved, helping persons who cannot remember the names of their relatives, or automatically capturing the faces of our own children when they smile.

  1. Face Recognition Using Local and Global Features

    Directory of Open Access Journals (Sweden)

    Jian Huang

    2004-04-01

    The combining classifier approach has proved to be a proper way of improving recognition performance over the last two decades. This paper proposes to combine local and global facial features for face recognition. In particular, this paper addresses three issues in combining classifiers, namely, the normalization of the classifier output, the selection of classifier(s) for recognition, and the weighting of each classifier. For the first issue, as the scales of each classifier's output are different, this paper proposes two methods, namely, a linear-exponential normalization method and a distribution-weighted Gaussian normalization method, for normalizing the outputs. Second, although combining different classifiers can improve the performance, we found that some classifiers are redundant and may even degrade the recognition performance. Along this direction, we develop a simple but effective algorithm for classifier selection. Finally, the existing methods assume that each classifier is equally weighted. This paper suggests a weighted combination of classifiers based on Kittler's combining classifier framework. Four popular face recognition methods, namely, eigenface, spectroface, independent component analysis (ICA), and Gabor jet, are selected for combination, and three popular face databases, namely, the Yale database, the Olivetti Research Laboratory (ORL) database, and the FERET database, are selected for evaluation. The experimental results show that the proposed method gives a 5–7% accuracy improvement.
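
    As a hedged illustration of score-level combination, the sketch below normalizes each classifier's output scores to a common scale and fuses them with per-classifier weights. The z-score normalization and the example weights are stand-ins, not the paper's linear-exponential or distribution-weighted Gaussian normalizations.

```python
import numpy as np

def zscore_normalize(scores):
    """Map raw matching scores to a common scale (one row per probe, one column per class)."""
    return (scores - scores.mean(axis=1, keepdims=True)) / (scores.std(axis=1, keepdims=True) + 1e-9)

def weighted_fusion(score_list, weights):
    """Weighted sum of normalized score matrices; the predicted class is the argmax."""
    fused = sum(w * zscore_normalize(s) for w, s in zip(weights, score_list))
    return fused.argmax(axis=1)

# Hypothetical outputs of two classifiers (e.g., eigenface and Gabor jet) for 3 probes and 4 classes
eigenface_scores = np.random.rand(3, 4)
gabor_scores = np.random.rand(3, 4)
predictions = weighted_fusion([eigenface_scores, gabor_scores], weights=[0.4, 0.6])
```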

  2. Incremental Supervised Subspace Learning for Face Recognition

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Subspace learning algorithms have been well studied in face recognition. Among them, linear discriminant analysis (LDA) is one of the most widely used supervised subspace learning methods. Due to the difficulty of designing an incremental solution for the eigen decomposition on the product of matrices, there is little work on computing LDA incrementally. To overcome this limitation, an incremental supervised subspace learning (ISSL) algorithm was proposed, which incrementally learns an adaptive subspace by optimizing the maximum margin criterion (MMC). With dynamically added face images, ISSL can effectively constrain the computational cost. The feasibility of the new algorithm has been successfully tested on different face data sets.

  3. Wavelet-based multispectral face recognition

    Institute of Scientific and Technical Information of China (English)

    LIU Dian-ting; ZHOU Xiao-dan; WANG Cheng-wen

    2008-01-01

    This paper proposes a novel wavelet-based face recognition method using thermal infrared (IR) and visible-light face images. The method applies a combination of Gabor features and the Fisherfaces method to the reconstructed IR and visible images derived from wavelet frequency subbands. Our objective is to search for the subbands that are insensitive to variations in expression and in illumination. The classification performance is improved by combining the multispectral information coming from the subbands that individually attain a low equal error rate. Experimental results on the Notre Dame face database show that the proposed wavelet-based algorithm outperforms a previous multispectral image fusion method as well as monospectral methods.
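
    A small, hedged sketch of the subband idea using PyWavelets: each image is decomposed with a single-level 2D DWT and reconstructed after discarding a subband, which is one way to probe which subbands are insensitive to a nuisance factor. The wavelet choice and the decision to zero the diagonal detail band are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
import pywt

def reconstruct_without_subband(image, wavelet="haar", keep_diagonal=False):
    """Single-level 2D DWT, optionally dropping the diagonal detail band, then inverse DWT."""
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(np.float32), wavelet)
    if not keep_diagonal:
        cD = np.zeros_like(cD)        # drop a subband judged sensitive to the nuisance factor
    return pywt.idwt2((cA, (cH, cV, cD)), wavelet)

# visible / thermal would be same-sized grayscale arrays (assumed inputs); the paper's
# Gabor + Fisherfaces features would then be computed on the reconstructed images.
# vis_rec = reconstruct_without_subband(visible)
# ir_rec = reconstruct_without_subband(thermal)
```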

  4. Serotonergic modulation of face-emotion recognition.

    Science.gov (United States)

    Del-Ben, C M; Ferreira, C A Q; Alves-Neto, W C; Graeff, F G

    2008-04-01

    Facial expressions of basic emotions have been widely used to investigate the neural substrates of emotion processing, but little is known about the exact meaning of subjective changes provoked by perceiving facial expressions. Our assumption was that fearful faces would be related to the processing of potential threats, whereas angry faces would be related to the processing of proximal threats. Experimental studies have suggested that serotonin modulates the brain processes underlying defensive responses to environmental threats, facilitating risk assessment behavior elicited by potential threats and inhibiting fight or flight responses to proximal threats. In order to test these predictions about the relationship between fearful and angry faces and defensive behaviors, we carried out a review of the literature about the effects of pharmacological probes that affect 5-HT-mediated neurotransmission on the perception of emotional faces. The hypothesis that angry faces would be processed as a proximal threat and that, as a consequence, their recognition would be impaired by an increase in 5-HT function was not supported by the results reviewed. In contrast, most of the studies that evaluated the behavioral effects of serotonin challenges showed that increased 5-HT neurotransmission facilitates the recognition of fearful faces, whereas its decrease impairs the same performance. These results agree with the hypothesis that fearful faces are processed as potential threats and that 5-HT enhances this brain processing.

  5. Serotonergic modulation of face-emotion recognition

    Directory of Open Access Journals (Sweden)

    C.M. Del-Ben

    2008-04-01

    Facial expressions of basic emotions have been widely used to investigate the neural substrates of emotion processing, but little is known about the exact meaning of subjective changes provoked by perceiving facial expressions. Our assumption was that fearful faces would be related to the processing of potential threats, whereas angry faces would be related to the processing of proximal threats. Experimental studies have suggested that serotonin modulates the brain processes underlying defensive responses to environmental threats, facilitating risk assessment behavior elicited by potential threats and inhibiting fight or flight responses to proximal threats. In order to test these predictions about the relationship between fearful and angry faces and defensive behaviors, we carried out a review of the literature about the effects of pharmacological probes that affect 5-HT-mediated neurotransmission on the perception of emotional faces. The hypothesis that angry faces would be processed as a proximal threat and that, as a consequence, their recognition would be impaired by an increase in 5-HT function was not supported by the results reviewed. In contrast, most of the studies that evaluated the behavioral effects of serotonin challenges showed that increased 5-HT neurotransmission facilitates the recognition of fearful faces, whereas its decrease impairs the same performance. These results agree with the hypothesis that fearful faces are processed as potential threats and that 5-HT enhances this brain processing.

  6. A connectionist computational method for face recognition

    Directory of Open Access Journals (Sweden)

    Pujol Francisco A.

    2016-06-01

    In this work, a modified version of the elastic bunch graph matching (EBGM) algorithm for face recognition is introduced. First, faces are detected by using a fuzzy skin detector based on the RGB color space. Then, the fiducial points for the facial graph are extracted automatically by adjusting a grid of points to the result of an edge detector. After that, the position of the nodes, their relation with their neighbors and their Gabor jets are calculated in order to obtain the feature vector defining each face. A self-organizing map (SOM) framework is presented afterwards. Thus, the calculation of the winning neuron and the recognition process are performed by using a similarity function that takes into account both the geometric and texture information of the facial graph. The set of experiments carried out for our SOM-EBGM method shows the accuracy of our proposal when compared with other state-of-the-art methods.
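
    The sketch below uses a crisp RGB rule of thumb as a stand-in for the fuzzy skin detector mentioned above; it only illustrates the skin-based face localization step that precedes fiducial-point extraction, and the thresholds are common heuristic values, not the paper's.

```python
import numpy as np

def skin_mask(rgb):
    """rgb: (H, W, 3) uint8 array. Returns a boolean mask of likely skin pixels."""
    r = rgb[..., 0].astype(np.int32)
    g = rgb[..., 1].astype(np.int32)
    b = rgb[..., 2].astype(np.int32)
    spread = rgb.max(axis=2).astype(np.int32) - rgb.min(axis=2).astype(np.int32)
    return (
        (r > 95) & (g > 40) & (b > 20) &          # bright enough in each channel
        (spread > 15) & (np.abs(r - g) > 15) &    # enough color spread to exclude gray regions
        (r > g) & (r > b)                         # skin tends to be red-dominant
    )
```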

  7. Face recognition with L1-norm subspaces

    Science.gov (United States)

    Maritato, Federica; Liu, Ying; Colonnese, Stefania; Pados, Dimitris A.

    2016-05-01

    We consider the problem of representing individual faces by maximum L1-norm projection subspaces calculated from available face-image ensembles. In contrast to conventional L2-norm subspaces, L1-norm subspaces are seen to offer significant robustness to image variations, disturbances, and rank selection. Face recognition then becomes the problem of associating a new unknown face image with the "closest," in some sense, L1 subspace in the database. In this work, we also introduce the concept of adaptively allocating the available number of principal components to different face image classes, subject to a given total number/budget of principal components. Experimental studies included in this paper illustrate and support the theoretical developments.

  8. FACE RECOGNITION USING TWO DIMENSIONAL LAPLACIAN EIGENMAP

    Institute of Scientific and Technical Information of China (English)

    Chen Jiangfeng; Yuan Baozong; Pei Bingnan

    2008-01-01

    Recently, some research efforts have shown that face images possibly reside on a nonlinear sub-manifold. Although the Laplacianfaces method considers the manifold structure of face images, it has limitations in solving the face recognition problem. This paper proposes a new feature extraction method, Two-Dimensional Laplacian EigenMap (2DLEM), which especially considers the manifold structure of face images and extracts the proper features directly from the face image matrix by using a linear transformation. As opposed to Laplacianfaces, 2DLEM extracts features directly from 2D images without a vectorization preprocessing step. To test 2DLEM and evaluate its performance, a series of experiments are performed on the ORL database and the Yale database. Moreover, several experiments are performed to compare the performance of three 2D methods. The experiments show that 2DLEM achieves the best performance.

  9. Adaptive Face Recognition via Structured Representation

    Institute of Scientific and Technical Information of China (English)

    ZHANG Yu-hua; ZENG Xiao-ming

    2014-01-01

    In this paper, we propose a face recognition approach, Structured Sparse Representation-based classification, for the case where the measurement of the test sample is less than the number of training samples of each subject. When this condition is not satisfied, we exploit the Nearest Subspace approach to classify the test sample. In order to cover all cases, we combine the two approaches into an adaptive classification method, the Adaptive approach. The Adaptive approach yields greater recognition accuracy than the SRC and CRC_RLS approaches at low sample rates on the Extended Yale B dataset, and it is more efficient than the other two approaches.

  10. Face Recognition using Optimal Representation Ensemble

    CERN Document Server

    Li, Hanxi; Gao, Yongsheng

    2011-01-01

    Recently, the face recognizers based on linear representations have been shown to deliver state-of-the-art performance. In real-world applications, however, face images usually suffer from expressions, disguises and random occlusions. The problematic facial parts undermine the validity of the linear-subspace assumption and thus the recognition performance deteriorates significantly. In this work, we address the problem in a learning-inference-mixed fashion. By observing that the linear-subspace assumption is more reliable on certain face patches rather than on the holistic face, some Bayesian Patch Representations (BPRs) are randomly generated and interpreted according to the Bayes' theory. We then train an ensemble model over the patch-representations by minimizing the empirical risk w.r.t the "leave-one-out margins". The obtained model is termed Optimal Representation Ensemble (ORE), since it guarantees the optimality from the perspective of Empirical Risk Minimization. To handle the unknown patterns in tes...

  11. AN EVEN COMPONENT BASED FACE RECOGNITION METHOD

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    This paper presents a novel face recognition algorithm. To provide additional variations to the training data set, even-odd decomposition is adopted, and only the even components (half-even face images) are used for further processing. To tackle the shift-variance problem, the Fourier transform is applied to the half-even face images. To reduce the dimension of an image, PCA (Principal Component Analysis) features are extracted from the amplitude spectrum of the half-even face images. Finally, a nearest neighbor classifier is employed for the task of classification. Experimental results on the ORL database show that the proposed method outperforms, in terms of accuracy, the conventional eigenface method which applies PCA to the original images, as well as the eigenface method which uses both the original images and their mirror images as the training set.
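
    A hedged sketch of the even-component idea: the even part of an image is the average of the image and its point reflection, and the magnitude of its Fourier transform is invariant to circular shifts, which is what makes the amplitude spectrum attractive before PCA. The flip convention and the downstream PCA step are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def even_component(img):
    """Even part of the image: 0.5 * (I(x, y) + I(-x, -y)), using the 180-degree flip."""
    return 0.5 * (img + img[::-1, ::-1])

def amplitude_spectrum(img):
    """Magnitude of the 2D FFT, flattened; invariant to circular shifts of the input."""
    return np.abs(np.fft.fft2(img)).ravel()

# For a stack of gallery faces X (shape (n, H, W), assumed to exist), the features would be
#   feats = np.stack([amplitude_spectrum(even_component(x)) for x in X])
# followed by PCA (e.g., sklearn.decomposition.PCA) and a nearest-neighbor match.
```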

  12. Human Face Recognition Using Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Răzvan-Daniel Albu

    2009-10-01

    In this paper, I present a novel hybrid face recognition approach based on a convolutional neural architecture, designed to robustly detect highly variable face patterns. The convolutional network extracts successively larger features in a hierarchical set of layers. The weights of the trained neural networks are used to create kernel windows for feature extraction in a 3-stage algorithm. I present experimental results illustrating the efficiency of the proposed approach, using a database of 796 images of 159 individuals from Reims University which contains quite a high degree of variability in expression, pose, and facial details.

  13. Face Recognition (Patterns Matching & Bio-Metrics)

    Directory of Open Access Journals (Sweden)

    Jignesh Dhirubhai Hirapara

    2012-08-01

    Government agencies are investing a considerable amount of resources into improving security systems as a result of recent terrorist events that dangerously exposed flaws and weaknesses in today's safety mechanisms. Badge- or password-based authentication procedures are too easy to hack. Biometrics represents a valid alternative, but it suffers from drawbacks as well. Iris scanning, for example, is very reliable but too intrusive; fingerprints are socially accepted, but not applicable to non-consenting people. On the other hand, face recognition represents a good compromise between what is socially acceptable and what is reliable, even when operating under controlled conditions. In the last decade, many algorithms based on linear/nonlinear methods, neural networks, wavelets, etc. have been proposed. Nevertheless, the Face Recognition Vendor Test 2002 showed that most of these approaches encountered problems in outdoor conditions, which lowered their reliability compared to state-of-the-art biometrics.

  14. Enhanced Face Recognition using Data Fusion

    Directory of Open Access Journals (Sweden)

    Alaa Eleyan

    2012-12-01

    In this paper we scrutinize the influence of fusion on face recognition performance. In pattern recognition tasks, benefiting from different uncorrelated observations and performing fusion at the feature and/or decision level improves the overall performance. In the feature fusion approach, we fuse (concatenate) the feature vectors obtained using different feature extractors for the same image; classification is then performed using different similarity measures. In the decision fusion approach, fusion is performed at the decision level, where decisions from different algorithms are fused using majority voting. The proposed method was tested using face images having different facial expressions and conditions obtained from the ORL and FRAV2D databases. Simulation results show that both the feature and decision fusion approaches significantly outperform the single performances of the fused algorithms.
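
    The two strategies described above can be illustrated with a short, hedged sketch: feature-level fusion by concatenating per-extractor vectors, and decision-level fusion by majority voting over per-classifier labels. The helper names and example labels are hypothetical.

```python
import numpy as np

def fuse_features(feature_vectors):
    """Feature fusion: concatenate the vectors produced by different extractors for one image."""
    return np.concatenate(feature_vectors)

def majority_vote(decisions):
    """Decision fusion: pick the label predicted by the largest number of classifiers."""
    labels, counts = np.unique(np.asarray(decisions), return_counts=True)
    return labels[np.argmax(counts)]

# e.g. three classifiers voting on one probe image
print(majority_vote(["subject_07", "subject_07", "subject_12"]))   # -> subject_07
```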

  15. Face Recognition Based on Nonlinear Feature Approach

    Directory of Open Access Journals (Sweden)

    Eimad E.A. Abusham

    2008-01-01

    Feature extraction techniques are widely used to reduce the complexity of high-dimensional data. Nonlinear feature extraction via Locally Linear Embedding (LLE) has attracted much attention due to its high performance. In this paper, we propose a novel approach for face recognition that integrates the nonlinear dimensionality reduction of Locally Linear Embedding with Local Fisher Discriminant Analysis (LFDA) to improve the discriminating power of the extracted features by maximizing the between-class separation while preserving the within-class local structure. Extensive experimentation performed on the CMU-PIE database indicates that the proposed methodology outperforms benchmark methods such as Principal Component Analysis (PCA) and Fisher Discriminant Analysis (FDA). The results showed that a recognition rate of 95% could be obtained using our proposed method.

  16. Gender-Based Prototype Formation in Face Recognition

    Science.gov (United States)

    Baudouin, Jean-Yves; Brochard, Renaud

    2011-01-01

    The role of gender categories in prototype formation during face recognition was investigated in 2 experiments. The participants were asked to learn individual faces and then to recognize them. During recognition, the individual faces were mixed with blended faces of the same or different genders. The results of the 2 experiments showed…

  17. Applying Artificial Neural Networks for Face Recognition

    Directory of Open Access Journals (Sweden)

    Thai Hoang Le

    2011-01-01

    This paper introduces some novel models for all steps of a face recognition system. In the face detection step, we propose a hybrid model combining AdaBoost and an Artificial Neural Network (ABANN) to solve the process efficiently. In the next step, the faces detected by ABANN are aligned by an Active Shape Model and a Multi-Layer Perceptron. In this alignment step, we propose a new 2D local texture model based on a Multi-Layer Perceptron. The classifier of the model significantly improves the accuracy and the robustness of local searching on faces with expression variation and ambiguous contours. In the feature extraction step, we describe a methodology for improving the efficiency by the association of two methods: a geometric-feature-based method and the Independent Component Analysis method. In the face matching step, we apply a model combining many Neural Networks for matching the geometric features of the human face. The model links many Neural Networks together, so we call it a Multi Artificial Neural Network. The MIT + CMU database is used for evaluating our proposed methods for face detection and alignment. Finally, the experimental results of all steps on the Caltech database show the feasibility of our proposed model.

  18. Face and body recognition show similar improvement during childhood.

    Science.gov (United States)

    Bank, Samantha; Rhodes, Gillian; Read, Ainsley; Jeffery, Linda

    2015-09-01

    Adults are proficient in extracting identity cues from faces. This proficiency develops slowly during childhood, with performance not reaching adult levels until adolescence. Bodies are similar to faces in that they convey identity cues and rely on specialized perceptual mechanisms. However, it is currently unclear whether body recognition mirrors the slow development of face recognition during childhood. Recent evidence suggests that body recognition develops faster than face recognition. Here we measured body and face recognition in 6- and 10-year-old children and adults to determine whether these two skills show different amounts of improvement during childhood. We found no evidence that they do. Face and body recognition showed similar improvement with age, and children, like adults, were better at recognizing faces than bodies. These results suggest that the mechanisms of face and body memory mature at a similar rate or that improvement of more general cognitive and perceptual skills underlies improvement of both face and body recognition.

  19. Semantic information can facilitate covert face recognition in congenital prosopagnosia.

    Science.gov (United States)

    Rivolta, Davide; Schmalzl, Laura; Coltheart, Max; Palermo, Romina

    2010-11-01

    People with congenital prosopagnosia have never developed the ability to accurately recognize faces. This single-case investigation systematically examines covert and overt face recognition in "C.," a 69-year-old woman with congenital prosopagnosia. Specifically, we: (a) describe the first assessment of covert face recognition in congenital prosopagnosia using multiple tasks; (b) show that semantic information can contribute to covert recognition; and (c) provide a theoretical explanation for the mechanisms underlying covert face recognition.

  20. Varying face occlusion detection and iterative recovery for face recognition

    Science.gov (United States)

    Wang, Meng; Hu, Zhengping; Sun, Zhe; Zhao, Shuhuan; Sun, Mei

    2017-05-01

    In most sparse representation methods for face recognition (FR), occlusion problems are usually solved by removing the occluded part of both the query samples and the training samples before performing recognition. This practice ignores the global features of the facial image and may lead to unsatisfactory results due to the limitations of local features. Considering this drawback, we propose a method called varying occlusion detection and iterative recovery for FR. The main contributions of our method are as follows: (1) to detect an accurate occlusion area of facial images, a combination of image processing and intersection-based clustering is used for occlusion FR; (2) according to the accurate occlusion map, new integrated facial images are recovered iteratively and fed into the recognition process; and (3) the effectiveness of our method on recognition accuracy is verified by comparing it with three typical occlusion map detection methods. Experiments show that the proposed method has highly accurate detection and recovery performance and that it outperforms several similar state-of-the-art methods under partial contiguous occlusion.

  1. Incremental Nonnegative Matrix Factorization for Face Recognition

    Directory of Open Access Journals (Sweden)

    Wen-Sheng Chen

    2008-01-01

    Nonnegative matrix factorization (NMF) is a promising approach for local feature extraction in face recognition tasks. However, there are two major drawbacks in almost all existing NMF-based methods. One shortcoming is that the computational cost is high for large matrix decompositions. The other is that the learning must be repeated whenever the training samples or classes are updated. To overcome these two limitations, this paper proposes a novel incremental nonnegative matrix factorization (INMF) for face representation and recognition. The proposed INMF approach is based on a novel constraint criterion and our previous block strategy. It thus has some good properties, such as low computational complexity and a sparse coefficient matrix. Also, the coefficient column vectors of different classes are orthogonal. In particular, it can be applied to incremental learning. Two face databases, namely the FERET and CMU PIE face databases, are selected for evaluation. Compared with PCA and some state-of-the-art NMF-based methods, our INMF approach gives the best performance.

  2. Human Face Recognition using Line Features

    CERN Document Server

    Bhowmik, Mrinal Kanti; Nasipuri, Mita; Basu, Dipak Kumar; Kundu, Mahantapas

    2010-01-01

    In this work we investigate a novel approach to handle the challenges of face recognition, which include rotation, scale, occlusion, illumination etc. Here, we have used thermal face images, as they are able to minimize the effect of illumination changes and of occlusion due to moustaches, beards, adornments etc. The proposed approach registers the training and testing thermal face images in polar coordinates, which can handle the complications introduced by scaling and rotation. Line features are extracted from the thermal polar images and feature vectors are constructed using these lines. The feature vectors thus obtained are passed through principal component analysis (PCA) for dimensionality reduction. Finally, the images projected into the eigenspace are classified using a multi-layer perceptron. In the experiments we have used the Object Tracking and Classification Beyond Visible Spectrum (OTCBVS) database. Experimental results show that the proposed approach significantly improves the verificatio...

  3. Weighted Attribute Fusion Model for Face Recognition

    CERN Document Server

    Sakthivel, S

    2010-01-01

    Recognizing a face based on its attributes is an easy task for a human to perform, as it is a cognitive process. In recent years, face recognition has been achieved with different kinds of facial features, used separately or in combination. Currently, feature fusion methods and parallel methods integrate multiple feature sets at different levels. However, such integration and combinational methods do not guarantee better results. Hence, to achieve better results, a feature fusion model with multiple weighted facial attribute sets is selected. For this feature model, face images from the predefined Olivetti Research Laboratory (ORL) data set have been taken and applied to different methods, such as the Principal Component Analysis (PCA) based eigenfeature extraction technique, the Discrete Cosine Transformation (DCT) based feature extraction technique, a histogram-based feature extraction technique, and simple intensity-based features. The extracted feature set obt...

  4. Eigenvector Weighting Function in Face Recognition

    Directory of Open Access Journals (Sweden)

    Pang Ying Han

    2011-01-01

    Graph-based subspace learning is a class of dimensionality reduction techniques in face recognition. The technique reveals the local manifold structure of face data that is hidden in the image space via a linear projection. However, real-world face data may be too complex to measure due to both external imaging noise and the intra-class variations of the face images. Hence, features extracted by the graph-based technique could be noisy. An appropriate weight should be imposed on the data features for better data discrimination. In this paper, a piecewise weighting function, known as the Eigenvector Weighting Function (EWF), is proposed and implemented in two graph-based subspace learning techniques, namely Locality Preserving Projection and Neighbourhood Preserving Embedding. Specifically, the computed projection subspace of the learning approach is decomposed into three partitions: a subspace due to intra-class variations, an intrinsic face subspace, and a subspace which is attributed to imaging noise. Projected data features are weighted differently in these subspaces to emphasize the intrinsic face subspace while penalizing the other two subspaces. Experiments on the FERET and FRGC databases are conducted to show the promising performance of the proposed technique.

  5. Facial Emotion Recognition in Bipolar Disorder and Healthy Aging.

    Science.gov (United States)

    Altamura, Mario; Padalino, Flavia A; Stella, Eleonora; Balzotti, Angela; Bellomo, Antonello; Palumbo, Rocco; Di Domenico, Alberto; Mammarella, Nicola; Fairfield, Beth

    2016-03-01

    Emotional face recognition is impaired in bipolar disorder, but it is not clear whether this is specific for the illness. Here, we investigated how aging and bipolar disorder influence dynamic emotional face recognition. Twenty older adults, 16 bipolar patients, and 20 control subjects performed a dynamic affective facial recognition task and a subsequent rating task. Participants pressed a key as soon as they were able to discriminate whether the neutral face was assuming a happy or angry facial expression and then rated the intensity of each facial expression. Results showed that older adults recognized happy expressions faster, whereas bipolar patients recognized angry expressions faster. Furthermore, both groups rated emotional faces more intensely than did the control subjects. This study is one of the first to compare how aging and clinical conditions influence emotional facial recognition and underlines the need to consider the role of specific and common factors in emotional face recognition.

  6. Research on Face Recognition Based on Embedded System

    Directory of Open Access Journals (Sweden)

    Hong Zhao

    2013-01-01

    Because a large amount of image feature data must be stored and complex calculations must be executed during face recognition, the face recognition process has previously been realized only on high-performance PCs. In this paper, OpenCV facial Haar-like features are used to identify the face region; Principal Component Analysis (PCA) is employed for quick extraction of face features, and the Euclidean distance is adopted for face recognition. In this way, the data volume and computational complexity are effectively reduced, and face recognition can be carried out on an embedded platform. Finally, an embedded face recognition system was constructed on the Tiny6410 embedded platform. The test results showed that the system operates stably with a high recognition rate and can be used in portable and mobile identification and authentication.
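
    A hedged sketch of this kind of pipeline with OpenCV: Haar-cascade detection, PCA projection of the cropped face, and a Euclidean-distance match against a projected gallery. The crop size and the gallery arrays are assumptions; the paper's actual implementation details are not reproduced here.

```python
import cv2
import numpy as np

cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_and_crop(gray, size=(64, 64)):
    """Return the first detected face region, resized to a fixed size, or None."""
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    return cv2.resize(gray[y:y + h, x:x + w], size)

def match(face, mean, eigenvectors, gallery_proj, gallery_labels):
    """Project onto the PCA basis and return the label of the nearest gallery face."""
    proj = eigenvectors @ (face.astype(np.float32).ravel() - mean)
    distances = np.linalg.norm(gallery_proj - proj, axis=1)
    return gallery_labels[int(np.argmin(distances))]
```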

  7. Complex Wavelet Transform-Based Face Recognition

    Directory of Open Access Journals (Sweden)

    2009-03-01

    Complex approximately analytic wavelets provide a local multiscale description of images with good directional selectivity and invariance to shifts and in-plane rotations. Similar to Gabor wavelets, they are insensitive to illumination variations and facial expression changes. The complex wavelet transform is, however, less redundant and computationally efficient. In this paper, we first construct complex approximately analytic wavelets in the single-tree context, which possess Gabor-like characteristics. We then investigate the recently developed dual-tree complex wavelet transform (DT-CWT) and the single-tree complex wavelet transform (ST-CWT) for the face recognition problem. Extensive experiments are carried out on standard databases. The resulting complex wavelet-based feature vectors are as discriminating as the Gabor wavelet-derived features and at the same time are of lower dimension than those of Gabor wavelets. In all experiments, on two well-known databases, namely the FERET and ORL databases, complex wavelets equaled or surpassed the performance of Gabor wavelets in recognition rate when an equal number of orientations and scales was used. These findings indicate that complex wavelets can provide a successful alternative to Gabor wavelets for face recognition.

  8. A Fuzzy Neural Model for Face Recognition

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    In this paper, a fuzzy neural model is proposed for face recognition. Each rule in the proposed fuzzy neural model is used to estimate one cluster of the pattern distribution in a form which is different from the classical possibility density function. Through self-adaptive learning and fuzzy inference, a confidence value is assigned to a given pattern to denote the possibility of this pattern belonging to some certain class/subject. The architecture of the whole system takes the structure of one-class-in-one-network (OCON), which has many advantages such as easy convergence, suitability for distributed applications, quick retrieval, and incremental training. Novel methods are used to determine the number of fuzzy rules and initialize the fuzzy subsets. The proposed approach has the characteristics of quick learning/recognition speed, high recognition accuracy and robustness. The proposed approach can even recognize very low-resolution face images (e.g., 7x6) that humans cannot, when the number of subjects is not very large. Experiments on ORL demonstrate the effectiveness of the proposed approach, and an average error rate of 3.95% is obtained.

  9. Face recognition: a model specific ability

    Directory of Open Access Journals (Sweden)

    Jeremy B Wilmer

    2014-10-01

    In our everyday lives, we view it as a matter of course that different people are good at different things. It can be surprising, in this context, to learn that most of what is known about cognitive ability variation across individuals concerns the broadest of all cognitive abilities, often labeled g. In contrast, our knowledge of specific abilities, those that correlate little with g, is severely constrained. Here, we draw upon our experience investigating an exceptionally specific ability, face recognition, to make the case that many specific abilities could easily have been missed. In making this case, we derive key insights from earlier false starts in the measurement of face recognition's variation across individuals, and we highlight the convergence of factors that enabled the recent discovery that this variation is specific. We propose that the case of face recognition ability illustrates a set of tools and perspectives that could accelerate fruitful work on specific cognitive abilities. By revealing relatively independent dimensions of human ability, such work would enhance our capacity to understand the uniqueness of individual minds.

  10. Locally Linear Discriminate Embedding for Face Recognition

    Directory of Open Access Journals (Sweden)

    Eimad E. Abusham

    2009-01-01

    A novel method based on local nonlinear mapping is presented in this research. The method is called Locally Linear Discriminate Embedding (LLDE). LLDE preserves the local linear structure of a high-dimensional space and obtains a compact data representation, as accurately as possible, in a low-dimensional embedding space before recognition. For computational simplicity and fast processing, a Radial Basis Function (RBF) classifier is integrated with LLDE. The RBF classifier is applied to the low-dimensional embedding with reference to the variance of the data. To validate the proposed method, the CMU-PIE database was used, and the experiments conducted in this research revealed the efficiency of the proposed method in face recognition compared to linear and nonlinear approaches.
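
    As a rough stand-in for the pipeline above, the sketch below runs Locally Linear Embedding from scikit-learn and then an RBF-kernel SVM on the embedding; scikit-learn offers neither LLDE nor a discriminant-enhanced LLE, so the discriminant step is replaced, the RBF network is replaced by an RBF-kernel classifier, and the parameter values are illustrative.

```python
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.svm import SVC

def fit_lle_rbf(X_train, y_train, n_neighbors=30, n_components=20):
    """X_train: (n_samples, n_pixels) flattened face images; y_train: subject labels."""
    lle = LocallyLinearEmbedding(n_neighbors=n_neighbors, n_components=n_components)
    Z = lle.fit_transform(X_train)                # low-dimensional embedding of the faces
    clf = SVC(kernel="rbf").fit(Z, y_train)       # RBF-kernel classifier on the embedding
    return lle, clf

def predict(lle, clf, X_test):
    return clf.predict(lle.transform(X_test))     # out-of-sample embedding, then classification
```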

  11. Efficient Recognition of Human Faces from Video in Particle Filter

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Face recognition from video requires dealing with uncertainty both in tracking and in recognition. This paper proposes an effective method for face recognition from video. In order to realize simultaneous tracking and recognition, fisherface-based recognition is combined with tracking into one model. This model is then embedded into a particle filter to perform face recognition from video. In order to improve the robustness of tracking, an expectation maximization (EM) algorithm is adopted to update the appearance model. The experimental results show that the proposed method can perform well in tracking and recognition even in poor conditions such as occlusion and marked changes in lighting.

  12. A Massively Parallel Face Recognition System

    Directory of Open Access Journals (Sweden)

    Lahdenoja Olli

    2007-01-01

    We present methods for processing LBPs (local binary patterns) with massively parallel hardware, especially with the CNN-UM (cellular nonlinear network-universal machine). In particular, we present a framework for implementing a massively parallel face recognition system, including a dedicated, highly accurate algorithm suitable for various types of platforms (e.g., CNN-UM and digital FPGA). We study in detail a dedicated mixed-mode implementation of the algorithm and estimate its implementation cost in view of its performance and accuracy restrictions.

  13. A Massively Parallel Face Recognition System

    Directory of Open Access Journals (Sweden)

    Ari Paasio

    2006-12-01

    We present methods for processing LBPs (local binary patterns) with massively parallel hardware, especially with the CNN-UM (cellular nonlinear network-universal machine). In particular, we present a framework for implementing a massively parallel face recognition system, including a dedicated, highly accurate algorithm suitable for various types of platforms (e.g., CNN-UM and digital FPGA). We study in detail a dedicated mixed-mode implementation of the algorithm and estimate its implementation cost in view of its performance and accuracy restrictions.

  14. 2DPCA versus PCA for face recognition

    Institute of Scientific and Technical Information of China (English)

    HU Jian-jun; TAN Guan-zheng; LUAN Feng-gang; A. S. M. LIBDA

    2015-01-01

    Dimensionality reduction methods play an important role in face recognition. Principal component analysis (PCA) and two-dimensional principal component analysis (2DPCA) are two important methods in this field. Recent research suggests that the 2DPCA method is superior to the PCA method. To determine whether this conclusion is always true, a comprehensive comparison study between the PCA and 2DPCA methods was carried out. A novel concept, called column-image difference (CID), was proposed to analyze the difference between the PCA and 2DPCA methods in theory. It is found that some restrictive conditions must hold for 2DPCA to outperform PCA. After the theoretical analysis, experiments were conducted on four well-known face image databases. The experimental results confirm the validity of the theoretical claim.
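
    A short, hedged sketch of 2DPCA for contrast with PCA: images stay as matrices, the image covariance matrix is accumulated from A^T A terms of the centered images, and each face is projected onto the top eigenvectors of that matrix. The number of components is an illustrative choice, not a value from the paper.

```python
import numpy as np

def two_d_pca(images, n_components=8):
    """images: (n, h, w) array. Returns the (w, n_components) 2DPCA projection matrix."""
    mean_img = images.mean(axis=0)
    centered = images - mean_img
    # Image covariance matrix: average of A^T A over the centered images, shape (w, w)
    g = np.mean(np.einsum("nij,nik->njk", centered, centered), axis=0)
    eigvals, eigvecs = np.linalg.eigh(g)          # eigenvalues in ascending order
    return eigvecs[:, ::-1][:, :n_components]     # keep the leading eigenvectors

# X = two_d_pca(train_images)                     # train_images assumed to be an (n, h, w) stack
# features = train_images @ X                     # each face becomes an (h, n_components) feature matrix
```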

  15. Face Recognition using Segmental Euclidean Distance

    Directory of Open Access Journals (Sweden)

    Farrukh Sayeed

    2011-09-01

    In this paper an attempt has been made to detect the face using a combination of the integral image and a cascade-structured classifier built using the AdaBoost learning algorithm. The detected faces are then passed through a filtering process to discard the non-face regions. They are individually split up into five segments consisting of the forehead, eyes, nose, mouth and chin. Each segment is considered as a separate image, and eigenface, also called principal component analysis (PCA), features of each segment are computed. Faces with a slight pose are also aligned for proper segmentation. The test image is segmented similarly and its PCA features are found. The segmental Euclidean distance classifier is used for matching the test image with the stored ones. The success rate comes out to be 88 per cent on the CG (full) database created from the databases of the California Institute and the Georgia Institute. However, the performance of this approach on the ORL (full) database with the same features is only 70 per cent. For the sake of comparison, DCT (full) and fuzzy features are tried on the CG and ORL databases but with a well-known classifier, the support vector machine (SVM). Recognition rates with DCT features and the SVM classifier are increased by 3 per cent over those due to PCA features and the Euclidean distance classifier on the CG database. The recognition results are improved to 96 per cent with fuzzy features on the ORL database with SVM. Defence Science Journal, 2011, 61(5), pp. 431-442, DOI: http://dx.doi.org/10.14429/dsj.61.1178
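
    A minimal sketch of the segmental-distance idea: split each face into five horizontal bands and sum the per-segment Euclidean distances. Equal band heights and raw-pixel features are simplifying assumptions; the paper computes PCA features per segment.

```python
import numpy as np

def split_segments(face, n_segments=5):
    """Split an (H, W) face image into n roughly equal horizontal bands."""
    return np.array_split(face, n_segments, axis=0)

def segmental_distance(probe, gallery_face):
    """Sum of Euclidean distances between corresponding segments of two faces."""
    return sum(
        np.linalg.norm(p.astype(np.float32).ravel() - g.astype(np.float32).ravel())
        for p, g in zip(split_segments(probe), split_segments(gallery_face))
    )
```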

  16. Study of Different Face Recognition Algorithms and Challenges

    Directory of Open Access Journals (Sweden)

    Uma Shankar Kurmi

    2014-03-01

    At present, face recognition has a wide range of applications, such as security and law enforcement. Imaging conditions, orientation, pose and the presence of occlusion are major problems associated with face recognition, and the performance of face recognition systems decreases because of them. Linear Discriminant Analysis (LDA) or Principal Components Analysis (PCA) is used to get better recognition results. The human face contains relevant information that can be extracted from the face model developed by the PCA technique. The Principal Components Analysis method uses the eigenface approach to describe face image variation. A face recognition technique that is robust in all situations is not available: some techniques are better for illumination problems, some for pose problems and some for occlusion problems. This paper presents some algorithms for face recognition.

  17. Face recognition from a moving platform via sparse representation

    Science.gov (United States)

    Hsu, Ming Kai; Hsu, Charles; Lee, Ting N.; Szu, Harold

    2012-06-01

    A video-based surveillance system for passengers includes face detection, face tracking and face recognition. In general, the final recognition result of a video-based surveillance system is determined by the cumulative recognition results, and under this strategy the correctness of face tracking plays an important role in the system recognition rate. The challenge of face tracking on a moving platform is that the spatial and temporal information used by conventional face tracking algorithms may be lost; consequently, conventional face tracking algorithms can barely handle face tracking on a moving platform. In this paper, we have evaluated the state-of-the-art technologies for face detection, face tracking and face recognition on a moving platform. In the meantime, we also propose a new strategy for face tracking on a moving platform, or face tracking under a very low frame rate. The steps of the new strategy are: (1) classify the detected faces over a certain period instead of every frame; (2) tracking each passenger is equivalent to reconstructing the time order of a certain period for each passenger. If the cumulative recognition results are the only part needed for the surveillance system, step 2 can be skipped. In addition, if further information about the passengers is required, such as path tracking, lip reading or gesture recognition, the time-order reconstruction in step 2 can provide the required information.

  18. Face Recognition by Metropolitan Police Super-Recognisers

    OpenAIRE

    2016-01-01

    Face recognition is used to prove identity across a wide variety of settings. Despite this, research consistently shows that people are typically rather poor at matching faces to photos. Some professional groups, such as police and passport officers, have been shown to perform just as poorly as the general public on standard tests of face recognition. However, face recognition skills are subject to wide individual variation, with some people showing exceptional ability-a group that has come t...

  19. A reciprocal model of face recognition and autistic traits: evidence from an individual differences perspective.

    Science.gov (United States)

    Halliday, Drew W R; MacDonald, Stuart W S; Scherf, K Suzanne; Sherf, Suzanne K; Tanaka, James W

    2014-01-01

    Although not a core symptom of the disorder, individuals with autism often exhibit selective impairments in their face processing abilities. Importantly, the reciprocal connection between autistic traits and face perception has rarely been examined within the typically developing population. In this study, university participants from the social sciences, physical sciences, and humanities completed a battery of measures that assessed face, object and emotion recognition abilities, general perceptual-cognitive style, and sub-clinical autistic traits (the Autism Quotient (AQ)). We employed separate hierarchical multiple regression analyses to evaluate which factors could predict face recognition scores and AQ scores. Gender, object recognition performance, and AQ scores predicted face recognition behaviour. Specifically, males, individuals with more autistic traits, and those with lower object recognition scores performed more poorly on the face recognition test. Conversely, university major, gender and face recognition performance reliably predicted AQ scores. Science majors, males, and individuals with poor face recognition skills showed more autistic-like traits. These results suggest that the broader autism phenotype is associated with lower face recognition abilities, even among typically developing individuals.

  20. Impaired face recognition is associated with social inhibition.

    Science.gov (United States)

    Avery, Suzanne N; VanDerKlok, Ross M; Heckers, Stephan; Blackford, Jennifer U

    2016-02-28

    Face recognition is fundamental to successful social interaction. Individuals with deficits in face recognition are likely to have social functioning impairments that may lead to heightened risk for social anxiety. A critical component of social interaction is how quickly a face is learned during initial exposure to a new individual. Here, we used a novel Repeated Faces task to assess how quickly memory for faces is established. Face recognition was measured over multiple exposures in 52 young adults ranging from low to high in social inhibition, a core dimension of social anxiety. High social inhibition was associated with a smaller slope of change in recognition memory over repeated face exposure, indicating participants with higher social inhibition showed smaller improvements in recognition memory after seeing faces multiple times. We propose that impaired face learning is an important mechanism underlying social inhibition and may contribute to, or maintain, social anxiety.

  1. Random-Profiles-Based 3D Face Recognition System

    Directory of Open Access Journals (Sweden)

    Joongrock Kim

    2014-03-01

    In this paper, a novel nonintrusive three-dimensional (3D) face modeling system for random-profile-based 3D face recognition is presented. Although recent two-dimensional (2D) face recognition systems can achieve a reliable recognition rate under certain conditions, their performance is limited by internal and external changes, such as illumination and pose variation. To address these issues, 3D face recognition, which uses 3D face data, has recently received much attention. However, the performance of 3D face recognition highly depends on the precision of the acquired 3D face data, while also requiring more computational power and storage capacity than 2D face recognition systems. In this paper, we present a nonintrusive 3D face modeling system composed of a stereo vision system and an invisible near-infrared line laser, which can be directly applied to profile-based 3D face recognition. We further propose a novel random-profile-based 3D face recognition method that is memory-efficient and pose-invariant. The experimental results demonstrate that the reconstructed 3D face data consist of point clouds of more than 50 k 3D points and yield a reliable recognition rate against pose variation.

  2. 3D face recognition algorithm based on detecting reliable components

    Institute of Scientific and Technical Information of China (English)

    Huang Wenjun; Zhou Xuebing; Niu Xiamu

    2007-01-01

    Fisherfaces algorithm is a popular method for face recognition. However, there exist some unstable components that degrade recognition performance. In this paper, we propose a method based on detecting reliable components to overcome the problem and introduce it to 3D face recognition. The reliable components are detected within the binary feature vector, which is generated from the Fisherfaces feature vector based on statistical properties, and is used for 3D face recognition as the final feature vector. Experimental results show that the reliable components feature vector is much more effective than the Fisherfaces feature vector for face recognition.
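
    The abstract does not give the exact statistical criterion used to detect reliable components, so the Python sketch below only illustrates the general idea: compute Fisherfaces (LDA) features, binarise them, keep the components whose bits are most consistent within each class, and match on those bits with a Hamming distance. The function name, the `keep_ratio` parameter and the consistency measure are illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch of a "reliable components" variant of Fisherfaces.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fisherfaces_reliable(X_train, y_train, X_test, keep_ratio=0.8):
    """X_*: (n_samples, n_pixels) flattened faces; y_train: numpy array of labels."""
    lda = LinearDiscriminantAnalysis()
    F_train = lda.fit_transform(X_train, y_train)      # Fisherfaces features
    F_test = lda.transform(X_test)

    thresh = F_train.mean(axis=0)                       # per-component threshold
    B_train = (F_train > thresh).astype(int)            # binary feature vectors
    B_test = (F_test > thresh).astype(int)

    # Assumed reliability criterion: how consistent each bit is within its class.
    consistency = []
    for j in range(B_train.shape[1]):
        per_class = [B_train[y_train == c, j].mean() for c in np.unique(y_train)]
        consistency.append(np.mean([max(p, 1.0 - p) for p in per_class]))
    keep = np.argsort(consistency)[-max(1, int(keep_ratio * len(consistency))):]

    # Nearest-neighbour matching on the reliable bits only (Hamming distance).
    Bt, Btr = B_test[:, keep], B_train[:, keep]
    d = np.abs(Bt[:, None, :] - Btr[None, :, :]).sum(axis=2)
    return y_train[d.argmin(axis=1)]
```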

  3. Prevalence of face recognition deficits in middle childhood.

    Science.gov (United States)

    Bennetts, Rachel J; Murray, Ebony; Boyce, Tian; Bate, Sarah

    2017-02-01

    Approximately 2-2.5% of the adult population is believed to show severe difficulties with face recognition, in the absence of any neurological injury-a condition known as developmental prosopagnosia (DP). However, to date no research has attempted to estimate the prevalence of face recognition deficits in children, possibly because there are very few child-friendly, well-validated tests of face recognition. In the current study, we examined face and object recognition in a group of primary school children (aged 5-11 years), to establish whether our tests were suitable for children and to provide an estimate of face recognition difficulties in children. In Experiment 1 (n = 184), children completed a pre-existing test of child face memory, the Cambridge Face Memory Test-Kids (CFMT-K), and a bicycle test with the same format. In Experiment 2 (n = 413), children completed three-alternative forced-choice matching tasks with faces and bicycles. All tests showed good psychometric properties. The face and bicycle tests were well matched for difficulty and showed a similar developmental trajectory. Neither the memory nor the matching tests were suitable to detect impairments in the youngest groups of children, but both tests appear suitable to screen for face recognition problems in middle childhood. In the current sample, 1.2-5.2% of children showed difficulties with face recognition; 1.2-4% showed face-specific difficulties-that is, poor face recognition with typical object recognition abilities. This is somewhat higher than previous adult estimates: It is possible that face matching tests overestimate the prevalence of face recognition difficulties in children; alternatively, some children may "outgrow" face recognition difficulties.

  4. Effects of pose and image resolution on automatic face recognition

    NARCIS (Netherlands)

    Mahmood, Zahid; Ali, Tauseef; Khan, Samee U.

    The popularity of face recognition systems has increased due to their use in widespread applications. Driven by the enormous number of potential application domains, several algorithms have been proposed for face recognition. Face pose and image resolution are two important factors that

  5. Effects of pose and image resolution on automatic face recognition

    NARCIS (Netherlands)

    Mahmood, Zahid; Ali, Tauseef; Khan, Samee U.

    2015-01-01

    The popularity of face recognition systems has increased due to their use in widespread applications. Driven by the enormous number of potential application domains, several algorithms have been proposed for face recognition. Face pose and image resolution are two important factors that

  6. Direct Gaze Modulates Face Recognition in Young Infants

    Science.gov (United States)

    Farroni, Teresa; Massaccesi, Stefano; Menon, Enrica; Johnson, Mark H.

    2007-01-01

    From birth, infants prefer to look at faces that engage them in direct eye contact. In adults, direct gaze is known to modulate the processing of faces, including the recognition of individuals. In the present study, we investigate whether direction of gaze has any effect on face recognition in four-month-old infants. Four-month-old infants were shown…

  7. Expression modeling for expression-invariant face recognition

    NARCIS (Netherlands)

    Haar, F.B. Ter; Veltkamp, R.C.

    2010-01-01

    Morphable face models have proven to be an effective tool for 3D face modeling and face recognition, but the extension to 3D face scans with expressions is still a challenge. The two main difficulties are (1) how to build a new morphable face model that deals with expressions, and (2) how to fit thi

  8. QUEST Hierarchy for Hyperspectral Face Recognition

    Directory of Open Access Journals (Sweden)

    David M. Ryer

    2012-01-01

    A qualia exploitation of sensor technology (QUEST) motivated architecture using algorithm fusion and adaptive feedback loops for face recognition with hyperspectral imagery (HSI) is presented. QUEST seeks to develop a general-purpose computational intelligence system that captures the beneficial engineering aspects of qualia-based solutions. Qualia-based approaches are constructed from subjective representations and have the ability to detect, distinguish, and characterize entities in the environment. Adaptive feedback loops are implemented that enhance performance by reducing candidate subjects in the gallery and by injecting additional probe images during the matching process. The architecture presented provides a framework for exploring more advanced integration strategies beyond those presented. Algorithmic results and performance improvements are presented as spatial, spectral, and temporal effects are utilized; additionally, a MATLAB-based graphical user interface (GUI) is developed to aid processing, track performance, and display results.

  9. AN ADVANCED SCALE INVARIANT FEATURE TRANSFORM ALGORITHM FOR FACE RECOGNITION

    OpenAIRE

    Mohammad Mohsen Ahmadinejad; Elizabeth Sherly

    2016-01-01

    In computer vision, the scale-invariant feature transform (SIFT) algorithm is widely used to detect and describe local features in images due to its excellent performance. For face recognition, however, the use of SIFT is complicated by false key-points detected in irrelevant portions of the face image, such as the hair style and other background details. This paper proposes an algorithm for face recognition to improve recognition accuracy by selecting relevant SIFT key-points only th...
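
    As a rough illustration of the idea of keeping only face-relevant key-points, the Python/OpenCV sketch below detects the face with a Haar cascade and passes an inner-face mask to SIFT so that key-points on the hair and background are discarded. The function name and the border fractions are illustrative assumptions; the paper's actual selection rule is not reproduced here.

```python
# Sketch: SIFT descriptors restricted to an inner-face mask (assumed heuristic).
import cv2
import numpy as np

def face_sift_descriptors(gray):
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    mask = np.zeros_like(gray)
    # Keep only the inner face, trimming borders that mostly contain hair.
    cv2.rectangle(mask, (x + w // 8, y + h // 8),
                  (x + 7 * w // 8, y + 7 * h // 8), 255, thickness=-1)
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray, mask)
    return descriptors
```

    Descriptors from two images could then be compared with a brute-force matcher and the usual ratio test.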

  10. Collaborative Representation based Classification for Face Recognition

    CERN Document Server

    Zhang, Lei; Feng, Xiangchu; Ma, Yi; Zhang, David

    2012-01-01

    By coding a query sample as a sparse linear combination of all training samples and then classifying it by evaluating which class leads to the minimal coding residual, sparse representation based classification (SRC) leads to interesting results for robust face recognition. It is widely believed that the l1-norm sparsity constraint on the coding coefficients plays a key role in the success of SRC, while its use of all training samples to collaboratively represent the query sample is rather ignored. In this paper we discuss how SRC works, and show that the collaborative representation mechanism used in SRC is much more crucial to its success in face classification. SRC is a special case of collaborative representation based classification (CRC), which has various instantiations obtained by applying different norms to the coding residual and coding coefficient. More specifically, the l1 or l2 norm characterization of the coding residual is related to the robustness of CRC to outlier facial pixels, while the l1 or l2 norm c...
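
    The l2-regularised instantiation of CRC has a closed-form code, which makes the idea easy to sketch. The NumPy snippet below is a minimal version under the assumption of column-stacked, l2-normalised training faces; it is not the authors' released code, and the regularisation value is illustrative.

```python
# Minimal collaborative-representation classifier (regularised least squares).
import numpy as np

def crc_classify(X_train, y_train, x_query, lam=0.01):
    """X_train: (d, n) columns are training faces; y_train: (n,) labels; x_query: (d,)."""
    D = X_train / np.linalg.norm(X_train, axis=0)       # l2-normalise the atoms
    x = x_query / np.linalg.norm(x_query)
    # Closed-form code: alpha = (D^T D + lam*I)^-1 D^T x
    alpha = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ x)
    best, best_score = None, np.inf
    for c in np.unique(y_train):
        idx = (y_train == c)
        residual = np.linalg.norm(x - D[:, idx] @ alpha[idx])
        score = residual / np.linalg.norm(alpha[idx])   # class-wise decision rule
        if score < best_score:
            best, best_score = c, score
    return best
```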

  11. Image preprocessing study on KPCA-based face recognition

    Science.gov (United States)

    Li, Xuan; Li, Dehua

    2015-12-01

    Face recognition, as an important biometric identification method with friendly, natural, and convenient advantages, has attracted more and more attention. This paper studies a face recognition system comprising face detection, feature extraction, and recognition, focusing on the related theory and key preprocessing techniques used in the detection stage and on how different preprocessing methods affect the recognition results of a KPCA-based method. We choose the YCbCr color space for skin segmentation and integral projection for face location. Face images are preprocessed with morphological opening and closing operations (erosion and dilation) and an illumination compensation method, and recognition is then performed with a kernel principal component analysis (KPCA) method; experiments were carried out on a typical face database, with the algorithms implemented on the MATLAB platform. Experimental results show that, under certain conditions, the kernel extension of the PCA algorithm represents the original image information better through nonlinear feature extraction and can achieve a higher recognition rate. In the preprocessing stage, different operations on the images can produce different results and hence different recognition rates, and in the kernel principal component analysis itself the degree of the polynomial kernel affects the recognition result.
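
    A minimal sketch of the KPCA recognition stage is given below in Python with scikit-learn, assuming the skin segmentation, integral-projection face location and illumination compensation have already produced aligned, flattened face vectors. The polynomial degree is exposed as a parameter because, as the abstract notes, it affects the recognition result; all names here are illustrative.

```python
# Sketch of KPCA feature extraction plus nearest-neighbour matching.
import numpy as np
from sklearn.decomposition import KernelPCA

def kpca_recognise(X_train, y_train, X_test, n_components=50, degree=2):
    """X_*: (n_samples, n_pixels) preprocessed, flattened face images."""
    kpca = KernelPCA(n_components=n_components, kernel="poly", degree=degree)
    F_train = kpca.fit_transform(X_train)
    F_test = kpca.transform(X_test)
    # Nearest neighbour in the kernel feature space.
    d = np.linalg.norm(F_test[:, None, :] - F_train[None, :, :], axis=2)
    return y_train[d.argmin(axis=1)]
```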

  12. Familiar Person Recognition: Is Autonoetic Consciousness More Likely to Accompany Face Recognition Than Voice Recognition?

    Science.gov (United States)

    Barsics, Catherine; Brédart, Serge

    2010-11-01

    Autonoetic consciousness is a fundamental property of human memory, enabling us to experience mental time travel, to recollect past events with a feeling of self-involvement, and to project ourselves in the future. Autonoetic consciousness is a characteristic of episodic memory. By contrast, awareness of the past associated with a mere feeling of familiarity or knowing relies on noetic consciousness, depending on semantic memory integrity. Present research was aimed at evaluating whether conscious recollection of episodic memories is more likely to occur following the recognition of a familiar face than following the recognition of a familiar voice. Recall of semantic information (biographical information) was also assessed. Previous studies that investigated the recall of biographical information following person recognition used faces and voices of famous people as stimuli. In this study, the participants were presented with personally familiar people's voices and faces, thus avoiding the presence of identity cues in the spoken extracts and allowing a stricter control of frequency exposure with both types of stimuli (voices and faces). In the present study, the rate of retrieved episodic memories, associated with autonoetic awareness, was significantly higher from familiar faces than familiar voices even though the level of overall recognition was similar for both these stimuli domains. The same pattern was observed regarding semantic information retrieval. These results and their implications for current Interactive Activation and Competition person recognition models are discussed.

  13. Multi—pose Color Face Recognition in a Complex Background

    Institute of Scientific and Technical Information of China (English)

    ZHU Changren; WANG Runsheng

    2003-01-01

    Face recognition has wide application fields. In the current literature, most algorithms that deal with face recognition in static images assume a simple background and are only used for ID-picture recognition, so it is necessary to study the whole process of multi-pose face recognition against a cluttered background. In this paper an automatic multi-pose, multi-feature face recognition system is proposed. It consists of several steps: face detection, detection and location of the facial organs, feature extraction for recognition, and recognition decision. In face detection, a combination of skin color and multi-verification, consisting of analysis of the shape, local organ features, and a head model, is applied to improve performance. In detection and location of the facial organ feature points, multiple features and their projections are analyzed, and an iterative search with a confidence function is combined with template matching at the candidate points to improve accuracy and speed. In feature extraction for recognition, geometry normalization based on a three-point affine transform is adopted to preserve as much information as possible before the principal component analysis (PCA) feature extraction. In the recognition decision, a hierarchical face model with a division of face poses is introduced to reduce the retrieval space and thus the time consumption, and a fusion decision is applied to improve recognition performance; the pose recognition result is obtained simultaneously. The new approach was applied to 420 color images consisting of multi-pose faces with two visible eyes in a complex background, and the results are satisfactory.

  14. Face age and sex modulate the other-race effect in face recognition.

    Science.gov (United States)

    Wallis, Jennifer; Lipp, Ottmar V; Vanman, Eric J

    2012-11-01

    Faces convey a variety of socially relevant cues that have been shown to affect recognition, such as age, sex, and race, but few studies have examined the interactive effect of these cues. White participants of two distinct age groups were presented with faces that differed in race, age, and sex in a face recognition paradigm. Replicating the other-race effect, young participants recognized young own-race faces better than young other-race faces. However, recognition performance did not differ across old faces of different races (Experiments 1, 2A). In addition, participants showed an other-age effect, recognizing White young faces better than White old faces. Sex affected recognition performance only when age was not varied (Experiment 2B). Overall, older participants showed a similar recognition pattern (Experiment 3) as young participants, displaying an other-race effect for young, but not old, faces. However, they recognized young and old White faces on a similar level. These findings indicate that face cues interact to affect recognition performance such that age and sex information reliably modulate the effect of race cues. These results extend accounts of face recognition that explain recognition biases (such as the other-race effect) as a function of dichotomous ingroup/outgroup categorization, in that outgroup characteristics are not simply additive but interactively determine recognition performance.

  15. A Survey of 2D Face Recognition Techniques

    Directory of Open Access Journals (Sweden)

    Mejda Chihaoui

    2016-09-01

    Despite the existence of various biometric techniques, like fingerprints, iris scans, and hand geometry, the most efficient and most widely used one is face recognition, because it is inexpensive, non-intrusive and natural. Researchers have therefore developed dozens of face recognition techniques over the last few years. These techniques can generally be divided into three categories, based on the face data processing methodology: methods that use the entire face as input data for the proposed recognition system, methods that do not consider the whole face but only some features or areas of the face, and methods that use global and local face characteristics simultaneously. In this paper, we present an overview of some well-known methods in each of these categories. First, we expose the benefits of, as well as the challenges to, the use of face recognition as a biometric tool. Then, we present a detailed survey of the well-known methods, explaining each method's principle. After that, a comparison between the three categories of face recognition techniques is provided. Furthermore, the databases used in face recognition are mentioned, and some results of the application of these methods on face recognition databases are presented. Finally, we highlight some new promising research directions that have recently appeared.

  16. When family looks strange and strangers look normal: a case of impaired face perception and recognition after stroke.

    Science.gov (United States)

    Heutink, Joost; Brouwer, Wiebo H; Kums, Evelien; Young, Andy; Bouma, Anke

    2012-02-01

    We describe a patient (JS) with impaired recognition and distorted visual perception of faces after an ischemic stroke. Strikingly, JS reports that the faces of family members look distorted, while faces of other people look normal. After neurological and neuropsychological examination, we assessed response accuracy, response times, and skin conductance responses on a face recognition task in which photographs of close family members, celebrities and unfamiliar people were presented. JS' performance was compared to the performance of three healthy control participants. Results indicate that three aspects of face perception appear to be impaired in JS. First, she has impaired recognition of basic emotional expressions. Second, JS has poor recognition of familiar faces in general, but recognition of close family members is disproportionally impaired compared to faces of celebrities. Third, JS perceives faces of family members as distorted. In this paper we consider whether these impairments can be interpreted in terms of previously described disorders of face perception and recent models for face perception.

  17. PCA Based Rapid and Real Time Face Recognition Technique

    Directory of Open Access Journals (Sweden)

    T R Chandrashekar

    2013-12-01

    Face biometrics are economical, efficient and used in various applications, making them a popular form of biometric system. Face recognition has been a topic of research for the last few decades, and several techniques have been proposed to improve the performance of face recognition systems. Accuracy is tested against intensity, distance from the camera, and pose variance. Multiple-face recognition is another subtopic currently under research, and the speed at which a technique works is a further parameter used to evaluate it. For example, a support vector machine performs very well for face recognition, but its computational efficiency degrades significantly as the number of classes increases, while the eigenface technique produces good features for face recognition but has comparatively lower accuracy than many other techniques. With the spread of multi-core processors in personal computers and the rise of applications demanding fast processing and multiple face detection and recognition (for example, an entry detection system in a shopping mall or an industrial site), demand for such automated systems is growing worldwide. In this paper we propose a face recognition system, developed with C#/.NET, that can detect multiple faces and recognize them in parallel by utilizing the system resources and the processor cores. The system is built around Haar-cascade-based face detection and PCA-based face recognition, and a parallel library designed for .NET is used to achieve high-speed detection and recognition of faces in real time. Analysis of the performance of the proposed technique against some conventional techniques reveals that it is not only accurate but also fast in comparison.
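
    The original system is written in C#/.NET; the Python sketch below only mirrors its structure under the assumption that an eigenface model (mean vector, eigenvectors, projected training set and labels) has already been trained elsewhere: Haar-cascade detection finds the faces in a frame, and the per-face matching step is spread over worker threads.

```python
# Rough analogue of Haar-cascade detection + eigenface matching, parallelised.
import cv2
import numpy as np
from concurrent.futures import ThreadPoolExecutor

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(gray, size=(64, 64)):
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [cv2.resize(gray[y:y + h, x:x + w], size) for (x, y, w, h) in boxes]

def match(face, mean, eigvecs, train_proj, labels):
    proj = eigvecs.T @ (face.flatten().astype(float) - mean)   # project to face space
    return labels[np.linalg.norm(train_proj - proj, axis=1).argmin()]

def recognise_frame(gray, mean, eigvecs, train_proj, labels):
    faces = detect_faces(gray)
    with ThreadPoolExecutor() as pool:          # recognise detected faces in parallel
        return list(pool.map(
            lambda f: match(f, mean, eigvecs, train_proj, labels), faces))
```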

  18. Newborns' Face Recognition: Role of Inner and Outer Facial Features

    Science.gov (United States)

    Turati, Chiara; Macchi Cassia, Viola; Simion, Francesca; Leo, Irene

    2006-01-01

    Existing data indicate that newborns are able to recognize individual faces, but little is known about what perceptual cues drive this ability. The current study showed that either the inner or outer features of the face can act as sufficient cues for newborns' face recognition (Experiment 1), but the outer part of the face enjoys an advantage…

  19. Examplers based image fusion features for face recognition

    CERN Document Server

    James, Alex Pappachen

    2012-01-01

    Examplers of a face are formed from multiple gallery images of a person and are used in the process of classifying a test image. We incorporate such examplers into a biologically inspired face recognition method based on local binary decisions on similarity. As opposed to single-model approaches such as face averages, the exampler-based approach results in higher recognition accuracies and stability. Using multiple training samples per person, the method shows the following recognition accuracies: 99.0% on AR, 99.5% on FERET, 99.5% on ORL, 99.3% on EYALE, 100.0% on YALE and 100.0% on CALTECH face databases. In addition to face recognition, the method also detects the natural variability in face images, which can find application in automatic tagging of face images.

  20. Isolating the Special Component of Face Recognition: Peripheral Identification and a Mooney Face

    Science.gov (United States)

    McKone, Elinor

    2004-01-01

    A previous finding argues that, for faces, configural (holistic) processing can operate even in the complete absence of part-based contributions to recognition. Here, this result is confirmed using 2 methods. In both, recognition of inverted faces (parts only) was removed altogether (chance identification of faces in the periphery; no perception…

  1. Recognition of Moving and Static Faces by Young Infants

    Science.gov (United States)

    Otsuka, Yumiko; Konishi, Yukuo; Kanazawa, So; Yamaguchi, Masami K.; Abdi, Herve; O'Toole, Alice J.

    2009-01-01

    This study compared 3- to 4-month-olds' recognition of previously unfamiliar faces learned in a moving or a static condition. Infants in the moving condition showed successful recognition with only 30 s familiarization, even when different images of a face were used in the familiarization and test phase (Experiment 1). In contrast, infants in the…

  2. Transfer between Pose and Illumination Training in Face Recognition

    Science.gov (United States)

    Liu, Chang Hong; Bhuiyan, Md. Al-Amin; Ward, James; Sui, Jie

    2009-01-01

    The relationship between pose and illumination learning in face recognition was examined in a yes-no recognition paradigm. The authors assessed whether pose training can transfer to a new illumination or vice versa. Results show that an extensive level of pose training through a face-name association task was able to generalize to a new…

  3. Recognition of human face based on improved multi-sample

    Institute of Scientific and Technical Information of China (English)

    LIU Xia; LI Lei-lei; LI Ting-jun; LIU Lu; ZHANG Ying

    2009-01-01

    In order to solve the problem caused by illumination variation in human face recognition, we propose a face recognition algorithm based on an improved multi-sample approach. In this algorithm, the face image is processed with Retinex theory, and a Gabor filter is adopted for feature extraction. The experimental results show that the application of Retinex theory improves recognition accuracy and makes the algorithm more robust to illumination variation, while the Gabor filter is more effective and accurate at extracting usable local facial features. The proposed algorithm is shown to have good recognition accuracy and to be stable under varying illumination.
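
    The two stages named in the abstract, Retinex-based illumination normalisation and Gabor feature extraction, can be approximated as below in Python/OpenCV. The sketch uses a single-scale Retinex and a small four-orientation Gabor bank with illustrative parameter values; it is not the paper's exact implementation.

```python
# Sketch: single-scale Retinex preprocessing followed by Gabor-filter features.
import cv2
import numpy as np

def single_scale_retinex(gray, sigma=30):
    img = gray.astype(np.float64) + 1.0
    blur = cv2.GaussianBlur(img, (0, 0), sigma)
    return np.log(img) - np.log(blur)              # illumination-normalised image

def gabor_features(img, ksize=15, sigma=3.0, lambd=8.0, gamma=0.5):
    feats = []
    for theta in np.arange(0, np.pi, np.pi / 4):   # four orientations
        kern = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma)
        resp = cv2.filter2D(img, cv2.CV_64F, kern)
        feats += [np.abs(resp).mean(), np.abs(resp).std()]   # coarse energy stats
    return np.array(feats)

# "face.png" below is a placeholder path for an aligned grayscale face image:
# features = gabor_features(single_scale_retinex(cv2.imread("face.png", 0)))
```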

  4. An infrared human face recognition method based on 2DPCA

    Institute of Scientific and Technical Information of China (English)

    LIU Xia; Li Ting-jun

    2009-01-01

    Aimed at the problems of infrared image recognition under varying illumination, face disguise, etc., we propose an infrared human face recognition algorithm based on 2DPCA. The proposed algorithm computes the covariance matrix of the training samples easily and directly, and it takes less time to compute the eigenvectors. Relevant experiments were carried out, and the results indicate that, compared with the traditional recognition algorithm, the proposed method is fast and adapts well to changes in human face posture.
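
    The defining step of 2DPCA, building the image covariance matrix directly from the 2-D images rather than from flattened vectors, is small enough to sketch in NumPy. The code below is a generic 2DPCA, not the authors' infrared-specific implementation; `k` and the nearest-neighbour rule are illustrative choices.

```python
# Minimal 2DPCA: image covariance matrix, projection, nearest-neighbour matching.
import numpy as np

def train_2dpca(images, k=10):
    """images: (n, h, w) array of aligned face images."""
    A = images.astype(float)
    mean = A.mean(axis=0)
    # Image scatter matrix: average of (A_i - mean)^T (A_i - mean), size (w, w).
    G = np.mean([(a - mean).T @ (a - mean) for a in A], axis=0)
    _, eigvecs = np.linalg.eigh(G)
    W = eigvecs[:, -k:]                    # top-k projection axes
    return W, A @ W                        # each face becomes an (h, k) feature matrix

def classify_2dpca(test_img, W, train_feats, labels):
    f = test_img.astype(float) @ W
    d = np.linalg.norm(train_feats - f, axis=(1, 2))
    return labels[d.argmin()]
```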

  5. When the face fits: recognition of celebrities from matching and mismatching faces and voices.

    Science.gov (United States)

    Stevenage, Sarah V; Neil, Greg J; Hamlin, Iain

    2014-01-01

    The results of two experiments are presented in which participants engaged in a face-recognition or a voice-recognition task. The stimuli were face-voice pairs in which the face and voice were co-presented and were either "matched" (same person), "related" (two highly associated people), or "mismatched" (two unrelated people). Analysis in both experiments confirmed that accuracy and confidence in face recognition was consistently high regardless of the identity of the accompanying voice. However accuracy of voice recognition was increasingly affected as the relationship between voice and accompanying face declined. Moreover, when considering self-reported confidence in voice recognition, confidence remained high for correct responses despite the proportion of these responses declining across conditions. These results converged with existing evidence indicating the vulnerability of voice recognition as a relatively weak signaller of identity, and results are discussed in the context of a person-recognition framework.

  6. A Review on Feature Extraction Techniques in Face Recognition

    Directory of Open Access Journals (Sweden)

    Rahimeh Rouhi

    2013-01-01

    Face recognition systems, due to their significant applications in security, have been of great importance in recent years. An exact balance between computing cost, robustness, and recognition ability is an important characteristic of such systems. Moreover, designing systems that perform under different conditions (e.g., illumination, pose variation, and different expressions) is a challenging problem in feature extraction for face recognition. As feature extraction is an important step in the face recognition process, the present study reviews four feature extraction techniques for face recognition, presents comparative results, and then discusses the advantages and disadvantages of these methods.

  7. Local Feature Learning for Face Recognition under Varying Poses

    DEFF Research Database (Denmark)

    Duan, Xiaodong; Tan, Zheng-Hua

    2015-01-01

    In this paper, we present a local feature learning method for face recognition to deal with varying poses. As opposed to the commonly used approaches of recovering frontal face images from profile views, the proposed method extracts the subject-related part from a local feature by removing the pose-related part in it on the basis of a pose feature. The method has a closed-form solution, hence being time efficient. For performance evaluation, cross-pose face recognition experiments are conducted on two public face recognition databases, FERET and FEI. The proposed method shows a significant recognition improvement under varying poses over general local feature approaches and outperforms or is comparable with related state-of-the-art pose-invariant face recognition approaches. Copyright ©2015 by IEEE.

  8. A Real-Time Face Recognition System Using Eigenfaces

    Directory of Open Access Journals (Sweden)

    Daniel Georgescu

    2011-12-01

    A real-time system for recognizing faces in a video stream provided by a surveillance camera was implemented, including real-time face detection. Both the face detection and face recognition techniques are briefly presented, without skipping the important technical aspects. The approach was essentially to implement and verify the eigenfaces-for-recognition algorithm, which solves the recognition problem for two-dimensional representations of faces using principal component analysis. The snapshots, which are the input images of the proposed system, are projected into a face space (feature space) that best captures the variation of the training set of face images. The face space is defined by the 'eigenfaces', the eigenvectors of the set of faces, and each eigenface contributes to the reconstruction of a new face image projected onto the face space with a meaningful coefficient (its weight). The projection of the new image into this feature space is then compared with the stored projections of the training set to identify the person using the Euclidean distance. The implemented system performs real-time face detection and recognition and can provide feedback by displaying a window with the subject's information from the database and sending an e-mail notification to interested institutions.

  9. Unaware person recognition from the body when face identification fails.

    Science.gov (United States)

    Rice, Allyson; Phillips, P Jonathon; Natu, Vaidehi; An, Xiaobo; O'Toole, Alice J

    2013-11-01

    How does one recognize a person when face identification fails? Here, we show that people rely on the body but are unaware of doing so. State-of-the-art face-recognition algorithms were used to select images of people with almost no useful identity information in the face. Recognition of the face alone in these cases was near chance level, but recognition of the person was accurate. Accuracy in identifying the person without the face was identical to that in identifying the whole person. Paradoxically, people reported relying heavily on facial features over noninternal face and body features in making their identity decisions. Eye movements indicated otherwise, with gaze duration and fixations shifting adaptively toward the body and away from the face when the body was a better indicator of identity than the face. This shift occurred with no cost to accuracy or response time. Human identity processing may be partially inaccessible to conscious awareness.

  10. Face averages enhance user recognition for smartphone security.

    Science.gov (United States)

    Robertson, David J; Kramer, Robin S S; Burton, A Mike

    2015-01-01

    Our recognition of familiar faces is excellent, and generalises across viewing conditions. However, unfamiliar face recognition is much poorer. For this reason, automatic face recognition systems might benefit from incorporating the advantages of familiarity. Here we put this to the test using the face verification system available on a popular smartphone (the Samsung Galaxy). In two experiments we tested the recognition performance of the smartphone when it was encoded with an individual's 'face-average'--a representation derived from theories of human face perception. This technique significantly improved performance for both unconstrained celebrity images (Experiment 1) and for real faces (Experiment 2): users could unlock their phones more reliably when the device stored an average of the user's face than when they stored a single image. This advantage was consistent across a wide variety of everyday viewing conditions. Furthermore, the benefit did not reduce the rejection of imposter faces. This benefit is brought about solely by consideration of suitable representations for automatic face recognition, and we argue that this is just as important as development of matching algorithms themselves. We propose that this representation could significantly improve recognition rates in everyday settings.
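
    The enrolment idea is simple to illustrate: instead of a single photo, the stored template is a pixel-wise average of several roughly aligned images of the user. The NumPy sketch below assumes the alignment has already been done; it says nothing about the phone's proprietary matcher.

```python
# Sketch: build a 'face average' template from several aligned enrolment photos.
import numpy as np

def face_average(aligned_images):
    """aligned_images: iterable of same-size grayscale face images (uint8)."""
    stack = np.stack([img.astype(np.float64) for img in aligned_images])
    return stack.mean(axis=0).astype(np.uint8)
```

    Verification would then compare a probe image against this average rather than against any single enrolment photo.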

  11. Pose-Invariant Face Recognition via RGB-D Images

    Directory of Open Access Journals (Sweden)

    Gaoli Sang

    2016-01-01

    Three-dimensional (3D) face models can intrinsically handle large pose face recognition problem. In this paper, we propose a novel pose-invariant face recognition method via RGB-D images. By employing depth, our method is able to handle self-occlusion and deformation, both of which are challenging problems in two-dimensional (2D) face recognition. Texture images in the gallery can be rendered to the same view as the probe via depth. Meanwhile, depth is also used for similarity measure via frontalization and symmetric filling. Finally, both texture and depth contribute to the final identity estimation. Experiments on Bosphorus, CurtinFaces, Eurecom, and Kiwi databases demonstrate that the additional depth information has improved the performance of face recognition with large pose variations and under even more challenging conditions.

  12. PARTIAL MATCHING FACE RECOGNITION METHOD FOR REHABILITATION NURSING ROBOTS BEDS

    Directory of Open Access Journals (Sweden)

    Dongmei LIANG

    2015-06-01

    In order to establish a face recognition system for rehabilitation nursing robot beds and achieve real-time monitoring of the patient on the bed, we propose a face recognition method based on partial matching of Hu moments. First, we use a Haar classifier to detect human faces automatically in dynamic video frames. Second, we use Otsu's threshold method to extract facial features (eyebrows, eyes, mouth) from the face image and compute their Hu moments. Finally, we use the Hu moment feature set to achieve automatic face recognition. Experimental results show that this method can efficiently identify faces in dynamic video and has high practical value (the accuracy rate is 91% and the average recognition time is 4.3 s).
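
    The feature stage described above (Otsu binarisation of a facial region followed by Hu-moment extraction) maps directly onto OpenCV calls. The sketch below is a generic version with an assumed log-scaling and Euclidean comparison; the paper's partial-matching rule over the eyebrow, eye and mouth regions is not reproduced.

```python
# Sketch: Otsu thresholding + Hu-moment signature for one facial region.
import cv2
import numpy as np

def hu_signature(region_gray):
    _, binary = cv2.threshold(region_gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    hu = cv2.HuMoments(cv2.moments(binary)).flatten()
    # Log scaling keeps the seven moments within comparable ranges.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)

def region_distance(sig_a, sig_b):
    return np.linalg.norm(sig_a - sig_b)       # smaller = more similar regions
```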

  13. Graph Laplace for occluded face completion and recognition.

    Science.gov (United States)

    Deng, Yue; Dai, Qionghai; Zhang, Zengke

    2011-08-01

    This paper proposes a spectral-graph-based algorithm for face image repairing, which can improve recognition performance on occluded faces. The face completion algorithm proposed in this paper includes three main procedures: 1) sparse representation for partially occluded face classification; 2) image-based data mining; and 3) graph Laplace (GL) for face image completion. The novel part of the proposed framework is GL, named after graphical models and the Laplace equation, which can achieve high-quality repair of damaged or occluded faces. The relationship between GL and the traditional Poisson equation is proven. We apply our face repairing algorithm to produce completed faces and use face recognition to evaluate the performance of the algorithm. Experimental results verify the effectiveness of the GL method for occluded face completion.
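
    As a compact stand-in for the completion step, the sketch below fills occluded pixels by solving the discrete Laplace equation with the visible pixels as boundary values (harmonic inpainting). The paper's GL method additionally relies on sparse classification and image-based data mining, which are not reproduced here; all names are illustrative.

```python
# Harmonic (Laplace-equation) fill of a masked face region with a sparse solve.
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

def laplace_fill(img, mask):
    """img: (h, w) float image; mask: boolean array, True where occluded."""
    h, w = img.shape
    ys, xs = np.nonzero(mask)
    index = -np.ones((h, w), dtype=int)
    index[ys, xs] = np.arange(len(ys))
    A = lil_matrix((len(ys), len(ys)))
    b = np.zeros(len(ys))
    for k, (y, x) in enumerate(zip(ys, xs)):
        A[k, k] = 4.0
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if not (0 <= ny < h and 0 <= nx < w):
                A[k, k] -= 1.0                  # image border: drop this neighbour
            elif mask[ny, nx]:
                A[k, index[ny, nx]] = -1.0      # unknown neighbour
            else:
                b[k] += img[ny, nx]             # known (visible) neighbour
    out = img.copy()
    out[ys, xs] = spsolve(A.tocsr(), b)
    return out
```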

  14. The activation of visual face memory and explicit face recognition are delayed in developmental prosopagnosia.

    Science.gov (United States)

    Parketny, Joanna; Towler, John; Eimer, Martin

    2015-08-01

    Individuals with developmental prosopagnosia (DP) are strongly impaired in recognizing faces, but the causes of this deficit are not well understood. We employed event-related brain potentials (ERPs) to study the time-course of neural processes involved in the recognition of previously unfamiliar faces in DPs and in age-matched control participants with normal face recognition abilities. Faces of different individuals were presented sequentially in one of three possible views, and participants had to detect a specific Target Face ("Joe"). EEG was recorded during task performance to Target Faces, Nontarget Faces, or the participants' Own Face (which had to be ignored). The N250 component was measured as a marker of the match between a seen face and a stored representation in visual face memory. The subsequent P600f was measured as an index of attentional processes associated with the conscious awareness and recognition of a particular face. Target Faces elicited reliable N250 and P600f in the DP group, but both of these components emerged later in DPs than in control participants. This shows that the activation of visual face memory for previously unknown learned faces and the subsequent attentional processing and conscious recognition of these faces are delayed in DP. N250 and P600f components to Own Faces did not differ between the two groups, indicating that the processing of long-term familiar faces is less affected in DP. However, P600f components to Own Faces were absent in two participants with DP who failed to recognize their Own Face during the experiment. These results provide new evidence that face recognition deficits in DP may be linked to a delayed activation of visual face memory and explicit identity recognition mechanisms.

  15. Face recognition using improved-LDA with facial combined feature

    Institute of Scientific and Technical Information of China (English)

    Dake Zhou; Xin Yang; Ningsong Peng

    2005-01-01

    Face recognition subjected to various conditions is a challenging task. This paper presents a combined-feature improved Fisher classifier method for face recognition. Both holistic facial information and local information are used for face representation. In addition, the improved linear discriminant analysis (I-LDA) is employed for good generalization capability. Experiments show that the method is not only robust to moderate changes of illumination, pose and facial expression but also superior to traditional methods, such as eigenfaces and Fisherfaces.

  16. A Novel Face Segmentation Algorithm from a Video Sequence for Real-Time Face Recognition

    Directory of Open Access Journals (Sweden)

    Sudhaker Samuel RD

    2007-01-01

    The first step in an automatic face recognition system is to localize the face region in a cluttered background and carefully segment the face from each frame of a video sequence. In this paper, we propose a fast and efficient algorithm for segmenting a face suitable for recognition from a video sequence. The cluttered background is first subtracted from each frame; in the foreground regions, a coarse face region is found using skin colour. Then, using a dynamic template matching approach, the face is efficiently segmented. The proposed algorithm is fast, suitable for real-time video sequences, and invariant to large scale and pose variation. The segmented face is then handed over to a recognition algorithm based on principal component analysis and linear discriminant analysis. The online face detection, segmentation, and recognition algorithms take an average of 0.06 seconds on a 3.2 GHz P4 machine.

  17. An Approach to Face Recognition of 2-D Images Using Eigen Faces and PCA

    Directory of Open Access Journals (Sweden)

    Annapurna Mishra

    2012-04-01

    Face detection is the task of finding any face in a given image, and face recognition is treated here as a two-dimensional problem. The information contained in a face, such as identity, gender, expression, age, race and pose, can be analysed automatically by this system. Face detection is normally performed on a single image but can also be extended to a video stream. As face images are normally upright, they can be described by a small set of 2-D characteristic views. Here, the face images are projected into a feature space, or face space, that encodes the variation between the known face images. This projected feature space is defined by the 'eigenfaces', the eigenvectors of the face image set, and the process can be used to recognize a new face in an unsupervised manner. This paper introduces an algorithm for effective face recognition that takes into consideration not only the face extraction but also the mathematical operations that bring the image into a simple and tractable form. It can also be implemented in real time using data acquisition hardware and a software interface with the face recognition system. Face recognition can be applied to various domains, including security systems, personal identification, image and film processing, and human-computer interaction.

  18. A ROBUST EYE LOCALIZATION ALGORITHM FOR FACE RECOGNITION

    Institute of Scientific and Technical Information of China (English)

    Zhang Wencong; Li Xin; Yao Peng; Li Bin; Zhuang Zhenquan

    2008-01-01

    The accuracy of face alignment greatly affects the performance of a face recognition system. Since face alignment is usually conducted using eye positions, an algorithm for accurate eye localization is essential for accurate face recognition. In this paper, an algorithm is proposed for eye localization. First, an AdaBoost detector is adaptively trained to segment the eye region, based on the particular gray-level distribution in that region. After that, a fast radial symmetry operator is used to precisely locate the centers of the eyes. Experimental results show that the method can accurately locate the eyes, and it is robust to variations in face pose, illumination, expression, and accessories.

  19. A Neural Model of Face Recognition: a Comprehensive Approach

    Science.gov (United States)

    Stara, Vera; Montesanto, Anna; Puliti, Paolo; Tascini, Guido; Sechi, Cristina

    Visual recognition of faces is an essential behavior of humans: we achieve optimal performance in everyday life, and it is just such a performance that makes us able to establish the continuity of actors in our social life and to quickly identify and categorize people. This remarkable ability justifies the general interest in face recognition among researchers in different fields, and especially among designers of biometric identification systems able to recognize the features of a person's face against a background. Due to the interdisciplinary nature of this topic, in this contribution we deal with face recognition through a comprehensive approach, with the purpose of reproducing some features of human performance, as evidenced by studies in psychophysics and neuroscience, relevant to face recognition. This approach views face recognition as an emergent phenomenon resulting from the nonlinear interaction of a number of different features. For this reason our model of face recognition has been based on a computational system implemented through an artificial neural network. This synergy between neuroscience and engineering efforts allowed us to implement a model that has biological plausibility, performs the same tasks as human subjects, and gives a possible account of human face perception and recognition. In this regard the paper reports on an experimental study of the performance of a SOM-based neural network in a face recognition task, with reference both to the ability to learn to discriminate different faces, and to the ability to recognize a face already encountered in the training phase when presented in a pose or with an expression differing from the one present in the training context.

  20. Face Recognition Using Local Quantized Patterns and Gabor Filters

    Science.gov (United States)

    Khryashchev, V.; Priorov, A.; Stepanova, O.; Nikitin, A.

    2015-05-01

    The problem of face recognition in a natural or artificial environment has received a great deal of researchers' attention over the last few years, and a lot of methods for accurate face recognition have been proposed. Nevertheless, these methods often fail to accurately recognize the person in difficult scenarios, e.g. low resolution, low contrast, pose variations, etc. We therefore propose an approach for accurate and robust face recognition using local quantized patterns and Gabor filters. The estimation of the eye centers is used as a preprocessing stage. The evaluation of our algorithm on different samples from the standardized FERET database shows that our method is invariant to general variations of lighting, expression, occlusion and aging. The proposed approach yields about a 20% increase in correct recognition accuracy compared with the known face recognition algorithms from the OpenCV library. The additional use of Gabor filters can significantly improve the robustness to changes in lighting conditions.
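
    Local quantized patterns generalise local binary patterns (LBP), so the sketch below uses plain uniform LBP from scikit-image as a stand-in, combined with a small Gabor bank, to show how the two kinds of features can be pooled into one descriptor. Parameter values and the histogram/statistics choices are illustrative, not those of the paper.

```python
# Approximate sketch: LBP histogram (a special case of LQP) + Gabor statistics.
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_gabor_descriptor(gray):
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")   # values 0..9
    hist, _ = np.histogram(lbp, bins=np.arange(0, 11), density=True)
    gabor = []
    for theta in np.arange(0, np.pi, np.pi / 4):
        kern = cv2.getGaborKernel((15, 15), 3.0, theta, 8.0, 0.5)
        resp = cv2.filter2D(gray.astype(np.float64), cv2.CV_64F, kern)
        gabor += [np.abs(resp).mean(), np.abs(resp).std()]
    return np.concatenate([hist, gabor])
```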

  1. Tolerance of geometric distortions in infant's face recognition.

    Science.gov (United States)

    Yamashita, Wakayo; Kanazawa, So; Yamaguchi, Masami K

    2014-02-01

    The aim of the current study is to reveal the effect of global linear transformations (shearing, horizontal stretching, and vertical stretching) on the recognition of familiar faces (e.g., a mother's face) in 6- to 7-month-old infants. In this experiment, we applied the global linear transformations to both the infants' own mother's face and to a stranger's face, and we tested infants' preference between these faces. We found that only 7-month-old infants maintained preference for their own mother's face during the presentation of vertical stretching, while the preference for the mother's face disappeared during the presentation of shearing or horizontal stretching. These findings suggest that 7-month-old infants might not recognize faces based on calculating the absolute distance between facial features, and that the vertical dimension of facial features might be more related to infants' face recognition rather than the horizontal dimension.

  2. 2D Methods for pose invariant face recognition

    CSIR Research Space (South Africa)

    Mokoena, Ntabiseng

    2016-12-01

    The ability to recognise face images under random pose is a task that is done effortlessly by human beings. However, for a computer system, recognising face images under varying poses still remains an open research area. Face recognition across pose...

  3. Impact of eye detection error on face recognition performance

    NARCIS (Netherlands)

    Dutta, A.; Günther, Manuel; El Shafey, Laurent; Veldhuis, Raymond N.J.; Spreeuwers, Lieuwe Jan

    2015-01-01

    The locations of the eyes are the most commonly used features to perform face normalisation (i.e. alignment of facial features), which is an essential preprocessing stage of many face recognition systems. In this study, the authors study the sensitivity of open source implementations of five face

  4. Impact of eye detection error on face recognition performance

    NARCIS (Netherlands)

    Dutta, Abhishek; Günther, Manuel; El Shafey, Laurent; Veldhuis, Raymond; Spreeuwers, Luuk

    2015-01-01

    The locations of the eyes are the most commonly used features to perform face normalisation (i.e. alignment of facial features), which is an essential preprocessing stage of many face recognition systems. In this study, the authors study the sensitivity of open source implementations of five face re

  5. Multi-feature fusion for thermal face recognition

    Science.gov (United States)

    Bi, Yin; Lv, Mingsong; Wei, Yangjie; Guan, Nan; Yi, Wang

    2016-07-01

    Human face recognition has been researched for the last three decades. Face recognition with thermal images now attracts significant attention since it can be used in low- or non-illuminated environments. However, thermal face recognition performance is still insufficient for practical applications. One main reason is that most existing work leverages only a single feature to characterize a face in a thermal image. To solve this problem, we propose multi-feature fusion, a technique that combines multiple features in thermal face characterization and recognition. In this work, we designed a systematic way to combine four features: local binary patterns, the Gabor jet descriptor, the Weber local descriptor and a down-sampling feature. Experimental results show that our approach outperforms methods that leverage only a single feature and is robust to noise, occlusion, expression, low resolution and different l1-minimization methods.
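
    The fusion step itself can be shown independently of the four extractors: each per-image feature vector is put on a common scale and concatenated before matching. The sketch below assumes the LBP, Gabor-jet and Weber descriptors are computed elsewhere and only shows the down-sampling feature and a simple z-score fusion, which may differ from the paper's exact fusion rule.

```python
# Sketch of multi-feature fusion: normalise each feature vector, then concatenate.
import numpy as np

def downsample_feature(img, factor=4):
    """Down-sampling feature: a coarse pixel grid from the thermal image."""
    return np.asarray(img, dtype=float)[::factor, ::factor].ravel()

def fuse_features(feature_list):
    """feature_list: 1-D feature vectors computed from the same thermal image."""
    fused = []
    for f in feature_list:
        f = np.asarray(f, dtype=float)
        scale = f.std() if f.std() > 0 else 1.0
        fused.append((f - f.mean()) / scale)   # z-score puts features on one scale
    return np.concatenate(fused)
```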

  6. Face Memory and Object Recognition in Children with High-Functioning Autism or Asperger Syndrome and in Their Parents

    Science.gov (United States)

    Kuusikko-Gauffin, Sanna; Jansson-Verkasalo, Eira; Carter, Alice; Pollock-Wurman, Rachel; Jussila, Katja; Mattila, Marja-Leena; Rahko, Jukka; Ebeling, Hanna; Pauls, David; Moilanen, Irma

    2011-01-01

    Children with Autism Spectrum Disorders (ASDs) have been reported to have impairments in face recognition and face memory, but intact object recognition and object memory. Potential abnormalities in these fields at the family level of high-functioning children with ASD remain understudied despite the ever-mounting evidence that ASDs are genetic and…

  7. 3D face database for human pattern recognition

    Science.gov (United States)

    Song, LiMei; Lu, Lu

    2008-10-01

    Face recognition is essential for ensuring human safety and is an important task in biomedical engineering. A 2D image is not enough for precise face recognition; 3D face data include more exact information, such as the precise size of the eyes, mouth, etc. A 3D face database is therefore an important part of human pattern recognition. There are many methods to acquire 3D data, such as 3D laser scanning, 3D phase measurement, shape from shading, and shape from motion. This paper introduces a non-orbit, non-contact, non-laser 3D measurement system whose main idea comes from the shape-from-stereo technique. Two cameras are used at different angles, and a sequence of light patterns is projected onto the face. Human faces, heads, teeth and bodies can all be measured by the system. The visualization data of each person can form a large 3D face database, which can be used in human recognition. The 3D data provide a vivid copy of a face, so recognition exactness can approach 100%. Although the 3D data are larger than a 2D image, they can be used in settings that involve only a few people, such as recognition within a family or a small company.

  8. Facial emotion recognition, face scan paths, and face perception in children with neurofibromatosis type 1.

    Science.gov (United States)

    Lewis, Amelia K; Porter, Melanie A; Williams, Tracey A; Bzishvili, Samantha; North, Kathryn N; Payne, Jonathan M

    2017-05-01

    This study aimed to investigate face scan paths and face perception abilities in children with Neurofibromatosis Type 1 (NF1) and how these might relate to emotion recognition abilities in this population. The authors investigated facial emotion recognition, face scan paths, and face perception in 29 children with NF1 compared to 29 chronological age-matched typically developing controls. Correlations between facial emotion recognition, face scan paths, and face perception in children with NF1 were examined. Children with NF1 displayed significantly poorer recognition of fearful expressions compared to controls, as well as a nonsignificant trend toward poorer recognition of anger. Although there was no significant difference between groups in time spent viewing individual core facial features (eyes, nose, mouth, and nonfeature regions), children with NF1 spent significantly less time than controls viewing the face as a whole. Children with NF1 also displayed significantly poorer face perception abilities than typically developing controls. Facial emotion recognition deficits were not significantly associated with aberrant face scan paths or face perception abilities in the NF1 group. These results suggest that impairments in the perception, identification, and interpretation of information from faces are important aspects of the social-cognitive phenotype of NF1. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  9. Efficient Face Recognition in Video by Bit Planes Slicing

    Directory of Open Access Journals (Sweden)

    Srinivasa R. Inbathini

    2012-01-01

    Problem statement: Video-based face recognition must be able to overcome imaging interference such as pose and illumination. Approach: A model was designed for face recognition based on video sequences as well as test images. In the training stage, a single frontal image is taken as input to the recognition system, and a new virtual image is generated using bit-plane feature fusion to effectively reduce the sensitivity to illumination variation. A self-PCA is performed to get each set of eigenfaces and the projected images. In the recognition stage, an automatic face detection scheme is first applied to the video sequences; frames are extracted from the video and a virtual frame is created. Each bit plane of the test face is extracted and the feature fusion face is constructed, followed by projection and reconstruction using each set of the corresponding eigenfaces. Results: The algorithm was compared with the conventional PCA algorithm. The minimum reconstruction error is calculated; if the error is less than a threshold value, the face is recognized from the database. Conclusion: The bit-plane slicing mechanism is applied to video-based face recognition, and experimental results show that it is far superior to the conventional method under various pose and illumination conditions.
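
    The bit-plane step is easy to sketch: an 8-bit face image is sliced into its eight bit planes and a subset of them is recombined into a 'virtual' image that feeds the PCA stage. Which planes to fuse is a design choice; the selection below (the four most significant planes) is only an assumption for illustration.

```python
# Sketch: bit-plane slicing and fusion into a virtual face image.
import numpy as np

def bit_planes(gray):
    """gray: (h, w) uint8 image -> (8, h, w) array of 0/1 bit planes."""
    return np.stack([(gray >> k) & 1 for k in range(8)])

def fused_virtual_image(gray, planes=(7, 6, 5, 4)):
    slices = bit_planes(gray).astype(np.uint16)
    virtual = sum((2 ** k) * slices[k] for k in planes)   # recombine chosen planes
    return virtual.astype(np.uint8)
```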

  10. Video-based face recognition via convolutional neural networks

    Science.gov (United States)

    Bao, Tianlong; Ding, Chunhui; Karmoshi, Saleem; Zhu, Ming

    2017-06-01

    Face recognition has been widely studied recently, while video-based face recognition still remains a challenging task because of the low quality and large intra-class variation of face images captured from video. In this paper, we focus on two scenarios of video-based face recognition: 1) Still-to-Video (S2V) face recognition, i.e., querying a still face image against a gallery of video sequences; and 2) Video-to-Still (V2S) face recognition, the reverse of the S2V scenario. A novel method is proposed in this paper to map still and video face images to a Euclidean space using a carefully designed convolutional neural network, and Euclidean metrics are then used to measure the distance between still and video images. Identities of still and video images that are grouped as pairs are used as supervision. In the training stage, a joint loss function that measures the Euclidean distance between the predicted features of training pairs and expanded vectors of still images is optimized to minimize the intra-class variation, while the inter-class variation is guaranteed due to the large margin of still images. Transferred features are finally learned via the designed convolutional neural network. Experiments are performed on the COX face dataset, and the results show that our method achieves reliable performance compared with other state-of-the-art methods.
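
    The overall idea, one convolutional network embedding both stills and video frames into a common Euclidean space, can be sketched in a few lines of PyTorch. The architecture, input size and frame-averaging rule below are illustrative stand-ins, not the network or loss described in the paper.

```python
# Toy sketch: a shared CNN embedding for still-to-video face matching.
import torch
import torch.nn as nn

class FaceEmbedder(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.fc = nn.Linear(128, dim)

    def forward(self, x):                      # x: (batch, 1, 64, 64) face crops
        return self.fc(self.features(x).flatten(1))

def s2v_distance(model, still, frames):
    """still: (1, 1, 64, 64); frames: (n, 1, 64, 64) crops from one video clip."""
    with torch.no_grad():
        e_still = model(still)
        e_video = model(frames).mean(dim=0, keepdim=True)   # pool over the clip
    return torch.dist(e_still, e_video).item()              # smaller = same person
```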

  11. The Impact of Early Bilingualism on Face Recognition Processes

    Science.gov (United States)

    Kandel, Sonia; Burfin, Sabine; Méary, David; Ruiz-Tada, Elisa; Costa, Albert; Pascalis, Olivier

    2016-01-01

    Early linguistic experience has an impact on the way we decode audiovisual speech in face-to-face communication. The present study examined whether differences in visual speech decoding could be linked to a broader difference in face processing. To identify a phoneme we have to do an analysis of the speaker’s face to focus on the relevant cues for speech decoding (e.g., locating the mouth with respect to the eyes). Face recognition processes were investigated through two classic effects in face recognition studies: the Other-Race Effect (ORE) and the Inversion Effect. Bilingual and monolingual participants did a face recognition task with Caucasian faces (own race), Chinese faces (other race), and cars that were presented in an Upright or Inverted position. The results revealed that monolinguals exhibited the classic ORE. Bilinguals did not. Overall, bilinguals were slower than monolinguals. These results suggest that bilinguals’ face processing abilities differ from monolinguals’. Early exposure to more than one language may lead to a perceptual organization that goes beyond language processing and could extend to face analysis. We hypothesize that these differences could be due to the fact that bilinguals focus on different parts of the face than monolinguals, making them more efficient in other race face processing but slower. However, more studies using eye-tracking techniques are necessary to confirm this explanation. PMID:27486422

  12. The impact of early bilingualism on face recognition processes

    Directory of Open Access Journals (Sweden)

    Sonia Kandel

    2016-07-01

    Early linguistic experience has an impact on the way we decode audiovisual speech in face-to-face communication. The present study examined whether differences in visual speech decoding could be linked to a broader difference in face processing. To identify a phoneme we have to do an analysis of the speaker's face to focus on the relevant cues for speech decoding (e.g., locating the mouth with respect to the eyes). Face recognition processes were investigated through two classic effects in face recognition studies: the Other Race Effect (ORE) and the Inversion Effect. Bilingual and monolingual participants did a face recognition task with Caucasian faces (own race), Chinese faces (other race), and cars that were presented in an Upright or Inverted position. The results revealed that monolinguals exhibited the classic ORE. Bilinguals did not. Overall, bilinguals were slower than monolinguals. These results suggest that bilinguals' face processing abilities differ from monolinguals'. Early exposure to more than one language may lead to a perceptual organization that goes beyond language processing and could extend to face analysis. We hypothesize that these differences could be due to the fact that bilinguals focus on different parts of the face than monolinguals, making them more efficient in other race face processing but slower. However, more studies using eye-tracking techniques are necessary to confirm this explanation.

  13. Automatic landmark detection and face recognition for side-view face images

    NARCIS (Netherlands)

    Santemiz, Pinar; Spreeuwers, Luuk J.; Veldhuis, Raymond N.J.; Broemme, Arslan; Busch, Christoph

    2013-01-01

    In real-life scenarios where pose variation is up to side-view positions, face recognition becomes a challenging task. In this paper we propose an automatic side-view face recognition system designed for home-safety applications. Our goal is to recognize people as they pass through doors in order to

  14. Face Feature Extraction for Recognition Using Radon Transform

    Directory of Open Access Journals (Sweden)

    Justice Kwame Appati

    2016-07-01

    Full Text Available Face recognition has for some time been a challenging exercise, especially when it comes to recognizing faces under different poses. This is perhaps due to the use of inappropriate descriptors during the feature extraction stage. In this paper, a thorough examination of the Radon Transform as a face signature descriptor is carried out on a standard database. Global features are considered by constructing Gray Level Co-occurrence Matrices (GLCMs). Correlation, Energy, Homogeneity and Contrast are computed from each image to form the feature vector for recognition. We show that the transformed face signatures are robust and invariant to the different poses. With the statistical features extracted, face training classes are optimally separated through the use of a Support Vector Machine (SVM), while the recognition rate for test face images is computed based on the L1 norm.
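
    The pipeline above (Radon transform, GLCM statistics, SVM) can be approximated with standard libraries. The sketch below assumes scikit-image and scikit-learn, quantizes the sinogram before building the co-occurrence matrix, and simplifies the parameter choices and the L1-norm matching step; it is illustrative only.

      import numpy as np
      from skimage.transform import radon
      from skimage.feature import graycomatrix, graycoprops  # spelled 'greycomatrix' in older scikit-image
      from sklearn.svm import SVC

      def radon_glcm_features(image, angles=(0, 45, 90, 135)):
          # Radon transform of the face image, then GLCM statistics of the sinogram.
          sinogram = radon(image, theta=np.arange(0, 180), circle=False)
          s = sinogram - sinogram.min()
          s = (255 * s / (s.max() + 1e-12)).astype(np.uint8)  # quantize to 8-bit levels
          glcm = graycomatrix(s, distances=[1], angles=np.deg2rad(angles),
                              levels=256, symmetric=True, normed=True)
          props = ['correlation', 'energy', 'homogeneity', 'contrast']
          return np.hstack([graycoprops(glcm, p).ravel() for p in props])

      # Assumed inputs: train_images/test_images as 2-D arrays, train_labels as identities.
      # clf = SVC(kernel='linear').fit([radon_glcm_features(im) for im in train_images], train_labels)
      # predictions = clf.predict([radon_glcm_features(im) for im in test_images])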

  15. Original and Mirror Face Images and Minimum Squared Error Classification for Visible Light Face Recognition

    Directory of Open Access Journals (Sweden)

    Rong Wang

    2015-01-01

    Full Text Available In real-world applications, face images vary with illumination, facial expression, and pose. More training samples are therefore able to reveal more of the possible appearances of a face. Though minimum squared error classification (MSEC) is a widely used method, its application to face recognition usually suffers from a limited number of training samples. In this paper, we improve MSEC by using mirror faces as virtual training samples. We generate mirror faces from the original training samples and combine both kinds of samples into a new training set. Face recognition experiments show that our method obtains high classification accuracy.
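
    A minimal sketch of this idea, assuming grayscale face images as NumPy arrays: mirror faces are added as virtual samples and a least-squares (minimum squared error) classifier is trained on the enlarged set. Labels are duplicated to match the mirrored samples; details of the original method may differ.

      import numpy as np

      def augment_with_mirrors(images, labels):
          # Add the horizontally flipped (mirror) face of every training sample,
          # duplicating the labels accordingly.
          return images + [np.fliplr(im) for im in images], labels + labels

      def train_mse_classifier(images, labels, n_classes):
          # Minimum squared error classification: solve X W = Y (one-hot targets)
          # in the least-squares sense over the augmented training set.
          X = np.stack([im.ravel() for im in images]).astype(np.float64)
          Y = np.eye(n_classes)[labels]
          W, *_ = np.linalg.lstsq(X, Y, rcond=None)
          return W

      def classify(image, W):
          # Assign the class whose target dimension gets the largest response.
          return int(np.argmax(image.ravel().astype(np.float64) @ W))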

  16. Original and Mirror Face Images and Minimum Squared Error Classification for Visible Light Face Recognition.

    Science.gov (United States)

    Wang, Rong

    2015-01-01

    In real-world applications, face images vary with illumination, facial expression, and pose. More training samples are therefore able to reveal more of the possible appearances of a face. Though minimum squared error classification (MSEC) is a widely used method, its application to face recognition usually suffers from a limited number of training samples. In this paper, we improve MSEC by using mirror faces as virtual training samples. We generate mirror faces from the original training samples and combine both kinds of samples into a new training set. Face recognition experiments show that our method obtains high classification accuracy.

  17. Face Recognition Combining Eigen Features with a Parzen Classifier

    Institute of Scientific and Technical Information of China (English)

    SUN Xin; LIU Bing; LIU Ben-yong

    2005-01-01

    A face recognition scheme is proposed wherein a face image is preprocessed by pixel averaging and energy normalizing to reduce data dimension and the effect of brightness variation, followed by a Fourier transform to estimate the spectrum of the preprocessed image. Principal component analysis is conducted on the spectra of the face images to obtain eigen features. Combining the eigen features with a Parzen classifier, experiments are conducted on the ORL face database.
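
    A hedged sketch of such a scheme with off-the-shelf tools: the Fourier magnitude spectrum is reduced by PCA and the Parzen classifier is approximated with one kernel density estimate per identity (scikit-learn's KernelDensity). The pixel-averaging and energy-normalization preprocessing is omitted here.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.neighbors import KernelDensity

      def spectrum_features(image):
          # Magnitude spectrum of the (preprocessed) face image.
          return np.abs(np.fft.fft2(image)).ravel()

      def fit_parzen_faces(train_images, train_labels, n_components=20, bandwidth=1.0):
          X = np.stack([spectrum_features(im) for im in train_images])
          y = np.asarray(train_labels)
          pca = PCA(n_components=n_components).fit(X)
          Z = pca.transform(X)
          # One Parzen-window (kernel density) model per identity.
          kdes = {c: KernelDensity(bandwidth=bandwidth).fit(Z[y == c]) for c in set(train_labels)}
          return pca, kdes

      def parzen_classify(image, pca, kdes):
          z = pca.transform(spectrum_features(image).reshape(1, -1))
          # Pick the identity whose density model scores the probe highest.
          return max(kdes, key=lambda c: kdes[c].score_samples(z)[0])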

  18. Hybrid SVM/HMM Method for Face Recognition

    Institute of Scientific and Technical Information of China (English)

    刘江华; 陈佳品; 程君实

    2004-01-01

    A face recognition system based on Support Vector Machine (SVM) and Hidden Markov Model (HMM) is proposed. The powerful discriminative ability of the SVM is combined with the temporal modeling ability of the HMM. The SVM output is converted into a probability output, which replaces the Mixture of Gaussians (MoG) in the HMM. A wavelet transform is used to extract the observation vectors, which reduces the data dimension and improves robustness. The hybrid system is compared with a pure HMM face recognition method on the ORL and Yale face databases. Experimental results show that the hybrid method has better performance.

  19. Iterative closest normal point for 3D face recognition.

    Science.gov (United States)

    Mohammadzade, Hoda; Hatzinakos, Dimitrios

    2013-02-01

    The common approach for 3D face recognition is to register a probe face to each of the gallery faces and then calculate the sum of the distances between their points. This approach is computationally expensive and sensitive to facial expression variation. In this paper, we introduce the iterative closest normal point method for finding the corresponding points between a generic reference face and every input face. The proposed correspondence finding method samples a set of points for each face, denoted as the closest normal points. These points are effectively aligned across all faces, enabling effective application of discriminant analysis methods for 3D face recognition. As a result, the expression variation problem is addressed by minimizing the within-class variability of the face samples while maximizing the between-class variability. As an important conclusion, we show that the surface normal vectors of the face at the sampled points contain more discriminatory information than the coordinates of the points. We have performed comprehensive experiments on the Face Recognition Grand Challenge database, which is presently the largest available 3D face database. We have achieved verification rates of 99.6 and 99.2 percent at a false acceptance rate of 0.1 percent for the all versus all and ROC III experiments, respectively, which, to the best of our knowledge, represent error rates seven and four times lower, respectively, than those of the best existing methods on this database.

  20. Feature based sliding window technique for face recognition

    Science.gov (United States)

    Javed, Muhammad Younus; Mohsin, Syed Maajid; Anjum, Muhammad Almas

    2010-02-01

    Human beings are commonly identified by biometric schemes, which are concerned with identifying individuals by their unique physical characteristics. Passwords and personal identification numbers have been used to identify people for years; the disadvantages of these schemes are that someone else may use them or that they can easily be forgotten. Keeping these problems in view, biometric approaches such as face recognition, fingerprint, iris/retina and voice recognition have been developed, which provide a far better solution for identifying individuals. A number of methods have been developed for face recognition. This paper illustrates the use of Gabor filters for extracting facial features by constructing a sliding window frame. Classification is done by assigning to the unknown image the class label of the database image with which it shares the most similar features. The proposed system gives a recognition rate of 96%, which is better than many of the similar techniques being used for face recognition.
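
    The following sketch illustrates the general idea of Gabor features computed over a sliding window, using scikit-image's gabor filter; the filter-bank parameters, window size and matching rule are placeholders rather than the settings used by the authors.

      import numpy as np
      from skimage.filters import gabor

      def gabor_window_features(image, frequencies=(0.1, 0.2),
                                thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4),
                                window=16, step=8):
          # Filter the face with a small Gabor bank, then summarize each sliding
          # window by the mean and standard deviation of the response magnitude.
          feats = []
          for f in frequencies:
              for t in thetas:
                  real, imag = gabor(image, frequency=f, theta=t)
                  mag = np.hypot(real, imag)
                  for r in range(0, mag.shape[0] - window + 1, step):
                      for c in range(0, mag.shape[1] - window + 1, step):
                          patch = mag[r:r + window, c:c + window]
                          feats.extend([patch.mean(), patch.std()])
          return np.asarray(feats)

      # Classification could then assign the class of the database image whose
      # feature vector is closest to that of the unknown image.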

  1. Understanding eye movements in face recognition using hidden Markov models.

    Science.gov (United States)

    Chuk, Tim; Chan, Antoni B; Hsiao, Janet H

    2014-09-16

    We use a hidden Markov model (HMM) based approach to analyze eye movement data in face recognition. HMMs are statistical models that are specialized in handling time-series data. We conducted a face recognition task with Asian participants, and modeled each participant's eye movement pattern with an HMM, which summarized the participant's scan paths in face recognition with both regions of interest and the transition probabilities among them. By clustering these HMMs, we showed that participants' eye movements could be categorized into holistic or analytic patterns, demonstrating significant individual differences even within the same culture. Participants with the analytic pattern had longer response times, but did not differ significantly in recognition accuracy from those with the holistic pattern. We also found that correct and wrong recognitions were associated with distinctive eye movement patterns; the difference between the two patterns lies in the transitions rather than the locations of the fixations alone.
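
    A minimal sketch of modeling one participant's fixation sequences with a Gaussian HMM, assuming the third-party hmmlearn package; hidden states play the role of data-driven regions of interest, and the transition matrix summarizes how gaze moves between them.

      import numpy as np
      from hmmlearn.hmm import GaussianHMM  # assumed dependency: pip install hmmlearn

      def fit_scanpath_hmm(fixations, n_states=3):
          # fixations: list of arrays, one per trial, each of shape (n_fixations, 2)
          # holding the (x, y) coordinates of successive fixations.
          X = np.vstack(fixations)
          lengths = [len(f) for f in fixations]
          hmm = GaussianHMM(n_components=n_states, covariance_type='full', n_iter=100)
          hmm.fit(X, lengths)
          # State means approximate regions of interest; transmat_ holds the
          # transition probabilities among them.
          return hmm.means_, hmm.transmat_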

  2. Toward End-to-End Face Recognition Through Alignment Learning

    Science.gov (United States)

    Zhong, Yuanyi; Chen, Jiansheng; Huang, Bo

    2017-08-01

    Plenty of effective methods have been proposed for face recognition during the past decade. Although these methods differ essentially in many aspects, a common practice of them is to specifically align the facial area based on the prior knowledge of human face structure before feature extraction. In most systems, the face alignment module is implemented independently. This has actually caused difficulties in the designing and training of end-to-end face recognition models. In this paper we study the possibility of alignment learning in end-to-end face recognition, in which neither prior knowledge on facial landmarks nor artificially defined geometric transformations are required. Specifically, spatial transformer layers are inserted in front of the feature extraction layers in a Convolutional Neural Network (CNN) for face recognition. Only human identity clues are used for driving the neural network to automatically learn the most suitable geometric transformation and the most appropriate facial area for the recognition task. To ensure reproducibility, our model is trained purely on the publicly available CASIA-WebFace dataset, and is tested on the Labeled Faces in the Wild (LFW) dataset. We have achieved a verification accuracy of 99.08%, which is comparable to state-of-the-art single model based methods.
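
    A spatial transformer layer of the kind described can be sketched in PyTorch with affine_grid and grid_sample; the small localization network below is a toy example under an assumed single-channel input, not the architecture used in the paper.

      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      class SpatialTransformer(nn.Module):
          # Predicts an affine transform from the input image itself and warps the
          # image with it before the recognition layers; in end-to-end training it
          # is driven only by the identity loss of the downstream network.
          def __init__(self):
              super().__init__()
              self.loc = nn.Sequential(
                  nn.Conv2d(1, 8, 7), nn.MaxPool2d(2), nn.ReLU(),
                  nn.Conv2d(8, 10, 5), nn.MaxPool2d(2), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(10, 6))
              # Start from the identity transform.
              self.loc[-1].weight.data.zero_()
              self.loc[-1].bias.data.copy_(torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))

          def forward(self, x):
              theta = self.loc(x).view(-1, 2, 3)
              grid = F.affine_grid(theta, x.size(), align_corners=False)
              return F.grid_sample(x, grid, align_corners=False)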

  3. 3D Face Recognition with Sparse Spherical Representations

    CERN Document Server

    Llonch, R Sala; Tosic, I; Frossard, P

    2008-01-01

    This paper addresses the problem of 3D face recognition using simultaneous sparse approximations on the sphere. The 3D face point clouds are first aligned with a novel and fully automated registration process. They are then represented as signals on the 2D sphere in order to preserve depth and geometry information. Next, we implement a dimensionality reduction process with simultaneous sparse approximations and subspace projection. This permits each 3D face to be represented by only a few spherical functions that are able to capture the salient facial characteristics, and hence to preserve the discriminant facial information. We eventually perform recognition by effective matching in the reduced space, where Linear Discriminant Analysis can further be applied for improved recognition performance. The 3D face recognition algorithm is evaluated on the FRGC v.1.0 data set, where it is shown to outperform classical state-of-the-art solutions that work with depth images.

  4. Robust face recognition algorithm for identification of disaster victims

    Science.gov (United States)

    Gevaert, Wouter J. R.; de With, Peter H. N.

    2013-02-01

    We present a robust face recognition algorithm for the identification of occluded, injured and mutilated faces with a limited training set per person. In such cases, conventional face recognition methods fall short due to specific aspects of the classification. The proposed algorithm involves recursive Principal Component Analysis for reconstruction of affected facial parts, followed by a feature extractor based on Gabor wavelets and uniform multi-scale Local Binary Patterns. As a classifier, a Radial Basis Neural Network is employed. In terms of robustness to facial abnormalities, tests show that the proposed algorithm outperforms conventional face recognition algorithms like the Eigenfaces approach, Local Binary Patterns and the Gabor magnitude method. To mimic the real-life conditions in which the algorithm would have to operate, specific databases have been constructed and merged with parts of existing databases. Experiments on these particular databases show that the proposed algorithm achieves recognition rates beyond 95%.

  5. Feature Extraction based Face Recognition, Gender and Age Classification

    Directory of Open Access Journals (Sweden)

    Venugopal K R

    2010-01-01

    Full Text Available A face recognition system with large training sets for personal identification normally attains good accuracy. In this paper, we propose the Feature Extraction based Face Recognition, Gender and Age Classification (FEBFRGAC) algorithm, which uses only small training sets and yields good results even with one image per person. The process involves three stages: pre-processing, feature extraction and classification. The geometric features of facial images such as eyes, nose and mouth are located using the Canny edge operator, and face recognition is performed. Based on the texture and shape information, gender and age classification is done using the posteriori class probability and an artificial neural network, respectively. It is observed that face recognition accuracy is 100%, while gender and age classification accuracies are around 98% and 94%, respectively.

  6. An Approach to Face Recognition of 2-D Images Using Eigen Faces and PCA

    Directory of Open Access Journals (Sweden)

    Annapurna Mishra

    2012-05-01

    Full Text Available Face detection is to find any face in a given image. Face recognition is a two-dimension problem used for detecting faces. The information contained in a face can be analysed automatically by this system, like identity, gender, expression, age, race and pose. Normally face detection is done for a single image, but it can also be extended to a video stream. As the face images are normally upright, they can be described by a small set of 2-D characteristic views. Here the face images are projected to a feature space or face space to encode the variation between the known face images. The projected feature space or face space can be defined as ‘eigenfaces’ and can be formed by eigenvectors of the face image set. The above process can be used to recognize a new face in an unsupervised manner. This paper introduces an algorithm which is used for effective face recognition. It takes into consideration not only the face extraction but also the mathematical calculations which enable us to bring the image into a simple and technical form. It can also be implemented in real time using data acquisition hardware and a software interface with the face recognition systems. Face recognition can be applied to various domains including security systems, personal identification, image and film processing and human computer interaction.
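
    A compact eigenfaces sketch using scikit-learn's PCA, with recognition by nearest neighbour in the projected face space; it assumes aligned, flattened grayscale faces and is only an illustration of the approach described above.

      import numpy as np
      from sklearn.decomposition import PCA

      def build_face_space(train_faces, n_eigenfaces=50):
          # train_faces: array of shape (n_samples, h * w) of flattened face images.
          pca = PCA(n_components=n_eigenfaces)
          weights = pca.fit_transform(train_faces)  # projections onto the eigenfaces
          return pca, weights

      def recognize(probe_face, pca, weights, train_labels):
          # Project the probe into face space and return the closest training identity.
          w = pca.transform(probe_face.reshape(1, -1))
          distances = np.linalg.norm(weights - w, axis=1)
          return train_labels[int(np.argmin(distances))]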

  7. A Parallel Framework for Multilayer Perceptron for Human Face Recognition

    CERN Document Server

    Bhowmik, M K; Nasipuri, M; Basu, D K; Kundu, M

    2010-01-01

    Artificial neural networks have already shown their success in face recognition and similar complex pattern recognition tasks. However, a major disadvantage of the technique is that it is extremely slow during training for larger classes and hence not suitable for real-time complex problems such as pattern recognition. This is an attempt to develop a parallel framework for the training algorithm of a perceptron. In this paper, two general architectures for a Multilayer Perceptron (MLP) have been demonstrated. The first architecture is All-Class-in-One-Network (ACON) where all the classes are placed in a single network and the second one is One-Class-in-One-Network (OCON) where an individual single network is responsible for each and every class. Capabilities of these two architectures were compared and verified in solving human face recognition, which is a complex pattern recognition task where several factors affect the recognition performance like pose variations, facial expression changes, occlusions, and ...

  8. Neural correlates of recognition memory for emotional faces and scenes

    OpenAIRE

    Keightley, Michelle L.; Chiew, Kimberly S.; Anderson, John A. E.; Grady, Cheryl L.

    2010-01-01

    We examined the influence of emotional valence and type of item to be remembered on brain activity during recognition, using faces and scenes. We used multivariate analyses of event-related fMRI data to identify whole-brain patterns, or networks of activity. Participants demonstrated better recognition for scenes vs faces and for negative vs neutral and positive items. Activity was increased in extrastriate cortex and inferior frontal gyri for emotional scenes, relative to neutral scenes and ...

  9. Do people have insight into their face recognition abilities?

    Science.gov (United States)

    Palermo, Romina; Rossion, Bruno; Rhodes, Gillian; Laguesse, Renaud; Tez, Tolga; Hall, Bronwyn; Albonico, Andrea; Malaspina, Manuela; Daini, Roberta; Irons, Jessica; Al-Janabi, Shahd; Taylor, Libby C; Rivolta, Davide; McKone, Elinor

    2017-02-01

    Diagnosis of developmental or congenital prosopagnosia (CP) involves self-report of everyday face recognition difficulties, which are corroborated with poor performance on behavioural tests. This approach requires accurate self-evaluation. We examine the extent to which typical adults have insight into their face recognition abilities across four experiments involving nearly 300 participants. The experiments used five tests of face recognition ability: two that tap into the ability to learn and recognize previously unfamiliar faces [the Cambridge Face Memory Test, CFMT; Duchaine, B., & Nakayama, K. (2006). The Cambridge Face Memory Test: Results for neurologically intact individuals and an investigation of its validity using inverted face stimuli and prosopagnosic participants. Neuropsychologia, 44(4), 576-585. doi:10.1016/j.neuropsychologia.2005.07.001; and a newly devised test based on the CFMT but where the study phases involve watching short movies rather than viewing static faces-the CFMT-Films] and three that tap face matching [Benton Facial Recognition Test, BFRT; Benton, A., Sivan, A., Hamsher, K., Varney, N., & Spreen, O. (1983). Contribution to neuropsychological assessment. New York: Oxford University Press; and two recently devised sequential face matching tests]. Self-reported ability was measured with the 15-item Kennerknecht et al. questionnaire [Kennerknecht, I., Ho, N. Y., & Wong, V. C. (2008). Prevalence of hereditary prosopagnosia (HPA) in Hong Kong Chinese population. American Journal of Medical Genetics Part A, 146A(22), 2863-2870. doi:10.1002/ajmg.a.32552]; two single-item questions assessing face recognition ability; and a new 77-item meta-cognition questionnaire. Overall, we find that adults with typical face recognition abilities have only modest insight into their ability to recognize faces on behavioural tests. In a fifth experiment, we assess self-reported face recognition ability in people with CP and find that some people who expect to

  10. What Types of Visual Recognition Tasks Are Mediated by the Neural Subsystem that Subserves Face Recognition?

    Science.gov (United States)

    Brooks, Brian E.; Cooper, Eric E.

    2006-01-01

    Three divided visual field experiments tested current hypotheses about the types of visual shape representation tasks that recruit the cognitive and neural mechanisms underlying face recognition. Experiment 1 found a right hemisphere advantage for subordinate but not basic-level face recognition. Experiment 2 found a right hemisphere advantage for…

  11. REAL TIME FACE RECOGNITION USING ADABOOST IMPROVED FAST PCA ALGORITHM

    Directory of Open Access Journals (Sweden)

    K. Susheel Kumar

    2011-08-01

    Full Text Available This paper presents an automated system for human face recognition against a real-time background for a large homemade dataset of persons' faces. The task is very difficult, as real-time background subtraction in an image is still a challenge. In addition, there is huge variation in human face images in terms of size, pose and expression. The proposed system collapses most of this variance. To detect human faces in real time, AdaBoost with a Haar cascade is used, and simple fast PCA and LDA are used to recognize the detected faces. The matched face is then used to mark attendance in the laboratory, in our case. This biometric system is a real-time attendance system based on human face recognition, with simple and fast algorithms and a high accuracy rate.
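
    A rough sketch of such a pipeline with OpenCV and scikit-learn: Viola-Jones Haar-cascade detection (trained with AdaBoost) followed by PCA and LDA for recognition. The cascade file and parameters are standard OpenCV defaults, not the authors' settings.

      import cv2
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      detector = cv2.CascadeClassifier(
          cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

      def detect_faces(frame, size=(64, 64)):
          # Haar-cascade face detection on a video frame; crops are resized and flattened.
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
          boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
          return [cv2.resize(gray[y:y + h, x:x + w], size).ravel() for (x, y, w, h) in boxes]

      # X, y: flattened training faces and identity labels (assumed available).
      # pca = PCA(n_components=50).fit(X)
      # lda = LinearDiscriminantAnalysis().fit(pca.transform(X), y)
      # identities = [lda.predict(pca.transform(f.reshape(1, -1)))[0] for f in detect_faces(frame)]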

  12. Ethical aspects of face recognition systems in public places.

    NARCIS (Netherlands)

    Brey, Philip A.E.

    2004-01-01

    This essay examines ethical aspects of the use of facial recognition technology for surveillance purposes in public and semipublic areas, focusing particularly on the balance between security and privacy and civil liberties. As a case study, the FaceIt facial recognition engine of Identix Corporatio

  13. A comparative study of baseline algorithms of face recognition

    NARCIS (Netherlands)

    Mehmood, Zahid; Ali, Tauseef; Khattak, Shahid; Khan, Samee U.

    2014-01-01

    In this paper we present a comparative study of two well-known face recognition algorithms. The contribution of this work is to reveal the robustness of each FR algorithm with respect to various factors, such as variation in pose and low resolution of the images used for recognition. This evaluation

  14. Robust Multi biometric Recognition Using Face and Ear Images

    CERN Document Server

    Boodoo, Nazmeen Bibi

    2009-01-01

    This study investigates the use of ear as a biometric for authentication and shows experimental results obtained on a newly created dataset of 420 images. Images are passed to a quality module in order to reduce False Rejection Rate. The Principal Component Analysis (eigen ear) approach was used, obtaining 90.7 percent recognition rate. Improvement in recognition results is obtained when ear biometric is fused with face biometric. The fusion is done at decision level, achieving a recognition rate of 96 percent.

  15. Newborns' Face Recognition over Changes in Viewpoint

    Science.gov (United States)

    Turati, Chiara; Bulf, Hermann; Simion, Francesca

    2008-01-01

    The study investigated the origins of the ability to recognize faces despite rotations in depth. Four experiments are reported that tested, using the habituation technique, whether 1-to-3-day-old infants are able to recognize the invariant aspects of a face over changes in viewpoint. Newborns failed to recognize facial perceptual invariances…

  16. Novel averaging window filter for SIFT in infrared face recognition

    Institute of Scientific and Technical Information of China (English)

    Junfeng Bai; Yong Ma; Jing Li; Fan Fan; Hongyuan Wang

    2011-01-01

    The extraction of stable local features directly affects the performance of infrared face recognition algorithms. Recent studies on the application of the scale invariant feature transform (SIFT) to infrared face recognition show that the star-styled window filter (SWF) can filter out errors incorrectly introduced by SIFT. The current letter proposes an improved filter pattern called Y-styled window filter (YWF) to further eliminate wrong matches. Compared with SWF, YWF patterns are sparser and do not maintain rotation invariance; thus, they are more suitable for infrared face recognition. Our experimental results demonstrate that a YWF-based averaging window outperforms an SWF-based one in reducing wrong matches, therefore improving the reliability of infrared face recognition systems.

  17. Word and face recognition deficits following posterior cerebral artery stroke

    DEFF Research Database (Denmark)

    Kuhn, Christina D.; Asperud Thomsen, Johanne; Delfi, Tzvetelina

    2016-01-01

    Recent findings have challenged the existence of category specific brain areas for perceptual processing of words and faces, suggesting the existence of a common network supporting the recognition of both. We examined the performance of patients with focal lesions in posterior cortical areas to investigate whether deficits in recognition of words and faces systematically co-occur as would be expected if both functions rely on a common cerebral network. Seven right-handed patients with unilateral brain damage following stroke in areas supplied by the posterior cerebral artery were included (four with right hemisphere damage, three with left, tested at least 1 year post stroke). We examined word and face recognition using a delayed match-to-sample paradigm using four different categories of stimuli: cropped faces, full faces, words, and cars. Reading speed and word length effects...

  18. The Complete Gabor-Fisher Classifier for Robust Face Recognition

    Directory of Open Access Journals (Sweden)

    Štruc Vitomir

    2010-01-01

    Full Text Available This paper develops a novel face recognition technique called Complete Gabor Fisher Classifier (CGFC). Different from existing techniques that use Gabor filters for deriving the Gabor face representation, the proposed approach does not rely solely on Gabor magnitude information but effectively uses features computed based on Gabor phase information as well. It represents one of the few successful attempts found in the literature of combining Gabor magnitude and phase information for robust face recognition. The novelty of the proposed CGFC technique comes from (1) the introduction of a Gabor phase-based face representation and (2) the combination of the recognition technique using the proposed representation with classical Gabor magnitude-based methods into a unified framework. The proposed face recognition framework is assessed in a series of face verification and identification experiments performed on the XM2VTS, Extended YaleB, FERET, and AR databases. The results of the assessment suggest that the proposed technique clearly outperforms state-of-the-art face recognition techniques from the literature and that its performance is almost unaffected by the presence of partial occlusions of the facial area, changes in facial expression, or severe illumination changes.

  19. Error Rates in Users of Automatic Face Recognition Software.

    Science.gov (United States)

    White, David; Dunn, James D; Schmid, Alexandra C; Kemp, Richard I

    2015-01-01

    In recent years, wide deployment of automatic face recognition systems has been accompanied by substantial gains in algorithm performance. However, benchmarking tests designed to evaluate these systems do not account for the errors of human operators, who are often an integral part of face recognition solutions in forensic and security settings. This causes a mismatch between evaluation tests and operational accuracy. We address this by measuring user performance in a face recognition system used to screen passport applications for identity fraud. Experiment 1 measured target detection accuracy in algorithm-generated 'candidate lists' selected from a large database of passport images. Accuracy was notably poorer than in previous studies of unfamiliar face matching: participants made over 50% errors for adult target faces, and over 60% when matching images of children. Experiment 2 then compared performance of student participants to trained passport officers, who use the system in their daily work, and found equivalent performance in these groups. Encouragingly, a group of highly trained and experienced "facial examiners" outperformed these groups by 20 percentage points. We conclude that human performance curtails accuracy of face recognition systems, potentially reducing benchmark estimates by 50% in operational settings. Mere practice does not attenuate these limits, but superior performance of trained examiners suggests that recruitment and selection of human operators, in combination with effective training and mentorship, can improve the operational accuracy of face recognition systems.

  20. Developmental Changes in Face Recognition during Childhood: Evidence from Upright and Inverted Faces

    Science.gov (United States)

    de Heering, Adelaide; Rossion, Bruno; Maurer, Daphne

    2012-01-01

    Adults are experts at recognizing faces but there is controversy about how this ability develops with age. We assessed 6- to 12-year-olds and adults using a digitized version of the Benton Face Recognition Test, a sensitive tool for assessing face perception abilities. Children's response times for correct responses did not decrease between ages 6…

  1. Eye-tracking analysis of face observing and face recognition

    Directory of Open Access Journals (Sweden)

    Andrej Iskra

    2016-07-01

    Full Text Available Images are one of the key elements of the content of the World Wide Web. One group of web images is photos of people. When various institutions (universities, research organizations, companies, associations, etc.) present their staff, they should include photos of people for the purpose of a more informative presentation. The fact is that there are many specifics in how people see face images and how they remember them. Several methods can be used to investigate a person’s behavior during the use of web content, and one of the most reliable among them is eye tracking. It is a very common technique, particularly when it comes to observing web images. Our research focused on the behavior of observing face images in the process of memorizing them. Test participants were presented with face images shown at different time scales. We focused on three main face elements: eyes, mouth and nose. The results of our analysis can help not only in web presentations, which are, in principle, not limited by observation time, but especially in public presentations (conferences, symposia, and meetings).

  2. Independent component analysis of edge information for face recognition

    CERN Document Server

    Karande, Kailash Jagannath

    2013-01-01

    The book presents research work on face recognition using edge information as features with ICA algorithms. The independent components are extracted from edge information, and these independent components are used with classifiers to match the facial images for recognition purposes. In their study, the authors have explored the Canny and LoG edge detectors as standard edge detection methods. The Oriented Laplacian of Gaussian (OLOG) method is explored to extract edge information with different orientations of the Laplacian pyramid. A multiscale wavelet model for edge detection is also proposed.
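
    As an illustration of combining edge information with ICA, the sketch below extracts Canny edge maps and decomposes them with scikit-learn's FastICA; the LoG and oriented-LoG variants mentioned above would slot in as alternative edge detectors.

      import numpy as np
      from skimage.feature import canny
      from sklearn.decomposition import FastICA

      def ica_edge_features(faces, n_components=30, sigma=2.0):
          # faces: array of shape (n_samples, h, w) of grayscale face images.
          # Edge maps are flattened and decomposed into independent components;
          # the resulting per-face coefficients can be fed to any classifier.
          edges = np.stack([canny(f, sigma=sigma).astype(np.float64).ravel() for f in faces])
          ica = FastICA(n_components=n_components, random_state=0)
          sources = ica.fit_transform(edges)
          return ica, sources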

  3. DIFFERENCE FEATURE NEURAL NETWORK IN RECOGNITION OF HUMAN FACES

    Institute of Scientific and Technical Information of China (English)

    Chen Gang; Qi Feihu

    2001-01-01

    This article discusses the visual recognition process and finds that humans recognize objects not by their isolated features, but by their main difference features, which people obtain by contrasting them. Based on the resolving character of difference features for visual recognition, the difference feature neural network (DFNN), an improved auto-associative neural network, is proposed. Using the ORL database, a comparative experiment for face recognition is performed with face images and the same images with added Gaussian noise. The results show that DFNN is better than the auto-associative neural network, proving that DFNN is more efficient.

  4. Face Recognition System based on SURF and LDA Technique

    Directory of Open Access Journals (Sweden)

    Narpat A. Singh

    2016-02-01

    Full Text Available In the past decade, improving the quality of face recognition systems has been a challenge. It is a challenging problem, widely studied on different types of images, to provide the best quality of faces in real life. These problems arise from illumination and pose effects due to light in gradient features. The improvement and optimization of human face recognition and detection is an important real-life problem that can be handled by optimizing the error rate, accuracy, peak signal-to-noise ratio, mean square error, and structural similarity index. Nowadays, several methods have been proposed for face recognition that address these parameters. Many invariant changes occur in human faces due to illumination and pose variations. In this paper we propose a novel face recognition method that improves these quality parameters using Speeded-Up Robust Features (SURF) and linear discriminant analysis for optimized results. SURF is used for feature matching, and linear discriminant analysis is used for edge dimension reduction on live faces from our datasets. The proposed method shows better results than previous ones in the comparative analysis, providing better quality and better results on live face images.

  5. The Development of Spatial Frequency Biases in Face Recognition

    Science.gov (United States)

    Leonard, Hayley C.; Karmiloff-Smith, Annette; Johnson, Mark H.

    2010-01-01

    Previous research has suggested that a mid-band of spatial frequencies is critical to face recognition in adults, but few studies have explored the development of this bias in children. We present a paradigm adapted from the adult literature to test spatial frequency biases throughout development. Faces were presented on a screen with particular…

  6. Supervised Filter Learning for Representation Based Face Recognition.

    Directory of Open Access Journals (Sweden)

    Chao Bi

    Full Text Available Representation based classification methods, such as Sparse Representation Classification (SRC) and Linear Regression Classification (LRC), have been developed for face recognition problem successfully. However, most of these methods use the original face images without any preprocessing for recognition. Thus, their performances may be affected by some problematic factors (such as illumination and expression variances) in the face images. In order to overcome this limitation, a novel supervised filter learning algorithm is proposed for representation based face recognition in this paper. The underlying idea of our algorithm is to learn a filter so that the within-class representation residuals of the faces' Local Binary Pattern (LBP) features are minimized and the between-class representation residuals of the faces' LBP features are maximized. Therefore, the LBP features of filtered face images are more discriminative for representation based classifiers. Furthermore, we also extend our algorithm for heterogeneous face recognition problem. Extensive experiments are carried out on five databases and the experimental results verify the efficacy of the proposed algorithm.
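
    To make the representation-based setting concrete, here is a hedged sketch of LBP histogram features combined with Linear Regression Classification (class-wise least-squares residuals); the supervised filter that is the paper's actual contribution is not reproduced here.

      import numpy as np
      from skimage.feature import local_binary_pattern

      def lbp_histogram(image, P=8, R=1):
          # Uniform LBP codes summarized as a normalized histogram.
          codes = local_binary_pattern(image, P, R, method='uniform')
          hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
          return hist

      def lrc_classify(probe, class_galleries):
          # Represent the probe's LBP features with each class's gallery features
          # and assign the class giving the smallest representation residual.
          y = lbp_histogram(probe)
          best_label, best_res = None, np.inf
          for label, images in class_galleries.items():
              X = np.stack([lbp_histogram(im) for im in images]).T  # features x samples
              beta, *_ = np.linalg.lstsq(X, y, rcond=None)
              residual = np.linalg.norm(y - X @ beta)
              if residual < best_res:
                  best_label, best_res = label, residual
          return best_label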

  7. Neural correlates of recognition memory for emotional faces and scenes.

    Science.gov (United States)

    Keightley, Michelle L; Chiew, Kimberly S; Anderson, John A E; Grady, Cheryl L

    2011-01-01

    We examined the influence of emotional valence and type of item to be remembered on brain activity during recognition, using faces and scenes. We used multivariate analyses of event-related fMRI data to identify whole-brain patterns, or networks of activity. Participants demonstrated better recognition for scenes vs faces and for negative vs neutral and positive items. Activity was increased in extrastriate cortex and inferior frontal gyri for emotional scenes, relative to neutral scenes and all face types. Increased activity in these regions also was seen for negative faces relative to positive faces. Correct recognition of negative faces and scenes (hits vs correct rejections) was associated with increased activity in amygdala, hippocampus, extrastriate, frontal and parietal cortices. Activity specific to correctly recognized emotional faces, but not scenes, was found in sensorimotor areas and rostral prefrontal cortex. These results suggest that emotional valence and type of visual stimulus both modulate brain activity at recognition, and influence multiple networks mediating visual, memory and emotion processing. The contextual information in emotional scenes may facilitate memory via additional visual processing, whereas memory for emotional faces may rely more on cognitive control mediated by rostrolateral prefrontal regions.

  8. Development of Face Recognition in Infant Chimpanzees (Pan Troglodytes)

    Science.gov (United States)

    Myowa-Yamakoshi, M.; Yamaguchi, M.K.; Tomonaga, M.; Tanaka, M.; Matsuzawa, T.

    2005-01-01

    In this paper, we assessed the developmental changes in face recognition by three infant chimpanzees aged 1-18 weeks, using preferential-looking procedures that measured the infants' eye- and head-tracking of moving stimuli. In Experiment 1, we prepared photographs of the mother of each infant and an "average" chimpanzee face using…

  9. Facial emotion recognition in paranoid schizophrenia and autism spectrum disorder.

    Science.gov (United States)

    Sachse, Michael; Schlitt, Sabine; Hainz, Daniela; Ciaramidaro, Angela; Walter, Henrik; Poustka, Fritz; Bölte, Sven; Freitag, Christine M

    2014-11-01

    Schizophrenia (SZ) and autism spectrum disorder (ASD) share deficits in emotion processing. In order to identify convergent and divergent mechanisms, we investigated facial emotion recognition in SZ, high-functioning ASD (HFASD), and typically developed controls (TD). Different degrees of task difficulty and emotion complexity (face, eyes; basic emotions, complex emotions) were used. Two Benton tests were implemented in order to elicit potentially confounding visuo-perceptual functioning and facial processing. Nineteen participants with paranoid SZ, 22 with HFASD and 20 TD were included, aged between 14 and 33 years. Individuals with SZ were comparable to TD in all obtained emotion recognition measures, but showed reduced basic visuo-perceptual abilities. The HFASD group was impaired in the recognition of basic and complex emotions compared to both, SZ and TD. When facial identity recognition was adjusted for, group differences remained for the recognition of complex emotions only. Our results suggest that there is a SZ subgroup with predominantly paranoid symptoms that does not show problems in face processing and emotion recognition, but visuo-perceptual impairments. They also confirm the notion of a general facial and emotion recognition deficit in HFASD. No shared emotion recognition deficit was found for paranoid SZ and HFASD, emphasizing the differential cognitive underpinnings of both disorders.

  10. Robust Face Recognition via Block Sparse Bayesian Learning

    Directory of Open Access Journals (Sweden)

    Taiyong Li

    2013-01-01

    Full Text Available Face recognition (FR) is an important task in pattern recognition and computer vision. Sparse representation (SR) has been demonstrated to be a powerful framework for FR. In general, an SR algorithm treats each face in a training dataset as a basis function and tries to find a sparse representation of a test face under these basis functions. The sparse representation coefficients then provide a recognition hint. Early SR algorithms are based on a basic sparse model. Recently, it has been found that algorithms based on a block sparse model can achieve better recognition rates. Based on this model, in this study, we use block sparse Bayesian learning (BSBL) to find a sparse representation of a test face for recognition. BSBL is a recently proposed framework, which has many advantages over existing block-sparse-model-based algorithms. Experimental results on the Extended Yale B, the AR, and the CMU PIE face databases show that using BSBL can achieve better recognition rates and higher robustness than state-of-the-art algorithms in most cases.

  11. The own-age face recognition bias is task dependent.

    Science.gov (United States)

    Proietti, Valentina; Macchi Cassia, Viola; Mondloch, Catherine J

    2015-08-01

    The own-age bias (OAB) in face recognition (more accurate recognition of own-age than other-age faces) is robust among young adults but not older adults. We investigated the OAB under two different task conditions. In Experiment 1 young and older adults (who reported more recent experience with own than other-age faces) completed a match-to-sample task with young and older adult faces; only young adults showed an OAB. In Experiment 2 young and older adults completed an identity detection task in which we manipulated the identity strength of target and distracter identities by morphing each face with an average face in 20% steps. Accuracy increased with identity strength and facial age influenced older adults' (but not younger adults') strategy, but there was no evidence of an OAB. Collectively, these results suggest that the OAB depends on task demands and may be absent when searching for one identity.

  12. Face Recognition Method Based on Fuzzy 2DPCA

    Directory of Open Access Journals (Sweden)

    Xiaodong Li

    2014-01-01

    Full Text Available 2DPCA, which is one of the most important face recognition methods, is relatively sensitive to substantial variations in light direction, face pose, and facial expression. In order to improve the recognition performance of the traditional 2DPCA, a new 2DPCA algorithm based on the fuzzy theory is proposed in this paper, namely, the fuzzy 2DPCA (F2DPCA). In this method, applying fuzzy K-nearest neighbor (FKNN), the membership degree matrix of the training samples is calculated, which is used to get the fuzzy means of each class. The average of fuzzy means is then incorporated into the definition of the general scatter matrix with anticipation that it can improve classification result. The comprehensive experiments on the ORL, the YALE, and the FERET face database show that the proposed method can improve the classification rates and reduce the sensitivity to variations between face images caused by changes in illumination, face expression, and face pose.

  13. Fusion of visible and infrared imagery for face recognition

    Institute of Scientific and Technical Information of China (English)

    Xuerong Chen(陈雪荣); Zhongliang Jing(敬忠良); Shaoyuan Sun(孙韶媛); Gang Xiao(肖刚)

    2004-01-01

    In recent years face recognition has received substantial attention, but still remained very challenging in real applications. Despite the variety of approaches and tools studied, face recognition is not accurate or robust enough to be used in uncontrolled environments. Infrared (IR) imagery of human faces offers a promising alternative to visible imagery, however, IR has its own limitations. In this paper, a scheme to fuse information from the two modalities is proposed. The scheme is based on eigenfaces and probabilistic neural network (PNN), using fuzzy integral to fuse the objective evidence supplied by each modality. Recognition rate is used to evaluate the fusion scheme. Experimental results show that the scheme improves recognition performance substantially.

  14. Multimodal recognition based on face and ear using local feature

    Science.gov (United States)

    Yang, Ruyin; Mu, Zhichun; Chen, Long; Fan, Tingyu

    2017-06-01

    The pose issue, which may cause loss of useful information, has always been a bottleneck in face and ear recognition. To address this problem, we propose a multimodal recognition approach based on face and ear using local features, which is robust to large facial pose variations in unconstrained scenes. A deep learning method is used for facial pose estimation, and a well-trained Faster R-CNN is used to detect and segment the face and ear regions. We then propose a weighted region-based recognition method to deal with the local features. The proposed method achieves state-of-the-art recognition performance, especially when the images are affected by pose variations and random occlusion in unconstrained scenes.

  15. Efficient Facial Expression and Face Recognition using Ranking Method

    Directory of Open Access Journals (Sweden)

    Murali Krishna kanala

    2015-06-01

    Full Text Available Expression detection is useful as a non-invasive method of lie detection and behaviour prediction. However, these facial expressions may be difficult to detect by the untrained eye. In this paper we implement facial expression recognition techniques using a ranking method. The human face plays an important role in our social interaction, conveying people's identity. Using the human face as a key to security, biometric face recognition technology has received significant attention in the past several years. Experiments are performed using a standard database for the expressions of surprise, sadness and happiness. The universally accepted three principal emotions to be recognized are surprise, sadness and happiness, along with neutral.

  16. Localized versus Locality-Preserving Subspace Projections for Face Recognition

    Directory of Open Access Journals (Sweden)

    Iulian B. Ciocoiu

    2007-05-01

    Full Text Available Three different localized representation methods and a manifold learning approach to face recognition are compared in terms of recognition accuracy. The techniques under investigation are (a) local nonnegative matrix factorization (LNMF); (b) independent component analysis (ICA); (c) NMF with sparse constraints (NMFsc); (d) locality-preserving projections (Laplacian faces). A systematic comparative analysis is conducted in terms of distance metric used, number of selected features, and sources of variability on AR and Olivetti face databases. Results indicate that the relative ranking of the methods is highly task-dependent, and the performances vary significantly upon the distance metric used.
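
    A small sketch of the plain-NMF baseline among the techniques listed above, using scikit-learn; the locality and sparseness constraints of LNMF and NMFsc are omitted, and recognition is reduced to nearest neighbour on the NMF coefficients.

      import numpy as np
      from sklearn.decomposition import NMF

      def nmf_face_features(faces, n_components=49):
          # faces: non-negative array of shape (n_samples, h * w) of flattened images.
          # model.components_ holds the part-like basis images; the returned
          # coefficients are the per-face features used for matching.
          model = NMF(n_components=n_components, init='nndsvda', max_iter=400)
          coefficients = model.fit_transform(faces)
          return model, coefficients

      def nearest_neighbour_id(probe_coefficients, gallery_coefficients, gallery_labels):
          distances = np.linalg.norm(gallery_coefficients - probe_coefficients, axis=1)
          return gallery_labels[int(np.argmin(distances))]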

  17. Localized versus Locality-Preserving Subspace Projections for Face Recognition

    Directory of Open Access Journals (Sweden)

    Costin HaritonN

    2007-01-01

    Full Text Available Three different localized representation methods and a manifold learning approach to face recognition are compared in terms of recognition accuracy. The techniques under investigation are (a) local nonnegative matrix factorization (LNMF); (b) independent component analysis (ICA); (c) NMF with sparse constraints (NMFsc); (d) locality-preserving projections (Laplacian faces). A systematic comparative analysis is conducted in terms of distance metric used, number of selected features, and sources of variability on AR and Olivetti face databases. Results indicate that the relative ranking of the methods is highly task-dependent, and the performances vary significantly upon the distance metric used.

  18. A Multi-Modal Recognition System Using Face and Speech

    Directory of Open Access Journals (Sweden)

    Samir Akrouf

    2011-05-01

    Full Text Available Nowadays person recognition has received more and more interest, especially for security reasons. The recognition performed by a biometric system using a single modality tends to be less accurate due to sensor data, restricted degrees of freedom and unacceptable error rates. To alleviate some of these problems we use multimodal biometric systems, which provide better recognition results. By combining different modalities, such as speech, face, fingerprint, etc., we increase the performance of recognition systems. In this paper, we study the fusion of speech and face in a recognition system for taking a final decision (i.e., accept or reject an identity claim). We evaluate the performance of each system separately, then we fuse the results and compare the performances.

  19. Sparse representation based face recognition using weighted regions

    Science.gov (United States)

    Bilgazyev, Emil; Yeniaras, E.; Uyanik, I.; Unan, Mahmut; Leiss, E. L.

    2013-12-01

    Face recognition is a challenging research topic, especially when the training (gallery) and recognition (probe) images are acquired using different cameras under varying conditions. Even a small noise or occlusion in the images can compromise the accuracy of recognition. Lately, sparse encoding based classification algorithms gave promising results for such uncontrollable scenarios. In this paper, we introduce a novel methodology by modeling the sparse encoding with weighted patches to increase the robustness of face recognition even further. In the training phase, we define a mask (i.e., weight matrix) using a sparse representation selecting the facial regions, and in the recognition phase, we perform comparison on selected facial regions. The algorithm was evaluated both quantitatively and qualitatively using two comprehensive surveillance facial image databases, i.e., SCface and MFPV, with the results clearly superior to common state-of-the-art methodologies in different scenarios.

  20. A face recognition algorithm based on thermal and visible data

    Science.gov (United States)

    Sochenkov, Ilya; Tihonkih, Dmitrii; Vokhmintcev, Aleksandr; Melnikov, Andrey; Makovetskii, Artyom

    2016-09-01

    In this work we present an algorithm that fuses thermal infrared and visible imagery to identify persons. The proposed face recognition method contains several components, in particular rigid body image registration. The rigid registration is achieved by a modified variant of the iterative closest point (ICP) algorithm. We consider an affine transformation in three-dimensional space that preserves the angles between lines. The matching algorithm is inspired by recent results in the neurophysiology of vision. We also consider the error-metric minimization stage of ICP for the case of an arbitrary affine transformation. Our face recognition algorithm also uses localized-contouring algorithms to segment the subject's face, and thermal matching based on partial least squares discriminant analysis. Thermal imagery face recognition methods are advantageous when there is no control over illumination or when detecting disguised faces. The proposed algorithm leads to good matching accuracies for different person recognition scenarios (near infrared, far infrared, thermal infrared, viewed sketch). The performance of the proposed face recognition algorithm in real indoor environments is presented and discussed.

  1. Recognition memory in developmental prosopagnosia: electrophysiological evidence for abnormal routes to face recognition.

    Science.gov (United States)

    Burns, Edwin J; Tree, Jeremy J; Weidemann, Christoph T

    2014-01-01

    Dual process models of recognition memory propose two distinct routes for recognizing a face: recollection and familiarity. Recollection is characterized by the remembering of some contextual detail from a previous encounter with a face whereas familiarity is the feeling of finding a face familiar without any contextual details. The Remember/Know (R/K) paradigm is thought to index the relative contributions of recollection and familiarity to recognition performance. Despite researchers measuring face recognition deficits in developmental prosopagnosia (DP) through a variety of methods, none have considered the distinct contributions of recollection and familiarity to recognition performance. The present study examined recognition memory for faces in eight individuals with DP and a group of controls using an R/K paradigm while recording electroencephalogram (EEG) data at the scalp. Those with DP were found to produce fewer correct "remember" responses and more false alarms than controls. EEG results showed that posterior "remember" old/new effects were delayed and restricted to the right posterior (RP) area in those with DP in comparison to the controls. A posterior "know" old/new effect commonly associated with familiarity for faces was only present in the controls whereas individuals with DP exhibited a frontal "know" old/new effect commonly associated with words, objects and pictures. These results suggest that individuals with DP do not utilize normal face-specific routes when making face recognition judgments but instead process faces using a pathway more commonly associated with objects.

  2. Recognition Memory in Developmental Prosopagnosia: Electrophysiological Evidence for Abnormal Routes to Face Recognition

    Directory of Open Access Journals (Sweden)

    Edwin James Burns

    2014-08-01

    Full Text Available Dual process models of recognition memory propose two distinct routes for recognizing a face: recollection and familiarity. Recollection is characterized by the remembering of some contextual detail from a previous encounter with a face whereas familiarity is the feeling of finding a face familiar without any contextual details. The Remember/Know (R/K) paradigm is thought to index the relative contributions of recollection and familiarity to recognition performance. Despite researchers measuring face recognition deficits in developmental prosopagnosia (DP) through a variety of methods, none have considered the distinct contributions of recollection and familiarity to recognition performance. The present study examined recognition memory for faces in 8 individuals with DP and a group of controls using an R/K paradigm while recording electroencephalogram (EEG) data at scalp. Those with DP were found to produce fewer correct remember responses and more false alarms than controls. EEG results showed that posterior remember old/new effects were delayed and restricted to the right posterior area in those with DP in comparison to the controls. A posterior know old/new effect commonly associated with familiarity for faces was only present in the controls whereas individuals with DP exhibited a frontal know old/new effect commonly associated with words, objects and pictures. These results suggest that individuals with DP do not utilize normal face specific routes when making face recognition judgments but instead process faces using a pathway more commonly associated with objects.

  3. [Neural basis of self-face recognition: social aspects].

    Science.gov (United States)

    Sugiura, Motoaki

    2012-07-01

    Considering the importance of the face in social survival and evidence from evolutionary psychology of visual self-recognition, it is reasonable that we expect neural mechanisms for higher social-cognitive processes to underlie self-face recognition. A decade of neuroimaging studies so far has, however, not provided an encouraging finding in this respect. Self-face specific activation has typically been reported in the areas for sensory-motor integration in the right lateral cortices. This observation appears to reflect the physical nature of the self-face, whose representation is developed via the detection of contingency between one's own action and sensory feedback. We have recently revealed that the medial prefrontal cortex, implicated in socially nuanced self-referential process, is activated during self-face recognition under a rich social context where multiple other faces are available for reference. The posterior cingulate cortex has also exhibited this activation modulation, and in a separate experiment showed a response to an attractively manipulated self-face, suggesting its relevance to positive self-value. Furthermore, the regions in the right lateral cortices typically showing self-face-specific activation have responded also to the face of one's close friend under the rich social context. This observation is potentially explained by the fact that the contingency detection for physical self-recognition also plays a role in physical social interaction, which characterizes the representation of personally familiar people. These findings demonstrate that neuroscientific exploration reveals multiple facets of the relationship between self-face recognition and social-cognitive process, and that technically the manipulation of social context is key to its success.

  4. Robust Point Set Matching for Partial Face Recognition.

    Science.gov (United States)

    Weng, Renliang; Lu, Jiwen; Tan, Yap-Peng

    2016-03-01

    Over the past three decades, a number of face recognition methods have been proposed in computer vision, and most of them use holistic face images for person identification. In many real-world scenarios, especially unconstrained environments, human faces may be occluded by other objects, and it is difficult to obtain fully holistic face images for recognition. To address this, we propose a new partial face recognition approach to recognize persons of interest from their partial faces. Given a gallery image and a probe face patch, we first detect keypoints and extract their local textural features. Then, we propose a robust point set matching method to discriminatively match the two extracted local feature sets, where both the textural information and the geometrical information of the local features are used explicitly and simultaneously for matching. Finally, the similarity of the two faces is computed as the distance between the two aligned feature sets. Experimental results on four public face data sets show the effectiveness of the proposed approach.
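
    As a rough illustration of the detect-describe-match pipeline that partial face recognition builds on, the sketch below matches keypoint descriptors between a gallery face and a probe patch with OpenCV. It is not the authors' robust point set matching method (which aligns the two point sets using texture and geometry jointly), and the image paths and match-count cutoff are placeholder assumptions.

    ```python
    # Sketch: keypoint-based matching between a gallery face and a partial probe patch.
    # This illustrates the generic detect-describe-match pipeline only; it is NOT the
    # authors' robust point set matching method, and the file names are placeholders.
    import cv2

    gallery = cv2.imread("gallery_face.png", cv2.IMREAD_GRAYSCALE)   # hypothetical path
    probe = cv2.imread("probe_patch.png", cv2.IMREAD_GRAYSCALE)      # hypothetical path

    orb = cv2.ORB_create(nfeatures=500)            # keypoint detector + binary descriptor
    kp_g, des_g = orb.detectAndCompute(gallery, None)
    kp_p, des_p = orb.detectAndCompute(probe, None)

    # Brute-force matching on binary descriptors with cross-checking.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_p, des_g), key=lambda m: m.distance)

    # A simple similarity proxy: mean descriptor distance of the best matches.
    top = matches[:30]
    score = sum(m.distance for m in top) / max(len(top), 1)
    print(f"{len(matches)} matches, mean distance of top {len(top)}: {score:.1f}")
    ```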

  5. Face Behavior Recognition Through Support Vector Machines

    Directory of Open Access Journals (Sweden)

    Haval A. Ahmed

    2016-01-01

    Full Text Available Communication between computers and humans has grown to be a major field of research. Facial behavior recognition through computer algorithms is a motivating and difficult field of research for establishing emotional interaction between humans and computers. Although researchers have suggested numerous methods of emotion recognition in the literature of this field, these works have mainly relied on a single facial database for assessing their systems, which may diminish generalization and shrink the scope for comparison. A technique is proposed for recognizing emotional expressions conveyed by the facial features of still images. The technique uses Support Vector Machines (SVM) as the emotion classifier. Substantive problems are considered, such as diversity in facial databases, the number of samples included in each database, the number of facial expressions covered, an accurate method of extracting facial features, and the variety of structural models. After many experiments and a comparison of the results of different models, it is determined that this approach produces high recognition rates.
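
    A minimal sketch of the classification stage described above, assuming pre-extracted feature vectors and integer expression labels; the feature extraction step and the facial databases discussed in the paper are outside its scope.

    ```python
    # Minimal sketch of an SVM-based facial expression classifier. X and y below are
    # random placeholders standing in for extracted facial features and emotion labels.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 128))          # placeholder feature vectors
    y = rng.integers(0, 6, size=300)         # placeholder labels for 6 basic emotions

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
    clf.fit(X_tr, y_tr)
    print("recognition rate:", clf.score(X_te, y_te))
    ```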

  6. A Benchmark and Comparative Study of Video-Based Face Recognition on COX Face Database.

    Science.gov (United States)

    Huang, Zhiwu; Shan, Shiguang; Wang, Ruiping; Zhang, Haihong; Lao, Shihong; Kuerban, Alifu; Chen, Xilin

    2015-12-01

    Face recognition with still face images has been widely studied, while research on video-based face recognition is relatively inadequate, especially in terms of benchmark datasets and comparisons. Real-world video-based face recognition applications require techniques for three distinct scenarios: 1) Video-to-Still (V2S); 2) Still-to-Video (S2V); and 3) Video-to-Video (V2V), respectively taking video or still images as query or target. To the best of our knowledge, few datasets and evaluation protocols have been benchmarked for all three scenarios. In order to facilitate the study of this specific topic, this paper contributes a benchmarking and comparative study based on a newly collected still/video face database, named COX Face DB. Specifically, we make three contributions. First, we collect and release a large-scale still/video face database to simulate video surveillance with three different video-based face recognition scenarios (i.e., V2S, S2V, and V2V). Second, for benchmarking the three scenarios designed on our database, we review and experimentally compare a number of existing set-based methods. Third, we further propose a novel Point-to-Set Correlation Learning (PSCL) method, and experimentally show that it can be used as a promising baseline method for V2S/S2V face recognition on COX Face DB. Extensive experimental results clearly demonstrate that video-based face recognition needs more effort, and that COX Face DB is a good benchmark database for evaluation.

  7. Equivalent activation of the hippocampus by face-face and face-laugh paired associate learning and recognition.

    Science.gov (United States)

    Holdstock, J S; Crane, J; Bachorowski, J-A; Milner, B

    2010-11-01

    The human hippocampus is known to play an important role in relational memory. Both patient lesion studies and functional-imaging studies have shown that it is involved in the encoding and retrieval from memory of arbitrary associations. Two recent patient lesion studies, however, have found dissociations between spared and impaired memory within the domain of relational memory. Recognition of associations between information of the same kind (e.g., two faces) was spared, whereas recognition of associations between information of different kinds (e.g., face-name or face-voice associations) was impaired by hippocampal lesions. Thus, recognition of associations between information of the same kind may not be mediated by the hippocampus. Few imaging studies have directly compared activation at encoding and recognition of associations between same and different types of information. Those that have done so have shown mixed findings that are open to alternative interpretation. We used fMRI to compare hippocampal activation while participants studied and later recognized face-face and face-laugh paired associates. We found no differences in hippocampal activation between our two types of stimulus materials during either study or recognition. Study of both types of paired associate activated the hippocampus bilaterally, but the hippocampus was not activated by either condition during recognition. Our findings suggest that the human hippocampus is normally engaged to a similar extent by study and recognition of associations between information of the same kind and associations between information of different kinds.

  8. Arguments Against a Configural Processing Account of Familiar Face Recognition.

    Science.gov (United States)

    Burton, A Mike; Schweinberger, Stefan R; Jenkins, Rob; Kaufmann, Jürgen M

    2015-07-01

    Face recognition is a remarkable human ability, which underlies a great deal of people's social behavior. Individuals can recognize family members, friends, and acquaintances over a very large range of conditions, and yet the processes by which they do this remain poorly understood, despite decades of research. Although a detailed understanding remains elusive, face recognition is widely thought to rely on configural processing, specifically an analysis of spatial relations between facial features (so-called second-order configurations). In this article, we challenge this traditional view, raising four problems: (1) configural theories are underspecified; (2) large configural changes leave recognition unharmed; (3) recognition is harmed by nonconfigural changes; and (4) in separate analyses of face shape and face texture, identification tends to be dominated by texture. We review evidence from a variety of sources and suggest that failure to acknowledge the impact of familiarity on facial representations may have led to an overgeneralization of the configural account. We argue instead that second-order configural information is remarkably unimportant for familiar face recognition.

  9. Method for secure electronic voting system: face recognition based approach

    Science.gov (United States)

    Alim, M. Affan; Baig, Misbah M.; Mehboob, Shahzain; Naseem, Imran

    2017-06-01

    In this paper, we propose a framework for a low-cost, secure electronic voting system based on face recognition. Local Binary Patterns (LBP) are used to characterize face features as texture, and a chi-square measure is then used for image classification. Two parallel systems, a smartphone application and a web application, are developed for the face learning and verification modules. The proposed system has two tiers of security: a person ID check followed by face verification, with a class-specific threshold controlling the security level of the face verification step. The system is evaluated on three standard databases and one real home-based database, and achieves satisfactory recognition accuracy. Consequently, the proposed system provides a secure, hassle-free voting system that is less intrusive than other biometric approaches.
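
    The sketch below pairs uniform LBP histograms with a chi-square distance for verification, a common combination consistent with the description above; the threshold value and image inputs are placeholders rather than settings from the paper.

    ```python
    # Sketch of an LBP + chi-square verification step, assuming grayscale face images
    # as NumPy arrays. The threshold is a placeholder; the paper uses class-specific
    # thresholds to tune the security level.
    import numpy as np
    from skimage.feature import local_binary_pattern

    def lbp_histogram(img, P=8, R=1):
        """Uniform LBP codes summarized as a normalized histogram."""
        codes = local_binary_pattern(img, P, R, method="uniform")
        n_bins = P + 2                                  # number of uniform patterns
        hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
        return hist

    def chi_square(h1, h2, eps=1e-10):
        """Chi-square distance between two histograms (smaller = more similar)."""
        return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

    def verify(probe_img, enrolled_hist, threshold=0.2):
        """Accept the probe if its LBP histogram is close enough to the enrolled one."""
        return chi_square(lbp_histogram(probe_img), enrolled_hist) < threshold
    ```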

  10. Face recognition using facial expression: a novel approach

    Science.gov (United States)

    Singh, Deepak Kumar; Gupta, Priya; Tiwary, U. S.

    2008-04-01

    Facial expressions are undoubtedly the most effective form of nonverbal communication. The face has always been the equation of a person's identity; it draws the demarcation line between identity and extinction. Each line on the face adds an attribute to the identity. These lines become prominent when we experience an emotion, and they do not change completely with age. In this paper we propose a new technique for face recognition which focuses on the facial expressions of the subject to identify his or her face. This is a grey area on which not much light has been thrown earlier. According to earlier research it is difficult to alter one's natural expression, so our technique will be beneficial for identifying occluded or intentionally disguised faces. The results of the experiments conducted indicate that this technique can give a new direction to the field of face recognition, provide a strong base for further work in the area, and serve as a core method for critical defense and security related applications.

  11. Perspective projection for variance pose face recognition from camera calibration

    Science.gov (United States)

    Fakhir, M. M.; Woo, W. L.; Chambers, J. A.; Dlay, S. S.

    2016-04-01

    Variance pose is an important research topic in face recognition, and the alteration of distance parameters across face features under pose variation is challenging. We provide a solution to this problem using perspective projection for variance-pose face recognition. Our method infers the intrinsic camera parameters of the image, which enable the projection of the image plane into 3D. After this, face box tracking and centre-of-eyes detection are performed using our novel technique to verify the virtual face feature measurements. The coordinate system of the perspective projection for face tracking allows the holistic dimensions of the face to be fixed in different orientations. Training on frontal images and the remaining poses from the FERET database determines the distance from the centre of the eyes to the corner of the face box. The recognition system compares the gallery of images against different poses. The system initially utilises information on the position of both eyes, then focuses principally on the closest eye in order to gather data with greater reliability. Differentiation between the distances and positions of the right and left eyes is a unique feature of our work, and our algorithm outperforms other state-of-the-art algorithms, enabling stable measurement under pose variation for each individual.
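
    For readers unfamiliar with the geometry involved, the following sketch shows generic pinhole-camera projection and back-projection with an assumed intrinsic matrix; the paper's actual contribution, inferring the intrinsics from the image and measuring eye-to-face-box distances, is not reproduced here.

    ```python
    # Generic pinhole-camera sketch: project a 3D point to the image plane and
    # back-project a pixel at an assumed depth. The intrinsic matrix K below is
    # illustrative only; the paper infers intrinsics from the image itself.
    import numpy as np

    K = np.array([[800.0,   0.0, 320.0],     # fx, skew, cx  (assumed values)
                  [  0.0, 800.0, 240.0],     # fy, cy
                  [  0.0,   0.0,   1.0]])

    def project(point_3d):
        """3D point in camera coordinates -> pixel coordinates."""
        p = K @ point_3d
        return p[:2] / p[2]

    def back_project(pixel, depth):
        """Pixel + assumed depth -> 3D point in camera coordinates."""
        uv1 = np.array([pixel[0], pixel[1], 1.0])
        return depth * (np.linalg.inv(K) @ uv1)

    eye = np.array([0.03, 0.0, 0.6])          # a point 60 cm in front of the camera
    print(project(eye), back_project(project(eye), 0.6))
    ```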

  12. Face Recognition by Metropolitan Police Super-Recognisers.

    Science.gov (United States)

    Robertson, David J; Noyes, Eilidh; Dowsett, Andrew J; Jenkins, Rob; Burton, A Mike

    2016-01-01

    Face recognition is used to prove identity across a wide variety of settings. Despite this, research consistently shows that people are typically rather poor at matching faces to photos. Some professional groups, such as police and passport officers, have been shown to perform just as poorly as the general public on standard tests of face recognition. However, face recognition skills are subject to wide individual variation, with some people showing exceptional ability-a group that has come to be known as 'super-recognisers'. The Metropolitan Police Force (London) recruits 'super-recognisers' from within its ranks, for deployment on various identification tasks. Here we test four working super-recognisers from within this police force, and ask whether they are really able to perform at levels above control groups. We consistently find that the police 'super-recognisers' perform at well above normal levels on tests of unfamiliar and familiar face matching, with degraded as well as high quality images. Recruiting employees with high levels of skill in these areas, and allocating them to relevant tasks, is an efficient way to overcome some of the known difficulties associated with unfamiliar face recognition.

  14. A wavelet-based method for multispectral face recognition

    Science.gov (United States)

    Zheng, Yufeng; Zhang, Chaoyang; Zhou, Zhaoxian

    2012-06-01

    A wavelet-based method is proposed for multispectral face recognition in this paper. The Gabor wavelet transform is a common tool for orientation analysis of a 2D image, whereas the Hamming distance is an efficient distance measure for face identification. Specifically, at each frequency band, an index number representing the strongest orientational response is selected and then encoded in binary format to favor the Hamming distance calculation. Multiband orientation bit codes are then organized into a face pattern byte (FPB) using order statistics. With the FPB, Hamming distances are calculated and compared to achieve face identification. The FPB algorithm was initially created using thermal images, while the EBGM method originated with visible images. When two or more spectral images of the same subject are available, identification accuracy and reliability can be enhanced using score fusion. We compare the identification performance of five recognition algorithms applied to the three-band (visible, near infrared, thermal) face images, and explore the fusion performance of combining the multiple scores from three recognition algorithms and from the three-band face images, respectively. The experimental results show that the FPB is the best recognition algorithm, that the HMM yields the best fusion result, and that the thermal dataset gives the best fusion performance compared with the other two datasets.
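
    A simplified sketch of the orientation-coding idea: pick the strongest Gabor orientation per frequency band, encode its index in binary, and compare codes with the Hamming distance. The frequency bands, the number of orientations, and the per-band summary are assumptions; the exact face pattern byte (FPB) construction with order statistics is not reproduced.

    ```python
    # Sketch: encode the strongest Gabor orientation per frequency band as bits and
    # compare two faces with the Hamming distance. A simplified illustration only,
    # not the paper's FPB algorithm.
    import numpy as np
    from skimage.filters import gabor

    FREQS = (0.1, 0.2, 0.3)                       # assumed frequency bands
    THETAS = [k * np.pi / 4 for k in range(4)]    # 4 orientations -> 2 bits per band

    def orientation_code(img):
        bits = []
        for f in FREQS:
            responses = [np.abs(gabor(img, frequency=f, theta=t)[0]).sum()
                         for t in THETAS]
            idx = int(np.argmax(responses))         # strongest orientation index
            bits.extend([(idx >> 1) & 1, idx & 1])  # encode the index as 2 bits
        return np.array(bits, dtype=np.uint8)

    def hamming(code_a, code_b):
        return int(np.sum(code_a != code_b))        # smaller = more similar
    ```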

  15. Familiarity is not notoriety: Phenomenological accounts of face recognition

    Directory of Open Access Journals (Sweden)

    Davide eLiccione

    2014-09-01

    Full Text Available From a phenomenological perspective, faces are perceived differently from objects, as their perception always involves the possibility of a relational engagement (Bredlau, 2011). This is especially true for familiar faces, i.e. faces of people with a history of real relational engagements. Similarly, the valence of emotional expressions assumes a key role, as it defines the sense and direction of this engagement. Following these premises, the aim of the present study is to demonstrate that face recognition is facilitated by at least two variables, familiarity and emotional expression, and that perception of familiar faces is not influenced by orientation. In order to verify this hypothesis, we implemented a 3x3x2 factorial design, showing seventeen healthy subjects three types of faces (unfamiliar, personally familiar, famous) characterized by three different emotional expressions (happy, angry/sad, neutral) in two different orientations (upright vs inverted). We showed every subject a total of 180 faces with the instruction to give a familiarity judgment. Reaction times were recorded, and we found that the recognition of a face is facilitated by personal familiarity and emotional expression, and that this process is otherwise independent of a cognitive elaboration of the stimuli and remains stable despite orientation. These results highlight the need to make a distinction between famous and personally familiar faces when studying face perception, and to consider its historical aspects from a phenomenological point of view.

  16. Effect of familiarity and viewpoint on face recognition in chimpanzees.

    Science.gov (United States)

    Parr, Lisa A; Siebert, Erin; Taubert, Jessica

    2011-01-01

    Numerous studies have shown that familiarity strongly influences how well humans recognize faces. This is particularly true when faces are encountered across a change in viewpoint. In this situation, recognition may be accomplished by matching partial or incomplete information about a face to a stored representation of the known individual, whereas such representations are not available for unknown faces. Chimpanzees, our closest living relatives, share many of the same behavioral specializations for face processing as humans, but the influences of familiarity and viewpoint have never been compared in the same study. Here, we examined the ability of chimpanzees to match the faces of familiar and unfamiliar conspecifics in their frontal and 3/4 views using a computerized task. Results showed that, while chimpanzees were able to accurately match both familiar and unfamiliar faces in their frontal orientations, performance was significantly impaired only when unfamiliar faces were presented across a change in viewpoint. Therefore, as in humans, face processing in chimpanzees appears to be sensitive to individual familiarity. We propose that familiarization is a robust mechanism for strengthening the representation of faces and has been conserved in primates to achieve efficient individual recognition over a range of natural viewing conditions.

  17. Faces are special but not too special: spared face recognition in amnesia is based on familiarity.

    Science.gov (United States)

    Aly, Mariam; Knight, Robert T; Yonelinas, Andrew P

    2010-11-01

    Most current theories of human memory are material-general in the sense that they assume that the medial temporal lobe (MTL) is important for retrieving the details of prior events, regardless of the specific type of materials. Recent studies of amnesia have challenged the material-general assumption by suggesting that the MTL may be necessary for remembering words, but is not involved in remembering faces. We examined recognition memory for faces and words in a group of amnesic patients, which included hypoxic patients and patients with extensive left or right MTL lesions. Recognition confidence judgments were used to plot receiver operating characteristics (ROCs) in order to more fully quantify recognition performance and to estimate the contributions of recollection and familiarity. Consistent with the extant literature, an analysis of overall recognition accuracy showed that the patients were impaired at word memory but had spared face memory. However, the ROC analysis indicated that the patients were generally impaired at high confidence recognition responses for faces and words, and they exhibited significant recollection impairments for both types of materials. Familiarity for faces was preserved in all patients, but extensive left MTL damage impaired familiarity for words. These results show that face recognition may appear to be spared because performance tends to rely heavily on familiarity, a process that is relatively well preserved in amnesia. In addition, the findings challenge material-general theories of memory, and suggest that both material and process are important determinants of memory performance in amnesia.

  18. An Unsupervised Active Classification Technique for Face Recognition

    Directory of Open Access Journals (Sweden)

    Dr.S. Aruna Mastani

    2010-05-01

    Full Text Available Generally, pattern recognition systems dealing with high-dimensional data and very few training samples face the problem of overfitting. This is the case with appearance-based methods of face recognition (FR), where the pixels form the high-dimensional feature vector representing the face images and only a few sample face images are available for training. Overfitting is a condition in which a pattern recognition system recognizes/classifies the samples used for training perfectly, but performs poorly on unseen (testing) samples that were not used for training. The reason is that the small number of training samples cannot cover all the possible variations of the testing data that occur due to changes of illumination, expression, pose and viewpoint of the face images. The two ways to overcome this problem are either to reduce the dimensionality by extracting the most discriminant features or to provide a classifier with enhanced generalization capability. Thus, the development of an effective and reliable face recognition system boils down to the representation of patterns (faces) with the minimum number of features carrying the most discriminatory information, or to a strong classification technique that best categorizes the patterns into different classes. In this paper a method called active classification through clustering is proposed that combines the advantage of feature extraction with a novel approach to classification, by involving information about the distribution of the testing samples along with the training samples. The proposed technique is based upon this basic idea of involving the active participation of the testing samples in the classifier implementation. Considering this as a new approach to face recognition, experiments are performed using the well-known ORL, UMIST and Yale databases and compared with the existing methods to prove its

  19. Survey of Commercial Technologies for Face Recognition in Video

    Science.gov (United States)

    2014-09-01

    ...search facial components, identify a gestalt face, and compare it to a stored set of facial characteristics of known human faces. ...theorize that a face is not merely a set of facial features but is rather something meaningful in its form. This is consistent with the Gestalt theory that an image is seen in its entirety, not by its individual parts. Hence, the "gestalt face" refers to a holistic representation of the face.

  20. Nonlinear fusion for face recognition using fuzzy integral

    Science.gov (United States)

    Chen, Xuerong; Jing, Zhongliang; Xiao, Gang

    2007-08-01

    Face recognition based only on the visible spectrum is not accurate or robust enough to be used in uncontrolled environments. Recently, infrared (IR) imagery of the human face has been considered a promising alternative to visible imagery due to its relative insensitivity to illumination changes. However, IR has its own limitations. In order to fuse information from the two modalities to achieve a better result, we propose a new fusion recognition scheme based on nonlinear decision fusion, using the fuzzy integral to fuse the objective evidence supplied by each modality. The scheme also employs independent component analysis (ICA) for feature extraction and support vector machines (SVMs) to provide classification evidence. Recognition rate is used to evaluate the proposed scheme. Experimental results show that the scheme improves recognition performance substantially.

  1. Face recognition by combining eigenface method with different wavelet subbands

    Institute of Scientific and Technical Information of China (English)

    MA Yan; LI Shun-bao

    2006-01-01

    A method combining eigenfaces with different wavelet subbands for face recognition is proposed. Each training image is decomposed into multiple subbands to extract their eigenvector sets and projection vectors. In the recognition process, the inner product distance between the projection vectors of the test image and those of each training image is calculated. The training image corresponding to the maximum distance under the given threshold condition is taken as the final result. Experimental results on the ORL and YALE face databases show that, compared with the eigenface method applied directly to the image domain or to a single wavelet subband, the recognition accuracy of the proposed method is improved by 5% without affecting recognition speed.
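
    A sketch of the idea, assuming pre-cropped grayscale training images: decompose each image with a 2-D Haar DWT, fit PCA per subband, and score a test image by inner products with the training projections. The component count and the way subband scores are combined are assumptions, not the paper's exact settings.

    ```python
    # Sketch of eigenfaces on wavelet subbands: per-subband PCA plus inner-product
    # matching. Database loading is omitted; arrays are expected as NumPy images.
    import numpy as np
    import pywt
    from sklearn.decomposition import PCA

    def subbands(img):
        cA, (cH, cV, cD) = pywt.dwt2(img, "haar")
        return [cA.ravel(), cH.ravel(), cV.ravel(), cD.ravel()]

    def fit_subband_pca(train_imgs, n_components=20):
        models, projections = [], []
        for b in range(4):
            X = np.stack([subbands(img)[b] for img in train_imgs])
            pca = PCA(n_components=n_components).fit(X)
            models.append(pca)
            projections.append(pca.transform(X))
        return models, projections

    def identify(test_img, models, projections):
        # Sum inner-product scores over subbands; return the best training index
        # (the thresholding step described in the record is omitted).
        scores = 0.0
        for b, pca in enumerate(models):
            t = pca.transform(subbands(test_img)[b].reshape(1, -1))[0]
            scores = scores + projections[b] @ t
        return int(np.argmax(scores))
    ```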

  2. Risk for Bipolar Disorder is Associated with Face-Processing Deficits across Emotions

    Science.gov (United States)

    Brotman, Melissa A.; Skup, Martha; Rich, Brendan A.; Blair, Karina S.; Pine, Daniel S.; Blair, James R.; Leibenluft, Ellen

    2008-01-01

    The relationship between the risks for face-emotion labeling deficits and bipolar disorder (BD) among youths is examined. Findings show that youths at risk for BD did not show specific face-emotion recognition deficits. The need to provide more intense emotional information for face-emotion labeling of patients and at-risk youths is also discussed.

  4. Evidence for view-invariant face recognition units in unfamiliar face learning.

    Science.gov (United States)

    Etchells, David B; Brooks, Joseph L; Johnston, Robert A

    2017-05-01

    Many models of face recognition incorporate the idea of a face recognition unit (FRU), an abstracted representation formed from each experience of a face which aids recognition under novel viewing conditions. Some previous studies have failed to find evidence of this FRU representation. Here, we report three experiments which investigated this theoretical construct by modifying the face learning procedure from that in previous work. During learning, one or two views of previously unfamiliar faces were shown to participants in a serial matching task. Later, participants attempted to recognize both seen and novel views of the learned faces (recognition phase). Experiment 1 tested participants' recognition of a novel view, a day after learning. Experiment 2 was identical, but tested participants on the same day as learning. Experiment 3 repeated Experiment 1, but tested participants on a novel view that was outside the rotation of those views learned. Results revealed a significant advantage, across all experiments, for recognizing a novel view when two views had been learned compared to single view learning. The observed view invariance supports the notion that an FRU representation is established during multi-view face learning under particular learning conditions.

  5. 3D Face Compression and Recognition using Spherical Wavelet Parametrization

    Directory of Open Access Journals (Sweden)

    Rabab M. Ramadan

    2012-09-01

    Full Text Available In this research an innovative, fully automated 3D face compression and recognition system is presented. Several novelties are introduced to make the system robust and efficient. These include, first, an automatic pose correction and normalization process using curvature analysis for nose tip detection and iterative closest point (ICP) image registration; and second, the use of spherical wavelet coefficients for efficient representation of the 3D face. The spherical wavelet transform decomposes the face image into multi-resolution sub-images characterizing the underlying functions in a local fashion in both the spatial and frequency domains. Two representation features based on the spherical wavelet parameterization of the face image are proposed for 3D face compression and recognition. Principal component analysis (PCA) is used to project to a low-resolution sub-band. To evaluate the performance of the proposed approach, experiments were performed on the GAVAB face database. Experimental results show that the spherical wavelet coefficients yield excellent compression capabilities with a minimal set of features. Haar wavelet coefficients extracted from the face geometry image were found to generate good recognition results that outperform other methods working on the GAVAB database.

  6. 2D DOST based local phase pattern for face recognition

    Science.gov (United States)

    Moniruzzaman, Md.; Alam, Mohammad S.

    2017-05-01

    A new two-dimensional (2-D) Discrete Orthogonal Stockwell Transform (DOST) based Local Phase Pattern (LPP) technique is proposed for efficient face recognition. The proposed technique uses the 2-D DOST as preprocessing and a local phase pattern to form a robust feature signature which can effectively accommodate various 3D facial distortions and illumination variations. The S-transform, an extension of the ideas of the continuous wavelet transform (CWT), is known for its local spectral phase properties in time-frequency representation (TFR). It provides a frequency-dependent resolution of the time-frequency space and absolutely referenced local phase information while maintaining a direct relationship with the Fourier spectrum, which is unique in TFR. Utilizing the 2-D S-transform as preprocessing and building a local phase pattern from the extracted phase information yields a fast and efficient technique for face recognition. The proposed technique shows better correlation discrimination compared with alternative pattern recognition techniques such as wavelet- or Gabor-based face recognition. The performance of the proposed method has been tested on the Yale and Extended Yale facial databases under different conditions such as illumination variation and 3D changes in facial expression. Test results show that the proposed technique yields better performance compared with alternative time-frequency representation (TFR) based face recognition techniques.

  7. Face recognition using composite classifier with 2DPCA

    Science.gov (United States)

    Li, Jia; Yan, Ding

    2017-01-01

    In conventional face recognition, most researchers have focused on improving accuracy when the input data already belongs to the database, but have paid less attention to confirming whether the input data belongs to the database at all. This paper proposes an approach to face recognition using two-dimensional principal component analysis (2DPCA) and designs a novel composite classifier founded on statistical techniques. Moreover, the approach exploits the advantages of SVM and logistic regression for classification, and therefore improves accuracy considerably. To test the performance of the composite classifier, experiments were carried out on the ORL and FERET databases, and the results are reported and evaluated.
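
    A minimal sketch of the 2DPCA feature extraction referred to above (image covariance matrix plus projection onto its leading eigenvectors); the composite SVM/logistic-regression classifier that is the paper's main contribution is not shown.

    ```python
    # Minimal 2DPCA sketch: build the image covariance matrix from training images
    # and project each image matrix onto its top eigenvectors.
    import numpy as np

    def fit_2dpca(images, n_components=8):
        """images: array of shape (n_samples, h, w); returns a (w, n_components) projection matrix."""
        mean = images.mean(axis=0)
        G = np.zeros((images.shape[2], images.shape[2]))
        for A in images:
            D = A - mean
            G += D.T @ D                                 # accumulate image covariance
        G /= len(images)
        eigvals, eigvecs = np.linalg.eigh(G)             # eigenvalues in ascending order
        return eigvecs[:, ::-1][:, :n_components]        # keep the leading eigenvectors

    def project_2dpca(image, W):
        """Feature matrix Y = A W of shape (h, n_components)."""
        return image @ W
    ```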

  8. FACELOCK-Lock Control Security System Using Face Recognition-

    Science.gov (United States)

    Hirayama, Takatsugu; Iwai, Yoshio; Yachida, Masahiko

    A security system using biometric person authentication technologies is suited to various high-security situations. Technology based on face recognition has advantages such as lower user resistance and lower stress. However, facial appearance changes with facial pose, expression, lighting, and age. We have developed the FACELOCK security system based on our face recognition methods, which are robust to variations in facial appearance except for facial pose. Our system consists of clients and a server; the client communicates with the server through our protocol over a LAN. Users of our system do not need to be careful about their facial appearance.

  9. Individual discriminative face recognition models based on subsets of features

    DEFF Research Database (Denmark)

    Clemmensen, Line Katrine Harder; Gomez, David Delgado; Ersbøll, Bjarne Kjær

    2007-01-01

    of the face recognition problem. The elastic net model is able to select a subset of features with low computational effort compared to other state-of-the-art feature selection methods. Furthermore, the fact that the number of features usually is larger than the number of images in the data base makes feature...... selection techniques such as forward selection or lasso regression become inadequate. In the experimental section, the performance of the elastic net model is compared with geometrical and color based algorithms widely used in face recognition such as Procrustes nearest neighbor, Eigenfaces, or Fisher...
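
    The record above is truncated, so the sketch below only illustrates the general idea of elastic-net-based feature selection for face images: fit a model with a combined L1/L2 penalty and keep the features with non-zero coefficients. Data shapes and regularization settings are placeholders, not the individual discriminative models described in the paper.

    ```python
    # Generic sketch of elastic-net feature selection on flattened face images.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 32 * 32))      # placeholder: 60 images, 32x32 pixels each
    y = rng.integers(0, 3, size=60)         # placeholder identity labels

    clf = LogisticRegression(penalty="elasticnet", solver="saga",
                             l1_ratio=0.5, C=0.1, max_iter=5000)
    clf.fit(X, y)
    selected = np.flatnonzero(np.any(clf.coef_ != 0, axis=0))   # pixels kept by the penalty
    print(f"{selected.size} of {X.shape[1]} pixel features selected")
    ```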

  10. Generating virtual training samples for sparse representation of face images and face recognition

    Science.gov (United States)

    Du, Yong; Wang, Yu

    2016-03-01

    There are many challenges in face recognition. In real-world scenes, images of the same face vary with changing illumination, different expressions and poses, various ornaments, or even altered mental state. Limited training samples cannot convey these possible changes sufficiently in the training phase, and this has become one of the obstacles to improving face recognition accuracy. In this article, we view the multiplication of two face images as a virtual face image to expand the training set, and devise a representation-based method to perform face recognition. The generated virtual samples reflect some possible appearance and pose variations of the face. By multiplying a training sample with another sample from the same subject, we strengthen the facial contour feature and greatly suppress noise, so more essential information is retained. Also, the uncertainty of the training data is reduced as the number of training samples increases, which benefits the training phase. The devised representation-based classifier uses both the original and the newly generated samples to perform classification. In the classification phase, we first determine the K nearest training samples for the current test sample by calculating the Euclidean distances between the test sample and the training samples. Then, a linear combination of these selected training samples is used to represent the test sample, and the representation result is used to classify the test sample. The experimental results show that the proposed method outperforms some state-of-the-art face recognition methods.
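
    A sketch of the two steps described above, assuming element-wise multiplication of flattened images and a simple residual-based assignment rule; normalization and other details follow common practice rather than the paper's exact settings.

    ```python
    # Sketch: (1) virtual samples from pairwise products within each subject, and
    # (2) a representation-based classifier over the K nearest training samples.
    import numpy as np
    from itertools import combinations

    def add_virtual_samples(X, y):
        """X: (n, d) flattened face images, y: labels. Returns augmented (X, y)."""
        vx, vy = [], []
        for label in np.unique(y):
            idx = np.flatnonzero(y == label)
            for i, j in combinations(idx, 2):
                vx.append(X[i] * X[j])               # element-wise product as a virtual face
                vy.append(label)
        return np.vstack([X] + vx), np.concatenate([y, np.array(vy, dtype=y.dtype)])

    def classify(test, X, y, k=10):
        near = np.argsort(np.linalg.norm(X - test, axis=1))[:k]    # K nearest samples
        coeffs, *_ = np.linalg.lstsq(X[near].T, test, rcond=None)  # represent the test sample
        best, best_res = None, np.inf
        for label in np.unique(y[near]):
            mask = y[near] == label                  # contribution of this class only
            res = np.linalg.norm(test - X[near][mask].T @ coeffs[mask])
            if res < best_res:
                best, best_res = label, res
        return best
    ```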

  11. Can Massive but Passive Exposure to Faces Contribute to Face Recognition Abilities?

    Science.gov (United States)

    Yovel, Galit; Halsband, Keren; Pelleg, Michel; Farkash, Naomi; Gal, Bracha; Goshen-Gottstein, Yonatan

    2012-01-01

    Recent studies have suggested that individuation of other-race faces is more crucial for enhancing recognition performance than exposure that involves categorization of these faces to an identity-irrelevant criterion. These findings were primarily based on laboratory training protocols that dissociated exposure and individuation by using…

  12. Two dimensional discriminant neighborhood preserving embedding in face recognition

    Science.gov (United States)

    Pang, Meng; Jiang, Jifeng; Lin, Chuang; Wang, Binghui

    2015-03-01

    One of the key issues in face recognition is to extract the features of face images. In this paper, we propose a novel method, named two-dimensional discriminant neighborhood preserving embedding (2DDNPE), for image feature extraction and face recognition. 2DDNPE benefits from four techniques: neighborhood preserving embedding (NPE), locality preserving projection (LPP), image-based projection, and the Fisher criterion. Firstly, NPE and LPP are two popular manifold learning techniques which can optimally preserve the local geometric structure of the original samples from different angles. Secondly, image-based projection enables us to extract the optimal projection vectors directly from two-dimensional image matrices rather than vectors, which avoids the small sample size problem and preserves useful structural information embedded in the original images. Finally, the Fisher criterion applied in 2DDNPE boosts face recognition rates by minimizing the within-class distance while maximizing the between-class distance. To evaluate the performance of 2DDNPE, several experiments were conducted on the ORL and Yale face datasets. The results confirm that 2DDNPE outperforms existing 1D feature extraction methods, such as NPE, LPP, LDA and PCA, across all experiments with respect to recognition rate and training time. 2DDNPE also delivers consistently promising results compared with other competing 2D methods such as 2DNPP, 2DLPP, 2DLDA and 2DPCA.

  13. The Role of Higher Level Adaptive Coding Mechanisms in the Development of Face Recognition

    Science.gov (United States)

    Pimperton, Hannah; Pellicano, Elizabeth; Jeffery, Linda; Rhodes, Gillian

    2009-01-01

    Developmental improvements in face identity recognition ability are widely documented, but the source of children's immaturity in face recognition remains unclear. Differences in the way in which children and adults visually represent faces might underlie immaturities in face recognition. Recent evidence of a face identity aftereffect (FIAE),…

  14. Unified Model in Identity Subspace for Face Recognition

    Institute of Scientific and Technical Information of China (English)

    Pin Liao; Li Shen; Yi-Qiang Chen; Shu-Chang Liu

    2004-01-01

    Human faces have two important characteristics: (1) they are similar objects, and the specific variations of each face are similar to each other; (2) they are nearly bilaterally symmetric. Exploiting these two properties, we build a unified model in identity subspace (UMIS) as a novel technique for face recognition from only one example image per person. An identity subspace spanned by bilaterally symmetric bases, which compactly encodes identity information, is presented. The unified model, trained on a training set with multiple samples per class from a known group of people A, generalizes well to facial images of unknown individuals, and can be used to recognize facial images from an unknown group of people B with only one sample per subject. Extensive experimental results on two public databases (the Yale database and the Bern database) and our own database (the ICT-JDL database) demonstrate that the UMIS approach is significantly effective and robust for face recognition.

  15. Face Recognition System Based on Spectral Graph Wavelet Theory

    Directory of Open Access Journals (Sweden)

    R. Premalatha Kanikannan

    2014-09-01

    Full Text Available This study presents an efficient approach to automatic face recognition based on Spectral Graph Wavelet Theory (SGWT). The SGWT is analogous to the wavelet transform, with transform functions defined on the vertices of a weighted graph. The given face image is first decomposed by the SGWT. The energies of the resulting sub-bands are fused together and used as the feature vector for the corresponding image. The performance of the proposed system is analyzed on the ORL face database using a nearest neighbor classifier. The face images used in this study have variations in pose, expression and facial details. The results indicate that the proposed SGWT-based system performs better than the wavelet transform, achieving 94% recognition accuracy.

  16. Semisupervised Kernel Marginal Fisher Analysis for Face Recognition

    Directory of Open Access Journals (Sweden)

    Ziqiang Wang

    2013-01-01

    Full Text Available Dimensionality reduction is a key problem in face recognition due to the high dimensionality of face images. To cope with this problem effectively, a novel dimensionality reduction algorithm called semisupervised kernel marginal Fisher analysis (SKMFA) for face recognition is proposed in this paper. SKMFA can make use of both labelled and unlabelled samples to learn the projection matrix for nonlinear dimensionality reduction. Meanwhile, it successfully avoids the singularity problem by not calculating the matrix inverse. In addition, in order to make the nonlinear structure captured by the data-dependent kernel consistent with the intrinsic manifold structure, a manifold-adaptive nonparametric kernel is incorporated into the learning process of SKMFA. Experimental results on three face image databases demonstrate the effectiveness of the proposed algorithm.

  17. Face recognition with histograms of fractional differential gradients

    Science.gov (United States)

    Yu, Lei; Ma, Yan; Cao, Qi

    2014-05-01

    It has been shown that fractional differentiation can enhance edge information and nonlinearly preserve textural detail in an image. This paper investigates its use for face recognition and presents a local descriptor called histograms of fractional differential gradients (HFDG) to extract facial visual features. HFDG encodes a face image into gradient patterns using multi-orientation fractional differential masks, from which histograms of gradient directions are computed as the face representation. Experimental results on the Yale, face recognition technology (FERET), Carnegie Mellon University pose, illumination, and expression (CMU PIE), and A. Martinez and R. Benavente (AR) databases validate the feasibility of the proposed method and show that HFDG outperforms local binary patterns (LBP), histograms of oriented gradients (HOG), enhanced local directional patterns (ELDP), and Gabor feature-based methods.

  18. Why the long face? The importance of vertical image structure for biological "barcodes" underlying face recognition.

    Science.gov (United States)

    Spence, Morgan L; Storrs, Katherine R; Arnold, Derek H

    2014-07-29

    Humans are experts at face recognition. The mechanisms underlying this complex capacity are not fully understood. Recently, it has been proposed that face recognition is supported by a coarse-scale analysis of visual information contained in horizontal bands of contrast distributed along the vertical image axis-a biological facial "barcode" (Dakin & Watt, 2009). A critical prediction of the facial barcode hypothesis is that the distribution of image contrast along the vertical axis will be more important for face recognition than image distributions along the horizontal axis. Using a novel paradigm involving dynamic image distortions, a series of experiments are presented examining famous face recognition impairments from selectively disrupting image distributions along the vertical or horizontal image axes. Results show that disrupting the image distribution along the vertical image axis is more disruptive for recognition than matched distortions along the horizontal axis. Consistent with the facial barcode hypothesis, these results suggest that human face recognition relies disproportionately on appropriately scaled distributions of image contrast along the vertical image axis.

  19. Recognition memory for words and faces in the very old.

    Science.gov (United States)

    Diesfeldt, H; Vink, M

    1989-09-01

    The assessment of very elderly people is hindered by a scarcity of normative and reliability data for non-verbal memory tests. We tested the suitability of Warrington's Recognition Memory Test (RMT) for use with the elderly. The RMT consists of verbal (Recognition Memory for Words, RMW) and non-verbal (Recognition Memory for Faces, RMF) subtests. The facial recognition test was used in the standard format and a Dutch-language version of the word recognition test was developed using low frequency (10 or less/million) monosyllabic words. Eighty-nine subjects, varying in age from 69 to 93, were tested with the RMF. Means and SD are provided for three age groups (69-79, 80-84 and 85-93). Forty-five consecutive subjects were tested both with the RMW and the RMF. Recognition memory for words was better than recognition memory for faces in this sample. Moderate correlations (0.30-0.48) were found between RMT and WAIS Vocabulary and Raven's Coloured Progressive Matrices scores. Warrington's RMT was well tolerated, even by very elderly adults. The standardization data for the elderly over 70 add to the usefulness of this test of verbal and non-verbal episodic memory.

  20. Design of embedded intelligent monitoring system based on face recognition

    Science.gov (United States)

    Liang, Weidong; Ding, Yan; Zhao, Liangjin; Li, Jia; Hu, Xuemei

    2017-01-01

    In this paper, a new embedded intelligent monitoring system based on face recognition is proposed. The system uses a Raspberry Pi as the central processor. A sensor group built around a Zigbee module has been designed to help the system work better, and two alarm modes are provided, using the Internet and a 3G modem. Experimental results show that the system can recognize human faces under various light intensities and send alarm information in real time.
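
    An architecture-level sketch of the monitoring loop, with a stock OpenCV Haar cascade standing in for the face recognition module and a placeholder function standing in for the Zigbee/Internet/3G alarm path.

    ```python
    # Sketch of the monitoring loop: grab frames from a camera, detect faces, and
    # trigger an alert hook. The actual recognition model and alarm channels of the
    # described system are replaced here by a detector and a placeholder function.
    import cv2

    def send_alarm(frame):
        # Placeholder: the described system would notify over the Internet or a
        # 3G modem; here we simply save the offending frame to disk.
        cv2.imwrite("alarm_frame.jpg", frame)

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(0)                       # Raspberry Pi camera / USB webcam

    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        gray = cv2.equalizeHist(gray)               # helps under varying light intensity
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) > 0:
            send_alarm(frame)
    cap.release()
    ```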

  1. RESEARCH ON FACE RECOGNITION BASED ON IMED AND 2DPCA

    Institute of Scientific and Technical Information of China (English)

    Han Ke; Zhu Xiuchang

    2006-01-01

    This letter proposes an effective method for recognizing face images by combining two-Dimensional Principal Component Analysis (2DPCA) with the IMage Euclidean Distance (IMED). The proposed method comprises four main stages. The first stage uses wavelet decomposition to extract the low-frequency subimage from each original face image and discards the other three subimages. The second stage applies IMED to the face images. In the third stage, 2DPCA is employed to extract face features from the results of the second stage. Finally, a Support Vector Machine (SVM) is applied to classify the extracted face features. Experimental results on the AR face image database show that the proposed method yields better recognition performance than 2DPCA without IMED.
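
    A sketch of the IMED stage, following the standard formulation of the IMage Euclidean Distance as a Gaussian-weighted quadratic form over pixel differences; the spatial sigma is an assumed value, and the explicit weight matrix is only practical for small images.

    ```python
    # Sketch of IMED: nearby pixels contribute jointly to the distance through a
    # Gaussian weight matrix over pixel coordinates.
    import numpy as np

    def imed_matrix(h, w, sigma=1.0):
        """Gaussian weight matrix G of shape (h*w, h*w) over pixel positions."""
        ys, xs = np.mgrid[0:h, 0:w]
        coords = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
        d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)

    def imed(img_a, img_b, G):
        """IMED between two images of identical shape, given a precomputed G."""
        diff = (img_a - img_b).ravel()
        return float(np.sqrt(diff @ G @ diff))
    ```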

  2. Facial emotion recognition in bipolar disorder: a critical review.

    Science.gov (United States)

    Rocca, Cristiana Castanho de Almeida; Heuvel, Eveline van den; Caetano, Sheila C; Lafer, Beny

    2009-06-01

    A literature review of controlled studies from the last 18 years on emotion recognition deficits in bipolar disorder. A bibliographic search for controlled studies with samples larger than 10 participants, published from 1990 to June 2008, was completed in Medline, Lilacs, PubMed and ISI. Thirty-two papers were evaluated. Euthymic bipolar disorder patients showed impairment in recognizing disgust and fear. Manic BD patients showed difficulty recognizing fearful and sad faces. Pediatric bipolar disorder patients and children at risk showed impairment in their capacity to recognize emotions in adult and child faces. Bipolar disorder patients were more accurate at recognizing facial emotions than schizophrenic patients. Bipolar disorder patients present impaired recognition of disgust, fear and sadness that can be partially attributed to mood state. In mania, they have difficulty recognizing fear and disgust. Bipolar disorder patients were more accurate at recognizing emotions than depressive and schizophrenic patients. Children with bipolar disorder show a tendency to misjudge extreme facial expressions as being moderate or mild in intensity. Affective and cognitive deficits in bipolar disorder vary according to mood state. Follow-up studies re-testing bipolar disorder patients after recovery are needed in order to investigate whether these abnormalities reflect a state or trait marker and can be considered an endophenotype. Future studies should aim at standardizing tasks and designs.

  3. Recognition advantage of happy faces: tracing the neurocognitive processes.

    Science.gov (United States)

    Calvo, Manuel G; Beltrán, David

    2013-09-01

    The present study aimed to identify the brain processes-and their time course-underlying the typical behavioral recognition advantage of happy facial expressions. To this end, we recorded EEG activity during an expression categorization task for happy, angry, fearful, sad, and neutral faces, and the correlation between event-related-potential (ERP) patterns and recognition performance was assessed. N170 (150-180 ms) was enhanced for angry, fearful and sad faces; N2 was reduced and early posterior negativity (EPN; both, 200-320 ms) was enhanced for happy and angry faces; P3b (350-450 ms) was reduced for happy and neutral faces; and slow positive wave (SPW; 700-800 ms) was reduced for happy faces. This reveals (a) an early processing (N170) of negative affective valence (i.e., angry, fearful, and sad), (b) discrimination (N2 and EPN) of affective intensity or arousal (i.e., angry and happy), and (c) facilitated categorization (P3b) and decision (SPW) due to expressive distinctiveness (i.e., happy). In addition, N2, EPN, P3b, and SPW were related to categorization accuracy and speed. This suggests that conscious expression recognition and the typical happy face advantage depend on encoding of expressive intensity and, especially, on later response selection, rather than on the early processing of affective valence. Copyright © 2013 Elsevier Ltd. All rights reserved.

  4. Holistic face processing can inhibit recognition of forensic facial composites.

    Science.gov (United States)

    McIntyre, Alex H; Hancock, Peter J B; Frowd, Charlie D; Langton, Stephen R H

    2016-04-01

    Facial composite systems help eyewitnesses to show the appearance of criminals. However, likenesses created by unfamiliar witnesses will not be completely accurate, and people familiar with the target can find them difficult to identify. Faces are processed holistically; we explore whether this impairs identification of inaccurate composite images and whether recognition can be improved. In Experiment 1 (n = 64) an imaging technique was used to make composites of celebrity faces more accurate and identification was contrasted with the original composite images. Corrected composites were better recognized, confirming that errors in production of the likenesses impair identification. The influence of holistic face processing was explored by misaligning the top and bottom parts of the composites (cf. Young, Hellawell, & Hay, 1987). Misalignment impaired recognition of corrected composites but identification of the original, inaccurate composites significantly improved. This effect was replicated with facial composites of noncelebrities in Experiment 2 (n = 57). We conclude that, like real faces, facial composites are processed holistically: recognition is impaired because unlike real faces, composites contain inaccuracies and holistic face processing makes it difficult to perceive identifiable features. This effect was consistent across composites of celebrities and composites of people who are personally familiar. Our findings suggest that identification of forensic facial composites can be enhanced by presenting composites in a misaligned format.

  5. Part-based set matching for face recognition in surveillance

    Science.gov (United States)

    Zheng, Fei; Wang, Guijin; Lin, Xinggang

    2013-12-01

    Face recognition in surveillance is a hot topic in computer vision due to the strong demand for public security, and it remains a challenging task owing to large variations in viewpoint and illumination across cameras. In surveillance, image sets are the most natural form of input once tracking is incorporated. Recent advances in set-based matching also show great potential for exploring the feature space for face recognition by making use of multiple samples of subjects. In this paper, we propose a novel method that exploits the salient features (such as the eyes, nose, and mouth) in set-based matching. To represent image sets, we adopt the affine hull model, which can generate unseen appearances in the form of affine combinations of sample images. In our proposal, a robust part detector is first used to find four salient parts for each face image: two eyes, nose, and mouth. For each part, we construct an affine hull model by using the local binary pattern histograms of multiple samples of the part. We also construct an affine model for the whole face region. Then, we find the closest distance between the corresponding affine hull models to measure the similarity between parts/face regions, and a weighting scheme is introduced to combine the five distances (four parts and the whole face region) to obtain the final distance between two subjects. In the recognition phase, a nearest neighbor classifier is used. Experiments on the public ChokePoint dataset and our dataset demonstrate the superior performance of our method.
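
    A sketch of the affine hull distance used to compare two sets of part features, assuming each set is represented by columns of feature vectors (e.g., LBP histograms); the part detector and the weighting scheme over the five distances are not shown.

    ```python
    # Sketch: smallest distance between the affine hulls of two feature sets, found
    # by an unconstrained least-squares problem over the affine coefficients.
    import numpy as np

    def affine_hull_distance(A, B):
        """A: (d, nA) and B: (d, nB), columns are feature vectors of the two sets."""
        mu_a, mu_b = A.mean(axis=1), B.mean(axis=1)
        Ua, Ub = A - mu_a[:, None], B - mu_b[:, None]      # directions spanning each hull
        # Minimize ||(mu_a + Ua va) - (mu_b + Ub vb)|| over the free coefficients.
        M = np.hstack([Ua, -Ub])
        v, *_ = np.linalg.lstsq(M, mu_b - mu_a, rcond=None)
        return float(np.linalg.norm(mu_a + M @ v - mu_b))
    ```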

  6. Emotion-attention interactions in recognition memory for distractor faces.

    Science.gov (United States)

    Srinivasan, Narayanan; Gupta, Rashmi

    2010-04-01

    Effective filtering of distractor information has been shown to be dependent on perceptual load. Given the salience of emotional information and the presence of emotion-attention interactions, we wanted to explore the recognition memory for emotional distractors especially as a function of focused attention and distributed attention by manipulating load and the spatial spread of attention. We performed two experiments to study emotion-attention interactions by measuring recognition memory performance for distractor neutral and emotional faces. Participants performed a color discrimination task (low-load) or letter identification task (high-load) with a letter string display in Experiment 1 and a high-load letter identification task with letters presented in a circular array in Experiment 2. The stimuli were presented against a distractor face background. The recognition memory results show that happy faces were recognized better than sad faces under conditions of less focused or distributed attention. When attention is more spatially focused, sad faces were recognized better than happy faces. The study provides evidence for emotion-attention interactions in which specific emotional information like sad or happy is associated with focused or distributed attention respectively. Distractor processing with emotional information also has implications for theories of attention.

  7. Adaptive Deep Supervised Autoencoder Based Image Reconstruction for Face Recognition

    Directory of Open Access Journals (Sweden)

    Rongbing Huang

    2016-01-01

    Full Text Available Based on a special type of denoising autoencoder (DAE) and image reconstruction, we present a novel supervised deep learning framework for face recognition (FR). Unlike existing deep autoencoders, which are unsupervised, the proposed method takes class label information from the training samples into account in the deep learning procedure and can automatically discover the underlying nonlinear manifold structures. Specifically, we define an Adaptive Deep Supervised Network Template (ADSNT) with a supervised autoencoder which is trained to extract characteristic features from corrupted/clean facial images and reconstruct the corresponding similar facial images. The reconstruction is realized by a so-called "bottleneck" neural network that learns to map face images into a low-dimensional vector and to reconstruct the corresponding face images from the mapping vectors. Having trained the ADSNT, a new face image can then be recognized by comparing its reconstructed image with the individual gallery images. Extensive experiments on three databases, including AR, PubFig, and Extended Yale B, demonstrate that the proposed method can significantly improve the accuracy of face recognition under severe illumination variation, pose change, and partial occlusion.
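
    A minimal sketch of a supervised denoising autoencoder in the spirit described above: the encoder maps a corrupted face to a low-dimensional code, one head reconstructs the clean face, and a second head predicts the identity so that label information shapes the features. Layer sizes, noise level, and loss weighting are illustrative, not the ADSNT configuration.

    ```python
    # Sketch: supervised denoising autoencoder with a reconstruction head and a
    # classification head trained jointly.
    import torch
    import torch.nn as nn

    class SupervisedDAE(nn.Module):
        def __init__(self, dim=32 * 32, code=128, n_classes=100):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(dim, 512), nn.ReLU(),
                                         nn.Linear(512, code), nn.ReLU())
            self.decoder = nn.Sequential(nn.Linear(code, 512), nn.ReLU(),
                                         nn.Linear(512, dim))
            self.classifier = nn.Linear(code, n_classes)

        def forward(self, x_noisy):
            z = self.encoder(x_noisy)                       # low-dimensional code
            return self.decoder(z), self.classifier(z)

    def train_step(model, opt, x_clean, labels, noise_std=0.1, alpha=0.5):
        x_noisy = x_clean + noise_std * torch.randn_like(x_clean)   # corrupt the input
        recon, logits = model(x_noisy)
        loss = nn.functional.mse_loss(recon, x_clean) \
             + alpha * nn.functional.cross_entropy(logits, labels)  # label supervision
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()
    ```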

  8. Neural and genetic foundations of face recognition and prosopagnosia.

    Science.gov (United States)

    Grüter, Thomas; Grüter, Martina; Carbon, Claus-Christian

    2008-03-01

    Faces are of essential importance for human social life. They provide valuable information about the identity, expression, gaze, health, and age of a person. Recent face-processing models assume highly interconnected neural structures between different temporal, occipital, and frontal brain areas with several feedback loops. A selective deficit in the visual learning and recognition of faces is known as prosopagnosia, which occurs in both acquired and congenital forms. Recently, a hereditary subtype of congenital prosopagnosia with a very high prevalence rate of 2.5% has been identified. Recent research shows that hereditary prosopagnosia is a clearly circumscribed face-processing deficit with a characteristic set of clinical symptoms. Comparing the face processing of people with prosopagnosia with that of controls can help to develop a more conclusive and integrated model of face processing. Here, we provide a summary of the current state of face-processing research. We also describe the different types of prosopagnosia and present the set of typical symptoms found in the hereditary type. Finally, we discuss the implications for future face recognition research.

  9. Face Spoof Attack Recognition Using Discriminative Image Patches

    Directory of Open Access Journals (Sweden)

    Zahid Akhtar

    2016-01-01

    Full Text Available Face recognition systems are now being used in many applications such as border crossings, banks, and mobile payments. The wide-scale deployment of facial recognition systems has attracted intensive attention to the reliability of face biometrics against spoof attacks, where a photo, a video, or a 3D mask of a genuine user’s face can be used to gain illegitimate access to facilities or services. Though several face antispoofing or liveness detection methods (which determine at the time of capture whether a face is live or spoofed) have been proposed, the issue remains unsolved due to the difficulty of finding discriminative and computationally inexpensive features and methods for spoof attacks. In addition, existing techniques use the whole face image or complete video for liveness detection. However, often certain face regions (video frames) are redundant or correspond to clutter in the image (video), generally leading to lower performance. Therefore, we propose seven novel methods to find discriminative image patches, which we define as regions that are salient, instrumental, and class-specific. Four well-known classifiers, namely support vector machine (SVM), Naive Bayes, Quadratic Discriminant Analysis (QDA), and Ensemble, are then used to distinguish between genuine and spoof faces using a voting-based scheme. Experimental analysis on two publicly available databases (Idiap REPLAY-ATTACK and CASIA-FASD) shows promising results compared to existing works.
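
    The snippet below is a simplified sketch of patch-level liveness classification with majority voting, assuming fixed non-overlapping patches and a single RBF-kernel SVM; the paper's seven discriminative-patch selection methods and its four classifiers are not reproduced here.

```python
import numpy as np
from sklearn.svm import SVC

def extract_patches(img, patch=32, stride=32):
    """Split a grayscale face image into non-overlapping patches
    (a stand-in for the paper's discriminative patch selection)."""
    h, w = img.shape
    return [img[r:r + patch, c:c + patch].ravel()
            for r in range(0, h - patch + 1, stride)
            for c in range(0, w - patch + 1, stride)]

def train_patch_classifier(face_images, labels):
    """Train one SVM on individual patches; labels (1 = live, 0 = spoof)
    are copied from each image to all of its patches."""
    X, y = [], []
    for img, lab in zip(face_images, labels):
        for p in extract_patches(img):
            X.append(p)
            y.append(lab)
    return SVC(kernel="rbf", gamma="scale").fit(np.asarray(X), np.asarray(y))

def predict_by_voting(clf, img):
    """Classify every patch of a probe face and take a majority vote
    (ties count as live)."""
    votes = clf.predict(np.asarray(extract_patches(img)))
    return int(votes.mean() >= 0.5)
```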

  10. Orienting to face expression during encoding improves men's recognition of own gender faces.

    Science.gov (United States)

    Fulton, Erika K; Bulluck, Megan; Hertzog, Christopher

    2015-10-01

    It is unclear why women have superior episodic memory of faces, but the benefit may be partially the result of women engaging in superior processing of facial expressions. Therefore, we hypothesized that orienting instructions to attend to facial expression at encoding would significantly improve men's memory of faces and possibly reduce gender differences. We directed 203 college students (122 women) to study 120 faces under instructions to orient to either the person's gender or their emotional expression. They later took a recognition test of these faces by either judging whether they had previously studied the same person or that person with the exact same expression; the latter test evaluated recollection of specific facial details. Orienting to facial expressions during encoding significantly improved men's recognition of own-gender faces and eliminated the advantage that women had for male faces under gender orienting instructions. Although gender differences in spontaneous strategy use when orienting to faces cannot fully account for gender differences in face recognition, orienting men to facial expression during encoding is one way to significantly improve their episodic memory for male faces.

  11. Real-time face swapping as a tool for understanding infant self-recognition

    CERN Document Server

    Nguyen, Sao Mai; Asada, Minoru

    2011-01-01

    To study the preference of infants for contingency of movements and familiarity of faces during a self-recognition task, we built, as an accurate and instantaneous imitator, a real-time face-swapper for videos. We present a non-constraint face-swapper based on 3D visual tracking that achieves real-time performance through parallel computing. Our imitator system is particularly suited for experiments involving children with Autistic Spectrum Disorder, who are often strongly disturbed by the constraints of other methods.

  12. Emotional Recognition in Autism Spectrum Conditions from Voices and Faces

    Science.gov (United States)

    Stewart, Mary E.; McAdam, Clair; Ota, Mitsuhiko; Peppe, Sue; Cleland, Joanne

    2013-01-01

    The present study reports on a new vocal emotion recognition task and assesses whether people with autism spectrum conditions (ASC) perform differently from typically developed individuals on tests of emotional identification from both the face and the voice. The new test of vocal emotion contained trials in which the vocal emotion of the sentence…

  13. Combining Illumination Normalization Methods for Better Face Recognition

    NARCIS (Netherlands)

    Boom, B.J.; Tao, Q.; Spreeuwers, L.J.; Veldhuis, R.N.J.

    2009-01-01

    Face recognition under uncontrolled illumination conditions is partly an unsolved problem. There are two categories of illumination normalization methods. The first category performs local preprocessing, correcting a pixel value based on its local neighborhood in the image. The second category…

  14. Face recognition with multi-resolution spectral feature images.

    Directory of Open Access Journals (Sweden)

    Zhan-Li Sun

    Full Text Available The one-sample-per-person problem has become an active research topic for face recognition in recent years because of its challenges and significance for real-world applications. However, achieving relatively higher recognition accuracy is still a difficult problem due to, usually, too few training samples being available and variations of illumination and expression. To alleviate the negative effects caused by these unfavorable factors, in this paper we propose a more accurate spectral feature image-based 2DLDA (two-dimensional linear discriminant analysis) ensemble algorithm for face recognition, with one sample image per person. In our algorithm, multi-resolution spectral feature images are constructed to represent the face images; this can greatly enlarge the training set. The proposed method is inspired by our finding that, among these spectral feature images, features extracted from some orientations and scales using 2DLDA are not sensitive to variations of illumination and expression. In order to maintain the positive characteristics of these filters and to make correct category assignments, the strategy of classifier committee learning (CCL) is designed to combine the results obtained from different spectral feature images. Using the above strategies, the negative effects caused by those unfavorable factors can be alleviated efficiently in face recognition. Experimental results on the standard databases demonstrate the feasibility and efficiency of the proposed method.

  15. Face recognition with multi-resolution spectral feature images.

    Science.gov (United States)

    Sun, Zhan-Li; Lam, Kin-Man; Dong, Zhao-Yang; Wang, Han; Gao, Qing-Wei; Zheng, Chun-Hou

    2013-01-01

    The one-sample-per-person problem has become an active research topic for face recognition in recent years because of its challenges and significance for real-world applications. However, achieving relatively higher recognition accuracy is still a difficult problem due to, usually, too few training samples being available and variations of illumination and expression. To alleviate the negative effects caused by these unfavorable factors, in this paper we propose a more accurate spectral feature image-based 2DLDA (two-dimensional linear discriminant analysis) ensemble algorithm for face recognition, with one sample image per person. In our algorithm, multi-resolution spectral feature images are constructed to represent the face images; this can greatly enlarge the training set. The proposed method is inspired by our finding that, among these spectral feature images, features extracted from some orientations and scales using 2DLDA are not sensitive to variations of illumination and expression. In order to maintain the positive characteristics of these filters and to make correct category assignments, the strategy of classifier committee learning (CCL) is designed to combine the results obtained from different spectral feature images. Using the above strategies, the negative effects caused by those unfavorable factors can be alleviated efficiently in face recognition. Experimental results on the standard databases demonstrate the feasibility and efficiency of the proposed method.

  16. Regional registration for expression resistant 3-D face recognition

    NARCIS (Netherlands)

    Alyuz, Nese; Gökberk, B.; Akarun, Lale

    Biometric identification from three-dimensional (3-D) facial surface characteristics has become popular, especially in high security applications. In this paper, we propose a fully automatic expression insensitive 3-D face recognition system. Surface deformations due to facial expressions are a

  17. An Inner Face Advantage in Children's Recognition of Familiar Peers

    Science.gov (United States)

    Ge, Liezhong; Anzures, Gizelle; Wang, Zhe; Kelly, David J.; Pascalis, Olivier; Quinn, Paul C.; Slater, Alan M.; Yang, Zhiliang; Lee, Kang

    2008-01-01

    Children's recognition of familiar own-age peers was investigated. Chinese children (4-, 8-, and 14-year-olds) were asked to identify their classmates from photographs showing the entire face, the internal facial features only, the external facial features only, or the eyes, nose, or mouth only. Participants from all age groups were familiar with…

  18. Non-frontal model based approach to forensic face recognition

    NARCIS (Netherlands)

    Dutta, Abhishek; Veldhuis, Raymond; Spreeuwers, Luuk

    2012-01-01

    In this paper, we propose a non-frontal model based approach which ensures that a face recognition system always gets to compare images having similar view (or pose). This requires a virtual suspect reference set that consists of non-frontal suspect images having pose similar to the surveillance view…

  20. Impact of Intention on the ERP Correlates of Face Recognition

    Science.gov (United States)

    Guillaume, Fabrice; Tiberghien, Guy

    2013-01-01

    The present study investigated the impact of study-test similarity on face recognition by manipulating, in the same experiment, the expression change (same vs. different) and the task-processing context (inclusion vs. exclusion instructions) as within-subject variables. Consistent with the dual-process framework, the present results showed that…

  1. Enhanced and Fast Face Recognition by Hashing Algorithm

    Directory of Open Access Journals (Sweden)

    M. Sharif

    2012-08-01

    Full Text Available This paper presents a face hashing technique for fast face recognition. The proposed technique employs two existing algorithms, i.e., 2-D discrete cosine transformation and K-means clustering. The image goes through different pre-processing phases, and the two above-mentioned algorithms are used to obtain the hash value of the face image. The searching process is sped up by introducing a modified form of binary search. A new database architecture called Facebases has also been introduced to further speed up the searching process.
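
    A small sketch of how DCT signatures and K-means clustering can be combined into a hash-style index for faster lookup; this only illustrates the general idea, assuming grayscale gallery images of equal size, and does not follow the paper's Facebases architecture or its modified binary search.

```python
import numpy as np
from scipy.fft import dctn
from sklearn.cluster import KMeans

def dct_signature(img, k=8):
    """Keep the top-left k x k block of the 2-D DCT as a compact descriptor."""
    return dctn(img.astype(float), norm="ortho")[:k, :k].ravel()

def build_hash_index(gallery_images, n_buckets=16, k=8):
    """Cluster the DCT signatures with k-means; the cluster id acts as the
    hash value, so a probe is only compared against faces in its bucket."""
    sigs = np.array([dct_signature(im, k) for im in gallery_images])
    km = KMeans(n_clusters=n_buckets, n_init=10, random_state=0).fit(sigs)
    buckets = {}
    for idx, h in enumerate(km.labels_):
        buckets.setdefault(int(h), []).append(idx)
    return km, buckets

def probe(km, buckets, gallery_images, query, k=8):
    """Hash the query, then search only its bucket by nearest DCT signature."""
    q = dct_signature(query, k)
    h = int(km.predict(q[None, :])[0])
    candidates = buckets.get(h, [])
    dists = [np.linalg.norm(q - dct_signature(gallery_images[i], k))
             for i in candidates]
    return candidates[int(np.argmin(dists))] if candidates else None
```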

  2. Word and face recognition deficits following posterior cerebral artery stroke

    DEFF Research Database (Denmark)

    Kuhn, Christina D.; Asperud Thomsen, Johanne; Delfi, Tzvetelina

    2016-01-01

    …included (four with right hemisphere damage, three with left, tested at least 1 year post stroke). We examined word and face recognition using a delayed match-to-sample paradigm with four different categories of stimuli: cropped faces, full faces, words, and cars. Reading speed and word length effects… were measured in a separate reading test. Patients were compared to controls using single case statistics. Combining the results from the two experiments, two patients with right hemisphere damage showed deficits in all categories. More interestingly, of the remaining patients, one with right and two…

  3. Undersampled face recognition via robust auxiliary dictionary learning.

    Science.gov (United States)

    Wei, Chia-Po; Wang, Yu-Chiang Frank

    2015-06-01

    In this paper, we address the problem of robust face recognition with undersampled training data. Given only one or a few training images per subject, we present a novel recognition approach that not only handles test images with large intraclass variations, such as illumination and expression, but also handles test images corrupted by occlusion or disguise not present during training. This is achieved by learning a robust auxiliary dictionary from subjects not of interest. Together with the undersampled training data, both intraclass and interclass variations can thus be successfully handled, while unseen occlusions can be automatically disregarded for improved recognition. Our experiments on four face image datasets confirm the effectiveness and robustness of our approach, which is shown to outperform state-of-the-art sparse representation-based methods.

  4. A Novel Face Recognition Algorithm for Distinguishing Faces with Various Angles

    Institute of Scientific and Technical Information of China (English)

    Yong-Zhong Lu

    2008-01-01

    In order to distinguish faces at various angles during face recognition, an algorithm combining approximate dynamic programming (ADP), in the form of action dependent heuristic dynamic programming (ADHDP), with particle swarm optimization (PSO) is presented. ADP is used to dynamically change the values of the PSO parameters. During face recognition, the discrete cosine transformation (DCT) is first introduced to reduce negative effects. Then, the Karhunen-Loeve (K-L) transformation is used to compress images and decrease data dimensions. According to principal component analysis (PCA), the main parts of the vectors are extracted for data representation. Finally, a radial basis function (RBF) neural network is trained to recognize the various faces, with the training of the RBF network driven by ADP-PSO. Experiments on the ORL Face Database give a clear view of the method's accuracy and efficiency.

  5. The Facespan-the perceptual span for face recognition.

    Science.gov (United States)

    Papinutto, Michael; Lao, Junpeng; Ramon, Meike; Caldara, Roberto; Miellet, Sébastien

    2017-05-01

    In reading, the perceptual span is a well-established concept that refers to the amount of information that can be read in a single fixation. Surprisingly, despite extensive empirical interest in determining the perceptual strategies deployed to process faces and an ongoing debate regarding the factors or mechanism(s) underlying efficient face processing, the perceptual span for faces-the Facespan-remains undetermined. To address this issue, we applied the gaze-contingent Spotlight technique implemented in an old-new face recognition paradigm. This procedure allowed us to parametrically vary the amount of facial information available at a fixated location in order to determine the minimal aperture size at which face recognition performance plateaus. As expected, accuracy increased nonlinearly with spotlight size apertures. Analyses of Structural Similarity comparing the available information during spotlight and natural viewing conditions indicate that the Facespan-the minimum spatial extent of preserved facial information leading to comparable performance as in natural viewing-encompasses 7° of visual angle in our viewing conditions (size of the face stimulus: 15.6°; viewing distance: 70 cm), which represents 45% of the face. The present findings provide a benchmark for future investigations that will address if and how the Facespan is modulated by factors such as cultural, developmental, idiosyncratic, or task-related differences.

  6. Illumination Invariant Face Recognition using SQI and Weighted LBP Histogram

    Directory of Open Access Journals (Sweden)

    Mohsen Biglari

    2014-12-01

    Full Text Available Face recognition under uneven illumination is still an open problem. One of the main challenges in real-world face recognition systems is illumination variation. In this paper, a novel illumination invariant face recognition approach based on the Self Quotient Image (SQI) and a weighted Local Binary Pattern (WLBP) histogram is proposed. The performance of the system is increased by using different sigma values of the SQI for training and testing. Furthermore, using two multi-region uniform LBP operators simultaneously for feature extraction makes the system more robust to illumination variation. This approach gathers information about the image at different local and global levels. The weighted Chi-square statistic is used for histogram comparison, and the nearest neighbor (1-NN) rule is used as the classifier. The weighting emphasizes the more important regions of the face. The proposed approach is compared with some new and traditional methods, such as QI, SQI, QIR, MQI, DMQI, DSFQI, PCA and LDA, on the Yale face database B and the CMU-PIE database. The experimental results show that the proposed method outperforms the other tested methods.
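
    As an illustration of the matching step, the sketch below computes region-wise uniform LBP histograms with scikit-image and compares two faces with a weighted Chi-square distance; the grid size, LBP parameters, and region weights are assumptions, and the SQI preprocessing is not shown.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def region_lbp_histograms(img, grid=(7, 7), P=8, R=1):
    """Uniform LBP histogram for each cell of a grid over the face image."""
    lbp = local_binary_pattern(img, P, R, method="uniform")
    n_bins = P + 2                      # uniform patterns + one non-uniform bin
    h, w = img.shape
    hists = []
    for r in range(grid[0]):
        for c in range(grid[1]):
            cell = lbp[r * h // grid[0]:(r + 1) * h // grid[0],
                       c * w // grid[1]:(c + 1) * w // grid[1]]
            hist, _ = np.histogram(cell, bins=n_bins, range=(0, n_bins),
                                   density=True)
            hists.append(hist)
    return np.array(hists)              # (n_cells, n_bins)

def weighted_chi_square(H1, H2, weights, eps=1e-10):
    """Weighted Chi-square distance; larger weights stress eye/mouth cells."""
    chi = ((H1 - H2) ** 2 / (H1 + H2 + eps)).sum(axis=1)   # per-cell distance
    return float((np.asarray(weights) * chi).sum())
```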

  7. Anti Theft Mechanism Through Face recognition Using FPGA

    Science.gov (United States)

    Sundari, Y. B. T.; Laxminarayana, G.; Laxmi, G. Vijaya

    2012-11-01

    The use of a vehicle is a must for everyone, and at the same time protection from theft is very important. Prevention of vehicle theft can be managed remotely by an authorized person. The location of the car can be found using GPS and GSM controlled by an FPGA. In this paper, face recognition is used to identify persons, with comparison against preloaded faces for authorization. The vehicle will start only when an authorized person's face is identified. In the event of a theft attempt or an unauthorized person's attempt to drive the vehicle, an MMS/SMS will be sent to the owner along with the location. The authorized person can then alert security personnel to track and recover the vehicle. For face recognition, a Principal Component Analysis (PCA) algorithm is developed using MATLAB. The control technique for GPS and GSM is developed using VHDL on a SPARTAN 3E FPGA. The MMS sending method is written in VB6.0. The proposed application can be implemented, with some modifications, in systems wherever face recognition or detection is needed, such as airports, international borders, and banking applications.
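
    The abstract's PCA-based matching was developed in MATLAB; as a rough Python stand-in, the sketch below enrolls preloaded faces in an eigenface (PCA) subspace and accepts or rejects a probe by a nearest-neighbour distance threshold. The component count and the threshold are assumptions, and images are assumed to be equally sized grayscale arrays.

```python
import numpy as np
from sklearn.decomposition import PCA

def train_eigenfaces(gallery, labels, n_components=50):
    """Project flattened, preloaded face images onto a PCA (eigenface) subspace.
    Assumes more gallery images than components."""
    X = np.asarray([g.ravel() for g in gallery], dtype=float)
    pca = PCA(n_components=n_components).fit(X)
    return pca, pca.transform(X), np.asarray(labels)

def is_authorized(pca, proj_gallery, labels, probe_img, threshold):
    """Nearest-neighbour match in eigenface space; reject if too far."""
    q = pca.transform(probe_img.ravel().astype(float)[None, :])
    d = np.linalg.norm(proj_gallery - q, axis=1)
    i = int(np.argmin(d))
    return (labels[i], True) if d[i] < threshold else (None, False)
```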

  8. Face recognition using SIFT features under 3D meshes

    Institute of Scientific and Technical Information of China (English)

    ZHANG Cheng; GU Yu-zhang; HU Ke-li; WANG Ying-guan

    2015-01-01

    Expression, occlusion, and pose variations are three main challenges for 3D face recognition. A novel method is presented to address 3D face recognition using scale-invariant feature transform (SIFT) features on 3D meshes. After preprocessing, shape index extrema on the 3D facial surface are selected as keypoints in the difference scale space, and unstable keypoints are removed after two screening steps. Then, a local coordinate system for each keypoint is established by principal component analysis (PCA). Next, two local geometric features are extracted around each keypoint through the local coordinate system. Additionally, the features are augmented by symmetrization according to the approximate left-right symmetry of the human face. The proposed method is evaluated on the Bosphorus, BU-3DFE, and Gavab databases, respectively. Good results are achieved on these three datasets. As a result, the proposed method proves robust to facial expression variations, partial external occlusions, and large pose changes.

  9. Distance Adaptive Tensor Discriminative Geometry Preserving Projection for Face Recognition

    Directory of Open Access Journals (Sweden)

    Ziqiang Wang

    2012-09-01

    Full Text Available There is a growing interest in dimensionality reduction techniques for face recognition; however, traditional dimensionality reduction algorithms often transform the input face image data into vectors before embedding. Such vectorization often ignores the underlying data structure and leads to higher computational complexity. To effectively cope with these problems, a novel dimensionality reduction algorithm termed distance adaptive tensor discriminative geometry preserving projection (DATDGPP) is proposed in this paper. The key idea of DATDGPP is as follows: first, the face image data are directly encoded in a high-order tensor structure so that the relationships among the face image data can be preserved; second, a data-adaptive tensor distance is adopted to model the correlation among different coordinates of the tensor data; third, the transformation matrix which can preserve discrimination and local geometry information is obtained by an iterative algorithm. Experimental results on three face databases show that the proposed algorithm outperforms other representative dimensionality reduction algorithms.

  10. Face-blind for other-race faces: Individual differences in other-race recognition impairments.

    Science.gov (United States)

    Wan, Lulu; Crookes, Kate; Dawel, Amy; Pidcock, Madeleine; Hall, Ashleigh; McKone, Elinor

    2017-01-01

    We report the existence of a previously undescribed group of people, namely individuals who are so poor at recognition of other-race faces that they meet criteria for clinical-level impairment (i.e., they are "face-blind" for other-race faces). Testing 550 participants, and using the well-validated Cambridge Face Memory Test for diagnosing face blindness, results show the rate of other-race face blindness to be nontrivial, specifically 8.1% of Caucasians and Asians raised in majority own-race countries. Results also show risk factors for other-race face blindness to include: a lack of interracial contact; and being at the lower end of the normal range of general face recognition ability (i.e., even for own-race faces); but not applying less individuating effort to other-race than own-race faces. Findings provide a potential resolution of contradictory evidence concerning the importance of the other-race effect (ORE), by explaining how it is possible for the mean ORE to be modest in size (suggesting a genuine but minor problem), and simultaneously for individuals to suffer major functional consequences in the real world (e.g., eyewitness misidentification of other-race offenders leading to wrongful imprisonment). Findings imply that, in legal settings, evaluating an eyewitness's chance of having made an other-race misidentification requires information about the underlying face recognition abilities of the individual witness. Additionally, analogy with prosopagnosia (inability to recognize even own-race faces) suggests everyday social interactions with other-race people, such as those between colleagues in the workplace, will be seriously impacted by the ORE in some people.

  11. Recognition of Immaturity and Emotional Expressions in Blended Faces by Children with Autism and Other Developmental Disabilities

    Science.gov (United States)

    Gross, Thomas F.

    2008-01-01

    The recognition of facial immaturity and emotional expression by children with autism, language disorders, mental retardation, and non-disabled controls was studied in two experiments. Children identified immaturity and expression in upright and inverted faces. The autism group identified fewer immature faces and expressions than control (Exp. 1 &…

  12. Computer-Assisted Face Processing Instruction Improves Emotion Recognition, Mentalizing, and Social Skills in Students with ASD

    Science.gov (United States)

    Rice, Linda Marie; Wall, Carla Anne; Fogel, Adam; Shic, Frederick

    2015-01-01

    This study examined the extent to which a computer-based social skills intervention called "FaceSay"™ was associated with improvements in affect recognition, mentalizing, and social skills of school-aged children with Autism Spectrum Disorder (ASD). "FaceSay"™ offers students simulated practice with eye gaze, joint attention,…

  13. Influence of Gaze Direction on Face Recognition: A Sensitive Effect

    Directory of Open Access Journals (Sweden)

    Noémy Daury

    2011-08-01

    Full Text Available This study was aimed at determining the conditions under which eye contact may improve recognition memory for faces. Different stimuli and procedures were tested in four experiments. The effect of gaze direction on memory was found when a simple “yes-no” recognition task was used, but not when the recognition task was more complex (e.g., including “Remember-Know” judgements, cf. Experiment 2, or confidence ratings, cf. Experiment 4). Moreover, even when a “yes-no” recognition paradigm was used, the effect occurred with one series of stimuli (cf. Experiment 1) but not with another (cf. Experiment 3). The difficulty of producing the positive effect of gaze direction on memory is discussed.

  14. Are Face and Object Recognition Independent? A Neurocomputational Modeling Exploration.

    Science.gov (United States)

    Wang, Panqu; Gauthier, Isabel; Cottrell, Garrison

    2016-04-01

    Are face and object recognition abilities independent? Although it is commonly believed that they are, Gauthier et al. [Gauthier, I., McGugin, R. W., Richler, J. J., Herzmann, G., Speegle, M., & VanGulick, A. E. Experience moderates overlap between object and face recognition, suggesting a common ability. Journal of Vision, 14, 7, 2014] recently showed that these abilities become more correlated as experience with nonface categories increases. They argued that there is a single underlying visual ability, v, that is expressed in performance with both face and nonface categories as experience grows. Using the Cambridge Face Memory Test and the Vanderbilt Expertise Test, they showed that the shared variance between Cambridge Face Memory Test and Vanderbilt Expertise Test performance increases monotonically as experience increases. Here, we address why a shared resource across different visual domains does not lead to competition and an inverse correlation in abilities. We explain this conundrum using our neurocomputational model of face and object processing ["The Model", TM, Cottrell, G. W., & Hsiao, J. H. Neurocomputational models of face processing. In A. J. Calder, G. Rhodes, M. Johnson, & J. Haxby (Eds.), The Oxford handbook of face perception. Oxford, UK: Oxford University Press, 2011]. We model the domain general ability v as the available computational resources (number of hidden units) in the mapping from input to label and experience as the frequency of individual exemplars in an object category appearing during network training. Our results show that, as in the behavioral data, the correlation between subordinate level face and object recognition accuracy increases as experience grows. We suggest that different domains do not compete for resources because the relevant features are shared between faces and objects. The essential power of experience is to generate a "spreading transform" for faces (separating them in representational space) that

  15. Spatial Location in Brief, Free-Viewing Face Encoding Modulates Contextual Face Recognition

    Directory of Open Access Journals (Sweden)

    Fatima M. Felisberti

    2013-08-01

    Full Text Available The effect of the spatial location of faces in the visual field during brief, free-viewing encoding on subsequent face recognition is not known. This study addressed the question by tagging three groups of faces with cheating, cooperating or neutral behaviours and presenting them for encoding in two visual hemifields (upper vs. lower or left vs. right). Participants then had to indicate whether a centrally presented face had been seen before or not. Head and eye movements were free in all phases. Findings showed that overall recognition of cooperators was significantly better than that of cheaters, and recognition was better for faces encoded in the upper hemifield than in the lower hemifield, both in terms of a higher d' and a faster reaction time (RT). The d' for any given behaviour in the left and right hemifields was similar. The RT in the left hemifield did not vary with tagged behaviour, whereas the RT in the right hemifield was longer for cheaters than for cooperators. The results show that memory biases in contextual face recognition were modulated by the spatial location of briefly encoded faces and are discussed in terms of scanning/reading habits, the top-left bias in lighting preference and peripersonal space.

  16. Robust Face Recognition Via Gabor Feature and Sparse Representation

    Directory of Open Access Journals (Sweden)

    Hao Yu-Juan

    2016-01-01

    Full Text Available Sparse representation based on compressed sensing theory has been widely used in the field of face recognition and has achieved good recognition results, but face feature extraction based on sparse representation is often too simple, and the resulting coefficients are not truly sparse. In this paper, we improve the classification algorithm by fusing sparse representation with Gabor features. The improved treatment of the Gabor feature overcomes the problem of its large vector dimension, reduces computation and storage cost, and enhances the robustness of the algorithm to changes in the environment. Because the classification efficiency of sparse representation is largely determined by the collaborative representation, we simplify the sparsity constraint based on the L1 norm to a least-squares constraint, which simplifies the computation of the coefficients and reduces the complexity of the algorithm. Experimental results show that the proposed method is robust to illumination, facial expression and pose variations in face recognition, and that the recognition rate of the algorithm is improved.
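
    A compact sketch of the two ingredients named above, assuming scikit-image for the Gabor filtering: a downsampled Gabor-magnitude feature vector and a collaborative (L2-regularised least-squares) representation that assigns the class with the smallest reconstruction residual. The filter-bank parameters and the regularisation weight are assumptions, not the paper's settings.

```python
import numpy as np
from skimage.filters import gabor

def gabor_feature(img, frequencies=(0.1, 0.2, 0.3), n_orient=4):
    """Concatenate downsampled Gabor magnitude responses as the face feature."""
    feats = []
    for f in frequencies:
        for k in range(n_orient):
            real, imag = gabor(img, frequency=f, theta=k * np.pi / n_orient)
            mag = np.hypot(real, imag)[::4, ::4]   # downsample to cut dimension
            feats.append(mag.ravel())
    v = np.concatenate(feats)
    return v / (np.linalg.norm(v) + 1e-10)

def crc_classify(D, train_labels, y, lam=0.01):
    """Collaborative representation: ridge-regularised least squares instead of
    an L1 sparse code, then assign the class with the smallest residual."""
    # alpha = (D^T D + lam I)^{-1} D^T y, with training features as columns of D
    alpha = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)
    residuals = {}
    for c in np.unique(train_labels):
        mask = train_labels == c
        residuals[c] = np.linalg.norm(y - D[:, mask] @ alpha[mask])
    return min(residuals, key=residuals.get)
```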

  17. Neural Mechanism for Mirrored Self-face Recognition.

    Science.gov (United States)

    Sugiura, Motoaki; Miyauchi, Carlos Makoto; Kotozaki, Yuka; Akimoto, Yoritaka; Nozawa, Takayuki; Yomogida, Yukihito; Hanawa, Sugiko; Yamamoto, Yuki; Sakuma, Atsushi; Nakagawa, Seishu; Kawashima, Ryuta

    2015-09-01

    Self-face recognition in the mirror is considered to involve multiple processes that integrate 2 perceptual cues: temporal contingency of the visual feedback on one's action (contingency cue) and matching with self-face representation in long-term memory (figurative cue). The aim of this study was to examine the neural bases of these processes by manipulating 2 perceptual cues using a "virtual mirror" system. This system allowed online dynamic presentations of real-time and delayed self- or other facial actions. Perception-level processes were identified as responses to only a single perceptual cue. The effect of the contingency cue was identified in the cuneus. The regions sensitive to the figurative cue were subdivided by the response to a static self-face, which was identified in the right temporal, parietal, and frontal regions, but not in the bilateral occipitoparietal regions. Semantic- or integration-level processes, including amodal self-representation and belief validation, which allow modality-independent self-recognition and the resolution of potential conflicts between perceptual cues, respectively, were identified in distinct regions in the right frontal and insular cortices. The results are supportive of the multicomponent notion of self-recognition and suggest a critical role for contingency detection in the co-emergence of self-recognition and empathy in infants.

  18. ASYMBOOST-BASED FISHER LINEAR CLASSIFIER FOR FACE RECOGNITION

    Institute of Scientific and Technical Information of China (English)

    Wang Xianji; Ye Xueyi; Li Bin; Li Xin; Zhuang Zhenquan

    2008-01-01

    When using AdaBoost to select discriminant features from a feature space (e.g., Gabor feature space) for face recognition, a cascade structure is usually adopted to leverage the asymmetry in the distribution of positive and negative samples. Each node in the cascade structure is a classifier trained by AdaBoost with an asymmetric learning goal of a high recognition rate but only a moderately low false positive rate. One limitation of AdaBoost arises in the context of skewed example distributions and cascade classifiers: AdaBoost minimizes the classification error, which is not guaranteed to achieve the asymmetric node learning goal. In this paper, we propose to use asymmetric AdaBoost (AsymBoost) as a mechanism to address the asymmetric node learning goal. Moreover, feature selection and ensemble classifier formation, which occur simultaneously in AsymBoost and AdaBoost, are decoupled. Fisher Linear Discriminant Analysis (FLDA) is used on the selected features to learn a linear discriminant function that maximizes the separability of data among the different classes, which we expect to improve recognition performance. The proposed algorithm is demonstrated for face recognition using a Gabor-based representation on the FERET database. Experimental results show that the proposed algorithm yields better recognition performance than AdaBoost itself.

  19. Physiology-based face recognition in the thermal infrared spectrum.

    Science.gov (United States)

    Buddharaju, Pradeep; Pavlidis, Ioannis T; Tsiamyrtzis, Panagiotis; Bazakos, Mike

    2007-04-01

    The current dominant approaches to face recognition rely on facial characteristics that are on or over the skin. Some of these characteristics have low permanency, can be altered, and their phenomenology varies significantly with environmental factors (e.g., lighting). Many methodologies have been developed to address these problems to various degrees. However, the current framework of face recognition research has a potential weakness due to its very nature. We present a novel framework for face recognition based on physiological information. The motivation behind this effort is to capitalize on the permanency of innate characteristics that are under the skin. To establish feasibility, we propose a specific methodology to capture facial physiological patterns using the bioheat information contained in thermal imagery. First, the algorithm delineates the human face from the background using a Bayesian framework. Then, it localizes the superficial blood vessel network using image morphology. The extracted vascular network produces contour shapes that are characteristic of each individual. The branching points of the skeletonized vascular network are referred to as Thermal Minutia Points (TMPs) and constitute the feature database. To render the method robust to facial pose variations, we collect, for each subject stored in the database, five different pose images (center, mid-left profile, left profile, mid-right profile, and right profile). During the classification stage, the algorithm first estimates the pose of the test image. Then, it matches the local and global TMP structures extracted from the test image with those of the corresponding pose images in the database. We have conducted experiments on a multipose database of thermal facial images collected in our laboratory, as well as on the time-gap database of the University of Notre Dame. The good experimental results show that the proposed methodology has merit, especially with respect to the problem of…

  20. Near-infrared face recognition utilizing OpenCV software

    Science.gov (United States)

    Sellami, Louiza; Ngo, Hau; Fowler, Chris J.; Kearney, Liam M.

    2014-06-01

    Commercially available hardware, freely available algorithms, and the authors' own software are synergized to detect and recognize subjects in an environment without visible light. The project integrates three major components: an illumination device operating in the near-infrared (NIR) spectrum, an NIR-capable camera, and software capable of performing image manipulation, facial detection, and recognition. Focusing on the near-infrared spectrum allows the low-budget system to operate covertly while still allowing accurate face recognition. In doing so, a valuable capability has been developed that offers potential benefits in future civilian and military security and surveillance operations.
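
    The abstract does not spell out which OpenCV components were used, so the snippet below is only a plausible sketch of such a pipeline: Haar-cascade detection on grayscale NIR frames followed by LBPH recognition. The LBPH recognizer requires the opencv-contrib-python package, and the cascade choice, detection parameters, and rejection threshold are assumptions.

```python
import cv2
import numpy as np

# Haar-cascade detector shipped with OpenCV; NIR frames are assumed to arrive
# as ordinary 8-bit grayscale images.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
recognizer = cv2.face.LBPHFaceRecognizer_create()   # needs opencv-contrib

def detect_faces(gray_frame):
    """Return bounding boxes of faces in a grayscale (NIR) frame."""
    return detector.detectMultiScale(gray_frame, scaleFactor=1.1,
                                     minNeighbors=5, minSize=(60, 60))

def enroll(gray_faces, labels):
    """Train the LBPH model on cropped, equally sized grayscale face chips."""
    recognizer.train(gray_faces, np.asarray(labels, dtype=np.int32))

def identify(gray_face, reject_above=80.0):
    """Predict a subject id; large LBPH distances are treated as 'unknown'."""
    label, confidence = recognizer.predict(gray_face)
    return label if confidence < reject_above else None
```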

  1. An Improved Face Recognition Technique Based on Modular LPCA Approach

    Directory of Open Access Journals (Sweden)

    Mathu S.S. Kumar

    2011-01-01

    Full Text Available Problem statement: A face identification algorithm based on modular localized variation by an Eigen Subspace technique, also called modular localized principal component analysis, is presented in this study. Approach: The face imagery was partitioned into smaller sub-divisions from a predefined neighborhood, and these were ultimately fused to acquire many sets of features. Since some of the normal facial features of an individual do not change even when the pose and illumination differ, the proposed method manages these variations. Results: The proposed feature selection module significantly enhanced the identification precision on standard face databases when compared to conventional and modular PCA techniques. Conclusion: The proposed algorithm, when compared with the conventional PCA algorithm and modular PCA, has enhanced recognition accuracy for face imagery with illumination, expression and pose variations.

  2. Discriminative Local Sparse Representations for Robust Face Recognition

    CERN Document Server

    Chen, Yi; Do, Thong T; Monga, Vishal; Tran, Trac D

    2011-01-01

    A key recent advance in face recognition models a test face image as a sparse linear combination of a set of training face images. The resulting sparse representations have been shown to possess robustness against a variety of distortions like random pixel corruption, occlusion and disguise. This approach however makes the restrictive (in many scenarios) assumption that test faces must be perfectly aligned (or registered) to the training data prior to classification. In this paper, we propose a simple yet robust local block-based sparsity model, using adaptively-constructed dictionaries from local features in the training data, to overcome this misalignment problem. Our approach is inspired by human perception: we analyze a series of local discriminative features and combine them to arrive at the final classification decision. We propose a probabilistic graphical model framework to explicitly mine the conditional dependencies between these distinct sparse local features. In particular, we learn discriminative...

  3. Local Relation Map: A Novel Illumination Invariant Face Recognition Approach

    Directory of Open Access Journals (Sweden)

    Lian Zhichao

    2012-10-01

    Full Text Available In this paper, a novel illumination invariant face recognition approach is proposed. Different from most existing methods, an additive noise term is considered in the face model under varying illumination, in addition to a multiplicative illumination term. High-frequency coefficients of the Discrete Cosine Transform (DCT) are discarded to eliminate the effect caused by noise. Based on the local characteristics of the human face, a simple but effective illumination invariant feature, the local relation map, is proposed. Experimental results on the Yale B, Extended Yale B and CMU PIE databases demonstrate that the proposed method outperforms existing methods with a lower computational burden. The results also demonstrate the validity of the proposed face model and the assumption on noise.
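
    A minimal sketch of the two steps mentioned above, using SciPy's 2-D DCT: high-frequency coefficients are zeroed to suppress the additive noise term, and a simple ratio-to-local-mean feature stands in for the paper's local relation map, whose exact definition is not reproduced here. The cut-off size and neighbourhood size are assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

def suppress_high_freq_dct(img, keep=20):
    """Zero out high-frequency 2-D DCT coefficients (treated here as the
    additive noise term of the face model) and reconstruct the image."""
    C = dctn(img.astype(float), norm="ortho")
    mask = np.zeros_like(C)
    mask[:keep, :keep] = 1.0            # keep only the low-frequency block
    return idctn(C * mask, norm="ortho")

def local_relation_feature(img, eps=1e-6):
    """An illustrative local-relation style feature: each pixel divided by the
    mean of its 3x3 neighbourhood, which cancels a slowly varying
    (multiplicative) illumination term."""
    pad = np.pad(img.astype(float), 1, mode="edge")
    neigh = sum(pad[r:r + img.shape[0], c:c + img.shape[1]]
                for r in range(3) for c in range(3)) / 9.0
    return img / (neigh + eps)
```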

  4. Tolerance for distorted faces: challenges to a configural processing account of familiar face recognition.

    Science.gov (United States)

    Sandford, Adam; Burton, A Mike

    2014-09-01

    Face recognition is widely held to rely on 'configural processing', an analysis of spatial relations between facial features. We present three experiments in which viewers were shown distorted faces, and asked to resize these to their correct shape. Based on configural theories appealing to metric distances between features, we reason that this should be an easier task for familiar than unfamiliar faces (whose subtle arrangements of features are unknown). In fact, participants were inaccurate at this task, making between 8% and 13% errors across experiments. Importantly, we observed no advantage for familiar faces: in one experiment participants were more accurate with unfamiliars, and in two experiments there was no difference. These findings were not due to general task difficulty - participants were able to resize blocks of colour to target shapes (squares) more accurately. We also found an advantage of familiarity for resizing other stimuli (brand logos). If configural processing does underlie face recognition, these results place constraints on the definition of 'configural'. Alternatively, familiar face recognition might rely on more complex criteria - based on tolerance to within-person variation rather than highly specific measurement.

  5. Friends with Faces: How Social Networks Can Enhance Face Recognition and Vice Versa

    Science.gov (United States)

    Mavridis, Nikolaos; Kazmi, Wajahat; Toulis, Panos

    The "friendship" relation, a social relation among individuals, is one of the primary relations modeled in some of the world's largest online social networking sites, such as "FaceBook." On the other hand, the "co-occurrence" relation, as a relation among faces appearing in pictures, is one that is easily detectable using modern face detection techniques. These two relations, though appearing in different realms (social vs. visual sensory), have a strong correlation: faces that co-occur in photos often belong to individuals who are friends. Using real-world data gathered from "Facebook," which were gathered as part of the "FaceBots" project, the world's first physical face-recognizing and conversing robot that can utilize and publish information on "Facebook" was established. We present here methods as well as results for utilizing this correlation in both directions. Both algorithms for utilizing knowledge of the social context for faster and better face recognition are given, as well as algorithms for estimating the friendship network of a number of individuals given photos containing their faces. The results are quite encouraging. In the primary example, doubling of the recognition accuracy as well as a sixfold improvement in speed is demonstrated. Various improvements, interesting statistics, as well as an empirical investigation leading to predictions of scalability to much bigger data sets are discussed.

  6. Face Recognition Using Holistic Features and Simplified Linear Discriminant Analysis

    Directory of Open Access Journals (Sweden)

    Gou Koutaki

    2012-08-01

    Full Text Available This paper proposes an alternative approach to face recognition based on global/holistic features of the face image and a simplified Linear Discriminant Analysis (LDA). The proposed method overcomes the main problem of conventional LDA, namely the large processing time required for retraining when data of a new class are registered into the training set. Holistic features of the face image are used as a dimensionality reduction of the raw face image, while the simplified LDA, which redefines the between-class scatter using a constant global mean, is proposed to decrease the time complexity of the retraining process. To assess the performance of the proposed method, several experiments were performed using several challenging face databases: ORL, YALE, ITS-Lab, INDIA, and FERET. Furthermore, we compared the experimental results of the developed algorithm to the best traditional subspace methods such as DLDA, 2DLDA, (2D)²LDA, 2DPCA, and (2D)²PCA. The experimental results show that the proposed method solves the retraining problem of conventional LDA, requiring only a short retraining time while maintaining a stable recognition rate.

  7. A novel face recognition method with feature combination

    Institute of Scientific and Technical Information of China (English)

    LI Wen-shu; ZHOU Chang-le; XU Jia-tuo

    2005-01-01

    A novel combined personalized feature framework is proposed for face recognition (FR). In the framework, the proposed linear discriminant analysis (LDA) makes effective use of the null space of the within-class scatter matrix, and global feature vectors (PCA-transformed) and local feature vectors (Gabor wavelet-transformed) are integrated as complex vectors to form the input features of the improved LDA. The proposed method is compared with other commonly used FR methods on two face databases (ORL and UMIST). The results demonstrate that the performance of the proposed method is superior to that of traditional FR approaches.

  8. Image Region Selection and Ensemble for Face Recognition

    Institute of Scientific and Technical Information of China (English)

    Xin Geng; Zhi-Hua Zhou

    2006-01-01

    In this paper, a novel framework for face recognition, namely Selective Ensemble of Image Regions (SEIR), is proposed. In this framework, all possible regions in the face image are regarded as a certain kind of feature. There are two main steps in SEIR: the first step is to automatically select several regions from all possible candidates; the second step is to construct a classifier ensemble from the selected regions. An implementation of SEIR based on multiple eigenspaces, namely SEME, is also proposed in this paper. SEME is analyzed and compared with eigenface, PCA + LDA, eigenfeature, and eigenface + eigenfeature through experiments. The experimental results show that SEME achieves the best performance.

  9. Galactose uncovers face recognition and mental images in congenital prosopagnosia: the first case report.

    OpenAIRE

    Esins, J.; Schultz, J.; Bülthoff, I.; Kennerknecht, I.

    2014-01-01

    A woman in her early 40s with congenital prosopagnosia and attention deficit hyperactivity disorder observed for the first time sudden and extensive improvement of her face recognition abilities, mental imagery, and sense of navigation after galactose intake. This effect of galactose on prosopagnosia has never been reported before. Even if this effect is restricted to a subform of congenital prosopagnosia, galactose might improve the condition of other prosopagnosics. Congenital prosopagnosia...

  10. Next Level of Data Fusion for Human Face Recognition

    CERN Document Server

    Bhowmik, Mrinal Kanti; Bhattacharjee, Debotosh; Basu, Dipak Kumar; Nasipuri, Mita

    2011-01-01

    This paper demonstrates two different fusion techniques at two different levels of a human face recognition process. The first is data fusion at a lower level, and the second is decision fusion towards the end of the recognition process. First, data fusion is applied to visual and corresponding thermal images to generate a fused image. Data fusion is implemented in the wavelet domain after decomposing the images with Daubechies wavelet coefficients (db2); during fusion, the maximum of the approximation and of the other three detail coefficients are merged together. After that, Principal Component Analysis (PCA) is applied to the fused coefficients, and finally two different artificial neural networks, a Multilayer Perceptron (MLP) and a Radial Basis Function (RBF) network, are used separately to classify the images. For decision fusion, the decisions from both classifiers are then combined using a Bayesian formulation. For experiments, the IRIS thermal/visible Face Database h...
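
    A short PyWavelets sketch of the data-fusion step described above, assuming the visual and thermal images are registered and equally sized: a single-level db2 decomposition followed by a coefficient-wise maximum and the inverse transform. The PCA and neural-network classification stages are not shown.

```python
import numpy as np
import pywt

def fuse_visual_thermal(vis, thr, wavelet="db2"):
    """Single-level wavelet fusion of registered visual and thermal images:
    take the coefficient-wise maximum of the approximation and the three
    detail subbands, then invert the transform."""
    cA1, (cH1, cV1, cD1) = pywt.dwt2(vis.astype(float), wavelet)
    cA2, (cH2, cV2, cD2) = pywt.dwt2(thr.astype(float), wavelet)
    fused = (np.maximum(cA1, cA2),
             (np.maximum(cH1, cH2),
              np.maximum(cV1, cV2),
              np.maximum(cD1, cD2)))
    return pywt.idwt2(fused, wavelet)
```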

  11. Robust Face Recognition via Occlusion Detection and Masking

    Directory of Open Access Journals (Sweden)

    Guo Tan

    2016-01-01

    Full Text Available The sparse representation-based classification (SRC) method has demonstrated promising results in face recognition (FR). In this paper, we consider the problem of face recognition with occlusion. In sparse representation-based classification, the reconstruction residual of a test sample over the training set is usually heterogeneous with respect to the training samples, highlighting the occluded part of the test sample. We detect the occluded part by extracting a mask from the reconstruction residual through a threshold operation. The mask is then applied in the representation-based classification framework to eliminate the impact of occlusion in FR. The method does not assume any prior knowledge about the occlusion, and extensive experiments on publicly available databases show its efficacy.

  12. Determination of candidate subjects for better recognition of faces

    Science.gov (United States)

    Wang, Xuansheng; Chen, Zhen; Teng, Zhongming

    2016-05-01

    In order to improve the accuracy of face recognition and to address the problem of varying poses, we present an improved collaborative representation classification (CRC) algorithm that uses the original training samples together with their mirror images. First, the mirror images are generated from the original training samples. Second, both the original training samples and their mirror images are used simultaneously to represent the test sample via improved collaborative representation. Then, some classes that are "close" to the test sample are coarsely selected as candidate classes. Finally, the candidate classes are used to represent the test sample again, and the class most similar to the test sample is determined. The experimental results show that our proposed algorithm is more robust than the original CRC algorithm and can effectively improve the accuracy of face recognition.
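
    The sketch below illustrates the described coarse-to-fine pipeline in plain NumPy under simplifying assumptions (raw pixel features, an L2-regularised collaborative code, and a fixed number of candidate classes); it is not the authors' implementation.

```python
import numpy as np

def ridge_code(D, y, lam=0.01):
    """Represent y over the dictionary columns D with an L2-regularised code."""
    return np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)

def improved_crc(train_imgs, labels, test_img, n_candidates=5, lam=0.01):
    """Two-stage classification: code the test face over the original training
    images plus their horizontal mirrors, keep the classes with the smallest
    residuals, then re-code over those candidate classes only."""
    labels = np.asarray(labels)
    cols = [im.ravel() for im in train_imgs] + \
           [np.fliplr(im).ravel() for im in train_imgs]   # mirror augmentation
    D = np.array(cols, dtype=float).T
    lab2 = np.concatenate([labels, labels])
    y = test_img.ravel().astype(float)

    def class_residuals(Dm, lm, code):
        return {c: np.linalg.norm(y - Dm[:, lm == c] @ code[lm == c])
                for c in np.unique(lm)}

    res = class_residuals(D, lab2, ridge_code(D, y, lam))
    cand = sorted(res, key=res.get)[:n_candidates]         # coarse selection
    keep = np.isin(lab2, cand)
    res2 = class_residuals(D[:, keep], lab2[keep],
                           ridge_code(D[:, keep], y, lam))
    return min(res2, key=res2.get)                          # fine decision
```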

  13. The effect of image resolution on the performance of a face recognition system

    NARCIS (Netherlands)

    Boom, B.J.; Beumer, G.M.; Spreeuwers, L.J.; Veldhuis, R.N.J.

    2006-01-01

    In this paper we investigate the effect of image resolution on the error rates of a face verification system. We do not restrict ourselves to the face recognition algorithm only, but also consider the face registration. In our face recognition system, the face registration is done by finding landmarks…

  14. Face detection dissociates from face recognition: evidence from ERPs and the naso-temporal asymmetry (Abstract)

    NARCIS (Netherlands)

    de Gelder, B.; Pourtois, G.R.C.

    2002-01-01

    Neuropsychological data indicate that face processing could be distributed among two functionally and anatomically distinct mechanisms, one specialised for detection and the other aimed at recognition (de Gelder & Rouw, 2000; 2001). These two mechanisms may be implemented in different interacting re

  15. Using Regression to Measure Holistic Face Processing Reveals a Strong Link with Face Recognition Ability

    Science.gov (United States)

    DeGutis, Joseph; Wilmer, Jeremy; Mercado, Rogelio J.; Cohan, Sarah

    2013-01-01

    Although holistic processing is thought to underlie normal face recognition ability, widely discrepant reports have recently emerged about this link in an individual differences context. Progress in this domain may have been impeded by the widespread use of subtraction scores, which lack validity due to their contamination with control condition…

  16. Kernel-Based Nonlinear Discriminant Analysis for Face Recognition

    Institute of Scientific and Technical Information of China (English)

    LIU QingShan (刘青山); HUANG Rui (黄锐); LU HanQing (卢汉清); MA SongDe (马颂德)

    2003-01-01

    Linear subspace analysis methods have been successfully applied to extract features for face recognition. But they are inadequate to represent the complex and nonlinear variations of real face images, such as illumination, facial expression and pose variations, because of their linear properties. In this paper, a nonlinear subspace analysis method, Kernel-based Nonlinear Discriminant Analysis (KNDA), is presented for face recognition, which combines the nonlinear kernel trick with the linear subspace analysis method - Fisher Linear Discriminant Analysis (FLDA). First, the kernel trick is used to project the input data into an implicit feature space, then FLDA is performed in this feature space. Thus nonlinear discriminant features of the input data are yielded. In addition, in order to reduce the computational complexity, a geometry-based feature vectors selection scheme is adopted. Another similar nonlinear subspace analysis is Kernel-based Principal Component Analysis (KPCA), which combines the kernel trick with linear Principal Component Analysis (PCA). Experiments are performed with the polynomial kernel, and KNDA is compared with KPCA and FLDA. Extensive experimental results show that KNDA can give a higher recognition rate than KPCA and FLDA.

  17. Emotion recognition: the role of featural and configural face information.

    Science.gov (United States)

    Bombari, Dario; Schmid, Petra C; Schmid Mast, Marianne; Birri, Sandra; Mast, Fred W; Lobmaier, Janek S

    2013-01-01

    Several studies investigated the role of featural and configural information when processing facial identity. A lot less is known about their contribution to emotion recognition. In this study, we addressed this issue by inducing either a featural or a configural processing strategy (Experiment 1) and by investigating the attentional strategies in response to emotional expressions (Experiment 2). In Experiment 1, participants identified emotional expressions in faces that were presented in three different versions (intact, blurred, and scrambled) and in two orientations (upright and inverted). Blurred faces contain mainly configural information, and scrambled faces contain mainly featural information. Inversion is known to selectively hinder configural processing. Analyses of the discriminability measure (A') and response times (RTs) revealed that configural processing plays a more prominent role in expression recognition than featural processing, but their relative contribution varies depending on the emotion. In Experiment 2, we qualified these differences between emotions by investigating the relative importance of specific features by means of eye movements. Participants had to match intact expressions with the emotional cues that preceded the stimulus. The analysis of eye movements confirmed that the recognition of different emotions rely on different types of information. While the mouth is important for the detection of happiness and fear, the eyes are more relevant for anger, fear, and sadness.

  18. Modified SIFT Descriptors for Face Recognition under Different Emotions

    Directory of Open Access Journals (Sweden)

    Nirvair Neeru

    2016-01-01

    Full Text Available The main goal of this work is to develop a fully automatic face recognition algorithm. The Scale Invariant Feature Transform (SIFT) has only sparingly been used in face recognition. In this paper, a Modified SIFT (MSIFT) approach is proposed to enhance the recognition performance of SIFT. The work is done in three steps. First, the image is smoothed using the DWT. Second, the computational complexity of SIFT descriptor calculation is reduced by subtracting the average from each descriptor instead of normalizing it. Third, the algorithm is made automatic by using the Coefficient of Correlation (CoC) instead of the distance ratio (which requires user interaction). The main achievement of this method is a reduced database size, as it requires storing only neutral images instead of all expressions of the same face. The experiments are performed on the Japanese Female Facial Expression (JAFFE) database and indicate that the proposed approach achieves better performance than SIFT-based methods. In addition, it shows robustness against various facial expressions.

  19. Efficient face recognition method based on DCT and LDA

    Institute of Scientific and Technical Information of China (English)

    张燕昆; 刘重庆

    2004-01-01

    It has been demonstrated that linear discriminant analysis (LDA) is an effective approach in face recognition tasks. However, due to the high dimensionality of the image space, many LDA-based approaches first use principal component analysis (PCA) to project an image into a lower-dimensional space and then perform the LDA transform to extract discriminant features. But some discriminant information useful to the subsequent LDA transform is lost in the PCA step. To overcome these defects, a face recognition method based on the discrete cosine transform (DCT) and LDA is proposed. First the DCT is used to achieve dimension reduction, then the LDA transform is performed on the lower-dimensional space to extract features. Two face databases are used to test our method and correct recognition rates of 97.5% and 96.0% are obtained respectively. The performance of the proposed method is compared with that of the PCA + LDA method and the results show that the proposed method outperforms the PCA + LDA method.
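
    A minimal sketch of the DCT + LDA idea: keep a low-frequency block of 2D DCT coefficients as the reduced representation, then learn LDA features on it and classify with 1-NN. The 8x8 block size and the toy data are assumptions, not values from the paper.

```python
import numpy as np
from scipy.fft import dctn
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

def dct_features(img, block=8):
    """2D DCT of the whole image, keeping the top-left (low-frequency) block."""
    coeffs = dctn(img.astype(float), norm="ortho")
    return coeffs[:block, :block].ravel()

rng = np.random.default_rng(0)
faces = rng.random((100, 64, 64))          # 100 placeholder 64x64 face images
labels = np.repeat(np.arange(10), 10)      # 10 identities x 10 images
X = np.array([dct_features(f) for f in faces])

lda = LinearDiscriminantAnalysis(n_components=9).fit(X, labels)
clf = KNeighborsClassifier(n_neighbors=1).fit(lda.transform(X), labels)
print(clf.predict(lda.transform(X[:5])))
```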

  20. A robust, low-cost approach to Face Detection and Face Recognition

    CERN Document Server

    Jyoti, Divya; Vaidya, Pallavi; Roja, M Mani

    2011-01-01

    In the domain of biometrics, recognition systems based on iris, fingerprint or palm print scans, etc., are often considered more dependable due to the extremely low variance of these traits over time. However, over the last decade the data processing capability of computers has increased manifold, which has made real-time video content analysis possible. This shows that the need of the hour is a robust and highly automated face detection and recognition algorithm with a credible accuracy rate. The proposed Face Detection and Recognition system using the Discrete Wavelet Transform (DWT) accepts face frames as input from a database containing images from low-cost devices such as VGA cameras, webcams or even CCTVs, where image quality is inferior. The face region is then detected using properties of the L*a*b* color space and only the frontal face is extracted such that all additional background is eliminated. Further, this extracted image is converted to grayscale and its dimensions are resized to 128 x 128...

  1. Face liveness detection for face recognition based on cardiac features of skin color image

    Science.gov (United States)

    Suh, Kun Ha; Lee, Eui Chul

    2016-07-01

    With the growth of biometric technology, spoofing attacks have emerged as a threat to the security of such systems. The main spoofing scenarios in face recognition systems include the printing attack, the replay attack, and the 3D mask attack. To prevent such attacks, techniques that evaluate the liveness of the biometric data can be considered as a solution. In this paper, a novel face liveness detection method based on a cardiac signal extracted from the face is presented. The key point of the proposed method is that this cardiac characteristic is detected in live faces but not in non-live faces. Experimental results showed that the proposed method can be an effective way of detecting the printing attack and the 3D mask attack.
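
    A simplified sketch of the idea, not the authors' implementation: estimate a pulse-like signal from the mean green-channel value of a face region over time and check for a dominant frequency in the normal heart-rate band. The frame rate, band limits and threshold are assumptions.

```python
import numpy as np

def has_cardiac_signal(green_means, fps=30.0, band=(0.7, 4.0), peak_ratio=3.0):
    """green_means: 1-D array of per-frame mean green values of the face ROI."""
    signal = green_means - np.mean(green_means)
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    out_band = ~in_band & (freqs > 0)
    if not in_band.any() or not out_band.any():
        return False
    # A live face should show a clear spectral peak inside the heart-rate band.
    return spectrum[in_band].max() > peak_ratio * spectrum[out_band].mean()

# Example: a synthetic 10-second recording with a 1.2 Hz (72 bpm) pulse plus noise.
t = np.arange(0, 10, 1 / 30.0)
live_trace = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.default_rng(0).standard_normal(t.size)
print(has_cardiac_signal(live_trace))   # expected: True
```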

  2. A novel polar-based human face recognition computational model

    Directory of Open Access Journals (Sweden)

    Y. Zana

    2009-07-01

    Full Text Available Motivated by a recently proposed biologically inspired face recognition approach, we investigated the relation between human behavior and a computational model based on Fourier-Bessel (FB) spatial patterns. We measured human recognition performance on FB-filtered face images using an 8-alternative forced-choice method. Test stimuli were generated by converting the images from the spatial to the FB domain, filtering the resulting coefficients with a band-pass filter, and finally taking the inverse FB transformation of the filtered coefficients. The performance of the computational models was tested using a simulation of the psychophysical experiment. In the FB model, face images were first filtered by simulated V1-type neurons and later analyzed globally for their content of FB components. In general, there was higher human contrast sensitivity to radially than to angularly filtered images, but both functions peaked at the 11.3-16 frequency interval. The FB-based model presented similar behavior with regard to peak position and relative sensitivity, but had a wider frequency bandwidth and a narrower response range. The response patterns of two alternative models, based on local FB analysis and on raw luminance, strongly diverged from the human behavior patterns. These results suggest that human performance can be constrained by the type of information conveyed by polar patterns, and consequently that humans might use FB-like spatial patterns in face processing.

  3. Recognition of Faces in Unconstrained Environments: A Comparative Study

    Directory of Open Access Journals (Sweden)

    Javier Ruiz-del-Solar

    2009-01-01

    Full Text Available The aim of this work is to carry out a comparative study of face recognition methods that are suitable for working in unconstrained environments. The analyzed methods are selected by considering their performance in former comparative studies, in addition to being real-time, requiring just one image per person, and being fully online. The study analyzes two local-matching methods, histograms of LBP features and Gabor jet descriptors, one holistic method, generalized PCA, and two image-matching methods, SIFT-based and ERCF-based. The methods are compared using the FERET, LFW, UCHFaceHRI, and FRGC databases, which allows them to be evaluated in real-world conditions that include variations in scale, pose, lighting, focus, resolution, facial expression, accessories, makeup, occlusions, background and photographic quality. The main conclusions of this study are: there is a large dependence of the methods on the amount of face and background information included in the face images, and the performance of all methods decreases markedly under outdoor illumination. The analyzed methods are robust, to a large degree, to inaccurate alignment, face occlusions, and variations in expression. LBP-based methods are an excellent choice if we need real-time operation as well as high recognition rates.
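
    A minimal sketch of the LBP-histogram style of local matching highlighted above: uniform LBP codes pooled into block histograms and compared with a chi-square distance. The block layout and LBP parameters are assumptions, not the study's exact settings.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray, P=8, R=1, grid=(4, 4)):
    """Concatenated per-block histograms of uniform LBP codes for a grayscale image."""
    codes = local_binary_pattern(gray, P, R, method="uniform")
    n_bins = P + 2
    h, w = codes.shape
    hists = []
    for by in range(grid[0]):
        for bx in range(grid[1]):
            block = codes[by * h // grid[0]:(by + 1) * h // grid[0],
                          bx * w // grid[1]:(bx + 1) * w // grid[1]]
            hist, _ = np.histogram(block, bins=n_bins, range=(0, n_bins), density=True)
            hists.append(hist)
    return np.concatenate(hists)

def chi_square(h1, h2, eps=1e-10):
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

# Identification: pick the gallery face whose histogram is closest to the probe's.
```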

  4. LOCAL TEXTURE DESCRIPTION FRAMEWORK FOR TEXTURE BASED FACE RECOGNITION

    Directory of Open Access Journals (Sweden)

    R. Reena Rose

    2014-02-01

    Full Text Available Texture descriptors have an important role in recognizing face images. However, almost all existing local texture descriptors use the nearest neighbors to encode the texture pattern around a pixel. But in face images, most pixels have characteristics similar to those of their nearest neighbors, because the skin covers a large area of the face and the skin tone of neighboring regions is the same. Therefore this paper presents a general framework called the Local Texture Description Framework that uses only eight pixels located at a certain distance from the reference pixel, on either a circular or an elliptical neighborhood. Local texture description can be done using the foundation of any existing local texture descriptor. In this paper, the performance of the proposed framework is verified with three existing local texture descriptors, Local Binary Pattern (LBP), Local Texture Pattern (LTP) and Local Tetra Patterns (LTrPs), for five issues, viz. facial expression, partial occlusion, illumination variation, pose variation and general recognition. Five benchmark databases, JAFFE, Essex, Indian faces, AT&T and Georgia Tech, are used for the experiments. Experimental results demonstrate that even with a smaller number of patterns, the proposed framework achieves higher recognition accuracy than the base models.

  5. Trainable Convolution Filters and Their Application to Face Recognition.

    Science.gov (United States)

    Kumar, Ritwik; Banerjee, Arunava; Vemuri, Baba C; Pfister, Hanspeter

    2012-07-01

    In this paper, we present a novel image classification system that is built around a core of trainable filter ensembles that we call Volterra kernel classifiers. Our system treats images as a collection of possibly overlapping patches and is composed of three components: (1) A scheme for a single patch classification that seeks a smooth, possibly nonlinear, functional mapping of the patches into a range space, where patches of the same class are close to one another, while patches from different classes are far apart in the L_2 sense. This mapping is accomplished using trainable convolution filters (or Volterra kernels) where the convolution kernel can be of any shape or order. (2) Given a corpus of Volterra classifiers with various kernel orders and shapes for each patch, a boosting scheme for automatically selecting the best weighted combination of the classifiers to achieve higher per-patch classification rate. (3) A scheme for aggregating the classification information obtained for each patch via voting for the parent image classification. We demonstrate the effectiveness of the proposed technique using face recognition as an application area and provide extensive experiments on the Yale, CMU PIE, Extended Yale B, Multi-PIE, and MERL Dome benchmark face data sets. We call the Volterra kernel classifiers applied to face recognition Volterrafaces. We show that our technique, which falls into the broad class of embedding-based face image discrimination methods, consistently outperforms various state-of-the-art methods in the same category.

  6. Comparison of computer-based and optical face recognition paradigms

    Science.gov (United States)

    Alorf, Abdulaziz A.

    The main objectives of this thesis are to validate an improved principal components analysis (IPCA) algorithm on images; to design and simulate a digital model for image compression, face recognition and image detection using a principal components analysis (PCA) algorithm and the IPCA algorithm; to design and simulate an optical model for face recognition and object detection using the joint transform correlator (JTC); to establish detection and recognition thresholds for each model; to compare the performance of the PCA algorithm with that of the IPCA algorithm in compression, recognition and detection; and to compare the performance of the digital model with that of the optical model in recognition and detection. The MATLAB software was used for simulating the models. PCA is a technique used for identifying patterns in data and representing the data in a way that highlights any similarities or differences. The identification of patterns in data of high dimensions (more than three dimensions) is difficult because a graphical representation of the data is impossible. Therefore, PCA is a powerful method for analyzing data. IPCA is another statistical tool for identifying patterns in data; it uses information theory to improve PCA. The joint transform correlator (JTC) is an optical correlator used for synthesizing a frequency plane filter for coherent optical systems. The IPCA algorithm, in general, behaves better than the PCA algorithm in most applications. It is better than the PCA algorithm in image compression because it obtains higher compression, more accurate reconstruction, and faster processing speed with acceptable errors; in addition, it is better than the PCA algorithm in real-time image detection because it achieves the smallest error rate as well as remarkable speed. On the other hand, the PCA algorithm performs better than the IPCA algorithm in face recognition because it offers
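
    As a compact illustration of the digital (PCA) model described above, the eigenfaces-style sketch below projects flattened face images onto the top principal components and classifies by nearest neighbor in that subspace. The data shapes and component count are placeholders, not values from the thesis.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
gallery = rng.random((150, 48 * 48))       # 150 flattened 48x48 training faces (placeholder)
identities = np.repeat(np.arange(15), 10)  # 15 hypothetical subjects

model = make_pipeline(PCA(n_components=40, whiten=True),
                      KNeighborsClassifier(n_neighbors=1))
model.fit(gallery, identities)

probe = rng.random((5, 48 * 48))           # unseen probe images
print(model.predict(probe))
```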

  7. EEG source imaging assists decoding in a face recognition task

    DEFF Research Database (Denmark)

    Andersen, Rasmus S.; Eliasen, Anders U.; Pedersen, Nicolai

    2017-01-01

    EEG based brain state decoding has numerous applications. State of the art decoding is based on processing of the multivariate sensor space signal; however, evidence is mounting that EEG source reconstruction can assist decoding. EEG source imaging leads to high-dimensional representations...... of face recognition. This task concerns the differentiation of brain responses to images of faces and scrambled faces and poses a rather difficult decoding problem at the single trial level. We implement the pipeline using spatially focused features and show that this approach is challenged and source...... imaging does not lead to an improved decoding. We design a distributed pipeline in which the classifier has access to brain-wide features, which in turn does lead to a 15% reduction in the error rate using source space features. Hence, our work presents supporting evidence for the hypothesis that source...

  8. Face Expression Recognition and Analysis: The State of the Art

    CERN Document Server

    Bettadapura, Vinay

    2012-01-01

    The automatic recognition of facial expressions has been an active research topic since the early nineties. There have been several advances in the past few years in terms of face detection and tracking, feature extraction mechanisms and the techniques used for expression classification. This paper surveys some of the work published from 2001 to date. The paper presents a time-line view of the advances made in this field, the applications of automatic face expression recognizers, the characteristics of an ideal system, the databases that have been used and the advances made in terms of their standardization, and a detailed summary of the state of the art. The paper also discusses facial parameterization using FACS Action Units (AUs) and MPEG-4 Facial Animation Parameters (FAPs) and the recent advances in face detection, tracking and feature extraction methods. Notes have also been presented on emotions, expressions and facial features, discussion on the six prototypic expressions and the recent studies on e...

  9. COMPARISON OF EUCLIDEAN DISTANCE WITH CANBERRA DISTANCE IN FACE RECOGNITION

    Directory of Open Access Journals (Sweden)

    Sendhy Rachmat Wurdianarto

    2014-08-01

    Full Text Available The development of computer science has been very rapid. One sign of this is that computer science has entered the world of biometrics. Biometrics refers to human characteristics that can be used to distinguish one person from another. One use of a bodily characteristic or organ for identification (recognition) is the use of the face. Starting from this problem, this work explores a Matlab application for face recognition using the Euclidean Distance and Canberra Distance methods. The application development model used is the waterfall model, which comprises a series of process activities presented as requirements analysis, design using UML (Unified Modeling Language), and processing of the input face images using Euclidean Distance and Canberra Distance. The conclusion that can be drawn is that a face recognition application using the Euclidean Distance and Canberra Distance methods has its own advantages and disadvantages for each measure. In the future, the application can be extended to use video or other objects as input. Keywords: Euclidean Distance, Face Recognition, Biometrics, Canberra Distance
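
    A minimal sketch of the two distance measures compared in the paper, applied to nearest-neighbor face matching on flattened feature vectors; the gallery data and vector length are toy placeholders.

```python
import numpy as np

def euclidean(a, b):
    return np.sqrt(np.sum((a - b) ** 2))

def canberra(a, b, eps=1e-12):
    return np.sum(np.abs(a - b) / (np.abs(a) + np.abs(b) + eps))

def nearest_identity(probe, gallery, labels, metric):
    distances = [metric(probe, g) for g in gallery]
    return labels[int(np.argmin(distances))]

rng = np.random.default_rng(0)
gallery = rng.random((30, 256))            # 30 enrolled face feature vectors (placeholder)
labels = np.repeat(np.arange(10), 3)       # 10 people, 3 images each
probe = gallery[7] + 0.01 * rng.standard_normal(256)
print(nearest_identity(probe, gallery, labels, euclidean),
      nearest_identity(probe, gallery, labels, canberra))
```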

  10. A motivational determinant of facial emotion recognition: regulatory focus affects recognition of emotions in faces.

    Science.gov (United States)

    Sassenrath, Claudia; Sassenberg, Kai; Ray, Devin G; Scheiter, Katharina; Jarodzka, Halszka

    2014-01-01

    Two studies examined an unexplored motivational determinant of facial emotion recognition: observer regulatory focus. It was predicted that a promotion focus would enhance facial emotion recognition relative to a prevention focus, because the attentional strategies associated with a promotion focus enhance performance on well-learned or innate tasks such as facial emotion recognition. In Study 1, a promotion or a prevention focus was experimentally induced and better facial emotion recognition was observed under a promotion focus compared to a prevention focus. In Study 2, individual differences in chronic regulatory focus were assessed and attention allocation was measured using eye tracking during the facial emotion recognition task. Results indicated that the positive relation between a promotion focus and facial emotion recognition is mediated by shorter fixation durations on the face, which reflect a pattern of attention allocation matched to the eager strategy in a promotion focus (i.e., striving to make hits). A prevention focus had an impact neither on perceptual processing nor on facial emotion recognition. Taken together, these findings demonstrate important mechanisms and consequences of observer motivational orientation for facial emotion recognition.

  11. Face and Emotion Recognition in MCDD versus PDD-NOS

    Science.gov (United States)

    Herba, Catherine M.; de Bruin, Esther; Althaus, Monika; Verheij, Fop; Ferdinand, Robert F.

    2008-01-01

    Previous studies indicate that Multiple Complex Developmental Disorder (MCDD) children differ from PDD-NOS and autistic children on a symptom level and on psychophysiological functioning. Children with MCDD (n = 21) and PDD-NOS (n = 62) were compared on two facets of social-cognitive functioning: identification of neutral faces and facial…

  12. Individual differences in adaptive coding of face identity are linked to individual differences in face recognition ability.

    Science.gov (United States)

    Rhodes, Gillian; Jeffery, Linda; Taylor, Libby; Hayward, William G; Ewing, Louise

    2014-06-01

    Despite their similarity as visual patterns, we can discriminate and recognize many thousands of faces. This expertise has been linked to 2 coding mechanisms: holistic integration of information across the face and adaptive coding of face identity using norms tuned by experience. Recently, individual differences in face recognition ability have been discovered and linked to differences in holistic coding. Here we show that they are also linked to individual differences in adaptive coding of face identity, measured using face identity aftereffects. Identity aftereffects correlated significantly with several measures of face-selective recognition ability. They also correlated marginally with own-race face recognition ability, suggesting a role for adaptive coding in the well-known other-race effect. More generally, these results highlight the important functional role of adaptive face-coding mechanisms in face expertise, taking us beyond the traditional focus on holistic coding mechanisms.

  13. Feature selection for face recognition: a memetic algorithmic approach

    Institute of Scientific and Technical Information of China (English)

    Dinesh KUMAR; Shakti KUMAR; C. S. RAI

    2009-01-01

    The eigenface method, which uses principal component analysis (PCA), has been the standard and popular method used in face recognition. This paper presents a PCA-memetic algorithm (PCA-MA) approach for feature selection. PCA has been extended by MAs, where the former is used for feature extraction/dimensionality reduction and the latter is exploited for feature selection. Simulations were performed over the ORL and YaleB face databases using the Euclidean norm as the classifier. It was found that, as far as the recognition rate is concerned, PCA-MA completely outperforms the eigenface method. We compared the performance of PCA extended with a genetic algorithm (PCA-GA) with our proposed PCA-MA method. The results also clearly established the supremacy of the PCA-MA method over the PCA-GA method. We further extended the linear discriminant analysis (LDA) and kernel principal component analysis (KPCA) approaches with the MA and observed significant improvement in recognition rate with fewer features. This paper also compares the performance of the PCA-MA, LDA-MA and KPCA-MA approaches.

  14. Recognition of facial affect in girls with conduct disorder.

    Science.gov (United States)

    Pajer, Kathleen; Leininger, Lisa; Gardner, William

    2010-02-28

    Impaired recognition of facial affect has been reported in youths and adults with antisocial behavior. However, few of these studies have examined subjects with the psychiatric disorders associated with antisocial behavior, and there are virtually no data on females. Our goal was to determine whether facial affect recognition is impaired in adolescent girls with conduct disorder (CD). Performance on the Ekman Pictures of Facial Affect (POFA) task was compared in 35 girls with CD (mean age of 17.9 years+/-0.95; 38.9% African-American) and 30 girls who had no lifetime history of psychiatric disorder (mean age of 17.6 years+/-0.77; 30% African-American). Forty-five slides representing the six emotions in the POFA were presented one at a time; stimulus duration was 5 s. Multivariate analyses indicated that CD vs. control status was not significantly associated with the total number of correct answers or with the number of correct answers for any specific emotion. Effect sizes were all considered small. Within-CD analyses did not demonstrate a significant effect of aggressive antisocial behavior on facial affect recognition. Our findings suggest that girls with CD are not impaired in facial affect recognition. However, we did find that girls with a history of trauma/neglect made a greater number of errors in recognizing fearful faces. Explanations for these findings are discussed and implications for future research presented. 2009 Elsevier B.V. All rights reserved.

  15. Log-Gabor Weber descriptor for face recognition

    Science.gov (United States)

    Li, Jing; Sang, Nong; Gao, Changxin

    2015-09-01

    The Log-Gabor transform, which is suitable for analyzing gradually changing data such as iris and face images, has been widely used in image processing, pattern recognition, and computer vision. In most cases, only the magnitude or the phase information of the Log-Gabor transform is considered. However, the complementary effect obtained by combining magnitude and phase information simultaneously for an image-feature extraction problem has not been systematically explored in the existing works. We propose a local image descriptor for face recognition, called the Log-Gabor Weber descriptor (LGWD). The novelty of our LGWD is twofold: (1) to fully utilize the information from the magnitude and phase features of the multi-scale, multi-orientation Log-Gabor transform, we apply the Weber local binary pattern operator to each transform response; (2) the encoded Log-Gabor magnitude and phase information are fused at the feature level by utilizing a kernel canonical correlation analysis strategy, considering that feature-level information fusion is effective when the modalities are correlated. Experimental results on the AR, Extended Yale B, and UMIST face databases, compared with those available from recent experiments reported in the literature, show that our descriptor yields better performance than state-of-the-art methods.
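
    A sketch of the magnitude and phase responses that such a descriptor builds on: a single log-Gabor filter applied in the frequency domain. The wavelength, bandwidth and orientation values are illustrative assumptions, and the Weber/encoding and fusion stages are not shown.

```python
import numpy as np

def log_gabor_response(img, wavelength=8.0, sigma_on_f=0.55, theta0=0.0, sigma_theta=np.pi / 8):
    """Return the magnitude and phase of one log-Gabor filter response."""
    rows, cols = img.shape
    fy = np.fft.fftshift(np.fft.fftfreq(rows))
    fx = np.fft.fftshift(np.fft.fftfreq(cols))
    FX, FY = np.meshgrid(fx, fy)
    radius = np.sqrt(FX ** 2 + FY ** 2)
    radius[rows // 2, cols // 2] = 1.0          # avoid log(0) at the DC component
    theta = np.arctan2(FY, FX)

    f0 = 1.0 / wavelength
    radial = np.exp(-(np.log(radius / f0) ** 2) / (2 * np.log(sigma_on_f) ** 2))
    radial[rows // 2, cols // 2] = 0.0          # zero DC response
    dtheta = np.arctan2(np.sin(theta - theta0), np.cos(theta - theta0))
    angular = np.exp(-(dtheta ** 2) / (2 * sigma_theta ** 2))

    spectrum = np.fft.fftshift(np.fft.fft2(img))
    response = np.fft.ifft2(np.fft.ifftshift(spectrum * radial * angular))
    return np.abs(response), np.angle(response)

magnitude, phase = log_gabor_response(np.random.default_rng(0).random((64, 64)))
print(magnitude.shape, phase.shape)
```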

  16. Information Theory for Gabor Feature Selection for Face Recognition

    Directory of Open Access Journals (Sweden)

    Shen Linlin

    2006-01-01

    Full Text Available A discriminative and robust feature—the kernel enhanced informative Gabor feature—is proposed in this paper for face recognition. Mutual information is applied to select a set of informative and nonredundant Gabor features, which are then further enhanced by kernel methods for recognition. Compared with one of the top performing methods in the 2004 Face Verification Competition (FVC2004), our methods demonstrate a clear advantage over existing methods in accuracy, computation efficiency, and memory cost. The proposed method has been fully tested on the FERET database using the FERET evaluation protocol, and significant improvements on three of the test data sets are observed. Compared with the classical Gabor wavelet-based approaches that use a huge number of features, our method requires less than 4 milliseconds to retrieve a few hundred features. Due to the substantially reduced feature dimension, only 4 seconds are required to recognize 200 face images. The paper also unifies different Gabor filter definitions and proposes a training sample generation algorithm to reduce the effects caused by the unbalanced number of samples available in different classes.
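
    A simplified sketch of the selection idea, not the paper's exact procedure: rank Gabor-like features by mutual information with the identity labels, keep the top k, and train a kernel SVM on the reduced set. Note that ranking by univariate mutual information ignores redundancy between features; the feature matrix here is random placeholder data.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((300, 500))                 # 300 faces x 500 Gabor magnitude features (placeholder)
y = np.repeat(np.arange(30), 10)           # 30 identities

model = make_pipeline(
    SelectKBest(score_func=mutual_info_classif, k=100),  # informative-feature selection (approximation)
    SVC(kernel="rbf", C=10.0),                           # kernel enhancement step, here as a classifier
)
model.fit(X, y)
print(model.score(X, y))
```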

  18. Face Recognition as an Authentication Technique in Electronic Voting

    Directory of Open Access Journals (Sweden)

    Noha E. El-Sayad

    2013-07-01

    Full Text Available In this research, a Face Detection and Recognition (FDR) system used as an authentication technique in online voting, one of the types of electronic voting, is proposed. Web-based voting allows the voter to vote from any place, in state or out of state. The voter's image is captured and passed to a face detection algorithm (Eigenface or Gabor filter) which detects his face from the image and saves it as the first matching point. The voter's national identification card number is used to retrieve his saved photo from the database of the Supreme Council elections (SCE), which is passed to the same detection algorithm (Eigenface or Gabor filter) to detect the face in it and save it as the second matching point. The two matching points are used by a matching algorithm to check whether they are identical or not. If the two points match, the system then checks whether this person has the right to vote; if he does, a voting form is presented to him. The results show that the proposed algorithm is capable of finding over 90% of the faces in the database and allows a voter to vote in approximately 58 seconds.

  19. Face Recognition Algorithms Based on Transformed Shape Features

    Directory of Open Access Journals (Sweden)

    Sambhunath Biswas

    2012-05-01

    Full Text Available Human face recognition is indeed a challenging task, especially under illumination and pose variations. In the present paper we examine the effectiveness of two simple algorithms, using coiflet packet and Radon transforms, to recognize human faces from some databases of still gray-level images under illumination and pose variations. Both algorithms convert 2-D gray-level training face images into their respective depth maps or physical shapes, which are subsequently transformed by the coiflet packet and Radon transforms to compute energy for feature extraction. Experiments show that such transformed shape features are robust to illumination and pose variations. With the features extracted, training classes are optimally separated through linear discriminant analysis (LDA), while classification of test face images is made through a k-NN classifier based on the L1 norm and Mahalanobis distance measures. The proposed algorithms are then tested on face images that differ in illumination, expression or pose separately, obtained from three databases, namely the ORL, Yale and Essex-Grimace databases. The results so obtained are compared with two different existing algorithms. Performance using Daubechies wavelets is also examined. It is seen that the proposed coiflet packet and Radon transform based algorithms perform significantly well, especially under different illumination conditions and pose variation. The comparison shows the proposed algorithms are superior.
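
    A small sketch of the Radon-based branch only, under stated assumptions: project each grayscale face with the Radon transform, use per-angle energies as features, then LDA followed by 1-NN with an L1 metric. The angle set, data and sizes are illustrative, and the depth-map and coiflet-packet steps are omitted.

```python
import numpy as np
from skimage.transform import radon
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

def radon_energy_features(img, angles=np.arange(0, 180, 10)):
    """Energy of each Radon projection angle of the image."""
    sinogram = radon(img.astype(float), theta=angles, circle=False)
    return np.sum(sinogram ** 2, axis=0)       # one energy value per projection angle

rng = np.random.default_rng(0)
faces = rng.random((80, 32, 32))               # placeholder face images
labels = np.repeat(np.arange(8), 10)           # 8 identities x 10 images
X = np.array([radon_energy_features(f) for f in faces])

lda = LinearDiscriminantAnalysis(n_components=7).fit(X, labels)
knn = KNeighborsClassifier(n_neighbors=1, metric="manhattan").fit(lda.transform(X), labels)
print(knn.score(lda.transform(X), labels))
```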

  20. Face Recognition, Musical Appraisal, and Emotional Crossmodal Bias.

    Science.gov (United States)

    Invitto, Sara; Calcagnì, Antonio; Mignozzi, Arianna; Scardino, Rosanna; Piraino, Giulia; Turchi, Daniele; De Feudis, Irio; Brunetti, Antonio; Bevilacqua, Vitoantonio; de Tommaso, Marina

    2017-01-01

    Recent research on the crossmodal integration of visual and auditory perception suggests that evaluations of emotional information in one sensory modality may tend toward the emotional value generated in another sensory modality. This implies that the emotions elicited by musical stimuli can influence the perception of emotional stimuli presented in other sensory modalities, through a top-down process. The aim of this work was to investigate how crossmodal perceptual processing influences emotional face recognition and how potential modulation of this processing induced by music could be influenced by the subject's musical competence. We investigated how emotional face recognition processing could be modulated by listening to music and how this modulation varies according to the subjective emotional salience of the music and the listener's musical competence. The sample consisted of 24 participants: 12 professional musicians and 12 university students (non-musicians). Participants performed an emotional go/no-go task whilst listening to music by Albeniz, Chopin, or Mozart. The target stimuli were emotionally neutral facial expressions. We examined the N170 Event-Related Potential (ERP) and behavioral responses (i.e., motor reaction time to target recognition and musical emotional judgment). A linear mixed-effects model and a decision-tree learning technique were applied to N170 amplitudes and latencies. The main findings of the study were that musicians' behavioral responses and N170 are more affected by the emotional value of the music administered in the emotional go/no-go task, and that this bias is also apparent in responses to the non-target emotional face. This suggests that emotional information coming from multiple sensory channels activates a crossmodal integration process that depends upon the stimuli's emotional salience and the listener's appraisal.

  3. A face recognition algorithm based on multiple individual discriminative models

    DEFF Research Database (Denmark)

    Fagertun, Jens; Gomez, David Delgado; Ersbøll, Bjarne Kjær

    2005-01-01

    Abstract—In this paper, a novel algorithm for facial recognition is proposed. The technique combines the color texture and geometrical configuration provided by face images. Landmarks and pixel intensities are used by Principal Component Analysis and Fisher Linear Discriminant Analysis to associate...... facial image corresponds to a person in the database. Each projection is also able to visualize the most discriminative facial features of the person associated to the projection. The performance of the proposed method is tested in two experiments. Results point out the proposed technique...... as an accurate and robust tool for facial identification and unknown detection....

  4. RESEARCH AND DEVELOPMENT OF DSP-BASED FACE RECOGNITION SYSTEM FOR ROBOTIC REHABILITATION NURSING BEDS

    Directory of Open Access Journals (Sweden)

    Ming XING

    2016-04-01

    Full Text Available This article describes the development of a face recognition system with a DSP at its core. On the basis of an understanding of the background, significance and current state of research on the face recognition problem at home and abroad, an in-depth study was carried out on face detection, image preprocessing, facial structure feature extraction, facial expression feature extraction, classification and other issues involved in face recognition, resulting in the research and development of a DSP-based face recognition system for robotic rehabilitation nursing beds. The system uses a fixed-point DSP TMS320DM642 as the central processing unit, offering strong processing performance, high flexibility and programmability.

  5. Is having similar eye movement patterns during face learning and recognition beneficial for recognition performance? Evidence from hidden Markov modeling.

    Science.gov (United States)

    Chuk, Tim; Chan, Antoni B; Hsiao, Janet H

    2017-05-04

    The hidden Markov model (HMM)-based approach for eye movement analysis is able to reflect individual differences in both spatial and temporal aspects of eye movements. Here we used this approach to understand the relationship between eye movements during face learning and recognition, and its association with recognition performance. We discovered holistic (i.e., mainly looking at the face center) and analytic (i.e., specifically looking at the two eyes in addition to the face center) patterns during both learning and recognition. Although for both learning and recognition, participants who adopted analytic patterns had better recognition performance than those with holistic patterns, a significant positive correlation between the likelihood of participants' patterns being classified as analytic and their recognition performance was only observed during recognition. Significantly more participants adopted holistic patterns during learning than recognition. Interestingly, about 40% of the participants used different patterns between learning and recognition, and among them 90% switched their patterns from holistic at learning to analytic at recognition. In contrast to the scan path theory, which posits that eye movements during learning have to be recapitulated during recognition for the recognition to be successful, participants who used the same or different patterns during learning and recognition did not differ in recognition performance. The similarity between their learning and recognition eye movement patterns also did not correlate with their recognition performance. These findings suggested that perceptuomotor memory elicited by eye movement patterns during learning does not play an important role in recognition. In contrast, the retrieval of diagnostic information for recognition, such as the eyes for face recognition, is a better predictor for recognition performance. Copyright © 2017 Elsevier Ltd. All rights reserved.
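
    A toy sketch of the HMM-based eye-movement analysis described above, using the hmmlearn package (an assumption; the authors' own toolbox is not specified here): a Gaussian HMM is fitted to fixation coordinates so that hidden states can correspond to regions of interest such as the face center or the eyes. The fixation data are synthetic placeholders.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
# Two synthetic trials: fixations drawn around the face center and the eye region.
trial1 = rng.normal(loc=[0.0, 0.0], scale=0.05, size=(12, 2))    # mostly face center
trial2 = np.vstack([rng.normal([0.0, 0.3], 0.05, (6, 2)),        # eyes
                    rng.normal([0.0, 0.0], 0.05, (6, 2))])       # face center
X = np.vstack([trial1, trial2])
lengths = [len(trial1), len(trial2)]

hmm = GaussianHMM(n_components=2, covariance_type="full", random_state=0)
hmm.fit(X, lengths)
print("State means (candidate regions of interest):\n", hmm.means_)
print("Decoded states for trial 2:", hmm.predict(trial2))
```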

  6. Face Recognition Using Double Sparse Local Fisher Discriminant Analysis

    Directory of Open Access Journals (Sweden)

    Zhan Wang

    2015-01-01

    Full Text Available Local Fisher discriminant analysis (LFDA) was proposed for dealing with the multimodal problem. It combines the idea of locality preserving projections (LPP), for preserving the local structure of high-dimensional data, with the idea of Fisher discriminant analysis (FDA), for obtaining discriminant power. However, LFDA suffers from the undersampled problem, as do many dimensionality reduction methods, and its projection matrix is not sparse. In this paper, we propose double sparse local Fisher discriminant analysis (DSLFDA) for face recognition. The proposed method first constructs a sparse and data-adaptive graph with a nonnegative constraint. Then, DSLFDA reformulates the objective function as a regression-type optimization problem. The undersampled problem is avoided naturally and a sparse solution can be obtained by adding an l1 penalty to the regression-type problem. Experiments on the Yale, ORL, and CMU PIE face databases are implemented to demonstrate the effectiveness of the proposed method.

  7. Recognition of Expressions on Human Face using AI Techniques

    Directory of Open Access Journals (Sweden)

    Arpita Nagpal

    2011-08-01

    Full Text Available Facial expressions convey non-verbal cues, which play an important role in interpersonal relations. Facial expression recognition technology helps in designing intelligent human-computer interfaces. This paper discusses a three-phase technique for facial expression recognition on Indian faces. In the first phase, faces are tracked using a Haar classifier in live videos of an Indian student community. In the second phase, 38 facial feature points are detected using the Active Appearance Model (AAM) technique. In the last step, a support vector machine (SVM) is used to classify four primary facial expressions. Integrating these broader techniques and obtaining reasonably good performance is a big challenge. The performance of the proposed facial expression recognizer is 82.7%.
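
    A hedged sketch of the final classification phase only: an SVM trained on facial feature-point coordinates. The random arrays below stand in for the 38 AAM landmarks and the four expression labels; they are placeholders, not data from the paper.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
landmarks = rng.random((200, 38 * 2))       # 200 samples x 38 (x, y) feature points (placeholder)
expressions = rng.integers(0, 4, 200)       # 4 primary expression labels (placeholder)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(landmarks, expressions)
print(clf.predict(landmarks[:5]))
```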

  8. Cortical Thickness in Fusiform Face Area Predicts Face and Object Recognition Performance.

    Science.gov (United States)

    McGugin, Rankin W; Van Gulick, Ana E; Gauthier, Isabel

    2016-02-01

    The fusiform face area (FFA) is defined by its selectivity for faces. Several studies have shown that the response of FFA to nonface objects can predict behavioral performance for these objects. However, one possible account is that experts pay more attention to objects in their domain of expertise, driving signals up. Here, we show an effect of expertise with nonface objects in FFA that cannot be explained by differential attention to objects of expertise. We explore the relationship between cortical thickness of FFA and face and object recognition using the Cambridge Face Memory Test and Vanderbilt Expertise Test, respectively. We measured cortical thickness in functionally defined regions in a group of men who evidenced functional expertise effects for cars in FFA. Performance with faces and objects together accounted for approximately 40% of the variance in cortical thickness of several FFA patches. Whereas participants with a thicker FFA cortex performed better with vehicles, those with a thinner FFA cortex performed better with faces and living objects. The results point to a domain-general role of FFA in object perception and reveal an interesting double dissociation that does not contrast faces and objects but rather living and nonliving objects.

  9. Quality labeled faces in the wild (QLFW): a database for studying face recognition in real-world environments

    Science.gov (United States)

    Karam, Lina J.; Zhu, Tong

    2015-03-01

    The varying quality of face images is an important challenge that limits the effectiveness of face recognition technology when applied in real-world applications. Existing face image databases do not consider the effect of distortions that commonly occur in real-world environments. This database (QLFW) represents an initial attempt to provide a set of labeled face images spanning the wide range of quality, from no perceived impairment to strong perceived impairment for face detection and face recognition applications. Types of impairment include JPEG2000 compression, JPEG compression, additive white noise, Gaussian blur and contrast change. Subjective experiments are conducted to assess the perceived visual quality of faces under different levels and types of distortions and also to assess the human recognition performance under the considered distortions. One goal of this work is to enable automated performance evaluation of face recognition technologies in the presence of different types and levels of visual distortions. This will consequently enable the development of face recognition systems that can operate reliably on real-world visual content in the presence of real-world visual distortions. Another goal is to enable the development and assessment of visual quality metrics for face images and for face detection and recognition applications.
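
    As an illustration of how the considered distortion types can be generated for a face image (the levels below are arbitrary examples, not the database's exact settings; JPEG2000 is omitted because it needs an extra codec), a short Pillow/NumPy sketch:

```python
import io
import numpy as np
from PIL import Image, ImageFilter

def distort(face: Image.Image, kind: str) -> Image.Image:
    if kind == "jpeg":                      # JPEG compression artifacts
        buf = io.BytesIO()
        face.save(buf, format="JPEG", quality=10)
        return Image.open(io.BytesIO(buf.getvalue()))
    if kind == "blur":                      # Gaussian blur
        return face.filter(ImageFilter.GaussianBlur(radius=3))
    if kind == "noise":                     # additive white noise
        arr = np.asarray(face).astype(float)
        arr += np.random.default_rng(0).normal(0, 20, arr.shape)
        return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))
    if kind == "contrast":                  # contrast change
        arr = np.asarray(face).astype(float)
        return Image.fromarray(np.clip(128 + 0.5 * (arr - 128), 0, 255).astype(np.uint8))
    raise ValueError(kind)

# Usage (hypothetical file): distorted = distort(Image.open("face.jpg").convert("RGB"), "blur")
```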

  10. Reduced Reliance on Optimal Facial Information for Identity Recognition in Autism Spectrum Disorder

    Science.gov (United States)

    Leonard, Hayley C.; Annaz, Dagmara; Karmiloff-Smith, Annette; Johnson, Mark H.

    2013-01-01

    Previous research into face processing in autism spectrum disorder (ASD) has revealed atypical biases toward particular facial information during identity recognition. Specifically, a focus on features (or high spatial frequencies [HSFs]) has been reported for both face and nonface processing in ASD. The current study investigated the development…

  12. A Comparative Study of 2D PCA Face Recognition Method with Other Statistically Based Face Recognition Methods

    Science.gov (United States)

    Senthilkumar, R.; Gnanamurthy, R. K.

    2016-09-01

    In this paper, two-dimensional principal component analysis (2D PCA) is compared with other algorithms used for image representation and face recognition, such as 1D PCA, Fisher discriminant analysis (FDA), independent component analysis (ICA) and kernel PCA (KPCA). As opposed to PCA, 2D PCA is based on 2D image matrices rather than 1D vectors, so the image matrix does not need to be transformed into a vector prior to feature extraction. Instead, an image covariance matrix is constructed directly using the original image matrices, and its eigenvectors are derived for image feature extraction. To test 2D PCA and evaluate its performance, a series of experiments is performed on three face image databases: the ORL, Senthil, and Yale face databases. The recognition rate across all trials was higher using 2D PCA than PCA, FDA, ICA and KPCA. The experimental results also indicate that the extraction of image features is computationally more efficient using 2D PCA than PCA.
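
    A minimal NumPy sketch of 2D PCA as described above: the image covariance matrix is built directly from the 2-D image matrices and each image is projected onto its leading eigenvectors. The image sizes, sample count and number of components are placeholders.

```python
import numpy as np

def two_d_pca(images, n_components=5):
    """images: array of shape (n_samples, h, w). Returns projection matrix of shape (w, n_components)."""
    mean_img = images.mean(axis=0)
    centered = images - mean_img
    # Image covariance matrix G = (1/M) * sum_i (A_i - Abar)^T (A_i - Abar), shape (w, w).
    G = np.einsum("nhw,nhv->wv", centered, centered) / len(images)
    eigvals, eigvecs = np.linalg.eigh(G)
    return eigvecs[:, ::-1][:, :n_components]    # eigenvectors of the largest eigenvalues

def project(image, components):
    return image @ components                    # feature matrix Y = A X, shape (h, n_components)

rng = np.random.default_rng(0)
faces = rng.random((100, 56, 46))                # 100 placeholder 56x46 face images
X = two_d_pca(faces)
print(project(faces[0], X).shape)                # (56, 5)
```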

  13. Pose invariant face recognition: 3D model from single photo

    Science.gov (United States)

    Napoléon, Thibault; Alfalou, Ayman

    2017-02-01

    Face recognition is widely studied in the literature for its possibilities in surveillance and security. In this paper, we report a novel algorithm for the identification task. The technique is based on an optimized 3D modeling step that reconstructs faces in different poses from a limited number of references (i.e., one image per class/person). In particular, we propose to use an active shape model to detect a set of keypoints on the face that are necessary to deform our synthetic model with our optimized finite element method. In order to improve the deformation, we propose a regularization based on graph distances. To perform the identification we use the VanderLugt correlator, well known to address this task effectively. In addition, we add a difference-of-Gaussians filtering step to highlight the edges and a description step based on local binary patterns. The experiments are performed on the PHPID database, enhanced with our 3D reconstructed faces of each person with azimuth and elevation ranging from -30° to +30°. The obtained results prove the robustness of our new method, with 88.76% correct identification where the classic 2D approach (based on the VLC) obtains just 44.97%.

  14. Video based Parallel Face recognition using Gabor filter on homogeneous distributed systems

    DEFF Research Database (Denmark)

    Ali, Usman; Bilal, Muhammad

    This research aimed at building a fast, parallel video face recognition system based on the well-known Gabor filtering approach. Face recognition is done after face detection in each frame of the video, individually. The master-slave technique is employed as the parallel computing model. Each frame...... is processed by a different slave personal computer (PC) attached to the master, which acquires and distributes the frames. It is believed that this approach can be used for practical face recognition applications with some further optimization...

  15. 3D face recognition with asymptotic cones based principal curvatures

    KAUST Repository

    Tang, Yinhang

    2015-05-01

    The classical curvatures of smooth surfaces (Gaussian, mean and principal curvatures) have been widely used in 3D face recognition (FR). However, facial surfaces resulting from 3D sensors are discrete meshes. In this paper, we present a general framework and define three principal curvatures on discrete surfaces for the purpose of 3D FR. These principal curvatures are derived from the construction of asymptotic cones associated to any Borel subset of the discrete surface, and they describe the local geometry of the underlying mesh. The first two of them correspond to the classical principal curvatures in the smooth case. We isolate the third principal curvature, which carries meaningful geometric shape information. The three principal curvatures at different Borel subset scales give multi-scale local facial surface descriptors. We combine the proposed principal curvatures with the LNP-based facial descriptor and SRC for recognition. The identification and verification experiments demonstrate the practicability and accuracy of the third principal curvature and of the fusion of multi-scale Borel subset descriptors on 3D faces from FRGC v2.0.

  16. Face recognition using 4-PSK joint transform correlation

    Science.gov (United States)

    Moniruzzaman, Md.; Alam, Mohammad S.

    2016-04-01

    This paper presents an efficient phase-encoded, 4-phase-shift-keying (PSK) based fringe-adjusted joint transform correlation (FJTC) technique for face recognition applications. The proposed technique uses phase encoding and a 4-channel phase shifting method on the reference image, which can be pre-calculated without affecting the system processing speed. The 4-channel PSK step eliminates the unwanted zero-order term and the autocorrelation among multiple similar input scene objects, while yielding an enhanced cross-correlation output. For each channel, discrete wavelet decomposition preprocessing has been used to accommodate the impact of various 3D facial expressions, the effects of noise, and illumination variations. The performance of the proposed technique has been tested using various image datasets, such as the Yale and extended Yale B databases, under different environments such as illumination variation and 3D changes in facial expression. The test results show that the proposed technique yields significantly better performance when compared to existing JTC-based face recognition techniques.

  17. Multi-texture local ternary pattern for face recognition

    Science.gov (United States)

    Essa, Almabrok; Asari, Vijayan

    2017-05-01

    In the imagery and pattern analysis domain, a variety of descriptors have been proposed and employed for different computer vision applications such as face detection and recognition. Many of them are affected by different conditions during the image acquisition process, such as variations in illumination and the presence of noise, because they rely entirely on the image intensity values to encode the image information. To overcome these problems, a novel technique named Multi-Texture Local Ternary Pattern (MTLTP) is proposed in this paper. MTLTP combines edges and corners based on the local ternary pattern strategy to extract the local texture features of the input image. It then returns a spatial histogram feature vector, which is the descriptor for each image that we use to recognize a person. Experimental results using a k-nearest neighbors (k-NN) classifier on two publicly available datasets justify our algorithm for efficient face recognition in the presence of extreme variations of illumination/lighting environments and slight variations in pose conditions.
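
    A small sketch of the basic local ternary pattern encoding that MTLTP builds on (the edge/corner combination specific to MTLTP is not shown): each neighbor is coded as +1 / 0 / -1 relative to a tolerance band around the center pixel, and the ternary code is split into upper and lower binary patterns. The 3x3 neighborhood and threshold are standard LTP choices, not the paper's exact settings.

```python
import numpy as np

def ltp_codes(gray, t=5):
    """Return (upper, lower) binary pattern images for a grayscale uint8 array."""
    g = gray.astype(int)
    h, w = g.shape
    upper = np.zeros((h - 2, w - 2), dtype=int)
    lower = np.zeros((h - 2, w - 2), dtype=int)
    center = g[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = g[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        upper += (neighbour >= center + t).astype(int) << bit   # ternary code +1
        lower += (neighbour <= center - t).astype(int) << bit   # ternary code -1
    return upper, lower

# The per-image descriptor is then the concatenation of the histograms of both maps.
img = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
up, low = ltp_codes(img)
descriptor = np.concatenate([np.bincount(up.ravel(), minlength=256),
                             np.bincount(low.ravel(), minlength=256)])
print(descriptor.shape)
```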

  18. Leveraging Billions of Faces to Overcome Performance Barriers in Unconstrained Face Recognition

    CERN Document Server

    Taigman, Yaniv

    2011-01-01

    We apply the face recognition technology developed in-house at face.com to a well-accepted benchmark and show that without any tuning we are able to considerably surpass state-of-the-art results. Much of the improvement is concentrated in the high-value performance point of zero false positive matches, where the obtained recall rate almost doubles the best reported result to date. We discuss the various components and innovations of our system that enable this significant performance gap. These components include extensive utilization of an accurate 3D reconstructed shape model to deal with challenges arising from pose and illumination. In addition, discriminative models based on billions of faces are used in order to overcome aging and facial expression as well as low light and overexposure. Finally, we identify a challenging set of identification queries that might provide useful focus for future research.

  19. Mapping the emotional face. How individual face parts contribute to successful emotion recognition.

    Science.gov (United States)

    Wegrzyn, Martin; Vogt, Maria; Kireclioglu, Berna; Schneider, Julia; Kissler, Johanna

    2017-01-01

    Which facial features allow human observers to successfully recognize expressions of emotion? While the eyes and mouth have frequently been shown to be of high importance, research on facial action units has made more precise predictions about the areas involved in displaying each emotion. The present research investigated, on a fine-grained level, which physical features are most relied on when decoding facial expressions. In the experiment, individual faces expressing the basic emotions according to Ekman were hidden behind a mask of 48 tiles, which was sequentially uncovered. Participants were instructed to stop the sequence as soon as they recognized the facial expression and assign it the correct label. For each part of the face, its contribution to successful recognition was computed, allowing us to visualize the importance of different face areas for each expression. Overall, observers relied mostly on the eye and mouth regions when successfully recognizing an emotion. Furthermore, the difference in the importance of eyes and mouth allowed the expressions to be grouped in a continuous space, ranging from sadness and fear (reliance on the eyes) to disgust and happiness (mouth). The face parts with the highest diagnostic value for expression identification were typically located in areas corresponding to action units from the facial action coding system. A similarity analysis of the usefulness of different face parts for expression recognition demonstrated that faces cluster according to the emotion they express, rather than by low-level physical features. Also, expressions relying more on the eyes or mouth region were in close proximity in the constructed similarity space. These analyses help to better understand how human observers process expressions of emotion, by delineating the mapping from facial features to psychological representation.

  20. Infrared face recognition based on binary particle swarm optimization and SVM-wrapper model

    Science.gov (United States)

    Xie, Zhihua; Liu, Guodong

    2015-10-01

    Infrared facial imaging, being light-independent and less vulnerable to variations in facial skin, expression and posture, can avoid or limit the drawbacks of face recognition in visible light. Robust feature selection and representation is a key issue for infrared face recognition research. This paper proposes a novel infrared face recognition method based on the local binary pattern (LBP), which can improve the robustness of infrared face recognition under different environmental conditions. How to make full use of the discriminative ability of LBP patterns is an important problem. A search algorithm combining binary particle swarm optimization with an SVM is used to find the most discriminative subset of the LBP features. Experimental results show that the proposed method outperforms traditional LBP-based infrared face recognition methods and can significantly improve recognition performance.
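
    A condensed sketch of the wrapper idea, not the paper's implementation: a binary particle swarm searches over a mask of LBP feature indices, and each candidate subset is scored by the cross-validated accuracy of an SVM. The swarm size, iteration count, inertia/acceleration constants and the toy feature matrix are all illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((120, 64))                  # 120 infrared faces x 64 LBP histogram bins (placeholder)
y = np.repeat(np.arange(12), 10)           # 12 identities

def fitness(mask):
    """Wrapper objective: cross-validated SVM accuracy on the selected feature subset."""
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(SVC(kernel="rbf"), X[:, mask.astype(bool)], y, cv=3).mean()

n_particles, n_features, iters = 10, X.shape[1], 15
positions = rng.integers(0, 2, (n_particles, n_features))
velocities = rng.normal(0, 0.1, (n_particles, n_features))
pbest, pbest_fit = positions.copy(), np.array([fitness(p) for p in positions])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, n_features))
    velocities = 0.7 * velocities + 1.5 * r1 * (pbest - positions) + 1.5 * r2 * (gbest - positions)
    # Binary PSO: each bit is resampled with probability sigmoid(velocity).
    positions = (rng.random((n_particles, n_features)) < 1 / (1 + np.exp(-velocities))).astype(int)
    fits = np.array([fitness(p) for p in positions])
    improved = fits > pbest_fit
    pbest[improved], pbest_fit[improved] = positions[improved], fits[improved]
    gbest = pbest[pbest_fit.argmax()].copy()

print("selected features:", int(gbest.sum()), "cv accuracy:", fitness(gbest))
```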

  1. Visual abilities are important for auditory-only speech recognition: evidence from autism spectrum disorder.

    Science.gov (United States)

    Schelinski, Stefanie; Riedel, Philipp; von Kriegstein, Katharina

    2014-12-01

    In auditory-only conditions, for example when we listen to someone on the phone, it is essential to quickly and accurately recognize what is said (speech recognition). Previous studies have shown that speech recognition performance in auditory-only conditions is better if the speaker is known not only by voice, but also by face. Here, we tested the hypothesis that such an improvement in auditory-only speech recognition depends on the ability to lip-read. To test this, we recruited a group of adults with autism spectrum disorder (ASD), a condition associated with difficulties in lip-reading, and typically developing controls. All participants were trained to identify six speakers by name and voice. Three speakers were learned by a video showing their face and three others were learned in a matched control condition without face. After training, participants performed an auditory-only speech recognition test that consisted of sentences spoken by the trained speakers. As a control condition, the test also included speaker identity recognition on the same auditory material. The results showed that, in the control group, performance in speech recognition was improved for speakers known by face in comparison to speakers learned in the matched control condition without face. The ASD group lacked such a performance benefit. For the ASD group, auditory-only speech recognition was even worse for speakers known by face compared to speakers not known by face. In speaker identity recognition, the ASD group performed worse than the control group independently of whether the speakers were learned with or without face. Two additional visual experiments showed that the ASD group performed worse in lip-reading, whereas face identity recognition was within the normal range. The findings support the view that auditory-only communication involves specific visual mechanisms. Further, they indicate that in ASD, speaker-specific dynamic visual information is not available to optimize auditory

  2. Visual scan paths and recognition of facial identity in autism spectrum disorder and typical development.

    Science.gov (United States)

    Wilson, C Ellie; Palermo, Romina; Brock, Jon

    2012-01-01

    Previous research suggests that many individuals with autism spectrum disorder (ASD) have impaired facial identity recognition, and also exhibit abnormal visual scanning of faces. Here, two hypotheses accounting for an association between these observations were tested: i) better facial identity recognition is associated with increased gaze time on the Eye region; ii) better facial identity recognition is associated with increased eye-movements around the face. Eye-movements of 11 children with ASD and 11 age-matched typically developing (TD) controls were recorded whilst they viewed a series of faces, and then completed a two alternative forced-choice recognition memory test for the faces. Scores on the memory task were standardized according to age. In both groups, there was no evidence of an association between the proportion of time spent looking at the Eye region of faces and age-standardized recognition performance, thus the first hypothesis was rejected. However, the 'Dynamic Scanning Index'--which was incremented each time the participant saccaded into and out of one of the core-feature interest areas--was strongly associated with age-standardized face recognition scores in both groups, even after controlling for various other potential predictors of performance. In support of the second hypothesis, results suggested that increased saccading between core-features was associated with more accurate face recognition ability, both in typical development and ASD. Causal directions of this relationship remain undetermined.

  3. Visual scan paths and recognition of facial identity in autism spectrum disorder and typical development.

    Directory of Open Access Journals (Sweden)

    C Ellie Wilson

    Full Text Available BACKGROUND: Previous research suggests that many individuals with autism spectrum disorder (ASD) have impaired facial identity recognition, and also exhibit abnormal visual scanning of faces. Here, two hypotheses accounting for an association between these observations were tested: i) better facial identity recognition is associated with increased gaze time on the Eye region; ii) better facial identity recognition is associated with increased eye-movements around the face. METHODOLOGY AND PRINCIPAL FINDINGS: Eye-movements of 11 children with ASD and 11 age-matched typically developing (TD) controls were recorded whilst they viewed a series of faces, and then completed a two alternative forced-choice recognition memory test for the faces. Scores on the memory task were standardized according to age. In both groups, there was no evidence of an association between the proportion of time spent looking at the Eye region of faces and age-standardized recognition performance, thus the first hypothesis was rejected. However, the 'Dynamic Scanning Index'--which was incremented each time the participant saccaded into and out of one of the core-feature interest areas--was strongly associated with age-standardized face recognition scores in both groups, even after controlling for various other potential predictors of performance. CONCLUSIONS AND SIGNIFICANCE: In support of the second hypothesis, results suggested that increased saccading between core-features was associated with more accurate face recognition ability, both in typical development and ASD. Causal directions of this relationship remain undetermined.

  4. Knowledge scale effects in face recognition: an electrophysiological investigation.

    Science.gov (United States)

    Abdel Rahman, Rasha; Sommer, Werner

    2012-03-01

    Although the amount or scale of biographical knowledge held in store about a person may differ widely, little is known about whether and how these differences may affect the retrieval processes triggered by the person's face. In a learning paradigm, we manipulated the scale of biographical knowledge while controlling for a common set of minimal knowledge and perceptual experience with the faces. A few days after learning, and again after 6 months, knowledge effects were assessed in three tasks, none of which concerned the additional knowledge. Whereas the performance effects of additional knowledge were small, event-related brain potentials recorded during testing showed amplitude modulations in the time range of the N400 component--indicative of knowledge access--but also at a much earlier latency in the P100 component--reflecting early stages of visual analysis. However, no effects were found in the N170 component, which is taken to reflect structural analyses of faces. The present findings replicate knowledge scale effects in object recognition and suggest that enhanced knowledge affects both early visual processes and the later processes associated with semantic processing, even when this knowledge is not task-relevant.

  5. Multi-modal face parts fusion based on Gabor feature for face recognition

    Institute of Scientific and Technical Information of China (English)

    Xiang Yan; Su Guangda; Shang Yan; Li Congcong

    2009-01-01

    A novel face recognition method, which is a fusion of multi-modal face parts based on Gabor features (MMP-GF), is proposed in this paper. Firstly, the bare face image detached from the normalized image was convolved with a family of Gabor kernels, and then, according to the face structure and the key-point locations, the calculated Gabor images were divided into five parts: Gabor face, Gabor eyebrow, Gabor eye, Gabor nose and Gabor mouth. After that, the multi-modal Gabor features were spatially partitioned into non-overlapping regions and the region averages were concatenated into a low-dimensional feature vector, whose dimension was further reduced by principal component analysis (PCA). In the decision-level fusion, matching results calculated separately for the five parts were combined according to linear discriminant analysis (LDA), and a normalized matching algorithm was used to improve performance. Experiments on the FERET database show that the proposed MMP-GF method achieves good robustness to expression and age variations.
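
    A minimal sketch of the Gabor-plus-region-averaging step is given below; it operates on the whole face crop only (not the five face parts), and the filter-bank sizes, wavelengths and grid layout are assumptions rather than the paper's settings.

```python
import cv2
import numpy as np

def gabor_region_features(gray_face, n_orient=8, n_scale=5, grid=(8, 8)):
    """Convolve a normalized face crop with a Gabor filter bank, split each
    response into a grid of non-overlapping regions, and concatenate region means."""
    feats = []
    for s in range(n_scale):
        lam = 4.0 * (2 ** (s * 0.5))                  # wavelength per scale
        for o in range(n_orient):
            theta = o * np.pi / n_orient
            # arguments: ksize, sigma, theta, lambda, gamma
            kern = cv2.getGaborKernel((31, 31), 0.56 * lam, theta, lam, 0.5)
            resp = np.abs(cv2.filter2D(gray_face.astype(np.float32), cv2.CV_32F, kern))
            h, w = resp.shape
            gh, gw = grid
            for i in range(gh):
                for j in range(gw):
                    block = resp[i*h//gh:(i+1)*h//gh, j*w//gw:(j+1)*w//gw]
                    feats.append(block.mean())
    return np.array(feats)
```

    The concatenated region means for all gallery faces can then be stacked into a matrix and reduced with, for example, scikit-learn's PCA before matching, mirroring the dimension-reduction step described above.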

  6. The Effects of Inversion and Familiarity on Face versus Body Cues to Person Recognition

    Science.gov (United States)

    Robbins, Rachel A.; Coltheart, Max

    2012-01-01

    Extensive research has focused on face recognition, and much is known about this topic. However, much of this work seems to be based on an assumption that faces are the most important aspect of person recognition. Here we test this assumption in two experiments. We show that when viewers are forced to choose, they "do" use the face more than the…

  7. Influence of Emotional Facial Expressions on 3-5-Year-Olds' Face Recognition

    Science.gov (United States)

    Freitag, Claudia; Schwarzer, Gudrun

    2011-01-01

    Three experiments examined 3- and 5-year-olds' recognition of faces in constant and varied emotional expressions. Children were asked to identify repeatedly presented target faces, distinguishing them from distractor faces, during an immediate recognition test and during delayed assessments after 10 min and one week. Emotional facial expression…

  9. The impact of image quality on the performance of face recognition

    NARCIS (Netherlands)

    Dutta, Abhishek; Veldhuis, Raymond; Spreeuwers, Luuk

    2012-01-01

    The performance of a face recognition system depends on the quality of both test and reference images participating in the face comparison process. In a forensic evaluation case involving face recognition, we do not have any control over the quality of the trace (image captured by a CCTV at a crime

  10. Postencoding cognitive processes in the cross-race effect: Categorization and individuation during face recognition.

    Science.gov (United States)

    Ho, Michael R; Pezdek, Kathy

    2016-06-01

    The cross-race effect (CRE) describes the finding that same-race faces are recognized more accurately than cross-race faces. According to social-cognitive theories of the CRE, processes of categorization and individuation at encoding account for differential recognition of same- and cross-race faces. Recent face memory research has suggested that similar but distinct categorization and individuation processes also occur postencoding, at recognition. Using a divided-attention paradigm, in Experiments 1A and 1B we tested and confirmed the hypothesis that distinct postencoding categorization and individuation processes occur during the recognition of same- and cross-race faces. Specifically, postencoding configural divided-attention tasks impaired recognition accuracy more for same-race than for cross-race faces; on the other hand, for White (but not Black) participants, postencoding featural divided-attention tasks impaired recognition accuracy more for cross-race than for same-race faces. A social categorization paradigm used in Experiments 2A and 2B tested the hypothesis that the postencoding in-group or out-group social orientation to faces affects categorization and individuation processes during the recognition of same-race and cross-race faces. Postencoding out-group orientation to faces resulted in categorization for White but not for Black participants. This was evidenced by White participants' impaired recognition accuracy for same-race but not for cross-race out-group faces. Postencoding in-group orientation to faces had no effect on recognition accuracy for either same-race or cross-race faces. The results of Experiments 2A and 2B suggest that this social orientation facilitates White but not Black participants' individuation and categorization processes at recognition. Models of recognition memory for same-race and cross-race faces need to account for processing differences that occur at both encoding and recognition.

  11. Adaptive feature-specific imaging: a face recognition example.

    Science.gov (United States)

    Baheti, Pawan K; Neifeld, Mark A

    2008-04-01

    We present an adaptive feature-specific imaging (AFSI) system and consider its application to a face recognition task. The proposed system makes use of previous measurements to adapt the projection basis at each step. Using sequential hypothesis testing, we compare AFSI with static-FSI (SFSI) and static or adaptive conventional imaging in terms of the number of measurements required to achieve a specified probability of misclassification (Pe). The AFSI system exhibits significant improvement compared to SFSI and conventional imaging at low signal-to-noise ratio (SNR). It is shown that for M = 4 hypotheses and a desired Pe = 10^-2, AFSI requires 100 times fewer measurements than the adaptive conventional imager at SNR = -20 dB. We also show a trade-off, in terms of average detection time, between measurement SNR and adaptation advantage, resulting in an optimal value of integration time (equivalent to SNR) per measurement.

  12. Multiple Kernel Learning in Fisher Discriminant Analysis for Face Recognition

    Directory of Open Access Journals (Sweden)

    Xiao-Zhang Liu

    2013-02-01

    Full Text Available Recent applications and developments based on support vector machines (SVMs) have shown that using multiple kernels instead of a single one can enhance classifier performance. However, there are few reports on the performance of the kernel-based Fisher discriminant analysis (kernel-based FDA) method with multiple kernels. This paper proposes a multiple kernel construction method for kernel-based FDA. The constructed kernel is a linear combination of several base kernels with a constraint on their weights. By maximizing the margin maximization criterion (MMC), we present an iterative scheme for weight optimization. The experiments on the FERET and CMU PIE face databases show that our multiple kernel Fisher discriminant analysis (MKFD) achieves high recognition performance compared with single-kernel-based FDA. The experiments also show that the constructed kernel relaxes parameter selection for kernel-based FDA to some extent.
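
    The combined-kernel construction can be sketched as below. The weight optimization via MMC and the kernel FDA projection itself are omitted; an SVM with a precomputed kernel stands in for the kernel-based classifier, and the base-kernel widths and weights are assumptions.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def combined_kernel(Xa, Xb, weights, gammas):
    """Linear combination of RBF base kernels with non-negative weights
    normalized to sum to one (the constraint used in multiple kernel learning)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * rbf_kernel(Xa, Xb, gamma=g) for wi, g in zip(w, gammas))

# usage with a precomputed-kernel classifier as a stand-in for kernel FDA
gammas = [1e-3, 1e-2, 1e-1]      # assumed base-kernel widths
weights = [0.2, 0.5, 0.3]        # would normally be optimized (e.g., via MMC)
# K_train = combined_kernel(X_train, X_train, weights, gammas)
# clf = SVC(kernel="precomputed").fit(K_train, y_train)
# K_test = combined_kernel(X_test, X_train, weights, gammas)
# y_pred = clf.predict(K_test)
```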

  13. Using Face Recognition System in Ship Protection Process

    Directory of Open Access Journals (Sweden)

    Miroslav Bača

    2006-03-01

    Full Text Available The process of security improvement is a huge problem, especially in large ships. Terrorist attacks and everyday threats against life and property damage transport and tourist companies, especially large tourist ships. Every person on a ship can be recognized and identified using something that the person knows or by means of something the person possesses. The best results will be obtained by using a combination of the person's knowledge with one biometric characteristic. Analyzing the problem of biometrics in ITS security, we can conclude that a face recognition process supported by one or two traditional biometric characteristics can give very good results regarding ship security. In this paper we will describe a biometric system based on face recognition. Special focus will be given to crew members' biometric security in crisis situations like kidnapping, robbery or illness.

  14. A NON-PARAMETER BAYESIAN CLASSIFIER FOR FACE RECOGNITION

    Institute of Scientific and Technical Information of China (English)

    Liu Qingshan; Lu Hanqing; Ma Songde

    2003-01-01

    A non-parametric Bayesian classifier based on Kernel Density Estimation (KDE) is presented for face recognition; it can be regarded as a weighted Nearest Neighbor (NN) classifier in form. The class-conditional density is estimated by KDE, and the bandwidth of the kernel function is estimated by the Expectation Maximization (EM) algorithm. Two subspace analysis methods--linear Principal Component Analysis (PCA) and Kernel-based PCA (KPCA)--are respectively used to extract features, and the proposed method is compared with the Probabilistic Reasoning Models (PRM), Nearest Center (NC) and NN classifiers, which are widely used in face recognition systems. The experiments are performed on two benchmarks and the results show that KDE outperforms the PRM, NC and NN classifiers.
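
    A compact sketch of the KDE-as-Bayes-classifier idea follows. A fixed bandwidth stands in for the paper's EM-based bandwidth estimation (an assumption), and the PCA/KPCA feature extraction step is assumed to have already produced the training matrix `X` and labels `y`.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

class KDEClassifier:
    """Bayes classifier whose class-conditional densities are estimated by
    kernel density estimation; a fixed bandwidth replaces EM estimation."""
    def __init__(self, bandwidth=1.0):
        self.bandwidth = bandwidth

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.priors_ = np.array([(y == c).mean() for c in self.classes_])
        self.kdes_ = [KernelDensity(kernel="gaussian", bandwidth=self.bandwidth)
                      .fit(X[y == c]) for c in self.classes_]
        return self

    def predict(self, X):
        # log p(x|c) + log p(c) for every class, pick the maximum a posteriori
        log_joint = np.column_stack([kde.score_samples(X) for kde in self.kdes_])
        log_joint += np.log(self.priors_)
        return self.classes_[log_joint.argmax(axis=1)]
```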

  15. Invariant Robust 3-D Face Recognition based on the Hilbert Transform in Spectral Space

    Directory of Open Access Journals (Sweden)

    Eric Paquet

    2006-04-01

    Full Text Available One of the main objectives of face recognition is to determine whether an acquired face belongs to a reference database and to subsequently identify the corresponding individual. Face recognition has applications in, for instance, forensic science and security. A face recognition algorithm, to be useful in real applications, must discriminate between individuals, process data in real-time and be robust against occlusion, facial expression and noise. A new method for robust recognition of three-dimensional faces is presented. The method is based on harmonic coding, the Hilbert transform and spectral analysis of 3-D depth distributions. Experimental results with three-dimensional faces, which were scanned with a laser scanner, are presented. The proposed method recognises a face with various facial expressions in the presence of occlusion, has good discrimination, is able to compare a face against a large database of faces in real-time and is robust against shot noise and additive noise.

  16. A Statistical Nonparametric Approach of Face Recognition: Combination of Eigenface & Modified k-Means Clustering

    CERN Document Server

    Bag, Soumen; Sen, Prithwiraj; Sanyal, Gautam

    2011-01-01

    Facial expressions convey non-verbal cues, which play an important role in interpersonal relations. Automatic recognition of the human face based on facial expression can be an important component of a natural human-machine interface. It may also be used in behavioural science. Although humans can recognize faces practically without any effort, reliable face recognition by machine remains a challenge. This paper presents a new approach for recognizing the face of a person considering the expressions of the same human face at different instances of time. The methodology combines the Eigenface method for feature extraction with modified k-means clustering for identification of the human face. This method performs face recognition without using conventional distance-measure classifiers. Simulation results show that the proposed face recognition based on k-means clustering is useful for face images with different facial expressions.
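
    A minimal sketch of the eigenface-plus-clustering pipeline is shown below, using scikit-learn's plain KMeans rather than the paper's modified k-means; the component and cluster counts are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def eigenface_kmeans(face_matrix, n_components=50, n_identities=40, seed=0):
    """Project vectorized face images onto eigenfaces (PCA), then group the
    projections with k-means; each cluster is treated as one identity."""
    pca = PCA(n_components=n_components).fit(face_matrix)
    coeffs = pca.transform(face_matrix)                  # eigenface coefficients
    km = KMeans(n_clusters=n_identities, n_init=10, random_state=seed).fit(coeffs)
    return pca, km

def identify(pca, km, face_vector):
    """Assign a new face to the nearest cluster centre in eigenface space."""
    return int(km.predict(pca.transform(face_vector.reshape(1, -1)))[0])
```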

  17. Facial expression influences face identity recognition during the attentional blink.

    Science.gov (United States)

    Bach, Dominik R; Schmidt-Daffy, Martin; Dolan, Raymond J

    2014-12-01

    Emotional stimuli (e.g., negative facial expressions) enjoy prioritized memory access when task relevant, consistent with their ability to capture attention. Whether emotional expression also impacts on memory access when task-irrelevant is important for arbitrating between feature-based and object-based attentional capture. Here, the authors address this question in 3 experiments using an attentional blink task with face photographs as first and second target (T1, T2). They demonstrate reduced neutral T2 identity recognition after angry or happy T1 expression, compared to neutral T1, and this supports attentional capture by a task-irrelevant feature. Crucially, after neutral T1, T2 identity recognition was enhanced and not suppressed when T2 was angry--suggesting that attentional capture by this task-irrelevant feature may be object-based and not feature-based. As an unexpected finding, both angry and happy facial expressions suppress memory access for competing objects, but only angry facial expression enjoyed privileged memory access. This could imply that these 2 processes are relatively independent from one another.

  18. Infrared Face Recognition

    Institute of Scientific and Technical Information of China (English)

    伍世虔

    2013-01-01

    It has been found that facial thermograms vary with ambient temperature as well as physiological and psychological conditions, resulting in a severe decline in the face recognition rate. To cope with this problem, a blood perfusion model is proposed in this paper. The proposed model converts the facial thermograms into blood perfusion data, which are more consistent in representing facial features. Our real-time infrared face recognition system (RIFARS) is then introduced. Experiments conducted on both same-session and time-lapse data demonstrate that (1) the blood perfusion data are less sensitive to ambient temperature if the human body is in a steady state; and (2) for time-lapse data, the performance with the blood perfusion data is nearly identical to that on same-session data, while the recognition rate with the temperature data decreases dramatically in this case.

  19. Face learning and the emergence of view-independent face recognition: an event-related brain potential study.

    Science.gov (United States)

    Zimmermann, Friederike G S; Eimer, Martin

    2013-06-01

    Recognizing unfamiliar faces is more difficult than familiar face recognition, and this has been attributed to qualitative differences in the processing of familiar and unfamiliar faces. Familiar faces are assumed to be represented by view-independent codes, whereas unfamiliar face recognition depends mainly on view-dependent low-level pictorial representations. We employed an electrophysiological marker of visual face recognition processes in order to track the emergence of view-independence during the learning of previously unfamiliar faces. Two face images showing either the same or two different individuals in the same or two different views were presented in rapid succession, and participants had to perform an identity-matching task. On trials where both faces showed the same view, repeating the face of the same individual triggered an N250r component at occipito-temporal electrodes, reflecting the rapid activation of visual face memory. A reliable N250r component was also observed on view-change trials. Crucially, this view-independence emerged as a result of face learning. In the first half of the experiment, N250r components were present only on view-repetition trials but were absent on view-change trials, demonstrating that matching unfamiliar faces was initially based on strictly view-dependent codes. In the second half, the N250r was triggered not only on view-repetition trials but also on view-change trials, indicating that face recognition had now become more view-independent. This transition may be due to the acquisition of abstract structural codes of individual faces during face learning, but could also reflect the formation of associative links between sets of view-specific pictorial representations of individual faces.

  20. An Information-Theoretic Measure for Face Recognition: Comparison with Structural Similarity

    OpenAIRE

    Asmhan Flieh Hassan; Zahir M. Hussain; Dong Cai-lin

    2014-01-01

    Automatic recognition of people's faces is a challenging problem that has received significant attention from signal processing researchers in recent years. This is due to its several applications in different fields, including security and forensic analysis. Despite this attention, face recognition is still one of the most challenging problems. Up to this moment, there is no technique that provides a reliable solution to all situations. In this paper a novel technique for face recognition i...

  1. Computer-Assisted Face Processing Instruction Improves Emotion Recognition, Mentalizing, and Social Skills in Students with ASD.

    Science.gov (United States)

    Rice, Linda Marie; Wall, Carla Anne; Fogel, Adam; Shic, Frederick

    2015-07-01

    This study examined the extent to which a computer-based social skills intervention called FaceSay was associated with improvements in affect recognition, mentalizing, and social skills of school-aged children with Autism Spectrum Disorder (ASD). FaceSay offers students simulated practice with eye gaze, joint attention, and facial recognition skills. This randomized control trial included school-aged children meeting educational criteria for autism (N = 31). Results demonstrated that participants who received the intervention improved their affect recognition and mentalizing skills, as well as their social skills. These findings suggest that, by targeting face-processing skills, computer-based interventions may produce changes in broader cognitive and social-skills domains in a cost- and time-efficient manner.

  2. More Pronounced Deficits in Facial Emotion Recognition for Schizophrenia than Bipolar Disorder

    Science.gov (United States)

    Goghari, Vina M; Sponheim, Scott R

    2012-01-01

    Schizophrenia and bipolar disorder are typically separated in diagnostic systems. Behavioural, cognitive, and brain abnormalities associated with each disorder nonetheless overlap. We evaluated the diagnostic specificity of facial emotion recognition deficits in schizophrenia and bipolar disorder to determine whether select aspects of emotion recognition differed for the two disorders. The investigation used an experimental task that included the same facial images in an emotion recognition condition and an age recognition condition (to control for processes associated with general face recognition) in 27 schizophrenia patients, 16 bipolar I patients, and 30 controls. Schizophrenia and bipolar patients exhibited both shared and distinct aspects of facial emotion recognition deficits. Schizophrenia patients had deficits in recognizing angry facial expressions compared to healthy controls and bipolar patients. Compared to control participants, both schizophrenia and bipolar patients were more likely to mislabel facial expressions of anger as fear. Given that schizophrenia patients exhibited a deficit in emotion recognition for angry faces, which did not appear due to generalized perceptual and cognitive dysfunction, improving recognition of threat-related expression may be an important intervention target to improve social functioning in schizophrenia. PMID:23218816

  3. An event-related brain potential study of explicit face recognition.

    Science.gov (United States)

    Gosling, Angela; Eimer, Martin

    2011-07-01

    To determine the time course of face recognition and its links to face-sensitive event-related potential (ERP) components, ERPs elicited by faces of famous individuals and ERPs to non-famous control faces were compared in a task that required explicit judgements of facial identity. As expected, the face-selective N170 component was unaffected by the difference between famous and non-famous faces. In contrast, the occipito-temporal N250 component was linked to face recognition, as it was selectively triggered by famous faces. Importantly, this component was present for famous faces that were judged to be definitely known relative to famous faces that just appeared familiar, demonstrating that it is associated with the explicit identification of a particular face. The N250 is likely to reflect early perceptual stages of face recognition where long-term memory traces of familiar faces in ventral visual cortex are activated by matching on-line face representations. Famous faces also triggered a broadly distributed longer-latency positivity (P600f) that showed a left-hemisphere bias and was larger for definitely known faces, suggesting links between this component and name generation. These results show that successful face recognition is predicted by ERP components over face-specific visual areas that emerge within 230 ms after stimulus onset.

  4. Automatic recognition of facial movement for paralyzed face.

    Science.gov (United States)

    Wang, Ting; Dong, Junyu; Sun, Xin; Zhang, Shu; Wang, Shengke

    2014-01-01

    Facial nerve paralysis is a common disease due to nerve damage. Most approaches for evaluating the degree of facial paralysis rely on a set of different facial movements as commanded by doctors. Therefore, automatic recognition of the patterns of facial movement is fundamental to the evaluation of the degree of facial paralysis. In this paper, a novel method named Active Shape Models plus Local Binary Patterns (ASMLBP) is presented for recognizing facial movement patterns. Firstly, the Active Shape Models (ASMs) are used in the method to locate facial key points. According to these points, the face is divided into eight local regions. Then the descriptors of these regions are extracted by using Local Binary Patterns (LBP) to recognize the patterns of facial movement. The proposed ASMLBP method is tested on both the collected facial paralysis database with 57 patients and another publicly available database named the Japanese Female Facial Expression (JAFFE). Experimental results demonstrate that the proposed method is efficient for both paralyzed and normal faces.
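
    A rough sketch of the region-wise LBP descriptor is given below. The ASM landmarking step is omitted, and the eight regions are approximated here by a fixed grid, which is an assumption rather than the paper's ASM-defined regions.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def regional_lbp_histograms(gray_face, grid=(2, 4), P=8, R=1.0):
    """Split the face into grid cells (a stand-in for ASM-defined regions),
    compute a uniform LBP histogram per cell and concatenate them."""
    lbp = local_binary_pattern(gray_face, P, R, method="uniform")
    n_bins = P + 2                        # P+1 uniform codes plus one non-uniform bin
    h, w = lbp.shape
    gh, gw = grid
    hists = []
    for i in range(gh):
        for j in range(gw):
            cell = lbp[i*h//gh:(i+1)*h//gh, j*w//gw:(j+1)*w//gw]
            hist, _ = np.histogram(cell, bins=n_bins, range=(0, n_bins), density=True)
            hists.append(hist)
    return np.concatenate(hists)
```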

  5. Supervised orthogonal discriminant subspace projects learning for face recognition.

    Science.gov (United States)

    Chen, Yu; Xu, Xiao-Hong

    2014-02-01

    In this paper, a new linear dimension reduction method called supervised orthogonal discriminant subspace projection (SODSP) is proposed, which addresses high-dimensionality of data and the small sample size problem. More specifically, given a set of data points in the ambient space, a novel weight matrix that describes the relationship between the data points is first built. And in order to model the manifold structure, the class information is incorporated into the weight matrix. Based on the novel weight matrix, the local scatter matrix as well as non-local scatter matrix is defined such that the neighborhood structure can be preserved. In order to enhance the recognition ability, we impose an orthogonal constraint into a graph-based maximum margin analysis, seeking to find a projection that maximizes the difference, rather than the ratio between the non-local scatter and the local scatter. In this way, SODSP naturally avoids the singularity problem. Further, we develop an efficient and stable algorithm for implementing SODSP, especially, on high-dimensional data set. Moreover, the theoretical analysis shows that LPP is a special instance of SODSP by imposing some constraints. Experiments on the ORL, Yale, Extended Yale face database B and FERET face database are performed to test and evaluate the proposed algorithm. The results demonstrate the effectiveness of SODSP.

  6. Face recognition system for set-top box-based intelligent TV

    National Research Council Canada - National Science Library

    Lee, Won Oh; Kim, Yeong Gon; Hong, Hyung Gil; Park, Kang Ryoung

    2014-01-01

    .... Because of this, negligible research has been conducted regarding face recognition for conventional TVs with supplementary STBs, even though many such studies have been conducted with smart TVs...

  7. Face Recognition Based on Support Vector Machine and Nearest Neighbor Classifier

    Institute of Scientific and Technical Information of China (English)

    张燕昆; 刘重庆

    2003-01-01

    Support vector machine (SVM), as a novel approach in pattern recognition, has demonstrated success in face detection and face recognition. In this paper, a face recognition approach based on the SVM classifier combined with the nearest neighbor classifier (NNC) is proposed. Principal component analysis (PCA) is used to reduce the dimension and extract features. Then a one-against-all strategy is used to train the SVM classifiers. At the testing stage, we propose an algorithm that combines the SVM classifier with the NNC to improve the correct recognition rate. We conduct the experiment on the Cambridge ORL face database. The results show that our approach outperforms the standard eigenface approach and some other approaches.
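
    A hedged sketch of the general pipeline follows: PCA features, a one-vs-all SVM, and a nearest-neighbour fallback when the SVM decision is not confident. The margin threshold and the fallback rule are assumptions, not the paper's exact combination scheme, and more than two identities are assumed.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

class SvmNnFace:
    def __init__(self, n_components=50, margin_threshold=0.2):
        self.pca = PCA(n_components=n_components)
        self.svm = SVC(kernel="linear", decision_function_shape="ovr")
        self.nn = KNeighborsClassifier(n_neighbors=1)
        self.margin_threshold = margin_threshold

    def fit(self, X, y):
        Z = self.pca.fit_transform(X)
        self.svm.fit(Z, y)
        self.nn.fit(Z, y)
        return self

    def predict(self, X):
        Z = self.pca.transform(X)
        scores = self.svm.decision_function(Z)           # one-vs-all scores
        top2 = np.sort(scores, axis=1)[:, -2:]
        confident = (top2[:, 1] - top2[:, 0]) > self.margin_threshold
        pred = self.svm.predict(Z)
        # fall back to the nearest-neighbour label on low-margin samples
        if (~confident).any():
            pred[~confident] = self.nn.predict(Z[~confident])
        return pred
```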

  8. Face recognition based on matching of local features on 3D dynamic range sequences

    Science.gov (United States)

    Echeagaray-Patrón, B. A.; Kober, Vitaly

    2016-09-01

    3D face recognition has attracted attention in the last decade due to improvement of technology of 3D image acquisition and its wide range of applications such as access control, surveillance, human-computer interaction and biometric identification systems. Most research on 3D face recognition has focused on analysis of 3D still data. In this work, a new method for face recognition using dynamic 3D range sequences is proposed. Experimental results are presented and discussed using 3D sequences in the presence of pose variation. The performance of the proposed method is compared with that of conventional face recognition algorithms based on descriptors.

  9. Real-world face recognition: the importance of surface reflectance properties.

    Science.gov (United States)

    Russell, Richard; Sinha, Pawan

    2007-01-01

    The face recognition task we perform most often in everyday experience is the identification of people with whom we are familiar. However, because of logistical challenges, most studies focus on unfamiliar-face recognition, wherein subjects are asked to match or remember images of unfamiliar people's faces. Here we explore the importance of two facial attributes--shape and surface reflectance--in the context of a familiar-face recognition task. In our experiment, subjects were asked to recognise color images of the faces of their friends. The images were manipulated such that only reflectance or only shape information was useful for recognizing any particular face. Subjects were actually better at recognizing their friends' faces from reflectance information than from shape information. This provides evidence that reflectance information is important for face recognition in ecologically relevant contexts.

  10. Designing of Medium-Size Humanoid Robot with Face Recognition Features

    Directory of Open Access Journals (Sweden)

    Christian Tarunajaya

    2016-02-01

    Full Text Available Nowadays, there have been many developments of robots that can receive commands and perform speech recognition and face recognition. In this research, we develop a humanoid robot system with a controller based on a Raspberry Pi 2. The methods we used are based on audio recognition and detection, and face recognition using PCA (Principal Component Analysis) with OpenCV and Python. PCA is one of the algorithms used for face recognition; it reduces the number of dimensions of the image. The results of this reduction process, known as eigenfaces, are then used in the face recognition process. In this research, we still find some false recognitions. These can be caused by many things, such as the database condition (images that are too dark or not varied enough), blurred test images, etc. The accuracy over 3 tests on different people is about 93% (28 correct recognitions out of 30).
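
    For context, OpenCV's contrib module exposes an eigenface recognizer that covers the PCA step described above; the sketch below is a generic usage example (not the authors' code), assumes equally sized grayscale face crops, and requires the opencv-contrib-python package for the cv2.face module.

```python
# requires opencv-contrib-python for the cv2.face module
import cv2
import numpy as np

def train_eigenface_recognizer(face_images, labels, num_components=50):
    """face_images: list of equally sized grayscale face crops (uint8 arrays);
    labels: integer identity per image."""
    model = cv2.face.EigenFaceRecognizer_create(num_components)
    model.train(face_images, np.asarray(labels, dtype=np.int32))
    return model

def recognize(model, face_image):
    label, distance = model.predict(face_image)   # smaller distance = closer match
    return label, distance
```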

  11. Kernel Learning of Histogram of Local Gabor Phase Patterns for Face Recognition

    Directory of Open Access Journals (Sweden)

    Bineng Zhong

    2008-06-01

    Full Text Available This paper proposes a new face recognition method, named kernel learning of histogram of local Gabor phase pattern (K-HLGPP), which is based on Daugman’s method for iris recognition and the local XOR pattern (LXP) operator. Unlike traditional Gabor usage exploiting the magnitude part in face recognition, we encode the Gabor phase information for face classification by the quadrant bit coding (QBC) method. Two schemes are proposed for face recognition. One is based on the nearest-neighbor classifier with chi-square as the similarity measurement, and the other makes kernel discriminant analysis for HLGPP (K-HLGPP) using histogram intersection and Gaussian-weighted chi-square kernels. The comparative experiments show that K-HLGPP achieves a higher recognition rate than other well-known face recognition systems on the large-scale standard FERET, FERET200, and CAS-PEAL-R1 databases.

  12. Face recognition ability matures late: evidence from individual differences in young adults.

    Science.gov (United States)

    Susilo, Tirta; Germine, Laura; Duchaine, Bradley

    2013-10-01

    Does face recognition ability mature early in childhood (early maturation hypothesis) or does it continue to develop well into adulthood (late maturation hypothesis)? This fundamental issue in face recognition is typically addressed by comparing child and adult participants. However, the interpretation of such studies is complicated by children's inferior test-taking abilities and general cognitive functions. Here we examined the developmental trajectory of face recognition ability in an individual differences study of 18-33 year-olds (n = 2,032), an age interval in which participants are competent test takers with comparable general cognitive functions. We found a positive association between age and face recognition, controlling for nonface visual recognition, verbal memory, sex, and own-race bias. Our study supports the late maturation hypothesis in face recognition, and illustrates how individual differences investigations of young adults can address theoretical issues concerning the development of perceptual and cognitive abilities.

  13. Haar-like Features for Robust Real-Time Face Recognition

    DEFF Research Database (Denmark)

    Nasrollahi, Kamal; Moeslund, Thomas B.

    2013-01-01

    Face recognition is still a very challenging task when the input face image is noisy, occluded by some obstacles, of very low resolution, not facing the camera, or not properly illuminated. These problems make the feature extraction and consequently the face recognition system unstable... The proposed system in this paper introduces the novel idea of using Haar-like features, which have commonly been used for object detection, along with a probabilistic classifier for face recognition. The proposed system is simple, real-time, effective and robust against most of the mentioned problems...
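
    Haar-like features are simple rectangle-difference responses computed cheaply from an integral image; the sketch below shows one classic two-rectangle feature (the probabilistic classifier of the paper is not shown, and the rectangle placement is an arbitrary example).

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero row/column prepended for easy lookups."""
    ii = np.cumsum(np.cumsum(img.astype(np.float64), axis=0), axis=1)
    return np.pad(ii, ((1, 0), (1, 0)))

def rect_sum(ii, r, c, h, w):
    """Sum of pixels in the h-by-w rectangle whose top-left corner is (r, c)."""
    return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

def haar_two_rect_horizontal(ii, r, c, h, w):
    """Classic two-rectangle Haar-like feature: left half minus right half."""
    half = w // 2
    return rect_sum(ii, r, c, h, half) - rect_sum(ii, r, c + half, h, half)

# usage: ii = integral_image(gray_face); f = haar_two_rect_horizontal(ii, 10, 10, 8, 16)
```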

  14. An MPCA/LDA Based Dimensionality Reduction Algorithm for Face Recognition

    Directory of Open Access Journals (Sweden)

    Jun Huang

    2014-01-01

    Full Text Available We propose a face recognition algorithm based on both multilinear principal component analysis (MPCA) and linear discriminant analysis (LDA). Compared with existing traditional face recognition methods, our approach treats face images as multidimensional tensors in order to find the optimal tensor subspace for dimension reduction. LDA is used to project samples into a new discriminant feature space, while the K nearest neighbor (KNN) classifier is adopted for sample set classification. The developed algorithm is validated on the face databases ORL, FERET, and YALE and compared with the PCA, MPCA, and PCA + LDA methods, which demonstrates an improvement in face recognition accuracy.

  15. New Fuzzy-based Retinex Method for the Illumination Normalization of Face Recognition

    Directory of Open Access Journals (Sweden)

    Gi Pyo Nam

    2012-10-01

    Full Text Available We propose a new illumination normalization method for face recognition that is robust to the illumination variations encountered on mobile devices. This research is novel in the following five ways compared to previous works: (i) a new fuzzy-based Retinex method is proposed for illumination normalization; (ii) the performance of face recognition is enhanced by determining the optimal parameter of Retinex filtering based on fuzzy logic; (iii) the output of the fuzzy membership function is adaptively determined based on the mean and standard deviation of the grey values of the detected face region; (iv) through the comparison of various defuzzification methods in terms of face recognition accuracy, one optimal method was selected; (v) we demonstrated the validity of the proposed method by testing it with various face recognition methods. Experimental results showed that the accuracy of face recognition with the proposed method was enhanced compared to previous ones.
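
    For context, a plain single-scale Retinex normalization is sketched below. The fuzzy selection of the Gaussian surround scale is replaced here by a crude heuristic derived from the face region's grey-level statistics, which is an assumption and not the paper's fuzzy-logic rule.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(gray_face, sigma=None):
    """Illumination normalization: log(image) - log(Gaussian-smoothed image).
    The surround scale sigma is the parameter the paper tunes with fuzzy logic;
    here it is crudely derived from the face region's intensity statistics."""
    img = gray_face.astype(np.float64) + 1.0          # avoid log(0)
    if sigma is None:
        sigma = max(5.0, img.std() / 4.0)             # placeholder heuristic
    surround = gaussian_filter(img, sigma)
    retinex = np.log(img) - np.log(surround + 1.0)
    # rescale to 0..255 for downstream recognition
    retinex = (retinex - retinex.min()) / (np.ptp(retinex) + 1e-12)
    return (255 * retinex).astype(np.uint8)
```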

  16. The Cambridge Face Memory Test for Children (CFMT-C): a new tool for measuring face recognition skills in childhood.

    Science.gov (United States)

    Croydon, Abigail; Pimperton, Hannah; Ewing, Louise; Duchaine, Brad C; Pellicano, Elizabeth

    2014-09-01

    Face recognition ability follows a lengthy developmental course, not reaching maturity until well into adulthood. Valid and reliable assessments of face recognition memory ability are necessary to examine patterns of ability and disability in face processing, yet there is a dearth of such assessments for children. We modified a well-known test of face memory in adults, the Cambridge Face Memory Test (Duchaine & Nakayama, 2006, Neuropsychologia, 44, 576-585), to make it developmentally appropriate for children. To establish its utility, we administered either the upright or inverted versions of the computerised Cambridge Face Memory Test - Children (CFMT-C) to 401 children aged between 5 and 12 years. Our results show that the CFMT-C is sufficiently sensitive to demonstrate age-related gains in the recognition of unfamiliar upright and inverted faces, does not suffer from ceiling or floor effects, generates robust inversion effects, and is capable of detecting difficulties in face memory in children diagnosed with autism. Together, these findings indicate that the CFMT-C constitutes a new valid assessment tool for children's face recognition skills.

  17. 3D Face Recognition Benchmarks on the Bosphorus Database with Focus on Facial Expressions

    NARCIS (Netherlands)

    N. Alyuz; B. Gökberk; H. Dibeklioğlu; A. Savran; A.A. Salah (Albert Ali); L. Akarun; B. Sankur

    2008-01-01

    This paper presents an evaluation of several 3D face recognizers on the Bosphorus database, which was gathered for studies on expression and pose invariant face analysis. We provide identification results of three 3D face recognition algorithms, namely generic face template based ICP

  18. Orientation and Affective Expression Effects on Face Recognition in Williams Syndrome and Autism

    Science.gov (United States)

    Rose, Fredric E.; Lincoln, Alan J.; Lai, Zona; Ene, Michaela; Searcy, Yvonne M.; Bellugi, Ursula

    2007-01-01

    We sought to clarify the nature of the face processing strength commonly observed in individuals with Williams syndrome (WS) by comparing the face recognition ability of persons with WS to that of persons with autism and to healthy controls under three conditions: Upright faces with neutral expressions, upright faces with varying affective…

  19. The effect of gaze direction on three-dimensional face recognition in infants.

    Science.gov (United States)

    Yamashita, Wakayo; Kanazawa, So; Yamaguchi, Masami K

    2012-09-01

    Eye gaze is an important tool for social contact. In this study, we investigated whether direct gaze facilitates the recognition of three-dimensional face images in infants. We presented artificially produced face images in rotation to 6- to 8-month-old infants. The eye gaze of the face images was either direct or averted. Sixty-one sequential images of each face were created by rotating the vertical axis of the face from frontal view to ± 30°. The recognition performances of the infants were then compared between faces with direct gaze and faces with averted gaze. Infants showed evidence that they were able to discriminate the novel face from the familiarized face by 8 months of age, and only when gaze was direct. These results suggest that gaze direction may affect three-dimensional face recognition in infants.

  20. Extraction and Recognition of Nonlinear Interval-Type Features Using Symbolic KDA Algorithm with Application to Face Recognition

    Directory of Open Access Journals (Sweden)

    P. S. Hiremath

    2008-01-01

    recognition in the framework of symbolic data analysis. Classical KDA extracts features, which are single-valued in nature to represent face images. These single-valued variables may not be able to capture variation of each feature in all the images of same subject; this leads to loss of information. The symbolic KDA algorithm extracts most discriminating nonlinear interval-type features which optimally discriminate among the classes represented in the training set. The proposed method has been successfully tested for face recognition using two databases, ORL database and Yale face database. The effectiveness of the proposed method is shown in terms of comparative performance against popular face recognition methods such as kernel Eigenface method and kernel Fisherface method. Experimental results show that symbolic KDA yields improved recognition rate.

  1. Italian normative data and validation of two neuropsychological tests of face recognition: Benton Facial Recognition Test and Cambridge Face Memory Test.

    Science.gov (United States)

    Albonico, Andrea; Malaspina, Manuela; Daini, Roberta

    2017-06-21

    The Benton Facial Recognition Test (BFRT) and Cambridge Face Memory Test (CFMT) are two of the most common tests used to assess face discrimination and recognition abilities and to identify individuals with prosopagnosia. However, recent studies highlighted that the participant-stimulus ethnicity match, as well as gender, has to be taken into account in interpreting results from these tests. Here, in order to obtain more appropriate normative data for an Italian sample, the CFMT and BFRT were administered to a large cohort of young adults. We found that scores from the BFRT are not affected by participants' gender and are only slightly affected by participant-stimulus ethnicity match, whereas both these factors seem to influence the scores of the CFMT. Moreover, the inclusion of a sample of individuals with suspected face recognition impairment allowed us to show that the use of more appropriate normative data can increase the BFRT efficacy in identifying individuals with face discrimination impairments; by contrast, the efficacy of the CFMT in classifying individuals with a face recognition deficit was confirmed. Finally, our data show that the lack of an inversion effect (the difference between the total score of the upright and inverted versions of the CFMT) could be used as a further index to assess congenital prosopagnosia. Overall, our results confirm the importance of having norms derived from controls with a similar experience of faces as the "potential" prosopagnosic individuals when assessing face recognition abilities.

  2. Integrated Detection, Tracking, and Recognition of Faces with Omnivideo Array in Intelligent Environments

    Directory of Open Access Journals (Sweden)

    Mohan M. Trivedi

    2008-04-01

    Full Text Available We present a multilevel system architecture for intelligent environments equipped with omnivideo arrays. In order to gain unobtrusive human awareness, real-time 3D human tracking as well as robust video-based face detection and tracking and face recognition algorithms are needed. We first propose a multiprimitive face detection and tracking loop to crop face videos as the front end of our face recognition algorithm. Both skin-tone and elliptical detections are used for robust face searching, and view-based face classification is applied to the candidates before updating the Kalman filters for face tracking. For video-based face recognition, we propose three decision rules on the facial video segments. The majority rule and discrete HMM (DHMM) rule accumulate single-frame face recognition results, while continuous density HMM (CDHMM) works directly with the PCA facial features of the video segment for accumulated maximum likelihood (ML) decision. The experiments demonstrate the robustness of the proposed face detection and tracking scheme and the three streaming face recognition schemes with 99% accuracy of the CDHMM rule. We then experiment on the system interactions with single person and group people by the integrated layers of activity awareness. We also discuss the speech-aided incremental learning of new faces.

  3. Integrated Detection, Tracking, and Recognition of Faces with Omnivideo Array in Intelligent Environments

    Directory of Open Access Journals (Sweden)

    Huang, Kohsia S.

    2008-01-01

    Full Text Available We present a multilevel system architecture for intelligent environments equipped with omnivideo arrays. In order to gain unobtrusive human awareness, real-time 3D human tracking as well as robust video-based face detection and tracking and face recognition algorithms are needed. We first propose a multiprimitive face detection and tracking loop to crop face videos as the front end of our face recognition algorithm. Both skin-tone and elliptical detections are used for robust face searching, and view-based face classification is applied to the candidates before updating the Kalman filters for face tracking. For video-based face recognition, we propose three decision rules on the facial video segments. The majority rule and discrete HMM (DHMM) rule accumulate single-frame face recognition results, while continuous density HMM (CDHMM) works directly with the PCA facial features of the video segment for accumulated maximum likelihood (ML) decision. The experiments demonstrate the robustness of the proposed face detection and tracking scheme and the three streaming face recognition schemes with 99% accuracy of the CDHMM rule. We then experiment on the system interactions with single person and group people by the integrated layers of activity awareness. We also discuss the speech-aided incremental learning of new faces.

  4. An Efficient Hybrid Face Recognition Algorithm Using PCA and GABOR Wavelets

    Directory of Open Access Journals (Sweden)

    Hyunjong Cho

    2014-04-01

    Full Text Available With the rapid development of computers and the increasing mass use of high-tech mobile devices, vision-based face recognition has advanced significantly. However, it is hard to conclude that the performance of computers surpasses that of humans, as humans have generally exhibited better performance in challenging situations involving occlusion or variations. Motivated by the recognition method of humans, who utilize both holistic and local features, we present a computationally efficient hybrid face recognition method that employs dual-stage holistic and local feature-based recognition algorithms. In the first coarse recognition stage, the proposed algorithm utilizes Principal Component Analysis (PCA) to identify a test image. The recognition ends at this stage if the confidence level of the result turns out to be reliable. Otherwise, the algorithm uses this result for filtering out top candidate images with a high degree of similarity, and passes them to the next fine recognition stage where Gabor filters are employed. As is well known, recognizing a face image with Gabor filters is a computationally heavy task. The contribution of our work is in proposing a flexible dual-stage algorithm that enables fast, hybrid face recognition. Experimental tests were performed with the Extended Yale Face Database B to verify the effectiveness and validity of the research, and we obtained better recognition results under illumination variations not only in terms of computation time but also in terms of the recognition rate in comparison to PCA- and Gabor wavelet-based recognition algorithms.
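
    The coarse-to-fine decision logic can be sketched as follows. The PCA and Gabor feature matrices are assumed to be precomputed (e.g., with the Gabor bank shown earlier in this section), and the confidence ratio and shortlist size are assumptions rather than the paper's thresholds.

```python
import numpy as np

def dual_stage_identify(gallery_pca, gallery_gabor, labels,
                        probe_pca, probe_gabor, conf_ratio=0.8, top_k=5):
    """Coarse-to-fine matching: accept the PCA nearest neighbour when it is
    clearly better than the runner-up, otherwise re-rank the top-k candidates
    with the (precomputed) Gabor features."""
    d_pca = np.linalg.norm(gallery_pca - probe_pca, axis=1)
    order = np.argsort(d_pca)
    best, second = d_pca[order[0]], d_pca[order[1]]
    if best / (second + 1e-12) < conf_ratio:          # confident coarse decision
        return labels[order[0]]
    shortlist = order[:top_k]                          # fine stage on candidates only
    d_gabor = np.linalg.norm(gallery_gabor[shortlist] - probe_gabor, axis=1)
    return labels[shortlist[np.argmin(d_gabor)]]
```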

  6. 4D Unconstrained Real-time Face Recognition Using a Commodity Depth Camera

    NARCIS (Netherlands)

    Schimbinschi, Florin; Wiering, Marco; Mohan, R.E.; Sheba, J.K.

    2012-01-01

    Robust unconstrained real-time face recognition still remains a challenge today. The recent addition to the market of lightweight commodity depth sensors brings new possibilities for human-machine interaction and therefore face recognition. This article accompanies the reader through a succinct surv

  7. Designing a Low-Resolution Face Recognition System for Long-Range Surveillance

    NARCIS (Netherlands)

    Peng, Y.; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.

    2016-01-01

    Most face recognition systems deal well with high-resolution facial images, but perform much worse on low-resolution facial images. In low-resolution face recognition, there is a specific but realistic surveillance scenario: a surveillance camera monitoring a large area. In this scenario, usually

  8. Predicting performance of a face recognition system based on image quality

    NARCIS (Netherlands)

    Dutta, Abhishek

    2015-01-01

    In this dissertation, we present a generative model to capture the relation between facial image quality features (like pose, illumination direction, etc) and face recognition performance. Such a model can be used to predict the performance of a face recognition system. Since the model is based sole

  9. Development of Face Recognition in 5- to 15-Year-Olds

    Science.gov (United States)

    Kinnunen, Suna; Korkman, Marit; Laasonen, Marja; Lahti-Nuuttila, Pekka

    2013-01-01

    This study focuses on the development of face recognition in typically developing preschool- and school-aged children (aged 5 to 15 years old, "n" = 611, 336 girls). Social predictors include sex differences and own-sex bias. At younger ages, the development of face recognition was rapid and became more gradual as the age increased up…

  10. A novel pose and illumination robust face recognition with a single training image per person algorithm

    Institute of Scientific and Technical Information of China (English)

    Junbao Li; Jeng-Shyang Pan

    2008-01-01

    In the real-world application of face recognition systems, owing to the difficulty of collecting samples or the storage space of systems, only one sample image per person is stored in the system; this is the so-called one sample per person problem. Moreover, pose and illumination have an impact on recognition performance. We propose a novel pose- and illumination-robust algorithm for face recognition with a single training image per person to address the above limitations. Experimental results show that the proposed algorithm is an efficient and practical approach for face recognition.

  11. A Collaborative Neighbor Representation Based Face Recognition Algorithm

    Directory of Open Access Journals (Sweden)

    Zhengming Li

    2013-01-01

    Full Text Available We propose a new collaborative neighbor representation algorithm for face recognition based on a revised regularized reconstruction error (RRRE), called the two-phase collaborative neighbor representation algorithm (TCNR). Specifically, the RRRE is the l2-norm of the reconstruction error of each class divided by a linear combination of the l2-norms of the reconstruction coefficients of that class, which can be used to increase the discrimination information for classification. The algorithm is as follows: in the first phase, the test sample is represented as a linear combination of all the training samples by incorporating the neighbor information into the objective function. In the second phase, we use the k classes to represent the test sample and calculate the collaborative neighbor representation coefficients. TCNR not only preserves the locality and similarity information of sparse coding but also eliminates the side effect on the classification decision of classes that are far from the test sample. Moreover, the rationale and an alternative scheme of TCNR are given. The experimental results show that the TCNR algorithm achieves better performance than seven previous algorithms.
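
    A simplified two-phase collaborative representation, in the spirit of the algorithm described above, can be sketched with ridge-regularized coding. The neighbor weighting and the exact RRRE criterion of the paper are omitted; the class score used below (residual scaled by coefficient energy) and the regularization value are assumptions.

```python
# Two-phase collaborative representation sketch: code with all classes,
# keep the k most promising classes, then re-code and classify.
import numpy as np

def two_phase_cr(X, labels, y, k=5, lam=1e-2):
    """X: (d, n) column-stacked training samples, labels: (n,) numpy array, y: (d,)."""
    def represent(A):
        # ridge-regularized collaborative coding: min ||y - A a||^2 + lam ||a||^2
        return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)

    a = represent(X)
    classes = np.unique(labels)
    # class-wise score: reconstruction error scaled by coefficient energy
    scores = []
    for c in classes:
        idx = labels == c
        err = np.linalg.norm(y - X[:, idx] @ a[idx])
        scores.append(err / (np.linalg.norm(a[idx]) + 1e-12))
    keep = classes[np.argsort(scores)[:k]]          # k most promising classes

    mask = np.isin(labels, keep)
    b = represent(X[:, mask])
    sub_labels = labels[mask]
    residuals = {c: np.linalg.norm(y - X[:, mask][:, sub_labels == c] @ b[sub_labels == c])
                 for c in keep}
    return min(residuals, key=residuals.get)
```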

  12. Local uncorrelated local discriminant embedding for face recognition

    Institute of Scientific and Technical Information of China (English)

    Xiao-hu MA; Meng YANG; Zhao ZHANG

    2016-01-01

    The feature extraction algorithm plays an important role in face recognition. However, the extracted features also have overlapping discriminant information. A property of the statistical uncorrelated criterion is that it eliminates the redundancy among the extracted discriminant features, while many algorithms generally ignore this property. In this paper, we introduce a novel feature extraction method called local uncorrelated local discriminant embedding (LULDE). The proposed approach can be seen as an extension of a local discriminant embedding (LDE) framework in three ways. First, a new local statistical uncorrelated criterion is proposed, which effectively captures the local information of interclass and intraclass. Second, we reconstruct the affinity matrices of an intrinsic graph and a penalty graph, which are mentioned in LDE to enhance the discriminant property. Finally, it overcomes the small-sample-size problem without using principal component analysis to preprocess the original data, which avoids losing some discriminant information. Experimental results on Yale, ORL, Extended Yale B, and FERET databases demonstrate that LULDE outperforms LDE and other representative uncorrelated feature extraction methods.

  13. Face recognition: a convolutional neural-network approach.

    Science.gov (United States)

    Lawrence, S; Giles, C L; Tsoi, A C; Back, A D

    1997-01-01

    We present a hybrid neural-network for human face recognition which compares favourably with other methods. The system combines local image sampling, a self-organizing map (SOM) neural network, and a convolutional neural network. The SOM provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, and the convolutional neural network provides partial invariance to translation, rotation, scale, and deformation. The convolutional network extracts successively larger features in a hierarchical set of layers. We present results using the Karhunen-Loeve transform in place of the SOM, and a multilayer perceptron (MLP) in place of the convolutional network for comparison. We use a database of 400 images of 40 individuals which contains quite a high degree of variability in expression, pose, and facial details. We analyze the computational complexity and discuss how new classes could be added to the trained recognizer.

  14. Face recognition in age related macular degeneration: perceived disability, measured disability, and performance with a bioptic device

    OpenAIRE

    Tejeria, L; Harper, R. A.; Artes, P H; Dickinson, C M

    2002-01-01

    Aims: (1) To explore the relation between performance on tasks of familiar face recognition (FFR) and face expression difference discrimination (FED) with both perceived disability in face recognition and clinical measures of visual function in subjects with age related macular degeneration (AMD). (2) To quantify the gain in performance for face recognition tasks when subjects use a bioptic telescopic low vision device.

  15. On the relation between face and object recognition in developmental prosopagnosia

    DEFF Research Database (Denmark)

    Gerlach, Christian; Klargaard, Solja; Starrfelt, Randi

    2016-01-01

    There is an ongoing debate about whether face recognition and object recognition constitute separate cognitive domains. Clarification of this issue can have important theoretical consequences as face recognition is often used as a prime example of domain-specificity in mind and brain. An important...... of visual object processing involving both regular and degraded drawings. None of the individuals exhibited a dissociation between face and object recognition, and as a group they were significantly more affected by degradation of objects than control participants. Importantly, we also find positive...... correlations between the severity of the face recognition impairment and the degree of impaired performance with degraded objects. This suggests that the face and object deficits are systematically related rather than coincidental. We conclude that at present, there is no strong evidence in the literature...

  16. Social and attention-to-detail subclusters of autistic traits differentially predict looking at eyes and face identity recognition ability.

    Science.gov (United States)

    Davis, Joshua; McKone, Elinor; Zirnsak, Marc; Moore, Tirin; O'Kearney, Richard; Apthorp, Deborah; Palermo, Romina

    2017-02-01

    This study distinguished between different subclusters of autistic traits in the general population and examined the relationships between these subclusters, looking at the eyes of faces, and the ability to recognize facial identity. Using the Autism Spectrum Quotient (AQ) measure in a university-recruited sample, we separate the social aspects of autistic traits (i.e., those related to communication and social interaction; AQ-Social) from the non-social aspects, particularly attention-to-detail (AQ-Attention). We provide the first evidence that these social and non-social aspects are associated differentially with looking at eyes: While AQ-Social showed the commonly assumed tendency towards reduced looking at eyes, AQ-Attention was associated with increased looking at eyes. We also report that higher attention-to-detail (AQ-Attention) was then indirectly related to improved face recognition, mediated by increased number of fixations to the eyes during face learning. Higher levels of socially relevant autistic traits (AQ-Social) trended in the opposite direction towards being related to poorer face recognition (significantly so in females on the Cambridge Face Memory Test). There was no evidence of any mediated relationship between AQ-Social and face recognition via reduced looking at the eyes. These different effects of AQ-Attention and AQ-Social suggest face-processing studies in Autism Spectrum Disorder might similarly benefit from considering symptom subclusters. Additionally, concerning mechanisms of face recognition, our results support the view that more looking at eyes predicts better face memory.

  17. The social face of emotion recognition: Evaluations versus stereotypes

    NARCIS (Netherlands)

    Bijlstra, G.; Holland, R.W.; Wigboldus, D.H.J.

    2010-01-01

    The goal of the present paper was to demonstrate the influence of general evaluations and stereotype associations on emotion recognition. Earlier research has shown that evaluative connotations between social category members and emotional expression predict whether recognition of positive or

  18. Size determines whether specialized expert processes are engaged for recognition of faces.

    Science.gov (United States)

    Yang, Nan; Shafai, Fakhri; Oruc, Ipek

    2014-07-22

    Many influential models of face recognition postulate specialized expert processes that are engaged when viewing upright, own-race faces, as opposed to a general-purpose recognition route used for nonface objects and inverted or other-race faces. In contrast, others have argued that empirical differences do not stem from qualitatively distinct processing. We offer a potential resolution to this ongoing controversy. We hypothesize that faces engage specialized processes at large sizes only. To test this, we measured recognition efficiencies for a wide range of sizes. Upright face recognition efficiency increased with size. This was not due to better visibility of basic image features at large sizes. We ensured this by calculating efficiency relative to a specialized ideal observer unique to each individual that incorporated size-related changes in visibility and by measuring inverted efficiencies across the same range of face sizes. Inverted face recognition efficiencies did not change with size. A qualitative face inversion effect, defined as the ratio of relative upright and inverted efficiencies, showed a complete lack of inversion effects for small sizes up to 6°. In contrast, significant face inversion effects were found for all larger sizes. Size effects may stem from predominance of larger faces in the overall exposure to faces, which occur at closer viewing distances typical of social interaction. Our results offer a potential explanation for the contradictory findings in the literature regarding the special status of faces.

  19. An Efficient Face Recognition System Based On the Hybridization of Pose Invariant and Illumination Process

    Directory of Open Access Journals (Sweden)

    S. Muruganantham

    2012-07-01

    Full Text Available Over the past decade, human face recognition has been one of the most effective applications of image analysis and understanding, and it has attracted significant attention. Face recognition is one of several techniques used for identifying an individual. Normally, image variations caused by a change in face identity are smaller than the variations between images of the same face under different illumination and viewing angles. Among the factors that affect face recognition, illumination and pose are the two major challenges, and variations in both severely degrade recognition performance. While several algorithms have been proposed for face recognition from fixed viewpoints, considerably less effort has been devoted to the problem of joint variations of pose and illumination. In this paper we propose a face recognition method that is robust to pose and illumination variations. We first put forward a simple pose estimation method based on 2D images, which uses an appropriate classification rule and image representation to classify the pose of a face image: the image is assigned to a pose class by a classification rule in a low-dimensional subspace constructed by a feature extraction method. We then offer a shadow compensation method that compensates for illumination variation in a face image so that the image can be recognized by a face recognition system designed for images under normal illumination conditions. The experimental results show that the proposed technique, based on this hybridization, recognizes face images effectively.

  20. Face Recognition Based Door Lock System Using Opencv and C# with Remote Access and Security Features

    Directory of Open Access Journals (Sweden)

    Prathamesh Timse

    2014-04-01

    Full Text Available This paper investigates the accuracy and effectiveness of face detection and recognition algorithms using OpenCV and the C# language. The AdaBoost algorithm [2] is used for face detection and the PCA algorithm [1] is used for face recognition. This paper also investigates the robustness of the face recognition system when an unknown person is detected, in which case the system sends an email to the owner of the system using SMTP [7]. The door lock can also be accessed remotely from any part of the world by using a Dropbox [8] account.
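
    The detection/recognition/alert pipeline described above can be sketched roughly as below. This is a minimal sketch, not the paper's C# implementation: it assumes opencv-contrib-python (for the cv2.face module), a recognizer trained offline, and placeholder mail settings, threshold and addresses.

```python
# Haar-cascade detection + Eigenface (PCA) recognition + e-mail alert sketch.
# Cascade, threshold and SMTP settings are placeholder assumptions.
import cv2
import smtplib
from email.message import EmailMessage

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
recognizer = cv2.face.EigenFaceRecognizer_create()
# recognizer.train(list_of_gray_100x100_faces, np.array(labels))  # done offline

def check_frame(frame, threshold=4000.0):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
        face = cv2.resize(gray[y:y + h, x:x + w], (100, 100))
        label, distance = recognizer.predict(face)
        if distance < threshold:
            return label                      # known person: unlock the door
        send_alert(frame)                     # unknown person: notify the owner
    return None

def send_alert(frame):
    ok, jpg = cv2.imencode(".jpg", frame)
    msg = EmailMessage()
    msg["Subject"], msg["From"], msg["To"] = "Unknown visitor", "lock@example.com", "owner@example.com"
    msg.add_attachment(jpg.tobytes(), maintype="image", subtype="jpeg")
    with smtplib.SMTP("smtp.example.com", 587) as s:   # placeholder server
        s.starttls()
        s.send_message(msg)
```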

  1. Access Control System Based on Real-Time Face Recognition and Gender Information [Sistem Kontrol Akses Berbasis Real Time Face Recognition dan Gender Information]

    Directory of Open Access Journals (Sweden)

    Putri Nurmala

    2015-06-01

    Full Text Available Face recognition with gender information is a computer application for automatically identifying or verifying a person from face images captured by a camera. It is usually used in access control systems and can be compared with other biometrics such as fingerprint or iris identification. Many face recognition algorithms have been developed in recent years. The face recognition and gender classification in this system are based on Principal Component Analysis (PCA). This computational method is simple and fast compared with methods that require extensive learning, such as artificial neural networks. The access control system uses a relay and an Arduino controller. This work focuses on real-time face recognition and gender classification using PCA. The result of the application design is the identification of a person's face together with gender using PCA. The face recognition system using PCA achieves a success rate of 85% on face images tested with several people, a fairly high degree of accuracy.

  2. Understanding gender bias in face recognition: effects of divided attention at encoding.

    Science.gov (United States)

    Palmer, Matthew A; Brewer, Neil; Horry, Ruth

    2013-03-01

    Prior research has demonstrated a female own-gender bias in face recognition, with females better at recognizing female faces than male faces. We explored the basis for this effect by examining the effect of divided attention during encoding on females' and males' recognition of female and male faces. For female participants, divided attention impaired recognition performance for female faces to a greater extent than male faces in a face recognition paradigm (Study 1; N=113) and an eyewitness identification paradigm (Study 2; N=502). Analysis of remember-know judgments (Study 2) indicated that divided attention at encoding selectively reduced female participants' recollection of female faces at test. For male participants, divided attention selectively reduced recognition performance (and recollection) for male stimuli in Study 2, but had similar effects on recognition of male and female faces in Study 1. Overall, the results suggest that attention at encoding contributes to the female own-gender bias by facilitating the later recollection of female faces.

  3. Setting a world record in 3D face recognition

    NARCIS (Netherlands)

    Spreeuwers, Lieuwe Jan

    Biometrics - recognition of persons based on how they look or behave, is the main subject of research at the Chair of Biometric Pattern Recognition (BPR) of the Services, Cyber Security and Safety Group (SCS) of the EEMCS Faculty at the University of Twente. Examples are finger print recognition,

  4. Setting a world record in 3D face recognition

    NARCIS (Netherlands)

    Spreeuwers, Luuk

    2015-01-01

    Biometrics - recognition of persons based on how they look or behave, is the main subject of research at the Chair of Biometric Pattern Recognition (BPR) of the Services, Cyber Security and Safety Group (SCS) of the EEMCS Faculty at the University of Twente. Examples are finger print recognition, ir

  5. Face affect recognition in schizophrenia [Rozpoznawanie emocjonalnej ekspresji mimicznej przez osoby chore na schizofrenię]

    Directory of Open Access Journals (Sweden)

    Prochwicz, Katarzyna

    2012-12-01

    Full Text Available Clinical observations and the results of many experimental studies indicate that individuals suffering from schizophrenia have difficulties in recognizing the emotional states experienced by other people; however, the causes and the range of these problems have not been clearly described. Despite early research results confirming that difficulties in emotion recognition are related only to negative emotions, the results of studies conducted over the last 30 years indicate that emotion recognition problems are a manifestation of a general cognitive deficit and do not concern specific emotions. The article contains a review of the research on face affect recognition in schizophrenia. It discusses the causes of these difficulties, the differences in the accuracy of the recognition of specific emotions, the relationship between the symptoms of schizophrenia and the severity of problems with face perception, and the types of cognitive processes which influence the disturbances in face affect recognition. Particular attention was paid to the methodology of the research on face affect recognition, including the methods used in control tasks relying on the identification of neutral faces, designed to assess the range of the deficit underlying the face affect recognition problems. The analysis of methods used in particular studies revealed some weaknesses. The article also deals with the possibilities of improving the ability to recognise emotions, and briefly discusses the efficiency of emotion recognition training programs designed for patients suffering from schizophrenia.

  6. Eye tracking reveals a crucial role for facial motion in recognition of faces by infants.

    Science.gov (United States)

    Xiao, Naiqi G; Quinn, Paul C; Liu, Shaoying; Ge, Liezhong; Pascalis, Olivier; Lee, Kang

    2015-06-01

    Current knowledge about face processing in infancy comes largely from studies using static face stimuli, but faces that infants see in the real world are mostly moving ones. To bridge this gap, 3-, 6-, and 9-month-old Asian infants (N = 118) were familiarized with either moving or static Asian female faces, and then their face recognition was tested with static face images. Eye-tracking methodology was used to record eye movements during the familiarization and test phases. The results showed a developmental change in eye movement patterns, but only for the moving faces. In addition, the more infants shifted their fixations across facial regions, the better their face recognition was, but only for the moving faces. The results suggest that facial movement influences the way faces are encoded from early in development.

  7. Race, Ethnicity, and Eating Disorder Recognition by Peers

    OpenAIRE

    Sala, Margarita; Reyes-Rodríguez, Mae Lynn; Bulik, Cynthia M.; Bardone-Cone, Anna

    2013-01-01

    We investigated racial/ethnic stereotyping in the recognition and referral of eating disorders with 663 university students. We explored responses to problem and eating disorder recognition, and health care referral after reading a vignette concerning a patient of different race/ethnic background presenting with eating disorders. A series of three 4 × 3 ANOVAs revealed significant main effects for eating disorder across all three outcome variables. There were no significant main effects acros...

  8. Face Recognition Is Affected by Similarity in Spatial Frequency Range to a Greater Degree Than Within-Category Object Recognition

    Science.gov (United States)

    Collin, Charles A.; Liu, Chang Hong; Troje, Nikolaus F.; McMullen, Patricia A.; Chaudhuri, Avi

    2004-01-01

    Previous studies have suggested that face identification is more sensitive to variations in spatial frequency content than object recognition, but none have compared how sensitive the 2 processes are to variations in spatial frequency overlap (SFO). The authors tested face and object matching accuracy under varying SFO conditions. Their results…

  9. Emotion Recognition in Children with Autism Spectrum Disorders: Relations to Eye Gaze and Autonomic State

    Science.gov (United States)

    Bal, Elgiz; Harden, Emily; Lamb, Damon; Van Hecke, Amy Vaughan; Denver, John W.; Porges, Stephen W.

    2010-01-01

    Respiratory Sinus Arrhythmia (RSA), heart rate, and accuracy and latency of emotion recognition were evaluated in children with autism spectrum disorders (ASD) and typically developing children while viewing videos of faces slowly transitioning from a neutral expression to one of six basic emotions (e.g., anger, disgust, fear, happiness, sadness,…

  10. Emotion Recognition in Animated Compared to Human Stimuli in Adolescents with Autism Spectrum Disorder

    Science.gov (United States)

    Brosnan, Mark; Johnson, Hilary; Grawmeyer, Beate; Chapman, Emma; Benton, Laura

    2015-01-01

    There is equivocal evidence as to whether there is a deficit in recognising emotional expressions in Autism spectrum disorder (ASD). This study compared emotion recognition in ASD in three types of emotion expression media (still image, dynamic image, auditory) across human stimuli (e.g. photo of a human face) and animated stimuli (e.g. cartoon…

  11. Do congenital prosopagnosia and the other-race effect affect the same face recognition mechanisms?

    OpenAIRE

    Janina eEsins; Johannes eSchultz; Christian eWallraven; Isabelle eBülthoff

    2014-01-01

    Congenital prosopagnosia, an innate impairment in recognizing faces, as well as the other-race effect, a disadvantage in recognizing faces of foreign races, both affect face recognition abilities. Are the same face processing mechanisms affected in both situations? To investigate this question, we tested three groups of 21 participants: German congenital prosopagnosics, South Korean participants and German controls in three different tasks involving faces and objects. First we tested all part...

  12. Do congenital prosopagnosia and the other-race effect affect the same face recognition mechanisms?

    OpenAIRE

    Esins, Janina; Schultz, Johannes; Wallraven, Christian; Bülthoff, Isabelle

    2014-01-01

    Congenital prosopagnosia (CP), an innate impairment in recognizing faces, as well as the other-race effect (ORE), a disadvantage in recognizing faces of foreign races, both affect face recognition abilities. Are the same face processing mechanisms affected in both situations? To investigate this question, we tested three groups of 21 participants: German congenital prosopagnosics, South Korean participants and German controls on three different tasks involving faces and objects. First we test...

  13. Experience moderates overlap between object and face recognition, suggesting a common ability.

    Science.gov (United States)

    Gauthier, Isabel; McGugin, Rankin W; Richler, Jennifer J; Herzmann, Grit; Speegle, Magen; Van Gulick, Ana E

    2014-07-03

    Some research finds that face recognition is largely independent from the recognition of other objects; a specialized and innate ability to recognize faces could therefore have little or nothing to do with our ability to recognize objects. We propose a new framework in which recognition performance for any category is the product of domain-general ability and category-specific experience. In Experiment 1, we show that the overlap between face and object recognition depends on experience with objects. In 256 subjects we measured face recognition, object recognition for eight categories, and self-reported experience with these categories. Experience predicted neither face recognition nor object recognition but moderated their relationship: Face recognition performance is increasingly similar to object recognition performance with increasing object experience. If a subject has a lot of experience with objects and is found to perform poorly, they also prove to have a low ability with faces. In a follow-up survey, we explored the dimensions of experience with objects that may have contributed to self-reported experience in Experiment 1. Different dimensions of experience appear to be more salient for different categories, with general self-reports of expertise reflecting judgments of verbal knowledge about a category more than judgments of visual performance. The complexity of experience and current limitations in its measurement support the importance of aggregating across multiple categories. Our findings imply that both face and object recognition are supported by a common, domain-general ability expressed through experience with a category and best measured when accounting for experience.

  14. Experience moderates overlap between object and face recognition, suggesting a common ability

    Science.gov (United States)

    Gauthier, Isabel; McGugin, Rankin W.; Richler, Jennifer J.; Herzmann, Grit; Speegle, Magen; Van Gulick, Ana E.

    2014-01-01

    Some research finds that face recognition is largely independent from the recognition of other objects; a specialized and innate ability to recognize faces could therefore have little or nothing to do with our ability to recognize objects. We propose a new framework in which recognition performance for any category is the product of domain-general ability and category-specific experience. In Experiment 1, we show that the overlap between face and object recognition depends on experience with objects. In 256 subjects we measured face recognition, object recognition for eight categories, and self-reported experience with these categories. Experience predicted neither face recognition nor object recognition but moderated their relationship: Face recognition performance is increasingly similar to object recognition performance with increasing object experience. If a subject has a lot of experience with objects and is found to perform poorly, they also prove to have a low ability with faces. In a follow-up survey, we explored the dimensions of experience with objects that may have contributed to self-reported experience in Experiment 1. Different dimensions of experience appear to be more salient for different categories, with general self-reports of expertise reflecting judgments of verbal knowledge about a category more than judgments of visual performance. The complexity of experience and current limitations in its measurement support the importance of aggregating across multiple categories. Our findings imply that both face and object recognition are supported by a common, domain-general ability expressed through experience with a category and best measured when accounting for experience. PMID:24993021

  15. Biased recognition of positive faces in aging and amnestic mild cognitive impairment.

    Science.gov (United States)

    Werheid, Katja; Gruno, Maria; Kathmann, Norbert; Fischer, Håkan; Almkvist, Ove; Winblad, Bengt

    2010-03-01

    We investigated age differences in biased recognition of happy, neutral, or angry faces in 4 experiments. Experiment 1 revealed increased true and false recognition for happy faces in older adults, which persisted even when changing each face's emotional expression from study to test in Experiment 2. In Experiment 3, we examined the influence of reduced memory capacity on the positivity-induced recognition bias, which showed the absence of emotion-induced memory enhancement but a preserved recognition bias for positive faces in patients with amnestic mild cognitive impairment compared with older adults with normal memory performance. In Experiment 4, we used semantic differentials to measure the connotations of happy and angry faces. Younger and older participants regarded happy faces as more familiar than angry faces, but the older group showed a larger recognition bias for happy faces. This finding indicates that older adults use a gist-based memory strategy based on a semantic association between positive emotion and familiarity. Moreover, older adults' judgments of valence were more positive for both angry and happy faces, supporting the hypothesis of socioemotional selectivity. We propose that the positivity-induced recognition bias might be based on fluency, which in turn is based on both positivity-oriented emotional goals and on preexisting semantic associations.

  16. Face Recognition Performance Improvement using a Similarity Score of Feature Vectors based on Probabilistic Histograms

    Directory of Open Access Journals (Sweden)

    SRIKOTE, G.

    2016-08-01

    Full Text Available This paper proposes an improved face recognition algorithm that identifies mismatched face pairs in cases of incorrect decisions. The primary feature of this method is to deploy the similarity score with respect to Gaussian components between two previously unseen faces. Unlike conventional vector distance measurements, our algorithm also considers the plot of the summation of the similarity index versus the face feature vector distance. A mixture of Gaussian models of labeled faces is also widely applicable to different biometric system parameters. Comparative evaluations show that the efficiency of the proposed algorithm is superior to that of the conventional algorithm by an average accuracy of up to 1.15% and 16.87% when compared with 3x3 Multi-Region Histogram (MRH) direct-bag-of-features and Principal Component Analysis (PCA)-based face recognition systems, respectively. The experimental results show that similarity score consideration is more discriminative for face recognition than feature distance. Experimental results on the Labeled Faces in the Wild (LFW) data set demonstrate that our algorithms are suitable for real-world probe-to-gallery identification in face recognition systems. Moreover, this proposed method can also be applied to other recognition systems and thereby further improves recognition scores.

  17. Component Structure of Individual Differences in True and False Recognition of Faces

    Science.gov (United States)

    Bartlett, James C.; Shastri, Kalyan K.; Abdi, Herve; Neville-Smith, Marsha

    2009-01-01

    Principal-component analyses of 4 face-recognition studies uncovered 2 independent components. The first component was strongly related to false-alarm errors with new faces as well as to facial "conjunctions" that recombine features of previously studied faces. The second component was strongly related to hits as well as to the conjunction/new…

  18. Improved RGB-D-T based Face Recognition

    DEFF Research Database (Denmark)

    Oliu Simon, Marc; Corneanu, Ciprian; Nasrollahi, Kamal

    2016-01-01

    Reliable facial recognition systems are of crucial importance in various applications from entertainment to security. Thanks to the deep-learning concepts introduced in the field, a significant improvement in the performance of the unimodal facial recognition systems has been observed in the recent...... years. At the same time a multimodal facial recognition is a promising approach. This paper combines the latest successes in both directions by applying deep learning Convolutional Neural Networks (CNN) to the multimodal RGB-D-T based facial recognition problem outperforming previously published results....... Furthermore, a late fusion of the CNN-based recognition block with various hand-crafted features (LBP, HOG, HAAR, HOGOM) is introduced, demonstrating even better recognition performance on a benchmark RGB-D-T database. The obtained results in this paper show that the classical engineered features and CNN...

  19. 3D FACE RECOGNITION FROM RANGE IMAGES BASED ON CURVATURE ANALYSIS

    Directory of Open Access Journals (Sweden)

    Suranjan Ganguly

    2014-02-01

    Full Text Available In this paper, we present a novel approach for three-dimensional face recognition by extracting curvature maps from range images. There are four types of curvature maps: Gaussian, Mean, Maximum and Minimum curvature maps. These curvature maps are used as features for 3D face recognition. The dimension of these feature vectors is reduced using the Singular Value Decomposition (SVD) technique. From the three computed SVD components, the non-negative values of the 'S' part of the SVD are ranked and used as the feature vector. In the proposed method, two pair-wise curvature computations are done: one is the Mean and Maximum curvature pair, and the other is the Gaussian and Mean curvature pair. These are compared to determine the better recognition rate. This automated 3D face recognition system is evaluated in different settings, such as frontal pose with expression and illumination variation, frontal face along with registered faces, only registered faces, and registered faces from different pose orientations across the X, Y and Z axes. The 3D face images used for this research work are taken from the FRAV3D database. Pose-variant 3D facial images are registered to the frontal pose by applying a one-to-all registration technique, and then curvature mapping is applied to the registered face images along with the remaining frontal face images. For classification and recognition, a five-layer feed-forward back-propagation neural network classifier is used, and the corresponding results are discussed in section 4.
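
    The curvature-map and SVD steps described above can be sketched with standard finite-difference formulas for a depth map treated as a Monge patch. This is a minimal sketch: smoothing, registration and the choice of the number of singular values (n) are assumptions, and only the mean and Gaussian maps are shown.

```python
# Mean (H) and Gaussian (K) curvature maps from a range image, then the
# ranked non-negative singular values of a curvature map as features.
import numpy as np

def curvature_maps(z):
    """z: 2-D range image (depth values on a regular grid)."""
    zy, zx = np.gradient(z)          # first derivatives along rows (y) and cols (x)
    zxy, zxx = np.gradient(zx)       # second derivatives of zx
    zyy, _ = np.gradient(zy)
    denom = 1.0 + zx**2 + zy**2
    K = (zxx * zyy - zxy**2) / denom**2                       # Gaussian curvature
    H = ((1 + zx**2) * zyy - 2 * zx * zy * zxy
         + (1 + zy**2) * zxx) / (2 * denom**1.5)              # mean curvature
    return H, K

def svd_features(curv_map, n=20):
    # singular values are non-negative and returned in decreasing order
    s = np.linalg.svd(curv_map, compute_uv=False)
    return s[:n]
```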

  20. Fearful contextual expression impairs the encoding and recognition of target faces: an ERP study

    Directory of Open Access Journals (Sweden)

    Huiyan eLin

    2015-09-01

    Full Text Available Previous event-related potential (ERP) studies have shown that the N170 to faces is modulated by the emotion of the face and its context. However, it is unclear how the encoding of emotional target faces as reflected in the N170 is modulated by the preceding contextual facial expression when temporal onset and identity of target faces are unpredictable. In addition, no study as yet has investigated whether contextual facial expression modulates later recognition of target faces. To address these issues, participants in the present study were asked to identify target faces (fearful or neutral) that were presented after a sequence of fearful or neutral contextual faces. The number of sequential contextual faces was random and contextual and target faces were of different identities so that temporal onset and identity of target faces were unpredictable. Electroencephalography (EEG) data was recorded during the encoding phase. Subsequently, participants had to perform an unexpected old/new recognition task in which target face identities were presented in either the encoded or the non-encoded expression. ERP data showed a reduced N170 to target faces in fearful as compared to neutral context regardless of target facial expression. In the later recognition phase, recognition rates were reduced for target faces in the encoded expression when they had been encountered in fearful as compared to neutral context. The present findings suggest that fearful compared to neutral contextual faces reduce the allocation of attentional resources towards target faces, which results in limited encoding and recognition of target faces.

  1. Fearful contextual expression impairs the encoding and recognition of target faces: an ERP study

    Science.gov (United States)

    Lin, Huiyan; Schulz, Claudia; Straube, Thomas

    2015-01-01

    Previous event-related potential (ERP) studies have shown that the N170 to faces is modulated by the emotion of the face and its context. However, it is unclear how the encoding of emotional target faces as reflected in the N170 is modulated by the preceding contextual facial expression when temporal onset and identity of target faces are unpredictable. In addition, no study as yet has investigated whether contextual facial expression modulates later recognition of target faces. To address these issues, participants in the present study were asked to identify target faces (fearful or neutral) that were presented after a sequence of fearful or neutral contextual faces. The number of sequential contextual faces was random and contextual and target faces were of different identities so that temporal onset and identity of target faces were unpredictable. Electroencephalography (EEG) data was recorded during the encoding phase. Subsequently, participants had to perform an unexpected old/new recognition task in which target face identities were presented in either the encoded or the non-encoded expression. ERP data showed a reduced N170 to target faces in fearful as compared to neutral context regardless of target facial expression. In the later recognition phase, recognition rates were reduced for target faces in the encoded expression when they had been encountered in fearful as compared to neutral context. The present findings suggest that fearful compared to neutral contextual faces reduce the allocation of attentional resources towards target faces, which results in limited encoding and recognition of target faces. PMID:26388751

  2. Fusion of Visible and Thermal Descriptors Using Genetic Algorithms for Face Recognition Systems

    Directory of Open Access Journals (Sweden)

    Gabriel Hermosilla

    2015-07-01

    Full Text Available The aim of this article is to present a new face recognition system based on the fusion of visible and thermal features obtained from the most current local matching descriptors by maximizing face recognition rates through the use of genetic algorithms. The article considers a comparison of the performance of the proposed fusion methodology against five current face recognition methods and classic fusion techniques used commonly in the literature. These were selected by considering their performance in face recognition. The five local matching methods and the proposed fusion methodology are evaluated using the standard visible/thermal database, the Equinox database, along with a new database, the PUCV-VTF, designed for visible-thermal studies in face recognition and described for the first time in this work. The latter is created considering visible and thermal image sensors with different real-world conditions, such as variations in illumination, facial expression, pose, occlusion, etc. The main conclusions of this article are that two variants of the proposed fusion methodology surpass current face recognition methods and the classic fusion techniques reported in the literature, attaining recognition rates of over 97% and 99% for the Equinox and PUCV-VTF databases, respectively. The fusion methodology is very robust to illumination and expression changes, as it combines thermal and visible information efficiently by using genetic algorithms, thus allowing it to choose optimal face areas where one spectrum is more representative than the other.
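
    The genetic-algorithm fusion idea can be illustrated with a toy search. The paper fuses descriptors over face regions; the sketch below is a deliberate simplification that only optimizes a single score-level weight between visible and thermal similarity matrices, with assumed population size, generations and mutation noise.

```python
# Toy genetic search for a visible/thermal score-fusion weight.
import numpy as np

def rank1_rate(scores, gallery_labels, probe_labels):
    # scores[i, j]: similarity of probe i to gallery item j (higher = better)
    best = gallery_labels[np.argmax(scores, axis=1)]
    return np.mean(best == probe_labels)

def ga_fuse(vis, thr, g_labels, p_labels, pop=20, gens=30,
            rng=np.random.default_rng(0)):
    weights = rng.uniform(0, 1, pop)                            # initial population
    for _ in range(gens):
        fitness = np.array([rank1_rate(w * vis + (1 - w) * thr, g_labels, p_labels)
                            for w in weights])
        parents = weights[np.argsort(fitness)[-pop // 2:]]      # selection
        children = (rng.choice(parents, pop // 2) +
                    rng.choice(parents, pop // 2)) / 2          # crossover
        children += rng.normal(0, 0.05, pop // 2)               # mutation
        weights = np.clip(np.concatenate([parents, children]), 0, 1)
    fitness = np.array([rank1_rate(w * vis + (1 - w) * thr, g_labels, p_labels)
                        for w in weights])
    return weights[np.argmax(fitness)]
```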

  3. Fusion of Visible and Thermal Descriptors Using Genetic Algorithms for Face Recognition Systems.

    Science.gov (United States)

    Hermosilla, Gabriel; Gallardo, Francisco; Farias, Gonzalo; San Martin, Cesar

    2015-07-23

    The aim of this article is to present a new face recognition system based on the fusion of visible and thermal features obtained from the most current local matching descriptors by maximizing face recognition rates through the use of genetic algorithms. The article considers a comparison of the performance of the proposed fusion methodology against five current face recognition methods and classic fusion techniques used commonly in the literature. These were selected by considering their performance in face recognition. The five local matching methods and the proposed fusion methodology are evaluated using the standard visible/thermal database, the Equinox database, along with a new database, the PUCV-VTF, designed for visible-thermal studies in face recognition and described for the first time in this work. The latter is created considering visible and thermal image sensors with different real-world conditions, such as variations in illumination, facial expression, pose, occlusion, etc. The main conclusions of this article are that two variants of the proposed fusion methodology surpass current face recognition methods and the classic fusion techniques reported in the literature, attaining recognition rates of over 97% and 99% for the Equinox and PUCV-VTF databases, respectively. The fusion methodology is very robust to illumination and expression changes, as it combines thermal and visible information efficiently by using genetic algorithms, thus allowing it to choose optimal face areas where one spectrum is more representative than the other.

  4. Solving the Border Control Problem: Evidence of Enhanced Face Matching in Individuals with Extraordinary Face Recognition Skills.

    Science.gov (United States)

    Bobak, Anna Katarzyna; Dowsett, Andrew James; Bate, Sarah

    2016-01-01

    Photographic identity documents (IDs) are commonly used despite clear evidence that unfamiliar face matching is a difficult and error-prone task. The current study set out to examine the performance of seven individuals with extraordinary face recognition memory, so called "super recognisers" (SRs), on two face matching tasks resembling border control identity checks. In Experiment 1, the SRs as a group outperformed control participants on the "Glasgow Face Matching Test", and some case-by-case comparisons also reached significance. In Experiment 2, a perceptually difficult face matching task was used: the "Models Face Matching Test". Once again, SRs outperformed controls both on group and mostly in case-by-case analyses. These findings suggest that SRs are considerably better at face matching than typical perceivers, and would make proficient personnel for border control agencies.

  5. Face Detection and Recognition Using Viola-Jones with PCA-LDA and Square Euclidean Distance

    Directory of Open Access Journals (Sweden)

    Nawaf Hazim Barnouti

    2016-05-01

    Full Text Available In this paper, an automatic face recognition system is proposed based on appearance-based features that focus on the entire face image rather than local facial features. The first step in a face recognition system is face detection. The Viola-Jones face detection method, which is capable of processing images extremely quickly while achieving high detection rates, is used. This method had the greatest impact in the 2000s and is known as the first object detection framework to provide relevant detection in real time. Feature extraction and dimension reduction are applied after face detection. Principal Component Analysis (PCA) is widely used in pattern recognition, and Linear Discriminant Analysis (LDA), which overcomes drawbacks of PCA, has been successfully applied to face recognition. This is achieved by projecting the image onto the Eigenface space with PCA and then applying pure LDA over it. The Square Euclidean Distance (SED) is used for matching: the distance between the vectors of two images determines their similarity. The proposed method is tested on three databases (MUCT, Face94, and Grimace). Different numbers of training and testing images are used to evaluate the system performance, and the results show that increasing the number of training images increases the recognition rate.
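
    A minimal sketch of the PCA, then LDA, then squared-Euclidean matching pipeline described above, assuming detection has already produced aligned, equally sized, flattened grayscale crops; the number of PCA components is an illustrative choice.

```python
# PCA -> LDA -> squared Euclidean nearest neighbour matcher.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def build_matcher(train_faces, train_labels, n_pca=80):
    """train_faces: (n, h*w) numpy array, train_labels: (n,) numpy array."""
    pca = PCA(n_components=min(n_pca, len(train_faces) - 1)).fit(train_faces)
    lda = LinearDiscriminantAnalysis().fit(pca.transform(train_faces), train_labels)
    gallery = lda.transform(pca.transform(train_faces))

    def match(face):
        probe = lda.transform(pca.transform(face[None, :]))[0]
        d2 = np.sum((gallery - probe) ** 2, axis=1)   # squared Euclidean distance
        return train_labels[np.argmin(d2)]
    return match
```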

  6. Local Illumination Normalization and Facial Feature Point Selection for Robust Face Recognition

    Directory of Open Access Journals (Sweden)

    Song HAN

    2013-03-01

    Full Text Available Face recognition systems must be robust to variation in factors such as facial expression, illumination, head pose and aging. In particular, robustness against illumination variation is one of the most important problems to be solved for the practical use of face recognition systems. The Gabor wavelet is widely used in face detection and recognition because it makes it possible to simulate functions of the human visual system. In this paper, we propose a method for extracting Gabor wavelet features that are stable under local illumination variation, and we present experimental results demonstrating its effectiveness.
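
    For illustration, a Gabor feature extractor with a simple local illumination normalization can be sketched as below. The normalization window, filter-bank parameters and the coarse per-channel statistics are illustrative assumptions, not the paper's method.

```python
# Local mean/variance normalization followed by a Gabor filter bank.
import cv2
import numpy as np

def local_normalize(gray, ksize=15):
    gray = gray.astype(np.float32)
    mean = cv2.blur(gray, (ksize, ksize))
    std = np.sqrt(cv2.blur((gray - mean) ** 2, (ksize, ksize))) + 1e-6
    return (gray - mean) / std                    # locally zero-mean, unit-variance

def gabor_features(gray, n_orient=8, n_scale=4):
    img = local_normalize(gray)
    feats = []
    for s in range(n_scale):
        for o in range(n_orient):
            kern = cv2.getGaborKernel(ksize=(31, 31), sigma=3.0 * (s + 1),
                                      theta=np.pi * o / n_orient,
                                      lambd=8.0 * (s + 1), gamma=0.5)
            resp = cv2.filter2D(img, cv2.CV_32F, kern)
            feats.extend([resp.mean(), resp.std()])  # coarse per-channel statistics
    return np.array(feats)
```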

  7. ONLINE TRAINING FOR FACE RECOGNITION SYSTEM USING IMPROVED PCA

    Directory of Open Access Journals (Sweden)

    Widodo Budiharto

    2011-11-01

    where the tested images are captured in real time from a camera. With varying illumination in the tested images, accuracy on the ITS face database is 95.5%, higher than the AT&T face database at 95.4% and the Indian face database at 72%. The results from this experiment are still being evaluated for future improvement.

  8. Description and recognition of faces from 3D data

    Science.gov (United States)

    Coombes, Anne M.; Richards, Robin; Linney, Alfred D.; Bruce, Vicki; Fright, Rick

    1992-12-01

    A method based on differential geometry is presented for mathematically describing the shape of the facial surface. Three-dimensional data for the face are collected by optical surface scanning. The method allows the segmentation of the face into regions of a particular `surface type,' according to the surface curvature. Eight different surface types are produced which all have perceptually meaningful interpretations. The correspondence of the surface type regions to the facial features is easily visualized, allowing a qualitative assessment of the face. A quantitative description of the face in terms of the surface type regions can be produced and the variation of the description between faces is demonstrated. A set of optical surface scans can be registered together and averaged to produce an average male and average female face. Thus an assessment of how individuals vary from the average can be made as well as a general statement about the differences between male and female faces. This method will enable an investigation to be made as to how reliably faces can be individuated by their surface shape which, if feasible, may be the basis of an automatic system for recognizing faces. It also has applications in physical anthropology, for classification of the face, facial reconstructive surgery, to quantify the changes in a face altered by reconstructive surgery and growth, and in visual perception, to assess the recognizability of faces. Examples of some of these applications are presented.
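
    The surface-type labelling step can be sketched from the signs of the mean (H) and Gaussian (K) curvature, which partition the surface into the eight classes of the classical HK classification (peak, pit, ridge, valley, saddle ridge, saddle valley, flat, minimal). The zero-thresholds below are assumptions; the curvature maps themselves can be computed as in the range-image sketch given earlier in this list.

```python
# HK sign classification of a surface into eight surface types.
import numpy as np

SURFACE_TYPES = {(-1, 1): "peak",   (-1, 0): "ridge",  (-1, -1): "saddle ridge",
                 (0, 0):  "flat",   (0, -1): "minimal",
                 (1, 1):  "pit",    (1, 0):  "valley", (1, -1): "saddle valley"}

def surface_type_map(H, K, eps_h=1e-3, eps_k=1e-4):
    """H, K: mean and Gaussian curvature maps of the same shape."""
    sh = np.where(np.abs(H) < eps_h, 0, np.sign(H)).astype(int)
    sk = np.where(np.abs(K) < eps_k, 0, np.sign(K)).astype(int)
    label = np.empty(H.shape, dtype=object)
    for (h, k), name in SURFACE_TYPES.items():
        label[(sh == h) & (sk == k)] = name
    # the combination H == 0, K > 0 is geometrically impossible (K <= H^2),
    # so the eight classes above cover all valid pixels
    return label
```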

  9. Robust Face Recognition by Hierarchical Kernel Associative Memory Models Based on Spatial Domain Gabor Transforms

    Directory of Open Access Journals (Sweden)

    Bai-ling Zhang

    2006-07-01

    Full Text Available Face recognition can be studied as an associative memory (AM) problem, and kernel-based AM models have been proven efficient. In this paper, a hierarchical Kernel Associative Memory (KAM) face recognition scheme with a multiscale Gabor transform is proposed. The pyramidal multiscale Gabor decomposition proposed by Nestares, Navarro, Portilla and Tabernero not only provides a very efficient implementation of the Gabor transform in the spatial domain, but also permits a fast reconstruction of images. In our method, face images of each person are first decomposed into their multiscale representations by a quasicomplete Gabor transform, which are then modelled by Kernel Associative Memories. In the recognition stage, a query face image is also represented by a Gabor multiresolution pyramid and the reconstructions from different KAM models corresponding to even Gabor channels are then simply summed to give the recall. The recognition scheme was thoroughly tested using several benchmark face datasets, including the AR faces, UMIST faces, JAFFE faces and Yale A faces, which include different kinds of face variations from occlusion, pose, expression and illumination. The experimental results show that the proposed method demonstrates strong robustness in recognizing faces under different conditions, particularly under occlusion, pose alterations and expression changes.

  10. Locality constrained joint dynamic sparse representation for local matching based face recognition.

    Directory of Open Access Journals (Sweden)

    Jianzhong Wang

    Full Text Available Recently, Sparse Representation-based Classification (SRC) has attracted a lot of attention for its applications to various tasks, especially in biometric techniques such as face recognition. However, factors such as lighting, expression, pose and disguise variations in face images will decrease the performance of SRC and most other face recognition techniques. In order to overcome these limitations, we propose a robust face recognition method named Locality Constrained Joint Dynamic Sparse Representation-based Classification (LCJDSRC) in this paper. In our method, a face image is first partitioned into several smaller sub-images. Then, these sub-images are sparsely represented using the proposed locality constrained joint dynamic sparse representation algorithm. Finally, the representation results for all sub-images are aggregated to obtain the final recognition result. Compared with other algorithms which process each sub-image of a face image independently, the proposed algorithm regards local matching-based face recognition as a multi-task learning problem. Thus, the latent relationships among the sub-images from the same face image are taken into account. Meanwhile, the locality information of the data is also considered in our algorithm. We evaluate our algorithm by comparing it with other state-of-the-art approaches. Extensive experiments on four benchmark face databases (ORL, Extended YaleB, AR and LFW) demonstrate the effectiveness of LCJDSRC.
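
    The per-class residual rule that LCJDSRC builds on can be illustrated with a bare-bones SRC classifier on whole images. The locality constraint, sub-image partitioning and joint dynamic sparsity of the paper are omitted; the sparsity level is an assumed parameter.

```python
# Basic sparse-representation classification (SRC) with OMP coding.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def src_classify(X, labels, y, n_nonzero=10):
    """X: (d, n) column-stacked, l2-normalized training samples;
    labels: (n,) numpy array; y: (d,) test sample."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero,
                                    fit_intercept=False).fit(X, y)
    a = omp.coef_
    # classify by the class whose training samples best reconstruct y
    residuals = {c: np.linalg.norm(y - X[:, labels == c] @ a[labels == c])
                 for c in np.unique(labels)}
    return min(residuals, key=residuals.get)
```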

  11. Separability oriented fusion of LBP and CS-LDP for infrared face recognition

    Science.gov (United States)

    Xie, Zhihua; Liu, Guodong

    2015-10-01

    Due to the low resolution of infrared face images, local texture features are well suited for infrared face feature extraction. To extract rich facial texture features, infrared face recognition based on the local binary pattern (LBP) and the center-symmetric local derivative pattern (CS-LDP) is proposed. First, LBP is utilized to extract first-order texture features from the original infrared face image; second, second-order features are extracted by CS-LDP. Finally, an adaptive weighted fusion algorithm based on a separability discriminant criterion is proposed to obtain the final recognition features. Experimental results on our infrared face databases demonstrate that the separability-oriented fusion of LBP and CS-LDP contributes complementary discriminant ability, which improves the performance of infrared face recognition.

  12. Galactose uncovers face recognition and mental images in congenital prosopagnosia: the first case report.

    Science.gov (United States)

    Esins, Janina; Schultz, Johannes; Bülthoff, Isabelle; Kennerknecht, Ingo

    2014-09-01

    A woman in her early 40s with congenital prosopagnosia and attention deficit hyperactivity disorder observed for the first time sudden and extensive improvement of her face recognition abilities, mental imagery, and sense of navigation after galactose intake. This effect of galactose on prosopagnosia has never been reported before. Even if this effect is restricted to a subform of congenital prosopagnosia, galactose might improve the condition of other prosopagnosics. Congenital prosopagnosia, the inability to recognize other people by their face, has extensive negative impact on everyday life. It has a high prevalence of about 2.5%. Monosaccharides are known to have a positive impact on cognitive performance. Here, we report the case of a prosopagnosic woman for whom the daily intake of 5 g of galactose resulted in a remarkable improvement of her lifelong face blindness, along with improved sense of orientation and more vivid mental imagery. All these improvements vanished after discontinuing galactose intake. The self-reported effects of galactose were wide-ranging and remarkably strong but could not be reproduced for 16 other prosopagnosics tested. Indications about heterogeneity within prosopagnosia have been reported; this could explain the difficulty to find similar effects in other prosopagnosics. Detailed analyses of the effects of galactose in prosopagnosia might give more insight into the effects of galactose on human cognition in general. Galactose is cheap and easy to obtain, therefore, a systematic test of its positive effects on other cases of congenital prosopagnosia may be warranted.

  13. Effects of exposure to facial expression variation in face learning and recognition.

    Science.gov (United States)

    Liu, Chang Hong; Chen, Wenfeng; Ward, James

    2015-11-01

    Facial expression is a major source of image variation in face images. Linking numerous expressions to the same face can be a huge challenge for face learning and recognition. It remains largely unknown what level of exposure to this image variation is critical for expression-invariant face recognition. We examined this issue in a recognition memory task, where the number of facial expressions of each face being exposed during a training session was manipulated. Faces were either trained with multiple expressions or a single expression, and they were later tested in either the same or different expressions. We found that recognition performance after learning three emotional expressions had no improvement over learning a single emotional expression (Experiments 1 and 2). However, learning three emotional expressions improved recognition compared to learning a single neutral expression (Experiment 3). These findings reveal both the limitation and the benefit of multiple exposures to variations of emotional expression in achieving expression-invariant face recognition. The transfer of expression training to a new type of expression is likely to depend on a relatively extensive level of training and a certain degree of variation across the types of expressions.

  14. An Information-Theoretic Measure for Face Recognition: Comparison with Structural Similarity

    Directory of Open Access Journals (Sweden)

    Asmhan Flieh Hassan

    2014-11-01

    Full Text Available Automatic recognition of people's faces is a challenging problem that has received significant attention from signal processing researchers in recent years. This is due to its many applications in different fields, including security and forensic analysis. Despite this attention, face recognition is still one of the most challenging problems. Up to this moment, there is no technique that provides a reliable solution in all situations. In this paper a novel technique for face recognition is presented. This technique, which is called ISSIM, is derived from our recently published information-theoretic similarity measure HSSIM, which was based on the joint histogram. Face recognition with ISSIM is still based on the joint histogram of a test image and the database images. Performance evaluation was carried out in MATLAB using part of the well-known AT&T image database, consisting of 49 face images: seven subjects were chosen, and for each subject seven views (poses) with different facial expressions. The goal of this paper is to present a simplified approach for face recognition that may work in real-time environments. The performance of our information-theoretic face recognition method (ISSIM) has been demonstrated experimentally and is shown to outperform the well-known statistical-based method (SSIM).
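
    A joint-histogram, information-theoretic similarity between two equally sized grayscale images can be sketched as below. Plain mutual information is used here as a stand-in; the ISSIM measure of the paper differs in detail, and the bin count is an assumption.

```python
# Joint-histogram mutual information as an image similarity score.
import numpy as np

def mutual_information(img1, img2, bins=32):
    joint, _, _ = np.histogram2d(img1.ravel(), img2.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

def identify(probe, gallery_imgs, gallery_labels):
    scores = [mutual_information(probe, g) for g in gallery_imgs]
    return gallery_labels[int(np.argmax(scores))]   # most similar gallery identity
```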

  15. Face recognition impairment in small for gestational age and preterm children.

    Science.gov (United States)

    Perez-Roche, T; Altemir, I; Giménez, G; Prieto, E; González, I; López Pisón, J; Pueyo, V

    2017-03-01

    Infants born prematurely or with low birth weight are at increased risk of visual perceptual impairment. Face recognition is a high-order visual ability important for social development, which has been rarely assessed in premature or low birth weight children. To evaluate the influence of prematurity and low birth weight on face recognition skills. Seventy-seven children were evaluated as part of a prospective cohort study. They were divided into premature and term birth cohorts. Children with a birth weight below the 10th centile were considered small for gestational age. All children underwent a full ophthalmologic assessment and evaluation of face recognition skills using the Facial Memory subtest from the Test of Memory and Learning. Premature infants scored worse on immediate face recognition compared to term infants. However, after adjusting for birth weight, prematurity was not associated with worse outcomes. Independent of gestational age, outcomes of low birth weight children were worse than those of appropriate birth weight children, for immediate face recognition (odds ratio [OR], 5.14; 95% confidence interval [CI], 1.32-21.74) and for face memory (OR, 4.48; 95% CI, 1.14-16.95). Being born small for gestational age is associated with suboptimal face recognition skills, even in children without major neurodevelopmental problems. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. A parametric and indexing method for faces and textures recognition

    Directory of Open Access Journals (Sweden)

    Kais Loukil

    2012-09-01

    Full Text Available Image recognition within a large library of images is an important topic in multimedia research. Several methods, such as the wavelet transform or pyramid transforms, are used in image recognition, and many other techniques based on statistical image analysis exist; these statistical methods give interesting results without operating directly on the image characteristics. The proposed approach is based on multiresolution analysis and the wavelet transform, and the Kullback-Leibler distance is used to compare images. The approach is tested on both texture images and face recognition.
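    The abstract does not spell out how the Kullback-Leibler distance is applied, so the following is only a sketch of one plausible variant: summarize each image by a normalized histogram of its wavelet detail coefficients and compare histograms with a symmetrized Kullback-Leibler divergence. The wavelet, decomposition level, binning, and use of the PyWavelets package are assumptions for illustration, not the authors' method.

import numpy as np
import pywt  # PyWavelets

def subband_histogram(img, wavelet="haar", level=2, bins=64):
    """Normalized histogram of all wavelet detail coefficients of a grayscale image.
    A fixed range keeps bin edges identical across images (roughly suited to 8-bit input)."""
    coeffs = pywt.wavedec2(np.asarray(img, dtype=float), wavelet=wavelet, level=level)
    details = np.concatenate([band.ravel() for lvl in coeffs[1:] for band in lvl])
    hist, _ = np.histogram(details, bins=bins, range=(-512.0, 512.0))
    hist = hist.astype(float) + 1e-12          # avoid zero bins before taking logs
    return hist / hist.sum()

def kl_distance(p, q):
    """Symmetrized Kullback-Leibler divergence between two histograms."""
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

def nearest_image(query, library):
    """Label of the library image whose subband histogram is closest to the query's."""
    q_hist = subband_histogram(query)
    return min(library, key=lambda name: kl_distance(q_hist, subband_histogram(library[name])))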

  17. High and low performers differ in the use of shape information for face recognition.

    Science.gov (United States)

    Kaufmann, Jürgen M; Schulz, Claudia; Schweinberger, Stefan R

    2013-06-01

    Previous findings demonstrated that increasing facial distinctiveness by means of spatial caricaturing improves face learning and modulates event-related potential (ERP) components associated with the processing of typical shape information (P200) and with face learning and recognition (N250). The current study investigated performance-based differences in the effects of spatial caricaturing: a modified version of the Bielefelder famous faces test (BFFT) was used to subdivide a non-clinical group of 28 participants into better and worse face recognizers. Overall, a learning benefit was seen for caricatured compared to veridical faces. In addition, for learned faces we found larger caricaturing effects in response times and inverse efficiency scores, as well as in P200 and N250 amplitudes, in worse face recognizers, indicating that these individuals profited disproportionately from exaggerated idiosyncratic face shape. During learning, and for novel faces at test, better and worse recognizers showed similar caricaturing effects. We suggest that spatial caricaturing helps both better and worse face recognizers access critical idiosyncratic shape information that supports identity processing and the learning of unfamiliar faces. For familiarized faces, better face recognizers might depend less on exaggerated shape and make better use of texture information than worse recognizers. These results shed light on the transition from unfamiliar to familiar face processing and may also be relevant for developing training programmes for people with face recognition difficulties.
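    Inverse efficiency scores combine speed and accuracy into a single index: mean correct response time divided by the proportion of correct responses, with lower values indicating more efficient performance. The snippet below illustrates that conventional formula only; it is not the authors' analysis pipeline, and the trial data are hypothetical.

import numpy as np

def inverse_efficiency(rts_ms, correct):
    """Inverse efficiency score: mean RT on correct trials divided by proportion
    correct. Generic formula; no outlier trimming or other study-specific
    preprocessing is applied here."""
    correct = np.asarray(correct, dtype=bool)
    rts_ms = np.asarray(rts_ms, dtype=float)
    return float(rts_ms[correct].mean() / correct.mean())

# Hypothetical trials: response times in milliseconds and response correctness.
rts = [612, 580, 701, 655, 590, 720, 640, 605]
acc = [1, 1, 0, 1, 1, 1, 0, 1]
print(round(inverse_efficiency(rts, acc), 1))  # mean correct RT divided by 0.75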

  18. Effects of acute psychosocial stress on neural activity to emotional and neutral faces in a face recognition memory paradigm.

    Science.gov (United States)

    Li, Shijia; Weerda, Riklef; Milde, Christopher; Wolf, Oliver T; Thiel, Christiane M

    2014-12-01

    Previous studies have shown that acute psychosocial stress impairs recognition memory for declarative material and that emotional material is especially sensitive to this effect. Animal studies suggest a central role of the amygdala, which modulates memory processes in the hippocampus, prefrontal cortex and other brain areas. We used functional magnetic resonance imaging (fMRI) to investigate the neural correlates of stress-induced modulation of emotional recognition memory in humans. Twenty-seven healthy, right-handed, non-smoking male volunteers performed an emotional face recognition task. During encoding, participants were presented with 50 fearful and 50 neutral faces. One hour later, they underwent either a stress procedure (Trier Social Stress Test) or a control procedure outside the scanner, followed immediately by the recognition session inside the scanner, in which participants had to discriminate between 100 old and 50 new faces. Stress increased salivary cortisol, blood pressure and pulse, and worsened participants' mood, but did not affect recognition memory. BOLD data during recognition revealed a stress-by-emotion interaction in the left inferior frontal gyrus and right hippocampus, driven by a stress-induced increase in neural activity to fearful faces and a decrease to neutral faces. Functional connectivity analyses revealed a stress-induced increase in coupling between the right amygdala and the right fusiform gyrus when processing fearful as compared to neutral faces. Our results provide evidence that acute psychosocial stress affects medial temporal and frontal brain areas differentially for neutral and emotional items, with privileged processing of emotional stimuli under stress.
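    Old/new recognition performance in paradigms like this is commonly summarized with the signal-detection measure d' (the z-transformed hit rate minus the z-transformed false-alarm rate). The abstract does not state how recognition memory was scored, so the sketch below is a generic illustration with hypothetical counts, not the study's analysis.

from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity for old/new recognition:
    d' = z(hit rate) - z(false-alarm rate), with a small correction so that
    rates of exactly 0 or 1 do not yield infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts for 100 old and 50 new faces (not taken from the study):
print(round(d_prime(hits=72, misses=28, false_alarms=12, correct_rejections=38), 2))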

  19. Face recognition in newly hatched chicks at the onset of vision.

    Science.gov (United States)

    Wood, Samantha M W; Wood, Justin N

    2015-04-01

    How does face recognition emerge in the newborn brain? To address this question, we used an automated controlled-rearing method with a newborn animal model: the domestic chick (Gallus gallus). This automated method allowed us to examine chicks' face recognition abilities at the onset of both face experience and object experience. In the first week of life, newly hatched chicks were raised in controlled-rearing chambers that contained no objects other than a single virtual human face. In the second week of life, we used an automated forced-choice testing procedure to examine whether chicks could distinguish that familiar face from a variety of unfamiliar faces. Chicks successfully distinguished the familiar face from most of the unfamiliar faces; for example, chicks were sensitive to changes in the face's age, gender, and orientation (upright vs. inverted). Thus, chicks can build an accurate representation of the first face they see in their life. These results show that the initial state of face recognition is surprisingly powerful: Newborn visual systems can begin encoding and recognizing faces at the onset of vision.

  20. Face recognition in emotional scenes: observers remember the eye shape but forget the nose.

    Science.gov (United States)

    Ryan, Kaitlin F; Schwartz, Noah Z

    2013-01-01

    Face recognition is believed to be a highly specialized process that allows individuals to recognize faces faster and more accurately than ordinary objects. However, when faces are viewed in highly emotional contexts, the process becomes slower and less accurate, suggesting a change in recognition strategy compared to recognition in non-arousing contexts. Here we explore this finding using a novel paradigm to determine which face dimensions are most important for recognizing faces that were initially encoded in highly emotional contexts. Participants were asked to recognize faces from a 3-alternative display after viewing a similar face embedded in a neutral, positive, or negative emotional scene. Results showed that individuals rely on eye shape when recognizing faces encoded in either positive or negative emotional contexts, and ignore nose shape when recognizing faces encoded in negative emotional scenes. The findings suggest that, after encoding a face during heightened emotional arousal, individuals are more likely to commit errors when identifying the face on the basis of nose shape, and less likely to commit errors when identifying it on the basis of eye shape.