Duan, Xiaodong; Tan, Zheng-Hua
In this paper, we present a local feature learning method for face recognition to deal with varying poses. As opposed to the commonly used approaches of recovering frontal face images from profile views, the proposed method extracts the subject-related part from a local feature by removing the pose-related part in it on the basis of a pose feature. The method has a closed-form solution, hence being time efficient. For performance evaluation, cross-pose face recognition experiments are conducted on two public face recognition databases, FERET and FEI. The proposed method shows a significant recognition improvement under varying poses over general local feature approaches and outperforms or is comparable with related state-of-the-art pose-invariant face recognition approaches. © 2015 IEEE.
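The pose-removal idea described above can be sketched generically (this is a minimal illustration, not the authors' exact formulation): given a pose feature, the pose-related part of each local feature can be estimated by ordinary least squares, which has a closed-form solution, and subtracted. All names, dimensions, and the linear-mixing assumption below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n samples of a d-dim local feature, contaminated by a k-dim pose feature.
n, d, k = 200, 32, 4
P = rng.normal(size=(n, k))       # pose features (assumed given)
A = rng.normal(size=(k, d))       # unknown mixing of pose into the local feature
S = rng.normal(size=(n, d))       # subject-related part we want to keep
X = S + P @ A                     # observed local features

# Closed-form step: regress X on P by ordinary least squares and subtract
# the pose-predicted part; the residual is orthogonal to the pose feature.
W, *_ = np.linalg.lstsq(P, X, rcond=None)
X_subject = X - P @ W

print(bool(np.abs(P.T @ X_subject).max() < 1e-8))  # True: no linear pose component left
```

Because the least-squares residual is orthogonal to the regressors, the cleaned features carry no linear trace of the pose feature, which is the spirit of the subtraction described in the abstract.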
Spence, Morgan L; Storrs, Katherine R; Arnold, Derek H
Humans are experts at face recognition. The mechanisms underlying this complex capacity are not fully understood. Recently, it has been proposed that face recognition is supported by a coarse-scale analysis of visual information contained in horizontal bands of contrast distributed along the vertical image axis: a biological facial "barcode" (Dakin & Watt, 2009). A critical prediction of the facial barcode hypothesis is that the distribution of image contrast along the vertical axis will be more important for face recognition than image distributions along the horizontal axis. Using a novel paradigm involving dynamic image distortions, a series of experiments is presented examining famous face recognition impairments from selectively disrupting image distributions along the vertical or horizontal image axes. Results show that disrupting the image distribution along the vertical image axis is more disruptive for recognition than matched distortions along the horizontal axis. Consistent with the facial barcode hypothesis, these results suggest that human face recognition relies disproportionately on appropriately scaled distributions of image contrast along the vertical image axis. © 2014 ARVO.
Hsieh, Chao-Kuei; Lai, Shang-Hong; Chen, Yung-Chang
Face recognition is one of the most intensively studied topics in computer vision and pattern recognition, but few are focused on how to robustly recognize faces with expressions under the restriction of one single training sample per class. A constrained optical flow algorithm, which combines the advantages of the unambiguous correspondence of feature point labeling and the flexible representation of optical flow computation, has been developed for face recognition from expressional face images. In this paper, we propose an integrated face recognition system that is robust against facial expressions by combining information from the computed intraperson optical flow and the synthesized face image in a probabilistic framework. Our experimental results show that the proposed system improves the accuracy of face recognition from expressional face images.
Jain, Anil K
This report describes research efforts towards developing algorithms for a robust face recognition system to overcome many of the limitations found in existing two-dimensional facial recognition systems...
A novel face recognition approach for varying illumination conditions, based on Green's function in tension-based bidimensional empirical mode decomposition (GiT-BEMD) and gradient faces (GBEMDGF), is presented. First, the face image is illumination-normalized by the discrete cosine transform (DCT), truncating an appropriate number of DCT coefficients in the logarithm domain. Then, two intrinsic mode functions (IMFs) relevant to the intrinsic physical significance of face images are produced by GiT-BEMD. At the same time, gradient faces are used to enhance the high-frequency components of face images and to extract illumination-insensitive facial features. Discriminative facial features are obtained by fusing the IMFs with the illumination-insensitive feature. Second, principal component analysis is adopted to reduce the dimension of the face image, and a nearest-neighbour classifier based on cosine distance is used for face classification. Experimental results on the Yale B and CMU PIE face databases demonstrate that the presented technique is robust to varying lighting sources.
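The cosine-distance nearest-neighbour classification step mentioned above can be sketched as follows (a minimal illustration, not the paper's implementation; the feature vectors and labels are toy stand-ins):

```python
import numpy as np

def cosine_nn(gallery, labels, probe):
    """Nearest-neighbour classification with cosine distance.

    gallery: (n, d) array of enrolled feature vectors
    labels:  length-n list of identity labels
    probe:   (d,) feature vector to classify
    """
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    p = probe / np.linalg.norm(probe)
    distances = 1.0 - g @ p          # cosine distance = 1 - cosine similarity
    return labels[int(np.argmin(distances))]

gallery = np.array([[1.0, 0.0], [0.0, 1.0]])
labels = ["alice", "bob"]
print(cosine_nn(gallery, labels, np.array([0.9, 0.1])))  # alice
```

Cosine distance ignores vector magnitude, which is one reason it is popular after PCA projection, where overall energy differences between images are less informative than direction.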
Jain, Anil K
... Specifically, the report addresses the problem of detecting faces in color images in the presence of various lighting conditions and complex backgrounds, as well as recognizing faces under variations...
Caldara, Roberto; Zhou, Xinyue; Miellet, Sébastien
Eye movement strategies employed by humans to identify conspecifics are not universal. Westerners predominantly fixate the eyes during face recognition, whereas Easterners fixate more on the nose region, yet recognition accuracy is comparable. However, natural fixations do not unequivocally represent information extraction. So the question of whether humans universally use identical facial information to recognize faces remains unresolved. We monitored eye movements during face recognition of Western Caucasian (WC) and East Asian (EA) observers with a novel technique in face recognition that parametrically restricts information outside central vision. We used 'Spotlights' with Gaussian apertures of 2 degrees, 5 degrees or 8 degrees dynamically centered on observers' fixations. Strikingly, in constrained Spotlight conditions (2 degrees and 5 degrees) observers of both cultures actively fixated the same facial information: the eyes and mouth. When information from both the eyes and mouth was simultaneously available when fixating the nose (8 degrees), EA observers, as expected, shifted their fixations towards this region. Social experience and cultural factors shape the strategies used to extract information from faces, but these results suggest that external forces do not modulate information use. Human beings rely on identical facial information to recognize conspecifics, a universal law that might be dictated by the evolutionary constraints of nature and not nurture.
Ali, Tauseef; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.; Quaglia, Adamo; Epifano, Calogera M.
The improvements in automatic face recognition during the last two decades have enabled new applications like border control and camera surveillance. A new application field is forensic face recognition. Traditionally, face recognition by human experts has been used in forensics, but now there is a
Santemiz, P.; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.
Side-view face recognition is a challenging problem with many applications. Especially in real-life scenarios where the environment is uncontrolled, coping with pose variations up to side-view positions is an important task for face recognition. In this paper we discuss the use of side view face
Santemiz, P.; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.; van den Biggelaar, Olivier
As a widely used biometric, face recognition has many advantages, such as being non-intrusive, natural and passive. On the other hand, in real-life scenarios with uncontrolled environments, pose variation up to side-view positions makes face recognition a challenging task. In this paper we discuss
Ali, Tauseef; Veldhuis, Raymond N.J.; Spreeuwers, Lieuwe Jan
Besides a few papers that focus on the forensic aspects of automatic face recognition, little has been published about it, in contrast to the literature on developing new techniques and methodologies for biometric face recognition. In this report, we review forensic facial identification, which is
Kowalski, M.; Grudzień, A.; Palka, N.; Szustakowski, M.
Biometrics refers to unique human characteristics. Each unique characteristic may be used to label and describe individuals and for automatic recognition of a person based on physiological or behavioural properties. One of the most natural and most popular biometric traits is the face. The most common research methods in face recognition are based on visible light. State-of-the-art face recognition systems operating in the visible light spectrum achieve a very high level of recognition accuracy under controlled environmental conditions. Thermal infrared imagery seems to be a promising alternative or complement to visible-range imaging due to its relatively high resistance to illumination changes. A thermal infrared image of the human face presents its unique heat signature and can be used for recognition. The characteristics of thermal images maintain advantages over visible light images and can be used to improve algorithms for human face recognition in several aspects. Mid-wavelength and far-wavelength infrared, also referred to as thermal infrared, seem to be promising alternatives. We present a study on 1:1 recognition in the thermal infrared domain. The two approaches we consider are stand-off face verification of a non-moving person and stop-less face verification on the move. The paper presents the methodology of our studies and the challenges for face recognition systems in the thermal infrared domain.
Boom, B.J.; Beumer, G.M.; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.
Accurate face registration is of vital importance to the performance of a face recognition algorithm. We propose a new method: matching score based face registration, which searches for optimal alignment by maximizing the matching score output of a classifier as a function of the different
Sochenkov, I.; Sochenkova, A.; Vokhmintsev, A.; Makovetskii, A.; Melnikov, A.
Face recognition is one of the most important tasks in computer vision and pattern recognition. Face recognition is useful for security systems to provide safety. In some situations it is necessary to identify a person among many others. For this case, this work presents a new approach to data indexing that provides fast retrieval in big image collections. Data indexing in this research consists of five steps. First, we detect the area containing the face; second, we align the face; then we detect the areas containing the eyes and eyebrows, the nose, and the mouth. After that we find key points of each area using different descriptors, and finally we index these descriptors with the help of a quantization procedure. An experimental analysis of this method is performed. This paper shows that the proposed method achieves results at the level of state-of-the-art face recognition methods while also returning results quickly, which is important for systems that provide safety.
Choi, Jonghyun; Hu, Shuowen; Young, S. Susan; Davis, Larry S.
In low light conditions, visible light face identification is infeasible due to the lack of illumination. For nighttime surveillance, thermal imaging is commonly used because of the intrinsic emissivity of thermal radiation from the human body. However, matching thermal images of faces acquired at nighttime to the predominantly visible light face imagery in existing government databases and watch lists is a challenging task. The difficulty arises from the significant difference between the face's thermal signature and its visible signature (i.e. the modality gap). To match the thermal face to the visible face acquired by the two different modalities, we applied face recognition algorithms that reduce the modality gap in each step of face identification, from low-level analysis to machine learning techniques. Specifically, partial least squares-discriminant analysis (PLS-DA) based approaches were used to correlate the thermal face signatures to the visible face signatures, yielding a thermal-to-visible face identification rate of 49.9%. While this work makes progress for thermal-to-visible face recognition, more efforts need to be devoted to solving this difficult task. Successful development of a thermal-to-visible face recognition system would significantly enhance the Nation's nighttime surveillance capabilities.
Over the past few decades, face recognition has become a rapidly growing research topic due to increasing demands in many applications of our daily life, such as airport surveillance, personal identification in law enforcement, surveillance systems, information safety, securing financial transactions, and computer security. The objective of this thesis is to develop a face recognition system capable of recognizing persons with high recognition capability and low processing time, under different illumination conditions and different facial expressions. The thesis presents a study of the performance of the face recognition system using two techniques: Principal Component Analysis (PCA) and Zernike Moments (ZM). The performance of the recognition system is evaluated according to several aspects, including the recognition rate and the processing time. Face recognition systems that use visual images are sensitive to variations in lighting conditions and facial expressions. The performance of these systems may be degraded under poor illumination conditions or for subjects of various skin colors. Several solutions have been proposed to overcome these limitations. One of these solutions is to work in the infrared (IR) spectrum. IR images have been suggested as an alternative source of information for the detection and recognition of faces when there is little or no control over lighting conditions. This arises from the fact that these images are formed by thermal emissions from the skin, which are an intrinsic property, because these emissions depend on the distribution of blood vessels under the skin. On the other hand, IR face recognition systems still have limitations with temperature variations and with recognizing persons wearing eyeglasses. In this thesis we fuse IR images with visible images to enhance the performance of face recognition systems. Images are fused using the wavelet transform. Simulation results show that the fusion of visible and
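Wavelet-based fusion of registered visible and IR images, as mentioned in the entry above, typically decomposes both images, combines the subbands, and inverts the transform. A minimal single-level Haar sketch follows; it is not the thesis's exact scheme, and the fusion rules (average the approximation, keep the larger-magnitude detail coefficient) are common illustrative choices.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2D Haar transform; height and width must be even."""
    a, d = (img[0::2] + img[1::2]) / 2, (img[0::2] - img[1::2]) / 2
    return ((a[:, 0::2] + a[:, 1::2]) / 2, (a[:, 0::2] - a[:, 1::2]) / 2,
            (d[:, 0::2] + d[:, 1::2]) / 2, (d[:, 0::2] - d[:, 1::2]) / 2)

def haar_idwt2(ll, lh, hl, hh):
    """Inverse of haar_dwt2 (perfect reconstruction)."""
    h, w = ll.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    out = np.empty((2 * h, 2 * w))
    out[0::2], out[1::2] = a + d, a - d
    return out

def fuse(visible, infrared):
    """Average the approximation subbands, keep the larger-magnitude detail
    coefficients, then invert the transform."""
    cv, ci = haar_dwt2(visible), haar_dwt2(infrared)
    ll = (cv[0] + ci[0]) / 2
    details = [np.where(np.abs(v) >= np.abs(i), v, i) for v, i in zip(cv[1:], ci[1:])]
    return haar_idwt2(ll, *details)

vis = np.outer(np.arange(8.0), np.ones(8))   # toy stand-ins for registered images
ir = np.ones((8, 8))
fused = fuse(vis, ir)
print(fused.shape)  # (8, 8)
```

Real systems usually use several decomposition levels and smoother wavelets, but the structure (decompose, combine per subband, reconstruct) is the same.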
Face recognition has several applications, including security (authentication and identification of device users and criminal suspects) and medicine (corrective surgery and diagnosis). Facial recognition programs rely on algorithms that can compare and compute the similarity between two sets of images. This eBook explains some of the similarity measures used in facial recognition systems in a single volume. Readers will learn about various measures including Minkowski distances, Mahalanobis distances, Hausdorff distances, and cosine-based distances, among other methods. The book also summarizes errors that may occur in face recognition methods. Computer scientists "facing face" and looking to select and test different methods of computing similarities will benefit from this book. The book is also a useful tool for students undertaking computer vision courses.
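The similarity measures listed above are straightforward to compute from their textbook definitions; a small sketch of several of them (definitions only, not taken from the book):

```python
import numpy as np

def minkowski(x, y, p):
    """Minkowski distance; p=1 is city-block, p=2 is Euclidean."""
    return float(np.sum(np.abs(x - y) ** p) ** (1.0 / p))

def mahalanobis(x, y, cov):
    """Mahalanobis distance under covariance matrix `cov`."""
    diff = x - y
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

def cosine_distance(x, y):
    """1 minus the cosine of the angle between the vectors."""
    return float(1.0 - (x @ y) / (np.linalg.norm(x) * np.linalg.norm(y)))

def hausdorff(A, B):
    """Hausdorff distance between two point sets (rows are points)."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))

x, y = np.array([1.0, 2.0]), np.array([4.0, 6.0])
print(minkowski(x, y, 1))   # 7.0
print(minkowski(x, y, 2))   # 5.0
```

Note the different input types: Minkowski, Mahalanobis and cosine compare two feature vectors, while Hausdorff compares two point sets, which is why it appears in shape- and edge-based matching.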
Feature extraction is one of the important tasks in face recognition. Moments are widely used feature extractors due to their superior discriminatory power and geometrical invariance. Moments generally capture the global features of the image. This paper proposes the Krawtchouk moment for feature extraction in face ...
Fatima Maria Felisberti
BACKGROUND: The ability to recognize the faces of potential cooperators and cheaters is fundamental to social exchanges, given that cooperation for mutual benefit is expected. Studies addressing biases in face recognition have so far proved inconclusive, with reports of biases towards faces of cheaters, biases towards faces of cooperators, or no biases at all. This study attempts to uncover possible causes underlying such discrepancies. METHODOLOGY AND FINDINGS: Four experiments were designed to investigate biases in face recognition during social exchanges when behavioral descriptors (prosocial, antisocial or neutral) embedded in different scenarios were tagged to faces during memorization. Face recognition, measured as accuracy and response latency, was tested with modified yes-no, forced-choice and recall tasks (N = 174). An enhanced recognition of faces tagged with prosocial descriptors was observed when the encoding scenario involved financial transactions and the rules of the social contract were not explicit (experiments 1 and 2). Such bias was eliminated or attenuated by making participants explicitly aware of "cooperative", "cheating" and "neutral/indifferent" behaviors via a pre-test questionnaire and then adding such tags to behavioral descriptors (experiment 3). Further, in a social judgment scenario with descriptors of salient moral behaviors, recognition of antisocial and prosocial faces was similar, but significantly better than neutral faces (experiment 4). CONCLUSION: The results highlight the relevance of descriptors and scenarios of social exchange in face recognition, when the frequency of prosocial and antisocial individuals in a group is similar. Recognition biases towards prosocial faces emerged when descriptors did not state the rules of a social contract or the moral status of a behavior, and they point to the existence of broad and flexible cognitive abilities finely tuned to minor changes in social context.
Felisberti, Fatima Maria; Pavey, Louisa
The ability to recognize the faces of potential cooperators and cheaters is fundamental to social exchanges, given that cooperation for mutual benefit is expected. Studies addressing biases in face recognition have so far proved inconclusive, with reports of biases towards faces of cheaters, biases towards faces of cooperators, or no biases at all. This study attempts to uncover possible causes underlying such discrepancies. Four experiments were designed to investigate biases in face recognition during social exchanges when behavioral descriptors (prosocial, antisocial or neutral) embedded in different scenarios were tagged to faces during memorization. Face recognition, measured as accuracy and response latency, was tested with modified yes-no, forced-choice and recall tasks (N = 174). An enhanced recognition of faces tagged with prosocial descriptors was observed when the encoding scenario involved financial transactions and the rules of the social contract were not explicit (experiments 1 and 2). Such bias was eliminated or attenuated by making participants explicitly aware of "cooperative", "cheating" and "neutral/indifferent" behaviors via a pre-test questionnaire and then adding such tags to behavioral descriptors (experiment 3). Further, in a social judgment scenario with descriptors of salient moral behaviors, recognition of antisocial and prosocial faces was similar, but significantly better than neutral faces (experiment 4). The results highlight the relevance of descriptors and scenarios of social exchange in face recognition, when the frequency of prosocial and antisocial individuals in a group is similar. Recognition biases towards prosocial faces emerged when descriptors did not state the rules of a social contract or the moral status of a behavior, and they point to the existence of broad and flexible cognitive abilities finely tuned to minor changes in social context.
Face recognition is an image processing technique that aims to identify human faces and has found use in various different fields, for example in security. Throughout the years this field has evolved, and there are many approaches and many different algorithms that aim to make face recognition as effective as possible. The use of different approaches such as neural networks and machine learning can lead to fast and efficient solutions; however, these solutions are expensive in terms of hardware resources and power consumption. A possible solution to this problem is the use of approximate arithmetic. In many image processing applications the results do not need to be completely precise, and the use of approximate arithmetic can lead to reductions in delay, space and power consumption. In this paper we examine the possible use of approximate arithmetic in face recognition using the Eigenfaces algorithm.
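The idea of approximate arithmetic can be illustrated with a truncated multiplier, one common approximation scheme: low-order operand bits are discarded before multiplying, trading a small bounded error for a smaller partial-product array in hardware. The specific scheme and bit widths below are illustrative, not the paper's design.

```python
def approx_mul(a, b, drop_bits=4):
    """Approximate unsigned integer multiply: discard the low `drop_bits`
    bits of each operand before multiplying, then shift the result back.
    The smaller partial-product array is cheaper in hardware; the error
    is bounded and often tolerable in image processing pipelines."""
    return ((a >> drop_bits) * (b >> drop_bits)) << (2 * drop_bits)

exact = 1000 * 3000
approx = approx_mul(1000, 3000)
print(approx, abs(exact - approx) / exact)  # small relative error (~1%)
```

In an Eigenfaces pipeline, such a multiplier could be used inside the projection and distance computations, where small per-product errors tend to average out over long dot products.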
Li, Jun-Bao; Pan, Jeng-Shyang
Kernel Learning Algorithms for Face Recognition covers the framework of kernel-based face recognition. This book discusses advanced kernel learning algorithms and their application to face recognition. The book also focuses on the theoretical derivation, the system framework and experiments involving kernel-based face recognition. Included within are algorithms for kernel-based face recognition, and also the feasibility of the kernel-based face recognition method. This book provides researchers in the pattern recognition and machine learning area with advanced face recognition methods and its new
The rapid recognition of familiar faces is crucial for social interactions. However, the actual speed with which recognition can be achieved remains largely unknown, as most studies have been carried out without any speed constraints. Different paradigms have been used, leading to conflicting results, and although many authors suggest that face recognition is fast, the speed of face recognition has not been directly compared to fast visual tasks. In this study, we sought to overcome these limitations. Subjects performed three tasks: a familiarity categorization task (famous faces among unknown faces), a superordinate categorization task (human faces among animal ones) and a gender categorization task. All tasks were performed under speed constraints. The results show that, despite the use of speed constraints, subjects were slow when they had to categorize famous faces: the minimum reaction time was 467 ms, which is 180 ms more than during superordinate categorization and 160 ms more than in the gender condition. Our results are compatible with a hierarchy of face processing from the superordinate level to the familiarity level. The processes taking place between detection and recognition need to be investigated in detail.
Dakshina Ranjan Kisku
Faces are highly challenging and dynamic objects that are employed as biometric evidence in identity verification. Recently, biometric systems have proven to be essential security tools, in which bulk matching of enrolled people against watch lists is performed every day. To facilitate this process, organizations with large computing facilities need to maintain those facilities. To minimize the burden of maintaining these costly facilities for enrollment and recognition, multinational companies can transfer this responsibility to third-party vendors who maintain cloud computing infrastructures for recognition. In this paper, we showcase cloud computing-enabled face recognition, which utilizes PCA-characterized face instances and reduces the number of invariant SIFT points that are extracted from each face. To achieve high interclass and low intraclass variance, a set of six PCA-characterized face instances is computed on the columns of each face image by varying the number of principal components. Extracted SIFT keypoints are fused using sum and max fusion rules. A novel cohort selection technique is applied to increase the overall performance. The proposed prototype is tested on the BioID and FEI face databases, and the efficacy of the system is demonstrated by the obtained results. We also compare the proposed method with other well-known methods.
Lewinski, Peter; Trzaskowski, Jan; Luzak, Joasia
This paper integrates and cuts through the domains of privacy law and biometrics. Specifically, this paper presents a legal analysis of the use of Automated Facial Recognition Systems (the AFRS) in commercial (retail store) settings within the European Union data protection framework. The AFRS is a t... and legitimate processing of personal data, which finally leads to an overview of measures that traders can take to comply with data protection law, including by means of information, consent, and anonymization.
Zou, Wilman W W; Yuen, Pong C
This paper addresses the very low resolution (VLR) problem in face recognition in which the resolution of the face image to be recognized is lower than 16 × 16. With the increasing demand of surveillance camera-based applications, the VLR problem happens in many face application systems. Existing face recognition algorithms are not able to give satisfactory performance on the VLR face image. While face super-resolution (SR) methods can be employed to enhance the resolution of the images, the existing learning-based face SR methods do not perform well on such a VLR face image. To overcome this problem, this paper proposes a novel approach to learn the relationship between the high-resolution image space and the VLR image space for face SR. Based on this new approach, two constraints, namely, new data and discriminative constraints, are designed for good visuality and face recognition applications under the VLR problem, respectively. Experimental results show that the proposed SR algorithm based on relationship learning outperforms the existing algorithms in public face databases.
Face recognition is a technology that appeals to the imagination of many people. This is particularly reflected in the popularity of science-fiction films and forensic detective series such as CSI, CSI New York, CSI Miami, Bones and NCIS. Although these series tend to be set in the present, their
Ali, Tauseef; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.
In this paper we present a methodology and experimental results for evidence evaluation in the context of forensic face recognition. In forensic applications, the matching score (hereafter referred to as similarity score) from a biometric system must be represented as a Likelihood Ratio (LR). In our
The need to increase security in open or public spaces has in turn given rise to the requirement to monitor these spaces and analyse those images on-site and on time. At this point, the use of smart cameras, whose popularity has been increasing, is one step ahead. With sensors and Digital Signal Processors (DSPs), smart cameras generate ad hoc results by analysing the numeric images transmitted from the sensor by means of a variety of image-processing algorithms. Since the images are not transmitted to a distant processing unit but rather are processed inside the camera, this approach does not necessitate high-bandwidth networks or high-powered processing systems; it can instantaneously decide on the required access. Nonetheless, on account of restricted memory, processing power and overall power, image processing algorithms need to be developed and optimized for embedded processors. Among these algorithms, one of the most important is face detection and recognition. A number of face detection and recognition methods have been proposed recently, and many of these methods have been tested on general-purpose processors. In smart cameras, which are real-life applications of such methods, the widest use is on DSPs. In the present study, the Viola-Jones face detection method, which was reported to run faster on PCs, was optimized for DSPs; the face recognition method was combined with the developed sub-region and mask-based DCT (Discrete Cosine Transform). As the employed DSP is a fixed-point processor, the processes were performed with integers insofar as possible. To enable face recognition, the image was divided into sub-regions, and from each sub-region the coefficients robust against disruptive elements, like facial expression, illumination, etc., were selected as the features. The discrimination of the selected features was enhanced via LDA (Linear Discriminant Analysis) and then employed for recognition. Thanks to its
Automatic face recognition remains an interesting but challenging open problem in computer vision. Poor illumination is considered one of the major issues, since illumination changes cause large variations in facial features. To address this, illumination normalization preprocessing techniques are employed in this paper to enhance the face recognition rate. Methods such as Histogram Equalization (HE), Gamma Intensity Correction (GIC), normalization chain and Modified Homomorphic Filtering (MHF) are used for preprocessing. Owing to their great success, texture features are commonly used for face recognition, but these features are severely affected by lighting changes. Hence the texture-based models Local Binary Pattern (LBP), Local Derivative Pattern (LDP), Local Texture Pattern (LTP) and Local Tetra Patterns (LTrPs) are experimented with under different lighting conditions. In this paper, an illumination-invariant face recognition technique is developed based on the fusion of illumination preprocessing with local texture descriptors. The performance has been evaluated using the Yale B and CMU-PIE databases, containing more than 1500 images. The results demonstrate that MHF-based normalization gives a significant improvement in recognition rate for face images under large illumination variations.
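The basic LBP descriptor mentioned above thresholds each pixel's 3x3 neighbourhood against its centre; a minimal sketch (histogram pooling and the LDP/LTP/LTrP variants are omitted, and the neighbour ordering is one common convention):

```python
import numpy as np

def lbp(img):
    """Basic 3x3 LBP codes for the interior pixels of a grayscale image:
    each of the 8 neighbours contributes one bit if it is >= the centre."""
    c = img[1:-1, 1:-1].astype(np.int32)
    # 8 neighbour offsets, clockwise starting at the top-left corner
    offsets = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    codes = np.zeros(c.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[dy:dy + c.shape[0], dx:dx + c.shape[1]].astype(np.int32)
        codes |= (nb >= c).astype(np.uint8) << bit
    return codes

img = np.array([[5, 5, 5],
                [5, 4, 5],
                [5, 5, 5]], dtype=np.uint8)
print(lbp(img))  # [[255]]  (every neighbour exceeds the centre)
```

In practice, histograms of these codes over image sub-regions are concatenated into the texture feature vector; the comparison against the centre is what makes the code invariant to monotonic gray-level changes, hence the interest in combining it with illumination normalization.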
An Elastic Bunch Graph Map (EBGM) algorithm is proposed in this paper that implements face recognition using Gabor filters. The proposed system applies 40 different Gabor filters to an image, producing 40 filtered images with different angles and orientations. Next, the maximum-intensity points in each filtered image are calculated and marked as fiducial points. The system reduces these points according to the distance between them. The next step is calculating the distances between the reduced points using the distance formula. Finally, the distances are compared with the database; if a match occurs, the image is recognized.
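A bank of 40 Gabor filters, 5 scales by 8 orientations as is typical in EBGM-style systems, can be generated as follows (all parameter values below are illustrative, not the paper's):

```python
import numpy as np

def gabor_kernel(size, sigma, theta, lam, psi=0.0, gamma=0.5):
    """Real part of a 2D Gabor filter: a Gaussian envelope times a cosine
    carrier at orientation `theta` and wavelength `lam`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    env = np.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
    return env * np.cos(2 * np.pi * xr / lam + psi)

# A 5-scale x 8-orientation bank (40 filters in total).
bank = [gabor_kernel(31, sigma=4.0, theta=o * np.pi / 8, lam=4.0 * 2 ** (s / 2))
        for s in range(5) for o in range(8)]
print(len(bank), bank[0].shape)  # 40 (31, 31)
```

Convolving the face image with each kernel and taking response magnitudes at a point yields the 40-dimensional "jet" used to characterize fiducial points in EBGM-style matching.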
Face recognition is not rooted in a universal eye movement information-gathering strategy. Western observers favor a local facial feature sampling strategy, whereas Eastern observers prefer sampling face information from a global, central fixation strategy. Yet the precise qualitative (the diagnostic) and quantitative (the amount of) information underlying these cultural perceptual biases in face recognition remains undetermined. To this end, we monitored the eye movements of Western and Eastern observers during a face recognition task with a novel gaze-contingent technique: the Expanding Spotlight. We used 2° Gaussian apertures centered on the observers' fixations, expanding dynamically at a rate of 1° every 25 ms at each fixation; the longer the fixation duration, the larger the aperture size. Identity-specific face information was only displayed within the Gaussian aperture; outside the aperture, an average face template was displayed to facilitate saccade planning. Thus, the Expanding Spotlight simultaneously maps out the facial information span at each fixation location. Data obtained with the Expanding Spotlight technique confirmed that Westerners extract more information from the eye region, whereas Easterners extract more information from the nose region. Interestingly, this quantitative difference was paired with a qualitative disparity. Retinal filters based on spatial frequency decomposition built from the fixation maps revealed that Westerners used local high-spatial-frequency information sampling, covering all the features critical for effective face recognition (the eyes and the mouth). In contrast, Easterners achieved a similar result by using global low-spatial-frequency information from those facial features. Our data show that the face system flexibly engages in local or global eye movement strategies across cultures by relying on distinct facial information spans and culturally tuned spatially filtered information. Overall, our
Full Text Available In this paper, a novel nonintrusive three-dimensional (3D) face modeling system for random-profile-based 3D face recognition is presented. Although recent two-dimensional (2D) face recognition systems can achieve a reliable recognition rate under certain conditions, their performance is limited by internal and external changes, such as illumination and pose variation. To address these issues, 3D face recognition, which uses 3D face data, has recently received much attention. However, the performance of 3D face recognition highly depends on the precision of the acquired 3D face data, while also requiring more computational power and storage capacity than 2D face recognition systems. In this paper, we present a developed nonintrusive 3D face modeling system composed of a stereo vision system and an invisible near-infrared line laser, which can be directly applied to profile-based 3D face recognition. We further propose a novel random-profile-based 3D face recognition method that is memory-efficient and pose-invariant. The experimental results demonstrate that the reconstructed 3D face data consist of more than 50 k 3D point clouds and that a reliable recognition rate is achieved against pose variation.
Hteik Htar Lwin
Full Text Available Abstract Most doors are controlled by persons using keys, security cards, passwords, or patterns to open the door. The aim of this paper is to help users improve the door security of sensitive locations by using face detection and recognition. The face is a complex multidimensional structure and needs good computing techniques for detection and recognition. This paper comprises mainly three subsystems: face detection, face recognition, and automatic door access control. Face detection is the process of detecting the region of the face in an image. The face is detected using the Viola-Jones method, and face recognition is implemented using Principal Component Analysis (PCA). Face recognition based on PCA is generally referred to as the use of eigenfaces. If a face is recognized, it is known; otherwise, it is unknown. The door will open automatically for a known person on command from the microcontroller; for an unknown person, an alarm will ring instead. Since PCA reduces the dimensions of face images without losing important features, facial images for many persons can be stored in the database, and even when many training images are used, computational efficiency is not significantly decreased. Therefore, face recognition using PCA can be more useful for a door security system than other face recognition schemes.
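The PCA step at the heart of such an eigenface pipeline can be sketched in plain Python. Recovering only the leading eigenface via power iteration, and the accept/reject threshold for the known/unknown decision, are simplifications chosen here for illustration, not the paper's exact procedure:

```python
import random

def mean_vec(X):
    """Per-dimension mean of a list of equal-length face vectors."""
    n = len(X)
    return [sum(row[i] for row in X) / n for i in range(len(X[0]))]

def top_eigenface(X, iters=200):
    """Leading principal component of the training faces via power iteration."""
    mu = mean_vec(X)
    C = [[x - m for x, m in zip(row, mu)] for row in X]  # mean-centred data
    rng = random.Random(0)
    v = [rng.random() + 0.1 for _ in range(len(mu))]
    for _ in range(iters):
        # Multiply by the covariance matrix without forming it: w = C^T (C v)
        Cv = [sum(c * x for c, x in zip(row, v)) for row in C]
        w = [sum(C[k][i] * Cv[k] for k in range(len(C))) for i in range(len(v))]
        norm = sum(x * x for x in w) ** 0.5 or 1.0
        v = [x / norm for x in w]
    return mu, v

def project(face, mu, v):
    """Coordinate of a face along the leading eigenface."""
    return sum((x - m) * e for x, m, e in zip(face, mu, v))

def door_decision(face, mu, v, gallery, threshold):
    """'known' name opens the door; 'unknown' would trigger the alarm."""
    p = project(face, mu, v)
    name, ref = min(gallery.items(), key=lambda kv: abs(kv[1] - p))
    return name if abs(ref - p) <= threshold else "unknown"
```

A real system would keep several eigenfaces (a full projection matrix), but the recognition logic, projecting onto the subspace and thresholding a nearest-neighbour distance, is the same.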
Full Text Available The ability to recognise face images under random pose is a task that is done effortlessly by human beings. However, for a computer system, recognising face images under varying poses still remains an open research area. Face recognition across pose...
Dutta, A.; van Rootseler, R.T.A.; Veldhuis, Raymond N.J.; Spreeuwers, Lieuwe Jan
Face recognition is a challenging problem for surveillance view images commonly encountered in a forensic face recognition case. One approach to deal with a non-frontal test image is to synthesize the corresponding frontal view image and compare it with frontal view reference images. However, it is
Hsieh, Cheng-Ta; Huang, Kae-Horng; Lee, Chang-Hsing; Han, Chin-Chuan; Fan, Kuo-Chin
Robust face recognition under illumination variations is an important and challenging task in a face recognition system, particularly for face recognition in the wild. In this paper, a face image preprocessing approach, called spatial adaptive shadow compensation (SASC), is proposed to eliminate shadows in the face image due to different lighting directions. First, spatial adaptive histogram equalization (SAHE), which uses a face intensity prior model, is proposed to enhance the contrast of each local face region without generating visible noise in smooth face areas. Adaptive shadow compensation (ASC), which performs shadow compensation in each local image block, is then used to produce a well-compensated face image appropriate for face feature extraction and recognition. Finally, null-space linear discriminant analysis (NLDA) is employed to extract discriminant features from SASC-compensated images. Experiments performed on the Yale B, extended Yale B, and CMU PIE face databases have shown that the proposed SASC always yields the best face recognition accuracy. That is, SASC is more robust to face recognition under illumination variations than other shadow compensation approaches.
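The basic building block of such local compensation, processing each image block independently, can be sketched as follows. Plain per-block histogram equalization and a fixed block size stand in for the paper's adaptive, prior-guided variants:

```python
def equalize_block(block, levels=256):
    """Classic histogram equalization of one grayscale block."""
    flat = [p for row in block for p in row]
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    cdf, running = [], 0
    for count in hist:          # cumulative distribution of intensities
        running += count
        cdf.append(running)
    cdf_min = next(c for c in cdf if c > 0)
    denom = max(len(flat) - cdf_min, 1)
    lut = [round((c - cdf_min) / denom * (levels - 1)) for c in cdf]
    return [[lut[p] for p in row] for row in block]

def blockwise_equalize(image, bs):
    """Apply equalization independently to each bs x bs block (local compensation)."""
    out = [row[:] for row in image]
    for r in range(0, len(image), bs):
        for c in range(0, len(image[0]), bs):
            block = [row[c:c + bs] for row in image[r:r + bs]]
            for i, row in enumerate(equalize_block(block)):
                out[r + i][c:c + len(row)] = row
    return out
```

Because each block gets its own intensity mapping, a shadowed region is stretched independently of well-lit regions, which is the intuition behind compensating shadows locally rather than globally.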
Nikisins, Olegs; Nasrollahi, Kamal; Greitans, Modris
Facial images are of critical importance in many real-world applications from gaming to surveillance. Current facial image analyses, from face detection to face and facial expression recognition, are mainly performed in either RGB, Depth (D), or both of these modalities. But......, such analyses have rarely included the Thermal (T) modality. This paper paves the way for performing such facial analyses using synchronized RGB-D-T facial images by introducing a database of 51 persons including facial images of different rotations, illuminations, and expressions. Furthermore, a face recognition...... algorithm has been developed to use these images. The experimental results show that face recognition using these three modalities provides better results than face recognition in any single one of them in most of the cases....
Sang, Gaoli; Li, Jing; Zhao, Qijun
Three-dimensional (3D) face models can intrinsically handle the large-pose face recognition problem. In this paper, we propose a novel pose-invariant face recognition method via RGB-D images. By employing depth, our method is able to handle self-occlusion and deformation, both of which are challenging problems in two-dimensional (2D) face recognition. Texture images in the gallery can be rendered to the same view as the probe via depth. Meanwhile, depth is also used for similarity measurement via frontalization and symmetric filling. Finally, both texture and depth contribute to the final identity estimation. Experiments on the Bosphorus, CurtinFaces, Eurecom, and Kiwi databases demonstrate that the additional depth information has improved the performance of face recognition with large pose variations and under even more challenging conditions.
The present invention provides a novel system and method for identifying individuals and for face recognition utilizing facial features for face identification. The system and method of the invention comprise creating facial features or face patterns, called face pattern words and face pattern bytes, for face identification. The invention also provides for pattern recognition for identification tasks other than face recognition. The invention further provides a means for identifying individuals based on visible and/or thermal images of those individuals, by utilizing computer software implemented by instructions on a computer or computer system and a computer readable medium containing instructions on a computer system for face recognition and identification.
Full Text Available Recently, face recognition has been attracting much attention in the society of network multimedia information access. Areas such as network security, content indexing and retrieval, and video compression benefit from face recognition technology because "people" are the center of attention in a lot of video. Network access control via face recognition not only makes it virtually impossible for hackers to steal one's "password", but also increases the user-friendliness of human-computer interaction. Indexing and/or retrieving video data based on the appearances of particular persons will be useful for users such as news reporters, political scientists, and moviegoers. For the applications of videophone and teleconferencing, the assistance of face recognition also provides a more efficient coding scheme. In this paper, we give an introductory course on this new information processing technology. The paper shows the readers the generic framework for the face recognition system, and the variants that are frequently encountered by the face recognizer. Several famous face recognition algorithms, such as eigenfaces and neural networks, will also be explained.
Krüger, Volker; Zhou, Shaohua; Chellappa, Rama
The ability to integrate information over time in order to come to a conclusion is a strength of a cognitive system. It allows the cognitive system to verify insecure observations: this is the case when the data is noisy or the conditions are non-optimal; and to exploit general knowledge about spatio-temporal relations: this allows the system to use dynamics as well as to generate warnings when 'implausible' situations occur, or to circumvent these altogether. We have studied the effectiveness of temporal integration for recognition purposes by using face recognition as an example problem. Face recognition...... is a prominent problem and has been studied more extensively than almost any other recognition problem. An observation is that face recognition works well in ideal conditions. If those conditions, however, are not met, then all present algorithms break down disgracefully. This problem appears to be general...
Chen, Weiping; Gao, Yongsheng
In this paper, we present a syntactic string matching approach to solve the frontal face recognition problem. String matching is a powerful partial matching technique, but it is not directly suitable for frontal face recognition because it requires a globally sequential representation, whereas human faces are complex, containing discontinuous and non-sequential features. Here, we build a compact syntactic Stringface representation, which is an ensemble of strings. A novel ensemble string matching approach that can perform non-sequential string matching between two Stringfaces is proposed. It is invariant to the sequential order of strings and the direction of each string. The embedded partial matching mechanism enables our method to automatically use every piece of non-occluded region, regardless of shape, in the recognition process. The encouraging results demonstrate the feasibility and effectiveness of using syntactic methods for face recognition from a single exemplar image per person, breaking the barrier that prevents string matching techniques from being used for addressing complex image recognition problems. The proposed method not only achieved significantly better performance in recognizing partially occluded faces, but also showed its ability to perform direct matching between sketch faces and photo faces.
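As a concrete illustration of the string-matching primitive underlying such an approach, here is the standard dynamic-programming edit distance, together with a toy ensemble distance that, like the Stringface matcher, ignores the order of strings and the direction of each string. The specific combination rule (sum of best matches) is an assumption for illustration, not the paper's actual cost function:

```python
def edit_distance(a, b):
    """Classic dynamic-programming (Levenshtein) distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def ensemble_distance(face_a, face_b):
    """Order- and direction-invariant distance between two string ensembles:
    each string in A is matched to its best counterpart in B, forward or reversed."""
    return sum(min(min(edit_distance(s, t), edit_distance(s, t[::-1]))
                   for t in face_b)
               for s in face_a)
```

Taking the minimum over both orientations of every candidate string is what makes the toy matcher direction-invariant, mirroring the invariance claimed for the ensemble matching above.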
Liu, Ran R.; Pancaroglu, Raika; Hills, Charlotte S.; Duchaine, Brad; Barton, Jason J. S.
Right or bilateral anterior temporal damage can impair face recognition, but whether this is an associative variant of prosopagnosia or part of a multimodal disorder of person recognition is an unsettled question, with implications for cognitive and neuroanatomic models of person recognition. We assessed voice perception and short-term recognition of recently heard voices in 10 subjects with impaired face recognition acquired after cerebral lesions. All 4 subjects with apperceptive prosopagnosia due to lesions limited to fusiform cortex had intact voice discrimination and recognition. One subject with bilateral fusiform and anterior temporal lesions had a combined apperceptive prosopagnosia and apperceptive phonagnosia, the first such described case. Deficits indicating a multimodal syndrome of person recognition were found only in 2 subjects with bilateral anterior temporal lesions. All 3 subjects with right anterior temporal lesions had normal voice perception and recognition, 2 of whom performed normally on perceptual discrimination of faces. This confirms that such lesions can cause a modality-specific associative prosopagnosia. PMID:25349193
Sochenkova, A.; Sochenkov, I.; Makovetskii, A.; Vokhmintsev, A.; Melnikov, A.
Computer vision tasks have remained very important over the last couple of years. One of the most complicated problems in computer vision is face recognition, which can be used in security systems to provide safety and to identify a person among others. There is a variety of different approaches to solving this task, but there is still no universal solution that gives adequate results in all cases. The current paper presents the following approach. Firstly, we extract the area containing the face; then we use a Canny edge detector. In the next stage we use convolutional neural networks (CNN) to finally solve the face recognition and person identification task.
Hsiao, Janet H; Liu, Tina T
In English word recognition, the best recognition performance is usually obtained when the initial fixation is directed to the left of the center (optimal viewing position, OVP). This effect has been argued to involve an interplay of left hemisphere lateralization for language processing and the perceptual experience of fixating at word beginnings most often. While both factors predict a left-biased OVP in visual word recognition, in face recognition they predict contrasting biases: People prefer to fixate the left half-face, suggesting that the OVP should be to the left of the center; nevertheless, the right hemisphere lateralization in face processing suggests that the OVP should be to the right of the center in order to project most of the face to the right hemisphere. Here, we show that the OVP in face recognition was to the left of the center, suggesting greater influence from the perceptual experience than hemispheric asymmetry in central vision. In contrast, hemispheric lateralization effects emerged when faces were presented away from the center; there was an interaction between presented visual field and location (center vs. periphery), suggesting differential influence from perceptual experience and hemispheric asymmetry in central and peripheral vision.
Crenna, F; Bovio, L; Rossi, G B; Zappa, E; Testa, R; Gasparetto, M
Automatic face recognition is a biometric technique particularly appreciated in security applications. In fact, face recognition offers the opportunity to operate at a low invasive level without the collaboration of the subjects under test, with face images gathered either from surveillance systems or from specific cameras located at strategic points. The automatic recognition algorithms perform a measurement, on the face images, of a set of specific characteristics of the subject and provide a recognition decision based on the measurement results. Unfortunately, several quantities may influence the measurement of the face geometry, such as its orientation, the lighting conditions, the expression, and so on, affecting the recognition rate. On the other hand, human recognition of faces is a very robust process, far less influenced by the surrounding conditions. For this reason it may be interesting to insert perceptual aspects into an automatic facial-based recognition algorithm to improve its robustness. This paper presents a first study in this direction, investigating the correlation between the results of a perception experiment and the facial geometry, estimated by means of the position of a set of repère (reference) points.
Daoudi, Mohamed; Veltkamp, Remco
3D Face Modeling, Analysis and Recognition presents methodologies for analyzing shapes of facial surfaces, develops computational tools for analyzing 3D face data, and illustrates them using state-of-the-art applications. The methodologies chosen are based on efficient representations, metrics, comparisons, and classifications of features that are especially relevant in the context of 3D measurements of human faces. These frameworks have a long-term utility in face analysis, taking into account the anticipated improvements in data collection, data storage, processing speeds, and applications.
[List-of-figures residue: frames extracted from the movies Pocahontas and The Little Mermaid II (Disney Enterprises, Inc.).] ...a promising solution to the representation of facial components for recognition. However, very little work has been done in face recognition based on facial...
Zhou, Saohua; Krüger, Volker; Chellappa, Rama
Recognition of human faces using a gallery of still or video images and a probe set of videos is systematically investigated using a probabilistic framework. In still-to-video recognition, where the gallery consists of still images, a time series state space model is proposed to fuse temporal...... demonstrate that, due to the propagation of the identity variable over time, a degeneracy in posterior probability of the identity variable is achieved to give improved recognition. The gallery is generalized to videos in order to realize video-to-video recognition. An exemplar-based learning strategy...... of the identity variable produces the recognition result. The model formulation is very general and it allows a variety of image representations and transformations. Experimental results using videos collected by NIST/USF and CMU illustrate the effectiveness of this approach for both still-to-video and video...
Abboud, Ali J.; Sellahewa, Harin; Jassim, Sabah A.
Recent advances in biometric technology have pushed towards more robust and reliable systems. We aim to build systems that have low recognition errors and are less affected by variations in recording conditions. Recognition errors are often attributed to the use of low-quality biometric samples. Hence, there is a need to develop new intelligent techniques and strategies to automatically measure/quantify the quality of biometric image samples and, if necessary, restore image quality according to the needs of the intended application. In this paper, we present no-reference image quality measures in the spatial domain that have an impact on face recognition. The first is called the symmetrical adaptive local quality index (SALQI) and the second is called middle halve (MH). Also, an adaptive strategy has been developed to select the best way to restore the image quality, called symmetrical adaptive histogram equalization (SAHE). The main benefits of using quality measures for the adaptive strategy are: (1) avoidance of excessive unnecessary enhancement procedures that may cause undesired artifacts, and (2) reduced computational complexity, which is essential for real-time applications. We test the success of the proposed measures and adaptive approach for a wavelet-based face recognition system that uses the nearest neighborhood classifier. We demonstrate noticeable improvements in the performance of the adaptive face recognition system over the corresponding non-adaptive scheme.
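While the exact SALQI and MH formulas are not reproduced here, a minimal symmetry-based no-reference quality score of the same flavour can be sketched. The normalization by 255 assumes 8-bit grayscale, and the score itself is a hypothetical simplification, not the published index:

```python
def symmetry_quality(image):
    """Toy no-reference quality score in [0, 1]: 1 means the left half
    equals the mirrored right half (perfect bilateral symmetry)."""
    w = len(image[0])
    half = w // 2
    diff, count = 0, 0
    for row in image:
        left = row[:half]
        right_mirrored = row[w - half:][::-1]  # right half, flipped
        for a, b in zip(left, right_mirrored):
            diff += abs(a - b)
            count += 1
    return 1 - diff / (255 * count)
```

An adaptive strategy in the spirit of SAHE would then enhance an image only when its quality score falls below a chosen threshold, which is exactly benefit (1) above: no unnecessary enhancement of already-good samples.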
Farokhi, Sajad; Flusser, Jan; Sheikh, U. U.
Vol. 21, No. 1 (2016), pp. 1-17. ISSN 1574-0137. R&D Projects: GA ČR GA15-16928S. Institutional support: RVO:67985556. Keywords: literature survey; biometrics; face recognition; near infrared; illumination invariant. Subject RIV: JD - Computer Applications, Robotics. http://library.utia.cas.cz/separaty/2016/ZOI/flusser-0461834.pdf
Full Text Available The quality of the smartphone's camera enables us to capture high-quality pictures at a high resolution, so we can perform different types of recognition on these images. Face detection is one of these types of recognition that is very common in our society. We use it every day on Facebook to tag friends in our pictures. It is also used in video games alongside the Kinect concept, or in security to allow access to private places only to authorized persons. These are just some examples of the uses of facial recognition, because in modern society, detection and facial recognition tend to surround us everywhere. The aim of this article is to create an application for smartphones that can recognize human faces. The main goal of this application is to grant access to certain areas or rooms only to certain authorized persons. For example, we can speak here of hospitals or educational institutions where there are rooms that only certain employees can enter. Of course, this type of application can cover a wide range of uses, such as helping people suffering from Alzheimer's to recognize the people they love, helping persons who cannot remember the names of their relatives, or, for example, automatically capturing the face of our own children when they smile.
Bi, Chao; Zhang, Lei; Qi, Miao; Zheng, Caixia; Yi, Yugen; Wang, Jianzhong; Zhang, Baoxue
Representation based classification methods, such as Sparse Representation Classification (SRC) and Linear Regression Classification (LRC) have been developed for face recognition problem successfully. However, most of these methods use the original face images without any preprocessing for recognition. Thus, their performances may be affected by some problematic factors (such as illumination and expression variances) in the face images. In order to overcome this limitation, a novel supervised filter learning algorithm is proposed for representation based face recognition in this paper. The underlying idea of our algorithm is to learn a filter so that the within-class representation residuals of the faces' Local Binary Pattern (LBP) features are minimized and the between-class representation residuals of the faces' LBP features are maximized. Therefore, the LBP features of filtered face images are more discriminative for representation based classifiers. Furthermore, we also extend our algorithm for heterogeneous face recognition problem. Extensive experiments are carried out on five databases and the experimental results verify the efficacy of the proposed algorithm.
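The LBP features on which the learned filter operates can be sketched as follows. This is the classic 8-neighbour LBP descriptor, not the authors' learned-filter variant:

```python
def lbp_code(img, r, c):
    """8-neighbour Local Binary Pattern code of pixel (r, c): each neighbour
    contributes one bit, set when it is >= the center intensity."""
    center = img[r][c]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= center:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """LBP feature vector: histogram of codes over all interior pixels."""
    hist = [0] * 256
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            hist[lbp_code(img, r, c)] += 1
    return hist
```

Because each code depends only on intensity *orderings* around a pixel, LBP histograms are fairly robust to monotonic illumination changes, which is one reason they are a natural input for the representation-based classifiers discussed above.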
Full Text Available Face recognition based on multiple imaging modalities has become a hot research topic. A great number of multispectral face recognition algorithms/systems have been designed in the last decade. How to extract features from different spectra is still an important issue for face recognition. To address this problem, we propose a robust tensor preserving projection (RTPP) algorithm which represents a multispectral image as a third-order tensor. RTPP constructs sparse neighborhoods and then computes the weights of the tensor. RTPP iteratively obtains one spectral space transformation matrix through preserving the sparse neighborhoods. Due to sparse representation, RTPP can not only keep the underlying spatial structure of multispectral images but also enhance robustness. The experiments on both the Equinox and DHUFO face databases show that the performance of the proposed method is better than those of related algorithms.
Yi, Lihamu; Ya, Ermaimaiti
In this paper, to address the reduced recognition rate and poor robustness of Uyghur face recognition under illumination and partial occlusion, a Uyghur face recognition method combining the Two-Dimensional Discrete Cosine Transform (2DDCT) with Patterns of Oriented Edge Magnitudes (POEM) was proposed. Firstly, the Uyghur face images were divided into an 8×8 block matrix, and the block-processed images were converted into the frequency domain using the 2DDCT; secondly, the images were compressed to exclude the non-sensitive medium-frequency and non-high-frequency parts, reducing the feature dimensions necessary for the Uyghur face images and further reducing the amount of computation; thirdly, the corresponding POEM histograms of the Uyghur face images were obtained by calculating the POEM feature quantities; fourthly, the POEM histograms were cascaded together as the texture histogram of the center feature point to obtain the texture features of the Uyghur face feature points; finally, classification of the training samples was carried out using a deep learning algorithm. The simulation results showed that the proposed algorithm further improved the recognition rate on the self-built Uyghur face database, greatly improved the computing speed, and had strong robustness.
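The 2DDCT used in the first two steps can be written down directly from its definition. This naive O(N^4) version is for illustration only; real systems use a fast separable transform:

```python
from math import cos, pi, sqrt

def dct2(block):
    """Naive 2-D DCT-II of an N x N block (fine for 8 x 8 sketches)."""
    n = len(block)

    def alpha(k):  # orthonormal scaling factors
        return sqrt(1 / n) if k == 0 else sqrt(2 / n)

    out = []
    for u in range(n):
        row = []
        for v in range(n):
            s = sum(block[x][y]
                    * cos((2 * x + 1) * u * pi / (2 * n))
                    * cos((2 * y + 1) * v * pi / (2 * n))
                    for x in range(n) for y in range(n))
            row.append(alpha(u) * alpha(v) * s)
        out.append(row)
    return out
```

Energy compaction is the point of this transform: for smooth blocks nearly all energy lands in the low-frequency corner, which is why whole bands of coefficients can be discarded to shrink the feature dimension, as the second step above describes.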
Berretti, Stefano; Del Bimbo, Alberto; Pala, Pietro
In this paper, we present a novel approach to 3D face matching that shows high effectiveness in distinguishing facial differences between distinct individuals from differences induced by nonneutral expressions within the same individual. The approach takes into account geometrical information of the 3D face and encodes the relevant information into a compact representation in the form of a graph. Nodes of the graph represent equal width isogeodesic facial stripes. Arcs between pairs of nodes are labeled with descriptors, referred to as 3D Weighted Walkthroughs (3DWWs), that capture the mutual relative spatial displacement between all the pairs of points of the corresponding stripes. Face partitioning into isogeodesic stripes and 3DWWs together provide an approximate representation of local morphology of faces that exhibits smooth variations for changes induced by facial expressions. The graph-based representation permits very efficient matching for face recognition and is also suited to being employed for face identification in very large data sets with the support of appropriate index structures. The method obtained the best ranking at the SHREC 2008 contest for 3D face recognition. We present an extensive comparative evaluation of the performance with the FRGC v2.0 data set and the SHREC08 data set.
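A much-simplified sketch of the stripe partitioning is given below. Euclidean distance from the nose tip stands in for the geodesic distance actually used by the authors, and equal-width binning of the normalized distance mirrors the "equal width isogeodesic stripes"; the 3DWW descriptors on the arcs are not reproduced:

```python
from math import dist

def isogeodesic_stripes(points, nose_tip, n_stripes):
    """Partition surface points into equal-width stripes by normalized distance
    from the nose tip (Euclidean stand-in for the geodesic distance)."""
    d = [dist(p, nose_tip) for p in points]
    dmax = max(d) or 1.0
    stripes = [[] for _ in range(n_stripes)]
    for p, di in zip(points, d):
        idx = min(int(di / dmax * n_stripes), n_stripes - 1)
        stripes[idx].append(p)
    return stripes
```

Each stripe would then become one node of the graph, with arcs between stripe pairs labeled by descriptors of their relative spatial displacement.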
Pujol Francisco A.
Full Text Available In this work, a modified version of the elastic bunch graph matching (EBGM) algorithm for face recognition is introduced. First, faces are detected by using a fuzzy skin detector based on the RGB color space. Then, the fiducial points for the facial graph are extracted automatically by adjusting a grid of points to the result of an edge detector. After that, the position of the nodes, their relation with their neighbors, and their Gabor jets are calculated in order to obtain the feature vector defining each face. A self-organizing map (SOM) framework is shown afterwards. Thus, the calculation of the winning neuron and the recognition process are performed by using a similarity function that takes into account both the geometric and texture information of the facial graph. The set of experiments carried out for our SOM-EBGM method shows the accuracy of our proposal when compared with other state-of-the-art methods.
Etchells, David B; Brooks, Joseph L; Johnston, Robert A
Many models of face recognition incorporate the idea of a face recognition unit (FRU), an abstracted representation formed from each experience of a face which aids recognition under novel viewing conditions. Some previous studies have failed to find evidence of this FRU representation. Here, we report three experiments which investigated this theoretical construct by modifying the face learning procedure from that in previous work. During learning, one or two views of previously unfamiliar faces were shown to participants in a serial matching task. Later, participants attempted to recognize both seen and novel views of the learned faces (recognition phase). Experiment 1 tested participants' recognition of a novel view, a day after learning. Experiment 2 was identical, but tested participants on the same day as learning. Experiment 3 repeated Experiment 1, but tested participants on a novel view that was outside the rotation of those views learned. Results revealed a significant advantage, across all experiments, for recognizing a novel view when two views had been learned compared to single view learning. The observed view invariance supports the notion that an FRU representation is established during multi-view face learning under particular learning conditions.
Ge, Liezhong; Zhang, Hongchuan; Wang, Zhe; Quinn, Paul C; Pascalis, Olivier; Kelly, David; Slater, Alan; Tian, Jie; Lee, Kang
The other-race effect is a collection of phenomena whereby faces of one's own race are processed differently from those of other races. Previous studies have revealed a paradoxical mirror pattern of an own-race advantage in face recognition and an other-race advantage in race-based categorisation. With a well-controlled design, we compared recognition and categorisation of own-race and other-race faces in both Caucasian and Chinese participants. Compared with own-race faces, other-race faces were less accurately and more slowly recognised, whereas they were more rapidly categorised by race. The mirror pattern was confirmed by a unique negative correlation between the two effects in terms of reaction time with a hierarchical regression analysis. This finding suggests an antagonistic interaction between the processing of face identity and that of face category, and a common underlying processing mechanism.
Zeinstra, Christopher Gerard
Forensic Face Recognition (FFR) is the use of biometric face recognition for several applications in forensic science. Biometric face recognition uses the face modality as a means to discriminate between human beings; forensic science is the application of science and technology to law
Full Text Available In this paper, I present a novel hybrid face recognition approach based on a convolutional neural architecture, designed to robustly detect highly variable face patterns. The convolutional network extracts successively larger features in a hierarchical set of layers. From the weights of the trained neural networks, kernel windows are created and used for feature extraction in a 3-stage algorithm. I present experimental results illustrating the efficiency of the proposed approach. I use a database of 796 images of 159 individuals from Reims University which contains quite a high degree of variability in expression, pose, and facial details.
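The kernel-window feature extraction described here boils down to sliding a small window of learned weights over the image. A minimal valid-mode version can be sketched (this computes cross-correlation, as most deep-learning frameworks do under the name "convolution"; the kernel values shown in the test are an illustrative edge filter, not the paper's learned weights):

```python
def convolve2d(img, kernel):
    """Valid-mode 2-D sliding-window product: one feature map of a
    convolutional layer (no padding, stride 1)."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(len(img) - kh + 1):
        row = []
        for c in range(len(img[0]) - kw + 1):
            row.append(sum(img[r + i][c + j] * kernel[i][j]
                           for i in range(kh) for j in range(kw)))
        out.append(row)
    return out
```

Stacking several such maps, downsampling, and repeating is what produces the "successively larger features in a hierarchical set of layers" mentioned above.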
Wilmer, Jeremy B; Germine, Laura T; Nakayama, Ken
In our everyday lives, we view it as a matter of course that different people are good at different things. It can be surprising, in this context, to learn that most of what is known about cognitive ability variation across individuals concerns the broadest of all cognitive abilities; an ability referred to as general intelligence, general mental ability, or just g. In contrast, our knowledge of specific abilities, those that correlate little with g, is severely constrained. Here, we draw upon our experience investigating an exceptionally specific ability, face recognition, to make the case that many specific abilities could easily have been missed. In making this case, we derive key insights from earlier false starts in the measurement of face recognition's variation across individuals, and we highlight the convergence of factors that enabled the recent discovery that this variation is specific. We propose that the case of face recognition ability illustrates a set of tools and perspectives that could accelerate fruitful work on specific cognitive abilities. By revealing relatively independent dimensions of human ability, such work would enhance our capacity to understand the uniqueness of individual minds.
Thai Hoang Le
This paper introduces novel models for all steps of a face recognition system. In the face detection step, we propose a hybrid model combining AdaBoost and an Artificial Neural Network (ABANN) to solve the process efficiently. In the next step, faces detected by ABANN are aligned by an Active Shape Model and a Multi-Layer Perceptron. In this alignment step, we propose a new 2D local texture model based on a Multi-Layer Perceptron. The classifier of the model significantly improves the accuracy and the robustness of local searching on faces with expression variation and ambiguous contours. In the feature extraction step, we describe a methodology for improving efficiency by the association of two methods: a geometric-feature-based method and Independent Component Analysis. In the face matching step, we apply a model combining many Neural Networks for matching geometric features of the human face; because it links many Neural Networks together, we call it a Multi Artificial Neural Network. The MIT + CMU database is used for evaluating our proposed methods for face detection and alignment. Finally, the experimental results of all steps on the Caltech database show the feasibility of our proposed model.
Boom, B.J.; Tao, Q.; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.
Face recognition under uncontrolled illumination conditions is partly an unsolved problem. There are two categories of illumination normalization methods. The first category performs local preprocessing, correcting each pixel value based on a local neighborhood in the image. The second
2D face analysis techniques, such as face landmarking, face recognition and face verification, are strongly dependent on illumination conditions, which are usually uncontrolled and unpredictable in the real world. An illumination-robust preprocessing method thus remains a significant challenge in reliable face analysis. In this paper, we propose a novel approach for improving lighting normalization through building the underlying reflectance model which characterizes interactions between skin surface, lighting source and camera sensor, and elaborates the formation of face color appearance. Specifically, the proposed illumination processing pipeline enables the generation of a Chromaticity Intrinsic Image (CII) in a log chromaticity space which is robust to illumination variations. Moreover, as an advantage over most prevailing methods, a photo-realistic color face image is subsequently reconstructed which eliminates a wide variety of shadows whilst retaining the color information and identity details. Experimental results under different scenarios and using various face databases show the effectiveness of the proposed approach in dealing with lighting variations, including both soft and hard shadows, in face recognition.
Nonnegative matrix factorization (NMF) is a promising approach for local feature extraction in face recognition tasks. However, there are two major drawbacks in almost all existing NMF-based methods. One shortcoming is that the computational cost of decomposing large matrices is expensive. The other is that learning must be repeated whenever the training samples or classes are updated. To overcome these two limitations, this paper proposes a novel incremental nonnegative matrix factorization (INMF) for face representation and recognition. The proposed INMF approach is based on a novel constraint criterion and our previous block strategy. It thus has some good properties, such as low computational complexity and a sparse coefficient matrix; moreover, the coefficient column vectors of different classes are orthogonal. In particular, it can be applied to incremental learning. Two face databases, namely the FERET and CMU PIE face databases, are selected for evaluation. Compared with PCA and some state-of-the-art NMF-based methods, our INMF approach gives the best performance.
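The baseline factorization that incremental variants such as INMF build on can be sketched in a few lines of NumPy. This is a generic Lee-Seung multiplicative-update NMF under our own naming, not the paper's INMF:

```python
import numpy as np

def nmf(V, r, n_iter=200, eps=1e-9, seed=0):
    """Lee-Seung multiplicative-update NMF: V ~= W @ H, all factors nonnegative."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)  # coefficient update
        W *= (V @ H.T) / (W @ H @ H.T + eps)  # basis update
    return W, H

# A nonnegative matrix is approximated with shrinking reconstruction error;
# the columns of W play the role of local (parts-based) features.
V = np.random.default_rng(1).random((20, 15))
W, H = nmf(V, r=5)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

The multiplicative updates preserve nonnegativity, which is why large decompositions and any retraining after new samples arrive are costly, the motivation the abstract gives for an incremental scheme.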
Rao K Srinivasa
We propose a novel probabilistic framework that combines information acquired from different facial features for robust face recognition. The features used are the entire face, the edginess image of the face, and the eyes. In the training stage, individual feature spaces are constructed using principal component analysis (PCA) and Fisher's linear discriminant (FLD). By using the distance-in-feature-space (DIFS) values of the training images, the distributions of the DIFS values in each feature space are computed. For a given image, the distributions of the DIFS values yield confidence weights for the three facial features extracted from the image. The final score is computed using a probabilistic fusion criterion, and the match with the highest score is used to establish the identity of a person. A new preprocessing scheme for illumination compensation is also advocated. The proposed fusion approach is more reliable than a recognition system that uses only one individually trained feature. The method is validated on different face datasets, including the FERET database.
Liao, Shu; Shen, Dinggang; Chung, Albert C S
In this paper, we propose a new framework for tackling the face recognition problem, formulated as a groupwise deformable image registration and feature matching problem. The main contributions of the proposed method lie in the following aspects: (1) Each pixel in a facial image is represented by an anatomical signature obtained from its most salient scale local region, determined by the survival exponential entropy (SEE) information-theoretic measure. (2) Based on the anatomical signature calculated at each pixel, a novel Markov random field based groupwise registration framework is proposed to formulate face recognition as a feature-guided deformable image registration problem. The similarity between different facial images is measured on a nonlinear Riemannian manifold based on the deformable transformations. (3) The proposed method does not suffer from the generalizability problem that commonly exists in learning-based algorithms. The proposed method has been extensively evaluated on four publicly available databases: FERET, CAS-PEAL-R1, FRGC ver 2.0, and LFW. It is also compared with several state-of-the-art face recognition approaches, and experimental results demonstrate that the proposed method consistently achieves the highest recognition rates among all the methods under comparison.
Based on a special type of denoising autoencoder (DAE) and image reconstruction, we present a novel supervised deep learning framework for face recognition (FR). Unlike existing deep autoencoders, which are unsupervised, the proposed method takes class label information from training samples into account in the deep learning procedure and can automatically discover the underlying nonlinear manifold structures. Specifically, we define an Adaptive Deep Supervised Network Template (ADSNT) with a supervised autoencoder which is trained to extract characteristic features from corrupted/clean facial images and reconstruct the corresponding similar facial images. The reconstruction is realized by a so-called “bottleneck” neural network that learns to map face images into a low-dimensional vector and reconstruct the respective corresponding face images from the mapping vectors. Having trained the ADSNT, a new face image can then be recognized by comparing its reconstruction image with individual gallery images. Extensive experiments on three databases, including AR, PubFig, and Extended Yale B, demonstrate that the proposed method can significantly improve the accuracy of face recognition under severe illumination variation, pose change, and partial occlusion.
He, Ran; Zheng, Wei-Shi; Hu, Bao-Gang
In this paper, we present a sparse correntropy framework for computing robust sparse representations of face images for recognition. Compared with the state-of-the-art l1-norm-based sparse representation classifier (SRC), which assumes that noise also has a sparse representation, our sparse algorithm is developed based on the maximum correntropy criterion, which is much more insensitive to outliers. In order to develop a more tractable and practical approach, we in particular impose a nonnegativity constraint on the variables in the maximum correntropy criterion and develop a half-quadratic optimization technique to approximately maximize the objective function in an alternating way, so that the complex optimization problem is reduced to learning a sparse representation through a weighted linear least squares problem with a nonnegativity constraint at each iteration. Our extensive experiments demonstrate that the proposed method is more robust and efficient in dealing with the occlusion and corruption problems in face recognition than the related state-of-the-art methods. In particular, it shows that the proposed method can improve both recognition accuracy and receiver operating characteristic (ROC) curves, while the computational cost is much lower than that of the SRC algorithms.
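The half-quadratic alternation described above, reweighting by a Gaussian (correntropy) kernel of the residual and then solving a weighted nonnegative least-squares subproblem, can be sketched as follows. This is a simplified illustration with a projected-gradient inner solver of our own, not the authors' implementation, and all names are ours:

```python
import numpy as np

def weighted_nnls(A, y, w, n_iter=500):
    """min over x >= 0 of sum_i w_i * (y_i - A_i x)^2, by projected gradient."""
    Aw = A * w[:, None]
    L = np.linalg.norm(Aw.T @ A, 2) + 1e-12        # step size from Lipschitz bound
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = Aw.T @ (A @ x - y)
        x = np.maximum(x - grad / L, 0.0)          # gradient step, then project
    return x

def correntropy_code(A, y, sigma=0.5, n_outer=10):
    """Half-quadratic alternation: Gaussian-kernel reweighting of the residual,
    then a weighted nonnegative least-squares subproblem."""
    w = np.ones(len(y))
    for _ in range(n_outer):
        x = weighted_nnls(A, y, w)
        r = y - A @ x
        w = np.exp(-r**2 / (2 * sigma**2))         # outliers get weight near 0
    return x, w

# One grossly corrupted entry (an occlusion-style outlier) should end up
# with near-zero weight, leaving the clean coefficients recoverable.
rng = np.random.default_rng(0)
A = rng.normal(size=(30, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [1.0, 0.5]
y = A @ x_true
y[5] += 5.0
x_hat, w = correntropy_code(A, y)
```

Because the corrupted entry is downweighted rather than modeled as sparse noise, the fit of the remaining entries is essentially unaffected, which is the insensitivity to outliers the abstract claims for the correntropy criterion.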
Ding, Changxing; Xu, Chang; Tao, Dacheng
Face images captured in unconstrained environments usually contain significant pose variation, which dramatically degrades the performance of algorithms designed to recognize frontal faces. This paper proposes a novel face identification framework capable of handling the full range of pose variations within ±90° of yaw. The proposed framework first transforms the original pose-invariant face recognition problem into a partial frontal face recognition problem. A robust patch-based face representation scheme is then developed to represent the synthesized partial frontal faces. For each patch, a transformation dictionary is learnt under the proposed multi-task learning scheme. The transformation dictionary transforms the features of different poses into a discriminative subspace. Finally, face matching is performed at patch level rather than at the holistic level. Extensive and systematic experimentation on FERET, CMU-PIE, and Multi-PIE databases shows that the proposed method consistently outperforms single-task-based baselines as well as state-of-the-art methods for the pose problem. We further extend the proposed algorithm for the unconstrained face verification problem and achieve top-level performance on the challenging LFW data set.
A review of face recognition techniques has been carried out. Face recognition has been an attractive field in both the biological and computer vision research communities. It exhibits the characteristics of being natural and low-intrusive. In this paper, an updated survey of techniques for face recognition is made. Methods of ...
Jeremy B Wilmer
In our everyday lives, we view it as a matter of course that different people are good at different things. It can be surprising, in this context, to learn that most of what is known about cognitive ability variation across individuals concerns the broadest of all cognitive abilities, often labeled g. In contrast, our knowledge of specific abilities, those that correlate little with g, is severely constrained. Here, we draw upon our experience investigating an exceptionally specific ability, face recognition, to make the case that many specific abilities could easily have been missed. In making this case, we derive key insights from earlier false starts in the measurement of face recognition’s variation across individuals, and we highlight the convergence of factors that enabled the recent discovery that this variation is specific. We propose that the case of face recognition ability illustrates a set of tools and perspectives that could accelerate fruitful work on specific cognitive abilities. By revealing relatively independent dimensions of human ability, such work would enhance our capacity to understand the uniqueness of individual minds.
We present methods for processing local binary patterns (LBPs) with massively parallel hardware, especially the CNN-UM (cellular nonlinear network-universal machine). In particular, we present a framework for implementing a massively parallel face recognition system, including a dedicated, highly accurate algorithm suitable for various types of platforms (e.g., CNN-UM and digital FPGA). We study in detail a dedicated mixed-mode implementation of the algorithm and estimate its implementation cost in view of its performance and accuracy restrictions.
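The basic operator that such parallel implementations accelerate is simple to state. A minimal radius-1, 8-neighbour LBP sketch in NumPy (a generic LBP, not the dedicated CNN-UM algorithm):

```python
import numpy as np

def lbp_8_1(img):
    """Radius-1, 8-neighbour LBP code for each interior pixel of a grayscale
    image: each neighbour >= centre sets one bit of an 8-bit code."""
    c = img[1:-1, 1:-1]
    H, W = img.shape
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]   # clockwise from top-left
    code = np.zeros(c.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:H - 1 + dy, 1 + dx:W - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

# The histogram of codes is the usual LBP feature vector.
img = np.random.default_rng(0).integers(0, 256, (8, 8))
feat = np.bincount(lbp_8_1(img).ravel(), minlength=256)
```

Each pixel's code depends only on its 3x3 neighbourhood, which is exactly the locality that makes the operator a natural fit for cellular (CNN-UM) and FPGA hardware.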
Chandra, Sadanandavalli Retnaswami; Patwardhan, Ketaki; Pai, Anupama Ramakanth
Faces are very special, as they are most essential for social cognition in humans. It is partly understood that face processing in its abstractness involves several extrastriate areas. One of the most important causes of caregiver suffering in patients with anterior dementia is lack of empathy, which, apart from being a behavioral disorder, could also be due to failure to categorize the emotions of the people around them. Inclusion criteria: DSM-IV for behavioral-variant FTD; tested for prosopagnosia using familiar faces, famous faces, smiling faces, crying faces and reflected faces with a simple picture card (figure 1). Exclusion criteria: advanced illness and mixed causes. Of 46 patients (15 females, 31 males; mean age 51.5), 24 had defective face recognition: 10/15 females (70%) and 14/31 males (47%). A familiar face recognition defect was found in 6/10 females and 6/14 males; in total, 40% (6/15) of females and 19.35% (6/31) of males with FTD had a familiar face recognition defect. Famous faces: 9/10 females and 7/14 males; in total, 60% (9/15) of females with FTD had a famous face recognition defect, as against 22.6% (7/31) of males. Smiling face defects were found in 8/10 females and no males; in total, 53.33% (8/15) of females. A crying face recognition defect was found in 3/10 females and 2/14 males; in total, 20% (3/15) of females and 6.5% (2/31) of males. A reflected face recognition defect was found in 4 females. Famous face recognition and positive emotion recognition were defective in 80%; only 20% comprehended positive emotions. Face recognition defects were found in only 45% of males and were more common in females; face recognition is more affected in females with FTD. The differential involvement of different aspects of face recognition could be one of the important factors underlying the decline in the emotional and social behavior of these patients. Understanding these pathological processes will give more insight into patient behavior.
Yang, Meng; Zhang, Lei; Yang, Jian; Zhang, David
Recently the sparse representation based classification (SRC) has been proposed for robust face recognition (FR). In SRC, the testing image is coded as a sparse linear combination of the training samples, and the representation fidelity is measured by the l2-norm or l1-norm of the coding residual. Such a sparse coding model assumes that the coding residual follows a Gaussian or Laplacian distribution, which may not be effective enough to describe the coding residual in practical FR systems. Meanwhile, the sparsity constraint on the coding coefficients makes the computational cost of SRC very high. In this paper, we propose a new face coding model, namely regularized robust coding (RRC), which can robustly regress a given signal with regularized regression coefficients. By assuming that the coding residual and the coding coefficient are respectively independent and identically distributed, the RRC seeks a maximum a posteriori solution of the coding problem. An iteratively reweighted regularized robust coding (IR3C) algorithm is proposed to solve the RRC model efficiently. Extensive experiments on representative face databases demonstrate that the RRC is much more effective and efficient than state-of-the-art sparse representation based methods in dealing with face occlusion, corruption, lighting, and expression changes.
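The classification rule shared by SRC-style methods, coding the probe over each class's training samples and picking the class with the smallest reconstruction residual, can be illustrated with a ridge-regularized stand-in for the coder. This is a simplification for exposition only; the paper's RRC uses regularized robust coding, not ridge regression:

```python
import numpy as np

def classify_by_residual(y, class_dicts, lam=1e-3):
    """Code y over each class's training samples (ridge regression here as a
    stand-in for the sparse/robust coder) and return the label whose
    reconstruction residual is smallest."""
    best_label, best_res = None, np.inf
    for label, A in class_dicts.items():
        x = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)
        res = np.linalg.norm(y - A @ x)
        if res < best_res:
            best_label, best_res = label, res
    return best_label

# A probe drawn from class 0's subspace should reconstruct best there.
rng = np.random.default_rng(0)
dicts = {0: rng.normal(size=(50, 5)), 1: rng.normal(size=(50, 5))}
probe = dicts[0] @ rng.random(5) + 0.01 * rng.normal(size=50)
```

The distributional assumption criticized in the abstract lives in the residual norm: replacing the plain l2 residual with a robustly weighted one is what distinguishes RRC from this baseline rule.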
In order to improve the recognition rate of face recognition, a face recognition algorithm based on histogram equalization, PCA and a BP neural network is proposed. First, the face image is preprocessed by histogram equalization. Then, the classical PCA algorithm is used to extract the features of the equalized image, yielding its principal components. A BP neural network is then trained on the training samples, using an improved weight-adjustment method, because the conventional BP algorithm suffers from slow convergence and easily falls into local minima. Finally, the trained BP neural network classifies and identifies the test face images, and the recognition rate is obtained. Simulation experiments on face images from the ORL database show that the improved BP neural network face recognition method can effectively improve the recognition rate.
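The preprocessing-plus-projection front end of such a pipeline (histogram equalization followed by PCA) can be sketched in NumPy. This is a generic sketch under our own naming; the BP classifier stage is omitted:

```python
import numpy as np

def hist_equalize(img, levels=256):
    """Histogram-equalize a uint8 grayscale image via its normalized CDF."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1.0)
    lut = np.round(cdf * (levels - 1)).astype(np.uint8)
    return lut[img]

def pca_project(X, k):
    """Project row-sample matrix X onto its top-k principal components."""
    mu = X.mean(axis=0)
    Xc = X - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k], mu

# Equalization spreads intensities over the full range; PCA then reduces
# each flattened face to a short feature vector for the classifier.
rng = np.random.default_rng(0)
faces = rng.integers(0, 256, (10, 16, 16), dtype=np.uint8)
eq = np.stack([hist_equalize(f) for f in faces])
Z, components, mean = pca_project(eq.reshape(10, -1).astype(float), k=4)
```

The low-dimensional vectors `Z` are what would be fed to the BP network; equalizing first reduces the illumination variation the network would otherwise have to absorb.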
Karam, Lina J.; Zhu, Tong
The varying quality of face images is an important challenge that limits the effectiveness of face recognition technology when applied in real-world applications. Existing face image databases do not consider the effect of distortions that commonly occur in real-world environments. This database (QLFW) represents an initial attempt to provide a set of labeled face images spanning the wide range of quality, from no perceived impairment to strong perceived impairment for face detection and face recognition applications. Types of impairment include JPEG2000 compression, JPEG compression, additive white noise, Gaussian blur and contrast change. Subjective experiments are conducted to assess the perceived visual quality of faces under different levels and types of distortions and also to assess the human recognition performance under the considered distortions. One goal of this work is to enable automated performance evaluation of face recognition technologies in the presence of different types and levels of visual distortions. This will consequently enable the development of face recognition systems that can operate reliably on real-world visual content in the presence of real-world visual distortions. Another goal is to enable the development and assessment of visual quality metrics for face images and for face detection and recognition applications.
Features obtained from the corresponding occlusion-free patches of training images are used for face image recognition. The SVM classifier is used for occlusion detection for each patch. In the recognition phase, the MBWM bases of occlusion-free image patches are used for face recognition. Euclidean nearest neighbour ...
David M. Ryer
A qualia exploitation of sensor technology (QUEST) motivated architecture using algorithm fusion and adaptive feedback loops for face recognition in hyperspectral imagery (HSI) is presented. QUEST seeks to develop a general-purpose computational intelligence system that captures the beneficial engineering aspects of qualia-based solutions. Qualia-based approaches are constructed from subjective representations and have the ability to detect, distinguish, and characterize entities in the environment. Adaptive feedback loops are implemented that enhance performance by reducing candidate subjects in the gallery and by injecting additional probe images during the matching process. The architecture presented provides a framework for exploring more advanced integration strategies beyond those presented. Algorithmic results and performance improvements are presented as spatial, spectral, and temporal effects are utilized; additionally, a Matlab-based graphical user interface (GUI) is developed to aid processing, track performance, and display results.
Face recognition (FR) is one of the biometric methods to identify the individuals by the features of face. Two Face Recognition Systems (FRS) based on Artificial Neural Network (ANN) have been proposed in this paper based on feature extraction techniques. In the first system, Principal Component Analysis (PCA) has been ...
Mahmood, Zahid; Ali, Tauseef; Khan, Samee U.
The popularity of face recognition systems has increased due to their use in widespread applications. Driven by the enormous number of potential application domains, several algorithms have been proposed for face recognition. Face pose and image resolution are two important factors that
Haar, F.B. Ter; Veltkamp, R.C.
Morphable face models have proven to be an effective tool for 3D face modeling and face recognition, but the extension to 3D face scans with expressions is still a challenge. The two main difficulties are (1) how to build a new morphable face model that deals with expressions, and (2) how to fit
In this paper, a novel illumination-invariant face recognition approach is proposed. Different from most existing methods, an additive noise term is considered in the face model under varying illumination, in addition to a multiplicative illumination term. High-frequency coefficients of the Discrete Cosine Transform (DCT) are discarded to eliminate the effect caused by noise. Based on the local characteristics of the human face, a simple but effective illumination-invariant feature, the local relation map, is proposed. Experimental results on the Yale B, Extended Yale B and CMU PIE databases demonstrate the superior performance and lower computational burden of the proposed method compared to other existing methods. The results also demonstrate the validity of the proposed face model and the assumption on noise.
Barsics, Catherine; Brédart, Serge
Autonoetic consciousness is a fundamental property of human memory, enabling us to experience mental time travel, to recollect past events with a feeling of self-involvement, and to project ourselves in the future. Autonoetic consciousness is a characteristic of episodic memory. By contrast, awareness of the past associated with a mere feeling of familiarity or knowing relies on noetic consciousness, depending on semantic memory integrity. Present research was aimed at evaluating whether conscious recollection of episodic memories is more likely to occur following the recognition of a familiar face than following the recognition of a familiar voice. Recall of semantic information (biographical information) was also assessed. Previous studies that investigated the recall of biographical information following person recognition used faces and voices of famous people as stimuli. In this study, the participants were presented with personally familiar people's voices and faces, thus avoiding the presence of identity cues in the spoken extracts and allowing a stricter control of frequency exposure with both types of stimuli (voices and faces). In the present study, the rate of retrieved episodic memories, associated with autonoetic awareness, was significantly higher from familiar faces than familiar voices even though the level of overall recognition was similar for both these stimuli domains. The same pattern was observed regarding semantic information retrieval. These results and their implications for current Interactive Activation and Competition person recognition models are discussed.
Muhammed Tayyib Kadak
Autism is a genetically transmitted neurodevelopmental disorder characterized by severe and permanent deficits in many areas of interpersonal relations, such as communication, social interaction and emotional responsiveness. Patients with autism have deficits in face recognition, eye contact and recognition of emotional expression. Both face recognition and recognition of facial emotional expression rely on face processing. Structural and functional impairment of the fusiform gyrus, amygdala, superior temporal sulcus and other brain regions leads to deficits in the recognition of faces and facial emotion. Studies therefore suggest that face processing deficits result in problems in the areas of social interaction and emotion in autism. Studies have revealed that children with autism have problems in recognizing facial expressions and use the mouth region more than the eye region. It has also been shown that autistic patients interpret ambiguous expressions as negative emotion. In autism, deficits at various stages of face processing, such as gaze detection, face identity and recognition of emotional expression, have been determined so far. Social interaction impairments in autistic spectrum disorders originate from face processing deficits during infancy, childhood and adolescence. Recognition of faces and of facial emotional expression could be affected either automatically, by orienting towards faces after birth, or by “learning” processes in developmental periods such as identity and emotion processing. This article aims to review the neurobiological basis of face processing and the recognition of emotional facial expressions during normal development and in autism.
This paper proposes a new face recognition algorithm called the local derivative tetra pattern (LDTrP). The new technique is used to improve the face recognition rate under real-time challenges. The local derivative pattern (LDP) is a directional feature extraction method that encodes directional pattern features based on local derivative variations. The nth-order LDP is proposed to encode the (n-1)th-order local derivative direction variations. The LDP templates extract high-order local information by encoding various distinctive spatial relationships contained in a given local region. The local tetra pattern (LTrP) encodes the relationship between the reference pixel and its neighbours by using the first-order derivatives in the vertical and horizontal directions. LTrP extracts values based on the distribution of edges, coded using four directions. LDTrP combines the higher-order directional features from both LDP and LTrP. Experimental results on the ORL and JAFFE databases show that the performance of LDTrP is consistently better than LBP, LTP and LDP for face identification under various conditions. The performance of the proposed method is measured in terms of recognition rate.
Xie, Zhihua; Zhang, Shuai; Liu, Guodong; Xiong, Jinquan
Visible face recognition systems, being vulnerable to illumination, expression, and pose, cannot achieve robust performance in unconstrained situations. Meanwhile, near infrared face images, being light-independent, can avoid or limit the drawbacks of face recognition in visible light, but their main challenges are low resolution and signal-to-noise ratio (SNR). Therefore, near infrared and visible fusion face recognition has become an important direction in the field of unconstrained face recognition research. In this paper, a novel fusion algorithm in the non-subsampled contourlet transform (NSCT) domain is proposed for infrared and visible fusion face recognition. Firstly, NSCT is used to process the infrared and visible face images respectively, exploiting the image information at multiple scales, orientations, and frequency bands. Then, to exploit the effective discriminant features and balance the power of the high and low frequency bands of the NSCT coefficients, the local Gabor binary pattern (LGBP) and local binary pattern (LBP) are applied in different frequency parts to obtain robust representations of the infrared and visible face images. Finally, score-level fusion is used to fuse all the features for final classification. The visible and near infrared face recognition is tested on the HITSZ Lab2 visible and near infrared face database. Experimental results show that the proposed method extracts the complementary features of near-infrared and visible-light images and improves the robustness of unconstrained face recognition.
Despite the existence of various biometric techniques, such as fingerprints, iris scans and hand geometry, the most efficient and most widely used one is face recognition, because it is inexpensive, non-intrusive and natural. Therefore, researchers have developed dozens of face recognition techniques over the last few years. These techniques can generally be divided into three categories, based on the face data processing methodology: methods that use the entire face as input data for the proposed recognition system, methods that do not consider the whole face but only some features or areas of the face, and methods that use global and local face characteristics simultaneously. In this paper, we present an overview of some well-known methods in each of these categories. First, we expose the benefits of, as well as the challenges to, the use of face recognition as a biometric tool. Then, we present a detailed survey of the well-known methods, expressing each method’s principle. After that, a comparison between the three categories of face recognition techniques is provided. Furthermore, the databases used in face recognition are mentioned, and some results of the applications of these methods on face recognition databases are presented. Finally, we highlight some new promising research directions that have recently appeared.
Stoll, Chloé; Palluel-Germain, Richard; Caldara, Roberto; Lao, Junpeng; Dye, Matthew W. G.; Aptel, Florent; Pascalis, Olivier
Previous research has suggested that early deaf signers differ in face processing. Which aspects of face processing are changed and the role that sign language may have played in that change are however unclear. Here, we compared face categorization (human/non-human) and human face recognition performance in early profoundly deaf signers, hearing…
Padma Vibhushan 2001– the second highest civilian award, from the Government of India. ... model, pursue higher studies, do research, and get a PhD degree. .... doing good research have to go abroad to get recognition for their research and win international awards. RB: What are your views on the current debate on 'big ...
Zheng, Yufeng; Blasch, Erik
Image compression is desired for many image-related applications, especially network-based applications with bandwidth and storage constraints. Typical reports from the face recognition community concentrate on the maximal compression rate that does not decrease recognition accuracy. In general, wavelet-based face recognition methods such as EBGM (elastic bunch graph matching) and FPB (face pattern byte) achieve high performance but run slowly due to their high computation demands, while the PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis) algorithms run fast but perform poorly in face recognition. In this paper, we propose a novel face recognition method based on a standard image compression algorithm, termed compression-based (CPB) face recognition. First, all gallery images are compressed by the selected compression algorithm. Second, a mixed image is formed from the probe and gallery images and then compressed. Third, a composite compression ratio (CCR) is computed from three compression ratios calculated from the probe, gallery and mixed images. Finally, the CCR values are compared and the largest CCR corresponds to the matched face. The time cost of each face matching is about the time of compressing the mixed face image. We tested the proposed CPB method on the "ASUMSS face database" (visible and thermal images) from 105 subjects. The face recognition accuracy with visible images is 94.76% when using JPEG compression. On the same face dataset, the accuracy of the FPB algorithm was reported as 91.43%. JPEG-compression-based (JPEG-CPB) face recognition is standard and fast, and may be integrated into a real-time imaging device.
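The compression-based matching idea can be illustrated with a general-purpose compressor standing in for JPEG (zlib here). Our score is a simple proxy for the paper's composite compression ratio, not its exact definition:

```python
import zlib
import numpy as np

def clen(a):
    """Compressed length of an array's raw bytes."""
    return len(zlib.compress(a.tobytes(), 9))

def ccr(probe, gallery):
    """Proxy composite score: how much better the mixed (concatenated) image
    compresses than the two images compressed separately."""
    mixed = np.concatenate([probe.ravel(), gallery.ravel()])
    return (clen(probe) + clen(gallery)) / clen(mixed)

def match(probe, gallery_images):
    """The gallery image with the largest score is declared the match."""
    return int(np.argmax([ccr(probe, g) for g in gallery_images]))

# A duplicate of the probe shares structure with it, so the mixed stream
# compresses far better than probe + an unrelated random image does.
rng = np.random.default_rng(0)
probe = rng.integers(0, 256, 4096, dtype=np.uint8)
gallery = [probe.copy(), rng.integers(0, 256, 4096, dtype=np.uint8)]
```

The cost per match is dominated by one compression of the mixed stream, which is the time behavior the abstract reports for CPB.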
Robertson, David J; Kramer, Robin S S; Burton, A Mike
Our recognition of familiar faces is excellent, and generalises across viewing conditions. However, unfamiliar face recognition is much poorer. For this reason, automatic face recognition systems might benefit from incorporating the advantages of familiarity. Here we put this to the test using the face verification system available on a popular smartphone (the Samsung Galaxy). In two experiments we tested the recognition performance of the smartphone when it was encoded with an individual's 'face-average'--a representation derived from theories of human face perception. This technique significantly improved performance for both unconstrained celebrity images (Experiment 1) and for real faces (Experiment 2): users could unlock their phones more reliably when the device stored an average of the user's face than when they stored a single image. This advantage was consistent across a wide variety of everyday viewing conditions. Furthermore, the benefit did not reduce the rejection of imposter faces. This benefit is brought about solely by consideration of suitable representations for automatic face recognition, and we argue that this is just as important as development of matching algorithms themselves. We propose that this representation could significantly improve recognition rates in everyday settings.
Lampinen, James Michael; Erickson, William Blake; Moore, Kara N; Hittson, Aaron
Eyewitnesses sometimes view faces from a distance, but little research has examined the accuracy of witnesses as a function of distance. The purpose of the present project is to examine the relationship between identification accuracy and distance under carefully controlled conditions. This is one of the first studies to examine the ability to recognize the faces of strangers at a distance under free-field conditions. Participants viewed eight live human targets, displayed at one of six outdoor distances varying between 5 and 40 yards. Participants were then shown 16 photographs, 8 of the previously viewed targets and 8 of nonviewed foils that matched a verbal description of their target counterparts. Participants rated their confidence of having seen or not having seen each individual on an 8-point scale. Longer distances were associated with poorer recognition memory and with response bias shifts.
Gothard, Katalin M; Brooks, Kelly N; Peterson, Mary A
Successful integration of individuals in macaque societies suggests that monkeys use fast and efficient perceptual mechanisms to discriminate between conspecifics. Humans and great apes use primarily holistic and configural, but also feature-based, processing for face recognition. The relative contribution of these processes to face recognition in monkeys is not known. We measured face recognition in three monkeys performing a visual paired comparison task. Monkey and humans faces were (1) axially rotated, (2) inverted, (3) high-pass filtered, and (4) low-pass filtered to isolate different face processing strategies. The amount of time spent looking at the eyes, mouth, and other facial features was compared across monkey and human faces for each type of stimulus manipulation. For all monkeys, face recognition, expressed as novelty preference, was intact for monkey faces that were axially rotated or spatially filtered and was supported in general by preferential looking at the eyes, but was impaired for inverted faces in two of the three monkeys. Axially rotated, upright human faces with a full range of spatial frequencies were also recognized, however, the distribution of time spent exploring each facial feature was significantly different compared to monkey faces. No novelty preference, and hence no inferred recognition, was observed for inverted or low-pass filtered human faces. High-pass filtered human faces were recognized, however, the looking pattern on facial features deviated from the pattern observed for monkey faces. Taken together these results indicate large differences in recognition success and in perceptual strategies used by monkeys to recognize humans versus conspecifics. Monkeys use both second-order configural and feature-based processing to recognize the faces of conspecifics, but they use primarily feature-based strategies to recognize human faces.
Abdullah, Nurul Azma; Saidi, Md. Jamri; Rahman, Nurul Hidayah Ab; Wen, Chuah Chai; Hamid, Isredza Rahmi A.
In practice, identification of criminals in Malaysia is done through thumbprint identification. However, this type of identification is constrained, as most criminals nowadays are careful not to leave their thumbprints at the scene. With the advent of security technology, cameras, especially CCTV, have been installed in many public and private areas to provide surveillance. CCTV footage can be used to identify suspects at the scene. However, because little software has been developed to automatically detect the similarity between a photo in the footage and the recorded photos of criminals, law enforcement still relies on thumbprint identification. In this paper, an automated facial recognition system for a criminal database is proposed using the well-known Principal Component Analysis approach. The system is able to detect and recognize faces automatically, which will help law enforcement identify suspects when no thumbprint is present at the scene. The results show that about 80% of input photos can be matched with the template data.
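A Principal Component Analysis (eigenfaces) matcher of the kind described can be sketched in a few lines; the function and variable names are illustrative, not taken from the paper:

```python
import numpy as np

def train_eigenfaces(faces: np.ndarray, k: int):
    """faces: (n_samples, n_pixels) matrix of flattened gallery photos."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # SVD yields the principal components without forming the covariance matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:k]                  # top-k "eigenfaces"
    weights = centered @ components.T    # project gallery into face space
    return mean, components, weights

def recognise(probe: np.ndarray, mean, components, weights, labels):
    # Project the probe and return the label of its nearest gallery neighbour.
    w = (probe - mean) @ components.T
    return labels[np.argmin(np.linalg.norm(weights - w, axis=1))]
```

Real systems would precede this with face detection and alignment; the sketch covers only the recognition stage.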
Robotham, Ro Julia; Starrfelt, Randi
The relationship between face recognition and visual word recognition/reading has received increasing attention lately. A core question is whether face and word recognition rely on cognitive and cerebral processes that are largely independent, or rather processes that are distributed...... included, as a control, which makes designing experiments all the more challenging. Three main strategies have been used to overcome this problem, each of which has limitations: 1) Compare performances on typical tests of the three stimulus types (e.g., a Face Memory Test, an Object recognition test......). None of these methods, however, has provided measurements that enable direct comparison of performances across categories. We propose a simple framework for classifying tests of face, object, and word recognition according to the level of perceptual processing required to perform each test. Using...
Stara, Vera; Montesanto, Anna; Puliti, Paolo; Tascini, Guido; Sechi, Cristina
Visual recognition of faces is an essential human behavior: we perform optimally in everyday life, and it is exactly this performance that enables us to establish the continuity of actors in our social life and to quickly identify and categorize people. This remarkable ability justifies the general interest in face recognition among researchers from different fields, and especially among designers of biometric identification systems able to recognize a person's facial features against a background. Owing to the interdisciplinary nature of this topic, in this contribution we deal with face recognition through a comprehensive approach aimed at reproducing some features of human performance relevant to face recognition, as evidenced by studies in psychophysics and neuroscience. This approach views face recognition as an emergent phenomenon resulting from the nonlinear interaction of a number of different features. For this reason, our model of face recognition is based on a computational system implemented through an artificial neural network. This synergy between neuroscience and engineering efforts allowed us to implement a model that has biological plausibility, performs the same tasks as human subjects, and gives a possible account of human face perception and recognition. In this regard, the paper reports an experimental study of the performance of a SOM-based neural network on a face recognition task, with reference both to the ability to learn to discriminate different faces and to the ability to recognize a face already encountered in the training phase when presented in a pose or with an expression differing from the one present in the training context.
Andersen, Rasmus S.; Eliasen, Anders U.; Pedersen, Nicolai
of face recognition. This task concerns the differentiation of brain responses to images of faces and scrambled faces and poses a rather difficult decoding problem at the single trial level. We implement the pipeline using spatially focused features and show that this approach is challenged and source...
Dutta, A.; Günther, Manuel; El Shafey, Laurent; Veldhuis, Raymond N.J.; Spreeuwers, Lieuwe Jan
The locations of the eyes are the most commonly used features to perform face normalisation (i.e. alignment of facial features), which is an essential preprocessing stage of many face recognition systems. In this study, the authors study the sensitivity of open source implementations of five face
Face recognition has been an important topic for both industry and academia for a long time. K-means clustering, autoencoders, and convolutional neural networks, each representing a distinct design idea for face recognition methods, are three popular algorithms for face recognition problems. It is worthwhile to summarize and compare these three algorithms. This paper focuses on one specific face recognition problem: sentiment classification from images. Three different algorithms for sentiment classification are summarized: k-means clustering, the autoencoder, and the convolutional neural network. An experiment applying these algorithms to a specific dataset of human faces is conducted to illustrate how they are applied and what accuracy they achieve. Finally, the three algorithms are compared based on the accuracy results.
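As an illustration of the simplest of the three baselines, a plain k-means clusterer over flattened image features might look like the following generic sketch (not the paper's implementation; names are illustrative):

```python
import numpy as np

def kmeans(X: np.ndarray, k: int, iters: int = 50, seed: int = 0):
    """Lloyd's algorithm: alternate assignment and centroid update."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]  # init from data points
    for _ in range(iters):
        # Assign every sample to its nearest centroid.
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        # Move each centroid to the mean of its assigned samples.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers
```

For sentiment classification, each cluster would afterwards be mapped to the majority sentiment label of its members; the autoencoder and CNN alternatives learn the feature space instead of clustering raw pixels.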
Hirot, France; Lesage, Marine; Pedron, Lya; Meyer, Isabelle; Thomas, Pierre; Cottencin, Olivier; Guardia, Dewi
Body image disturbances and massive weight loss are major clinical symptoms of anorexia nervosa (AN). The aim of the present study was to examine the influence of body changes and eating attitudes on self-face recognition ability in AN. Twenty-seven subjects suffering from AN and 27 control participants performed a self-face recognition task (SFRT). During the task, digital morphs between their own face and a gender-matched unfamiliar face were presented in a random sequence. Participants' self-face recognition failures, cognitive flexibility, body concern and eating habits were assessed with the Self-Face Recognition Questionnaire (SFRQ), Trail Making Test (TMT), Body Shape Questionnaire (BSQ) and Eating Disorder Inventory-2 (EDI-2), respectively. Subjects suffering from AN exhibited significantly greater difficulties than control participants in identifying their own face (p = 0.028). No significant difference was observed between the two groups for TMT (all p > 0.1, non-significant). Regarding predictors of self-face recognition skills, there was a negative correlation between SFRT and body mass index (p = 0.01) and a positive correlation between SFRQ and EDI-2 scores.
This research was inspired by the need for a flexible and cost-effective biometric security system. The flexibility of the wireless sensor network makes it a natural choice for data transmission. Swarm intelligence (SI) is used to optimize routing in a distributed time-varying network. In this paper, SI maintains the required bit error rate (BER) under varied channel conditions while consuming minimal energy. A specific biometric, the face recognition system, is discussed as an example. Simulation shows that the wireless sensor network is efficient in energy consumption while maintaining transmission accuracy, and the wireless face recognition system is competitive with the traditional wired face recognition system in classification accuracy.
Zhang, J.; Lades, M.
This paper is a comparative study of three recently proposed algorithms for face recognition: eigenface, autoassociation and classification neural nets, and elastic matching. After these algorithms were analyzed under a common statistical decision framework, they were evaluated experimentally on four individual databases, each with a moderate subject size, and a combined database with more than a hundred different subjects. Analysis and experimental results indicate that the eigenface algorithm, which is essentially a minimum distance classifier, works well when lighting variation is small. Its performance deteriorates significantly as lighting variation increases. The elastic matching algorithm, on the other hand, is insensitive to lighting, face position, and expression variations and is therefore more versatile. The performance of the autoassociation and classification nets is upper bounded by that of the eigenface but is more difficult to implement in practice.
Bao, Tianlong; Ding, Chunhui; Karmoshi, Saleem; Zhu, Ming
Face recognition has been widely studied recently, while video-based face recognition still remains a challenging task because of the low quality and large intra-class variation of video-captured face images. In this paper, we focus on two scenarios of video-based face recognition: 1) Still-to-Video (S2V) face recognition, i.e., querying a still face image against a gallery of video sequences; 2) Video-to-Still (V2S) face recognition, the reverse of the S2V scenario. A novel method is proposed in this paper to transfer still and video face images to a Euclidean space by a carefully designed convolutional neural network; Euclidean metrics are then used to measure the distance between still and video images. The identities of still and video images grouped as pairs are used as supervision. In the training stage, a joint loss function that measures the Euclidean distance between the predicted features of training pairs and expanding vectors of still images is optimized to minimize the intra-class variation, while the inter-class variation is guaranteed by the large margin of still images. Transferred features are finally learned via the designed convolutional neural network. Experiments are performed on the COX face dataset. Experimental results show that our method achieves reliable performance compared with other state-of-the-art methods.
In the present study, we examined whether social categorization based on university affiliation can induce an advantage in recognizing faces. Moreover, we investigated how the reputation or location of the university affected face recognition performance using an old/new paradigm. We assigned five different university labels to the faces: the participants' own university and four other universities. Among the four other university labels, we manipulated the academic reputation and geographical location of these universities relative to the participants' own university. The results showed that an own-group face recognition bias emerged for faces with own-university labels compared to those with other-university labels. Furthermore, we found a robust own-group face recognition bias only when the other university was located in a different city far away from the participants' own university. Interestingly, we failed to find an influence of university reputation on the own-group face recognition bias. These results suggest that categorizing a face as a member of one's own university is sufficient to enhance recognition accuracy, and that location plays a more important role than reputation in the effect of social categorization on face recognition. The results provide insight into the role of motivational factors underlying university membership in face perception.
Yang, Shicai; Bebis, George; Chu, Yongjie; Zhao, Lindu
In past decades, many techniques have been used to improve face recognition performance. The most common and well-studied approaches use the whole face image to build a subspace based on dimensionality reduction. Differing from the methods above, we consider face recognition as an image classification problem. The face images of the same person are considered to fall into the same category. Each category and each face image can both be represented by a simple pyramid histogram. Spatial dense scale-invariant feature transform features and the bag-of-features method are used to build categories and face representations. In an effort to make the method more efficient, a linear support vector machine solver, Pegasos, is used for classification in the kernel space with additive kernels instead of nonlinear SVMs. Our experimental results demonstrate that the proposed method can achieve very high recognition accuracy on the ORL, YALE, and FERET databases.
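The Pegasos solver mentioned above takes stochastic sub-gradient steps on the regularized SVM objective with a 1/(lambda*t) step-size schedule. A minimal sketch for the binary linear case is given below (the paper uses it in a multi-class, additive-kernel setting; this is only the core update rule):

```python
import numpy as np

def pegasos_train(X: np.ndarray, y: np.ndarray, lam: float = 0.01,
                  epochs: int = 100, seed: int = 0) -> np.ndarray:
    """Minimal Pegasos for a binary linear SVM; labels must be in {-1, +1}."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)              # step-size schedule from the paper
            if y[i] * (w @ X[i]) < 1:          # hinge loss is active: step toward margin
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:                              # only shrink (regularization term)
                w = (1 - eta * lam) * w
    return w
```

Prediction is simply `np.sign(X @ w)`; the appeal of Pegasos is that each update touches one sample, so training scales to large histogram-feature sets.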
Akhloufi, Moulay A.; Bendada, Abdelhakim; Batsale, Jean-Christophe
Face recognition in the infrared spectrum has attracted a lot of interest in recent years. Many of the techniques used in the infrared are based on their visible-spectrum counterparts, especially linear techniques like PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis). In this work, we introduce nonlinear dimensionality reduction approaches for multispectral face recognition. For this purpose, the following techniques were developed: global nonlinear techniques (Kernel-PCA, Kernel-LDA) and local nonlinear techniques (Local Linear Embedding, Locality Preserving Projection). The performance of these techniques was compared with classical linear techniques for face recognition like PCA and LDA. Two multispectral face recognition databases were used in our experiments: the Equinox Face Recognition Database and the Laval University Database. The Equinox database contains images in the visible, shortwave, midwave, and longwave infrared spectra. The Laval database contains images in the visible, near, midwave, and longwave infrared spectra, with variations in time and in the metabolic activity of the subjects. The results show an increase in recognition performance when using local nonlinear dimensionality reduction techniques for infrared face recognition, particularly in the near and shortwave infrared spectra.
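Kernel-PCA, the first of the global nonlinear techniques listed, can be sketched as follows. An RBF kernel is assumed for illustration; the abstract does not specify the kernel or hyperparameters, so those choices here are assumptions:

```python
import numpy as np

def kernel_pca(X: np.ndarray, k: int, gamma: float = 1.0) -> np.ndarray:
    """Project the training set onto the top-k components of an RBF kernel."""
    sq = np.sum(X ** 2, axis=1)
    # RBF (Gaussian) kernel matrix over all training pairs.
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    n = len(X)
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one   # centre in feature space
    vals, vecs = np.linalg.eigh(Kc)              # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:k]             # keep the top-k
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return Kc @ alphas                           # nonlinear projections
```

Unlike linear PCA, the eigenproblem is solved on the n-by-n kernel matrix rather than the pixel covariance, which is what lets the method capture nonlinear structure in the face manifold.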
Lewis, Amelia K; Porter, Melanie A; Williams, Tracey A; Bzishvili, Samantha; North, Kathryn N; Payne, Jonathan M
This study aimed to investigate face scan paths and face perception abilities in children with Neurofibromatosis Type 1 (NF1) and how these might relate to emotion recognition abilities in this population. The authors investigated facial emotion recognition, face scan paths, and face perception in 29 children with NF1 compared to 29 chronological age-matched typically developing controls. Correlations between facial emotion recognition, face scan paths, and face perception in children with NF1 were examined. Children with NF1 displayed significantly poorer recognition of fearful expressions compared to controls, as well as a nonsignificant trend toward poorer recognition of anger. Although there was no significant difference between groups in time spent viewing individual core facial features (eyes, nose, mouth, and nonfeature regions), children with NF1 spent significantly less time than controls viewing the face as a whole. Children with NF1 also displayed significantly poorer face perception abilities than typically developing controls. Facial emotion recognition deficits were not significantly associated with aberrant face scan paths or face perception abilities in the NF1 group. These results suggest that impairments in the perception, identification, and interpretation of information from faces are important aspects of the social-cognitive phenotype of NF1. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Santemiz, P.; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.; Broemme, Arslan; Busch, Christoph
In real-life scenarios where pose variation is up to side-view positions, face recognition becomes a challenging task. In this paper we propose an automatic side-view face recognition system designed for home-safety applications. Our goal is to recognize people as they pass through doors in order to
Javed, Muhammad Younus; Mohsin, Syed Maajid; Anjum, Muhammad Almas
Human beings are commonly identified by biometric schemes, which are concerned with identifying individuals by their unique physical characteristics. Passwords and personal identification numbers have been used for years to identify people. The disadvantages of these schemes are that they may be used by someone else or easily forgotten. In view of these problems, biometric approaches such as face recognition, fingerprint, iris/retina, and voice recognition have been developed, which provide a far better solution for identifying individuals. A number of methods have been developed for face recognition. This paper illustrates the employment of Gabor filters for extracting facial features by constructing a sliding window frame. Classification is done by assigning to the unknown image the label of the class whose stored database image shares the maximum number of similar features with it. The proposed system gives a recognition rate of 96%, which is better than many similar techniques used for face recognition.
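A Gabor feature extractor of the kind described, applied to one sliding-window frame, might be sketched as follows (the filter-bank parameters are illustrative choices, not the paper's):

```python
import numpy as np

def gabor_kernel(size: int = 11, sigma: float = 3.0, theta: float = 0.0,
                 lam: float = 5.0, psi: float = 0.0, gamma: float = 0.5):
    """Real part of a 2D Gabor filter: Gaussian envelope times cosine carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotate coordinates by theta
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + gamma ** 2 * yr ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / lam + psi)

def extract_features(window: np.ndarray,
                     thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    # One filter response magnitude per orientation for this window position.
    return [float(np.abs(np.sum(window * gabor_kernel(size=window.shape[0],
                                                      theta=t))))
            for t in thetas]
```

Sliding the window across the face and concatenating these per-orientation responses yields the feature vector that is then compared against the stored class templates.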
In real-world applications, face images vary with illumination, facial expression, and pose. More training samples can reveal more of the possible appearances of a face. Though minimum squared error classification (MSEC) is a widely used method, its application to face recognition usually suffers from the problem of a limited number of training samples. In this paper, we improve MSEC by using mirror faces as virtual training samples. We obtain the mirror faces generated from the original training samples and put both kinds of samples into a new training set. The face recognition experiments show that our method obtains high classification accuracy.
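Generating mirror faces and feeding them to a minimum squared error classifier can be sketched as below. This uses a generic ridge-regularized MSEC with one-hot class targets; the paper's exact formulation may differ:

```python
import numpy as np

def mirror_faces(images: np.ndarray) -> np.ndarray:
    """images: (n, h, w). Horizontal flips act as virtual training samples."""
    return np.concatenate([images, images[:, :, ::-1]], axis=0)

def msec_train(X: np.ndarray, Y: np.ndarray, mu: float = 1e-3) -> np.ndarray:
    """Solve for W minimizing ||XW - Y||^2 + mu ||W||^2 (ridge-regularized MSE).
    X: (n, d) flattened faces; Y: (n, c) one-hot class targets."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + mu * np.eye(d), X.T @ Y)

def msec_predict(W: np.ndarray, x: np.ndarray) -> int:
    # The class whose target vector the probe best reproduces wins.
    return int(np.argmax(x @ W))
```

Doubling the training set with flips is cheap and, for roughly symmetric objects like faces, gives the least-squares solver extra constraints without collecting new images.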
Silapachote, Piyanuch; Karuppiah, Deepak R; Hanson, Allen R
We propose a classification technique for face expression recognition using AdaBoost that learns by selecting the relevant global and local appearance features with the most discriminating information...
We live in an age of 'selfies.' Yet, how we look at our own faces has seldom been systematically investigated. In this study we test whether the visual processing of the highly familiar self-face differs from that of other faces, using psychophysics and eye-tracking. This paradigm also enabled us to test the association between the psychophysical properties of self-face representation and the visual processing strategies involved in self-face recognition. Thirty-three adults performed a self-face recognition task on a series of self-other face morphs with simultaneous eye-tracking. Participants were found to look longer at the lower part of the face for the self-face compared to other faces. Participants with a more distinct self-face representation, as indexed by a steeper slope of the psychometric response curve for self-face recognition, were found to look longer at the upper part of the faces identified as 'self' vs. those identified as 'other'. This result indicates that self-face representation can influence where we look when we process our own vs. others' faces. We also investigated the association of autism-related traits with self-face processing metrics, since autism has previously been associated with atypical self-processing. The study did not find any self-face-specific association with autistic traits, suggesting that autism-related features may be related to self-processing in a domain-specific manner.
Mohammadzade, Hoda; Hatzinakos, Dimitrios
The common approach for 3D face recognition is to register a probe face to each of the gallery faces and then calculate the sum of the distances between their points. This approach is computationally expensive and sensitive to facial expression variation. In this paper, we introduce the iterative closest normal point method for finding the corresponding points between a generic reference face and every input face. The proposed correspondence finding method samples a set of points for each face, denoted as the closest normal points. These points are effectively aligned across all faces, enabling effective application of discriminant analysis methods for 3D face recognition. As a result, the expression variation problem is addressed by minimizing the within-class variability of the face samples while maximizing the between-class variability. As an important conclusion, we show that the surface normal vectors of the face at the sampled points contain more discriminatory information than the coordinates of the points. We have performed comprehensive experiments on the Face Recognition Grand Challenge database, which is presently the largest available 3D face database. We have achieved verification rates of 99.6 and 99.2 percent at a false acceptance rate of 0.1 percent for the all-versus-all and ROC III experiments, respectively, which, to the best of our knowledge, are seven and four times lower error rates, respectively, than the best existing methods on this database.
Wang, Jing; Wu, Xiaobei; Lu, Yanyu; Wu, Hao; Kan, Han; Chai, Xinyu
Given the limited visual percepts elicited by current prosthetic devices, it is essential to optimize image content in order to assist implant wearers to achieve better performance of visual tasks. This study focuses on recognition of familiar faces using simulated prosthetic vision. Combined with region-of-interest (ROI) magnification, three face extraction strategies based on a face detection technique were used: the Viola-Jones face region, the statistical face region (SFR) and the matting face region. These strategies significantly enhanced recognition performance compared to directly lowering resolution (DLR) with Gaussian dots. The inclusion of certain external features, such as hairstyle, was beneficial for face recognition. Given the high recognition accuracy achieved and applicable processing speed, SFR-ROI was the preferred strategy. DLR processing resulted in significant face gender recognition differences (i.e. females were more easily recognized than males), but these differences were not apparent with other strategies. Face detection-based image processing strategies improved visual perception by highlighting useful information. Their use is advisable for face recognition when using low-resolution prosthetic vision. These results provide information for the continued design of image processing modules for use in visual prosthetics, thus maximizing the benefits for future prosthesis wearers.
Palermo, Romina; Rossion, Bruno; Rhodes, Gillian; Laguesse, Renaud; Tez, Tolga; Hall, Bronwyn; Albonico, Andrea; Malaspina, Manuela; Daini, Roberta; Irons, Jessica; Al-Janabi, Shahd; Taylor, Libby C; Rivolta, Davide; McKone, Elinor
Diagnosis of developmental or congenital prosopagnosia (CP) involves self-report of everyday face recognition difficulties, which are corroborated with poor performance on behavioural tests. This approach requires accurate self-evaluation. We examine the extent to which typical adults have insight into their face recognition abilities across four experiments involving nearly 300 participants. The experiments used five tests of face recognition ability: two that tap into the ability to learn and recognize previously unfamiliar faces [the Cambridge Face Memory Test, CFMT; Duchaine, B., & Nakayama, K. (2006). The Cambridge Face Memory Test: Results for neurologically intact individuals and an investigation of its validity using inverted face stimuli and prosopagnosic participants. Neuropsychologia, 44(4), 576-585. doi:10.1016/j.neuropsychologia.2005.07.001; and a newly devised test based on the CFMT but where the study phases involve watching short movies rather than viewing static faces-the CFMT-Films] and three that tap face matching [Benton Facial Recognition Test, BFRT; Benton, A., Sivan, A., Hamsher, K., Varney, N., & Spreen, O. (1983). Contribution to neuropsychological assessment. New York: Oxford University Press; and two recently devised sequential face matching tests]. Self-reported ability was measured with the 15-item Kennerknecht et al. questionnaire [Kennerknecht, I., Ho, N. Y., & Wong, V. C. (2008). Prevalence of hereditary prosopagnosia (HPA) in Hong Kong Chinese population. American Journal of Medical Genetics Part A, 146A(22), 2863-2870. doi:10.1002/ajmg.a.32552]; two single-item questions assessing face recognition ability; and a new 77-item meta-cognition questionnaire. Overall, we find that adults with typical face recognition abilities have only modest insight into their ability to recognize faces on behavioural tests. In a fifth experiment, we assess self-reported face recognition ability in people with CP and find that some people who expect to
Longmore, Christopher A; Santos, Isabel M; Silva, Carlos F; Hall, Abi; Faloyin, Dipo; Little, Emily
Research investigating the effect of lighting and viewpoint changes on unfamiliar and newly learnt faces has revealed that such recognition is highly image dependent and that changes in either of these leads to poor recognition accuracy. Three experiments are reported to extend these findings by examining the effect of apparent age on the recognition of newly learnt faces. Experiment 1 investigated the ability to generalize to novel ages of a face after learning a single image. It was found that recognition was best for the learnt image with performance falling the greater the dissimilarity between the study and test images. Experiments 2 and 3 examined whether learning two images aids subsequent recognition of a novel image. The results indicated that interpolation between two studied images (Experiment 2) provided some additional benefit over learning a single view, but that this did not extend to extrapolation (Experiment 3). The results from all studies suggest that recognition was driven primarily by pictorial codes and that the recognition of faces learnt from a limited number of sources operates on stored images of faces as opposed to more abstract, structural, representations.
Brey, Philip A.E.
This essay examines ethical aspects of the use of facial recognition technology for surveillance purposes in public and semipublic areas, focusing particularly on the balance between security and privacy and civil liberties. As a case study, the FaceIt facial recognition engine of Identix
Kuhn, Christina D.; Asperud Thomsen, Johanne; Delfi, Tzvetelina
Recent findings have challenged the existence of category specific brain areas for perceptual processing of words and faces, suggesting the existence of a common network supporting the recognition of both. We examined the performance of patients with focal lesions in posterior cortical...... areas to investigate whether deficits in recognition of words and faces systematically co-occur as would be expected if both functions rely on a common cerebral network. Seven right-handed patients with unilateral brain damage following stroke in areas supplied by the posterior cerebral artery were...... included (four with right hemisphere damage, three with left, tested at least 1 year post stroke). We examined word and face recognition using a delayed match-to-sample paradigm using four different categories of stimuli: cropped faces, full faces, words, and cars. Reading speed and word length effects...
When two biometric specimens are compared using an automatic biometric recognition system, a similarity metric called “score” can be computed. In forensics, one of the biometric specimens is from an unknown source, for example, from CCTV footage or a fingermark found at a crime scene, and the other
Images are one of the key elements of the content of the World Wide Web. One group of web images is photos of people. When various institutions (universities, research organizations, companies, associations, etc.) present their staff, they should include photos of people for the purpose of a more informative presentation. There are many specifics in how people see face images and how they remember them. Several methods to investigate a person's behavior during use of web content can be employed, and one of the most reliable among them is eye tracking. It is a very common technique, particularly when it comes to observing web images. Our research focused on the behavior of observing face images in the process of memorizing them. Test participants were presented with face images shown at different time scales. We focused on three main face elements: eyes, mouth and nose. The results of our analysis can help not only in web presentations, which are, in principle, not limited by observation time, but especially in public presentations (conferences, symposia, and meetings).
The classical curvatures of smooth surfaces (Gaussian, mean and principal curvatures) have been widely used in 3D face recognition (FR). However, facial surfaces resulting from 3D sensors are discrete meshes. In this paper, we present a general framework and define three principal curvatures on discrete surfaces for the purpose of 3D FR. These principal curvatures are derived from the construction of asymptotic cones associated to any Borel subset of the discrete surface. They describe the local geometry of the underlying mesh. The first two of them correspond to the classical principal curvatures in the smooth case. We isolate the third principal curvature, which carries meaningful geometric shape information. The three principal curvatures at different Borel subset scales give multi-scale local facial surface descriptors. We combine the proposed principal curvatures with the LNP-based facial descriptor and SRC for recognition. The identification and verification experiments demonstrate the practicability and accuracy of the third principal curvature and the fusion of multi-scale Borel subset descriptors on 3D faces from FRGC v2.0.
This paper develops a novel face recognition technique called Complete Gabor Fisher Classifier (CGFC). Different from existing techniques that use Gabor filters for deriving the Gabor face representation, the proposed approach does not rely solely on Gabor magnitude information but effectively uses features computed based on Gabor phase information as well. It represents one of the few successful attempts found in the literature at combining Gabor magnitude and phase information for robust face recognition. The novelty of the proposed CGFC technique comes from (1) the introduction of a Gabor phase-based face representation and (2) the combination of the recognition technique using the proposed representation with classical Gabor magnitude-based methods into a unified framework. The proposed face recognition framework is assessed in a series of face verification and identification experiments performed on the XM2VTS, Extended YaleB, FERET, and AR databases. The results of the assessment suggest that the proposed technique clearly outperforms state-of-the-art face recognition techniques from the literature and that its performance is almost unaffected by the presence of partial occlusions of the facial area, changes in facial expression, or severe illumination changes.
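The magnitude/phase distinction at the heart of the CGFC idea can be illustrated with a minimal sketch. This is not the paper's implementation: the 1-D kernel, the toy signal, and all parameter values below are assumptions chosen only to show how both feature types fall out of one complex filter response.

```python
import cmath
import math

def gabor_kernel_1d(sigma, omega, half_width):
    """Sampled 1-D complex Gabor kernel: Gaussian envelope times complex carrier."""
    return [cmath.exp(-(x * x) / (2 * sigma * sigma)) * cmath.exp(1j * omega * x)
            for x in range(-half_width, half_width + 1)]

def gabor_response(signal, kernel):
    """Complex response of the kernel centred on the signal (plain dot product)."""
    return sum(s * k for s, k in zip(signal, kernel))

kernel = gabor_kernel_1d(sigma=2.0, omega=1.0, half_width=4)
signal = [math.sin(0.5 * x) for x in range(9)]
response = gabor_response(signal, kernel)

magnitude = abs(response)       # the classical Gabor magnitude feature
phase = cmath.phase(response)   # the phase information CGFC additionally exploits
```

In a 2-D face pipeline the same split is applied per pixel and per filter in a bank of orientations and scales; the point here is only that magnitude and phase come from the same complex response.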
Xie, Zhenzhu; Yang, Fumeng; Zhang, Yuming; Wu, Congzhong
In this paper, we address the face sketch recognition problem. First, we utilize the eigenface algorithm to convert a sketch image into a synthesized face image. Subsequently, considering the low-level vision problem in the synthesized face sketch image, a super-resolution reconstruction algorithm based on a CNN (convolutional neural network) is employed to improve the visual effect. To be specific, we use a lightweight super-resolution structure to learn a residual mapping instead of directly mapping the feature maps from the low-level space to high-level patch representations, which makes the network easier to optimize and lowers its computational complexity. Finally, we adopt the LDA (Linear Discriminant Analysis) algorithm to perform face sketch recognition on the synthesized face image before and after super resolution, respectively. Extensive experiments on the face sketch database (CUFS) from CUHK demonstrate that the recognition rate of the SVM (Support Vector Machine) algorithm improves from 65% to 69% and the recognition rate of the LDA algorithm improves from 69% to 75%. What's more, the synthesized face image after super resolution can not only better describe image details such as hair, nose and mouth, but also improve the recognition accuracy effectively.
Karande, Kailash Jagannath
The book presents research work on face recognition using edge information as features with ICA algorithms. The independent components are extracted from edge information. These independent components are used with classifiers to match the facial images for recognition purposes. In their study, the authors have explored Canny and LOG edge detectors as standard edge detection methods. An Oriented Laplacian of Gaussian (OLOG) method is explored to extract the edge information with different orientations of the Laplacian pyramid. A multiscale wavelet model for edge detection is also proposed.
Chance, June E.; Goldstein, Alvin G.
Reviews studies of face-recognition memory and considers implications for assessing the dependability of children's performances as eyewitnesses. Considers personal factors (age, intellectual differences, and gender) and situational factors (familiarity of face, retention interval, and others). Also identifies developmental questions for future…
Yang, Ruyin; Mu, Zhichun; Chen, Long; Fan, Tingyu
The pose issue, which may cause loss of useful information, has always been a bottleneck in face and ear recognition. To address this problem, we propose a multimodal recognition approach based on face and ear using local features, which is robust to large facial pose variations in unconstrained scenes. A deep learning method is used for facial pose estimation, and a well-trained Faster R-CNN is used to detect and segment the regions of the face and ear. Then we propose a weighted region-based recognition method to deal with the local features. The proposed method has achieved state-of-the-art recognition performance, especially when the images are affected by pose variations and random occlusion in unconstrained scenes.
Jones, Nicola; Riby, Leigh M; Smith, Michael A
Older adults with type 2 diabetes mellitus (DM2) exhibit accelerated decline in some domains of cognition, including verbal episodic memory. Few studies have investigated the influence of DM2 status in older adults on recognition memory for more complex stimuli such as faces. In the present study we sought to compare recognition memory performance for words, objects and faces under conditions of relatively low and high cognitive load. Healthy older adults with good glucoregulatory control (n = 13) and older adults with DM2 (n = 24) were administered recognition memory tasks in which stimuli (faces, objects and words) were presented under conditions of either i) low (stimulus presented without a background pattern) or ii) high (stimulus presented against a background pattern) cognitive load. In the subsequent recognition phase, the DM2 group recognized fewer faces than healthy controls. Further, the DM2 group exhibited word recognition deficits in the low cognitive load condition. The recognition memory impairment observed in patients with DM2 has clear implications for day-to-day functioning. Although these deficits were not amplified under conditions of increased cognitive load, the present study emphasizes that recognition memory impairment for both words and more complex stimuli such as faces is a feature of DM2 in older adults. Copyright © 2016 IMSS. Published by Elsevier Inc. All rights reserved.
K, Jyothi; J, Prabhakar C.
In this paper, we present a multimodal 2D+3D face recognition method using block-based curvelet features. The 3D surface of the face (depth map) is computed from the stereo face images using a stereo vision technique. Statistical measures such as mean, standard deviation, variance and entropy are extracted from each block of the curvelet subband for both depth and intensity images independently. In order to compute the decision score, the KNN classifier is employed independently for both intensity an...
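The per-block statistics named in the abstract can be sketched as follows. The flat list of coefficients and the coarse 8-bin histogram used for entropy are illustrative assumptions, not the paper's actual subband layout.

```python
import math

def block_statistics(block):
    """Mean, standard deviation, variance, and Shannon entropy of one subband block.

    `block` is a flat list of coefficient values; entropy is computed from a
    coarse 8-bin histogram of the block (an illustrative choice).
    """
    n = len(block)
    mean = sum(block) / n
    variance = sum((v - mean) ** 2 for v in block) / n
    std = math.sqrt(variance)
    lo, hi = min(block), max(block)
    width = (hi - lo) / 8 or 1.0          # avoid zero width for constant blocks
    counts = [0] * 8
    for v in block:
        counts[min(int((v - lo) / width), 7)] += 1
    entropy = -sum((c / n) * math.log2(c / n) for c in counts if c)
    return mean, std, variance, entropy

mean, std, var, ent = block_statistics([1.0, 2.0, 2.0, 3.0, 5.0, 8.0, 8.0, 9.0])
```

Concatenating these four numbers over all blocks of all subbands, for depth and intensity images separately, yields the kind of feature vector a KNN classifier can then score.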
Li, Weihong; Liu, Lijuan; Gong, Weiguo
Support vector machine (SVM) has been proved to be a powerful tool for face recognition. The generalization capacity of SVM depends on a model with optimal hyperparameters. The computational cost of SVM model selection results in application difficulty in face recognition. In order to overcome this shortcoming, we utilize the advantages of uniform design, a space-filling design method grounded in uniform scattering theory, to seek optimal SVM hyperparameters. We then propose a face recognition scheme based on SVM with the optimal model, obtained by replacing the grid and gradient-based methods with uniform design. The experimental results on the Yale and PIE face databases show that the proposed method significantly improves the efficiency of SVM model selection.
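The idea of replacing an exhaustive grid with a uniform design can be sketched minimally. The good-lattice-point construction, the hyperparameter ranges, and the mock validation error below are all illustrative assumptions; a real system would train and cross-validate an SVM at each candidate point instead of calling the mock function.

```python
import math

def uniform_design_points(n, generator):
    """Good-lattice-point uniform design on the unit square: n runs, 2 factors.

    `generator` must be coprime with n; this standard construction spreads
    candidate points more evenly than a coarse grid of the same size.
    """
    return [((i + 0.5) / n, ((i * generator) % n + 0.5) / n) for i in range(n)]

def to_hyperparams(u, v):
    """Map unit-square coordinates to log-scaled SVM hyperparameters C and gamma."""
    C = 10 ** (-2 + 6 * u)        # C spans [1e-2, 1e4] (assumed range)
    gamma = 10 ** (-5 + 5 * v)    # gamma spans [1e-5, 1] (assumed range)
    return C, gamma

def mock_validation_error(C, gamma):
    """Stand-in for cross-validation error; replace with actual SVM training."""
    return (math.log10(C) - 1) ** 2 + (math.log10(gamma) + 2) ** 2

candidates = [to_hyperparams(u, v) for u, v in uniform_design_points(13, 5)]
best_C, best_gamma = min(candidates, key=lambda p: mock_validation_error(*p))
```

With 13 runs this covers the two-dimensional search space that a 13-point axis-aligned grid could only sample along one diagonal slice, which is the efficiency gain the abstract claims.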
Ho, Huy Tho; Chellappa, Rama
One of the key challenges for current face recognition techniques is how to handle pose variations between the probe and gallery face images. In this paper, we present a method for reconstructing the virtual frontal view from a given nonfrontal face image using Markov random fields (MRFs) and an efficient variant of the belief propagation algorithm. In the proposed approach, the input face image is divided into a grid of overlapping patches, and a globally optimal set of local warps is estimated to synthesize the patches at the frontal view. A set of possible warps for each patch is obtained by aligning it with images from a training database of frontal faces. The alignments are performed efficiently in the Fourier domain using an extension of the Lucas-Kanade algorithm that can handle illumination variations. The problem of finding the optimal warps is then formulated as a discrete labeling problem using an MRF. The reconstructed frontal face image can then be used with any face recognition technique. The two main advantages of our method are that it does not require manually selected facial landmarks or head pose estimation. In order to improve the performance of our pose normalization method in face recognition, we also present an algorithm for classifying whether a given face image is at a frontal or nonfrontal pose. Experimental results on different datasets are presented to demonstrate the effectiveness of the proposed approach.
Zubair, A. F.; Abu Mansor, M. S.
Computer Aided Process Planning (CAPP) is the bridge between CAD and CAM, and pre-processing of the CAD data in the CAPP system is essential. For a CNC turning part, the conical faces of the part model inevitably need to be recognised besides cylindrical and planar faces. As the sine and cosine structure of the cone radius differs between models, face identification in automatic feature recognition of the part model needs special attention. This paper focuses on hole features on conical faces that can be detected by the CAD solid modeller ACIS via a SAT file. Detection algorithms for face topology were generated and compared. The study shows different face setups for similar conical part models with different hole-type features. Three types of holes were compared, and the differences between merged and unmerged faces were studied.
Alim, M. Affan; Baig, Misbah M.; Mehboob, Shahzain; Naseem, Imran
In this paper, we propose a framework for a low-cost secure electronic voting system based on face recognition. Essentially, Local Binary Patterns (LBP) are used for face feature characterization in texture format, followed by chi-square-based classification of the images. Two parallel systems are developed, based on smartphone and web applications, for the face learning and verification modules. The proposed system has two-tier security, using a person ID followed by face verification, and a class-specific threshold is associated for controlling the security level of face verification. Our system is evaluated on three standard databases and one real home-based database and achieves satisfactory recognition accuracies. Consequently, our proposed system provides a secure, hassle-free voting system that is less intrusive compared with other biometrics.
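The LBP-plus-chi-square pipeline can be sketched in a few lines. The 3x3 toy image and the two tiny histograms are assumptions for illustration, not the paper's data; a full system would compute LBP codes over every pixel, histogram them per region, and compare concatenated histograms.

```python
def lbp_code(image, r, c):
    """8-neighbour local binary pattern code of pixel (r, c) of a 2-D list.

    Each neighbour at least as bright as the centre contributes one bit.
    """
    centre = image[r][c]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if image[r + dr][c + dc] >= centre:
            code |= 1 << bit
    return code

def chi_square(h1, h2):
    """Chi-square distance between two LBP histograms (smaller = more similar)."""
    return sum((a - b) ** 2 / (a + b) for a, b in zip(h1, h2) if a + b > 0)

img = [[10, 20, 30],
       [40, 50, 60],
       [70, 80, 90]]
code = lbp_code(img, 1, 1)   # pattern of the centre pixel
```

Verification then reduces to thresholding the chi-square distance between the probe's histogram and the enrolled template, with the class-specific threshold controlling the security level.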
Siregar, S. T. M.; Syahputra, M. F.; Rahmat, R. F.
Recognizing one single face does not take long to process, but if we implement an attendance or security system in a company with many faces to be recognized, it will take a long time. Cloud computing is a computing service performed not on a local device but on a data center infrastructure connected to the internet. Cloud computing also provides a scalability solution, as it can increase the resources needed when processing larger amounts of data. This research applies eigenfaces, and the collection of training data is done using the REST concept to provide resources, so the server can process the data according to the existing stages. After this research and development, it can be concluded that face recognition can be implemented with eigenfaces, applying the REST concept as an endpoint for giving or receiving the information used as a resource in model formation.
Fakhir, M. M.; Woo, W. L.; Chambers, J. A.; Dlay, S. S.
Pose variance is an important research topic in face recognition. The alteration of distance parameters across face features under varying pose is challenging. We provide a solution to this problem using perspective projection for variant-pose face recognition. Our method infers the intrinsic camera parameters of the image, which enable the projection of the image plane into 3D. After this, face box tracking and centre-of-eyes detection can be performed using our novel technique to verify the virtual face feature measurements. The coordinate system of the perspective projection for face tracking allows the holistic dimensions of the face to be fixed in different orientations. The training of frontal images and the remaining poses on the FERET database determines the distance from the centre of the eyes to the corner of the face box. The recognition system compares the gallery of images against different poses. The system initially utilises information on the position of both eyes, then focuses principally on the closest eye in order to gather data with greater reliability. Differentiation between the distances and positions of the right and left eyes is a unique feature of our work, with our algorithm outperforming other state-of-the-art algorithms, thus enabling stable measurement across pose variation for each individual.
de Gelder, B.; Pourtois, G.R.C.
Neuropsychological data indicate that face processing could be distributed among two functionally and anatomically distinct mechanisms, one specialised for detection and the other aimed at recognition (de Gelder & Rouw, 2000; 2001). These two mechanisms may be implemented in different interacting
In this paper, the performance of the proposed Convolutional Neural Network (CNN) is tested against three well-known image recognition methods: Principal Component Analysis (PCA), Local Binary Patterns Histograms (LBPH) and K-Nearest Neighbour (KNN). In our experiments, the overall recognition accuracy of PCA, LBPH, KNN and the proposed CNN is demonstrated. All the experiments were implemented on the ORL database and the obtained experimental results are shown and evaluated. This face database consists of 400 images of 40 different subjects (40 classes, 10 images per class). The experimental results show that LBPH provides better results than PCA and KNN. These experimental results on the ORL database demonstrate the effectiveness of the proposed method for face recognition. For the proposed CNN we have obtained a best recognition accuracy of 98.3%. The proposed method based on CNN outperforms the state-of-the-art methods.
D. Sathish Kumar
Face recognition is one of the intensive areas of research in computer vision and pattern recognition, much of it focused on recognition of faces under varying facial expressions and pose variations. The constrained optical flow algorithm discussed in this paper recognizes facial images involving various expressions based on motion vector computation. In this paper, an optical flow computation algorithm is proposed that processes frames of varying facial gestures and integrates them with a synthesized image in a probabilistic environment. A histogram equalization technique has also been used to overcome the effect of illumination while capturing the input data with camera devices; it also enhances the contrast of the image for better processing. The experimental results confirm that the proposed face recognition system is robust and recognizes facial images under varying expressions and pose variations more accurately.
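The histogram equalization step mentioned above is a standard transform and can be sketched on a flat list of grey levels. The toy pixel list is an assumption for illustration; real images would be 2-D arrays.

```python
def equalize(pixels, levels=256):
    """Histogram equalization of a flat list of integer grey levels.

    Maps each level through the cumulative distribution so that the output
    uses the full dynamic range, boosting contrast in dim images.
    """
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, running = [], 0
    for h in hist:
        running += h
        cdf.append(running)
    cdf_min = next(c for c in cdf if c > 0)   # first non-zero CDF value
    if n == cdf_min:
        return pixels[:]                      # constant image: nothing to equalize
    return [round((cdf[p] - cdf_min) * (levels - 1) / (n - cdf_min)) for p in pixels]

out = equalize([50, 50, 60, 70, 200])   # dim, low-contrast toy "image"
```

The five clustered input levels are stretched across the full 0..255 range, which is exactly the illumination-compensation effect the abstract relies on.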
Clemmensen, Line Katrine Harder; Gomez, David Delgado; Ersbøll, Bjarne Kjær
The accuracy of data classification methods depends considerably on the data representation and on the selected features. In this work, the elastic net model selection is used to identify meaningful and important features in face recognition. Modelling the characteristics which distinguish one person from another using only subsets of features will both decrease the computational cost and increase the generalization capacity of the face recognition algorithm. Moreover, identifying which are the features that better discriminate between persons will also provide a deeper understanding of the face recognition problem. The elastic net model is able to select a subset of features with low computational effort compared to other state-of-the-art feature selection methods. Furthermore, the fact that the number of features usually is larger than the number of images in the data base makes feature selection techniques such as forward selection or lasso regression become inadequate. In the experimental section, the performance of the elastic net model is compared with geometrical and color based algorithms widely used in face recognition such as Procrustes nearest neighbor, Eigenfaces, or Fisher...
search facial components, identify a gestalt face and compare it to a stored set of facial characteristics of known human faces. ... theorize that a face is not merely a set of facial features but is rather something meaningful in its form. This is consistent with the Gestalt theory that an image is seen in its entirety, not by its individual parts. Hence, the "gestalt face" refers to a holistic representation of the face. Gestalt's theory
Prosopagnosia has been considered for a long period of time as the most important and almost exclusive disorder in the recognition of familiar people. In recent years, however, this conviction has been undermined by the description of patients showing a concomitant defect in the recognition of familiar faces and voices as a consequence of lesions encroaching upon the right anterior temporal lobe (ATL). These new data have obliged researchers to reconsider, on one hand, the construct of 'associative prosopagnosia' and, on the other hand, current models of people recognition. A systematic review of the patterns of familiar people recognition disorders observed in patients with right and left ATL lesions has shown that in patients with right ATL lesions face familiarity feelings and the retrieval of person-specific semantic information from faces are selectively affected, whereas in patients with left ATL lesions the defect selectively concerns famous people naming. Furthermore, some patients with right ATL lesions and intact face familiarity feelings show a defect in the retrieval of person-specific semantic knowledge that is greater from faces than from names. These data are at variance with current models assuming: (a) that familiarity feelings are generated at the level of person identity nodes (PINs) where information processed by various sensory modalities converges, and (b) that PINs provide a modality-free gateway to a single semantic system, where information about people is stored in an amodal format. They suggest, on the contrary: (a) that familiarity feelings are generated at the level of modality-specific recognition units; (b) that face and voice recognition units are represented more in the right than in the left ATL; (c) that the right ATL mainly stores person-specific information based on a convergence of perceptual information, whereas the left ATL represents verbally mediated person-specific information.
Blank, Helen; Anwander, Alfred; von Kriegstein, Katharina
Currently, there are two opposing models for how voice and face information is integrated in the human brain to recognize person identity. The conventional model assumes that voice and face information is only combined at a supramodal stage (Bruce and Young, 1986; Burton et al., 1990; Ellis et al., 1997). An alternative model posits that areas encoding voice and face information also interact directly and that this direct interaction is behaviorally relevant for optimizing person recognition (von Kriegstein et al., 2005; von Kriegstein and Giraud, 2006). To disambiguate between the two different models, we tested for evidence of direct structural connections between voice- and face-processing cortical areas by combining functional and diffusion magnetic resonance imaging. We localized, at the individual subject level, three voice-sensitive areas in anterior, middle, and posterior superior temporal sulcus (STS) and face-sensitive areas in the fusiform gyrus [fusiform face area (FFA)]. Using probabilistic tractography, we show evidence that the FFA is structurally connected with voice-sensitive areas in STS. In particular, our results suggest that the FFA is more strongly connected to middle and anterior than to posterior areas of the voice-sensitive STS. This specific structural connectivity pattern indicates that direct links between face- and voice-recognition areas could be used to optimize human person recognition.
MUHAMMAD EHSAN RANA
The objective of this research is to study the effects of image enhancement techniques on the face recognition performance of wearable gadgets, with an emphasis on recognition rate. In this research, a number of image enhancement techniques are selected, including brightness normalization, contrast normalization, sharpening, smoothing, and various combinations of these. Subsequently, test images are obtained from the AT&T database and the Yale Face Database B to investigate the effect of these image enhancement techniques under various conditions, such as changes of illumination and of face orientation and expression. The evaluation of the data collected during this research revealed that the effect of image pre-processing techniques on face recognition highly depends on the illumination conditions under which the images are taken. The benefit of applying image enhancement techniques to face images is best seen when there is high variation of illumination among images. Results also indicate that the highest recognition rate is achieved when images are taken under low light conditions and image contrast is enhanced using the histogram equalization technique, after which image noise is reduced using a median smoothing filter. Additionally, the combination of contrast normalization and a mean smoothing filter shows good results in all scenarios. Results obtained from the test cases illustrate up to 75% improvement in face recognition rate when image enhancement is applied to images in the given scenarios.
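The median smoothing step used after equalization can be sketched in one dimension. The window size, edge-replication padding, and toy signal are illustrative assumptions; image filtering would use a 2-D window but the principle is identical.

```python
def median_filter_1d(signal, window=3):
    """Median smoothing of a 1-D signal with edge replication.

    Each output sample is the median of a sliding window, which removes
    impulse noise without blurring edges the way a mean filter does.
    """
    half = window // 2
    padded = [signal[0]] * half + list(signal) + [signal[-1]] * half
    out = []
    for i in range(len(signal)):
        win = sorted(padded[i:i + window])
        out.append(win[half])
    return out

smoothed = median_filter_1d([10, 10, 255, 10, 10])   # single bright impulse
```

The isolated 255 impulse is removed entirely, which is why the abstract pairs a median filter with histogram equalization: equalization amplifies noise along with contrast, and the median filter suppresses it again.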
Age-related face recognition deficits are characterized by high false alarms to unfamiliar faces, are not as pronounced for other complex stimuli, and are only partially related to general age-related impairments in cognition. This paper reviews some of the underlying processes likely to be implicated in these deficits by focusing on areas where contradictions abound, as a means to highlight avenues for future research. Research pertaining to the three following hypotheses is presented: (i) perceptual deterioration, (ii) encoding of configural information, and (iii) difficulties in recollecting contextual information. The evidence surveyed provides support for the idea that all three factors are likely to contribute, under certain conditions, to the deficits in face recognition seen in older adults. We discuss how these different factors might interact in the context of a generic framework of the different stages implicated in face recognition. Several suggestions for future investigations are outlined.
Wang, Ziqiang; Sun, Xia; Sun, Lijun; Huang, Yuchun
Dimensionality reduction is a key problem in face recognition due to the high dimensionality of face images. To effectively cope with this problem, a novel dimensionality reduction algorithm called semisupervised kernel marginal Fisher analysis (SKMFA) for face recognition is proposed in this paper. SKMFA can make use of both labeled and unlabeled samples to learn the projection matrix for nonlinear dimensionality reduction. Meanwhile, it can successfully avoid the singularity problem by not calculating the matrix inverse. In addition, in order to make the nonlinear structure captured by the data-dependent kernel consistent with the intrinsic manifold structure, a manifold-adaptive nonparametric kernel is incorporated into the learning process of SKMFA. Experimental results on three face image databases demonstrate the effectiveness of our proposed algorithm.
Gordon, Gaile G.
This paper explores the representation of the human face by features based on the curvature of the face surface. Curvature captures many features necessary to accurately describe the face, such as the shape of the forehead, jawline, and cheeks, which are not easily detected from standard intensity images. Moreover, the value of curvature at a point on the surface is also viewpoint invariant. Until recently, range data of high enough resolution and accuracy to perform useful curvature calculations on the scale of the human face had been unavailable. Although several researchers have worked on the problem of interpreting range data from curved (although usually highly geometrically structured) surfaces, the main approaches have centered on segmentation by the signs of mean and Gaussian curvature, which have not proved sufficient in themselves for the case of the human face. This paper details the calculation of principal curvature for a particular data set, the calculation of general surface descriptors based on curvature, and the calculation of face-specific descriptors based both on curvature features and a priori knowledge about the structure of the face. These face-specific descriptors can be incorporated into many different recognition strategies. A system that implements one such strategy, depth template comparison, giving recognition rates between 80% and 90%, is described.
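The relationship between the principal curvatures and the mean/Gaussian curvatures used for segmentation can be stated in a few lines. The sign convention in the coarse labelling below is one common choice, an assumption for illustration rather than the paper's exact classification scheme.

```python
def gaussian_mean_curvature(k1, k2):
    """Gaussian (K) and mean (H) curvature from the two principal curvatures."""
    return k1 * k2, (k1 + k2) / 2.0

def surface_type(k1, k2):
    """Coarse local shape label from the signs of K and H (one common convention)."""
    K, H = gaussian_mean_curvature(k1, k2)
    if K > 0:
        return "peak" if H < 0 else "pit"
    if K < 0:
        return "saddle"
    return "flat or ridge/valley"

label = surface_type(-0.3, 0.2)   # opposite-sign principal curvatures
```

Because only eight sign combinations of (K, H) exist, such segmentation is necessarily coarse, which is consistent with the paper's observation that sign-based segmentation alone is insufficient for faces.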
Face recognition systems are now being used in many applications such as border crossings, banks, and mobile payments. The wide-scale deployment of facial recognition systems has attracted intensive attention to the reliability of face biometrics against spoof attacks, where a photo, a video, or a 3D mask of a genuine user's face can be used to gain illegitimate access to facilities or services. Though several face anti-spoofing or liveness detection methods (which determine at the time of capture whether a face is live or spoof) have been proposed, the issue is still unsolved due to the difficulty of finding discriminative and computationally inexpensive features and methods for spoof attacks. In addition, existing techniques use the whole face image or complete video for liveness detection. However, often certain face regions (video frames) are redundant or correspond to the clutter in the image (video), thus generally leading to low performance. Therefore, we propose seven novel methods to find discriminative image patches, which we define as regions that are salient, instrumental, and class-specific. Four well-known classifiers, namely, support vector machine (SVM), Naive Bayes, Quadratic Discriminant Analysis (QDA), and Ensemble, are then used to distinguish between genuine and spoof faces using a voting-based scheme. Experimental analysis on two publicly available databases (Idiap REPLAY-ATTACK and CASIA-FASD) shows promising results compared to existing works.
Solomon-Harris, Lily M; Mullin, Caitlin R; Steeves, Jennifer K E
The human cortical system for face perception is comprised of a network of connected regions including the middle fusiform gyrus ("fusiform face area" or FFA), the inferior occipital cortex ("occipital face area" or OFA), and the superior temporal sulcus. The traditional hierarchical feedforward model of visual processing suggests information flows from early visual cortex to the OFA for initial face feature analysis to higher order regions including the FFA for identity recognition. However, patient data suggest an alternative model. Patients with acquired prosopagnosia, an inability to visually recognize faces, have been documented with lesions to the OFA but who nevertheless show face-selective activation in the FFA. Moreover, their ability to categorize faces remains intact. This suggests that the FFA is not solely responsible for face recognition and the network is not strictly hierarchical, but may be organized in a reverse hierarchical fashion. We used transcranial magnetic stimulation (TMS) to temporarily disrupt processing in the OFA in neurologically-intact individuals and found participants' ability to categorize intact versus scrambled faces was unaffected, however face identity discrimination was significantly impaired. This suggests that face categorization but not recognition can occur without the "earlier" OFA being online and indicates that "lower level" face category processing may be assumed by other intact face network regions such as the FFA. These results are consistent with the patient data and support a non-hierarchical, global-to-local model with re-entrant connections between the OFA and other face processing areas. Copyright © 2013 Elsevier Inc. All rights reserved.
Ge, Liezhong; Anzures, Gizelle; Wang, Zhe; Kelly, David J.; Pascalis, Olivier; Quinn, Paul C.; Slater, Alan M.; Yang, Zhiliang; Lee, Kang
Children's recognition of familiar own-age peers was investigated. Chinese children (4-, 8-, and 14-year-olds) were asked to identify their classmates from photographs showing the entire face, the internal facial features only, the external facial features only, or the eyes, nose, or mouth only. Participants from all age groups were familiar with…
Fagertun, Jens; Gomez, David Delgado; Ersbøll, Bjarne Kjær
Abstract—In this paper, a novel algorithm for facial recognition is proposed. The technique combines the color texture and geometrical configuration provided by face images. Landmarks and pixel intensities are used by Principal Component Analysis and Fisher Linear Discriminant Analysis to associa...... as an accurate and robust tool for facial identification and unknown detection....
Stewart, Mary E.; McAdam, Clair; Ota, Mitsuhiko; Peppe, Sue; Cleland, Joanne
The present study reports on a new vocal emotion recognition task and assesses whether people with autism spectrum conditions (ASC) perform differently from typically developed individuals on tests of emotional identification from both the face and the voice. The new test of vocal emotion contained trials in which the vocal emotion of the sentence…
Dutta, A.; Veldhuis, Raymond N.J.; Spreeuwers, Lieuwe Jan
In this paper, we propose a non-frontal model based approach which ensures that a face recognition system always gets to compare images having similar view (or pose). This requires a virtual suspect reference set that consists of non-frontal suspect images having pose similar to the surveillance
Rahim, R.; Afriliansyah, T.; Winata, H.; Nofriansyah, D.; Ratnadewi; Aryza, S.
Face identification systems are developing rapidly, and these developments drive the advancement of biometric-based identification systems with high accuracy. However, developing a good face recognition system that achieves high accuracy remains difficult. Human faces have diverse expressions and attribute changes such as eyeglasses, mustache, beard and others. Fisher Linear Discriminant (FLD) is a class-specific method that separates facial images into classes, maximizing the distance between classes while minimizing the distance within classes, so as to produce better classification.
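The FLD idea described above can be sketched with an off-the-shelf implementation. This is a minimal illustration on synthetic data (the data, dimensions and class means are invented for the example), not the system described in the abstract:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical data: 20 "face" vectors (flattened 8x8 images), 2 classes
# with different means standing in for two identities.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (10, 64)),
               rng.normal(2.0, 1.0, (10, 64))])
y = np.array([0] * 10 + [1] * 10)

# FLD projects samples onto axes that maximize between-class scatter
# relative to within-class scatter (one axis for a 2-class problem).
fld = LinearDiscriminantAnalysis(n_components=1)
Z = fld.fit_transform(X, y)
print(Z.shape)  # (20, 1)
```

On this well-separated toy data the single discriminant axis suffices to classify both groups.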
Dutta, A.; Veldhuis, Raymond N.J.; Spreeuwers, Lieuwe Jan; Meuwly, Didier
Recently, it has been shown that performance of a face recognition system depends on the quality of both face images participating in the recognition process: the reference and the test image. In the context of forensic face recognition, this observation has two implications: a) the quality of the
It is a common belief that we are experts in the processing of famous faces. Although our ability to quickly and accurately recognise pictures of famous faces is quite impressive, we might not really process famous faces as faces per se, but as 'icons', i.e. well-known still pictures of famous faces. This assumption was tested in two parallel experiments employing a recognition task on famous, but personally unfamiliar, and on personally familiar faces. Both tests included (a) original, 'iconic' pictures, (b) slightly modified versions of familiar pictures, and (c) rather unfamiliar pictures of familiar persons. Participants (n = 70 + 70) indeed recognised original pictures of famous and personally familiar people very accurately, while performing poorly in recognising slightly modified, as well as unfamiliar, versions of famous, but not personally familiar, persons. These results indicate that the successful processing of famous faces may depend on icons embedded in society but not on the face as such.
Zhang, De-xin; An, Peng; Zhang, Hao-xiang
In this paper, we propose a video searching system that utilizes face recognition as its search indexing feature. As the use of video cameras has greatly increased in recent years, face recognition is a perfect fit for searching for targeted individuals within the vast amount of video data. However, the performance of such searching depends on the quality of the face images recorded in the video signals. Since surveillance video cameras record videos without fixed postures of the subjects, face occlusion is very common in everyday video. The proposed system builds a model for occluded faces using fuzzy principal component analysis (FPCA) and reconstructs the human faces with the available information. Experimental results show that the system is highly efficient in processing real-life videos and is very robust to various kinds of face occlusions. Hence it can relieve human reviewers from watching the monitors and greatly enhances efficiency as well. The proposed system has been installed and applied in various environments and has already demonstrated its power by helping solve real cases.
Nearest subspace (NS) classification based on linear regression is a straightforward and efficient method for face recognition. A recently developed NS method, linear regression-based classification (LRC), uses downsampled face images as features to perform face recognition. The basic assumption behind this kind of method is that samples from a certain class lie on their own class-specific subspace. Since there are only a few training samples for each individual class, the small sample size (SSS) problem arises, which gives rise to misclassification in previous NS methods. In this paper, we propose two novel LRC methods using the idea that every class-specific subspace has its unique basis vectors. Thus, we consider that each class-specific subspace is spanned by two kinds of basis vectors: the common basis vectors shared by many classes and the class-specific basis vectors owned by one class only. Based on this concept, two classification methods, namely robust LRC 1 and 2 (RLRC 1 and 2), are given to achieve more robust face recognition. Unlike some previous methods which need to extract class-specific basis vectors, the proposed methods are developed merely based on the existence of the class-specific basis vectors, without actually calculating them. Experiments on three well-known face databases demonstrate very good performance of the new methods compared with other state-of-the-art methods.
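A minimal sketch of the baseline LRC scheme that this abstract builds on (not the proposed RLRC variants): regress the probe onto each class's training samples and pick the class with the smallest residual. The synthetic class subspaces are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data: each class lies on its own 3-D subspace of R^50.
def make_class(gen, n):
    B = gen.normal(size=(3, 50))        # class-specific basis
    return gen.normal(size=(n, 3)) @ B  # n samples in that subspace

train = {c: make_class(rng, 8) for c in range(3)}

def lrc_predict(y):
    """Linear Regression Classification: regress the probe y onto each
    class's training samples and pick the class with least residual."""
    best, best_err = None, np.inf
    for c, X in train.items():
        beta, *_ = np.linalg.lstsq(X.T, y, rcond=None)  # y ~ X.T @ beta
        err = np.linalg.norm(y - X.T @ beta)
        if err < best_err:
            best, best_err = c, err
    return best

probe = train[1][0] + rng.normal(scale=1e-3, size=50)  # noisy class-1 sample
print(lrc_predict(probe))
```

The probe's residual against its own class subspace is tiny, while residuals against the other classes' subspaces stay large, so the nearest-subspace rule recovers class 1.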
This study was aimed at determining the conditions in which eye contact may improve recognition memory for faces. Different stimuli and procedures were tested in four experiments. The effect of gaze direction on memory was found when a simple “yes-no” recognition task was used, but not when the recognition task was more complex (e.g., including “Remember-Know” judgements, cf. Experiment 2, or confidence ratings, cf. Experiment 4). Moreover, even when a “yes-no” recognition paradigm was used, the effect occurred with one series of stimuli (cf. Experiment 1) but not with another (cf. Experiment 3). The difficulty of producing the positive effect of gaze direction on memory is discussed.
Jassim, Sabah A.; Sellahewa, Harin
Automatic face recognition (AFR) is a challenging task; the face is increasingly becoming the preferred biometric trait for identification and has the potential of becoming an essential tool in the fight against crime and terrorism. Closed-circuit television (CCTV) cameras have increasingly been used over the last few years for surveillance in public places such as airports, train stations and shopping centers. They are used to detect and prevent crime, shoplifting, public disorder and terrorism. The work of law-enforcement and intelligence agencies is becoming more reliant on the use of databases of biometric data for large sections of the population. The face is one of the most natural biometric traits that can be used for identification and surveillance. However, variations in lighting conditions, facial expressions, face size and pose are a great obstacle to AFR. This paper is concerned with using wavelet-based face recognition schemes in the presence of variations of expression and illumination. In particular, we investigate the use of a combination of wavelet frequency channels for multi-stream face recognition, using various wavelet subbands as different face signal streams. The proposed schemes extend our recently developed face verification scheme for implementation on mobile devices. We present experimental results on the performance of our proposed schemes for a number of face databases, including a new AV database recorded on a PDA. By analyzing the various experimental data, we demonstrate that the multi-stream approach is more robust against variations in illumination and facial expression than the previous single-stream approach.
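The subband "streams" can be illustrated with a generic one-level 2-D Haar decomposition; this is a textbook transform, not the authors' exact scheme, and the 8x8 "face" is a stand-in:

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar decomposition into LL, LH, HL, HH subbands.
    Each subband is half the size of the input in both dimensions."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0  # vertical average
    d = (img[0::2, :] - img[1::2, :]) / 2.0  # vertical difference
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0     # low-low: coarse face
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0     # horizontal detail
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0     # vertical detail
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0     # diagonal detail
    return LL, LH, HL, HH

face = np.arange(64.0).reshape(8, 8)  # stand-in for a face image
subbands = haar2d(face)
print([s.shape for s in subbands])  # four 4x4 subbands
```

In a multi-stream scheme, each subband would feed a separate recognition stream whose scores are later fused.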
Sparse representation based on compressed sensing theory has been widely used in the field of face recognition and has achieved good recognition results, but face feature extraction based on sparse representation is too simple, and the sparse coefficients are not sufficiently sparse. In this paper, we improve the classification algorithm by fusing sparse representation with Gabor features; the improved Gabor-feature algorithm overcomes the problem of large vector dimension, reduces the computation and storage cost, and enhances the robustness of the algorithm to changes in the environment. Since the classification efficiency of sparse representation is determined by the collaborative representation, we simplify the sparse constraint based on the L1 norm to a least-squares constraint, which keeps the sparse coefficients positive and reduces the complexity of the algorithm. Experimental results show that the proposed method is robust to illumination, facial expression and pose variations in face recognition, and that the recognition rate of the algorithm is improved.
Fatima M. Felisberti
The effect of the spatial location of faces in the visual field during brief, free-viewing encoding on subsequent face recognition is not known. This study addressed this question by tagging three groups of faces with cheating, cooperating or neutral behaviours and presenting them for encoding in two visual hemifields (upper vs. lower, or left vs. right). Participants then had to indicate if a centrally presented face had been seen before or not. Head and eye movements were free in all phases. Findings showed that the overall recognition of cooperators was significantly better than that of cheaters, and that it was better for faces encoded in the upper hemifield than in the lower hemifield, both in terms of a higher d' and a faster reaction time (RT). The d' for any given behaviour in the left and right hemifields was similar. The RT in the left hemifield did not vary with tagged behaviour, whereas the RT in the right hemifield was longer for cheaters than for cooperators. The results showed that memory biases in contextual face recognition were modulated by the spatial location of briefly encoded faces and are discussed in terms of scanning reading habits, top-left bias in lighting preference and peripersonal space.
Deep Neural Networks (DNNs) have established themselves as a dominant technique in machine learning. DNNs have been top performers on a wide variety of tasks including image classification, speech recognition, and face recognition.1-3 Convolutional neural networks (CNNs) have been used in nearly all of the top performing methods on the Labeled Faces in the Wild (LFW) dataset.3-6 In this talk and accompanying paper, I attempt to provide a review and summary of the deep learning techniques used in the state-of-the-art. In addition, I highlight the need for both larger and more challenging public datasets to benchmark these systems. Despite the ability of DNNs and autoencoders to perform unsupervised feature learning, modern facial recognition pipelines still require domain specific engineering in the form of re-alignment. For example, in Facebook's recent DeepFace paper, a 3D "frontalization" step lies at the beginning of the pipeline. This step creates a 3D face model for the incoming image and then uses a series of affine transformations of the fiducial points to "frontalize" the image. This step enables the DeepFace system to use a neural network architecture with locally connected layers without weight sharing as opposed to standard convolutional layers.6 Deep learning techniques combined with large datasets have allowed research groups to surpass human level performance on the LFW dataset.3, 5 The high accuracy (99.63% for FaceNet at the time of publishing) and utilization of outside data (hundreds of millions of images in the case of Google's FaceNet) suggest that current face verification benchmarks such as LFW may not be challenging enough, nor provide enough data, for current techniques.3, 5 There exist a variety of organizations with mobile photo sharing applications that would be capable of releasing a very large scale and highly diverse dataset of facial images captured on mobile devices. Such an "ImageNet for Face Recognition" would likely receive a warm
Ryo Kyung Lee
Faces convey various types of information such as identity, ethnicity, sex or emotion. We investigated whether the well-known other-race effect (ORE) is observable when facial information other than identity varies between test faces. First, in a race comparison task, German and Korean participants compared the ethnicity of two faces sharing similar identity information but differing in ethnicity. Participants reported which face looked more Asian or Caucasian. Their behavioral results showed that Koreans and Germans were equally good at discriminating ethnicity information in Asian and Caucasian faces. The nationality of participants, however, affected their eye-movement strategy when the test faces were shown sequentially, that is, when memory was involved. In the second study, we focused on the ORE in terms of recognition of facial expressions. Korean participants viewed Asian and Caucasian faces showing different facial expressions for 100 ms to 800 ms and reported the emotion of the faces. Surprisingly, under all three presentation times, Koreans were significantly better with Caucasian faces. These two studies suggest that the ORE does not appear in all recognition tasks involving other-race faces. Here, when identity information is not involved in the task, we are not better at discriminating ethnicity and facial expressions in same-race compared to other-race faces.
Alvi, Fahad Bashir; Pears, Russel
This study proposes a novel method for face recognition based on anthropometric features, using an integrated approach comprising a global model and personalized models. The system is aimed at situations where lighting, illumination, and pose variations cause problems in face recognition. A personalized model covers the individual aging patterns, while a global model captures general aging patterns in the database. We introduced a de-aging factor that de-ages each individual in the database test and training sets. We used the k-nearest-neighbor approach for building the personalized and global models, and regression analysis was applied to build the models. During the test phase, we resort to voting on different features. We used the FG-NET database for checking the results of our technique and achieved a 65 percent Rank-1 identification rate.
Zhao, Xiangdong; Wang, Kaiwei; Yang, Kailun; Hu, Weijian
It is highly important for visually impaired people (VIP) to be aware of the human beings around them, so correctly recognizing people in VIP-assisting apparatus provides great convenience. However, in classical face recognition technology, the faces used in training and prediction procedures are usually frontal, and acquiring face images requires subjects to get close to the camera so that a frontal face and adequate illumination are guaranteed. Meanwhile, the labels of faces are defined manually rather than automatically; most of the time, labels belonging to different classes need to be input one by one. These constraints hinder assistive applications for VIP in practice. In this article, a face recognition system for unconstrained environments is proposed. Specifically, it requires neither the frontal pose nor the uniform illumination demanded by previous algorithms. The contributions of this work lie in three aspects. First, real-time frontal-face synthesizing enhancement is implemented, and the synthesized frontal faces help to increase the recognition rate, as the experimental results show. Second, an RGB-D camera plays a significant role in our system: both color and depth information are utilized to achieve real-time face tracking, which not only raises the detection rate but also makes it possible to label faces automatically. Finally, we propose to use neural networks to train the face recognition system, with Principal Component Analysis (PCA) applied to pre-refine the input data. This system is expected to provide convenient help for VIP to get familiar with others and to enable them to recognize people once the system is sufficiently trained.
Farokhi, Sajad; Sheikh, U.U.; Flusser, Jan; Yang, Bo
Vol. 316, No. 1 (2015), pp. 234-245. ISSN 0020-0255. R&D Projects: GA ČR (CZ) GA13-29225S. Keywords: face recognition; Zernike moments; Hermite kernel; decision fusion; near infrared. Subject RIV: JD - Computer Applications, Robotics. Impact factor: 3.364, year: 2015. http://library.utia.cas.cz/separaty/2015/ZOI/flusser-0444205.pdf
A facial recognition system is fundamentally a computer application for the automatic identification of a person from a digitized image or a video source. A major cause of overall poor performance is the transformation in the appearance of the user due to aspects such as ageing, beard growth, sun-tan etc. To overcome this drawback, a self-update process has been developed in which the system learns the biometric attributes of the user every time the user interacts with the system, and the information gets updated automatically. Plastic surgery procedures yield a skilled and endurable means of enhancing facial appearance by correcting anomalies in the features and then treating the facial skin with the aim of a youthful look. When plastic surgery is performed on an individual, the features of the face undergo reconstruction either locally or globally. But the changes newly introduced by plastic surgery remain hard to model with the available face recognition systems, and they deteriorate the performance of face recognition algorithms. Hence facial plastic surgery changes the facial features to a large extent and thereby creates a significant challenge for face recognition systems. This work introduces a fresh multimodal biometric approach making use of novel techniques to boost recognition rate and security. The proposed method consists of various processes such as face segmentation using the Active Appearance Model (AAM), face normalization using a Kernel Density Estimate/Point Distribution Model (KDE-PDM), feature extraction using Local Gabor XOR Patterns (LGXP) and classification using Independent Component Analysis (ICA). Efficient techniques have been used in each phase of the FRAS in order to obtain improved results.
Li, Xiao-Xin; Dai, Dao-Qing; Zhang, Xiao-Fei; Ren, Chuan-Xian
Face recognition with occlusion is common in the real world. Inspired by work on structured sparse representation, we explore the structure of the error incurred by occlusion from two aspects: the error morphology and the error distribution. Since human beings recognize an occlusion mainly by its region shape or profile, without knowing accurately what the occlusion is, we argue that the shape of the occlusion is also an important feature. We propose a morphological graph model to describe the morphological structure of the error. Due to the uncertainty of the occlusion, the distribution of the error incurred by occlusion is also uncertain. However, we observe that the unoccluded part and the occluded part of the error, measured by the correntropy-induced metric, each follow an exponential distribution. Incorporating the two aspects of the error structure, we propose structured sparse error coding for face recognition with occlusion. Our extensive experiments demonstrate that the proposed method is more stable and has a higher breakdown point in dealing with occlusion problems in face recognition compared to related state-of-the-art methods, especially in extreme situations such as high-level occlusion and low feature dimension.
DeGutis, Joseph; Wilmer, Jeremy; Mercado, Rogelio J.; Cohan, Sarah
Although holistic processing is thought to underlie normal face recognition ability, widely discrepant reports have recently emerged about this link in an individual differences context. Progress in this domain may have been impeded by the widespread use of subtraction scores, which lack validity due to their contamination with control condition…
Jenkins, Rob; Lavie, Nilli; Driver, Jon
Incidental recognition memory for faces previously exposed as task-irrelevant distractors was assessed as a function of the attentional load of an unrelated task performed on superimposed letter strings at exposure. In Experiment 1, subjects were told to ignore the faces and either to judge the color of the letters (low load) or to search for an angular target letter among other angular letters (high load). A surprise recognition memory test revealed that despite the irrelevance of all faces at exposure, those exposed under low-load conditions were later recognized, but those exposed under high-load conditions were not. Experiment 2 found a similar pattern when both the high- and low-load tasks required shape judgments for the letters but made differing attentional demands. Finally, Experiment 3 showed that high load in a nonface task can significantly reduce even immediate recognition of a fixated face from the preceding trial. These results demonstrate that load in a nonface domain (e.g., letter shape) can reduce face recognition, in accord with Lavie's load theory. In addition to their theoretical impact, these results may have practical implications for eyewitness testimony.
Boom, B.J.; Beumer, G.M.; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.
In this paper we investigate the effect of image resolution on the error rates of a face verification system. We do not restrict ourselves to the face recognition algorithm only, but we also consider the face registration. In our face recognition system, the face registration is done by finding
R. Reena Rose
Texture descriptors have an important role in recognizing face images. However, almost all existing local texture descriptors use the nearest neighbours to encode a texture pattern around a pixel. But in face images, most pixels have characteristics similar to those of their nearest neighbours, because the skin covers a large area of a face and the skin tone in neighbouring regions is the same. Therefore this paper presents a general framework, called the Local Texture Description Framework, that uses only eight pixels at a certain distance from the referenced pixel, arranged either circularly or elliptically. Local texture description can be done using the foundation of any existing local texture descriptor. In this paper, the performance of the proposed framework is verified with three existing local texture descriptors, Local Binary Pattern (LBP), Local Texture Pattern (LTP) and Local Tetra Patterns (LTrPs), for five issues: facial expression, partial occlusion, illumination variation, pose variation and general recognition. Five benchmark databases, JAFFE, Essex, Indian faces, AT&T and Georgia Tech, are used for the experiments. Experimental results demonstrate that even with fewer patterns, the proposed framework achieves higher recognition accuracy than the base models.
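The base LBP encoding that such frameworks generalize can be sketched as follows; the `radius` parameter loosely mimics sampling pixels some distance from the reference pixel, and the toy image is invented for the example:

```python
import numpy as np

def lbp_code(img, r, c, radius=1):
    """8-bit Local Binary Pattern code for pixel (r, c): each of the 8
    neighbours at the given (chessboard) radius contributes a 1 bit if
    it is >= the centre pixel. A radius > 1 samples pixels at a
    distance from the reference pixel instead of nearest neighbours."""
    offsets = [(-radius, -radius), (-radius, 0), (-radius, radius),
               (0, radius), (radius, radius), (radius, 0),
               (radius, -radius), (0, -radius)]
    center = img[r, c]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr, c + dc] >= center:
            code |= 1 << bit
    return code

img = np.array([[10, 20, 30],
                [40, 50, 60],
                [70, 80, 90]])
print(lbp_code(img, 1, 1))  # -> 120: neighbours 60, 90, 80, 70 are >= 50
```

A face descriptor is then typically the histogram of these codes over image blocks, compared between faces with a histogram distance.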
Motivated by a recently proposed biologically inspired face recognition approach, we investigated the relation between human behavior and a computational model based on Fourier-Bessel (FB) spatial patterns. We measured human recognition performance on FB-filtered face images using an 8-alternative forced-choice method. Test stimuli were generated by converting the images from the spatial to the FB domain, filtering the resulting coefficients with a band-pass filter, and finally taking the inverse FB transformation of the filtered coefficients. The performance of the computational models was tested using a simulation of the psychophysical experiment. In the FB model, face images were first filtered by simulated V1-type neurons and later analyzed globally for their content of FB components. In general, there was higher human contrast sensitivity to radially than to angularly filtered images, but both functions peaked at the 11.3-16 frequency interval. The FB-based model presented similar behavior with regard to peak position and relative sensitivity, but had a wider frequency bandwidth and a narrower response range. The response patterns of two alternative models, based on local FB analysis and on raw luminance, strongly diverged from the human behavior patterns. These results suggest that human performance can be constrained by the type of information conveyed by polar patterns, and consequently that humans might use FB-like spatial patterns in face processing.
The aim of this work is to carry out a comparative study of face recognition methods that are suitable for unconstrained environments. The analyzed methods are selected by considering their performance in former comparative studies, in addition to being real-time, requiring just one image per person, and being fully online. The study analyzes two local-matching methods (histograms of LBP features and Gabor jet descriptors), one holistic method (generalized PCA), and two image-matching methods (SIFT-based and ERCF-based). The methods are compared using the FERET, LFW, UCHFaceHRI, and FRGC databases, which allows evaluating them in real-world conditions that include variations in scale, pose, lighting, focus, resolution, facial expression, accessories, makeup, occlusions, background and photographic quality. The main conclusions of this study are: there is a large dependence of the methods on the amount of face and background information included in the face images, and the performance of all methods decreases considerably with outdoor illumination. The analyzed methods are robust, to a large degree, to inaccurate alignment, face occlusions, and variations in expression. LBP-based methods are an excellent choice if we need real-time operation as well as high recognition rates.
Sassenrath, Claudia; Sassenberg, Kai; Ray, Devin G; Scheiter, Katharina; Jarodzka, Halszka
Two studies examined an unexplored motivational determinant of facial emotion recognition: observer regulatory focus. It was predicted that a promotion focus would enhance facial emotion recognition relative to a prevention focus, because the attentional strategies associated with a promotion focus enhance performance on well-learned or innate tasks such as facial emotion recognition. In Study 1, a promotion or a prevention focus was experimentally induced, and better facial emotion recognition was observed under a promotion focus than under a prevention focus. In Study 2, individual differences in chronic regulatory focus were assessed, and attention allocation was measured using eye tracking during the facial emotion recognition task. Results indicated that the positive relation between a promotion focus and facial emotion recognition is mediated by shorter fixation duration on the face, which reflects a pattern of attention allocation matched to the eager strategy of a promotion focus (i.e., striving to make hits). A prevention focus had an impact neither on perceptual processing nor on facial emotion recognition. Taken together, these findings demonstrate important mechanisms and consequences of observer motivational orientation for facial emotion recognition.
and verify the current findings by using other biometric modalities (like fingerprint, iris, more spectral images) and assess computational...stereo face imaging. I. INTRODUCTION Face recognition has relatively low accuracy compared to fingerprint recognition and iris recognition. To... biometric scores used in score fusion, the higher fusion recognition performance achieved. Zheng et al. recently had a brief survey on the
A discriminative and robust feature, the kernel-enhanced informative Gabor feature, is proposed in this paper for face recognition. Mutual information is applied to select a set of informative and nonredundant Gabor features, which are then further enhanced by kernel methods for recognition. Compared with one of the top-performing methods in the 2004 Face Verification Competition (FVC2004), our method demonstrates a clear advantage over existing methods in accuracy, computation efficiency, and memory cost. The proposed method has been fully tested on the FERET database using the FERET evaluation protocol. Significant improvements on three of the test data sets are observed. Compared with the classical Gabor wavelet-based approaches using a huge number of features, our method requires less than 4 milliseconds to retrieve a few hundred features. Due to the substantially reduced feature dimension, only 4 seconds are required to recognize 200 face images. The paper also unifies different Gabor filter definitions and proposes a training sample generation algorithm to reduce the effects caused by the unbalanced number of samples available in different classes.
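The mutual-information selection step can be sketched with scikit-learn's generic MI estimator standing in for the paper's procedure; the synthetic "Gabor" features and class structure are assumptions for the example:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(3)
# Hypothetical pool of 100 "Gabor" features for 200 samples, 2 classes;
# only the first 5 features actually depend on the class label.
y = rng.integers(0, 2, 200)
X = rng.normal(size=(200, 100))
X[:, :5] += 3.0 * y[:, None]  # make features 0-4 informative

# Estimate mutual information between each feature and the label,
# then keep only the most informative features.
mi = mutual_info_classif(X, y, random_state=0)
selected = np.argsort(mi)[-5:]
print(sorted(selected.tolist()))  # the informative features dominate
```

A full pipeline would additionally discard redundant features (those carrying the same information as already-selected ones) before the kernel-enhancement stage.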
Vetter, Volker; Giefing, Gerd-Juergen; Mai, Rudolf; Weisser, Hubert
We present a driver face recognition system for comfortable access control and individual settings of automobiles. The primary goals are the prevention of car thefts and heavy accidents caused by unauthorized use (joy-riders), as well as the increase of safety through optimal settings, e.g. of the mirrors and the seat position. The person sitting on the driver's seat is observed automatically by a small video camera in the dashboard. All he has to do is to behave cooperatively, i.e. to look into the camera. A classification system validates his access. Only after a positive identification, the car can be used and the driver-specific environment (e.g. seat position, mirrors, etc.) may be set up to ensure the driver's comfort and safety. The driver identification system has been integrated in a Volkswagen research car. Recognition results are presented.
Recent research on the crossmodal integration of visual and auditory perception suggests that evaluations of emotional information in one sensory modality may tend toward the emotional value generated in another sensory modality. This implies that the emotions elicited by musical stimuli can influence the perception of emotional stimuli presented in other sensory modalities through a top-down process. The aim of this work was to investigate how crossmodal perceptual processing influences emotional face recognition, and how potential modulation of this processing induced by music could be influenced by the subject's musical competence. We investigated how emotional face recognition processing could be modulated by listening to music and how this modulation varies according to the subjective emotional salience of the music and the listener's musical competence. The sample consisted of 24 participants: 12 professional musicians and 12 university students (non-musicians). Participants performed an emotional go/no-go task whilst listening to music by Albeniz, Chopin, or Mozart. The target stimuli were emotionally neutral facial expressions. We examined the N170 event-related potential (ERP) and behavioral responses (i.e., motor reaction time) to target recognition and musical emotional judgment. A linear mixed-effects model and a decision-tree learning technique were applied to N170 amplitudes and latencies. The main findings of the study were that musicians' behavioral responses and N170 are more affected by the emotional value of the music administered in the emotional go/no-go task, and that this bias is also apparent in responses to the non-target emotional face. This suggests that emotional information coming from multiple sensory channels activates a crossmodal integration process that depends upon the emotional salience of the stimuli and the listener's appraisal.
Ryan, Kaitlin F.; Gauthier, Isabel
When there is a gender effect, women perform better than men in face recognition tasks. Prior work has not documented a male advantage on a face recognition task, suggesting that women may outperform men at face recognition generally, either due to evolutionary reasons or the influence of social roles. Here, we question the idea that women excel at all face recognition and provide a proof of concept based on a face category for which men outperform women. We developed a test of face learning to measure individual differences with face categories for which men and women may differ in experience, using the faces of Barbie dolls and of Transformers. The results show a crossover interaction between subject gender and category, where men outperform women with Transformers' faces. We demonstrate that men can outperform women with some categories of faces, suggesting that explanations for a general face recognition advantage for women are in fact not needed. PMID:27923772
Presented in this paper is a novel system for face recognition that works well in the wild and that is based on ensembles of descriptors that utilize different preprocessing techniques. The power of our proposed approach is demonstrated on two datasets: the FERET dataset and the Labeled Faces in the Wild (LFW) dataset. On the FERET dataset, where the aim is identification, we use the angle distance. On the LFW dataset, where the aim is to verify a given match, we use the Support Vector Machine and Similarity Metric Learning. Our proposed system performs well on both datasets, obtaining, to the best of our knowledge, one of the highest performance rates published in the literature on the FERET dataset. Particularly noteworthy is the fact that these good results on both datasets are obtained without using additional training patterns. The MATLAB source of our best ensemble approach will be freely available at https://www.dei.unipd.it/node/2357.
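The angle-distance identification step described above can be sketched in a few lines. The gallery layout, identity names, and descriptor values below are illustrative assumptions, not the paper's actual ensemble descriptors:

```python
import numpy as np

def angle_distance(a, b):
    """Angle between two descriptor vectors; smaller means more similar."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def identify(probe, gallery):
    """Return the gallery identity whose descriptor is closest in angle.
    `gallery` maps identity -> descriptor vector."""
    return min(gallery, key=lambda name: angle_distance(probe, gallery[name]))

# Toy gallery of three identities with 4-D descriptors.
gallery = {
    "id_a": np.array([1.0, 0.0, 0.0, 0.0]),
    "id_b": np.array([0.0, 1.0, 0.0, 0.0]),
    "id_c": np.array([0.7, 0.7, 0.0, 0.1]),
}
probe = np.array([0.9, 0.1, 0.0, 0.0])
print(identify(probe, gallery))  # nearest identity by angle
```

Because the angle ignores vector magnitude, this distance is insensitive to global scaling of the descriptors, which is one reason it is often preferred over Euclidean distance for identification.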
To solve the problem of matching elements across different data collections, an improved coupled metric learning approach is proposed. First, we improve the supervised locality preserving projection algorithm and add its within-class and between-class information to coupled metric learning, yielding a novel coupled metric learning method. Furthermore, we extend this algorithm to nonlinear space, giving a kernel coupled metric learning method based on supervised locality preserving projection. In the kernel coupled metric learning approach, elements of the two different collections are mapped into a unified high-dimensional feature space by a kernel function, and generalized metric learning is then performed in this space. Experiments based on the Yale and CAS-PEAL-R1 face databases demonstrate that the proposed kernel coupled approach performs better in low-resolution and blurred face recognition and can reduce computing time; it is an effective metric method.
Chuk, Tim; Chan, Antoni B; Hsiao, Janet H
The hidden Markov model (HMM)-based approach for eye movement analysis is able to reflect individual differences in both spatial and temporal aspects of eye movements. Here we used this approach to understand the relationship between eye movements during face learning and recognition, and its association with recognition performance. We discovered holistic (i.e., mainly looking at the face center) and analytic (i.e., specifically looking at the two eyes in addition to the face center) patterns during both learning and recognition. Although for both learning and recognition, participants who adopted analytic patterns had better recognition performance than those with holistic patterns, a significant positive correlation between the likelihood of participants' patterns being classified as analytic and their recognition performance was only observed during recognition. Significantly more participants adopted holistic patterns during learning than recognition. Interestingly, about 40% of the participants used different patterns between learning and recognition, and among them 90% switched their patterns from holistic at learning to analytic at recognition. In contrast to the scan path theory, which posits that eye movements during learning have to be recapitulated during recognition for the recognition to be successful, participants who used the same or different patterns during learning and recognition did not differ in recognition performance. The similarity between their learning and recognition eye movement patterns also did not correlate with their recognition performance. These findings suggested that perceptuomotor memory elicited by eye movement patterns during learning does not play an important role in recognition. In contrast, the retrieval of diagnostic information for recognition, such as the eyes for face recognition, is a better predictor for recognition performance. Copyright © 2017 Elsevier Ltd. All rights reserved.
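As a rough illustration of the HMM-based scan pattern analysis, the sketch below scores a fixation sequence (ROIs coded 0 = face centre, 1 = left eye, 2 = right eye) under two hypothetical discrete HMMs, one "holistic" and one "analytic". The transition and emission values are invented for illustration, not fitted models from the study:

```python
import numpy as np

def forward_loglik(obs, start, trans, emit):
    """Scaled forward algorithm: log-likelihood of a discrete observation
    sequence under an HMM given start, transition, and emission matrices."""
    alpha = start * emit[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
        loglik += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return loglik

start = np.array([1/3, 1/3, 1/3])
emit = np.array([[0.9, 0.05, 0.05],      # near-identity emissions:
                 [0.05, 0.9, 0.05],      # hidden state i mostly emits ROI i
                 [0.05, 0.05, 0.9]])
holistic = np.array([[0.9, 0.05, 0.05],  # mostly stays on the face centre
                     [0.8, 0.1, 0.1],
                     [0.8, 0.1, 0.1]])
analytic = np.array([[0.4, 0.3, 0.3],    # moves freely between centre and eyes
                     [0.3, 0.4, 0.3],
                     [0.3, 0.3, 0.4]])
scanpath = [0, 1, 2, 1, 2, 0, 1]         # eye-heavy viewing pattern

# Classify the scanpath by which model explains it better.
label = ("analytic" if forward_loglik(scanpath, start, analytic, emit)
         > forward_loglik(scanpath, start, holistic, emit) else "holistic")
print(label)
```

Classifying each participant's sequences by model likelihood in this way is the general mechanism behind labelling viewing patterns as holistic or analytic.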
Kita, Yosuke; Gunji, Atsuko; Inoue, Yuki; Goto, Takaaki; Sakihara, Kotoe; Kaga, Makiko; Inagaki, Masumi; Hosokawa, Toru
It is assumed that children with autism spectrum disorders (ASD) have specificities for self-face recognition, which is known to be a basic cognitive ability for social development. In the present study, we investigated neurological substrates and potentially influential factors for self-face recognition of ASD patients using near-infrared spectroscopy (NIRS). The subjects were 11 healthy adult men, 13 normally developing boys, and 10 boys with ASD. Their hemodynamic activities in the frontal area and their scanning strategies (eye-movement) were examined during self-face recognition. Other factors such as ASD severities and self-consciousness were also evaluated by parents and patients, respectively. Oxygenated hemoglobin levels were higher in the regions corresponding to the right inferior frontal gyrus than in those corresponding to the left inferior frontal gyrus. In two groups of children these activities reflected ASD severities, such that the more serious ASD characteristics corresponded with lower activity levels. Moreover, higher levels of public self-consciousness intensified the activities, which were not influenced by the scanning strategies. These findings suggest that dysfunction in the right inferior frontal gyrus areas responsible for self-face recognition is one of the crucial neural substrates underlying ASD characteristics, which could potentially be used to evaluate psychological aspects such as public self-consciousness. Copyright © 2010 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.
Bima Sena Bayu Dewantara
Fuzzy rule optimization is a challenging step in the development of a fuzzy model. A simple two-input fuzzy model may have thousands of combinations of fuzzy rules when it deals with a large number of input variations. Intuitive, trial-and-error determination of fuzzy rules is very difficult. This paper addresses the problem of optimizing fuzzy rules using a Genetic Algorithm to compensate for illumination effects in face recognition. Since uneven illumination degrades the performance of face recognition, those effects must be compensated. We have developed a novel algorithm based on a reflectance model to compensate for the effect of illumination on human face recognition. We build a pair of models from a single image and reason over those models using fuzzy logic. The fuzzy rules are then optimized using a Genetic Algorithm. This approach has a low computation cost while maintaining high performance. Based on the experimental results, we show that our algorithm is feasible for recognizing the desired person under variable lighting conditions with faster computation time. Keywords: Face recognition, harsh illumination, reflectance model, fuzzy, genetic algorithm
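A minimal sketch of the rule-optimization idea: a Genetic Algorithm searches over bitstring-encoded rule consequents. The fitness here is a stand-in that simply counts matches against a hypothetical target table; the real fitness would score recognition performance under varying illumination:

```python
import random

def fitness(rules, target):
    """Fraction of rule consequents matching a (hypothetical) target table."""
    return sum(r == t for r, t in zip(rules, target)) / len(target)

def evolve(target, pop_size=30, gens=60, pmut=0.05, seed=0):
    """Tiny GA: tournament selection, one-point crossover, bit-flip mutation,
    with elitism so the best rule set is never lost."""
    rng = random.Random(seed)
    n = len(target)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    best = max(pop, key=lambda r: fitness(r, target))
    for _ in range(gens):
        new_pop = [best[:]]                       # elitism
        while len(new_pop) < pop_size:
            a, b = rng.sample(pop, 2)             # tournament of size 2
            p1 = a if fitness(a, target) >= fitness(b, target) else b
            a, b = rng.sample(pop, 2)
            p2 = a if fitness(a, target) >= fitness(b, target) else b
            cut = rng.randrange(1, n)             # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ (rng.random() < pmut) for bit in child]  # mutation
            new_pop.append(child)
        pop = new_pop
        best = max(pop, key=lambda r: fitness(r, target))
    return best

# Hypothetical "good" consequents the GA should rediscover.
target = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
best = evolve(target)
```

Elitism makes the best fitness monotonically non-decreasing across generations, which is why GA search is practical even over the very large rule spaces the abstract mentions.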
Which facial features allow human observers to successfully recognize expressions of emotion? While the eyes and mouth have frequently been shown to be of high importance, research on facial action units has made more precise predictions about the areas involved in displaying each emotion. The present research investigated, on a fine-grained level, which physical features are most relied on when decoding facial expressions. In the experiment, individual faces expressing the basic emotions according to Ekman were hidden behind a mask of 48 tiles, which was sequentially uncovered. Participants were instructed to stop the sequence as soon as they recognized the facial expression and assign it the correct label. For each part of the face, its contribution to successful recognition was computed, allowing us to visualize the importance of different face areas for each expression. Overall, observers mostly relied on the eye and mouth regions when successfully recognizing an emotion. Furthermore, the difference in the importance of eyes and mouth allowed the expressions to be grouped in a continuous space, ranging from sadness and fear (reliance on the eyes) to disgust and happiness (reliance on the mouth). The face parts with the highest diagnostic value for expression identification were typically located in areas corresponding to action units from the facial action coding system. A similarity analysis of the usefulness of different face parts for expression recognition demonstrated that faces cluster according to the emotion they express, rather than by low-level physical features. Also, expressions relying more on the eyes or mouth region were in close proximity in the constructed similarity space. These analyses help to better understand how human observers process expressions of emotion, by delineating the mapping from facial features to psychological representation.
Choi, Jae Young; Ro, Yong Man; Plataniotis, Konstantinos N
This paper introduces a new color face recognition (FR) method that makes effective use of boosting learning as a color-component feature selection framework. The proposed boosting color-component feature selection framework is designed to find the best set of color-component features from various color spaces (or models), aiming to achieve the best FR performance for a given FR task. In addition, to facilitate the complementary effect of the selected color-component features for the purpose of color FR, they are combined using the proposed weighted feature fusion scheme. The effectiveness of our color FR method has been successfully evaluated on the following five public face databases (DBs): CMU-PIE, Color FERET, XM2VTSDB, SCface, and FRGC 2.0. Experimental results show that the proposed method performs considerably better than other state-of-the-art color FR methods over different FR challenges, including highly uncontrolled illumination, moderate pose variation, and small-resolution face images.
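The boosting-style selection loop can be sketched as follows. The decision-stump weak learner and the toy two-column data are illustrative stand-ins for the paper's color-component features, not its actual learners:

```python
import numpy as np

def best_stump(col, y, w):
    """Best threshold/polarity decision stump on one feature column.
    Returns (weighted_error, predictions)."""
    best_err, best_pred = np.inf, None
    for t in np.unique(col):
        for sign in (1, -1):
            pred = np.where(sign * (col - t) >= 0, 1, -1)
            err = w[pred != y].sum()
            if err < best_err:
                best_err, best_pred = err, pred
    return best_err, best_pred

def select_components(X, y, k):
    """Greedily pick k feature columns AdaBoost-style: each round chooses
    the column whose best stump has lowest weighted error, then downweights
    the examples that stump already classifies correctly."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)
    chosen = []
    for _ in range(k):
        candidates = [j for j in range(d) if j not in chosen]
        results = {j: best_stump(X[:, j], y, w) for j in candidates}
        j = min(results, key=lambda c: results[c][0])
        err, pred = results[j]
        chosen.append(j)
        beta = max(err, 1e-10) / (1.0 - min(err, 1.0 - 1e-10))
        w = np.where(pred == y, w * beta, w)  # AdaBoost reweighting
        w /= w.sum()
    return chosen

# Toy data: column 0 separates the classes perfectly, column 1 does not.
X = np.array([[0.1, 5.0], [0.2, 1.0], [0.9, 4.0], [0.8, 2.0]])
y = np.array([-1, -1, 1, 1])
print(select_components(X, y, 2))
```

The reweighting step is what gives the selected set a complementary character: later rounds focus on the examples the earlier components handled poorly.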
Pramanik, Sourav; Bhattacharjee, Dr. Debotosh
Recognizing a face sketch against a face photo database is a challenging task for today's researchers, because the face photo images in the training set and the face sketch images in the testing set have different modalities. The difference between face photos of two different persons can be smaller than the difference between a face photo and a face sketch of the same person. In this paper, to reduce the modality gap between face photos and face sketches, we first bring the face photo and face sketch images into a new dimension usi...
Seymour, Karen E; Jones, Richard N; Cushman, Grace K; Galvan, Thania; Puzia, Megan E; Kim, Kerri L; Spirito, Anthony; Dickstein, Daniel P
Little is known about the bio-behavioral mechanisms underlying and differentiating suicide attempts from non-suicidal self-injury (NSSI) in adolescents. Adolescents who attempt suicide or engage in NSSI often report significant interpersonal and social difficulties. Emotional face recognition ability is a fundamental skill required for successful social interactions, and deficits in this ability may provide insight into the unique brain-behavior interactions underlying suicide attempts versus NSSI in adolescents. Therefore, we examined emotional face recognition ability among three mutually exclusive groups: (1) inpatient adolescents who attempted suicide (SA, n = 30); (2) inpatient adolescents engaged in NSSI (NSSI, n = 30); and (3) typically developing controls (TDC, n = 30) without psychiatric illness. Participants included adolescents aged 13-17 years, matched on age, gender and full-scale IQ. Emotional face recognition was evaluated using the Diagnostic Assessment of Nonverbal Accuracy (DANVA-2). Compared to TDC youth, adolescents with NSSI made more errors on child fearful and adult sad face recognition while controlling for psychopathology and medication status (ps ...). Adolescent inpatients engaged in NSSI showed greater deficits in emotional face recognition than TDC, but not than inpatient adolescents who attempted suicide. Further results suggest the importance of psychopathology in emotional face recognition. Replication of these preliminary results and examination of the role of context-dependent emotional processing are needed moving forward.
The process of security improvement is a huge problem, especially on large ships. Terrorist attacks and everyday threats against life and property damage transport and tourist companies, especially large tourist ships. Every person on a ship can be recognized and identified using something that the person knows or something the person possesses. The best results are obtained by combining the person's knowledge with one biometric characteristic. Analyzing the problem of biometrics in ITS security, we can conclude that a face recognition process supported by one or two traditional biometric characteristics can give very good results regarding ship security. In this paper we describe a biometric system based on face recognition. Special focus is given to crew members' biometric security in crisis situations such as kidnapping, robbery, or illness.
Gerlach, Christian; Starrfelt, Randi
There has been an increase in studies adopting an individual difference approach to examine visual cognition, and in particular in studies trying to relate face recognition performance to measures of holistic processing (the face composite effect and the part-whole effect). In the present study we examine whether global precedence effects, measured by means of non-face stimuli in Navon's paradigm, can also account for individual differences in face recognition and, if so, whether the effect is of similar magnitude for faces and objects. We find evidence that global precedence effects facilitate both face and object recognition, and to a similar extent. Our results suggest that both face and object recognition are characterized by a coarse-to-fine temporal dynamic, where global shape information is derived prior to local shape information, and that the efficiency of face and object recognition...
Dutta, A.; Veldhuis, Raymond N.J.; Spreeuwers, Lieuwe Jan
The performance of a face recognition system depends on the quality of both test and reference images participating in the face comparison process. In a forensic evaluation case involving face recognition, we do not have any control over the quality of the trace (image captured by a CCTV at a crime
Huisman, Peter; Munster, Ruud; Moro-Ellenberger, Stephanie; Veldhuis, Raymond N.J.; Bazen, A.M.
The problem of pose in 2D face recognition is widely acknowledged. Commercial systems are limited to near frontal face images and cannot deal with pose deviations larger than 15 degrees from the frontal view. This is a problem, when using face recognition for surveillance applications in which
The aim of this paper is to help users improve the door security of sensitive locations by using face detection and recognition. This paper comprises mainly three subsystems: face detection, face recognition, and automatic door access control. The door opens automatically for a known person in response to a command from the microcontroller.
Xiao, Naiqi G.; Quinn, Paul C.; Liu, Shaoying; Ge, Liezhong; Pascalis, Olivier; Lee, Kang
Current knowledge about face processing in infancy comes largely from studies using static face stimuli, but faces that infants see in the real world are mostly moving ones. To bridge this gap, 3-, 6-, and 9-month-old Asian infants (N = 118) were familiarized with either moving or static Asian female faces, and then their face recognition was…
Due to its usability features, practical applications, and lack of intrusiveness, face recognition technology, based on information derived from individuals' facial features, has been attracting considerable attention recently. The recognition rates reported for commercialized face recognition systems cannot be accepted as official recognition rates, as they are based on assumptions that favor the specific system and face database. Therefore, performance evaluation methods and tools are necessary to objectively measure the accuracy and performance of any face recognition system. In this paper, we propose and formalize a performance evaluation model for biometric recognition systems, implementing an evaluation tool for face recognition systems based on the proposed model. Furthermore, we performed objective evaluations by providing guidelines for the design and implementation of a performance evaluation system and formalizing the performance test process.
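Two of the standard quantities such an evaluation tool reports are the false accept rate (FAR) and false reject rate (FRR). A minimal sketch, with made-up similarity scores standing in for a real matcher's output:

```python
import numpy as np

def far_frr(genuine, impostor, threshold):
    """False accept rate (impostor scores at/above threshold) and false
    reject rate (genuine scores below threshold) for a similarity matcher."""
    far = np.mean(np.asarray(impostor) >= threshold)
    frr = np.mean(np.asarray(genuine) < threshold)
    return far, frr

# Toy similarity scores from a hypothetical matcher.
genuine = [0.9, 0.8, 0.95, 0.7, 0.85]   # same-person comparisons
impostor = [0.2, 0.4, 0.1, 0.6, 0.3]    # different-person comparisons
far, frr = far_frr(genuine, impostor, threshold=0.5)
print(far, frr)  # 0.2 (one impostor accepted), 0.0 (no genuine rejected)
```

Sweeping the threshold trades FAR against FRR; reporting the full trade-off curve rather than a single operating point is exactly the kind of objective measurement the evaluation model argues for.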
Elbouz, M.; Bouzidi, F.; Alfalou, A.; Brosseau, C.; Leonard, I.; Benkelfat, B.-E.
In this study, we suggest and validate an all-numerical implementation of a VanderLugt correlator which is optimized for face recognition applications. The main goal of this implementation is to take advantage of the benefits of correlation methods (detection, localization, and identification of a target object within a scene) while exploiting the reconfigurability of numerical approaches. This technique requires a numerical implementation of the optical Fourier transform. We pay special attention to adapting the correlation filter to this numerical implementation. One main goal of this work is to reduce the size of the filter in order to decrease the memory space required for real-time applications. To fulfil this requirement, we code the reference images with 8 bits and study the effect of this coding on the performance of several composite filters (phase-only filter, binary phase-only filter). This saturation effect decreases the correlator's decision performance when filters contain up to nine references. Further, an optimization based on an optimized segmented composite filter is proposed. Based on this approach, we present tests with different faces demonstrating that the above-mentioned saturation effect is significantly reduced while minimizing the size of the learning database.
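The numerical correlation pipeline can be sketched with a phase-only filter built from FFTs. This is a generic illustration of the principle (a shifted target produces a correlation peak at its location), not the authors' 8-bit coded composite filters:

```python
import numpy as np

def phase_only_filter(ref):
    """Phase-only filter (POF): keep only the spectral phase of the reference."""
    R = np.fft.fft2(ref)
    return np.conj(R) / np.maximum(np.abs(R), 1e-12)

def correlation_plane(scene, pof):
    """Correlate by multiplying the scene spectrum with the filter and
    transforming back; the magnitude gives the correlation plane."""
    return np.abs(np.fft.ifft2(np.fft.fft2(scene) * pof))

rng = np.random.default_rng(0)
ref = rng.random((32, 32))                        # reference "face" patch
scene = np.roll(ref, shift=(5, 7), axis=(0, 1))   # target shifted in the scene

corr = correlation_plane(scene, phase_only_filter(ref))
peak = np.unravel_index(int(np.argmax(corr)), corr.shape)
print(peak)  # correlation peak sits at the target's shift
```

Because the whole pipeline is two FFTs and an element-wise product, the filter (not the transform) dominates memory use, which is why the paper focuses on shrinking the filter representation.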
Gutta, Srinivas; Huang, Jeffrey R.; Singh, Dig; Wechsler, Harry
One of the most important technologies absent from traditional and emerging frontiers of computing is the management of visual information. Faces are accessible 'windows' into the mechanisms that govern our emotional and social lives. The face recognition tasks considered herein include: (1) surveillance, (2) CBIR, and (3) CBIR subject to correct ID ('match') displaying specific facial landmarks, such as wearing glasses. We developed robust matching ('classification') and retrieval schemes based on hybrid classifiers and showed their feasibility using the FERET database. The hybrid classifier architecture consists of an ensemble of connectionist networks (radial basis functions) and decision trees. The specific characteristics of our hybrid architecture include (a) query by consensus, as provided by ensembles of networks, for coping with the inherent variability of the image formation and data acquisition process, and (b) flexible and adaptive thresholds, as opposed to ad hoc and hard thresholds. Experimental results proving the feasibility of our approach yield (i) 96% accuracy, using cross-validation (CV), for surveillance on a database consisting of 904 images, (ii) 97% accuracy for CBIR tasks on a database of 1084 images, and (iii) 93% accuracy, using CV, for CBIR subject to correct ID match tasks on a database of 200 images.
Jessica Pui Kar eChan
Face recognition is impaired when changes are made to external face features (e.g., hairstyle), even when all internal features (i.e., eyes, nose, mouth) remain the same. Eye movement monitoring was used to determine the extent to which altered hairstyles affect processing of face features, thereby shedding light on how internal and external features are stored in memory. Participants studied a series of faces, followed by a recognition test in which novel, repeated, and manipulated (altered hairstyle) faces were presented. Recognition was higher for repeated than manipulated faces. Although eye movement patterns distinguished repeated from novel faces, viewing of manipulated faces was similar to that of novel faces. Internal and external features may be stored together as one unit in memory; consequently, changing even a single feature alters processing of the other features and disrupts recognition.
13) Face recognition to improve voice/iris biometrics: the system uses face recognition as a supplementary biometric to increase confidence in a match made using a different biometric (for example iris, voice, or fingerprints). 14) Soft biometrics to improve face recognition. Estimated readiness: the e-Gate environment was not evaluated in the realm of academic research in the Type 3 environment.
Zimmermann, Friederike G S; Eimer, Martin
Recognizing unfamiliar faces is more difficult than familiar face recognition, and this has been attributed to qualitative differences in the processing of familiar and unfamiliar faces. Familiar faces are assumed to be represented by view-independent codes, whereas unfamiliar face recognition depends mainly on view-dependent low-level pictorial representations. We employed an electrophysiological marker of visual face recognition processes in order to track the emergence of view-independence during the learning of previously unfamiliar faces. Two face images showing either the same or two different individuals in the same or two different views were presented in rapid succession, and participants had to perform an identity-matching task. On trials where both faces showed the same view, repeating the face of the same individual triggered an N250r component at occipito-temporal electrodes, reflecting the rapid activation of visual face memory. A reliable N250r component was also observed on view-change trials. Crucially, this view-independence emerged as a result of face learning. In the first half of the experiment, N250r components were present only on view-repetition trials but were absent on view-change trials, demonstrating that matching unfamiliar faces was initially based on strictly view-dependent codes. In the second half, the N250r was triggered not only on view-repetition trials but also on view-change trials, indicating that face recognition had now become more view-independent. This transition may be due to the acquisition of abstract structural codes of individual faces during face learning, but could also reflect the formation of associative links between sets of view-specific pictorial representations of individual faces. Copyright © 2013 Elsevier Ltd. All rights reserved.
Halliday, Drew W R; MacDonald, Stuart W S; Scherf, K Suzanne; Sherf, Suzanne K; Tanaka, James W
Although not a core symptom of the disorder, individuals with autism often exhibit selective impairments in their face processing abilities. Importantly, the reciprocal connection between autistic traits and face perception has rarely been examined within the typically developing population. In this study, university participants from the social sciences, physical sciences, and humanities completed a battery of measures that assessed face, object and emotion recognition abilities, general perceptual-cognitive style, and sub-clinical autistic traits (the Autism Quotient (AQ)). We employed separate hierarchical multiple regression analyses to evaluate which factors could predict face recognition scores and AQ scores. Gender, object recognition performance, and AQ scores predicted face recognition behaviour. Specifically, males, individuals with more autistic traits, and those with lower object recognition scores performed more poorly on the face recognition test. Conversely, university major, gender and face recognition performance reliably predicted AQ scores. Science majors, males, and individuals with poor face recognition skills showed more autistic-like traits. These results suggest that the broader autism phenotype is associated with lower face recognition abilities, even among typically developing individuals.
Bobak, Anna K.; Dowsett, A.; Bate, Sarah
Photographic identity documents (IDs) are commonly used despite clear evidence that unfamiliar face matching is a difficult and error-prone task. The current study set out to examine the performance of seven individuals with extraordinary face recognition memory, so called 'super recognisers' (SRs), on two face matching tasks resembling border control identity checks. In Experiment 1, the SRs as a group outperformed control participants on the 'Glasgow Face Matching Test', and some case-by-ca...
Robotham, Ro J.; Starrfelt, Randi
Face and word recognition have traditionally been thought to rely on highly specialised and relatively independent cognitive processes. Some of the strongest evidence for this has come from patients with seemingly category-specific visual perceptual deficits such as pure prosopagnosia, a selective face recognition deficit, and pure alexia, a selective word recognition deficit. Together, the patterns of impaired reading with preserved face recognition and impaired face recognition with preserved reading constitute a double dissociation. In order to evaluate the evidence for dissociations, we review studies that specifically investigate whether face or word recognition can be selectively impaired, or whether patients with a deficit in one domain also have deficits in the other. The implications of this would be immense, with most textbooks in cognitive neuropsychology requiring drastic revisions. The existence of these selective deficits has been...
We propose a face recognition algorithm based on both multilinear principal component analysis (MPCA) and linear discriminant analysis (LDA). Compared with existing face recognition methods, our approach treats face images as multidimensional tensors in order to find the optimal tensor subspace for accomplishing dimension reduction. LDA is used to project samples into a new discriminant feature space, while the K-nearest-neighbor (KNN) rule is adopted for sample set classification. The results of our study and the developed algorithm are validated with the face databases ORL, FERET, and YALE and compared with the PCA, MPCA, and PCA + LDA methods, which demonstrates an improvement in face recognition accuracy.
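A heavily simplified sketch of such a pipeline, using classical PCA in place of MPCA and omitting the LDA step, followed by nearest-neighbor classification on toy 4-pixel "faces". All data here are synthetic stand-ins:

```python
import numpy as np

def pca_fit(X, n_components):
    """Return the mean and top principal axes of the (flattened) training images."""
    mu = X.mean(axis=0)
    # SVD of the centered data; rows of Vt are the principal axes.
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:n_components]

def knn_predict(train_feats, train_labels, probe_feat):
    """1-nearest-neighbour label in the projected subspace."""
    d = np.linalg.norm(train_feats - probe_feat, axis=1)
    return train_labels[int(np.argmin(d))]

# Toy "images": two subjects, 4-pixel faces with small within-class noise.
rng = np.random.default_rng(1)
subj_a = np.array([0.0, 0.0, 1.0, 1.0])
subj_b = np.array([1.0, 1.0, 0.0, 0.0])
X = np.vstack([subj_a + 0.05 * rng.standard_normal(4) for _ in range(3)] +
              [subj_b + 0.05 * rng.standard_normal(4) for _ in range(3)])
labels = np.array([0, 0, 0, 1, 1, 1])

mu, axes = pca_fit(X, n_components=2)
feats = (X - mu) @ axes.T
probe = (subj_a + 0.05 * rng.standard_normal(4) - mu) @ axes.T
print(knn_predict(feats, labels, probe))
```

The tensor-based MPCA differs in that it reduces each mode of the image tensor separately instead of flattening images into vectors first, but the project-then-classify structure is the same.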
Starrfelt, Randi; Klargaard, Solja; Petersen, Anders
Objective: Recent models suggest that face and word recognition may rely on overlapping cognitive processes and neural regions. In support of this notion, face recognition deficits have been demonstrated in developmental dyslexia. Here we test whether the opposite association can also be found, that is, impaired reading in developmental prosopagnosia. Method: We tested 10 adults with developmental prosopagnosia and 20 matched controls. All participants completed the Cambridge Face Memory Test, the Cambridge Face Perception Test and a face recognition questionnaire used to quantify everyday face recognition experience. Reading was measured in four experimental tasks, testing different levels of letter, word, and text reading: a) single word reading with words of varying length, b) vocal response times in single letter and short word naming, c) recognition of single letters and short words at brief...
Noor Abdalrazak Shnain
Facial recognition is one of the most challenging and interesting problems within the field of computer vision and pattern recognition. During the last few years, it has gained special attention due to its importance in relation to current issues such as security, surveillance systems and forensic analysis. Despite this high level of attention to facial recognition, success is still limited by certain conditions; there is no method which gives reliable results in all situations. In this paper, we propose an efficient similarity index that resolves the shortcomings of the existing measures of feature and structural similarity. This measure, called the Feature-Based Structural Measure (FSM), combines the best features of the well-known SSIM (structural similarity index measure) and FSIM (feature similarity index measure) approaches, striking a balance between performance for similar and dissimilar images of human faces. In addition to the statistical structural properties provided by SSIM, edge detection is incorporated in FSM as a distinctive structural feature. Its performance is tested for a wide range of PSNR (peak signal-to-noise ratio) values, using the ORL (Olivetti Research Laboratory, now AT&T Laboratory Cambridge) and FEI (Faculty of Industrial Engineering, São Bernardo do Campo, São Paulo, Brazil) databases. The proposed measure is tested under conditions of Gaussian noise; simulation results show that the proposed FSM outperforms the well-known SSIM and FSIM approaches in its efficiency of similarity detection and recognition of human faces.
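The idea of blending a structural similarity term with an edge-based term can be sketched as below. The single-window SSIM, the forward-difference edge map, and the 50/50 weighting are simplifying assumptions for illustration, not the published FSM definition:

```python
import numpy as np

def ssim_global(x, y, c1=1e-4, c2=9e-4):
    """Global (single-window) SSIM between two images with values in [0, 1]."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def edge_map(img):
    """Crude gradient-magnitude edges via forward differences."""
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    return np.hypot(gx, gy)

def fsm_like(x, y, alpha=0.5):
    """Blend of raw structural similarity and edge-map similarity."""
    return alpha * ssim_global(x, y) + (1 - alpha) * ssim_global(edge_map(x), edge_map(y))

rng = np.random.default_rng(2)
img = rng.random((16, 16))
noisy = np.clip(img + 0.05 * rng.standard_normal((16, 16)), 0, 1)
different = rng.random((16, 16))
print(fsm_like(img, noisy), fsm_like(img, different))
```

A slightly noisy copy of a face should score far higher than an unrelated image, which is the discriminative behaviour the FSM is designed to sharpen relative to SSIM and FSIM alone.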
P. S. Hiremath
recognition in the framework of symbolic data analysis. Classical KDA extracts features, which are single-valued in nature to represent face images. These single-valued variables may not be able to capture variation of each feature in all the images of same subject; this leads to loss of information. The symbolic KDA algorithm extracts most discriminating nonlinear interval-type features which optimally discriminate among the classes represented in the training set. The proposed method has been successfully tested for face recognition using two databases, ORL database and Yale face database. The effectiveness of the proposed method is shown in terms of comparative performance against popular face recognition methods such as kernel Eigenface method and kernel Fisherface method. Experimental results show that symbolic KDA yields improved recognition rate.
Dornaika, Fadi; Bosaghzadeh, Alireza
Local discriminant embedding (LDE) has been recently proposed to overcome some limitations of the global linear discriminant analysis method. In the case of a small training data set, however, LDE cannot directly be applied to high-dimensional data. This case is the so-called small-sample-size (SSS) problem. The classical solution to this problem is to apply dimensionality reduction to the raw data (e.g., using principal component analysis). In this paper, we introduce a novel discriminant technique called "exponential LDE" (ELDE). The proposed ELDE can be seen as an extension of the LDE framework in two directions. First, the proposed framework overcomes the SSS problem without discarding the discriminant information contained in the null space of the locality-preserving scatter matrices associated with LDE. Second, the proposed ELDE is equivalent to transforming the original data into a new space by distance diffusion mapping (similar to kernel-based nonlinear mapping), and then applying LDE in that new space. As a result of diffusion mapping, the margin between samples belonging to different classes is enlarged, which is helpful in improving classification accuracy. The experiments are conducted on five public face databases: Yale, Extended Yale, PF01, Pose, Illumination, and Expression (PIE), and Facial Recognition Technology (FERET). The results show that the performance of the proposed ELDE is better than that of LDE and many state-of-the-art discriminant analysis techniques.
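A minimal numpy sketch of the exponential-scatter idea follows. The scatter matrices are assumed to be given, and the function names are illustrative; the point is that the matrix exponential of a singular within-locality scatter matrix is full-rank, so the generalized eigenproblem remains solvable in the SSS case:

```python
import numpy as np

def sym_expm(S):
    # Matrix exponential of a symmetric matrix via eigendecomposition.
    w, V = np.linalg.eigh(S)
    return (V * np.exp(w)) @ V.T

def elde_directions(Sb, Sw, k):
    # Sketch of the ELDE core: replace the locality-preserving scatter
    # matrices by their matrix exponentials. exp(Sw) is always invertible,
    # so no null-space discriminant information has to be discarded.
    M = np.linalg.solve(sym_expm(Sw), sym_expm(Sb))
    w, V = np.linalg.eig(M)
    idx = np.argsort(-w.real)[:k]
    return V[:, idx].real
```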
Starrfelt, Randi; Klargaard, Solja K; Petersen, Anders; Gerlach, Christian
Recent models suggest that face and word recognition may rely on overlapping cognitive processes and neural regions. In support of this notion, face recognition deficits have been demonstrated in developmental dyslexia. Here we test whether the opposite association can also be found, that is, impaired reading in developmental prosopagnosia. We tested 10 adults with developmental prosopagnosia and 20 matched controls. All participants completed the Cambridge Face Memory Test, the Cambridge Face Perception Test and a face recognition questionnaire used to quantify everyday face recognition experience. Reading was measured in four experimental tasks, testing different levels of letter, word, and text reading: (a) single word reading with words of varying length, (b) vocal response times in single letter and short word naming, (c) recognition of single letters and short words at brief exposure durations (targeting the word superiority effect), and (d) text reading. Participants with developmental prosopagnosia performed strikingly similarly to controls across the four reading tasks. Formal analysis revealed a significant dissociation between word and face recognition, as the difference in performance with faces and words was significantly greater for participants with developmental prosopagnosia than for controls. Adult developmental prosopagnosics read as quickly and fluently as controls, while they are seemingly unable to learn efficient strategies for recognizing faces. We suggest that this is due to the differing demands that face and word recognition put on the perceptual system. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Spreeuwers, Lieuwe Jan
Biometrics, the recognition of persons based on how they look or behave, is the main subject of research at the Chair of Biometric Pattern Recognition (BPR) of the Services, Cyber Security and Safety Group (SCS) of the EEMCS Faculty at the University of Twente. Examples are fingerprint recognition,
Peng, Y.; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.
Most face recognition systems deal well with high-resolution facial images, but perform much worse on low-resolution facial images. In low-resolution face recognition, there is a specific but realistic surveillance scenario: a surveillance camera monitoring a large area. In this scenario, usually
Karaaba, Mahir; Surinta, Olarik; Schomaker, Lambertus; Wiering, Marco
The Single Sample per Person Problem is a challenging problem for face recognition algorithms. Patch-based methods have obtained some promising results for this problem. In this paper, we propose a new face recognition algorithm that is based on a combination of different histograms of oriented
This paper proposes a multimodal biometric scheme for human authentication based on the fusion of voice and face recognition. For voice recognition, three categories of features (statistical coefficients, cepstral coefficients and voice timbre) are used and compared. The voice identification modality is carried out using a Gaussian Mixture Model (GMM). For face recognition, three recognition methods (Eigenface, Linear Discriminant Analysis (LDA), and Gabor filter) are used and compared. The combination of the voice and face biometric systems into a single multimodal biometric system is performed using feature fusion and score fusion. This study shows that the best results are obtained using all the features (cepstral coefficients, statistical coefficients and voice timbre features) for voice recognition, the LDA face recognition method, and score fusion for the multimodal biometric system.
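Score-level fusion, the best-performing combination reported here, can be sketched as min-max normalisation followed by a weighted sum. The fusion weight is an assumed parameter for illustration:

```python
import numpy as np

def minmax_norm(scores):
    # Map raw matcher scores to [0, 1] so face and voice scores are comparable.
    s = np.asarray(scores, dtype=float)
    span = s.max() - s.min()
    return (s - s.min()) / span if span > 0 else np.zeros_like(s)

def fuse_scores(face_scores, voice_scores, w_face=0.5):
    # Weighted-sum score fusion over a gallery: each input is one matcher's
    # score for every enrolled identity; the fused score ranks candidates.
    return w_face * minmax_norm(face_scores) + (1 - w_face) * minmax_norm(voice_scores)
```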
2 The Human Face: 2.1 Cognitive Neurosciences; 2.2 Psychophysics; 2.3 The Social Face; 2.4 ...
Robotham, Ro J; Starrfelt, Randi
Face and word recognition have traditionally been thought to rely on highly specialised and relatively independent cognitive processes. Some of the strongest evidence for this has come from patients with seemingly category-specific visual perceptual deficits such as pure prosopagnosia, a selective face recognition deficit, and pure alexia, a selective word recognition deficit. Together, the patterns of impaired reading with preserved face recognition and impaired face recognition with preserved reading constitute a double dissociation. The existence of these selective deficits has been questioned over the past decade. It has been suggested that studies describing patients with these pure deficits have failed to measure the supposedly preserved functions using sensitive enough measures, and that if tested using sensitive measurements, all patients with deficits in one visual category would also have deficits in the other. The implications of this would be immense, with most textbooks in cognitive neuropsychology requiring drastic revisions. In order to evaluate the evidence for dissociations, we review studies that specifically investigate whether face or word recognition can be selectively affected by acquired brain injury or developmental disorders. We only include studies published since 2004, as comprehensive reviews of earlier studies are available. Most of the studies assess the supposedly preserved functions using sensitive measurements. We found convincing evidence that reading can be preserved in acquired and developmental prosopagnosia and also evidence (though weaker) that face recognition can be preserved in acquired or developmental dyslexia, suggesting that face and word recognition are at least in part supported by independent processes.
Nasrollahi, Kamal; Moeslund, Thomas B.
Face recognition is still a very challenging task when the input face image is noisy, occluded by some obstacles, of very low-resolution, not facing the camera, and not properly illuminated. These problems make the feature extraction and consequently the face recognition system unstable....... The proposed system in this paper introduces the novel idea of using Haar-like features, which have commonly been used for object detection, along with a probabilistic classifier for face recognition. The proposed system is simple, real-time, effective and robust against most of the mentioned problems....... Experimental results on public databases show that the proposed system indeed outperforms the state-of-the-art face recognition systems....
Casey, Sarah J; Newell, Fiona N
Recent studies have suggested that the familiarity of a face leads to more robust recognition, at least within the visual domain. The aim of our study was to investigate whether face familiarity resulted in a representation of faces that was easily shared across the sensory modalities. In Experiment 1, we tested whether haptic recognition of a highly familiar face (one's own face) was as efficient as visual recognition. Our observers were unable to recognise their own face models from tactile memory alone but were able to recognise their faces visually. However, haptic recognition improved when participants were primed by their own live face. In Experiment 2, we found that short-term familiarisation with a set of previously unfamiliar face stimuli improved crossmodal recognition relative to the recognition of unfamiliar faces. Our findings suggest that familiarisation provides a strong representation of faces but that the nature of the information encoded during learning is critical for efficient crossmodal recognition.
Face recognition systems are gaining importance in social networks and surveillance. The face recognition task is complex due to variations in illumination, expression, occlusion, aging and pose. The illumination variations in an image are due to changes in lighting conditions, poor illumination, low contrast or increased brightness. These variations adversely affect the quality of the image and the recognition accuracy, so illumination variations in face images have to be pre-processed prior to face recognition. Contrast Limited Adaptive Histogram Equalization (CLAHE) is an image enhancement technique popular for enhancing medical images. The proposed work creates an illumination-invariant face recognition system by enhancing the CLAHE technique; this method is termed "Enhanced CLAHE". The efficiency of Enhanced CLAHE is tested using a Fuzzy K Nearest Neighbour classifier and the Fisherface subspace projection method. The face recognition accuracy rate, the Equal Error Rate and the False Acceptance Rate at 1% are calculated, and the performance of the CLAHE and Enhanced CLAHE methods is compared. The efficiency of the Enhanced CLAHE method is tested on three public face databases: AR, Yale and ORL. Enhanced CLAHE has a very high recognition accuracy rate when compared to CLAHE.
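CLAHE's core step can be sketched as clipped histogram equalisation. The version below works on a single tile only (real CLAHE does this per tile and bilinearly interpolates between tiles), and it is not the paper's "Enhanced CLAHE", whose specific modification is not described in this abstract:

```python
import numpy as np

def clipped_hist_eq(img, clip_limit=0.01, bins=256):
    # Single-tile sketch of CLAHE's core step: clip the histogram at
    # clip_limit * n_pixels, redistribute the clipped excess uniformly,
    # then build an equalisation lookup table from the clipped histogram.
    img = img.astype(np.int64)
    h, _ = np.histogram(img, bins=bins, range=(0, bins))
    limit = max(1, int(clip_limit * img.size))
    excess = np.maximum(h - limit, 0).sum()
    h = np.minimum(h, limit) + excess // bins
    cdf = h.cumsum().astype(float)
    lut = np.round((cdf - cdf[0]) / max(cdf[-1] - cdf[0], 1) * (bins - 1))
    return lut[np.clip(img, 0, bins - 1)].astype(np.uint8)
```

Clipping bounds how strongly any single grey level is stretched, which is what keeps CLAHE from amplifying noise the way plain histogram equalisation does.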
Xie, Zhihua; Jiang, Peng; Zhang, Shuai; Xiong, Jinquan
Hyperspectral imaging, which records intrinsic spectral information of the skin across different spectral bands, has become an important approach for robust face recognition. However, the main challenges for hyperspectral face recognition are high data dimensionality, low signal-to-noise ratio and inter-band misalignment. In this paper, hyperspectral face recognition based on LBP (Local Binary Pattern) and SWLD (Simplified Weber Local Descriptor) is proposed to extract discriminative local features from spatio-spectral fusion information. Firstly, a spatio-spectral fusion strategy based on statistical information is used to obtain discriminative features of hyperspectral face images. Secondly, LBP is applied to extract the orientation of the fused face edges. Thirdly, SWLD is proposed to encode the intensity information in hyperspectral images. Finally, we adopt a symmetric Kullback-Leibler distance to compare the encoded face images. The hyperspectral face recognition is tested on the Hong Kong Polytechnic University Hyperspectral Face database (PolyUHSFD). Experimental results show that the proposed method achieves a higher recognition rate (92.8%) than state-of-the-art hyperspectral face recognition algorithms.
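The LBP encoding and the symmetric Kullback-Leibler matching step can be sketched as follows (SWLD and the spatio-spectral fusion stage are omitted; function names are illustrative):

```python
import numpy as np

def lbp_codes(img):
    # 8-neighbour local binary pattern codes for interior pixels: each
    # neighbour at least as bright as the centre sets one bit of the code.
    c = img[1:-1, 1:-1]
    code = np.zeros(c.shape, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    H, W = img.shape
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:H - 1 + dy, 1 + dx:W - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

def sym_kl(p, q, eps=1e-10):
    # Symmetric Kullback-Leibler distance between two code histograms,
    # used here to compare the encoded face images.
    p = p.astype(float) + eps
    q = q.astype(float) + eps
    p /= p.sum(); q /= q.sum()
    return float(((p - q) * np.log(p / q)).sum())
```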
This paper proposes an improved face recognition algorithm for identifying face mismatch pairs in cases of incorrect decisions. The primary feature of this method is to deploy the similarity score with respect to Gaussian components between two previously unseen faces. Unlike conventional vector distance measurement, our algorithm also considers the plot of the summation of the similarity index versus the face feature vector distance. A mixture of Gaussian models of labeled faces is also widely applicable to different biometric system parameters. Comparative evaluations show that the efficiency of the proposed algorithm is superior to that of the conventional algorithm by an average accuracy of up to 1.15% and 16.87% when compared with 3x3 Multi-Region Histogram (MRH) direct bag-of-features and Principal Component Analysis (PCA)-based face recognition systems, respectively. The experimental results show that similarity score consideration is more discriminative for face recognition than feature distance. Experimental results on the Labeled Faces in the Wild (LFW) data set demonstrate that our algorithm is suitable for real-world probe-to-gallery identification in face recognition systems. Moreover, the proposed method can also be applied to other recognition systems and thereby improve their recognition scores.
Oliu Simon, Marc; Corneanu, Ciprian; Nasrollahi, Kamal
Reliable facial recognition systems are of crucial importance in various applications from entertainment to security. Thanks to the deep-learning concepts introduced in the field, a significant improvement in the performance of the unimodal facial recognition systems has been observed in the recent...
Gerlach, Christian; Klargaard, Solja; Starrfelt, Randi
There is an ongoing debate about whether face recognition and object recognition constitute separate cognitive domains. Clarification of this issue can have important theoretical consequences as face recognition is often used as a prime example of domain-specificity in mind and brain. An important...... source of input to this debate comes from studies of individuals with developmental prosopagnosia, suggesting that face recognition can be selectively impaired. We put the selectivity-hypothesis to test by assessing the performance of 10 subjects with developmental prosopagnosia on demanding tests...... of visual object processing involving both regular and degraded drawings. None of the individuals exhibited a dissociation between face and object recognition, and as a group they were significantly more affected by degradation of objects than control participants. Importantly, we also find positive...
This article reviews a number of recent studies that systematically compared access to semantic and episodic information from faces and voices. Results have shown that semantic and episodic information is easier to retrieve from faces than from voices. This advantage of faces over voices is a robust phenomenon, which emerges whatever the kind of target persons, whether they are famous, personally familiar to the participants, or newly learned. Theoretical accounts of this face advantage over voices are finally discussed.
In this paper, we present a novel approach for three-dimensional face recognition based on curvature maps extracted from range images. There are four types of curvature maps: Gaussian, Mean, Maximum and Minimum. These curvature maps are used as features for 3D face recognition. The dimension of the feature vectors is reduced using the Singular Value Decomposition (SVD) technique: from the three computed SVD components, the non-negative values of the 'S' part are ranked and used as the feature vector. In the proposed method, two pair-wise curvature computations are performed, one on the Mean and Maximum curvature pair and the other on the Gaussian and Mean curvature pair, and their results are compared for better recognition rate. This automated 3D face recognition system is evaluated in different settings: frontal pose with expression and illumination variation, frontal faces along with registered faces, only registered faces, and registered faces from different pose orientations across the X, Y and Z axes. The 3D face images used in this research work are taken from the FRAV3D database. Pose-varying 3D facial images are registered to the frontal pose by applying a one-to-all registration technique; curvature mapping is then applied to the registered face images along with the remaining frontal face images. For classification and recognition, a five-layer feed-forward back-propagation neural network classifier is used, and the corresponding results are discussed in Section 4.
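The curvature maps and the SVD-based feature can be sketched as follows. Derivatives are taken with finite differences on the range image; the actual system's surface fitting and neighbourhood handling may differ:

```python
import numpy as np

def curvature_maps(z):
    # Gaussian (K) and mean (H) curvature of a range image z(x, y) from
    # finite-difference derivatives; the max/min principal curvatures
    # follow from K and H as H +/- sqrt(H^2 - K).
    zy, zx = np.gradient(z)
    zyy, zyx = np.gradient(zy)
    _, zxx = np.gradient(zx)
    g = 1.0 + zx ** 2 + zy ** 2
    K = (zxx * zyy - zyx ** 2) / g ** 2
    H = ((1 + zx ** 2) * zyy - 2 * zx * zy * zyx + (1 + zy ** 2) * zxx) / (2 * g ** 1.5)
    return K, H

def svd_feature(curv_map, k=20):
    # Feature vector: the ranked non-negative singular values (the 'S'
    # part of the SVD) of a curvature map, as a compact descriptor.
    s = np.linalg.svd(curv_map, compute_uv=False)
    return s[:k]
```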
Previous event-related potential (ERP) studies have shown that the N170 to faces is modulated by the emotion of the face and its context. However, it is unclear how the encoding of emotional target faces as reflected in the N170 is modulated by the preceding contextual facial expression when the temporal onset and identity of target faces are unpredictable. In addition, no study as yet has investigated whether contextual facial expression modulates later recognition of target faces. To address these issues, participants in the present study were asked to identify target faces (fearful or neutral) that were presented after a sequence of fearful or neutral contextual faces. The number of sequential contextual faces was random, and contextual and target faces were of different identities, so that the temporal onset and identity of target faces were unpredictable. Electroencephalography (EEG) data was recorded during the encoding phase. Subsequently, participants had to perform an unexpected old/new recognition task in which target face identities were presented in either the encoded or the non-encoded expression. ERP data showed a reduced N170 to target faces in fearful as compared to neutral context, regardless of target facial expression. In the later recognition phase, recognition rates were reduced for target faces in the encoded expression when they had been encountered in fearful as compared to neutral context. The present findings suggest that fearful compared to neutral contextual faces reduce the allocation of attentional resources towards target faces, which results in limited encoding and recognition of target faces.
Arisandi, D.; Syahputra, M. F.; Putri, I. L.; Purnamawati, S.; Rahmat, R. F.; Sari, P. P.
Face recognition is a research field in computer vision that studies how to learn a face and determine the identity of the face in a picture sent to the system. By utilizing face recognition technology, learning the identities of fellow students at a university becomes simpler: a student no longer needs to browse the student directory on the university's server and search for a person with a certain facial trait. To achieve this goal, the face recognition application uses an image-processing pipeline consisting of two phases, a pre-processing phase and a recognition phase. In the pre-processing phase, the system transforms the input image into the best possible image for the recognition phase; the purpose is to reduce noise and increase the signal in the image. For the recognition phase, we use the Fisherface method. This method is chosen because it performs well even with limited data. In our experiments, the accuracy of face recognition using Fisherfaces is 90%.
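The Fisherface method of the recognition phase (PCA followed by LDA on the PCA-projected data) can be sketched in numpy as follows; dimensions and names are illustrative, and the pre-processing phase is assumed already done:

```python
import numpy as np

def fisherfaces(X, y, n_pca, n_lda):
    # X: (n_samples, n_pixels) pre-processed, flattened face images; y: labels.
    mu = X.mean(axis=0)
    Xc = X - mu
    # PCA first, so the within-class scatter in the subspace is non-singular.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:n_pca].T
    Z = Xc @ P
    # LDA in the PCA subspace: between-class vs within-class scatter.
    Sw = np.zeros((n_pca, n_pca))
    Sb = np.zeros((n_pca, n_pca))
    m = Z.mean(axis=0)
    for c in np.unique(y):
        Zc = Z[y == c]
        mc = Zc.mean(axis=0)
        Sw += (Zc - mc).T @ (Zc - mc)
        d = (mc - m)[:, None]
        Sb += len(Zc) * (d @ d.T)
    w, V = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    idx = np.argsort(-w.real)[:n_lda]
    W = P @ V[:, idx].real
    return mu, W  # project a new image with (img - mu) @ W
```

A probe face is then classified by nearest neighbour among the projected training faces.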
Bobak, Anna Katarzyna; Dowsett, Andrew James; Bate, Sarah
Photographic identity documents (IDs) are commonly used despite clear evidence that unfamiliar face matching is a difficult and error-prone task. The current study set out to examine the performance of seven individuals with extraordinary face recognition memory, so called "super recognisers" (SRs), on two face matching tasks resembling border control identity checks. In Experiment 1, the SRs as a group outperformed control participants on the "Glasgow Face Matching Test", and some case-by-case comparisons also reached significance. In Experiment 2, a perceptually difficult face matching task was used: the "Models Face Matching Test". Once again, SRs outperformed controls both on group and mostly in case-by-case analyses. These findings suggest that SRs are considerably better at face matching than typical perceivers, and would make proficient personnel for border control agencies.
Wang, Jianzhong; Yi, Yugen; Zhou, Wei; Shi, Yanjiao; Qi, Miao; Zhang, Ming; Zhang, Baoxue; Kong, Jun
Recently, Sparse Representation-based Classification (SRC) has attracted a lot of attention for its applications to various tasks, especially in biometric techniques such as face recognition. However, factors such as lighting, expression, pose and disguise variations in face images will decrease the performances of SRC and most other face recognition techniques. In order to overcome these limitations, we propose a robust face recognition method named Locality Constrained Joint Dynamic Sparse Representation-based Classification (LCJDSRC) in this paper. In our method, a face image is first partitioned into several smaller sub-images. Then, these sub-images are sparsely represented using the proposed locality constrained joint dynamic sparse representation algorithm. Finally, the representation results for all sub-images are aggregated to obtain the final recognition result. Compared with other algorithms which process each sub-image of a face image independently, the proposed algorithm regards the local matching-based face recognition as a multi-task learning problem. Thus, the latent relationships among the sub-images from the same face image are taken into account. Meanwhile, the locality information of the data is also considered in our algorithm. We evaluate our algorithm by comparing it with other state-of-the-art approaches. Extensive experiments on four benchmark face databases (ORL, Extended YaleB, AR and LFW) demonstrate the effectiveness of LCJDSRC.
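The sparse-representation classification underlying this family of methods can be illustrated with a deliberately simplified sketch: plain least squares stands in for the l1 sparse solver, and the sub-image partitioning and joint dynamic constraints of LCJDSRC are omitted. A probe is represented with each class's training faces and assigned to the class with the smallest reconstruction residual:

```python
import numpy as np

def class_residuals(D, labels, x):
    # D: (dim, n_train) dictionary whose columns are training face vectors;
    # labels: class label per column; x: probe face vector.
    res = {}
    for c in np.unique(labels):
        Dc = D[:, labels == c]
        coef, *_ = np.linalg.lstsq(Dc, x, rcond=None)
        res[c] = float(np.linalg.norm(x - Dc @ coef))
    return res

def src_classify(D, labels, x):
    # Assign the probe to the class whose atoms reconstruct it best.
    r = class_residuals(D, labels, x)
    return min(r, key=r.get)
```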
Li, Annan; Shan, Shiguang; Gao, Wen
Subspace-based face representation can be viewed as a regression problem. From this viewpoint, we first revisit the problem of recognizing faces across pose differences, which is a bottleneck in face recognition. Then, we propose a new approach for cross-pose face recognition using a regressor with a coupled bias-variance tradeoff. We found that striking a coupled balance between bias and variance in regression for different poses could improve the regressor-based cross-pose face representation, i.e., the regressor can be made more stable against a pose difference. Based on this idea, ridge regression and lasso regression are explored. Experimental results on the CMU PIE, FERET, and Multi-PIE face databases show that the proposed bias-variance tradeoff achieves considerable improvement in recognition performance.
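The ridge variant of such a regressor can be sketched in closed form; here `lam` is the knob that trades bias against variance (larger values give more bias but more stability across pose differences). The feature vectors and mapping direction are illustrative assumptions:

```python
import numpy as np

def ridge_fit(X, Y, lam=1.0):
    # Learn W minimising ||X W - Y||^2 + lam ||W||^2, e.g. mapping
    # non-frontal feature vectors X to frontal feature vectors Y.
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)
```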
Demirci, Esra; Erdogan, Ayten
The objectives of this study were to evaluate both face and emotion recognition, to detect differences among attention deficit and hyperactivity disorder (ADHD) subgroups, to identify effects of gender, and to assess the effects of methylphenidate and atomoxetine treatment on both face and emotion recognition in patients with ADHD. The study sample consisted of 41 male and 29 female patients, 8-15 years of age, who were diagnosed as having combined type ADHD (N = 26), hyperactive/impulsive type ADHD (N = 21) or inattentive type ADHD (N = 23) but had not previously used any medication for ADHD, and 35 male and 25 female healthy individuals. Long-acting methylphenidate (OROS-MPH) was prescribed to 38 patients, whereas atomoxetine was prescribed to 32 patients. The Reading the Mind in the Eyes Test (RMET) and the Benton Face Recognition Test (BFRT) were applied to all participants before and after treatment. The patients with ADHD had a significantly lower number of correct answers in the child and adolescent RMET and in the BFRT than the healthy controls. Among the ADHD subtypes, the hyperactive/impulsive subtype had a lower number of correct answers in the RMET than the inattentive subtype, and a lower number of correct answers in the short and long forms of the BFRT than the combined and inattentive subtypes. Male and female patients with ADHD did not differ significantly with respect to the number of correct answers on the RMET and BFRT. The patients showed significant improvement in the RMET and BFRT after treatment with OROS-MPH or atomoxetine. Patients with ADHD have difficulties in face recognition as well as emotion recognition. Both OROS-MPH and atomoxetine affect emotion recognition. However, further studies on face and emotion recognition in ADHD are needed.
as is: neither localized, nor aligned. This was necessary because the algorithms used to detect faces (such as those implementing Haar cascades) missed various faces in the ORL database, making it impossible to evaluate certain faces. Figure 4 presents the accuracy comparison of the...
Liu, Chang Hong; Chen, Wenfeng; Ward, James
Facial expression is a major source of image variation in face images. Linking numerous expressions to the same face can be a huge challenge for face learning and recognition. It remains largely unknown what level of exposure to this image variation is critical for expression-invariant face recognition. We examined this issue in a recognition memory task, where the number of facial expressions of each face being exposed during a training session was manipulated. Faces were either trained with multiple expressions or a single expression, and they were later tested in either the same or different expressions. We found that recognition performance after learning three emotional expressions had no improvement over learning a single emotional expression (Experiments 1 and 2). However, learning three emotional expressions improved recognition compared to learning a single neutral expression (Experiment 3). These findings reveal both the limitation and the benefit of multiple exposures to variations of emotional expression in achieving expression-invariant face recognition. The transfer of expression training to a new type of expression is likely to depend on a relatively extensive level of training and a certain degree of variation across the types of expressions.
Dimensionality reduction is extremely important for understanding the intrinsic structure hidden in high-dimensional data. In recent years, sparse representation models have been widely used in dimensionality reduction. In this paper, a novel supervised learning method, called Sparsity Preserving Discriminant Projections (SPDP), is proposed. SPDP, which attempts to preserve the sparse representation structure of the data and maximize the between-class separability simultaneously, can be regarded as a combination of manifold learning and sparse representation. Specifically, SPDP first creates a concatenated dictionary by class-wise PCA decompositions and learns the sparse representation structure of each sample under the constructed dictionary using the least-squares method. Secondly, a local between-class separability function is defined to characterize the scatter of the samples in the different submanifolds. Then, SPDP integrates the learned sparse representation information with the local between-class relationship to construct a discriminant function. Finally, the proposed method is transformed into a generalized eigenvalue problem. Extensive experimental results on several popular face databases demonstrate the feasibility and effectiveness of the proposed approach.
Pisharady, Pramod Kumar; Poh, Loh Ai
This book presents a collection of computational intelligence algorithms that address issues in visual pattern recognition such as high computational complexity, abundance of pattern features, sensitivity to size and shape variations, and poor performance against complex backgrounds. The book has three parts. Part 1 describes various research issues in the field with a survey of the related literature. Part 2 presents computational intelligence based algorithms for feature selection and classification. The algorithms are discriminative and fast. The main application area considered is hand posture recognition. The book also discusses the utility of these algorithms in other visual as well as non-visual pattern recognition tasks including face recognition, general object recognition and cancer/tumor classification. Part 3 presents biologically inspired algorithms for feature extraction. The visual cortex model based features discussed have invariance with respect to appearance and size of the hand, and provide good...
Gundavarapu Mallikarjuna Rao
The availability of multi-core technology has resulted in a totally new computational era. Researchers are keen to explore the potential available in state-of-the-art machines for breaking the barrier imposed by serial computation. Face recognition is a challenging application in any computational environment, and the main difficulty of traditional face recognition algorithms is their lack of scalability. In this paper, Weighted Local Active Pixel Pattern (WLAPP), a new scalable face recognition algorithm suitable for parallel environments, is proposed. Local Active Pixel Pattern (LAPP) is found to be simple and computationally inexpensive compared to Local Binary Patterns (LBP); WLAPP is developed based on the concept of LAPP. The experimentation is performed on the FG-Net Aging Database with deliberately introduced 20% distortion, and the results are encouraging. Keywords: Active pixels, Face Recognition, Local Binary Pattern (LBP), Local Active Pixel Pattern (LAPP), Pattern computing, parallel workers, template, weight computation.
This paper presents a novel color face recognition algorithm that fuses color and local information. The proposed algorithm fuses multiple features derived from different color spaces. Multi-orientation and multi-scale information relating to the color face features is extracted by applying the Steerable Pyramid Transform (SPT) to local face regions. First, three new hybrid color spaces, YSCr, ZnSCr, and BnSCr, are constructed using the Cb and Cr component images of the YCbCr color space, the S component of the HSV color space, and the Zn and Bn components of the normalized XYZ color space. Secondly, the color component face images are partitioned into local patches. Thirdly, the SPT is applied to the local face regions and statistical features are extracted. Fourthly, all features are fused in a decision-fusion framework, and combinations of Extreme Learning Machine classifiers are applied to achieve fast and accurate color face recognition. The experiments show that the proposed Local Color Steerable Pyramid Transform (LCSPT) face recognition algorithm significantly improves face recognition performance by using the new color spaces compared to conventional and some hybrid ones. Furthermore, it achieves faster recognition than state-of-the-art approaches.
Farokhi, Sajad; Shamsuddin, Siti Mariyam; Flusser, Jan; Sheikh, Usman Ullah
Vol. 6, No. 1 (2012), pp. 181-186. R&D Projects: GA ČR GAP103/11/1552. Institutional support: RVO:67985556. Keywords: face recognition * moment invariants * Zernike moments. Subject RIV: JD - Computer Applications, Robotics. http://library.utia.cas.cz/separaty/2012/ZOI/flusser-assessment of time-lapse in visible and thermal face recognition -j.pdf
The reported experiment investigated memory of unfamiliar faces and how it is influenced by race, facial expression, direction of gaze, and observers' level of social anxiety. Eighty-seven Japanese participants initially memorized images of Oriental and Caucasian faces displaying either happy or angry expressions with direct or averted gaze. They then saw the previously seen faces and additional distractor faces displaying neutral expressions, and judged if they had seen them before. Their level of social anxiety was measured with a questionnaire. Regardless of gaze or race of the faces, recognition of faces studied with happy expressions was more accurate than of those studied with angry expressions (a happiness advantage), but this tendency weakened for people with higher levels of social anxiety, possibly due to their increased anxiety over positive feedback in social interactions. Interestingly, the reduction of the happiness advantage observed for the highly anxious participants was more prominent for own-race faces than for other-race faces. The results suggest that an angry expression disrupts processing of identity-relevant features of a face, whereas memory for happy faces is affected by social anxiety traits, and the magnitude of the impact may depend on the importance of the face.
Davies-Thompson, Jodie; Newling, Katherine; Andrews, Timothy J
The ability to recognize familiar faces across different viewing conditions contrasts with the inherent difficulty in the perception of unfamiliar faces across similar image manipulations. It is widely believed that this difference in perception and recognition is based on the neural representation for familiar faces being less sensitive to changes in the image than it is for unfamiliar faces. Here, we used a functional magnetic resonance adaptation paradigm to investigate image invariance in face-selective regions of the human brain. We found clear evidence for a degree of image-invariant adaptation to facial identity in face-selective regions, such as the fusiform face area. However, contrary to the predictions of models of face processing, comparable levels of image invariance were evident for both familiar and unfamiliar faces. This suggests that the marked differences in the perception of familiar and unfamiliar faces may not depend on differences in the way multiple images are represented in core face-selective regions of the human brain.
Watanabe, Eriko; Ishikawa, Mami; Ohta, Maiko; Kodate, Kashiko
Face recognition is used in a wide range of security systems, such as monitoring credit card use, searching for individuals with street cameras via the Internet, and maintaining immigration control. There are still many technical subjects under study. For instance, the number of images that can be stored is limited under the current system, and the rate of recognition must be improved to account for photo shots taken at different angles under various conditions. We implemented a fully automatic Fast Face Recognition Optical Correlator (FARCO) system by using a 1000 frame/s optical parallel correlator designed and assembled by us. Operational speed for the 1:N identification experiment (i.e. matching a pair of images among N, where N refers to the number of images in the database; here 4000 face images) amounts to less than 1.5 seconds, including the pre/post processing. From trial 1:N identification experiments using FARCO, we acquired low error rates of 2.6% False Reject Rate and 1.3% False Accept Rate. By making the most of the high-speed data-processing capability of this system, much more robustness can be achieved for various recognition conditions when large-category data are registered for a single person. We propose a face recognition algorithm for the FARCO that employs a temporal sequence of moving images. Applying this algorithm to natural postures, a recognition rate two times higher than that of our conventional system was achieved. The system has high potential for future use in a variety of purposes such as searching for criminal suspects using street and airport video cameras, registration of babies at hospitals, or handling of an immeasurable number of images in a database.
This paper presents a proposed methodology for face recognition based on an information theory approach to coding and decoding face images. We propose a 2D-3D face-matching method based on a principal component analysis (PCA) algorithm using canonical correlation analysis (CCA) to learn the mapping between a 2D face image and 3D face data. This method makes it possible to match a 2D face image with enrolled 3D face data. Our proposed fusion algorithm is based on the PCA method, which is applied to extract base features. PCA feature-level fusion requires the extraction of different features from the source data before the features are merged together. Experimental results on the TEXAS face image database have shown that the classification and recognition results based on the modified CCA-PCA method are superior to those based on the CCA method. Testing the 2D-3D face-matching results gave a quite poor recognition rate of 55% for the CCA method, while the modified CCA method based on PCA-level fusion achieved a very good recognition score of 85%.
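The PCA-plus-mapping pipeline described above can be sketched as follows. This is an illustrative stand-in, not the paper's implementation: PCA is done via SVD, the CCA step is replaced by a simple least-squares linear map between the two PCA feature spaces, and the random arrays are made-up stand-ins for real 2D image features and enrolled 3D data.

```python
import numpy as np

def pca_fit(X, k):
    """Return (mean, components) for a rank-k PCA of row-wise samples X."""
    mean = X.mean(axis=0)
    # SVD of the centred data gives the principal directions.
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:k]

def pca_project(X, mean, components):
    return (X - mean) @ components.T

rng = np.random.default_rng(0)
faces_2d = rng.normal(size=(40, 64))   # stand-in 2D image features
faces_3d = rng.normal(size=(40, 32))   # stand-in enrolled 3D features

m2, c2 = pca_fit(faces_2d, 10)
m3, c3 = pca_fit(faces_3d, 10)
z2 = pca_project(faces_2d, m2, c2)
z3 = pca_project(faces_3d, m3, c3)

# Least-squares linear map from 2D-feature space to 3D-feature space,
# a simplified stand-in for the learned CCA mapping.
W, *_ = np.linalg.lstsq(z2, z3, rcond=None)
predicted_3d = z2 @ W
```

A probe 2D image would be projected through the same PCA and mapping, then matched against the enrolled 3D features by nearest neighbour.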
Hills, Peter J; Eaton, Elizabeth; Pake, J Michael
Psychometric schizotypy in the general population correlates negatively with face recognition accuracy, potentially due to deficits in inhibition, social withdrawal, or eye-movement abnormalities. We report an eye-tracking face recognition study in which participants were required to match one of two faces (target and distractor) to a cue face presented immediately before. All faces could be presented with or without paraphernalia (e.g., hats, glasses, facial hair). Results showed that paraphernalia distracted participants, and that the most distracting condition was when the cue and the distractor face had paraphernalia but the target face did not; there was no correlation between distractibility and participants' scores on the Schizotypal Personality Questionnaire (SPQ). Schizotypy was negatively correlated with the proportion of time fixating on the eyes and positively correlated with not fixating on a feature. It was also negatively correlated with scan path length, a variable that correlated with face recognition accuracy. These results are interpreted as schizotypal traits being associated with a restricted scan path leading to face recognition deficits.
Maurer, Leonie; Zitting, Kirsi-Marja; Elliott, Kieran; Czeisler, Charles A; Ronda, Joseph M; Duffy, Jeanne F
Sleep has been demonstrated to improve consolidation of many types of new memories. However, few prior studies have examined how sleep impacts learning of face-name associations. The recognition of a new face along with the associated name is an important human cognitive skill. Here we investigated whether post-presentation sleep impacts recognition memory of new face-name associations in healthy adults. Fourteen participants were tested twice. Each time, they were presented 20 photos of faces with a corresponding name. Twelve hours later, they were shown each face twice, once with the correct and once with an incorrect name, and asked if each face-name combination was correct and to rate their confidence. In one condition the 12-h interval between presentation and recall included an 8-h nighttime sleep opportunity ("Sleep"), while in the other condition they remained awake ("Wake"). There were more correct and highly confident correct responses when the interval between presentation and recall included a sleep opportunity, although improvement between the "Wake" and "Sleep" conditions was not related to duration of sleep or any sleep stage. These data suggest that a nighttime sleep opportunity improves the ability to correctly recognize face-name associations. Further studies investigating the mechanism of this improvement are important, as this finding has implications for individuals with sleep disturbances and/or memory impairments. Copyright © 2015 Elsevier Inc. All rights reserved.
Kleider-Offutt, Heather M; Bond, Alesha D; Williams, Sarah E; Bohil, Corey J
Prior research indicates that stereotypical Black faces (e.g., wide nose, full lips: Afrocentric) are often associated with crime and violence. The current study investigated whether stereotypical faces may bias the interpretation of facial expression to seem threatening. Stimuli were prerated by face type (stereotypical, nonstereotypical) and expression (neutral, threatening). Later in a forced-choice task, different participants categorized face stimuli as stereotypical or not and threatening or not. Regardless of prerated expression, stereotypical faces were judged as more threatening than were nonstereotypical faces. These findings were supported using computational models based on general recognition theory (GRT), indicating that decision boundaries were more biased toward the threatening response for stereotypical faces than for nonstereotypical faces. GRT analysis also indicated that perception of face stereotypicality and emotional expression are dependent, both across categories and within individual categories. Higher perceived stereotypicality predicts higher perception of threat, and, conversely, higher ratings of threat predict higher perception of stereotypicality. Implications for racial face-type bias influencing perception and decision-making in a variety of social and professional contexts are discussed.
Rosendo Freitas de Amorim
This article investigates the origins and historical aspects of the prejudice experienced by homosexuals and the process of recognition of equality of rights, freedom, and dignity as a form of affirmation of homosexual citizenship. Despite the recent legal recognition of homoaffective unions, homosexuality is still treated as an inferior sexual orientation relative to the heteronormative default, and this translates into many legislative gaps concerning the right to free expression of sexual orientation. A bibliographical and documentary study was conducted, drawing on classical sociology, anthropology, and law, as well as the jurisprudence of the higher courts. The study identifies a direct relationship between sexuality and power. Despite the historical record of homosexuality in different periods of history, it was usually treated with inferiority, whether understood as sin, disease, or crime. It is argued that to build a substantive citizenship in Brazil it is necessary, among other measures, to criminalize homophobic practices.
Leonard, Hayley C.; Annaz, Dagmara; Karmiloff-Smith, Annette; Johnson, Mark H.
The current study investigated whether contrasting face recognition abilities in autism and Williams syndrome could be explained by different spatial frequency biases over developmental time. Typically-developing children and groups with Williams syndrome and autism were asked to recognise faces in which low, middle and high spatial frequency…
Studies have shown that own-race faces are more accurately recognised than other-race faces. The present study examined the effects of own- and other-race face recognition when targets of different ethnicities are presented to the participants together. The effect of semantic information on the recognition of different-race faces was also examined. The participants (N = 234) were presented with photos of own-race and other-race faces. For some participants the faces were presented with stereotypical names and for some not. As hypothesized, own-race faces were better recognised in target-present lineups and more often correctly rejected in target-absent lineups than other-race faces. Concerning presentation method, both own-race and other-race faces were more correctly identified in target-present simultaneous than in target-present sequential lineups. No effects of stereotypical names on face recognition were found. The findings suggest that identifying multi-ethnicity perpetrators is a problematic and difficult task.
Arroyave, S.; Hernandez, L. J.; Torres, Cesar; Matos, Lorenzo
We developed a system capable of recognizing people's faces from their facial features. The images are taken automatically by the software through a process of validating the presence of a face in front of the camera lens; the digitized image is compared with a database that contains previously captured images, to subsequently be recognized and finally identified. The contribution of the system set out here is that data acquisition is done in real time using a commercial USB webcam, offering a system that is equally optimal but much more economical. This tool is very effective in systems where security is of vital importance, supporting, with a high degree of verification, entities that possess databases of people's faces. (Author)
Gerlach, Christian; Marstrand, Lisbet; Starrfelt, Randi
Face recognition and word reading are thought to be mediated by relatively independent cognitive systems lateralized to the right and left hemisphere, respectively. If so, we should expect a higher incidence of face recognition problems in patients with right hemisphere injury and a higher...... incidence of reading problems in patients with left hemisphere injury. We tested this hypothesis in a group of 31 patients with unilateral right or left hemisphere infarcts in the territory of the posterior cerebral arteries. In most domains tested (e.g., visual attention, object recognition, visuo......-construction, motion perception), we found that both patient groups performed significantly worse than a matched control group. In particular we found a significant number of face recognition deficits in patients with left hemisphere injury and a significant number of patients with word reading deficits following...
Rugo, Kelsi F; Tamler, Kendall N; Woodman, Geoffrey F; Maxcey, Ashleigh M
Despite more than a century of evidence that long-term memory for pictures and words differs, much of what we know about memory comes from studies using words. Recent research examining visual long-term memory has demonstrated that recognizing an object induces the forgetting of objects from the same category. This recognition-induced forgetting has been shown with a variety of everyday objects. However, unlike everyday objects, faces are objects of expertise, and as a result faces might be immune to recognition-induced forgetting. Yet despite our excellent memory for such stimuli, we found that faces were susceptible to recognition-induced forgetting. Our findings have implications for how models of human memory account for recognition-induced forgetting and represent objects of expertise, and carry consequences for eyewitness testimony and the justice system.
Nasrollahi, Kamal; Moeslund, Thomas B.
Face recognition systems are very sensitive to the quality and resolution of their input face images. This makes such systems unreliable when working with long surveillance video sequences without employing some selection and enhancement algorithms. On the other hand, processing all the frames...... of such video sequences by any enhancement or even face recognition algorithm is demanding. Thus, there is a need for a mechanism to summarize the input video sequence to a set of key-frames and then applying an enhancement algorithm to this subset. This paper presents a system doing exactly this. The system...... uses face quality assessment to select the key-frames and a hybrid super-resolution to enhance the face image quality. The suggested system that employs a linear associator face recognizer to evaluate the enhanced results has been tested on real surveillance video sequences and the experimental results...
Cohen, I.; Looije, R.; Neerincx, M.A.
Social robots can comfort and support children who have to cope with chronic diseases. In previous studies, a "facial robot", the iCat, proved to show well-recognized emotional expressions that are important in social interactions. The question is if a mobile robot without a face, the Nao, can
Conclusion: Patients with SAD have a positive point of view of their own face and experience self-relevance for the attractively transformed self-faces. This distorted cognition may be based on dysfunctions in the frontal and inferior parietal regions. The abnormal engagement of the fronto-parietal attentional network during processing face stimuli in non-social situations may be linked to distorted self-recognition in SAD.
Yan, Linlin; Wang, Zhe; Huang, Jianling; Sun, Yu-Hao P.; Judges, Rebecca A.; Xiao, Naiqi G.; Lee, Kang
In the present study, we examined whether social categorization based on university affiliation can induce an advantage in recognizing faces. Moreover, we investigated how the reputation or location of the university affected face recognition performance using an old/new paradigm. We assigned five different university labels to the faces: participants’ own university and four other universities. Among the four other university labels, we manipulated the academic reputation and geographical lo...
Rhodes, Matthew G.; Anastasi, Jeffrey S.
A large number of studies have examined the finding that recognition memory for faces of one's own age group is often superior to memory for faces of another age group. We examined this "own-age bias" (OAB) in the meta-analyses reported. These data showed that hits were reliably greater for same-age relative to other-age faces (g = 0.23) and that…
Ge, Liezhong; Anzures, Gizelle; Wang, Zhe; Kelly, David J.; Pascalis, Olivier; Quinn, Paul C.; Slater, Alan M.; Yang, Zhiliang; Lee, Kang
Children’s recognition of familiar own-age peers was investigated. Four-, 8-, and 14-year-old Chinese children were asked to identify their classmates from photographs showing the entire face, the internal facial features only, the external facial features only, or the eyes, nose, or mouth only. Participants from all age groups were familiar with the faces used as stimuli for one academic year. The results showed that children from all age groups demonstrated an advantage for recognition of the internal facial features relative to their recognition of the external facial features. Previous observations of a shift in reliance from external to internal facial features can, thus, be attributed to experience with faces rather than to age-related changes in face processing. PMID:18639888
The capacity to recognize perceptually similar complex visual stimuli such as human faces has classically been thought to require a large primate and/or mammalian brain with neurobiological adaptations. However, recent work suggests that the relatively small brain of a paper wasp, Polistes fuscatus, possesses specialized face processing capabilities. In parallel, the honeybee, Apis mellifera, has been shown to be able to rely on configural learning for extensive visual learning, thus converging with primate visual processing. The honeybee may therefore be able to recognize human faces and show sophisticated learning performance due to its foraging lifestyle, which involves visiting and memorizing many flowers. We investigated the visual capacities of the widespread invasive wasp Vespula vulgaris, which is unlikely to have any specialization for face processing. Freely flying individual wasps were trained in an appetitive-aversive differential conditioning procedure to discriminate between perceptually similar human face images from a standard face recognition test. The wasps could then recognize the target face from novel dissimilar or similar human faces, but showed a significant drop in performance when the stimuli were rotated by 180°, thus paralleling results acquired with a similar protocol in honeybees. This result confirms that a general visual system can likely solve complex recognition tasks, a first stage in the evolution of a visual expertise system for face recognition, even in the absence of neurobiological or behavioral specialization.
Recent years have witnessed a growing interest in developing methods for 3D face recognition. However, 3D scans often suffer from the problems of missing parts, large facial expressions, and occlusions. To be useful in real-world applications, a 3D face recognition approach should be able to handle these challenges. In this paper, we propose a novel general approach to the 3D face recognition problem that makes use of multiple keypoint descriptors (MKD) and sparse representation-based classification (SRC). We call the proposed method 3DMKDSRC for short. Specifically, with 3DMKDSRC, each 3D face scan is represented as a set of descriptor vectors extracted from keypoints by meshSIFT. Descriptor vectors of gallery samples form the gallery dictionary. Given a probe 3D face scan, its descriptors are extracted first, and then its identity can be determined by using a multitask SRC. The proposed 3DMKDSRC approach does not require pre-alignment between two face scans and is quite robust to the problems of missing data, occlusions, and expressions. Its superiority over other leading 3D face recognition schemes has been corroborated by extensive experiments conducted on three benchmark databases: Bosphorus, GavabDB, and FRGC2.0. The Matlab source code for 3DMKDSRC and the related evaluation results are publicly available at http://sse.tongji.edu.cn/linzhang/3dmkdsrcface/3dmkdsrc.htm.
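The decision rule at the heart of SRC (attribute the probe to the class whose gallery descriptors reconstruct it with the smallest residual) can be sketched as below. This is a simplified stand-in for the paper's multitask SRC: the l1-sparse coding step is replaced by per-class least squares, and meshSIFT descriptors are replaced by plain vectors.

```python
import numpy as np

def src_identify(gallery, probe):
    """Classify `probe` by per-class reconstruction residual.

    `gallery` maps class label -> (n_samples, dim) matrix of descriptors.
    Keeps SRC's decision rule (smallest residual wins) but uses per-class
    least squares instead of l1-minimisation, for brevity.
    """
    best_label, best_residual = None, np.inf
    for label, D in gallery.items():
        # Best reconstruction of the probe from this class's descriptors.
        coeffs, *_ = np.linalg.lstsq(D.T, probe, rcond=None)
        residual = np.linalg.norm(D.T @ coeffs - probe)
        if residual < best_residual:
            best_label, best_residual = label, residual
    return best_label
```

For example, with two classes spanning disjoint subspaces, a probe lying near class "a"'s subspace is assigned to "a" because its residual there is near zero.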
Panagiotopoulou, Elena; Filippetti, Maria Laura; Tsakiris, Manos; Fotopoulou, Aikaterini
Multisensory integration is a powerful mechanism for constructing body awareness and key for the sense of selfhood. Recent evidence has shown that the specialised C tactile modality that gives rise to feelings of pleasant, affective touch, can enhance the experience of body ownership during multisensory integration. Nevertheless, no study has examined whether affective touch can also modulate psychological identification with our face, the hallmark of our identity. The current study used the ...
Yin, Xi; Liu, Xiaoming
This paper explores multi-task learning (MTL) for face recognition. We answer the questions of how and why MTL can improve the face recognition performance. First, we propose a multi-task Convolutional Neural Network (CNN) for face recognition where identity classification is the main task and pose, illumination, and expression estimations are the side tasks. Second, we develop a dynamic-weighting scheme to automatically assign the loss weight to each side task, which is a crucial problem in MTL. Third, we propose a pose-directed multi-task CNN by grouping different poses to learn pose-specific identity features, simultaneously across all poses. Last but not least, we propose an energy-based weight analysis method to explore how CNN-based MTL works. We observe that the side tasks serve as regularizations to disentangle the variations from the learnt identity features. Extensive experiments on the entire Multi-PIE dataset demonstrate the effectiveness of the proposed approach. To the best of our knowledge, this is the first work using all data in Multi-PIE for face recognition. Our approach is also applicable to in-the-wild datasets for pose-invariant face recognition and achieves comparable or better performance than state of the art on LFW, CFP, and IJB-A datasets.
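The dynamic loss-weighting idea above (combining the main identity loss with automatically weighted side-task losses) can be illustrated with a toy scheme. The paper learns its weights inside the CNN; the softmax-over-losses rule below is purely my own illustrative assumption, not the paper's scheme.

```python
import numpy as np

def dynamic_side_weights(side_losses, temperature=1.0):
    """Softmax-based weighting over side-task losses.

    Gives larger weight to side tasks whose current loss is smaller, so
    poorly fitting side tasks do not dominate the identity task.
    """
    losses = np.asarray(side_losses, dtype=float)
    logits = -losses / temperature
    weights = np.exp(logits - logits.max())   # numerically stable softmax
    return weights / weights.sum()

def total_loss(main_loss, side_losses):
    """Main-task loss plus the weighted sum of side-task losses."""
    weights = dynamic_side_weights(side_losses)
    return main_loss + float(np.dot(weights, side_losses))
```

In training, the weights would be recomputed (or learned) each step, so the side tasks act as regularizers on the shared identity features rather than competing objectives.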
The cross-race effect – enhanced recognition of racial ingroup faces – has been suggested to exist in other categories, such as arbitrary groups. This study aimed to investigate the effect of crossing racial (black/white) and arbitrary (blue/yellow) categories, in addition to the role of facial expressions in this phenomenon. 120 Caucasian students (from the UK, Macedonia, and Portugal) performed a discrimination task (judging faces as new vs. previously seen). Using a within-subjects design, reaction times and accuracy were measured. We hypothesized that (1) the arbitrary group membership of faces would moderate the cross-race effect and (2) the racial group membership of faces would moderate the usual recognition advantage for happy faces.
Hills, Peter J.; Lewis, Michael B.; Honey, R. C.
The accuracy with which previously unfamiliar faces are recognised is increased by the presentation of a stereotype-congruent occupation label [Klatzky, R. L., Martin, G. L., & Kane, R. A. (1982a). "Semantic interpretation effects on memory for faces." "Memory & Cognition," 10, 195-206; Klatzky, R. L., Martin, G. L., & Kane, R. A. (1982b).…
Cui, Chen; Asari, Vijayan K.
Biometric features such as fingerprints, iris patterns, and face features help to identify people and restrict access to secure areas by performing advanced pattern analysis and matching. Face recognition is one of the most promising biometric methodologies for human identification in a non-cooperative security environment. However, the recognition results obtained by face recognition systems are affected by several variations that may happen to the patterns in an unrestricted environment. As a result, several algorithms have been developed for extracting different facial features for face recognition. Due to the various possible challenges of data captured at different lighting conditions, viewing angles, facial expressions, and partial occlusions in natural environmental conditions, automatic facial recognition still remains a difficult issue that needs to be resolved. In this paper, we propose a novel approach to tackling some of these issues by analyzing local textural descriptions for facial feature representation. The textural information is extracted by an enhanced local binary pattern (ELBP) description of all the local regions of the face. The relationship of each pixel with respect to its neighborhood is extracted and employed to calculate the new representation. ELBP reconstructs a much better textural feature extraction vector from an original gray-level image in different lighting conditions. The dimensionality of the texture image is reduced by principal component analysis performed on each local face region. Each low-dimensional vector representing a local region is then weighted based on the significance of the sub-region. The weight of each sub-region is determined by employing the local variance estimate of the respective region, which represents the significance of the region. The final facial textural feature vector is obtained by concatenating the reduced-dimensional weight sets of all the modules (sub-regions) of the face image.
Illumination variation makes automatic face recognition a challenging task, especially in low-light environments. A very simple and efficient novel low-light image denoising method for low-frequency noise (DeLFN) is proposed. The noise frequency distribution of low-light images is presented based on extensive experimental results; low and very low frequency noise are dominant in low-light conditions. DeLFN is a three-level image denoising method. The first level removes mixed noise by histogram equalization (HE), improving overall contrast. The second level removes low-frequency noise by logarithmic transformation (LOG), enhancing image detail. The third level removes residual very low frequency noise by high-pass filtering, recovering more features of the true images. The principal component analysis (PCA) recognition method is applied to test the recognition rate of face images preprocessed with DeLFN. DeLFN is compared with several representative illumination preprocessing methods on the Yale Face Database B, the Extended Yale Face Database B, and the CMU PIE face database, respectively. DeLFN not only outperformed the other algorithms in improving visual quality and face recognition rate, but is also simpler and computationally efficient enough for real-time applications.
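The three-level pipeline described above (histogram equalization, then a logarithmic transform, then high-pass filtering) can be sketched roughly as follows. Parameter choices such as the 3x3 box blur standing in for the high-pass step are my own assumptions, not the paper's.

```python
import numpy as np

def histogram_equalize(img):
    """Level 1: global histogram equalization to raise overall contrast."""
    hist, _ = np.histogram(img.flatten(), bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1) * 255.0
    return cdf[img.astype(np.uint8)]

def log_transform(img):
    """Level 2: logarithmic transform to lift detail out of dark regions."""
    return 255.0 * np.log1p(img) / np.log1p(255.0)

def high_pass(img):
    """Level 3: subtract a local mean (3x3 box blur) to suppress residual
    very-low-frequency shading; 128 recentres the result mid-range."""
    padded = np.pad(img, 1, mode="edge")
    blurred = sum(padded[i:i + img.shape[0], j:j + img.shape[1]]
                  for i in range(3) for j in range(3)) / 9.0
    return img - blurred + 128.0

def delfn_like(img):
    """Chain the three levels, mirroring the DeLFN ordering."""
    return high_pass(log_transform(histogram_equalize(img)))
```

The output would then be fed to a recognizer (PCA in the paper) in place of the raw low-light image.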
Moeini, Ali; Faez, Karim; Moeini, Hossein
A feature extraction method is proposed to handle the problem of facial appearance changes, including facial makeup and plastic surgery, in face recognition. To make a face recognition method robust to facial appearance changes, features are individually extracted from the facial depth, on which facial makeup and plastic surgery have no effect. Facial depth features are then added to facial texture features to perform feature extraction. Accordingly, a three-dimensional (3-D) face is reconstructed from only a single two-dimensional (2-D) frontal image in real-world scenarios, and the facial depth is extracted from the reconstructed model. Afterward, the dual-tree complex wavelet transform (DT-CWT) is applied to both the texture and reconstructed depth images to extract the feature vectors. Finally, the final feature vectors are generated by combining the 2-D and 3-D feature vectors, and are then classified by adopting the support vector machine. Promising results have been achieved for makeup-invariant face recognition on two available image databases, YouTube Makeup and Virtual Makeup, and for plastic-surgery-invariant face recognition on a plastic surgery face database, where the method is compared to several state-of-the-art feature extraction methods. Several real-world scenarios are also planned to evaluate the performance of the proposed method on a combination of these three databases with 1102 subjects.
Short, Nathaniel J; Yuffa, Alex J; Videen, Gorden; Hu, Shuowen
Materials, such as cosmetics, applied to the face can severely inhibit biometric face-recognition systems operating in the visible spectrum. These products are typically made up of materials having different spectral properties and color pigmentation that distorts the perceived shape of the face. The surface of the face emits thermal radiation, due to the living tissue beneath the surface of the skin. The emissivity of skin is approximately 0.99; in comparison, oil- and plastic-based materials, commonly found in cosmetics and face paints, have an emissivity range of 0.9-0.95 in the long-wavelength infrared part of the spectrum. Due to these properties, all three are good thermal emitters and have little impact on the heat transferred from the face. Polarimetric-thermal imaging provides additional details of the face and is also dependent upon the thermal radiation from the face. In this paper, we provide a theoretical analysis on the thermal conductivity of various materials commonly applied to the face using a metallic sphere. Additionally, we observe the impact of environmental conditions on the strength of the polarimetric signature and the ability to recover geometric details. Finally, we show how these materials degrade the performance of traditional face-recognition methods and provide an approach to mitigating this effect using polarimetric-thermal imaging.
Van Strien, Jan W.; Glimmerveen, Johanna C.; Franken, Ingmar H. A.; Martens, Vanessa E. G.; de Bruin, Eveline A.
To examine the development of recognition memory in primary-school children, 36 healthy younger children (8-9 years old) and 36 healthy older children (11-12 years old) participated in an ERP study with an extended continuous face recognition task (Study 1). Each face of a series of 30 faces was shown randomly six times interspersed with…
Lee, Ping-Han; Wu, Szu-Wei; Hung, Yi-Ping
Illumination compensation and normalization play a crucial role in face recognition. Existing algorithms either compensate for low-frequency illumination or capture high-frequency edges; however, the orientations of edges have not been well exploited. In this paper, we propose orientated local histogram equalization (OLHE), which compensates for illumination while encoding rich information on edge orientations. We claim that edge orientation is useful for face recognition. Three OLHE feature combination schemes were proposed for face recognition: 1) one that encodes the most edge orientations; 2) one that is more compact with good edge-preserving capability; and 3) one that performs exceptionally well under extreme lighting conditions. The proposed algorithm yielded state-of-the-art performance on AR, CMU PIE, and Extended Yale B using standard protocols. We further evaluated the average performance of the proposed algorithm on differently lit images, and it yielded promising results.
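OLHE's exact formulation is not given in the abstract. As a rough illustration of its two ingredients, the sketch below pairs classic histogram equalization (illumination compensation) with gradient-orientation quantization (edge-orientation coding); function names and the bin count are assumptions:

```python
import numpy as np

def hist_equalize(img):
    # classic histogram equalization: remap 8-bit intensities through the CDF
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1.0) * 255.0
    return cdf[img.astype(int)]

def orientation_bins(img, n_bins=8):
    # quantize local gradient orientations into n_bins over [0, pi)
    gy, gx = np.gradient(img.astype(float))
    theta = np.arctan2(gy, gx) % np.pi
    return np.minimum((theta / np.pi * n_bins).astype(int), n_bins - 1)
```

The actual OLHE operator combines these two ideas locally per region; this sketch only shows the building blocks separately.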
Chude-Olisah, Chollette C.; Sulong, Ghazali; Chude-Okonkwo, Uche A. K.; Hashim, Siti Z. M.
Plastic surgery procedures on the face introduce skin texture variations between images of the same person (intra-subject), thereby making the task of face recognition more difficult than in a normal scenario. Usually, in contemporary face recognition systems, the original gray-level face image is used as input to the Gabor descriptor, which translates to encoding some texture properties of the face image. The texture-encoding process significantly degrades the performance of such systems in the case of plastic surgery due to the presence of surgically induced intra-subject variations. Based on the proposition that the shape of significant facial components such as the eyes, nose, eyebrows, and mouth remains unchanged after plastic surgery, this paper employs an edge-based Gabor feature representation approach for the recognition of surgically altered face images. We use the edge information, which is dependent on the shapes of the significant facial components, to address the plastic surgery-induced texture variation problems. To ensure that the significant facial components represent useful edge information with little or no false edges, a simple illumination normalization technique is proposed for preprocessing. A Gabor wavelet is applied to the edge image to accentuate the uniqueness of the significant facial components for discriminating among different subjects. The performance of the proposed method is evaluated on the Georgia Tech (GT) and the Labeled Faces in the Wild (LFW) databases with illumination and expression problems, and on the plastic surgery database with texture changes. Results show that the proposed edge-based Gabor feature representation approach is robust against plastic surgery-induced face variations amidst expression and illumination problems and outperforms the existing plastic surgery face recognition methods reported in the literature.
Yeong Gon Kim
Unimodal biometric systems (based on a single modality such as face or fingerprint) have to contend with various problems, such as illumination variation, skin condition, environmental conditions, and device variations. Therefore, multimodal biometric systems have been used to overcome the limitations of unimodal biometrics and provide highly accurate recognition. In this paper, we propose a new multimodal biometric system based on score-level fusion of face and both irises' recognition. Our study has the following novel features. First, the proposed device acquires images of the face and both irises simultaneously. The device consists of a face camera, two iris cameras, near-infrared illuminators, and cold mirrors. Second, fast and accurate iris detection is based on two circular edge detections, accomplished in the iris image on the basis of the size of the iris detected in the face image. Third, accuracy is enhanced by combining the scores for the face and both irises using a support vector machine. The experimental results show that the equal error rate for the proposed method is 0.131%, which is lower than that of face or iris recognition alone and of other fusion methods.
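The abstract's score-level fusion uses an SVM; a minimal linear stand-in, which min-max normalizes each matcher's scores over the gallery and takes a weighted sum, might look like this (the weights are illustrative assumptions):

```python
import numpy as np

def minmax_norm(scores):
    # rescale one matcher's gallery scores to [0, 1] so units are comparable
    s = np.asarray(scores, dtype=float)
    rng = s.max() - s.min()
    return (s - s.min()) / rng if rng > 0 else np.zeros_like(s)

def fuse_scores(face, iris_left, iris_right, weights=(0.4, 0.3, 0.3)):
    """Weighted-sum fusion of per-gallery-subject match scores from the
    three matchers; a linear stand-in for the paper's SVM fusion."""
    mats = [minmax_norm(m) for m in (face, iris_left, iris_right)]
    return sum(w * m for w, m in zip(weights, mats))
```

The identification decision is then the argmax of the fused score vector over gallery subjects; an SVM fusion would instead learn the combination from genuine/impostor score pairs.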
This paper introduces a novel technique to detect and recognize faces in real time at a very high rate. It is essentially a feature-based approach, in which a classifier is trained on Haar-like rectangular features selected by the AdaBoost algorithm, and histogram equalization is used as an efficient representation method for varying illumination in the image. The face detection system generates an integral image window to perform a Haar feature classification during one clock cycle. And then i...
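The integral-image trick such detectors rely on is worth showing concretely: after one pass of cumulative sums, any rectangle sum (and hence any Haar-like feature) costs only four array lookups. This is a generic software sketch, not the paper's hardware pipeline:

```python
import numpy as np

def integral_image(img):
    # ii[y, x] = sum of img[:y, :x]; a padded zero row/column simplifies lookups
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, y, x, h, w):
    # sum of img[y:y+h, x:x+w] in O(1) via four corner lookups
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_two_rect(ii, y, x, h, w):
    # two-rectangle Haar-like feature: left half minus right half
    half = w // 2
    return rect_sum(ii, y, x, h, half) - rect_sum(ii, y, x + half, h, half)
```

On uniform regions the feature responds with zero; it fires on vertical edges, which is exactly the kind of contrast pattern AdaBoost selects from.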
Riedel, Philipp; Ragert, Patrick; Schelinski, Stefanie; Kiebel, Stefan J; von Kriegstein, Katharina
It is commonly assumed that the recruitment of visual areas during audition is not relevant for performing auditory tasks ('auditory-only view'). According to an alternative view, however, the recruitment of visual cortices is thought to optimize auditory-only task performance ('auditory-visual view'). This alternative view is based on functional magnetic resonance imaging (fMRI) studies. These studies have shown, for example, that even if there is only auditory input available, face-movement sensitive areas within the posterior superior temporal sulcus (pSTS) are involved in understanding what is said (auditory-only speech recognition). This is particularly the case when speakers are known audio-visually, that is, after brief voice-face learning. Here we tested whether the left pSTS involvement is causally related to performance in auditory-only speech recognition when speakers are known by face. To test this hypothesis, we applied cathodal transcranial direct current stimulation (tDCS) to the pSTS during (i) visual-only speech recognition of a speaker known only visually to participants and (ii) auditory-only speech recognition of speakers they learned by voice and face. We defined the cathode as active electrode to down-regulate cortical excitability by hyperpolarization of neurons. tDCS to the pSTS interfered with visual-only speech recognition performance compared to a control group without pSTS stimulation (tDCS to BA6/44 or sham). Critically, compared to controls, pSTS stimulation additionally decreased auditory-only speech recognition performance selectively for voice-face learned speakers. These results are important in two ways. First, they provide direct evidence that the pSTS is causally involved in visual-only speech recognition; this confirms a long-standing prediction of current face-processing models. Secondly, they show that visual face-sensitive pSTS is causally involved in optimizing auditory-only speech recognition. These results are in line
P. Chandra Sekhar Reddy
Today the face recognition capability of the human visual system plays a significant role in daily life, owing to numerous important applications for automatic face recognition. One problem with recent image classification and recognition approaches is that they extract features over the entire image and over its full grey-level range. The present paper overcomes this by deriving an approach that reduces the dimensionality of the image using shape primitives and reduces the grey-level range using fuzzy logic, while preserving the significant attributes of the texture. The paper proposes an Image Dimensionality Reduction using Shape Primitives (IDRSP) model for efficient face recognition. Fuzzy logic is applied to the IDRSP facial model to reduce the grey-level range to 0-4. This makes the proposed fuzzy-based IDRSP (FIDRSP) model suitable for grey-level co-occurrence matrices. The proposed FIDRSP model with GLCM features is compared with existing face recognition algorithms. The results indicate the efficacy of the proposed method.
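A grey-level co-occurrence matrix over the reduced 0-4 range, with a few standard Haralick-style statistics, can be sketched as follows. The shape-primitive and fuzzy reduction steps are assumed to have run upstream; the offset and feature names here are the conventional ones, not necessarily the paper's exact choices:

```python
import numpy as np

def glcm(img, dy=0, dx=1, levels=5):
    """Grey-level co-occurrence matrix for pixel offset (dy, dx). The
    abstract's fuzzy step reduces intensities to 0..4, hence levels=5."""
    m = np.zeros((levels, levels), dtype=int)
    h, w = img.shape
    for y in range(max(0, -dy), h - max(0, dy)):
        for x in range(max(0, -dx), w - max(0, dx)):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m

def glcm_features(m):
    # normalize counts to joint probabilities, then compute texture statistics
    p = m / max(m.sum(), 1)
    i, j = np.indices(p.shape)
    return {
        "contrast": float(((i - j) ** 2 * p).sum()),
        "energy": float((p ** 2).sum()),
        "homogeneity": float((p / (1.0 + np.abs(i - j))).sum()),
    }
```

With only five grey levels the matrix is 5x5, which is precisely why the fuzzy range reduction makes GLCM features cheap and well populated.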
Zeinstra, C G; Meuwly, D; Ruifrok, A Cc; Veldhuis, R Nj; Spreeuwers, L J
This paper surveys the literature on forensic face recognition (FFR), with a particular focus on the strength of evidence as used in a court of law. FFR is the use of biometric face recognition for several applications in forensic science. It includes scenarios of ID verification and open-set identification, investigation and intelligence, and evaluation of the strength of evidence. We present FFR from operational, tactical, and strategic perspectives. We discuss criticism of FFR and we provide an overview of research efforts from multiple perspectives that relate to the domain of FFR. Finally, we sketch possible future directions for FFR. Copyright © 2018 Central Police University.
Grudzien, A.; Kowalski, M.; Szustakowski, M.
Biometrics is a technique for the automatic recognition of a person based on physiological or behavioral characteristics. Since the characteristics used are unique, biometrics can create a direct link between a person and an identity based on a variety of characteristics. The human face is one of the most important biometric modalities for automatic authentication. The most popular method of face recognition, which relies on processing visible-spectrum information, seems imperfect. Thermal infrared imagery may be a promising alternative or complement to visible-range imaging for several reasons. This paper presents an approach combining both methods.
Despite the increasing interest in twin studies and the stunning amount of research on face recognition, the ability of adult identical twins to discriminate their own faces from those of their co-twins has been scarcely investigated. One's own face is the most distinctive feature of the bodily self, and people typically show a clear advantage in recognizing their own face even more than other very familiar identities. Given the very high level of resemblance of their faces, monozygotic twins represent a unique model for exploring self-face processing. Herein we examined the ability of monozygotic twins to distinguish their own face from the face of their co-twin and of a highly familiar individual. Results show that twins equally recognize their own face and their twin's face. This lack of self-face advantage was negatively predicted by how much they felt physically similar to their co-twin and by their anxious or avoidant attachment style. We speculate that in monozygotic twins, the visual representation of the self-face overlaps with that of the co-twin. Thus, to distinguish the self from the co-twin, monozygotic twins have to rely much more than control participants on the multisensory integration processes upon which the sense of bodily self is based. Moreover, in keeping with the notion that attachment style influences perception of self and significant others, we propose that the observed self/co-twin confusion may depend upon insecure attachment.
Tang, Xin; Feng, Guo-Can; Li, Xiao-Xin; Cai, Jia-Xin
Face recognition is challenging especially when the images from different persons are similar to each other due to variations in illumination, expression, and occlusion. If we have sufficient training images of each person, spanning the facial variations of that person under testing conditions, sparse representation based classification (SRC) achieves very promising results. However, in many applications, face recognition encounters the small sample size problem arising from the small number of available training images per person. In this paper, we present a novel face recognition framework utilizing low-rank and sparse error matrix decomposition together with sparse coding techniques (LRSE+SC). Firstly, the low-rank matrix recovery technique is applied to decompose the face images of each class into a low-rank matrix and a sparse error matrix. The low-rank matrix of each individual is a class-specific dictionary that captures the discriminative features of this individual. The sparse error matrix represents the intra-class variations, such as illumination and expression changes. Secondly, we combine the low-rank part (representative basis) of each person into a supervised dictionary and integrate the sparse error matrices of all individuals into a within-individual variant dictionary, which can represent the possible variations between the testing and training images. These two dictionaries are then used to code the query image. The within-individual variant dictionary can be shared by all subjects and contributes only to explaining the lighting conditions, expressions, and occlusions of the query image rather than to discrimination. Finally, a reconstruction-based scheme is adopted for face recognition. Since the within-individual dictionary is introduced, LRSE+SC can handle corrupted training data and the situation in which not all subjects have enough training samples. Experimental results show that our method achieves the
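The class-wise low-rank plus sparse-error decomposition at the heart of LRSE+SC can be approximated with a simple alternating scheme. This sketch (truncated SVD alternated with soft-thresholding) is a simplified stand-in for the exact recovery program the paper uses; `rank`, `lam`, and `iters` are illustrative parameters:

```python
import numpy as np

def lowrank_sparse_split(X, rank=1, lam=0.1, iters=50):
    """Split X into L (rank-limited class dictionary) + S (sparse error)
    by alternating a truncated SVD with elementwise soft-thresholding."""
    S = np.zeros_like(X, dtype=float)
    for _ in range(iters):
        # low-rank step: best rank-k approximation of the de-sparsed data
        U, s, Vt = np.linalg.svd(X - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # sparse step: soft-threshold the residual to keep only large errors
        R = X - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)
    return L, S
```

In LRSE+SC the low-rank parts L become the supervised dictionary and the sparse parts S populate the shared within-individual variant dictionary.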
Lemos, Raquel; Santana, Isabel; Caetano, Gina; Bernardino, Inês; Morais, Ricardo; Farivar, Reza; Castelo-Branco, Miguel
Mild cognitive impairment (MCI) has been associated with a high risk of conversion to Alzheimer's dementia. In addition to memory complaints, impairments in the visuospatial domain have been reported in this condition. We have previously shown that deficits in perceiving structure-from-motion (SFM) objects are reflected in functional reorganization of brain activity within the visual ventral stream. Here we aimed to identify structural correlates of psychophysical complex face and object recognition performance in amnestic MCI patients (n=30 vs. n=25 controls). This study was, therefore, motivated by evidence from recent studies showing that a combination of visual information across dorsal and ventral visual streams may be needed for the perception of three-dimensional (3D) SFM objects. In our experimental paradigm, participants had to discriminate 3D SFM shapes (faces and objects) from 3D SFM meaningless (scrambled) shapes. Morphometric analysis established neuroanatomical evidence for impairment in MCI as demonstrated by smaller hippocampal volumes. We found association between cortical thickness and face recognition performance, comprising the occipital lobe and visual ventral stream fusiform regions (overlapping the known location of face fusiform area) in the right hemisphere, in MCI. We conclude that impairment of 3D visual integration exists at the MCI stage involving also the visual ventral stream and contributing to face recognition deficits. The specificity of such observed structure-function correlation for faces suggests a special role of this processing pathway in health and disease. (JINS, 2016, 22, 744-754).
McBain, Ryan; Norton, Daniel; Chen, Yue
Healthy females outperform males on face recognition tasks. Relative to healthy individuals, schizophrenia patients are impaired at face perception. Yet, it is unclear whether the female advantage found in healthy controls is preserved in females with schizophrenia. In the present study, we compared male and female patients and healthy controls on two basic face perception tasks - detection and identity discrimination. In the detection task, subjects located an upright or inverted line-drawn face (or tree) embedded within a larger line-drawing. In the identity discrimination task, subjects determined which of two side-by-side face images matched an earlier presented face image. Healthy females were significantly more accurate than healthy males on face detection, but not on identity discrimination. However, female patients were not more accurate than male patients on either task. On both upright face detection and face identity discrimination, healthy controls significantly outperformed patients. Patients' performance on face detection was closely associated with tree detection and IQ scores, as well as level of psychosis. This pattern of results suggests that a female advantage in basic face perception is no longer available in schizophrenia, and that this absence may be related to a generalized deficit factor which acts to level performance across sexes, and putative changes in sex-related neurobiological differences associated with schizophrenia. Copyright 2009 Elsevier Ltd. All rights reserved.
Noh, Soo Rim; Isaacowitz, Derek M.
While age-related declines in facial expression recognition are well documented, previous research relied mostly on isolated faces devoid of context. We investigated the effects of context on age differences in recognition of facial emotions and in visual scanning patterns of emotional faces. While their eye movements were monitored, younger and older participants viewed facial expressions (i.e., anger, disgust) in contexts that were emotionally congruent, incongruent, or neutral to the facial expression to be identified. Both age groups had highest recognition rates of facial expressions in the congruent context, followed by the neutral context, and recognition rates in the incongruent context were worst. These context effects were more pronounced for older adults. Compared to younger adults, older adults exhibited a greater benefit from congruent contextual information, regardless of facial expression. Context also influenced the pattern of visual scanning characteristics of emotional faces in a similar manner across age groups. In addition, older adults initially attended more to context overall. Our data highlight the importance of considering the role of context in understanding emotion recognition in adulthood. PMID:23163713
Dyck, Miriam; Winbeck, Maren; Leiberg, Susanne; Chen, Yuhan; Mathiak, Klaus
Studies investigating emotion recognition in patients with schizophrenia predominantly presented photographs of facial expressions. Better control and higher flexibility of emotion displays could be afforded by virtual reality (VR). VR allows the manipulation of facial expression and can simulate social interactions in a controlled and yet more naturalistic environment. However, to our knowledge, there is no study that systematically investigated whether patients with schizophrenia show the same emotion recognition deficits when emotions are expressed by virtual as compared to natural faces. Twenty schizophrenia patients and 20 controls rated pictures of natural and virtual faces with respect to the basic emotion expressed (happiness, sadness, anger, fear, disgust, and neutrality). Consistent with our hypothesis, the results revealed that emotion recognition impairments also emerged for emotions expressed by virtual characters. As virtual in contrast to natural expressions only contain major emotional features, schizophrenia patients already seem to be impaired in the recognition of basic emotional features. This finding has practical implication as it supports the use of virtual emotional expressions for psychiatric research: the ease of changing facial features, animating avatar faces, and creating therapeutic simulations makes validated artificial expressions perfectly suited to study and treat emotion recognition deficits in schizophrenia. Copyright © 2009 Elsevier Ltd. All rights reserved.
In this dissertation, we focus on several aspects of models that aim to predict performance of a face recognition system. Performance prediction models are commonly based on the following two types of performance predictor features: a) image quality features; and b) features derived solely from
Computing grids promise to be a very efficacious, economical, and scalable means of image identification. In this paper, we propose a grid-based face recognition approach employing a general template matching method to solve the time-consuming face recognition problem. A new approach has been employed in which a grid was prepared for a specific individual over his photograph using Adobe Photoshop CS5. The background was then removed, and the grid prepared by merging layers was used as a template for image matching or comparison. This approach is computationally efficient, has high recognition rates, and is able to identify a person with minimal effort and in a short time, even from photographs taken at different magnifications and from different distances.
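Template matching of the kind described can be sketched as brute-force normalized cross-correlation; this is a generic implementation, not the authors' grid-specific or grid-computing pipeline:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_template(image, template):
    """Slide the template over the image; return (best score, top-left
    position). Brute force, fine for small gallery images."""
    th, tw = template.shape
    best = (-2.0, (0, 0))
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            score = ncc(image[y:y + th, x:x + tw], template)
            if score > best[0]:
                best = (score, (y, x))
    return best
```

NCC is invariant to affine brightness changes of the patch, which is why it tolerates the magnification and distance variations the abstract mentions better than raw pixel differences would.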
Rujirakul, Kanokmon; So-In, Chakchai; Arnonkijpanich, Banchar
Principal component analysis (PCA) has traditionally been used as one of the feature extraction techniques in face recognition systems, yielding high accuracy while requiring a small number of features. However, the covariance matrix and eigenvalue decomposition stages cause high computational complexity, especially for a large database. Thus, this research presents an alternative approach utilizing an Expectation-Maximization algorithm to reduce the determinant matrix manipulation, thereby reducing those stages' complexity. To improve the computational time, a novel parallel architecture was employed to exploit the benefits of parallelized matrix computation during the feature extraction and classification stages, including parallel preprocessing and their combinations, in a so-called Parallel Expectation-Maximization PCA architecture. Compared to traditional PCA and its derivatives, the results indicate lower complexity with an insignificant difference in recognition precision, leading to high-speed face recognition systems with speed-ups of over nine and three times relative to PCA and Parallel PCA, respectively.
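An EM-style PCA that never forms the full covariance matrix (the bottleneck the abstract targets) can be sketched following Roweis' EM algorithm for PCA; the parallelization layer itself is omitted, and the iteration count is illustrative:

```python
import numpy as np

def em_pca(X, k, iters=20):
    """Roweis-style EM for PCA on (d, n) column data X (assumed centered).
    Each iteration costs O(dnk) rather than the O(d^2 n) covariance build.
    Returns a (d, k) orthonormal basis for the principal subspace."""
    rng = np.random.default_rng(0)
    W = rng.standard_normal((X.shape[0], k))
    for _ in range(iters):
        # E-step: latent coordinates given the current basis
        Z = np.linalg.solve(W.T @ W, W.T @ X)
        # M-step: re-fit the basis to the data given the coordinates
        W = X @ Z.T @ np.linalg.inv(Z @ Z.T)
    Q, _ = np.linalg.qr(W)  # orthonormalize the learned span
    return Q
```

Both the E- and M-steps are dense matrix products, which is exactly the structure a parallel architecture of the kind described can distribute across workers.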
Nemrodov, Dan; Niemeier, Matthias; Mok, Jenkin Ngo Yin; Nestor, Adrian
An extensive body of work documents the time course of neural face processing in the human visual cortex. However, the majority of this work has focused on specific temporal landmarks, such as the N170 and N250 components, derived through univariate analyses of EEG data. Here, we take on a broader evaluation of ERP signals related to individual face recognition as we attempt to move beyond the leading theoretical and methodological framework through the application of pattern analysis to ERP data. Specifically, we investigate the spatiotemporal profile of identity recognition across variation in emotional expression. To this end, we apply pattern classification to ERP signals both in time, for any single electrode, and in space, across multiple electrodes. Our results confirm the significance of traditional ERP components in face processing. At the same time though, they support the idea that the temporal profile of face recognition is incompletely described by such components. First, we show that signals associated with different facial identities can be discriminated from each other outside the scope of these components, as early as 70 ms following stimulus presentation. Next, electrodes associated with traditional ERP components as well as, critically, those not associated with such components are shown to contribute information to stimulus discriminability. And last, the levels of ERP-based pattern discrimination are found to correlate with recognition accuracy across subjects, confirming the relevance of these methods for bridging brain and behavior data. Altogether, the current results shed new light on the fine-grained time course of neural face processing and showcase the value of novel methods for pattern analysis for investigating fundamental aspects of visual recognition. Copyright © 2016 Elsevier Inc. All rights reserved.
Maccari, Lisa; Martella, Diana; Marotta, Andrea; Sebastiani, Mara; Banaj, Nerisa; Fuentes, Luis J; Casagrande, Maria
Short-term sleep deprivation, or extended wakefulness, adversely affects cognitive functions and behavior. However, scarce research has addressed the effects of sleep deprivation (SD) on emotional processing. In this study, we investigated the impact of reduced vigilance due to moderate sleep deprivation on the ability to recognize emotional expressions of faces and the emotional content of words. Participants remained awake for 24 h and performed the tasks in two sessions, one in which they were not affected by sleep loss (baseline; BSL) and the other in which they were affected by SD, according to a counterbalanced sequence. Tasks were carried out twice, at 10:00 and 4:00 am, or at 12:00 and 6:00 am. In both tasks, participants had to respond to the emotional valence of the target stimulus: negative, positive, or neutral. The results showed that in the word task, sleep deprivation impaired recognition irrespective of the emotional valence of the words. However, sleep deprivation impaired recognition of emotional face expressions mainly when the faces showed a neutral expression. Emotional face expressions were less affected by the sleep loss, and positive faces were more resistant than negative faces to the detrimental effect of sleep deprivation. The differential effects of sleep deprivation on recognition of the different emotional stimuli indicate that emotional facial expressions are stronger emotional stimuli than emotion-laden words. This dissociation may be attributed to the more automatic sensory encoding of emotional facial content.
Steffens, Melanie C; Landmann, Sören; Mecklenbräuker, Silvia
Research participants' sexual orientation is not consistently taken into account in experimental psychological research. We argue that it should be in any research related to participant or target gender. Corroborating this argument, an example study is presented on the gender bias in face recognition, the finding that women correctly recognize more female than male faces. In contrast, findings with male participants have been inconclusive. An online experiment (N = 1,147) was carried out, on purpose over-sampling lesbian and gay participants. Findings demonstrate that the pro-female gender bias in face recognition is modified by male participants' sexual orientation. Heterosexual women and lesbians as well as heterosexual men showed a pro-female gender bias in face recognition, whereas gay men showed a pro-male gender bias, consistent with the explanation that differences in face expertise develop congruent with interests. These results contribute to the growing evidence that participant sexual orientation can be used to distinguish between alternative theoretical explanations of given gender-correlated patterns of findings.
Konishi, Yukihiko; Okubo, Kensuke; Kato, Ikuko; Ijichi, Sonoko; Nishida, Tomoko; Kusaka, Takashi; Isobe, Kenichi; Itoh, Susumu; Kato, Masaharu; Konishi, Yukuo
The purpose of this study was to examine developmental changes in visuocognitive function, particularly face recognition, in early infancy. In this study, we measured eye movements in healthy infants in a preferential-gaze paradigm, in particular eye movements between two face stimuli. We used an eye tracker system (Tobii1750, Tobii Technologies, Sweden) to measure eye movements in infants. Subjects were 17 3-month-old infants and 16 4-month-old infants. The subjects viewed two types of face stimuli (an upright face and a scrambled face) at the same time, and we measured their visual behavior (preference/looking/eye movement). Our results showed that 4-month-old infants looked at the upright face longer than 3-month-old infants did, and exploratory behavior comparing the two face stimuli significantly increased. In this study, 4-month-old infants showed a preference towards an upright face. The number of eye movements between the two face stimuli significantly increased in 4-month-old infants. These results suggest that eye movements may be an important index of face cognitive function during early infancy. Copyright © 2012 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.
Plurality voting is widely employed as a combination strategy in pattern recognition. Sparse representation based classification, a recently proposed technique, codes the query image as a sparse linear combination of the entire set of training images and classifies the query sample class by class, exploiting the class representation error. In this paper, an improved face recognition approach using sparse representation and plurality voting based on binary bit-plane images is proposed. After histogram equalization, gray images are decomposed into eight bit-plane images, and sparse representation based classification is applied separately to the five bit-plane images that carry the most discriminative information. Finally, the true identity of the query image is determined by voting over the five identities thus obtained. Experimental results show that the proposed approach is preferable both in recognition accuracy and in recognition speed.
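The bit-plane decomposition and the final plurality vote are straightforward to sketch; the per-plane sparse-representation classifier is omitted here, so the voting step simply takes per-plane identity labels as given:

```python
import numpy as np
from collections import Counter

def bit_planes(img):
    """Decompose an 8-bit image into its eight binary bit-plane images
    (index 0 = least significant bit, index 7 = most significant)."""
    return [(img >> b) & 1 for b in range(8)]

def plurality_vote(labels):
    # final identity = label predicted by the most per-plane classifiers
    return Counter(labels).most_common(1)[0][0]
```

In the described pipeline, five of the eight planes (the more significant, hence more discriminative ones) would each be classified by SRC, and `plurality_vote` merges their five labels into the final decision.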
João C. Monteiro
Full Text Available Humans perform and rely on face recognition routinely and effortlessly throughout their daily lives. Multiple works in recent years have sought to replicate this process in a robust and automatic way. However, it is known that the performance of face recognition algorithms is severely compromised in non-ideal image acquisition scenarios. In an attempt to deal with conditions such as occlusion and heterogeneous illumination, we propose a new approach motivated by the global precedence hypothesis of the human brain's cognitive mechanisms of perception. An automatic modeling of SIFT keypoint descriptors using a Gaussian mixture model (GMM)-based universal background model method is proposed. A decision is then made in an innovative hierarchical sense, with holistic information taking precedence over a more detailed local analysis. The algorithm was tested on the ORL, AR, and Extended Yale B face databases and presented state-of-the-art performance for a variety of experimental setups.
Full Text Available In recent years, nonnegative matrix factorization (NMF) methods for reduced image data representation have attracted the attention of the computer vision community. These methods are considered a convenient part-based representation of image data for recognition tasks with occluded objects. A novel modification for NMF recognition tasks is proposed which utilizes the matrix sparseness control introduced by Hoyer. We have analyzed the influence of sparseness on recognition rates (RRs) for various dimensions of subspaces generated for two image databases: the ORL face database and the USPS handwritten digit database. We have studied the behavior of four types of distances between a projected unknown image object and feature vectors in NMF subspaces generated for training data; one of these metrics is itself a novel proposal. In the recognition phase, partial occlusions in the test images have been modeled by placing two randomly sized, randomly positioned black rectangles into each test image.
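As an illustration of the part-based NMF representation the abstract refers to, here is a minimal numpy sketch of the classic Lee-Seung multiplicative updates; Hoyer's explicit sparseness control, which the paper adds, is not included.

```python
import numpy as np

def nmf(V, r, iters=200, seed=0, eps=1e-9):
    """Basic NMF V ~= W @ H via Lee-Seung multiplicative updates
    (Frobenius-norm objective). W holds nonnegative basis "parts",
    H the nonnegative coefficients."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update coefficients
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update basis
    return W, H

# Demo on random nonnegative data: a rank-5 part-based approximation.
V = np.random.default_rng(0).random((20, 10))
W, H = nmf(V, 5)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Both factors stay nonnegative by construction, which is what makes the resulting subspaces behave as additive "parts" under occlusion.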
Anzures, Gizelle; Kelly, David J.; Pascalis, Olivier; Quinn, Paul C.; Slater, Alan M.; de Viviés, Xavier; Lee, Kang
We used a matching-to-sample task and manipulated facial pose and feature composition to examine the other-race effect (ORE) in face identity recognition between 5 and 10 years of age. Overall, the present findings provide a genuine measure of own- and other-race face identity recognition in children that is independent of photographic and image…
Susyanto, N.; Veldhuis, Raymond N.J.; Spreeuwers, Lieuwe Jan; Klaassen, C.A.J.
We propose a new method for combining multi-algorithm score-based face recognition systems, which we call the two-step calibration method. Typically, algorithms for face recognition systems produce dependent scores. The two-step method is based on parametric copulas to handle this dependence. Its
Wright, Barry; Clarke, Natalie; Jordan, Jo; Young, Andrew W.; Clarke, Paula; Miles, Jeremy; Nation, Kate; Clarke, Leesa; Williams, Christine
We compared young people with high-functioning autism spectrum disorders (ASDs) with age, sex and IQ matched controls on emotion recognition of faces and pictorial context. Each participant completed two tests of emotion recognition. The first used Ekman series faces. The second used facial expressions in visual context. A control task involved…
Treese, Anne-Cecile; Johansson, Mikael; Lindgren, Magnus
The emotional salience of faces has previously been shown to induce memory distortions in recognition memory tasks. This event-related potential (ERP) study used repeated runs of a continuous recognition task with emotional and neutral faces to investigate emotion-induced memory distortions. In the second and third runs, participants made more…
Full Text Available Objective: Familiarity is a subjective sensation that contributes to person recognition. This process is described as an emotion-based memory trace of previous meetings and could be disrupted in schizophrenia. Consequently, familiarity disorders could be involved in the impaired social interactions observed in patients with schizophrenia. Previous studies have primarily focused on famous people recognition. Our aim was to identify underlying features, such as emotional disturbances, that may contribute to familiarity disorders in schizophrenia. We hypothesized that patients with familiarity disorders would exhibit a lack of familiarity that could be detected by a flattened skin conductance response (SCR). Method: The SCR was recorded to test the hypothesis that emotional reactivity disturbances occur in patients with schizophrenia during the categorization of specific familiar, famous, and unknown faces as male or female. Forty-eight subjects were divided into the following three matched groups of 16 subjects each: control subjects, schizophrenic patients with familiarity disorders, and schizophrenic patients without familiarity disorders. Results: Emotional arousal is reflected by the skin conductance measures. The control subjects and the patients without familiarity disorders experienced a differential emotional response to the specific familiar faces compared with the unknown faces. Nevertheless, overall, the schizophrenic patients without familiarity disorders showed a weaker response across conditions compared with the control subjects. In contrast, the patients with familiarity disorders did not show any significant differences in their emotional response to the faces, regardless of the condition. Conclusion: Only patients with familiarity disorders fail to exhibit a difference in emotional response between familiar and non-familiar faces. These patients likely process familiar faces emotionally in the same way as unknown faces. Hence, the lower
He, Ran; Zheng, Wei-Shi; Hu, Bao-Gang; Kong, Xiang-Wei
This paper proposes a novel nonnegative sparse representation approach, called two-stage sparse representation (TSR), for robust face recognition on a large-scale database. Based on the divide-and-conquer strategy, TSR decomposes the procedure of robust face recognition into an outlier detection stage and a recognition stage. In the first stage, we propose a general multisubspace framework to learn a robust metric in which noise and outliers in image pixels are detected. Potential loss functions, including L1, L2,1, and correntropy, are studied. In the second stage, based on the learned metric and collaborative representation, we propose an efficient nonnegative sparse representation algorithm to find an approximate solution of the sparse representation. According to the L1-ball theory in sparse representation, the approximate solution is unique and can be optimized efficiently. Then a filtering strategy is developed to avoid computing the sparse representation on the whole large-scale dataset. Moreover, theoretical analysis gives the necessary condition for the nonnegative least squares technique to find a sparse solution. Extensive experiments on several public databases have demonstrated that the proposed TSR approach, in general, achieves better classification accuracy than state-of-the-art sparse representation methods. More importantly, a significant reduction of computational cost is reached in comparison with the sparse representation classifier; this enables TSR to be more suitable for robust face recognition on a large-scale dataset.
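The nonnegative least squares step underlying the second stage can be illustrated with SciPy's `nnls`; this is only the generic NNLS core, not the full TSR pipeline, and the dictionary below is synthetic.

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic "training faces" as dictionary columns, and a query that is an
# exact nonnegative combination of atoms 2 and 5 (sizes are illustrative).
rng = np.random.default_rng(1)
D = rng.random((50, 8))                   # 8 training samples, 50-dim features
y = 0.7 * D[:, 2] + 0.3 * D[:, 5]         # query image

coef, residual = nnls(D, y)               # nonnegative least squares fit
```

When the query truly lies in the nonnegative span of a few atoms, the NNLS coefficients concentrate on those atoms, which is the sparsity behaviour the paper's analysis characterizes.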
Full Text Available Linlin Yang, Xiaochuan Zhao, Lan Wang, Lulu Yu, Mei Song, Xueyi Wang Department of Mental Health, The First Hospital of Hebei Medical University, Hebei Medical University Institute of Mental Health, Shijiazhuang, People’s Republic of China Abstract: Amnestic mild cognitive impairment (MCI) has been conceptualized as a transitional stage between healthy aging and Alzheimer’s disease. Thus, understanding the emotional face recognition deficit in patients with amnestic MCI could be useful in determining the progression of amnestic MCI. The purpose of this study was to investigate the features of emotional face processing in amnestic MCI by using event-related potentials (ERPs). Patients with amnestic MCI and healthy controls performed a face recognition task, giving old/new responses to previously studied and novel faces with different emotional messages as the stimulus material. Using the learning-recognition paradigm, the experiments were divided into two steps, i.e., a learning phase and a test phase. ERPs were analyzed on electroencephalographic recordings. The behavior data indicated high emotion classification accuracy for patients with amnestic MCI and for healthy controls: the mean percentage of correct classifications was 81.19% for patients with amnestic MCI and 96.46% for controls. Our ERP data suggest that patients with amnestic MCI were still able to undertake personalizing processing for negative faces, but not for neutral or positive faces, in the early frontal processing stage. In the early time window, no differences in the frontal old/new effect were found between patients with amnestic MCI and normal controls. However, in the late time window, the three types of stimuli did not elicit any old/new parietal effects in patients with amnestic MCI, suggesting their recollection was impaired. This impairment may be closely associated with amnestic MCI disease. We conclude from our data that face recognition processing and emotional memory is
Fatma Zohra Chelali
Full Text Available Face recognition has received great attention from many researchers in computer vision, pattern recognition, and human-machine interfaces in recent years. Designing a face recognition system is a complex task due to the wide variety of illumination, pose, and facial expression. Many approaches have been developed to find an optimal space in which face feature descriptors are well distinguished and separated. Face representation using Gabor features and the discrete wavelet transform has attracted considerable attention in computer vision and image processing. We describe in this paper a face recognition system using artificial neural networks, namely the multilayer perceptron (MLP) and radial basis function (RBF) networks, where Gabor- and discrete-wavelet-based feature extraction methods are proposed for extracting features from facial images, using two facial databases: ORL and Computer Vision. A good recognition rate was obtained using Gabor and DWT parameterization with the MLP classifier applied to the Computer Vision dataset.
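A minimal, numpy-only sketch of Gabor-based feature extraction of the kind described; the kernel size, filter parameters, and the mean/std pooling are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def gabor_kernel(size=15, sigma=3.0, theta=0.0, lam=6.0, gamma=0.5, psi=0.0):
    """Real-valued Gabor kernel (wavelength lam, orientation theta,
    spatial aspect ratio gamma, phase offset psi)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2)) \
        * np.cos(2 * np.pi * xr / lam + psi)

def gabor_features(img, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Mean/std of 'valid' filter responses over a small orientation bank,
    concatenated into one feature vector."""
    feats = []
    for t in thetas:
        k = gabor_kernel(theta=t)
        windows = sliding_window_view(img, k.shape)   # (H-14, W-14, 15, 15)
        resp = np.einsum('ijkl,kl->ij', windows, k)   # correlation per pixel
        feats += [resp.mean(), resp.std()]
    return np.array(feats)
```

The resulting feature vector would then be fed to a classifier such as an MLP or RBF network.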
Full Text Available In many real-world applications, such as smart card solutions, law enforcement, surveillance, and access control, the limited training sample size is the most fundamental problem. By making use of the low-rank structural information of the reconstructed error image, the so-called nuclear norm-based matrix regression has been demonstrated to be effective for robust face recognition with continuous occlusions. However, the recognition performance of nuclear norm-based matrix regression degrades greatly in the face of the small sample size problem. An alternative solution to tackle this problem is performing matrix regression on each patch and then integrating the outputs from all patches. However, it is difficult to set an optimal patch size across different databases. To fully utilize the complementary information from different patch scales for the final decision, we propose a multi-scale patch-based matrix regression scheme in which the ensemble of multi-scale outputs can be achieved optimally. Extensive experiments on benchmark face databases validate the effectiveness and robustness of our method, which outperforms several state-of-the-art patch-based face recognition algorithms.
Donoso, Ramiro; San Martín, Cesar; Hermosilla, Gabriel
In this paper, we introduce a new concept in the thermal face recognition area: isothermal features. These consist of a feature vector built from a thermal signature that depends on the emission of the person's skin and its temperature. A thermal signature is the appearance of the face to infrared sensors and is unique to each person. The infrared face is decomposed into isothermal regions that present the thermal features of the face. Each isothermal region is modeled as circles whose centers are pixels of the image, and the feature vector is composed of the maximum radius of the circles in each isothermal region. This feature vector corresponds to the thermal signature of a person. The face recognition process is built using a modification of the Expectation Maximization (EM) algorithm in conjunction with a proposed probabilistic index for the classification process. Results obtained using an infrared database are compared with typical state-of-the-art techniques and show better performance, especially in uncontrolled acquisition scenarios.
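The decomposition of a thermal image into isothermal regions can be sketched by binning temperatures; the synthetic temperature range and the 1-degree band width below are assumptions for illustration, and the paper's radius-based signature itself is not implemented.

```python
import numpy as np

# Synthetic "thermal face": skin temperatures between 30 and 38 degrees C.
thermal = 30 + 8 * np.random.default_rng(0).random((64, 64))

edges = np.linspace(30, 38, 9)             # boundaries of 8 isothermal bands
regions = np.digitize(thermal, edges) - 1  # per-pixel band label in 0..7
```

Each labelled band would then yield the per-region maximum circle radius that forms the thermal signature.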
Full Text Available Emotion recognition problems are frequently reported in individuals with an autism spectrum disorder (ASD). However, this research area is characterized by inconsistent findings, with atypical emotion processing strategies possibly contributing to existing contradictions. In addition, an attenuated saliency of the eye region is often demonstrated in ASD during face identity processing. We wanted to compare reliance on mouth versus eyes information in children with and without ASD, using hybrid facial expressions. A group of six- to eight-year-old boys with ASD and an age- and intelligence-matched typically developing (TD) group without intellectual disability performed an emotion labelling task with hybrid facial expressions. Five static expressions were used: one neutral expression and four emotional expressions, namely anger, fear, happiness, and sadness. Hybrid faces were created, consisting of an emotional face half (upper or lower face region) with the other face half showing a neutral expression. Results showed no emotion recognition problem in ASD. Moreover, we provided evidence for the existence of top- and bottom-emotions in children: correct identification of expressions mainly depends on information in the eyes (so-called top-emotions: happiness) or in the mouth region (so-called bottom-emotions: sadness, anger, and fear). No stronger reliance on mouth information was found in children with ASD.
Full Text Available Automated person recognition (APR) based on biometric signals addresses the process of automatically recognizing a person according to his or her physiological traits (face, voice, iris, fingerprint, ear shape, body odor, electroencephalogram (EEG), electrocardiogram, or hand geometry) or behavioural patterns (gait, signature, hand-grip, lip movement). The paper aims at briefly presenting the current challenges for two specific non-cooperative biometric approaches, namely face and gait biometrics, as well as approaches that consider a combination of the two in the attempt to build a more robust system for accurate APR in the context of surveillance applications. Open problems on both sides are also pointed out.
Brace, N A; Hole, G J; Kemp, R I; Pike, G E; Van Duuren, M; Norgate, L
A novel child-oriented procedure was used to examine the face-recognition abilities of children as young as 2 years. A recognition task was embedded in a picture book containing a story about two boys and a witch. The story and the task were designed to be entertaining for children of a wide age range. In eight trials, the children were asked to pick out one of the boys from amongst eight distractors as quickly as possible. Response-time data to both upright and inverted conditions were analysed. The results revealed that children aged 6 years onwards showed the classic inversion effect. By contrast, the youngest children, aged 2 to 4 years, were faster at recognising the target face in the inverted condition than in the upright condition. Several possible explanations for this 'inverted inversion effect' are discussed.
Full Text Available This article deals with a recognition system using an algorithm based on the Principal Component Analysis (PCA) technique. The recognition system consists only of a PC and an integrated video camera. The algorithm is developed in the MATLAB language and calculates the eigenfaces, considered as features of the face. The PCA technique is based on matching the facial test image against the training prototype vectors: the matching score is computed between their coefficient vectors, and the highest match yields the recognition. The results of the algorithm based on the PCA technique are very good, even if the person looks at the video camera from one side.
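The eigenface computation and coefficient-vector matching described above can be sketched with an SVD; this is a minimal numpy illustration, not the article's MATLAB implementation, and the toy data sizes are assumptions.

```python
import numpy as np

def eigenfaces(X, k):
    """Top-k eigenfaces of a data matrix X (one flattened face per row),
    via SVD of the mean-centred data."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]                  # each row of Vt[:k] is an eigenface

def project(x, mean, basis):
    """Coefficient vector of one face in eigenface space."""
    return basis @ (x - mean)

def match(x, gallery, mean, basis):
    """Index of the training face whose coefficient vector is closest."""
    c = project(x, mean, basis)
    return int(np.argmin(np.linalg.norm(gallery - c, axis=1)))

# Toy gallery of three random "faces"; with k = 2 the three centred samples
# are represented exactly, so each face matches itself.
X = np.random.default_rng(0).random((3, 100))
mean, basis = eigenfaces(X, 2)
gallery = np.array([project(x, mean, basis) for x in X])
```

A real system would replace the random rows with flattened training face images and match the coefficient vector of each captured frame against the gallery.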
Full Text Available Objective: Children with autism spectrum disorders (ASDs) have great problems in social interactions, including face recognition. Many studies report deficits in face memory in individuals with ASDs; on the other hand, some studies indicate that this kind of memory is intact in this group. In the present study, delayed face recognition was investigated in children and adolescents with ASDs compared to an age- and sex-matched typically developing group. Methods: In two sessions, the Benton Facial Recognition Test was administered to 15 children and adolescents with ASDs (high-functioning autism and Asperger syndrome) and to 15 normal participants, ages 8-17 years. In the first condition, the long form of the Benton Facial Recognition Test was used without any delay. In the second session, the test was administered with a 15-second delay after one week. The reaction times and correct responses were measured in both conditions as the dependent variables. Results: Comparison of the reaction times and correct responses in the two groups revealed no significant difference in the delayed and non-delayed conditions. Furthermore, no significant difference was observed between the two conditions in ASD patients when comparing the variables. Although a significant correlation (p<0.05) was found between the delayed and non-delayed conditions, it was not significant in the normal group. Moreover, data analysis revealed no significant difference between the two groups in the two conditions when IQ was considered as a covariate. Conclusion: In this study, it was found that the ability to recognize faces in simultaneous and delayed conditions is similar between adolescents with ASDs and their normal counterparts.
Farokhi, Sajad; Shamsuddin, S.M.; Sheikh, U.U.; Flusser, Jan; Khansari, M.; Jafari-Khouzani, K.
Roč. 31, č. 1 (2014), s. 13-27 ISSN 1051-2004 R&D Projects: GA ČR GAP103/11/1552 Institutional support: RVO:67985556 Keywords : Zernike moments * Undecimated discrete wavelet transform * Decision fusion * Near infrared * Face recognition Subject RIV: JD - Computer Applications, Robotics Impact factor: 1.256, year: 2014 http://library.utia.cas.cz/separaty/2014/ZOI/flusser-0428536.pdf
RFID and biometric time attendance systems have been used to take employee attendance. However, they have the disadvantage of high cost when they need to be deployed in several places at the same time. An alternative solution is an Android application utilizing QR-Code, Face Recognition, and Google Map Location technologies, implemented on a smartphone, to take employee attendance. A test of this system was conducted on one of the private colouring studios i...
Jones, Scott P; Dwyer, Dominic M; Lewis, Michael B
The ability to recognize an unfamiliar individual on the basis of prior exposure to a photograph is notoriously poor and prone to errors, but recognition accuracy is improved when multiple photographs are available. In applied situations, when only limited real images are available (e.g., from a mugshot or CCTV image), the generation of new images might provide a technological prosthesis for otherwise fallible human recognition. We report two experiments examining the effects of providing computer-generated additional views of a target face. In Experiment 1, provision of computer-generated views supported better target face recognition than exposure to the target image alone and equivalent performance to that for exposure of multiple photograph views. Experiment 2 replicated the advantage of providing generated views, but also indicated an advantage for multiple viewings of the single target photograph. These results strengthen the claim that identifying a target face can be improved by providing multiple synthesized views based on a single target image. In addition, our results suggest that the degree of advantage provided by synthesized views may be affected by the quality of synthesized material.
Gerlach, Christian; Klargaard, Solja K.; Starrfelt, Randi
There is an ongoing debate about whether face recognition and object recognition constitute separate domains. Clarification of this issue can have important theoretical implications, as face recognition is often used as a prime example of domain-specificity in mind and brain. An important source of input to this debate comes from studies of individuals with developmental prosopagnosia, suggesting that face recognition can be selectively impaired. We put the selectivity hypothesis to test by assessing the performance of 10 individuals with developmental prosopagnosia on demanding tests of visual object processing involving both regular and degraded drawings. None of the individuals exhibited a clear dissociation between face and object recognition, and as a group they were significantly more affected by degradation of objects than control participants. Importantly, we also find positive...
Lopis, D; Baltazar, M; Geronikola, N; Beaucousin, V; Conty, L
Perceiving a direct gaze (i.e. another individual's gaze directed at the observer, leading to eye contact) positively influences a wide range of cognitive processes. In particular, direct gaze perception is known to stimulate memory for others' faces and to increase their likeability. Alzheimer's disease (AD) results in social withdrawal and cognitive decline; however, patients show preserved eye contact behaviours until the middle stage of the disease. The eye contact effects could thus be preserved in AD and be used to compensate for cognitive and social deficits. Yet, it is unknown whether these effects are preserved in normal ageing. The aim of this study was to address whether the positive effects of eye contact on memory for faces and likeability of others are preserved in healthy older adults and in patients with early to mild AD. Nineteen AD patients, 20 older adults and 20 young adults participated in our study. Participants were first presented with faces displaying either direct or averted gaze and rated each face's degree of likeability. They were then asked to identify the faces they had previously seen during a surprise recognition test. Results showed that the effect of eye contact on others' likeability was preserved in normal ageing and in AD. By contrast, an effect of eye contact on memory for faces seems to emerge only in young participants, suggesting that this effect declines with ageing. Interestingly, however, AD patients show a positive correlation between ratings of likeability and recognition scores, suggesting that they implicitly allocated their encoding resources to the most likeable faces. These results open a new way for a "compensating" therapy in AD.
The number of cameras increases rapidly in squares, shopping centers, railway stations and airport halls. There are hundreds of cameras in the city center of Amsterdam. This is still modest compared to the tens of thousands of cameras in London, where citizens are expected to be filmed by more than
Jarraya, Islem; Ouarda, Wael; Alimi, Adel M.
To control the state of horses in the barn, breeders need a monitoring system with a surveillance camera that can identify and distinguish between horses. We previously proposed a method of horse identification at a distance using the frontal facial biometric modality. Due to changes of view, face recognition becomes more difficult. In this paper, the number of images in our THoDBRL'2015 database (Tunisian Horses DataBase of Regim Lab) is augmented by adding images of other views; thus, we use front, right-profile, and left-profile views of the face. Moreover, we suggest an approach for multiview face recognition. First, we propose to use the Gabor filter for face characterization. Next, due to the augmented number of images and the large number of Gabor features, we propose to test a deep neural network with an auto-encoder to obtain more pertinent features and to reduce the size of the feature vector. Finally, we evaluate the proposed approach on our THoDBRL'2015 database, using a linear SVM for classification.
Full Text Available Automated comparison of faces in photographs is a well-established discipline. The main aim of this paper is to describe an approach whereby face recognition can be used to suggest new contacts. New contact suggestion is a common technique used across all major social networks. Our approach uses a freely available face comparison service called "Betaface" together with our automated processing of the user's Facebook profile. The research's main points of interest are the comparison of friends' facial images within a social network itself, how to process such a great amount of photos, and what additional sources of data should be used. In this approach we applied Betaface within our automated processing of the Facebook social network, and the Flickr social network was used for the additional data. The results and their quality are discussed at the end.
Li, Yudu; Ma, Sen; Hu, Zhongze; Chen, Jiansheng; Su, Guangda; Dou, Weibei
Research on brain-machine interfaces (BMI) has developed very fast in recent years. Numerous feature extraction methods have been successfully applied to electroencephalogram (EEG) classification in various experiments. However, little effort has been spent on EEG-based BMI systems regarding the cognition of familiar human faces. In this work, we have implemented and compared the classification performance of four common feature extraction methods, namely common spatial patterns, principal component analysis, wavelet transform, and interval features. High-resolution EEG signals were collected from fifteen healthy subjects stimulated by equal numbers of familiar and novel faces. Principal component analysis outperforms the other methods, with average classification accuracy reaching 94.2%, leading to possible real-life applications. Our findings thereby may contribute to BMI systems for face recognition.
Busey, T A; Tunnicliff, J L
The false recognition of distractor faces created from combinations of studied faces has been attributed to the creation of novel traces in memory, although familiarity accounts are also plausible. In 3 experiments, participants studied parent faces and then were tested with a distractor that was created by morphing 2 parents. These produced high false-alarm rates but no effects of a temporal separation manipulation. In a forced-choice version, participants chose the distractor over the parents. R. M. Nosofsky's (1986) Generalized Context Model and variants could account for some but not all aspects of the data. A new model, SimSample, can account for the effects of typicality and distinctiveness, but not for the morph false alarms unless explicit prototypes are included. The conclusions are consistent with an account of memory in which novel traces are created in memory; alternative explanations are also explored.
Alghamdi, Masheal M.
Face recognition is a challenging problem in computer vision. Difficulties such as slight differences between similar faces of different people, changes in facial expression, light and illumination conditions, and pose variations add extra complications to face recognition research. Many algorithms are devoted to solving the face recognition problem, among which the family of nonnegative matrix factorization (NMF) algorithms has been widely used as a compact data representation method, and different versions of NMF have been proposed. Wang et al. proposed the graph-based semi-supervised nonnegative learning (S2N2L) algorithm, which uses labeled data in constructing intrinsic and penalty graphs to enforce separability of labeled data, leading to greater discriminating power. Moreover, the geometrical structure of labeled and unlabeled data is preserved under the smoothness assumption, by creating a similarity graph that conserves the neighboring information for all labeled and unlabeled data. However, S2N2L is sensitive to light changes, illumination, and partial occlusion. In this thesis, we propose a Semi-Supervised Half-Quadratic NMF (SSHQNMF) algorithm that combines the benefits of S2N2L and the robust NMF by half-quadratic minimization (HQNMF) algorithm. Our algorithm improves upon S2N2L by replacing the Frobenius norm with a robust M-estimator loss function. A multiplicative update solution for our SSHQNMF algorithm is derived using half-quadratic (HQ) theory. Extensive experiments on the ORL, Yale-A, and a subset of the PIE data sets, covering nine M-estimator loss functions for both the SSHQNMF and HQNMF algorithms, are investigated and compared with several state-of-the-art supervised and unsupervised algorithms, along with the original S2N2L algorithm, in the context of classification, clustering, and robustness against partial occlusion. The proposed algorithm outperformed the other algorithms. Furthermore, SSHQNMF with Maximum Correntropy
Lezama, Jose; Qiu, Qiang; Sapiro, Guillermo
Surveillance cameras today often capture NIR (near infrared) images in low-light environments. However, most face datasets accessible for training and verification are only collected in the VIS (visible light) spectrum. It remains a challenging problem to match NIR to VIS face images due to the different light spectrum. Recently, breakthroughs have been made for VIS face recognition by applying deep learning on a huge amount of labeled VIS face samples. The same deep learning approach cannot ...
Full Text Available Face recognition is popular in video surveillance, social networks, and criminal identification nowadays. The performance of face recognition is affected by variations in illumination, pose, aging, and partial occlusion of the face by hats, scarves, glasses, etc. Illumination variation is still a challenging problem in face recognition. The aim here is to compare various illumination normalization techniques: log transformation, power-law transformation, histogram equalization, adaptive histogram equalization, contrast stretching, Retinex, multi-scale Retinex, difference of Gaussians, DCT, DCT normalization, DWT, Gradientface, self-quotient, multi-scale self-quotient, and homomorphic filtering. The proposed work consists of three steps. The first step is to preprocess the face image with the above illumination normalization techniques; the second step is to create the training and test databases from the preprocessed face images; and the third step is to recognize the face images using a fuzzy K-nearest-neighbor classifier. The face recognition accuracy of all preprocessing techniques is compared using the AR face database of color images.
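Histogram equalization, one of the normalization techniques listed, can be sketched as a lookup table built from the image's cumulative histogram; this is the textbook global variant, not the adaptive one.

```python
import numpy as np

def hist_equalize(img):
    """Global histogram equalization of an 8-bit grayscale image via a
    lookup table built from the cumulative histogram."""
    img = np.asarray(img, dtype=np.uint8)
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[img.min()]                 # cdf at the darkest used level
    denom = max(img.size - cdf_min, 1)
    lut = np.clip(np.round((cdf - cdf_min) / denom * 255), 0, 255)
    return lut.astype(np.uint8)[img]
```

Applied to a low-contrast face image, this stretches the used gray levels over the full 0-255 range before features are extracted.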
Full Text Available In recent decades, local pattern descriptors have achieved tremendous success in the fields of face recognition, pedestrian detection, and image texture analysis. This study presents a generic approach, called the filtered local pattern descriptor (FLPD), which expands the traditional local pattern descriptor (TLPD) by using multi-scale and multi-type filter banks. The FLPD encodes the local information of an image based on the convolutional sum of the sub-image blocks and the filter banks, instead of the original pixel values used in the TLPD. This design can effectively increase the diversity of TLPD feature extraction, thereby enhancing the ability of feature representation and its reliability. Two FLPD-based feature representation methods are proposed, for face images and for pedestrian images. To evaluate the performance of the proposed FLPD, extensive experiments on face recognition and infrared pedestrian detection are conducted using several benchmark image datasets. The experimental results illustrate that the FLPD has a significant advantage in the discrimination and stability of feature extraction, and is able to achieve satisfactory accuracy in comparison with state-of-the-art methods. It is demonstrated that the FLPD is a powerful and convenient extension of the TLPD by filter banks, and suitable to be implemented as the feature extraction stage of approaches that solve binary or multi-class image classification problems.
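A traditional local pattern descriptor of the kind the FLPD generalizes is the 3x3 local binary pattern; here is a minimal sketch, without the FLPD's filter banks.

```python
import numpy as np

def lbp_3x3(img):
    """Traditional 8-neighbour local binary pattern codes for the interior
    pixels of a grayscale image (the 'TLPD' the paper generalizes)."""
    img = np.asarray(img, dtype=np.int32)
    h, w = img.shape
    center = img[1:-1, 1:-1]
    # clockwise neighbour offsets starting at the top-left pixel
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (neighbour >= center).astype(np.int32) << bit
    return code
```

The FLPD would compute codes like these not on raw pixel values but on the responses of multi-scale, multi-type filter banks.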
Chen, Wei-Yu; Wu, Frank; Hu, Chung-Chiang
The rise of the Internet of Things has promoted the development of single-board computers: processor speeds and memory capacities have increased, more and more applications can now complete their computation on the board itself, and the sorted results can then be sent over the network to the cloud for further processing, so that the development board is no longer simply a data-acquisition device. This study uses an Asus Tinker Board with OpenCV installed for real-time face detection and capture; each captured face is sent to the Microsoft Cognitive Services cloud database for artificial-intelligence comparison, to determine the mood the face currently expresses and the name of the corresponding person, and finally text-to-speech is used to read out that name and complete the identification. The system was developed on the Asus Tinker Board, which uses an ARM-based CPU with high efficiency and low power consumption, together with improvements in the development board's memory and hardware performance.
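The board-side loop described above (detect locally, classify in the cloud, speak the result) can be sketched as pure control flow with the three stages injected as callables; all function and parameter names here are illustrative stand-ins, not the actual OpenCV or Microsoft Cognitive Services API.

```python
def recognize_and_speak(frame, detect_faces, query_cloud, speak):
    """Pipeline sketch of the board-side loop: detect faces locally,
    send each crop to a cloud service for identity and emotion, then
    read the recognized name aloud. The three callables are injected
    so the same logic can run against a real detector, a real cloud
    API and a TTS engine, or against test stubs."""
    results = []
    for box in detect_faces(frame):          # local, on-board step
        x, y, w, h = box
        crop = [row[x:x + w] for row in frame[y:y + h]]
        name, emotion = query_cloud(crop)    # remote, cloud step
        speak(name)                          # text-to-speech step
        results.append((name, emotion))
    return results
```

Separating the on-board detection from the cloud query mirrors the paper's point: the board is no longer just a capture device, only the cropped faces cross the network.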
Li, Heng; Su, Xiaofan; Wang, Jing; Kan, Han; Han, Tingting; Zeng, Yajie; Chai, Xinyu
Current retinal prostheses can only generate low-resolution visual percepts, constituted of a limited number of phosphenes elicited by an electrode array, with uncontrollable color and restricted grayscale. With this level of visual perception, prosthetic recipients can complete some simple visual tasks, but more complex tasks such as face identification and object recognition remain extremely difficult. It is therefore necessary to investigate and apply image processing strategies that optimize the recipients' visual perception. This study focuses on recognition of the object of interest under simulated prosthetic vision. We used a saliency segmentation method, based on a biologically plausible graph-based visual saliency model and a GrabCut-based self-adaptive iterative optimization framework, to automatically extract foreground objects. Building on this, two image processing strategies, Addition of Separate Pixelization and Background Pixel Shrink, were used to enhance the extracted foreground objects. i) Psychophysical experiments verified that, under simulated prosthetic vision, both strategies had marked advantages over Direct Pixelization in terms of recognition accuracy and efficiency. ii) We also found that recognition performance under the two strategies was tied to the segmentation results and was affected positively by paired, interrelated objects in the scene. The use of the saliency segmentation method and these image processing strategies can automatically extract and enhance foreground objects, and significantly improve object recognition performance for recipients implanted with a high-density implant. Copyright © 2017 Elsevier B.V. All rights reserved.
Full Text Available This paper proposes an audio-visual speech recognition method using lip information extracted from side-face images as an attempt to increase noise robustness in mobile environments. Our proposed method assumes that lip images can be captured using a small camera installed in a handset. Two different kinds of lip features, lip-contour geometric features and lip-motion velocity features, are used individually or jointly, in combination with audio features. Phoneme HMMs modeling the audio and visual features are built based on the multistream HMM technique. Experiments conducted using Japanese connected digit speech contaminated with white noise in various SNR conditions show effectiveness of the proposed method. Recognition accuracy is improved by using the visual information in all SNR conditions. These visual features were confirmed to be effective even when the audio HMM was adapted to noise by the MLLR method.
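The multistream HMM combination described above reduces, at the state-score level, to a weighted sum of per-stream log-likelihoods; the weight values and the SNR table below are illustrative assumptions, not figures from the paper.

```python
def stream_weight_for_snr(snr_db, table=((20, 0.9), (10, 0.7), (0, 0.5))):
    """Pick the audio-stream weight for the current SNR from a tuned
    table (values here are illustrative): the noisier the audio, the
    more weight shifts to the lip features."""
    for threshold, weight in table:
        if snr_db >= threshold:
            return weight
    return table[-1][1]  # fall back to the most visual-heavy setting

def multistream_loglik(audio_ll, visual_ll, audio_weight):
    """Multistream HMM state score: exponent-weighted combination of
    the per-stream log-likelihoods (a weighted sum in the log domain)."""
    return audio_weight * audio_ll + (1.0 - audio_weight) * visual_ll
```

This is why the visual features help in all SNR conditions: even a small visual weight adds information, and at low SNR the combination leans mostly on the lips.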
Jin, Jing; Wang, Yuanqing; Xu, Liujing; Cao, Liqun; Han, Lei; Zhou, Biye; Li, Minggao
A non-intrusive gesture recognition human-machine interaction system is proposed in this paper. In order to solve the hand positioning problem, which is a difficulty for current algorithms, face detection is used as a pre-processing step to narrow the search area and find the user's hand quickly and accurately. A Hidden Markov Model (HMM) is used for gesture recognition. A certain number of basic gesture units are trained as HMM models. At the same time, an improved 8-direction feature vector is proposed and used to quantify characteristics in order to improve detection accuracy. The proposed system can be applied in interaction equipment without special training for users, such as household interactive television.
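An 8-direction feature of the kind mentioned above quantizes the motion vector between consecutive tracked hand positions into one of eight chain codes, which then serve as the HMM's discrete observation symbols. The sketch below shows the standard quantization; the paper's improved variant is not specified, so this is a generic baseline.

```python
import math

def direction_code(dx, dy, bins=8):
    """Quantize a motion vector between consecutive hand positions
    into one of 8 direction codes (0 = east, counting
    counter-clockwise in 45-degree sectors)."""
    angle = math.atan2(dy, dx) % (2 * math.pi)
    sector = 2 * math.pi / bins
    return int((angle + sector / 2) // sector) % bins

def track_to_codes(points):
    """Turn a tracked hand trajectory into an HMM observation sequence."""
    return [direction_code(x2 - x1, y2 - y1)
            for (x1, y1), (x2, y2) in zip(points, points[1:])]
```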
Farokhi, S.; Shamsuddin, S. M.; Flusser, Jan; Sheikh, U. U.; Khansari, M.; Jafari-Khouzani, K.
Roč. 22, č. 1 (2013), s. 1-11 ISSN 1017-9909 R&D Projects: GA ČR GAP103/11/1552 Keywords : face recognition * infrared imaging * image moments Subject RIV: JD - Computer Applications, Robotics Impact factor: 0.850, year: 2013 http://library.utia.cas.cz/separaty/2013/ZOI/flusser-rotation and noise invariant near-infrared face recognition by means of zernike moments and spectral regression discriminant analysis.pdf
Hörhan, Markus; Eidenberger, Horst
In this work, we propose two improvements of the Gestalt Interest Points (GIP) algorithm for the recognition of faces of people who have undergone significant weight change. The basic assumption is that some interest points contribute more to the description of such objects than others. We assume that we can eliminate certain interest points to make the whole method more efficient while retaining our classification results. To find out which Gestalt interest points can be eliminated, we performed experiments concerning the contrast and orientation of face features. Furthermore, we investigated the robustness of GIP against image rotation. The experiments show that our method is rotationally invariant and, in this practically relevant forensic domain, outperforms state-of-the-art methods such as SIFT, SURF, ORB and FREAK.
Omara, Ibrahim; Xiao, Gang; Amrani, Moussa; Yan, Zifei; Zuo, Wangmeng
Recently, multimodal biometric systems have received considerable research interest in many applications, especially in the field of security. Multimodal systems can increase resistance to spoof attacks, provide more detail and flexibility, and lead to better performance and lower error rates. In this paper, we present a multimodal biometric system based on face and ear, and propose how to exploit deep features extracted from Convolutional Neural Networks (CNNs) on face and ear images to obtain more powerful discriminative features and a more robust representation. First, the deep features for face and ear images are extracted based on VGG-M Net. Second, the extracted deep features are fused using traditional concatenation and a Discriminant Correlation Analysis (DCA) algorithm. Third, a multiclass support vector machine is adopted for matching and classification. The experimental results show that the proposed multimodal system based on deep features is efficient and achieves a promising recognition rate of up to 100% using face and ear. In addition, the results indicate that fusion based on DCA is superior to traditional fusion.
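The baseline fusion mentioned above, serial concatenation of the two deep feature vectors, is simple enough to sketch directly; the L2 normalization step is a common convention assumed here (so neither modality dominates), and DCA fusion, which the paper finds superior, additionally learns transforms that maximize correlation between the paired feature sets and is not sketched.

```python
def concat_fusion(face_feat, ear_feat, normalize=True):
    """Serial (concatenation) fusion of a face and an ear feature
    vector. Each modality is L2-normalized first so the joint vector
    is not dominated by whichever network produces larger magnitudes."""
    def l2norm(v):
        n = sum(x * x for x in v) ** 0.5 or 1.0
        return [x / n for x in v]
    if normalize:
        face_feat, ear_feat = l2norm(face_feat), l2norm(ear_feat)
    return face_feat + ear_feat
```

The fused vector is what would then be handed to the multiclass SVM for matching.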
Jaeger, Antonio; Cox, Justin C; Dobbins, Ian G
Individuals' memory experiences typically covary with those of others around them, and on average, an item is more likely to be familiar if a companion recommends it as such. Although it would be ideal if observers could use the external recommendations of others as statistical priors during recognition decisions, it is currently unclear how or if they do so. Furthermore, understanding the sensitivity of recognition judgments to such external cues is critical for understanding memory conformity and eyewitness suggestibility phenomena. To address this we examined recognition accuracy and confidence following cues from an external source (e.g., "Likely Old") that forecast the likely status of upcoming memory probes. Three regularities emerged. First, hit and correct-rejection rates expectedly fell when participants were invalidly versus validly cued. Second, hit confidence was generally higher than correct-rejection confidence, regardless of cue validity. Finally, and most noteworthy, cue validity interacted with judgment confidence such that validity heavily influenced the confidence of correct rejections but had no discernible influence on the confidence of hits. Bootstrap-informed Monte Carlo simulation supported a dual process recognition model under which familiarity and recollection processes counteract to heavily dampen the influence of external cues on average reported confidence. A third experiment tested this model using source memory. As predicted, because source memory is heavily governed by contextual recollection, cue validity again did not affect confidence, although as with recognition it clearly altered accuracy.
Rice, Linda Marie; Wall, Carla Anne; Fogel, Adam; Shic, Frederick
This study examined the extent to which a computer-based social skills intervention called "FaceSay"™ was associated with improvements in affect recognition, mentalizing, and social skills of school-aged children with Autism Spectrum Disorder (ASD). "FaceSay"™ offers students simulated practice with eye gaze, joint attention,…
Rhodes, Gillian; Lie, Hanne C.; Ewing, Louise; Evangelista, Emma; Tanaka, James W.
Discrimination and recognition are often poorer for other-race than own-race faces. These other-race effects (OREs) have traditionally been attributed to reduced perceptual expertise, resulting from more limited experience, with other-race faces. However, recent findings suggest that sociocognitive factors, such as reduced motivation to…
Zhou, Daoxiang; Yang, Dan; Zhang, Xiaohong; Huang, Sheng; Feng, Shu
Currently, considerable effort has been devoted to devising image representations. However, handcrafted methods need strong domain knowledge and show low generalization ability, and conventional feature learning methods require enormous training data and rich parameter-tuning experience. A lightened feature learner is presented to solve these problems, with application to face recognition; it shares a similar topology with a convolutional neural network. Our model is divided into three components: a cascaded convolution filter bank learning layer, a nonlinear processing layer, and a feature pooling layer. Specifically, in the filter learning layer, we use K-means to learn convolution filters. Features are extracted by convolving images with the learned filters. Afterward, in the nonlinear processing layer, the hyperbolic tangent is employed to capture nonlinear features. In the feature pooling layer, to remove redundant information and incorporate spatial layout, we exploit a multilevel spatial pyramid second-order pooling technique to pool the features in subregions and concatenate them together as the final representation. Extensive experiments on four representative datasets demonstrate the effectiveness and robustness of our model to various variations, yielding competitive recognition results on Extended Yale B and FERET. In addition, our method achieves the best identification performance on the AR and Labeled Faces in the Wild datasets among the comparative methods.
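The spatial-pyramid part of the pooling layer above can be sketched as follows: the feature map is split into 1x1 and 2x2 grids, each cell is pooled, and the results are concatenated so the descriptor retains coarse spatial layout. The paper pools second-order statistics per cell; plain max pooling is used here only to keep the sketch short.

```python
def spatial_pyramid_pool(fmap, levels=(1, 2), pool=max):
    """Spatial pyramid pooling over a 2D feature map (list of rows):
    for each pyramid level g, split the map into a g x g grid, pool
    each cell, and concatenate all pooled values into one descriptor."""
    h, w = len(fmap), len(fmap[0])
    out = []
    for g in levels:
        for bi in range(g):
            for bj in range(g):
                cell = [fmap[i][j]
                        for i in range(bi * h // g, (bi + 1) * h // g)
                        for j in range(bj * w // g, (bj + 1) * w // g)]
                out.append(pool(cell))
    return out
```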
Ishizaki, Chikage; Naka, Makiko; Aritomi, Miyoko
We investigated how retrieval conditions affect accuracy-confidence (A-C) relationships in recognition memory for faces. Seventy participants took a face-recognition test and rated their confidence in their judgment. Twenty-three participants were assigned to a retrieval condition, where they were encouraged to remember background information (scenery) of each picture just before rating their confidence. Twenty-four participants were assigned to a verbalizing condition, in which they were encouraged to remember and verbally describe the background of each picture before rating. Twenty-three participants were assigned to a control condition. The results showed that for the control condition, an A-C relationship was found for old items but not for new items, replicating the results of Takahashi (1998) and Wagenaar (1988). In contrast, in the retrieval condition, an A-C relationship was found for both old and new items. In the verbalizing condition, an A-C relationship was not found for either old or new items. The results showed that retrieving background information affects A-C relationships, supporting the idea that confidence ratings rely not only on memory traces but also on various kinds of information such as retrieved background scenery. Implications for eyewitness testimony were discussed.
Full Text Available This paper introduces a novel method for the recognition of human faces in digital images using a new feature extraction method that combines the global and local information in frontal view of facial images. Radial basis function (RBF) neural network with a hybrid learning algorithm (HLA) has been used as a classifier. The proposed feature extraction method includes human face localization derived from the shape information. An efficient distance measure as facial candidate threshold (FCT) is defined to distinguish between face and nonface images. Pseudo-Zernike moment invariant (PZMI) with an efficient method for selecting moment order has been used. A newly defined parameter named axis correction ratio (ACR) of images for disregarding irrelevant information of face images is introduced. In this paper, the effect of these parameters in disregarding irrelevant information in recognition rate improvement is studied. Also we evaluate the effect of orders of PZMI in recognition rate of the proposed technique as well as RBF neural network learning speed. Simulation results on the face database of Olivetti Research Laboratory (ORL) indicate that the proposed method for human face recognition yielded a recognition rate of 99.3%.
Full Text Available When extracting discriminative features from multimodal data, current methods rarely concern themselves with the data distribution. In this paper, we present an assumption that is consistent with the viewpoint of discrimination, that is, a person’s overall biometric data should be regarded as one class in the input space, and his different biometric data can form different Gaussian distributions, i.e., different subclasses. Hence, we propose a novel multimodal feature extraction and recognition approach based on subclass discriminant analysis (SDA). Specifically, one person’s different bio-data are treated as different subclasses of one class, and a transformed space is calculated, where the difference among subclasses belonging to different persons is maximized, and the difference within each subclass is minimized. Then, the obtained multimodal features are used for classification. Two solutions are presented to overcome the singularity problem encountered in calculation, which are using PCA preprocessing, and employing the generalized singular value decomposition (GSVD) technique, respectively. Further, we provide nonlinear extensions of SDA-based multimodal feature extraction, that is, feature fusion based on KPCA-SDA and KSDA-GSVD. In KPCA-SDA, we first apply Kernel PCA on each single modality before performing SDA. In KSDA-GSVD, we directly perform Kernel SDA to fuse multimodal data, applying GSVD to avoid the singularity problem. For simplicity, two typical types of biometric data are considered in this paper, i.e., palmprint data and face data. Compared with several representative multimodal biometrics recognition methods, experimental results show that our approaches outperform related multimodal recognition methods and KSDA-GSVD achieves the best recognition performance.
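The transformed space described above follows the standard subclass-discriminant criterion; as a sketch in the usual SDA notation (symbols here follow the common formulation and are not taken from the abstract itself), the projection maximizes between-subclass scatter across different classes relative to the total covariance:

```latex
W^{*} = \arg\max_{W}
  \frac{\left| W^{\top} \Sigma_{B} W \right|}
       {\left| W^{\top} \Sigma_{X} W \right|},
\qquad
\Sigma_{B} = \sum_{i=1}^{C-1} \sum_{j=i+1}^{C}
             \sum_{k=1}^{H_i} \sum_{l=1}^{H_j}
             p_{ik}\, p_{jl}\,
             (\mu_{ik} - \mu_{jl})(\mu_{ik} - \mu_{jl})^{\top}
```

where \(\mu_{ik}\) and \(p_{ik}\) are the mean and prior of the \(k\)-th subclass of class \(i\) (here, one person's data from one modality), and \(\Sigma_{X}\) is the data covariance. When \(\Sigma_{X}\) is singular, the PCA-preprocessing or GSVD routes mentioned above apply.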
Morita, Tomoyo; Saito, Daisuke N; Ban, Midori; Shimada, Koji; Okamoto, Yuko; Kosaka, Hirotaka; Okazawa, Hidehiko; Asada, Minoru; Naito, Eiichi
We recently reported that right-side dominance of the inferior parietal lobule (IPL) in self-body recognition (proprioceptive illusion) task emerges during adolescence in typical human development. Here, we extend this finding by demonstrating that functional lateralization to the right IPL also develops during adolescence in another self-body (specifically a self-face) recognition task. We collected functional magnetic resonance imaging (fMRI) data from 60 right-handed healthy children (8-11 years), adolescents (12-15 years), and adults (18-23 years; 20 per group) while they judged whether a presented face was their own (Self) or that of somebody else (Other). We also analyzed fMRI data collected while they performed proprioceptive illusion task. All participants performed self-face recognition with high accuracy. Among brain regions where self-face-related activity (Self vs. Other) developed, only right IPL activity developed predominantly for self-face processing, with no substantial involvement in other-face processing. Adult-like right-dominant use of IPL emerged during adolescence, but was not yet present in childhood. Adult-like common activation between the tasks also emerged during adolescence. Adolescents showing stronger right-lateralized IPL activity during illusion also showed this during self-face recognition. Our results suggest the importance of the right IPL in neuronal processing of information associated with one's own body in typically developing humans.
Full Text Available In this paper, we propose a multi-modal 2D + 3D face recognition method for a smart city application based on a Wireless Sensor Network (WSN) and various kinds of sensors. Depth maps are exploited for the 3D face representation. As for feature extraction, we propose a new feature called Complete Local Derivative Pattern (CLDP). It adopts the idea of layering and has four layers. In the whole system, we apply CLDP separately on Gabor features extracted from a 2D image and depth map. Then, we obtain two features: CLDP-Gabor and CLDP-Depth. The two features weighted by the corresponding coefficients are combined together in the decision level to compute the total classification distance. At last, the probe face is assigned the identity with the smallest classification distance. Extensive experiments are conducted on three different databases. The results demonstrate the robustness and superiority of the new approach. The experimental results also prove that the proposed multi-modal 2D + 3D method is superior to other multi-modal ones and CLDP performs better than other Local Binary Pattern (LBP)-based features.
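The decision-level fusion step above is a weighted sum of the two per-modality distances, with the probe taking the gallery identity of smallest fused distance; the weight values in this sketch are illustrative, since the abstract does not state the coefficients.

```python
def fused_distance(d_gabor, d_depth, w_gabor=0.6, w_depth=0.4):
    """Total classification distance: weighted sum of the CLDP-Gabor
    (2D) and CLDP-Depth (3D) distances. Weights are illustrative."""
    return w_gabor * d_gabor + w_depth * d_depth

def identify(probe_dists):
    """probe_dists maps gallery identity -> (d_gabor, d_depth);
    return the identity with the smallest fused distance."""
    return min(probe_dists, key=lambda k: fused_distance(*probe_dists[k]))
```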
Esins, Janina; Schultz, Johannes; Bülthoff, Isabelle; Kennerknecht, Ingo
A woman in her early 40s with congenital prosopagnosia and attention deficit hyperactivity disorder observed for the first time sudden and extensive improvement of her face recognition abilities, mental imagery, and sense of navigation after galactose intake. This effect of galactose on prosopagnosia has never been reported before. Even if this effect is restricted to a subform of congenital prosopagnosia, galactose might improve the condition of other prosopagnosics. Congenital prosopagnosia, the inability to recognize other people by their face, has extensive negative impact on everyday life. It has a high prevalence of about 2.5%. Monosaccharides are known to have a positive impact on cognitive performance. Here, we report the case of a prosopagnosic woman for whom the daily intake of 5 g of galactose resulted in a remarkable improvement of her lifelong face blindness, along with improved sense of orientation and more vivid mental imagery. All these improvements vanished after discontinuing galactose intake. The self-reported effects of galactose were wide-ranging and remarkably strong but could not be reproduced for 16 other prosopagnosics tested. Indications about heterogeneity within prosopagnosia have been reported; this could explain the difficulty to find similar effects in other prosopagnosics. Detailed analyses of the effects of galactose in prosopagnosia might give more insight into the effects of galactose on human cognition in general. Galactose is cheap and easy to obtain, therefore, a systematic test of its positive effects on other cases of congenital prosopagnosia may be warranted.
This paper presents an effective 3D face keypoint detection, description and matching framework based on three principal curvature measures. These measures give a unified definition of principal curvatures for both smooth and discrete surfaces. They can be reasonably computed based on the normal cycle theory and the geometric measure theory. The strong theoretical basis of these measures provides a solid discrete estimation method on real 3D face scans represented as triangle meshes. Based on these estimated measures, the proposed method can automatically detect a set of sparse and discriminating 3D facial feature points. The local facial shape around each 3D feature point is comprehensively described by histograms of these principal curvature measures. To guarantee the pose invariance of these descriptors, three principal curvature vectors of these principal curvature measures are employed to assign the canonical directions. Similarity comparison between faces is accomplished by matching all these curvature-based local shape descriptors using the sparse representation-based reconstruction method. The proposed method was evaluated on three public databases, i.e. FRGC v2.0, Bosphorus, and Gavab. Experimental results demonstrated that the three principal curvature measures contain strong complementarity for 3D facial shape description, and their fusion can largely improve the recognition performance. Our approach achieves rank-one recognition rates of 99.6, 95.7, and 97.9% on the neutral subset, expression subset, and the whole FRGC v2.0 database, respectively. This indicates that our method is robust to moderate facial expression variations. Moreover, it also achieves very competitive performance on the pose subset (over 98.6% except Yaw 90°) and the occlusion subset (98.4%) of the Bosphorus database. Even in the case of extreme pose variations like profiles, it also significantly outperforms the state-of-the-art approaches with a recognition rate of 57.1%.
Full Text Available Extreme learning machine (ELM) is a competitive machine learning technique, which is simple in theory and fast in implementation; it can identify faults quickly and precisely compared with traditional identification techniques such as support vector machines (SVM). As verified by the simulation results, ELM tends to have better scalability and can achieve much better generalization performance and much faster learning speed than traditional SVM. In this paper, we introduce a multiclass AdaBoost-based ELM ensemble method. In our approach, the ELM algorithm is selected as the basic ensemble predictor due to its rapid speed and good performance. Compared with the existing boosting ELM algorithm, our algorithm can be directly used in multiclass classification problems. We also carried out comparative experiments with face recognition datasets. The experimental results show that the proposed algorithm can not only make the prediction result more stable, but also achieve better generalization performance.
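The standard way multiclass AdaBoost extends boosting beyond two classes is the SAMME reweighting rule, sketched below; whether the paper uses exactly SAMME is not stated in the abstract, so this shows the generic mechanism: the learner weight alpha gains a log(K-1) term, misclassified samples are up-weighted, and the distribution is renormalized.

```python
import math

def samme_update(weights, correct, error, n_classes):
    """One round of multiclass AdaBoost (SAMME) reweighting.

    weights   -- current sample weights (sums to 1)
    correct   -- per-sample booleans from the base learner (e.g. an ELM)
    error     -- the base learner's weighted error on this round
    n_classes -- number of classes K; the log(K-1) term is what lets
                 boosting work with weak learners barely above 1/K.
    Returns (new normalized weights, learner weight alpha)."""
    alpha = math.log((1.0 - error) / error) + math.log(n_classes - 1)
    new_w = [w * math.exp(alpha * (0.0 if ok else 1.0))
             for w, ok in zip(weights, correct)]
    total = sum(new_w)
    return [w / total for w in new_w], alpha
```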
Polyakova, A.; Lipinskiy, L.
Some problems are difficult to solve using a single intelligent information technology (IIT). An ensemble of various data mining (DM) techniques is a set of models, each of which is able to solve the problem by itself, but whose combination increases the efficiency of the system as a whole. Using IIT ensembles can improve the reliability and efficiency of the final decision, since the approach emphasizes the diversity of its components. A new method of IIT ensemble design is considered in this paper. It is based on fuzzy logic and is designed to solve classification and regression problems. The ensemble consists of several data mining algorithms: an artificial neural network, a support vector machine and decision trees. These algorithms and their ensemble have been tested on face recognition problems. Principal component analysis (PCA) is used for feature selection.
Beatrice de Gelder
Full Text Available There are many ways to assess face perception skills. In this study, we describe a novel task battery, FEAST (Facial Expression Action Stimulus Test), developed to test recognition of identity and expressions of human faces as well as stimulus control categories. The FEAST consists of a neutral and emotional face memory task, a face and object identity matching task, a face and house part-to-whole matching task, and a human and animal facial expression matching task. The identity and part-to-whole matching tasks contain both upright and inverted conditions. The results provide reference data of a healthy sample of controls in two age groups for future users of the FEAST.
Mograss, Melodee A; Guillem, Francois; Stickgold, Robert
Research indicates that habitual short sleepers show more rapid accumulation of slow-wave sleep at the beginning of the night. Enhancement in performance on declarative memory tasks has been associated with early NonREM sleep, consisting of the highest percentage of slow-wave sleep. Twenty-four subjects (eight short sleepers with <7 h, eight average sleepers with >7 but ≤9 h, and eight long sleepers with >9 h of habitual sleep) were tested. Subjects were presented with unfamiliar face stimuli and asked to memorize them for a subsequent test. Following sleep, the subjects were presented with the 40 "old/studied" items intermixed with 40 new and asked to indicate the previously presented stimuli. Event-related potentials (ERPs) were analyzed to verify the existence of the "Old/New" effect, i.e. the amplitude difference in ERPs between the old and new stimuli. ANOVA on the scores revealed a significant interaction between the stimuli and group. A post-hoc test on the studied items revealed more accurate responses in the short sleepers compared to the average and long sleepers. Strikingly, the long sleepers failed to show significant retention of the old/studied items, with their recognition of old faces not different from chance. Reaction time (RT) responses were faster for the old vs. the new items. Pearson correlation revealed a significant negative correlation between accuracy and sleep duration in the short sleepers. However, long and average sleepers showed a positive correlation between the two variables. ANOVA performed on the ERPs revealed main effects of stimuli and site, and no interactions involving the group factor. In conclusion, our data show that individual differences in recognition memory performance may be associated with differences in habitual sleep duration. Crown Copyright 2010. Published by Elsevier B.V. All rights reserved.
Full Text Available According to attribution models of familiarity assessment, people can use a heuristic in recognition-memory decisions, in which they attribute the subjective ease of processing of a memory probe to a prior encounter with the stimulus in question. Research in social cognition suggests that experienced positive affect may be the proximal cue that signals fluency in various experimental contexts. In the present study, we compared the effects of positive affect and fluency on recognition-memory judgments for faces with neutral emotional expression. We predicted that if positive affect is indeed the critical cue that signals processing fluency at retrieval, then its manipulation should produce effects that closely mirror those produced by manipulations of processing fluency. In two experiments, we employed a masked-priming procedure in combination with a Remember-Know paradigm that aimed to separate familiarity- from recollection-based memory decisions. In addition, participants performed a prime-discrimination task that allowed us to take inter-individual differences in prime awareness into account. We found highly similar effects of our priming manipulations of processing fluency and of positive affect. In both cases, the critical effect was specific to familiarity-based recognition responses. Moreover, in both experiments it was reflected in a shift towards a more liberal response bias, rather than in changed discrimination. Finally, in both experiments, the effect was found to be related to prime awareness; it was present only in participants who reported a lack of such awareness on the prime-discrimination task. These findings add to a growing body of evidence that points not only to a role of fluency, but also of positive affect in familiarity assessment. As such they are consistent with the idea that fluency itself may be hedonically marked.
Facial action unit (AU) recognition has been applied in a wide range of fields, and has attracted great attention in the past two decades. Most existing works on AU recognition assume that the complete label assignment for each training image is available, which is often not the case in practice. Labeling AUs is an expensive and time-consuming process. Moreover, due to AU ambiguity and subjective differences, some AUs are difficult to label reliably and confidently. Many AU recognition works try to train the classifier for each AU independently, which has a high computational cost and ignores the dependencies among different AUs. In this work, we formulate AU recognition under incomplete data as a multi-label learning with missing labels (MLML) problem. Most existing MLML methods usually employ the same features for all classes. However, we find this setting unreasonable in AU recognition, as the occurrence of different AUs produces changes of skin surface displacement or face appearance in different face regions. If the shared features are used for all AUs, much noise will be involved due to the occurrence of other AUs. Consequently, the changes of the specific AUs cannot be clearly highlighted, leading to performance degradation. Instead, we propose to extract the most discriminative features for each AU individually, which are learned by a supervised learning method. The learned features are further embedded into the instance-level label smoothness term of our model, which also includes the label consistency and the class-level label smoothness. Both a global solution using s-t cut and an approximated solution using conjugate gradient (CG) descent are provided. Experiments on both posed and spontaneous facial expression databases demonstrate the superiority of the proposed method in comparison with several state-of-the-art works.
Davis, Joshua; McKone, Elinor; Zirnsak, Marc; Moore, Tirin; O'Kearney, Richard; Apthorp, Deborah; Palermo, Romina
This study distinguished between different subclusters of autistic traits in the general population and examined the relationships between these subclusters, looking at the eyes of faces, and the ability to recognize facial identity. Using the Autism Spectrum Quotient (AQ) measure in a university-recruited sample, we separate the social aspects of autistic traits (i.e., those related to communication and social interaction; AQ-Social) from the non-social aspects, particularly attention-to-detail (AQ-Attention). We provide the first evidence that these social and non-social aspects are associated differentially with looking at eyes: While AQ-Social showed the commonly assumed tendency towards reduced looking at eyes, AQ-Attention was associated with increased looking at eyes. We also report that higher attention-to-detail (AQ-Attention) was then indirectly related to improved face recognition, mediated by increased number of fixations to the eyes during face learning. Higher levels of socially relevant autistic traits (AQ-Social) trended in the opposite direction towards being related to poorer face recognition (significantly so in females on the Cambridge Face Memory Test). There was no evidence of any mediated relationship between AQ-Social and face recognition via reduced looking at the eyes. These different effects of AQ-Attention and AQ-Social suggest face-processing studies in Autism Spectrum Disorder might similarly benefit from considering symptom subclusters. Additionally, concerning mechanisms of face recognition, our results support the view that more looking at eyes predicts better face memory. © 2016 The British Psychological Society.
Nelson, Nicole L; Russell, James A
In a classic study, children were shown an array of facial expressions and asked to choose the person who expressed a specific emotion. Children were later asked to name the emotion in the face with any label they wanted. Subsequent research often relied on the same two tasks--choice from array and free labeling--to support the conclusion that children recognize basic emotions from facial expressions. Here five studies (N=120, 2- to 10-year-olds) showed that these two tasks produce illusory recognition; a novel nonsense facial expression was included in the array. Children "recognized" a nonsense emotion (pax or tolen) and two familiar emotions (fear and jealousy) from the same nonsense face. Children likely used a process of elimination; they paired the unknown facial expression with a label given in the choice-from-array task and, after just two trials, freely labeled the new facial expression with the new label. These data indicate that past studies using this method may have overestimated children's expression knowledge. Copyright © 2015 Elsevier Inc. All rights reserved.
Zheng, Jianwei; Yang, Ping; Chen, Shengyong; Shen, Guojiang; Wang, Wanliang
In this paper, we consider the robust face recognition problem via an iterative re-constrained group sparse classifier (IRGSC) with adaptive weights learning. Specifically, we propose a group sparse representation classification (GSRC) approach in which weighted features and groups are collaboratively adopted to encode more structure information and discriminative information than other regression-based methods. In addition, we derive an efficient algorithm to optimize the proposed objective function, and theoretically prove its convergence. There are several appealing aspects associated with IRGSC. First, adaptively learned weights can be seamlessly incorporated into the GSRC framework. This integrates the locality structure of the data and validity information of the features into the l2,p-norm regularization to form a unified formulation. Second, IRGSC is very flexible with respect to the size of the training set as well as the feature dimension, thanks to the l2,p-norm regularization. Third, the derived solution is proved to be a stationary point (globally optimal if p ≥ 1). Comprehensive experiments on representative data sets demonstrate that IRGSC is a robust discriminative classifier which significantly improves performance and efficiency compared with state-of-the-art methods in dealing with face occlusion, corruption, illumination changes, and so on.
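The group-sparsity-inducing l2,p-norm regularizer at the core of IRGSC can be illustrated with a minimal numeric sketch (the grouping, values, and function name below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def l2p_group_norm(w, groups, p=0.5):
    """Group penalty sum_g ||w_g||_2^p over coefficient groups.

    w      : 1-D coefficient vector
    groups : list of index arrays, e.g. one group per training class
    p      : 0 < p < 1 strongly promotes group sparsity; p >= 1 is convex
    """
    return sum(np.linalg.norm(w[g]) ** p for g in groups)

groups = [np.arange(0, 3), np.arange(3, 6), np.arange(6, 9)]
# Two coefficient vectors with identical total l2 energy:
concentrated = np.array([1.0, 1.0, 1.0, 0, 0, 0, 0, 0, 0])
spread = np.full(9, 1.0 / np.sqrt(3))
# Energy concentrated in one group is penalised less, which is why the
# regularizer drives the representation to select few groups (classes).
assert l2p_group_norm(concentrated, groups) < l2p_group_norm(spread, groups)
```

This only shows the shape of the penalty; the paper's iterative re-constrained optimization and adaptive weights are not reproduced here.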
Deng, Weihong; Hu, Jiani; Guo, Jun
Collaborative representation methods, such as sparse subspace clustering (SSC) and sparse representation-based classification (SRC), have achieved great success in face clustering and classification by directly utilizing the training images as the dictionary bases. In this paper, we reveal that the superior performance of collaborative representation relies heavily on the sufficiently large class separability of controlled face datasets such as Extended Yale B. On uncontrolled or undersampled datasets, however, collaborative representation suffers from the misleading coefficients of the incorrect classes. To address this limitation, inspired by the success of linear discriminant analysis (LDA), we develop a superposed linear representation classifier (SLRC) to cast the recognition problem by representing the test image in terms of a superposition of the class centroids and the shared intra-class differences. In spite of its simplicity and approximation, the new SLRC largely improves the generalization ability of collaborative representation, and competes well with more sophisticated dictionary learning techniques in experiments on the AR and FRGC databases. Enforced with the sparsity constraint, SLRC achieves state-of-the-art performance on the FERET database using a single sample per person.
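The superposed-representation idea, a test image expressed as a class centroid plus shared intra-class differences, can be sketched as follows (an illustrative least-squares approximation with toy data; the paper's exact formulation, including its sparsity constraint, is not reproduced here):

```python
import numpy as np

def slrc_classify(test, centroids, variations):
    """Superposed linear representation (illustrative sketch).

    centroids  : (d, C) matrix, one column per class centroid
    variations : (d, V) matrix of shared intra-class difference vectors
    The test vector is represented over both parts jointly; it is then
    assigned to the class whose centroid term, together with the shared
    variation term, leaves the smallest reconstruction residual.
    """
    A = np.hstack([centroids, variations])
    coef, *_ = np.linalg.lstsq(A, test, rcond=None)
    n_classes = centroids.shape[1]
    alpha, beta = coef[:n_classes], coef[n_classes:]
    shared = variations @ beta  # variation component common to all classes
    residuals = [np.linalg.norm(test - alpha[c] * centroids[:, c] - shared)
                 for c in range(n_classes)]
    return int(np.argmin(residuals))

centroids = np.array([[1.0, 0.0],
                      [0.0, 1.0]])          # class 0 and class 1 centroids
variations = np.array([[0.1], [0.1]])       # one shared difference direction
assert slrc_classify(np.array([1.0, 0.1]), centroids, variations) == 0
assert slrc_classify(np.array([0.1, 1.0]), centroids, variations) == 1
```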
Najam, S.S.; Shaikh, A.Z.; Naqvi, S.
A novel hybrid design based electronic voting system is proposed, implemented and analyzed. The proposed system uses two voter verification techniques to give better results in comparison to single identification based systems. Fingerprint and facial recognition based methods are used for voter identification. Cross verification of a voter during an election process provides better accuracy than a single parameter identification method. The facial recognition system uses the Viola-Jones algorithm along with rectangular Haar feature selection for detection and extraction of features, both to develop a biometric template and for feature extraction during the voting process. Cascaded machine learning based classifiers using GPCA (Generalized Principal Component Analysis) and K-NN (K-Nearest Neighbor) are used for comparing the features for identity verification. This is accomplished by comparing the eigenvectors of the extracted features with the biometric template pre-stored in the election regulatory body database. The results show that the proposed cascaded design performs better than systems using other classifiers or separate schemes, i.e., facial or fingerprint based schemes alone. The proposed system will be highly useful for real-time applications because it achieves 91% facial recognition accuracy under nominal light.
Betta, G.; Capriglione, D.; Crenna, F.; Rossi, G. B.; Gasparetto, M.; Zappa, E.; Liguori, C.; Paolillo, A.
Security systems based on face recognition through video surveillance systems deserve great interest. Their use is important in several areas including airport security, identification of individuals and access control to critical areas. These systems are based either on the measurement of details of a human face or on a global approach whereby faces are considered as a whole. The recognition is then performed by comparing the measured parameters with reference values stored in a database. The result of this comparison is not deterministic because measurement results are affected by uncertainty due to random variations and/or to systematic effects. In these circumstances the recognition of a face is subject to the risk of a faulty decision. Therefore, a proper metrological characterization is needed to improve the performance of such systems. Suitable methods are proposed for a quantitative metrological characterization of face measurement systems, on which recognition procedures are based. The proposed methods are applied to three different algorithms based either on linear discrimination, on eigenface analysis, or on feature detection.
Face recognition systems employ a variety of feature extraction and projection techniques, which are grouped into appearance-based and feature-based methods. In the vast majority of studies undertaken in the field of face recognition, special attention is given to appearance-based methods, which represent the dominant and most popular feature extraction technique used. Even though a number of comparative studies exist, researchers have not reached consensus within the scientific community regarding the relative ranking of the efficiency of appearance-based methods (LDA, PCA, etc.) for the face recognition task. This paper studied two appearance-based methods (LDA, PCA) separately with three distance metrics (similarity measures), namely Euclidean distance, City Block and Cosine, to ascertain which projection-metric combination was relatively more efficient in terms of the time it takes to recognise a face. The study considered the effect of varying the image data size in a training database on all the projection-metric methods implemented. The LDA-Cosine distance metric was consequently ascertained to be the most efficient when tested with two separate standard databases, the AT&T Face Database and the Indian Face Database. It was also concluded that LDA outperformed PCA.
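The three distance metrics compared in the study are straightforward to state in code; a minimal nearest-neighbour matcher over projected face features might look like the following (the feature vectors here are toy values, not real LDA/PCA projections):

```python
import numpy as np

def euclidean(a, b):
    return np.linalg.norm(a - b)

def city_block(a, b):
    return np.abs(a - b).sum()

def cosine_distance(a, b):
    return 1.0 - a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b))

def nearest_face(probe, gallery, metric):
    """Index of the gallery feature vector closest to the probe."""
    return int(np.argmin([metric(probe, g) for g in gallery]))

# Toy projected features: the probe is closest to gallery identity 0
# under all three metrics.
gallery = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
probe = np.array([0.9, 0.2])
for metric in (euclidean, city_block, cosine_distance):
    assert nearest_face(probe, gallery, metric) == 0
```

The study's efficiency comparison concerns the runtime of such projection-metric combinations, which this sketch does not measure.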
Moulson, Margaret C.; Westerlund, Alissa; Fox, Nathan A.; Zeanah, Charles H.; Nelson, Charles A.
Data are reported from 3 groups of children residing in Bucharest, Romania. Face recognition in currently institutionalized, previously institutionalized, and never-institutionalized children was assessed at 3 time points: preintervention (n = 121), 30 months of age (n = 99), and 42 months of age (n = 77). Children watched photographs of caregiver…
Golan, Ofer; Baron-Cohen, Simon; Hill, Jacqueline
Adults with Asperger Syndrome (AS) can recognise simple emotions and pass basic theory of mind tasks, but have difficulties recognising more complex emotions and mental states. This study describes a new battery of tasks, testing recognition of 20 complex emotions and mental states from faces and voices. The battery was given to males and females…
Gross, Thomas F
The recognition of facial immaturity and emotional expression by children with autism, language disorders, mental retardation, and non-disabled controls was studied in two experiments. Children identified immaturity and expression in upright and inverted faces. The autism group identified fewer immature faces and expressions than control (Exp. 1 & 2), language disordered (Exp. 1), and mental retardation (Exp. 2) groups. Facial inversion interfered with all groups' recognition of facial immaturity and with control and language disordered groups' recognition of expression. Error analyses (Exp. 1 & 2) showed similarities between autism and other groups' perception of immaturity but differences in perception of expressions. Reasons for similarities and differences between children with and without autism when perceiving facial immaturity and expression are discussed.
Saha, Priya; Bhattacharjee, Debotosh; De, Barin K.; Nasipuri, Mita
Facial expressions in infrared imaging have been introduced to solve the problem of illumination, which is an integral constituent of visual imagery. The paper investigates facial skin temperature distribution on mixed thermal facial expressions in our newly created face database, in which six are basic expressions and the remaining 12 are mixtures of those basic expressions. Temperature analysis has been performed on three facial regions of interest (ROIs): periorbital, supraorbital and mouth. Temperature variability of the ROIs in different expressions has been measured using statistical parameters. The temperature variation measurement in the ROIs of a particular expression corresponds to a vector, which is later used in the recognition of mixed facial expressions. Investigations show that facial features in mixed facial expressions can be characterized by positive emotion induced facial features and negative emotion induced facial features. The supraorbital region is useful for differentiating basic expressions from mixed expressions. Analysis and interpretation of mixed expressions have been conducted with the help of box and whisker plots. A facial region containing a mixture of two expressions generally induces less temperature change than the corresponding facial region containing a basic expression.
Galbally, Javier; Marcel, Sébastien; Fierrez, Julian
To ensure the actual presence of a real legitimate trait in contrast to a fake self-manufactured synthetic or reconstructed sample is a significant problem in biometric authentication, which requires the development of new and efficient protection measures. In this paper, we present a novel software-based fake detection method that can be used in multiple biometric systems to detect different types of fraudulent access attempts. The objective of the proposed system is to enhance the security of biometric recognition frameworks, by adding liveness assessment in a fast, user-friendly, and non-intrusive manner, through the use of image quality assessment. The proposed approach presents a very low degree of complexity, which makes it suitable for real-time applications, using 25 general image quality features extracted from one image (i.e., the same acquired for authentication purposes) to distinguish between legitimate and impostor samples. The experimental results, obtained on publicly available data sets of fingerprint, iris, and 2D face, show that the proposed method is highly competitive compared with other state-of-the-art approaches and that the analysis of the general image quality of real biometric samples reveals highly valuable information that may be very efficiently used to discriminate them from fake traits.
Zawadzka, Katarzyna; Higham, Philip A.; Hanczakowski, Maciej
Two-alternative forced-choice recognition tests are commonly used to assess recognition accuracy that is uncontaminated by changes in bias. In such tests, participants are asked to endorse the studied item out of 2 presented alternatives. Participants may be further asked to provide confidence judgments for their recognition decisions. It is often…
Registration algorithms performed on point clouds or range images of face scans have been successfully used for automatic 3D face recognition under expression variations, but have rarely been investigated to solve pose changes and occlusions mainly since that the basic landmarks to initialize coarse alignment are not always available. Recently, local feature-based SIFT-like matching proves competent to handle all such variations without registration. In this paper, towards 3D face recognition for real-life biometric applications, we significantly extend the SIFT-like matching framework to mesh data and propose a novel approach using fine-grained matching of 3D keypoint descriptors. First, two principal curvature-based 3D keypoint detectors are provided, which can repeatedly identify complementary locations on a face scan where local curvatures are high. Then, a robust 3D local coordinate system is built at each keypoint, which allows extraction of pose-invariant features. Three keypoint descriptors, corresponding to three surface differential quantities, are designed, and their feature-level fusion is employed to comprehensively describe local shapes of detected keypoints. Finally, we propose a multi-task sparse representation based fine-grained matching algorithm, which accounts for the average reconstruction error of probe face descriptors sparsely represented by a large dictionary of gallery descriptors in identification. Our approach is evaluated on the Bosphorus database and achieves rank-one recognition rates of 96.56, 98.82, 91.14, and 99.21 % on the entire database, and the expression, pose, and occlusion subsets, respectively. To the best of our knowledge, these are the best results reported so far on this database. Additionally, good generalization ability is also exhibited by the experiments on the FRGC v2.0 database.
Nie, Aiqing; Griffin, Michael; Keinath, Alexander; Walsh, Matthew; Dittmann, Andrea; Reder, Lynne
Previous research has suggested that faces and words are processed and remembered differently as reflected by different ERP patterns for the two types of stimuli. Specifically, face stimuli produced greater late positive deflections for old items in anterior compared to posterior regions, while word stimuli produced greater late positive deflections in posterior compared to anterior regions. Given that words have existing representations in subjects' long-term memories (LTM) and that face stimuli used in prior experiments were of unknown individuals, we conducted an ERP study that crossed face and letter stimuli with the presence or absence of a prior (stable or existing) memory representation. During encoding, subjects judged whether stimuli were known (famous face or real word) or not known (unknown person or pseudo-word). A surprise recognition memory test required subjects to distinguish between stimuli that appeared during the encoding phase and stimuli that did not. ERP results were consistent with previous research when comparing unknown faces and words; however, the late ERP pattern for famous faces was more similar to that for words than for unknown faces. This suggests that the critical ERP difference is mediated by whether there is a prior representation in LTM, and not whether the stimulus involves letters or faces. Published by Elsevier B.V.
Chuk, Tim; Crookes, Kate; Hayward, William G; Chan, Antoni B; Hsiao, Janet H
It remains controversial whether culture modulates eye movement behavior in face recognition. Inconsistent results have been reported regarding whether cultural differences in eye movement patterns exist, whether these differences affect recognition performance, and whether participants use similar eye movement patterns when viewing faces from different ethnicities. These inconsistencies may be due to substantial individual differences in eye movement patterns within a cultural group. Here we addressed this issue by conducting individual-level eye movement data analysis using hidden Markov models (HMMs). Each individual's eye movements were modeled with an HMM. We clustered the individual HMMs according to their similarities and discovered three common patterns in both Asian and Caucasian participants: holistic (looking mostly at the face center), left-eye-biased analytic (looking mostly at the two individual eyes in addition to the face center with a slight bias to the left eye), and right-eye-biased analytic (looking mostly at the right eye in addition to the face center). The frequency of participants adopting the three patterns did not differ significantly between Asians and Caucasians, suggesting little modulation from culture. Significantly more participants (75%) showed similar eye movement patterns when viewing own- and other-race faces than different patterns. Most importantly, participants with left-eye-biased analytic patterns performed significantly better than those using either holistic or right-eye-biased analytic patterns. These results suggest that active retrieval of facial feature information through an analytic eye movement pattern may be optimal for face recognition regardless of culture. Copyright © 2017 Elsevier B.V. All rights reserved.
Mortensen, Kristine Køhler; Brotherton, Chloe
In this chapter, we investigate how a face is not a singular, invariable object, but may take on a variety of forms, and how new media has especially created new venues for the moldings of faces. We suggest that faces should be viewed in plural in order to emphasize the many different facial disp...
Golan, Ofer; Sinai-Gavrilov, Yana; Baron-Cohen, Simon
Background Difficulties in recognizing emotions and mental states are central characteristics of autism spectrum conditions (ASC). However, emotion recognition (ER) studies have focused mostly on recognition of the six "basic" emotions, usually using still pictures of faces. Methods This study describes a new battery of tasks for testing recognition of nine complex emotions and mental states from video clips of faces and from voice recordings taken from the Mindreading DVD. This battery (the ...
Polcher, Alexandra; Frommann, Ingo; Koppara, Alexander; Wolfsgruber, Steffen; Jessen, Frank; Wagner, Michael
There is a need for more sensitive neuropsychological tests to detect subtle cognitive deficits emerging in the preclinical stage of Alzheimer's disease (AD). Associative memory is a cognitive function supported by the hippocampus and affected early in the process of AD. We developed a short computerized face-name associative recognition test (FNART) and tested whether it would detect memory impairment in memory clinic patients with mild cognitive impairment (MCI) and subjective cognitive decline (SCD). We recruited 61 elderly patients with either SCD (n = 32) or MCI (n = 29) and 28 healthy controls (HC) and compared performance on FNART, self-reported cognitive deterioration in different domains (ECog-39), and, in a reduced sample (n = 46), performance on the visual Paired Associates Learning of the CANTAB battery. A significant effect of group on FNART test performance in the total sample was found (p < 0.001). Planned contrasts indicated a significantly lower associative memory performance in the SCD (p = 0.001, d = 0.82) and MCI group (p < 0.001, d = 1.54), as compared to HCs, respectively. The CANTAB-PAL discriminated only between HC and MCI, possibly because of reduced statistical power. Adjusted for depression, performance on FNART was significantly related to ECog-39 Memory in SCD patients (p = 0.024) but not in MCI patients. Associative memory is substantially impaired in memory clinic patients with SCD and correlates specifically with memory complaints at this putative preclinical stage of AD. Further studies will need to examine the predictive validity of the FNART in SCD patients with regard to longitudinal (i.e., conversion to MCI/AD) and biomarker outcomes.
In our daily lives, we form some impressions of other people. Although those impressions are affected by many factors, face-based affective signals such as facial expression, facial attractiveness, or trustworthiness are important. Previous psychological studies have demonstrated the impact of facial impressions on remembering other people, but little is known about the neural mechanisms underlying this psychological process. The purpose of this article is to review recent functional MRI (fMRI) studies to investigate the effects of face-based affective signals including facial expression, facial attractiveness, and trustworthiness on memory for faces, and to propose a tentative concept for understanding this affective-cognitive interaction. On the basis of the aforementioned research, three brain regions are potentially involved in the processing of face-based affective signals. The first candidate is the amygdala, where activity is generally modulated by both affectively positive and negative signals from faces. Activity in the orbitofrontal cortex (OFC), as the second candidate, increases as a function of perceived positive signals from faces; whereas activity in the insular cortex, as the third candidate, reflects a function of face-based negative signals. In addition, neuroscientific studies have reported that the three regions are functionally connected to the memory-related hippocampal regions. These findings suggest that the effects of face-based affective signals on memory for faces could be modulated by interactions between the regions associated with the processing of face-based affective signals and the hippocampus as a memory-related region. PMID:22837740
Nguyen, Dat Tien; Pham, Tuyen Danh; Baek, Na Rae; Park, Kang Ryoung
Although face recognition systems have wide application, they are vulnerable to presentation attack samples (fake samples). Therefore, a presentation attack detection (PAD) method is required to enhance the security level of face recognition systems. Most of the previously proposed PAD methods for face recognition systems have focused on using handcrafted image features, which are designed by expert knowledge of designers, such as Gabor filter, local binary pattern (LBP), local ternary pattern (LTP), and histogram of oriented gradients (HOG). As a result, the extracted features reflect limited aspects of the problem, yielding a detection accuracy that is low and varies with the characteristics of presentation attack face images. The deep learning method has been developed in the computer vision research community, which is proven to be suitable for automatically training a feature extractor that can be used to enhance the ability of handcrafted features. To overcome the limitations of previously proposed PAD methods, we propose a new PAD method that uses a combination of deep and handcrafted features extracted from the images by visible-light camera sensor. Our proposed method uses the convolutional neural network (CNN) method to extract deep image features and the multi-level local binary pattern (MLBP) method to extract skin detail features from face images to discriminate the real and presentation attack face images. By combining the two types of image features, we form a new type of image features, called hybrid features, which has stronger discrimination ability than single image features. Finally, we use the support vector machine (SVM) method to classify the image features into real or presentation attack class. Our experimental results indicate that our proposed method outperforms previous PAD methods by yielding the smallest error rates on the same image databases. PMID:29495417
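The multi-level LBP (MLBP) component described above lends itself to a compact sketch. The following is an illustrative implementation with nearest-pixel sampling at integer radii; the paper's exact MLBP configuration (radii, sampling, histogram binning) may differ:

```python
import numpy as np

def lbp_codes(img, r=1):
    """8-neighbour LBP codes at integer radius r (nearest-pixel sampling)."""
    h, w = img.shape
    center = img[r:h - r, r:w - r]
    offsets = [(-r, -r), (-r, 0), (-r, r), (0, r),
               (r, r), (r, 0), (r, -r), (0, -r)]
    codes = np.zeros(center.shape, dtype=np.int32)
    for bit, (dy, dx) in enumerate(offsets):
        # Neighbour plane shifted by (dy, dx) relative to each centre pixel.
        neigh = img[r + dy:h - r + dy, r + dx:w - r + dx]
        codes += (neigh >= center).astype(np.int32) << bit
    return codes

def mlbp_features(img, radii=(1, 2, 4)):
    """Concatenate normalised 256-bin LBP histograms over several radii."""
    feats = []
    for r in radii:
        hist = np.bincount(lbp_codes(img, r).ravel(), minlength=256)
        feats.append(hist / hist.sum())
    return np.concatenate(feats)

# A 32x32 toy "face" image yields a 3 * 256 = 768-dimensional descriptor.
img = np.random.default_rng(0).integers(0, 256, size=(32, 32))
assert mlbp_features(img).shape == (768,)
```

In the paper's pipeline, such a handcrafted descriptor is concatenated with CNN features before the SVM decision; that fusion step is not shown here.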
Mortensen, Kristine Køhler; Brotherton, Chloe
for the face to be put into action. Based on an ethnographic study of Danish teenagers’ use of SnapChat, we demonstrate how the face is used as a central medium for interaction with peers. Through the analysis of visual SnapChat messages, we investigate how SnapChat requires the sender to put an ‘ugly’ face...... displays a single person makes use of, and how this ‘pool of faces’ carries sociocultural meaning. While the past decades of swift technological development may seem to have diminished the role of face-to-face contact, the many new media have, on the contrary, established multiple new and innovative arenas...... forward. Especially the teenage girls engage in manipulating their faces into hideous expressions. However, this type of interaction is not random facial display, but follows an ‘aesthetics of ugliness’. This aesthetics involves specific ways of looking ugly and is primarily performed by girls who have...
This paper proposes a novel approach for 3D face recognition by learning weighted sparse representation of encoded facial normal information. To comprehensively describe 3D facial surface, three components, in X, Y, and Z-plane respectively, of normal vector are encoded locally to their corresponding normal pattern histograms. They are finally fed to a sparse representation classifier enhanced by learning based spatial weights. Experimental results achieved on the FRGC v2.0 database prove that the proposed encoded normal information is much more discriminative than original normal information. Moreover, the patch based weights learned using the FRGC v1.0 and Bosphorus datasets also demonstrate the importance of each facial physical component for 3D face recognition. © 2011 IEEE.
Full Text Available This paper presents a novel approach to face recognition based on the fusion of appearance and depth information at the match score level. We apply passive stereoscopy instead of the active range scanning popularly used by others. We show that present-day passive stereoscopy, though less robust and accurate, makes a positive contribution to face recognition. By combining appearance and disparity in a linear fashion, we verified experimentally that the combined results are noticeably better than those of each individual modality. We also propose an original learning method, bilateral two-dimensional linear discriminant analysis (B2DLDA), to extract facial features from the appearance and disparity images. We compare B2DLDA with some existing 2DLDA methods on both the XM2VTS database and our own database. The results show that B2DLDA achieves better results than the others.
Joordens, Steve; Ozubko, Jason D.; Niewiadomski, Marty W.
In his analysis of the pseudoword effect [Greene, R. L. (2004). Recognition memory for pseudowords. "Journal of Memory and Language," 50, 259-267.], Greene suggests nonwords can feel more familiar than words in a recognition context if the orthographic features of the nonword match well with the features of the items presented at study. One possible…
van't Wout, Mascha; van Dijke, Annemiek; Aleman, Andre; Kessels, Roy P. C.; Pijpers, Wietske; Kahn, Rene S.
Although schizophrenia has often been associated with deficits in facial affect recognition, it is debated whether the recognition of specific emotions is affected and if these facial affect-processing deficits are related to symptomatology or other patient characteristics. The purpose of the
Ding, Xiao Pan; Fu, Genyue; Lee, Kang
The present study used functional near-infrared spectroscopy (fNIRS) to investigate the neural correlates of elementary school children's own- and other-race face processing. An old-new paradigm was used to assess children's recognition of own- and other-race faces. fNIRS data revealed that other-race faces elicited significantly greater [oxy-Hb] changes than own-race faces in the right middle frontal gyrus and inferior frontal gyrus regions (BA9) and the left cuneus (BA18). With increased age, the [oxy-Hb] activity differences between own- and other-race faces, or the neural other-race effect (NORE), underwent significant changes in these two cortical areas: at younger ages, the neural response to other-race faces was modestly greater than that to own-race faces, but with increased age, the neural response to own-race faces became increasingly greater than that to other-race faces. Moreover, these areas had strong regional functional connectivity with a swath of cortical regions in terms of the neural other-race effect, and this connectivity also changed with age. We also found significant and positive correlations between the behavioral other-race effect (reaction time) and the neural other-race effect in the right middle frontal gyrus and inferior frontal gyrus regions (BA9). Taken together, these results suggest that children, like adults, devote different amounts of neural resources to processing own- and other-race faces, but the size and direction of the neural other-race effect and the associated regional functional connectivity change with age. © 2013.
Jaeger, Antonio; Cox, Justin C.; Dobbins, Ian G.
Our memory experiences typically covary with those of others around us, and on average, an item is more likely to be familiar than not if a companion recommends it as such. Although it would be ideal if observers could use the external recommendations of others as statistical priors during recognition decisions, it is currently unclear how or if they do so. Furthermore, understanding the sensitivity of recognition judgments to such external cues is critical for understanding memory conf...
Stein, Timo; End, Albert; Sterzer, Philipp
The detection of a face in a visual scene is the first stage in the face processing hierarchy. Although all subsequent, more elaborate face processing depends on the initial detection of a face, surprisingly little is known about the perceptual mechanisms underlying face detection. Recent evidence suggests that relatively hard-wired face detection mechanisms are broadly tuned to all face-like visual patterns as long as they respect the typical spatial configuration of the eyes above the mouth. Here, we qualify this notion by showing that face detection mechanisms are also sensitive to face shape and facial surface reflectance properties. We used continuous flash suppression (CFS) to render faces invisible at the beginning of a trial and measured the time upright and inverted faces needed to break into awareness. Young Caucasian adult observers were presented with faces from their own race or from another race (race experiment) and with faces from their own age group or from another age group (age experiment). Faces matching the observers’ own race and age group were detected more quickly. Moreover, the advantage of upright over inverted faces in overcoming CFS, i.e., the face inversion effect (FIE), was larger for own-race and own-age faces. These results demonstrate that differences in face shape and surface reflectance influence access to awareness and configural face processing at the initial detection stage. Although we did not collect data from observers of another race or age group, these findings are a first indication that face detection mechanisms are shaped by visual experience with faces from one’s own social group. Such experience-based fine-tuning of face detection mechanisms may equip in-group faces with a competitive advantage for access to conscious awareness. PMID:25136308
Wan, Lulu; Crookes, Kate; Reynolds, Katherine J; Irons, Jessica L; McKone, Elinor
Competing approaches to the other-race effect (ORE) see its primary cause as either a lack of motivation to individuate social outgroup members, or a lack of perceptual experience with other-race faces. Here, we argue that the evidence supporting the social-motivational approach derives from a particular cultural setting: a high socio-economic status group (typically US Whites) looking at the faces of a lower status group (US Blacks) with whom observers typically have at least moderate perceptual experience. In contrast, we test motivation-to-individuate instructions across five studies covering an extremely wide range of perceptual experience, in a cultural setting of more equal socio-economic status, namely Asian and Caucasian participants (N = 480) tested on Asian and Caucasian faces. We find no social-motivational component at all to the ORE, specifically: no reduction in the ORE with motivation instructions, including for novel images of the faces, and at all experience levels; no increase in correlation between own- and other-race face recognition, implying no increase in shared processes; and greater (not, as predicted, less) effort applied to distinguishing other-race faces than own-race faces under normal ("no instructions") conditions. Instead, the ORE was predicted by level of contact with the other race. Our results reject both pure social-motivational theories and the recent Categorization-Individuation model of Hugenberg, Young, Bernstein, and Sacco (2010). We propose a new dual-route approach to the ORE, in which there are two causes of the ORE, lack of motivation and lack of experience, that contribute differently across varying world locations and cultural settings. Copyright © 2015 Elsevier B.V. All rights reserved.
Herzmann, Grit; Curran, Tim
People have a memory advantage for faces that belong to the same group, for example, that attend the same university or have the same personality type. Faces from such in-group members are assumed to receive more attention during memory encoding and are therefore recognized more accurately. Here we use event-related potentials related to memory encoding and retrieval to investigate the neural correlates of the in-group memory advantage. Using the minimal group procedure, subjects were classified based on a bogus personality test as belonging to one of two personality types. While the electroencephalogram was recorded, subjects studied and recognized faces supposedly belonging to the subject's own and the other personality type. Subjects recognized in-group faces more accurately than out-group faces but the effect size was small. Using the individual behavioral in-group memory advantage in multivariate analyses of covariance, we determined neural correlates of the in-group advantage. During memory encoding (300 to 1000 ms after stimulus onset), subjects with a high in-group memory advantage elicited more positive amplitudes for subsequently remembered in-group than out-group faces, showing that in-group faces received more attention and elicited more neural activity during initial encoding. Early during memory retrieval (300 to 500 ms), frontal brain areas were more activated for remembered in-group faces indicating an early detection of group membership. Surprisingly, the parietal old/new effect (600 to 900 ms) thought to indicate recollection processes differed between in-group and out-group faces independent from the behavioral in-group memory advantage. This finding suggests that group membership affects memory retrieval independent of memory performance. Comparisons with a previous study on the other-race effect, another memory phenomenon influenced by social classification of faces, suggested that the in-group memory advantage is dominated by top
Enticott, Peter G; Kennedy, Hayley A; Johnston, Patrick J; Rinehart, Nicole J; Tonge, Bruce J; Taffe, John R; Fitzgerald, Paul B
There is substantial evidence for facial emotion recognition (FER) deficits in autism spectrum disorder (ASD). The extent of this impairment, however, remains unclear, and there is some suggestion that clinical groups might benefit from the use of dynamic rather than static images. High-functioning individuals with ASD (n = 36) and typically developing controls (n = 36) completed a computerised FER task involving static and dynamic expressions of the six basic emotions. The ASD group showed poorer overall performance in identifying anger and disgust and were disadvantaged by dynamic (relative to static) stimuli when presented with sad expressions. Among both groups, however, dynamic stimuli appeared to improve recognition of anger. This research provides further evidence of specific impairment in the recognition of negative emotions in ASD, but argues against any broad advantages associated with the use of dynamic displays.
Bhattacharjee, Debotosh; Seal, Ayan; Ganguly, Suranjan; Nasipuri, Mita; Basu, Dipak Kumar
Thermal infrared (IR) images capture the temperature distribution over facial muscles and blood vessels, and these temperature changes can be regarded as texture features of the images. A comparative study of two face recognition methods working in the thermal spectrum is carried out in this paper. In the first approach, the training and test images are processed with the Haar wavelet transform, and the LL band and the average of the LH/HL/HH bands are created for each face image. A total confidence matrix is then formed for each face image by taking a weighted sum of the corresponding pixel values of the LL band and the average band. For LBP feature extraction, each face image in the training and test datasets is divided into 161 subimages, each of size 8 × 8 pixels. For each such subimage, LBP features are extracted and then concatenated. PCA is performed separately on each feature set for dimensionality reduction. Finally, two different classifiers, a multilayer feed-forward neural network and a minimum distance classifier, are used to classify the face images. The experiments were performed on a database created in our own laboratory and on the Terravic Facial IR Database.
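The first approach's confidence matrix can be sketched with a one-level Haar decomposition; the subband weights below are hypothetical, not the paper's values:

```python
import numpy as np

def haar_level1(img):
    """One-level 2-D Haar decomposition into LL, LH, HL, HH subbands
    (averaging variant, so a constant image gives LL equal to the image)."""
    a = img.astype(float)
    # Transform along rows
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    # Transform along columns
    LL = (lo[0::2, :] + lo[1::2, :]) / 2.0
    LH = (lo[0::2, :] - lo[1::2, :]) / 2.0
    HL = (hi[0::2, :] + hi[1::2, :]) / 2.0
    HH = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return LL, LH, HL, HH

def confidence_matrix(img, w_ll=0.7, w_detail=0.3):
    """Weighted sum of the LL band and the mean of the detail bands;
    the weights are illustrative assumptions."""
    LL, LH, HL, HH = haar_level1(img)
    detail = (LH + HL + HH) / 3.0
    return w_ll * LL + w_detail * detail
```

Matching would then compare the confidence matrices of the test image against those of the enrolled faces.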
Full Text Available Recognizing others' emotional states is crucial for effective social interaction. While most facial emotion recognition tasks use explicit prompts that trigger consciously controlled processing, emotional faces are almost exclusively processed implicitly in real life. Recent attempts in social cognition suggest a dual-process perspective, whereby explicit and implicit processes largely operate independently. However, due to differences in methodology, the direct comparison of implicit and explicit social cognition has remained a challenge. Here, we introduce a new tool to comparably measure implicit and explicit processing aspects comprising basic and complex emotions in facial expressions. We developed two video-based tasks with similar answer formats to assess performance in the respective facial emotion recognition processes: Face Puzzle, implicit and explicit. To assess the tasks' sensitivity to atypical social cognition and to infer interrelationship patterns between explicit and implicit processes in typical and atypical development, we included healthy adults (NT, n = 24) and adults with autism spectrum disorder (ASD, n = 24). Item analyses yielded good reliability of the new tasks. Group-specific results indicated sensitivity to subtle social impairments in high-functioning ASD. Correlation analyses with established implicit and explicit socio-cognitive measures were further in favor of the tasks' external validity. Between-group comparisons provide first hints of differential relations between implicit and explicit aspects of facial emotion recognition processes in healthy compared to ASD participants. In addition, an increased magnitude of between-group differences in the implicit task was found for a speed-accuracy composite measure. The new Face Puzzle tool thus provides two new tasks to separately assess explicit and implicit social functioning, for instance, to measure subtle impairments as well as potential improvements due to social
He, Yi; Ebner, Natalie C.; Johnson, Marcia K.
Younger and older adults’ visual scan patterns were examined as they passively viewed younger and older neutral faces. Both participant age groups tended to look longer at their own-age as compared to other-age faces. In addition, both age groups reported more exposure to own-age than other-age individuals. Importantly, the own-age bias in visual inspection of faces and the own-age bias in self-reported amount of exposure to young and older individuals in everyday life, but not explicit age s...
Matera, G.; Liberto, M.C.; Joosten, L.A.B.; Vinci, M.; Quirino, A.; Pulicari, M.C.; Kullberg, B.J.; Meer, J.W.M. van der; Netea, M.G.; Foca, A.
Bartonella quintana (B. quintana) is a facultative, intracellular bacterium, which causes trench fever, chronic bacteraemia and bacillary angiomatosis. Little is known about the recognition of B. quintana by the innate immune system. In this review, we address the impact of Toll-like receptors
Full Text Available This paper presents a method for recognizing human faces with facial expression. In the proposed approach, a motion history image (MHI) is employed to capture the features of an expressive face. The face can be seen as a physiological characteristic of a human, and the expressions as behavioral characteristics. We fused the 2D images of a face with MHIs generated from the same face's image sequences with expression. The fused features were then used to feed a 7-layer deep learning neural network. The first 6 layers of the network form an autoencoder that reduces the dimensionality of the fused features; the last layer is a softmax regression used to obtain the identification decision. Experimental results demonstrate that our proposed method performs favorably against several state-of-the-art methods.
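The motion history image can be computed with the standard decay formulation; this sketch uses arbitrary `tau` and `thresh` values, not the paper's settings:

```python
import numpy as np

def motion_history_image(frames, tau=10, thresh=15):
    """MHI: pixels that moved in the most recent frame get value tau,
    older motion decays by 1 per frame; result is normalised to [0, 1].
    `tau` and `thresh` are illustrative assumptions."""
    mhi = np.zeros(frames[0].shape, dtype=float)
    for prev, cur in zip(frames, frames[1:]):
        # Motion mask: frame difference above threshold
        moving = np.abs(cur.astype(int) - prev.astype(int)) > thresh
        mhi = np.where(moving, float(tau), np.maximum(mhi - 1.0, 0.0))
    return mhi / tau
```

In the fusion step described above, the flattened face image and its MHI would be concatenated and fed to the autoencoder layers.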
Dalili, Michael N; Schofield-Toloza, Lawrence; Munafò, Marcus R; Penton-Voak, Ian S
Many cognitive bias modification (CBM) tasks use facial expressions of emotion as stimuli. Some tasks use unique facial stimuli, while others use composite stimuli, given evidence that emotion is encoded prototypically. However, CBM using composite stimuli may be identity- or emotion-specific, and may not generalise to other stimuli. We investigated the generalisability of effects using composite faces in two experiments. Healthy adults in each study were randomised to one of four training conditions: two stimulus-congruent conditions, where the same faces were used during all phases of the task, and two stimulus-incongruent conditions, where faces of the opposite sex (Experiment 1) or faces depicting another emotion (Experiment 2) were used after the modification phase. Our results suggested that training effects generalised across identities. However, our results indicated only partial generalisation across emotions. These findings suggest effects obtained using composite stimuli may extend beyond the stimuli used in the task but remain emotion-specific.
In this article, I shall examine the cognitive, heuristic and theoretical functions of the concept of recognition. To evaluate both the explanatory power and the limitations of a sociological concept, the theory construction must be analysed and its actual productivity for sociological theory must be evaluated. In the first section, I will introduce the concept of recognition as a travelling concept playing a role both on the intellectual stage and in real life. In the second section, I will concentrate on the presentation of Honneth's theory of recognition, emphasizing the construction of the concept and its explanatory power. Finally, I will discuss Honneth's concept in relation to the critique that has been raised, addressing the debate between Honneth and Fraser. In a short conclusion, I will return to the question of the explanatory power of the concept of recognition.
J. Yoga Narasimhalu Naidu
Full Text Available The Agriculture Technology Management Agency (ATMA) is a registered society in India whose key stakeholders are engaged in various agricultural activities for sustainable agricultural development in the state, with a focus at the district level. It serves as a hub for integrating research, extension and marketing activities and for decentralizing day-to-day management of the public Agricultural Technology Development and Dissemination System. The present study was carried out in the state of Andhra Pradesh to explore the constraints faced by extension functionaries at each level of decentralized management. Constraints perceived by farmers in realizing their needs with the support of ATMA were also studied.
Full Text Available The purpose of this review was to build upon a recent review by Weigelt et al., which examined visual search strategies and face identification in individuals with autism spectrum disorders (ASD) and typically developing peers. Seven databases (CINAHL Plus, EMBASE, ERIC, Medline, ProQuest, PsycINFO and PubMed) were used to locate published scientific studies matching our inclusion criteria. A total of 28 articles not included in Weigelt et al. met the criteria for inclusion in this systematic review. Of these 28 studies, 16 were available and met the criteria at the time of the previous review but were mistakenly excluded, and 12 were recently published. Weigelt et al. found quantitative, but not qualitative, differences in face identification in individuals with ASD. In contrast, the current systematic review found both qualitative and quantitative differences in face identification between individuals with and without ASD. There is large inconsistency in findings across the eye tracking and neurobiological studies reviewed. Recommendations for future research on face recognition in ASD are discussed.
Full Text Available Automatic authentication systems using biometric technology are becoming increasingly important with the growing need for person verification in daily life. A few years back, fingerprint verification was used only in criminal investigations; now fingerprints and face images are widely used at bank tellers, airports, and building entrances. Face images are easy to obtain, but successful recognition depends on the orientation and illumination of the image matching those at registration time. Facial features change heavily with illumination and orientation angle, leading to increased false rejection as well as false acceptance. Registering face images for all possible angles and illuminations is impossible. In this work, we propose a memory-efficient way to register (store) face image data across multiple angles and changing illumination, and a computationally efficient authentication technique using a multilayer perceptron (MLP). Though the MLP is trained using only a few registered images with different orientations, its generalization property makes it possible to interpolate features for intermediate orientation angles. The algorithm is further extended to an illumination-robust authentication system. Results of extensive experiments verify the effectiveness of the proposed algorithm.
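The pose-interpolation idea rests on the generalization of a small MLP. A toy from-scratch MLP can illustrate this on hypothetical two-dimensional "pose features" (here crudely encoded as the cosine and sine of the pose angle); this is a sketch, not the paper's network:

```python
import numpy as np

def train_mlp(X, y, hidden=8, lr=0.5, epochs=2000, seed=0):
    """Minimal one-hidden-layer sigmoid MLP trained with batch gradient
    descent on binary cross-entropy; returns a prediction function."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, 1));          b2 = np.zeros(1)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        H = sig(X @ W1 + b1)                 # hidden activations
        p = sig(H @ W2 + b2).ravel()         # genuine/impostor score
        d2 = (p - y)[:, None] / len(y)       # grad of cross-entropy wrt output logit
        d1 = (d2 @ W2.T) * H * (1.0 - H)     # backprop through hidden layer
        W2 -= lr * (H.T @ d2); b2 -= lr * d2.sum(0)
        W1 -= lr * (X.T @ d1); b1 -= lr * d1.sum(0)
    return lambda Xq: sig(sig(Xq @ W1 + b1) @ W2 + b2).ravel()

# Hypothetical data: one subject registered at -30 and +30 degrees of pose
# (features are [cos, sin] of the angle) versus shifted impostor features.
angles = np.deg2rad([-30.0, 30.0, -30.0, 30.0])
X = np.column_stack([np.cos(angles), np.sin(angles)])
X[2:] += 1.5                                 # impostor features, shifted away
y = np.array([1.0, 1.0, 0.0, 0.0])
predict = train_mlp(X, y)
```

Querying at the unseen frontal pose (`[cos 0, sin 0]`) should still score as genuine, which is the interpolation behaviour the abstract relies on.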
Wang, Yulong; Tang, Yuan Yan; Li, Luoqing
As an efficient sparse representation algorithm, orthogonal matching pursuit (OMP) has attracted massive attention in recent years. However, OMP and most of its variants estimate the sparse vector using the mean square error criterion, which depends on the assumption of a Gaussian error distribution. A violation of this assumption, e.g., non-Gaussian noise, may lead to performance degradation. In this paper, a correntropy matching pursuit (CMP) method is proposed to alleviate this problem. Unlike many other matching pursuit methods, our method is independent of the error distribution. We show that CMP can adaptively assign small weights to severely corrupted entries of the data and large weights to clean ones, thus reducing the effect of large noise. Our second contribution is a robust sparse representation-based recognition method built on CMP. Experiments on synthetic and real data show the effectiveness of our method for both sparse approximation and pattern recognition, especially for noisy, corrupted, and incomplete data.
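The correntropy criterion can be minimized by iteratively reweighted least squares (a half-quadratic strategy); the sketch below shows only the weighting principle on a plain regression problem, not the full matching pursuit, and the kernel width is an arbitrary choice:

```python
import numpy as np

def correntropy_weighted_ls(A, y, sigma=2.0, iters=10):
    """Robust regression under the correntropy (Welsch) loss via
    iteratively reweighted least squares: entries with large residuals
    receive exponentially small weights, so outliers barely count."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]      # ordinary LS warm start
    for _ in range(iters):
        r = y - A @ x
        w = np.exp(-r**2 / (2.0 * sigma**2))      # correntropy-induced weights
        Aw = A * w[:, None]
        x = np.linalg.solve(A.T @ Aw, Aw.T @ y)   # weighted normal equations
    return x, w
```

Given a clean linear system with one grossly corrupted entry, the corrupted row ends up with near-zero weight and the remaining rows recover the true coefficients.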
Sunday, Mackenzie A; Lee, Woo-Yeol; Gauthier, Isabel
The presence of differential item functioning (DIF) in a test suggests bias that could disadvantage members of a certain group. Previous work with tests of visual learning abilities found significant DIF related to age groups in a car test (Lee, Cho, McGugin, Van Gulick, & Gauthier, 2015), but not in a face test (Cho et al., 2015). The presence of age DIF is a threat to the validity of the test even for studies where aging is not of interest. Here, we assessed whether this pattern of age DIF for cars and not faces would also apply to new tests targeting the same abilities with a new matching task that uses two studied items per trial. We found evidence for DIF in matching tests for faces and for cars, though with encouragingly small effect sizes. Even though the age DIF was small enough at the test level to be acceptable for most uses, we also asked whether the specific format of our matching tasks may induce some age-related DIF regardless of domain. We decomposed the face matching task into its components, and using new data from subjects performing these simpler tasks, found evidence that the age DIF was driven by the similarity of the two faces presented at study on each trial. Overall, our results suggest that using a matching format, especially for cars, reduces age-related DIF, and that a simpler matching task with only one study item per trial could reduce age DIF further.
Nieminen-von Wendt, Taina; Paavonen, Juulia E; Ylisaukko-Oja, Tero; Sarenius, Susan; Källman, Tiia; Järvelä, Irma; von Wendt, Lennart
Background The present study was undertaken in order to determine whether a set of clinical features, which are not included in the DSM-IV or ICD-10 for Asperger Syndrome (AS), are associated with AS in particular or whether they are merely a familial trait that is not related to the diagnosis. Methods Ten large families, a total of 138 persons, of whom 58 individuals fulfilled the diagnostic criteria for AS and another 56 did not fulfill these criteria, were studied using a structured interview focusing on the possible presence of face recognition difficulties, aberrant sensibility and eating habits, and sleeping disturbances. Results The prevalence of face recognition difficulties was 46.6% in individuals with AS compared with 10.7% in the control group. The corresponding figures for subjectively reported aberrant sensibilities were 91.4% and 46.6%, for sleeping disturbances 48.3% and 23.2%, and for aberrant eating habits 60.3% and 14.3%, respectively. Conclusion An aberrant processing of sensory information appears to be a common feature in AS. The impact of these and other clinical features that are not incorporated in the ICD-10 and DSM-IV on our understanding of AS may hitherto have been underestimated. These associated clinical traits may well be reflected by the behavioural characteristics of these individuals. PMID:15826308
Li, Qin; Wang, Hua Jing; You, Jane; Li, Zhao Ming; Li, Jin Xue
In some large-scale face recognition tasks, such as driver license identification and law enforcement, the training set contains only one image per person. This situation is referred to as the one sample problem. Because many face recognition techniques implicitly assume that several (at least two) images per person are available for training, they cannot deal with the one sample problem. This paper investigates principal component analysis (PCA), Fisher linear discriminant analysis (LDA), and locality preserving projections (LPP) and shows why they cannot perform well on the one sample problem. After that, this paper presents four reasons that make the one sample problem itself difficult: the small sample size problem; the lack of representative samples; the underestimated intra-class variation; and the overestimated inter-class variation. Based on this analysis, the paper proposes to enlarge the training set based on the inter-class relationship, and extends LDA and LPP to extract features from the enlarged training set. The experimental results show the effectiveness of the proposed method.
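Why Fisher LDA fails in the one-sample setting can be seen directly from its within-class scatter matrix: with a single image per person, each class mean equals its only sample, so the scatter is identically zero and the Fisher criterion is undefined. A minimal numpy sketch of this observation (illustrative random data, not the paper's experiments):

```python
import numpy as np

# Hypothetical illustration: one training image per person means every
# class mean equals its single sample, so the within-class scatter S_w
# is exactly zero and LDA's criterion w'S_b w / w'S_w w is undefined.

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 16))    # 5 persons, one 16-dim feature vector each
labels = np.arange(5)           # one class per sample

S_w = np.zeros((16, 16))
for c in np.unique(labels):
    Xc = X[labels == c]
    mu = Xc.mean(axis=0)
    S_w += (Xc - mu).T @ (Xc - mu)   # each (Xc - mu) row is all zeros here

print(np.allclose(S_w, 0))  # True: within-class scatter vanishes
```

PCA, by contrast, only needs the global covariance, which is why it remains applicable (if suboptimal) with one sample per class.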
Actis-Grosso, Rossana; Bossi, Francesco; Ricciardelli, Paola
We investigated whether the type of stimulus (pictures of static faces vs. body motion) contributes differently to the recognition of emotions. The performance (accuracy and response times) of 25 Low Autistic Traits (LAT group) young adults (21 males) and 20 young adults (16 males) with either High Autistic Traits or with High Functioning Autism Spectrum Disorder (HAT group) was compared in the recognition of four emotions (Happiness, Anger, Fear, and Sadness) either shown in static faces or conveyed by moving body patch-light displays (PLDs). Overall, HAT individuals were as accurate as LAT ones in perceiving emotions both with faces and with PLDs. Moreover, they correctly described non-emotional actions depicted by PLDs, indicating that they perceived the motion conveyed by the PLDs per se. For LAT participants, happiness proved to be the easiest emotion to be recognized: in line with previous studies we found a happy face advantage for faces, which for the first time was also found for bodies (happy body advantage). Furthermore, LAT participants recognized sadness better by static faces and fear by PLDs. This advantage for motion kinematics in the recognition of fear was not present in HAT participants, suggesting that (i) emotion recognition is not generally impaired in HAT individuals, (ii) the cues exploited for emotion recognition by LAT and HAT groups are not always the same. These findings are discussed against the background of emotional processing in typically and atypically developed individuals.
Herrington, John D.; Riley, Meghan E.; Grupe, Daniel W.; Schultz, Robert T.
This study examines whether deficits in visual information processing in autism-spectrum disorder (ASD) can be offset by the recruitment of brain structures involved in selective attention. During functional MRI, 12 children with ASD and 19 control participants completed a selective attention one-back task in which images of faces and houses were…
Azzopardi, George; Greco, Antonio; Saggese, Alessia; Vento, Mario
The popularity and the appeal of systems which are able to automatically determine the gender from face images is growing rapidly. Such a great interest arises from the wide variety of applications, especially in the fields of retail and video surveillance. In recent years there have been several
Dutta, A.; Veldhuis, Raymond N.J.; Spreeuwers, Lieuwe Jan
Quality of a pair of facial images is a strong indicator of the uncertainty in decision about identity based on that image pair. In this paper, we describe a Bayesian approach to model the relation between image quality (like pose, illumination, noise, sharpness, etc) and corresponding face
Ersche, K D; Hagan, C C; Smith, D G; Jones, P S; Calder, A J; Williams, G B
The ability to recognize facial expressions of emotion in others is a cornerstone of human interaction. Selective impairments in the recognition of facial expressions of fear have frequently been reported in chronic cocaine users, but the nature of these impairments remains poorly understood. We used the multivariate method of partial least squares and structural magnetic resonance imaging to identify gray matter brain networks that underlie facial affect processing in both cocaine-dependent (n = 29) and healthy male volunteers (n = 29). We hypothesized that disruptions in neuroendocrine function in cocaine-dependent individuals would explain their impairments in fear recognition by modulating the relationship with the underlying gray matter networks. We found that cocaine-dependent individuals not only exhibited significant impairments in the recognition of fear, but also for facial expressions of anger. Although recognition accuracy of threatening expressions co-varied in all participants with distinctive gray matter networks implicated in fear and anger processing, in cocaine users it was less well predicted by these networks than in controls. The weaker brain-behavior relationships for threat processing were also mediated by distinctly different factors. Fear recognition impairments were influenced by variations in intelligence levels, whereas anger recognition impairments were associated with comorbid opiate dependence and related reduction in testosterone levels. We also observed an inverse relationship between testosterone levels and the duration of crack and opiate use. Our data provide novel insight into the neurobiological basis of abnormal threat processing in cocaine dependence, which may shed light on new opportunities facilitating the psychosocial integration of these patients.
Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine if a given individual has already appeared over the camera network. Individual recognition often uses faces as a trial and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitation of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the "small sample size" (SSS) problem arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still leave two questions open: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0-1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to the state-of-the-art system.
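The classifier-selection step described above can be sketched as a standard 0-1 knapsack: treat each base classifier's diversity contribution as its value and its (integer-scaled) error rate as its weight, then maximize total diversity under an error budget. The paper's tailored formulation is not reproduced here; all numbers below are illustrative assumptions:

```python
# Minimal 0-1 knapsack sketch for ensemble selection: dynamic programming
# over (classifier index, remaining error budget), then a backtrack to
# recover which base classifiers were chosen.

def knapsack(values, weights, capacity):
    n = len(values)
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(capacity + 1):
            dp[i][w] = dp[i - 1][w]
            if weights[i - 1] <= w:
                dp[i][w] = max(dp[i][w],
                               dp[i - 1][w - weights[i - 1]] + values[i - 1])
    # Backtrack to recover the chosen classifier indices.
    chosen, w = [], capacity
    for i in range(n, 0, -1):
        if dp[i][w] != dp[i - 1][w]:
            chosen.append(i - 1)
            w -= weights[i - 1]
    return dp[n][capacity], sorted(chosen)

# values = diversity scores, weights = scaled error rates, capacity = error budget
best, picked = knapsack(values=[6, 5, 4], weights=[5, 4, 3], capacity=8)
print(best, picked)  # prints: 10 [0, 2]
```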
Borgi, Marta; Cirulli, Francesca
Accumulating behavioral and neurophysiological studies support the idea of infantile (cute) faces as highly biologically relevant stimuli rapidly and unconsciously capturing attention and eliciting positive/affectionate behaviors, including willingness to care. It has been hypothesized that the presence of infantile physical and behavioral features in companion (or pet) animals (i.e., dogs and cats) might form the basis of our attraction to these species. Preliminary evidence has indeed shown that the human attentional bias toward the baby schema may extend to animal facial configurations. In this review, the role of facial cues, specifically of infantile traits and facial signals (i.e., eyes gaze) as emotional and communicative signals is highlighted and discussed as regulating the human-animal bond, similarly to what can be observed in the adult-infant interaction context. Particular emphasis is given to the neuroendocrine regulation of the social bond between humans and animals through oxytocin secretion. Instead of considering companion animals as mere baby substitutes for their owners, in this review we highlight the central role of cats and dogs in human lives. Specifically, we consider the ability of companion animals to bond with humans as fulfilling the need for attention and emotional intimacy, thus serving similar psychological and adaptive functions as human-human friendships. In this context, facial cuteness is viewed not just as a releaser of care/parental behavior, but, more in general, as a trait motivating social engagement. To conclude, the impact of this information for applied disciplines is briefly described, particularly in consideration of the increasing evidence of the beneficial effects of contacts with animals for human health and wellbeing.
This paper presents a mesh-based approach for 3D face recognition using a novel local shape descriptor and a SIFT-like matching process. Both maximum and minimum curvatures estimated in the 3D Gaussian scale space are employed to detect salient points. To comprehensively characterize 3D facial surfaces and their variations, we calculate weighted statistical distributions of multiple-order surface differential quantities, including the histogram of mesh gradient (HoG), histogram of shape index (HoS) and histogram of gradient of shape index (HoGS), within a local neighborhood of each salient point. The subsequent matching step then robustly associates corresponding points of two facial surfaces, leading to many more matched points between different scans of the same person than between scans of different persons. Experimental results on the Bosphorus dataset highlight the effectiveness of the proposed method and its robustness to facial expression variations. © 2011 IEEE.
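The shape index underlying the HoS descriptor maps the two principal curvatures of a surface point to a single value; a hedged sketch of Koenderink and van Doorn's form, rescaled to [0, 1] (the paper's exact normalization may differ, and the curvature values below are illustrative):

```python
import numpy as np

# Shape index sketch: 0 is a spherical cup, 0.5 a saddle, 1 a spherical cap.
# Using arctan2 handles the umbilic case k1 == k2 (zero denominator) cleanly.

def shape_index(k1, k2):
    k1, k2 = np.maximum(k1, k2), np.minimum(k1, k2)  # ensure k1 >= k2
    return 0.5 + (1.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)

print(shape_index(1.0, 1.0))    # ≈ 1.0  spherical cap
print(shape_index(-1.0, -1.0))  # ≈ 0.0  spherical cup
print(shape_index(1.0, -1.0))   # ≈ 0.5  saddle
```

A HoS-style descriptor would then histogram these values over a neighborhood of each salient point.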
Høyland, Anne Lise; Nærland, Terje; Engstrøm, Morten; Lydersen, Stian; Andreassen, Ole Andreas
An altered processing of emotions may contribute to a reduced ability for social interaction and communication in autism spectrum disorder, ASD. We investigated how face-emotion recognition in ASD differs from typically developing across adolescent age groups. Fifty adolescents diagnosed with ASD and 49 typically developing (age 12-21 years) were included. The ASD diagnosis was underpinned by the parent-rated Social Communication Questionnaire. We used a cued GO/NOGO task with pictures of facial expressions and recorded reaction time, intra-individual variability of reaction time and omissions/commissions. The Social Responsiveness Scale was used as a measure of social function. Analyses were conducted for the whole group and for young (< 16 years) and old (≥ 16 years) age groups. Our findings suggest an age-dependent association between emotion recognition and severity of social problems, indicating a delayed development of emotional understanding in ASD. It also points towards alterations in top-down attention control in the ASD group. This suggests novel disease-related features that should be investigated in more detail in experimental settings.
Wallis, D. J.; Ridout, N.; Sharpe, E.
The file attached to this record is the author's final peer reviewed version. The Publisher's final version can be found by following the DOI link Emotion recognition deficits have consistently been reported in clinical and sub-clinical disordered eating. However, most studies have used static faces, despite the dynamic nature of everyday social interactions. The current aims were to confirm previous findings of emotion recognition deficits in non-clinical disordered eating and to determin...
Baracchi, D.; Petrocelli, I.; Chittka, L.; Ricciardi, G.; Turillazzi, S.
Social insects have evolved sophisticated recognition systems enabling them to accept nest-mates but reject alien conspecifics. In the social wasp, Liostenogaster flavolineata (Stenogastrinae), individuals differ in their cuticular hydrocarbon profiles according to colony membership; each female also possesses a unique (visual) facial pattern. This species represents a unique model to understand how vision and olfaction are integrated and the extent to which wasps prioritize one channel over the other to discriminate aliens and nest-mates. Liostenogaster flavolineata females are able to discriminate between alien and nest-mate females using facial patterns or chemical cues in isolation. However, the two sensory modalities are not equally efficient in the discrimination of ‘friend’ from ‘foe’. Visual cues induce an increased number of erroneous attacks on nest-mates (false alarms), but such attacks are quickly aborted and never result in serious injury. Odour cues, presented in isolation, result in an increased number of misses: erroneous acceptances of outsiders. Interestingly, wasps take the relative efficiencies of the two sensory modalities into account when making rapid decisions about colony membership of an individual: chemical profiles are entirely ignored when the visual and chemical stimuli are presented together. Thus, wasps adopt a strategy to ‘err on the safe side’ by memorizing individual faces to recognize colony members, and disregarding odour cues to minimize the risk of intrusion from colony outsiders. PMID:25652836
Li, Jun; Liang, Jimin; Tian, Jie; Liu, Jiangang; Zhao, Jizheng; Zhang, Hui; Shi, Guangming
Although the top-down perceptual process plays an important role in face processing, its neural substrate remains puzzling because the top-down stream is difficult to extract from an activation pattern contaminated by bottom-up face perception input. In the present study, a novel paradigm instructing participants to detect faces in pure noise images is employed, which can efficiently eliminate the interference of bottom-up face perception in top-down face processing. Analyzing the map of functional connectivity with the right FFA, computed using conventional Pearson's correlation, a possible face processing pattern induced by top-down perception can be obtained. Apart from the brain areas of bilateral fusiform gyrus (FG), left inferior occipital gyrus (IOG) and left superior temporal sulcus (STS), which are consistent with a core system in the distributed cortical network for face perception, activation induced by top-down face processing is also found in regions that include the anterior cingulate gyrus (ACC), right orbitofrontal cortex (OFC), left precuneus, right parahippocampal cortex, left dorsolateral prefrontal cortex (DLPFC), right frontal pole, bilateral premotor cortex, left inferior parietal cortex and bilateral thalamus. The results indicate that decision-making, attention, episodic memory retrieval and contextual associative processing networks cooperate with general face processing regions to process face information under top-down perception.
Golan, Ofer; Sinai-Gavrilov, Yana; Baron-Cohen, Simon
Difficulties in recognizing emotions and mental states are central characteristics of autism spectrum conditions (ASC). However, emotion recognition (ER) studies have focused mostly on recognition of the six 'basic' emotions, usually using still pictures of faces. This study describes a new battery of tasks for testing recognition of nine complex emotions and mental states from video clips of faces and from voice recordings taken from the Mindreading DVD. This battery (the Cambridge Mindreading Face-Voice Battery for Children or CAM-C) was given to 30 high-functioning children with ASC, aged 8 to 11, and to 25 matched controls. The ASC group scored significantly lower than controls on complex ER from faces and voices. In particular, participants with ASC had difficulty with six out of nine complex emotions. Age was positively correlated with all task scores, and verbal IQ was correlated with scores in the voice task. CAM-C scores were negatively correlated with parent-reported level of autism spectrum symptoms. Children with ASC show deficits in recognition of complex emotions and mental states from both facial and vocal expressions. The CAM-C may be a useful test for endophenotypic studies of ASC and is one of the first to use dynamic stimuli as an assay to reveal the ER profile in ASC. It complements the adult version of the CAM Face-Voice Battery, thus providing opportunities for developmental assessment of social cognition in autism.
Edmund T eRolls
Neurophysiological evidence for invariant representations of objects and faces in the primate inferior temporal visual cortex is described. Then a computational approach to how invariant representations are formed in the brain is described that builds on the neurophysiology. A feature hierarchy model, VisNet, is described in which invariant representations can be built by self-organizing learning based on the temporal and spatial statistics of the visual input produced by objects as they transform in the world. VisNet can use temporal continuity in an associative synaptic learning rule with a short-term memory trace, and/or it can use spatial continuity in Continuous Spatial Transformation learning, which does not require a temporal trace. The model of visual processing in the ventral cortical stream can build representations of objects that are invariant with respect to translation, view, size, and also lighting. The model has been extended to provide an account of invariant representations in the dorsal visual system of the global motion produced by objects such as looming, rotation, and object-based movement. The model has been extended to incorporate top-down feedback connections to model the control of attention by biased competition in, for example, spatial and object search tasks. The model has also been extended to account for how the visual system can select single objects in complex visual scenes, and how multiple objects can be represented in a scene. The model has also been extended to provide, with an additional layer, for the development of representations of spatial scenes of the type found in the hippocampus.
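The temporal-trace associative rule mentioned above can be sketched as follows: the postsynaptic activity trace decays exponentially, so the Hebbian update associates inputs that arrive close together in time, such as successive transforms of the same object. This is a minimal sketch, not VisNet's implementation; the parameter values (eta, lam) and the example inputs are illustrative:

```python
import numpy as np

# Memory-trace Hebbian rule sketch: y_trace blends the current postsynaptic
# firing with its previous value, and the weight update uses the trace
# rather than the instantaneous activity.

def trace_update(w, x_seq, eta=0.1, lam=0.8):
    y_trace = 0.0
    for x in x_seq:                                 # presynaptic input at time t
        y = float(w @ x)                            # postsynaptic firing
        y_trace = (1 - lam) * y + lam * y_trace     # exponential memory trace
        w = w + eta * y_trace * x                   # Hebbian update via the trace
    return w

# Two successive inputs (e.g. two transforms of one object) both strengthen
# the weights because the trace carries activity across time steps.
w_new = trace_update(np.ones(2), [np.array([1.0, 0.0]), np.array([0.0, 1.0])])
```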
Solomonova, Elizaveta; Stenstrom, Philippe; Schon, Emilie; Duquette, Alexandra; Dubé, Simon; O'Reilly, Christian; Nielsen, Tore
Face recognition is a highly specialized capability that has implicit and explicit memory components. Studies show that learning tasks with facial components are dependent on rapid eye movement and non-rapid eye movement sleep features, including rapid eye movement sleep density and fast sleep spindles. This study aimed to investigate the relationship between sleep-dependent consolidation of memory for faces and partial rapid eye movement sleep deprivation, rapid eye movement density, and fast and slow non-rapid eye movement sleep spindles. Fourteen healthy participants spent 1 night each in the laboratory. Prior to bed they completed a virtual reality task in which they interacted with computer-generated characters. Half of the participants (REMD group) underwent a partial rapid eye movement sleep deprivation protocol and half (CTL group) had a normal amount of rapid eye movement sleep. Upon awakening, they completed a face recognition task that contained a mixture of previously encountered faces from the task and new faces. Rapid eye movement density and fast and slow sleep spindles were detected using in-house software. The REMD group performed worse than the CTL group on the face recognition task; however, rapid eye movement duration and rapid eye movement density were not related to task performance. Fast and slow sleep spindles showed differential relationships to task performance, with fast spindles being positively and slow spindles negatively correlated with face recognition. The results support the notion that rapid eye movement and non-rapid eye movement sleep characteristics play complementary roles in face memory consolidation. This study also raises the possibility that fast and slow spindles contribute in opposite ways to sleep-dependent memory consolidation. © 2017 European Sleep Research Society.
White, S.J.; Louis, D.S.; Braunstein, E.M.; Hankin, F.M.; Greene, T.L.
Videotape fluoroscopy was used to diagnose a previously undescribed carpal dissociation, the capitate lunate instability pattern. In eight patients with midcarpal pain and clicking, the examiner simultaneously applied pressure to the scaphoid tuberosity while applying longitudinal traction and flexion to the wrist under fluoroscopic control. This maneuver revealed dorsal subluxation of the proximal carpal row and capitate lunate subluxation in each of the eight patients. Plain radiography and arthrography were not helpful in the diagnosis. All eight cases were managed conservatively. Videotape fluoroscopy is the best radiologic method of diagnosing capitate-lunate instability
Gerasimenko, N Iu; Slavutskaia, A V; Kalinin, S A; Kulikov, M A; Mikhaĭlova, E S
In 38 healthy subjects, accuracy and response time were examined during recognition of two categories of images, animals and nonliving objects, under forward masking. We obtained new data showing that masking effects depended on the categorical similarity of target and masking stimuli. Recognition accuracy was lowest and response times were slowest when the target and masking stimuli belonged to the same category, which was combined with high dispersion of response times. The revealed effects were clearer in the task of animal recognition than in the recognition of nonliving objects. We suppose that these effects arise from interference between cortical representations of the target and masking stimuli, and we discuss our results in the context of cortical interference and negative priming.
Wang, Hailing; Ip, Chengteng; Fu, Shimin; Sun, Pei
Face recognition theories suggest that our brains process invariant (e.g., gender) and changeable (e.g., emotion) facial dimensions separately. To investigate whether these two dimensions are processed in different time courses, we analyzed the selection negativity (SN, an event-related potential component reflecting attentional modulation) elicited by face gender and emotion during a feature selective attention task. Participants were instructed to attend to a combination of face emotion and gender attributes in Experiment 1 (bi-dimensional task) and to either face emotion or gender in Experiment 2 (uni-dimensional task). The results revealed that face emotion did not elicit a substantial SN, whereas face gender consistently generated a substantial SN in both experiments. These results suggest that face gender is more sensitive to feature-selective attention and that face emotion is encoded relatively automatically on SN, implying the existence of different underlying processing mechanisms for invariant and changeable facial dimensions. Copyright © 2017 Elsevier Ltd. All rights reserved.
Recognition memory for low- and high-frequency-filtered emotional faces: Low spatial frequencies drive emotional memory enhancement, whereas high spatial frequencies drive the emotion-induced recognition bias.
Rohr, Michaela; Tröger, Johannes; Michely, Nils; Uhde, Alarith; Wentura, Dirk
This article deals with two well-documented phenomena regarding emotional stimuli: emotional memory enhancement-that is, better long-term memory for emotional than for neutral stimuli-and the emotion-induced recognition bias-that is, a more liberal response criterion for emotional than for neutral stimuli. Studies on visual emotion perception and attention suggest that emotion-related processes can be modulated by means of spatial-frequency filtering of the presented emotional stimuli. Specifically, low spatial frequencies are assumed to play a primary role for the influence of emotion on attention and judgment. Given this theoretical background, we investigated whether spatial-frequency filtering also impacts (1) the memory advantage for emotional faces and (2) the emotion-induced recognition bias, in a series of old/new recognition experiments. Participants completed incidental-learning tasks with high- (HSF) and low- (LSF) spatial-frequency-filtered emotional and neutral faces. The results of the surprise recognition tests showed a clear memory advantage for emotional stimuli. Most importantly, the emotional memory enhancement was significantly larger for face images containing only low-frequency information (LSF faces) than for HSF faces across all experiments, suggesting that LSF information plays a critical role in this effect, whereas the emotion-induced recognition bias was found only for HSF stimuli. We discuss our findings in terms of both the traditional account of different processing pathways for HSF and LSF information and a stimulus features account. The double dissociation in the results favors the latter account-that is, an explanation in terms of differences in the characteristics of HSF and LSF stimuli.
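The spatial-frequency filtering used in such studies can be approximated with a radial band-pass mask in the Fourier domain; a sketch follows, in which the cutoffs (in cycles per image) and the random stand-in "image" are illustrative assumptions, not the study's stimuli or parameters:

```python
import numpy as np

# Band-pass a 2D image by keeping only Fourier components whose radial
# frequency, measured in cycles per image, lies inside [low_cpi, high_cpi].

def bandpass(img, low_cpi, high_cpi):
    h, w = img.shape
    fy = np.fft.fftfreq(h) * h                 # vertical frequency, cycles/image
    fx = np.fft.fftfreq(w) * w                 # horizontal frequency, cycles/image
    radius = np.hypot(fy[:, None], fx[None, :])
    mask = (radius >= low_cpi) & (radius <= high_cpi)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * mask))

rng = np.random.default_rng(1)
face = rng.normal(size=(64, 64))   # stand-in for a face photograph
lsf = bandpass(face, 0, 8)         # low-SF version: coarse structure only
hsf = bandpass(face, 24, 32)       # high-SF version: fine detail only
```

An all-pass band (0 to beyond the Nyquist radius) reconstructs the original image, which is a convenient sanity check for the mask construction.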
Morita, Tomoyo; Saito, Daisuke N; Ban, Midori; Shimada, Koji; Okamoto, Yuko; Kosaka, Hirotaka; Okazawa, Hidehiko; Asada, Minoru; Naito, Eiichi
Proprioception is the somatic sensation that allows us to sense and recognize position, posture, and their changes in our body parts. It pertains directly to oneself and may contribute to bodily awareness. Likewise, one's face is a symbol of oneself, so that visual self-face recognition directly contributes to the awareness of self as distinct from others. Recently, we showed that right-hemispheric dominant activity in the inferior fronto-parietal cortices, which are connected by the inferior branch of the superior longitudinal fasciculus (SLF III), is associated with proprioceptive illusion (awareness), in concert with sensorimotor activity. Herein, we tested the hypothesis that visual self-face recognition shares brain regions active during proprioceptive illusion in the right inferior fronto-parietal SLF III network. We scanned brain activity using functional magnetic resonance imaging while twenty-two right-handed healthy adults performed two tasks. One was a proprioceptive illusion task, where blindfolded participants experienced a proprioceptive illusion of right hand movement. The other was a visual self-face recognition task, where the participants judged whether an observed face was their own. We examined whether the self-face recognition and the proprioceptive illusion commonly activated the inferior fronto-parietal cortices connected by the SLF III in a right-hemispheric dominant manner. Despite the difference in sensory modality and in the body parts involved in the two tasks, both tasks activated the right inferior fronto-parietal cortices, which are likely connected by the SLF III, in a right-side dominant manner. Here we discuss possible roles for right inferior fronto-parietal activity in bodily awareness and self-awareness. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
The present study aimed to investigate the processes through which individuals with social anxiety attend to and interpret compound emotional expressions of the face and body. Incongruent face-body compound images that combined an angry face (or body) with a fearful, sad, or happy body (or face) were presented to a social anxiety group (SA; n=22) and a healthy control group (HC; n=22). The participants were instructed to interpret the emotional state of the image, and their eye movements and behavioral responses were measured. The results revealed that both groups showed longer scanpath lengths during the recognition of compound images that combined an angry face with an angry, fearful, sad, or happy body. The SA group also showed longer scanpath lengths for congruent face-body compound images of fear and sadness. Additionally, the SA group fixated for a shorter period of time on the face and longer on the body than the HC group. Regarding emotion interpretation, the SA group was more likely than the HC group to interpret the emotional state of incongruent face-body compound images based on the body. These findings provide a preliminary observation that individuals with social anxiety show a different attentional bias pattern depending on the congruency of face-body compound images and that this might have biased their interpretations of the emotional states.
Lemieux, Chantal L; Collin, Charles A; Nelson, Elizabeth A
In two experiments, we examined the effects of varying the spatial frequency (SF) content of face images on eye movements during the learning and testing phases of an old/new recognition task. At both learning and testing, participants were presented with face stimuli band-pass filtered to 11 different SF bands, as well as an unfiltered baseline condition. We found that eye movements varied significantly as a function of SF. Specifically, the frequency of transitions between facial features showed a band-pass pattern, with more transitions for middle-band faces (≈5-20 cycles/face) than for low-band (below ≈5 cpf) or high-band (above ≈20 cpf) ones. These findings were similar for the learning and testing phases. The distributions of transitions across facial features were similar for the middle-band, high-band, and unfiltered faces, showing a concentration on the eyes and mouth; conversely, low-band faces elicited mostly transitions involving the nose and nasion. The eye movement patterns elicited by low, middle, and high bands are similar to those previous researchers have suggested reflect holistic, configural, and featural processing, respectively. More generally, our results are compatible with the hypotheses that eye movements are functional, and that the visual system makes flexible use of visuospatial information in face processing. Finally, our finding that only middle spatial frequencies yielded the same number and distribution of fixations as unfiltered faces adds more evidence to the idea that these frequencies are especially important for face recognition, and reveals a possible mediator for the superior performance that they elicit.
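Band-pass filtering of this kind is conventionally done in the Fourier domain. The sketch below is a minimal illustration of isolating one band of spatial frequencies, expressed in cycles per face under the assumption that the face spans the full image; it is a generic example, not the authors' actual stimulus pipeline.

```python
import numpy as np

def bandpass_filter(image, low_cpf, high_cpf):
    """Keep only spatial frequencies between low_cpf and high_cpf,
    in cycles per face (assuming the face spans the whole image)."""
    h, w = image.shape
    fy = np.fft.fftfreq(h) * h   # cycles per image along y
    fx = np.fft.fftfreq(w) * w   # cycles per image along x
    radius = np.sqrt(fx[None, :] ** 2 + fy[:, None] ** 2)
    mask = (radius >= low_cpf) & (radius <= high_cpf)
    spectrum = np.fft.fft2(image)
    # Zero out frequencies outside the band, then invert the transform
    return np.real(np.fft.ifft2(spectrum * mask))

# Example: extract the middle band (~5-20 cycles/face) of a
# synthetic 128x128 "face" image
face = np.random.rand(128, 128)
mid_band = bandpass_filter(face, 5, 20)
```

Because the band excludes the DC component, the filtered image has approximately zero mean and is typically rescaled before display.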
Wallis, Deborah J; Ridout, Nathan; Sharpe, Emma
Emotion recognition deficits have consistently been reported in clinical and sub-clinical disordered eating. However, most studies have used static faces, despite the dynamic nature of everyday social interactions. The current aims were to confirm previous findings of emotion recognition deficits in non-clinical disordered eating and to determine if these deficits would be more evident in response to static as compared to dynamic emotional stimuli. We also aimed to establish if these emotion recognition deficits could be explained by comorbid psychopathology (depression, anxiety or alexithymia). Eighty-nine females were assigned to groups based on scores on the Eating Disorders Inventory (EDI); high (n = 45) and low (n = 44). Participants were presented with emotional faces and video clips portraying fear, anger, disgust, sadness, happiness, surprise and neutral affect. As predicted, the high EDI group correctly recognised fewer emotional displays than did the low EDI group. However, this deficit was not more evident for negative as opposed to positive emotions. Furthermore, the deficit was not larger for static stimuli in comparison to dynamic. Overall emotion recognition accuracy was negatively associated with Drive for Thinness, but not Bulimia or Body Dissatisfaction. Importantly, the emotion recognition deficits observed in the high EDI group and that were associated with eating disorder symptoms were independent of depression, anxiety and alexithymia. Findings confirm that even minor elevations in disordered eating are associated with poorer emotion recognition. This is important, as problems in recognition of the emotional displays of others are thought to be a risk factor for clinical eating disorders. Copyright © 2018 Elsevier Ltd. All rights reserved.
Kahraman, Fatih; Gokmen, Muhittin; Darkner, Sune
Face recognition systems are typically required to work under highly varying illumination conditions. This leads to complex effects imposed on the acquired face image that pertain little to the actual identity. Consequently, illumination normalization is required to reach acceptable recognition rates in face recognition systems. In this paper, we propose an approach that integrates the face identity and illumination models under the widely used Active Appearance Model framework as an extension to the texture model in order to obtain illumination-invariant face localization...
.... (4) Invariants -- both geometric and other types. (5) Human faces: Analysis of images of human faces, including feature extraction, face recognition, compression, and recognition of facial expressions...
Holdnack, James A; Delis, Dean C
The WMS-III face memory subtest was developed as a quick, reliable measure of non-verbal recognition memory. While the face memory subtest has demonstrated clinical sensitivity, the test has been criticized for low correlation with other WMS-III visual memory subtests and for failing to differentiate performance between clinical groups. One possible reason for these findings is the impact of response bias on recognition memory tests. Four studies were conducted to evaluate the utility of applying signal detection measures to the face memory subtests. The first two studies used the WMS-III standardization data set to determine age and education effects and to present normative and reliability data for hits, false positives, discriminability, and response bias. The third study tested the hypothesis that using response components and signal detection measures would enhance the correlation between face memory and the other WMS-III visual memory subtests. The fourth study compared performance of patients with Alzheimer's disease, Huntington's disease, Korsakoff's syndrome, and demographically matched controls on the new face memory scores. The new measures did not have higher correlations with the other WMS-III visual memory measures than the standard scoring of the test. Analysis of the clinical samples indicated that the discriminability index best differentiated patients from controls. The response components, particularly delayed false positives, differentiated performance among the clinical groups. Normative and reliability data are presented.
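The signal detection measures referred to here are conventionally derived from hit and false-positive rates. The following sketch shows the standard computation of discriminability (d') and response bias (criterion c); it is a generic illustration with made-up counts, not the WMS-III scoring procedure itself.

```python
from statistics import NormalDist

def sdt_indices(hits, misses, false_alarms, correct_rejections):
    """Compute discriminability (d') and response bias (criterion c)
    from recognition-memory response counts, using a standard
    log-linear correction to avoid rates of exactly 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)            # sensitivity
    criterion = -0.5 * (z(hit_rate) + z(fa_rate)) # bias (negative = liberal)
    return d_prime, criterion

# A participant with many hits but also many false alarms:
# decent memory combined with a liberal ("old"-prone) response bias.
d, c = sdt_indices(hits=22, misses=2, false_alarms=10, correct_rejections=14)
```

A negative criterion indicates a liberal bias, which inflates hits and false positives together while leaving discriminability unchanged; separating the two is what motivates applying these indices to the subtest.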
Ryu, Nam Gyu; Lim, Byung Woo; Cho, Jae Keun; Kim, Jin
We investigated in a preliminary study whether experiencing right- or left-sided facial paralysis would affect an individual's ability to recognize one side of the human face, using hybrid hemi-facial photos. Further investigation looked at the relationship between facial recognition ability, stress, and quality of life. To investigate the predominance of one side of the human face in face recognition, 100 normal participants (right-handed: n = 97, left-handed: n = 3, right brain dominance: n = 56, left brain dominance: n = 44) answered a questionnaire that included hybrid hemi-facial photos developed to determine the superiority of one side for human face recognition. To determine differences in stress level and quality of life between individuals experiencing right- and left-sided facial paralysis, 100 patients (right side: 50, left side: 50, not including traumatic facial nerve paralysis) answered a questionnaire comprising the facial disability index test and a quality of life measure (SF-36, Korean version). Regardless of handedness or hemispheric dominance, the proportion of participants showing right-side predominance in human face recognition was larger than that for the left side (71% versus 12%, neutral: 17%). The facial disability index of the patients with right-sided facial paralysis was lower than that of left-sided patients (68.8 ± 9.42 versus 76.4 ± 8.28), and the SF-36 scores of right-sided patients were lower than those of left-sided patients (119.07 ± 15.24 versus 123.25 ± 16.48, total score: 166). Given the universal preference for the right side in human face recognition, patients with right-sided facial paralysis showed worse psychological mood and social interaction than those with left-sided paralysis. This information is helpful to clinicians in that psychological and social factors should be considered when treating patients with facial paralysis. Copyright © 2016 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
Golan, Ofer; Gordon, Ilanit; Fichman, Keren; Keinan, Giora
Children with ASD show emotion recognition difficulties, as part of their social communication deficits. We examined facial emotion recognition (FER) in intellectually disabled children with ASD and in younger typically developing (TD) controls, matched on mental age. Our emotion-matching paradigm employed three different modalities: facial, vocal…
Labuschagne, Izelle; Jones, Rebecca; Callaghan, Jenny; Whitehead, Daisy; Dumas, Eve M; Say, Miranda J; Hart, Ellen P; Justo, Damian; Coleman, Allison; Dar Santos, Rachelle C; Frost, Chris; Craufurd, David; Tabrizi, Sarah J; Stout, Julie C
Facial emotion recognition impairments have been reported in Huntington's disease (HD). However, the nature of the impairments across the spectrum of HD remains unclear. We report on emotion recognition data from 344 participants comprising premanifest HD (PreHD) and early HD patients, and controls. In a test of recognition of facial emotions, we examined responses to six basic emotional expressions and neutral expressions. In addition, and within the early HD sample, we tested for differences on emotion recognition performance between those 'on' vs. 'off' neuroleptic or selective serotonin reuptake inhibitor (SSRI) medications. The PreHD groups showed significant impairments in recognizing emotional faces, whereas the early HD groups were significantly impaired across all emotions including neutral expressions. In early HD, neuroleptic use was associated with worse facial emotion recognition, whereas SSRI use was associated with better facial emotion recognition. The findings suggest that emotion recognition impairments exist across the HD spectrum, but are relatively more widespread in manifest HD than in the premanifest period. Commonly prescribed medications to treat HD-related symptoms also appear to affect emotion recognition. These findings have important implications for interpersonal communication and medication usage in HD. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Sandra Cristina Soares
Previous studies in the social anxiety arena have shown an impaired attentional control system, similar to that found in trait anxiety. However, the effect of task demands on socially anxious individuals' processing of socially threatening stimuli, such as angry faces, remains unexamined. In the present study, fifty-four university students scoring high and low on the Social Interaction and Performance Anxiety and Avoidance Scale (SIPAAS) questionnaire participated in a target letter discrimination task while task-irrelevant face stimuli (angry, disgust, happy, and neutral) were simultaneously presented. The results showed that high (compared to low) socially anxious individuals were more prone to distraction by task-irrelevant stimuli, particularly under high perceptual load conditions. More importantly, for such individuals, the accuracy proportions for angry faces significantly differed between the low and high perceptual load conditions, which is discussed in light of current evolutionary models of social anxiety.
Nuesse, Theresa; Steenken, Rike; Neher, Tobias
, and it has been suggested that differences in cognitive abilities may also be important. The objective of this study was to investigate associations between performance in cognitive tasks and speech recognition under different listening conditions in older adults with either age-appropriate hearing or hearing impairment. To that end, speech recognition threshold (SRT) measurements were performed under several masking conditions that varied along the perceptual dimensions of dip listening, spatial separation, and informational masking. In addition, a neuropsychological test battery was administered. ... In repeated linear regression analyses, composite scores of cognitive test outcomes (evaluated using PCA) were included to predict SRTs. These associations were different for the two groups. When hearing thresholds were controlled for, composed cognitive factors were significantly associated with the SRTs...
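The analysis pipeline described (PCA-derived composite cognitive scores entered into a linear regression to predict SRTs) can be sketched generically as follows. All variable names, dimensions, and data here are placeholder assumptions, not the study's actual data set.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: 30 listeners x 6 cognitive test scores,
# plus one SRT per listener (in dB SNR)
scores = rng.normal(size=(30, 6))
srt = rng.normal(loc=-6.0, scale=2.0, size=30)

# PCA via SVD on z-standardized scores; keep the first two
# principal components as composite cognitive factors
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
_, _, vt = np.linalg.svd(z, full_matrices=False)
composites = z @ vt[:2].T

# Linear regression: predict SRT from the composites (+ intercept)
X = np.column_stack([np.ones(len(srt)), composites])
beta, *_ = np.linalg.lstsq(X, srt, rcond=None)
predicted = X @ beta
```

In the study itself, hearing thresholds were additionally controlled for, which would correspond to adding them as further columns of the design matrix.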
Facial emotional recognition in schizophrenia: preliminary results of the Virtual Reality Program for Facial Emotional Recognition = Reconhecimento emocional de faces na esquizofrenia: resultados preliminares do Programa de Realidade Virtual para o Reconhecimento Emocional de Faces
Teresa Souto; Alexandre Baptista; Diana Tavares; Cristina Queirós; António Marques
BACKGROUND: Significant deficits in emotional recognition and social perception characterize patients with schizophrenia and have direct negative impact both in inter-personal relationships and in social functioning. Virtual reality, as a methodological resource, might have a high potential for assessment and training skills in people suffering from mental illness. OBJECTIVES: To present preliminary results of a facial emotional recognition assessment designed for patients with schizophrenia,...
Passos, Renato Ribeiro; Marciano da Costa, Liovando; Rodrigues de Assis, Igor; Santos, Danilo Andrade; Ruiz, Hugo Alberto; Guimarães, Lorena Abdalla de Oliveira Prata; Andrade, Felipe Vaz
The efficient use of water is increasingly important, and proper soil management, within the specificities of each region of the country, allows achieving greater efficiency. The South and Caparaó regions of Espírito Santo, Brazil are characterized by a 'sea of hills' relief, with differences in the degree of pasture degradation depending on sun exposure. The objective of this study was to evaluate the least limiting water range in an Udox soil under degraded pastures with two faces of sun exposure and three pedoenvironments. In each pedoenvironment, namely Alegre, Celina, and Café, two areas were selected, one with exposure on the North/West face and the other on the South/East face. In each of these areas, undisturbed soil samples were collected at 0-10 cm depth to determine the least limiting water range. The exposed face of the pasture that received the highest solar incidence (North/West) presented the lowest least limiting water range values. The least limiting water range proved to be a physical quality indicator for Udox soil under degraded pastures.
Several studies have reported impairments in decoding emotional facial expressions in intimate partner violence (IPV) perpetrators. However, the mechanisms that underlie these impaired skills are not well known. Given this gap in the literature, we aimed to establish whether IPV perpetrators (n = 18) differ in their emotion decoding process, attentional skills, and testosterone (T) and cortisol (C) levels and T/C ratio in comparison with controls (n = 20), and also to examine the moderating role of the group and hormonal parameters in the relationship between attention skills and the emotion decoding process. Our results demonstrated that IPV perpetrators showed poorer emotion recognition and higher attention switching costs than controls. Nonetheless, they did not differ in attention to detail and hormonal parameters. Finally, the slope predicting emotion recognition from deficits in attention switching became steeper as T levels increased, especially in IPV perpetrators, although the basal C and T/C ratios were unrelated to emotion recognition and attention deficits for both groups. These findings contribute to a better understanding of the mechanisms underlying emotion recognition deficits. These factors therefore constitute the target for future interventions.
Wang, Jing; Li, Heng; Fu, Weizhen; Chen, Yao; Li, Liming; Lyu, Qing; Han, Tingting; Chai, Xinyu
Retinal prostheses have the potential to restore partial vision. Object recognition in scenes of daily life is one of the essential tasks for implant wearers. Still limited by the low-resolution visual percepts provided by retinal prostheses, it is important to investigate and apply image processing methods to convey more useful visual information to the wearers. We proposed two image processing strategies based on Itti's visual saliency map, region of interest (ROI) extraction, and image segmentation. Itti's saliency model generated a saliency map from the original image, in which salient regions were grouped into an ROI by fuzzy c-means clustering. Then GrabCut generated a proto-object from the ROI-labeled image, which was recombined with the background and enhanced in two ways: 8-4 separated pixelization (8-4 SP) and background edge extraction (BEE). Results showed that both 8-4 SP and BEE had significantly higher recognition accuracy in comparison with direct pixelization (DP). Each saliency-based image processing strategy was subject to the performance of image segmentation. Under good and perfect segmentation conditions, BEE and 8-4 SP obtained noticeably higher recognition accuracy than DP, and under bad segmentation conditions, only BEE boosted the performance. The application of saliency-based image processing strategies was verified to be beneficial to object recognition in daily scenes under simulated prosthetic vision. They are hoped to help the development of the image processing module for future retinal prostheses, and thus provide more benefit for the patients. Copyright © 2015 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
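The baseline condition mentioned here, direct pixelization, reduces a scene to a coarse grid of phosphene-like values. A minimal sketch of that baseline is shown below; the saliency, fuzzy c-means, and GrabCut stages of the proposed strategies are not reproduced, and the grid size is an assumption for illustration.

```python
import numpy as np

def direct_pixelization(image, grid=(32, 32)):
    """Simulate low-resolution prosthetic vision by averaging the image
    over a coarse phosphene grid (the 'DP' baseline condition)."""
    h, w = image.shape
    gh, gw = grid
    # Trim so the image divides evenly into blocks, then average
    # each block of pixels into a single phosphene value
    trimmed = image[: h - h % gh, : w - w % gw]
    blocks = trimmed.reshape(
        gh, trimmed.shape[0] // gh, gw, trimmed.shape[1] // gw
    )
    return blocks.mean(axis=(1, 3))

# Example: reduce a 256x256 grayscale scene to a 32x32 phosphene map
scene = np.random.rand(256, 256)
phosphenes = direct_pixelization(scene, grid=(32, 32))
```

The 8-4 SP and BEE strategies in the paper build on such a low-resolution representation by emphasizing the segmented proto-object or the background edges, respectively.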
Harris, Gareth; Shen, Yu; Ha, Heonick; Donato, Alessandra; Wallis, Samuel; Zhang, Xiaodong; Zhang, Yun
Food is critical for survival. Many animals, including the nematode Caenorhabditis elegans, use sensorimotor systems to detect and locate preferred food sources. However, the signaling mechanisms underlying food-choice behaviors are poorly understood. Here, we characterize the molecular signaling that regulates recognition and preference between different food odors in C. elegans. We show that the major olfactory sensory neurons, AWB and AWC, play essential roles in this behavior. A canonical Gα-protein, together with guanylate cyclases and cGMP-gated channels, is needed for the recognition of food odors. The food-odor-evoked signal is transmitted via glutamatergic neurotransmission from AWC and through AMPA and kainate-like glutamate receptor subunits. In contrast, peptidergic signaling is required to generate preference between different food odors while being dispensable for the recognition of the odors. We show that this regulation is achieved by the neuropeptide NLP-9 produced in AWB, which acts with its putative receptor NPR-18, and by the neuropeptide NLP-1 produced in AWC. In addition, another set of sensory neurons inhibits food-odor preference. These mechanistic logics, together with a previously mapped neural circuit underlying food-odor preference, provide a functional network linking sensory response, transduction, and downstream receptors to process complex olfactory information and generate the appropriate behavioral decision essential for survival. Copyright © 2014 the authors 0270-6474/14/339389-15$15.00/0.
Liew, Sook-Lei; Ma, Yina; Han, Shihui; Aziz-Zadeh, Lisa
Human adults typically respond faster to their own face than to the faces of others. However, in Chinese participants, this self-face advantage is lost in the presence of one's supervisor, and they respond faster to their supervisor's face than to their own. While this “boss effect” suggests a strong modulation of self-processing in the presence of influential social superiors, the current study examined whether this effect was true across cultures. Given the wealth of literature on cultural differences between collectivist, interdependent versus individualistic, independent self-construals, we hypothesized that the boss effect might be weaker in independent than interdependent cultures. Twenty European American college students were asked to identify orientations of their own face or their supervisors' face. We found that European Americans, unlike Chinese participants, did not show a “boss effect” and maintained the self-face advantage even in the presence of their supervisor's face. Interestingly, however, their self-face advantage decreased as their ratings of their boss's perceived social status increased, suggesting that self-processing in Americans is influenced more by one's social status than by one's hierarchical position as a social superior. In addition, when their boss's face was presented with a labmate's face, American participants responded faster to the boss's face, indicating that the boss may represent general social dominance rather than a direct negative threat to oneself, in more independent cultures. Altogether, these results demonstrate a strong cultural modulation of self-processing in social contexts and suggest that the very concept of social positions, such as a boss, may hold markedly different meanings to the self across Western and East Asian cultures. PMID:21359209
García-Gutiérrez, Ana; Aguado, Luis; Romero-Ferreiro, Verónica; Pérez-Moreno, Elisa
In order to test whether expression and gender can be attended to simultaneously without a cost in accuracy, four experiments were carried out using a dual gender-expression task with male and female faces showing different emotional expressions that were backward masked by emotionally neutral faces. In the dual-facial condition the participants had to report both the gender and the expression of the targets. In two control conditions the participant reported either the gender or the expression of the face and indicated whether a surrounding frame was continuous or discontinuous. In Experiments 1-3, with angry and happy targets, asymmetric interference was observed. Gender discrimination, but not expression discrimination, was impaired in the dual-facial condition compared to the corresponding control. This effect was obtained with a between-subjects design in Experiment 1, with a within-subjects design in Experiment 2, and with androgynous face masks in Experiment 3. In Experiments 4a and 4b different target combinations were tested. No decrement of performance in the dual-facial task was observed for either gender or expression discrimination with fearful-disgusted (Experiment 4a) or fearful-happy faces (Experiment 4b). We conclude that the ability to attend simultaneously to gender and expression cues without a decrement in performance depends on the specific combination of expressions to be differentiated between. Happy and angry expressions are usually directed at the perceiver and command preferential attention. Under conditions of restricted viewing such as those of the present study, discrimination of these expressions is prioritized, leading to impaired discrimination of other facial properties such as gender.
Wolff, N.; Kemter, K.; Schweinberger, S.R.; Wiese, H.
It is well established that memory is more accurate for own-relative to other-race faces (own-race bias), which has been suggested to result from larger perceptual expertise for own-race faces. Previous studies also demonstrated better memory for own-relative to other-gender faces, which is less likely to result from differences in perceptual expertise, and rather may be related to social in-group vs out-group categorization. We examined neural correlates of the own-gender bias using event-re...
Golan, Ofer; Ashwin, Emma; Granader, Yael; McClintock, Suzy; Day, Kate; Leggett, Victoria; Baron-Cohen, Simon
This study evaluated "The Transporters", an animated series designed to enhance emotion comprehension in children with autism spectrum conditions (ASC). A group of n = 20 children with ASC (aged 4-7) watched "The Transporters" every day for 4 weeks. Participants were tested before and after intervention on emotional vocabulary and emotion recognition at three…
Miyahara, Motohide; Bray, Anne; Tsujii, Masatsugu; Fujita, Chikako; Sugiyama, Toshiro
This study used a choice reaction-time paradigm to test the perceived impairment of facial affect recognition in Asperger's disorder. Twenty teenagers with Asperger's disorder and 20 controls were compared with respect to the latency and accuracy of response to happy or disgusted facial expressions, presented in cartoon or real images and in…
Koeneke, Alejandra; Ponce, Guillermo; Hoenicka, Janet; Huertas, Evelio
Ankyrin repeat and kinase domain containing I (ANKK1) and dopamine D2 receptor (DRD2) genes have been associated with psychopathic traits in clinical samples. On the other hand, individuals high in psychopathy show reduced affective priming and deficits in facial expression recognition. We have hypothesized that these emotion-related cognitive phenomena are associated with the Taq IA (rs1800497) SNP (single nucleotide polymorphism) of the ANKK1 gene and with the C957T (rs6277) SNP of the DRD2 gene. We performed a genetic association analysis in 94 self-reported Caucasian healthy volunteers. The participants completed 144 trials of an affective priming task, in which primes and targets were emotional words. They also had to recognize 64 facial expressions of happiness, sadness, anger, and fear in an expression recognition task. Regarding the genetic analyses, Taq IA and C957T SNPs were genotyped. We found that the C957T SNP TT genotype was associated with a stronger priming effect and a better recognition of angry expressions. No associations were found for the Taq IA SNP. In addition, in silico analysis demonstrated that the C957T SNP is a marker of a regulatory sequence at the 5' UTR of the ANKK1 gene, thus suggesting the involvement of the whole ANKK1/DRD2 locus in cognitive-emotional processing. These results suggest that affective priming and recognition of angry facial expressions are endophenotypes that lie on the pathway between the ANKK1/DRD2 locus and some deviant phenotypes.
Ali Ghorbanpour Arani
In the present research, the vibration and instability of an axially moving sandwich plate made of a soft core and composite face sheets under initial tension is investigated. Single-walled carbon nanotubes (SWCNTs) are selected as the reinforcement of the composite face sheets inside a poly(methyl methacrylate) (PMMA) matrix. Higher-order shear deformation theory (HSDT) is utilized due to the higher accuracy of its polynomial functions compared with other plate theories. Based on the extended rule of mixture, the structural properties of the composite face sheets are taken into consideration. Motion equations are obtained by means of Hamilton's principle and solved analytically. Influences of various parameters such as axially moving speed, volume fraction of CNTs, pre-tension, thickness, and aspect ratio of the sandwich plate on the vibration characteristics of the moving system are discussed in detail. The results indicated that the critical speed of the moving sandwich plate is strongly dependent on the volume fraction of CNTs. Therefore, the critical speed of the moving sandwich plate can be improved by adding appropriate amounts of CNTs. The results of this investigation can be used in the design and manufacturing of marine vessels and aircraft.
Boutet, Isabelle; Collin, Charles; Faubert, Jocelyn
Configural relations and a critical band of spatial frequencies (SFs) in the middle range are particularly important for face recognition. We report the results of four experiments in which the relationship between these two types of information was examined. In Experiments 1, 2A, and 2B, the face inversion effect (FIE) was used to probe configural face encoding. Recognition of upright and inverted faces and nonface objects was measured in four conditions: a no-filter condition and three SF conditions (low, medium, and high frequency). We found significant FIEs of comparable magnitudes for all frequency conditions. In Experiment 3, discrimination of faces on the basis of either configural or featural modifications was measured under the same four conditions. Although the ability to discriminate configural modifications was superior in the medium-frequency condition, so was the ability to discriminate featural modifications. We conclude that the band of SF that is critical for face recognition does not contribute preferentially to configural encoding.
Gilligan, J.; Bourham, M.; Hankins, O.; Eddy, W.; Hurley, J.; Black, D.
Disruption damage to plasma facing components has been found to be a limiting design constraint in ITER and other large fusion devices. A growing data base is confirming the role of the vapor shield in protecting ablated surfaces under disruption-like conditions, which would imply longer lifetimes for plasma facing components. We present new results for exposure of various material surfaces to high heat fluxes up to 70 GW/m² over 100 μs (7 MJ/m²) in the SIRENS high heat flux test facility. Tested materials are graphite grades, pyrolytic graphite, refractory metals and alloys, refractory coatings on copper substrates, boron nitride, and preliminary results of diamond coating on silicon substrates. An empirical scaling law for the energy transmission factor f through the vapor shield has been obtained. The application of a strong external magnetic field, to reduce turbulent energy transport in the vapor shield boundary, is shown to decrease f by as much as 35% for fields of 8 T.
Tamati, Terrin N; Gilbert, Jaimie L; Pisoni, David B
Previous studies investigating speech recognition in adverse listening conditions have found extensive variability among individual listeners. However, little is currently known about the core underlying factors that influence speech recognition abilities. To investigate sensory, perceptual, and neurocognitive differences between good and poor listeners on the Perceptually Robust English Sentence Test Open-set (PRESTO), a new high-variability sentence recognition test under adverse listening conditions. Participants who fell in the upper quartile (HiPRESTO listeners) or lower quartile (LoPRESTO listeners) on key word recognition on sentences from PRESTO in multitalker babble completed a battery of behavioral tasks and self-report questionnaires designed to investigate real-world hearing difficulties, indexical processing skills, and neurocognitive abilities. Young, normal-hearing adults (N = 40) from the Indiana University community participated in the current study. Participants' assessment of their own real-world hearing difficulties was measured with a self-report questionnaire on situational hearing and hearing health history. Indexical processing skills were assessed using a talker discrimination task, a gender discrimination task, and a forced-choice regional dialect categorization task. Neurocognitive abilities were measured with the Auditory Digit Span Forward (verbal short-term memory) and Digit Span Backward (verbal working memory) tests, the Stroop Color and Word Test (attention/inhibition), the WordFam word familiarity test (vocabulary size), the Behavioral Rating Inventory of Executive Function-Adult Version (BRIEF-A) self-report questionnaire on executive function, and two performance subtests of the Wechsler Abbreviated Scale of Intelligence (WASI) Performance Intelligence Quotient (IQ; nonverbal intelligence). Scores on self-report questionnaires and behavioral tasks were tallied and analyzed by listener group (HiPRESTO and LoPRESTO). The extreme
In the theory of differential geometry, surface normal, as a first order surface differential quantity, determines the orientation of a surface at each point and contains informative local surface shape information. To fully exploit this kind of information for 3D face recognition (FR), this paper proposes a novel highly discriminative facial shape descriptor, namely multi-scale and multi-component local normal patterns (MSMC-LNP). Given a normalized facial range image, three components of normal vectors are first estimated, leading to three normal component images. Then, each normal component image is encoded locally to local normal patterns (LNP) on different scales. To utilize spatial information of facial shape, each normal component image is divided into several patches, and their LNP histograms are computed and concatenated according to the facial configuration. Finally, each original facial surface is represented by a set of LNP histograms including both global and local cues. Moreover, to make the proposed solution robust to the variations of facial expressions, we propose to learn the weight of each local patch on a given encoding scale and normal component image. Based on the learned weights and the weighted LNP histograms, we formulate a weighted sparse representation-based classifier (W-SRC). In contrast to the overwhelming majority of 3D FR approaches which were only benchmarked on the FRGC v2.0 database, we carried out extensive experiments on the FRGC v2.0, Bosphorus, BU-3DFE and 3D-TEC databases, thus including 3D face data captured in different scenarios through various sensors and depicting in particular different challenges with respect to facial expressions. The experimental results show that the proposed approach consistently achieves competitive rank-one recognition rates on these databases despite their heterogeneous nature, and thereby demonstrates its effectiveness and its generalizability. © 2014 Elsevier B.V.
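The first stage of the pipeline described above, estimating the three normal components from a normalized range image, can be sketched in a few lines. This is a minimal finite-difference version, not the paper's actual estimator, and the subsequent multi-scale LNP encoding and patch weighting are omitted:

```python
import numpy as np

def normal_components(range_image):
    """Estimate unit surface normals of the depth surface z = f(x, y)
    by finite differences; returns the three component images
    (nx, ny, nz) that an LNP-style descriptor would then encode."""
    gy, gx = np.gradient(range_image.astype(float))
    # The (unnormalized) normal of z = f(x, y) is (-dz/dx, -dz/dy, 1).
    norm = np.sqrt(gx ** 2 + gy ** 2 + 1.0)
    return -gx / norm, -gy / norm, 1.0 / norm

# A plane tilted along x: every normal leans uniformly against the slope.
depth = np.tile(np.arange(5, dtype=float), (5, 1))
nx, ny, nz = normal_components(depth)
```

For the tilted plane z = x, the gradient is (1, 0), so each normal is (-1, 0, 1)/√2, which is what the three component images report at every pixel.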
Rosselli, Federica B; Alemi, Alireza; Ansuini, Alessio; Zoccolan, Davide
In recent years, a number of studies have explored the possible use of rats as models of high-level visual functions. One central question at the root of such an investigation is to understand whether rat object vision relies on the processing of visual shape features or, rather, on lower-order image properties (e.g., overall brightness). In a recent study, we have shown that rats are capable of extracting multiple features of an object that are diagnostic of its identity, at least when those features are, structure-wise, distinct enough to be parsed by the rat visual system. In the present study, we have assessed the impact of object structure on rat perceptual strategy. We trained rats to discriminate between two structurally similar objects, and compared their recognition strategies with those reported in our previous study. We found that, under conditions of lower stimulus discriminability, rat visual discrimination strategy becomes more view-dependent and subject-dependent. Rats were still able to recognize the target objects, in a way that was largely tolerant (i.e., invariant) to object transformation; however, the larger structural and pixel-wise similarity affected the way objects were processed. Compared to the findings of our previous study, the patterns of diagnostic features were: (i) smaller and more scattered; (ii) only partially preserved across object views; and (iii) only partially reproducible across rats. On the other hand, rats were still found to adopt a multi-featural processing strategy and to make use of part of the optimal discriminatory information afforded by the two objects. Our findings suggest that, as in humans, rat invariant recognition can flexibly rely on either view-invariant representations of distinctive object features or view-specific object representations, acquired through learning.
Gundogdu, Erhan; Koç, Aykut; Alatan, A. Aydın.
Visual object classification has long been studied in the visible spectrum using conventional cameras. Since labeled images have recently increased in number, it is possible to train deep Convolutional Neural Networks (CNN) with a significant number of parameters. As infrared (IR) sensor technology has improved over the last two decades, labeled images extracted from IR sensors have started to be used for object detection and recognition tasks. We address the problem of infrared object recognition and detection by exploiting 15K real-field images from long-wave and mid-wave IR sensors. For feature learning, a stacked denoising autoencoder is trained on this IR dataset. To recognize the objects, the trained stacked denoising autoencoder is fine-tuned according to the binary classification loss of the target object. Once the training is completed, the test samples are propagated through the network, and the probability of the test sample belonging to a class is computed. Moreover, the trained classifier is utilized in a detect-by-classification method, where classification is performed on a set of candidate object boxes and the maximum confidence score in a particular location is accepted as the score of the detected object. To decrease the computational complexity, the detection step at every frame is avoided by running an efficient correlation filter based tracker. The detection part is performed only when the tracker confidence is below a pre-defined threshold. The experiments conducted on the real-field images demonstrate that the proposed detection and tracking framework presents satisfactory results for detecting tanks under cluttered background.
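The feature-learning step above rests on the denoising autoencoder idea: corrupt the input, then train the network to reconstruct the clean version. A toy single-layer numpy version with masking noise and tied weights (dimensions, learning rate, and corruption level are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

def train_dae(X, hidden=16, noise=0.3, lr=0.5, epochs=300):
    """One sigmoid layer with tied weights, trained to reconstruct the
    clean input from a masking-corrupted copy: a toy stand-in for one
    layer of a stacked denoising autoencoder."""
    n, d = X.shape
    W = rng.normal(0.0, 0.1, (d, hidden))
    losses = []
    for _ in range(epochs):
        Xc = X * (rng.random(X.shape) > noise)   # masking corruption
        H = sig(Xc @ W)                          # encode the corrupted input
        R = sig(H @ W.T)                         # decode with tied weights
        losses.append(float(np.mean((R - X) ** 2)))
        dR = (R - X) * R * (1 - R)               # the target is the CLEAN X
        dH = (dR @ W) * H * (1 - H)
        W -= lr * (Xc.T @ dH + dR.T @ H) / n     # backprop through both uses of W
    return W, losses

# Correlated toy data, so masked values are recoverable from context.
X = np.repeat(rng.random((64, 2)), 4, axis=1)
W, losses = train_dae(X)
```

Fine-tuning for recognition, as in the paper, would then replace the decoder with a classification loss on the learned hidden representation.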
Benitez-Garcia, Gibran; Nakamura, Tomoaki; Kaneko, Masahide
Darwin was the first to assert that facial expressions are innate and universal, recognized across all cultures. However, some recent cross-cultural studies have questioned this assumed universality. Therefore, this paper presents an analysis of the differences between Western and East-Asian faces for the six basic expressions (anger, disgust, fear, happiness, sadness and surprise), focused on three individual facial regions: eyes-eyebrows, nose and mouth. The analysis is conducted by applying PCA to two feature extraction methods: appearance-based, using the pixel intensities of facial parts, and geometric-based, handling 125 feature points from the face. Both methods are evaluated using 4 standard databases for both racial groups, and the results are compared with a cross-cultural human study applied to 20 participants. Our analysis reveals that differences between Westerners and East-Asians exist mainly in the regions of the eyes-eyebrows and mouth for expressions of fear and disgust, respectively. This work presents important findings for a better design of automatic facial expression recognition systems based on the differences between the two racial groups.
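The appearance-based branch of the analysis above, PCA over the pixel intensities of a facial part, is the textbook eigen-decomposition; a minimal SVD-based sketch (region cropping and the evaluation databases aside):

```python
import numpy as np

def pca(X, k):
    """Plain PCA via SVD. Each row of X is a flattened image region
    (e.g. the pixel intensities of a cropped mouth patch)."""
    mu = X.mean(axis=0)
    Xc = X - mu                    # center the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]            # top-k principal axes (orthonormal rows)
    scores = Xc @ components.T     # low-dimensional projections
    return scores, components, mu

rng = np.random.default_rng(1)
patches = rng.normal(size=(50, 20))    # 50 fake 4x5 patches, flattened
scores, components, mu = pca(patches, 5)
```

The rows of `components` are the "eigen-parts" of the region; comparing their loadings across the two racial groups is the kind of analysis the abstract describes.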
Kellen, David; Klauer, Karl Christoph
A classic discussion in the recognition-memory literature concerns the question of whether recognition judgments are better described by continuous or discrete processes. These two hypotheses are instantiated by the signal detection theory model (SDT) and the 2-high-threshold model, respectively. Their comparison has almost invariably relied on…
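The continuous account in that comparison, equal-variance signal detection theory, reduces to a one-line sensitivity measure; a sketch of how d' is computed from hit and false-alarm rates:

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Equal-variance SDT sensitivity: the distance between the
    z-transformed hit and false-alarm rates."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)
```

An observer with 84% hits and 16% false alarms, for example, sits at d' close to 2. The discrete two-high-threshold account would instead explain the same data with detection and guessing probabilities, which is what makes the model comparison in the abstract non-trivial.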
Elbouz, Marwa; Alfalou, Ayman; Brosseau, Christian
Home automation is being implemented in more and more homes of the elderly and disabled in order to maintain their independence and safety. For that purpose, we propose and validate a surveillance video system which detects various posture-based events. One of the novel points of this system is the use of adapted VanderLugt correlator (VLC) and joint transform correlator (JTC) techniques to make decisions on the identity of a patient and his three-dimensional (3-D) positions, in order to overcome the problem of crowded environments. We propose a fuzzy logic technique to reach decisions on the subject's behavior. Our system is focused on the goals of accuracy, convenience, and cost, and in addition does not require any devices attached to the subject. The system permits one to study and model subject responses to behavioral change intervention, because several levels of alarm can be incorporated according to the different situations considered. Our algorithm performs a fast 3-D recovery of the subject's head position by locating the eyes within the face image, and involves model-based prediction and optical correlation techniques to guide the tracking procedure. Object detection is based on the (hue, saturation, value) color space. The system also involves an adapted fuzzy logic control algorithm to make a decision based on the information given to the system. Furthermore, the principles described here are applicable to a very wide range of situations and robust enough to be implementable in ongoing experiments.
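Detection in (hue, saturation, value) space, as used above, comes down to band tests on the converted pixel. A minimal sketch with purely illustrative thresholds, not the paper's values:

```python
import colorsys

def in_hue_band(rgb, hue_lo, hue_hi, min_sat=0.2, min_val=0.2):
    """Keep a pixel if its hue falls inside the target band and its
    saturation and value clear minimal thresholds. The rgb components
    are floats in [0, 1]; the band limits here are hypothetical."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return hue_lo <= h <= hue_hi and s >= min_sat and v >= min_val

red_pixel = (0.9, 0.1, 0.1)    # hue near 0
blue_pixel = (0.1, 0.1, 0.9)   # hue near 2/3
```

Working in HSV rather than RGB decouples the chromatic test from illumination, which is why it is a common choice for this kind of appearance-based detection.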
Ariadi Sandrini Rezende
Full Text Available This article seeks to demonstrate that the individual suffers a crisis of recognition when seeking the resolution of conflicts in the Judiciary, which can restore recognition to only one of the parties to the proceedings, leaving the other unrecognized. It therefore compares the principles and ideas of state jurisdiction with those of mediation, arguing that mediation is the conflict resolution process that best realizes its ideas and principles under the theory of recognition. Applying Axel Honneth's theory of recognition to the conventional model of jurisdiction, it notes that one party ends up dissatisfied with the ruling, while mediation seeks a balance of satisfactions. Owing to this balance and convergence, it is believed that, when mediation is used, the denials of recognition suffered by the parties are dismantled.
Baptista, Carlos Alberto; Loureiro, Sonia Regina; de Lima Osório, Flávia; Zuardi, Antonio Waldo; Magalhães, Pedro V; Kapczinski, Flávio; Filho, Alaor Santos; Freitas-Ferrari, Maria Cecília; Crippa, José Alexandre S
Despite the fact that public speaking is a common academic activity and that social phobia has been associated with lower educational achievement and impaired academic performance, little research has examined the prevalence of social phobia in college students. The aim of this study was to evaluate the prevalence of social phobia in a large sample of Brazilian college students and to examine the academic impact of this disorder. The Social Phobia Inventory (SPIN) and the MINI-SPIN, used as the indicator of social phobia in the screening phase, were applied to 2319 randomly selected students from two Brazilian universities. For the second phase (diagnostic confirmation), four psychiatrists and one clinical psychologist administered the SCID-IV to subjects with MINI-SPIN scores of 6 or higher. The prevalence of social phobia among the university students was 11.6%. Women with social phobia had significantly lower grades than those without the disorder. Fear of public speaking was the most common social fear. Only two of the 237 students with social phobia (0.8%) had previously received a diagnosis of social phobia and were under treatment. Social phobia comorbidities were not evaluated in this study. The methods of assessment employed by the universities (written exams) may mask the presence of social phobia. This was not a population-based study, and thus the results are not generalizable to the entire population with social phobia. Preventive strategies are recommended to reduce the under-recognition and the adverse impact of social phobia on academic performance and overall quality of life of university students. Copyright © 2011 Elsevier B.V. All rights reserved.
Daar, Marwan; Wilson, Hugh R
With a few exceptions, previous studies have explored masking using either a backward mask or a common onset trailing mask, but not both. In a series of experiments, we demonstrate the use of faces in central visual field as a viable method to study the relationship between these two types of mask schedule. We tested observers in a two alternative forced choice face identification task, where both target and mask comprised synthetic faces, and show that a simple model can successfully predict masking across a variety of masking schedules ranging from a backward mask to a common onset trailing mask and a number of intermediate variations. Our data are well accounted for by a window of sensitivity to mask interference that is centered at around 100 ms. Copyright © 2015 Elsevier Ltd. All rights reserved.
Tan, Zheng-Hua; Kraljevski, Ivan
This paper presents a method that combines variable frame length and rate analysis for speech recognition in noisy environments, together with an investigation of the effect of different frame lengths on speech recognition performance. The method adopts frame selection using an a posteriori signal-to-noise ratio (SNR) weighted energy distance and increases the length of the selected frames according to the number of non-selected preceding frames. It assigns a higher frame rate and a normal frame length to a rapidly changing and high SNR region of a speech signal, and a lower frame rate and an increased frame length to a steady or low SNR region. The speech recognition results show that the proposed variable frame rate and length method outperforms fixed frame rate and length analysis, as well as standalone variable frame rate analysis, in terms of noise-robustness.
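The selection logic described above, accumulating a frame-to-frame distance and emitting a frame when it crosses a threshold so that fast-changing regions get a higher frame rate, can be sketched as follows. This uses a plain log-energy distance, not the paper's a posteriori SNR weighting, and omits the variable frame length part:

```python
import numpy as np

def select_frames(frames, threshold):
    """Toy variable frame rate selection: accumulate the per-frame
    log-energy distance and emit a frame whenever the accumulator
    crosses the threshold, so rapidly changing regions yield more frames."""
    energies = np.log(np.sum(frames ** 2, axis=1) + 1e-12)
    selected, acc = [0], 0.0
    for t in range(1, len(frames)):
        acc += abs(energies[t] - energies[t - 1])
        if acc >= threshold:
            selected.append(t)
            acc = 0.0
    return selected

steady = np.ones((10, 4))                                   # constant energy
ramp = np.vstack([np.full(4, i + 1.0) for i in range(10)])  # rising energy
few = select_frames(steady, threshold=0.5)
many = select_frames(ramp, threshold=0.5)
```

On the steady signal only the first frame survives, while the rising-energy signal triggers repeated selections, which is the intended rate adaptation.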
Apostolopoulos, George; Tzitzilonis, Vasileios; Kappatos, Vassilios
Disguised face recognition is considered a very challenging and important problem in the face recognition field. A disguised face recognition algorithm is proposed using quaternionic representation. The feature extraction module is accomplished with a new method, decomposing each face image...
Full Text Available The work consists of the reconstruction of the face of the great poet Dante Alighieri through a multidisciplinary approach that combines traditional (manual) techniques, usually used in forensic anthropology, with digital methodologies that take advantage of technologies born in manufacturing and military fields but increasingly applied to the field of cultural heritage. Unable to obtain Dante's original skull, the work started from the data and elements collected by Fabio Frassetto and Giuseppe Sergi, two important anthropologists at the Universities of Bologna and Rome respectively, in an investigation carried out in 1921, the sixth centenary of his death, on the poet's remains in Ravenna. Thanks to this, we have a very accurate description of Dante's bones, including 297 metric data points covering the whole skeleton, scale photographs of the skull in its various norms and of many other bones, as well as a model of the skull subsequently realized by Frassetto. From this information, a geometric reconstruction of Dante Alighieri's skull, including the jaw, was carried out by employing and integrating the instruments and technologies of virtual reality, and the corresponding physical model was realized through rapid prototyping. An important aspect of the work concerns in particular the 3D modelling methodology proposed for the new reconstruction of the jaw (not found in the course of the 1921 recognition), starting from a reference model. The prototyped skull model then serves as the basis for the subsequent stage of facial reconstruction through the traditional techniques of forensic art.
Full Text Available in this area in order to obtain representative results. A sonic probe extensometer was used to monitor the roof and support performances at the experiment sites. Two holes were drilled and instrumented with sonic probe anchors at each site. The first hole was drilled and instrumented at the face before any mining took place, and the second hole was drilled in the middle of the cut-out distance. In order to determine the effect of time on roof deformation, the sites were left unsupported for 48 hours, where...
Most biometric systems employed for human recognition require physical contact with, or close proximity to, a cooperative subject. Far more challenging is the ability to reliably recognize individuals at a distance, when viewed from an arbitrary angle under real-world environmental conditions. Gait and face data are the two biometrics that can be most easily captured from a distance using a video camera. This comprehensive and logically organized text/reference addresses the fundamental problems associated with gait and face-based human recognition, from color and infrared video data that are
Heynen, Martina; Bunnefeld, Nils; Borcherding, Jost
Predation is thought to be one of the main structuring forces in animal communities. However, selective predation is often measured on isolated traits in response to a single predatory species, but only rarely are selective forces on several traits quantified or even compared between different predators naturally occurring in the same system. In the present study, we therefore measured behavioral and morphological traits in young-of-the-year Eurasian perch Perca fluviatilis and compared their selective values in response to the 2 most common predators, adult perch and pike Esox lucius . Using mixed effects models and model averaging to analyze our data, we quantified and compared the selectivity of the 2 predators on the different morphological and behavioral traits. We found that selection on the behavioral traits was higher than on morphological traits and perch predators preyed overall more selectively than pike predators. Pike tended to positively select shallow bodied and nonvigilant individuals (i.e. individuals not performing predator inspection). In contrast, perch predators selected mainly for bolder juvenile perch (i.e. individuals spending more time in the open, more active), which was most important. Our results are to the best of our knowledge the first that analyzed behavioral and morphological adaptations of juvenile perch facing 2 different predation strategies. We found that relative specific predation intensity for the divergent traits differed between the predators, providing some additional ideas why juvenile perch display such a high degree of phenotypic plasticity.
Chai, Qiao; He, Jie
The current study investigated the stage at which Chinese preschoolers started considering recipients' material welfare and minimizing existing inequalities under both noncollaborative and collaborative contexts. Also, it analyzed how they behaved when recipients' material welfare was in conflict with merit or equality rule. Experiment 1 found…
Pantic, Maja; Li, S.; Jain, A.
Facial expression recognition is a process performed by humans or computers, which consists of: 1. Locating faces in the scene (e.g., in an image; this step is also referred to as face detection), 2. Extracting facial features from the detected face region (e.g., detecting the shape of facial
Schafer, Phillip B.
Humans routinely recognize speech in challenging acoustic environments with background music, engine sounds, competing talkers, and other acoustic noise. However, today's automatic speech recognition (ASR) systems perform poorly in such environments. In this dissertation, I present novel methods for ASR designed to approach human-level performance by emulating the brain's processing of sounds. I exploit recent advances in auditory neuroscience to compute neuron-based representations of speech, and design novel methods for decoding these representations to produce word transcriptions. I begin by considering speech representations modeled on the spectrotemporal receptive fields of auditory neurons. These representations can be tuned to optimize a variety of objective functions, which characterize the response properties of a neural population. I propose an objective function that explicitly optimizes the noise invariance of the neural responses, and find that it gives improved performance on an ASR task in noise compared to other objectives. The method as a whole, however, fails to significantly close the performance gap with humans. I next consider speech representations that make use of spiking model neurons. The neurons in this method are feature detectors that selectively respond to spectrotemporal patterns within short time windows in speech. I consider a number of methods for training the response properties of the neurons. In particular, I present a method using linear support vector machines (SVMs) and show that this method produces spikes that are robust to additive noise. I compute the spectrotemporal receptive fields of the neurons for comparison with previous physiological results. To decode the spike-based speech representations, I propose two methods designed to work on isolated word recordings. The first method uses a classical ASR technique based on the hidden Markov model. The second method is a novel template-based recognition scheme that takes
Wu, Q.; Qian, Z.; Dong, D.; Song, E.; Hong, Y. [China University Of Mining and Technology, Beijing (China). Beijing Campus
The formation of a saturated body of coal-water mixture is due to the actions of multiple controlling factors of water source, coal characteristics, potential energy and time. Coal-water burst disaster is characterized by paroxysm, huge energy, short duration, strong explosive force and causing severe damages. Very often it takes place only under special background conditions. In extremely inclined coal seam districts, because the working faces are generally arranged under water-prevention coal pillars, the mining inbreak heights are too near the location of the body of coal-water mixture. Hence the mining activity may induce the occurrence of coal-water burst disaster. Based on the analysis of the disaster mechanism, some effective preventive measures for coal-water burst disaster in coal mines are put forward. 3 refs., 1 fig.
Shao, Ming; Zhang, Yizhe; Fu, Yun
Learning discriminant face representation for pose-invariant face recognition has been identified as a critical issue in visual learning systems. The challenge lies in the drastic changes of facial appearance between the test face and the registered face. To that end, we propose a high-level feature learning framework called "collaborative random faces (RFs)-guided encoders" toward this problem. The contributions of this paper are threefold. First, we propose a novel supervised autoencoder that is able to capture the high-level identity feature despite pose variations. Second, we enrich the identity features by replacing the target values of conventional autoencoders with random signals (RFs in this paper), which are unique for each subject under different poses. Third, we further improve the performance of the framework by incorporating deep convolutional neural network facial descriptors and linking discriminative identity features from different RFs for the augmented identity features. Finally, we conduct face identification experiments on the Multi-PIE database, and face verification experiments on the Labeled Faces in the Wild and YouTube Faces databases, where face recognition rate and verification accuracy with Receiver Operating Characteristic curves are rendered. In addition, discussions of model parameters and connections with the existing methods are provided. These experiments demonstrate that our learning system works fairly well on handling pose variations.
Lei, Zhen; Pietikäinen, Matti; Li, Stan Z
Local feature descriptor is an important module for face recognition and those like Gabor and local binary patterns (LBP) have proven effective face descriptors. Traditionally, the form of such local descriptors is predefined in a handcrafted way. In this paper, we propose a method to learn a discriminant face descriptor (DFD) in a data-driven way. The idea is to learn the most discriminant local features that minimize the difference of the features between images of the same person and maximize that between images from different people. In particular, we propose to enhance the discriminative ability of face representation in three aspects. First, the discriminant image filters are learned. Second, the optimal neighborhood sampling strategy is soft determined. Third, the dominant patterns are statistically constructed. Discriminative learning is incorporated to extract effective and robust features. We further apply the proposed method to the heterogeneous (cross-modality) face recognition problem and learn DFD in a coupled way (coupled DFD or C-DFD) to reduce the gap between features of heterogeneous face images to improve the performance of this challenging problem. Extensive experiments on FERET, CAS-PEAL-R1, LFW, and HFB face databases validate the effectiveness of the proposed DFD learning on both homogeneous and heterogeneous face recognition problems. The DFD improves POEM and LQP by about 4.5 percent on LFW database and the C-DFD enhances the heterogeneous face recognition performance of LBP by over 25 percent.
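The handcrafted baseline that DFD learns to replace, the basic 3x3 local binary pattern, is compact enough to sketch directly: threshold each pixel's eight neighbours at the centre value and pack the bits into a code (a minimal version; the learned filters and sampling strategies of DFD go well beyond this):

```python
import numpy as np

def lbp_codes(img):
    """Basic 3x3 LBP: compare each pixel's 8 neighbours against the
    centre value and pack the comparison bits into an 8-bit code.
    Border pixels are skipped, so the output is 2 smaller per axis."""
    h, w = img.shape
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    centre = img[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= ((neigh >= centre).astype(np.uint8) << bit)
    return codes

flat_codes = lbp_codes(np.ones((4, 4)))  # uniform region: every bit set
spike = np.zeros((3, 3))
spike[1, 1] = 5.0
spike_code = lbp_codes(spike)[0, 0]      # bright centre: no bit set
```

Histograms of these codes over facial patches are the descriptors that both the handcrafted and the learned variants ultimately compare.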
Gentil, Dana; del Campo, Valeria; Henrique Rodrigues da Cunha, Thiago; Henríquez, Ricardo; Garín, Carolina; Ramírez, Cristian; Flores, Marcos; Seeger, Michael
In this work we present a study on the performance of CVD (chemical vapor deposition) graphene coatings grown and transferred on Ni as protection barriers under two scenarios that lead to unwanted metal ion release, microbial corrosion and allergy test conditions. These phenomena have a strong impact in different fields considering nickel (or its alloys) is one of the most widely used metals in industrial and consumer products. Microbial corrosion costs represent fractions of national gross product in different developed countries, whereas Ni allergy is one of the most prevalent allergic conditions in the western world, affecting around 10% of the population. We found that grown graphene coatings act as a protective membrane in biological environments that decreases microbial corrosion of Ni and reduces release of Ni2+ ions (source of Ni allergic contact hypersensitivity) when in contact with sweat. This performance seems not to be connected to the strong orbital hybridization that Ni and graphene interface present, indicating electron transfer might not be playing a main role in the robust response of this nanostructured system. The observed protection from biological environment can be understood in terms of graphene impermeability to transfer Ni2+ ions, which is enhanced for few layers of graphene grown on Ni. We expect our work will provide a new route for application of graphene as a protection coating for metals in biological environments, where current strategies have shown short-term efficiency and have raised health concerns. PMID:29292763
Na, Wondo; Kim, Gibbeum; Kim, Gungu; Han, Woojae; Kim, Jinsook
The current study aimed to evaluate hearing-related changes in terms of speech-in-noise processing, fast-rate speech processing, and working memory, and to identify which of these three factors is significantly affected by age-related hearing loss. One hundred subjects aged 65-84 years participated in the study. They were classified into four groups ranging from normal hearing to moderate-to-severe hearing loss. All the participants were tested for speech perception in quiet and noisy conditions and for speech perception with time alteration in quiet conditions. Forward- and backward-digit span tests were also conducted to measure the participants' working memory. 1) As the level of background noise increased, speech perception scores systematically decreased in all the groups. This pattern was more noticeable in the three hearing-impaired groups than in the normal hearing group. 2) As the speech rate increased, speech perception scores decreased. A significant interaction was found between speed of speech and hearing loss. In particular, sentences compressed by 30% revealed a clear differentiation between moderate and moderate-to-severe hearing loss. 3) Although all the groups showed a longer span on the forward-digit span test than on the backward-digit span test, there was no significant difference as a function of hearing loss. The degree of hearing loss strongly affects recognition of babble-masked and time-compressed speech in the elderly but does not affect working memory. We expect these results to inform appropriate rehabilitation strategies for hearing-impaired elderly people who experience difficulty in communication.
Jessica P.K. Chan
Older adults typically exhibit poorer face recognition compared to younger adults. These recognition differences may be due to underlying age-related changes in eye movement scanning. We examined whether older adults' recognition could be improved by yoking their eye movements to those of younger adults. Participants studied younger and older faces under free viewing conditions (bases), through a gaze-contingent moving window (own), or through a moving window which replayed the eye movements of a base participant (yoked). During the recognition test, participants freely viewed the faces with no viewing restrictions. Own-age recognition biases were observed for older adults in all viewing conditions, suggesting that this effect occurs independently of scanning. Participants in the bases condition had the highest recognition accuracy, and participants in the yoked condition were more accurate than participants in the own condition. Among yoked participants, recognition did not depend on age of the base participant. These results suggest that successful encoding for all participants requires the bottom-up contribution of peripheral information, regardless of the locus of control of the viewer. Although altering the pattern of eye movements did not increase recognition, the amount of sampling of the face during encoding predicted subsequent recognition accuracy for all participants. Increased sampling may confer some advantages for subsequent recognition, particularly for people who have declining memory abilities.
Manifold learning methods have been widely used in machine condition monitoring and fault diagnosis. However, the results reported in these studies focus on machine faults under stable loading and rotational speeds, which does not reflect practical machine operation. A rotating machine always runs under variable speeds and loading, which makes the vibration signal more complicated. To address this concern, NPE (neighborhood preserving embedding) is applied for bearing fault classification. Compared with other algorithms (PCA, LPP, LDA, and ISOP), NPE performs well in feature extraction. Since traditional time-domain signal denoising is both time and memory consuming, we denoise the signal features directly in feature space. Furthermore, NPE and SOM (self-organizing map) are combined to assess bearing degradation performance. Simulation and experiment results validate the effectiveness of the proposed method.
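The NPE step described above has a closed-form formulation: LLE-style reconstruction weights are computed within each sample's neighborhood, and a linear projection is then obtained from a generalized eigenproblem. A minimal NumPy/SciPy sketch of that idea follows; the function name, neighborhood size, and regularization constant are illustrative choices, not values from the paper:

```python
import numpy as np
from scipy.linalg import eigh, solve

def npe(X, n_components=2, k=5, reg=1e-3):
    """Neighborhood Preserving Embedding sketch.

    X: (n_samples, n_features) array of vibration features.
    Returns a (n_features, n_components) linear projection that
    preserves each sample's local reconstruction weights.
    """
    n, d = X.shape
    # 1) k nearest neighbours by Euclidean distance (self excluded)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)
    nbrs = np.argsort(D, axis=1)[:, :k]
    # 2) LLE-style reconstruction weights, each row summing to 1
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[nbrs[i]] - X[i]                 # centred neighbours
        G = Z @ Z.T
        G += reg * np.trace(G) * np.eye(k)    # regularised Gram matrix
        w = solve(G, np.ones(k))
        W[i, nbrs[i]] = w / w.sum()
    # 3) generalized eigenproblem  X^T M X a = lam X^T X a,
    #    with M = (I - W)^T (I - W); keep the smallest eigenvectors
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    vals, vecs = eigh(X.T @ M @ X, X.T @ X + reg * np.eye(d))
    return vecs[:, :n_components]
```

Projected features for classification or SOM-based degradation assessment would then be `Y = X @ npe(X)`.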
Chevet, G; Schlosser, J; Courtois, X; Escourbiac, F; Missirlian, M; Herb, V; Martin, E; Camus, G; Braccini, M
In order to predict the lifetime of carbon fibre composite (CFC) armoured plasma-facing components in magnetic fusion devices, it is necessary to analyse the damage mechanisms and to model the damage propagation under cycling heat loads. At Tore Supra studies have been launched to better understand the damage process of the armoured flat tile elements of the actively cooled toroidal pump limiter, leading to the characterization of the damageable mechanical behaviour of the used N11 CFC material and of the CFC/Cu bond. Up until now the calculations have shown damage developing in the CFC (within the zone submitted to high shear stress) and in the bond (from the free edge of the CFC/Cu interface). Damage is due to manufacturing shear stresses and does not evolve under heat due to stress relaxation. For the ITER divertor, NB31 material has been characterized and the characterization of NB41 is in progress. Finite element calculations show again the development of CFC damage in the high shear stress zones after manufacturing. Stresses also decrease under heat flux so the damage does not evolve. The characterization of the CFC/Cu bond is more complex due to the monoblock geometry, which leads to more scattered stresses. These calculations allow the fabrication difficulties to be better understood and will help to analyse future high heat flux tests on various mock-ups.
Krithika, L. B.; Venkatesh, K.; Rathore, S.; Kumar, M. Harish
Human beings use emotions extensively to convey messages and to interpret them. Emotion detection and face recognition can provide an interface between individuals and technologies, and face recognition is among the most successful applications of recognition analysis. Many techniques have been used to recognize facial expressions and to detect emotion under varying poses. In this paper, we propose an efficient method to recognize facial expressions by tracking face points and the distances between them. The method automatically identifies the observer's face movements and facial expression in an image, capturing different aspects of emotion and facial expression.
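The idea of describing an expression through tracked face points and the distances between them can be sketched as below; the landmark indexing and the inter-ocular normalization are illustrative assumptions, not the authors' exact pipeline:

```python
import numpy as np
from itertools import combinations

def distance_features(landmarks):
    """Expression descriptor from tracked face points.

    landmarks: (n_points, 2) array of pixel coordinates.
    Returns all pairwise point distances, normalized by the
    inter-ocular distance so the descriptor is invariant to
    face scale. The landmark order is an assumed convention.
    """
    L_EYE, R_EYE = 0, 1                       # assumed eye indices
    scale = np.linalg.norm(landmarks[L_EYE] - landmarks[R_EYE])
    d = [np.linalg.norm(landmarks[i] - landmarks[j])
         for i, j in combinations(range(len(landmarks)), 2)]
    return np.array(d) / scale
```

A frame-to-frame change in this vector is what a tracker-based expression method would feed to its classifier.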
Ihsan Abdulhussein Baqer
In this paper, an Artificial Neural Network (ANN) is trained on the patterns of the normal-to-tangential force component ratios at the time of slippage occurrence, so that it can distinguish slippage under different types of load (quasi-static and dynamic) and then generate a feedback signal used as an input to run the actuator. This process is executed without any information about the characteristics of the grasped object, such as weight, surface texture, shape, coefficient of friction, or the type of load exerted on the grasped object. To implement this approach, a new fingertip design is proposed in order to detect slippage in multiple directions between the grasped object and the artificial fingertips. This design is composed of two under-actuated fingers with an actuation system that includes flexible parts (compressive springs). These springs act as a compensator for the grasping force at the time of slippage even though the actuator is stopped. The contact force component ratios can be calculated via a conventional sensor (a Flexiforce sensor) after processing the force data in a Matlab/Simulink program through a specific mathematical model derived according to the mechanism of the artificial finger.
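As a rough illustration of the detection principle (not the authors' network, sensor model, or data), a single logistic unit can be trained on synthetic windows of normal-to-tangential force ratios, where slippage onset shows up as a collapse of the ratio toward the friction limit; the ratio levels, window length, and learning rate are all assumed:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_window(slipping, n=10):
    """Synthetic window of normal/tangential force ratios.
    A stable grasp keeps the ratio high; at slip onset it
    collapses toward the friction limit (values assumed)."""
    base = 1.2 if slipping else 3.0
    return base + 0.2 * rng.standard_normal(n)

# labelled training windows: 0 = stable grasp, 1 = slipping
y = np.array([0] * 200 + [1] * 200)
X = np.array([make_window(bool(s)) for s in y])

# a single logistic unit as a stand-in for the paper's ANN,
# fitted by plain batch gradient descent on the logistic loss
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    g = p - y
    w -= 0.1 * X.T @ g / len(y)
    b -= 0.1 * g.mean()

def slip_alarm(window):
    """Feedback signal for the actuator: True requests a re-grasp."""
    return (window @ w + b) > 0.0
```

In the paper's setup such an alarm would drive the actuator while the compressive springs hold the grasp force during the response.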
Pohl, Rüdiger F; Michalkiewicz, Martha; Erdfelder, Edgar; Hilbig, Benjamin E
According to the recognition-heuristic theory, decision makers solve paired comparisons in which one object is recognized and the other not by recognition alone, inferring that recognized objects have higher criterion values than unrecognized ones. However, success, and thus usefulness, of this heuristic depends on the validity of recognition as a cue, and adaptive decision making, in turn, requires that decision makers are sensitive to it. To this end, decision makers could base their evaluation of the recognition validity either on the selected set of objects (the set's recognition validity), or on the underlying domain from which the objects were drawn (the domain's recognition validity). In two experiments, we manipulated the recognition validity both in the selected set of objects and between domains from which the sets were drawn. The results clearly show that use of the recognition heuristic depends on the domain's recognition validity, not on the set's recognition validity. In other words, participants treat all sets as roughly representative of the underlying domain and adjust their decision strategy adaptively (only) with respect to the more general environment rather than the specific items they are faced with.
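The decision rule and the set-level recognition validity it depends on can be stated compactly. The city-population domain below is only an illustrative example (the classic test bed for this heuristic), not the materials used in these experiments:

```python
import random
from dataclasses import dataclass

@dataclass
class City:
    name: str
    recognized: bool
    population: int   # the criterion, hidden from the decision maker

def recognition_heuristic(a, b):
    """If exactly one object is recognized, infer it has the higher
    criterion value; otherwise the heuristic does not apply (we fall
    back to guessing here)."""
    if a.recognized and not b.recognized:
        return a
    if b.recognized and not a.recognized:
        return b
    return random.choice([a, b])

def recognition_validity(pairs):
    """Proportion of applicable pairs (one recognized, one not) in
    which the recognized object really has the higher criterion
    value -- the quantity manipulated at set vs. domain level."""
    applicable = [(a, b) for a, b in pairs if a.recognized != b.recognized]
    correct = sum(
        1 for a, b in applicable
        if (a if a.recognized else b).population
           == max(a.population, b.population)
    )
    return correct / len(applicable) if applicable else float("nan")
```

The experiments' contrast amounts to computing this validity once over the presented set and once over the full domain the set was drawn from.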
Background: Coral bleaching can be defined as the loss of symbiotic zooxanthellae and/or their photosynthetic pigments from their cnidarian host. This major disturbance of reef ecosystems is principally induced by increases in water temperature. Since the beginning of the 1980s and the onset of global climate change, this phenomenon has been occurring at increasing rates and scales, and with increasing severity. Several studies have been undertaken in the last few years to better understand the cellular and molecular mechanisms of coral bleaching, but the jigsaw puzzle is far from complete, especially concerning the early events leading to symbiosis breakdown. The aim of the present study was to find molecular actors involved early in the mechanism leading to symbiosis collapse. Results: In our experimental procedure, one set of Pocillopora damicornis nubbins was subjected to a gradual increase of water temperature from 28°C to 32°C over 15 days. A second control set was kept at constant temperature (28°C). The differentially expressed mRNA between the stressed states (sampled just before the onset of bleaching) and the non-stressed states (control) were isolated by Suppression Subtractive Hybridization. Transcription rates of the most interesting genes (considering their putative function) were quantified by Q-RT-PCR, which revealed a significant decrease in transcription of two candidates six days before bleaching. RACE-PCR experiments showed that one of them (PdC-Lectin) contained a C-type lectin domain specific for mannose. Immunolocalisation demonstrated that this host gene mediates molecular interactions between the host and the symbionts, suggesting a putative role in zooxanthellae acquisition and/or sequestration. The second gene corresponds to a gene putatively involved in calcification processes (Pdcyst-rich). Its down-regulation could reflect a trade-off mechanism leading to the arrest of the mineralization process under stress