WorldWideScience

Sample records for human face images

  1. Modeling human faces with multi-image photogrammetry

    Science.gov (United States)

    D'Apuzzo, Nicola

    2002-03-01

Modeling and measurement of the human face have been increasing in importance for various purposes. Laser scanning, coded-light range digitizers, image-based approaches and digital stereo photogrammetry are the methods currently employed in medical applications, computer animation, video surveillance, teleconferencing and virtual reality to produce three-dimensional computer models of the human face. The requirements differ depending on the application; ours are primarily high measurement accuracy and automation of the process. The method presented in this paper is based on multi-image photogrammetry; the equipment, the method and the results achieved with this technique are described here. The process is composed of five steps: acquisition of multiple images, calibration of the system, establishment of corresponding points in the images, computation of their 3-D coordinates and generation of a surface model. The images captured by five CCD cameras arranged in front of the subject are digitized by a frame grabber. The complete system is calibrated using a reference object with coded target points, which can be measured fully automatically. To facilitate the establishment of correspondences in the images, texture in the form of random patterns can be projected onto the face from two directions. The multi-image matching process, based on a geometrically constrained least-squares matching algorithm, produces a dense set of corresponding points in the five images. Neighborhood filters are then applied to the matching results to remove errors. After filtering, the three-dimensional coordinates of the matched points are computed by forward intersection using the results of the calibration process; the achieved mean accuracy is about 0.2 mm in the sagittal direction and about 0.1 mm in the lateral direction. The last step is the generation of a surface model from the point cloud and the application of smoothing filters.
Moreover, a
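The forward-intersection step described in this abstract can be sketched as a linear (DLT-style) triangulation: each calibrated camera contributes two linear constraints on the unknown 3-D point, and the least-squares solution comes from an SVD. The toy camera matrices below are illustrative, not the authors' calibration.

```python
import numpy as np

def forward_intersect(proj_mats, img_pts):
    """Linear forward intersection: recover one 3-D point from its
    projections in several calibrated cameras.

    proj_mats : list of 3x4 camera projection matrices (from calibration)
    img_pts   : list of matching (u, v) image coordinates, one per camera
    """
    rows = []
    for P, (u, v) in zip(proj_mats, img_pts):
        # Each view contributes two linear constraints on the homogeneous point.
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.asarray(rows)
    # Least-squares solution: right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two toy cameras: identity intrinsics, second camera shifted along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, 0.3, 2.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
X_est = forward_intersect([P1, P2], [x1, x2])
```

With five cameras, as in the paper, the same function simply stacks ten constraint rows instead of four, which is what makes the redundant multi-camera setup more accurate than a stereo pair.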

  2. Human Face as human single identity

    OpenAIRE

    Warnars, Spits

    2014-01-01

The human face, as a physical means of recognition, can serve as a unique identity: a computer can recognize a person by transforming the face, via a face-recognition algorithm, into a simple text number that can act as a primary key for that person. Establishing the human face as a single identity will require building a huge worldwide central face database, in which faces from around the world are recorded over time and from generation to generation. The database architecture will be divided into human face image ...

  3. Our Faces in the Dog's Brain: Functional Imaging Reveals Temporal Cortex Activation during Perception of Human Faces.

    Directory of Open Access Journals (Sweden)

    Laura V Cuaya

Dogs have a rich social relationship with humans. One fundamental aspect of it is how dogs pay close attention to human faces in order to guide their behavior, for example, by recognizing their owner and his/her emotional state using visual cues. It is well known that humans have specific brain regions for the processing of other human faces, yet it is unclear how dogs' brains process human faces. For this reason, our study focuses on describing the brain correlates of perception of human faces in dogs using functional magnetic resonance imaging (fMRI). We trained seven domestic dogs to remain awake, still and unrestrained inside an MRI scanner. We used a visual stimulation paradigm with a block design to compare activity elicited by human faces against everyday objects. Brain activity related to the perception of faces changed significantly in several brain regions, but mainly in the bilateral temporal cortex. The opposite contrast (i.e., everyday objects against human faces) showed no significant change in brain activity. The temporal cortex is part of the ventral visual pathway, and our results are consistent with reports in other species, such as primates and sheep, that suggest a high degree of evolutionary conservation of this pathway for face processing. This study introduces the temporal cortex as a candidate region for processing human faces, a pillar of social cognition in dogs.

  4. A robust human face detection algorithm

    Science.gov (United States)

    Raviteja, Thaluru; Karanam, Srikrishna; Yeduguru, Dinesh Reddy V.

    2012-01-01

Human face detection plays a vital role in many applications, such as video surveillance, face image database management, and human-computer interfaces. This paper proposes a robust algorithm for face detection in still color images that works well even in a crowded environment. The algorithm uses a conjunction of skin color histogram analysis, morphological processing and geometrical analysis to detect human faces. To reinforce the accuracy of face detection, we further identify mouth and eye regions to establish the presence or absence of a face in a particular region of interest.
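The skin-color and morphological-processing stages of such a detector can be sketched in a few lines. The YCbCr thresholds below are common textbook values, not the histogram learned in the paper, and the 3x3 majority filter is only a crude stand-in for true morphological opening.

```python
import numpy as np

def skin_mask(rgb):
    """Pixelwise skin detection with fixed Cb/Cr thresholds (illustrative)."""
    r, g, b = [rgb[..., i].astype(float) for i in range(3)]
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)

def clean(mask):
    """Crude stand-in for morphological opening: keep a pixel only if a
    majority of its 3x3 neighbourhood is also skin."""
    padded = np.pad(mask.astype(int), 1)
    acc = sum(padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
              for dy in range(3) for dx in range(3))
    return acc >= 5

img = np.zeros((8, 8, 3), dtype=np.uint8)
img[2:6, 2:6] = (200, 120, 90)   # skin-coloured square ("face" candidate)
img[0, 0] = (210, 130, 95)       # isolated skin-coloured speckle (noise)
mask = clean(skin_mask(img))     # square survives, speckle is removed
```

The surviving connected regions would then be passed to the geometrical analysis and the mouth/eye verification stages described in the abstract.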

  5. Recognition of human face images by the free flying wasp Vespula vulgaris

    Directory of Open Access Journals (Sweden)

    Aurore Avarguès-Weber

    2017-08-01

The capacity to recognize perceptually similar complex visual stimuli such as human faces has classically been thought to require a large primate and/or mammalian brain with neurobiological adaptations. However, recent work suggests that the relatively small brain of a paper wasp, Polistes fuscatus, possesses specialized face-processing capabilities. In parallel, the honeybee, Apis mellifera, has been shown to rely on configural learning for extensive visual learning, thus converging with primate visual processing. The honeybee may therefore be able to recognize human faces, showing sophisticated learning performance due to a foraging lifestyle that involves visiting and memorizing many flowers. We investigated the visual capacities of the widespread invasive wasp Vespula vulgaris, which is unlikely to have any specialization for face processing. Freely flying individual wasps were trained in an appetitive-aversive differential conditioning procedure to discriminate between perceptually similar human face images from a standard face recognition test. The wasps could then recognize the target face from novel dissimilar or similar human faces, but showed a significant drop in performance when the stimuli were rotated by 180°, paralleling results obtained with a similar protocol in honeybees. This result confirms that a general visual system can likely solve complex recognition tasks (the first stage in evolving visual expertise for face recognition), even in the absence of neurobiological or behavioral specialization.

  6. Static human face recognition using artificial neural networks

    International Nuclear Information System (INIS)

    Qamar, R.; Shah, S.H.; Javed-ur-Rehman

    2003-01-01

This paper presents a novel method of human face recognition using digital computers. A digital PC camera is used to capture BMP images of human faces. An artificial neural network using the back-propagation algorithm is developed as the recognition engine, with the BMP face images serving as its input patterns. A software package, 'Face Recognition', has been developed to recognize the human faces for which it is trained. Once the neural network is trained on the face patterns, the software is able to detect and recognize them with a success rate of about 97%. (author)
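A back-propagation recognition engine of the kind described can be sketched with a tiny numpy network. The "face vectors" here are synthetic stand-ins for flattened BMP pixels, and the layer sizes and learning rate are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for face images: 3 "persons", 4 noisy 8-pixel samples each.
protos = rng.uniform(0, 1, (3, 8))
X = np.repeat(protos, 4, axis=0) + rng.normal(0, 0.02, (12, 8))
Y = np.eye(3)[np.repeat(np.arange(3), 4)]        # one-hot identities

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer, trained with plain backpropagation on squared error.
W1 = rng.normal(0, 0.5, (8, 6)); b1 = np.zeros(6)
W2 = rng.normal(0, 0.5, (6, 3)); b2 = np.zeros(3)
lr = 0.1
for _ in range(5000):
    H = sigmoid(X @ W1 + b1)                     # forward pass
    O = sigmoid(H @ W2 + b2)
    dO = (O - Y) * O * (1 - O)                   # output-layer delta
    dH = (dO @ W2.T) * H * (1 - H)               # hidden-layer delta
    W2 -= lr * H.T @ dO; b2 -= lr * dO.sum(0)    # gradient steps
    W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(0)

pred = O.argmax(1)
acc = (pred == Y.argmax(1)).mean()               # training accuracy
```

Real systems of this era fed downsampled grayscale pixels (or PCA coefficients) into exactly this kind of network; only the input dimensionality changes.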

  7. Retinotopy and attention to the face and house images in the human visual cortex.

    Science.gov (United States)

    Wang, Bin; Yan, Tianyi; Ohno, Seiichiro; Kanazawa, Susumu; Wu, Jinglong

    2016-06-01

Attentional modulation of neural activity in human visual areas has been well demonstrated. However, the retinotopic activity driven by face and house images, and by attention to such images, remains unknown. In the present study, we used images of faces and houses to estimate retinotopic activity in three conditions: driven by both the images and attention to them, driven by attention alone, and driven by the images alone. Generally, our results show that face and house images produced similar retinotopic activity in visual areas, which was only observed in the attention + stimulus and attention conditions, but not in the stimulus condition. The fusiform face area (FFA) responded to faces presented on the horizontal meridian, whereas the parahippocampal place area (PPA) rarely responded to houses at any visual field location. We further analyzed the amplitudes of the neural responses to the target wedge. In V1, V2, V3, V3A, lateral occipital area 1 (LO-1), and hV4, the neural responses to the attended target wedge were significantly greater than those to the unattended target wedge. However, in LO-2, ventral occipital areas 1 and 2 (VO-1 and VO-2), FFA and PPA, the differences were not significant. We propose that these areas likely have large fields of attentional modulation for face and house images and respond to both the target wedge and the background stimuli. In addition, we propose that the absence of retinotopic activity in the stimulus condition might imply no perceived difference between the target wedge and the background stimuli.

  8. Decoding of faces and face components in face-sensitive human visual cortex

    Directory of Open Access Journals (Sweden)

    David F Nichols

    2010-07-01

A great challenge for visual neuroscience is to understand how faces are encoded and represented within the human brain. Here we show evidence from functional magnetic resonance imaging (fMRI) for spatially distributed processing of the whole face and its components in face-sensitive human visual cortex. We used multi-class linear pattern classifiers constructed with a leave-one-scan-out verification procedure to discriminate brain activation patterns elicited by whole faces, the internal features alone, and the external head outline alone. Our results suggest that whole faces are represented disproportionately in the fusiform cortex (FFA), whereas the building blocks of faces are represented disproportionately in occipitotemporal cortex (OFA). Faces and face components may therefore be organized with functional clustering within both the FFA and OFA, but with specialization for face components in the OFA and for the whole face in the FFA.

  9. Why the long face? The importance of vertical image structure for biological "barcodes" underlying face recognition.

    Science.gov (United States)

    Spence, Morgan L; Storrs, Katherine R; Arnold, Derek H

    2014-07-29

Humans are experts at face recognition, yet the mechanisms underlying this complex capacity are not fully understood. Recently, it has been proposed that face recognition is supported by a coarse-scale analysis of visual information contained in horizontal bands of contrast distributed along the vertical image axis: a biological facial "barcode" (Dakin & Watt, 2009). A critical prediction of the facial barcode hypothesis is that the distribution of image contrast along the vertical axis will be more important for face recognition than image distributions along the horizontal axis. Using a novel paradigm involving dynamic image distortions, we present a series of experiments examining impairments in famous-face recognition caused by selectively disrupting image distributions along the vertical or horizontal image axes. Results show that disrupting the image distribution along the vertical image axis is more disruptive for recognition than matched distortions along the horizontal axis. Consistent with the facial barcode hypothesis, these results suggest that human face recognition relies disproportionately on appropriately scaled distributions of image contrast along the vertical image axis. © 2014 ARVO.
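The "barcode" idea (structure carried by horizontal bands along the vertical axis) can be illustrated by collapsing an image along each axis and comparing the resulting 1-D profiles. The striped synthetic image below is purely illustrative; faces have a similar, if noisier, band structure.

```python
import numpy as np

# Synthetic "face-like" pattern: horizontal bands of differing luminance,
# standing in for the eyebrow/eye and nose/mouth contrast bands of a face.
img = np.zeros((64, 64))
img[8:16] = 1.0    # upper contrast band
img[28:34] = 0.7   # lower contrast band

row_means = img.mean(axis=1)   # profile along the vertical axis: the "barcode"
col_means = img.mean(axis=0)   # profile along the horizontal axis

# For band-structured images, nearly all variation lives in the row profile,
# so distortions along the vertical axis destroy far more information.
```

A distortion that scrambles rows flattens `row_means` and wrecks the barcode, while scrambling columns leaves it untouched, which is the asymmetry the experiments above probe.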

  10. DisFace: A Database of Human Facial Disorders

    Directory of Open Access Journals (Sweden)

    Paramjit Kaur

    2017-10-01

The face is an integral part of the human body, through which an individual communicates in society; its importance is highlighted by the fact that a person deprived of a face cannot sustain a normal life in the social world. In the past few decades, the human face has drawn the attention of many researchers, whether in relation to facial anthropometry, facial disorders, face transplantation or face reconstruction. Several studies have also shown correlations between neuropsychiatric disorders and the human face, and how face recognition abilities are affected by these disorders. Several databases currently exist that contain facial images of individuals captured from different sources; their advantage is that the images can be used for testing and training purposes. However, to date no database exists that provides not only facial images of individuals, but also the literature concerning the human face, a list of the genes controlling the human face, a list of facial disorders, and the various tools that work on facial images. Thus, the current research aims at developing a database of human facial disorders using a bioinformatics approach. The database will contain information about facial diseases, medications, symptoms, findings, etc. The information will be extracted from several other databases, such as OMIM, PubChem, Radiopedia, Medline Plus and FDA, and links to them will also be provided. Initially, diseases specific to the human face were obtained from an already published corpus of literature using a text-mining approach; the becas tool was used for this task. A dataset will be created and stored in the form of a database containing a cross-referenced index of human facial diseases, medications, symptoms, signs, etc. Thus, a database on the human face with complete existing information about human facial disorders will be developed. The novelty of the

  11. Modified GrabCut for human face segmentation

    Directory of Open Access Journals (Sweden)

    Dina Khattab

    2014-12-01

GrabCut is a segmentation technique for 2D still color images based mainly on iterative energy minimization. The energy function of the GrabCut optimization algorithm relies chiefly on a probabilistic model of pixel color distribution; GrabCut may therefore produce unacceptable results when the contrast between foreground and background colors is low. Accordingly, this paper presents a modified GrabCut technique for segmenting human faces from images of full humans. The modified technique introduces a new face location model into the GrabCut energy minimization function, in addition to the existing color model. The location model considers the distribution of pixel distances from the silhouette boundary of a 3D morphable head model fitted to the image. Experimental results demonstrate that the modified GrabCut achieves better segmentation robustness and accuracy than the original GrabCut for human face segmentation.
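The role of the added location term can be sketched on a toy 1-D strip of pixels: where the colour model alone is ambiguous, a distance-based penalty (foreground penalised outside the fitted silhouette, background penalised inside it) tips the decision. This is a much-simplified per-pixel decision; the actual method feeds such unary terms into iterative graph-cut energy minimization, and the radius and weight below are illustrative values, not the paper's.

```python
import numpy as np

R, lam = 3.0, 0.1  # illustrative silhouette radius (pixels) and location weight

def label_pixels(nll_fg, nll_bg, dist):
    """GrabCut-style unary decision extended with a location term:
    foreground energy grows with distance outside the fitted head
    silhouette, background energy grows with depth inside it."""
    e_fg = nll_fg + lam * np.maximum(0.0, dist - R)
    e_bg = nll_bg + lam * np.maximum(0.0, R - dist)
    return e_fg < e_bg

nll_fg = np.array([1.05, 1.0, 0.9, 1.2])  # colour negative log-likelihoods:
nll_bg = np.array([1.0, 1.0, 1.0, 1.0])   # the colour model alone is ambiguous
dist = np.array([0.0, 1.0, 5.0, 10.0])    # distance from the fitted silhouette
fg = label_pixels(nll_fg, nll_bg, dist)   # near pixels kept, far ones dropped
```

The first two pixels are labelled foreground despite slightly unfavourable colour scores, which is exactly the low-contrast failure mode the location model is meant to fix.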

  12. Face Recognition in Humans and Machines

    Science.gov (United States)

    O'Toole, Alice; Tistarelli, Massimo

The study of human face recognition by psychologists and neuroscientists has run parallel to the development of automatic face recognition technologies by computer scientists and engineers. In both cases, there are analogous steps of data acquisition, image processing, and the formation of representations that can support the complex and diverse tasks we accomplish with faces. These processes can be understood and compared in the context of their neural and computational implementations. In this chapter, we present the essential elements of face recognition by humans and machines, taking a perspective that spans psychological, neural, and computational approaches. From the human side, we overview the methods and techniques used in the neurobiology of face recognition, the underlying neural architecture of the system, the role of visual attention, and the nature of the representations that emerge. From the computational side, we discuss face recognition technologies and the strategies they use to achieve robust operation across variations in viewing parameters. Finally, we conclude the chapter with a look at some recent studies that compare human and machine performance at face recognition.

  13. Efficient human face detection in infancy.

    Science.gov (United States)

    Jakobsen, Krisztina V; Umstead, Lindsey; Simpson, Elizabeth A

    2016-01-01

    Adults detect conspecific faces more efficiently than heterospecific faces; however, the development of this own-species bias (OSB) remains unexplored. We tested whether 6- and 11-month-olds exhibit OSB in their attention to human and animal faces in complex visual displays with high perceptual load (25 images competing for attention). Infants (n = 48) and adults (n = 43) passively viewed arrays containing a face among 24 non-face distractors while we measured their gaze with remote eye tracking. While OSB is typically not observed until about 9 months, we found that, already by 6 months, human faces were more likely to be detected, were detected more quickly (attention capture), and received longer looks (attention holding) than animal faces. These data suggest that 6-month-olds already exhibit OSB in face detection efficiency, consistent with perceptual attunement. This specialization may reflect the biological importance of detecting conspecific faces, a foundational ability for early social interactions. © 2015 Wiley Periodicals, Inc.

  14. A novel polar-based human face recognition computational model

    Directory of Open Access Journals (Sweden)

    Y. Zana

    2009-07-01

Motivated by a recently proposed biologically inspired face recognition approach, we investigated the relation between human behavior and a computational model based on Fourier-Bessel (FB) spatial patterns. We measured human recognition performance for FB-filtered face images using an 8-alternative forced-choice method. Test stimuli were generated by converting the images from the spatial to the FB domain, filtering the resulting coefficients with a band-pass filter, and finally taking the inverse FB transform of the filtered coefficients. The performance of the computational models was tested using a simulation of the psychophysical experiment. In the FB model, face images were first filtered by simulated V1-type neurons and later analyzed globally for their content of FB components. In general, human contrast sensitivity was higher for radially than for angularly filtered images, but both functions peaked in the 11.3-16 frequency interval. The FB-based model showed similar behavior with regard to peak position and relative sensitivity, but had a wider frequency bandwidth and a narrower response range. The response patterns of two alternative models, based on local FB analysis and on raw luminance, diverged strongly from the human behavior patterns. These results suggest that human performance can be constrained by the type of information conveyed by polar patterns, and consequently that humans might use FB-like spatial patterns in face processing.

  15. Discrimination between smiling faces: Human observers vs. automated face analysis.

    Science.gov (United States)

    Del Líbano, Mario; Calvo, Manuel G; Fernández-Martín, Andrés; Recio, Guillermo

    2018-05-11

    This study investigated (a) how prototypical happy faces (with happy eyes and a smile) can be discriminated from blended expressions with a smile but non-happy eyes, depending on type and intensity of the eye expression; and (b) how smile discrimination differs for human perceivers versus automated face analysis, depending on affective valence and morphological facial features. Human observers categorized faces as happy or non-happy, or rated their valence. Automated analysis (FACET software) computed seven expressions (including joy/happiness) and 20 facial action units (AUs). Physical properties (low-level image statistics and visual saliency) of the face stimuli were controlled. Results revealed, first, that some blended expressions (especially, with angry eyes) had lower discrimination thresholds (i.e., they were identified as "non-happy" at lower non-happy eye intensities) than others (especially, with neutral eyes). Second, discrimination sensitivity was better for human perceivers than for automated FACET analysis. As an additional finding, affective valence predicted human discrimination performance, whereas morphological AUs predicted FACET discrimination. FACET can be a valid tool for categorizing prototypical expressions, but is currently more limited than human observers for discrimination of blended expressions. Configural processing facilitates detection of in/congruence(s) across regions, and thus detection of non-genuine smiling faces (due to non-happy eyes). Copyright © 2018 Elsevier B.V. All rights reserved.

  16. Face detection on distorted images using perceptual quality-aware features

    Science.gov (United States)

    Gunasekar, Suriya; Ghosh, Joydeep; Bovik, Alan C.

    2014-02-01

We quantify the degradation in performance of a popular and effective face detector when human-perceived image quality is degraded by distortions due to additive white Gaussian noise, Gaussian blur or JPEG compression. It is observed that, within a certain range of perceived image quality, a modest increase in image quality can drastically improve face detection performance. These results can be used to guide resource or bandwidth allocation in a communication/delivery system that is associated with face detection tasks. A new face detector based on QualHOG features is also proposed that augments face-indicative HOG features with perceptual quality-aware spatial Natural Scene Statistics (NSS) features, yielding improved tolerance against image distortions. The new detector provides statistically significant improvements over a strong baseline on a large database of face images representing a wide range of distortions. To facilitate this study, we created a new Distorted Face Database, containing face and non-face patches from images impaired by a variety of common distortion types and levels. This new dataset is available for download and further experimentation at www.ideal.ece.utexas.edu/~suriya/DFD/.
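The QualHOG idea (concatenating shape-indicative HOG features with quality-aware NSS features) can be sketched as follows. Real HOG uses many spatial cells and block normalization, and real NSS models fit a generalized Gaussian to locally normalized coefficients; both are drastically simplified here for illustration.

```python
import numpy as np

def hog_like(img, n_bins=9):
    """Coarse HOG-style descriptor: one gradient-orientation histogram over
    the whole patch (real HOG uses many cells; this is only a sketch)."""
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)          # unsigned orientation
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-8)

def nss_features(img):
    """Quality-aware NSS sketch: statistics of mean-subtracted,
    contrast-normalised (MSCN-like) coefficients, using global stats
    as a cheap stand-in for local normalisation."""
    mscn = (img - img.mean()) / (img.std() + 1e-8)
    return np.array([mscn.mean(), mscn.var(), np.abs(mscn).mean()])

patch = np.tile(np.linspace(0, 1, 16), (16, 1))      # toy image patch
feat = np.concatenate([hog_like(patch), nss_features(patch)])  # "QualHOG"-style
```

A linear SVM trained on such concatenated vectors is then what separates face from non-face patches across distortion levels.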

  17. A new method for face detection in colour images for emotional bio-robots

    Institute of Scientific and Technical Information of China (English)

    HAPESHI; Kevin

    2010-01-01

Emotional bio-robots have become a hot research topic over the last two decades. Though some progress has been made in the research, design and development of various emotional bio-robots, few of them can be used in practical applications. The study of emotional bio-robots demands multi-disciplinary cooperation, involving computer science, artificial intelligence, 3D computation, engineering system modelling, analysis and simulation, bionics engineering, automatic control, image processing and pattern recognition. Among these, face detection belongs to image processing and pattern recognition. An emotional robot must have the ability to recognize various objects; in particular, it is very important for a bio-robot to be able to recognize human faces in an image. In this paper, a face detection method is proposed for identifying any human faces in colour images using a human skin model and an eye detection method. First, the method detects skin regions from the input colour image after normalizing its luminance. Then, all face candidates are identified using an eye detection method. Compared with existing algorithms, this method relies only on the colour and geometrical data of the human face, rather than on training datasets. Experimental results show that this method is effective and fast, and that it can be applied to the development of an emotional bio-robot with further improvements in speed and accuracy.
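The luminance-normalization step that precedes skin detection in this method can be sketched as a simple global gain correction. The target value and weights are standard Rec. 601 luma coefficients, not necessarily the values used in the paper.

```python
import numpy as np

def normalize_luminance(rgb, target=128.0):
    """Scale the image so its mean luminance hits a fixed target: a simple
    stand-in for luminance normalisation before skin-region detection."""
    lum = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    gain = target / (lum.mean() + 1e-8)
    return np.clip(rgb.astype(float) * gain, 0, 255)

dark = np.full((4, 4, 3), 40.0)          # underexposed uniform image
out = normalize_luminance(dark)          # brightened toward the target
lum_out = 0.299 * out[..., 0] + 0.587 * out[..., 1] + 0.114 * out[..., 2]
```

Normalizing first makes the subsequent fixed skin-colour model far less sensitive to overall lighting, which is why training data can be avoided.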

  18. Head pose estimation from a 2D face image using 3D face morphing with depth parameters.

    Science.gov (United States)

    Kong, Seong G; Mbouna, Ralph Oyini

    2015-06-01

This paper presents a method for estimating head pose angles from a single 2D face image using a 3D face model morphed from a reference face model. A reference model refers to a 3D face of a person of the same ethnicity and gender as the query subject. The proposed scheme minimizes the disparity between the set of prominent facial features on the query face image and the corresponding points on the 3D face model to estimate the head pose angles. The 3D face model is morphed from the reference model to be more specific to the query face in terms of the depth error at the feature points. The morphing process produces a 3D face model more specific to the query image when multiple 2D face images of the query subject are available for training. The proposed morphing process is computationally efficient, since the depth of the 3D face model is adjusted by a scalar depth parameter at each feature point. Optimal depth parameters are found by minimizing the disparity between the 2D features of the query face image and the corresponding features of the morphed 3D model projected onto 2D space. The proposed head pose estimation technique was evaluated on two benchmark databases: 1) the USF Human-ID database for depth estimation and 2) the Pointing'04 database for head pose estimation. Experimental results demonstrate that head pose estimation errors in nodding and shaking angles are as low as 7.93° and 4.65° on average for a single 2D input face image.
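The core disparity-minimization idea can be sketched with a toy landmark model and a one-parameter (yaw-only) search under orthographic projection. The paper's method handles full perspective projection, pitch as well as yaw, and per-landmark depth morphing; everything below (landmark coordinates, grid search) is illustrative.

```python
import numpy as np

def yaw_matrix(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def project(model, theta):
    """Orthographic projection of 3-D feature points after a yaw rotation."""
    return (model @ yaw_matrix(theta).T)[:, :2]

# Toy 3-D "facial landmarks": eye corners, nose tip, mouth corners.
model = np.array([[-30, 30, 0], [30, 30, 0], [0, 0, 25],
                  [-20, -30, 5], [20, -30, 5]], dtype=float)
true_yaw = np.deg2rad(12.0)
observed = project(model, true_yaw)       # 2-D features of the "query" face

# Estimate the pose by minimizing the 2-D disparity (SSD) over a yaw grid.
grid = np.deg2rad(np.arange(-45.0, 45.5, 0.5))
errs = [np.sum((project(model, t) - observed) ** 2) for t in grid]
est_yaw = grid[int(np.argmin(errs))]
```

In the actual method, the same disparity objective is additionally minimized over the scalar depth parameters at the feature points, which is what morphs the reference model toward the query subject.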

  19. Human face processing is tuned to sexual age preferences

    DEFF Research Database (Denmark)

    Ponseti, J; Granert, O; van Eimeren, T

    2014-01-01

Human faces can motivate nurturing behaviour or sexual behaviour when adults see a child or an adult face, respectively. This suggests that face processing is tuned to detecting age cues of sexual maturity that stimulate the appropriate reproductive behaviour: either caretaking or mating. In paedophilia, sexual attraction is directed to sexually immature children. We therefore hypothesized that brain networks that are normally tuned to mature faces of the preferred gender show an abnormal tuning to sexually immature faces in paedophilia. Here, we use functional magnetic resonance imaging (fMRI) to test directly for the existence of a network that is tuned to facial cues of sexual maturity. During fMRI, participants sexually attracted to either adults or children were exposed to various face images. In individuals attracted to adults, adult faces activated several brain regions significantly more...

  20. A specialized face-processing model inspired by the organization of monkey face patches explains several face-specific phenomena observed in humans.

    Science.gov (United States)

    Farzmahdi, Amirhossein; Rajaei, Karim; Ghodrati, Masoud; Ebrahimpour, Reza; Khaligh-Razavi, Seyed-Mahdi

    2016-04-26

Converging reports indicate that face images are processed through specialized neural networks in the brain, i.e. face patches in monkeys and the fusiform face area (FFA) in humans. These studies were designed to find out how faces are processed in the visual system compared to other objects, yet the underlying mechanism of face processing has not been completely revealed. Here, we show that a hierarchical computational model, inspired by electrophysiological evidence on face processing in primates, is able to generate representational properties similar to those observed in monkey face patches (posterior, middle and anterior patches). Since the most important goal of sensory neuroscience is linking neural responses with behavioral outputs, we test whether the proposed model, which is designed to account for neural responses in monkey face patches, can also predict well-documented behavioral face phenomena observed in humans. We show that the proposed model reproduces several cognitive face effects, such as the composite face effect and the idea of canonical face views. Our model provides insights into the underlying computations that transfer visual information from posterior to anterior face patches.

  1. Affective attitudes to face images associated with intracerebral EEG source location before face viewing.

    Science.gov (United States)

    Pizzagalli, D; Koenig, T; Regard, M; Lehmann, D

    1999-01-01

We investigated whether different, personality-related affective attitudes are associated with different brain electric field (EEG) sources before any emotional challenge (stimulus exposure). A 27-channel EEG was recorded in 15 subjects during eyes-closed resting. After recording, subjects rated 32 images of human faces for affective appeal. The subjects in the first (i.e., most negative) and fourth (i.e., most positive) quartile of general affective attitude were further analyzed. The EEG data (mean = 25 ± 4.8 s/subject) were subjected to frequency-domain model dipole source analysis (FFT-Dipole-Approximation), resulting in 3-dimensional intracerebral source locations and strengths for the delta-theta, alpha, and beta EEG frequency bands, and for the full-range (1.5-30 Hz) band. Subjects with negative attitude (compared to those with positive attitude) showed the following source locations: more inferior for all frequency bands, more anterior for the delta-theta band, more posterior and more right for the alpha, beta and 1.5-30 Hz bands. One year later, the subjects were asked to rate the face images again. The rating scores for the same face images were highly correlated for all subjects, and original and retest affective mean attitude was highly correlated across subjects. The present results show that subjects with different affective attitudes to face images had different active, cerebral, neural populations in a task-free condition prior to viewing the images. We conclude that the brain functional state which implements affective attitude towards face images as a personality feature exists without elicitors, as a continuously present, dynamic feature of brain functioning. Copyright 1999 Elsevier Science B.V.

  2. Discriminating Projections for Estimating Face Age in Wild Images

    Energy Technology Data Exchange (ETDEWEB)

    Tokola, Ryan A [ORNL; Bolme, David S [ORNL; Ricanek, Karl [ORNL; Barstow, Del R [ORNL; Boehnen, Chris Bensing [ORNL

    2014-01-01

    We introduce a novel approach to estimating the age of a human from a single uncontrolled image. Current face age estimation algorithms work well in highly controlled images, and some are robust to changes in illumination, but it is usually assumed that images are close to frontal. This bias is clearly seen in the datasets that are commonly used to evaluate age estimation, which either entirely or mostly consist of frontal images. Using pose-specific projections, our algorithm maps image features into a pose-insensitive latent space that is discriminative with respect to age. Age estimation is then performed using a multi-class SVM. We show that our approach outperforms other published results on the Images of Groups dataset, which is the only age-related dataset with a non-trivial number of off-axis face images, and that we are competitive with recent age estimation algorithms on the mostly-frontal FG-NET dataset. We also experimentally demonstrate that our feature projections introduce insensitivity to pose.
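The mapping of pose-specific features into a shared, pose-insensitive latent space can be sketched with synthetic data: each pose observes the same age-discriminative latent vector through a different linear map, and a per-pose least-squares projection recovers it. The paper learns discriminative projections and classifies with a multi-class SVM; the simple sign-based classifier and all data below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: a 2-D age-discriminative latent vector z is observed through a
# different linear map A[p] for each head pose.
n_samples, n_poses = 60, 3
z = rng.normal(size=(n_samples, 2))
age_group = (z[:, 0] > 0).astype(int)            # two coarse "age classes"
A = rng.normal(size=(n_poses, 5, 2))             # pose-specific feature maps
pose = rng.integers(0, n_poses, n_samples)
X = np.stack([A[p] @ z[i] for i, p in enumerate(pose)])

# Learn one projection per pose that maps features back into the shared,
# pose-insensitive latent space (plain least squares).
W = [np.linalg.lstsq(X[pose == p], z[pose == p], rcond=None)[0]
     for p in range(n_poses)]
latent = np.stack([X[i] @ W[p] for i, p in enumerate(pose)])

# Classify age group in the recovered latent space.
pred = (latent[:, 0] > 0).astype(int)
acc = (pred == age_group).mean()
```

Because all poses land in the same latent space, a single downstream classifier suffices regardless of how the face was oriented, which is the property that lets the method handle off-axis images.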

  3. Invariant Face recognition Using Infrared Images

    International Nuclear Information System (INIS)

    Zahran, E.G.

    2012-01-01

    Over the past few decades, face recognition has become a rapidly growing research topic due to the increasing demands in many applications of our daily life such as airport surveillance, personal identification in law enforcement, surveillance systems, information safety, securing financial transactions, and computer security. The objective of this thesis is to develop a face recognition system capable of recognizing persons with a high recognition capability and low processing time, under different illumination conditions and different facial expressions. The thesis presents a study of the performance of the face recognition system using two techniques: Principal Component Analysis (PCA) and Zernike Moments (ZM). The performance of the recognition system is evaluated according to several aspects, including the recognition rate and the processing time. Face recognition systems that use visual images are sensitive to variations in lighting conditions and facial expressions. The performance of these systems may be degraded under poor illumination conditions or for subjects of various skin colors. Several solutions have been proposed to overcome these limitations. One of these solutions is to work in the infrared (IR) spectrum. IR images have been suggested as an alternative source of information for detection and recognition of faces when there is little or no control over lighting conditions. This arises from the fact that these images are formed by thermal emissions from the skin, which are an intrinsic property because these emissions depend on the distribution of blood vessels under the skin. On the other hand, IR face recognition systems still have limitations with temperature variations and recognition of persons wearing eyeglasses. In this thesis we fuse IR images with visible images to enhance the performance of face recognition systems. Images are fused using the wavelet transform. Simulation results show that the fusion of visible and IR images improves the recognition performance.
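
    Wavelet-based image fusion of the kind described can be reduced to a minimal form: decompose both images with a single-level 2-D Haar transform, average the approximation bands, keep the stronger of the two detail coefficients, and invert. This is an illustrative numpy sketch; the thesis does not specify the wavelet or fusion rule, so Haar and max-abs selection are assumed here as common choices:

```python
import numpy as np

def haar2(img):
    # single-level 2-D Haar transform (image sides must be even)
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0   # row-pair averages
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0   # row-pair details
    ll = (a[0::2] + a[1::2]) / 2.0
    lh = (a[0::2] - a[1::2]) / 2.0
    hl = (d[0::2] + d[1::2]) / 2.0
    hh = (d[0::2] - d[1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    # exact inverse of haar2
    h, w = ll.shape
    a = np.empty((2 * h, w)); d = np.empty((2 * h, w))
    a[0::2], a[1::2] = ll + lh, ll - lh
    d[0::2], d[1::2] = hl + hh, hl - hh
    out = np.empty((2 * h, 2 * w))
    out[:, 0::2], out[:, 1::2] = a + d, a - d
    return out

def fuse(visible, infrared):
    # average the coarse approximations, keep the stronger detail coefficient
    cv, ci = haar2(visible), haar2(infrared)
    ll = (cv[0] + ci[0]) / 2.0
    details = [np.where(np.abs(v) >= np.abs(i), v, i)
               for v, i in zip(cv[1:], ci[1:])]
    return ihaar2(ll, *details)

rng = np.random.default_rng(1)
vis, ir = rng.random((16, 16)), rng.random((16, 16))
fused = fuse(vis, ir)
print(fused.shape)  # (16, 16)
```

The fused image keeps the sharp structure (detail coefficients) of whichever modality expresses it more strongly, which is what makes the combined input useful to a recognizer.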

  4. Crossing the “Uncanny Valley”: adaptation to cartoon faces can influence perception of human faces

    Science.gov (United States)

    Chen, Haiwen; Russell, Richard; Nakayama, Ken; Livingstone, Margaret

    2013-01-01

    Adaptation can shift what individuals identify to be a prototypical or attractive face. Past work suggests that low-level shape adaptation can affect high-level face processing but is position dependent. Adaptation to distorted images of faces can also affect face processing but only within sub-categories of faces, such as gender, age, and race/ethnicity. This study assesses whether there is a representation of face that is specific to faces (as opposed to all shapes) but general to all kinds of faces (as opposed to subcategories) by testing whether adaptation to one type of face can affect perception of another. Participants were shown cartoon videos containing faces with abnormally large eyes. Using animated videos allowed us to simulate naturalistic exposure and avoid positional shape adaptation. Results suggest that adaptation to cartoon faces with large eyes shifts preferences for human faces toward larger eyes, supporting the existence of general face representations. PMID:20465173

  5. Music-Elicited Emotion Identification Using Optical Flow Analysis of Human Face

    Science.gov (United States)

    Kniaz, V. V.; Smirnova, Z. N.

    2015-05-01

    Human emotion identification from image sequences is in high demand nowadays. The range of possible applications varies from the automatic smile shutter function of consumer-grade digital cameras to Biofied Building technologies, which enable communication between a building space and its residents. The highly perceptual nature of human emotions leads to the complexity of their classification and identification. The main question arises from the subjective quality of emotional classification of events that elicit human emotions. A variety of methods for formal classification of emotions were developed in musical psychology. This work is focused on identification of human emotions evoked by musical pieces using human face tracking and optical flow analysis. A facial feature tracking algorithm used for facial feature speed and position estimation is presented. Facial features were extracted from each image sequence using human face tracking with local binary pattern (LBP) features. Accurate relative speeds of facial features were estimated using optical flow analysis. The obtained relative positions and speeds were used as the output facial emotion vector. The algorithm was tested using original software and recorded image sequences. The proposed technique proves to give robust identification of human emotions elicited by musical pieces. The estimated models could be used for human emotion identification from image sequences in such fields as emotion-based musical background or mood-dependent radio.
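
    The optical-flow step can be illustrated with the classic least-squares (Lucas-Kanade-style) estimate, here reduced to a single global translation between two frames. The paper's per-feature tracker is more elaborate, so treat this as a sketch of the principle only; the synthetic "feature" is an invented Gaussian blob:

```python
import numpy as np

def global_flow(I1, I2):
    # dense least-squares translation estimate (Lucas-Kanade over the image):
    # solve Ix*vx + Iy*vy = -It in the least-squares sense
    Iy, Ix = np.gradient(I1)              # spatial gradients
    It = I2 - I1                          # temporal difference
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    v, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return v                              # (vx, vy) in pixels

# Smooth synthetic "facial feature": a Gaussian blob shifted by 0.3 px.
y, x = np.mgrid[0:64, 0:64].astype(float)
blob = lambda cx: np.exp(-((x - cx) ** 2 + (y - 32.0) ** 2) / 40.0)
vx, vy = global_flow(blob(32.0), blob(32.3))
print(f"estimated shift: ({vx:.2f}, {vy:.2f})")  # close to (0.30, 0.00)
```

Running such an estimate inside a small window around each tracked facial feature yields the per-feature speeds that form the emotion vector described above.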

  6. Internal representations for face detection: an application of noise-based image classification to BOLD responses.

    Science.gov (United States)

    Nestor, Adrian; Vettel, Jean M; Tarr, Michael J

    2013-11-01

    What basic visual structures underlie human face detection and how can we extract such structures directly from the amplitude of neural responses elicited by face processing? Here, we address these issues by investigating an extension of noise-based image classification to BOLD responses recorded in high-level visual areas. First, we assess the applicability of this classification method to such data and, second, we explore its results in connection with the neural processing of faces. To this end, we construct luminance templates from white noise fields based on the response of face-selective areas in the human ventral cortex. Using behaviorally and neurally-derived classification images, our results reveal a family of simple but robust image structures subserving face representation and detection. Thus, we confirm the role played by classical face selective regions in face detection and we help clarify the representational basis of this perceptual function. From a theory standpoint, our findings support the idea of simple but highly diagnostic neurally-coded features for face detection. At the same time, from a methodological perspective, our work demonstrates the ability of noise-based image classification in conjunction with fMRI to help uncover the structure of high-level perceptual representations. Copyright © 2012 Wiley Periodicals, Inc.
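
    The classification-image technique itself is compact: average the noise fields that produced one response and subtract the average of those that produced the other. A toy numpy simulation, with a known template standing in for the observer's (or voxel's) internal face detector (the template and sizes are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hidden "face" template: two eyes and a mouth on an 8x8 grid (invented).
template = np.zeros((8, 8))
template[2, 2] = template[2, 5] = 1.0
template[5, 3:5] = 1.0

noise = rng.normal(size=(5000, 8, 8))                 # white-noise stimuli
responses = (noise * template).sum(axis=(1, 2)) > 0   # simulated yes/no (or high/low BOLD)

# Classification image: mean "yes" noise field minus mean "no" noise field.
cimg = noise[responses].mean(axis=0) - noise[~responses].mean(axis=0)

r = np.corrcoef(cimg.ravel(), template.ravel())[0, 1]
print(f"correlation with hidden template: {r:.2f}")
```

The recovered image correlates strongly with the hidden template, which is exactly the logic of estimating internal face representations from trial-by-trial responses, behavioral or BOLD.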

  7. Assessing paedophilia based on the haemodynamic brain response to face images

    DEFF Research Database (Denmark)

    Ponseti, Jorge; Granert, Oliver; Van Eimeren, Thilo

    2016-01-01

    OBJECTIVES: Objective assessment of sexual preferences may be of relevance in the treatment and prognosis of child sexual offenders. Previous research has indicated that this can be achieved by pattern classification of brain responses to sexual child and adult images. Our recent research showed that human face processing is tuned to sexual age preferences. This observation prompted us to test whether paedophilia can be inferred based on the haemodynamic brain responses to adult and child faces. METHODS: Twenty-four men sexually attracted to prepubescent boys or girls (paedophiles) and 32 men sexually attracted to men or women (teleiophiles) were exposed to images of child and adult, male and female faces during a functional magnetic resonance imaging (fMRI) session. RESULTS: A cross-validated, automatic pattern classification algorithm of brain responses to facial stimuli yielded four...

  8. InterFace: A software package for face image warping, averaging, and principal components analysis.

    Science.gov (United States)

    Kramer, Robin S S; Jenkins, Rob; Burton, A Mike

    2017-12-01

    We describe InterFace, a software package for research in face recognition. The package supports image warping, reshaping, averaging of multiple face images, and morphing between faces. It also supports principal components analysis (PCA) of face images, along with tools for exploring the "face space" produced by PCA. The package uses a simple graphical user interface, allowing users to perform these sophisticated image manipulations without any need for programming knowledge. The program is available for download in the form of an app, which requires that users also have access to the (freely available) MATLAB Runtime environment.
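
    The PCA "face space" that such a package exposes can be sketched with a plain SVD: flatten and mean-centre the images, keep the top components, and represent each face by its component scores; morphing is then linear interpolation of scores. This is an illustrative numpy sketch, not InterFace's actual code (which operates on warped, landmark-aligned images); the data here are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-ins for 50 aligned, flattened 32x32 face images (random for the sketch).
faces = rng.normal(size=(50, 32 * 32))
mean = faces.mean(axis=0)
U, S, Vt = np.linalg.svd(faces - mean, full_matrices=False)

k = 10
scores = (faces - mean) @ Vt[:k].T      # coordinates in the k-D face space
recon = scores @ Vt[:k] + mean          # back-projection from k components

# Morphing between two faces = linear interpolation of their scores.
morph_scores = 0.5 * scores[0] + 0.5 * scores[1]
morph_image = morph_scores @ Vt[:k] + mean

err = np.linalg.norm(recon - faces) / np.linalg.norm(faces - mean)
print(f"relative reconstruction error with {k} components: {err:.2f}")
```

With real aligned faces the leading components capture far more variance than they do for random data, which is why a handful of scores suffices for averaging and morphing.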

  9. Human Age Estimation Method Robust to Camera Sensor and/or Face Movement

    Directory of Open Access Journals (Sweden)

    Dat Tien Nguyen

    2015-08-01

    Full Text Available Human age information can be employed in many useful real-life applications, such as customer service systems, automatic vending machines, entertainment, etc. In order to obtain age information, image-based age estimation systems have been developed using information from the human face. However, limitations exist for current age estimation systems because of various factors such as camera motion and optical blurring, facial expressions, gender, etc. Motion blurring is usually introduced into face images by the movement of the camera sensor and/or the movement of the face during image acquisition. As a result, the facial features in captured images can be transformed according to the amount of motion, which causes performance degradation of age estimation systems. In this paper, the problem caused by motion blurring is addressed and a solution is proposed in order to make age estimation systems robust to the effects of motion blurring. Experimental results show that our method is more efficient for enhancing age estimation performance compared with systems that do not employ our method.
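
    The motion blur discussed here is, in the simplest linear model, convolution of the image with an averaging kernel along the motion direction. A minimal numpy sketch of horizontal motion blur (the kernel length is an arbitrary illustrative choice):

```python
import numpy as np

def motion_blur_h(img, length=7):
    # horizontal motion blur: convolve every row with an averaging kernel
    kernel = np.ones(length) / length
    return np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, img)

img = np.zeros((5, 21))
img[:, 10] = 1.0                      # a sharp vertical line
blurred = motion_blur_h(img)
print(blurred[0, 7:14])               # the line is smeared over 7 pixels
```

Simulating blur like this on training data is one common way to make a downstream estimator tolerant to it.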

  10. An Efficient Feature Extraction Method with Pseudo-Zernike Moment in RBF Neural Network-Based Human Face Recognition System

    Directory of Open Access Journals (Sweden)

    Ahmadi Majid

    2003-01-01

    Full Text Available This paper introduces a novel method for the recognition of human faces in digital images using a new feature extraction method that combines the global and local information in frontal views of facial images. A radial basis function (RBF) neural network with a hybrid learning algorithm (HLA) has been used as a classifier. The proposed feature extraction method includes human face localization derived from the shape information. An efficient distance measure, the facial candidate threshold (FCT), is defined to distinguish between face and nonface images. The pseudo-Zernike moment invariant (PZMI), with an efficient method for selecting the moment order, has been used. A newly defined parameter named the axis correction ratio (ACR) of images for disregarding irrelevant information of face images is introduced. In this paper, the effect of these parameters on recognition rate improvement by disregarding irrelevant information is studied. We also evaluate the effect of the order of the PZMI on the recognition rate of the proposed technique, as well as on the RBF neural network learning speed. Simulation results on the face database of the Olivetti Research Laboratory (ORL) indicate that the proposed method for human face recognition yielded a recognition rate of 99.3%.

  11. 3D FaceCam: a fast and accurate 3D facial imaging device for biometrics applications

    Science.gov (United States)

    Geng, Jason; Zhuang, Ping; May, Patrick; Yi, Steven; Tunnell, David

    2004-08-01

    Human faces are fundamentally three-dimensional (3D) objects, and each face has its unique 3D geometric profile. The 3D geometric features of a human face can be used, together with its 2D texture, for rapid and accurate face recognition purposes. Due to the lack of low-cost and robust 3D sensors and effective 3D facial recognition (FR) algorithms, almost all existing FR systems use 2D face images. Genex has developed 3D solutions that overcome the inherent problems in 2D while also addressing limitations in other 3D alternatives. One important aspect of our solution is a unique 3D camera (the 3D FaceCam) that combines multiple imaging sensors within a single compact device to provide instantaneous, ear-to-ear coverage of a human face. This 3D camera uses three high-resolution CCD sensors and a color encoded pattern projection system. The RGB color information from each pixel is used to compute the range data and generate an accurate 3D surface map. The imaging system uses no moving parts and combines multiple 3D views to provide detailed and complete 3D coverage of the entire face. Images are captured within a fraction of a second and full-frame 3D data is produced within a few seconds. This described method provides much better data coverage and accuracy in feature areas with sharp features or details (such as the nose and eyes). Using this 3D data, we have been able to demonstrate that a 3D approach can significantly improve the performance of facial recognition. We have conducted tests in which we have varied the lighting conditions and angle of image acquisition in the "field." These tests have shown that the matching results are significantly improved when enrolling a 3D image rather than a single 2D image. With its 3D solutions, Genex is working toward unlocking the promise of powerful 3D FR and transferring FR from a lab technology into a real-world biometric solution.
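
    Whatever the projection scheme, the depth computation behind such a 3D camera reduces, for a rectified two-view geometry, to the triangulation relation z = f·B/d: focal length f, baseline B, and disparity d between matched pixels. A minimal sketch with invented calibration values (not the 3D FaceCam's actual parameters):

```python
import numpy as np

def triangulate_depth(disparity_px, focal_px=1200.0, baseline_m=0.12):
    # z = f * B / d for a rectified stereo pair
    # (focal_px and baseline_m are illustrative values, not real calibration)
    return focal_px * baseline_m / disparity_px

d = np.array([180.0, 240.0, 360.0])   # disparities of matched face points (px)
z = triangulate_depth(d)
print(z)                              # depths in metres: [0.8 0.6 0.4]
```

The color-coded pattern in the device serves exactly to make the correspondence (and hence d) unambiguous at every pixel, after which the depth map follows from this relation.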

  12. Vitality Detection in Face Images using Second Order Gradient

    OpenAIRE

    Aruni Singh

    2012-01-01

    Spoofing is a very big challenge in biometrics, especially for face images. Many artificial techniques are available to tamper with or hide the original face. This research contributes to ensuring the actual presence of a live face image in contrast to a fake face image. The intended purpose of the proposed approach is also to endorse biometric authentication by joining liveness awareness with Facial Recognition Technology (FRT). In this research 200 dummy face images and 200 real face ima...

  13. Towards Designing Android Faces after Actual Humans

    DEFF Research Database (Denmark)

    Vlachos, Evgenios; Schärfe, Henrik

    2015-01-01

    Using their face as their primary affective interface, android robots and other agents embody emotional facial expressions, and convey messages on their identity, gender, age, race, and attractiveness. We are examining whether androids can convey emotionally relevant information via their static facial signals, just as humans do. Based on the fact that social information can be accurately identified from still images of nonexpressive unknown faces, a judgment paradigm was employed to discover, and compare, the style of facial expressions of the Geminoid-DK android (modeled after an actual... initially made for the Original, suggesting that androids inherit the same style of facial expression as their originals. Our findings support the case of designing android faces after specific actual persons who portray facial features that are familiar to the users, and also relevant to the notion...

  14. Human versus Non-Human Face Processing: Evidence from Williams Syndrome

    Science.gov (United States)

    Santos, Andreia; Rosset, Delphine; Deruelle, Christine

    2009-01-01

    Increased motivation towards social stimuli in Williams syndrome (WS) led us to hypothesize that a face's human status would have greater impact than face's orientation on WS' face processing abilities. Twenty-nine individuals with WS were asked to categorize facial emotion expressions in real, human cartoon and non-human cartoon faces presented…

  15. Performance evaluation of no-reference image quality metrics for face biometric images

    Science.gov (United States)

    Liu, Xinwei; Pedersen, Marius; Charrier, Christophe; Bours, Patrick

    2018-03-01

    The accuracy of face recognition systems is significantly affected by the quality of face sample images. The recently established standardization proposed several important aspects for the assessment of face sample quality. There are many existing no-reference image quality metrics (IQMs) that are able to assess natural image quality by taking into account similar image-based quality attributes as introduced in the standardization. However, whether such metrics can assess face sample quality is rarely considered. We evaluate the performance of 13 selected no-reference IQMs on face biometrics. The experimental results show that several of them can assess face sample quality according to the system performance. We also analyze the strengths and weaknesses of different IQMs, as well as why some of them failed to assess face sample quality. Retraining an original IQM by using a face database can improve the performance of such a metric. In addition, the contribution of this paper can be used for the evaluation of IQMs on other biometric modalities; furthermore, it can be used for the development of multimodality biometric IQMs.
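
    As a concrete example of the kind of attribute a no-reference IQM computes, sharpness is often scored as the variance of the image's Laplacian response: blur suppresses high frequencies, so the variance drops. This is an illustrative metric, not necessarily one of the 13 evaluated in the paper:

```python
import numpy as np

def laplacian_sharpness(img):
    # variance of the 4-neighbour Laplacian response as a sharpness score
    lap = (-4.0 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return lap.var()

rng = np.random.default_rng(4)
sharp = rng.normal(size=(64, 64))
blurry = (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, -1, 0)
          + np.roll(sharp, 1, 1) + np.roll(sharp, -1, 1)) / 5.0  # crude smoothing
print(laplacian_sharpness(sharp) > laplacian_sharpness(blurry))  # True
```

A face-quality variant of such a metric would compute the same score over the detected face region only, which is one way "retraining on a face database" can change an IQM's behaviour.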

  16. Can the usage of human growth hormones affect facial appearance and the accuracy of face recognition systems?

    Science.gov (United States)

    Rose, Jake; Martin, Michael; Bourlai, Thirimachos

    2014-06-01

    In law enforcement and security applications, the acquisition of face images is critical in producing key trace evidence for the successful identification of potential threats. The goal of the study is to demonstrate that steroid usage significantly affects human facial appearance and hence, the performance of commercial and academic face recognition (FR) algorithms. In this work, we evaluate the performance of state-of-the-art FR algorithms on two unique face image datasets of subjects before (gallery set) and after (probe set) steroid (or human growth hormone) usage. For the purpose of this study, datasets of 73 subjects were created from multiple sources found on the Internet, containing images of men and women before and after steroid usage. Next, we geometrically pre-processed all images of both face datasets. Then, we applied image restoration techniques on the same face datasets, and finally, we applied FR algorithms in order to match the pre-processed face images of our probe datasets against the face images of the gallery set. Experimental results demonstrate that only a specific set of FR algorithms obtain the most accurate results (in terms of the rank-1 identification rate). This is because there are several factors that influence the efficiency of face matchers, including (i) the time lapse between the before and after face photos and the image pre-processing and restoration applied to them, (ii) the usage of different drugs (e.g. Dianabol, Winstrol, and Decabolan), (iii) the usage of different cameras to capture face images, and finally, (iv) the variability of standoff distance, illumination and other noise factors (e.g. motion noise). All of the previously mentioned complicated scenarios make clear that cross-scenario matching is a very challenging problem and, thus, further investigation is required.
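
    The rank-1 identification rate reported above is simple to compute: for each probe, the closest gallery template must belong to the correct identity. A minimal numpy sketch with a nearest-neighbour matcher and synthetic "before"/"after" feature vectors (all data invented):

```python
import numpy as np

def rank1_rate(gallery, probes, labels_g, labels_p):
    # nearest neighbour (Euclidean) in feature space as the face matcher
    d = ((probes[:, None, :] - gallery[None, :, :]) ** 2).sum(axis=-1)
    best = labels_g[np.argmin(d, axis=1)]
    return float((best == labels_p).mean())

rng = np.random.default_rng(5)
ids = np.arange(10)
gallery = rng.normal(size=(10, 8))                  # "before" templates
probes = gallery + 0.1 * rng.normal(size=(10, 8))   # "after" images, mildly changed
print(rank1_rate(gallery, probes, ids, ids))        # 1.0 for mild changes
```

Increasing the perturbation on the probes (standing in for appearance change between the before and after photos) drives this rate down, which is the effect the study measures.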

  17. Faces in places: humans and machines make similar face detection errors.

    Directory of Open Access Journals (Sweden)

    Bernard Marius 't Hart

    Full Text Available The human visual system seems to be particularly efficient at detecting faces. This efficiency sometimes comes at the cost of wrongfully seeing faces in arbitrary patterns, including famous examples such as a rock formation on Mars or roast patterns on toast. In machine vision, face detection has made considerable progress and has become a standard feature of many digital cameras. The arguably most widespread algorithm for such applications (the "Viola-Jones" algorithm) achieves high detection rates at high computational efficiency. To what extent do the patterns that the algorithm mistakenly classifies as faces also fool humans? We selected three kinds of stimuli from real-life, first-person perspective movies based on the algorithm's output: correct detections ("real faces"), false positives ("illusory faces") and correctly rejected locations ("non faces"). Observers were shown pairs of these for 20 ms and had to direct their gaze to the location of the face. We found that illusory faces were mistaken for faces more frequently than non faces. In addition, rotation of the real face yielded more errors, while rotation of the illusory face yielded fewer errors. Using colored stimuli increases overall performance, but does not change the pattern of results. When replacing the eye movement by a manual response, however, the preference for illusory faces over non faces disappeared. Taken together, our data show that humans make similar face-detection errors as the Viola-Jones algorithm when directing their gaze to briefly presented stimuli. In particular, the relative spatial arrangement of oriented filters seems to be of relevance. This suggests that efficient face detection in humans is likely to be pre-attentive and based on rather simple features such as those encoded in the early visual system.
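
    The Viola-Jones detector referenced here scores rectangular Haar-like features in constant time via an integral image. A minimal numpy sketch of both pieces (the window size and feature placement are arbitrary illustrative values):

```python
import numpy as np

def integral(img):
    # summed-area table: ii[r, c] = sum of img[:r+1, :c+1]
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    # sum of img[r0:r1, c0:c1] in O(1) from the integral image
    s = ii[r1 - 1, c1 - 1]
    if r0 > 0:
        s -= ii[r0 - 1, c1 - 1]
    if c0 > 0:
        s -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0:
        s += ii[r0 - 1, c0 - 1]
    return s

rng = np.random.default_rng(6)
win = rng.random((24, 24))            # a 24x24 detection window
ii = integral(win)

# Two-rectangle (left minus right) Haar-like feature, arbitrary placement.
feat = rect_sum(ii, 4, 4, 12, 12) - rect_sum(ii, 4, 12, 12, 20)
direct = win[4:12, 4:12].sum() - win[4:12, 12:20].sum()
print(abs(feat - direct) < 1e-8)      # True
```

It is the relative arrangement of many such dark/light rectangle contrasts, chosen by boosting, that defines "face-likeness" for the algorithm, and, per the study's conclusion, plausibly for fast human face detection too.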

  18. Gender classification from face images by using local binary pattern and gray-level co-occurrence matrix

    Science.gov (United States)

    Uzbaş, Betül; Arslan, Ahmet

    2018-04-01

    Gender classification is an important step for human-computer interaction processes and identification. The human face image is one of the most important sources for determining gender. In the present study, gender classification is performed automatically from facial images. In order to classify gender, we propose a combination of features that have been extracted from the face, eye and lip regions by using a hybrid method of Local Binary Pattern and Gray-Level Co-Occurrence Matrix. The features have been extracted from automatically obtained face, eye and lip regions. All of the extracted features have been combined and given as input parameters to classification methods (Support Vector Machine, Artificial Neural Networks, Naive Bayes and k-Nearest Neighbor methods) for gender classification. The Nottingham Scan face database, which consists of the frontal face images of 100 people (50 male and 50 female), is used for this purpose. As the result of the experimental studies, the highest success rate has been achieved as 98% by using the Support Vector Machine. The experimental results illustrate the efficacy of our proposed method.
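
    Both texture descriptors combined in this paper are easy to state. A minimal numpy sketch of an 8-neighbour LBP code map and a single-offset GLCM, concatenated into one feature vector; region extraction and the exact offsets and parameters used by the authors are omitted, and the toy image is invented:

```python
import numpy as np

rng = np.random.default_rng(7)
img = rng.integers(0, 8, size=(32, 32))   # toy image quantized to 8 gray levels

def lbp(img):
    # 8-neighbour Local Binary Pattern code for each interior pixel
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(code.dtype) << bit
    return code

def glcm(img, levels=8):
    # co-occurrence counts of each pixel with its right-hand neighbour
    m = np.zeros((levels, levels), dtype=int)
    np.add.at(m, (img[:, :-1].ravel(), img[:, 1:].ravel()), 1)
    return m

# Combined texture feature vector: LBP histogram + flattened GLCM.
feature = np.concatenate([np.bincount(lbp(img).ravel(), minlength=256),
                          glcm(img).ravel()])
print(feature.shape)  # (320,)
```

In the paper's setting, such vectors computed per region (face, eyes, lips) are concatenated and fed to the SVM or other classifier.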

  19. Personality judgments from everyday images of faces

    Directory of Open Access Journals (Sweden)

    Clare AM Sutherland

    2015-10-01

    Full Text Available People readily make personality attributions to images of strangers’ faces. Here we investigated the basis of these personality attributions as made to everyday, naturalistic face images. In a first study, we used 1,000 highly varying ‘ambient image’ face photographs to test the correspondence between personality judgments of the Big Five and dimensions known to underlie a range of facial first impressions: approachability, dominance and youthful-attractiveness. Interestingly, the facial Big Five judgments were found to separate to some extent: judgments of openness, extraversion, emotional stability and agreeableness were mainly linked to facial first impressions of approachability, whereas conscientiousness judgments involved a combination of approachability and dominance. In a second study we used average face images to investigate which main cues are used by perceivers to make impressions of the Big Five, by extracting consistent cues to impressions from the large variation in the original images. When forming impressions of strangers from highly varying, naturalistic face photographs, perceivers mainly seem to rely on broad facial cues to approachability, such as smiling.

  20. Face Image Quality and its Improvement in a Face Detection System

    DEFF Research Database (Denmark)

    Kamal, Nasrollahi; Moeslund, Thomas B.

    2008-01-01

    When a person passes by a surveillance camera, a sequence of images is obtained. Most of these images are redundant, and it is usually sufficient to keep only those of better quality. So before performing any analysis on the face of a person, the face first needs to be detected... We are trying to develop a system to deal with the video sequences in these 3 steps.

  1. Improving face image extraction by using deep learning technique

    Science.gov (United States)

    Xue, Zhiyun; Antani, Sameer; Long, L. R.; Demner-Fushman, Dina; Thoma, George R.

    2016-03-01

    The National Library of Medicine (NLM) has made a collection of over 1.2 million research articles containing 3.2 million figure images searchable using the Open-iSM multimodal (text+image) search engine. Many images are visible light photographs, some of which are images containing faces ("face images"). Some of these face images are acquired in unconstrained settings, while others are studio photos. To extract the face regions in the images, we first applied one of the most widely-used face detectors, a pre-trained Viola-Jones detector implemented in Matlab and OpenCV. The Viola-Jones detector was trained for unconstrained face image detection, but the results for the NLM database included many false positives, which resulted in a very low precision. To improve this performance, we applied a deep learning technique, which reduced the number of false positives and, as a result, the detection precision was improved significantly. (For example, the classification accuracy for identifying whether the face regions output by this Viola-Jones detector are true positives or not in a test set is about 96%.) By combining these two techniques (Viola-Jones and deep learning) we were able to increase the system precision considerably, while avoiding the need to manually construct a large training set by manual delineation of the face regions.

  2. Sub-component modeling for face image reconstruction in video communications

    Science.gov (United States)

    Shiell, Derek J.; Xiao, Jing; Katsaggelos, Aggelos K.

    2008-08-01

    Emerging communications trends point to streaming video as a new form of content delivery. These systems are implemented over wired systems, such as cable or ethernet, and wireless networks, cell phones, and portable game systems. These communications systems require sophisticated methods of compression and error-resilience encoding to enable communications across band-limited and noisy delivery channels. Additionally, the transmitted video data must be of high enough quality to ensure a satisfactory end-user experience. Traditionally, video compression makes use of temporal and spatial coherence to reduce the information required to represent an image. In many communications systems, the communications channel is characterized by a probabilistic model which describes the capacity or fidelity of the channel. The implication is that information is lost or distorted in the channel, and requires concealment on the receiving end. We demonstrate a generative model based transmission scheme to compress human face images in video, which has the advantages of a potentially higher compression ratio, while maintaining robustness to errors and data corruption. This is accomplished by training an offline face model and using the model to reconstruct face images on the receiving end. We propose a sub-component AAM modeling the appearance of sub-facial components individually, and show face reconstruction results under different types of video degradation using a weighted and non-weighted version of the sub-component AAM.

  3. Modeling human dynamics of face-to-face interaction networks

    OpenAIRE

    Starnini, Michele; Baronchelli, Andrea; Pastor-Satorras, Romualdo

    2013-01-01

    Face-to-face interaction networks describe social interactions in human gatherings, and are the substrate for processes such as epidemic spreading and gossip propagation. The bursty nature of human behavior characterizes many aspects of empirical data, such as the distribution of conversation lengths, of conversations per person, or of inter-conversation times. Despite several recent attempts, a general theoretical understanding of the global picture emerging from data is still lacking. Here ...

  4. Probabilistic recognition of human faces from video

    DEFF Research Database (Denmark)

    Zhou, Shaohua; Krüger, Volker; Chellappa, Rama

    2003-01-01

    Recognition of human faces using a gallery of still or video images and a probe set of videos is systematically investigated using a probabilistic framework. In still-to-video recognition, where the gallery consists of still images, a time series state space model is proposed to fuse temporal information in a probe video, which simultaneously characterizes the kinematics and identity using a motion vector and an identity variable, respectively. The joint posterior distribution of the motion vector and the identity variable is estimated at each time instant and then propagated to the next time... of the identity variable produces the recognition result. The model formulation is very general and it allows a variety of image representations and transformations. Experimental results using videos collected by NIST/USF and CMU illustrate the effectiveness of this approach for both still-to-video and video-to-video...

  5. Adapting Local Features for Face Detection in Thermal Image

    Directory of Open Access Journals (Sweden)

    Chao Ma

    2017-11-01

    Full Text Available A thermal camera captures the temperature distribution of a scene as a thermal image. In thermal images, the facial appearances of different people under different lighting conditions are similar. This is because facial temperature distribution is generally constant and not affected by lighting conditions. This similarity in face appearance is advantageous for face detection. To detect faces in thermal images, cascade classifiers with Haar-like features are generally used. However, there are few studies exploring local features for face detection in thermal images. In this paper, we introduce two approaches relying on local features for face detection in thermal images. First, we create new feature types by extending Multi-Block LBP. We consider a margin around the reference and the generally constant distribution of facial temperature. In this way, we make the features more robust to image noise and more effective for face detection in thermal images. Second, we propose an AdaBoost-based training method to obtain cascade classifiers with multiple types of local features. These feature types have different advantages, and in this way we enhance the descriptive power of the local features. We performed a hold-out validation experiment and a field experiment. In the hold-out validation experiment, we captured a dataset from 20 participants, comprising 14 males and 6 females. For each participant, we captured 420 images with 10 variations in camera distance, 21 poses, and 2 appearances (participant with/without glasses). We compared the performance of cascade classifiers trained with different sets of features. The experimental results showed that the proposed approaches effectively improve the performance of face detection in thermal images. In the field experiment, we compared face detection performance in realistic scenes using thermal and RGB images, and discussed the results.

  6. What is adapted in face adaptation? The neural representations of expression in the human visual system.

    Science.gov (United States)

    Fox, Christopher J; Barton, Jason J S

    2007-01-05

    The neural representation of facial expression within the human visual system is not well defined. Using an adaptation paradigm, we examined aftereffects on expression perception produced by various stimuli. Adapting to a face, which was used to create morphs between two expressions, substantially biased expression perception within the morphed faces away from the adapting expression. This adaptation was not based on low-level image properties, as a different image of the same person displaying that expression produced equally robust aftereffects. Smaller but significant aftereffects were generated by images of different individuals, irrespective of gender. Non-face visual, auditory, or verbal representations of emotion did not generate significant aftereffects. These results suggest that adaptation affects at least two neural representations of expression: one specific to the individual (not the image), and one that represents expression across different facial identities. The identity-independent aftereffect suggests the existence of a 'visual semantic' for facial expression in the human visual system.

  7. Human Body Image Edge Detection Based on Wavelet Transform

    Institute of Scientific and Technical Information of China (English)

    李勇; 付小莉

    2003-01-01

    Human dress varies in countless ways. Human body image signals carry strong noise, poor light-and-shade contrast, and a narrow range of gray-level distribution. Applying a traditional gradient method or gray-level method to detect human body image edges cannot obtain satisfactory results because of false and missed detections. Given these peculiarities of human body images, this paper successfully applies the dyadic wavelet transform of the cubic spline to detect the face and profile edges of the human body image, using the Mallat algorithm for the wavelet decomposition.
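A minimal one-level dyadic wavelet edge detector in this spirit can be sketched as below; the spline lowpass/derivative filter pair shown is the classic Mallat–Zhong choice and is an assumption, not necessarily the paper's exact filters:

```python
import numpy as np

def dyadic_wavelet_edges(img, thresh):
    """One level of a Mallat-style dyadic wavelet transform for edge
    detection (sketch).  The smoothing filter is a spline lowpass; the
    wavelet acts as a discrete derivative, so edge points are where the
    modulus of the two detail images is large."""
    h = np.array([0.125, 0.375, 0.375, 0.125])   # spline lowpass filter
    g = np.array([-2.0, 2.0])                    # derivative-like wavelet

    def conv_rows(a, f):
        return np.apply_along_axis(lambda r: np.convolve(r, f, 'same'), 1, a)

    def conv_cols(a, f):
        return np.apply_along_axis(lambda c: np.convolve(c, f, 'same'), 0, a)

    wx = conv_rows(conv_cols(img, h), g)   # horizontal detail (smooth cols, derive rows)
    wy = conv_cols(conv_rows(img, h), g)   # vertical detail (smooth rows, derive cols)
    modulus = np.hypot(wx, wy)
    return modulus > thresh
```

A full implementation would iterate over dyadic scales and keep only local modulus maxima; this sketch shows the filtering structure only.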

  8. Image preprocessing study on KPCA-based face recognition

    Science.gov (United States)

    Li, Xuan; Li, Dehua

    2015-12-01

    Face recognition, as an important biometric identification method with friendly, natural, and convenient advantages, has received more and more attention. This paper studies a face recognition system comprising face detection, feature extraction, and recognition. It examines the theory and key technology of the preprocessing methods used in face detection and, using KPCA, focuses on how different preprocessing choices affect recognition results. We choose the YCbCr color space for skin segmentation and integral projection for face location. We preprocess face images with the erosion and dilation of morphological opening and closing operations and with an illumination compensation method, and then apply face recognition based on kernel principal component analysis (KPCA); experiments were carried out on a typical face database, with all algorithms implemented on the MATLAB platform. Experimental results show that, under certain conditions, the kernel extension of the PCA algorithm makes the extracted features represent the original image information better because of its nonlinear feature extraction, which yields a higher recognition rate. In the image preprocessing stage, we found that different operations on the images produce different results, and hence different recognition rates in the recognition stage. At the same time, in the kernel principal component analysis, the power of the polynomial kernel function affects the recognition result.
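The KPCA feature-extraction step the abstract evaluates can be sketched with plain NumPy; the polynomial kernel and the function/parameter names are ours, chosen to match the abstract's mention that the polynomial power influences recognition:

```python
import numpy as np

def kpca_features(X, degree=2, n_components=2):
    """Kernel PCA with a polynomial kernel k(x, y) = (x.y + 1)**degree.

    Returns the projections of the rows of X onto the leading kernel
    principal components.  The polynomial degree ("power") is a free
    parameter, as in the abstract."""
    K = (X @ X.T + 1.0) ** degree
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one   # centre in feature space
    vals, vecs = np.linalg.eigh(Kc)              # eigenvalues ascending
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    alphas = vecs / np.sqrt(np.maximum(vals, 1e-12))  # normalise coefficients
    return Kc @ alphas
```

In a recognition pipeline these projections would replace raw pixels as the feature vectors fed to a nearest-neighbour or similar classifier.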

  9. Sensory competition in the face processing areas of the human brain.

    Directory of Open Access Journals (Sweden)

    Krisztina Nagy

    Full Text Available The concurrent presentation of multiple stimuli in the visual field may trigger mutually suppressive interactions throughout the ventral visual stream. While several studies have examined sensory competition effects among non-face stimuli, relatively little is known about the interactions in the human brain for multiple face stimuli. In the present study we analyzed the neuronal basis of sensory competition in an event-related functional magnetic resonance imaging (fMRI) study using multiple face stimuli. We varied the ratio of faces and phase-noise images within a composite display with a constant number of peripheral stimuli, thereby manipulating the competitive interactions between faces. For contralaterally presented stimuli we observed strong competition effects in the fusiform face area (FFA) bilaterally and in the right lateral occipital area (LOC), but not in the occipital face area (OFA), suggesting their different roles in sensory competition. When we increased the spatial distance among pairs of faces, the magnitude of suppressive interactions was reduced in the FFA. Surprisingly, the magnitude of competition depended on the visual hemifield of the stimuli: ipsilateral stimulation somewhat reduced the competition effects in the right LOC while increasing them in the left LOC. This suggests a left-hemifield dominance of sensory competition. Our results support the sensory competition theory in the processing of multiple faces and suggest that sensory competition occurs in several cortical areas in both cerebral hemispheres.

  10. Face Image Retrieval of Efficient Sparse Code words and Multiple Attribute in Binning Image

    Directory of Open Access Journals (Sweden)

    Suchitra S

    2017-08-01

    Full Text Available ABSTRACT In photography, face recognition and face retrieval play an important role in many applications such as security, criminology, and image forensics. Advances in face recognition make identity matching of an individual with attributes easier, and recent developments in computer vision enable us to extract facial attributes from an input image and provide similar image results. In this paper, we propose a novel LOP and sparse codewords method that provides matching results similar to an input query image. To improve the accuracy of the image results with respect to the input image and dynamic facial attributes, the Local Octal Pattern algorithm (LOP) and sparse codewords are applied both offline and online; the offline and online procedures of the face image binning technique are applied with sparse codes. Experimental results on the PubFig dataset show that the proposed LOP along with sparse codewords is able to provide matching results with an increased accuracy of 90%.

  11. Rapid Categorization of Human and Ape Faces in 9-Month-Old Infants Revealed by Fast Periodic Visual Stimulation.

    Science.gov (United States)

    Peykarjou, Stefanie; Hoehl, Stefanie; Pauen, Sabina; Rossion, Bruno

    2017-10-02

    This study investigates categorization of human and ape faces in 9-month-olds using a Fast Periodic Visual Stimulation (FPVS) paradigm while measuring EEG. Categorization responses are elicited only if infants discriminate between different categories and generalize across exemplars within each category. In study 1, human or ape faces were presented as standard and deviant stimuli in upright and inverted trials. Upright ape faces presented among humans elicited strong categorization responses, whereas responses for upright human faces and for inverted ape faces were smaller. Deviant inverted human faces did not elicit categorization. Data were best explained by a model with main effects of species and orientation. However, variance of low-level image characteristics was higher for the ape than the human category. Variance was matched to replicate this finding in an independent sample (study 2). Both human and ape faces elicited categorization in upright and inverted conditions, but upright ape faces elicited the strongest responses. Again, data were best explained by a model of two main effects. These experiments demonstrate that 9-month-olds rapidly categorize faces, and unfamiliar faces presented among human faces elicit increased categorization responses. This likely reflects habituation for the familiar standard category, and stronger release for the unfamiliar category deviants.

  12. Pose-Invariant Face Recognition via RGB-D Images.

    Science.gov (United States)

    Sang, Gaoli; Li, Jing; Zhao, Qijun

    2016-01-01

    Three-dimensional (3D) face models can intrinsically handle large pose face recognition problem. In this paper, we propose a novel pose-invariant face recognition method via RGB-D images. By employing depth, our method is able to handle self-occlusion and deformation, both of which are challenging problems in two-dimensional (2D) face recognition. Texture images in the gallery can be rendered to the same view as the probe via depth. Meanwhile, depth is also used for similarity measure via frontalization and symmetric filling. Finally, both texture and depth contribute to the final identity estimation. Experiments on Bosphorus, CurtinFaces, Eurecom, and Kiwi databases demonstrate that the additional depth information has improved the performance of face recognition with large pose variations and under even more challenging conditions.

  13. Can human eyes prevent perceptual narrowing for monkey faces in human infants?

    Science.gov (United States)

    Damon, Fabrice; Bayet, Laurie; Quinn, Paul C; Hillairet de Boisferon, Anne; Méary, David; Dupierrix, Eve; Lee, Kang; Pascalis, Olivier

    2015-07-01

    Perceptual narrowing has been observed in human infants for monkey faces: 6-month-olds can discriminate between them, whereas older infants from 9 months of age display difficulty discriminating between them. The source of the difficulty that infants from 9 months of age have in processing monkey faces has not been clearly identified. It could be due to the structural characteristics of monkey faces, particularly the key facial features that differ from human faces. The current study aimed to investigate whether the information conveyed by the eyes is of importance. We examined whether the presence of Caucasian human eyes in monkey faces allows recognition to be maintained in 6-month-olds and facilitates recognition in 9- and 12-month-olds. Our results revealed that the presence of human eyes in monkey faces maintains recognition for those faces at 6 months of age and partially facilitates recognition of those faces at 9 months of age, but not at 12 months of age. The findings are interpreted in the context of perceptual narrowing and suggest that the attenuation of processing of other-species faces is not reversed by the presence of human eyes. © 2015 Wiley Periodicals, Inc.

  14. Face Spoof Attack Recognition Using Discriminative Image Patches

    Directory of Open Access Journals (Sweden)

    Zahid Akhtar

    2016-01-01

    Full Text Available Face recognition systems are now being used in many applications such as border crossings, banks, and mobile payments. The wide-scale deployment of facial recognition systems has attracted intensive attention to the reliability of face biometrics against spoof attacks, where a photo, a video, or a 3D mask of a genuine user's face can be used to gain illegitimate access to facilities or services. Though several face antispoofing or liveness detection methods (which determine at the time of capture whether a face is live or spoof) have been proposed, the issue remains unsolved due to the difficulty of finding discriminative and computationally inexpensive features and methods for spoof attacks. In addition, existing techniques use the whole face image or the complete video for liveness detection. However, often certain face regions (video frames) are redundant or correspond to clutter in the image (video), generally leading to low performance. Therefore, we propose seven novel methods to find discriminative image patches, which we define as regions that are salient, instrumental, and class-specific. Four well-known classifiers, namely, support vector machine (SVM), Naive Bayes, Quadratic Discriminant Analysis (QDA), and Ensemble, are then used to distinguish between genuine and spoof faces using a voting-based scheme. Experimental analysis on two publicly available databases (Idiap REPLAY-ATTACK and CASIA-FASD) shows promising results compared to existing works.

  15. Electrophysiological brain dynamics during the esthetic judgment of human bodies and faces.

    Science.gov (United States)

    Muñoz, Francisco; Martín-Loeches, Manuel

    2015-01-12

    This experiment investigated how the esthetic judgment of the human body and face modulates cognitive and affective processes. We hypothesized that judgments of ugliness and beauty would elicit separable event-related brain potential (ERP) patterns, depending on the esthetic value of bodies and faces in both genders. In a pretest session, participants evaluated images in a range from very ugly to very beautiful, which generated three sets of beautiful, ugly, and neutral faces and bodies. In the recording session, they performed a task consisting of a beautiful-neutral-ugly judgment. Cognitive and affective effects were observed on a differential pattern of ERP components (P200, P300, and LPC). Main findings revealed a P200 amplitude increase for ugly images, probably the result of a negativity bias in attentional processes. A P300 increase was found mostly for beautiful images, particularly female bodies, consistent with the salience of these stimuli for stimulus categorization. The LPC appeared significantly larger for both ugly and beautiful images, probably reflecting later decision processes linked to keeping information in working memory; this finding was especially marked for ugly male faces. Our findings are discussed on the grounds of the evolutionary and adaptive value of esthetics in person evaluation. This article is part of a Special Issue entitled Hold Item. Copyright © 2014 Elsevier B.V. All rights reserved.

  16. A color based face detection system using multiple templates

    Institute of Scientific and Technical Information of China (English)

    王涛; 卜佳俊; 陈纯

    2003-01-01

    A color based system using multiple templates was developed and implemented for detecting human faces in color images. The algorithm consists of three image processing steps: gathering human skin color statistics, separating skin regions from non-skin regions, and locating the frontal human face(s) within the skin regions. In the first step, 250 skin samples from persons of different ethnicities are used to determine the color distribution of human skin in chromatic color space, producing a chroma chart of skin-color likelihoods. This chroma chart is used to generate, from the original color image, a gray scale image whose gray value at a pixel shows its likelihood of representing skin. The algorithm uses an adaptive thresholding process to find the optimal threshold value for dividing the gray scale image into skin regions and non-skin regions. Finally, matching against multiple face templates determines whether a given skin region represents a frontal human face. Tests of the system with more than 400 color images showed a detection rate of 83%, which is better than most color-based face detection systems. The average speed for face detection is 0.8 second/image (400×300 pixels) on a Pentium 3 (800MHz) PC.
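The chroma-chart and adaptive-thresholding stages can be sketched as below. The bin count and the iterative mean-splitting threshold are assumptions for illustration; the paper's adaptive procedure is not specified in this abstract:

```python
import numpy as np

def chroma_chart(skin_pixels, bins=32):
    """Histogram of skin samples in chromatic colour space (r, g),
    where r = R/(R+G+B) and g = G/(R+G+B), normalised to [0, 1]."""
    rgb = skin_pixels.astype(float)
    s = rgb.sum(axis=1) + 1e-9
    r, g = rgb[:, 0] / s, rgb[:, 1] / s
    hist, _, _ = np.histogram2d(r, g, bins=bins, range=[[0, 1], [0, 1]])
    return hist / hist.max()          # skin-colour likelihoods

def skin_likelihood(image, chart):
    """Gray-scale image of per-pixel skin likelihood from the chart."""
    bins = chart.shape[0]
    rgb = image.astype(float)
    s = rgb.sum(axis=2) + 1e-9
    r = np.clip((rgb[..., 0] / s * bins).astype(int), 0, bins - 1)
    g = np.clip((rgb[..., 1] / s * bins).astype(int), 0, bins - 1)
    return chart[r, g]

def iterative_threshold(gray, eps=1e-3):
    """Simple adaptive threshold by iterative mean splitting (an
    illustrative stand-in for the paper's adaptive procedure)."""
    t = gray.mean()
    while True:
        lo, hi = gray[gray <= t], gray[gray > t]
        t_new = 0.5 * (lo.mean() + (hi.mean() if hi.size else t))
        if abs(t_new - t) < eps:
            return t_new
        t = t_new
```

Pixels with likelihood above the returned threshold form the candidate skin regions that the template-matching stage would then examine.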

  18. Network dynamics of human face perception.

    Directory of Open Access Journals (Sweden)

    Cihan Mehmet Kadipasaoglu

    Full Text Available Prevailing theories suggest that cortical regions responsible for face perception operate in a serial, feed-forward fashion. Here, we utilize invasive human electrophysiology to evaluate serial models of face processing via measurements of cortical activation, functional connectivity, and cortico-cortical evoked potentials. We find that task-dependent changes in functional connectivity between face-selective regions in the inferior occipital (f-IOG) and fusiform gyrus (f-FG) are bidirectional, not feed-forward, and emerge following feed-forward input from early visual cortex (EVC) to both of these regions. Cortico-cortical evoked potentials similarly reveal independent signal propagations between EVC and both f-IOG and f-FG. These findings are incompatible with serial models, and support a parallel, distributed network underpinning face perception in humans.

  19. The neural code for face orientation in the human fusiform face area.

    Science.gov (United States)

    Ramírez, Fernando M; Cichy, Radoslaw M; Allefeld, Carsten; Haynes, John-Dylan

    2014-09-03

    Humans recognize faces and objects with high speed and accuracy regardless of their orientation. Recent studies have proposed that orientation invariance in face recognition involves an intermediate representation where neural responses are similar for mirror-symmetric views. Here, we used fMRI, multivariate pattern analysis, and computational modeling to investigate the neural encoding of faces and vehicles at different rotational angles. Corroborating previous studies, we demonstrate a representation of face orientation in the fusiform face-selective area (FFA). We go beyond these studies by showing that this representation is category-selective and tolerant to retinal translation. Critically, by controlling for low-level confounds, we found the representation of orientation in FFA to be compatible with a linear angle code. Aspects of mirror-symmetric coding cannot be ruled out when FFA mean activity levels are considered as a dimension of coding. Finally, we used a parametric family of computational models, involving a biased sampling of view-tuned neuronal clusters, to compare different face angle encoding models. The best fitting model exhibited a predominance of neuronal clusters tuned to frontal views of faces. In sum, our findings suggest a category-selective and monotonic code of face orientation in the human FFA, in line with primate electrophysiology studies that observed mirror-symmetric tuning of neural responses at higher stages of the visual system, beyond the putative homolog of human FFA. Copyright © 2014 the authors 0270-6474/14/3412155-13$15.00/0.

  20. Discriminative Projection Selection Based Face Image Hashing

    Science.gov (United States)

    Karabat, Cagatay; Erdogan, Hakan

    Face image hashing is an emerging method used in biometric verification systems. In this paper, we propose a novel face image hashing method based on a new technique called discriminative projection selection. We apply the Fisher criterion for selecting the rows of a random projection matrix in a user-dependent fashion. Moreover, another contribution of this paper is to employ a bimodal Gaussian mixture model at the quantization step. Our simulation results on three different databases demonstrate that the proposed method has superior performance in comparison to previously proposed random projection based methods.
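The user-dependent discriminative projection selection can be sketched as follows. The Fisher-score computation mirrors the abstract's description; for brevity a simple sign quantizer stands in for the paper's bimodal Gaussian-mixture quantizer, and all names are ours:

```python
import numpy as np

def select_projections(P, user, others, k):
    """Pick the k rows of random projection matrix P that best separate
    one user's feature vectors from other users' samples, using the
    Fisher criterion (between-class over within-class scatter)."""
    pu, po = user @ P.T, others @ P.T            # projected samples
    num = (pu.mean(axis=0) - po.mean(axis=0)) ** 2   # between-class scatter
    den = pu.var(axis=0) + po.var(axis=0) + 1e-12    # within-class scatter
    fisher = num / den
    rows = np.argsort(fisher)[::-1][:k]
    return P[rows]

def face_hash(x, P_sel):
    """Binary hash of feature vector x under the selected projections
    (sign quantiser; the paper uses a bimodal GMM at this step)."""
    return (P_sel @ x > 0).astype(np.uint8)
```

Because the selection depends on the enrolled user's own samples, each user ends up with a different, more discriminative projection subset, which is the core idea of the method.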

  1. Functional organization of the face-sensitive areas in human occipital-temporal cortex.

    Science.gov (United States)

    Shao, Hanyu; Weng, Xuchu; He, Sheng

    2017-08-15

    Human occipital-temporal cortex features several areas sensitive to faces, presumably forming the biological substrate for face perception. To date, there are piecemeal insights regarding the functional organization of these regions. They have come, however, from studies that are far from homogeneous with regard to the regions involved, the experimental design, and the data analysis approach. In order to provide an overall view of the functional organization of the face-sensitive areas, it is necessary to conduct a comprehensive study that taps into the pivotal functional properties of all the face-sensitive areas, within the context of the same experimental design, and uses multiple data analysis approaches. In this study, we identified the most robustly activated face-sensitive areas in bilateral occipital-temporal cortices (i.e., AFP, aFFA, pFFA, OFA, pcSTS, pSTS) and systematically compared their regionally averaged activation and multivoxel activation patterns to 96 images from 16 object categories, including faces and non-faces. This condition-rich and single-image analysis approach critically samples the functional properties of a brain region, allowing us to test how two basic functional properties, namely face-category selectivity and face-exemplar sensitivity, are distributed among these regions. Moreover, by examining the correlational structure of neural responses to the 96 images, we characterize their interactions in the greater face-processing network. We found that (1) r-pFFA showed the highest face-category selectivity, followed by l-pFFA, bilateral aFFA and OFA, and then bilateral pcSTS. In contrast, bilateral AFP and pSTS showed low face-category selectivity; (2) l-aFFA, l-pcSTS and bilateral AFP showed evidence of face-exemplar sensitivity; (3) r-OFA showed high overall response similarities with bilateral LOC and r-pFFA, suggesting it might be a transitional stage between general and face-selective information processing; (4) r-aFFA showed high

  2. A Novel Approach of Low-Light Image Denoising for Face Recognition

    Directory of Open Access Journals (Sweden)

    Yimei Kang

    2014-04-01

    Full Text Available Illumination variation makes automatic face recognition a challenging task, especially in low-light environments. A simple and efficient novel method for low-light image denoising of low frequency noise (DeLFN) is proposed. The noise frequency distribution of low-light images is characterized on the basis of extensive experimental results; low and very low frequency noise are dominant in low-light conditions. DeLFN is a three-level image denoising method. The first level removes mixed noise by histogram equalization (HE), improving overall contrast. The second level removes low frequency noise by logarithmic transformation (LOG), enhancing image detail. The third level removes residual very low frequency noise by high-pass filtering, recovering more features of the true image. The PCA (Principal Component Analysis) recognition method is applied to test the recognition rate of face images preprocessed with DeLFN. DeLFN is compared with several representative illumination preprocessing methods on the Yale Face Database B, the Extended Yale Face Database B, and the CMU PIE face database. DeLFN not only outperformed the other algorithms in visual quality and face recognition rate, but is also simple and computationally efficient for real-time applications.
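The three levels of DeLFN map naturally onto a short NumPy pipeline. The kernel size, the unsharp amount, and the use of a box-blur subtraction as the high-pass step are our assumptions, not the paper's parameters:

```python
import numpy as np

def box_blur(a, k):
    """Separable k x k box blur (crude low-frequency estimate)."""
    f = np.ones(k) / k
    a = np.apply_along_axis(lambda v: np.convolve(v, f, 'same'), 0, a)
    return np.apply_along_axis(lambda v: np.convolve(v, f, 'same'), 1, a)

def delfn(img, k=9, amount=0.5):
    """Three-level preprocessing in the spirit of DeLFN (sketch).

    img: uint8 gray image; returns a float image in [0, 1].
    Level 1: histogram equalisation  -> overall contrast.
    Level 2: logarithmic transform   -> detail in dark regions.
    Level 3: high-pass (unsharp)     -> residual very-low-frequency noise."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum() / img.size
    eq = cdf[img]                                # level 1
    logt = np.log1p(eq) / np.log(2.0)            # level 2, maps [0,1] -> [0,1]
    low = box_blur(logt, k)                      # low-frequency component
    return np.clip(logt + amount * (logt - low), 0.0, 1.0)   # level 3
```

The output would then feed the PCA recognizer exactly as any other illumination-normalized image.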

  3. Exploring manifold structure of face images via multiple graphs

    KAUST Repository

    Alghamdi, Masheal

    2013-01-01

    Geometric structure in the data provides important information for face image recognition and classification tasks. Graph regularized non-negative matrix factorization (GrNMF) performs well in this task. However, it is sensitive to parameter selection. Wang et al. proposed multiple graph regularized non-negative matrix factorization (MultiGrNMF) to solve the parameter selection problem, testing it on medical images. In this paper, we introduce the MultiGrNMF algorithm in the context of still face image classification, and conduct a comparative study of NMF, GrNMF, and MultiGrNMF using two well-known face databases. Experimental results show that MultiGrNMF outperforms NMF and GrNMF in most cases.
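The GrNMF objective this record builds on (graph-regularized NMF, minimising ||X − UVᵀ||² + λ·Tr(VᵀLV) with graph Laplacian L = D − W) can be sketched with standard multiplicative updates. MultiGrNMF's combination of several graphs is omitted, and this follows the commonly cited GNMF update rules rather than anything specific to this record:

```python
import numpy as np

def gnmf(X, W_adj, k=5, lam=0.1, iters=200, seed=0):
    """Graph-regularised NMF: X (m x n) is approximated by U V^T with
    nonnegative U (m x k), V (n x k), while the graph over the n samples
    (adjacency W_adj) pulls neighbouring samples' codes together."""
    rng = np.random.RandomState(seed)
    m, n = X.shape
    U = rng.rand(m, k)
    V = rng.rand(n, k)
    D = np.diag(W_adj.sum(axis=1))               # degree matrix
    for _ in range(iters):
        # standard multiplicative updates (all factors stay nonnegative)
        U *= (X @ V) / (U @ (V.T @ V) + 1e-12)
        V *= (X.T @ U + lam * (W_adj @ V)) / (V @ (U.T @ U) + lam * (D @ V) + 1e-12)
    return U, V
```

With `lam=0` this degenerates to plain NMF, which is one axis of the comparison the abstract reports.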

  5. A Low-dimensional Illumination Space Representation of Human Faces for Arbitrary Lighting Conditions

    Institute of Scientific and Technical Information of China (English)

    胡元奎; 汪增福

    2007-01-01

    The proposed method for low-dimensional illumination space representation (LDISR) of human faces can not only synthesize a virtual face image when given lighting conditions but also estimate lighting conditions when given a face image. The LDISR is based on the observation that 9 basis point light sources can represent almost arbitrary lighting conditions for face recognition applications and that different human faces have a similar LDISR. Principal component analysis (PCA) and the nearest neighbor clustering method are adopted to obtain the 9 basis point light sources. The 9 basis images under the 9 basis point light sources are then used to construct an LDISR which can represent almost all face images under arbitrary lighting conditions. The illumination ratio image (IRI) is employed to generate virtual face images under different illuminations. The LDISR obtained from face images of one person can be used for other people. Experimental results on image reconstruction and face recognition indicate the efficiency of LDISR.
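Both directions of LDISR — synthesizing a face image from 9 lighting coefficients, and estimating the coefficients from a given image — reduce to linear algebra over the 9 basis images. This sketch assumes a plain least-squares estimate; the names are ours:

```python
import numpy as np

def synthesize(basis, coeffs):
    """Linear combination of the 9 basis images (one per basis point
    light source): the LDISR synthesis direction."""
    return np.tensordot(coeffs, basis, axes=1)   # (9,) x (9,H,W) -> (H,W)

def estimate_lighting(basis, face):
    """Least-squares estimate of the lighting coefficients of `face`
    in the space spanned by `basis` (shape 9 x H x W): the LDISR
    estimation direction."""
    B = basis.reshape(len(basis), -1).T          # pixels x 9
    coeffs, *_ = np.linalg.lstsq(B, face.ravel(), rcond=None)
    return coeffs
```

Estimation followed by synthesis gives the reconstruction that the abstract's image-reconstruction experiments evaluate.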

  6. Human behavior preceding dog bites to the face.

    Science.gov (United States)

    Rezac, P; Rezac, K; Slama, P

    2015-12-01

    Facial injuries caused by dog bites pose a serious problem. The aims of this study were to determine the human behavior immediately preceding a dog bite to the face and to assess the effects of victim age and gender and dog sex and size on the location of the bite to the face and the need for medical treatment. Complete data on 132 incidents of bites to the face were analysed. A human bending over a dog, putting the face close to the dog's face, and gazing between victim and dog closely preceded a dog bite to the face in 76%, 19% and 5% of cases, respectively. More than half of the bites were directed towards the central area of the victim's face (nose, lips). More than two thirds of the victims were children, none of the victims was an adult dog owner, and only adult dogs bit the face. The victim's age and gender and the dog's sex and size did not affect the location of the bite on the face. People who were bitten by large dogs sought medical treatment more often than people who were bitten by small dogs. Bending over a dog, putting the face close to the dog's face, and gazing between human and dog should be avoided, and children should be carefully and constantly supervised when in the presence of dogs. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Energy conservation using face detection

    Science.gov (United States)

    Deotale, Nilesh T.; Kalbande, Dhananjay R.; Mishra, Akassh A.

    2011-10-01

    Computerized face detection is concerned with the difficult task of locating human faces in a video signal. It has several applications, such as face recognition, simultaneous multiple face processing, biometrics, security, video surveillance, human-computer interfaces, and image database management; digital cameras use face detection for autofocus and for selecting regions of interest in pan-and-scale photo slideshows. The present paper deals with energy conservation using face detection. Automating the process on a computer requires the use of various image processing techniques. Various methods can be used for face detection, including contour tracking, template matching, controlled background, model based, motion based, and color based methods. Basically, the video of the subject is converted into images, which are selected manually for processing. However, several factors make face detection difficult: poor illumination, movement of the face, viewpoint-dependent physical appearance, acquisition geometry, imaging conditions, and compression artifacts. This paper reports an algorithm for conservation of energy using face detection for various devices. The present paper suggests that energy conservation can be achieved by detecting the face, reducing the brightness of the complete image, and then adjusting the brightness of the particular area of the image where the face is located using histogram equalization.
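The suggested scheme — dim the whole frame, then keep the detected face region bright — can be sketched as follows, with the face detector itself (e.g. a Haar cascade) left out and the bounding box taken as given:

```python
import numpy as np

def dim_except_face(frame, face_box, dim=0.4):
    """Reduce the brightness of the whole frame to `dim` of its level,
    then restore the detected face region to full brightness.  The
    paper additionally applies histogram equalisation to the face
    region; here the region is simply left unchanged."""
    out = frame.astype(float) * dim
    x, y, w, h = face_box                        # (left, top, width, height)
    out[y:y + h, x:x + w] = frame[y:y + h, x:x + w]
    return out.astype(frame.dtype)
```

On a display whose backlight power scales with pixel brightness, dimming everything outside the face region is where the energy saving comes from.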

  8. Dogs can discriminate human smiling faces from blank expressions.

    Science.gov (United States)

    Nagasawa, Miho; Murai, Kensuke; Mogi, Kazutaka; Kikusui, Takefumi

    2011-07-01

    Dogs have a unique ability to understand visual cues from humans. We investigated whether dogs can discriminate between human facial expressions. Photographs of human faces were used to test nine pet dogs in two-choice discrimination tasks. The training phases involved each dog learning to discriminate between a set of photographs of their owner's smiling and blank face. Of the nine dogs, five fulfilled these criteria and were selected for test sessions. In the test phase, 10 sets of photographs of the owner's smiling and blank face, which had previously not been seen by the dog, were presented. The dogs selected the owner's smiling face significantly more often than expected by chance. In subsequent tests, 10 sets of smiling and blank face photographs of 20 persons unfamiliar to the dogs were presented (10 males and 10 females). There was no statistical difference between the accuracy in the case of the owners and that in the case of unfamiliar persons with the same gender as the owner. However, the accuracy was significantly lower in the case of unfamiliar persons of the opposite gender to that of the owner, than with the owners themselves. These results suggest that dogs can learn to discriminate human smiling faces from blank faces by looking at photographs. Although it remains unclear whether dogs have human-like systems for visual processing of human facial expressions, the ability to learn to discriminate human facial expressions may have helped dogs adapt to human society.

  9. The Impact of Image Quality on the Performance of Face Recognition

    NARCIS (Netherlands)

    Dutta, A.; Veldhuis, Raymond N.J.; Spreeuwers, Lieuwe Jan

    The performance of a face recognition system depends on the quality of both test and reference images participating in the face comparison process. In a forensic evaluation case involving face recognition, we do not have any control over the quality of the trace (image captured by a CCTV at a crime

  10. Face recognition in the thermal infrared domain

    Science.gov (United States)

    Kowalski, M.; Grudzień, A.; Palka, N.; Szustakowski, M.

    2017-10-01

    Biometrics refers to unique human characteristics. Each unique characteristic may be used to label and describe individuals and for automatic recognition of a person based on physiological or behavioural properties. One of the most natural and most popular biometric traits is the face. The most common research methods for face recognition are based on visible light. State-of-the-art face recognition systems operating in the visible light spectrum achieve very high recognition accuracy under controlled environmental conditions. Thermal infrared imagery seems to be a promising alternative or complement to visible range imaging due to its relatively high resistance to illumination changes. A thermal infrared image of the human face presents its unique heat signature and can be used for recognition. The characteristics of thermal images maintain advantages over visible light images and can be used to improve algorithms for human face recognition in several respects. Mid-wavelength and far-wavelength infrared, also referred to as thermal infrared, thus seem to be promising alternatives. We present a study on 1:1 recognition in the thermal infrared domain. The two approaches we consider are stand-off face verification of a non-moving person and stop-less face verification on the move. The paper presents the methodology of our studies and the challenges for face recognition systems in the thermal infrared domain.

  11. 2D Methods for pose invariant face recognition

    CSIR Research Space (South Africa)

    Mokoena, Ntabiseng

    2016-12-01

    Full Text Available The ability to recognise face images under random pose is a task that is done effortlessly by human beings. However, for a computer system, recognising face images under varying poses still remains an open research area. Face recognition across pose...

  12. The fMRI analysis of brain activation in response to face image affected by background images

    International Nuclear Information System (INIS)

    Shimada, Takamasa; Fukami, Tadanori; Saito, Yoichi

    2011-01-01

    Previous studies reported that face images expressing fear induce activation in the medial temporal lobe; in particular, such images were found to activate the amygdala and hippocampus. In these studies, no background images were used with the facial stimuli. However, normal day-to-day scenes always have a background. We therefore investigated the effect of combining face images expressing fear with different background images. Strong activation was detected in the amygdala and hippocampus when a lightning background image was used, but not when a fire background image was used. Based on a questionnaire rating how plausible it seemed that one could experience the situations shown in the images, we suggest that this difference in perceived plausibility altered empathy and thereby caused the difference in brain activation. (author)

  13. Illumination normalization of face image based on illuminant direction estimation and improved Retinex.

    Directory of Open Access Journals (Sweden)

    Jizheng Yi

    Full Text Available Illumination normalization of face images for face recognition and facial expression recognition is one of the most frequent and difficult problems in image processing. In order to obtain a face image with normal illumination, our method first divides the input face image into sixteen local regions and calculates the edge level percentage in each of them. Second, three local regions that meet the requirements of lower complexity and larger average gray value are selected to calculate the final illuminant direction, according to the error function between the measured and calculated intensities and the constraint function for an infinite light source model. Once the final illuminant direction of the input face image is known, the Retinex algorithm is improved in two respects: (1) we optimize the surround function; (2) we truncate the values at both ends of the histogram of the face image, determine the range of gray levels, and stretch that range into the dynamic range of the display device. Finally, we achieve illumination normalization and obtain the final face image. Unlike previous illumination normalization approaches, the method proposed in this paper does not require any training step or any knowledge of a 3D face or reflective surface model. Experimental results on the extended Yale face database B and CMU-PIE show that our method achieves a better normalization effect compared with existing techniques.

  14. Illumination normalization of face image based on illuminant direction estimation and improved Retinex.

    Science.gov (United States)

    Yi, Jizheng; Mao, Xia; Chen, Lijiang; Xue, Yuli; Rovetta, Alberto; Caleanu, Catalin-Daniel

    2015-01-01

    Illumination normalization of face images for face recognition and facial expression recognition is one of the most frequent and difficult problems in image processing. In order to obtain a face image with normal illumination, our method first divides the input face image into sixteen local regions and calculates the edge level percentage in each of them. Second, three local regions that meet the requirements of lower complexity and larger average gray value are selected to calculate the final illuminant direction, according to the error function between the measured and calculated intensities and the constraint function for an infinite light source model. Once the final illuminant direction of the input face image is known, the Retinex algorithm is improved in two respects: (1) we optimize the surround function; (2) we truncate the values at both ends of the histogram of the face image, determine the range of gray levels, and stretch that range into the dynamic range of the display device. Finally, we achieve illumination normalization and obtain the final face image. Unlike previous illumination normalization approaches, the method proposed in this paper does not require any training step or any knowledge of a 3D face or reflective surface model. Experimental results on the extended Yale face database B and CMU-PIE show that our method achieves a better normalization effect compared with existing techniques.
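
    The histogram step described above can be illustrated with a minimal sketch: a single-scale Retinex (log of the image minus log of a Gaussian-blurred surround), followed by clipping both tails of the output histogram and stretching the remainder into the display's dynamic range. This is not the authors' optimized surround function or their illuminant-direction estimation; the Gaussian sigma and the 1%/99% clip points are illustrative assumptions.

```python
import numpy as np

def single_scale_retinex(img, sigma=30):
    """Single-scale Retinex: log(image) - log(Gaussian-blurred surround)."""
    h, w = img.shape
    y, x = np.mgrid[-h//2:h - h//2, -w//2:w - w//2]
    kernel = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    kernel /= kernel.sum()
    # circular convolution via FFT; ifftshift moves the kernel centre to (0, 0)
    blurred = np.real(np.fft.ifft2(np.fft.fft2(img) *
                                   np.fft.fft2(np.fft.ifftshift(kernel))))
    return np.log1p(img) - np.log1p(np.maximum(blurred, 0))

def stretch_histogram(r, low_clip=0.01, high_clip=0.99):
    """Clip both tails of the Retinex output, then stretch to 0..255."""
    lo, hi = np.quantile(r, [low_clip, high_clip])
    r = np.clip(r, lo, hi)
    return ((r - lo) / (hi - lo) * 255).astype(np.uint8)

face = np.random.rand(64, 64) * 255        # stand-in for a grayscale face image
normalized = stretch_histogram(single_scale_retinex(face))
```

    With real images the sigma and clip points would be tuned per dataset.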

  15. Choosing face: The curse of self in profile image selection.

    Science.gov (United States)

    White, David; Sutherland, Clare A M; Burton, Amy L

    2017-01-01

    People draw automatic social inferences from photos of unfamiliar faces and these first impressions are associated with important real-world outcomes. Here we examine the effect of selecting online profile images on first impressions. We model the process of profile image selection by asking participants to indicate the likelihood that images of their own face ("self-selection") and of an unfamiliar face ("other-selection") would be used as profile images on key social networking sites. Across two large Internet-based studies (n = 610), in line with predictions, image selections accentuated favorable social impressions and these impressions were aligned to the social context of the networking sites. However, contrary to predictions based on people's general expertise in self-presentation, other-selected images conferred more favorable impressions than self-selected images. We conclude that people make suboptimal choices when selecting their own profile pictures, such that self-perception places important limits on facial first impressions formed by others. These results underscore the dynamic nature of person perception in real-world contexts.

  16. Asymmetry and Symmetry in the Beauty of Human Faces

    Directory of Open Access Journals (Sweden)

    Marjan Hessamian

    2010-02-01

    Full Text Available The emphasis in the published literature has mostly been on symmetry as the critical source of beauty judgments. In fact, both symmetry and asymmetry serve as highly aesthetic sources of beauty, whether the context is perceptual or conceptual. The human brain is characterized by symbolic cognition, and this type of cognition facilitates a range of aesthetic reactions. For example, both art and natural scenery contain asymmetrical elements which nevertheless render the whole effect beautiful. A further good case in point is, in fact, human faces. Normally, faces are structurally left-right symmetrical content-wise but not size-wise or function-wise. Attractiveness has often been discussed in terms of content-wise full-face symmetry. To test whether attractiveness can be gleaned only from the presence of left-right full faces, we tested half faces. Three separate groups of participants viewed and rated the attractiveness of 56 full faces (women’s and men’s), their 56 vertical left hemi-faces and their 56 vertical right hemi-faces. We found no statistically significant differences in the attractiveness ratings of full and hemi-faces (whether left or right). Instead, we found a strong and significant positive correlation between the ratings of the hemi- and full faces. These results are consistent with the view that the underpinning of human facial beauty is complex and that bilateral symmetry does not constitute a principal factor in beauty assessment. We discuss how the highly evolved human brain, together with symbolic and abstract cognition, enables a wide variety of aesthetic reactions.

  17. Differences in the Pattern of Hemodynamic Response to Self-Face and Stranger-Face Images in Adolescents with Anorexia Nervosa: A Near-Infrared Spectroscopic Study.

    Directory of Open Access Journals (Sweden)

    Takeshi Inoue

    Full Text Available There have been no reports concerning self-face perception in patients with anorexia nervosa (AN). The purpose of this study was to compare the neuronal correlates of viewing self-face images (i.e. images of a familiar face) and stranger-face images (i.e. images of an unfamiliar face) in female adolescents with and without AN. We used near-infrared spectroscopy (NIRS) to measure hemodynamic responses while the participants viewed full-color photographs of a self-face and a stranger-face. Fifteen females with AN (mean age, 13.8 years) and 15 age- and intelligence quotient (IQ)-matched female controls without AN (mean age, 13.1 years) participated in the study. The responses to photographs were compared with the baseline activation (response to a white uniform blank). In the AN group, the concentration of oxygenated hemoglobin (oxy-Hb) significantly increased in the right temporal area during the presentation of both the self-face and stranger-face images compared with the baseline level. In contrast, in the control group, the concentration of oxy-Hb significantly increased in the right temporal area only during the presentation of the self-face image. To our knowledge, the present study is the first to assess brain activity during self-face and stranger-face perception among female adolescents with AN. There were different patterns of brain activation in response to the sight of self-face and stranger-face images in female adolescents with AN and in controls.

  18. Differences in the Pattern of Hemodynamic Response to Self-Face and Stranger-Face Images in Adolescents with Anorexia Nervosa: A Near-Infrared Spectroscopic Study.

    Science.gov (United States)

    Inoue, Takeshi; Sakuta, Yuiko; Shimamura, Keiichi; Ichikawa, Hiroko; Kobayashi, Megumi; Otani, Ryoko; Yamaguchi, Masami K; Kanazawa, So; Kakigi, Ryusuke; Sakuta, Ryoichi

    2015-01-01

    There have been no reports concerning self-face perception in patients with anorexia nervosa (AN). The purpose of this study was to compare the neuronal correlates of viewing self-face images (i.e. images of a familiar face) and stranger-face images (i.e. images of an unfamiliar face) in female adolescents with and without AN. We used near-infrared spectroscopy (NIRS) to measure hemodynamic responses while the participants viewed full-color photographs of a self-face and a stranger-face. Fifteen females with AN (mean age, 13.8 years) and 15 age- and intelligence quotient (IQ)-matched female controls without AN (mean age, 13.1 years) participated in the study. The responses to photographs were compared with the baseline activation (response to a white uniform blank). In the AN group, the concentration of oxygenated hemoglobin (oxy-Hb) significantly increased in the right temporal area during the presentation of both the self-face and stranger-face images compared with the baseline level. In contrast, in the control group, the concentration of oxy-Hb significantly increased in the right temporal area only during the presentation of the self-face image. To our knowledge, the present study is the first to assess brain activity during self-face and stranger-face perception among female adolescents with AN. There were different patterns of brain activation in response to the sight of self-face and stranger-face images in female adolescents with AN and in controls.

  19. [Health and humanization Diploma: the value of reflection and face to face learning].

    Science.gov (United States)

    Martínez-Gutiérrez, Javiera; Magliozzi, Pietro; Torres, Patricio; Soto, Mauricio; Walker, Rosa

    2015-03-01

    In a rapidly changing culture like ours, with its emphasis on productivity, there is a strong need to find meaning in health care work through learning instances that privilege reflection and face-to-face contact with others. The Diploma in Health and Humanization (DSH) was developed as an interdisciplinary space for training on issues related to humanization. To analyze the experience of the DSH, we aimed to identify the elements that students considered key factors for the success of the program. We conducted a focus group with DSH graduates, identifying factors associated with satisfaction. Transcripts were coded and analyzed by two independent reviewers. DSH graduates valued a safe space, personal interaction, dialogue and respect as learning tools of the DSH. They also appreciated the opportunity to have emotional interactions among students and between them and the teacher, as well as the opportunity to share personal stories and their own search for meaning. The DSH is a learning experience whose graduates value the ability to reflect on their vocation and the affective interaction with peers and teachers. We hope to contribute to the development of face-to-face courses in the area of humanization. Face-to-face methodology is an excellent teaching technique for content related to the meaning of work and, more specifically, for learners who require affective communication and a personal connection between their work and their own values and beliefs.

  20. Preference for Attractive Faces in Human Infants Extends beyond Conspecifics

    Science.gov (United States)

    Quinn, Paul C.; Kelly, David J.; Lee, Kang; Pascalis, Olivier; Slater, Alan M.

    2008-01-01

    Human infants, just a few days of age, are known to prefer attractive human faces. We examined whether this preference is human-specific. Three- to 4-month-olds preferred attractive over unattractive domestic and wild cat (tiger) faces (Experiments 1 and 3). The preference was not observed when the faces were inverted, suggesting that it did not…

  1. Modelling temporal networks of human face-to-face contacts with public activity and individual reachability

    Science.gov (United States)

    Zhang, Yi-Qing; Cui, Jing; Zhang, Shu-Min; Zhang, Qi; Li, Xiang

    2016-02-01

    Modelling temporal networks of human face-to-face contacts is vital both for understanding the spread of airborne pathogens and for word-of-mouth spreading of information. Although many efforts have been devoted to modelling these temporal networks, two important social features, public activity and individual reachability, have been ignored in existing models. Here we present a simple model that captures these two features along with other typical properties of empirical face-to-face contact networks. The model describes agents that are characterized by an attractiveness which slows down the motion of nearby people, have an event-triggered activation probability, and perform an activity-dependent biased random walk in a square box with periodic boundaries. The model quantitatively reproduces two empirical temporal networks of human face-to-face contacts, as verified by their network properties and the epidemic spreading dynamics on them.
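
    A toy simulation along the lines sketched in the abstract, with all parameter values (agent count, box size, step length, contact radius, activation probabilities) invented for the sketch rather than taken from the paper: agents random-walk in a periodic box, the most attractive nearby agent slows an agent's motion, and inactive agents can be reactivated by a nearby agent.

```python
import numpy as np

rng = np.random.default_rng(0)

N, L, STEPS = 50, 10.0, 200      # agents, box side, time steps (hypothetical)
V, RADIUS = 0.3, 1.0             # step length and contact radius (hypothetical)

pos = rng.uniform(0, L, (N, 2))
attract = rng.uniform(0, 1, N)   # per-agent attractiveness
active = np.ones(N, bool)
contacts = []                    # (time, i, j) contact events

for t in range(STEPS):
    # pairwise distances with periodic boundary conditions
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)
    dist = np.hypot(d[..., 0], d[..., 1])
    np.fill_diagonal(dist, np.inf)
    near = dist < RADIUS
    for i in range(N):
        if not active[i]:
            # event-triggered reactivation when another agent is close by
            active[i] = near[i].any() and rng.random() < 0.5
            continue
        neighbours = np.where(near[i])[0]
        if neighbours.size:
            contacts += [(t, i, int(j)) for j in neighbours if j > i]
            # the most attractive neighbour slows this agent down
            speed = V * (1 - attract[neighbours].max())
        else:
            speed = V
        angle = rng.uniform(0, 2 * np.pi)
        pos[i] = (pos[i] + speed * np.array([np.cos(angle), np.sin(angle)])) % L
        active[i] = rng.random() > 0.1   # small chance of becoming inactive

print(len(contacts), "contact events recorded")
```

    The resulting (time, i, j) event list is the raw material from which temporal-network statistics such as contact durations and inter-contact times could be computed.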

  2. Effects of pose and image resolution on automatic face recognition

    NARCIS (Netherlands)

    Mahmood, Zahid; Ali, Tauseef; Khan, Samee U.

    The popularity of face recognition systems has increased due to their use in widespread applications. Driven by the enormous number of potential application domains, several algorithms have been proposed for face recognition. Face pose and image resolution are two important factors that

  3. Images of war: using satellite images for human rights monitoring in Turkish Kurdistan.

    Science.gov (United States)

    de Vos, Hugo; Jongerden, Joost; van Etten, Jacob

    2008-09-01

    In areas of war and armed conflict it is difficult to obtain trustworthy and coherent information. Civil society and human rights groups often face problems in dealing with fragmented witness reports, the disinformation of war propaganda, and difficult direct access to these areas. Turkish Kurdistan was used as a case study of armed conflict to evaluate the potential use of satellite images for verifying witness reports collected by human rights groups. The Turkish army was reported to be burning forests, fields and villages as a strategy in the conflict against a guerrilla uprising. This paper concludes that satellite images are useful for validating witness reports of forest fires. Even though the use of this technology by human rights groups will depend on feasibility factors such as price, access and expertise, the images proved key for analyzing the spatial aspects of conflict and valuable for reconstructing a more trustworthy picture.

  4. EEG source imaging assists decoding in a face recognition task

    DEFF Research Database (Denmark)

    Andersen, Rasmus S.; Eliasen, Anders U.; Pedersen, Nicolai

    2017-01-01

    of face recognition. This task concerns the differentiation of brain responses to images of faces and scrambled faces and poses a rather difficult decoding problem at the single trial level. We implement the pipeline using spatially focused features and show that this approach is challenged and source...

  5. Visual search of Mooney faces

    Directory of Open Access Journals (Sweden)

    Jessica Emeline Goold

    2016-02-01

    Full Text Available Faces spontaneously capture attention. However, it remains unclear which special attributes of a face underlie this effect. To address this question, we investigate how gist information, specific visual properties and differing amounts of experience with faces affect the time required to detect a face. Three visual search experiments were conducted to investigate how rapidly human observers detect Mooney face images. Mooney images are two-toned, ambiguous images; they were used in order to have stimuli that maintain gist information but limit low-level image properties. Results from the experiments show: (1) although upright Mooney faces were searched inefficiently, they were detected more rapidly than inverted Mooney face targets, demonstrating the important role of gist information in guiding attention towards a face; (2) several specific Mooney face identities were searched efficiently while others were not, suggesting the involvement of specific visual properties in face detection; (3) when participants were provided with unambiguous gray-scale versions of the Mooney face targets prior to the visual search task, the targets were detected significantly more efficiently, suggesting that prior experience with Mooney faces improves the ability to extract gist information for rapid face detection. However, a week of training with Mooney face categorization did not lead to even more efficient visual search of Mooney face targets. In summary, these results reveal that specific local image properties cannot account for how faces capture attention; on the other hand, gist information alone cannot account for it either. Prior experience facilitates the effect of gist on visual search of faces, making faces a special object category for guiding attention.

  6. Drawing cartoon faces--a functional imaging study of the cognitive neuroscience of drawing.

    Science.gov (United States)

    Miall, R Chris; Gowen, Emma; Tchalenko, John

    2009-03-01

    We report a functional imaging study of drawing cartoon faces. Normal, untrained participants were scanned while viewing simple black and white cartoon line drawings of human faces, retaining them for a short memory interval, and then drawing them without vision of their hand or the paper. Specific encoding and retention of information about the faces were tested for by contrasting these two stages (with display of cartoon faces) against the exploration and retention of random dot stimuli. Drawing was contrasted between conditions in which only memory of a previously viewed face was available versus a condition in which both memory and simultaneous viewing of the cartoon were possible, and versus drawing of a new, previously unseen, face. We show that the encoding of cartoon faces powerfully activates the face-sensitive areas of the lateral occipital cortex and the fusiform gyrus, but there is no significant activation in these areas during the retention interval. Activity in both areas was also high when drawing the displayed cartoons. Drawing from memory activates areas in posterior parietal cortex and frontal areas. This activity is consistent with the encoding and retention of the spatial information about the face to be drawn as a visuo-motor action plan, either representing a series of targets for ocular fixation or as spatial targets for the drawing action.

  7. Image-Based 3D Face Modeling System

    Directory of Open Access Journals (Sweden)

    Vladimir Vezhnevets

    2005-08-01

    Full Text Available This paper describes an automatic system for 3D face modeling using frontal and profile images taken with an ordinary digital camera. The system consists of four subsystems: frontal feature detection, profile feature detection, shape deformation, and texture generation modules. The frontal and profile feature detection modules automatically extract the facial parts such as the eyes, nose, mouth, and ears. The shape deformation module uses the detected features to deform a generic head mesh model so that the deformed model coincides with the detected features. A texture is created by combining the facial textures augmented from the input images with a synthesized texture, and is mapped onto the deformed generic head model. This paper provides a practical system for 3D face modeling that is highly automated by aggregating, customizing, and optimizing a number of individual computer vision algorithms. The experimental results show a highly automated modeling process that is sufficiently robust to various imaging conditions. The whole model creation, including all optional manual corrections, takes only 2∼3 minutes.

  8. Preference for Averageness in Faces Does Not Generalize to Non-Human Primates

    Directory of Open Access Journals (Sweden)

    Olivia B. Tomeo

    2017-07-01

    Full Text Available Facial attractiveness is a long-standing topic of active study in both neuroscience and social science, motivated by its positive social consequences. Over the past few decades, it has been established that averageness is a major factor influencing judgments of facial attractiveness in humans. Non-human primates share similar social behaviors as well as neural mechanisms related to face processing with humans. However, it is unknown whether monkeys, like humans, also find particular faces attractive and, if so, which kind of facial traits they prefer. To address these questions, we investigated the effect of averageness on preferences for faces in monkeys. We tested three adult male rhesus macaques using a visual paired comparison (VPC) task, in which they viewed pairs of faces (either two individual faces, or one individual face and one average face); viewing time was used as a measure of preference. We did find that monkeys looked longer at certain individual faces than others. However, unlike humans, monkeys did not prefer the average face over individual faces. In fact, the more an individual face differed from the average face, the longer the monkeys looked at it, indicating that the average face likely plays a role in face recognition rather than in judgments of facial attractiveness: in models of face recognition, the average face operates as the norm against which individual faces are compared and recognized. Taken together, our study suggests that the preference for averageness in faces does not generalize to non-human primates.

  9. The changing face of Britain - images and reality -

    OpenAIRE

    John, Herbert

    2012-01-01

    There are many famous images of Britain held by people in Japan. Those images can be wide-ranging, influenced by tourist trips to Britain, television programs or school textbooks. Some of the popular images include things such as Royal weddings and "kiri no London" (foggy London). However, are these images truly representative of modern Britain? Other cultures, including Japan, are changing the face of Britain in many ways. This lecture shows how Britain is changing, and examines some of the ...

  10. Frontal Face Detection using Haar Wavelet Coefficients and Local Histogram Correlation

    Directory of Open Access Journals (Sweden)

    Iwan Setyawan

    2011-12-01

    Full Text Available Face detection is the main building block on which all automatic systems dealing with human faces are built. For example, a face recognition system must rely on face detection to process an input image and determine which areas contain human faces; these areas then become the input to the face recognition system for further processing. This paper presents a face detection system designed to detect frontal faces. The system uses Haar wavelet coefficients and local histogram correlation as differentiating features. Our proposed system is trained using 100 training images. Our experiments show that the proposed system performed well during testing, achieving a detection rate of 91.5%.
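
    As a sketch of the first feature type mentioned above, one level of the 2D Haar wavelet transform can be computed from pairwise averages and differences along rows and then columns. This illustrates only the coefficient computation; the paper's feature selection and local histogram correlation steps are not reproduced here.

```python
import numpy as np

def haar2d(img):
    """One level of the 2D Haar wavelet transform.
    Returns the approximation (LL) and detail (LH, HL, HH) coefficient blocks."""
    a = img.astype(float)
    # rows: pairwise average / difference
    lo = (a[:, ::2] + a[:, 1::2]) / 2
    hi = (a[:, ::2] - a[:, 1::2]) / 2
    # columns: same operation on each half
    ll = (lo[::2] + lo[1::2]) / 2
    lh = (lo[::2] - lo[1::2]) / 2
    hl = (hi[::2] + hi[1::2]) / 2
    hh = (hi[::2] - hi[1::2]) / 2
    return ll, lh, hl, hh

img = np.arange(64, dtype=float).reshape(8, 8)   # toy 8x8 "image patch"
ll, lh, hl, hh = haar2d(img)
```

    The LL block is a half-resolution average of the patch, while LH/HL/HH capture horizontal, vertical and diagonal intensity changes, which is why Haar coefficients are cheap yet discriminative features for face/non-face classification.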

  11. Ultrahigh speed en face OCT capsule for endoscopic imaging.

    Science.gov (United States)

    Liang, Kaicheng; Traverso, Giovanni; Lee, Hsiang-Chieh; Ahsen, Osman Oguz; Wang, Zhao; Potsaid, Benjamin; Giacomelli, Michael; Jayaraman, Vijaysekhar; Barman, Ross; Cable, Alex; Mashimo, Hiroshi; Langer, Robert; Fujimoto, James G

    2015-04-01

    Depth resolved and en face OCT visualization in vivo may have important clinical applications in endoscopy. We demonstrate a high speed, two-dimensional (2D) distal scanning capsule with a micromotor for fast rotary scanning and a pneumatic actuator for precision longitudinal scanning. Longitudinal position measurement and image registration were performed by optical tracking of the pneumatic scanner. The 2D scanning device enables high resolution imaging over a small field of view and is suitable for OCT as well as other scanning microscopies. Large field of view imaging for screening or surveillance applications can also be achieved by proximally pulling back or advancing the capsule while scanning the distal high-speed micromotor. Circumferential en face OCT was demonstrated in living swine at 250 Hz frame rate and 1 MHz A-scan rate using a MEMS tunable VCSEL light source at 1300 nm. Cross-sectional and en face OCT views of the upper and lower gastrointestinal tract were generated with precision distal pneumatic longitudinal actuation as well as proximal manual longitudinal actuation. These devices could enable clinical studies either as an adjunct to endoscopy, attached to an endoscope, or as a swallowed tethered capsule for non-endoscopic imaging without sedation. The combination of ultrahigh speed imaging and distal scanning capsule technology could enable both screening and surveillance applications.

  12. Gender Perception From Faces Using Boosted LBPH (Local Binary Pattern Histograms)

    Directory of Open Access Journals (Sweden)

    U. U. Tariq

    2013-06-01

    Full Text Available Automatic gender classification from faces has several applications, such as surveillance, human-computer interaction and targeted advertisement. Humans can recognize gender from faces quite accurately, but for computer vision it is a difficult task. Many studies have targeted this problem, but most used images of faces taken under constrained conditions. Real-world applications, however, require processing real-world images, which have significant variation in lighting and pose and which make gender classification very difficult. We examine the problem of automatic gender classification from faces in real-world images. Faces are extracted from the images using a face detector, aligned, and represented using local binary pattern histograms. Discriminative features are selected using AdaBoost, and the boosted LBP features are used to train a support vector machine, which achieves a recognition rate of 93.29%.
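
    The base LBP histogram feature mentioned above can be sketched as follows: each pixel's 8 neighbours are thresholded against the centre, the resulting bits are packed into a code, and the codes are histogrammed into a 256-dimensional vector. The AdaBoost feature selection and SVM training stages of the paper are omitted.

```python
import numpy as np

def lbp_histogram(img):
    """Basic 8-neighbour LBP: threshold each neighbour against the centre pixel,
    pack the 8 comparison bits into a code, and histogram the codes (256 bins)."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        neighbour = img[1 + dy:img.shape[0] - 1 + dy,
                        1 + dx:img.shape[1] - 1 + dx]
        code |= (neighbour >= c).astype(np.uint8) << bit
    hist = np.bincount(code.ravel(), minlength=256)
    return hist / hist.sum()   # normalized 256-d feature vector

patch = np.random.default_rng(1).integers(0, 256, (16, 16)).astype(np.uint8)
feature = lbp_histogram(patch)
```

    In practice the face is divided into a grid of such patches and the per-patch histograms are concatenated, which is what makes the representation robust to small misalignments.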

  13. Real-time Face Detection using Skin Color Model

    Institute of Scientific and Technical Information of China (English)

    LU Yao-xin; LIU Zhi-Qiang; ZHU Xiang-hua

    2004-01-01

    This paper presents a new face detection approach for real-time applications, based on a skin color model and morphological filtering. First, the non-skin-color pixels of the input image are removed using a skin color model in the YCrCb chrominance space, from which candidate human face regions are extracted. Then a mathematical morphological filter is used to remove noisy regions and fill the holes in the candidate skin color regions. We use the similarity between human face features and the candidate face regions to locate the face regions in the original image. We have implemented the algorithm in our smart media system. The experimental results show that the system is effective in real-time applications.
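
    A minimal sketch of the first two stages: skin-color thresholding in YCrCb followed by a cleanup pass. The Cr/Cb ranges are commonly quoted values, not necessarily those used in the paper, and a simple 3x3 majority vote stands in for the morphological filter.

```python
import numpy as np

def rgb_to_ycrcb(rgb):
    """ITU-R BT.601 RGB -> YCrCb conversion (inputs in 0..255)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cr = (r - y) * 0.713 + 128
    cb = (b - y) * 0.564 + 128
    return y, cr, cb

def skin_mask(rgb, cr_range=(133, 173), cb_range=(77, 127)):
    """Threshold Cr/Cb into a binary skin mask (illustrative ranges)."""
    _, cr, cb = rgb_to_ycrcb(rgb.astype(float))
    return ((cr >= cr_range[0]) & (cr <= cr_range[1]) &
            (cb >= cb_range[0]) & (cb <= cb_range[1]))

def majority_filter(mask):
    """Crude stand-in for morphological filtering: keep a pixel only if at
    least 5 of its 3x3 neighbourhood are skin (removes speckle, fills pinholes)."""
    padded = np.pad(mask.astype(int), 1)
    count = sum(padded[1 + dy:padded.shape[0] - 1 + dy,
                       1 + dx:padded.shape[1] - 1 + dx]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1))
    return count >= 5

img = np.zeros((32, 32, 3), dtype=np.uint8)
img[8:24, 8:24] = (180, 120, 90)           # a skin-coloured square on black
mask = majority_filter(skin_mask(img))
```

    A production system would replace the majority vote with proper erosion/dilation and then verify each connected region against face-shape heuristics, as the paper describes.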

  14. A novel BCI based on ERP components sensitive to configural processing of human faces

    Science.gov (United States)

    Zhang, Yu; Zhao, Qibin; Jing, Jin; Wang, Xingyu; Cichocki, Andrzej

    2012-04-01

    This study introduces a novel brain-computer interface (BCI) based on an oddball paradigm using stimuli of facial images with loss of configural face information (e.g., inversion of the face). To the best of our knowledge, the configural processing of human faces has until now not been applied to BCI, although it has been widely studied in cognitive neuroscience research. Our experiments confirm that the face-sensitive event-related potential (ERP) components N170 and vertex positive potential (VPP) reflect early structural encoding of faces and can be modulated by the configural processing of faces. With the proposed novel paradigm, we investigate the effects of the ERP components N170, VPP and P300 on target detection for BCI. An eight-class BCI platform is developed to analyze ERPs and evaluate target detection performance using linear discriminant analysis without complicated feature extraction processing. An online classification accuracy of 88.7% and an information transfer rate of 38.7 bits min-1 using inverted-face stimuli with only a single trial suggest that the proposed paradigm based on the configural processing of faces is very promising for visual stimulus-driven BCI applications.
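
    The classification stage, two-class Fisher linear discriminant analysis on ERP feature vectors, can be sketched on synthetic data as below. The feature dimension and the P300-like mean shift are invented for illustration and bear no relation to the paper's actual recordings.

```python
import numpy as np

def fit_lda(X0, X1):
    """Two-class Fisher LDA: w = Sw^-1 (m1 - m0), with the decision threshold
    placed midway between the projected class means."""
    m0, m1 = X0.mean(0), X1.mean(0)
    Sw = np.cov(X0.T) + np.cov(X1.T)          # within-class scatter
    w = np.linalg.solve(Sw + 1e-6 * np.eye(len(m0)), m1 - m0)
    thresh = (X0 @ w).mean() / 2 + (X1 @ w).mean() / 2
    return w, thresh

rng = np.random.default_rng(0)
# synthetic "ERP amplitude" features: target trials carry an added offset,
# loosely mimicking a P300-like response (values are arbitrary)
nontarget = rng.normal(0.0, 1.0, (200, 8))
target = rng.normal(1.5, 1.0, (200, 8))

w, thresh = fit_lda(nontarget, target)
pred = (np.vstack([nontarget, target]) @ w) > thresh
labels = np.r_[np.zeros(200), np.ones(200)].astype(bool)
accuracy = (pred == labels).mean()
```

    Single-trial ERP classification in the paper works the same way in principle: epochs are projected onto a learned discriminant direction and thresholded, with no elaborate feature extraction beforehand.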

  15. Reconstructing 3D Face Model with Associated Expression Deformation from a Single Face Image via Constructing a Low-Dimensional Expression Deformation Manifold.

    Science.gov (United States)

    Wang, Shu-Fan; Lai, Shang-Hong

    2011-10-01

    Facial expression modeling is central to facial expression recognition and expression synthesis for facial animation. In this work, we propose a manifold-based 3D face reconstruction approach to estimating the 3D face model and the associated expression deformation from a single face image. With the proposed robust weighted feature map (RWF), we can obtain the dense correspondences between 3D face models and build a nonlinear 3D expression manifold from a large set of 3D facial expression models. Then a Gaussian mixture model in this manifold is learned to represent the distribution of expression deformation. By combining the merits of morphable neutral face model and the low-dimensional expression manifold, a novel algorithm is developed to reconstruct the 3D face geometry as well as the facial deformation from a single face image in an energy minimization framework. Experimental results on simulated and real images are shown to validate the effectiveness and accuracy of the proposed algorithm.

  16. USE OF IMAGE ENHANCEMENT TECHNIQUES FOR IMPROVING REAL TIME FACE RECOGNITION EFFICIENCY ON WEARABLE GADGETS

    Directory of Open Access Journals (Sweden)

    MUHAMMAD EHSAN RANA

    2017-01-01

    Full Text Available The objective of this research is to study the effects of image enhancement techniques on the face recognition performance of wearable gadgets, with an emphasis on recognition rate. In this research, a number of image enhancement techniques are selected, including brightness normalization, contrast normalization, sharpening, smoothing, and various combinations of these. Subsequently, test images are obtained from the AT&T database and Yale Face Database B to investigate the effect of these image enhancement techniques under various conditions, such as changes in illumination, face orientation and expression. The evaluation of the data collected during this research revealed that the effect of image pre-processing techniques on face recognition depends highly on the illumination conditions under which the images are taken. The benefit of applying image enhancement techniques to face images is seen best when there is high variation of illumination among images. Results also indicate that the highest recognition rate is achieved when images are taken under low-light conditions, image contrast is enhanced using the histogram equalization technique, and image noise is then reduced using a median smoothing filter. Additionally, the combination of contrast normalization and a mean smoothing filter shows good results in all scenarios. Results obtained from the test cases illustrate up to 75% improvement in face recognition rate when image enhancement is applied to images in the given scenarios.
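The best-performing combination reported above (histogram equalization followed by median smoothing) can be sketched in a few lines of NumPy. This is an illustrative re-implementation, not the authors' code, and the synthetic gradient image merely stands in for a low-light face photograph:

```python
import numpy as np

def histogram_equalize(img):
    """Spread an 8-bit grayscale image's intensities via its cumulative histogram."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]

def median_smooth(img, k=3):
    """k x k median filter; edges handled by reflection padding."""
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    stacked = np.stack([padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                        for dy in range(k) for dx in range(k)])
    return np.median(stacked, axis=0).astype(img.dtype)

# low-light stand-in: dark, noisy horizontal gradient
rng = np.random.default_rng(0)
face = (np.tile(np.linspace(10, 60, 64), (64, 1))
        + rng.integers(-5, 6, (64, 64))).clip(0, 255).astype(np.uint8)
enhanced = median_smooth(histogram_equalize(face))
```

After equalization the dark 10-65 intensity range is stretched toward the full 0-255 range, and the median filter then suppresses the amplified noise.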

  17. From local pixel structure to global image super-resolution: a new face hallucination framework.

    Science.gov (United States)

    Hu, Yu; Lam, Kin-Man; Qiu, Guoping; Shen, Tingzhi

    2011-02-01

    We have developed a new face hallucination framework termed from local pixel structure to global image super-resolution (LPS-GIS). Based on the assumption that two similar face images should have similar local pixel structures, the framework first uses the input low-resolution (LR) face image to search a face database for similar example high-resolution (HR) faces in order to learn the local pixel structures for the target HR face. It then uses the input LR face and the learned pixel structures as priors to estimate the target HR face. We present a three-step implementation procedure. Step 1 searches the database for the K example faces that are most similar to the input, and then warps the K example images to the input using optical flow. Step 2 uses the warped HR versions of the K example faces to learn the local pixel structures for the target HR face; for this step we have developed an effective method for learning local pixel structures from an individual face, and an adaptive procedure for fusing the local pixel structures of different example faces to reduce the influence of warping errors. Step 3 estimates the target HR face by solving a constrained optimization problem with an iterative procedure. Experimental results show that our new method provides good performance for face hallucination, in terms of both reconstruction error and visual quality, and that it is competitive with existing state-of-the-art methods.

  18. Facial Asymmetry-Based Age Group Estimation: Role in Recognizing Age-Separated Face Images.

    Science.gov (United States)

    Sajid, Muhammad; Taj, Imtiaz Ahmad; Bajwa, Usama Ijaz; Ratyal, Naeem Iqbal

    2018-04-23

    Face recognition aims to establish the identity of a person based on facial characteristics, whereas age group estimation is the automatic calculation of an individual's age range based on facial features. Recognizing age-separated face images is still a challenging research problem due to complex aging processes involving different types of facial tissues: skin, fat, muscles, and bones. Certain holistic and local facial features are used to recognize age-separated face images; however, most existing methods recognize face images without incorporating knowledge learned from age group estimation. In this paper, we propose an age-assisted face recognition approach to handle aging variations. Inspired by the observation that facial asymmetry is an age-dependent intrinsic facial feature, we first use asymmetric facial dimensions to estimate the age group of a given face image. Deeply learned asymmetric facial features are then extracted for face recognition using a deep convolutional neural network (dCNN). Finally, we integrate the knowledge learned from age group estimation into the face recognition algorithm using the same dCNN. This integration yields a significant improvement in overall performance compared to using the face recognition algorithm alone. Experimental results on two large facial aging datasets, MORPH and FERET, show that the proposed age-group-assisted face recognition approach yields superior performance compared to several existing state-of-the-art methods. © 2018 American Academy of Forensic Sciences.

  19. 'Faceness' and affectivity: evidence for genetic contributions to distinct components of electrocortical response to human faces.

    Science.gov (United States)

    Shannon, Robert W; Patrick, Christopher J; Venables, Noah C; He, Sheng

    2013-12-01

    The ability to recognize a variety of different human faces is undoubtedly one of the most important and impressive functions of the human perceptual system. Neuroimaging studies have revealed multiple brain regions (including the FFA, STS, OFA) and electrophysiological studies have identified differing brain event-related potential (ERP) components (e.g., N170, P200) possibly related to distinct types of face information processing. To evaluate the heritability of ERP components associated with face processing, including N170, P200, and LPP, we examined ERP responses to fearful and neutral face stimuli in monozygotic (MZ) and dizygotic (DZ) twins. Concordance levels for early brain response indices of face processing (N170, P200) were found to be stronger for MZ than DZ twins, providing evidence of a heritable basis to each. These findings support the idea that certain key neural mechanisms for face processing are genetically coded. Implications for understanding individual differences in recognition of facial identity and the emotional content of faces are discussed. Copyright © 2013 Elsevier Inc. All rights reserved.

  20. Human faces are slower than chimpanzee faces.

    Directory of Open Access Journals (Sweden)

    Anne M Burrows

    Full Text Available While humans (like other primates) communicate with facial expressions, the evolution of speech added a new function to the facial muscles (facial expression muscles). The evolution of speech required the development of a coordinated action between visual (movement of the lips) and auditory signals in a rhythmic fashion to produce "visemes" (visual movements of the lips that correspond to specific sounds). Visemes depend upon facial muscles to regulate the shape of the lips, which themselves act as speech articulators. This movement necessitates a more controlled, sustained muscle contraction than that produced during spontaneous facial expressions, which occur rapidly and last only a short period of time. Recently, it was found that human tongue musculature contains a higher proportion of slow-twitch myosin fibers than that of rhesus macaques, which is related to the slower, more controlled movements of the human tongue in the production of speech. Are there similar unique evolutionary physiologic biases in human facial musculature related to the evolution of speech? Using myosin immunohistochemistry, we tested the hypothesis that human facial musculature has a higher percentage of slow-twitch myosin fibers relative to chimpanzees (Pan troglodytes) and rhesus macaques (Macaca mulatta). We sampled the orbicularis oris and zygomaticus major muscles from three cadavers of each species and compared proportions of fiber types. Results confirmed our hypothesis: humans had the highest proportion of slow-twitch myosin fibers, while chimpanzees had the highest proportion of fast-twitch fibers. These findings demonstrate that the human face is slower than that of rhesus macaques and our closest living relative, the chimpanzee. They also support the assertion that human facial musculature and speech co-evolved. Further, these results suggest a unique set of evolutionary selective pressures on human facial musculature to slow down while the function of this muscle

  1. Voice-associated static face image releases speech from informational masking.

    Science.gov (United States)

    Gao, Yayue; Cao, Shuyang; Qu, Tianshu; Wu, Xihong; Li, Haifeng; Zhang, Jinsheng; Li, Liang

    2014-06-01

    In noisy, multipeople talking environments such as a cocktail party, listeners can use various perceptual and/or cognitive cues to improve recognition of target speech against masking, particularly informational masking. Previous studies have shown that temporally pre-presented voice cues (voice primes) improve recognition of target speech against speech masking but not noise masking. This study investigated whether static face-image primes that have become target-voice associated (i.e., facial images linked through associative learning with voices reciting the target speech) can be used by listeners to unmask speech. The results showed that in 32 normal-hearing younger adults, temporally pre-presenting a voice-priming sentence with the same voice reciting the target sentence significantly improved the recognition of target speech that was masked by irrelevant two-talker speech. When a person's face photograph became associated with the voice reciting the target speech through learning, temporally pre-presenting the target-voice-associated face image significantly improved recognition of target speech against speech masking, particularly for the last two keywords in the target sentence. Moreover, speech-recognition performance under the voice-priming condition was significantly correlated with that under the face-priming condition. The results suggest that learned facial information on talker identity plays an important role in identifying the target-talker's voice and facilitating selective attention to the target-speech stream against the masking-speech stream. © 2014 The Institute of Psychology, Chinese Academy of Sciences and Wiley Publishing Asia Pty Ltd.

  2. Fourier power spectrum characteristics of face photographs: attractiveness perception depends on low-level image properties.

    Science.gov (United States)

    Menzel, Claudia; Hayn-Leichsenring, Gregor U; Langner, Oliver; Wiese, Holger; Redies, Christoph

    2015-01-01

    We investigated whether low-level processed image properties that are shared by natural scenes and artworks - but not veridical face photographs - affect the perception of facial attractiveness and age. Specifically, we considered the slope of the radially averaged Fourier power spectrum in a log-log plot. This slope is a measure of the distribution of spatial frequency power in an image. Images of natural scenes and artworks possess - compared to face images - a relatively shallow slope (i.e., increased high spatial frequency power). Since aesthetic perception might be based on the efficient processing of images with natural scene statistics, we assumed that the perception of facial attractiveness might also be affected by these properties. We calculated the Fourier slope and other beauty-associated measures in face images and correlated them with ratings of the attractiveness and age of the depicted persons (Study 1). We found that the Fourier slope - in contrast to the other tested image properties - did not predict attractiveness ratings when we controlled for age. In Study 2A, we overlaid face images with random-phase patterns with different statistics. Patterns with a slope similar to those in natural scenes and artworks resulted in lower attractiveness and higher age ratings. In Studies 2B and 2C, we directly manipulated the Fourier slope of face images and found that images with shallower slopes were rated as more attractive. Additionally, the attractiveness of unaltered faces was affected by the Fourier slope of a random-phase background (Study 3). Faces in front of backgrounds with statistics similar to natural scenes and faces were rated as more attractive. We conclude that facial attractiveness ratings are affected by specific image properties. An explanation might be the efficient coding hypothesis.
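The slope measure itself is straightforward to compute: take the 2D power spectrum, average it over annuli of constant spatial frequency, and fit a line in log-log coordinates. The sketch below is a minimal NumPy version (not the authors' exact pipeline), checked on a synthetic image with a 1/f amplitude spectrum, whose log-log power slope should come out close to -2:

```python
import numpy as np

def fourier_slope(img):
    """Slope of the radially averaged Fourier power spectrum in a log-log plot."""
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    power = np.abs(f) ** 2
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2).astype(int)
    # mean power in each integer-radius annulus
    radial = np.bincount(r.ravel(), weights=power.ravel()) / np.bincount(r.ravel())
    freqs = np.arange(1, min(h, w) // 2)            # skip the DC bin
    slope, _intercept = np.polyfit(np.log(freqs), np.log(radial[freqs]), 1)
    return slope

# synthetic image with amplitude spectrum ~ 1/f, i.e. power ~ 1/f^2
rng = np.random.default_rng(1)
h = w = 128
y, x = np.indices((h, w))
r = np.hypot(y - h // 2, x - w // 2)
r[h // 2, w // 2] = 1.0                             # avoid division by zero at DC
spectrum = (1.0 / r) * np.exp(2j * np.pi * rng.random((h, w)))
img = np.fft.ifft2(np.fft.ifftshift(spectrum)).real
slope = fourier_slope(img)
```

A shallower (less negative) slope means relatively more high-spatial-frequency power, the property the study associates with natural scenes and artworks.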

  3. Face recognition via sparse representation of SIFT feature on hexagonal-sampling image

    Science.gov (United States)

    Zhang, Daming; Zhang, Xueyong; Li, Lu; Liu, Huayong

    2018-04-01

    This paper investigates a face recognition approach based on the Scale Invariant Feature Transform (SIFT) and sparse representation. The approach takes advantage of SIFT, which is a local feature rather than the holistic features used in the classical Sparse Representation based Classification (SRC) algorithm, and possesses strong robustness to expression, pose and illumination variations. Since hexagonal images have more inherent merits than square images for making the recognition process efficient, we extract SIFT keypoints from hexagonally sampled images. Instead of matching SIFT features directly, the sparse representation of each SIFT keypoint is first computed according to the constructed dictionary; these sparse vectors are then quantized against the dictionary; finally, each face image is represented by a histogram, and these so-called Bag-of-Words vectors are classified by an SVM. Owing to the use of local features, the proposed method achieves good results even when the number of training samples is small. In the experiments, the proposed method achieved higher face recognition rates than other methods on the ORL and Yale B face databases; the effectiveness of the hexagonal sampling in the proposed method is also verified.
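The bag-of-words step (assigning each keypoint descriptor to a dictionary codeword and histogramming the result) can be sketched as follows. This is a simplified nearest-codeword quantization rather than the paper's sparse-coding variant, and the dictionary and descriptors are random stand-ins:

```python
import numpy as np

def bow_histogram(descriptors, dictionary):
    """Quantize local descriptors to their nearest codeword and build a normalized histogram."""
    # squared Euclidean distance from every descriptor to every codeword
    d2 = ((descriptors[:, None, :] - dictionary[None, :, :]) ** 2).sum(axis=-1)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(dictionary)).astype(float)
    return hist / hist.sum()   # L1-normalize so the keypoint count does not matter

rng = np.random.default_rng(2)
dictionary = rng.standard_normal((50, 128))    # 50 codewords; SIFT descriptors are 128-D
descriptors = rng.standard_normal((300, 128))  # keypoints detected in one face image
h = bow_histogram(descriptors, dictionary)
```

A histogram like `h`, computed per image, is what would then be fed to the SVM classifier.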

  4. Quasi-simultaneous OCT en-face imaging with two different depth resolutions

    International Nuclear Information System (INIS)

    Podoleanu, Adrian Gh; Cucu, Radu G; Rosen, Richard B; Dobre, George M; Rogers, John A; Jackson, David A

    2003-01-01

    We report a system capable of acquiring two quasi-simultaneous en-face optical coherence tomography (OCT) images of different depth resolution (one better than 20 μm and the other between 80 and 330 μm) at a frame rate of 2 Hz. The larger depth resolution image makes it ideal for target positioning in the OCT imaging of moving organs, such as eye fundus and cornea, as well as in the alignment of stacks of en-face OCT images. This role is similar to that of the confocal channel in a previously reported dual channel OCT/confocal imaging instrument. The system presented operates as a dual channel imaging instrument, where both channels operate on the OCT principle. We illustrate the functionality of the system with examples from a coin, skin from a finger and optic nerve in vivo

  5. Patterns of Visual Attention to Faces and Objects in Autism Spectrum Disorder

    Science.gov (United States)

    McPartland, James C.; Webb, Sara Jane; Keehn, Brandon; Dawson, Geraldine

    2011-01-01

    This study used eye-tracking to examine visual attention to faces and objects in adolescents with autism spectrum disorder (ASD) and typical peers. Point of gaze was recorded during passive viewing of images of human faces, inverted human faces, monkey faces, three-dimensional curvilinear objects, and two-dimensional geometric patterns.…

  6. The IMM Frontal Face Database

    DEFF Research Database (Denmark)

    Fagertun, Jens; Stegmann, Mikkel Bille

    2005-01-01

    This note describes a data set consisting of 120 annotated monocular images of 12 different frontal human faces. Points of correspondence are placed on each image so the data set can be readily used for building statistical models of shape. Format specifications and terms of use are also given...

  7. [Decrease in N170 evoked potential component latency during repeated presentation of face images].

    Science.gov (United States)

    Verkhliutov, V M; Ushakov, V L; Strelets, V B

    2009-01-01

    EEG from 28 channels was recorded in 15 healthy volunteers during the presentation of visual stimuli in the form of face and building images. The stimuli were presented in two series. The first series consisted of 60 face and 60 building images presented in random order. The second series, consisting of 30 face and 30 building images, began 1.5-2 min after the end of the first one. No instruction was given to the participants. P1, N170 and VPP EP components were identified for both stimulus categories. These components were localized in the medial parietal area (Brodmann area 40). The P1 and N170 components were recorded in the superior temporal sulcus region (Brodmann area 21, STS), with latencies of 120 ms and 155 ms, respectively. VPP was recorded with a latency of 190 ms (Brodmann area 19). Dynamic mapping of EP components with latencies from 97 to 242 ms revealed the movement of positive maxima from occipital to frontal areas through the temporal areas and their subsequent return to occipital areas through the central ones. Comparison of EP components to face and building images revealed amplitude differences in the following areas: P1 - in frontal, central and anterior temporal areas; N170 - in frontal, central, temporal and parietal areas; VPP - in all areas. It was also found that N170 latency was 12 ms shorter for face than for building images. We propose that this N170 latency decrease for faces compared with building images is connected with the different spatial locations of the fusiform regions responsible for recognition of face and building images. The priming effect revealed during repeated presentation of face images is interpreted as a manifestation of the functional heterogeneity of the fusiform area responsible for face image recognition. The hypothesis is put forward that the parts of extrastriate cortex which are located closer to the central retinotopical

  8. Rating Nasolabial Aesthetics in Unilateral Cleft Lip and Palate Patients: Cropped Versus Full-Face Images.

    Science.gov (United States)

    Schwirtz, Roderic M F; Mulder, Frans J; Mosmuller, David G M; Tan, Robin A; Maal, Thomas J; Prahl, Charlotte; de Vet, Henrica C W; Don Griot, J Peter W

    2018-05-01

    To determine whether cropping facial images affects nasolabial aesthetics assessments in unilateral cleft lip patients, and to evaluate the effect of facial attractiveness on nasolabial evaluation. Two cleft surgeons and one cleft orthodontist assessed standardized frontal photographs 4 times; nasolabial aesthetics were rated on cropped and full-face images using the Cleft Aesthetic Rating Scale (CARS), and total facial attractiveness was rated on full-face images with and without the nasolabial area blurred, using a 5-point Likert scale. Setting: Cleft Palate Craniofacial Unit of a University Medical Center. Inclusion criteria: nonsyndromic unilateral cleft lip and an available frontal view photograph around 10 years of age. Exclusion criteria: a history of facial trauma and an incomplete cleft. Eighty-one photographs were available for assessment. Differences in mean CARS scores between cropped versus full-face photographs and attractive versus unattractive rated patients were evaluated by paired t test. Nasolabial aesthetics are scored more negatively on full-face photographs than on cropped photographs, regardless of facial attractiveness (mean CARS score, nose: cropped = 2.8, full-face = 3.0, P < .001; lip: cropped = 2.4, full-face = 2.7, P < .001; lip: cropped = 2.6, full-face = 2.8, P < .001). Aesthetic outcomes of the nasolabial area are assessed significantly more positively when using cropped images compared to full-face images. For this reason, cropping images to reveal the nasolabial area only is recommended for aesthetic assessments.
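The paired t statistic used for these comparisons is simple to compute by hand: it is the mean of the per-photograph rating differences divided by its standard error. The sketch below uses hypothetical scores (the study rated 81 real photographs), not data from the paper:

```python
import math

def paired_t(a, b):
    """Paired t statistic: mean difference over its standard error."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)   # sample variance of differences
    return mean / math.sqrt(var / n)

# hypothetical CARS nose scores for the same six photographs, rated both ways
full_face = [3.0, 2.9, 3.2, 3.1, 2.8, 3.0]
cropped   = [2.7, 2.8, 2.9, 2.8, 2.6, 2.8]
t = paired_t(full_face, cropped)   # large positive t: full-face scored worse (higher CARS)
```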

  9. The effect of image resolution on the performance of a face recognition system

    NARCIS (Netherlands)

    Boom, B.J.; Beumer, G.M.; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.

    2006-01-01

    In this paper we investigate the effect of image resolution on the error rates of a face verification system. We do not restrict ourselves to the face recognition algorithm only, but we also consider the face registration. In our face recognition system, the face registration is done by finding

  10. The Way Dogs (Canis familiaris) Look at Human Emotional Faces Is Modulated by Oxytocin. An Eye-Tracking Study

    Directory of Open Access Journals (Sweden)

    Anna Kis

    2017-10-01

    Full Text Available Dogs have been shown to excel in reading human social cues, including facial cues. In the present study we used eye-tracking technology to further study dogs' face processing abilities. We found that dogs discriminated between human facial regions in their spontaneous viewing pattern and looked most at the eye region, independently of facial expression. Furthermore, dogs paid the most attention to the first two images presented, after which their attention decreased dramatically; a finding that has methodological implications. Increasing evidence indicates that the oxytocin system is involved in dogs' human-directed social competence, so as a next step we investigated the effects of oxytocin on the processing of human facial emotions. We found that oxytocin decreases dogs' looking at human faces expressing anger. More interestingly, after oxytocin pre-treatment, dogs' preferential gaze toward the eye region when processing happy human facial expressions disappears. These results provide the first evidence that oxytocin is involved in the regulation of human face processing in dogs. The present study is one of the few empirical investigations to explore eye-gaze patterns in naïve and untrained pet dogs using a non-invasive eye-tracking technique, and it thus offers a unique but largely untapped method for studying social cognition in dogs.

  11. Imaging human brain cyto- and myelo-architecture with quantitative OCT (Conference Presentation)

    Science.gov (United States)

    Boas, David A.; Wang, Hui; Konukoglu, Ender; Fischl, Bruce; Sakadzic, Sava; Magnain, Caroline V.

    2017-02-01

    No current imaging technology allows us to directly and without significant distortion visualize the microscopic and defining anatomical features of the human brain. Ex vivo histological techniques can yield exquisite planar images, but the cutting, mounting and staining that are required components of this type of imaging induce distortions that are different for each slice, introducing cross-slice differences that prohibit true 3D analysis. We are overcoming this issue by utilizing Optical Coherence Tomography (OCT) with the goal to image whole human brain cytoarchitectural and laminar properties with potentially 3.5 µm resolution in block-face without the need for exogenous staining. From the intrinsic scattering contrast of the brain tissue, OCT gives us images that are comparable to Nissl stains, but without the distortions introduced in standard histology as the OCT images are acquired from the block face prior to slicing and thus without the need for subsequent staining and mounting. We have shown that laminar and cytoarchitectural properties of the brain can be characterized with OCT just as well as with Nissl staining. We will present our recent advances to improve the axial resolution while maintaining contrast; improvements afforded by speckle reduction procedures; and efforts to obtain quantitative maps of the optical scattering coefficient, an intrinsic property of the tissue.

  12. In vivo imaging through the entire thickness of human cornea by full-field optical coherence tomography

    Science.gov (United States)

    Mazlin, Viacheslav; Xiao, Peng; Dalimier, Eugénie; Grieve, Kate; Irsch, Kristina; Sahel, José; Fink, Mathias; Boccara, Claude

    2018-02-01

    Despite obvious improvements in visualization of the in vivo cornea through faster imaging speeds and higher axial resolutions, cellular imaging remains an unresolved task for OCT, as en face viewing with high lateral resolution is required. The latter is possible with FFOCT, a method that relies on a camera, moderate numerical aperture (NA) objectives and an incoherent light source to provide en face images with micrometer-level resolution. Recently, we demonstrated for the first time the ability of FFOCT to capture images from the in vivo human cornea1. In the current paper we present an extensive study of the appearance of healthy in vivo human corneas under FFOCT examination. En face corneal images with micrometer-level resolution were obtained from three healthy subjects. For each subject it was possible to acquire images through the entire corneal depth and to visualize the epithelium structures, Bowman's layer, sub-basal nerve plexus (SNP) fibers, the anterior, middle and posterior stroma, and endothelial cells with nuclei. The dimensions and densities of the structures visible with FFOCT are in agreement with those seen by other cornea imaging methods. The cellular-level details in the images obtained, together with the relatively large field of view (FOV) and contactless imaging, make this device a promising candidate for becoming a new tool in ophthalmological diagnostics.

  13. Cross-modal face recognition using multi-matcher face scores

    Science.gov (United States)

    Zheng, Yufeng; Blasch, Erik

    2015-05-01

    The performance of face recognition can be improved using information fusion of multimodal images and/or multiple algorithms. When multimodal face images are available, cross-modal recognition is meaningful for security and surveillance applications. For example, a probe face may be a thermal image (especially at nighttime), while only visible face images are available in the gallery database. Matching a thermal probe face onto the visible gallery faces requires cross-modal matching approaches. A few such studies were implemented in facial feature space with medium recognition performance. In this paper, we propose a cross-modal recognition approach, where multimodal faces are cross-matched in feature space and the recognition performance is enhanced with stereo fusion at the image, feature and/or score level. In the proposed scenario, there are two cameras for stereo imaging, two face imagers (visible and thermal) in each camera, and three recognition algorithms (circular Gaussian filter, face pattern byte, linear discriminant analysis). A score vector is formed with the three cross-matched face scores from the aforementioned algorithms. A classifier (e.g., k-nearest neighbor, support vector machine, binomial logistic regression [BLR]) is trained and then tested with the score vectors using 10-fold cross validation. The proposed approach was validated with a multispectral stereo face dataset from 105 subjects. Our experiments show very promising results: ACR (accuracy rate) = 97.84% and FAR (false accept rate) = 0.84% when cross-matching the fused thermal faces onto the fused visible faces using three face scores and the BLR classifier.
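As a toy illustration of the score-level idea, the sketch below classifies three-matcher score vectors with a k-nearest-neighbor rule (one of the classifiers the paper tests, alongside SVM and BLR). The score distributions here are invented for the example:

```python
import numpy as np

def knn_predict(train_x, train_y, query, k=3):
    """Majority vote among the k nearest training score vectors."""
    dists = np.linalg.norm(train_x - query, axis=1)   # Euclidean distance
    nearest = train_y[np.argsort(dists)[:k]]
    return np.bincount(nearest).argmax()

rng = np.random.default_rng(3)
# hypothetical score vectors: one score per matcher, higher = better match
genuine = rng.normal(0.8, 0.05, (20, 3))    # cross-matched scores for true identities
impostor = rng.normal(0.3, 0.05, (20, 3))   # cross-matched scores for wrong identities
train_x = np.vstack([genuine, impostor])
train_y = np.array([1] * 20 + [0] * 20)

label = knn_predict(train_x, train_y, np.array([0.75, 0.85, 0.80]))
```

A probe whose three scores fall near the genuine cluster is labeled 1; in the paper this decision would instead come from the trained BLR model under 10-fold cross validation.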

  14. Face cognition in humans: Psychophysiological, developmental, and cross-cultural aspects

    OpenAIRE

    Chernorizov A. M.; Zhong-qing J.; Petrakova A. V.; Zinchenko Yu. P.

    2016-01-01

    Investigators are finding increasing evidence for cross-cultural specificity in face cognition, along with individual characteristics. The functions on which face cognition is based are not only general cognitive functions (perception, memory) but also elements of specific mental processes. Face perception, memorization, correct recognition of faces, and understanding the information that faces provide are essential skills for humans as a social species and can be considered as facets ...

  15. Human Face Recognition Using Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Răzvan-Daniel Albu

    2009-10-01

    Full Text Available In this paper, I present a novel hybrid face recognition approach based on a convolutional neural architecture, designed to robustly detect highly variable face patterns. The convolutional network extracts successively larger features in a hierarchical set of layers. The weights of the trained neural networks are used to create kernel windows for feature extraction in a 3-stage algorithm. I present experimental results illustrating the efficiency of the proposed approach, using a database of 796 images of 159 individuals from Reims University, which contains quite a high degree of variability in expression, pose, and facial details.

  16. Neural representation of face familiarity in an awake chimpanzee

    Directory of Open Access Journals (Sweden)

    Hirokata Fukushima

    2013-12-01

    Full Text Available Evaluating the familiarity of faces is critical for social animals as it is the basis of individual recognition. In the present study, we examined how face familiarity is reflected in neural activities in our closest living relative, the chimpanzee. Skin-surface event-related brain potentials (ERPs were measured while a fully awake chimpanzee observed photographs of familiar and unfamiliar chimpanzee faces (Experiment 1 and human faces (Experiment 2. The ERPs evoked by chimpanzee faces differentiated unfamiliar individuals from familiar ones around midline areas centered on vertex sites at approximately 200 ms after the stimulus onset. In addition, the ERP response to the image of the subject’s own face did not significantly diverge from those evoked by familiar chimpanzees, suggesting that the subject’s brain at a minimum remembered the image of her own face. The ERPs evoked by human faces were not influenced by the familiarity of target individuals. These results indicate that chimpanzee neural representations are more sensitive to the familiarity of conspecific than allospecific faces.

  17. A new viewpoint on the evolution of sexually dimorphic human faces.

    Science.gov (United States)

    Burke, Darren; Sulikowski, Danielle

    2010-10-21

    Human faces show marked sexual shape dimorphism, and this affects their attractiveness. Humans also show marked height dimorphism, which means that men typically view women's faces from slightly above and women typically view men's faces from slightly below. We tested the idea that this perspective difference may be the evolutionary origin of the face shape dimorphism by having males and females rate the masculinity/femininity and attractiveness of male and female faces that had been manipulated in pitch (forward or backward tilt), simulating viewing the face from slightly above or below. As predicted, tilting female faces upwards decreased their perceived femininity and attractiveness, whereas tilting them downwards increased their perceived femininity and attractiveness. Male faces tilted up were judged to be more masculine, and tilted down judged to be less masculine. This suggests that sexual selection may have embodied this viewpoint difference into the actual facial proportions of men and women.

  18. Face recognition based on symmetrical virtual image and original training image

    Science.gov (United States)

    Ke, Jingcheng; Peng, Yali; Liu, Shigang; Li, Jun; Pei, Zhao

    2018-02-01

    In face representation-based classification methods, a high recognition rate can be obtained if a face has enough available training samples. In practical applications, however, only limited training samples are available. To obtain enough training samples, many methods simultaneously use the original training samples and corresponding virtual samples to strengthen their ability to represent the test sample. One approach directly uses the original training samples and the corresponding mirror samples to recognize the test sample. However, when the test sample is nearly symmetrical while the original training samples are not, the combination of the original training and mirror samples might not represent the test sample well. To tackle this problem, we propose a novel method that generates a new kind of virtual sample by averaging each original training sample with its mirror sample. The original training samples and the virtual samples are then integrated to recognize the test sample. Experimental results on five face databases show that the proposed method is able to partly overcome the challenges posed by the various poses, facial expressions and illuminations of the original face images.
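The virtual-sample generation itself is essentially a one-liner: flip the image left-right and average it with the original, which yields a left-right symmetric image. A minimal NumPy sketch, where a tiny 4x4 array stands in for a face image:

```python
import numpy as np

def virtual_sample(face):
    """Average a face image with its horizontal mirror; the result is left-right symmetric."""
    mirror = face[:, ::-1]
    return ((face.astype(float) + mirror.astype(float)) / 2.0).astype(face.dtype)

face = np.arange(16, dtype=np.uint8).reshape(4, 4)   # stand-in for a face image
virtual = virtual_sample(face)
# the enlarged training set would then contain both `face` and `virtual`
```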

  19. Research of Face Recognition with Fisher Linear Discriminant

    Science.gov (United States)

    Rahim, R.; Afriliansyah, T.; Winata, H.; Nofriansyah, D.; Ratnadewi; Aryza, S.

    2018-01-01

    Face identification systems are developing rapidly, and these developments drive the advancement of biometric identification systems with high accuracy. However, building a face recognition system that is both robust and highly accurate is difficult, because human faces have diverse expressions and attribute changes such as eyeglasses, mustache and beard. Fisher Linear Discriminant (FLD) is a class-specific method that separates facial images into classes by maximizing the distance between classes while minimizing the scatter within each class, so as to produce better classification.
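
As a reminder of what FLD computes, here is a minimal two-class Fisher discriminant for 2-D points in plain Python (our sketch; the paper applies FLD to high-dimensional face vectors). The discriminant direction is w = Sw^-1 (m1 - m2), where Sw is the pooled within-class scatter:

```python
# Two-class Fisher Linear Discriminant in 2-D (illustrative only).
def mean(xs):
    n = len(xs)
    return [sum(v[i] for v in xs) / n for i in range(len(xs[0]))]

def scatter(xs, m):
    """2x2 within-class scatter matrix contribution of one class."""
    s = [[0.0, 0.0], [0.0, 0.0]]
    for v in xs:
        d = [v[0] - m[0], v[1] - m[1]]
        for i in range(2):
            for j in range(2):
                s[i][j] += d[i] * d[j]
    return s

def fld_direction(c1, c2):
    """w = Sw^-1 (m1 - m2), inverting the 2x2 pooled scatter directly."""
    m1, m2 = mean(c1), mean(c2)
    s1, s2 = scatter(c1, m1), scatter(c2, m2)
    sw = [[s1[i][j] + s2[i][j] for j in range(2)] for i in range(2)]
    det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
    inv = [[sw[1][1] / det, -sw[0][1] / det],
           [-sw[1][0] / det, sw[0][0] / det]]
    dm = [m1[0] - m2[0], m1[1] - m2[1]]
    return [inv[0][0] * dm[0] + inv[0][1] * dm[1],
            inv[1][0] * dm[0] + inv[1][1] * dm[1]]

c1 = [(1, 0), (3, 0), (2, 1), (2, -1)]   # toy class 1
c2 = [(5, 2), (7, 2), (6, 3), (6, 1)]    # toy class 2
w = fld_direction(c1, c2)                # projections onto w separate the classes
```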

  20. An embedded face-classification system for infrared images on an FPGA

    Science.gov (United States)

    Soto, Javier E.; Figueroa, Miguel

    2014-10-01

    We present a face-classification architecture for long-wave infrared (IR) images implemented on a Field Programmable Gate Array (FPGA). The circuit is fast, compact and low power; it can recognize faces in real time and be embedded in a larger image-processing and computer vision system operating locally on an IR camera. The algorithm uses Local Binary Patterns (LBP) to perform feature extraction on each IR image. First, each pixel in the image is represented as an LBP pattern that encodes the similarity between the pixel and its neighbors. Uniform LBP codes are then used to reduce the number of patterns to 59 while preserving more than 90% of the information contained in the original LBP representation. Then, the image is divided into 64 non-overlapping regions, and each region is represented as a 59-bin histogram of patterns. Finally, the algorithm concatenates all 64 regions to create a 3,776-bin spatially enhanced histogram. We reduce the dimensionality of this histogram using Linear Discriminant Analysis (LDA), which improves clustering and enables us to store an entire database of 53 subjects on-chip. During classification, the circuit applies LBP and LDA to each incoming IR image in real time, and compares the resulting feature vector to each pattern stored in the local database using the Manhattan distance. We implemented the circuit on a Xilinx Artix-7 XC7A100T FPGA and tested it with the UCHThermalFace database, which consists of 28 images (81 x 150 pixels) of each of 53 subjects in indoor and outdoor conditions. The circuit achieves a 98.6% hit ratio, trained with 16 images and tested with 12 images of each subject in the database. Using a 100 MHz clock, the circuit classifies 8,230 images per second, and consumes only 309 mW.
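
The LBP and uniform-code steps described above can be sketched in a few lines of Python (an illustration of the standard operators, not the FPGA circuit). Exactly 58 of the 256 possible 8-bit codes are uniform, which, with one shared bin for all non-uniform codes, gives the 59 bins mentioned above:

```python
# Sketch of the LBP feature step: each pixel becomes an 8-bit code comparing
# it with its 8 neighbours; "uniform" codes have at most two 0/1 transitions
# in the circular bit pattern.
def lbp_code(img, r, c):
    """8-neighbour LBP code of pixel (r, c); neighbours read clockwise."""
    center = img[r][c]
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for dr, dc in offs:
        code = (code << 1) | (1 if img[r + dr][c + dc] >= center else 0)
    return code

def is_uniform(code):
    """True if the circular bit pattern has at most two 0<->1 transitions."""
    bits = [(code >> i) & 1 for i in range(8)]
    return sum(bits[i] != bits[(i + 1) % 8] for i in range(8)) <= 2

img = [[9, 9, 9],
       [1, 5, 9],
       [1, 1, 1]]
code = lbp_code(img, 1, 1)   # 0b11110000 == 240
```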

  1. Face Recognition for Access Control Systems Combining Image-Difference Features Based on a Probabilistic Model

    Science.gov (United States)

    Miwa, Shotaro; Kage, Hiroshi; Hirai, Takashi; Sumi, Kazuhiko

    We propose a probabilistic face recognition algorithm for Access Control Systems (ACSs). Compared with existing ACSs that use low-cost IC cards, face recognition has advantages in usability and security: it does not require people to hold cards over scanners, and it does not accept impostors carrying authorized cards. Face recognition therefore attracts more interest in security markets than IC cards. But in security markets where low-cost ACSs exist, price competition matters, and there are limits on the quality of available cameras and image control. ACSs using face recognition are therefore required to handle much lower-quality images, such as defocused and poorly gain-controlled images, than high-security systems such as immigration control. To tackle these image quality problems, we developed a face recognition algorithm based on a probabilistic model that combines a variety of image-difference features, trained by Real AdaBoost, with their prior probability distributions. It evaluates and uses only the reliable features among the trained ones during each authentication, and achieves high recognition rates. A field evaluation using a pseudo Access Control System installed in our office shows that the proposed system achieves a consistently high recognition rate independent of face image quality: about four times lower EER (Equal Error Rate) under a variety of image conditions than the same system without prior probability distributions. In contrast, image-difference features without prior probabilities are sensitive to image quality. We also evaluated PCA, which performs worse but consistently, because of its general optimization over all the data. Compared with PCA, Real AdaBoost without prior distributions performs twice as well under good image conditions, but degrades to PCA-level performance under poor image conditions.
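
The idea of using only reliable features during an authentication can be loosely sketched as follows (a much-simplified illustration with invented numbers; the actual system combines Real AdaBoost feature outputs with learned prior probability distributions rather than a fixed threshold):

```python
# Loose sketch of reliability gating: a feature contributes its match score
# only if its prior reliability under the current imaging conditions is high
# enough; contributions are weighted by that reliability.
def combined_score(scores, reliabilities, min_rel=0.5):
    used = [(s, r) for s, r in zip(scores, reliabilities) if r >= min_rel]
    if not used:
        return 0.0
    return sum(s * r for s, r in used) / sum(r for _, r in used)

scores = [0.9, 0.2, 0.8]   # per-feature match scores (invented)
reliab = [0.9, 0.1, 0.6]   # prior reliability under current conditions (invented)
s = combined_score(scores, reliab)   # the unreliable middle feature is ignored
```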

  2. A New Viewpoint on the Evolution of Sexually Dimorphic Human Faces

    Directory of Open Access Journals (Sweden)

    Darren Burke

    2010-10-01

    Human faces show marked sexual shape dimorphism, and this affects their attractiveness. Humans also show marked height dimorphism, which means that men typically view women's faces from slightly above and women typically view men's faces from slightly below. We tested the idea that this perspective difference may be the evolutionary origin of the face shape dimorphism by having males and females rate the masculinity/femininity and attractiveness of male and female faces that had been manipulated in pitch (forward or backward tilt), simulating viewing the face from slightly above or below. As predicted, tilting female faces upwards decreased their perceived femininity and attractiveness, whereas tilting them downwards increased their perceived femininity and attractiveness. Male faces tilted up were judged to be more masculine, and tilted down judged to be less masculine. This suggests that sexual selection may have embodied this viewpoint difference into the actual facial proportions of men and women.

  3. A special purpose knowledge-based face localization method

    Science.gov (United States)

    Hassanat, Ahmad; Jassim, Sabah

    2008-04-01

    This paper is concerned with face localization for a visual speech recognition (VSR) system. Face detection and localization have received a great deal of attention in the last few years, because they are an essential pre-processing step in many techniques that deal with faces (e.g. age, face, gender, race and visual speech recognition). We present an efficient method for localizing human faces in video images captured on constrained mobile devices, under wide variation in lighting conditions. We use a multiphase method that may include all or some of the following steps: image pre-processing, followed by special-purpose edge detection, then an image refinement step. The output image is passed through a discrete wavelet decomposition procedure, and the computed LL sub-band at a certain level is transformed into a binary image that is scanned using a special template to select a number of candidate locations. Finally, we fuse the scores from the wavelet step with scores determined by color information for the candidate locations, and employ a form of fuzzy logic to distinguish face from non-face locations. We present results of a large number of experiments to demonstrate that the proposed face localization method is efficient and achieves a high level of accuracy, outperforming existing general-purpose face detection methods.
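
Two steps of this pipeline, the LL sub-band and the binarization, can be approximated in a few lines of Python (our sketch: a one-level Haar-style approximation via 2x2 block averages, with an illustrative fixed threshold; the paper decomposes to a deeper level and derives its threshold differently):

```python
# One-level approximation (LL) sub-band as 2x2 block means, then a binary map.
def ll_subband(img):
    """Half-resolution approximation image: mean of each 2x2 block."""
    return [[(img[r][c] + img[r][c + 1] + img[r + 1][c] + img[r + 1][c + 1]) / 4
             for c in range(0, len(img[0]), 2)]
            for r in range(0, len(img), 2)]

def binarise(img, thresh):
    return [[1 if p >= thresh else 0 for p in row] for row in img]

img = [[10, 10, 200, 200],
       [10, 10, 200, 200],
       [50, 50, 60, 60],
       [50, 50, 60, 60]]
ll = ll_subband(img)      # [[10.0, 200.0], [50.0, 60.0]]
mask = binarise(ll, 55)   # [[0, 1], [0, 1]]
```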

  4. Face Recognition using Approximate Arithmetic

    DEFF Research Database (Denmark)

    Marso, Karol

    Face recognition is an image processing technique which aims to identify human faces, and it has found use in various fields, for example in security. Throughout the years this field has evolved, and there are many approaches and many different algorithms which aim to make face recognition as effective...... processing applications the results do not need to be completely precise, and use of approximate arithmetic can lead to reductions in delay, space and power consumption. In this paper we examine the possible use of approximate arithmetic in face recognition using the Eigenfaces algorithm....
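
For reference, the core of the Eigenfaces algorithm is a principal-component decomposition of the centred training faces. The pure-Python sketch below finds the leading eigenface by power iteration on a toy 2-pixel example, using exact arithmetic (the paper studies what happens when such multiply-accumulate arithmetic is replaced by approximate units):

```python
# Toy eigenfaces sketch: centre the training faces (flat vectors), then find
# the leading eigenvector of the covariance by power iteration.
def eigenface_axis(faces, iters=200):
    n, d = len(faces), len(faces[0])
    mean = [sum(f[i] for f in faces) / n for i in range(d)]
    centred = [[f[i] - mean[i] for i in range(d)] for f in faces]
    v = [1.0] * d
    for _ in range(iters):
        # w = C v with C = sum_f f f^T, applied without forming C explicitly
        w = [0.0] * d
        for f in centred:
            dot = sum(f[i] * v[i] for i in range(d))
            for i in range(d):
                w[i] += dot * f[i]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return mean, v

faces = [[0, 0], [2, 0], [4, 0], [6, 0]]   # all variance along the first pixel
mean, axis = eigenface_axis(faces)         # axis -> [1.0, 0.0]
```

Projections onto the leading axes give the low-dimensional face codes that are then compared for recognition.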

  5. Unified Probabilistic Models for Face Recognition from a Single Example Image per Person

    Institute of Scientific and Technical Information of China (English)

    Pin Liao; Li Shen

    2004-01-01

    This paper presents a new technique of unified probabilistic models for face recognition from only a single example image per person. The unified models, trained on a training set with multiple samples per person, are used to recognize facial images from another, disjoint database with a single sample per person. Variations between facial images are modeled as two unified probabilistic models: within-class variations and between-class variations. Gaussian Mixture Models are used to approximate the distributions of the two variations, and a classifier combination method is exploited to improve performance. Extensive experimental results on the ORL face database and the authors' database (the ICT-JDL database), together comprising 1,750 facial images of 350 individuals, demonstrate that the proposed technique, compared with the traditional eigenface method and some other well-known algorithms, is a significantly more effective and robust approach to face recognition.
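
The within-class/between-class idea can be illustrated with a deliberately simplified model: fit one Gaussian to the norms of within-class (same person) difference vectors and one to between-class ones, then classify a new difference by likelihood. The paper models full difference vectors with Gaussian Mixture Models; the numbers below are invented:

```python
# Simplified within-/between-class difference model with 1-D Gaussians.
import math

def fit_gauss(xs):
    mu = sum(xs) / len(xs)
    var = sum((x - mu) ** 2 for x in xs) / len(xs)
    return mu, var

def loglik(x, mu, var):
    return -0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)

within = [0.8, 1.0, 1.2, 1.0]    # difference norms, images of the same person
between = [4.0, 5.0, 6.0, 5.0]   # difference norms, images of different people

wm, wv = fit_gauss(within)
bm, bv = fit_gauss(between)

def same_person(diff_norm):
    """True if the within-class model explains the difference better."""
    return loglik(diff_norm, wm, wv) > loglik(diff_norm, bm, bv)
```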

  6. Comparison of different methods for gender estimation from face image of various poses

    Science.gov (United States)

    Ishii, Yohei; Hongo, Hitoshi; Niwa, Yoshinori; Yamamoto, Kazuhiko

    2003-04-01

    Recently, gender estimation from face images has been studied mainly for frontal facial images. However, it is difficult to obtain such facial images consistently in application systems for security, surveillance and marketing research. To build such systems, a method is required that estimates gender from images of various facial poses. In this paper, three different classifiers using four directional features (FDF) are compared for appearance-based gender estimation: linear discriminant analysis (LDA), Support Vector Machines (SVMs) and Sparse Network of Winnows (SNoW). Face images used for the experiments were obtained from 35 viewpoints, with the direction of the viewpoint varying +/-45 degrees horizontally and +/-30 degrees vertically at 15-degree intervals. Although LDA showed the best performance for frontal facial images, the SVM with a Gaussian kernel showed the best performance (86.0%) over the facial images of all 35 viewpoints. These results suggest that the SVM with a Gaussian kernel is robust to changes in viewpoint when estimating gender. Furthermore, its estimation rate at each viewpoint was quite close to the average estimation rate over the 35 viewpoints. This suggests that the methods can reasonably estimate gender within the range of viewpoints tested by learning face images from multiple directions as one class.

  7. Real-Time Acquisition of High Quality Face Sequences from an Active Pan-Tilt-Zoom Camera

    DEFF Research Database (Denmark)

    Haque, Mohammad A.; Nasrollahi, Kamal; Moeslund, Thomas B.

    2013-01-01

    Traditional still camera-based facial image acquisition systems in surveillance applications produce low quality face images. This is mainly due to the distance between the camera and subjects of interest. Furthermore, people in such videos usually move around, change their head poses, and facial......-based real-time high-quality face image acquisition system, which utilizes pan-tilt-zoom parameters of a camera to focus on a human face in a scene and employs a face quality assessment method to log the best quality faces from the captured frames. The system consists of four modules: face detection, camera control, face tracking, and face quality assessment before logging. Experimental results show that the proposed system can effectively log the high quality faces from the active camera in real-time (an average of 61.74 ms was spent per frame) with an accuracy of 85.27% compared to human annotated data.

  8. Development of Human Face Literature Database Using Text Mining Approach: Phase I.

    Science.gov (United States)

    Kaur, Paramjit; Krishan, Kewal; Sharma, Suresh K

    2018-06-01

    The face is an important part of the human body, through which an individual communicates in society. Its importance can be highlighted by the fact that a person deprived of a face cannot sustain themselves in the living world. The number of experiments performed and research papers published in the domain of the human face has surged in the past few decades. Several scientific disciplines conduct research on the human face, including Medical Science, Anthropology, Information Technology (Biometrics, Robotics, Artificial Intelligence, etc.), Psychology, Forensic Science and Neuroscience. This highlights the need to collect and manage data concerning the human face so that public, free access to it can be provided to the scientific community. This can be attained by developing databases and tools on the human face using a bioinformatics approach. The current research emphasizes creating a database of the literature on the human face. The database can be accessed on the basis of specific keywords, journal name, date of publication, author's name, etc., and the collected research papers are stored in the database. Hence, the database will be beneficial to the research community, as comprehensive information dedicated to the human face can be found in one place. Information related to facial morphologic features, facial disorders, facial asymmetry, facial abnormalities and many other parameters can be extracted from this database. The front end has been developed using Hyper Text Markup Language and Cascading Style Sheets, and the back end using the hypertext preprocessor (PHP), with JavaScript as the scripting language. MySQL (Structured Query Language) is used for database development, as it is the most widely used Relational Database Management System. XAMPP (X (cross platform), Apache, MySQL, PHP, Perl) open source web application software has been used as the server. The database is still under the

  9. Anterior temporal face patches: A meta-analysis and empirical study

    Directory of Open Access Journals (Sweden)

    Rebecca J. Von Der Heide

    2013-02-01

    Studies of nonhuman primates have reported face-sensitive patches in the ventral anterior temporal lobes (ATL). In humans, ATL resection or damage causes an associative prosopagnosia in which face perception is intact but face memory is compromised. Some fMRI studies have extended these findings using famous and familiar faces. However, it is unclear whether these regions in the human ATL are in locations comparable to those reported in non-human primates, which typically used unfamiliar faces. We present the results of two studies of person memory: a meta-analysis of existing fMRI studies and an empirical fMRI study using optimized imaging parameters. Both studies showed left-lateralized ATL activations to familiar individuals, while novel faces activated the right ATL. Activations to famous faces were quite ventral, similar to what has been reported in monkeys. These findings suggest that face memory-sensitive patches in the human ATL are in the ventral/polar ATL.

  10. Dynamic analysis of mental sweating of eccrine sweat gland of human fingertip by time-sequential piled-up en face optical coherence tomography images.

    Science.gov (United States)

    Ohmi, Masato; Wada, Yuki

    2016-08-01

    In this paper, we demonstrate dynamic analysis of the mental sweating of a few tens of eccrine sweat glands in response to sound stimuli, using time-sequential piled-up en face optical coherence tomography (OCT) images with a frame spacing of 3.3 sec. In the experiment, the amount of excess sweat can be evaluated simultaneously for a few tens of sweat glands by piling up all the en face OCT images. Non-uniformity was observed in mental sweating: the amount of sweat in response to a sound stimulus differs for each sweat gland. Furthermore, the amount of sweat increased significantly in proportion to the strength of the stimulus.

  11. The N170 component is sensitive to face-like stimuli: a study of Chinese Peking opera makeup.

    Science.gov (United States)

    Liu, Tiantian; Mu, Shoukuan; He, Huamin; Zhang, Lingcong; Fan, Cong; Ren, Jie; Zhang, Mingming; He, Weiqi; Luo, Wenbo

    2016-12-01

    The N170 component is considered a neural marker of face-sensitive processing. In the present study, the face-sensitive N170 component of event-related potentials (ERPs) was investigated with a modified oddball paradigm using a natural face (the standard stimulus), human- and animal-like makeup stimuli, scrambled control images that mixed human- and animal-like makeup pieces, and a grey control image. Nineteen participants were instructed to respond within 1000 ms by pressing the 'F' or 'J' key in response to the standard or deviant stimuli, respectively. We simultaneously recorded ERPs, response accuracy, and reaction times. The behavioral results showed that the main effect of stimulus type was significant for reaction time, whereas there were no significant differences in response accuracy among stimulus types. In the ERPs, N170 amplitudes elicited by human-like makeup stimuli, animal-like makeup stimuli, scrambled control images, and the grey control image progressively decreased. A right-hemisphere advantage was observed in the N170 amplitudes for human-like makeup stimuli, animal-like makeup stimuli, and scrambled control images, but not for the grey control image. These results indicate that the N170 component is sensitive to face-like stimuli and reflects configural processing in face recognition.

  12. High precision automated face localization in thermal images: oral cancer dataset as test case

    Science.gov (United States)

    Chakraborty, M.; Raman, S. K.; Mukhopadhyay, S.; Patsa, S.; Anjum, N.; Ray, J. G.

    2017-02-01

    Automated face detection is the pivotal step in computer-vision-aided facial medical diagnosis and biometrics. This paper presents an automatic, subject-adaptive framework for accurate face detection in the long-infrared spectrum on our database for oral cancer detection, consisting of malignant, precancerous and normal subjects of varied age groups. Previous work on oral cancer detection using Digital Infrared Thermal Imaging (DITI) reveals that patients and normal subjects differ significantly in their facial thermal distribution. It is therefore a challenging task to formulate a completely adaptive framework to accurately localize the face in such a subject-specific modality. Our model first extracts the most probable facial regions by minimum error thresholding, followed by adaptive methods that leverage the horizontal and vertical projections of the segmented thermal image. Additionally, the model incorporates domain knowledge by exploiting the temperature difference between strategic locations of the face. To the best of our knowledge, this is the pioneering work on detecting faces in thermal facial images comprising both patients and normal subjects. Previous work on face detection has not specifically targeted automated medical diagnosis; the face bounding boxes returned by those algorithms are thus loose and not apt for further medical automation. Our algorithm significantly outperforms contemporary face detection algorithms in terms of the metrics commonly used to evaluate face detection accuracy. Since our method has been tested on a challenging dataset consisting of both patients and normal subjects of diverse age groups, it can be seamlessly adopted in any DITI-guided facial healthcare or biometric application.
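
The projection step used above is straightforward: row and column sums of the thresholded image, whose peaks bound the facial region. A minimal Python sketch (ours, with a toy binary mask):

```python
# Horizontal and vertical projections of a binary (thresholded) image.
def projections(binary):
    horizontal = [sum(row) for row in binary]       # one value per row
    vertical = [sum(row[c] for row in binary)
                for c in range(len(binary[0]))]     # one value per column
    return horizontal, vertical

mask = [[0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
h, v = projections(mask)   # h = [2, 2, 0], v = [0, 2, 2, 0]
```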

  13. Face recognition: database acquisition, hybrid algorithms, and human studies

    Science.gov (United States)

    Gutta, Srinivas; Huang, Jeffrey R.; Singh, Dig; Wechsler, Harry

    1997-02-01

    One of the most important technologies absent in traditional and emerging frontiers of computing is the management of visual information. Faces are accessible `windows' into the mechanisms that govern our emotional and social lives. The face recognition tasks considered herein include: (1) surveillance, (2) CBIR, and (3) CBIR subject to correct ID (`match') displaying specific facial landmarks, such as wearing glasses. We developed robust matching (`classification') and retrieval schemes based on hybrid classifiers and showed their feasibility using the FERET database. The hybrid classifier architecture consists of an ensemble of connectionist networks (radial basis functions) and decision trees. The specific characteristics of our hybrid architecture include (a) query by consensus, as provided by ensembles of networks, for coping with the inherent variability of the image formation and data acquisition process, and (b) flexible and adaptive thresholds, as opposed to ad hoc and hard thresholds. Experimental results, proving the feasibility of our approach, yield (i) 96% accuracy, using cross validation (CV), for surveillance on a database of 904 images, (ii) 97% accuracy for CBIR tasks on a database of 1084 images, and (iii) 93% accuracy, using CV, for CBIR subject to correct ID match tasks on a database of 200 images.

  14. Imaging of the face. L'imagerie actuelle du massif facial

    Energy Technology Data Exchange (ETDEWEB)

    Bourjat, P.; Veillon, F.

    In this report, the authors evaluate the indications of the different imaging techniques of the face and the adjacent deep and superficial regions. CT remains the first-line examination of the paranasal sinuses, complemented by MRI, especially when an inflammatory pathology is associated with a benign or malignant tumor. Facial trauma should be investigated by CT, with an emphasis on frontal and sagittal reformatted sections. The superficial areas of the face (parotid gland) are best explored by US and MRI. MRI gives better results than CT in the exploration of the deep regions of the face. Arteriography remains obligatory in the study of certain tumors, especially nasopharyngeal angiofibroma.

  15. Fraudulent ID using face morphs: Experiments on human and automatic recognition.

    Science.gov (United States)

    Robertson, David J; Kramer, Robin S S; Burton, A Mike

    2017-01-01

    Matching unfamiliar faces is known to be difficult, and this can give an opportunity to those engaged in identity fraud. Here we examine a relatively new form of fraud: the use of photo-ID containing a graphical morph between two faces. Such a document may look sufficiently like two people to serve as ID for both. We present two experiments with human viewers, and a third with a smartphone face recognition system. In Experiment 1, viewers were asked to match pairs of faces, without being warned that one of the pair could be a morph. They very commonly accepted a morphed face as a match. However, in Experiment 2, following very short training on morph detection, their acceptance rate fell considerably. Nevertheless, there remained large individual differences in people's ability to detect a morph. In Experiment 3 we show that a smartphone makes errors at a similar rate to 'trained' human viewers, i.e. it accepts a small number of morphs as genuine ID. We discuss these results in reference to the use of face photos for security.
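
At its crudest, a 50/50 morph of two aligned face images is an alpha blend of their pixels, sketched below in Python (a toy example of ours; real morphing software also warps facial landmarks into correspondence before blending):

```python
# Alpha blend of two aligned images: alpha = 0.5 gives a 50/50 "morph".
def blend(a, b, alpha=0.5):
    return [[(1 - alpha) * pa + alpha * pb for pa, pb in zip(ra, rb)]
            for ra, rb in zip(a, b)]

face_a = [[100, 100], [100, 100]]
face_b = [[200, 200], [200, 200]]
morph = blend(face_a, face_b)   # every pixel is 150.0
```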

  16. Direct imaging of haloes and truncations in face-on nearby galaxies

    NARCIS (Netherlands)

    Knapen, J. H.; Peters, S. P. C.; van der Kruit, P. C.; Trujillo, I.; Fliri, J.; Cisternas, M.; Kelvin, L. S.; Bragaglia, A.; Arnaboldi, M.; Rejkuba, M.; Romano, D.

    2016-01-01

    We use ultra-deep imaging from the IAC Stripe 82 Legacy Project to study the surface photometry of 22 nearby, face-on to moderately inclined spiral galaxies. The reprocessed and co-added SDSS/Stripe 82 imaging allows us to probe down to 29-30 r'-mag/arcsec2 and thus reach into the very faint

  17. Faces in the Mist: Illusory Face and Letter Detection

    Directory of Open Access Journals (Sweden)

    Cory A. Rieth

    2011-06-01

    We report three behavioral experiments on the spatial characteristics evoking illusory face and letter detection. False detections made to pure-noise images were analyzed using a modified reverse correlation method in which hundreds of observers rated a modest number of noise images (480) during a single session. This method was originally developed for brain imaging research and has been used in a number of fMRI publications, but this is the first report of the behavioral classification images. In Experiment 1, illusory face detection occurred in response to scattered dark patches throughout the images, with a bias to the left visual field. This occurred despite the use of a fixation cross and expectations that faces would be centered. In contrast, illusory letter detection (Experiment 2) occurred in response to centrally positioned dark patches. Experiment 3 included an oval in all displays to spatially constrain illusory face detection. With the addition of this oval, the classification image revealed an eyes/nose/mouth pattern. These results suggest that face detection is triggered by a minimal face-like pattern, even when these features are not centered in visual focus.
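
The reverse-correlation analysis can be sketched as follows: the behavioral classification image is the mean of the noise images that triggered a false detection minus the mean of those that did not. A toy Python illustration of ours, with 2x2 "images" (the experiments used 480 full-size noise images):

```python
# Classification image = mean(detected noise images) - mean(rejected ones).
def mean_image(images):
    n = len(images)
    return [[sum(img[r][c] for img in images) / n
             for c in range(len(images[0][0]))]
            for r in range(len(images[0]))]

def classification_image(detected, rejected):
    md, mr = mean_image(detected), mean_image(rejected)
    return [[d - r for d, r in zip(rd, rr)] for rd, rr in zip(md, mr)]

detected = [[[2, 0], [0, 0]], [[4, 0], [0, 0]]]   # dark patch top-left
rejected = [[[0, 0], [0, 2]], [[0, 0], [0, 4]]]
ci = classification_image(detected, rejected)     # [[3.0, 0.0], [0.0, -3.0]]
```

Positive values in the classification image mark regions whose noise content tends to trigger the illusory percept.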

  18. Defining Face Perception Areas in the Human Brain: A Large-Scale Factorial fMRI Face Localizer Analysis

    Science.gov (United States)

    Rossion, Bruno; Hanseeuw, Bernard; Dricot, Laurence

    2012-01-01

    A number of human brain areas showing a larger response to faces than to objects from different categories, or to scrambled faces, have been identified in neuroimaging studies. Depending on the statistical criteria used, the set of areas can be overextended or minimized, both at the local (size of areas) and global (number of areas) levels. Here…

  19. Age synthesis and estimation via faces: a survey.

    Science.gov (United States)

    Fu, Yun; Guo, Guodong; Huang, Thomas S

    2010-11-01

    Human age, as an important personal trait, can be directly inferred from distinct patterns emerging in the facial appearance. Driven by rapid advances in computer graphics and machine vision, computer-based age synthesis and estimation via faces have become particularly prevalent topics recently because of their explosively emerging real-world applications, such as forensic art, electronic customer relationship management, security control and surveillance monitoring, biometrics, entertainment, and cosmetology. Age synthesis is defined as aesthetically rerendering a face image with natural aging or rejuvenating effects on the individual face. Age estimation is defined as automatically labeling a face image with the exact age (year) or the age group (year range) of the individual face. Because of their particularity and complexity, both problems are attractive yet challenging to computer-based application system designers, and large efforts from both academia and industry have been devoted to them in the last few decades. In this paper, we survey the complete state of the art in face image-based age synthesis and estimation. Existing models, popular algorithms, system performances, technical difficulties, popular face aging databases, evaluation protocols, and promising future directions are provided with systematic discussion.

  20. A Feature-Based Structural Measure: An Image Similarity Measure for Face Recognition

    Directory of Open Access Journals (Sweden)

    Noor Abdalrazak Shnain

    2017-08-01

    Facial recognition is one of the most challenging and interesting problems within the field of computer vision and pattern recognition. During the last few years, it has gained special attention due to its importance in relation to current issues such as security, surveillance systems and forensic analysis. Despite this high level of attention, success is still limited by certain conditions; there is no method which gives reliable results in all situations. In this paper, we propose an efficient similarity index that resolves the shortcomings of the existing measures of feature and structural similarity. This measure, called the Feature-Based Structural Measure (FSM), combines the best features of the well-known SSIM (structural similarity index measure) and FSIM (feature similarity index measure) approaches, striking a balance between performance for similar and dissimilar images of human faces. In addition to the statistical structural properties provided by SSIM, edge detection is incorporated in FSM as a distinctive structural feature. Its performance is tested for a wide range of PSNR (peak signal-to-noise ratio) values, using the ORL (Olivetti Research Laboratory, now AT&T Laboratory Cambridge) and FEI (Faculty of Industrial Engineering, São Bernardo do Campo, São Paulo, Brazil) databases. The proposed measure is tested under conditions of Gaussian noise; simulation results show that the proposed FSM outperforms the well-known SSIM and FSIM approaches in its efficiency of similarity detection and recognition of human faces.
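
For orientation, the SSIM statistics that FSM builds on can be computed globally in a few lines of Python (our single-window sketch with the conventional SSIM stabilizing constants for 8-bit images; real SSIM slides a local window over the image, and FSM additionally incorporates an edge-based term):

```python
# Global (single-window) SSIM from means, variances and covariance.
# c1 = (0.01*255)**2 and c2 = (0.03*255)**2 are the usual stabilizers.
def ssim_global(x, y, c1=6.5025, c2=58.5225):
    xs = [p for row in x for p in row]
    ys = [p for row in y for p in row]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    vx = sum((p - mx) ** 2 for p in xs) / n
    vy = sum((p - my) ** 2 for p in ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / n
    return (((2 * mx * my + c1) * (2 * cov + c2)) /
            ((mx * mx + my * my + c1) * (vx + vy + c2)))

a = [[52, 55], [61, 59]]
b = [[10, 10], [10, 10]]
perfect = ssim_global(a, a)   # identical images score 1.0
```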

  1. Adjudicating between face-coding models with individual-face fMRI responses.

    Directory of Open Access Journals (Sweden)

    Johan D Carlin

    2017-07-01

    The perceptual representation of individual faces is often explained with reference to a norm-based face space. In such spaces, individuals are encoded as vectors where identity is primarily conveyed by direction and distinctiveness by eccentricity. Here we measured human fMRI responses and psychophysical similarity judgments of individual face exemplars, which were generated as realistic 3D animations using a computer-graphics model. We developed and evaluated multiple neurobiologically plausible computational models, each of which predicts a representational distance matrix and a regional-mean activation profile for 24 face stimuli. In the fusiform face area, a face-space coding model with sigmoidal ramp tuning provided a better account of the data than one based on exemplar tuning. However, an image-processing model with weighted banks of Gabor filters performed similarly. Accounting for the data required the inclusion of a measurement-level population averaging mechanism that approximates how fMRI voxels locally average distinct neuronal tunings. Our study demonstrates the importance of comparing multiple models and of modeling the measurement process in computational neuroimaging.

  2. The importance of surface-based cues for face discrimination in non-human primates.

    Science.gov (United States)

    Parr, Lisa A; Taubert, Jessica

    2011-07-07

    Understanding how individual identity is processed from faces remains a complex problem. Contrast reversal, showing faces in photographic negative, impairs face recognition in humans and demonstrates the importance of surface-based information (shading and pigmentation) in face recognition. We tested the importance of contrast information for face encoding in chimpanzees and rhesus monkeys using a computerized face-matching task. Results showed that contrast reversal (positive to negative) selectively impaired face processing in these two species, although the impairment was greater for chimpanzees. Unlike chimpanzees, however, monkeys performed just as well matching negative to positive faces, suggesting that they retained some ability to extract identity information from negative faces. A control task showed that chimpanzees, but not rhesus monkeys, performed significantly better matching face parts compared with whole faces after a contrast reversal, suggesting that contrast reversal acts selectively on face processing, rather than general visual-processing mechanisms. These results confirm the importance of surface-based cues for face processing in chimpanzees and humans, while the results were less salient for rhesus monkeys. These findings make a significant contribution to understanding the evolution of cognitive specializations for face processing among primates, and suggest potential differences between monkeys and apes.

  3. Deep features for efficient multi-biometric recognition with face and ear images

    Science.gov (United States)

    Omara, Ibrahim; Xiao, Gang; Amrani, Moussa; Yan, Zifei; Zuo, Wangmeng

    2017-07-01

    Recently, multimodal biometric systems have received considerable research interest in many applications, especially in security. Multimodal systems can increase resistance to spoof attacks, provide more detail and flexibility, and lead to better performance and lower error rates. In this paper, we present a multimodal biometric system based on face and ear, and show how deep features extracted by Convolutional Neural Networks (CNNs) from face and ear images can provide a more powerful, discriminative, and robust representation. First, the deep features for face and ear images are extracted based on VGG-M Net. Second, the extracted deep features are fused using traditional concatenation and a Discriminant Correlation Analysis (DCA) algorithm. Third, a multiclass support vector machine is adopted for matching and classification. The experimental results show that the proposed multimodal system based on deep features is efficient and achieves a promising recognition rate of up to 100% using face and ear. In addition, the results indicate that DCA-based fusion is superior to traditional fusion.
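    The concatenation-fusion step in the pipeline above can be sketched in a few lines. The z-score normalisation and nearest-neighbour matcher here are illustrative stand-ins, not the paper's VGG-M/DCA/multiclass-SVM pipeline; normalising each modality first keeps either modality from dominating the fused vector purely by scale:

```python
import numpy as np

def fuse_features(face_feat, ear_feat):
    """Serial (concatenation) fusion of two modality feature vectors,
    each z-score normalised before concatenation."""
    def zscore(v):
        s = v.std()
        return (v - v.mean()) / (s if s > 0 else 1.0)
    return np.concatenate([zscore(face_feat), zscore(ear_feat)])

def nearest_neighbour_match(probe, gallery):
    """Return the index of the gallery template closest to the probe
    (a simple stand-in for the paper's multiclass SVM classifier)."""
    dists = [np.linalg.norm(probe - g) for g in gallery]
    return int(np.argmin(dists))
```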

  4. Face scanning in autism spectrum disorder (ASD) and attention deficit/hyperactivity disorder (ADHD): human versus dog face scanning

    Directory of Open Access Journals (Sweden)

    Mauro eMuszkat

    2015-10-01

    Full Text Available This study used eye-tracking to explore attention allocation to human and dog faces in children and adolescents with autism spectrum disorder (ASD), attention deficit/hyperactivity disorder (ADHD), and typical development (TD). Significant differences were found among the three groups. TD participants looked longer at the eyes than ASD and ADHD ones, irrespective of the faces presented. In spite of this difference, the groups were similar in that they looked more at the eyes than at the mouth areas of interest. The ADHD group gazed longer at the mouth region than the other groups. Furthermore, the groups were also similar in that they looked more at the dog than at the human faces. The eye-tracking technology proved to be useful for behavioral investigation in different neurodevelopmental disorders.
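    As a rough illustration of the dependent measure in eye-tracking studies like this one, dwell-time proportions per area of interest (AOI) can be computed from fixation records. The rectangular AOIs, field layout, and numbers below are hypothetical, not taken from the study:

```python
def dwell_proportions(fixations, aois):
    """Proportion of total fixation time spent in each AOI.

    `fixations` is a list of (x, y, duration_ms) tuples;
    each AOI is (name, x0, y0, x1, y1) with inclusive bounds.
    """
    total = sum(d for _, _, d in fixations) or 1.0
    out = {name: 0.0 for name, *_ in aois}
    for x, y, d in fixations:
        for name, x0, y0, x1, y1 in aois:
            if x0 <= x <= x1 and y0 <= y <= y1:
                out[name] += d / total
    return out
```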

  5. DEWA: A Multiaspect Approach for Multiple Face Detection in Complex Scene Digital Image

    Directory of Open Access Journals (Sweden)

    Setiawan Hadi

    2013-09-01

    Full Text Available A new approach for detecting faces in a digital image with unconstrained background has been developed. The approach is composed of three phases: a segmentation phase, a filtering phase, and a localization phase. In the segmentation phase, we utilized both training and non-training methods, implemented in a user-selectable color space. In the filtering phase, Minkowski addition-based object removal was used for image cleaning. In the last phase, an image processing method and a data mining method are employed for grouping and localizing objects, combined with geometric-based image analysis. Several experiments have been conducted using our special face database, which consists of simple objects and complex objects. The experimental results demonstrated that the detection accuracy is around 90% and the detection speed is less than 1 second on average.
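    Minkowski addition of a binary mask with a structuring element is ordinary morphological dilation, the building block of the cleaning operations the abstract refers to. A numpy-only sketch with a square structuring element (a stand-in for whatever implementation the authors used):

```python
import numpy as np

def dilate(mask, k=1):
    """Binary dilation (Minkowski addition with a (2k+1)x(2k+1) square
    structuring element), implemented by shifting and OR-ing the mask."""
    out = mask.astype(bool).copy()
    padded = np.pad(mask.astype(bool), k)
    h, w = mask.shape
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            out |= padded[k + dy:k + dy + h, k + dx:k + dx + w]
    return out
```

Object removal in practice typically pairs this with erosion (an opening) so that small noise blobs vanish while larger candidate regions survive.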

  6. Enhanced Visualization of Subtle Outer Retinal Pathology by En Face Optical Coherence Tomography and Correlation with Multi-Modal Imaging.

    Directory of Open Access Journals (Sweden)

    Danuta M Sampson

    Full Text Available To present en face optical coherence tomography (OCT) images generated by graph-search theory algorithm-based custom software and examine correlation with other imaging modalities. En face OCT images derived from high-density OCT volumetric scans of 3 healthy subjects and 4 patients using a custom algorithm (graph-search theory) and commercial software (Heidelberg Eye Explorer, Heidelberg Engineering) were compared and correlated with near-infrared reflectance, fundus autofluorescence, adaptive optics flood-illumination ophthalmoscopy (AO-FIO), and microperimetry. Commercial software was unable to generate accurate en face OCT images in eyes with retinal pigment epithelium (RPE) pathology due to segmentation error at the level of Bruch's membrane (BM). Accurate segmentation of the basal RPE and BM was achieved using the custom software. The en face OCT images from eyes with isolated interdigitation or ellipsoid zone pathology were of similar quality between the custom software and Heidelberg Eye Explorer software in the absence of any other significant outer retinal pathology. En face OCT images demonstrated angioid streaks, lesions of acute macular neuroretinopathy, hydroxychloroquine toxicity, and Bietti crystalline deposits that correlated with other imaging modalities. The graph-search theory algorithm helps to overcome the limitations of outer retinal segmentation inaccuracies in commercial software. En face OCT images can provide detailed topography of the reflectivity within a specific layer of the retina, which correlates with other forms of fundus imaging. Our results highlight the need for standardization of image reflectivity to facilitate quantification of en face OCT images and longitudinal analysis.

  7. Faces of CMS: Photomosaic (September 2013, low-resolution)

    CERN Multimedia

    Antonelli, Jamie

    2013-01-01

    The "Faces of CMS" photomosaic project aims to show the human element of the CMS Experiment. Most of the images for public outreach show the experimental equipment of CMS or physics results and collision displays. With a collaboration of around 3,000 people scattered around the globe, it's difficult to present the members of CMS in any one image. We asked any interested CMS members to sign up for the project, and allow us to use their photographs. The resulting photo mosaic contains the faces of 1,271 CMS members.

  8. Face averages enhance user recognition for smartphone security.

    Science.gov (United States)

    Robertson, David J; Kramer, Robin S S; Burton, A Mike

    2015-01-01

    Our recognition of familiar faces is excellent, and generalises across viewing conditions. However, unfamiliar face recognition is much poorer. For this reason, automatic face recognition systems might benefit from incorporating the advantages of familiarity. Here we put this to the test using the face verification system available on a popular smartphone (the Samsung Galaxy). In two experiments we tested the recognition performance of the smartphone when it was encoded with an individual's 'face-average'--a representation derived from theories of human face perception. This technique significantly improved performance for both unconstrained celebrity images (Experiment 1) and for real faces (Experiment 2): users could unlock their phones more reliably when the device stored an average of the user's face than when they stored a single image. This advantage was consistent across a wide variety of everyday viewing conditions. Furthermore, the benefit did not reduce the rejection of imposter faces. This benefit is brought about solely by consideration of suitable representations for automatic face recognition, and we argue that this is just as important as development of matching algorithms themselves. We propose that this representation could significantly improve recognition rates in everyday settings.
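    The 'face-average' representation described above is conceptually simple: a pixel-wise mean over several images of the same person. A minimal sketch, assuming the images are grayscale and already registered (eyes and mouth aligned), since averaging unaligned images merely blurs the identity signal:

```python
import numpy as np

def face_average(aligned_faces):
    """Pixel-wise mean of a stack of pre-aligned grayscale face images.

    `aligned_faces` is an (n, h, w) array-like; alignment is assumed
    to have been done beforehand.
    """
    stack = np.asarray(aligned_faces, dtype=np.float64)
    return stack.mean(axis=0)
```

The enrolment-side idea is then to store `face_average(samples_of_user)` as the template instead of any single photograph.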

  9. Illumination robust face recognition using spatial adaptive shadow compensation based on face intensity prior

    Science.gov (United States)

    Hsieh, Cheng-Ta; Huang, Kae-Horng; Lee, Chang-Hsing; Han, Chin-Chuan; Fan, Kuo-Chin

    2017-12-01

    Robust face recognition under illumination variations is an important and challenging task in a face recognition system, particularly for face recognition in the wild. In this paper, a face image preprocessing approach, called spatial adaptive shadow compensation (SASC), is proposed to eliminate shadows in the face image due to different lighting directions. First, spatial adaptive histogram equalization (SAHE), which uses a face intensity prior model, is proposed to enhance the contrast of each local face region without generating visible noise in smooth face areas. Adaptive shadow compensation (ASC), which performs shadow compensation in each local image block, is then used to produce a well-compensated face image appropriate for face feature extraction and recognition. Finally, null-space linear discriminant analysis (NLDA) is employed to extract discriminant features from SASC-compensated images. Experiments performed on the Yale B, Yale B extended, and CMU PIE face databases have shown that the proposed SASC always yields the best face recognition accuracy. That is, SASC is more robust for face recognition under illumination variations than other shadow compensation approaches.
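    Plain per-block histogram equalisation conveys the flavour of the spatially adaptive step: each local region is equalised independently so contrast is enhanced where shadows compress it. This is only a sketch; the actual SAHE method additionally uses the face intensity prior to avoid amplifying noise in smooth regions, which is omitted here:

```python
import numpy as np

def equalize_block(block, levels=256):
    """Histogram-equalise one grayscale block (integer values in 0..levels-1)."""
    hist = np.bincount(block.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf = cdf / cdf[-1]                                # normalise to [0, 1]
    lut = np.round(cdf * (levels - 1)).astype(block.dtype)
    return lut[block]

def blockwise_equalize(img, block=32):
    """Equalise each non-overlapping block independently -- a crude
    stand-in for the paper's spatially adaptive scheme."""
    out = img.copy()
    h, w = img.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            out[y:y + block, x:x + block] = equalize_block(img[y:y + block, x:x + block])
    return out
```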

  10. Extracting a Good Quality Frontal Face Image from a Low-Resolution Video Sequence

    DEFF Research Database (Denmark)

    Nasrollahi, Kamal; Moeslund, Thomas B.

    2011-01-01

    Feeding low-resolution and low-quality images, from inexpensive surveillance cameras, to systems like, e.g., face recognition, produces erroneous and unstable results. Therefore, there is a need for a mechanism to bridge the gap between on one hand low-resolution and low-quality images......, we use a learning-based super-resolution algorithm applied to the result of the reconstruction-based part to improve the quality by another factor of two. This results in an improvement factor of four for the entire system. The proposed system has been tested on 122 low-resolution sequences from two...... different databases. The experimental results show that the proposed system can indeed produce a high-resolution and good quality frontal face image from low-resolution video sequences....

  11. A Fusion Face Recognition Approach Based on 7-Layer Deep Learning Neural Network

    Directory of Open Access Journals (Sweden)

    Jianzheng Liu

    2016-01-01

    Full Text Available This paper presents a method for recognizing human faces with facial expression. In the proposed approach, a motion history image (MHI) is employed to capture the features in an expressive face. The face can be seen as a physiological characteristic of a human, and the expressions are behavioral characteristics. We fused the 2D images of a face with MHIs generated from the same face's image sequences with expression. The fusion features were then used to feed a 7-layer deep learning neural network. The first 6 layers of the network can be seen as an autoencoder that reduces the dimension of the fusion features; the last layer can be seen as a softmax regression, which we used to make the identification decision. Experimental results demonstrated that our proposed method performs favorably against several state-of-the-art methods.
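    A motion history image accumulates recent frame-to-frame motion into a single grayscale map: pixels that just moved are set to the maximum timestamp and older motion fades. A minimal per-frame update; the parameter values (tau, decay, thresh) are illustrative, not the paper's settings:

```python
import numpy as np

def update_mhi(mhi, frame_diff, tau=255, decay=32, thresh=30):
    """One MHI update step.

    Pixels that moved in the current frame (|difference| > thresh)
    are set to `tau`; all other pixels decay toward 0, so recent
    motion is bright and older motion fades.
    """
    moved = np.abs(frame_diff) > thresh
    mhi = np.where(moved, tau, np.maximum(mhi.astype(np.int32) - decay, 0))
    return mhi.astype(np.uint8)
```

Calling this once per frame of an expression sequence yields the MHI that would then be fused with the static 2D face image.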

  12. Do happy faces really modulate liking for Jackson Pollock art and statistical fractal noise images?

    Directory of Open Access Journals (Sweden)

    Mundloch Katrin

    2017-01-01

    Full Text Available Flexas et al. (2013) demonstrated that happy faces increase preference for abstract art if seen in short succession. We could not replicate their findings. In our first experiment, we tested whether the valence, saliency, or arousal of facial primes can modulate liking of Jackson Pollock art crops. In the second experiment, the emphasis was on testing another type of abstract visual stimuli that possesses similar low-level image features: statistical fractal noise images. Pollock crops were rated significantly higher when primed with happy faces in contrast to neutral faces, but not differently from the no-prime condition. The findings of our study suggest that affective priming with happy faces may be stimulus-specific and may have inadvertent effects on other abstract visual material.

  13. The Processing of Human Emotional Faces by Pet and Lab Dogs: Evidence for Lateralization and Experience Effects

    Science.gov (United States)

    Barber, Anjuli L. A.; Randi, Dania; Müller, Corsin A.; Huber, Ludwig

    2016-01-01

    Of all non-human animals, dogs are very likely the best decoders of human behavior. In addition to a high sensitivity to human attentive status and to ostensive cues, they are able to distinguish between individual human faces and even between human facial expressions. However, so far little is known about how they process human faces and to what extent this is influenced by experience. Here we present an eye-tracking study with dogs from two different living environments with varying experience with humans: pet and lab dogs. The dogs were shown pictures of familiar and unfamiliar human faces expressing four different emotions. The results, extracted from several different eye-tracking measurements, revealed pronounced differences in the face processing of pet and lab dogs, indicating an influence of the amount of exposure to humans. In addition, there was some evidence for influences of both the familiarity and the emotional expression of the face, and strong evidence for a left gaze bias. These findings, together with recent evidence for the dog's ability to discriminate human facial expressions, indicate that dogs are sensitive to some emotions expressed in human faces. PMID:27074009

  14. Measurements of the parapapillary atrophy zones in en face optical coherence tomography images.

    Directory of Open Access Journals (Sweden)

    Atsuya Miki

    Full Text Available To measure the parapapillary atrophy (PPA) area in en face images obtained with swept-source optical coherence tomography (SS-OCT), and to evaluate its relationship to glaucoma, myopia, and age in non-highly myopic subjects. Retrospective, cross-sectional study. Fifty eyes of 30 subjects with open-angle glaucoma (G group) and forty-three eyes of 26 healthy control subjects (C group). Eyes with high myopia (spherical equivalent refractive error ≤ -8 diopters or axial length ≥ 26.5 mm) were excluded. Mean age ± standard deviation was 59.9 ± 12.4 years. The beta zone and the gamma zone PPA areas were measured in en face images reconstructed from three-dimensional SS-OCT images. The relationship between the PPA areas and patient characteristics such as glaucoma, axial length, and age was statistically evaluated using multivariate mixed-effects models. Outcome measures were the areas of the beta zone and the gamma zone PPA measured on en face OCT images. The average ± standard deviation area of the beta and the gamma zone was 0.64 ± 0.79 and 0.16 ± 0.30 mm², respectively. In multivariate models, the gamma zone significantly correlated with axial length (P = 0.001) but not with glaucoma (P = 0.944). In contrast, the beta zone significantly correlated with age (P = 0.0249) and glaucoma (P = 0.014). En face images reconstructed from 3D SS-OCT data facilitated measurements of the beta and the gamma PPA zones even in eyes with optic disc distortion. The OCT-defined beta zone is associated with glaucoma and age, whereas the gamma zone correlated with myopia but not with glaucoma. This study confirmed the clinical usefulness of OCT-based classification of the PPA zones in distinguishing glaucomatous damage of the optic nerve from myopic damage in non-highly myopic eyes.

  15. Imaging system for creating 3D block-face cryo-images of whole mice

    Science.gov (United States)

    Roy, Debashish; Breen, Michael; Salvado, Olivier; Heinzel, Meredith; McKinley, Eliot; Wilson, David

    2006-03-01

    We developed a cryomicrotome/imaging system that provides high resolution, high sensitivity block-face images of whole mice or excised organs, and applied it to a variety of biological applications. With this cryo-imaging system, we sectioned cryo-preserved tissues at 2-40 μm thickness and acquired high resolution brightfield and fluorescence images with microscopic in-plane resolution (as good as 1.2 μm). Brightfield images of normal and pathological anatomy show exquisite detail, especially in the abdominal cavity. Multi-planar reformatting and 3D renderings allow one to interrogate 3D structures. In this report, we present brightfield images of mouse anatomy, as well as 3D renderings of organs. For BPK mice model of polycystic kidney disease, we compared brightfield cryo-images and kidney volumes to MRI. The color images provided greater contrast and resolution of cysts as compared to in vivo MRI. We note that color cryo-images are closer to what a researcher sees in dissection, making it easier for them to interpret image data. The combination of field of view, depth of field, ultra high resolution and color/fluorescence contrast enables cryo-image volumes to provide details that cannot be found through in vivo imaging or other ex vivo optical imaging approaches. We believe that this novel imaging system will have applications that include identification of mouse phenotypes, characterization of diseases like blood vessel disease, kidney disease, and cancer, assessment of drug and gene therapy delivery and efficacy and validation of other imaging modalities.

  16. Behavioural and neurophysiological evidence for face identity and face emotion processing in animals

    Science.gov (United States)

    Tate, Andrew J; Fischer, Hanno; Leigh, Andrea E; Kendrick, Keith M

    2006-01-01

    Visual cues from faces provide important social information relating to individual identity, sexual attraction and emotional state. Behavioural and neurophysiological studies on both monkeys and sheep have shown that specialized skills and neural systems for processing these complex cues to guide behaviour have evolved in a number of mammals and are not present exclusively in humans. Indeed, there are remarkable similarities in the ways that faces are processed by the brain in humans and other mammalian species. While human studies with brain imaging and gross neurophysiological recording approaches have revealed global aspects of the face-processing network, they cannot investigate how information is encoded by specific neural networks. Single neuron electrophysiological recording approaches in both monkeys and sheep have, however, provided some insights into the neural encoding principles involved and, particularly, the presence of a remarkable degree of high-level encoding even at the level of a specific face. Recent developments that allow simultaneous recordings to be made from many hundreds of individual neurons are also beginning to reveal evidence for global aspects of a population-based code. This review will summarize what we have learned so far from these animal-based studies about the way the mammalian brain processes the faces and the emotions they can communicate, as well as associated capacities such as how identity and emotion cues are dissociated and how face imagery might be generated. It will also try to highlight what questions and advances in knowledge still challenge us in order to provide a complete understanding of just how brain networks perform this complex and important social recognition task. PMID:17118930

  17. Adaptation effects to attractiveness of face photographs and art portraits are domain-specific

    Science.gov (United States)

    Hayn-Leichsenring, Gregor U.; Kloth, Nadine; Schweinberger, Stefan R.; Redies, Christoph

    2013-01-01

    We studied the neural coding of facial attractiveness by investigating effects of adaptation to attractive and unattractive human faces on the perceived attractiveness of veridical human face pictures (Experiment 1) and art portraits (Experiment 2). Experiment 1 revealed a clear pattern of contrastive aftereffects. Relative to a pre-adaptation baseline, the perceived attractiveness of faces was increased after adaptation to unattractive faces, and was decreased after adaptation to attractive faces. Experiment 2 revealed similar aftereffects when art portraits rather than face photographs were used as adaptors and test stimuli, suggesting that effects of adaptation to attractiveness are not restricted to facial photographs. Additionally, we found similar aftereffects in art portraits for beauty, another aesthetic feature that, unlike attractiveness, relates to the properties of the image (rather than to the face displayed). Importantly, Experiment 3 showed that aftereffects were abolished when adaptors were art portraits and face photographs were test stimuli. These results suggest that adaptation to facial attractiveness elicits aftereffects in the perception of subsequently presented faces, for both face photographs and art portraits, and that these effects do not cross image domains. PMID:24349690

  19. Baby schema in human and animal faces induces cuteness perception and gaze allocation in children

    Directory of Open Access Journals (Sweden)

    Marta eBorgi

    2014-05-01

    Full Text Available The baby schema concept was originally proposed as a set of infantile traits with high appeal for humans, subsequently shown to elicit caretaking behavior and to affect cuteness perception and attentional processes. However, it is unclear whether the response to the baby schema may be extended to the human-animal bond context. Moreover, questions remain as to whether the cute response is constant and persistent or whether it changes with development. In the present study we parametrically manipulated the baby schema in images of humans, dogs, and cats. We analyzed responses of 3-6-year-old children, using both explicit (i.e., cuteness ratings) and implicit (i.e., eye gaze patterns) measures. By means of eye-tracking, we assessed children's preferential attention to images varying only in the degree of baby schema and explored participants' fixation patterns during a cuteness task. For comparative purposes, cuteness ratings were also obtained in a sample of adults. Overall our results show that the response to an infantile facial configuration emerges early during development. In children, the baby schema affects both cuteness perception and gaze allocation to infantile stimuli and to specific facial features, an effect not simply limited to human faces. In line with previous research, the results confirm humans' positive appraisal of animals and inform both educational and therapeutic interventions involving pets, helping to minimize risk factors (e.g., dog bites).

  20. Combining Deep and Handcrafted Image Features for Presentation Attack Detection in Face Recognition Systems Using Visible-Light Camera Sensors

    Directory of Open Access Journals (Sweden)

    Dat Tien Nguyen

    2018-02-01

    Full Text Available Although face recognition systems have wide application, they are vulnerable to presentation attack samples (fake samples). Therefore, a presentation attack detection (PAD) method is required to enhance the security level of face recognition systems. Most of the previously proposed PAD methods for face recognition systems have focused on using handcrafted image features, which are designed by expert knowledge of designers, such as Gabor filter, local binary pattern (LBP), local ternary pattern (LTP), and histogram of oriented gradients (HOG). As a result, the extracted features reflect limited aspects of the problem, yielding a detection accuracy that is low and varies with the characteristics of presentation attack face images. The deep learning method has been developed in the computer vision research community, which is proven to be suitable for automatically training a feature extractor that can be used to enhance the ability of handcrafted features. To overcome the limitations of previously proposed PAD methods, we propose a new PAD method that uses a combination of deep and handcrafted features extracted from the images by visible-light camera sensor. Our proposed method uses the convolutional neural network (CNN) method to extract deep image features and the multi-level local binary pattern (MLBP) method to extract skin detail features from face images to discriminate the real and presentation attack face images. By combining the two types of image features, we form a new type of image features, called hybrid features, which has stronger discrimination ability than single image features. Finally, we use the support vector machine (SVM) method to classify the image features into real or presentation attack class. Our experimental results indicate that our proposed method outperforms previous PAD methods by yielding the smallest error rates on the same image databases.
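    The handcrafted half of the hybrid feature above is built on local binary patterns. A basic single-radius 3x3 LBP (the paper's MLBP extends this to multiple levels/radii, omitted here) assigns each interior pixel an 8-bit code, one bit per neighbour that is at least as bright as the centre; the clockwise bit ordering below is an arbitrary but consistent convention:

```python
import numpy as np

def lbp_image(gray):
    """Basic 3x3 local binary pattern codes for the interior pixels.

    Each neighbour contributes one bit (1 if >= centre), giving an
    8-bit texture code per pixel.
    """
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]                                   # centre pixels
    # neighbour offsets, clockwise from top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.int32) << bit
    return code
```

Histograms of these codes over image blocks would then serve as the skin-detail feature vector fed, together with the CNN features, to the classifier.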

  3. Localized Retinal Nerve Fiber Layer Defects in Red-free Photographs Versus En Face Structural Optical Coherence Tomography Images.

    Science.gov (United States)

    Jung, Jae Hoon; Park, Ji-Hye; Yoo, Chungkwon; Kim, Yong Yeon

    2018-03-01

    The purpose of this article is to compare the locations of localized retinal nerve fiber layer (RNFL) defects in red-free fundus photographs and optical coherence tomography (OCT) en face images. We performed a retrospective, comparative study on 46 eyes from 46 glaucoma patients with localized RNFL defects observed in red-free fundus photographs. En face structural images were obtained in the superficial and whole retinal layers using OCT and were overlaid on the corresponding red-free fundus photographs. The proximal/distal angular locations and angular width of each RNFL defect in red-free photos (red-free defects) and in en face structural images (en face defects) were compared. In the superficial retinal layer, there were no significant differences between red-free and en face defects on the proximal/distal angular location and angular width. In the whole retinal layer, the degree of the distal angular location of the en face defects was significantly larger than that of the red-free defects (71.85±18.26 vs. 70.87±17.90 degrees, P=0.003). The correlations of clinical variables with the differences in angular parameters between red-free and en face defects were not significant in the superficial retinal layer. The average RNFL thickness was negatively correlated with the difference in the distal angular location in the whole retinal layer (Pearson correlation coefficient=-0.401, P=0.006). Localized RNFL defects detected in OCT en face structural images of the superficial retinal layer showed high topographic correlation with defects detected in red-free photographs. OCT en face structural images in the superficial layer may be an alternative to red-free fundus photography for the identification of localized RNFL defects in glaucomatous eyes.

  4. Face recognition based on depth maps and surface curvature

    Science.gov (United States)

    Gordon, Gaile G.

    1991-09-01

    This paper explores the representation of the human face by features based on the curvature of the face surface. Curvature captures many features necessary to accurately describe the face, such as the shape of the forehead, jawline, and cheeks, which are not easily detected from standard intensity images. Moreover, the value of curvature at a point on the surface is viewpoint invariant. Until recently, range data of high enough resolution and accuracy to perform useful curvature calculations on the scale of the human face had been unavailable. Although several researchers have worked on the problem of interpreting range data from curved (although usually highly geometrically structured) surfaces, the main approaches have centered on segmentation by the signs of mean and Gaussian curvature, which have not proved sufficient in themselves for the case of the human face. This paper details the calculation of principal curvature for a particular data set, the calculation of general surface descriptors based on curvature, and the calculation of face-specific descriptors based both on curvature features and a priori knowledge about the structure of the face. These face-specific descriptors can be incorporated into many different recognition strategies. A system that implements one such strategy, depth template comparison, giving recognition rates between 80% and 90%, is described.
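    The mean- and Gaussian-curvature segmentation mentioned above rests on the standard Monge-patch formulas. As an illustrative sketch (not the paper's implementation), the two curvature maps can be computed from a depth map with finite differences, sanity-checked here on a synthetic spherical cap, for which K = 1/R² and H = −1/R at the apex:

    ```python
    import numpy as np

    def curvature_maps(depth, spacing):
        """Mean (H) and Gaussian (K) curvature of a depth map z = f(x, y), Monge-patch formulas."""
        fy, fx = np.gradient(depth, spacing)     # np.gradient returns the axis-0 (y) derivative first
        fxy, fxx = np.gradient(fx, spacing)
        fyy, _ = np.gradient(fy, spacing)
        denom = 1.0 + fx ** 2 + fy ** 2
        K = (fxx * fyy - fxy ** 2) / denom ** 2
        H = ((1 + fy ** 2) * fxx - 2 * fx * fy * fxy + (1 + fx ** 2) * fyy) / (2 * denom ** 1.5)
        return H, K

    # Sanity check on a synthetic spherical cap of radius R (no real range data needed).
    R, h = 50.0, 0.5
    x = np.linspace(-20, 20, 81)                 # grid spacing h = 0.5
    X, Y = np.meshgrid(x, x)
    H, K = curvature_maps(np.sqrt(R ** 2 - X ** 2 - Y ** 2), h)
    c = len(x) // 2                              # grid center (the apex)
    ```

    In a real pipeline the depth map would come from the range scanner, and the signs of H and K at each pixel would drive the segmentation the abstract refers to.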

  5. Two-step superresolution approach for surveillance face image through radial basis function-partial least squares regression and locality-induced sparse representation

    Science.gov (United States)

    Jiang, Junjun; Hu, Ruimin; Han, Zhen; Wang, Zhongyuan; Chen, Jun

    2013-10-01

    Face superresolution (SR), or face hallucination, refers to the technique of generating a high-resolution (HR) face image from a low-resolution (LR) one with the help of a set of training examples. It aims at transcending the limitations of electronic imaging systems. Applications of face SR include video surveillance, in which the individual of interest is often far from the cameras. A two-step method is proposed to infer a high-quality, HR face image from a low-quality, LR observation. First, we establish the nonlinear relationship between LR face images and HR ones using radial basis function and partial least squares (RBF-PLS) regression, to transform the LR face into the global face space. Then, a locality-induced sparse representation (LiSR) approach is presented to enhance the local facial details once the global faces for each LR training face are constructed. A comparison with some state-of-the-art SR methods shows the superiority of the proposed two-step approach: RBF-PLS global face regression followed by LiSR-based local patch reconstruction. Experiments also demonstrate its effectiveness under both simulated and real conditions.

  6. Observation of plasma-facing-wall via high dynamic range imaging

    International Nuclear Information System (INIS)

    Villamayor, Michelle Marie S.; Rosario, Leo Mendel D.; Viloan, Rommel Paulo B.

    2013-01-01

    Pictures of plasmas and deposits in a discharge chamber, taken at varying shutter speeds, have been integrated into high dynamic range (HDR) images. The HDR images of a graphite target surface of a compact planar magnetron (CPM) discharge device have clearly indicated the erosion pattern of the target, which is correlated with the light intensity distribution of the plasma during operation. Based upon the HDR image technique coupled with colorimetry, a formation history of dust-like deposits inside the CPM chamber has been recorded. The obtained HDR images have shown how the patterns of deposits changed in accordance with discharge duration. Results show that deposition takes place near the evacuation ports during the early stage of the plasma discharge. Discoloration of the plasma-facing walls, indicating erosion and redeposition, eventually spreads to the periphery after several hours of operation. (author)
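    The abstract does not give the integration algorithm, but the general idea of merging bracketed exposures into an HDR radiance map can be sketched as an exposure-time-weighted average that trusts mid-tone pixels and discounts clipped ones. This is a simplified model assuming a linear sensor response (no camera response curve recovery):

    ```python
    import numpy as np

    def merge_hdr(images, exposure_times):
        """Merge aligned exposures (arrays scaled to [0, 1]) into a relative radiance map."""
        acc = np.zeros_like(images[0], dtype=float)
        wsum = np.zeros_like(acc)
        for img, t in zip(images, exposure_times):
            w = img * (1.0 - img) + 1e-8      # hat weight: mid-tones trusted, clipped pixels ~0
            acc += w * (img / t)              # per-exposure radiance estimate (linear sensor)
            wsum += w
        return acc / wsum

    # Three scene radiances seen through two shutter speeds; the brightest pixel
    # saturates in the long exposure, so it must be recovered from the short one.
    radiance = np.array([0.2, 1.0, 3.0])
    times = [0.25, 1.0]
    shots = [np.clip(radiance * t, 0.0, 1.0) for t in times]
    merged = merge_hdr(shots, times)
    ```

    The merged map recovers all three radiances, including the one that saturated in the longer exposure, which is what lets the HDR images reveal both the bright plasma and the dim deposits at once.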

  7. Predicting Performance of a Face Recognition System Based on Image Quality

    NARCIS (Netherlands)

    Dutta, A.

    2015-01-01

    In this dissertation, we focus on several aspects of models that aim to predict performance of a face recognition system. Performance prediction models are commonly based on the following two types of performance predictor features: a) image quality features; and b) features derived solely from

  8. The other-race effect in face learning: Using naturalistic images to investigate face ethnicity effects in a learning paradigm.

    Science.gov (United States)

    Hayward, William G; Favelle, Simone K; Oxner, Matt; Chu, Ming Hon; Lam, Sze Man

    2017-05-01

    The other-race effect in face identification has been reported in many situations and by many different ethnicities, yet it remains poorly understood. One reason for this lack of clarity may be a limitation in the methodologies that have been used to test it. Experiments typically use an old-new recognition task to demonstrate the existence of the other-race effect, but such tasks are susceptible to different social and perceptual influences, particularly in terms of the extent to which all faces are equally individuated at study. In this paper we report an experiment in which we used a face learning methodology to measure the other-race effect. We obtained naturalistic photographs of Chinese and Caucasian individuals, which allowed us to test the ability of participants to generalize their learning to new ecologically valid exemplars of a face identity. We show a strong own-race advantage in face learning, such that participants required many fewer trials to learn names of own-race individuals than those of other-race individuals and were better able to identify learned own-race individuals in novel naturalistic stimuli. Since our methodology requires individuation of all faces, and generalization over large image changes, our finding of an other-race effect can be attributed to a specific deficit in the sensitivity of perceptual and memory processes to other-race faces.

  9. A Two-Stage Framework for 3D Face Reconstruction from RGBD Images.

    Science.gov (United States)

    Wang, Kangkan; Wang, Xianwang; Pan, Zhigeng; Liu, Kai

    2014-08-01

    This paper proposes a new approach for 3D face reconstruction with RGBD images from an inexpensive commodity sensor. The challenges we face are: 1) substantial random noise and corruption are present in low-resolution depth maps; and 2) there is a high degree of variability in pose and facial expression. We develop a novel two-stage algorithm that effectively maps low-quality depth maps to realistic face models. Each stage is targeted toward a certain type of noise. The first stage extracts sparse errors from depth patches through data-driven local sparse coding, while the second stage smooths noise on the boundaries between patches and reconstructs the global shape by combining local shapes using our template-based surface refinement. Our approach does not require any markers or user interaction. We perform quantitative and qualitative evaluations on both synthetic and real test sets. Experimental results show that the proposed approach is able to produce high-resolution 3D face models with high accuracy, even if the inputs are of low quality and have large variations in viewpoint and facial expression.

  10. Automated facial acne assessment from smartphone images

    Science.gov (United States)

    Amini, Mohammad; Vasefi, Fartash; Valdebran, Manuel; Huang, Kevin; Zhang, Haomiao; Kemp, William; MacKinnon, Nicholas

    2018-02-01

    A smartphone mobile medical application is presented that analyzes the health of facial skin using a smartphone image and cloud-based image processing techniques. The mobile application uses the camera to capture a front face image of a subject, after which the captured image is spatially calibrated based on fiducial points such as the position of the iris of the eye. A facial recognition algorithm is used to identify features of the human face image, to normalize the image, and to define facial regions of interest (ROIs) for acne assessment. We identify acne lesions and classify them into two categories: papules and pustules. Automated facial acne assessment was validated by performing tests on images of 60 digital human models and 10 real human face images. The application was able to identify 92% of acne lesions within five facial ROIs. The classification accuracy for separating papules from pustules was 98%. Combined with in-app documentation of treatment and lifestyle factors, automated facial acne assessment allows the app to be used in both cosmetic and clinical dermatology. It allows users to quantitatively self-measure acne severity and treatment efficacy on an ongoing basis to help them manage their chronic facial acne.

  11. Very low resolution face recognition problem.

    Science.gov (United States)

    Zou, Wilman W W; Yuen, Pong C

    2012-01-01

    This paper addresses the very low resolution (VLR) problem in face recognition, in which the resolution of the face image to be recognized is lower than 16 × 16. With the increasing demand for surveillance-camera-based applications, the VLR problem arises in many face application systems. Existing face recognition algorithms are not able to give satisfactory performance on VLR face images. While face super-resolution (SR) methods can be employed to enhance the resolution of the images, existing learning-based face SR methods do not perform well on such VLR face images. To overcome this problem, this paper proposes a novel approach to learn the relationship between the high-resolution image space and the VLR image space for face SR. Based on this new approach, two constraints, namely, new data and discriminative constraints, are designed for good visual quality and face recognition applications under the VLR problem, respectively. Experimental results show that the proposed SR algorithm based on relationship learning outperforms the existing algorithms on public face databases.

  12. Individuating Faces and Common Objects Produces Equal Responses in Putative Face Processing Areas in the Ventral Occipitotemporal Cortex

    Directory of Open Access Journals (Sweden)

    Frank Haist

    2010-10-01

    Controversy surrounds the proposal that specific human cortical regions in the ventral occipitotemporal cortex, commonly called the fusiform face area (FFA) and occipital face area (OFA), are specialized for face processing. Here, we present findings from an fMRI study of identity discrimination of faces and objects that demonstrates that the FFA and OFA are equally responsive to processing stimuli at the level of individuals (i.e., individuation), be they human faces or non-face objects. The FFA and OFA were defined via a passive viewing task as regions that produced greater activation to faces relative to non-face stimuli within the middle fusiform gyrus and inferior occipital gyrus. In the individuation task, participants judged whether sequentially presented images of faces, diverse objects, or wristwatches depicted the identical or a different exemplar. All three stimulus types produced equivalent BOLD activation within the FFA and OFA; that is, there was no face-specific or face-preferential processing. Critically, individuation processing did not eliminate an object superiority effect relative to faces within a region more closely linked to object processing, the lateral occipital complex (LOC), suggesting that individuation processes are reasonably specific to the FFA and OFA. Taken together, these findings challenge the prevailing view that the FFA and OFA are face-specific processing regions, demonstrating instead that they function to individuate -- i.e., identify specific individuals -- within a category. These findings have significant implications for understanding the function of a brain region widely believed to play an important role in social cognition.

  13. Self-face recognition in social context.

    Science.gov (United States)

    Sugiura, Motoaki; Sassa, Yuko; Jeong, Hyeonjeong; Wakusawa, Keisuke; Horie, Kaoru; Sato, Shigeru; Kawashima, Ryuta

    2012-06-01

    The concept of "social self" is often described as a representation of the self-reflected in the eyes or minds of others. Although the appearance of one's own face has substantial social significance for humans, neuroimaging studies have failed to link self-face recognition and the likely neural substrate of the social self, the medial prefrontal cortex (MPFC). We assumed that the social self is recruited during self-face recognition under a rich social context where multiple other faces are available for comparison of social values. Using functional magnetic resonance imaging (fMRI), we examined the modulation of neural responses to the faces of the self and of a close friend in a social context. We identified an enhanced response in the ventral MPFC and right occipitoparietal sulcus in the social context specifically for the self-face. Neural response in the right lateral parietal and inferior temporal cortices, previously claimed as self-face-specific, was unaffected for the self-face but unexpectedly enhanced for the friend's face in the social context. Self-face-specific activation in the pars triangularis of the inferior frontal gyrus, and self-face-specific reduction of activation in the left middle temporal gyrus and the right supramarginal gyrus, replicating a previous finding, were not subject to such modulation. Our results thus demonstrated the recruitment of a social self during self-face recognition in the social context. At least three brain networks for self-face-specific activation may be dissociated by different patterns of response-modulation in the social context, suggesting multiple dynamic self-other representations in the human brain. Copyright © 2011 Wiley-Liss, Inc.

  14. Acute Solar Retinopathy Imaged With Adaptive Optics, Optical Coherence Tomography Angiography, and En Face Optical Coherence Tomography.

    Science.gov (United States)

    Wu, Chris Y; Jansen, Michael E; Andrade, Jorge; Chui, Toco Y P; Do, Anna T; Rosen, Richard B; Deobhakta, Avnish

    2018-01-01

    Solar retinopathy is a rare form of retinal injury that occurs after direct sungazing. To enhance understanding of the structural changes that occur in solar retinopathy by obtaining high-resolution in vivo en face images. Case report of a young adult woman who presented to the New York Eye and Ear Infirmary with symptoms of acute solar retinopathy after viewing the solar eclipse on August 21, 2017. Results of comprehensive ophthalmic examination and images obtained by fundus photography, microperimetry, spectral-domain optical coherence tomography (OCT), adaptive optics scanning light ophthalmoscopy, OCT angiography, and en face OCT. The patient was examined after viewing the solar eclipse. Visual acuity was 20/20 OD and 20/25 OS. The patient was left-eye dominant. Spectral-domain OCT images were consistent with mild and severe acute solar retinopathy in the right and left eye, respectively. Microperimetry was normal in the right eye but showed paracentral decreased retinal sensitivity in the left eye with a central absolute scotoma. Adaptive optics images of the right eye showed a small region of nonwaveguiding photoreceptors, while images of the left eye showed a large area of abnormal and nonwaveguiding photoreceptors. Optical coherence tomography angiography images were normal in both eyes. En face OCT images of the right eye showed a small circular hyperreflective area, with central hyporeflectivity in the outer retina of the right eye. The left eye showed a hyperreflective lesion that intensified in area from inner to middle retina and became mostly hyporeflective in the outer retina. The shape of the lesion on adaptive optics and en face OCT images of the left eye corresponded to the shape of the scotoma drawn by the patient on Amsler grid. Acute solar retinopathy can present with foveal cone photoreceptor mosaic disturbances on adaptive optics scanning light ophthalmoscopy imaging. Corresponding reflectivity changes can be seen on en face OCT, especially

  15. Seeing Jesus in toast: neural and behavioral correlates of face pareidolia.

    Science.gov (United States)

    Liu, Jiangang; Li, Jun; Feng, Lu; Li, Ling; Tian, Jie; Lee, Kang

    2014-04-01

    Face pareidolia is the illusory perception of non-existent faces. The present study, for the first time, contrasted behavioral and neural responses of face pareidolia with those of letter pareidolia to explore face-specific behavioral and neural responses during illusory face processing. Participants were shown pure-noise images but were led to believe that 50% of them contained either faces or letters; they reported seeing faces or letters illusorily 34% and 38% of the time, respectively. The right fusiform face area (rFFA) showed a specific response when participants "saw" faces as opposed to letters in the pure-noise images. Behavioral responses during face pareidolia produced a classification image (CI) that resembled a face, whereas those during letter pareidolia produced a CI that was letter-like. Further, the extent to which such behavioral CIs resembled faces was directly related to the level of face-specific activations in the rFFA. This finding suggests that the rFFA plays a specific role not only in processing of real faces but also in illusory face perception, perhaps serving to facilitate the interaction between bottom-up information from the primary visual cortex and top-down signals from the prefrontal cortex (PFC). Whole brain analyses revealed a network specialized in face pareidolia, including both the frontal and occipitotemporal regions. Our findings suggest that human face processing has a strong top-down component whereby sensory input with even the slightest suggestion of a face can result in the interpretation of a face. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. Face recognition system and method using face pattern words and face pattern bytes

    Science.gov (United States)

    Zheng, Yufeng

    2014-12-23

    The present invention provides a novel system and method for identifying individuals and for face recognition utilizing facial features for face identification. The system and method of the invention comprise creating facial features or face patterns, called face pattern words and face pattern bytes, for face identification. The invention also provides for pattern recognition for identification tasks other than face recognition. The invention further provides a means for identifying individuals based on visible and/or thermal images of those individuals by utilizing computer software implemented by instructions on a computer or computer system and a computer-readable medium containing instructions on a computer system for face recognition and identification.

  17. Social cognition in autism: Face tuning.

    Science.gov (United States)

    Pavlova, Marina A; Guerreschi, Michele; Tagliavento, Lucia; Gitti, Filippo; Sokolov, Alexander N; Fallgatter, Andreas J; Fazzi, Elisa

    2017-05-26

    Faces convey valuable information for social cognition, effective interpersonal interaction, and non-verbal communication. Face perception is believed to be atypical in autism, but the origin of this deficit is controversial. Dominant featural face encoding has been suggested to be responsible for the scarcity of face tuning. Here we used a recently developed Face-n-Food paradigm for studying face tuning in individuals with autism spectrum disorders (ASD). The key benefit of these images is that their single components do not explicitly trigger face processing. In a spontaneous recognition task, adolescents with autism and typically developing matched controls were presented with a set of Face-n-Food images resembling a face to different degrees (loosely in the style of Giuseppe Arcimboldo). The set of images was shown in a predetermined order, from the least to the most face-resembling. Thresholds for recognizing the Face-n-Food images as a face were substantially higher in ASD individuals than in typically developing controls: they did not report seeing a face in images that controls easily recognized as faces, and they gave fewer face responses overall. This outcome not only lends support to atypical face tuning but also provides novel insights into the origin of face encoding deficits in autism.

  18. A Survey on Sentiment Classification in Face Recognition

    Science.gov (United States)

    Qian, Jingyu

    2018-01-01

    Face recognition has been an important topic for both industry and academia for a long time. K-means clustering, autoencoders, and convolutional neural networks, each representing a distinct design approach, are three popular algorithms for face recognition problems. It is worthwhile to summarize and compare these three different algorithms. This paper focuses on one specific face recognition problem: sentiment classification from images. Three different algorithms for sentiment classification are summarized: k-means clustering, the autoencoder, and the convolutional neural network. An experiment applying these algorithms to a specific dataset of human faces is conducted to illustrate how they are applied and their accuracy. Finally, the three algorithms are compared based on the accuracy results.
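    The first of the three algorithms can be sketched in a few lines; this is a toy k-means implementation with deterministic first-k initialization, operating on 2-D stand-in feature vectors rather than real face images:

    ```python
    def kmeans(points, k, iters=20):
        """Plain k-means: assign each point to its nearest centroid, then re-average."""
        centroids = [tuple(p) for p in points[:k]]   # naive deterministic initialization
        for _ in range(iters):
            clusters = [[] for _ in range(k)]
            for p in points:
                nearest = min(range(k),
                              key=lambda idx: sum((a - b) ** 2
                                                  for a, b in zip(p, centroids[idx])))
                clusters[nearest].append(p)
            centroids = [tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centroids[j]
                         for j, cl in enumerate(clusters)]
        return centroids

    # Two well-separated blobs standing in for two sentiment classes.
    features = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
    centers = sorted(kmeans(features, 2))
    ```

    In the unsupervised setting the paper describes, each resulting cluster would then be mapped to a sentiment label, e.g. by majority vote over labeled examples.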

  19. Pleasant and unpleasant odors influence hedonic evaluations of human faces: an event-related potential study.

    Directory of Open Access Journals (Sweden)

    Stephanie Jane Cook

    2015-12-01

    Odors can alter hedonic evaluations of human faces, but the neural mechanisms of such effects are poorly understood. The present study aimed to analyze the neural underpinnings of odor-induced changes in evaluations of human faces in an odor-priming paradigm, using event-related potentials (ERPs). Healthy, young participants (N = 20) rated neutral faces presented after a three-second pulse of a pleasant odor (jasmine), an unpleasant odor (methylmercaptan), or a no-odor control (clean air). Neutral faces presented in the pleasant odor condition were rated more pleasant than the same faces presented in the no-odor control condition, which in turn were rated more pleasant than faces in the unpleasant odor condition. Analysis of face-related potentials revealed four clusters of electrodes significantly affected by odor condition at specific time points during long-latency epochs (600−950 ms). In the 620−640 ms interval, two scalp-time clusters showed greater negative potential in the right parietal electrodes in response to faces in the pleasant odor condition, compared to those in the no-odor and unpleasant odor conditions. At 926 ms, face-related potentials showed greater positivity in response to faces in the pleasant and unpleasant odor conditions at the left and right lateral frontal-temporal electrodes, respectively. Our data show that odor-induced shifts in evaluations of faces were associated with amplitude changes in the late (>600 ms) and ultra-late (>900 ms) latency epochs. The observed amplitude changes during the ultra-late epoch are consistent with a left/right hemisphere bias towards pleasant/unpleasant odor effects. Odors alter evaluations of human faces even when there is a temporal lag between presentation of odors and faces. Our results provide an initial understanding of the neural mechanisms underlying the effects of odors on hedonic evaluations.

  20. Deep Convolutional Neural Networks for Classifying Body Constitution Based on Face Image.

    Science.gov (United States)

    Huan, Er-Yang; Wen, Gui-Hua; Zhang, Shi-Jun; Li, Dan-Yang; Hu, Yang; Chang, Tian-Yuan; Wang, Qing; Huang, Bing-Lin

    2017-01-01

    Body constitution classification is the basis and core content of traditional Chinese medicine constitution research. Its goal is to extract regularities from complex constitution phenomena and ultimately build a constitution classification system. Traditional identification methods, such as questionnaires, suffer from inefficiency and low accuracy. This paper proposes a body constitution recognition algorithm based on a deep convolutional neural network, which can classify individual constitution types from face images. The proposed model first uses the convolutional neural network to extract features from the face image and then combines the extracted features with color features. Finally, the fused features are input to a Softmax classifier to obtain the classification result. Comparison experiments show that the proposed algorithm achieves an accuracy of 65.29% in constitution classification, and its performance was accepted by Chinese medicine practitioners.
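    The fusion-then-Softmax step described above can be sketched with a plain linear softmax layer trained by gradient descent. Everything below (feature dimensions, synthetic data, learning rate) is illustrative, not the paper's network:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def softmax(z):
        z = z - z.max(axis=1, keepdims=True)     # numerical stability
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    # Stand-in fused features: 64-d "deep" features concatenated with 8-d color features.
    deep = rng.normal(size=(90, 64))
    color = rng.normal(size=(90, 8))
    X = np.hstack([deep, color])
    y = np.repeat(np.arange(3), 30)              # three hypothetical constitution types
    X[y == 1, :2] += 3.0                         # inject class structure so the toy task is learnable
    X[y == 2, 2:4] += 3.0

    W, b = np.zeros((72, 3)), np.zeros(3)
    for _ in range(300):                         # batch gradient descent on cross-entropy
        P = softmax(X @ W + b)
        G = P.copy()
        G[np.arange(len(y)), y] -= 1.0           # gradient of cross-entropy w.r.t. logits
        G /= len(y)
        W -= 0.5 * (X.T @ G)
        b -= 0.5 * G.sum(axis=0)

    train_acc = (softmax(X @ W + b).argmax(axis=1) == y).mean()
    ```

    In the paper's pipeline the first block of X would come from the CNN's penultimate layer rather than from a random generator, but the fusion and classification logic is the same.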

  1. Capturing specific abilities as a window into human individuality: the example of face recognition.

    Science.gov (United States)

    Wilmer, Jeremy B; Germine, Laura; Chabris, Christopher F; Chatterjee, Garga; Gerbasi, Margaret; Nakayama, Ken

    2012-01-01

    Proper characterization of each individual's unique pattern of strengths and weaknesses requires good measures of diverse abilities. Here, we advocate combining our growing understanding of neural and cognitive mechanisms with modern psychometric methods in a renewed effort to capture human individuality through a consideration of specific abilities. We articulate five criteria for the isolation and measurement of specific abilities, then apply these criteria to face recognition. We cleanly dissociate face recognition from more general visual and verbal recognition. This dissociation stretches across ability as well as disability, suggesting that specific developmental face recognition deficits are a special case of a broader specificity that spans the entire spectrum of human face recognition performance. Item-by-item results from 1,471 web-tested participants, included as supplementary information, fuel item analyses, validation, norming, and item response theory (IRT) analyses of our three tests: (a) the widely used Cambridge Face Memory Test (CFMT); (b) an Abstract Art Memory Test (AAMT), and (c) a Verbal Paired-Associates Memory Test (VPMT). The availability of this data set provides a solid foundation for interpreting future scores on these tests. We argue that the allied fields of experimental psychology, cognitive neuroscience, and vision science could fuel the discovery of additional specific abilities to add to face recognition, thereby providing new perspectives on human individuality.

  2. Human Bites of the Face with Tissue Losses in Cosmopolitan ...

    African Journals Online (AJOL)

    Dr. Milaki Asuku

    A retrospective series of thirty-six cases of human bites to the face with tissue losses requiring reconstruction ..... bite wounds when compared to other forms of trauma in our regional ...

  3. Rear-facing car seat (image)

    Science.gov (United States)

    A rear-facing car seat position is recommended for a child who is very young. Extreme injury can occur in an accident because ... child. In a frontal crash a rear-facing car seat is best, because it cradles the head, ...

  4. Face cognition in humans: Psychophysiological, developmental, and cross-cultural aspects

    Directory of Open Access Journals (Sweden)

    Chernorizov A. M.

    2016-12-01

    Investigators are finding increasing evidence for cross-cultural specificity in face cognition along with individual characteristics. The functions on which face cognition is based are not only types of general cognitive functions (perception, memory) but also elements of specific mental processes. Face perception, memorization, correct recognition of faces, and understanding the information that faces provide are essential skills for humans as a social species and can be considered facets of social (cultural) intelligence. Face cognition is a difficult, multifaceted set of processes. The systems and processes involved in perceiving and recognizing faces are captured by several models focusing on the pertinent functions or including the presumably underlying neuroanatomical substrates. Thus, the study of face-cognition mechanisms is a cross-disciplinary topic. In Russia, Germany, and China there are plans to organize an interdisciplinary cross-cultural study of face cognition. The first step of this scientific interaction is conducting psychological and psychophysiological studies of face cognition in multinational Russia within the framework of a grant supported by the Russian Science Foundation and devoted to "cross-cultural tolerance". For that reason, and in the presence of the huge diversity of data concerning face cognition, we suggest for discussion, specifically within the psychological scientific community, three aspects of face cognition: (1) psychophysiological (quantitative data), (2) developmental (qualitative data from developmental psychology), and (3) cross-cultural (qualitative data from cross-cultural studies). These three aspects reflect the different levels of investigation and constitute a comprehensive, multilateral approach to the problem. Unfortunately, as a rule, neuropsychological and psychological investigations are carried out independently of each other. However, for the purposes of our overview here, we assume that the

  5. DeitY-TU face database: its design, multiple camera capturing, characteristics, and evaluation

    Science.gov (United States)

    Bhowmik, Mrinal Kanti; Saha, Kankan; Saha, Priya; Bhattacharjee, Debotosh

    2014-10-01

    The development of the latest face databases is providing researchers with different and realistic problems that play an important role in the development of efficient algorithms for the automatic recognition of human faces. This paper presents the creation of a new visual face database, named the Department of Electronics and Information Technology-Tripura University (DeitY-TU) face database. It contains face images of 524 persons belonging to different non-tribes and Mongolian tribes of north-east India, with their anthropometric measurements for identification. Database images were captured within a room with controlled variations in illumination, expression, and pose, along with variability in age, gender, accessories, make-up, and partial occlusion. Each image contains the combined primary challenges of face recognition, i.e., illumination, expression, and pose. The database also offers some new features: soft biometric traits such as moles, freckles, scars, etc., and facial anthropometric variations that may be helpful for biometric recognition research. It also provides a comparative study of existing two-dimensional face image databases. The database has been tested using two baseline algorithms, linear discriminant analysis and principal component analysis, whose performance scores may serve as a control for other researchers.
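    The principal component analysis baseline mentioned above (the classic eigenfaces approach) can be sketched in a few lines: center the gallery images, keep the top right-singular vectors as principal axes, and match probes by nearest neighbor in the projected space. The toy random "identities" below stand in for the database's real images:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def pca_fit(X, n_components):
        """Eigenface-style PCA: rows of X are flattened face images."""
        mean = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
        return mean, Vt[:n_components]           # top principal axes

    def project(X, mean, axes):
        return (X - mean) @ axes.T

    # Toy data: 5 "identities" as flattened 16x16 prototype patterns, plus per-image noise.
    protos = rng.normal(size=(5, 256))
    gallery = protos + 0.1 * rng.normal(size=(5, 256))
    probes = protos + 0.1 * rng.normal(size=(5, 256))

    mean, axes = pca_fit(gallery, n_components=4)
    G, P = project(gallery, mean, axes), project(probes, mean, axes)
    # Nearest-neighbor matching of each probe against the gallery in eigenspace.
    pred = np.argmin(((P[:, None, :] - G[None, :, :]) ** 2).sum(axis=-1), axis=1)
    ```

    A linear discriminant analysis baseline differs only in how the projection axes are chosen (maximizing between-class over within-class scatter rather than total variance).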

  6. Implementation of perceptual aspects in a face recognition algorithm

    International Nuclear Information System (INIS)

    Crenna, F; Bovio, L; Rossi, G B; Zappa, E; Testa, R; Gasparetto, M

    2013-01-01

    Automatic face recognition is a biometric technique particularly appreciated in security applications. In fact, face recognition offers the opportunity to operate at a low invasive level without the collaboration of the subjects under test, with face images gathered either from surveillance systems or from specific cameras located at strategic points. The automatic recognition algorithms perform a measurement, on the face images, of a set of specific characteristics of the subject and provide a recognition decision based on the measurement results. Unfortunately, several factors may influence the measurement of the face geometry, such as its orientation, the lighting conditions, the expression, and so on, affecting the recognition rate. On the other hand, human recognition of faces is a very robust process, far less influenced by the surrounding conditions. For this reason it may be interesting to incorporate perceptual aspects into an automatic facial-based recognition algorithm to improve its robustness. This paper presents a first study in this direction, investigating the correlation between the results of a perception experiment and the facial geometry, estimated by means of the positions of a set of reference points.

  7. Real Time Face Quality Assessment for Face Log Generation

    DEFF Research Database (Denmark)

    Kamal, Nasrollahi; Moeslund, Thomas B.

    2009-01-01

    Summarizing a long surveillance video into just a few best-quality face images of each subject, a face log, is of great importance in surveillance systems. Face quality assessment is the backbone of face-log generation, and improving the quality assessment makes the face logs more reliable. Developing a real-time face quality assessment system using the most important facial features and employing it for face-log generation are the concerns of this paper. Extensive tests using four databases are carried out to validate the usability of the system.
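
    The abstract does not enumerate the exact feature set, but a face-quality score of this kind is typically a weighted combination of per-image cues. A toy sketch, with illustrative cues and weights of our choosing:

```python
import numpy as np

def face_quality(gray, w=(0.4, 0.3, 0.3)):
    """Toy quality score in [0, 1] combining three cues often used in
    face-log work: sharpness, brightness, and resolution. The cues and
    weights here are illustrative, not the paper's exact feature set.
    gray: 2-D array of pixel intensities in [0, 255]."""
    # Sharpness: variance of a discrete Laplacian, squashed to [0, 1].
    lap = (np.roll(gray, 1, 0) + np.roll(gray, -1, 0)
           + np.roll(gray, 1, 1) + np.roll(gray, -1, 1) - 4 * gray)
    sharp = 1.0 - 1.0 / (1.0 + lap.var() / 100.0)
    # Brightness: best near mid-grey (128), linearly penalised elsewhere.
    bright = 1.0 - abs(gray.mean() - 128.0) / 128.0
    # Resolution: saturates once the face crop reaches 64x64 pixels.
    res = min(1.0, min(gray.shape) / 64.0)
    return w[0] * sharp + w[1] * bright + w[2] * res
```

    A face log would then keep, per track, the few frames with the highest scores.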

  8. Interactive searching of facial image databases

    Science.gov (United States)

    Nicholls, Robert A.; Shepherd, John W.; Shepherd, Jean

    1995-09-01

    A set of psychological facial descriptors has been devised to enable computerized searching of criminal photograph albums. The descriptors have been used to encode image databases of up to twelve thousand images. Using a system called FACES, the databases are searched by translating a witness' verbal description into corresponding facial descriptors. Trials of FACES have shown that this coding scheme is more productive and efficient than searching traditional photograph albums. An alternative method of searching the encoded database using a genetic algorithm is currently being tested. The genetic search method does not require the witness to verbalize a description of the target but merely to indicate a degree of similarity between the target and a limited selection of images from the database. The major drawback of FACES is that it requires manual encoding of images. Research is being undertaken to automate this process; however, it will require an algorithm that can predict human descriptive values. Alternatives to human-derived coding schemes exist in the form of statistical classifications of images. Since databases encoded using statistical classifiers do not have an obvious direct mapping to human-derived descriptors, a search method that does not require the entry of human descriptors is needed. A genetic search algorithm is being tested for this purpose.
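
    A minimal sketch of such a similarity-driven genetic search, with the witness's judgements stood in for by a similarity callback (population size, mutation rate, and descriptor dimensionality are illustrative assumptions):

```python
import numpy as np

def ga_search(similarity, dim=8, pop_size=12, generations=40, seed=0):
    """Genetic search over face-descriptor vectors. Instead of a verbal
    description, the 'witness' only scores the similarity of shown
    candidates; here the similarity callback stands in for those
    judgements."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-1, 1, (pop_size, dim))
    for _ in range(generations):
        scores = np.array([similarity(ind) for ind in pop])
        order = np.argsort(scores)[::-1]
        elite = pop[order[: pop_size // 2]]           # selection
        parents = elite[rng.integers(0, len(elite), (pop_size, 2))]
        mask = rng.random((pop_size, dim)) < 0.5      # uniform crossover
        pop = np.where(mask, parents[:, 0], parents[:, 1])
        pop += rng.normal(0, 0.05, pop.shape)         # mutation
    scores = np.array([similarity(ind) for ind in pop])
    return pop[np.argmax(scores)]
```

    In the real system each generation would display the decoded candidate faces and collect similarity ratings interactively.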

  9. Social Cognition in Williams Syndrome: Face Tuning.

    Science.gov (United States)

    Pavlova, Marina A; Heiz, Julie; Sokolov, Alexander N; Barisnikov, Koviljka

    2016-01-01

    Many neurological, neurodevelopmental, neuropsychiatric, and psychosomatic disorders are characterized by impairments in visual social cognition, body language reading, and facial assessment of a social counterpart. Yet a wealth of research indicates that individuals with Williams syndrome exhibit remarkable concern for social stimuli and fascination with faces. Here, individuals with Williams syndrome were presented with a set of Face-n-Food images composed of food ingredients and resembling a face to different degrees (slightly bordering on the Giuseppe Arcimboldo style). The primary advantage of these images is that their single components do not explicitly trigger face-specific processing, whereas in face images commonly used for investigating face perception (such as photographs or depictions), the mere occurrence of typical cues already signals face presence. In a spontaneous recognition task, participants were shown a set of images in a predetermined order from the least to the most face-resembling. Strikingly, individuals with Williams syndrome exhibited profound deficits in recognizing the Face-n-Food images as faces: they did not report seeing a face in images that typically developing controls effortlessly recognized as a face, and gave fewer face responses overall. This suggests atypical face tuning in Williams syndrome. The outcome is discussed in the light of the general pattern of social cognition in Williams syndrome and the brain mechanisms underpinning face processing.

  10. IMPLEMENTATION OF ARTIFICIAL NEURAL NETWORK FOR FACE RECOGNITION USING GABOR FEATURE EXTRACTION

    Directory of Open Access Journals (Sweden)

    Muthukannan K

    2013-11-01

    Full Text Available Face detection and recognition is the first step in many applications across various fields, such as identification, access control for electronic devices, video surveillance, human-computer interfaces, and image database management. This paper focuses on feature extraction from an image using Gabor filters; the extracted feature vector is then given as input to a neural network, which is trained with the input data. The Gabor wavelet concentrates on the important components of the face, including the eyes, mouth, nose, and cheeks. The main requirement of this technique is the threshold, which governs its sensitivity. The threshold values are the feature vectors taken from the faces. These feature vectors are fed into a feed-forward neural network to train it. Using the feed-forward neural network as a classifier, recognized and unrecognized faces are distinguished; the classifier attains a high face detection rate. Training with more input vectors makes the system more effective. The effectiveness of the proposed method is demonstrated by the experimental results.
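
    A hedged sketch of the Gabor feature-extraction stage: a small filter bank is applied and the mean response magnitude per orientation is taken as the feature vector that would feed the network (kernel parameters are illustrative, not the paper's):

```python
import numpy as np

def gabor_kernel(theta, lam=8.0, sigma=4.0, size=15):
    """Real part of a Gabor kernel at orientation theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return (np.exp(-(x**2 + y**2) / (2 * sigma**2))
            * np.cos(2 * np.pi * xr / lam))

def gabor_features(img, n_orient=4):
    """Mean filter-response magnitude per orientation: a compact feature
    vector of the kind that could feed a feed-forward network."""
    feats = []
    for k in range(n_orient):
        ker = gabor_kernel(np.pi * k / n_orient)
        # Circular convolution via FFT, for brevity.
        resp = np.real(np.fft.ifft2(np.fft.fft2(img) *
                                    np.fft.fft2(ker, img.shape)))
        feats.append(np.abs(resp).mean())
    return np.array(feats)
```

    In practice one would use several scales as well as orientations, and pool responses over facial landmark regions rather than the whole image.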

  11. Neural synchronization during face-to-face communication

    OpenAIRE

    Jiang, J.; Dai, B.; Peng, D.; Zhu, C.; Liu, L.; Lu, C.

    2012-01-01

    Although the human brain may have evolutionarily adapted to face-to-face communication, other modes of communication, e.g., telephone and e-mail, increasingly dominate our modern daily life. This study examined the neural difference between face-to-face communication and other types of communication by simultaneously measuring two brains using a hyperscanning approach. The results showed a significant increase in the neural synchronization in the left inferior frontal cortex during a face-to-...

  12. Elevated Amygdala Response to Faces following Early Deprivation

    Science.gov (United States)

    Tottenham, N.; Hare, T. A.; Millner, A.; Gilhooly, T.; Zevin, J. D.; Casey, B. J.

    2011-01-01

    A functional neuroimaging study examined the long-term neural correlates of early adverse rearing conditions in humans as they relate to socio-emotional development. Previously institutionalized (PI) children and a same-aged comparison group were scanned using functional magnetic resonance imaging (fMRI) while performing an Emotional Face Go/Nogo…

  13. Correction of rotational distortion for catheter-based en face OCT and OCT angiography

    Science.gov (United States)

    Ahsen, Osman O.; Lee, Hsiang-Chieh; Giacomelli, Michael G.; Wang, Zhao; Liang, Kaicheng; Tsai, Tsung-Han; Potsaid, Benjamin; Mashimo, Hiroshi; Fujimoto, James G.

    2015-01-01

    We demonstrate a computationally efficient method for correcting the nonuniform rotational distortion (NURD) in catheter-based imaging systems to improve endoscopic en face optical coherence tomography (OCT) and OCT angiography. The method performs nonrigid registration using fiducial markers on the catheter to correct rotational speed variations. Algorithm performance is investigated with an ultrahigh-speed endoscopic OCT system and micromotor catheter. Scan nonuniformity is quantitatively characterized, and artifacts from rotational speed variations are significantly reduced. Furthermore, we present endoscopic en face OCT and OCT angiography images of human gastrointestinal tract in vivo to demonstrate the image quality improvement using the correction algorithm. PMID:25361133
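
    As a simplified sketch of the idea, the following resamples one circumferential frame onto a uniform angular grid using detected fiducial positions. The paper performs full nonrigid registration; this sketch assumes piecewise-linear rotation speed between markers:

```python
import numpy as np

def correct_nurd(frame, marker_cols, marker_angles):
    """Resample one circumferential frame (rows = depth, cols = A-lines)
    onto a uniform angular grid. marker_cols are the A-line indices where
    fiducial markers on the catheter were detected; marker_angles are the
    markers' known physical angles (radians)."""
    n = frame.shape[1]
    # Estimated angle of every recorded A-line (piecewise linear).
    angles = np.interp(np.arange(n), marker_cols, marker_angles)
    uniform = np.linspace(angles[0], angles[-1], n)
    out = np.empty_like(frame, dtype=float)
    for r in range(frame.shape[0]):
        out[r] = np.interp(uniform, angles, frame[r])
    return out
```

    For en face OCT and angiography the same per-frame angular correction is applied across the pullback before the en face projection is formed.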

  14. 3D quantitative analysis of early decomposition changes of the human face.

    Science.gov (United States)

    Caplova, Zuzana; Gibelli, Daniele Maria; Poppa, Pasquale; Cummaudo, Marco; Obertova, Zuzana; Sforza, Chiarella; Cattaneo, Cristina

    2018-03-01

    Decomposition of the human body and human face is influenced, among other things, by environmental conditions. The early decomposition changes that modify the appearance of the face may hamper the recognition and identification of the deceased. Quantitative assessment of those changes may provide important information for forensic identification. This report presents a pilot 3D quantitative approach of tracking early decomposition changes of a single cadaver in controlled environmental conditions by summarizing the change with weekly morphological descriptions. The root mean square (RMS) value was used to evaluate the changes of the face after death. The results showed a high correlation (r = 0.863) between the measured RMS and the time since death. RMS values of each scan are presented, as well as the average weekly RMS values. The quantification of decomposition changes could improve the accuracy of antemortem facial approximation and potentially could allow the direct comparisons of antemortem and postmortem 3D scans.
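
    The RMS metric itself is straightforward; assuming two registered scans with vertex-to-vertex correspondence, it can be computed as:

```python
import numpy as np

def rms_change(scan_a, scan_b):
    """Root mean square distance between corresponding vertices of two
    registered 3-D face scans: the per-scan quantity correlated with
    time since death in the report.
    scan_a, scan_b: (n_vertices, 3) arrays."""
    d = np.linalg.norm(scan_a - scan_b, axis=1)
    return float(np.sqrt(np.mean(d**2)))
```

    The reported r = 0.863 is then simply the Pearson correlation between these per-scan RMS values and elapsed time.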

  15. Lip colour affects perceived sex typicality and attractiveness of human faces.

    Science.gov (United States)

    Stephen, Ian D; McKeegan, Angela M

    2010-01-01

    The luminance contrast between facial features and facial skin is greater in women than in men, and women's use of make-up enhances this contrast. In black-and-white photographs, increased luminance contrast enhances femininity and attractiveness in women's faces, but reduces masculinity and attractiveness in men's faces. In Caucasians, much of the contrast between the lips and facial skin is in redness. Red lips have been considered attractive in women in geographically and temporally diverse cultures, possibly because they mimic vasodilation associated with sexual arousal. Here, we investigate the effects of lip luminance and colour contrast on the attractiveness and sex typicality (masculinity/femininity) of human faces. In a Caucasian sample, we allowed participants to manipulate the colour of the lips in colour-calibrated face photographs along the CIELab L* (light-dark), a* (red-green), and b* (yellow-blue) axes to enhance apparent attractiveness and sex typicality. Participants increased redness contrast to enhance femininity and attractiveness of female faces, but reduced redness contrast to enhance masculinity of men's faces. Lip blueness was reduced more in female than male faces. Increased lightness contrast enhanced the attractiveness of both sexes, and had little effect on perceptions of sex typicality. The association between lip colour contrast and attractiveness in women's faces may be attributable to its association with oxygenated blood perfusion indicating oestrogen levels, sexual arousal, and cardiac and respiratory health.
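
    The CIELab manipulation can be sketched directly. Assuming an image already converted to CIELab along with lip and skin masks, the following shifts lip redness and measures the resulting contrast (helper names are ours, for illustration):

```python
import numpy as np

def shift_lip_redness(lab, lip_mask, delta_a):
    """Shift the CIELab a* (red-green) channel inside the lip region,
    mimicking the redness-contrast manipulation participants performed.
    lab: (h, w, 3) image in CIELab; positive delta_a increases redness."""
    out = lab.astype(float).copy()
    out[..., 1] = np.where(lip_mask, out[..., 1] + delta_a, out[..., 1])
    return out

def redness_contrast(lab, lip_mask, skin_mask):
    """Mean lip a* minus mean skin a*: the contrast the study varied."""
    return float(lab[..., 1][lip_mask].mean()
                 - lab[..., 1][skin_mask].mean())
```

    Analogous shifts on the L* and b* channels give the lightness-contrast and blueness manipulations described above.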

  16. Learning discriminant face descriptor.

    Science.gov (United States)

    Lei, Zhen; Pietikäinen, Matti; Li, Stan Z

    2014-02-01

    Local feature descriptors are an important module for face recognition, and descriptors such as Gabor and local binary patterns (LBP) have proven effective. Traditionally, the form of such local descriptors is predefined in a handcrafted way. In this paper, we propose a method to learn a discriminant face descriptor (DFD) in a data-driven way. The idea is to learn the most discriminant local features, minimizing the difference between features of images of the same person and maximizing it between images of different people. In particular, we propose to enhance the discriminative ability of the face representation in three aspects. First, discriminant image filters are learned. Second, the optimal neighborhood sampling strategy is softly determined. Third, the dominant patterns are statistically constructed. Discriminative learning is incorporated to extract effective and robust features. We further apply the proposed method to the heterogeneous (cross-modality) face recognition problem and learn the DFD in a coupled way (coupled DFD, or C-DFD) to reduce the gap between features of heterogeneous face images and improve performance on this challenging problem. Extensive experiments on the FERET, CAS-PEAL-R1, LFW, and HFB face databases validate the effectiveness of the proposed DFD learning on both homogeneous and heterogeneous face recognition problems. The DFD improves POEM and LQP by about 4.5 percent on the LFW database, and the C-DFD enhances the heterogeneous face recognition performance of LBP by over 25 percent.

  17. High-accuracy and robust face recognition system based on optical parallel correlator using a temporal image sequence

    Science.gov (United States)

    Watanabe, Eriko; Ishikawa, Mami; Ohta, Maiko; Kodate, Kashiko

    2005-09-01

    Face recognition is used in a wide range of security systems, such as monitoring credit card use, searching for individuals with street cameras via the Internet, and maintaining immigration control. Many technical subjects remain under study; for instance, the number of images that can be stored is limited under the current system, and the recognition rate must be improved to account for photographs taken at different angles under various conditions. We implemented a fully automatic Fast Face Recognition Optical Correlator (FARCO) system using a 1000 frame/s optical parallel correlator designed and assembled by us. Operational speed for the 1:N identification experiment (i.e., matching an image against N, where N is the number of images in the database; here 4000 face images) amounts to less than 1.5 seconds, including pre/post processing. From trial 1:N identification experiments using FARCO, we obtained low error rates: a 2.6% False Reject Rate and a 1.3% False Accept Rate. By exploiting the high-speed data-processing capability of this system, much greater robustness can be achieved across recognition conditions when large-category data are registered for a single person. We propose a face recognition algorithm for the FARCO that employs a temporal sequence of moving images. Applied to subjects in natural posture, this algorithm scored a recognition rate twice as high as our conventional system. The system has high potential for a variety of future uses, such as searching for criminal suspects with street and airport video cameras, registration of newborns at hospitals, or handling very large numbers of images in a database.
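
    The quoted error rates are standard biometric quantities computed from genuine (same-person) and impostor (different-person) match-score distributions at a decision threshold. A minimal sketch:

```python
import numpy as np

def far_frr(genuine_scores, impostor_scores, threshold):
    """False Accept Rate and False Reject Rate at a given similarity
    threshold: the two error rates quoted for the FARCO trials."""
    genuine = np.asarray(genuine_scores, float)
    impostor = np.asarray(impostor_scores, float)
    frr = np.mean(genuine < threshold)     # true matches rejected
    far = np.mean(impostor >= threshold)   # non-matches accepted
    return float(far), float(frr)
```

    Sweeping the threshold trades the two rates against each other; a system is often summarized by the point where they are equal.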

  18. RGB-D-T based Face Recognition

    DEFF Research Database (Denmark)

    Nikisins, Olegs; Nasrollahi, Kamal; Greitans, Modris

    2014-01-01

    Facial images are of critical importance in many real-world applications, from gaming to surveillance. The current literature on facial image analysis, from face detection to face and facial expression recognition, mainly uses the RGB and Depth (D) modalities, individually or together, but such analyses have rarely included the Thermal (T) modality. This paper paves the way for performing such facial analyses using synchronized RGB-D-T facial images by introducing a database of 51 persons that includes facial images with different rotations, illuminations, and expressions. Furthermore, a face recognition algorithm has been developed to use these images. The experimental results show that face recognition using the three modalities together provides better results than face recognition in any single modality in most cases.
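
    One simple way the three modalities could be combined is weighted score-level fusion; the paper's actual fusion rule may differ, so treat this as an illustrative sketch:

```python
import numpy as np

def fuse_modalities(scores_rgb, scores_d, scores_t,
                    weights=(1 / 3, 1 / 3, 1 / 3)):
    """Weighted score-level fusion of per-gallery-identity match scores
    from RGB, depth, and thermal pipelines; returns the index of the
    best-matching gallery identity."""
    fused = (weights[0] * np.asarray(scores_rgb)
             + weights[1] * np.asarray(scores_d)
             + weights[2] * np.asarray(scores_t))
    return int(np.argmax(fused))
```

    A weakly lit RGB pipeline can then be outvoted by confident depth and thermal matches, which is the intuition behind multimodal gains.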

  19. Magnetic resonance imaging - first human images in Australia

    International Nuclear Information System (INIS)

    Baddeley, H.; Doddrell, D.M.; Brooks, W.M.; Field, J.; Irving, M.; Williams, J.E.

    1986-01-01

    The use of magnetic resonance imaging, in the demonstration of internal human anatomy and in the diagnosis of disease, has the major advantages that the technique is non-invasive, does not require the use of ionizing radiation and that it can demonstrate neurological and cardiovascular lesions that cannot be diagnosed easily by other imaging methods. The first magnetic resonance images of humans were obtained in Australia in October 1985 on the research instrument of the Queensland Medical Magnetic Resonance Research Centre, which is based at the Mater Hospital in Brisbane

  20. A truly human interface: Interacting face-to-face with someone whose words are determined by a computer program

    Directory of Open Access Journals (Sweden)

    Kevin eCorti

    2015-05-01

    Full Text Available We use speech shadowing to create situations wherein people converse in person with a human whose words are determined by a conversational agent computer program. Speech shadowing involves a person (the shadower) repeating vocal stimuli originating from a separate communication source in real time. Humans shadowing for conversational agent sources (e.g., chat bots) become hybrid agents (echoborgs) capable of face-to-face interlocution. We report three studies that investigated people's experiences interacting with echoborgs and the extent to which echoborgs pass as autonomous humans. First, participants in a Turing Test spoke with a chat bot via either a text interface or an echoborg. Human shadowing did not improve the chat bot's chance of passing but did increase interrogators' ratings of how human-like the chat bot seemed. In our second study, participants had to decide whether their interlocutor produced words generated by a chat bot or simply pretended to be one. Compared to those who engaged a text interface, participants who engaged an echoborg were more likely to perceive their interlocutor as pretending to be a chat bot. In our third study, participants were naïve to the fact that their interlocutor produced words generated by a chat bot. Unlike those who engaged a text interface, the vast majority of participants who engaged an echoborg neither sensed nor suspected a robotic interaction. These findings have implications for android science, the Turing Test paradigm, and human-computer interaction. The human body, as the delivery mechanism of communication, fundamentally alters the social psychological dynamics of interactions with machine intelligence.

  1. Building a 3-D Appearance Model of the Human Face

    DEFF Research Database (Denmark)

    Sjöstrand, Karl; Larsen, Rasmus; Lading, Brian

    2003-01-01

    This paper describes a method for building an appearance model from three-dimensional data of human faces. The data consists of 3-D vertices, polygons and a texture map. The method uses a set of nine manually placed landmarks to automatically form a dense correspondence of thousands of points...

  2. Visual adaptation of the perception of "life": animacy is a basic perceptual dimension of faces.

    Science.gov (United States)

    Koldewyn, Kami; Hanus, Patricia; Balas, Benjamin

    2014-08-01

    One critical component of understanding another's mind is the perception of "life" in a face. However, little is known about the cognitive and neural mechanisms underlying this perception of animacy. Here, using a visual adaptation paradigm, we ask whether face animacy is (1) a basic dimension of face perception and (2) supported by a common neural mechanism across distinct face categories defined by age and species. Observers rated the perceived animacy of adult human faces before and after adaptation to (1) adult faces, (2) child faces, and (3) dog faces. When testing the perception of animacy in human faces, we found significant adaptation to both adult and child faces, but not dog faces. We did, however, find significant adaptation when morphed dog images and dog adaptors were used. Thus, animacy perception in faces appears to be a basic dimension of face perception that is species specific but not constrained by age categories.

  3. Pet Face: Mechanisms Underlying Human-Animal Relationships.

    Science.gov (United States)

    Borgi, Marta; Cirulli, Francesca

    2016-01-01

    Accumulating behavioral and neurophysiological studies support the idea of infantile (cute) faces as highly biologically relevant stimuli rapidly and unconsciously capturing attention and eliciting positive/affectionate behaviors, including willingness to care. It has been hypothesized that the presence of infantile physical and behavioral features in companion (or pet) animals (i.e., dogs and cats) might form the basis of our attraction to these species. Preliminary evidence has indeed shown that the human attentional bias toward the baby schema may extend to animal facial configurations. In this review, the role of facial cues, specifically of infantile traits and facial signals (i.e., eyes gaze) as emotional and communicative signals is highlighted and discussed as regulating the human-animal bond, similarly to what can be observed in the adult-infant interaction context. Particular emphasis is given to the neuroendocrine regulation of the social bond between humans and animals through oxytocin secretion. Instead of considering companion animals as mere baby substitutes for their owners, in this review we highlight the central role of cats and dogs in human lives. Specifically, we consider the ability of companion animals to bond with humans as fulfilling the need for attention and emotional intimacy, thus serving similar psychological and adaptive functions as human-human friendships. In this context, facial cuteness is viewed not just as a releaser of care/parental behavior, but, more in general, as a trait motivating social engagement. To conclude, the impact of this information for applied disciplines is briefly described, particularly in consideration of the increasing evidence of the beneficial effects of contacts with animals for human health and wellbeing.

  5. Human brain imaging

    International Nuclear Information System (INIS)

    Kuhar, M.J.

    1987-01-01

    Just as there have been dramatic advances in the molecular biology of the human brain in recent years, there also have been remarkable advances in brain imaging. This paper reports on the development and broad application of microscopic imaging techniques which include the autoradiographic localization of receptors and the measurement of glucose utilization by autoradiography. These approaches provide great sensitivity and excellent anatomical resolution in exploring brain organization and function. The first noninvasive external imaging of receptor distributions in the living human brain was achieved by positron emission tomography (PET) scanning. Developments, techniques and applications continue to progress. Magnetic resonance imaging (MRI) is also becoming important. Its initial clinical applications were in examining the structure and anatomy of the brain. However, more recent uses, such as MRI spectroscopy, indicate the feasibility of exploring biochemical pathways in the brain, the metabolism of drugs in the brain, and also of examining some of these procedures at an anatomical resolution which is substantially greater than that obtainable by PET scanning. The issues will be discussed in greater detail

  6. Method for Face-Emotion Retrieval Using A Cartoon Emotional Expression Approach

    Science.gov (United States)

    Kostov, Vlaho; Yanagisawa, Hideyoshi; Johansson, Martin; Fukuda, Shuichi

    A simple method for extracting emotion from a human face, as a form of non-verbal communication, was developed to cope with and optimize mobile communication in a globalized and diversified society. A cartoon-face-based model was developed and used to evaluate the emotional content of real faces. After a pilot survey, basic rules were defined and student subjects were asked to express emotion using the cartoon face. Their face samples were then analyzed using principal component analysis and the Mahalanobis distance method. Feature parameters considered to be related to emotions were extracted, and new cartoon faces based on these parameters were generated. The subjects evaluated the emotion of these cartoon faces again, confirming that the parameters were suitable. To examine how these parameters apply to real faces, we asked subjects to express the same emotions, which were then captured electronically. Simple image processing techniques were developed to extract these features from real faces, which we then compared with the cartoon face parameters. The cartoon face demonstrates that emotions can be expressed with very small amounts of information, and that real and cartoon faces correspond to each other. It is also shown that emotion can be extracted from still and dynamic real face images using these cartoon-based features.
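
    The Mahalanobis-distance step can be sketched as nearest-class assignment in the extracted feature space (the class labels and data here are illustrative, not the study's):

```python
import numpy as np

def mahalanobis(x, samples):
    """Mahalanobis distance of feature vector x from a class described by
    its sample matrix (n_samples, n_features)."""
    mean = samples.mean(axis=0)
    cov = np.cov(samples, rowvar=False)
    diff = x - mean
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

def classify_emotion(x, classes):
    """Assign x to the emotion class with the smallest Mahalanobis
    distance. classes: dict mapping label -> sample matrix."""
    return min(classes, key=lambda lbl: mahalanobis(x, classes[lbl]))
```

    Unlike Euclidean distance, this accounts for how each emotion's feature parameters co-vary across subjects.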

  7. FaceWarehouse: a 3D facial expression database for visual computing.

    Science.gov (United States)

    Cao, Chen; Weng, Yanlin; Zhou, Shun; Tong, Yiying; Zhou, Kun

    2014-03-01

    We present FaceWarehouse, a database of 3D facial expressions for visual computing applications. We use Kinect, an off-the-shelf RGBD camera, to capture 150 individuals aged 7-80 from various ethnic backgrounds. For each person, we captured the RGBD data of her different expressions, including the neutral expression and 19 other expressions such as mouth-opening, smile, kiss, etc. For every RGBD raw data record, a set of facial feature points on the color image such as eye corners, mouth contour, and the nose tip are automatically localized, and manually adjusted if better accuracy is required. We then deform a template facial mesh to fit the depth data as closely as possible while matching the feature points on the color image to their corresponding points on the mesh. Starting from these fitted face meshes, we construct a set of individual-specific expression blendshapes for each person. These meshes with consistent topology are assembled as a rank-3 tensor to build a bilinear face model with two attributes: identity and expression. Compared with previous 3D facial databases, for every person in our database, there is a much richer matching collection of expressions, enabling depiction of most human facial actions. We demonstrate the potential of FaceWarehouse for visual computing with four applications: facial image manipulation, face component transfer, real-time performance-based facial image animation, and facial animation retargeting from video to image.

  8. When does subliminal affective image priming influence the ability of schizophrenic patients to perceive face emotions?

    Science.gov (United States)

    Vaina, Lucia Maria; Rana, Kunjan D; Cotos, Ionela; Li-Yang, Chen; Huang, Melissa A; Podea, Delia

    2014-12-24

    Deficits in face emotion perception are among the most pervasive aspects of schizophrenia impairment, strongly affecting interpersonal communication and social skills. Schizophrenic patients (PSZ) and healthy control subjects (HCS) performed two psychophysical tasks. The first, the SAFFIMAP test, was designed to determine the impact of subliminally presented affective or neutral images on the accuracy of face-expression (angry or neutral) perception. In the second test, FEP, subjects saw pictures of facial expressions and were asked to rate them as angry, happy, or neutral. The following clinical scales were used to determine acute symptoms in PSZ: Positive and Negative Syndrome Scale (PANSS), Young Mania Rating Scale (YMRS), Hamilton Depression (HAM-D), and Hamilton Anxiety (HAM-A). On the SAFFIMAP test, unlike the HCS group, the PSZ group tended to categorize the neutral expression of test faces as angry, and their response to the test-face expression was not influenced by the affective content of the primes. In PSZ, the PANSS-positive score was significantly correlated with correct perception of angry faces for aggressive or pleasant primes. YMRS scores were strongly correlated with PSZ's tendency to recognize angry face expressions when the prime was a pleasant or neutral image. The HAM-D score was positively correlated with categorizing the test faces as neutral, regardless of the affective content of the prime or of the test-face expression (angry or neutral). Despite its exploratory nature, this study provides the first evidence that conscious perception and categorization of facial emotions (neutral or angry) in PSZ is directly affected by the positive or negative symptoms of the disease as defined by individual scores on the clinical diagnostic scales.

  9. Mirror self-face perception in individuals with schizophrenia: Feelings of strangeness associated with one's own image.

    Science.gov (United States)

    Bortolon, Catherine; Capdevielle, Delphine; Altman, Rosalie; Macgregor, Alexandra; Attal, Jérôme; Raffard, Stéphane

    2017-07-01

    Self-face recognition is crucial for the sense of identity and for maintaining a coherent sense of self. Most of our daily-life experiences with the image of our own face happen when we look at ourselves in the mirror. However, to date, mirror self-perception in schizophrenia has received little attention, despite evidence that face recognition deficits and self-abnormalities have been described in schizophrenia. Thus, this study aims to investigate mirror self-face perception in schizophrenia patients and its correlation with clinical symptoms. Twenty-four schizophrenia patients and twenty-five healthy controls were explicitly requested to describe their image in detail for 2 min whilst looking at themselves in a mirror. Then, they were asked to report whether they experienced any self-face recognition difficulties. Results showed that schizophrenia patients reported more feelings of strangeness towards their face compared to healthy controls (U = 209.5, p = 0.048, r = 0.28), but no statistically significant differences were found regarding misidentification (p = 0.111) and failures in recognition (p = 0.081). Symptoms such as hallucinations, somatic concerns and depression were also associated with self-face perception abnormalities (all p-values > 0.05). Feelings of strangeness toward one's own face in schizophrenia might be part of a familiar-face perception deficit or a more global self-disturbance, which is characterized by a loss of self-other boundaries and has been associated with abnormal body experiences and first-rank symptoms. Regarding this last hypothesis, multisensory integration might have an impact on the way patients perceive themselves, since it has an important role in mirror self-perception. Copyright © 2017. Published by Elsevier B.V.
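
    The reported U = 209.5 comes from a Mann-Whitney U test comparing the two groups' ratings. For reference, the U statistic can be computed from scratch as follows (in practice one would use a statistics library, which also supplies the p-value):

```python
import numpy as np

def mann_whitney_u(a, b):
    """Mann-Whitney U statistic (smaller-U convention, average ranks for
    ties): the nonparametric two-group comparison behind U = 209.5."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    combined = np.concatenate([a, b])
    order = combined.argsort()
    ranks = np.empty_like(combined)
    ranks[order] = np.arange(1, len(combined) + 1)
    # Average the ranks of tied values.
    for v in np.unique(combined):
        tie = combined == v
        ranks[tie] = ranks[tie].mean()
    r_a = ranks[: len(a)].sum()
    u_a = r_a - len(a) * (len(a) + 1) / 2
    return min(u_a, len(a) * len(b) - u_a)
```

    Smaller U values indicate greater separation between the two groups' rating distributions; U = 0 means complete separation.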

  10. Effective Connectivity from Early Visual Cortex to Posterior Occipitotemporal Face Areas Supports Face Selectivity and Predicts Developmental Prosopagnosia.

    Science.gov (United States)

    Lohse, Michael; Garrido, Lucia; Driver, Jon; Dolan, Raymond J; Duchaine, Bradley C; Furl, Nicholas

    2016-03-30

    Face processing is mediated by interactions between functional areas in the occipital and temporal lobe, and the fusiform face area (FFA) and anterior temporal lobe play key roles in the recognition of facial identity. Individuals with developmental prosopagnosia (DP), a lifelong face recognition impairment, have been shown to have structural and functional neuronal alterations in these areas. The present study investigated how face selectivity is generated in participants with normal face processing, and how functional abnormalities associated with DP arise as a function of network connectivity. Using functional magnetic resonance imaging and dynamic causal modeling, we examined effective connectivity in normal participants by assessing network models that include early visual cortex (EVC) and face-selective areas, and then investigated the integrity of this connectivity in participants with DP. Results showed that a feedforward architecture from EVC to the occipital face area, EVC to FFA, and EVC to posterior superior temporal sulcus (pSTS) best explained how face selectivity arises in both controls and participants with DP. In this architecture, the DP group showed reduced connection strengths on feedforward connections carrying face information from EVC to FFA and EVC to pSTS. These altered network dynamics in DP contribute to the diminished face selectivity in the posterior occipitotemporal areas affected in DP. These findings suggest a novel view on the relevance of feedforward projections from EVC to posterior occipitotemporal face areas in generating cortical face selectivity and differences in face recognition ability. Areas of the human brain showing enhanced activation to faces compared to other objects or places have been extensively studied. However, the factors leading to this face selectivity have remained mostly unknown. We show that effective connectivity from early visual cortex to posterior occipitotemporal face areas gives rise to face selectivity.

  11. Perceived face size in healthy adults.

    Science.gov (United States)

    D'Amour, Sarah; Harris, Laurence R

    2017-01-01

    Perceptual body size distortions have traditionally been studied using subjective, qualitative measures that assess only one type of body representation: the conscious body image. Previous research on perceived body size has typically focused on measuring distortions of the entire body and has tended to overlook the face. Here, we present a novel psychophysical method for determining perceived body size that taps into the implicit body representation. Using a two-alternative forced choice (2AFC) task, participants were sequentially shown two life-size images of their own face, viewed upright, upside down, or tilted 90°. In one interval, the width or length dimension was varied, while the other interval contained an undistorted image. Participants reported which image most closely matched their own face. An adaptive staircase adjusted the distorted image to home in on the distortion level at which the distorted and accurate images were equally likely to be judged as matching the perceived face. When viewed upright or upside down, face width was overestimated and length underestimated, whereas perception was accurate for the on-side views. These results provide the first psychophysically robust measurements of how accurately healthy participants perceive the size of their face, revealing distortions of the implicit body representation independent of the conscious body image.
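    The adaptive-staircase idea behind such a 2AFC task can be sketched with a simulated observer. Everything below is invented for illustration (the logistic psychometric function, the observer's bias, step size and trial count), not the study's actual procedure:

```python
import math
import random

def observer_says_too_wide(width_pct, pse=5.0, slope=3.0):
    """Hypothetical observer: probability of judging the image wider than
    one's own face follows a logistic psychometric function."""
    p = 1.0 / (1.0 + math.exp(-(width_pct - pse) / slope))
    return random.random() < p

def run_staircase(start=20.0, step=1.0, n_trials=400, seed=1):
    """1-up/1-down staircase: converges on the 50% point, i.e. the point
    of subjective equality (PSE)."""
    random.seed(seed)
    level = start
    levels = []
    for _ in range(n_trials):
        if observer_says_too_wide(level):
            level -= step   # judged too wide: reduce the width distortion
        else:
            level += step
        levels.append(level)
    return sum(levels[-100:]) / 100.0  # estimate the PSE from the last trials

pse_estimate = run_staircase()
print(round(pse_estimate, 1))
```

With the parameters above the estimate settles near the simulated observer's bias of 5%, which is how a staircase recovers the perceived-size distortion without ever asking for a direct magnitude judgment.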

  12. Robust Selectivity for Faces in the Human Amygdala in the Absence of Expressions

    Science.gov (United States)

    Mende-Siedlecki, Peter; Verosky, Sara C.; Turk-Browne, Nicholas B.; Todorov, Alexander

    2014-01-01

    There is a well-established posterior network of cortical regions that plays a central role in face processing and that has been investigated extensively. In contrast, although responsive to faces, the amygdala is not considered a core face-selective region, and its face selectivity has never been a topic of systematic research in human neuroimaging studies. Here, we conducted a large-scale group analysis of fMRI data from 215 participants. We replicated the posterior network observed in prior studies but found equally robust and reliable responses to faces in the amygdala. These responses were detectable in most individual participants, but they were also highly sensitive to the initial statistical threshold and habituated more rapidly than the responses in posterior face-selective regions. A multivariate analysis showed that the pattern of responses to faces across voxels in the amygdala had high reliability over time. Finally, functional connectivity analyses showed stronger coupling between the amygdala and posterior face-selective regions during the perception of faces than during the perception of control visual categories. These findings suggest that the amygdala should be considered a core face-selective region. PMID:23984945

  13. Analysis and Segmentation of Face Images using Point Annotations and Linear Subspace Techniques

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille

    2002-01-01

    This report provides an analysis of 37 annotated frontal face images. All results presented have been obtained using our freely available Active Appearance Model (AAM) implementation. To ensure the reproducibility of the presented experiments, the data set has also been made available. As such...

  14. Face pareidolia in the rhesus monkey

    OpenAIRE

    Taubert, Jessica; Wardle, Susan G.; Flessert, Molly; Leopold, David A.; Ungerleider, Leslie G.

    2017-01-01

    Face perception in humans and non-human primates is rapid and accurate[1–4]. In the human brain, a network of visual processing regions is specialized for faces[5–7]. Although face processing is a priority of the primate visual system, face detection is not infallible. Face pareidolia is the compelling illusion of perceiving facial features on inanimate objects, such as the illusory face on the surface of the moon. Although face pareidolia is commonly experienced by humans, its presence in ot...

  15. Actinic keratosis in the en-face and slice imaging mode of high-definition optical coherence tomography and comparison with histology.

    Science.gov (United States)

    Maier, T; Braun-Falco, M; Laubender, R P; Ruzicka, T; Berking, C

    2013-01-01

    Optical coherence tomography (OCT) allows real-time, in vivo examination of nonmelanoma skin cancer. An innovative high-definition (HD)-OCT with a horizontal (en-face) and vertical (slice) imaging mode offers additional information in the diagnosis of actinic keratosis (AK) and may potentially replace invasive diagnostic biopsies. The aim of this study was to define the characteristic morphological features of AK using HD-OCT in the two imaging modes, with histopathology as the gold standard. In total, 20 AKs were examined by HD-OCT in the en-face and slice imaging modes, and characteristic features were described and evaluated in comparison with the histopathological findings. Furthermore, the HD-OCT images of a subgroup of AKs were compared with those of the clinically normal adjacent skin. The preoperative in vivo diagnostics showed the following features in the en-face imaging mode of HD-OCT: disruption of the stratum corneum, architectural disarray, cellular/nuclear polymorphism in the stratum granulosum/stratum spinosum, and bright irregular bundles in the superficial dermis. In the vertical slice imaging mode the following characteristics were found: irregular entrance signal, destruction of layering, white streaks and dots, and grey areas. In contrast, the clinically healthy adjacent skin showed mainly a regular epidermal 'honeycomb' pattern in the en-face mode and distinct layering of the skin in the slice mode. HD-OCT with both the en-face and slice imaging modes offers additional information in the diagnosis of AK compared with conventional OCT and might enhance the possibility of the noninvasive diagnosis of AK prior to treatment procedures and possibly in the monitoring of noninvasive treatment strategies. © 2012 The Authors. BJD © 2012 British Association of Dermatologists.

  16. Interactions between masculinity--femininity and apparent health in face preferences

    OpenAIRE

    Finlay G. Smith; Benedict C. Jones; Lisa M. DeBruine; Anthony C. Little

    2009-01-01

    Consistent with Getty's (2002. Signaling health versus parasites. Am Nat. 159:363--371.) proposal that cues to long-term health and cues to current condition are at least partly independent, recent research on human face preferences has found divergent effects of masculinity--femininity, a cue to long-term health, and apparent health, a cue to current condition. In light of this, we tested for interactions between these 2 cues. Participants viewed composite images of opposite-sex faces that h...

  17. Untold stories: the human face of poverty dynamics

    DEFF Research Database (Denmark)

    Prowse, Martin

    2008-01-01

    Key Points • Life histories offer an important window for policy makers, and should be brought to the policy table much more frequently. • Life histories show the human face of chronic poverty. Such vignettes provide concrete examples of poverty traps – such as insecurity, social discrimination...... have ambivalent effects. • Whilst life histories are not representative, they highlight key themes and processes which are ‘typical’ of individuals with similar sets of sociobiographical characteristics who live in similar social, economic and political circumstances....

  18. Deep--deeper--deepest? Encoding strategies and the recognition of human faces.

    Science.gov (United States)

    Sporer, S L

    1991-03-01

    Various encoding strategies that supposedly promote deeper processing of human faces (e.g., character judgments) have led to better recognition than more shallow processing tasks (judging the width of the nose). However, does deeper processing actually lead to an improvement in recognition, or, conversely, does shallow processing lead to a deterioration in performance when compared with naturally employed encoding strategies? Three experiments systematically compared a total of 8 different encoding strategies manipulating depth of processing, amount of elaboration, and self-generation of judgmental categories. All strategies that required a scanning of the whole face were basically equivalent but no better than natural strategy controls. The consistently worst groups were the ones that rated faces along preselected physical dimensions. This can be explained by subjects' lesser task involvement as revealed by manipulation checks.

  19. A Database of Registered, Textured Models of the Human Face

    DEFF Research Database (Denmark)

    Sjöstrand, Karl; Lading, Brian

    2005-01-01

    This note describes a data set of 24 registered human faces represented by both shape and texture. The data was collected during 2003 as part of the preparation of the master thesis of Karl Sjöstrand (former name Karl Skoglund). The data is ready to be used in shape, appearance and data analysis....

  20. Design of an Active Multispectral SWIR Camera System for Skin Detection and Face Verification

    Directory of Open Access Journals (Sweden)

    Holger Steiner

    2016-01-01

    Biometric face recognition is becoming more frequently used in different application scenarios. However, spoofing attacks with facial disguises are still a serious problem for state-of-the-art face recognition algorithms. This work proposes an approach to face verification based on spectral signatures of material surfaces in the short wave infrared (SWIR) range. They allow distinguishing authentic human skin reliably from other materials, independent of the skin type. We present the design of an active SWIR imaging system that acquires four-band multispectral image stacks in real-time. The system uses pulsed small band illumination, which allows for fast image acquisition and high spectral resolution and renders it widely independent of ambient light. After extracting the spectral signatures from the acquired images, detected faces can be verified or rejected by classifying the material as “skin” or “no-skin.” The approach is extensively evaluated with respect to both acquisition and classification performance. In addition, we present a database containing RGB and multispectral SWIR face images, as well as spectrometer measurements of a variety of subjects, which is used to evaluate our approach and will be made available to the research community by the time this work is published.
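    The "skin"/"no-skin" decision described above operates on per-pixel spectral signatures. A minimal nearest-reference sketch of that idea follows; the reference spectra, band values and distance rule are all invented and are not the system's actual classifier:

```python
import math

# Hypothetical mean four-band SWIR reflectances for the two classes.
SKIN_REF = (0.45, 0.30, 0.12, 0.08)      # skin drops off toward longer bands
NONSKIN_REF = (0.40, 0.41, 0.38, 0.36)   # a flat non-skin spectrum

def classify(signature):
    """Assign a four-band signature to the nearer reference spectrum."""
    d_skin = math.dist(signature, SKIN_REF)
    d_non = math.dist(signature, NONSKIN_REF)
    return "skin" if d_skin < d_non else "no-skin"

print(classify((0.47, 0.28, 0.15, 0.10)))  # -> skin
print(classify((0.39, 0.40, 0.37, 0.33)))  # -> no-skin
```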

  1. The sequence of cortical activity inferred by response latency variability in the human ventral pathway of face processing.

    Science.gov (United States)

    Lin, Jo-Fu Lotus; Silva-Pereyra, Juan; Chou, Chih-Che; Lin, Fa-Hsuan

    2018-04-11

    Variability in neuronal response latency has typically been considered to be caused by random noise. Previous studies of single cells and large neuronal populations have shown that temporal variability tends to increase along the visual pathway. Inspired by these studies, we hypothesized that functional areas at later stages of the visual pathway of face processing would show larger variability in response latency. To test this hypothesis, we used magnetoencephalographic data collected while subjects were presented with images of human faces. Faces are known to elicit a sequence of activity from the primary visual cortex to the fusiform gyrus. Our results revealed that the fusiform gyrus showed larger variability in response latency than the calcarine fissure. Dynamic and spectral analyses of the latency variability indicated that the response latency in the fusiform gyrus was more variable than in the calcarine fissure between 70 ms and 200 ms after stimulus onset and between 4 Hz and 40 Hz, respectively. The sequential processing of face information from the calcarine fissure to the fusiform gyrus was more reliably detected from the size of the response variability than from the timing of the maximal response peaks. Using two areas in the ventral visual pathway, we show that the variability in response latency across brain areas can be used to infer the sequence of cortical activity.

  2. The Human Face as a Dynamic Tool for Social Communication.

    Science.gov (United States)

    Jack, Rachael E; Schyns, Philippe G

    2015-07-20

    As a highly social species, humans frequently exchange social information to support almost all facets of life. One of the richest and most powerful tools in social communication is the face, from which observers can quickly and easily make a number of inferences - about identity, gender, sex, age, race, ethnicity, sexual orientation, physical health, attractiveness, emotional state, personality traits, pain or physical pleasure, deception, and even social status. With the advent of the digital economy, increasing globalization and cultural integration, understanding precisely which face information supports social communication and which produces misunderstanding is central to the evolving needs of modern society (for example, in the design of socially interactive digital avatars and companion robots). Doing so is challenging, however, because the face can be thought of as comprising a high-dimensional, dynamic information space, and this impacts cognitive science and neuroimaging, and their broader applications in the digital economy. New opportunities to address this challenge are arising from the development of new methods and technologies, coupled with the emergence of a modern scientific culture that embraces cross-disciplinary approaches. Here, we briefly review one such approach that combines state-of-the-art computer graphics, psychophysics and vision science, cultural psychology and social cognition, and highlight the main knowledge advances it has generated. In the light of current developments, we provide a vision of the future directions in the field of human facial communication within and across cultures. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Extracted facial feature of racial closely related faces

    Science.gov (United States)

    Liewchavalit, Chalothorn; Akiba, Masakazu; Kanno, Tsuneo; Nagao, Tomoharu

    2010-02-01

    Human faces convey a great deal of demographic information, such as identity, gender, age, race and emotion. People perceive these pieces of information and use them as important cues in social interaction. Race perception is considered one of the most delicate and sensitive aspects of face perception. There is much research on image-based race recognition, but most of it focuses on major racial groups such as Caucasoid, Negroid and Mongoloid. This paper focuses on how people classify the race of racially closely related groups. As a sample of such a group, we chose Japanese and Thai faces to represent the difference between Northern and Southern Mongoloid. Three psychological experiments were performed to study the strategies of face perception in race classification. The results suggest that race perception is an ability that can be learned. Eyes and eyebrows attract the most attention, and the eyes are a significant factor in race perception. Principal Component Analysis (PCA) was performed to extract the facial features of the sample race groups. Extracted race features of texture and shape were used to synthesize faces. The results suggest that racial features rely on detailed texture rather than on shape. This research is fundamental work on race perception, which is essential for establishing a human-like race recognition system.
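    The PCA feature-extraction step mentioned above can be sketched on synthetic data. The 8x8 "faces", the two planted variation modes and all numbers below are invented; real eigenface pipelines work the same way on flattened face photographs:

```python
import numpy as np

rng = np.random.default_rng(0)
n_faces, h, w = 40, 8, 8

# Two planted spatial modes plus noise stand in for real facial variation.
mode1 = np.outer(np.hanning(h), np.hanning(w)).ravel()
mode2 = np.linspace(-1.0, 1.0, h * w)
coeffs = rng.normal(size=(n_faces, 2))
faces = coeffs @ np.vstack([mode1, mode2]) + 0.05 * rng.normal(size=(n_faces, h * w))

# PCA: centre the data, then obtain principal axes via SVD.
mean_face = faces.mean(axis=0)
u, s, vt = np.linalg.svd(faces - mean_face, full_matrices=False)
explained = (s ** 2) / (s ** 2).sum()

# The two planted modes should account for nearly all of the variance.
print(round(float(explained[:2].sum()), 3))
```

The rows of `vt` are the principal components ("eigenfaces"); projecting a new face onto the first few of them gives the compact feature vector used for synthesis or classification.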

  4. Fuzzy-cellular neural network for face recognition HCI Authentication

    Science.gov (United States)

    Hoomod, Haider K.; ali, Ahmed abd

    2018-05-01

    With the rapid development of mobile device technology and its ease of use and interaction with humans, mobile devices have become central to our communications. Mobile devices can carry large amounts of personal and sensitive data, but are often left unsecured: PIN locks are inconvenient to use and have therefore seen low adoption, while biometrics are more convenient and less susceptible to fraud and manipulation. In this paper we propose an authentication technique for mobile devices using face recognition based on cellular neural networks [1] and fuzzy rule control. Applying the proposed system on Android yielded good speed and recognition rates. Images were obtained in real time for 60 persons, each with 20 to 60 different face shots (about 3600 images in total); the results were FAR = 0, FRR = 1.66%, FER = 1.66, and accuracy = 98.34%.
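    The FAR/FRR figures quoted above come from threshold-based verification: the false acceptance rate counts impostor attempts accepted, the false rejection rate counts genuine attempts rejected. A minimal sketch with invented match scores and threshold (not the paper's data or classifier):

```python
def far_frr(genuine_scores, impostor_scores, threshold):
    """Accept when score >= threshold; return (FAR, FRR) as fractions."""
    fa = sum(s >= threshold for s in impostor_scores)
    fr = sum(s < threshold for s in genuine_scores)
    return fa / len(impostor_scores), fr / len(genuine_scores)

genuine = [0.91, 0.88, 0.95, 0.84, 0.79, 0.93, 0.90, 0.67, 0.96, 0.89]
impostor = [0.12, 0.30, 0.25, 0.41, 0.08, 0.33, 0.19, 0.27, 0.36, 0.22]

far, frr = far_frr(genuine, impostor, threshold=0.7)
print(far, frr)  # -> 0.0 0.1
```

Sweeping the threshold trades FAR against FRR; a report of FAR = 0 with a small nonzero FRR corresponds to a threshold set high enough that no impostor score clears it.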

  5. Fusion of domain-specific and trainable features for gender recognition from face images

    NARCIS (Netherlands)

    Azzopardi, George; Greco, Antonio; Saggese, Alessia; Vento, Mario

    2018-01-01

    The popularity and the appeal of systems which are able to automatically determine the gender from face images is growing rapidly. Such a great interest arises from the wide variety of applications, especially in the fields of retail and video surveillance. In recent years there have been several

  6. Neural synchronization during face-to-face communication.

    Science.gov (United States)

    Jiang, Jing; Dai, Bohan; Peng, Danling; Zhu, Chaozhe; Liu, Li; Lu, Chunming

    2012-11-07

    Although the human brain may have evolutionarily adapted to face-to-face communication, other modes of communication, e.g., telephone and e-mail, increasingly dominate our modern daily life. This study examined the neural difference between face-to-face communication and other types of communication by simultaneously measuring two brains using a hyperscanning approach. The results showed a significant increase in the neural synchronization in the left inferior frontal cortex during a face-to-face dialog between partners but none during a back-to-back dialog, a face-to-face monologue, or a back-to-back monologue. Moreover, the neural synchronization between partners during the face-to-face dialog resulted primarily from the direct interactions between the partners, including multimodal sensory information integration and turn-taking behavior. The communicating behavior during the face-to-face dialog could be predicted accurately based on the neural synchronization level. These results suggest that face-to-face communication, particularly dialog, has special neural features that other types of communication do not have and that the neural synchronization between partners may underlie successful face-to-face communication.

  7. [Comparative studies of face recognition].

    Science.gov (United States)

    Kawai, Nobuyuki

    2012-07-01

    Every human being is proficient in face recognition. However, the reason for and the manner in which humans have attained such an ability remain unknown. These questions can best be answered through comparative studies of face recognition in non-human animals. Studies in both primates and non-primates show that not only primates but also non-primates possess the ability to extract information from their conspecifics and from human experimenters. Neural specialization for face recognition is shared with mammals in distant taxa, suggesting that face recognition evolved earlier than the emergence of mammals. A recent study indicated that a social insect, the golden paper wasp, can distinguish the faces of its conspecifics, whereas a closely related species, which has a less complex social lifestyle with just one queen ruling a nest of underlings, did not show strong face recognition for conspecifics. Social complexity and the need to differentiate between one another likely led humans to evolve their face recognition abilities.

  8. Recognition of face and non-face stimuli in autistic spectrum disorder.

    Science.gov (United States)

    Arkush, Leo; Smith-Collins, Adam P R; Fiorentini, Chiara; Skuse, David H

    2013-12-01

    The ability to remember faces is critical for the development of social competence. From childhood to adulthood, we acquire a high level of expertise in the recognition of facial images, and neural processes become dedicated to sustaining competence. Many people with autism spectrum disorder (ASD) have poor face recognition memory; changes in hairstyle or other non-facial features in an otherwise familiar person affect their recollection skills. The observation implies that they may not use the configuration of the inner face to achieve memory competence, but bolster performance in other ways. We aimed to test this hypothesis by comparing the performance of a group of high-functioning unmedicated adolescents with ASD and a matched control group on a "surprise" face recognition memory task. We compared their memory for unfamiliar faces with their memory for images of houses. To evaluate the role that is played by peripheral cues in assisting recognition memory, we cropped both sets of pictures, retaining only the most salient central features. ASD adolescents had poorer recognition memory for faces than typical controls, but their recognition memory for houses was unimpaired. Cropping images of faces did not disproportionately influence their recall accuracy, relative to controls. House recognition skills (cropped and uncropped) were similar in both groups. In the ASD group only, performance on both sets of task was closely correlated, implying that memory for faces and other complex pictorial stimuli is achieved by domain-general (non-dedicated) cognitive mechanisms. Adolescents with ASD apparently do not use domain-specialized processing of inner facial cues to support face recognition memory. © 2013 International Society for Autism Research, Wiley Periodicals, Inc.

  9. Classification in Medical Imaging

    DEFF Research Database (Denmark)

    Chen, Chen

    Classification is extensively used in the context of medical image analysis for the purpose of diagnosis or prognosis. In order to classify image content correctly, one needs to extract efficient features with discriminative properties and build classifiers based on these features. In addition...... on characterizing human faces and emphysema disease in lung CT images....

  10. Improving Face Detection with TOE Cameras

    DEFF Research Database (Denmark)

    Hansen, Dan Witzner; Larsen, Rasmus; Lauze, F

    2007-01-01

    A face detection method based on a boosted classifier using images from a time-of-flight sensor is presented. We show that the performance of face detection can be improved when using both depth and gray scale images and that the common use of integration of hypotheses for verification can...... be relaxed. Based on the detected face we employ an active contour method on depth images for full head segmentation....

  11. The hierarchical brain network for face recognition.

    Science.gov (United States)

    Zhen, Zonglei; Fang, Huizhen; Liu, Jia

    2013-01-01

    Numerous functional magnetic resonance imaging (fMRI) studies have identified multiple cortical regions that are involved in face processing in the human brain. However, few studies have characterized the face-processing network as a functioning whole. In this study, we used fMRI to identify face-selective regions in the entire brain and then explore the hierarchical structure of the face-processing network by analyzing functional connectivity among these regions. We identified twenty-five regions mainly in the occipital, temporal and frontal cortex that showed a reliable response selective to faces (versus objects) across participants and across scan sessions. Furthermore, these regions were clustered into three relatively independent sub-networks in a face-recognition task on the basis of the strength of functional connectivity among them. The functionality of the sub-networks likely corresponds to the recognition of individual identity, retrieval of semantic knowledge and representation of emotional information. Interestingly, when the task was switched from face recognition to object recognition, the functional connectivity between the inferior occipital gyrus and the rest of the face-selective regions was significantly reduced, suggesting that this region may serve as an entry node in the face-processing network. In sum, our study provides empirical evidence for cognitive and neural models of face recognition and helps elucidate the neural mechanisms underlying face recognition at the network level.

  12. An efficient method for facial component detection in thermal images

    Science.gov (United States)

    Paul, Michael; Blanik, Nikolai; Blazek, Vladimir; Leonhardt, Steffen

    2015-04-01

    A method to detect certain regions in thermal images of human faces is presented. In this approach, the following steps are necessary to locate the periorbital and the nose regions: First, the face is segmented from the background by thresholding and morphological filtering. Subsequently, a search region within the face, around its center of mass, is evaluated. Automatically computed temperature thresholds are used per subject and image or image sequence to generate binary images, in which the periorbital regions are located by integral projections. Then, the located positions are used to approximate the nose position. It is possible to track features in the located regions. Therefore, these regions are interesting for different applications like human-machine interaction, biometrics and biomedical imaging. The method is easy to implement and does not rely on any training images or templates. Furthermore, the approach saves processing resources due to simple computations and restricted search regions.
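    The integral-projection step described above (summing a thresholded image along one axis to locate a facial region) can be sketched on a toy binary image. The array and the "periorbital" band below are invented stand-ins for a thresholded thermal frame:

```python
import numpy as np

binary = np.zeros((12, 10), dtype=int)
binary[3:5, 2:8] = 1            # hypothetical band of above-threshold pixels

row_projection = binary.sum(axis=1)      # integral projection onto rows
peak_row = int(row_projection.argmax())  # row with the strongest response
print(peak_row, int(row_projection.max()))  # -> 3 6
```

Projecting onto columns in the same way would localize the region horizontally; combining both peaks gives the seed position from which the nose location can then be approximated.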

  13. Unaware person recognition from the body when face identification fails.

    Science.gov (United States)

    Rice, Allyson; Phillips, P Jonathon; Natu, Vaidehi; An, Xiaobo; O'Toole, Alice J

    2013-11-01

    How does one recognize a person when face identification fails? Here, we show that people rely on the body but are unaware of doing so. State-of-the-art face-recognition algorithms were used to select images of people with almost no useful identity information in the face. Recognition of the face alone in these cases was near chance level, but recognition of the person was accurate. Accuracy in identifying the person without the face was identical to that in identifying the whole person. Paradoxically, people reported relying heavily on facial features over noninternal face and body features in making their identity decisions. Eye movements indicated otherwise, with gaze duration and fixations shifting adaptively toward the body and away from the face when the body was a better indicator of identity than the face. This shift occurred with no cost to accuracy or response time. Human identity processing may be partially inaccessible to conscious awareness.

  14. Automatic landmark detection and face recognition for side-view face images

    NARCIS (Netherlands)

    Santemiz, P.; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.; Broemme, Arslan; Busch, Christoph

    2013-01-01

    In real-life scenarios where pose variation is up to side-view positions, face recognition becomes a challenging task. In this paper we propose an automatic side-view face recognition system designed for home-safety applications. Our goal is to recognize people as they pass through doors in order to

  15. Prosopagnosia when all faces look the same

    CERN Document Server

    Rivolta, Davide

    2014-01-01

    This book provides readers with a simplified and comprehensive account of the cognitive and neural bases of face perception in humans. Faces are ubiquitous in our environment and we rely on them during social interactions. The human face processing system allows us to extract information about the identity, gender, age, mood, race, attractiveness and approachability of other people in a fraction of a second, just by glancing at their faces. By introducing readers to the most relevant research on face recognition, this book seeks to answer the questions: “Why are humans so fast at recognizing faces?”, “Why are humans so efficient at recognizing faces?”, “Do faces represent a particular category for the human visual system?”, “What makes face perception in humans so special?”, and “Can our face recognition system fail?” This book presents the author’s findings on face perception during his research studies on both normal subjects and subjects with prosopagnosia, a neurological disorder cha...

  16. Face Pareidolia in the Rhesus Monkey.

    Science.gov (United States)

    Taubert, Jessica; Wardle, Susan G; Flessert, Molly; Leopold, David A; Ungerleider, Leslie G

    2017-08-21

    Face perception in humans and nonhuman primates is rapid and accurate [1-4]. In the human brain, a network of visual-processing regions is specialized for faces [5-7]. Although face processing is a priority of the primate visual system, face detection is not infallible. Face pareidolia is the compelling illusion of perceiving facial features on inanimate objects, such as the illusory face on the surface of the moon. Although face pareidolia is commonly experienced by humans, its presence in other species is unknown. Here we provide evidence for face pareidolia in a species known to possess a complex face-processing system [8-10]: the rhesus monkey (Macaca mulatta). In a visual preference task [11, 12], monkeys looked longer at photographs of objects that elicited face pareidolia in human observers than at photographs of similar objects that did not elicit illusory faces. Examination of eye movements revealed that monkeys fixated the illusory internal facial features in a pattern consistent with how they view photographs of faces [13]. Although the specialized response to faces observed in humans [1, 3, 5-7, 14] is often argued to be continuous across primates [4, 15], it was previously unclear whether face pareidolia arose from a uniquely human capacity. For example, pareidolia could be a product of the human aptitude for perceptual abstraction or result from frequent exposure to cartoons and illustrations that anthropomorphize inanimate objects. Instead, our results indicate that the perception of illusory facial features on inanimate objects is driven by a broadly tuned face-detection mechanism that we share with other species. Published by Elsevier Ltd.

  17. Laser Doppler imaging of cutaneous blood flow through transparent face masks: a necessary preamble to computer-controlled rapid prototyping fabrication with submillimeter precision.

    Science.gov (United States)

    Allely, Rebekah R; Van-Buendia, Lan B; Jeng, James C; White, Patricia; Wu, Jingshu; Niszczak, Jonathan; Jordan, Marion H

    2008-01-01

    A paradigm shift in management of postburn facial scarring is lurking "just beneath the waves" with the widespread availability of two recent technologies: precise three-dimensional scanning/digitizing of complex surfaces and computer-controlled rapid prototyping three-dimensional "printers". Laser Doppler imaging may be the sensible method to track the scar hyperemia that should form the basis of assessing progress and directing incremental changes in the digitized topographical face mask "prescription". The purpose of this study was to establish the feasibility of detecting perfusion through transparent face masks using the Laser Doppler Imaging scanner. Laser Doppler images of perfusion were obtained at multiple facial regions on five uninjured staff members. Images were obtained without a mask, followed by images with a loose-fitting mask with and without a silicone liner, and then with a tight-fitting mask with and without a silicone liner. Right and left oblique images, in addition to the frontal images, were used to overcome unobtainable measurements at the extremes of face mask curvature. General linear model, mixed model, and t tests were used for data analysis. Three hundred seventy-five measurements were used for analysis, with a mean perfusion unit of 299 and pixel validity of 97%. The effect of face mask pressure with and without the silicone liner was readily quantified through statistically significant changes in mean cutaneous blood flow, demonstrating that perfusion can be measured through transparent face masks. Perfusion decreases with the application of pressure and with silicone. Every participant measured differently in perfusion units; however, consistent perfusion patterns in the face were observed.

  18. A comparison of student performance in human development classes using three different modes of delivery: Online, face-to-face, and combined

    Science.gov (United States)

    Kalsow, Susan Christensen

    1999-11-01

    The problem. The dual purposes of this research were to determine whether there is a difference in student performance in three Human Development classes when the modes of delivery are different and to analyze student perceptions of using Web-based learning as all or part of their course experience. Procedures. Data for this study were collected from three Human Development courses taught at Drake University. Grades from five essays, projects, and overall grades were used in the three classes and analyzed using a single-factor analysis of variance to determine whether there was a significant difference. Content analysis was used on the evaluation comments of the participants in the online and combined classes to determine their perceptions of Web-based learning. Findings. The single-factor analysis of variance measuring student performance showed no significant difference among the online, face-to-face, and combined scores at the .05 level of significance; the difference was, however, significant at the .06 level. The content analysis of the online and combined courses showed the three major strengths of learning totally or partly online to be increased comfort in using the computer, the quality of the overall experience, and convenience in terms of increased access to educational opportunities. The barriers included lack of human interaction and access to the professor. Conclusions. The study indicates that Web-based learning is a viable option for postsecondary educational delivery in terms of student performance and learning. On average, performance is at least as good as performance in traditional face-to-face classrooms. Improved performance, however, is contingent on adequate access to equipment, faculty skill in teaching using a new mode of delivery, and the personality of the student. The convenient access to educational opportunities and becoming more comfortable with technology are benefits that were important to these two groups. Web-based learning, however, is not for everyone.

  19. Face Detection and Face Recognition in Android Mobile Applications

    Directory of Open Access Journals (Sweden)

    Octavian DOSPINESCU

    2016-01-01

    Full Text Available The quality of the smartphone’s camera enables us to capture high-quality pictures at high resolution, so we can perform different types of recognition on these images. Face detection is one of these types of recognition and is very common in our society. We use it every day on Facebook to tag friends in our pictures. It is also used in video games alongside the Kinect concept, or in security systems to allow access to private places only to authorized persons. These are just some examples of the uses of facial recognition, because in modern society, face detection and facial recognition tend to surround us everywhere. The aim of this article is to create an application for smartphones that can recognize human faces. The main goal of this application is to grant access to certain areas or rooms only to certain authorized persons. For example, we can speak here of hospitals or educational institutions where there are rooms that only certain employees can enter. Of course, this type of application can cover a wide range of uses, such as helping people suffering from Alzheimer’s to recognize the people they love, helping persons who can’t remember the names of their relatives, or automatically capturing the face of our own children when they smile.

  20. Physiology-based face recognition in the thermal infrared spectrum.

    Science.gov (United States)

    Buddharaju, Pradeep; Pavlidis, Ioannis T; Tsiamyrtzis, Panagiotis; Bazakos, Mike

    2007-04-01

    The current dominant approaches to face recognition rely on facial characteristics that are on or over the skin. Some of these characteristics have low permanency, can be altered, and their phenomenology varies significantly with environmental factors (e.g., lighting). Many methodologies have been developed to address these problems to various degrees. However, the current framework of face recognition research has a potential weakness due to its very nature. We present a novel framework for face recognition based on physiological information. The motivation behind this effort is to capitalize on the permanency of innate characteristics that are under the skin. To establish feasibility, we propose a specific methodology to capture facial physiological patterns using the bioheat information contained in thermal imagery. First, the algorithm delineates the human face from the background using the Bayesian framework. Then, it localizes the superficial blood vessel network using image morphology. The extracted vascular network produces contour shapes that are characteristic to each individual. The branching points of the skeletonized vascular network are referred to as Thermal Minutia Points (TMPs) and constitute the feature database. To render the method robust to facial pose variations, we collect, for each subject stored in the database, five different pose images (center, midleft profile, left profile, midright profile, and right profile). During the classification stage, the algorithm first estimates the pose of the test image. Then, it matches the local and global TMP structures extracted from the test image with those of the corresponding pose images in the database. We have conducted experiments on a multipose database of thermal facial images collected in our laboratory, as well as on the time-gap database of the University of Notre Dame. The good experimental results show that the proposed methodology has merit, especially with respect to the problem of

  1. The effect of cleft lip on adults' responses to faces: cross-species findings.

    Directory of Open Access Journals (Sweden)

    Christine E Parsons

    Full Text Available Cleft lip and palate is the most common of the congenital conditions affecting the face and cranial bones and is associated with a raised risk of difficulties in infant-caregiver interaction; the reasons for such difficulties are not fully understood. Here, we report two experiments designed to explore how adults respond to infant faces with and without cleft lip, using behavioural measures of attractiveness appraisal ('liking') and willingness to work to view or remove the images ('wanting'). We found that infants with cleft lip were rated as less attractive and were viewed for shorter durations than healthy infants, an effect that was particularly apparent where the cleft lip was severe. Women rated the infant faces as more attractive than men did, but there were no differences in men's and women's viewing times of these faces. In a second experiment, we found that the presence of a cleft lip in domestic animals affected adults' 'liking' and 'wanting' responses in a comparable way to that seen for human infants. Adults' responses were also remarkably similar for images of infants and animals with cleft lip, although no gender difference in attractiveness ratings or viewing times emerged for animals. We suggest that the presence of a cleft lip can substantially change the way in which adults respond to human and animal faces. Furthermore, women may respond in different ways to men when asked to appraise infant attractiveness, despite the fact that men and women 'want' to view images of infants for similar durations.

  2. Human infant faces provoke implicit positive affective responses in parents and non-parents alike.

    Science.gov (United States)

    Senese, Vincenzo Paolo; De Falco, Simona; Bornstein, Marc H; Caria, Andrea; Buffolino, Simona; Venuti, Paola

    2013-01-01

    Human infants' complete dependence on adult caregiving suggests that mechanisms associated with adult responsiveness to infant cues might be deeply embedded in the brain. Behavioural and neuroimaging research has produced converging evidence for adults' positive disposition to infant cues, but these studies have not investigated directly the valence of adults' reactions, how they are moderated by biological and social factors, and if they relate to child caregiving. This study examines implicit affective responses of 90 adults toward faces of human and non-human (cats and dogs) infants and adults. Implicit reactions were assessed with Single Category Implicit Association Tests, and reports of childrearing behaviours were assessed by the Parental Style Questionnaire. The results showed that human infant faces represent highly biologically relevant stimuli that capture attention and are implicitly associated with positive emotions. This reaction holds independent of gender and parenthood status and is associated with ideal parenting behaviors.

  3. Reconstructing Face Image from the Thermal Infrared Spectrum to the Visible Spectrum

    Directory of Open Access Journals (Sweden)

    Brahmastro Kresnaraman

    2016-04-01

    Full Text Available During the night or in poorly lit areas, thermal cameras are a better choice than normal cameras for security surveillance because they do not rely on illumination. A thermal camera is able to detect a person within its view, but identification from thermal information alone is not an easy task. The purpose of this paper is to reconstruct the face image of a person from the thermal spectrum to the visible spectrum. After the reconstruction, further image processing can be employed, including identification/recognition. Concretely, we propose a two-step thermal-to-visible-spectrum reconstruction method based on Canonical Correlation Analysis (CCA). The reconstruction is done by utilizing the relationship between images in both the thermal infrared and visible spectra obtained by CCA. The whole image is processed in the first step, while the second step processes patches in an image. Results show that the proposed method gives satisfying results with the two-step approach and outperforms comparative methods in both quality and recognition evaluations.
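    The CCA-based mapping described in this record can be sketched in miniature. The block below is a hedged illustration, not the authors' implementation: synthetic random vectors stand in for thermal and visible patch vectors, a minimal linear CCA is fit via whitening and SVD, and the visible view is reconstructed from the thermal view by regressing onto the thermal canonical variates. All data, dimensions, and the regularization value are assumptions made for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def cca_fit(X, Y, k, reg=1e-3):
    """Minimal linear CCA: returns projection matrices A, B and view means."""
    mx, my = X.mean(0), Y.mean(0)
    Xc, Yc = X - mx, Y - my
    n = len(X)
    Cxx = Xc.T @ Xc / n + reg * np.eye(X.shape[1])
    Cyy = Yc.T @ Yc / n + reg * np.eye(Y.shape[1])
    Cxy = Xc.T @ Yc / n
    Lx, Ly = np.linalg.cholesky(Cxx), np.linalg.cholesky(Cyy)
    # Whitened cross-covariance, then SVD for the canonical directions.
    M = np.linalg.solve(Lx, Cxy) @ np.linalg.inv(Ly).T
    U, s, Vt = np.linalg.svd(M)
    A = np.linalg.solve(Lx.T, U[:, :k])      # "thermal" projection
    B = np.linalg.solve(Ly.T, Vt.T[:, :k])   # "visible" projection
    return A, B, mx, my

# Paired synthetic training data standing in for thermal / visible patches,
# correlated through a shared 4-D latent factor.
Z = rng.normal(size=(500, 4))
X = Z @ rng.normal(size=(4, 16)) + 0.1 * rng.normal(size=(500, 16))  # "thermal"
Y = Z @ rng.normal(size=(4, 16)) + 0.1 * rng.normal(size=(500, 16))  # "visible"

A, B, mx, my = cca_fit(X, Y, k=4)
# Reconstruct visible from thermal: regress Y on X's canonical variates.
Ux = (X - mx) @ A
W, *_ = np.linalg.lstsq(Ux, Y - my, rcond=None)
Y_hat = my + (X - mx) @ A @ W
err = np.mean((Y_hat - Y) ** 2)
baseline = np.mean((Y - my) ** 2)
print(err < baseline)  # reconstruction should beat predicting the mean
```

    In the paper's two-step version, one such mapping is learned for the whole image and another for local patches; the sketch above shows only the core CCA regression step.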

  4. Mapping face categorization in the human ventral occipitotemporal cortex with direct neural intracranial recordings.

    Science.gov (United States)

    Rossion, Bruno; Jacques, Corentin; Jonas, Jacques

    2018-02-26

    The neural basis of face categorization has been widely investigated with functional magnetic resonance imaging (fMRI), identifying a set of face-selective local regions in the ventral occipitotemporal cortex (VOTC). However, indirect recording of neural activity with fMRI is associated with large fluctuations of signal across regions, often underestimating face-selective responses in the anterior VOTC. While direct recording of neural activity with subdural grids of electrodes (electrocorticography, ECoG) or depth electrodes (stereotactic electroencephalography, SEEG) offers a unique opportunity to fill this gap in knowledge, these studies rather reveal widely distributed face-selective responses. Moreover, intracranial recordings are complicated by interindividual variability in neuroanatomy, ambiguity in definition, and quantification of responses of interest, as well as limited access to sulci with ECoG. Here, we propose to combine SEEG in large samples of individuals with fast periodic visual stimulation to objectively define, quantify, and characterize face categorization across the whole VOTC. This approach reconciles the wide distribution of neural face categorization responses with their (right) hemispheric and regional specialization, and reveals several face-selective regions in anterior VOTC sulci. We outline the challenges of this research program to understand the neural basis of face categorization and high-level visual recognition in general. © 2018 New York Academy of Sciences.

  5. Face recognition from unconstrained three-dimensional face images using multitask sparse representation

    Science.gov (United States)

    Bentaieb, Samia; Ouamri, Abdelaziz; Nait-Ali, Amine; Keche, Mokhtar

    2018-01-01

    We propose and evaluate a three-dimensional (3D) face recognition approach that applies the speeded up robust feature (SURF) algorithm to the depth representation of shape index map, under real-world conditions, using only a single gallery sample for each subject. First, the 3D scans are preprocessed, then SURF is applied on the shape index map to find interest points and their descriptors. Each 3D face scan is represented by keypoints descriptors, and a large dictionary is built from all the gallery descriptors. At the recognition step, descriptors of a probe face scan are sparsely represented by the dictionary. A multitask sparse representation classification is used to determine the identity of each probe face. The feasibility of the approach that uses the SURF algorithm on the shape index map for face identification/authentication is checked through an experimental investigation conducted on Bosphorus, University of Milano Bicocca, and CASIA 3D datasets. It achieves an overall rank one recognition rate of 97.75%, 80.85%, and 95.12%, respectively, on these datasets.
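    The sparse-representation classification step used in this approach can be illustrated with a toy sketch. This is an assumption-laden stand-in, not the published method: random vectors replace SURF keypoint descriptors computed on the shape index map, the dictionary is built from gallery descriptors, a probe is coded greedily with orthogonal matching pursuit, and identity is assigned by per-class reconstruction error.

```python
import numpy as np

def omp(D, x, n_nonzero):
    """Orthogonal matching pursuit: greedy sparse code of x over dictionary D."""
    residual, idx = x.copy(), []
    for _ in range(n_nonzero):
        idx.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, idx], x, rcond=None)
        residual = x - D[:, idx] @ coef
    code = np.zeros(D.shape[1])
    code[idx] = coef
    return code

def src_classify(D, labels, x, n_nonzero=5):
    """Assign x to the class whose dictionary atoms best reconstruct it."""
    code = omp(D, x, n_nonzero)
    errs = {c: np.linalg.norm(x - D[:, labels == c] @ code[labels == c])
            for c in np.unique(labels)}
    return min(errs, key=errs.get)

rng = np.random.default_rng(1)
# Toy "gallery": 3 subjects, 10 synthetic 32-D descriptors each, unit norm.
protos = rng.normal(size=(3, 32))
D = np.vstack([p + 0.1 * rng.normal(size=(10, 32)) for p in protos]).T
D /= np.linalg.norm(D, axis=0)
labels = np.repeat(np.arange(3), 10)
probe = protos[1] + 0.1 * rng.normal(size=32)  # noisy descriptor of subject 1
print(src_classify(D, labels, probe))
```

    The paper's multitask variant codes many probe descriptors jointly and pools their class errors; the sketch shows the single-descriptor core of that idea.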

  6. Simple thermal to thermal face verification method based on local texture descriptors

    Science.gov (United States)

    Grudzien, A.; Palka, Norbert; Kowalski, M.

    2017-08-01

    Biometrics is a science that studies and analyzes physical structure of a human body and behaviour of people. Biometrics found many applications ranging from border control systems, forensics systems for criminal investigations to systems for access control. Unique identifiers, also referred to as modalities are used to distinguish individuals. One of the most common and natural human identifiers is a face. As a result of decades of investigations, face recognition achieved high level of maturity, however recognition in visible spectrum is still challenging due to illumination aspects or new ways of spoofing. One of the alternatives is recognition of face in different parts of light spectrum, e.g. in infrared spectrum. Thermal infrared offer new possibilities for human recognition due to its specific properties as well as mature equipment. In this paper we present the scheme of subject's verification methodology by using facial images in thermal range. The study is focused on the local feature extraction methods and on the similarity metrics. We present comparison of two local texture-based descriptors for thermal 1-to-1 face recognition.

  7. From face processing to face recognition: Comparing three different processing levels.

    Science.gov (United States)

    Besson, G; Barragan-Jason, G; Thorpe, S J; Fabre-Thorpe, M; Puma, S; Ceccaldi, M; Barbeau, E J

    2017-01-01

    Verifying that a face is from a target person (e.g. finding someone in the crowd) is a critical ability of the human face processing system. Yet how fast this can be performed is unknown. The 'entry-level shift due to expertise' hypothesis suggests that - since humans are face experts - processing faces should be as fast - or even faster - at the individual than at superordinate levels. In contrast, the 'superordinate advantage' hypothesis suggests that faces are processed from coarse to fine, so that the opposite pattern should be observed. To clarify this debate, three different face processing levels were compared: (1) a superordinate face categorization level (i.e. detecting human faces among animal faces), (2) a face familiarity level (i.e. recognizing famous faces among unfamiliar ones) and (3) verifying that a face is from a target person, our condition of interest. The minimal speed at which faces can be categorized (∼260ms) or recognized as familiar (∼360ms) has largely been documented in previous studies, and thus provides boundaries to compare our condition of interest to. Twenty-seven participants were included. The recent Speed and Accuracy Boosting procedure paradigm (SAB) was used since it constrains participants to use their fastest strategy. Stimuli were presented either upright or inverted. Results revealed that verifying that a face is from a target person (minimal RT at ∼260ms) was remarkably fast but longer than the face categorization level (∼240ms) and was more sensitive to face inversion. In contrast, it was much faster than recognizing a face as familiar (∼380ms), a level severely affected by face inversion. Face recognition corresponding to finding a specific person in a crowd thus appears achievable in only a quarter of a second. In favor of the 'superordinate advantage' hypothesis or coarse-to-fine account of the face visual hierarchy, these results suggest a graded engagement of the face processing system across processing

  8. The Functional Neuroanatomy of Human Face Perception.

    Science.gov (United States)

    Grill-Spector, Kalanit; Weiner, Kevin S; Kay, Kendrick; Gomez, Jesse

    2017-09-15

    Face perception is critical for normal social functioning and is mediated by a network of regions in the ventral visual stream. In this review, we describe recent neuroimaging findings regarding the macro- and microscopic anatomical features of the ventral face network, the characteristics of white matter connections, and basic computations performed by population receptive fields within face-selective regions composing this network. We emphasize the importance of the neural tissue properties and white matter connections of each region, as these anatomical properties may be tightly linked to the functional characteristics of the ventral face network. We end by considering how empirical investigations of the neural architecture of the face network may inform the development of computational models and shed light on how computations in the face network enable efficient face perception.

  9. The Body That Speaks: Recombining Bodies and Speech Sources in Unscripted Face-to-Face Communication.

    Science.gov (United States)

    Gillespie, Alex; Corti, Kevin

    2016-01-01

    This article examines advances in research methods that enable experimental substitution of the speaking body in unscripted face-to-face communication. A taxonomy of six hybrid social agents is presented by combining three types of bodies (mechanical, virtual, and human) with either an artificial or human speech source. Our contribution is to introduce and explore the significance of two particular hybrids: (1) the cyranoid method that enables humans to converse face-to-face through the medium of another person's body, and (2) the echoborg method that enables artificial intelligence to converse face-to-face through the medium of a human body. These two methods are distinct in being able to parse the unique influence of the human body when combined with various speech sources. We also introduce a new framework for conceptualizing the body's role in communication, distinguishing three levels: self's perspective on the body, other's perspective on the body, and self's perspective of other's perspective on the body. Within each level the cyranoid and echoborg methodologies make important research questions tractable. By conceptualizing and synthesizing these methods, we outline a novel paradigm of research on the role of the body in unscripted face-to-face communication.

  10. Face recognition accuracy of forensic examiners, superrecognizers, and face recognition algorithms.

    Science.gov (United States)

    Phillips, P Jonathon; Yates, Amy N; Hu, Ying; Hahn, Carina A; Noyes, Eilidh; Jackson, Kelsey; Cavazos, Jacqueline G; Jeckeln, Géraldine; Ranjan, Rajeev; Sankaranarayanan, Swami; Chen, Jun-Cheng; Castillo, Carlos D; Chellappa, Rama; White, David; O'Toole, Alice J

    2018-05-29

    Achieving the upper limits of face identification accuracy in forensic applications can minimize errors that have profound social and personal consequences. Although forensic examiners identify faces in these applications, systematic tests of their accuracy are rare. How can we achieve the most accurate face identification: using people and/or machines working alone or in collaboration? In a comprehensive comparison of face identification by humans and computers, we found that forensic facial examiners, facial reviewers, and superrecognizers were more accurate than fingerprint examiners and students on a challenging face identification test. Individual performance on the test varied widely. On the same test, four deep convolutional neural networks (DCNNs), developed between 2015 and 2017, identified faces within the range of human accuracy. Accuracy of the algorithms increased steadily over time, with the most recent DCNN scoring above the median of the forensic facial examiners. Using crowd-sourcing methods, we fused the judgments of multiple forensic facial examiners by averaging their rating-based identity judgments. Accuracy was substantially better for fused judgments than for individuals working alone. Fusion also served to stabilize performance, boosting the scores of lower-performing individuals and decreasing variability. Single forensic facial examiners fused with the best algorithm were more accurate than the combination of two examiners. Therefore, collaboration among humans and between humans and machines offers tangible benefits to face identification accuracy in important applications. These results offer an evidence-based roadmap for achieving the most accurate face identification possible. Copyright © 2018 the Author(s). Published by PNAS.
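    The fusion result in this record comes from averaging examiners' rating-based identity judgments. A minimal simulation with entirely synthetic "examiner ratings" (all numbers are assumptions for the demo) shows why averaging independent noisy judgments tends to stabilize and improve accuracy:

```python
import numpy as np

rng = np.random.default_rng(2)
n_pairs, n_examiners = 200, 4
truth = rng.integers(0, 2, n_pairs)        # 1 = same identity, 0 = different
signal = np.where(truth == 1, 1.0, -1.0)
# Each examiner's rating: shared evidence plus independent judgment noise.
ratings = signal[:, None] + rng.normal(scale=1.5, size=(n_pairs, n_examiners))
fused = ratings.mean(axis=1)               # fusion = average the ratings

def accuracy(scores):
    return np.mean((scores > 0).astype(int) == truth)

solo = accuracy(ratings[:, 0])             # one examiner working alone
team = accuracy(fused)                     # fused judgment of four examiners
print(solo, team)
```

    Averaging leaves the shared signal intact while shrinking the independent noise, which is the same mechanism behind the boost in fused scores reported above.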

  11. Automated Facial Coding Software Outperforms People in Recognizing Neutral Faces as Neutral from Standardized Datasets

    Directory of Open Access Journals (Sweden)

    Peter Lewinski

    2015-09-01

    Full Text Available Little is known about people’s accuracy in recognizing neutral faces as neutral. In this paper, I demonstrate the importance of knowing how well people recognize neutral faces. I contrasted human recognition scores of 100 typical, neutral front-up facial images with scores of an arguably objective judge – automated facial coding (AFC) software. I hypothesized that the software would outperform humans in recognizing neutral faces because of the inherently objective nature of computer algorithms. Results confirmed this hypothesis. I provided the first-ever evidence that computer software (90% was more accurate in recognizing neutral faces than people were (59%. I posited two theoretical mechanisms, i.e. smile-as-a-baseline and false recognition of emotion, as possible explanations for my findings.

  12. 3D imaging by serial block face scanning electron microscopy for materials science using ultramicrotomy

    Energy Technology Data Exchange (ETDEWEB)

    Hashimoto, Teruo, E-mail: t.hashimoto@manchester.ac.uk; Thompson, George E.; Zhou, Xiaorong; Withers, Philip J.

    2016-04-15

    Mechanical serial block face scanning electron microscopy (SBFSEM) has emerged as a means of obtaining three dimensional (3D) electron images over volumes much larger than possible by focused ion beam (FIB) serial sectioning and at higher spatial resolution than achievable with conventional X-ray computed tomography (CT). Such high resolution 3D electron images can be employed for precisely determining the shape, volume fraction, distribution and connectivity of important microstructural features. While soft (fixed or frozen) biological samples are particularly well suited for nanoscale sectioning using an ultramicrotome, the technique can also produce excellent 3D images at electron microscope resolution in a time- and resource-efficient manner for engineering materials. Currently, a lack of appreciation of the capabilities of ultramicrotomy and the operational challenges associated with minimising artefacts for different materials is limiting its wider application to engineering materials. Consequently, this paper outlines the current state of the art for SBFSEM, examining in detail how damage is introduced during slicing and highlighting strategies for minimising such damage. A particular focus of the study is the acquisition of 3D images for a variety of metallic and coated systems. - Highlights: • The roughness of the ultramicrotomed block face of AA2024 in the Al area was 1.2 nm. • Surface texture associated with chattering was evident in grains with the 45° diamond knife. • A 76° rake angle minimises the stress on the block face. • Using the oscillating knife with a cutting speed of 0.04 mm s⁻¹ minimised the surface texture. • A variety of material applications were presented.

  13. Human fatigue expression recognition through image-based dynamic multi-information and bimodal deep learning

    Science.gov (United States)

    Zhao, Lei; Wang, Zengcai; Wang, Xiaojin; Qi, Yazhou; Liu, Qing; Zhang, Guoxin

    2016-09-01

    Human fatigue is an important cause of traffic accidents. To improve the safety of transportation, we propose, in this paper, a framework for fatigue expression recognition using image-based facial dynamic multi-information and a bimodal deep neural network. First, the landmarks of the face region and the texture of the eye region, which complement each other in fatigue expression recognition, are extracted from facial image sequences captured by a single camera. Then, two stacked autoencoder neural networks are trained for landmarks and texture, respectively. Finally, the two trained neural networks are combined by learning a joint layer on top of them to construct a bimodal deep neural network. The model can be used to extract a unified representation that fuses the landmark and texture modalities and to classify fatigue expressions accurately. The proposed system is tested on a human fatigue dataset obtained from an actual driving environment. The experimental results demonstrate that the proposed method performs stably and robustly, reaching an average accuracy of 96.2%.

  14. Neonatal face-to-face interactions promote later social behaviour in infant rhesus monkeys

    Science.gov (United States)

    Dettmer, Amanda M.; Kaburu, Stefano S. K.; Simpson, Elizabeth A.; Paukner, Annika; Sclafani, Valentina; Byers, Kristen L.; Murphy, Ashley M.; Miller, Michelle; Marquez, Neal; Miller, Grace M.; Suomi, Stephen J.; Ferrari, Pier F.

    2016-01-01

    In primates, including humans, mothers engage in face-to-face interactions with their infants, with frequencies varying both within and across species. However, the impact of this variation in face-to-face interactions on infant social development is unclear. Here we report that infant monkeys (Macaca mulatta) who engaged in more neonatal face-to-face interactions with mothers have increased social interactions at 2 and 5 months. In a controlled experiment, we show that this effect is not due to physical contact alone: monkeys randomly assigned to receive additional neonatal face-to-face interactions (mutual gaze and intermittent lip-smacking) with human caregivers display increased social interest at 2 months, compared with monkeys who received only additional handling. These studies suggest that face-to-face interactions from birth promote young primate social interest and competency. PMID:27300086

  15. Orientation Encoding and Viewpoint Invariance in Face Recognition: Inferring Neural Properties from Large-Scale Signals.

    Science.gov (United States)

    Ramírez, Fernando M

    2018-05-01

    Viewpoint-invariant face recognition is thought to be subserved by a distributed network of occipitotemporal face-selective areas that, except for the human anterior temporal lobe, have been shown to also contain face-orientation information. This review begins by highlighting the importance of bilateral symmetry for viewpoint-invariant recognition and face-orientation perception. Then, monkey electrophysiological evidence is surveyed describing key tuning properties of face-selective neurons-including neurons bimodally tuned to mirror-symmetric face-views-followed by studies combining functional magnetic resonance imaging (fMRI) and multivariate pattern analyses to probe the representation of face-orientation and identity information in humans. Altogether, neuroimaging studies suggest that face-identity is gradually disentangled from face-orientation information along the ventral visual processing stream. The evidence seems to diverge, however, regarding the prevalent form of tuning of neural populations in human face-selective areas. In this context, caveats possibly leading to erroneous inferences regarding mirror-symmetric coding are exposed, including the need to distinguish angular from Euclidean distances when interpreting multivariate pattern analyses. On this basis, this review argues that evidence from the fusiform face area is best explained by a view-sensitive code reflecting head angular disparity, consistent with a role of this area in face-orientation perception. Finally, the importance is stressed of explicit models relating neural properties to large-scale signals.

  16. Face recognition based on improved BP neural network

    Directory of Open Access Journals (Sweden)

    Yue Gaili

    2017-01-01

    Full Text Available In order to improve the recognition rate of face recognition, a face recognition algorithm based on histogram equalization, PCA and a BP neural network is proposed. First, the face image is preprocessed by histogram equalization. Then, the classical PCA algorithm is used to extract the features of the histogram-equalized image and obtain its principal components. The BP neural network is then trained on the training samples using an improved weight adjustment method, because the conventional BP algorithm suffers from slow convergence, a tendency to fall into local minima, and a lengthy training process. Finally, the test samples are input to the trained BP neural network to classify and identify the face images, and the recognition rate is obtained. Simulation experiments on face images from the ORL database show that the improved BP neural network face recognition method can effectively improve the recognition rate.
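    The PCA-plus-BP pipeline in this record can be sketched with synthetic data. This is a hedged toy illustration, not the paper's implementation: random vectors stand in for histogram-equalized face images, PCA is computed by SVD, and momentum is used here as one common "improved weight adjustment" for plain backpropagation (the paper's exact modification is not specified in the abstract).

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in for histogram-equalized face vectors: 3 subjects, 20 images each.
protos = rng.normal(size=(3, 64))
X = np.repeat(protos, 20, axis=0) + 0.3 * rng.normal(size=(60, 64))
y = np.repeat(np.arange(3), 20)

# PCA step: project onto the top principal components ("eigenfaces").
mean = X.mean(0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
Z = (X - mean) @ Vt[:10].T          # 64-D images -> 10-D features
Z /= Z.std(0) + 1e-9                # standardize components for stable training

# One-hidden-layer network trained by backprop with momentum.
W1 = 0.1 * rng.normal(size=(10, 16)); b1 = np.zeros(16)
W2 = 0.1 * rng.normal(size=(16, 3));  b2 = np.zeros(3)
vW1 = np.zeros_like(W1); vW2 = np.zeros_like(W2)
T = np.eye(3)[y]                    # one-hot targets
lr, mom = 0.1, 0.9
for _ in range(500):
    H = np.tanh(Z @ W1 + b1)
    logits = H @ W2 + b2
    P = np.exp(logits - logits.max(1, keepdims=True))
    P /= P.sum(1, keepdims=True)
    dO = (P - T) / len(Z)           # softmax cross-entropy gradient
    dW2 = H.T @ dO
    dH = dO @ W2.T * (1 - H ** 2)   # backprop through tanh
    dW1 = Z.T @ dH
    vW2 = mom * vW2 - lr * dW2; W2 += vW2; b2 -= lr * dO.sum(0)
    vW1 = mom * vW1 - lr * dW1; W1 += vW1; b1 -= lr * dH.sum(0)

pred = np.argmax(np.tanh(Z @ W1 + b1) @ W2 + b2, axis=1)
print(np.mean(pred == y))           # training accuracy on the toy set
```

    Momentum accumulates a running average of past gradients, which damps oscillation and speeds convergence relative to vanilla BP; other weight-adjustment schemes (adaptive learning rates, for instance) address the same weaknesses.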

  17. View based approach to forensic face recognition

    NARCIS (Netherlands)

    Dutta, A.; van Rootseler, R.T.A.; Veldhuis, Raymond N.J.; Spreeuwers, Lieuwe Jan

    Face recognition is a challenging problem for surveillance view images commonly encountered in a forensic face recognition case. One approach to deal with a non-frontal test image is to synthesize the corresponding frontal view image and compare it with frontal view reference images. However, it is

  18. Application of robust face recognition in video surveillance systems

    Science.gov (United States)

    Zhang, De-xin; An, Peng; Zhang, Hao-xiang

    2018-03-01

    In this paper, we propose a video searching system that utilizes face recognition as its search indexing feature. As the use of video cameras has greatly increased in recent years, face recognition is a natural fit for searching for targeted individuals within vast amounts of video data. However, the performance of such searching depends on the quality of the face images recorded in the video signals. Since surveillance cameras record subjects without fixed postures, face occlusion is very common in everyday video. The proposed system builds a model for occluded faces using fuzzy principal component analysis (FPCA) and reconstructs the human faces from the available information. Experimental results show that the system is highly efficient in processing real-life videos and is robust to various kinds of face occlusions. Hence it can relieve human reviewers from constant monitoring and greatly enhance efficiency as well. The proposed system has been installed and applied in various environments and has already demonstrated its power by helping to solve real cases.
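The core reconstruction idea (fill in occluded pixels from a face subspace model fitted to the visible pixels) can be sketched as follows. Note this is a simplified stand-in using standard eigenfaces; the paper's fuzzy PCA membership weighting is not detailed in the abstract, and the mask is assumed to be known.

```python
import numpy as np

def fit_eigenfaces(X, k):
    """Fit a k-dimensional PCA (eigenface) model to row-stacked face vectors."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]  # mean face and (k, n_pixels) basis

def reconstruct_occluded(x, mask, mu, V):
    """Reconstruct a face vector x whose pixels with mask == False are occluded.

    Subspace coefficients are estimated by least squares over the visible
    pixels only; the occluded region is then filled from the model.
    """
    A = V[:, mask].T                                   # (n_visible, k)
    c, *_ = np.linalg.lstsq(A, x[mask] - mu[mask], rcond=None)
    x_hat = mu + c @ V                                 # model reconstruction
    out = x.copy()
    out[~mask] = x_hat[~mask]                          # keep trusted pixels as-is
    return out
```

When the training faces truly span a low-dimensional subspace and enough pixels remain visible, the occluded region is recovered almost exactly; real faces only approximately satisfy this, so the filled-in region is a plausible estimate rather than the ground truth.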

  19. Body image and face image in Asian American and white women: Examining associations with surveillance, construal of self, perfectionism, and sociocultural pressures.

    Science.gov (United States)

    Frederick, David A; Kelly, Mackenzie C; Latner, Janet D; Sandhu, Gaganjyot; Tsong, Yuying

    2016-03-01

    Asian American women experience sociocultural pressures that could place them at increased risk for experiencing body and face dissatisfaction. Asian American and White women completed measures of appearance evaluation, overweight preoccupation, face satisfaction, face dissatisfaction frequency, perfectionism, surveillance, interdependent and independent self-construal, and perceived sociocultural pressures. In Study 1 (N=182), Asian American women were more likely than White women to report low appearance evaluation (24% vs. 12%; d=-0.50) and to be sometimes-always dissatisfied with the appearance of their eyes (38% vs. 6%; d=0.90) and face overall (59% vs. 34%; d=0.41). In Study 2 (N=488), they were more likely to report low appearance evaluation (36% vs. 23%; d=-0.31) and were less likely to report high eye appearance satisfaction (59% vs. 88%; d=-0.84). The findings highlight the importance of considering ethnic differences when assessing body and face image. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Testing the connections within face processing circuitry in Capgras delusion with diffusion imaging tractography

    Directory of Open Access Journals (Sweden)

    Maria A. Bobes

    2016-01-01

    Full Text Available Although Capgras delusion (CD) patients are capable of recognizing familiar faces, they hold the delusional belief that some relatives have been replaced by impostors. CD has been explained as a selective disruption of a pathway processing the affective value of familiar faces. To test the integrity of connections within the face processing circuitry, diffusion tensor imaging was performed in a CD patient and 10 age-matched controls. Voxel-based morphometry indicated gray matter damage in right frontal areas. Tractography was used to examine two important tracts of the face processing circuitry: the inferior fronto-occipital fasciculus (IFOF) and the inferior longitudinal fasciculus (ILF). The superior longitudinal fasciculus (SLF) and commissural tracts were also assessed. The CD patient did not differ from controls in the commissural fibers or the SLF. The right and left ILF, and the right IFOF, were also equivalent to those of controls. However, the left IFOF was significantly reduced with respect to controls and showed a significant dissociation from the ILF, representing a selective impairment of the fiber tract connecting occipital and frontal areas. This suggests a possible involvement of the IFOF in the affective processing of faces in typical observers and in covert recognition in some cases of prosopagnosia.

  1. Typical and Atypical Development of Functional Connectivity in the Face Network.

    Science.gov (United States)

    Song, Yiying; Zhu, Qi; Li, Jingguang; Wang, Xu; Liu, Jia

    2015-10-28

    Extensive studies have demonstrated that face recognition performance does not reach adult levels until adolescence. However, there is no consensus on whether such prolonged improvement stems from the development of general cognitive factors or of face-specific mechanisms. Here, we used behavioral experiments and functional magnetic resonance imaging (fMRI) to evaluate these two hypotheses. With a large cohort of children (n = 379), we found that face-specific recognition ability increased with age throughout childhood and into late adolescence, in both face memory and face perception. Neurally, to circumvent the potential problem of age differences in task performance, attention, or cognitive strategies in task-state fMRI studies, we measured the resting-state functional connectivity (RSFC) between the occipital face area (OFA) and fusiform face area (FFA) in the human brain and found that the OFA-FFA RSFC increased until 11-13 years of age. Moreover, the OFA-FFA RSFC was selectively impaired in adults with developmental prosopagnosia (DP). In contrast, no age-related changes or differences between DP and normal adults were observed for RSFCs in the object system. Finally, the OFA-FFA RSFC matured earlier than face selectivity in either the OFA or FFA. These results suggest a critical role of the OFA-FFA RSFC in the development of face recognition. Together, our findings support the hypothesis that the prolonged development of face recognition is face specific, not domain general. Copyright © 2015 the authors.

  2. Face Detection and Recognition

    National Research Council Canada - National Science Library

    Jain, Anil K

    2004-01-01

    .... Specifically, the report addresses the problem of detecting faces in color images in the presence of various lighting conditions and complex backgrounds as well as recognizing faces under variations...

  3. Advances in face detection and facial image analysis

    CERN Document Server

    Celebi, M; Smolka, Bogdan

    2016-01-01

    This book presents the state-of-the-art in face detection and analysis. It outlines new research directions, including in particular psychology-based facial dynamics recognition, aimed at various applications such as behavior analysis, deception detection, and diagnosis of various psychological disorders. Topics of interest include face and facial landmark detection, face recognition, facial expression and emotion analysis, facial dynamics analysis, face classification, identification, and clustering, and gaze direction and head pose estimation, as well as applications of face analysis.

  4. The human face of health disparities.

    Science.gov (United States)

    Green, Alexander R

    2003-01-01

    In the last 20 years, the issue of disparities in health between racial/ethnic groups has moved from the realm of common sense and anecdote to the realm of science. Hard, cold data now force us to consider what many had long taken for granted. Not only does health differ by race/ethnicity, but our health care system itself is deeply biased. From lack of diversity in the leadership and workforce, to ethnocentric systems of care, to biased clinical decision-making, the American health care system is geared to treat the majority, while the minority suffers. The photos shown here are of patients and scenes that recall some of the important landmarks in research on racial/ethnic disparities in health. The purpose is to put faces and humanity onto the numbers. While we now have great bodies of evidence upon which to lobby for change, in the end, each statistic still represents a personal tragedy or an individual triumph.

  5. Eigenvector Weighting Function in Face Recognition

    Directory of Open Access Journals (Sweden)

    Pang Ying Han

    2011-01-01

    Full Text Available Graph-based subspace learning is a class of dimensionality reduction techniques used in face recognition. The technique reveals the local manifold structure of face data that is hidden in the image space via a linear projection. However, real-world face data may be too complex to measure due to both external imaging noise and the intra-class variations of the face images. Hence, features extracted by graph-based techniques can be noisy. An appropriate weight should be imposed on the data features for better data discrimination. In this paper, a piecewise weighting function, known as the Eigenvector Weighting Function (EWF), is proposed and implemented in two graph-based subspace learning techniques, namely Locality Preserving Projection and Neighbourhood Preserving Embedding. Specifically, the computed projection subspace of the learning approach is decomposed into three partitions: a subspace due to intra-class variations, an intrinsic face subspace, and a subspace attributed to imaging noise. Projected data features are weighted differently in these subspaces to emphasize the intrinsic face subspace while penalizing the other two. Experiments on the FERET and FRGC databases are conducted to show the promising performance of the proposed technique.
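The piecewise weighting idea can be illustrated in a few lines. The cut points and weight values below are hypothetical, since the abstract does not give the actual EWF; only the structure (three eigenvector partitions, each scaled differently) follows the description.

```python
import numpy as np

def weighted_projection(X, W, cut1, cut2, w_intra=0.5, w_face=1.0, w_noise=0.1):
    """Project X through W, then weight the projected features piecewise.

    W is an (n_features, d) projection matrix (e.g. from LPP or NPE) whose
    columns are ordered by eigenvalue. Columns [0:cut1) are treated as the
    intra-class-variation partition, [cut1:cut2) as the intrinsic face
    subspace, and [cut2:) as the imaging-noise partition.
    """
    weights = np.concatenate([
        np.full(cut1, w_intra),
        np.full(cut2 - cut1, w_face),
        np.full(W.shape[1] - cut2, w_noise),
    ])
    return (X @ W) * weights  # broadcast the per-dimension weights over rows
```

Emphasizing the middle partition while shrinking the other two is the essence of the scheme; in the paper the weighting function itself is piecewise and derived from the learned subspaces rather than being fixed constants.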

  6. En-face Flying Spot OCT/Ophthalmoscope

    Science.gov (United States)

    Rosen, Richard B.; Garcia, Patricia; Podoleanu, Adrian Gh.; Cucu, Radu; Dobre, George; Trifanov, Irina; van Velthoven, Mirjam E. J.; de Smet, Marc D.; Rogers, John A.; Hathaway, Mark; Pedro, Justin; Weitz, Rishard

    This is a review of a technique for high-resolution imaging of the eye that allows multiple sample sectioning perspectives with different axial resolutions. The technique involves the flying spot approach employed in confocal scanning laser ophthalmoscopy which is extended to OCT imaging via time domain en face fast lateral scanning. The ability of imaging with multiple axial resolutions stimulated the development of the dual en face OCT-confocal imaging technology. Dual imaging also allows various other imaging combinations, such as OCT with confocal microscopy for imaging the eye anterior segment and OCT with fluorescence angiography imaging.

  7. Searching for Faces is Easiest when they are Cross(es)

    Directory of Open Access Journals (Sweden)

    Guy M. Wallis

    2011-05-01

    Full Text Available It has been suggested that certain facial expressions are subject to enhanced processing to maximize the speed and accuracy with which humans locate individuals posing an imminent threat. Evidence supporting this proposal comes largely from visual search tasks which have demonstrated that threatening expressions are detected more rapidly than nonthreatening ones. An open criticism of this effect is that it may be due to low-level visual artifacts rather than biological preparedness. One successful approach for controlling low-level, image-based differences has been to use schematic faces (simplified line drawings). We report experiments aimed at discovering whether the enhanced processing of threatening schematic faces might also be due to low-level features. The first study replicated the standard threat search advantage, but also found a comparable effect with similar stimuli composed of obliquely oriented lines. The effect was also present with these stimuli rotated, a manipulation which served to remove any residual resemblance the abstract images had to a face. The results suggest that low-level features underlie the search advantage for angry schematic faces, thereby undermining a key source of evidence for a search advantage for specific facial expressions.

  8. Processing Distracting Non-face Emotional Images: No Evidence of an Age-Related Positivity Effect.

    Science.gov (United States)

    Madill, Mark; Murray, Janice E

    2017-01-01

    Cognitive aging may be accompanied by increased prioritization of social and emotional goals that enhance positive experiences and emotional states. The socioemotional selectivity theory suggests this may be achieved by giving preference to positive information and avoiding or suppressing negative information. Although there is some evidence of a positivity bias in controlled attention tasks, it remains unclear whether a positivity bias extends to the processing of affective stimuli presented outside focused attention. In two experiments, we investigated age-related differences in the effects of to-be-ignored non-face affective images on target processing. In Experiment 1, 27 older (64-90 years) and 25 young adults (19-29 years) made speeded valence judgments about centrally presented positive or negative target images taken from the International Affective Picture System. To-be-ignored distractor images were presented above and below the target image and were either positive, negative, or neutral in valence. The distractors were considered task relevant because they shared emotional characteristics with the target stimuli. Both older and young adults responded slower to targets when distractor valence was incongruent with target valence relative to when distractors were neutral. Older adults responded faster to positive than to negative targets but did not show increased interference effects from positive distractors. In Experiment 2, affective distractors were task irrelevant as the target was a three-digit array and did not share emotional characteristics with the distractors. Twenty-six older (63-84 years) and 30 young adults (18-30 years) gave speeded responses on a digit disparity task while ignoring the affective distractors positioned in the periphery. Task performance in either age group was not influenced by the task-irrelevant affective images. In keeping with the socioemotional selectivity theory, these findings suggest that older adults preferentially

  9. Reference frames for spatial frequency in face representation differ in the temporal visual cortex and amygdala.

    Science.gov (United States)

    Inagaki, Mikio; Fujita, Ichiro

    2011-07-13

    Social communication in nonhuman primates and humans is strongly affected by facial information from other individuals. Many cortical and subcortical brain areas are known to be involved in processing facial information. However, how the neural representation of faces differs across different brain areas remains unclear. Here, we demonstrate that the reference frame for spatial frequency (SF) tuning of face-responsive neurons differs in the temporal visual cortex and amygdala in monkeys. Consistent with psychophysical properties for face recognition, temporal cortex neurons were tuned to image-based SFs (cycles/image) and showed viewing distance-invariant representation of face patterns. On the other hand, many amygdala neurons were influenced by retina-based SFs (cycles/degree), a characteristic that is useful for social distance computation. The two brain areas also differed in the luminance contrast sensitivity of face-responsive neurons; amygdala neurons sharply reduced their responses to low luminance contrast images, while temporal cortex neurons maintained the level of their responses. From these results, we conclude that different types of visual processing in the temporal visual cortex and the amygdala contribute to the construction of the neural representations of faces.

  10. The human face as a dynamic tool for social communication

    OpenAIRE

    Jack, Rachael E.; Schyns, Philippe G.

    2015-01-01

    As a highly social species, humans frequently exchange social information to support almost all facets of life. One of the richest and most powerful tools in social communication is the face, from which observers can quickly and easily make a number of inferences — about identity, gender, sex, age, race, ethnicity, sexual orientation, physical health, attractiveness, emotional state, personality traits, pain or physical pleasure, deception, and even social status. With the advent of the digit...

  11. Humanity in God's Image: An Interdisciplinary Exploration

    DEFF Research Database (Denmark)

    Welz, Claudia

    How can we, in our times, understand the biblical concept that human beings have been created in the image of an invisible God? This is a perennial but increasingly pressing question that lies at the heart of theological anthropology. In Humanity in God's Image: An Interdisciplinary Exploration, Claudia Welz offers an interdisciplinary exploration of theological and ethical 'visions' of the invisible. By analysing poetry and art, Welz exemplifies human self-understanding in the interface between the visual and the linguistic. The content of the imago Dei cannot be defined apart from the image...

  12. Sub-pattern based multi-manifold discriminant analysis for face recognition

    Science.gov (United States)

    Dai, Jiangyan; Guo, Changlu; Zhou, Wei; Shi, Yanjiao; Cong, Lin; Yi, Yugen

    2018-04-01

    In this paper, we present a Sub-pattern based Multi-manifold Discriminant Analysis (SpMMDA) algorithm for face recognition. Unlike the existing Multi-manifold Discriminant Analysis (MMDA) approach, which is based on holistic information of the face image, SpMMDA operates on sub-images partitioned from the original face image and extracts discriminative local features from the sub-images separately. Moreover, the structural information of different sub-images from the same face image is considered in the proposed method, with the aim of further improving recognition performance. Extensive experiments on three standard face databases (Extended YaleB, CMU PIE and AR) demonstrate that the proposed method is effective and outperforms several other sub-pattern based face recognition methods.
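The sub-image partitioning step common to sub-pattern based methods can be sketched as follows; the grid size is an illustrative choice, and the subsequent per-sub-pattern discriminant analysis of SpMMDA is not reproduced here.

```python
import numpy as np

def sub_patterns(img, rows=2, cols=2):
    """Partition a 2-D face image into a rows x cols grid of non-overlapping
    sub-images and vectorize each one, as sub-pattern methods require."""
    H, W = img.shape
    return [img[i * H // rows:(i + 1) * H // rows,
                j * W // cols:(j + 1) * W // cols].ravel()
            for i in range(rows) for j in range(cols)]
```

Each returned vector would then be fed to its own local feature extractor, with the per-region results combined for the final decision.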

  13. Emotional expectations influence neural sensitivity to fearful faces in humans:An event-related potential study

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    The present study tested whether neural sensitivity to salient emotional facial expressions was influenced by emotional expectations induced by a cue that validly predicted the expression of a subsequently presented target face. Event-related potentials (ERPs) elicited by fearful and neutral faces were recorded while participants performed a gender discrimination task under cued (‘expected’) and uncued (‘unexpected’) conditions. The behavioral results revealed that accuracy was lower for fearful compared with neutral faces in the unexpected condition, while accuracy was similar for fearful and neutral faces in the expected condition. ERP data revealed increased amplitudes in the P2 component and 200–250 ms interval for unexpected fearful versus neutral faces. By contrast, ERP responses were similar for fearful and neutral faces in the expected condition. These findings indicate that human neural sensitivity to fearful faces is modulated by emotional expectations. Although the neural system is sensitive to unpredictable emotionally salient stimuli, sensitivity to salient stimuli is reduced when these stimuli are predictable.

  14. Face Liveness Detection Based on Skin Blood Flow Analysis

    Directory of Open Access Journals (Sweden)

    Shun-Yi Wang

    2017-12-01

    Full Text Available Face recognition systems have been widely adopted for user authentication in security systems due to their simplicity and effectiveness. However, spoofing attacks, including printed photo, displayed photo, and replayed video attacks, pose a critical challenge to authentication, allowing malicious invaders to gain access to the system. This paper proposes two novel features for face liveness detection systems to protect biometric authentication systems against printed photo and replayed video attacks. The first feature captures the texture difference between the red and green channels of face images, inspired by the observation that skin blood flow in the face has properties that enable distinction between live and spoofed face images. The second feature estimates the color distribution in local regions of face images, instead of whole images, because image quality may be more discriminative in small areas. These two features are concatenated, along with a multi-scale local binary pattern feature, and a support vector machine classifier is trained to discriminate between live and spoofed face images. The experimental results show that the performance of the proposed method for face spoof detection is promising compared with that of previously published methods. Furthermore, the proposed system can be implemented in real time, which is valuable for mobile applications.
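A minimal sketch of the two handcrafted features follows, assuming RGB input scaled to [0, 1]. The specific texture measure (vertical-gradient histogram of the red-green difference) and the block-wise color statistics are illustrative stand-ins, as the abstract does not define the exact measures; the classifier stage (SVM) is omitted.

```python
import numpy as np

def rg_texture_diff(img, bins=16):
    """Feature 1 (sketch): histogram of the absolute difference between
    the red- and green-channel vertical gradients of an (H, W, 3) image."""
    gy_r = np.gradient(img[..., 0])[0]   # gradient along image rows, red
    gy_g = np.gradient(img[..., 1])[0]   # gradient along image rows, green
    d = np.abs(gy_r - gy_g)
    hist, _ = np.histogram(d, bins=bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)     # normalize to a distribution

def local_color_stats(img, grid=3):
    """Feature 2 (sketch): per-block channel means and stds over a grid,
    estimating the color distribution in local regions."""
    H, W, _ = img.shape
    feats = []
    for i in range(grid):
        for j in range(grid):
            block = img[i * H // grid:(i + 1) * H // grid,
                        j * W // grid:(j + 1) * W // grid]
            feats += [block.mean(axis=(0, 1)), block.std(axis=(0, 1))]
    return np.concatenate(feats)

def liveness_feature(img):
    """Concatenate the two features into one descriptor for a classifier."""
    return np.concatenate([rg_texture_diff(img), local_color_stats(img)])
```

In the full system this descriptor would be concatenated with a multi-scale LBP feature and passed to an SVM trained on labeled live/spoof examples.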

  15. Neural and behavioral responses to attractiveness in adult and infant faces.

    Science.gov (United States)

    Hahn, Amanda C; Perrett, David I

    2014-10-01

    Facial attractiveness provides a very powerful motivation for sexual and parental behavior. We therefore review the importance of faces to the study of neurobiological control of human reproductive motivations. For heterosexual individuals there is a common brain circuit involving the nucleus accumbens, the medial prefrontal, dorsal anterior cingulate and the orbitofrontal cortices that is activated more by attractive than unattractive faces, particularly for faces of the opposite sex. Behavioral studies indicate parallel effects of attractiveness on incentive salience or willingness to work to see faces. There is some evidence that the reward value of opposite sex attractiveness is more pronounced in men than women, perhaps reflecting the greater importance assigned to physical attractiveness by men when evaluating a potential mate. Sex differences and similarities in response to facial attractiveness are reviewed. Studies comparing heterosexual and homosexual observers indicate the orbitofrontal cortex and mediodorsal thalamus are more activated by faces of the desired sex than faces of the less-preferred sex, independent of observer gender or sexual orientation. Infant faces activate brain regions that partially overlap with those responsive to adult faces. Infant faces provide a powerful stimulus, which also elicits sex differences in behavior and brain responses that appear dependent on sex hormones. There are many facial dimensions affecting perceptions of attractiveness that remain unexplored in neuroimaging, and we conclude by suggesting that future studies combining parametric manipulation of face images, brain imaging, hormone assays and genetic polymorphisms in receptor sensitivity are needed to understand the neural and hormonal mechanisms underlying reproductive drives. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. Pose-invariant face recognition using Markov random fields.

    Science.gov (United States)

    Ho, Huy Tho; Chellappa, Rama

    2013-04-01

    One of the key challenges for current face recognition techniques is how to handle pose variations between the probe and gallery face images. In this paper, we present a method for reconstructing the virtual frontal view from a given nonfrontal face image using Markov random fields (MRFs) and an efficient variant of the belief propagation algorithm. In the proposed approach, the input face image is divided into a grid of overlapping patches, and a globally optimal set of local warps is estimated to synthesize the patches at the frontal view. A set of possible warps for each patch is obtained by aligning it with images from a training database of frontal faces. The alignments are performed efficiently in the Fourier domain using an extension of the Lucas-Kanade algorithm that can handle illumination variations. The problem of finding the optimal warps is then formulated as a discrete labeling problem using an MRF. The reconstructed frontal face image can then be used with any face recognition technique. The two main advantages of our method are that it requires neither manually selected facial landmarks nor head pose estimation. In order to improve the performance of our pose normalization method in face recognition, we also present an algorithm for classifying whether a given face image is at a frontal or nonfrontal pose. Experimental results on different datasets are presented to demonstrate the effectiveness of the proposed approach.

  17. Gaze Cueing by Pareidolia Faces

    OpenAIRE

    Kohske Takahashi; Katsumi Watanabe

    2013-01-01

    Visual images that are not faces are sometimes perceived as faces (the pareidolia phenomenon). While the pareidolia phenomenon provides people with a strong impression that a face is present, it is unclear how deeply pareidolia faces are processed as faces. In the present study, we examined whether a shift in spatial attention would be produced by gaze cueing of face-like objects. A robust cueing effect was observed when the face-like objects were perceived as faces. The magnitude of the cuei...

  18. Complementary Cohort Strategy for Multimodal Face Pair Matching

    DEFF Research Database (Denmark)

    Sun, Yunlian; Nasrollahi, Kamal; Sun, Zhenan

    2016-01-01

    Face pair matching is the task of determining whether two face images represent the same person. Due to the limited expressive information embedded in the two face images as well as various sources of facial variations, it becomes a quite difficult problem. To address the issue of the few available images provided to represent each face, we propose to exploit an extra cohort set (identities in the cohort set are different from those being compared) via a series of cohort list comparisons. Useful cohort coefficients are then extracted from both sorted cohort identities and sorted cohort images for complementary information. To augment its robustness to complicated facial variations, we further employ multiple face modalities owing to their complementary value to each other for the face pair matching task. The final decision is made by fusing the extracted cohort coefficients with the direct matching...

  19. Embedded Face Detection and Recognition

    Directory of Open Access Journals (Sweden)

    Göksel Günlü

    2012-10-01

    Full Text Available The need to increase security in open or public spaces has in turn given rise to the requirement to monitor these spaces and analyse those images on-site and on-time. At this point, the use of smart cameras – whose popularity has been increasing – is one step ahead. With sensors and Digital Signal Processors (DSPs), smart cameras generate ad hoc results by analysing the numeric images transmitted from the sensor by means of a variety of image-processing algorithms. Since the images are processed inside the camera rather than being transmitted to a distant processing unit, the approach does not necessitate high-bandwidth networks or high-powered processing systems; the camera can instantaneously decide on the required access. Nonetheless, on account of restricted memory, processing power and overall power, image-processing algorithms need to be developed and optimized for embedded processors. Among these algorithms, some of the most important are face detection and recognition. A number of face detection and recognition methods have been proposed recently, and many of them have been tested on general-purpose processors. In smart cameras – which are real-life applications of such methods – the widest use is on DSPs. In the present study, the Viola-Jones face detection method – which was reported to run faster on PCs – was optimized for DSPs, and the face recognition method was combined with the developed sub-region and mask-based DCT (Discrete Cosine Transform) approach. As the employed DSP is a fixed-point processor, the computations were performed with integers insofar as possible. To enable face recognition, the image was divided into sub-regions, and from each sub-region the coefficients robust against disruptive elements – such as facial expression, illumination, etc. – were selected as the features. The discrimination of the selected features was enhanced via LDA (Linear Discriminant Analysis) and then employed for recognition. Thanks to its
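The sub-region DCT feature extraction described above can be sketched as follows. The block size and the number of retained low-frequency coefficients are illustrative choices, and the low-frequency selection (top-left square, DC included) simplifies the paper's robust-coefficient masking; a real DSP port would use fixed-point integer arithmetic rather than floats.

```python
import numpy as np

def dct2(block):
    """Orthonormal 2-D DCT-II of a square block, built from the 1-D matrix."""
    N = block.shape[0]
    k, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    C[0] /= np.sqrt(2.0)          # DC row scaling for orthonormality
    return C @ block @ C.T

def subregion_dct_features(img, block=8, keep=3):
    """Split the image into block x block sub-regions and keep the top-left
    keep x keep low-frequency DCT coefficients of each as the feature."""
    H, W = img.shape
    feats = []
    for i in range(0, H - block + 1, block):
        for j in range(0, W - block + 1, block):
            d = dct2(img[i:i + block, j:j + block].astype(float))
            feats.append(d[:keep, :keep].ravel())
    return np.concatenate(feats)
```

In the study's pipeline these per-region features would then be projected through LDA before matching.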

  20. Early (M170) activation of face-specific cortex by face-like objects.

    Science.gov (United States)

    Hadjikhani, Nouchine; Kveraga, Kestutis; Naik, Paulami; Ahlfors, Seppo P

    2009-03-04

    The tendency to perceive faces in random patterns exhibiting configural properties of faces is an example of pareidolia. Perception of 'real' faces has been associated with a cortical response signal arising at approximately 170 ms after stimulus onset, but what happens when nonface objects are perceived as faces? Using magnetoencephalography, we found that objects incidentally perceived as faces evoked an early (165 ms) activation in the ventral fusiform cortex, at a time and location similar to that evoked by faces, whereas common objects did not evoke such activation. An earlier peak at 130 ms was also seen for images of real faces only. Our findings suggest that face perception evoked by face-like objects is a relatively early process, not a late cognitive reinterpretation.

  1. Human wagering behavior depends on opponents' faces.

    Directory of Open Access Journals (Sweden)

    Erik J Schlicht

    Full Text Available Research in competitive games has exclusively focused on how opponent models are developed through previous outcomes and how people's decisions relate to normative predictions. Little is known about how rapid impressions of opponents operate and influence behavior in competitive economic situations, although such subjective impressions have been shown to influence cooperative decision-making. This study investigates whether an opponent's face influences players' wagering decisions in a zero-sum game with hidden information. Participants made risky choices in a simplified poker task while being presented with opponents whose faces differentially correlated with subjective impressions of trust. Surprisingly, we find that threatening face information has little influence on wagering behavior, but faces conveying positive emotional characteristics impact people's decisions: participants took significantly longer and made more mistakes against emotionally positive opponents. Differences in reaction times and percent correct were greatest around the optimal decision boundary, indicating that face information is predominantly used when making decisions during medium-value gambles. Mistakes against emotionally positive opponents resulted from increased folding rates, suggesting that participants may have believed these opponents were betting with hands of greater value than other opponents. According to these results, the best "poker face" for bluffing may not be a neutral face, but rather a face that contains emotional correlates of trustworthiness. Moreover, the results suggest that rapid impressions of an opponent play an important role in competitive games, especially when people have little or no experience with an opponent.

  2. The FaceBase Consortium: a comprehensive resource for craniofacial researchers

    Science.gov (United States)

    Brinkley, James F.; Fisher, Shannon; Harris, Matthew P.; Holmes, Greg; Hooper, Joan E.; Wang Jabs, Ethylin; Jones, Kenneth L.; Kesselman, Carl; Klein, Ophir D.; Maas, Richard L.; Marazita, Mary L.; Selleri, Licia; Spritz, Richard A.; van Bakel, Harm; Visel, Axel; Williams, Trevor J.; Wysocka, Joanna

    2016-01-01

    The FaceBase Consortium, funded by the National Institute of Dental and Craniofacial Research, National Institutes of Health, is designed to accelerate understanding of craniofacial developmental biology by generating comprehensive data resources to empower the research community, exploring high-throughput technology, fostering new scientific collaborations among researchers and human/computer interactions, facilitating hypothesis-driven research and translating science into improved health care to benefit patients. The resources generated by the FaceBase projects include a number of dynamic imaging modalities, genome-wide association studies, software tools for analyzing human facial abnormalities, detailed phenotyping, anatomical and molecular atlases, global and specific gene expression patterns, and transcriptional profiling over the course of embryonic and postnatal development in animal models and humans. The integrated data visualization tools, faceted search infrastructure, and curation provided by the FaceBase Hub offer flexible and intuitive ways to interact with these multidisciplinary data. In parallel, the datasets also offer unique opportunities for new collaborations and training for researchers coming into the field of craniofacial studies. Here, we highlight the focus of each spoke project and the integration of datasets contributed by the spokes to facilitate craniofacial research. PMID:27287806

  3. Early (N170) activation of face-specific cortex by face-like objects

    Science.gov (United States)

    Hadjikhani, Nouchine; Kveraga, Kestutis; Naik, Paulami; Ahlfors, Seppo P.

    2009-01-01

    The tendency to perceive faces in random patterns exhibiting configural properties of faces is an example of pareidolia. Perception of ‘real’ faces has been associated with a cortical response signal arising at about 170 ms after stimulus onset, but what happens when non-face objects are perceived as faces? Using magnetoencephalography (MEG), we found that objects incidentally perceived as faces evoked an early (165 ms) activation in the ventral fusiform cortex, at a time and location similar to that evoked by faces, whereas common objects did not evoke such activation. An earlier peak at 130 ms was also seen for images of real faces only. Our findings suggest that face perception evoked by face-like objects is a relatively early process, and not a late re-interpretation cognitive phenomenon. PMID:19218867

  4. The non-linear development of the right hemispheric specialization for human face perception.

    Science.gov (United States)

    Lochy, Aliette; de Heering, Adélaïde; Rossion, Bruno

    2017-06-24

    The developmental origins of human adults' right hemispheric specialization for face perception remain unclear. On the one hand, infant studies have shown a right hemispheric advantage for face perception. On the other hand, it has been proposed that the adult right hemispheric lateralization for face perception slowly emerges during childhood due to reading acquisition, which increases left lateralized posterior responses to competing written material (e.g., visual letters and words). Since the methodological approaches used to explore face-perception capabilities in infants and children typically differ, resolving this issue has been difficult. Here we tested 5-year-old preschoolers varying in their level of visual letter knowledge with the same fast periodic visual stimulation (FPVS) paradigm leading to strongly right lateralized electrophysiological occipito-temporal face-selective responses in 4- to 6-month-old infants (de Heering and Rossion, 2015). Children's face-selective response was quantitatively larger and differed in scalp topography from infants', but did not differ across hemispheres. There was a small positive correlation between preschoolers' letter knowledge and a non-normalized index of right hemispheric specialization for faces. These observations show that previous discrepant results in the literature reflect a genuine nonlinear development of the neural processes underlying face perception and are not merely due to methodological differences across age groups. We discuss several factors that could contribute to the adult right hemispheric lateralization for faces, such as myelination of the corpus callosum and reading acquisition. Our findings point to the value of FPVS coupled with electroencephalography to assess specialized face perception processes throughout development with the same methodology. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Image-based Analysis of Emotional Facial Expressions in Full Face Transplants.

    Science.gov (United States)

    Bedeloglu, Merve; Topcu, Çagdas; Akgul, Arzu; Döger, Ela Naz; Sever, Refik; Ozkan, Ozlenen; Ozkan, Omer; Uysal, Hilmi; Polat, Ovunc; Çolak, Omer Halil

    2018-01-20

    This study aimed to determine, from photographs, the degree to which full face transplant patients have recovered emotional expression, so that a rehabilitation process can later be planned according to these degrees. As envisaged, in full face transplant cases the determination of expressions can be confused or may not be achievable as in the healthy control group. For the image-based analysis, a control group of 9 healthy males and 2 full-face transplant patients participated in the study. Appearance-based Gabor Wavelet Transform (GWT) and Local Binary Pattern (LBP) methods were adopted for recognizing the neutral and six emotional expressions: angry, scared, happy, hate, confused and sad. Feature extraction was carried out using both methods individually and in serial combination. For the performed expressions, features extracted from the most distinctive zones of the facial area, the eye and mouth regions, were used to classify the emotions; the combination of these region features was also used to improve classifier performance. Control subjects' and transplant patients' ability to perform emotional expressions was determined with a K-nearest neighbor (KNN) classifier with region-specific and method-specific decision stages, and the results were compared with the healthy group. It was observed that transplant patients do not reproduce some emotional expressions, and that there were confusions among expressions.
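    The region- and method-specific KNN decision stage described above reduces, at its core, to a majority vote among the nearest training vectors. A minimal pure-Python sketch, using made-up 2-D feature vectors and labels rather than real GWT/LBP features:

    ```python
    import math
    from collections import Counter

    def knn_predict(train, labels, query, k=3):
        """Classify a feature vector by majority vote among its k nearest
        training vectors (Euclidean distance), as in a KNN decision stage."""
        dists = sorted(
            (math.dist(x, query), lab) for x, lab in zip(train, labels)
        )
        top = [lab for _, lab in dists[:k]]
        return Counter(top).most_common(1)[0][0]

    # Toy 2-D "region features" (purely illustrative, not real GWT/LBP output)
    train = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9), (0.85, 0.7)]
    labels = ["neutral", "neutral", "happy", "happy", "happy"]
    print(knn_predict(train, labels, (0.82, 0.75)))  # prints "happy"
    ```

    In the study's setting, each region (eye, mouth) and each method (GWT, LBP) would supply such a feature vector, with the per-region and per-method votes combined in a final decision stage.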

  6. Perceptual expertise in forensic facial image comparison.

    Science.gov (United States)

    White, David; Phillips, P Jonathon; Hahn, Carina A; Hill, Matthew; O'Toole, Alice J

    2015-09-07

    Forensic facial identification examiners are required to match the identity of faces in images that vary substantially, owing to changes in viewing conditions and in a person's appearance. These identifications affect the course and outcome of criminal investigations and convictions. Despite calls for research on sources of human error in forensic examination, existing scientific knowledge of face matching accuracy is based, almost exclusively, on people without formal training. Here, we administered three challenging face matching tests to a group of forensic examiners with many years' experience of comparing face images for law enforcement and government agencies. Examiners outperformed untrained participants and computer algorithms, thereby providing the first evidence that these examiners are experts at this task. Notably, computationally fusing responses of multiple experts produced near-perfect performance. Results also revealed qualitative differences between expert and non-expert performance. First, examiners' superiority was greatest at longer exposure durations, suggestive of more entailed comparison in forensic examiners. Second, experts were less impaired by image inversion than non-expert students, contrasting with face memory studies that show larger face inversion effects in high performers. We conclude that expertise in matching identity across unfamiliar face images is supported by processes that differ qualitatively from those supporting memory for individual faces. © 2015 The Author(s).
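    The "computational fusion" of several experts' responses can be as simple as averaging their identity-judgment ratings for each image pair and thresholding the mean. A minimal sketch with hypothetical ratings on a symmetric scale (the exact fusion rule and rating scale used in the study are not reproduced here):

    ```python
    def fuse_responses(ratings):
        """Average several examiners' ratings for one image pair.
        Positive ratings mean 'same person', negative 'different person'."""
        return sum(ratings) / len(ratings)

    # Hypothetical ratings from five examiners on a -3..+3 identity scale
    pair_ratings = [2, 3, 1, -1, 2]
    fused = fuse_responses(pair_ratings)
    decision = "same" if fused > 0 else "different"
    print(round(fused, 1), decision)  # prints "1.4 same"
    ```

    Averaging works because individual examiners' errors are partly independent, so disagreements tend to cancel while shared signal accumulates.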

  7. Comparing Face Detection and Recognition Techniques

    OpenAIRE

    Korra, Jyothi

    2016-01-01

    This paper implements and compares different techniques for face detection and recognition. The first task, face detection, is to find where the face is located in an image; the second, face recognition, is to identify the person. We study three techniques in this paper: face detection using a self-organizing map (SOM), face recognition by projection and nearest neighbor, and face recognition using SVM.

  8. Locally Linear Embedding of Local Orthogonal Least Squares Images for Face Recognition

    Science.gov (United States)

    Hafizhelmi Kamaru Zaman, Fadhlan

    2018-03-01

    Dimensionality reduction is very important in face recognition since it ensures that high-dimensional data can be mapped to a lower-dimensional space without losing salient and integral facial information. Locally Linear Embedding (LLE) has previously been used to serve this purpose; however, the process of acquiring LLE features requires high computation and resources. To overcome this limitation, we propose a locally-applied Local Orthogonal Least Squares (LOLS) model that can be used as initial feature extraction before the application of LLE. By constructing least squares regression under orthogonal constraints we can preserve more discriminant information in the local subspace of facial features while reducing the overall features into a more compact form that we call LOLS images. LLE can then be applied to the LOLS images to map their representation into a global coordinate system of much lower dimensionality. Several experiments carried out using publicly available face datasets such as AR, ORL, YaleB, and FERET under the Single Sample Per Person (SSPP) constraint demonstrate that our proposed method can reduce the time required to compute LLE features while delivering better accuracy than when either LLE or OLS alone is used. Comparison against several other feature extraction methods and a more recent feature-learning method, state-of-the-art Convolutional Neural Networks (CNN), also reveals the superiority of the proposed method under the SSPP constraint.
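    The LLE step the method builds on begins by reconstructing each sample from its nearest neighbours under a sum-to-one weight constraint: minimise |x - Σ_j w_j n_j|² with Σ_j w_j = 1, which has the standard closed form "solve C w = 1 and normalise", where C is the local Gram matrix. A pure-Python sketch of that weight-solving step (the regularisation constant and the tiny Gaussian elimination are implementation choices, not taken from the paper):

    ```python
    def lle_weights(x, neighbors, reg=1e-3):
        """LLE reconstruction weights of point x from its neighbors:
        build the local Gram matrix C[j][l] = (x - n_j).(x - n_l), solve
        C w = [1,...,1], then normalise w to sum to one."""
        k, d = len(neighbors), len(x)
        diff = [[x[c] - n[c] for c in range(d)] for n in neighbors]
        C = [[sum(diff[j][c] * diff[l][c] for c in range(d))
              + (reg if j == l else 0.0)            # regularise the diagonal
              for l in range(k)] for j in range(k)]
        # Solve C w = 1 by Gaussian elimination with partial pivoting
        A = [row[:] + [1.0] for row in C]
        for i in range(k):
            p = max(range(i, k), key=lambda r: abs(A[r][i]))
            A[i], A[p] = A[p], A[i]
            for r in range(i + 1, k):
                f = A[r][i] / A[i][i]
                for c in range(i, k + 1):
                    A[r][c] -= f * A[i][c]
        w = [0.0] * k
        for i in reversed(range(k)):
            w[i] = (A[i][k] - sum(A[i][c] * w[c] for c in range(i + 1, k))) / A[i][i]
        s = sum(w)
        return [wi / s for wi in w]

    # A point exactly midway between two neighbours gets weights ~[0.5, 0.5]
    w = lle_weights((0.5, 0.5), [(0.0, 0.0), (1.0, 1.0)])
    print([round(wi, 2) for wi in w])  # prints "[0.5, 0.5]"
    ```

    The full LLE then finds low-dimensional coordinates that preserve these weights globally via an eigendecomposition, which is the expensive part the LOLS pre-processing aims to tame.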

  9. Heterogeneous sharpness for cross-spectral face recognition

    Science.gov (United States)

    Cao, Zhicheng; Schmid, Natalia A.

    2017-05-01

    Matching images acquired in different electromagnetic bands remains a challenging problem. An example of this type of comparison is matching active or passive infrared (IR) images against a gallery of visible face images, known as cross-spectral face recognition. Among many unsolved issues is that of the quality disparity of the heterogeneous images. Images acquired in different spectral bands are of unequal image quality due to distinct imaging mechanisms, standoff distances, or imaging environments. To reduce the effect of quality disparity on recognition performance, one can manipulate images to either improve the quality of poor-quality images or degrade the high-quality images to the level of their heterogeneous counterparts. To estimate the level of discrepancy in quality of two heterogeneous images, a quality metric such as image sharpness is needed; it provides guidance on how much quality improvement or degradation is appropriate. In this work we consider sharpness as a relative measure of heterogeneous image quality. We propose a generalized definition of sharpness by first achieving image quality parity and then finding and building a relationship between the image quality of two heterogeneous images; the new sharpness metric is therefore named heterogeneous sharpness. Image quality parity is achieved by experimentally finding the optimal cross-spectral face recognition performance while the quality of the heterogeneous images is varied using a Gaussian smoothing function with different standard deviations. This relationship is established using two models, one involving a regression model and the other a neural network. To train, test and validate the models, we use composite operators developed in our lab to extract features from heterogeneous face images and use the sharpness metric to evaluate the face image quality within each band. Images from three different spectral bands, visible light, near infrared, and short-wave infrared, are used.
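    The quality-degradation step, Gaussian smoothing with a varying standard deviation, and a simple gradient-energy sharpness measure can be sketched in one dimension. This is illustrative only; the paper's actual sharpness metric and composite operators are not reproduced here:

    ```python
    import math

    def gaussian_kernel(sigma, radius=None):
        """Discrete 1-D Gaussian kernel, normalised to sum to 1."""
        radius = radius or max(1, int(3 * sigma))
        k = [math.exp(-(i * i) / (2 * sigma * sigma))
             for i in range(-radius, radius + 1)]
        s = sum(k)
        return [v / s for v in k]

    def smooth(signal, sigma):
        """Convolve a 1-D signal with a Gaussian (edge samples clamped)."""
        ker = gaussian_kernel(sigma)
        r = len(ker) // 2
        n = len(signal)
        return [sum(ker[j + r] * signal[min(max(i + j, 0), n - 1)]
                    for j in range(-r, r + 1)) for i in range(n)]

    def sharpness(signal):
        """Gradient-energy sharpness: mean squared first difference."""
        return sum((b - a) ** 2 for a, b in zip(signal, signal[1:])) / (len(signal) - 1)

    edge = [0.0] * 8 + [1.0] * 8     # a step edge, the sharpest feature
    print(sharpness(edge) > sharpness(smooth(edge, sigma=2.0)))  # prints "True"
    ```

    Sweeping sigma and re-measuring recognition performance, as the abstract describes, traces out how much degradation brings the high-quality band down to parity with its heterogeneous counterpart.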

  10. Uyghur face recognition method combining 2DDCT with POEM

    Science.gov (United States)

    Yi, Lihamu; Ya, Ermaimaiti

    2017-11-01

    In this paper, in light of the reduced recognition rate and poor robustness of Uyghur face recognition under illumination variation and partial occlusion, a Uyghur face recognition method combining the Two-Dimensional Discrete Cosine Transform (2DDCT) with Patterns of Oriented Edge Magnitudes (POEM) was proposed. Firstly, the Uyghur face images were divided into 8×8 blocks, and the images after block processing were converted into the frequency domain using the 2DDCT; secondly, the images were compressed to exclude the non-sensitive medium- and high-frequency parts, which reduces the feature dimensions needed for the images and further reduces the amount of computation; thirdly, the POEM histograms of the images were obtained by calculating the POEM feature quantities; fourthly, the POEM histograms were concatenated as the texture histogram of the central feature point to obtain the texture features of the Uyghur face feature points; finally, the training samples were classified using a deep learning algorithm. The simulation results showed that the proposed algorithm improved the recognition rate on a self-built Uyghur face database, greatly improved its computing speed, and had strong robustness.
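    The 2DDCT stage can be sketched directly from the orthonormal DCT-II definition applied to one square block. A constant block concentrates all its energy in the DC coefficient, which is why discarding the remaining coefficients compresses the representation with little loss for smooth regions (pure-Python sketch; real implementations use a fast separable transform):

    ```python
    import math

    def dct2(block):
        """Orthonormal 2-D DCT-II of a square block, straight from the
        definition (O(n^4); fine for an 8x8 illustrative block)."""
        n = len(block)
        def c(k):
            return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out = [[0.0] * n for _ in range(n)]
        for u in range(n):
            for v in range(n):
                s = sum(block[i][j]
                        * math.cos((2 * i + 1) * u * math.pi / (2 * n))
                        * math.cos((2 * j + 1) * v * math.pi / (2 * n))
                        for i in range(n) for j in range(n))
                out[u][v] = c(u) * c(v) * s
        return out

    # A constant 8x8 block puts all energy into the DC coefficient out[0][0]
    flat = [[8.0] * 8 for _ in range(8)]
    coeffs = dct2(flat)
    print(round(coeffs[0][0], 1))  # prints "64.0"
    ```

    In the described pipeline each 8×8 image block would be transformed this way, after which a subset of low-frequency coefficients is retained as the compressed representation.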

  11. Face recognition based on matching of local features on 3D dynamic range sequences

    Science.gov (United States)

    Echeagaray-Patrón, B. A.; Kober, Vitaly

    2016-09-01

    3D face recognition has attracted attention in the last decade due to improvements in 3D image acquisition technology and its wide range of applications, such as access control, surveillance, human-computer interaction and biometric identification systems. Most research on 3D face recognition has focused on analysis of 3D still data. In this work, a new method for face recognition using dynamic 3D range sequences is proposed. Experimental results are presented and discussed using 3D sequences in the presence of pose variation. The performance of the proposed method is compared with that of conventional face recognition algorithms based on descriptors.

  12. Imaging Human Brain Perfusion with Inhaled Hyperpolarized 129Xe MR Imaging.

    Science.gov (United States)

    Rao, Madhwesha R; Stewart, Neil J; Griffiths, Paul D; Norquay, Graham; Wild, Jim M

    2018-02-01

    Purpose To evaluate the feasibility of directly imaging perfusion of human brain tissue by using magnetic resonance (MR) imaging with inhaled hyperpolarized xenon-129 (129Xe). Materials and Methods In vivo imaging with 129Xe was performed in three healthy participants. The combination of a high-yield spin-exchange optical pumping 129Xe polarizer, custom-built radiofrequency coils, and an optimized gradient-echo MR imaging protocol was used to achieve signal sensitivity sufficient to directly image hyperpolarized 129Xe dissolved in the human brain. Conventional T1-weighted proton (1H) images and perfusion images obtained by using arterial spin labeling were acquired for comparison. Results Images of 129Xe uptake were obtained with a signal-to-noise ratio of 31 ± 9 and demonstrated structural similarities to the gray matter distribution on conventional T1-weighted 1H images and to perfusion images from arterial spin labeling. Conclusion Hyperpolarized 129Xe MR imaging is an injection-free means of imaging the perfusion of cerebral tissue. The proposed method images the uptake of inhaled xenon gas into the extravascular brain tissue compartment across the intact blood-brain barrier. This level of sensitivity is not readily available with contemporary MR imaging methods. © RSNA, 2017.

  13. Clustering Millions of Faces by Identity.

    Science.gov (United States)

    Otto, Charles; Wang, Dayong; Jain, Anil K

    2018-02-01

    Given a large collection of unlabeled face images, we address the problem of clustering faces into an unknown number of identities. This problem is of interest in social media, law enforcement, and other applications, where the number of faces can be of the order of hundreds of millions, while the number of identities (clusters) can range from a few thousand to millions. To address the challenges of run-time complexity and cluster quality, we present an approximate Rank-Order clustering algorithm that performs better than popular clustering algorithms (k-Means and Spectral). Our experiments include clustering up to 123 million face images into over 10 million clusters. Clustering results are analyzed in terms of external (known face labels) and internal (unknown face labels) quality measures, and run-time. Our algorithm achieves an F-measure of 0.87 on the LFW benchmark (13K faces of 5,749 individuals), which drops to 0.27 on the largest dataset considered (13K faces in LFW + 123M distractor images). Additionally, we show that frames in the YouTube benchmark can be clustered with an F-measure of 0.71. An internal per-cluster quality measure is developed to rank individual clusters for manual exploration of high quality clusters that are compact and isolated.
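    The external quality measure quoted above, an F-measure against known face labels, is commonly computed over pairs of items: a pair counts as positive when both items share a cluster. A pure-Python sketch (the pairwise formulation is assumed here and may differ from the exact variant used in the paper):

    ```python
    from itertools import combinations

    def pairwise_f_measure(pred, truth):
        """Pairwise F-measure of a clustering against ground-truth labels:
        precision and recall over all item pairs, where a pair is positive
        if both items fall in the same cluster."""
        n = len(pred)
        same_pred = {(i, j) for i, j in combinations(range(n), 2)
                     if pred[i] == pred[j]}
        same_true = {(i, j) for i, j in combinations(range(n), 2)
                     if truth[i] == truth[j]}
        if not same_pred or not same_true:
            return 0.0
        tp = len(same_pred & same_true)
        p, r = tp / len(same_pred), tp / len(same_true)
        return 2 * p * r / (p + r) if p + r else 0.0

    # Toy example: one identity split across two predicted clusters
    truth = ["a", "a", "a", "b", "b"]
    pred  = [ 0,   0,   1,   2,   2 ]
    print(round(pairwise_f_measure(pred, truth), 2))  # prints "0.67"
    ```

    At the paper's scale the pair sets are never materialised explicitly; the counts are accumulated per cluster instead, but the measure is the same.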

  14. Multiview face detection based on position estimation over multicamera surveillance system

    Science.gov (United States)

    Huang, Ching-chun; Chou, Jay; Shiu, Jia-Hou; Wang, Sheng-Jyh

    2012-02-01

    In this paper, we propose a multi-view face detection system that locates head positions and indicates the direction of each face in 3-D space over a multi-camera surveillance system. To locate 3-D head positions, conventional methods relied on face detection in 2-D images and projected the face regions back to 3-D space for correspondence. However, the inevitable false face detections and rejections usually degrade system performance. Instead, our system searches for the heads and face directions over the 3-D space using a sliding cube. Each searched 3-D cube is projected onto the 2-D camera views to determine the existence and direction of human faces. Moreover, a pre-process to estimate the locations of candidate targets is introduced to speed up the search over the 3-D space. In summary, our proposed method can efficiently fuse multi-camera information and suppress the ambiguity caused by detection errors. Our evaluation shows that the proposed approach can efficiently indicate the head position and face direction on real video sequences even under serious occlusion.
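    Projecting each searched 3-D cube onto the 2-D camera views uses the standard pinhole model x ~ K(RX + t). A minimal sketch with placeholder intrinsics and an identity pose (illustrative values, not calibration data from the paper):

    ```python
    def project(point3d, K, R, t):
        """Project a 3-D point into a camera view with a pinhole model:
        x ~ K (R X + t), then divide by depth to get pixel coordinates."""
        X = [sum(R[r][c] * point3d[c] for c in range(3)) + t[r]
             for r in range(3)]
        x = [sum(K[r][c] * X[c] for c in range(3)) for r in range(3)]
        return (x[0] / x[2], x[1] / x[2])

    # Identity pose, focal length 500 px, principal point (320, 240)
    K = [[500, 0, 320], [0, 500, 240], [0, 0, 1]]
    R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
    t = [0, 0, 0]
    u, v = project((0.2, -0.1, 2.0), K, R, t)
    print(round(u), round(v))  # prints "370 215"
    ```

    In the described system, the eight corners of each sliding cube would be projected this way into every camera view, and the resulting 2-D regions examined for face evidence.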

  15. Effects of compression and individual variability on face recognition performance

    Science.gov (United States)

    McGarry, Delia P.; Arndt, Craig M.; McCabe, Steven A.; D'Amato, Donald P.

    2004-08-01

    for automated FR systems and human inspectors. Working within the M1-Biometrics Technical Committee of the InterNational Committee for Information Technology Standards (INCITS) organization, a standard face image format will be tested and submitted to organizations such as ICAO.

  16. Reading faces: differential lateral gaze bias in processing canine and human facial expressions in dogs and 4-year-old children.

    Science.gov (United States)

    Racca, Anaïs; Guo, Kun; Meints, Kerstin; Mills, Daniel S

    2012-01-01

    Sensitivity to the emotions of others provides clear biological advantages. However, in the case of heterospecific relationships, such as that existing between dogs and humans, there are additional challenges since some elements of the expression of emotions are species-specific. Given that faces provide important visual cues for communicating emotional state in both humans and dogs, and that processing of emotions is subject to brain lateralisation, we investigated lateral gaze bias in adult dogs when presented with pictures of expressive human and dog faces. Our analysis revealed clear differences in laterality of eye movements in dogs towards conspecific faces according to the emotional valence of the expressions. Differences were also found towards human faces, but to a lesser extent. For comparative purpose, a similar experiment was also run with 4-year-old children and it was observed that they showed differential processing of facial expressions compared to dogs, suggesting a species-dependent engagement of the right or left hemisphere in processing emotions.

  17. Reading faces: differential lateral gaze bias in processing canine and human facial expressions in dogs and 4-year-old children.

    Directory of Open Access Journals (Sweden)

    Anaïs Racca

    Full Text Available Sensitivity to the emotions of others provides clear biological advantages. However, in the case of heterospecific relationships, such as that existing between dogs and humans, there are additional challenges since some elements of the expression of emotions are species-specific. Given that faces provide important visual cues for communicating emotional state in both humans and dogs, and that processing of emotions is subject to brain lateralisation, we investigated lateral gaze bias in adult dogs when presented with pictures of expressive human and dog faces. Our analysis revealed clear differences in laterality of eye movements in dogs towards conspecific faces according to the emotional valence of the expressions. Differences were also found towards human faces, but to a lesser extent. For comparative purpose, a similar experiment was also run with 4-year-old children and it was observed that they showed differential processing of facial expressions compared to dogs, suggesting a species-dependent engagement of the right or left hemisphere in processing emotions.

  18. Common cortical responses evoked by appearance, disappearance and change of the human face

    Directory of Open Access Journals (Sweden)

    Kida Tetsuo

    2009-04-01

    Full Text Available Abstract Background To segregate luminance-related, face-related and non-specific components involved in the spatio-temporal dynamics of cortical activations to a face stimulus, we recorded cortical responses to face appearance (Onset), disappearance (Offset), and change (Change) using magnetoencephalography. Results Activity in and around the primary visual cortex (V1/V2) showed luminance-dependent behavior. Any of the three events evoked activity in the middle occipital gyrus (MOG) at 150 ms and the temporo-parietal junction (TPJ) at 250 ms after the onset of each event. Onset and Change activated the fusiform gyrus (FG), while Offset did not. This FG activation showed a triphasic waveform, consistent with results of intracranial recordings in humans. Conclusion The analysis employed in this study successfully segregated four different elements involved in the spatio-temporal dynamics of cortical activations in response to a face stimulus. The results show the responses of MOG and TPJ to be associated with non-specific processes, such as the detection of abrupt changes or exogenous attention. Activity in FG corresponds to a face-specific response recorded in intracranial studies, and that in V1/V2 is related to a change in luminance.

  19. Successful decoding of famous faces in the fusiform face area.

    Directory of Open Access Journals (Sweden)

    Vadim Axelrod

    Full Text Available What are the neural mechanisms of face recognition? It is believed that the network of face-selective areas, which spans the occipital, temporal, and frontal cortices, is important in face recognition. A number of previous studies indeed reported that face identity could be discriminated based on patterns of multivoxel activity in the fusiform face area and the anterior temporal lobe. However, given the difficulty in localizing the face-selective area in the anterior temporal lobe, its role in face recognition is still unknown. Furthermore, previous studies limited their analysis to occipito-temporal regions without testing identity decoding in more anterior face-selective regions, such as the amygdala and prefrontal cortex. In the current high-resolution functional Magnetic Resonance Imaging study, we systematically examined the decoding of the identity of famous faces in the temporo-frontal network of face-selective and adjacent non-face-selective regions. A special focus has been put on the face-area in the anterior temporal lobe, which was reliably localized using an optimized scanning protocol. We found that face-identity could be discriminated above chance level only in the fusiform face area. Our results corroborate the role of the fusiform face area in face recognition. Future studies are needed to further explore the role of the more recently discovered anterior face-selective areas in face recognition.

  20. Face distortion aftereffects evoked by featureless first-order stimulus configurations

    Directory of Open Access Journals (Sweden)

    Pál Vakli

    2012-12-01

    Full Text Available After prolonged exposure to a distorted face with expanded or contracted inner features, a subsequently presented normal face appears distorted in the opposite direction. This phenomenon, termed the face distortion aftereffect (FDAE), is thought to occur as a result of changes in the mechanisms involved in higher-order visual processing. However, the extent to which the FDAE is mediated by face-specific configural processing is less well known. In the present study, we investigated whether similar aftereffects can be induced by stimuli lacking all the typical characteristics of a human face except for its first-order configural properties. We found a significant FDAE after adaptation to a stimulus consisting of three white dots arranged in a triangular fashion and placed in a grey oval. FDAEs occurred also when the adapting and test stimuli differed in size or when the contrast polarity of the adaptor image was changed. However, the inversion of the adapting image as well as the reduction of its contrast abolished the aftereffect entirely. Taken together, our results suggest that higher-level visual areas, which are involved in the processing of facial configurations, mediate the FDAE. Further, while adaptation seems to be largely invariant to contrast polarity, it appears sensitive to orientation and to lower-level manipulations that affect the saliency of the inner features.

  1. Connectome imaging for mapping human brain pathways.

    Science.gov (United States)

    Shi, Y; Toga, A W

    2017-09-01

    With the fast advance of connectome imaging techniques, we have the opportunity of mapping the human brain pathways in vivo at unprecedented resolution. In this article we review the current developments of diffusion magnetic resonance imaging (MRI) for the reconstruction of anatomical pathways in connectome studies. We first introduce the background of diffusion MRI with an emphasis on the technical advances and challenges in state-of-the-art multi-shell acquisition schemes used in the Human Connectome Project. Characterization of the microstructural environment in the human brain is discussed from the tensor model to the general fiber orientation distribution (FOD) models that can resolve crossing fibers in each voxel of the image. Using FOD-based tractography, we describe novel methods for fiber bundle reconstruction and graph-based connectivity analysis. Building upon these novel developments, there have already been successful applications of connectome imaging techniques in reconstructing challenging brain pathways. Examples including retinofugal and brainstem pathways will be reviewed. Finally, we discuss future directions in connectome imaging and its interaction with other aspects of brain imaging research.

  2. Thermal-Polarimetric and Visible Data Collection for Face Recognition

    Science.gov (United States)

    2016-09-01

    matching a thermal face image with visible spectrum face images for interoperability with existing biometric face databases and watch lists. One of the...

  3. Detection of hypercholesterolemia using hyperspectral imaging of human skin

    Science.gov (United States)

    Milanic, Matija; Bjorgan, Asgeir; Larsson, Marcus; Strömberg, Tomas; Randeberg, Lise L.

    2015-07-01

    Hypercholesterolemia is characterized by high blood levels of cholesterol and is associated with increased risk of atherosclerosis and cardiovascular disease. Xanthelasma is a subcutaneous lesion appearing in the skin around the eyes and is related to hypercholesterolemia. Identifying micro-xanthelasma can therefore provide a means for early detection of hypercholesterolemia and prevent the onset and progress of disease. The goal of this study was to investigate the spectral and spatial characteristics of hypercholesterolemia in facial skin. Optical techniques like hyperspectral imaging (HSI) might be a suitable tool for such characterization as they simultaneously provide high-resolution spatial and spectral information. In this study a 3D Monte Carlo model of lipid inclusions in human skin was developed to create hyperspectral images in the spectral range 400-1090 nm. Four lesions with diameters of 0.12-1.0 mm were simulated for three different skin types. The simulations were analyzed using three algorithms: Tissue Indices (TI), the two-layer Diffusion Approximation (DA), and the Minimum Noise Fraction transform (MNF). The simulated lesions were detected by all methods, but the best performance was obtained by the MNF algorithm. The results were verified using data from 11 volunteers with known cholesterol levels. The faces of the volunteers were imaged by an LCTF system (400-720 nm), and the images were analyzed using the previously mentioned algorithms. The identified features were then compared to the known cholesterol levels of the subjects. Significant correlation was obtained for the MNF algorithm only. This study demonstrates that HSI can be a promising, rapid modality for detection of hypercholesterolemia.

  4. The organisational and human resource challenges facing primary care trusts: protocol of a multiple case study

    Directory of Open Access Journals (Sweden)

    Tim Scott J

    2001-11-01

    Full Text Available Abstract Background The study is designed to assess the organisational and human resource challenges faced by Primary Care Trusts (PCTs). Its objectives are to: specify the organisational and human resources challenges faced by PCTs in fulfilling the roles envisaged in government and local policy; examine how PCTs are addressing these challenges, in particular, to describe the organisational forms they have adopted, and the OD/HR strategies and initiatives they have planned or in place; assess how effective these structures, strategies and initiatives have been in enabling the PCTs to meet the organisational and human resources challenges they face; identify the factors, both internal to the PCT and in the wider health community, which have contributed to the success or failure of different structures, strategies and initiatives. Methods The study will be undertaken in three stages. In Stage 1 the key literature on public sector and NHS organisational development and human resources management will be reviewed, and discussions will be held with key researchers and policy makers working in this area. Stage 2 will focus on detailed case studies in six PCTs designed to examine the organisational and human resources challenges they face. Data will be collected using semi-structured interviews, group discussion, site visits, observation of key meetings and examination of local documentation. The findings from the case study PCTs will be cross-checked with a Reference Group of up to 20 other PCG/Ts, and key officers working in organisational development or primary care at local, regional and national level. In Stage 3 analysis of findings from the preparatory work, the case studies and the feedback from the Reference Group will be used to identify practical lessons for PCTs, key messages for policy makers, and contributions to further theoretical development.

  5. Face-sensitive processes one hundred milliseconds after picture onset

    Directory of Open Access Journals (Sweden)

    Benjamin eDering

    2011-09-01

Full Text Available The human face is the most studied object category in visual neuroscience. In a quest for markers of face processing, event-related potential (ERP) studies have debated whether two peaks of activity, P1 and N170, are category-selective. Whilst most studies have used photographs of unaltered images of faces, others have used cropped faces in an attempt to reduce the influence of features surrounding the face-object sensu stricto. However, results from studies comparing cropped faces with unaltered objects from other categories are inconsistent with results from studies comparing whole faces and objects. Here, we recorded ERPs elicited by full-front views of faces and cars, either unaltered or cropped. We found that cropping artificially enhanced the N170 whereas it did not significantly modulate P1. In a second experiment, we compared faces and butterflies, either unaltered or cropped, matched for size and luminance across conditions, and within a narrow contrast bracket. Results of experiment 2 replicated the main findings of experiment 1. We then used face-car morphs in a third experiment to manipulate the perceived face-likeness of stimuli (100% face, 70% face and 30% car, 30% face and 70% car, or 100% car) and the N170 failed to differentiate between faces and cars. Critically, in all three experiments, P1 amplitude was modulated in a face-sensitive fashion independent of cropping or morphing. Therefore, P1 is a reliable event sensitive to face processing as early as 100 ms after picture onset.

  6. Human body region enhancement method based on Kinect infrared imaging

    Science.gov (United States)

    Yang, Lei; Fan, Yubo; Song, Xiaowei; Cai, Wenjing

    2016-10-01

To effectively improve the low contrast of the human body region in infrared images, a combination of several enhancement methods is used to enhance the human body region. Firstly, for the infrared images acquired by Kinect, in order to improve the overall contrast of the infrared images, an Optimal Contrast-Tone Mapping (OCTM) method with multiple iterations is applied to balance the contrast of low-luminosity infrared images. Secondly, to better enhance the human body region, a Level Set algorithm is employed to improve the contour edges of the human body region. Finally, to further improve the human body region in infrared images, Laplacian Pyramid decomposition is adopted to enhance the contour-improved human body region. Meanwhile, the background area without the human body region is processed by bilateral filtering to improve the overall effect. Through theoretical analysis and experimental verification, the results show that the proposed method can effectively enhance the human body region of such infrared images.
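
The Laplacian detail-amplification at the heart of the third step can be illustrated in isolation. The sketch below is plain NumPy with function names of my own choosing, not the authors' code: it separates a low-pass base from the detail band and boosts the detail, which is the core idea of Laplacian Pyramid enhancement.

```python
import numpy as np

def gaussian_blur(img, sigma=1.0):
    """Separable Gaussian blur with reflect padding."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    pad = np.pad(img, radius, mode="reflect")
    # horizontal pass, then vertical pass
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, tmp)

def enhance_detail(img, boost=1.5, sigma=2.0):
    """One-level Laplacian decomposition: amplify the detail band.

    base   = G(img)       (low-pass)
    detail = img - base   (Laplacian band)
    out    = base + boost * detail
    """
    base = gaussian_blur(img, sigma)
    detail = img - base
    return np.clip(base + boost * detail, 0.0, 1.0)
```

In the full pipeline described above, OCTM and the Level-Set contour refinement would precede this step, and bilateral filtering would be applied only to the background region.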

  7. Image Visual Realism: From Human Perception to Machine Computation.

    Science.gov (United States)

    Fan, Shaojing; Ng, Tian-Tsong; Koenig, Bryan L; Herberg, Jonathan S; Jiang, Ming; Shen, Zhiqi; Zhao, Qi

    2017-08-30

    Visual realism is defined as the extent to which an image appears to people as a photo rather than computer generated. Assessing visual realism is important in applications like computer graphics rendering and photo retouching. However, current realism evaluation approaches use either labor-intensive human judgments or automated algorithms largely dependent on comparing renderings to reference images. We develop a reference-free computational framework for visual realism prediction to overcome these constraints. First, we construct a benchmark dataset of 2520 images with comprehensive human annotated attributes. From statistical modeling on this data, we identify image attributes most relevant for visual realism. We propose both empirically-based (guided by our statistical modeling of human data) and CNN-learned features to predict visual realism of images. Our framework has the following advantages: (1) it creates an interpretable and concise empirical model that characterizes human perception of visual realism; (2) it links computational features to latent factors of human image perception.
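
As an illustration of the empirically-based branch of such a framework, a reference-free predictor can be built by regressing human realism ratings onto annotated image attributes. The sketch below is a hypothetical, minimal linear version (the paper's actual features and model are richer); the function names and data are mine, not the authors'.

```python
import numpy as np

def fit_realism_model(attributes, realism_scores):
    """Least-squares linear map from annotated image attributes to realism."""
    X = np.hstack([attributes, np.ones((attributes.shape[0], 1))])  # bias column
    w, *_ = np.linalg.lstsq(X, realism_scores, rcond=None)
    return w

def predict_realism(attributes, w):
    """Score unseen images from attributes alone -- no reference image needed."""
    X = np.hstack([attributes, np.ones((attributes.shape[0], 1))])
    return X @ w
```

Once fitted on an annotated benchmark, the model scores new images without any reference rendering, which is the constraint the framework is designed to remove.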

  8. Letting Our Hearts Break: On Facing the "Hidden Wound" of Human Supremacy

    Science.gov (United States)

    Martusewicz, Rebecca

    2014-01-01

    In this paper I argue that education must be defined by our willingness to experience compassion in the face of others' suffering and thus by an ethical imperative, and seek to expose psycho-social processes of shame as dark matters that inferiorize and subjugate those expressing such compassion for the more-than-human world. Beginning with…

  9. Neglect in human communication: quantifying the cost of cell-phone interruptions in face to face dialogs.

    Science.gov (United States)

    Lopez-Rosenfeld, Matías; Calero, Cecilia I; Fernandez Slezak, Diego; Garbulsky, Gerry; Bergman, Mariano; Trevisan, Marcos; Sigman, Mariano

    2015-01-01

There is a prevailing belief that interruptions using cellular phones during face-to-face interactions may severely affect how people relate to and perceive each other. We set out to determine this cost quantitatively through an experiment performed in dyads, in a large audience at a TEDx event. One of the two participants (the speaker) narrates a story vividly. The listener is asked to deliberately ignore the speaker during part of the story (for instance, attending to their cell-phone). The speaker is not aware of this treatment. We show that the total amount of attention is the major factor driving subjective beliefs about the story and the conversational partner. The effects are largely independent of how attention is distributed in time. All social parameters of human communication are affected by attention time with a sole exception: the perceived emotion of the story. Interruptions during day-to-day communication between peers are extremely frequent. Our data should provide a note of caution, by indicating that they have a major effect on the perception people have about what they say (whether it is interesting or not . . .) and about the virtues of the people around them.

  10. Neglect in human communication: quantifying the cost of cell-phone interruptions in face to face dialogs.

    Directory of Open Access Journals (Sweden)

    Matías Lopez-Rosenfeld

Full Text Available There is a prevailing belief that interruptions using cellular phones during face-to-face interactions may severely affect how people relate to and perceive each other. We set out to determine this cost quantitatively through an experiment performed in dyads, in a large audience at a TEDx event. One of the two participants (the speaker) narrates a story vividly. The listener is asked to deliberately ignore the speaker during part of the story (for instance, attending to their cell-phone). The speaker is not aware of this treatment. We show that the total amount of attention is the major factor driving subjective beliefs about the story and the conversational partner. The effects are largely independent of how attention is distributed in time. All social parameters of human communication are affected by attention time with a sole exception: the perceived emotion of the story. Interruptions during day-to-day communication between peers are extremely frequent. Our data should provide a note of caution, by indicating that they have a major effect on the perception people have about what they say (whether it is interesting or not . . .) and about the virtues of the people around them.

  11. What's in a crowd? Analysis of face-to-face behavioral networks.

    Science.gov (United States)

    Isella, Lorenzo; Stehlé, Juliette; Barrat, Alain; Cattuto, Ciro; Pinton, Jean-François; Van den Broeck, Wouter

    2011-02-21

The availability of new data sources on human mobility is opening new avenues for investigating the interplay of social networks, human mobility and dynamical processes such as epidemic spreading. Here we analyze data on the time-resolved face-to-face proximity of individuals in large-scale real-world scenarios. We compare two settings with very different properties, a scientific conference and a long-running museum exhibition. We track the behavioral networks of face-to-face proximity, and characterize them from both a static and a dynamic point of view, exposing differences and similarities. We use our data to investigate the dynamics of a susceptible-infected model for epidemic spreading that unfolds on the dynamical networks of human proximity. The spreading patterns are markedly different for the conference and the museum case, and they are strongly impacted by the causal structure of the network data. A deeper study of the spreading paths shows that the mere knowledge of static aggregated networks would lead to erroneous conclusions about the transmission paths on the dynamical networks.
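
The importance of causal structure can be seen in a minimal susceptible-infected simulation over a time-stamped contact list. This is an illustrative sketch, not the authors' code: with transmission probability 1, reachability is determined purely by the temporal ordering of contacts.

```python
import random

def si_spread(contacts, seed, beta=1.0, rng=None):
    """SI process on a time-ordered contact list [(t, i, j), ...].

    Contacts are processed in time order; a contact can transmit only if
    exactly one endpoint is already infected at that moment, so the causal
    ordering of contacts constrains the transmission paths.
    """
    rng = rng or random.Random(0)
    infected = {seed}
    for t, i, j in sorted(contacts):
        if (i in infected) != (j in infected) and rng.random() < beta:
            infected.update((i, j))
    return infected
```

A static aggregate of the same contacts would overestimate reach: an edge used before a node became infected cannot transmit, which is exactly the erroneous conclusion the abstract warns about.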

  12. A mismatch in the human realism of face and voice produces an uncanny valley

    Science.gov (United States)

    Mitchell, Wade J; Szerszen, Kevin A; Lu, Amy Shirong; Schermerhorn, Paul W; Scheutz, Matthias; MacDorman, Karl F

    2011-01-01

    The uncanny valley has become synonymous with the uneasy feeling of viewing an animated character or robot that looks imperfectly human. Although previous uncanny valley experiments have focused on relations among a character's visual elements, the current experiment examines whether a mismatch in the human realism of a character's face and voice causes it to be evaluated as eerie. The results support this hypothesis. PMID:23145223

  13. Putting a Face to a Name: Visualising Human Rights

    Directory of Open Access Journals (Sweden)

    Vera Mackie

    2014-03-01

Full Text Available In this essay, I focus on a text which attempts to deal with human rights issues in an accessible media format, Kälin, Müller and Wyttenbach’s book, The Face of Human Rights. I am interested in this text as an attempt to translate between different modes of communicating about human rights, which we might call the academic mode, the bureaucratic mode, the activist mode and the popular media mode. There are significant gaps between the academic debates on human rights, the actual language and protocols of the bodies devoted to ensuring the achievement of basic human rights, the language of activists, and the ways in which these issues are discussed in the media. These issues are compounded in a transnational frame where people must find ways of communicating across differences of language and culture. These problems of communicating across difference are inherent to the contemporary machinery of the international human rights system, where global institutions of governance are implicated in the claims of individuals who are located in diverse national contexts. Several commentators have noted the importance of narrative in human rights advocacy, while others have explored the role of art. I am interested in analysing narrative and representational strategies, conscious that texts work not only through vocabulary and propositional content, but also through discursive positioning. It is necessary to look at the structure of texts, the contents of texts, and the narrative strategies and discursive frameworks which inform them. Similar points can be made about photography, which must be analysed in terms of the specific representational possibilities of visual culture.

  14. Human gene therapy and imaging: cardiology

    International Nuclear Information System (INIS)

    Wu, Joseph C.; Yla-Herttuala, Seppo

    2005-01-01

    This review discusses the basics of cardiovascular gene therapy, the results of recent human clinical trials, and the rapid progress in imaging techniques in cardiology. Improved understanding of the molecular and genetic basis of coronary heart disease has made gene therapy a potential new alternative for the treatment of cardiovascular diseases. Experimental studies have established the proof-of-principle that gene transfer to the cardiovascular system can achieve therapeutic effects. First human clinical trials provided initial evidence of feasibility and safety of cardiovascular gene therapy. However, phase II/III clinical trials have so far been rather disappointing and one of the major problems in cardiovascular gene therapy has been the inability to verify gene expression in the target tissue. New imaging techniques could significantly contribute to the development of better gene therapeutic approaches. Although the exact choice of imaging modality will depend on the biological question asked, further improvement in image resolution and detection sensitivity will be needed for all modalities as we move from imaging of organs and tissues to imaging of cells and genes. (orig.)

  15. Neuro-fuzzy model for estimating race and gender from geometric distances of human face across pose

    Science.gov (United States)

    Nanaa, K.; Rahman, M. N. A.; Rizon, M.; Mohamad, F. S.; Mamat, M.

    2018-03-01

Classifying the human face by race and gender is a vital process in face recognition. It contributes to an index database and eases 3D synthesis of the human face. Identifying race and gender from intrinsic factors is problematic, making a nonlinear model better suited to the estimation process. In this paper, we aim to estimate race and gender under varied head pose. For this purpose, we collect a dataset from the PICS and CAS-PEAL databases, detect the landmarks and rotate them to the frontal pose. After geometric distances are calculated, all distance values are normalized. Implementation is carried out using a Neural Network model and a Fuzzy Logic model, which are combined in an Adaptive Neuro-Fuzzy model. The experimental results showed that optimizing the fuzzy membership functions gives a better assessment rate, and that estimating race contributes to a more accurate gender assessment.
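
The distance-and-normalization preprocessing described above can be sketched as follows. This is illustrative NumPy with hypothetical function names; the neuro-fuzzy estimator itself is omitted.

```python
import numpy as np

def geometric_distances(landmarks):
    """All pairwise Euclidean distances between facial landmarks, shape (N, 2)."""
    diffs = landmarks[:, None, :] - landmarks[None, :, :]
    d = np.sqrt((diffs**2).sum(-1))
    iu = np.triu_indices(len(landmarks), k=1)   # upper triangle: each pair once
    return d[iu]

def minmax_normalize(X):
    """Scale each distance feature to [0, 1] across the dataset (rows = faces)."""
    lo, hi = X.min(0), X.max(0)
    return (X - lo) / np.where(hi > lo, hi - lo, 1.0)
```

The normalized distance vectors would then be fed to the neural network and fuzzy logic models for the race and gender estimates.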

  16. Human gene therapy and imaging in neurological diseases

    International Nuclear Information System (INIS)

    Jacobs, Andreas H.; Winkler, Alexandra; Castro, Maria G.; Lowenstein, Pedro

    2005-01-01

    Molecular imaging aims to assess non-invasively disease-specific biological and molecular processes in animal models and humans in vivo. Apart from precise anatomical localisation and quantification, the most intriguing advantage of such imaging is the opportunity it provides to investigate the time course (dynamics) of disease-specific molecular events in the intact organism. Further, molecular imaging can be used to address basic scientific questions, e.g. transcriptional regulation, signal transduction or protein/protein interaction, and will be essential in developing treatment strategies based on gene therapy. Most importantly, molecular imaging is a key technology in translational research, helping to develop experimental protocols which may later be applied to human patients. Over the past 20 years, imaging based on positron emission tomography (PET) and magnetic resonance imaging (MRI) has been employed for the assessment and ''phenotyping'' of various neurological diseases, including cerebral ischaemia, neurodegeneration and brain gliomas. While in the past neuro-anatomical studies had to be performed post mortem, molecular imaging has ushered in the era of in vivo functional neuro-anatomy by allowing neuroscience to image structure, function, metabolism and molecular processes of the central nervous system in vivo in both health and disease. Recently, PET and MRI have been successfully utilised together in the non-invasive assessment of gene transfer and gene therapy in humans. To assess the efficiency of gene transfer, the same markers are being used in animals and humans, and have been applied for phenotyping human disease. 
Here, we review the imaging hallmarks of focal and disseminated neurological diseases, such as cerebral ischaemia, neurodegeneration and glioblastoma multiforme, as well as the attempts to translate gene therapy's experimental knowledge into clinical applications and the way in which this process is being promoted through the use of

  17. Neonatal face-to-face interactions promote later social behaviour in infant rhesus monkeys

    OpenAIRE

    Dettmer, Amanda M.; Kaburu, Stefano S. K.; Simpson, Elizabeth A.; Paukner, Annika; Sclafani, Valentina; Byers, Kristen L.; Murphy, Ashley M.; Miller, Michelle; Marquez, Neal; Miller, Grace M.; Suomi, Stephen J.; Ferrari, Pier F.

    2016-01-01

    In primates, including humans, mothers engage in face-to-face interactions with their infants, with frequencies varying both within and across species. However, the impact of this variation in face-to-face interactions on infant social development is unclear. Here we report that infant monkeys (Macaca mulatta) who engaged in more neonatal face-to-face interactions with mothers have increased social interactions at 2 and 5 months. In a controlled experiment, we show that this effect is not due...

  18. Three-dimensional facial digitization using advanced digital image correlation.

    Science.gov (United States)

    Nguyen, Hieu; Kieu, Hien; Wang, Zhaoyang; Le, Hanh N D

    2018-03-20

    Presented in this paper is an effective technique to acquire the three-dimensional (3D) digital images of the human face without the use of active lighting and artificial patterns. The technique is based on binocular stereo imaging and digital image correlation, and it includes two key steps: camera calibration and image matching. The camera calibration involves a pinhole model and a bundle-adjustment approach, and the governing equations of the 3D digitization process are described. For reliable pixel-to-pixel image matching, the skin pores and freckles or lentigines on the human face serve as the required pattern features to facilitate the process. It employs feature-matching-based initial guess, multiple subsets, iterative optimization algorithm, and reliability-guided computation path to achieve fast and accurate image matching. Experiments have been conducted to demonstrate the validity of the proposed technique. The simplicity of the approach and the affordable cost of the implementation show its practicability in scientific and engineering applications.
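
The step of computing 3D coordinates from a matched pixel pair and two calibrated pinhole cameras can be illustrated with standard linear (DLT) triangulation. This is a generic textbook formulation, not the authors' implementation.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one matched point pair.

    P1, P2: 3x4 camera projection matrices from calibration.
    x1, x2: matched (u, v) pixel coordinates in the two views.
    Builds the homogeneous system A X = 0 from x ~ P X and returns
    the 3D point as the null vector of A (smallest singular vector).
    """
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]   # dehomogenise
```

Running this per matched pixel over the dense correlation output yields the point cloud from which the facial surface is built.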

  19. Probing the Feature Map for Faces in Visual Search

    Directory of Open Access Journals (Sweden)

    Hua Yang

    2011-05-01

Full Text Available Controversy surrounds the mechanisms underlying the pop-out effect for faces in visual search. Is there a feature map for faces? If so, does it rely on the categorical distinction between faces and nonfaces, or on image-level face semblance? To probe the feature map, we compared search efficiency for faces, and nonface stimuli with high, low, and no face semblance. First, subjects performed a visual search task with objects as distractors. Only faces popped out. Moreover, search efficiency for nonfaces correlated with image-level face semblance of the target. In a second experiment, faces were used as distractors but nonfaces did not pop out. Interestingly, search efficiency for nonfaces was not modulated by face semblance, although searching for a face among faces was particularly difficult, reflecting a categorical boundary between nonfaces and faces. Finally, inversion and contrast negation significantly interacted with the effect of face semblance, ruling out the possibility that search efficiency solely depends on low-level features. Our study supports a parallel search for faces that is perhaps preattentive. Like other features (color, orientation, etc.), there appears to be a continuous face feature map for visual search. Our results also suggest that this map may include both image-level face semblance and face categoricity.

  20. Real-time teleophthalmology versus face-to-face consultation: A systematic review.

    Science.gov (United States)

    Tan, Irene J; Dobson, Lucy P; Bartnik, Stephen; Muir, Josephine; Turner, Angus W

    2017-08-01

Introduction Advances in imaging capabilities and the evolution of real-time teleophthalmology have the potential to provide increased coverage to areas with limited ophthalmology services. However, there is limited research assessing the diagnostic accuracy of real-time teleophthalmology against face-to-face consultation. This systematic review aims to determine if real-time teleophthalmology provides comparable accuracy to face-to-face consultation for the diagnosis of common eye health conditions. Methods A search of PubMed, Embase, Medline and Cochrane databases and manual citation review was conducted on 6 February and 7 April 2016. Included studies involved real-time telemedicine in the field of ophthalmology or optometry, and assessed diagnostic accuracy against gold-standard face-to-face consultation. The revised quality assessment of diagnostic accuracy studies (QUADAS-2) tool assessed risk of bias. Results Twelve studies were included, with participants ranging from four to 89 years old. A broad number of conditions were assessed and include corneal and retinal pathologies, strabismus, oculoplastics and post-operative review. Quality assessment identified a high or unclear risk of bias in patient selection (75%) due to an undisclosed recruitment processes. The index test showed high risk of bias in the included studies, due to the varied interpretation and conduct of real-time teleophthalmology methods. Reference standard risk was overall low (75%), as was the risk due to flow and timing (75%). Conclusion In terms of diagnostic accuracy, real-time teleophthalmology was considered superior to face-to-face consultation in one study and comparable in six studies. Store-and-forward image transmission coupled with real-time videoconferencing is a suitable alternative to overcome poor internet transmission speeds.

  1. Human Empathy, Personality and Experience Affect the Emotion Ratings of Dog and Human Facial Expressions

    Science.gov (United States)

    Kujala, Miiamaaria V.; Somppi, Sanni; Jokela, Markus; Vainio, Outi; Parkkonen, Lauri

    2017-01-01

    Facial expressions are important for humans in communicating emotions to the conspecifics and enhancing interpersonal understanding. Many muscles producing facial expressions in humans are also found in domestic dogs, but little is known about how humans perceive dog facial expressions, and which psychological factors influence people’s perceptions. Here, we asked 34 observers to rate the valence, arousal, and the six basic emotions (happiness, sadness, surprise, disgust, fear, and anger/aggressiveness) from images of human and dog faces with Pleasant, Neutral and Threatening expressions. We investigated how the subjects’ personality (the Big Five Inventory), empathy (Interpersonal Reactivity Index) and experience of dog behavior affect the ratings of dog and human faces. Ratings of both species followed similar general patterns: human subjects classified dog facial expressions from pleasant to threatening very similarly to human facial expressions. Subjects with higher emotional empathy evaluated Threatening faces of both species as more negative in valence and higher in anger/aggressiveness. More empathetic subjects also rated the happiness of Pleasant humans but not dogs higher, and they were quicker in their valence judgments of Pleasant human, Threatening human and Threatening dog faces. Experience with dogs correlated positively with ratings of Pleasant and Neutral dog faces. Personality also had a minor effect on the ratings of Pleasant and Neutral faces in both species. The results imply that humans perceive human and dog facial expression in a similar manner, and the perception of both species is influenced by psychological factors of the evaluators. Especially empathy affects both the speed and intensity of rating dogs’ emotional facial expressions. PMID:28114335

  2. Human Empathy, Personality and Experience Affect the Emotion Ratings of Dog and Human Facial Expressions.

    Directory of Open Access Journals (Sweden)

    Miiamaaria V Kujala

Full Text Available Facial expressions are important for humans in communicating emotions to the conspecifics and enhancing interpersonal understanding. Many muscles producing facial expressions in humans are also found in domestic dogs, but little is known about how humans perceive dog facial expressions, and which psychological factors influence people's perceptions. Here, we asked 34 observers to rate the valence, arousal, and the six basic emotions (happiness, sadness, surprise, disgust, fear, and anger/aggressiveness) from images of human and dog faces with Pleasant, Neutral and Threatening expressions. We investigated how the subjects' personality (the Big Five Inventory), empathy (Interpersonal Reactivity Index) and experience of dog behavior affect the ratings of dog and human faces. Ratings of both species followed similar general patterns: human subjects classified dog facial expressions from pleasant to threatening very similarly to human facial expressions. Subjects with higher emotional empathy evaluated Threatening faces of both species as more negative in valence and higher in anger/aggressiveness. More empathetic subjects also rated the happiness of Pleasant humans but not dogs higher, and they were quicker in their valence judgments of Pleasant human, Threatening human and Threatening dog faces. Experience with dogs correlated positively with ratings of Pleasant and Neutral dog faces. Personality also had a minor effect on the ratings of Pleasant and Neutral faces in both species. The results imply that humans perceive human and dog facial expression in a similar manner, and the perception of both species is influenced by psychological factors of the evaluators. Especially empathy affects both the speed and intensity of rating dogs' emotional facial expressions.

  3. Quantitatively Plotting the Human Face for Multivariate Data Visualisation Illustrated by Health Assessments Using Laboratory Parameters

    Directory of Open Access Journals (Sweden)

    Wang Hongwei

    2013-01-01

Full Text Available Objective. The purpose of this study was to describe a new data visualisation system by plotting the human face to observe the comprehensive effects of multivariate data. Methods. The Graphics Device Interface (GDI+) in the Visual Studio .NET development platform was used to write a program that enables facial image parameters to be recorded, such as cropping and rotation, and can generate a new facial image according to Z values from sets of normal data (Z > 3 was still counted as 3). The measured clinical laboratory parameters related to health status were obtained from senile people, glaucoma patients, and fatty liver patients to illustrate the facial data visualisation system. Results. When the eyes, nose, and mouth were rotated around their own axes at the same angle, the deformation effects were similar. The deformation effects for any abnormality of the eyes, nose, or mouth should be slightly higher than those for simultaneous abnormalities. The facial changes in the populations with different health statuses were significant compared with a control population. Conclusions. The comprehensive effects of multivariate data may not equal the sum of each variable. The 3Z facial data visualisation system can effectively distinguish people with poor health status from healthy people.
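
The clamped Z-score mapping that drives such facial glyphs can be sketched as follows. This is illustrative Python with parameter names of my own; the study's actual rendering uses GDI+.

```python
import numpy as np

def z_features(values, ref_mean, ref_std, clamp=3.0):
    """Z-scores against a reference population, clamped to +/- clamp
    (the study counts |Z| > 3 as 3)."""
    z = (np.asarray(values, float) - ref_mean) / ref_std
    return np.clip(z, -clamp, clamp)

def z_to_rotation(z, max_deg=30.0, clamp=3.0):
    """Map a clamped Z-score linearly to a glyph parameter, e.g. how far
    an eye or mouth element is rotated about its own axis."""
    return z / clamp * max_deg
```

Each laboratory parameter is thus reduced to a bounded deviation from normal before being drawn as a facial feature, so extreme outliers cannot dominate the face.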

  4. Testing the Utility of a Data-Driven Approach for Assessing BMI from Face Images

    DEFF Research Database (Denmark)

    Wolffhechel, Karin Marie Brandt; Hahn, Amanda C.; Jarmer, Hanne Østergaard

    2015-01-01

    Several lines of evidence suggest that facial cues of adiposity may be important for human social interaction. However, tests for quantifiable cues of body mass index (BMI) in the face have examined only a small number of facial proportions and these proportions were found to have relatively low...

  5. Composite multi-lobe descriptor for cross spectral face recognition: matching active IR to visible light images

    Science.gov (United States)

    Cao, Zhicheng; Schmid, Natalia A.

    2015-05-01

Matching facial images across the electromagnetic spectrum presents a challenging problem in the field of biometrics and identity management. An example of this problem includes cross spectral matching of active infrared (IR) face images or thermal IR face images against a dataset of visible light images. This paper describes a new operator named Composite Multi-Lobe Descriptor (CMLD) for facial feature extraction in cross spectral matching of near-infrared (NIR) or short-wave infrared (SWIR) against visible light images. The new operator is inspired by the design of ordinal measures. The operator combines Gaussian-based multi-lobe kernel functions, Local Binary Pattern (LBP), generalized LBP (GLBP) and Weber Local Descriptor (WLD) and modifies them into multi-lobe functions with smoothed neighborhoods. The new operator encodes both the magnitude and phase responses of Gabor filters. The combining of LBP and WLD utilizes both the orientation and intensity information of edges. Introduction of multi-lobe functions with smoothed neighborhoods further makes the proposed operator robust against noise and poor image quality. Output templates are transformed into histograms and then compared by means of a symmetric Kullback-Leibler metric resulting in a matching score. The performance of the multi-lobe descriptor is compared with that of other operators such as LBP, Histogram of Oriented Gradients (HOG), ordinal measures, and their combinations. The experimental results show that in many cases the proposed method, CMLD, outperforms the other operators and their combinations. In addition to different infrared spectra, various standoff distances from close-up (1.5 m) to intermediate (50 m) and long (106 m) are also investigated in this paper. The performance of CMLD is evaluated for each of the three distances.
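
CMLD itself is more elaborate, but the plain-LBP baseline it is compared against, and the histogram-plus-symmetric-KL matching score, can be sketched simply. The following is illustrative NumPy, not the authors' code.

```python
import numpy as np

def lbp_histogram(img):
    """8-neighbour Local Binary Pattern codes pooled into a 256-bin histogram."""
    c = img[1:-1, 1:-1]
    neighbours = [img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:],
                  img[1:-1, 2:], img[2:, 2:], img[2:, 1:-1],
                  img[2:, :-2], img[1:-1, :-2]]
    codes = np.zeros_like(c, dtype=np.int32)
    for bit, n in enumerate(neighbours):
        codes |= (n >= c).astype(np.int32) << bit   # one bit per neighbour
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()

def symmetric_kl(p, q, eps=1e-10):
    """Symmetrised Kullback-Leibler divergence used as the matching score."""
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))
```

A low score indicates similar texture statistics; matching then reduces to ranking gallery templates by this divergence.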

  6. Infrared and visible fusion face recognition based on NSCT domain

    Science.gov (United States)

    Xie, Zhihua; Zhang, Shuai; Liu, Guodong; Xiong, Jinquan

    2018-01-01

Visible face recognition systems, being vulnerable to illumination, expression, and pose, cannot achieve robust performance in unconstrained situations. Meanwhile, near infrared face images, being light-independent, can avoid or limit the drawbacks of face recognition in visible light, but their main challenges are low resolution and signal-to-noise ratio (SNR). Therefore, near infrared and visible fusion face recognition has become an important direction in the field of unconstrained face recognition research. In this paper, a novel fusion algorithm in the non-subsampled contourlet transform (NSCT) domain is proposed for infrared and visible fusion face recognition. Firstly, NSCT is used respectively to process the infrared and visible face images, which exploits the image information at multiple scales, orientations, and frequency bands. Then, to exploit the effective discriminant features and balance the power of the high and low frequency bands of the NSCT coefficients, the local Gabor binary pattern (LGBP) and Local Binary Pattern (LBP) are applied respectively in different frequency parts to obtain a robust representation of infrared and visible face images. Finally, score-level fusion is used to combine all the features for final classification. The visible and near infrared face recognition is tested on the HITSZ Lab2 visible and near infrared face database. Experimental results show that the proposed method extracts the complementary features of near-infrared and visible-light images and improves the robustness of unconstrained face recognition.
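
The final score-level fusion step can be sketched independently of the NSCT/LGBP feature extraction. Below is a minimal weighted-sum fusion over min-max normalised matcher scores; the weights and function names are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def minmax_scores(s):
    """Normalise one matcher's raw gallery scores to [0, 1]."""
    s = np.asarray(s, float)
    return (s - s.min()) / max(s.max() - s.min(), 1e-12)

def fuse_scores(vis_scores, ir_scores, w_vis=0.5):
    """Weighted-sum score-level fusion of visible and infrared matchers."""
    return w_vis * minmax_scores(vis_scores) + (1 - w_vis) * minmax_scores(ir_scores)

def identify(fused):
    """Pick the gallery identity with the highest fused score."""
    return int(np.argmax(fused))
```

Normalising before fusing matters because the two matchers produce scores on incomparable scales; without it, one modality would silently dominate.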

  7. Optical imaging of the chorioretinal vasculature in the living human eye.

    Science.gov (United States)

    Kim, Dae Yu; Fingler, Jeff; Zawadzki, Robert J; Park, Susanna S; Morse, Lawrence S; Schwartz, Daniel M; Fraser, Scott E; Werner, John S

    2013-08-27

    Detailed visualization of microvascular changes in the human retina is clinically limited by the capabilities of angiography imaging, a 2D fundus photograph that requires an intravenous injection of fluorescent dye. Whereas current angiography methods enable visualization of some retinal capillary detail, they do not adequately reveal the choriocapillaris or other microvascular features beneath the retina. We have developed a noninvasive microvascular imaging technique called phase-variance optical coherence tomography (pvOCT), which identifies vasculature three dimensionally through analysis of data acquired with OCT systems. The pvOCT imaging method is not only capable of generating capillary perfusion maps for the retina, but it can also use the 3D capabilities to segment the data in depth to isolate vasculature in different layers of the retina and choroid. This paper demonstrates some of the capabilities of pvOCT imaging of the anterior layers of choroidal vasculature of a healthy normal eye as well as of eyes with geographic atrophy (GA) secondary to age-related macular degeneration. The pvOCT data presented permit digital segmentation to produce 2D depth-resolved images of the retinal vasculature, the choriocapillaris, and the vessels in Sattler's and Haller's layers. Comparisons are presented between en face projections of pvOCT data within the superficial choroid and clinical angiography images for regions of GA. Abnormalities and vascular dropout observed within the choriocapillaris for pvOCT are compared with regional GA progression. The capability of pvOCT imaging of the microvasculature of the choriocapillaris and the anterior choroidal vasculature has the potential to become a unique tool to evaluate therapies and understand the underlying mechanisms of age-related macular degeneration progression.

  8. Efficient search for a face by chimpanzees (Pan troglodytes)

    OpenAIRE

    Tomonaga, Masaki; Imura, Tomoko

    2015-01-01

    The face is quite an important stimulus category for human and nonhuman primates in their social lives. Recent advances in comparative-cognitive research clearly indicate that chimpanzees and humans process faces in a special manner; that is, using holistic or configural processing. Both species exhibit the face-inversion effect in which the inverted presentation of a face deteriorates their perception and recognition. Furthermore, recent studies have shown that humans detect human faces amon...

  9. A real time mobile-based face recognition with fisherface methods

    Science.gov (United States)

    Arisandi, D.; Syahputra, M. F.; Putri, I. L.; Purnamawati, S.; Rahmat, R. F.; Sari, P. P.

    2018-03-01

    Face recognition is a research field in computer vision that studies how to learn faces and determine the identity of a face from an image sent to the system. By utilizing face recognition technology, the process of learning people's identities, such as among students at a university, becomes simpler. With this technology, a student no longer needs to browse the student directory on the university's server site and look for the person with certain facial traits. To achieve this goal, the face recognition application uses image processing methods consisting of two phases, a pre-processing phase and a recognition phase. In the pre-processing phase, the system transforms the input image into the best image for the recognition phase; the purpose of this phase is to reduce noise and increase signal in the image. For the recognition phase, we use the Fisherface method. This method is chosen because it performs well even with the system's limited training data. In our experiments, the accuracy of face recognition using Fisherface is 90%.
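
    The Fisherface method referred to above is commonly implemented as PCA (to avoid singular scatter matrices) followed by Fisher's LDA. A minimal numpy sketch, under the assumption of flattened, aligned grayscale images; function names are illustrative, not from the paper.

```python
import numpy as np

def fisherfaces(X, y, n_components=None):
    """Fisherface training sketch.
    X: (n_samples, n_pixels) flattened face images; y: class labels.
    Returns the data mean and a projection matrix W."""
    classes = np.unique(y)
    n, c = X.shape[0], len(classes)
    # PCA step: keep at most n - c components so Sw becomes invertible
    mean = X.mean(axis=0)
    Xc = X - mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    W_pca = Vt[:n - c].T
    Z = Xc @ W_pca
    # LDA step: within-class (Sw) and between-class (Sb) scatter
    Sw = np.zeros((Z.shape[1], Z.shape[1]))
    Sb = np.zeros_like(Sw)
    zm = Z.mean(axis=0)
    for cls in classes:
        Zi = Z[y == cls]
        mi = Zi.mean(axis=0)
        Sw += (Zi - mi).T @ (Zi - mi)
        Sb += len(Zi) * np.outer(mi - zm, mi - zm)
    # directions maximising between-class over within-class variance
    evals, evecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-evals.real)
    k = n_components or (c - 1)
    W_lda = evecs.real[:, order[:k]]
    return mean, W_pca @ W_lda

def project(x, mean, W):
    """Project image(s) into the Fisherface subspace."""
    return (x - mean) @ W
```

Recognition then reduces to nearest-neighbour (or nearest-class-mean) search among the projected training faces.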

  10. Vasomotor response of the human face: laser-Doppler measurements during mild hypo- and hyperthermia.

    Science.gov (United States)

    Rasch, W; Cabanac, M

    1993-04-01

    The skin of the face is reputed not to vasoconstrict in response to cold stress because the face skin temperature remains steady during hypothermia. The purpose of the present work was to measure the vasomotor response of the human face to whole-body hypothermia, and to compare it with hyperthermia. Six male subjects were immersed in cold and in warm water to obtain the two conditions. Skin blood flow, evaporation, and skin temperature (Tsk) were recorded at three loci of the face: the forehead, the infraorbital area, and the cheek. Tympanic (Tty) and oesophageal (Toes) temperatures were also recorded during the different thermal states. Normothermic measurements served as control. Blood flow was recorded with a laser-Doppler flowmeter, and evaporation was measured with an evaporimeter. Face Tsk remained stable between normo-, hypo-, and hyperthermia. Facial blood flow, however, did not follow the same pattern. The facial blood flow remained at a minimal vasoconstricted level when the subjects' condition was changed from normo- to hypothermia. When the condition changed from hypo- to hyperthermia, a 3- to 9-fold increase in the blood flow was recorded. From these results it was concluded that vasoconstriction seems to be the general vasomotor state of the face during normothermia.

  11. Supervised Filter Learning for Representation Based Face Recognition.

    Directory of Open Access Journals (Sweden)

    Chao Bi

    Full Text Available Representation based classification methods, such as Sparse Representation Classification (SRC) and Linear Regression Classification (LRC), have been developed for the face recognition problem successfully. However, most of these methods use the original face images without any preprocessing for recognition. Thus, their performances may be affected by some problematic factors (such as illumination and expression variances) in the face images. In order to overcome this limitation, a novel supervised filter learning algorithm is proposed for representation based face recognition in this paper. The underlying idea of our algorithm is to learn a filter so that the within-class representation residuals of the faces' Local Binary Pattern (LBP) features are minimized and the between-class representation residuals of the faces' LBP features are maximized. Therefore, the LBP features of filtered face images are more discriminative for representation based classifiers. Furthermore, we also extend our algorithm for the heterogeneous face recognition problem. Extensive experiments are carried out on five databases and the experimental results verify the efficacy of the proposed algorithm.
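
    Of the two representation based classifiers named above, LRC is the simpler to sketch: the probe is regressed onto each class's training images and assigned to the class with the smallest least-squares reconstruction residual. A hypothetical numpy illustration (names and data layout are assumptions, not from the paper):

```python
import numpy as np

def lrc_classify(probe, class_blocks):
    """Linear Regression Classification sketch.
    probe: (n_pixels,) flattened face image.
    class_blocks: dict label -> (n_pixels, n_train) matrix whose
    columns are that class's training images."""
    best_label, best_residual = None, np.inf
    for label, X in class_blocks.items():
        # least-squares coefficients representing the probe in this class
        beta, *_ = np.linalg.lstsq(X, probe, rcond=None)
        residual = np.linalg.norm(probe - X @ beta)
        if residual < best_residual:
            best_label, best_residual = label, residual
    return best_label
```

SRC replaces the per-class least-squares fit with a single l1-regularised fit over all classes at once, which this sketch does not attempt.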

  12. Face sketch recognition based on edge enhancement via deep learning

    Science.gov (United States)

    Xie, Zhenzhu; Yang, Fumeng; Zhang, Yuming; Wu, Congzhong

    2017-11-01

    In this paper, we address the face sketch recognition problem. Firstly, we utilize the eigenface algorithm to convert a sketch into a synthesized sketch face image. Subsequently, considering the low-level vision problems in the synthesized face sketch image, a super-resolution reconstruction algorithm based on a CNN (convolutional neural network) is employed to improve the visual effect. To be specific, we use a lightweight super-resolution structure to learn a residual mapping instead of directly mapping the feature maps from the low-level space to high-level patch representations, which makes the network easier to optimize and lowers its computational complexity. Finally, we adopt the LDA (Linear Discriminant Analysis) algorithm to perform face sketch recognition on the synthesized face images before and after super-resolution, respectively. Extensive experiments on the face sketch database (CUFS) from CUHK demonstrate that the recognition rate of the SVM (Support Vector Machine) algorithm improves from 65% to 69% and the recognition rate of the LDA algorithm improves from 69% to 75%. What is more, the synthesized face image after super-resolution not only better describes image details such as the hair, nose and mouth, but also improves the recognition accuracy effectively.

  13. Intravascular photoacoustic imaging of human coronary atherosclerosis

    Science.gov (United States)

    Jansen, Krista; van der Steen, Antonius F. W.; Springeling, Geert; van Beusekom, Heleen M. M.; Oosterhuis, J. Wolter; van Soest, Gijs

    2011-03-01

    We demonstrate intravascular photoacoustic imaging of human coronary atherosclerotic plaque. We specifically imaged lipid content, a key factor in vulnerable plaques that may lead to myocardial infarction. An integrated intravascular photoacoustics (IVPA) and ultrasound (IVUS) catheter with an outer diameter of 1.25 mm was developed. The catheter comprises an angle-polished optical fiber adjacent to a 30 MHz single-element transducer. The ultrasonic transducer was optically isolated to eliminate artifacts in the PA image. We performed measurements on a cylindrical vessel phantom and isolated point targets to demonstrate its imaging performance. Axial and lateral point spread function widths were 110 μm and 550 μm, respectively, for PA and 89 μm and 420 μm for US. We imaged two fresh human coronary arteries, showing different stages of disease, ex vivo. Specific photoacoustic imaging of lipid content is achieved by spectroscopic imaging at different wavelengths between 1180 and 1230 nm.

  14. FaceIt: face recognition from static and live video for law enforcement

    Science.gov (United States)

    Atick, Joseph J.; Griffin, Paul M.; Redlich, A. N.

    1997-01-01

    Recent advances in image and pattern recognition technology- -especially face recognition--are leading to the development of a new generation of information systems of great value to the law enforcement community. With these systems it is now possible to pool and manage vast amounts of biometric intelligence such as face and finger print records and conduct computerized searches on them. We review one of the enabling technologies underlying these systems: the FaceIt face recognition engine; and discuss three applications that illustrate its benefits as a problem-solving technology and an efficient and cost effective investigative tool.

  15. Gaze Cueing by Pareidolia Faces

    Directory of Open Access Journals (Sweden)

    Kohske Takahashi

    2013-12-01

    Full Text Available Visual images that are not faces are sometimes perceived as faces (the pareidolia phenomenon. While the pareidolia phenomenon provides people with a strong impression that a face is present, it is unclear how deeply pareidolia faces are processed as faces. In the present study, we examined whether a shift in spatial attention would be produced by gaze cueing of face-like objects. A robust cueing effect was observed when the face-like objects were perceived as faces. The magnitude of the cueing effect was comparable between the face-like objects and a cartoon face. However, the cueing effect was eliminated when the observer did not perceive the objects as faces. These results demonstrated that pareidolia faces do more than give the impression of the presence of faces; indeed, they trigger an additional face-specific attentional process.

  16. Gaze cueing by pareidolia faces.

    Science.gov (United States)

    Takahashi, Kohske; Watanabe, Katsumi

    2013-01-01

    Visual images that are not faces are sometimes perceived as faces (the pareidolia phenomenon). While the pareidolia phenomenon provides people with a strong impression that a face is present, it is unclear how deeply pareidolia faces are processed as faces. In the present study, we examined whether a shift in spatial attention would be produced by gaze cueing of face-like objects. A robust cueing effect was observed when the face-like objects were perceived as faces. The magnitude of the cueing effect was comparable between the face-like objects and a cartoon face. However, the cueing effect was eliminated when the observer did not perceive the objects as faces. These results demonstrated that pareidolia faces do more than give the impression of the presence of faces; indeed, they trigger an additional face-specific attentional process.

  17. Imaging oxygenation of human tumours

    International Nuclear Information System (INIS)

    Padhani, Anwar R.; Krohn, Kenneth A.; Lewis, Jason S.; Alber, Markus

    2007-01-01

    Tumour hypoxia represents a significant challenge to the curability of human tumours, leading to treatment resistance and enhanced tumour progression. Tumour hypoxia can be detected by non-invasive and invasive techniques, but the inter-relationships between these remain largely undefined. 18F-MISO and Cu-ATSM PET, and BOLD-MRI, are the lead contenders for human application based on their non-invasive nature, ease of use and robustness, measurement of hypoxia status, validity, ability to demonstrate heterogeneity and general availability; these techniques are the primary focus of this review. We discuss where developments are required for hypoxia imaging to become clinically useful and explore potential new uses for hypoxia imaging techniques including biological conformal radiotherapy. (orig.)

  18. Quantified Faces

    DEFF Research Database (Denmark)

    Sørensen, Mette-Marie Zacher

    2016-01-01

    artist Marnix de Nijs' Physiognomic Scrutinizer is an interactive installation whereby the viewer's face is scanned and identified with historical figures. The American artist Zach Blas' project Fag Face Mask consists of three-dimensional portraits that blend biometric facial data from 30 gay men's faces...... and critically examine bias in surveillance technologies, as well as scientific investigations, regarding the stereotyping mode of the human gaze. The American artist Heather Dewey-Hagborg creates three-dimensional portraits of persons she has “identified” from their garbage. Her project from 2013 entitled...

  19. The influence of banner advertisements on attention and memory: human faces with averted gaze can enhance advertising effectiveness.

    Science.gov (United States)

    Sajjacholapunt, Pitch; Ball, Linden J

    2014-01-01

    Research suggests that banner advertisements used in online marketing are often overlooked, especially when positioned horizontally on webpages. Such inattention invariably gives rise to an inability to remember advertising brands and messages, undermining the effectiveness of this marketing method. Recent interest has focused on whether human faces within banner advertisements can increase attention to the information they contain, since the gaze cues conveyed by faces can influence where observers look. We report an experiment that investigated the efficacy of faces located in banner advertisements to enhance the attentional processing and memorability of banner contents. We tracked participants' eye movements when they examined webpages containing either bottom-right vertical banners or bottom-center horizontal banners. We also manipulated facial information such that banners either contained no face, a face with mutual gaze or a face with averted gaze. We additionally assessed people's memories for brands and advertising messages. Results indicated that relative to other conditions, the condition involving faces with averted gaze increased attention to the banner overall, as well as to the advertising text and product. Memorability of the brand and advertising message was also enhanced. Conversely, in the condition involving faces with mutual gaze, the focus of attention was localized more on the face region rather than on the text or product, weakening any memory benefits for the brand and advertising message. This detrimental impact of mutual gaze on attention to advertised products was especially marked for vertical banners. These results demonstrate that the inclusion of human faces with averted gaze in banner advertisements provides a promising means for marketers to increase the attention paid to such adverts, thereby enhancing memory for advertising information.

  20. The influence of banner advertisements on attention and memory: Human faces with averted gaze can enhance advertising effectiveness

    Directory of Open Access Journals (Sweden)

    Pitch eSajjacholapunt

    2014-03-01

    Full Text Available Research suggests that banner advertisements used in online marketing are often overlooked, especially when positioned horizontally on webpages. Such inattention invariably gives rise to an inability to remember advertising brands and messages, undermining the effectiveness of this marketing method. Recent interest has focused on whether human faces within banner advertisements can increase attention to the information they contain, since the gaze cues conveyed by faces can influence where observers look. We report an experiment that investigated the efficacy of faces located in banner advertisements to enhance the attentional processing and memorability of banner contents. We tracked participants’ eye movements when they examined webpages containing either bottom-right vertical banners or bottom-centre horizontal banners. We also manipulated facial information such that banners either contained no face, a face with mutual gaze or a face with averted gaze. We additionally assessed people’s memories for brands and advertising messages. Results indicated that relative to other conditions, the condition involving faces with averted gaze increased attention to the banner overall, as well as to the advertising text and product. Memorability of the brand and advertising message was also enhanced. Conversely, in the condition involving faces with mutual gaze, the focus of attention was localised more on the face region rather than on the text or product, weakening any memory benefits for the brand and advertising message. This detrimental impact of mutual gaze on attention to advertised products was especially marked for vertical banners. These results demonstrate that the inclusion of human faces with averted gaze in banner advertisements provides a promising means for marketers to increase the attention paid to such adverts, thereby enhancing memory for advertising information.

  1. Audio-Visual Speech Recognition Using Lip Information Extracted from Side-Face Images

    Directory of Open Access Journals (Sweden)

    Koji Iwano

    2007-03-01

    Full Text Available This paper proposes an audio-visual speech recognition method using lip information extracted from side-face images as an attempt to increase noise robustness in mobile environments. Our proposed method assumes that lip images can be captured using a small camera installed in a handset. Two different kinds of lip features, lip-contour geometric features and lip-motion velocity features, are used individually or jointly, in combination with audio features. Phoneme HMMs modeling the audio and visual features are built based on the multistream HMM technique. Experiments conducted using Japanese connected digit speech contaminated with white noise in various SNR conditions show the effectiveness of the proposed method. Recognition accuracy is improved by using the visual information in all SNR conditions. These visual features were confirmed to be effective even when the audio HMM was adapted to noise by the MLLR method.

  2. Locality constrained joint dynamic sparse representation for local matching based face recognition.

    Science.gov (United States)

    Wang, Jianzhong; Yi, Yugen; Zhou, Wei; Shi, Yanjiao; Qi, Miao; Zhang, Ming; Zhang, Baoxue; Kong, Jun

    2014-01-01

    Recently, Sparse Representation-based Classification (SRC) has attracted a lot of attention for its applications to various tasks, especially in biometric techniques such as face recognition. However, factors such as lighting, expression, pose and disguise variations in face images will decrease the performances of SRC and most other face recognition techniques. In order to overcome these limitations, we propose a robust face recognition method named Locality Constrained Joint Dynamic Sparse Representation-based Classification (LCJDSRC) in this paper. In our method, a face image is first partitioned into several smaller sub-images. Then, these sub-images are sparsely represented using the proposed locality constrained joint dynamic sparse representation algorithm. Finally, the representation results for all sub-images are aggregated to obtain the final recognition result. Compared with other algorithms which process each sub-image of a face image independently, the proposed algorithm regards the local matching-based face recognition as a multi-task learning problem. Thus, the latent relationships among the sub-images from the same face image are taken into account. Meanwhile, the locality information of the data is also considered in our algorithm. We evaluate our algorithm by comparing it with other state-of-the-art approaches. Extensive experiments on four benchmark face databases (ORL, Extended YaleB, AR and LFW) demonstrate the effectiveness of LCJDSRC.

  3. Locality constrained joint dynamic sparse representation for local matching based face recognition.

    Directory of Open Access Journals (Sweden)

    Jianzhong Wang

    Full Text Available Recently, Sparse Representation-based Classification (SRC) has attracted a lot of attention for its applications to various tasks, especially in biometric techniques such as face recognition. However, factors such as lighting, expression, pose and disguise variations in face images will decrease the performances of SRC and most other face recognition techniques. In order to overcome these limitations, we propose a robust face recognition method named Locality Constrained Joint Dynamic Sparse Representation-based Classification (LCJDSRC) in this paper. In our method, a face image is first partitioned into several smaller sub-images. Then, these sub-images are sparsely represented using the proposed locality constrained joint dynamic sparse representation algorithm. Finally, the representation results for all sub-images are aggregated to obtain the final recognition result. Compared with other algorithms which process each sub-image of a face image independently, the proposed algorithm regards the local matching-based face recognition as a multi-task learning problem. Thus, the latent relationships among the sub-images from the same face image are taken into account. Meanwhile, the locality information of the data is also considered in our algorithm. We evaluate our algorithm by comparing it with other state-of-the-art approaches. Extensive experiments on four benchmark face databases (ORL, Extended YaleB, AR and LFW) demonstrate the effectiveness of LCJDSRC.

  4. In vivo imaging of palisades of Vogt in dry eye versus normal subjects using en-face spectral-domain optical coherence tomography.

    Directory of Open Access Journals (Sweden)

    Wajdene Ghouali

    Full Text Available To evaluate a possible clinical application of spectral-domain optical coherence tomography (SD-OCT) using en-face module for the imaging of the corneoscleral limbus in normal subjects and dry eye patients. Seventy-six subjects were included in this study. Seventy eyes of 35 consecutive patients with dry eye disease and 82 eyes of 41 healthy control subjects were investigated. All subjects were examined with the Avanti RTVue® anterior segment OCT. En-face OCT images of the corneoscleral limbus were acquired in four quadrants (inferior, superior, nasal and temporal) and then were analyzed semi-quantitatively according to whether or not palisades of Vogt (POV) were visible. En-face OCT images were then compared to in vivo confocal microscopy (IVCM) in eleven eyes of 7 healthy and dry eye patients. En-face SD-OCT showed POV as a radially oriented network, located in superficial corneoscleral limbus, with a good correlation with IVCM features. It provided an easy and reproducible identification of POV without any special preparation or any direct contact, with a grading scale from 0 (no visualization) to 3 (high visualization). The POV were found predominantly in superior (P<0.001) and inferior (P<0.001) quadrants when compared to the nasal and temporal quadrants for all subjects examined. The visibility score decreased with age (P<0.001) and was lower in dry eye patients (P<0.01). In addition, the score decreased in accordance with the severity of dry eye disease (P<0.001). En-face SD-OCT is a non-contact imaging technique that can be used to evaluate the POV, thus providing valuable information about differences in the limbal anatomy of dry eye patients as compared to healthy patients.

  5. Portraits made to measure: manipulating social judgments about individuals with a statistical face model.

    Science.gov (United States)

    Walker, Mirella; Vetter, Thomas

    2009-10-13

    The social judgments people make on the basis of the facial appearance of strangers strongly affect their behavior in different contexts. However, almost nothing is known about the physical information underlying these judgments. In this article, we present a new technology (a) to quantify the information in faces that is used for social judgments and (b) to manipulate the image of a human face in a way which is almost imperceptible but changes the personality traits ascribed to the depicted person. This method was developed in a high-dimensional face space by identifying vectors that capture maximum variability in judgments of personality traits. Our method of manipulating the salience of these vectors in faces was successfully transferred to novel photographs from an independent database. We evaluated this method by showing pairs of face photographs which differed only in the salience of one of six personality traits. Subjects were asked to decide which face was more extreme with respect to the trait in question. Results show that the image manipulation produced the intended attribution effect. All response accuracies were significantly above chance level. This approach to understanding and manipulating how a person is socially perceived could be useful in psychological research and could also be applied in advertising or the film industries.
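
    The manipulation step described above, once the trait vectors have been identified, reduces to vector arithmetic in the face space: a face's coefficient vector is shifted along a trait direction by a chosen salience. A hedged sketch assuming a precomputed trait direction; names are illustrative, not from the paper.

```python
import numpy as np

def shift_trait(face_coeffs, trait_vector, salience):
    """Move a face's coordinates in a statistical face model along a
    trait direction (e.g. a 'trustworthiness' axis). Unit-normalising
    the direction so that `salience` is in model units is an assumption."""
    direction = trait_vector / np.linalg.norm(trait_vector)
    return face_coeffs + salience * direction
```

Rendering the shifted coefficients back through the face model would then yield the near-imperceptibly altered portrait.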

  6. Attention to internal face features in unfamiliar face matching.

    Science.gov (United States)

    Fletcher, Kingsley I; Butavicius, Marcus A; Lee, Michael D

    2008-08-01

    Accurate matching of unfamiliar faces is vital in security and forensic applications, yet previous research has suggested that humans often perform poorly when matching unfamiliar faces. Hairstyle and facial hair can strongly influence unfamiliar face matching but are potentially unreliable cues. This study investigated whether increased attention to the more stable internal face features of eyes, nose, and mouth was associated with more accurate face-matching performance. Forty-three first-year psychology students decided whether two simultaneously presented faces were of the same person or not. The faces were displayed for either 2 or 6 seconds, and had either similar or dissimilar hairstyles. The level of attention to internal features was measured by the proportion of fixation time spent on the internal face features and the sensitivity of discrimination to changes in external feature similarity. Increased attention to internal features was associated with increased discrimination in the 2-second display-time condition, but no significant relationship was found in the 6-second condition. Individual differences in eye-movements were highly stable across the experimental conditions.

  7. Efficient search for a face by chimpanzees (Pan troglodytes).

    Science.gov (United States)

    Tomonaga, Masaki; Imura, Tomoko

    2015-07-16

    The face is quite an important stimulus category for human and nonhuman primates in their social lives. Recent advances in comparative-cognitive research clearly indicate that chimpanzees and humans process faces in a special manner; that is, using holistic or configural processing. Both species exhibit the face-inversion effect in which the inverted presentation of a face deteriorates their perception and recognition. Furthermore, recent studies have shown that humans detect human faces among non-facial objects rapidly. We report that chimpanzees detected chimpanzee faces among non-facial objects quite efficiently. This efficient search was not limited to own-species faces. They also found human adult and baby faces--but not monkey faces--efficiently. Additional testing showed that a front-view face was more readily detected than a profile, suggesting the important role of eye-to-eye contact. Chimpanzees also detected a photograph of a banana as efficiently as a face, but a further examination clearly indicated that the banana was detected mainly due to a low-level feature (i.e., color). Efficient face detection was hampered by an inverted presentation, suggesting that configural processing of faces is a critical element of efficient face detection in both species. This conclusion was supported by a simple simulation experiment using the saliency model.

  8. High-Resolution En Face Images of Microcystic Macular Edema in Patients with Autosomal Dominant Optic Atrophy

    Directory of Open Access Journals (Sweden)

    Kiyoko Gocho

    2013-01-01

    Full Text Available The purpose of this study was to investigate the characteristics of microcystic macular edema (MME) determined from the en face images obtained by an adaptive optics (AO) fundus camera in patients with autosomal dominant optic atrophy (ADOA) and to try to determine the mechanisms underlying the degeneration of the inner retinal cells and RNFL by using the advantage of AO. Six patients from 4 families with ADOA underwent detailed ophthalmic examinations including spectral domain optical coherence tomography (SD-OCT). Mutational screening of all coding and flanking intron sequences of the OPA1 gene was performed by DNA sequencing. SD-OCT showed a severe reduction in the retinal nerve fiber layer (RNFL) thickness in all patients. A new splicing defect and two new frameshift mutations with premature termination of the Opa1 protein were identified in three families. A reported nonsense mutation was identified in one family. SD-OCT of one patient showed MME in the inner nuclear layer (INL) of the retina. AO images showed microcysts in the en face images of the INL. Our data indicate that AO is a useful method to identify MME in neurodegenerative diseases and may also help determine the mechanisms underlying the degeneration of the inner retinal cells and RNFL.

  9. Occlusion invariant face recognition using mean based weight ...

    Indian Academy of Sciences (India)

    degrade the recognition performance, and thus a robust algorithm for occluded faces is indispensable to ... In this work, the face image is divided into .... occluded images of both men and women) were used for training the target class.

  10. Activity in the fusiform face area supports expert perception in radiologists and does not depend upon holistic processing of images

    Science.gov (United States)

    Engel, Stephen A.; Harley, Erin M.; Pope, Whitney B.; Villablanca, J. Pablo; Mazziotta, John C.; Enzmann, Dieter

    2009-02-01

    Training in radiology dramatically changes observers' ability to process images, but the neural bases of this visual expertise remain unexplored. Prior imaging work has suggested that the fusiform face area (FFA), normally selectively responsive to faces, becomes responsive to images in observers' area of expertise. The FFA has been hypothesized to be important for "holistic" processing that integrates information across the entire image. Here, we report a cross-sectional study of radiologists that used functional magnetic resonance imaging to measure neural activity in first-year radiology residents, fourth-year radiology residents, and practicing radiologists as they detected abnormalities in chest radiographs. Across subjects, activity in the FFA correlated with visual expertise, measured as behavioral performance during scanning. To test whether processing in the FFA was holistic, we measured its responses both to intact radiographs and radiographs that had been divided into 25 square pieces whose locations were scrambled. Activity in the FFA was equal in magnitude for intact and scrambled images, and responses to both kinds of stimuli correlated reliably with expertise. These results suggest that the FFA is one of the cortical regions that provides the basis of expertise in radiology, but that its contribution is not holistic processing of images.

  11. Cross spectral, active and passive approach to face recognition for improved performance

    Science.gov (United States)

    Grudzien, A.; Kowalski, M.; Szustakowski, M.

    2017-08-01

    Biometrics is a technique for the automatic recognition of a person based on physiological or behavioral characteristics. Because the characteristics used are unique, biometrics can create a direct link between a person and an identity. The human face is one of the most important biometric modalities for automatic authentication. The most popular method of face recognition, which relies on processing visible-light imagery, remains imperfect. Thermal infrared imagery may be a promising alternative or complement to visible-range imaging for several reasons. This paper presents an approach that combines both methods.

  12. Breast Imaging: The Face of Imaging 3.0.

    Science.gov (United States)

    Mayo, Ray Cody; Parikh, Jay R

    2016-08-01

    In preparation for impending changes to the health care delivery and reimbursement models, the ACR has provided a roadmap for success via the Imaging 3.0® platform. The authors illustrate how the field of breast imaging demonstrates the following Imaging 3.0 concepts: value, patient-centered care, clinical integration, structured reporting, outcome metrics, and radiology's role in the accountable care organization environment. Much of breast imaging's success may be adapted and adopted by other fields in radiology to ensure that all radiologists become more visible and provide the value sought by patients and payers. Copyright © 2016 American College of Radiology. Published by Elsevier Inc. All rights reserved.

  13. IntraFace

    OpenAIRE

    De la Torre, Fernando; Chu, Wen-Sheng; Xiong, Xuehan; Vicente, Francisco; Ding, Xiaoyu; Cohn, Jeffrey

    2015-01-01

    Within the last 20 years, there has been an increasing interest in the computer vision community in automated facial image analysis algorithms. This has been driven by applications in animation, market research, autonomous-driving, surveillance, and facial editing among others. To date, there exist several commercial packages for specific facial image analysis tasks such as facial expression recognition, facial attribute analysis or face tracking. However, free and easy-to-use software that i...

  14. Importance of the brow in facial expressiveness during human communication.

    Science.gov (United States)

    Neely, John Gail; Lisker, Paul; Drapekin, Jesse

    2014-03-01

    The objective of this study was to evaluate laterality and upper/lower face dominance of expressiveness during prescribed speech using a unique validated image-subtraction system capable of sensitive and reliable measurement of facial surface deformation. Observations and experiments on the central control of facial expressions during speech and social utterances in humans and animals suggest that the right side of the mouth moves more than the left during nonemotional speech. However, proficient lip readers seem to attend to the whole face to interpret meaning from expressed facial cues, also implicating a horizontal (upper face-lower face) axis. Study design: prospective experimental design. Experimental maneuver: recited speech. Outcome measure: image-subtraction strength-duration curve amplitude. Thirty normal human adults were evaluated during memorized nonemotional recitation of 2 short sentences. Facial movements were assessed using a video-image subtraction system capable of simultaneously measuring upper and lower specific areas of each hemiface. The results demonstrate that both axes influence facial expressiveness in human communication; however, the horizontal axis (upper versus lower face) appears dominant, especially during what appears to be spontaneous breakthrough unplanned expressiveness. These data are congruent with the concept that the left cerebral hemisphere has control over nonemotionally stimulated speech; however, the multisynaptic brainstem extrapyramidal pathways may override hemiface laterality and preferentially take control of the upper face. Additionally, these data demonstrate the importance of the often-ignored brow in facial expressiveness. Experimental study; EBM levels not applicable.

  15. 3D Face Appearance Model

    DEFF Research Database (Denmark)

    Lading, Brian; Larsen, Rasmus; Astrom, K

    2006-01-01

    We build a 3D face shape model, including inter- and intra-shape variations, derive the analytical Jacobian of its resulting 2D rendered image, and show examples of its fitting performance with light, pose, identity, expression and texture variations.

  16. Elevated responses to constant facial emotions in different faces in the human amygdala: an fMRI study of facial identity and expression

    Directory of Open Access Journals (Sweden)

    Weiller Cornelius

    2004-11-01

    Abstract. Background: Human faces provide important signals in social interactions by conveying two main types of information, individual identity and emotional expression. The ability to readily assess both the variability and the consistency of emotional expressions in different individuals is central to one's interpretation of the immediate environment. A factorial design was used to systematically test the interaction of either constant or variable emotional expressions with constant or variable facial identities in areas involved in face processing, using functional magnetic resonance imaging. Results: Previous studies suggest a predominant role of the amygdala in the assessment of emotional variability. Here we extend this view by showing that this structure activated to faces with changing identities that display constant emotional expressions. Within this condition, amygdala activation was dependent on the type and intensity of the displayed emotion, with significant responses to fearful expressions and, to a lesser extent, to neutral and happy expressions. In contrast, the lateral fusiform gyrus showed a binary pattern of increased activation to changing stimulus features, and it was also differentially responsive to the intensity of the displayed emotion when processing different facial identities. Conclusions: These results suggest that the amygdala might serve to detect constant facial emotions in different individuals, complementing its established role in detecting emotional variability.

  17. Advanced human machine interaction for an image interpretation workstation

    Science.gov (United States)

    Maier, S.; Martin, M.; van de Camp, F.; Peinsipp-Byma, E.; Beyerer, J.

    2016-05-01

    In recent years, many new interaction technologies have been developed that enhance the usability of computer systems and allow for novel types of interaction. The areas of application for these technologies have mostly been gaming and entertainment. However, professional environments involve especially demanding tasks that would greatly benefit from improved human-machine interfaces as well as an overall improved user experience. We therefore envisioned and built an image-interpretation workstation of the future, a multi-monitor workplace comprised of four screens. Each screen is dedicated to a complex software product, such as a geo-information system to provide geographic context, an image annotation tool, software to generate standardized reports, and a tool to aid in the identification of objects. Using self-developed systems for hand tracking, pointing gestures, and head pose estimation, in addition to touchscreens, face identification, and speech recognition systems, we created a novel approach to this complex task. For example, head pose information is used to save the position of the mouse cursor on the currently focused screen and to restore it as soon as the same screen is focused again, while hand gestures allow for intuitive manipulation of 3D objects in mid-air. While the primary focus is on the task of image interpretation, all of the technologies involved provide generic ways of efficiently interacting with a multi-screen setup and could be utilized in other fields as well. In preliminary experiments, we received promising feedback from users in the military and started to tailor the functionality to their needs.

  18. Malar augmentation assessed by magnetic resonance imaging in patients after face lift and fat injection.

    Science.gov (United States)

    Swanson, Eric

    2011-05-01

    Restoration of cheek volume is recognized as an important part of facial rejuvenation. However, there are no previous studies that have determined whether any soft-tissue technique is effective for achieving lasting malar augmentation. This study prospectively evaluated a subset of five patients who had deep-plane face lifts with fat injection, and other facial cosmetic procedures. The mean volumes of fat injected were 9.1 cc (range, 4 to 12 cc) into the right cheek and 8.5 cc (range, 4 to 11.5 cc) into the left cheek. Magnetic resonance imaging scans were obtained before surgery and at intervals after surgery up to 6 months (and 1 year in one patient) for a total of 22 studies. Axial, coronal, and sagittal images, T1- and T2-weighted, were obtained. Thickness of the malar fat pads was measured. Malar thicknesses showed significant increases at the time of the 1-month follow-up appointments. Face-lift surgery with fat injection produces an increase in malar volume that is still present up to 6 months after surgery. This study confirms the rationale for injecting fat at the time of face-lift surgery.

  19. Statistical Model-Based Face Pose Estimation

    Institute of Scientific and Technical Information of China (English)

    GE Xinliang; YANG Jie; LI Feng; WANG Huahua

    2007-01-01

    A robust face pose estimation approach is proposed that uses a face shape statistical model and represents pose parameters by trigonometric functions. The face shape statistical model is first built by analyzing face shapes from different people under varying poses. Shape alignment is vital in the process of building the statistical model. Then, six trigonometric functions are employed to represent the face pose parameters. Lastly, the mapping function between face image and face pose is constructed by linearly relating the different parameters. The proposed approach is able to estimate different face poses using a few face training samples. Experimental results are provided to demonstrate its efficiency and accuracy.

  20. The occipital face area is causally involved in the formation of identity-specific face representations.

    Science.gov (United States)

    Ambrus, Géza Gergely; Dotzer, Maria; Schweinberger, Stefan R; Kovács, Gyula

    2017-12-01

    Transcranial magnetic stimulation (TMS) and neuroimaging studies suggest a role of the right occipital face area (rOFA) in early facial feature processing. However, the degree to which rOFA is necessary for the encoding of facial identity has been less clear. Here we used a state-dependent TMS paradigm, where stimulation preferentially facilitates attributes encoded by less active neural populations, to investigate the role of the rOFA in face perception and specifically in image-independent identity processing. Participants performed a familiarity decision task for famous and unknown target faces, preceded by brief (200 ms) or longer (3500 ms) exposures to primes which were either an image of a different identity (DiffID), another image of the same identity (SameID), the same image (SameIMG), or a Fourier-randomized noise pattern (NOISE) while either the rOFA or the vertex as control was stimulated by single-pulse TMS. Strikingly, TMS to the rOFA eliminated the advantage of SameID over DiffID condition, thereby disrupting identity-specific priming, while leaving image-specific priming (better performance for SameIMG vs. SameID) unaffected. Our results suggest that the role of rOFA is not limited to low-level feature processing, and emphasize its role in image-independent facial identity processing and the formation of identity-specific memory traces.

  1. Face recognition: a convolutional neural-network approach.

    Science.gov (United States)

    Lawrence, S; Giles, C L; Tsoi, A C; Back, A D

    1997-01-01

    We present a hybrid neural-network for human face recognition which compares favourably with other methods. The system combines local image sampling, a self-organizing map (SOM) neural network, and a convolutional neural network. The SOM provides a quantization of the image samples into a topological space where inputs that are nearby in the original space are also nearby in the output space, thereby providing dimensionality reduction and invariance to minor changes in the image sample, and the convolutional neural network provides partial invariance to translation, rotation, scale, and deformation. The convolutional network extracts successively larger features in a hierarchical set of layers. We present results using the Karhunen-Loeve transform in place of the SOM, and a multilayer perceptron (MLP) in place of the convolutional network for comparison. We use a database of 400 images of 40 individuals which contains quite a high degree of variability in expression, pose, and facial details. We analyze the computational complexity and discuss how new classes could be added to the trained recognizer.
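    The SOM stage described above (quantizing image samples into a topological space where nearby inputs map to nearby units) can be sketched as a minimal 1-D self-organizing map; the unit count, learning schedule, and toy data below are assumptions for illustration, not the paper's configuration:

```python
import numpy as np

def train_som(samples, n_units=16, epochs=20, lr0=0.5, sigma0=4.0, seed=0):
    """Minimal 1-D self-organizing map: learns codebook vectors so that
    inputs close in the original space land on nearby grid units."""
    rng = np.random.default_rng(seed)
    weights = rng.standard_normal((n_units, samples.shape[1]))
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)            # decaying learning rate
        sigma = sigma0 * (1 - epoch / epochs) + 1e-3  # shrinking neighborhood
        for x in samples:
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best matching unit
            # Units near the BMU on the 1-D grid are pulled toward x.
            dist = np.abs(np.arange(n_units) - bmu)
            h = np.exp(-(dist ** 2) / (2 * sigma ** 2))
            weights += lr * h[:, None] * (x - weights)
    return weights

# Toy "image samples": two tight clusters of 5-dimensional vectors.
rng = np.random.default_rng(1)
patches = np.vstack([rng.normal(0, 0.1, (50, 5)), rng.normal(5, 0.1, (50, 5))])
som = train_som(patches)
codes = [int(np.argmin(np.linalg.norm(som - p, axis=1))) for p in patches]
# Samples from the same cluster should map to the same region of the map.
```

    In the full system this quantization feeds a convolutional network; the SOM's role is dimensionality reduction with tolerance to minor changes in the input sample.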

  2. Acne, cystic on the face (image)

    Science.gov (United States)

    The face is the most common location of acne. Here, there are 4 to 6 millimeter red ( ... scars and fistulous tract formation (connecting passages). Severe acne may have a profound psychological impact and may ...

  3. Enlarge the training set based on inter-class relationship for face recognition from one image per person.

    Science.gov (United States)

    Li, Qin; Wang, Hua Jing; You, Jane; Li, Zhao Ming; Li, Jin Xue

    2013-01-01

    In some large-scale face recognition tasks, such as driver license identification and law enforcement, the training set contains only one image per person. This situation is referred to as the one sample problem. Because many face recognition techniques implicitly assume that several (at least two) images per person are available for training, they cannot deal with the one sample problem. This paper investigates principal component analysis (PCA), Fisher linear discriminant analysis (LDA), and locality preserving projections (LPP) and shows why they cannot perform well on the one sample problem. After that, this paper presents four reasons that make the one sample problem itself difficult: the small sample size problem; the lack of representative samples; the underestimated intra-class variation; and the overestimated inter-class variation. Based on this analysis, this paper proposes to enlarge the training set based on the inter-class relationship. This paper also extends LDA and LPP to extract features from the enlarged training set. The experimental results show the effectiveness of the proposed method.
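    One plausible reading of the enlargement idea (synthesizing virtual samples for each class from its inter-class relationships) can be sketched as follows; the shift coefficients and the nearest-neighbor direction are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

def enlarge_training_set(X, alphas=(0.1, -0.1)):
    """One-sample-per-class training set enlargement (illustrative only).
    For each class's single sample, synthesize virtual samples by shifting
    it along the direction away from / toward its nearest inter-class
    neighbor, so the enlarged set has intra-class variation to learn from."""
    X = np.asarray(X, dtype=float)
    enlarged, labels = [], []
    for i, x in enumerate(X):
        others = np.delete(X, i, axis=0)
        nearest = others[np.argmin(np.linalg.norm(others - x, axis=1))]
        enlarged.append(x)
        labels.append(i)
        for a in alphas:
            enlarged.append(x + a * (x - nearest))  # virtual sample for class i
            labels.append(i)
    return np.array(enlarged), np.array(labels)

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # one image per person
Xe, y = enlarge_training_set(X)
# 3 classes x (1 original + 2 virtual) = 9 training samples
```

    With multiple samples per class available, discriminant methods such as LDA or LPP can then estimate within-class scatter, which is undefined for a single sample.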

  4. Enabling dynamics in face analysis

    NARCIS (Netherlands)

    Dibeklioğlu, H.

    2014-01-01

    Most of the approaches in automatic face analysis rely solely on static appearance. However, temporal analysis of expressions reveals interesting patterns. For a better understanding of the human face, this thesis focuses on temporal changes in the face, and dynamic patterns of expressions. In

  5. Forensic Face Recognition: A Survey

    NARCIS (Netherlands)

    Ali, Tauseef; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.; Quaglia, Adamo; Epifano, Calogera M.

    2012-01-01

    The improvements of automatic face recognition during the last 2 decades have disclosed new applications like border control and camera surveillance. A new application field is forensic face recognition. Traditionally, face recognition by human experts has been used in forensics, but now there is a

  6. Images of war: using satellite images for human rights monitoring in Turkish Kurdistan

    NARCIS (Netherlands)

    Vos, de H.; Jongerden, J.P.; Etten, van J.

    2008-01-01

    In areas of war and armed conflict it is difficult to get trustworthy and coherent information. Civil society and human rights groups often face problems of dealing with fragmented witness reports, disinformation of war propaganda, and difficult direct access to these areas. Turkish Kurdistan was

  7. IntraFace.

    Science.gov (United States)

    De la Torre, Fernando; Chu, Wen-Sheng; Xiong, Xuehan; Vicente, Francisco; Ding, Xiaoyu; Cohn, Jeffrey

    2015-05-01

    Within the last 20 years, there has been an increasing interest in the computer vision community in automated facial image analysis algorithms. This has been driven by applications in animation, market research, autonomous driving, surveillance, and facial editing, among others. To date, there exist several commercial packages for specific facial image analysis tasks such as facial expression recognition, facial attribute analysis, or face tracking. However, free and easy-to-use software that incorporates all these functionalities has been unavailable. This paper presents IntraFace (IF), a publicly available software package for automated facial feature tracking, head pose estimation, facial attribute recognition, and facial expression analysis from video. In addition, IF includes a newly developed technique for unsupervised synchrony detection to discover correlated facial behavior between two or more persons, a relatively unexplored problem in facial image analysis. In tests, IF achieved state-of-the-art results for emotion expression and action unit detection in three databases (FERA, CK+ and RU-FACS); measured audience reaction to a talk given by one of the authors; and discovered synchrony for smiling in videos of parent-infant interaction. IF is free of charge for academic use at http://www.humansensing.cs.cmu.edu/intraface/.

  8. Average Gait Differential Image Based Human Recognition

    Directory of Open Access Journals (Sweden)

    Jinyan Chen

    2014-01-01

    The difference between adjacent frames of human walking contains useful information for gait identification. Based on this idea, a silhouette-difference-based human gait recognition method named the average gait differential image (AGDI) is proposed in this paper. The AGDI is generated by accumulating the silhouette differences between adjacent frames. The advantage of this method is that, as a feature image, it preserves both the kinetic and the static information of walking. Compared to the gait energy image (GEI), the AGDI is better suited to representing the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that the AGDI has better identification and verification performance than the GEI. Compared to PCA, 2DPCA is a more efficient feature extraction method with lower memory consumption for gait-based recognition.
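    The AGDI construction described above (accumulating silhouette differences between adjacent frames and averaging over the sequence) can be sketched with NumPy; the toy silhouette sequence is an assumption for illustration:

```python
import numpy as np

def average_gait_differential_image(silhouettes: np.ndarray) -> np.ndarray:
    """Average Gait Differential Image: average the per-pixel absolute
    difference between adjacent binary silhouette frames."""
    frames = np.asarray(silhouettes, dtype=float)  # shape (T, H, W)
    diffs = np.abs(np.diff(frames, axis=0))        # T-1 adjacent-frame differences
    return diffs.mean(axis=0)

# Toy sequence: a 1-pixel "leg" sweeping across the bottom row of a 4x4 frame.
seq = np.zeros((4, 4, 4))
for t in range(4):
    seq[t, 3, t] = 1.0  # bottom row, column moves with time
agdi = average_gait_differential_image(seq)
# Moving pixels accumulate difference energy; static background stays zero.
```

    Unlike the GEI, which averages the silhouettes themselves, this feature highlights where the silhouette changes between frames, which is what lets it capture motion dynamics.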

  9. A multi-view face recognition system based on cascade face detector and improved Dlib

    Science.gov (United States)

    Zhou, Hongjun; Chen, Pei; Shen, Wei

    2018-03-01

    In this research, we present a framework for a multi-view face detection and recognition system based on a cascade face detector and improved Dlib. The method aims to solve the problems of low efficiency and low accuracy in multi-view face recognition, to build a multi-view face recognition system, and to discover a suitable monitoring scheme. For face detection, the cascade face detector extracts Haar-like features from the training samples, and these features are used to train a cascade classifier with the Adaboost algorithm. Next, for face recognition, we propose an improved distance model based on Dlib to improve the accuracy of multi-view face recognition. Furthermore, we applied the proposed method to recognizing face images taken from different viewing directions, including a horizontal view, an overhead view, and a looking-up view, and investigated a suitable monitoring scheme. The method works well for multi-view face recognition; it was simulated and tested, with satisfactory experimental results.
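    The Haar-like features that a cascade classifier thresholds are typically computed in constant time from an integral image; a minimal sketch of one two-rectangle feature follows (illustrative only, not the paper's exact feature set):

```python
import numpy as np

def integral_image(img: np.ndarray) -> np.ndarray:
    """Summed-area table: entry (y, x) holds the sum of img[:y+1, :x+1]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, y, x, h, w):
    """Sum of img[y:y+h, x:x+w] in O(1) via four integral-image lookups."""
    p = np.pad(ii, ((1, 0), (1, 0)))  # zero-pad so y=0 / x=0 edges work
    return p[y + h, x + w] - p[y, x + w] - p[y + h, x] + p[y, x]

def haar_two_rect_vertical(ii, y, x, h, w):
    """Two-rectangle Haar-like feature: left half minus right half.
    A cascade stage thresholds many such responses."""
    half = w // 2
    return rect_sum(ii, y, x, h, half) - rect_sum(ii, y, x + half, h, half)

img = np.zeros((6, 6))
img[:, :3] = 1.0  # bright left half, dark right half: a vertical edge
ii = integral_image(img)
# A vertical edge gives a strong two-rectangle response.
assert haar_two_rect_vertical(ii, 0, 0, 6, 6) == 18.0
```

    Adaboost then selects and weights the most discriminative of these features, and the selected weak classifiers are chained into the cascade stages used at detection time.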

  10. Molecular imaging of melanin distribution in vivo and quantitative differential diagnosis of human pigmented lesions using label-free harmonic generation biopsy (Conference Presentation)

    Science.gov (United States)

    Sun, Chi-Kuang; Wei, Ming-Liang; Su, Yu-Hsiang; Weng, Wei-Hung; Liao, Yi-Hua

    2017-02-01

    Harmonic generation microscopy is a noninvasive, repetitive imaging technique that provides real-time 3D microscopic images of human skin with sub-femtoliter resolution and high penetration down to the reticular dermis. In this talk, we show that, through a strong resonance effect, the third-harmonic-generation (THG) modality provides enhanced contrast on melanin and allows not only differential diagnosis of various pigmented skin lesions but also quantitative imaging for long-term tracking. This unique capability makes THG microscopy the only label-free technique capable of identifying the active melanocytes in human skin and imaging their different dendriticity patterns. In this talk, we will review our recent efforts to image melanin distribution in vivo and to quantitatively diagnose pigmented skin lesions using label-free harmonic generation biopsy. The talk will first cover the spectroscopic study of the melanin-enhanced THG effect in human cells and the calibration strategy inside human skin for quantitative imaging. We will then review our recent clinical trials, including a study of differential diagnosis capability for pigmented skin tumors, as well as a quantitative virtual biopsy study of pre- and post-treatment evaluation of melasma and solar lentigo. Our study indicates the unmatched capability of harmonic generation microscopy to perform virtual biopsy for noninvasive histopathological diagnosis of various pigmented skin tumors, as well as its unsurpassed capability to noninvasively reveal the pathological origin of different hyperpigmentary diseases on the human face and to monitor the efficacy of laser depigmentation treatments. This work is sponsored by the National Health Research Institutes.

  11. Human face recognition ability is specific and highly heritable.

    Science.gov (United States)

    Wilmer, Jeremy B; Germine, Laura; Chabris, Christopher F; Chatterjee, Garga; Williams, Mark; Loken, Eric; Nakayama, Ken; Duchaine, Bradley

    2010-03-16

    Compared with notable successes in the genetics of basic sensory transduction, progress on the genetics of higher level perception and cognition has been limited. We propose that investigating specific cognitive abilities with well-defined neural substrates, such as face recognition, may yield additional insights. In a twin study of face recognition, we found that the correlation of scores between monozygotic twins (0.70) was more than double the dizygotic twin correlation (0.29), evidence for a high genetic contribution to face recognition ability. Low correlations between face recognition scores and visual and verbal recognition scores indicate that both face recognition ability itself and its genetic basis are largely attributable to face-specific mechanisms. The present results therefore identify an unusual phenomenon: a highly specific cognitive ability that is highly heritable. Our results establish a clear genetic basis for face recognition, opening this intensively studied and socially advantageous cognitive trait to genetic investigation.

  12. Do bodily expressions compete with facial expressions? Time course of integration of emotional signals from the face and the body.

    Science.gov (United States)

    Gu, Yuanyuan; Mai, Xiaoqin; Luo, Yue-jia

    2013-01-01

    The decoding of social signals from nonverbal cues plays a vital role in the social interactions of socially gregarious animals such as humans. Because nonverbal emotional signals from the face and body are normally seen together, it is important to investigate the mechanism underlying the integration of emotional signals from these two sources. We conducted a study in which the time course of the integration of facial and bodily expressions was examined via analysis of event-related potentials (ERPs) while the focus of attention was manipulated. Distinctive integrating features were found during multiple stages of processing. In the first stage, threatening information from the body was extracted automatically and rapidly, as evidenced by enhanced P1 amplitudes when the subjects viewed compound face-body images with fearful bodies compared with happy bodies. In the second stage, incongruency between emotional information from the face and the body was detected and captured by N2. Incongruent compound images elicited larger N2s than did congruent compound images. The focus of attention modulated the third stage of integration. When the subjects' attention was focused on the face, images with congruent emotional signals elicited larger P3s than did images with incongruent signals, suggesting more sustained attention and elaboration of congruent emotional information extracted from the face and body. On the other hand, when the subjects' attention was focused on the body, images with fearful bodies elicited larger P3s than did images with happy bodies, indicating more sustained attention and elaboration of threatening information from the body during evaluative processes.

  13. Challenges and advantages in wide-field optical coherence tomography angiography imaging of the human retinal and choroidal vasculature at 1.7-MHz A-scan rate

    Science.gov (United States)

    Poddar, Raju; Migacz, Justin V.; Schwartz, Daniel M.; Werner, John S.; Gorczynska, Iwona

    2017-10-01

    We present noninvasive, three-dimensional, depth-resolved imaging of human retinal and choroidal blood circulation with a swept-source optical coherence tomography (OCT) system at 1065-nm center wavelength. Motion contrast OCT imaging was performed with the phase-variance OCT angiography method. A Fourier-domain mode-locked light source was used to enable an imaging rate of 1.7 MHz. We experimentally demonstrate the challenges and advantages of wide-field OCT angiography (OCTA). In the discussion, we consider acquisition time, scanning area, scanning density, and their influence on visualization of selected features of the retinal and choroidal vascular networks. The OCTA imaging was performed with a field of view of 16 deg (5 mm×5 mm) and 30 deg (9 mm×9 mm). Data were presented in en face projections generated from single volumes and in en face projection mosaics generated from up to 4 datasets. OCTA imaging at 1.7 MHz A-scan rate was compared with results obtained from a commercial OCTA instrument and with conventional ophthalmic diagnostic methods: fundus photography, fluorescein, and indocyanine green angiography. Comparison of images obtained from all methods is demonstrated using the same eye of a healthy volunteer. For example, imaging of retinal pathology is presented in three cases of advanced age-related macular degeneration.

  14. The human body odor compound androstadienone leads to anger-dependent effects in an emotional Stroop but not dot-probe task using human faces.

    Science.gov (United States)

    Hornung, Jonas; Kogler, Lydia; Wolpert, Stephan; Freiherr, Jessica; Derntl, Birgit

    2017-01-01

    The androgen derivative androstadienone is a substance found in human sweat and thus a putative human chemosignal. Androstadienone has been studied with respect to its effects on mood states, attractiveness ratings, and physiological and neural activation. With the current experiment, we aimed to explore in which way androstadienone affects attention to social cues (human faces). Moreover, we wanted to test whether the effects depend on specific emotions, the participants' sex, and individual sensitivity to the smell of androstadienone. To do so, we investigated 56 healthy individuals (including 29 females taking oral contraceptives) with two attention tasks on two consecutive days (once under androstadienone and once under placebo exposure, in pseudorandomized order). With an emotional dot-probe task we measured visuo-spatial cueing, while an emotional Stroop task allowed us to investigate interference control. Our results suggest that androstadienone acts in a sex-, task-, and emotion-specific manner, as a reduction in interference processes in the emotional Stroop task was apparent only for angry faces in men under androstadienone exposure. More specifically, men showed a smaller difference in reaction times for congruent compared to incongruent trials. At the same time, women were also slightly affected by smelling androstadienone, as they classified angry faces correctly more often under androstadienone. For the emotional dot-probe task, no modulation by androstadienone was observed. Furthermore, in both attention paradigms, individual sensitivity to androstadienone was correlated with neither reaction times nor error rates in men and women. To conclude, exposure to androstadienone seems to potentiate the relevance of angry faces in both men and women in connection with interference control, while processes of visuo-spatial cueing remain unaffected.

  15. Real-Time 3D Face Acquisition Using Reconfigurable Hybrid Architecture

    Directory of Open Access Journals (Sweden)

    Mitéran Johel

    2007-01-01

    Acquiring 3D data of the human face is a general problem with applications in face recognition, virtual reality, and many other areas. It can be solved using stereovision, a technique that acquires three-dimensional data from two cameras. The aim is to implement an algorithmic chain that makes it possible to obtain a three-dimensional space from two two-dimensional spaces: the two images coming from the two cameras. Several implementations have already been considered. We propose a new, simple real-time implementation based on a hybrid architecture (FPGA-DSP), allowing for embedded and reconfigurable processing. We then present our method, which provides a dense and reliable depth map of the face and can be implemented on an embedded architecture. A study of various architectures led us to a judicious choice that yields the desired result. The real-time data processing is implemented in an embedded architecture. We obtain a dense face disparity map, precise enough for the considered applications (multimedia, virtual worlds, biometrics), using a reliable method.

  16. Face Recognition Is Shaped by the Use of Sign Language

    Science.gov (United States)

    Stoll, Chloé; Palluel-Germain, Richard; Caldara, Roberto; Lao, Junpeng; Dye, Matthew W. G.; Aptel, Florent; Pascalis, Olivier

    2018-01-01

    Previous research has suggested that early deaf signers differ in face processing. Which aspects of face processing are changed and the role that sign language may have played in that change are however unclear. Here, we compared face categorization (human/non-human) and human face recognition performance in early profoundly deaf signers, hearing…

  17. Human low vision image warping - Channel matching considerations

    Science.gov (United States)

    Juday, Richard D.; Smith, Alan T.; Loshin, David S.

    1992-01-01

    We are investigating the possibility that a video image may productively be warped prior to presentation to a low vision patient. This could form part of a prosthesis for certain field defects. We have done preliminary quantitative studies on some notions that may be valid in calculating the image warpings. We hope the results will help make best use of time to be spent with human subjects, by guiding the selection of parameters and their range to be investigated. We liken a warping optimization to opening the largest number of spatial channels between the pixels of an input imager and resolution cells in the visual system. Some important effects are not quantified that will require human evaluation, such as local 'squashing' of the image, taken as the ratio of eigenvalues of the Jacobian of the transformation. The results indicate that the method shows quantitative promise. These results have identified some geometric transformations to evaluate further with human subjects.

  18. Human age estimation framework using different facial parts

    Directory of Open Access Journals (Sweden)

    Mohamed Y. El Dib

    2011-03-01

    Full Text Available Human age estimation from facial images has a wide range of real-world applications in human-computer interaction (HCI). In this paper, we use bio-inspired features (BIF) to analyze different facial parts: (a) the eye wrinkles, (b) the whole internal face (without the forehead area) and (c) the whole face (with the forehead area), using different feature shape points. The analysis shows that the eye wrinkles, which cover about 30% of the facial area, contain the most important aging features compared to the internal face and the whole face. Furthermore, more extensive experiments are made on the FG-NET database, filling in the missing pictures in the older age groups with images from the MORPH database to enhance the results.

  19. The Perception of Four Basic Emotions in Human and Nonhuman Faces by Children with Autism and Other Developmental Disabilities

    Science.gov (United States)

    Gross, Thomas F.

    2004-01-01

    Children with autism, mental retardation, or language disorders, and children in a clinical control group, were shown photographs of human female, orangutan, and canine (boxer) faces expressing happiness, sadness, anger, surprise and a neutral expression. For each species of faces, children were asked to identify the happy, sad, angry,…

  20. Meta-analytic review of the development of face discrimination in infancy: Face race, face gender, infant age, and methodology moderate face discrimination.

    Science.gov (United States)

    Sugden, Nicole A; Marquis, Alexandra R

    2017-11-01

    Infants show facility for discriminating between individual faces within hours of birth. Over the first year of life, infants' face discrimination shows continued improvement with familiar face types, such as own-race faces, but not with unfamiliar face types, like other-race faces. The goal of this meta-analytic review is to provide an effect size for infants' face discrimination ability overall, with own-race faces, and with other-race faces within the first year of life, how this differs with age, and how it is influenced by task methodology. Inclusion criteria were (a) infant participants aged 0 to 12 months, (b) completing a human own- or other-race face discrimination task, (c) with discrimination being determined by infant looking. Our analysis included 30 works (165 samples, 1,926 participants participated in 2,623 tasks). The effect size for infants' face discrimination was small, 6.53% greater than chance (i.e., equal looking to the novel and familiar). There was a significant difference in discrimination by race, overall (own-race, 8.18%; other-race, 3.18%) and between ages (own-race: 0- to 4.5-month-olds, 7.32%; 5- to 7.5-month-olds, 9.17%; and 8- to 12-month-olds, 7.68%; other-race: 0- to 4.5-month-olds, 6.12%; 5- to 7.5-month-olds, 3.70%; and 8- to 12-month-olds, 2.79%). Multilevel linear (mixed-effects) models were used to predict face discrimination; infants' capacity to discriminate faces is sensitive to face characteristics including race, gender, and emotion as well as the methods used, including task timing, coding method, and visual angle. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
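
Discrimination indexed by infant looking is typically summarized as the percentage of looking time directed to the novel face, so the reported 6.53% above chance corresponds to a mean novelty preference of 56.53%. A toy sketch of that computation (the looking times and function name are hypothetical, not the meta-analytic code):

```python
def novelty_effect(novel_looks, familiar_looks):
    """Per-infant novelty preference (% of looking time on the novel
    face) and the mean deviation from chance (50%), the effect measure
    used when discrimination is indexed by infant looking."""
    prefs = [100.0 * n / (n + f) for n, f in zip(novel_looks, familiar_looks)]
    mean_pref = sum(prefs) / len(prefs)
    return prefs, mean_pref - 50.0
```

A positive effect means infants looked longer at the novel face than the familiar one, i.e., they discriminated between the two.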

  1. Perspective projection for variance pose face recognition from camera calibration

    Science.gov (United States)

    Fakhir, M. M.; Woo, W. L.; Chambers, J. A.; Dlay, S. S.

    2016-04-01

    Variance pose is an important research topic in face recognition. The alteration of distance parameters across variance-pose face features is challenging. We provide a solution to this problem using perspective projection for variance-pose face recognition. Our method infers the intrinsic camera parameters of the image, which enable the projection of the image plane into 3D. After this, face-box tracking and centre-of-eyes detection are identified using our novel technique to verify the virtual face feature measurements. The coordinate system of the perspective projection for face tracking allows the holistic dimensions of the face to be fixed in different orientations. Training on frontal images and the rest of the poses in the FERET database determines the distance from the centre of the eyes to the corner of the face box. The recognition system compares the gallery of images against different poses. The system initially utilises information on the position of both eyes, then focuses principally on the closest eye in order to gather data with greater reliability. Differentiation between the distances and positions of the right and left eyes is a unique feature of our work; our algorithm outperforms other state-of-the-art algorithms, enabling stable measurement in variance pose for each individual.
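
The perspective projection underlying such methods maps a 3D point (X, Y, Z) in camera coordinates to pixel coordinates through the intrinsic parameters. A minimal pinhole-camera sketch (the parameter values are illustrative; the paper's calibration procedure is not reproduced here):

```python
def project(point3d, fx, fy, cx, cy):
    """Pinhole perspective projection: a 3D point in camera coordinates
    maps to pixel (u, v) via the intrinsics -- focal lengths (fx, fy)
    in pixels and principal point (cx, cy)."""
    X, Y, Z = point3d
    if Z <= 0:
        raise ValueError("point must lie in front of the camera")
    return (fx * X / Z + cx, fy * Y / Z + cy)
```

Inverting this mapping with known intrinsics is what allows image-plane measurements, such as eye positions, to be lifted into 3D.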

  2. Applications of PCA and SVM-PSO Based Real-Time Face Recognition System

    Directory of Open Access Journals (Sweden)

    Ming-Yuan Shieh

    2014-01-01

    Full Text Available This paper incorporates principal component analysis (PCA) with support vector machine-particle swarm optimization (SVM-PSO) to develop real-time face recognition systems. The integrated scheme adopts the SVM-PSO method to improve the validity of PCA-based image recognition systems for dynamic visual perception. Face recognition in most human-robot interaction applications is accomplished by PCA-based methods because of their dimensionality reduction. However, PCA-based systems are only suitable for processing faces with the same facial expression and/or under the same view direction. Since the facial feature selection process can be considered a problem of global combinatorial optimization in machine learning, SVM-PSO is used as an optimal classifier of the system: the PSO implements feature selection, and the SVMs serve as fitness functions of the PSO for classification. Experimental results demonstrate that the proposed method simplifies features effectively and obtains higher classification accuracy.
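
The dimensionality reduction that makes PCA attractive here amounts to projecting face vectors onto the leading eigenvectors of their covariance matrix. A small illustrative sketch computing the first principal component by power iteration (plain Python for clarity; real eigenface systems operate on image-sized vectors and many components):

```python
import math

def top_principal_component(data, iters=200):
    """First principal component by power iteration on the sample
    covariance matrix -- the core step of PCA-based ('eigenface')
    dimensionality reduction."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centered = [[row[j] - means[j] for j in range(d)] for row in data]
    cov = [[sum(r[i] * r[j] for r in centered) / (n - 1) for j in range(d)]
           for i in range(d)]
    v = [1.0] * d  # arbitrary non-degenerate starting vector
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v
```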

  3. Improving Shadow Suppression for Illumination Robust Face Recognition

    KAUST Repository

    Zhang, Wuming

    2017-10-13

    2D face analysis techniques, such as face landmarking, face recognition and face verification, are strongly affected by illumination conditions, which are usually uncontrolled and unpredictable in the real world. An illumination-robust preprocessing method thus remains a significant challenge in reliable face analysis. In this paper we propose a novel approach to improving lighting normalization by building the underlying reflectance model, which characterizes the interactions between skin surface, lighting source and camera sensor and elaborates the formation of face color appearance. Specifically, the proposed illumination processing pipeline enables the generation of a Chromaticity Intrinsic Image (CII) in a log chromaticity space which is robust to illumination variations. Moreover, as an advantage over most prevailing methods, a photo-realistic color face image is subsequently reconstructed, which eliminates a wide variety of shadows whilst retaining the color information and identity details. Experimental results under different scenarios and using various face databases show the effectiveness of the proposed approach in dealing with lighting variations, including both soft and hard shadows, in face recognition.
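
The key property of a log chromaticity space is that a uniform scaling of all colour channels (a change of illumination intensity) cancels out of the band ratios. A toy per-pixel sketch (the band ratios chosen are one common convention; the paper's CII construction is considerably more elaborate):

```python
import math

def log_chromaticity(pixel):
    """Map an (R, G, B) pixel into log-chromaticity coordinates
    (log R/G, log B/G). Scaling the pixel by a constant illumination
    factor k gives (log kR/kG, log kB/kG) -- the same coordinates."""
    r, g, b = (max(float(c), 1e-6) for c in pixel)  # avoid log(0)
    return (math.log(r / g), math.log(b / g))
```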

  4. The roles of perceptual and conceptual information in face recognition.

    Science.gov (United States)

    Schwartz, Linoy; Yovel, Galit

    2016-11-01

    The representation of familiar objects comprises perceptual information about their visual properties as well as the conceptual knowledge that we have about them. What is the relative contribution of perceptual and conceptual information to object recognition? Here, we examined this question by designing a face familiarization protocol during which participants were exposed either to rich perceptual information (viewing each face from different angles and under different illuminations) or to conceptual information (associating each face with a different name). Both conditions were compared with single-view faces presented with no labels. Recognition was tested on new images of the same identities to assess whether learning generated a view-invariant representation. Results showed better recognition of novel images of the learned identities following association of a face with a name label, but no enhancement following exposure to multiple face views. Whereas these findings may be consistent with the role of category learning in object recognition, face recognition was better for labeled faces only when faces were associated with person-related labels (name, occupation), but not with person-unrelated labels (object names or symbols). These findings suggest that association of meaningful conceptual information with an image shifts its representation from an image-based percept to a view-invariant concept. They further indicate that the role of conceptual information should be considered to account for the superior recognition that we have for familiar faces and objects. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  5. Face recognition increases during saccade preparation.

    Science.gov (United States)

    Lin, Hai; Rizak, Joshua D; Ma, Yuan-ye; Yang, Shang-chuan; Chen, Lin; Hu, Xin-tian

    2014-01-01

    Face perception is integral to the human perceptual system, as it underlies social interactions. Saccadic eye movements are frequently made to bring interesting visual information, such as faces, onto the fovea for detailed processing. Just before eye-movement onset, the processing of some basic features of an object, such as its orientation, improves at the saccade landing point. Interestingly, there is also evidence indicating that faces are processed in early visual processing stages similarly to basic features. However, it is not known whether this early enhancement of processing includes face recognition. In this study, three experiments were performed that mapped the timing of face presentation to the beginning of the eye movement in order to evaluate pre-saccadic face recognition. Faces were found to be processed similarly to simple objects immediately prior to saccadic movements. Starting ∼120 ms before a saccade to a target face, independent of whether or not the face was surrounded by other faces, face recognition gradually improved and the critical spacing of the crowding decreased as saccade onset approached. These results suggest that an upcoming saccade prepares the visual system for new information about faces at the saccade landing site and may reduce the background in a crowd to target the intended face. This indicates an important role of pre-saccadic eye movement signals in human face recognition.

  6. Shared or separate mechanisms for self-face and other-face processing? Evidence from adaptation.

    Directory of Open Access Journals (Sweden)

    Brendan eRooney

    2012-03-01

    Full Text Available Evidence that self-face recognition is dissociable from general face recognition has important implications both for models of social cognition and for our understanding of face recognition. In two studies, we examine how adaptation affects the perception of personally familiar faces, and we use a visual adaptation paradigm to investigate whether the neural mechanisms underlying the recognition of one's own and other faces are shared or separate. In Study 1 we show that the representation of personally familiar faces is rapidly updated by visual experience with unfamiliar faces, so that the perception of one's own face and a friend's face is altered by a brief period of adaptation to distorted unfamiliar faces. In Study 2, participants adapted to images of their own and a friend's face distorted in opposite directions; the contingent aftereffects we observe are indicative of separate neural populations, but we suggest that these reflect coding of facial identity rather than of the categories 'self' and 'other'.

  7. Face Recognition and Tracking in Videos

    Directory of Open Access Journals (Sweden)

    Swapnil Vitthal Tathe

    2017-07-01

    Full Text Available Advancement in computer vision technology and the availability of video capturing devices such as surveillance cameras have evoked new video processing applications. Research in video face recognition is mostly biased towards law enforcement applications. Applications involve human recognition based on face and iris, human-computer interaction, behavior analysis, video surveillance, etc. This paper presents a face tracking framework that is capable of face detection using Haar features, recognition using Gabor feature extraction, matching using a correlation score and tracking using a Kalman filter. The method has a good recognition rate for real-life videos and robust performance under changes due to illumination, environmental factors, scale, pose and orientation.
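
The tracking stage of such a framework can be sketched with a constant-velocity Kalman filter that smooths a detected face coordinate between frames. A minimal scalar version (the noise settings q and r are illustrative; the paper's filter state is not specified in the abstract):

```python
def kalman_1d(measurements, q=1e-3, r=0.25):
    """Constant-velocity Kalman filter on one tracked coordinate.
    State: [position, velocity]; q = process noise, r = measurement
    noise variance. Returns the filtered position at each frame."""
    x, v = measurements[0], 0.0
    p = [[1.0, 0.0], [0.0, 1.0]]  # state covariance
    out = []
    for z in measurements:
        # predict with constant-velocity model (dt = 1):
        # x' = F x,  P' = F P F^T + qI,  F = [[1, 1], [0, 1]]
        x, v = x + v, v
        p = [[p[0][0] + p[0][1] + p[1][0] + p[1][1] + q, p[0][1] + p[1][1]],
             [p[1][0] + p[1][1], p[1][1] + q]]
        # update with the scalar position measurement z (H = [1, 0])
        s = p[0][0] + r
        k0, k1 = p[0][0] / s, p[1][0] / s
        y = z - x
        x, v = x + k0 * y, v + k1 * y
        p = [[(1 - k0) * p[0][0], (1 - k0) * p[0][1]],
             [p[1][0] - k1 * p[0][0], p[1][1] - k1 * p[0][1]]]
        out.append(x)
    return out
```

In a tracker, the predict step runs every frame while the update step runs only when the detector fires, bridging missed detections.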

  8. A smart technique for attendance system to recognize faces through parallelism

    Science.gov (United States)

    Prabhavathi, B.; Tanuja, V.; Madhu Viswanatham, V.; Rajashekhara Babu, M.

    2017-11-01

    A major part of recognising a person is the face; with the help of image processing techniques we can exploit a person's physical features. In the old approach used in schools and colleges, the professor calls each student's name and then marks attendance for the students. In this paper we deviate from that approach and adopt a new one using image processing techniques. We present spontaneous attendance marking for students in a classroom. First, an image of the classroom is taken and stored in a data record. To the images stored in the database we apply an algorithm comprising steps such as histogram classification, noise removal, face detection and face recognition. Using these steps we detect the faces and then compare them with the database. Attendance is marked automatically if the system recognizes the faces.

  9. Enlarge the training set based on inter-class relationship for face recognition from one image per person.

    Directory of Open Access Journals (Sweden)

    Qin Li

    Full Text Available In some large-scale face recognition tasks, such as driver license identification and law enforcement, the training set contains only one image per person. This situation is referred to as the one sample problem. Because many face recognition techniques implicitly assume that several (at least two) images per person are available for training, they cannot deal with the one sample problem. This paper investigates principal component analysis (PCA), Fisher linear discriminant analysis (LDA), and locality preserving projections (LPP) and shows why they cannot perform well in the one sample problem. After that, this paper presents four reasons that make the one sample problem itself difficult: the small sample size problem; the lack of representative samples; the underestimated intra-class variation; and the overestimated inter-class variation. Based on this analysis, this paper proposes to enlarge the training set based on the inter-class relationship. This paper also extends LDA and LPP to extract features from the enlarged training set. The experimental results show the effectiveness of the proposed method.

  10. Exploring Human Cognition Using Large Image Databases.

    Science.gov (United States)

    Griffiths, Thomas L; Abbott, Joshua T; Hsu, Anne S

    2016-07-01

    Most cognitive psychology experiments evaluate models of human cognition using a relatively small, well-controlled set of stimuli. This approach stands in contrast to current work in neuroscience, perception, and computer vision, which have begun to focus on using large databases of natural images. We argue that natural images provide a powerful tool for characterizing the statistical environment in which people operate, for better evaluating psychological theories, and for bringing the insights of cognitive science closer to real applications. We discuss how some of the challenges of using natural images as stimuli in experiments can be addressed through increased sample sizes, using representations from computer vision, and developing new experimental methods. Finally, we illustrate these points by summarizing recent work using large image databases to explore questions about human cognition in four different domains: modeling subjective randomness, defining a quantitative measure of representativeness, identifying prior knowledge used in word learning, and determining the structure of natural categories. Copyright © 2016 Cognitive Science Society, Inc.

  11. Monte Carlo modeling of human tooth optical coherence tomography imaging

    International Nuclear Information System (INIS)

    Shi, Boya; Meng, Zhuo; Wang, Longzhi; Liu, Tiegen

    2013-01-01

    We present a Monte Carlo model for optical coherence tomography (OCT) imaging of human tooth. The model is implemented by combining the simulation of a Gaussian beam with simulation for photon propagation in a two-layer human tooth model with non-parallel surfaces through a Monte Carlo method. The geometry and the optical parameters of the human tooth model are chosen on the basis of the experimental OCT images. The results show that the simulated OCT images are qualitatively consistent with the experimental ones. Using the model, we demonstrate the following: firstly, two types of photons contribute to the information of morphological features and noise in the OCT image of a human tooth, respectively. Secondly, the critical imaging depth of the tooth model is obtained, and it is found to decrease significantly with increasing mineral loss, simulated as different enamel scattering coefficients. Finally, the best focus position is located below and close to the dental surface by analysis of the effect of focus positions on the OCT signal and critical imaging depth. We anticipate that this modeling will become a powerful and accurate tool for a preliminary numerical study of the OCT technique on diseases of dental hard tissue in human teeth. (paper)
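
The elementary operation of such a Monte Carlo model is sampling a photon's free path length between interactions, s = −ln(u)/μt for uniform u, whose mean is 1/μt. A small sketch (the μt value and sample count in the usage are illustrative; the full model adds scattering angles, the two tooth layers and the Gaussian beam):

```python
import math
import random

def sample_free_paths(mu_t, n, seed=1):
    """Sample n photon free-path lengths s = -ln(u)/mu_t, the basic
    step of Monte Carlo photon transport in a medium with total
    interaction coefficient mu_t (e.g. per mm); the mean path length
    is 1/mu_t (Beer-Lambert attenuation)."""
    rng = random.Random(seed)
    # use 1 - random() so the argument of log lies in (0, 1]
    return [-math.log(1.0 - rng.random()) / mu_t for _ in range(n)]
```

Raising the enamel scattering coefficient shortens these paths on average, which is how the simulated mineral loss reduces the critical imaging depth.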

  12. A retrospective look at replacing face-to-face embryology instruction with online lectures in a human anatomy course.

    Science.gov (United States)

    Beale, Elmus G; Tarwater, Patrick M; Lee, Vaughan H

    2014-01-01

    Embryology is integrated into the Clinically Oriented Anatomy course at the Texas Tech University Health Sciences Center School of Medicine. Before 2008, the same instructor presented embryology in 13 face-to-face lectures distributed by organ systems throughout the course. For the 2008 and 2009 offerings of the course, a hybrid embryology instruction model with four face-to-face classes that supplemented online recorded lectures was used. One instructor delivered the lectures face-to-face in 2007 and by online videos in 2008-2009, while a second instructor provided the supplemental face-to-face classes in 2008-2009. The same embryology learning objectives and selected examination questions were used for each of the three years. This allowed direct comparison of learning outcomes, as measured by examination performance, for students receiving only face-to-face embryology instruction versus the hybrid approach. Comparison of the face-to-face lectures to the hybrid approach showed no difference in overall class performance on embryology questions that were used all three years. Moreover, there was no differential effect of the delivery method on the examination scores for bottom quartile students. Students completed an end-of-course survey to assess their opinions. They rated the two forms of delivery similarly on a six-point Likert scale and reported that face-to-face lectures have the advantage of allowing them to interact with the instructor, whereas online lectures could be paused, replayed, and viewed at any time. These experiences suggest the need for well-designed prospective studies to determine whether online lectures can be used to enhance the efficacy of embryology instruction. © 2013 American Association of Anatomists.

  13. Near infrared and visible face recognition based on decision fusion of LBP and DCT features

    Science.gov (United States)

    Xie, Zhihua; Zhang, Shuai; Liu, Guodong; Xiong, Jinquan

    2018-03-01

    Visible face recognition systems, being vulnerable to illumination, expression, and pose, cannot achieve robust performance in unconstrained situations. Meanwhile, near infrared face images, being light-independent, can avoid or limit the drawbacks of face recognition in visible light, but their main challenges are low resolution and low signal-to-noise ratio (SNR). Therefore, near infrared and visible fusion face recognition has become an important direction in the field of unconstrained face recognition research. In order to extract the discriminative complementary features between near infrared and visible images, in this paper we propose a novel near infrared and visible face fusion recognition algorithm based on DCT and LBP features. Firstly, the effective features in the near-infrared face image are extracted from the low-frequency part of the DCT coefficients and the partition histograms of the LBP operator. Secondly, the LBP features of the visible-light face image are extracted to compensate for the detail features lacking in the near-infrared face image. Then, the LBP features of the visible-light face image and the DCT and LBP features of the near-infrared face image are sent to their respective classifiers for labeling. Finally, a decision-level fusion strategy is used to obtain the final recognition result. The visible and near infrared face recognition is tested on the HITSZ Lab2 visible and near infrared face database. The experimental results show that the proposed method extracts the complementary features of near-infrared and visible face images and improves the robustness of unconstrained face recognition. Especially in the circumstance of small training samples, the recognition rate of the proposed method reaches 96.13%, a significant improvement over the 92.75% of the method based on statistical feature fusion.
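
The LBP features used in both streams come from a simple 3×3 operator: each neighbour is thresholded against the centre pixel and the resulting bits are packed into one byte, whose histograms over image partitions form the descriptor. A minimal sketch (the bit ordering is one common convention, not necessarily the paper's exact parameterization):

```python
def lbp_code(img, y, x):
    """Basic 3x3 local binary pattern: threshold the 8 neighbours of
    pixel (y, x) against the centre value and pack the comparison bits
    clockwise from the top-left into a single byte (0..255)."""
    c = img[y][x]
    nbrs = [img[y - 1][x - 1], img[y - 1][x], img[y - 1][x + 1],
            img[y][x + 1], img[y + 1][x + 1], img[y + 1][x],
            img[y + 1][x - 1], img[y][x - 1]]
    code = 0
    for bit, v in enumerate(nbrs):
        if v >= c:
            code |= 1 << bit
    return code
```

Because the code depends only on comparisons with the centre, it is invariant to monotonic illumination changes, which is why LBP suits both the visible and near-infrared streams.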

  14. Quality of life differences in patients with right- versus left-sided facial paralysis: Universal preference of right-sided human face recognition.

    Science.gov (United States)

    Ryu, Nam Gyu; Lim, Byung Woo; Cho, Jae Keun; Kim, Jin

    2016-09-01

    We investigated whether experiencing right- or left-sided facial paralysis affects an individual's recognition of one side of the human face, using hybrid hemi-facial photos in a preliminary study. Further investigation looked at the relationship between facial recognition ability, stress, and quality of life. To investigate the predominance of one side of the human face in face recognition, 100 normal participants (right-handed: n = 97, left-handed: n = 3, right brain dominance: n = 56, left brain dominance: n = 44) answered a questionnaire that included hybrid hemi-facial photos developed to determine the superiority of one side for human face recognition. To determine differences in stress level and quality of life between individuals experiencing right- and left-sided facial paralysis, 100 patients (right side: 50, left side: 50, not including traumatic facial nerve paralysis) answered a questionnaire comprising the facial disability index test and a quality-of-life measure (SF-36, Korean version). Regardless of handedness or hemispheric dominance, the proportion preferring the right side in human face recognition was larger than that preferring the left side (71% versus 12%; neutral: 17%). The facial distress index of the patients with right-sided facial paralysis was lower than that of left-sided patients (68.8 ± 9.42 versus 76.4 ± 8.28), and the SF-36 scores of right-sided patients were lower than those of left-sided patients (119.07 ± 15.24 versus 123.25 ± 16.48; total score: 166). Given the universal preference for the right side in human face recognition, patients with right-sided facial paralysis showed worse psychological mood and social interaction than those with left-sided paralysis. This information is helpful to clinicians in that psychological and social factors should be considered when treating patients with facial paralysis. Copyright © 2016 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  15. Development of a human body NMR imaging device

    International Nuclear Information System (INIS)

    Saint-Jalmes, H.

    1984-03-01

    This thesis studies an imaging device for the human body. The section images presented are obtained by a projection-reconstruction method combined with section-plane selection through the application of an oscillating gradient. The different stages of the machine's development are presented: design and calculation of a resistive magnet for very homogeneous field imaging; design of gradient coils for imaging magnets; realization of control and acquisition interfaces; and realization of real-time imaging software [fr]

  16. Enhancing the performance of cooperative face detector by NFGS

    Science.gov (United States)

    Yesugade, Snehal; Dave, Palak; Srivastava, Srinkhala; Das, Apurba

    2015-07-01

    Computerized human face detection is an important deformable-pattern-recognition task in today's world. Especially in cooperative authentication scenarios like ATM fraud detection, attendance recording, video tracking and video surveillance, the performance of the face detection engine in terms of accuracy, memory utilization and speed has been an active area of research for the last decade. Haar-based face detection and SIFT- or EBGM-based face recognition systems are fairly reliable in this regard, but their features are extracted as gray textures. When the input is a high-resolution online video with a fairly large viewing area, Haar needs to search for faces everywhere (say 352×250 pixels) and all the time (e.g., at 30 FPS). In the current paper we propose to address both of the aforementioned concerns by a neuro-visually inspired method of figure-ground segregation (NFGS) [5] that produces a two-dimensional binary array from the gray face image. The NFGS identifies the reference video frame at a low sampling rate and updates it upon significant changes of environment such as illumination. The proposed algorithm triggers the face detector only when a new entity appears in the viewing area. To improve detection accuracy, the classical face detector is enabled only in a narrowed-down region of interest (RoI) fed by the NFGS. The RoI is updated online in each frame with respect to the moving entity, which in turn improves both the FR (false rejection) and FA (false acceptance) rates of the face detection system.
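
The trigger idea, deciding where and whether to run the detector from a change map against a reference frame, can be sketched with plain frame differencing (the actual NFGS binary map is neuro-visually inspired rather than a simple difference; the threshold here is illustrative):

```python
def changed_region(prev, curr, thresh=10):
    """Return the bounding box (top, left, bottom, right, inclusive)
    of pixels that changed by more than `thresh` relative to the
    reference frame, or None if nothing changed. A detector would
    then be run only inside this region of interest."""
    box = None
    for y, (prow, crow) in enumerate(zip(prev, curr)):
        for x, (p, c) in enumerate(zip(prow, crow)):
            if abs(c - p) > thresh:
                if box is None:
                    box = [y, x, y, x]
                else:
                    box[0] = min(box[0], y)
                    box[1] = min(box[1], x)
                    box[2] = max(box[2], y)
                    box[3] = max(box[3], x)
    return None if box is None else tuple(box)
```

A `None` result means the expensive detector is skipped entirely for that frame, which is the source of the claimed speed gain.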

  17. Robust Face Recognition Based on Texture Analysis

    Directory of Open Access Journals (Sweden)

    Sanun Srisuk

    2013-01-01

    Full Text Available In this paper, we present a new framework for face recognition with varying illumination based on DCT total variation minimization (DTV), a Gabor filter, a sub-micro-pattern analysis (SMP) and a discriminated accumulative feature transform (DAFT). We first suppress the illumination effect by using the DCT with the help of TV as a tool for face normalization. The DTV image is then emphasized by the Gabor filter. The facial features are encoded by our proposed method, the SMP. The SMP image is then transformed to the 2D histogram using DAFT. Our system is verified with experiments on the AR and the Yale face database B.

  18. False memory for face in short-term memory and neural activity in human amygdala.

    Science.gov (United States)

    Iidaka, Tetsuya; Harada, Tokiko; Sadato, Norihiro

    2014-12-03

    Human memory is often inaccurate. Similar to words and figures, new faces are often recognized as seen or studied items in long- and short-term memory tests; however, the neural mechanisms underlying this false memory remain elusive. In a previous fMRI study using morphed faces and a standard false-memory paradigm, we found a U-shaped response curve of the amygdala to old, new, and lure items, indicating that the amygdala is more active in response to items that are salient in terms of memory retrieval (hits and correct rejections) than to items that are less salient (false alarms). In the present fMRI study, we determined whether false memory for faces occurs within the short-term memory range (a few seconds) and assessed which neural correlates are involved in veridical and illusory memories. Nineteen healthy participants were scanned with 3T MRI during a short-term memory task using morphed faces. The behavioral results indicated that false memories occurred within the short-term range. We found that the amygdala displayed a U-shaped response curve to memory items similar to that observed in our previous study. These results suggest that the amygdala plays a common role in both long- and short-term false memory for faces. We draw the following conclusions: First, the amygdala is involved in detecting the saliency of items, in addition to fear, and supports goal-oriented behavior by modulating memory. Second, amygdala activity and response time might be related to a subject's response criterion for similar faces. Copyright © 2014 Elsevier B.V. All rights reserved.

  19. AN ILLUMINATION INVARIANT FACE RECOGNITION BY ENHANCED CONTRAST LIMITED ADAPTIVE HISTOGRAM EQUALIZATION

    Directory of Open Access Journals (Sweden)

    A. Thamizharasi

    2016-05-01

    Full Text Available Face recognition systems are gaining importance in social networks and surveillance. The face recognition task is complex due to variations in illumination, expression, occlusion, aging and pose. Illumination variations in an image arise from changes in lighting conditions, poor illumination, low contrast or increased brightness; they adversely affect image quality and recognition accuracy, so they have to be pre-processed prior to face recognition. Contrast Limited Adaptive Histogram Equalization (CLAHE) is an image enhancement technique popular for enhancing medical images. The proposed work creates an illumination-invariant face recognition system by enhancing the CLAHE technique; this method is termed "Enhanced CLAHE". The efficiency of Enhanced CLAHE is tested using a fuzzy k-nearest-neighbour classifier and the Fisherface subspace projection method. The face recognition accuracy rate, the Equal Error Rate and the False Acceptance Rate at 1% are calculated, and the performance of the CLAHE and Enhanced CLAHE methods is compared. The efficiency of the Enhanced CLAHE method is tested on three public face databases: AR, Yale and ORL. Enhanced CLAHE achieves a much higher recognition accuracy rate than CLAHE.
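
The contrast-limiting idea that distinguishes CLAHE from plain histogram equalization is to clip each histogram bin at a ceiling and redistribute the excess before building the cumulative mapping. A global (single-tile) sketch of that step (CLAHE proper applies it per tile with bilinear blending between tiles, and "Enhanced CLAHE" itself is not specified in the abstract):

```python
def clipped_equalize(img, levels=256, clip=None):
    """Histogram equalization with optional contrast limiting: bin
    counts above `clip` are trimmed and redistributed uniformly
    before the CDF-based intensity mapping is built."""
    flat = [v for row in img for v in row]
    hist = [0] * levels
    for v in flat:
        hist[v] += 1
    if clip is not None:
        excess = sum(max(0, h - clip) for h in hist)
        hist = [min(h, clip) + excess // levels for h in hist]
    cdf, run = [], 0
    for h in hist:
        run += h
        cdf.append(run)
    total = cdf[-1]
    # map each input level to its (clipped) CDF, scaled to [0, levels-1]
    lut = [round((levels - 1) * c / total) for c in cdf]
    return [[lut[v] for v in row] for row in img]
```

Clipping bounds the slope of the mapping, so noise in nearly flat regions is amplified far less than under unconstrained equalization.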

  20. Face validation using 3D information from single calibrated camera

    DEFF Research Database (Denmark)

    Katsarakis, N.; Pnevmatikakis, A.

    2009-01-01

    Detection of faces in cluttered scenes under arbitrary imaging conditions (pose, expression, illumination and distance) is prone to miss and false positive errors. The well-established approach of using boosted cascades of simple classifiers addresses the problem of missing faces by using fewer stages in the cascade. This constrains the misses by making detection easier, but increases the false positives. False positives can be reduced by validating the detected image regions as faces, which has been accomplished using color and pattern information of the detected image regions. In this paper we…

  1. How does image noise affect actual and predicted human gaze allocation in assessing image quality?

    Science.gov (United States)

    Röhrbein, Florian; Goddard, Peter; Schneider, Michael; James, Georgina; Guo, Kun

    2015-07-01

A central research question in natural vision is how we allocate fixations to extract informative cues for scene perception. With high quality images, psychological and computational studies have made significant progress in understanding and predicting human gaze allocation during scene exploration. However, it is unclear whether these findings generalise to degraded naturalistic visual inputs. In this eye-tracking and computational study, we methodically distorted both man-made and natural scenes with a Gaussian low-pass filter, a circular averaging filter and additive Gaussian white noise, and monitored participants' gaze behaviour while they assessed perceived image quality. Compared with the original high quality images, distorted images attracted fewer fixations but longer fixation durations, shorter saccade distances and a stronger central fixation bias. This impact of image noise on gaze distribution was mainly determined by noise intensity rather than noise type, and was more pronounced for natural scenes than for man-made scenes. We further compared four high-performing visual attention models in predicting human gaze allocation in degraded scenes, and found that model performance lacked human-like sensitivity to noise type and intensity, and was considerably worse than human performance measured as inter-observer variance. Furthermore, the central fixation bias is a major predictor of human gaze allocation, and it becomes more prominent with increased noise intensity. Our results indicate a crucial role of external noise intensity in determining scene-viewing gaze behaviour, which should be considered in the development of realistic human-vision-inspired attention models. Copyright © 2015 Elsevier Ltd. All rights reserved.
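The gaze measures the study reports (fixation count, fixation duration, saccade distance, central fixation bias) can all be computed from a list of fixations. A hypothetical sketch, not the study's analysis code; the function and field names are illustrative:

```python
import numpy as np

def gaze_metrics(fixations, image_size):
    """Summary statistics for comparing viewing of original vs.
    degraded images. `fixations` is a list of (x, y, duration_ms)."""
    xy = np.array([(f[0], f[1]) for f in fixations], dtype=float)
    dur = np.array([f[2] for f in fixations], dtype=float)
    center = np.array(image_size, dtype=float) / 2
    saccades = np.linalg.norm(np.diff(xy, axis=0), axis=1)  # inter-fixation jumps
    return {
        "n_fixations": len(fixations),
        "mean_duration": float(dur.mean()),
        "mean_saccade": float(saccades.mean()),
        # mean distance from screen center: smaller = stronger central bias
        "central_bias": float(np.linalg.norm(xy - center, axis=1).mean()),
    }

fix = [(300, 240, 180), (320, 250, 210), (500, 300, 260)]
m = gaze_metrics(fix, image_size=(640, 480))
```

Under the study's findings, degraded images would show a smaller `n_fixations`, larger `mean_duration`, smaller `mean_saccade` and smaller `central_bias` than the originals.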

  2. Digital correlation applied to recognition and identification faces

    International Nuclear Information System (INIS)

    Arroyave, S.; Hernandez, L. J.; Torres, Cesar; Matos, Lorenzo

    2009-01-01

We developed a system capable of recognizing people's faces from their facial features. Images are captured automatically by the software after validating that a face is present in front of the camera lens; the digitized image is then compared with a database of previously captured images, and the person is recognized and finally identified. The contribution of the proposed system is that data acquisition is done in real time using a commercial USB webcam, offering a system that is equally effective but much more economical. This tool is very useful in systems where security is of vital importance, providing a high degree of verification to entities that hold databases of people's faces. (Author)
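The digital correlation at the core of such a system can be illustrated with a zero-mean normalized cross-correlation score between a probe image and stored templates. The names and data below are hypothetical, not from the authors' system:

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation between two equally
    sized image patches; 1.0 means a perfect match."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
probe = rng.integers(0, 256, (32, 32))
database = {"alice": probe + rng.normal(0, 5, (32, 32)),  # same face, noisy copy
            "bob": rng.integers(0, 256, (32, 32))}        # different face
# Identify the stored face most correlated with the probe.
best = max(database, key=lambda name: ncc(probe, database[name]))
```

A real system would first align and crop the face region (the "validating the presence of a face" step) before scoring, and would threshold the best score to reject unknown faces.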

  3. Image Quality Assessment for Fake Biometric Detection: Application to Iris, Fingerprint, and Face Recognition.

    Science.gov (United States)

    Galbally, Javier; Marcel, Sébastien; Fierrez, Julian

    2014-02-01

    To ensure the actual presence of a real legitimate trait in contrast to a fake self-manufactured synthetic or reconstructed sample is a significant problem in biometric authentication, which requires the development of new and efficient protection measures. In this paper, we present a novel software-based fake detection method that can be used in multiple biometric systems to detect different types of fraudulent access attempts. The objective of the proposed system is to enhance the security of biometric recognition frameworks, by adding liveness assessment in a fast, user-friendly, and non-intrusive manner, through the use of image quality assessment. The proposed approach presents a very low degree of complexity, which makes it suitable for real-time applications, using 25 general image quality features extracted from one image (i.e., the same acquired for authentication purposes) to distinguish between legitimate and impostor samples. The experimental results, obtained on publicly available data sets of fingerprint, iris, and 2D face, show that the proposed method is highly competitive compared with other state-of-the-art approaches and that the analysis of the general image quality of real biometric samples reveals highly valuable information that may be very efficiently used to discriminate them from fake traits.
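The idea of separating real from fake samples by general image quality can be sketched by computing full-reference measures between an image and a smoothed copy of itself, which is how the method obtains reference images for some of its 25 features. The sketch below is a simplification: it uses a box blur instead of a Gaussian filter and shows only two illustrative measures (MSE and PSNR), not the full feature set or the classifier.

```python
import numpy as np

def box_blur(img, k=3):
    """Simple k x k mean filter, standing in for the Gaussian blur
    applied before measuring full-reference quality."""
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def quality_features(img):
    """Two full-reference quality measures computed between the
    image and its own smoothed copy."""
    ref = box_blur(img)
    mse = float(((img - ref) ** 2).mean())
    psnr = float(10 * np.log10(255.0 ** 2 / mse)) if mse > 0 else np.inf
    return {"mse": mse, "psnr": psnr}

rng = np.random.default_rng(1)
smooth = np.full((32, 32), 128.0)                  # clean, smooth sample
noisy = smooth + rng.normal(0, 20, (32, 32))       # noisy (e.g. reprinted) sample
```

The intuition is that fake traits (printed photos, gummy fingers) tend to have different sharpness and noise statistics than live captures, which these measures pick up.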

  4. Microwave non-contact imaging of subcutaneous human body tissues.

    Science.gov (United States)

    Kletsov, Andrey; Chernokalov, Alexander; Khripkov, Alexander; Cho, Jaegeol; Druchinin, Sergey

    2015-10-01

    A small-size microwave sensor is developed for non-contact imaging of a human body structure in 2D, enabling fitness and health monitoring using mobile devices. A method for human body tissue structure imaging is developed and experimentally validated. Subcutaneous fat tissue reconstruction depth of up to 70 mm and maximum fat thickness measurement error below 2 mm are demonstrated by measurements with a human body phantom and human subjects. Electrically small antennas are developed for integration of the microwave sensor into a mobile device. Usability of the developed microwave sensor for fitness applications, healthcare, and body weight management is demonstrated.

  5. Effects of acute psychosocial stress on neural activity to emotional and neutral faces in a face recognition memory paradigm.

    Science.gov (United States)

    Li, Shijia; Weerda, Riklef; Milde, Christopher; Wolf, Oliver T; Thiel, Christiane M

    2014-12-01

    Previous studies have shown that acute psychosocial stress impairs recognition of declarative memory and that emotional material is especially sensitive to this effect. Animal studies suggest a central role of the amygdala which modulates memory processes in hippocampus, prefrontal cortex and other brain areas. We used functional magnetic resonance imaging (fMRI) to investigate neural correlates of stress-induced modulation of emotional recognition memory in humans. Twenty-seven healthy, right-handed, non-smoker male volunteers performed an emotional face recognition task. During encoding, participants were presented with 50 fearful and 50 neutral faces. One hour later, they underwent either a stress (Trier Social Stress Test) or a control procedure outside the scanner which was followed immediately by the recognition session inside the scanner, where participants had to discriminate between 100 old and 50 new faces. Stress increased salivary cortisol, blood pressure and pulse, and decreased the mood of participants but did not impact recognition memory. BOLD data during recognition revealed a stress condition by emotion interaction in the left inferior frontal gyrus and right hippocampus which was due to a stress-induced increase of neural activity to fearful and a decrease to neutral faces. Functional connectivity analyses revealed a stress-induced increase in coupling between the right amygdala and the right fusiform gyrus, when processing fearful as compared to neutral faces. Our results provide evidence that acute psychosocial stress affects medial temporal and frontal brain areas differentially for neutral and emotional items, with a stress-induced privileged processing of emotional stimuli.

  6. Resonant imaging of carotenoid pigments in the human retina

    Science.gov (United States)

Gellermann, Werner; Ermakov, Igor V.; McClane, Robert W.

    2002-06-01

    We have generated high spatial resolution images showing the distribution of carotenoid macular pigments in the human retina using Raman spectroscopy. A low level of macular pigments is associated with an increased risk of developing age-related macular degeneration, a leading cause of irreversible blindness. Using excised human eyecups and resonant excitation of the pigment molecules with narrow bandwidth blue light from a mercury arc lamp, we record Raman images originating from the carbon-carbon double bond stretch vibrations of lutein and zeaxanthin, the carotenoids comprising human macular pigments. Our Raman images reveal significant differences among subjects, both in regard to absolute levels as well as spatial distribution within the macula. Since the light levels used to obtain these images are well below established safety limits, this technique holds promise for developing a rapid screening diagnostic in large populations at risk for vision loss from age-related macular degeneration.

  7. Lurking on the Internet: A Small-Group Assignment that Puts a Human Face on Psychopathology

    Science.gov (United States)

    Lowman, Joseph; Judge, Abigail M.; Wiss, Charles

    2010-01-01

    Lurking on the Internet aims to put a human face on psychopathology for the abnormal psychology course. Student groups are assigned major diagnostic categories and instructed to search the Internet for discussion forums, individual blogs, or YouTube videos where affected individuals discuss their symptoms and lives. After discussing the ethics of…

  8. Image enhancement using thermal-visible fusion for human detection

    Science.gov (United States)

    Zaihidee, Ezrinda Mohd; Hawari Ghazali, Kamarul; Zuki Saleh, Mohd

    2017-09-01

An increased interest in detecting human beings in video surveillance systems has emerged in recent years. Multisensory image fusion deserves more research attention due to its capability to improve the visual interpretability of an image. This study proposed fusion techniques for human detection based on a multiscale transform, using grayscale visible-light and infrared images taken from an online dataset. The images captured by the two sensors were decomposed into high and low frequency coefficients using the Stationary Wavelet Transform (SWT). An appropriate fusion rule was then used to merge the coefficients, and the final fused image was obtained by the inverse SWT. The qualitative and quantitative results show that the proposed method is superior to the two other methods in terms of enhancement of the target region and preservation of the detail information of the image.
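The fusion pipeline described above (decompose both images, merge coefficients with a fusion rule, invert the transform) can be sketched with a common rule: average the approximation bands and keep the larger-magnitude detail coefficients. For brevity this sketch uses a single-level decimated Haar transform in place of the undecimated SWT the paper uses.

```python
import numpy as np

def haar2d(img):
    """Single-level 2D Haar transform; returns (LL, (LH, HL, HH))."""
    a = img.astype(float)
    L = (a[:, 0::2] + a[:, 1::2]) / 2; H = (a[:, 0::2] - a[:, 1::2]) / 2
    LL = (L[0::2] + L[1::2]) / 2; LH = (L[0::2] - L[1::2]) / 2
    HL = (H[0::2] + H[1::2]) / 2; HH = (H[0::2] - H[1::2]) / 2
    return LL, (LH, HL, HH)

def ihaar2d(LL, bands):
    """Exact inverse of haar2d."""
    LH, HL, HH = bands
    L = np.zeros((LL.shape[0] * 2, LL.shape[1])); H = np.zeros_like(L)
    L[0::2], L[1::2] = LL + LH, LL - LH
    H[0::2], H[1::2] = HL + HH, HL - HH
    out = np.zeros((L.shape[0], L.shape[1] * 2))
    out[:, 0::2], out[:, 1::2] = L + H, L - H
    return out

def fuse(visible, thermal):
    """Average the approximation bands, keep the larger-magnitude
    detail coefficients, and reconstruct the fused image."""
    LLv, dv = haar2d(visible); LLt, dt = haar2d(thermal)
    LL = (LLv + LLt) / 2
    details = tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                    for a, b in zip(dv, dt))
    return ihaar2d(LL, details)
```

The max-absolute rule for detail bands preserves edges from whichever sensor sees them more strongly (e.g. the warm human silhouette from the thermal image), while averaging the approximation bands keeps the overall scene brightness balanced.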

  9. Real-time face and gesture analysis for human-robot interaction

    Science.gov (United States)

    Wallhoff, Frank; Rehrl, Tobias; Mayer, Christoph; Radig, Bernd

    2010-05-01

Human communication relies on a large number of different communication mechanisms like spoken language, facial expressions, or gestures. Facial expressions and gestures are among the main nonverbal communication mechanisms and pass large amounts of information between human dialog partners. Therefore, to allow for intuitive human-machine interaction, real-time capable processing and recognition of facial expressions and hand and head gestures are of great importance. We present a system that tackles these challenges. The input features for the dynamic head gestures and facial expressions are obtained from a sophisticated three-dimensional model, which is fitted to the user in a real-time capable manner. Applying this model, different kinds of information are extracted from the image data and afterwards handed over to a real-time capable data-transferring framework, the so-called Real-Time DataBase (RTDB). In addition to the head- and facial-related features, low-level image features of the human hand (optical flow, Hu moments) are also stored in the RTDB for the evaluation of hand gestures. In general, the input of a single camera is sufficient for the parallel evaluation of the different gestures and facial expressions. The real-time capable recognition of the dynamic hand and head gestures is performed via different Hidden Markov Models, which have proven to be a quick and real-time capable classification method. For the facial expressions, classical decision trees or more sophisticated support vector machines are used for classification. The results of the classification processes are again handed over to the RTDB, where other processes (like a Dialog Management Unit) can easily access them without any blocking effects. In addition, an adjustable amount of history can be stored by the RTDB buffer unit.
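One of the low-level hand features mentioned above, the Hu moments, can be sketched from central moments of a silhouette. Shown here is only the first invariant, computed on a toy rectangular mask rather than a real hand image:

```python
import numpy as np

def hu_first(img):
    """First Hu moment invariant (eta20 + eta02) of a binary mask.
    Central moments make it translation-invariant; the eta
    normalization by m00**2 makes it scale-invariant as well."""
    img = img.astype(float)
    m00 = img.sum()
    ys, xs = np.mgrid[:img.shape[0], :img.shape[1]]
    cx, cy = (xs * img).sum() / m00, (ys * img).sum() / m00
    mu20 = ((xs - cx) ** 2 * img).sum()
    mu02 = ((ys - cy) ** 2 * img).sum()
    eta20, eta02 = mu20 / m00 ** 2, mu02 / m00 ** 2
    return eta20 + eta02

mask = np.zeros((40, 40)); mask[10:30, 15:25] = 1     # toy hand silhouette
shifted = np.zeros((40, 40)); shifted[5:25, 20:30] = 1  # same shape, moved
```

Because the invariant ignores where the hand is in the frame, the HMM sees the same feature value for the same hand shape anywhere in the image, which is exactly what a gesture classifier wants.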

  10. Ethics and policies in the face of research into extending human life.

    Science.gov (United States)

    Bellver Capella, Vicente

    2014-01-01

If the prediction of some scientists comes true, then we are only a few years away from the appearance of the first generation of human beings able to add one year to each remaining year of life expectancy. Faced with this possibility, it seems appropriate to give thought to the public policies that should be adopted: it is better to anticipate the various future scenarios than to react to a fait accompli. To date, the debate has mainly focused on the ethical question: is it good or bad for us humans to achieve immortal life? Until now, neither legal guidelines at State level nor those of international organisations dealing with bioethical issues have concerned themselves with this matter. But before discussing policies, two other matters should be addressed: first, to show how the prolongation of human life can be as much the unwanted outcome of legitimate efforts in search of healthy aging as one of the aims of the post-humanist project; second, to present the most consistent and widely shared ethical reasons for rejecting the human immortality project.

  11. Chemical Achievers: The Human Face of the Chemical Sciences (by Mary Ellen Bowden)

    Science.gov (United States)

    Kauffman, George B.

    1999-02-01

    Chemical Heritage Foundation: Philadelphia, PA, 1997. viii + 180 pp. 21.6 x 27.8 cm. ISBN 0-941901-15-1. Paper. 20.00 (10.00 for high school teachers who provide documentation). At a 1991 summer workshop sponsored by the Chemical Heritage Foundation and taught by Derek A. Davenport and William B. Jensen, high school and college teachers of introductory chemistry requested a source of pictorial material about famous chemical scientists suitable as a classroom aid. CHF responded by publishing this attractive, inexpensive paperback volume, which reflects the considerable research effort needed to locate appropriate images and to write the biographical essays. Printed on heavy, glossy paper and spiral bound to facilitate conversion to overhead transparencies, it contains 157 images from pictorial collections at CHF and many other institutions on two types of achievers: the historical "greats" most often referred to in introductory courses, and scientists who made contributions in areas of the chemical sciences that are of special relevance to modern life and the career choices students will make. The pictures are intended to provide the "human face" of the book's subtitle- "to point to the human beings who had the insights and made the major advances that [teachers] ask students to master." Thus, for example, Boyle's law becomes less cold and abstract if the student can connect it with the two portraits of the Irish scientist even if his face is topped with a wig. Marie Curie can be seen in the role of wife and mother as well as genius scientist in the photographs of her with her two daughters, one of whom also became a Nobel laureate. And students are reminded of the ubiquity of the contribution of the chemical scientists to all aspects of our everyday life by the stories and pictures of Wallace Hume Carothers' path to nylon, Percy Lavon Julian's work on hormones, and Charles F. Chandler and Rachel Carson's efforts to preserve the environment. 
In addition to portraits

  12. Fusion of LBP and SWLD using spatio-spectral information for hyperspectral face recognition

    Science.gov (United States)

    Xie, Zhihua; Jiang, Peng; Zhang, Shuai; Xiong, Jinquan

    2018-01-01

Hyperspectral imaging, which records intrinsic spectral information of the skin across different spectral bands, has become important for robust face recognition. The main challenges in hyperspectral face recognition are high data dimensionality, low signal-to-noise ratio and inter-band misalignment. In this paper, hyperspectral face recognition based on LBP (Local Binary Pattern) and SWLD (Simplified Weber Local Descriptor) is proposed to extract discriminative local features from spatio-spectral fusion information. Firstly, a spatio-spectral fusion strategy based on statistical information is used to obtain discriminative features of hyperspectral face images. Secondly, LBP is applied to extract the orientation of the fused face edges. Thirdly, SWLD is used to encode the intensity information in hyperspectral images. Finally, a symmetric Kullback-Leibler distance is adopted to compare the encoded face images. The method is tested on the Hong Kong Polytechnic University Hyperspectral Face database (PolyUHSFD). Experimental results show that the proposed method has a higher recognition rate (92.8%) than state-of-the-art hyperspectral face recognition algorithms.
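The basic LBP operator used here can be sketched in a few lines: each interior pixel is encoded as an 8-bit pattern of threshold comparisons against its neighbours, and the histogram of codes serves as the texture descriptor. This is the standard 8-neighbour LBP only, not the paper's full LBP/SWLD fusion pipeline:

```python
import numpy as np

def lbp(img):
    """Basic 8-neighbour local binary pattern: each interior pixel
    becomes an 8-bit code of which neighbours are >= it."""
    c = img[1:-1, 1:-1]  # interior pixels (the "centers")
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=int)
    for bit, (dy, dx) in enumerate(shifts):
        n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code += (n >= c).astype(int) << bit  # one bit per neighbour
    return code

rng = np.random.default_rng(2)
img = rng.integers(0, 256, (16, 16))
hist = np.bincount(lbp(img).ravel(), minlength=256)  # texture descriptor
```

Because each code depends only on sign comparisons, LBP is invariant to monotonic illumination changes, which is one reason it suits face images captured under varying lighting.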

  13. Humans make efficient use of natural image statistics when performing spatial interpolation.

    Science.gov (United States)

    D'Antona, Anthony D; Perry, Jeffrey S; Geisler, Wilson S

    2013-12-16

    Visual systems learn through evolution and experience over the lifespan to exploit the statistical structure of natural images when performing visual tasks. Understanding which aspects of this statistical structure are incorporated into the human nervous system is a fundamental goal in vision science. To address this goal, we measured human ability to estimate the intensity of missing image pixels in natural images. Human estimation accuracy is compared with various simple heuristics (e.g., local mean) and with optimal observers that have nearly complete knowledge of the local statistical structure of natural images. Human estimates are more accurate than those of simple heuristics, and they match the performance of an optimal observer that knows the local statistical structure of relative intensities (contrasts). This optimal observer predicts the detailed pattern of human estimation errors and hence the results place strong constraints on the underlying neural mechanisms. However, humans do not reach the performance of an optimal observer that knows the local statistical structure of the absolute intensities, which reflect both local relative intensities and local mean intensity. As predicted from a statistical analysis of natural images, human estimation accuracy is negligibly improved by expanding the context from a local patch to the whole image. Our results demonstrate that the human visual system exploits efficiently the statistical structure of natural images.
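The simplest heuristic the study compares humans against, estimating a missing pixel from the local mean of its neighbourhood, can be sketched as follows (on a synthetic gradient image, not the natural-image stimuli used in the experiments):

```python
import numpy as np

def local_mean_estimate(img, y, x, r=1):
    """Estimate a missing pixel as the mean of its surrounding
    (2r+1) x (2r+1) neighbourhood, excluding the pixel itself."""
    patch = img[y - r:y + r + 1, x - r:x + r + 1].astype(float)
    return (patch.sum() - img[y, x]) / (patch.size - 1)

# On a smooth linear gradient the local mean recovers the pixel exactly;
# on real natural images it leaves residual error that contrast-based
# optimal observers (and humans) reduce further.
ys, xs = np.mgrid[:32, :32]
img = 3.0 * xs + 2.0 * ys
est = local_mean_estimate(img, 16, 16)
err = abs(est - img[16, 16])
```

The study's point is that human estimates beat this heuristic and match an observer that additionally knows the local statistics of relative intensities.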

  14. Deep Appearance Models: A Deep Boltzmann Machine Approach for Face Modeling

    OpenAIRE

    Duong, Chi Nhan; Luu, Khoa; Quach, Kha Gia; Bui, Tien D.

    2016-01-01

The "interpretation through synthesis" approach to analyzing face images, particularly the Active Appearance Models (AAMs) method, has become one of the most successful face modeling approaches over the last two decades. AAMs can represent face images through synthesis using a controllable parameterized Principal Component Analysis (PCA) model. However, the accuracy and robustness of the faces synthesized by AAMs are highly dependent on the training sets and inherently on the genera...

  15. Sonar Subsea Images of Large Temples, Mammoths, Giant Sloths. Huge Artwork Carvings, Eroded Cities, Human Images, and Paleo Astronomy Sites that Must be Over Ten Thousand Years Old.

    Science.gov (United States)

    Allen, R. L.

    2016-12-01

    Computer enhancing of side scanning sonar plots revealed images of massive art, apparent ruins of cities, and subsea temples. Some images are about four to twenty kilometers in length. Present water depths imply that many of the finds must have been created over ten thousand years ago. Also, large carvings of giant sloths, Ice Age elk, mammoths, mastodons, and other cold climate creatures concurrently indicate great age. In offshore areas of North America, some human faces have beards and what appear to be Caucasian characteristics that clearly contrast with the native tribal images. A few images have possible physical appearances associated with Polynesians. Contacts and at least limited migrations must have occurred much further in the ancient past than previously believed. Greatly rising sea levels and radical changes away from late Ice Age climates had to be devastating to very ancient civilizations. Many images indicate that these cultures were capable of construction and massive art at or near the technological level of the Old Kingdom in Egypt. Paleo astronomy is obvious in some plots. Major concerns are how to further evaluate, catalog, protect, and conserve the creations of those cultures.

  16. High-speed adaptive optics line scan confocal retinal imaging for human eye.

    Science.gov (United States)

    Lu, Jing; Gu, Boyu; Wang, Xiaolin; Zhang, Yuhua

    2017-01-01

Continuous and rapid eye movement causes significant intra-frame distortion in adaptive optics high resolution retinal imaging. To minimize this artifact, we developed a high speed adaptive optics line scan confocal retinal imaging system. A high speed line camera was employed to acquire retinal images, and custom adaptive optics was developed to compensate for the wave aberration of the human eye's optics. The spatial resolution and signal-to-noise ratio were assessed in a model eye and in the living human eye. The improvement in imaging fidelity was estimated by the reduction of intra-frame distortion of retinal images acquired in living human eyes at frame rates of 30 frames/second (FPS), 100 FPS, and 200 FPS. The device produced retinal images with cellular-level resolution at 200 FPS with a digitization of 512×512 pixels/frame in the living human eye. Cone photoreceptors in the central fovea and rod photoreceptors near the fovea were resolved in three human subjects in normal chorioretinal health. Compared with retinal images acquired at 30 FPS, the intra-frame distortion in images taken at 200 FPS was reduced by 50.9% to 79.7%. We demonstrated the feasibility of acquiring high resolution retinal images in the living human eye at a speed that minimizes retinal motion artifact. This device may facilitate research involving subjects with nystagmus or unsteady fixation due to central vision loss.

  17. Human face recognition using eigenface in cloud computing environment

    Science.gov (United States)

    Siregar, S. T. M.; Syahputra, M. F.; Rahmat, R. F.

    2018-02-01

Recognizing one single face does not take long to process, but an attendance or security system in a company with many faces to recognize will take a long time. Cloud computing is a computing service performed not on a local device but on data center infrastructure connected to the Internet; it also provides a scalability solution, since resources can be increased when larger data processing is required. This research applies eigenface for recognition, collects training data, and uses the REST concept to provide resources so that the server can process the data according to the existing stages. After the research and development of this application, it can be concluded that face recognition can be implemented with eigenface, applying the REST concept as the endpoint for exchanging the information used as a resource in building the model for face recognition.
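The eigenface step can be sketched as PCA on flattened training faces followed by nearest-neighbour matching in the projected space. The sketch below uses random synthetic "faces" and plain numpy; it illustrates the recognition principle only, not the paper's cloud/REST architecture:

```python
import numpy as np

def train_eigenfaces(faces, k=4):
    """PCA on flattened face images (the eigenface method):
    returns the mean face and the top-k principal components."""
    X = faces.reshape(len(faces), -1).astype(float)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def project(face, mean, comps):
    """Coordinates of a face in eigenface space."""
    return comps @ (face.ravel().astype(float) - mean)

rng = np.random.default_rng(3)
people = [rng.normal(128, 40, (16, 16)) for _ in range(5)]      # 5 "identities"
train = np.array([p + rng.normal(0, 2, p.shape) for p in people])
mean, comps = train_eigenfaces(train)
weights = [project(p, mean, comps) for p in train]

probe = people[2] + rng.normal(0, 2, (16, 16))  # a new photo of person 2
w = project(probe, mean, comps)
match = int(np.argmin([np.linalg.norm(w - wi) for wi in weights]))
```

In the cloud setting described above, the heavy step (the SVD over all enrolled faces) would run server-side, while clients only submit images to a REST endpoint and receive the match.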

  18. Molecular Imaging of Human Embryonic Stem Cells Stably Expressing Human PET Reporter Genes After Zinc Finger Nuclease-Mediated Genome Editing.

    Science.gov (United States)

    Wolfs, Esther; Holvoet, Bryan; Ordovas, Laura; Breuls, Natacha; Helsen, Nicky; Schönberger, Matthias; Raitano, Susanna; Struys, Tom; Vanbilloen, Bert; Casteels, Cindy; Sampaolesi, Maurilio; Van Laere, Koen; Lambrichts, Ivo; Verfaillie, Catherine M; Deroose, Christophe M

    2017-10-01

    Molecular imaging is indispensable for determining the fate and persistence of engrafted stem cells. Standard strategies for transgene induction involve the use of viral vectors prone to silencing and insertional mutagenesis or the use of nonhuman genes. Methods: We used zinc finger nucleases to induce stable expression of human imaging reporter genes into the safe-harbor locus adeno-associated virus integration site 1 in human embryonic stem cells. Plasmids were generated carrying reporter genes for fluorescence, bioluminescence imaging, and human PET reporter genes. Results: In vitro assays confirmed their functionality, and embryonic stem cells retained differentiation capacity. Teratoma formation assays were performed, and tumors were imaged over time with PET and bioluminescence imaging. Conclusion: This study demonstrates the application of genome editing for targeted integration of human imaging reporter genes in human embryonic stem cells for long-term molecular imaging. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.

  19. Human ear detection in the thermal infrared spectrum

    Science.gov (United States)

    Abaza, Ayman; Bourlai, Thirimachos

    2012-06-01

In this paper the problem of human ear detection in the thermal infrared (IR) spectrum is studied in order to illustrate the advantages and limitations of the most important steps of ear-based biometrics that can operate in day and night time environments. The main contributions of this work are two-fold: First, a dual-band database is assembled that consists of visible and thermal profile face images. The thermal data was collected using a high definition middle-wave infrared (3-5 microns) camera that is capable of acquiring thermal imprints of human skin. Second, a fully automated, thermal imaging based ear detection method is developed for real-time segmentation of human ears in either day or night time environments. The proposed method is based on Haar features forming a cascaded AdaBoost classifier (our modified version of the original Viola-Jones approach, which was designed to be applied mainly to visible-band images). The main advantage of the proposed method, applied on our profile face image data set collected in the thermal band, is that it is designed to reduce the learning time required by the original Viola-Jones method from several weeks to several hours. Unlike other approaches reported in the literature, which have been tested but not designed to operate in the thermal band, our method yields a high detection accuracy that reaches ~ 91.5%. Further analysis on our data set yielded that: (a) photometric normalization techniques do not directly improve ear detection performance; however, when using a certain photometric normalization technique (CLAHE) on falsely detected images, the detection rate improved by ~ 4%; (b) the high detection accuracy of our method did not degrade when we lowered the original spatial resolution of thermal ear images. For example, even after using one third of the original spatial resolution (i.e. 
~ 20% of the original computational time) of the thermal profile face images, the high ear detection accuracy of our method
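The Haar features underlying the cascaded AdaBoost classifier are rectangle-sum differences, made cheap by the integral image. A minimal sketch on a toy edge image, not thermal data:

```python
import numpy as np

def integral_image(img):
    """Summed-area table: makes any rectangle sum O(1), which is
    what lets Viola-Jones-style cascades evaluate Haar features fast."""
    return np.cumsum(np.cumsum(img.astype(float), axis=0), axis=1)

def rect_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1, x0:x1] recovered from the integral image ii."""
    total = ii[y1 - 1, x1 - 1]
    if y0 > 0: total -= ii[y0 - 1, x1 - 1]
    if x0 > 0: total -= ii[y1 - 1, x0 - 1]
    if y0 > 0 and x0 > 0: total += ii[y0 - 1, x0 - 1]
    return total

def haar_two_rect(ii, y, x, h, w):
    """Two-rectangle (left minus right) Haar feature: responds to
    vertical intensity edges such as an ear contour against hair."""
    return rect_sum(ii, y, x, y + h, x + w) - rect_sum(ii, y, x + w, y + h, x + 2 * w)

img = np.zeros((20, 20)); img[:, :10] = 10  # hard vertical edge
ii = integral_image(img)
resp = haar_two_rect(ii, 5, 5, 8, 5)        # window straddling the edge
```

AdaBoost then selects and thresholds thousands of such features, and the cascade orders them so most non-ear windows are rejected after only a few evaluations.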

  20. Effects of Implied Motion and Facing Direction on Positional Preferences in Single-Object Pictures.

    Science.gov (United States)

    Palmer, Stephen E; Langlois, Thomas A

    2017-07-01

    Palmer, Gardner, and Wickens studied aesthetic preferences for pictures of single objects and found a strong inward bias: Right-facing objects were preferred left-of-center and left-facing objects right-of-center. They found no effect of object motion (people and cars showed the same inward bias as chairs and teapots), but the objects were not depicted as moving. Here we measured analogous inward biases with objects depicted as moving with an implied direction and speed by having participants drag-and-drop target objects into the most aesthetically pleasing position. In Experiment 1, human figures were shown diving or falling while moving forward or backward. Aesthetic biases were evident for both inward-facing and inward-moving figures, but the motion-based bias dominated so strongly that backward divers or fallers were preferred moving inward but facing outward. Experiment 2 investigated implied speed effects using images of humans, horses, and cars moving at different speeds (e.g., standing, walking, trotting, and galloping horses). Inward motion or facing biases were again present, and differences in their magnitude due to speed were evident. Unexpectedly, faster moving objects were generally preferred closer to frame center than slower moving objects. These results are discussed in terms of the combined effects of prospective, future-oriented biases, and retrospective, past-oriented biases.

  1. Semisupervised kernel marginal Fisher analysis for face recognition.

    Science.gov (United States)

    Wang, Ziqiang; Sun, Xia; Sun, Lijun; Huang, Yuchun

    2013-01-01

Dimensionality reduction is a key problem in face recognition due to the high dimensionality of face images. To effectively cope with this problem, a novel dimensionality reduction algorithm called semisupervised kernel marginal Fisher analysis (SKMFA) for face recognition is proposed in this paper. SKMFA can make use of both labelled and unlabelled samples to learn the projection matrix for nonlinear dimensionality reduction. Meanwhile, it successfully avoids the singularity problem by not calculating the matrix inverse. In addition, in order to make the nonlinear structure captured by the data-dependent kernel consistent with the intrinsic manifold structure, a manifold-adaptive nonparametric kernel is incorporated into the learning process of SKMFA. Experimental results on three face image databases demonstrate the effectiveness of the proposed algorithm.

  2. Semantic analysis of faces using deep neural networks

    Directory of Open Access Journals (Sweden)

    Nicolás Federico Pellejero

    2018-03-01

In this paper we address the problem of automatic emotion recognition and classification from video. There are excellent results today on lab-made datasets with posed facial expressions; in contrast, there is much room for improvement on 'in the wild' datasets, where lighting, face angle to the camera, etc. vary freely. In these settings working with a small dataset can be very harmful, and there are currently no sufficiently large datasets of adequately labeled faces for the task. We use Generative Adversarial Networks to train models in a semi-supervised fashion, generating realistic face images in the process and allowing the exploitation of a large pool of unlabeled face images.

  3. In vivo imaging of human biochemistry

    International Nuclear Information System (INIS)

    Hall, L.D.

    1983-01-01

    Positron Emission Tomography (PET) is an extremely powerful method for studying aspects of the biochemistry of defined regions of the human body, literally 'in-vivo' biochemistry. To place this technique in the broader perspective of medical diagnostic methods an introduction is given to some of the more important imaging methods which are already widely used clinically. A brief summary of the most recently developed imaging method, which is based on Nuclear Magnetic Resonance (NMR) Spectroscopy, is also included

  4. Extending Face-to-Face Interactions: Understanding and Developing an Online Teacher and Family Community

    Science.gov (United States)

    Zhang, Chun; Du, Jianxia; Sun, Li; Ding, Yi

    2018-01-01

    Technology has been quickly changing human interactions, traditional practices, and almost every aspect of our lives. It is important to maintain effective face-to-face communication and interactions between teachers and families. Nonetheless, technology and its tools can also extend and enhance family-teacher relationships and partnerships. This…

  5. Perceptual Learning: 12-Month-Olds' Discrimination of Monkey Faces

    Science.gov (United States)

    Fair, Joseph; Flom, Ross; Jones, Jacob; Martin, Justin

    2012-01-01

    Six-month-olds reliably discriminate different monkey and human faces whereas 9-month-olds only discriminate different human faces. It is often falsely assumed that perceptual narrowing reflects a permanent change in perceptual abilities. In 3 experiments, ninety-six 12-month-olds' discrimination of unfamiliar monkey faces was examined. Following…

  6. Neural Correlates of Body and Face Perception Following Bilateral Destruction of the Primary Visual Cortices

    Directory of Open Access Journals (Sweden)

    Jan eVan den Stock

    2014-02-01

    Full Text Available Non-conscious visual processing of different object categories was investigated in a rare patient with bilateral destruction of the visual cortex (V1) and clinical blindness over the entire visual field. Images of biological and non-biological object categories were presented, consisting of human bodies, faces, butterflies, cars, and scrambles. Behaviorally, only the body shape induced higher perceptual sensitivity, as revealed by signal detection analysis. Passive exposure to bodies and faces activated the amygdala and superior temporal sulcus. In addition, bodies also activated the extrastriate body area, insula, orbitofrontal cortex (OFC) and cerebellum. The results show that following bilateral damage to the primary visual cortex and ensuing complete cortical blindness, the human visual system is able to process categorical properties of human body shapes. This residual vision may be based on V1-independent input to body-selective areas along the ventral stream, in concert with areas involved in the representation of bodily states, like the insula, OFC and cerebellum.

  7. Comparison of 3D cellular imaging techniques based on scanned electron probes: Serial block face SEM vs. Axial bright-field STEM tomography.

    Science.gov (United States)

    McBride, E L; Rao, A; Zhang, G; Hoyne, J D; Calco, G N; Kuo, B C; He, Q; Prince, A A; Pokrovskaya, I D; Storrie, B; Sousa, A A; Aronova, M A; Leapman, R D

    2018-06-01

    Microscopies based on focused electron probes allow the cell biologist to image the 3D ultrastructure of eukaryotic cells and tissues extending over large volumes, thus providing new insight into the relationship between cellular architecture and function of organelles. Here we compare two such techniques: electron tomography in conjunction with axial bright-field scanning transmission electron microscopy (BF-STEM), and serial block face scanning electron microscopy (SBF-SEM). The advantages and limitations of each technique are illustrated by their application to determining the 3D ultrastructure of human blood platelets, by considering specimen geometry, specimen preparation, beam damage and image processing methods. Many features of the complex membranes composing the platelet organelles can be determined from both approaches, although STEM tomography offers a finer ∼3 nm isotropic pixel size, compared with ∼5 nm for SBF-SEM in the plane of the block face and ∼30 nm in the perpendicular direction. In this regard, we demonstrate that STEM tomography is advantageous for visualizing the platelet canalicular system, which consists of an interconnected network of narrow (∼50-100 nm) membranous cisternae. In contrast, SBF-SEM enables visualization of complete platelets, each of which extends ∼2 µm in minimum dimension, whereas BF-STEM tomography can typically only visualize approximately half of the platelet volume due to a rapid non-linear loss of signal in specimens of thickness greater than ∼1.5 µm. We also show that the limitations of each approach can be ameliorated by combining 3D and 2D measurements using a stereological approach. Copyright © 2018. Published by Elsevier Inc.

  8. Holistic Processing of Static and Moving Faces

    Science.gov (United States)

    Zhao, Mintao; Bülthoff, Isabelle

    2017-01-01

    Humans' face ability develops and matures with extensive experience in perceiving, recognizing, and interacting with faces that move most of the time. However, how facial movements affect 1 core aspect of face ability--holistic face processing--remains unclear. Here we investigated the influence of rigid facial motion on holistic and part-based…

  9. Learning representative features for facial images based on a modified principal component analysis

    Science.gov (United States)

    Averkin, Anton; Potapov, Alexey

    2013-05-01

    The paper is devoted to facial image analysis and particularly deals with the problem of automatic evaluation of the attractiveness of human faces. We propose a new approach for automatic construction of the feature space based on a modified principal component analysis. Input data sets for the algorithm are learning sets of facial images rated by one person. The proposed approach allows one to extract features of an individual's subjective perception of face beauty and to predict attractiveness values for new facial images that were not included in the learning data set. The Pearson correlation coefficient between the values predicted by our method for new facial images and the personal attractiveness estimates equals 0.89. This means that the proposed approach is promising and can be used for predicting subjective face attractiveness values in real facial image analysis systems.
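
The pipeline described (a PCA feature space plus a learned predictor, evaluated by Pearson correlation) can be sketched as follows. This is a minimal illustration on synthetic stand-in data, not the authors' modified PCA; the low-rank construction of `X` and all sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data (hypothetical): 60 "face images" of 32x32
# pixels with low-rank structure, and one rater's attractiveness scores
# driven by the same latent factors.
Z = rng.normal(size=(60, 5))                    # latent factors
B = rng.normal(size=(5, 32 * 32))
X = Z @ B + 0.05 * rng.normal(size=(60, 32 * 32))
scores = Z @ rng.normal(size=5) + 0.1 * rng.normal(size=60)

# Plain PCA via SVD of the mean-centred data matrix.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
features = Xc @ Vt[:5].T                        # top-5 component scores

# Least-squares fit from PCA features to the rater's scores.
coef, *_ = np.linalg.lstsq(features, scores - scores.mean(), rcond=None)
pred = features @ coef + scores.mean()

# Pearson correlation between predicted and actual ratings (the paper
# reports 0.89 on held-out faces; here the fit is in-sample).
r = np.corrcoef(pred, scores)[0, 1]
```

With real data, the correlation would of course be measured on faces held out of the learning set, as in the abstract.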

  10. Component-Based Cartoon Face Generation

    Directory of Open Access Journals (Sweden)

    Saman Sepehri Nejad

    2016-11-01

    Full Text Available In this paper, we present a cartoon face generation method that builds on a component-based facial feature extraction approach. Given a frontal face image as input, our proposed system has the following stages. First, face features are extracted using an extended Active Shape Model. Outlines of the components are locally modified using edge detection, template matching and Hermite interpolation. This modification enhances the diversity of the output and the accuracy of the component matching required for cartoon generation. Second, to bring in cartoon-specific features such as shadows, highlights and, especially, stylish drawing, an array of various face photographs and corresponding hand-drawn cartoon faces are collected. These cartoon templates are automatically decomposed into cartoon components using our proposed method for parameterizing cartoon samples, which is fast and simple. Then, using shape matching methods, the appropriate cartoon component is selected and deformed to fit the input face. Finally, a cartoon face is rendered in a vector format using the rendering rules of the selected template. Experimental results demonstrate the effectiveness of our approach in generating life-like cartoon faces.

  11. Face Hallucination with Linear Regression Model in Semi-Orthogonal Multilinear PCA Method

    Science.gov (United States)

    Asavaskulkiet, Krissada

    2018-04-01

    In this paper, we propose a new face hallucination technique: face image reconstruction in HSV color space with a semi-orthogonal multilinear principal component analysis (SO-MPCA) method. This novel hallucination technique operates directly on tensors via tensor-to-vector projection, imposing the orthogonality constraint in only one mode. In our experiments, we use facial images from the FERET database to test our approach, which is demonstrated by extensive experiments producing high-quality hallucinated color faces. The experimental results clearly demonstrate that we can generate photorealistic color face images by using the SO-MPCA subspace with a linear regression model.

  12. Task relevance differentially shapes ventral visual stream sensitivity to visible and invisible faces

    DEFF Research Database (Denmark)

    Kouider, Sid; Barbot, Antoine; Madsen, Kristoffer Hougaard

    2016-01-01

    Top-down modulations of the visual cortex can be driven by task relevance. Yet, several accounts propose that the perceptual inferences underlying conscious recognition involve similar top-down modulations of sensory responses. Studying the pure impact of task relevance on sensory responses requires dissociating it from the top-down influences underlying conscious recognition. Here, using visual masking to abolish perceptual consciousness in humans, we report that functional magnetic resonance imaging (fMRI) responses to invisible faces in the fusiform gyrus are enhanced when they are task relevant. [...] Task relevance crucially shapes the sensitivity of fusiform regions to face stimuli, leading from enhancement to suppression of neural activity when the top-down influences accruing from conscious recognition are prevented.

  13. Emotion Recognition in Animated Compared to Human Stimuli in Adolescents with Autism Spectrum Disorder

    Science.gov (United States)

    Brosnan, Mark; Johnson, Hilary; Grawmeyer, Beate; Chapman, Emma; Benton, Laura

    2015-01-01

    There is equivocal evidence as to whether there is a deficit in recognising emotional expressions in Autism spectrum disorder (ASD). This study compared emotion recognition in ASD in three types of emotion expression media (still image, dynamic image, auditory) across human stimuli (e.g. photo of a human face) and animated stimuli (e.g. cartoon…

  14. 2D-3D Face Recognition Method Based on a Modified CCA-PCA Algorithm

    Directory of Open Access Journals (Sweden)

    Patrik Kamencay

    2014-03-01

    Full Text Available This paper presents a methodology for face recognition based on an information theory approach to coding and decoding face images. We propose a 2D-3D face-matching method based on a principal component analysis (PCA) algorithm, using canonical correlation analysis (CCA) to learn the mapping between a 2D face image and 3D face data. This method makes it possible to match a 2D face image with enrolled 3D face data. Our proposed fusion algorithm is based on the PCA method, which is applied to extract base features. PCA feature-level fusion requires the extraction of different features from the source data before the features are merged together. Experimental results on the TEXAS face image database have shown that the classification and recognition results based on the modified CCA-PCA method are superior to those based on the CCA method: in 2D-3D face-matching tests, the CCA method achieved a rather poor recognition rate of 55%, while the modified CCA method based on PCA-level fusion achieved a very good recognition score of 85%.

  15. Sex differences in social cognition: The case of face processing.

    Science.gov (United States)

    Proverbio, Alice Mado

    2017-01-02

    Several studies have demonstrated that women show a greater interest in social information and empathic attitude than men. This article reviews studies on sex differences in the brain, with particular reference to how males and females process faces and facial expressions, social interactions, pain of others, infant faces, faces in things (pareidolia phenomenon), opposite-sex faces, humans vs. landscapes, incongruent behavior, motor actions, biological motion, erotic pictures, and emotional information. Sex differences in oxytocin-based attachment response and emotional memory are also mentioned. In addition, we investigated how 400 different human faces were evaluated for arousal and valence dimensions by a group of healthy male and female University students. Stimuli were carefully balanced for sensory and perceptual characteristics, age, facial expression, and sex. As a whole, women judged all human faces as more positive and more arousing than men. Furthermore, they showed a preference for the faces of children and the elderly in the arousal evaluation. Regardless of face aesthetics, age, or facial expression, women rated human faces higher than men. The preference for opposite- vs. same-sex faces strongly interacted with facial age. Overall, both women and men exhibited differences in facial processing that could be interpreted in the light of evolutionary psychobiology. © 2016 Wiley Periodicals, Inc.

  16. Fractal analysis of en face tomographic images obtained with full field optical coherence tomography

    Energy Technology Data Exchange (ETDEWEB)

    Gao, Wanrong; Zhu, Yue [Department of Optical Engineering, Nanjing University of Science and Technology, Jiangsu (China)

    2017-03-15

    The quantitative modeling of the imaging signal of pathological and healthy areas is necessary to improve the specificity of diagnosis with tomographic en face images obtained with full field optical coherence tomography (FFOCT). In this work, we propose to use the depth-resolved change in the fractal parameter as a quantitative, specific biomarker of the stages of disease. The idea is based on the fact that tissue is a random medium and only statistical parameters that characterize tissue structure are appropriate. We successfully relate the imaging signal in FFOCT to the tissue structure in terms of the scattering function and the coherent transfer function of the system. The formula is then used to analyze the ratio of the Fourier transforms of the cancerous tissue to the normal tissue. We found that when the tissue changes from normal to cancerous, the ratio of the spectrum of the index inhomogeneities takes the form of an inverse power law, and the changes in the fractal parameter can be determined by estimating slopes of the spectra of the ratio plotted on a log-log scale. Fresh normal and cancerous liver tissues were imaged to demonstrate the potential diagnostic value of the method at early stages, when there are no significant changes in tissue microstructures. (copyright 2016 by WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)
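
The spectral-ratio analysis (a ratio of two Fourier power spectra following an inverse power law, with the exponent read off as a slope on a log-log scale) can be sketched as follows. The images are synthetic stand-ins, not tissue: the "cancerous" image is the "normal" one passed through a 1/f amplitude filter, so the expected slope of the ratio is -2.

```python
import numpy as np

rng = np.random.default_rng(2)

def radial_power_spectrum(img):
    """Radially averaged 2D Fourier power spectrum."""
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    power = np.abs(f) ** 2
    n = img.shape[0]
    y, x = np.indices(img.shape)
    r = np.hypot(x - n // 2, y - n // 2).astype(int)
    counts = np.bincount(r.ravel())
    sums = np.bincount(r.ravel(), weights=power.ravel())
    return sums / np.maximum(counts, 1)

# Synthetic stand-in "tissues": white noise, and the same noise shaped
# by a 1/f amplitude filter (the power ratio then goes as f^-2).
n = 128
kx = np.fft.fftfreq(n)
kr = np.hypot(*np.meshgrid(kx, kx))
H = 1.0 / np.maximum(kr, kx[1])
normal = rng.normal(size=(n, n))
cancer = np.fft.ifft2(np.fft.fft2(normal) * H).real

# Slope of the spectral ratio on a log-log scale over mid frequencies.
rads = np.arange(5, 50)
ratio = radial_power_spectrum(cancer)[rads] / radial_power_spectrum(normal)[rads]
slope = np.polyfit(np.log(rads), np.log(ratio), 1)[0]
```

Taking the ratio before fitting cancels the speckle statistics shared by the two images, which is what lets a slope estimate isolate the change in the fractal parameter.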

  17. Recognizing the same face in different contexts: Testing within-person face recognition in typical development and in autism.

    Science.gov (United States)

    Neil, Louise; Cappagli, Giulia; Karaminis, Themelis; Jenkins, Rob; Pellicano, Elizabeth

    2016-03-01

    Unfamiliar face recognition follows a particularly protracted developmental trajectory and is more likely to be atypical in children with autism than those without autism. There is a paucity of research, however, examining the ability to recognize the same face across multiple naturally varying images. Here, we investigated within-person face recognition in children with and without autism. In Experiment 1, typically developing 6- and 7-year-olds, 8- and 9-year-olds, 10- and 11-year-olds, 12- to 14-year-olds, and adults were given 40 grayscale photographs of two distinct male identities (20 of each face taken at different ages, from different angles, and in different lighting conditions) and were asked to sort them by identity. Children mistook images of the same person as images of different people, subdividing each individual into many perceived identities. Younger children divided images into more perceived identities than adults and also made more misidentification errors (placing two different identities together in the same group) than older children and adults. In Experiment 2, we used the same procedure with 32 cognitively able children with autism. Autistic children reported a similar number of identities and made similar numbers of misidentification errors to a group of typical children of similar age and ability. Fine-grained analysis using matrices revealed marginal group differences in overall performance. We suggest that the immature performance in typical and autistic children could arise from problems extracting the perceptual commonalities from different images of the same person and building stable representations of facial identity. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
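
The two sorting-task measures (the number of perceived identities, and misidentification errors where two different people are grouped together) can be computed with a small scorer like the one below; an illustrative reimplementation, not the authors' scoring code.

```python
def score_sort(piles):
    """Score a card-sorting response: each pile is a list of the true
    identity labels of the photos placed in it. Returns the number of
    perceived identities (piles) and the number of misidentification
    errors (piles mixing different true identities)."""
    n_perceived = len(piles)
    n_misid = sum(1 for pile in piles if len(set(pile)) > 1)
    return n_perceived, n_misid

# Example: 8 photos of two people (A, B) sorted into 4 piles; the last
# pile wrongly groups the two identities together.
piles = [["A", "A", "A"], ["A"], ["B", "B"], ["B", "A"]]
result = score_sort(piles)
```

A perfect sorter of the experiment's 40 photos would return exactly 2 perceived identities and 0 misidentification errors.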

  18. Distance Adaptive Tensor Discriminative Geometry Preserving Projection for Face Recognition

    Directory of Open Access Journals (Sweden)

    Ziqiang Wang

    2012-09-01

    Full Text Available There is a growing interest in dimensionality reduction techniques for face recognition; however, traditional dimensionality reduction algorithms often transform the input face image data into vectors before embedding. Such vectorization often ignores the underlying data structure and leads to higher computational complexity. To effectively cope with these problems, a novel dimensionality reduction algorithm termed distance adaptive tensor discriminative geometry preserving projection (DATDGPP) is proposed in this paper. The key idea of DATDGPP is as follows: first, the face image data are directly encoded in a high-order tensor structure so that the relationships among the face image data can be preserved; second, a data-adaptive tensor distance is adopted to model the correlation among different coordinates of the tensor data; third, the transformation matrix, which preserves discrimination and local geometry information, is obtained by an iterative algorithm. Experimental results on three face databases show that the proposed algorithm outperforms other representative dimensionality reduction algorithms.

  19. An LG-graph-based early evaluation of segmented images

    International Nuclear Information System (INIS)

    Tsitsoulis, Athanasios; Bourbakis, Nikolaos

    2012-01-01

    Image segmentation is one of the first important parts of image analysis and understanding. Evaluation of image segmentation, however, is a very difficult task, mainly because it requires human intervention and interpretation. In this work, we propose a blind reference evaluation scheme based on regional local–global (RLG) graphs, which aims at measuring the amount and distribution of detail in images produced by segmentation algorithms. The main idea derives from the field of image understanding, where image segmentation is often used as a tool for scene interpretation and object recognition. Evaluation here derives from summarization of the structural information content and not from assessment of performance after comparison with a gold standard. Results show measurements for segmented images acquired from three segmentation algorithms, applied to different types of images (human faces/bodies, natural environments and structures (buildings)). (paper)

  20. Evaluation of a processing scheme for calcified atheromatous carotid artery detection in face/neck CBCT images

    Science.gov (United States)

    Matheus, B. R. N.; Centurion, B. S.; Rubira-Bullen, I. R. F.; Schiabel, H.

    2017-03-01

    In Cone Beam Computed Tomography (CBCT), a kind of face and neck exam, there is an opportunity to identify, as an incidental finding, calcifications of the carotid artery (CACA). Given the similarity of the CACA to calcifications found in several x-ray exams, this work suggests that a technique designed to detect breast calcifications in mammography images could be adapted to detect such calcifications in CBCT. The method used a 3D version of the calcification detection technique [1], based on signal enhancement by convolution with a 3D Laplacian of Gaussian (LoG) function, followed by removal of the high-contrast bone structure from the image. Initial promising results show a 71% sensitivity with 0.48 false positives per exam.
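
The core enhancement step, convolution with a 3D Laplacian of Gaussian followed by thresholding, can be sketched as below on a synthetic volume. The published pipeline additionally removes high-contrast bone structure first; that step is omitted here, and all parameter values are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, label

rng = np.random.default_rng(3)

# Synthetic stand-in volume: background noise plus two bright Gaussian
# blobs playing the role of calcifications.
vol = rng.normal(0.0, 0.05, size=(40, 40, 40))
z, y, x = np.indices(vol.shape)
for cz, cy, cx in [(10, 12, 30), (28, 25, 8)]:
    vol += np.exp(-((z - cz) ** 2 + (y - cy) ** 2 + (x - cx) ** 2) / (2 * 2.0 ** 2))

# Blob enhancement: the LoG response of a bright blob at the matched
# scale is strongly negative, so negate the filter output.
response = -gaussian_laplace(vol, sigma=2.0)

# Threshold and count connected components as candidate detections.
labels, n_found = label(response > 0.5 * response.max())
```

In a real exam the candidate components would then be filtered (e.g. by size and location) to control the false-positive rate reported in the abstract.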

  1. Intraoperative intrinsic optical imaging of human somatosensory cortex during neurosurgical operations.

    Science.gov (United States)

    Sato, Katsushige; Nariai, Tadashi; Momose-Sato, Yoko; Kamino, Kohtaro

    2017-07-01

    Intrinsic optical imaging as developed by Grinvald et al. is a powerful technique for monitoring neural function in the in vivo central nervous system. The advent of this dye-free imaging has also enabled us to monitor human brain function during neurosurgical operations. We briefly describe our own experience in functional mapping of the human somatosensory cortex, carried out using intraoperative optical imaging. The maps obtained demonstrate new additional evidence of a hierarchy for sensory response patterns in the human primary somatosensory cortex.

  2. Holographic line field en-face OCT with digital adaptive optics in the retina in vivo.

    Science.gov (United States)

    Ginner, Laurin; Schmoll, Tilman; Kumar, Abhishek; Salas, Matthias; Pricoupenko, Nastassia; Wurster, Lara M; Leitgeb, Rainer A

    2018-02-01

    We demonstrate a high-resolution line field en-face time domain optical coherence tomography (OCT) system using an off-axis holography configuration. Line field en-face OCT produces high speed en-face images at rates of up to 100 Hz. The high frame rate favors good phase stability across the lateral field-of-view which is indispensable for digital adaptive optics (DAO). Human retinal structures are acquired in-vivo with a broadband light source at 840 nm, and line rates of 10 kHz to 100 kHz. Structures of different retinal layers, such as photoreceptors, capillaries, and nerve fibers are visualized with high resolution of 2.8 µm and 5.5 µm in lateral directions. Subaperture based DAO is successfully applied to increase the visibility of cone-photoreceptors and nerve fibers. Furthermore, en-face Doppler OCT maps are generated based on calculating the differential phase shifts between recorded lines.
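
The final step described, en-face Doppler maps from differential phase shifts between recorded lines, amounts to taking the argument of the complex correlation between successive lines. Below is a minimal sketch on synthetic complex-valued data; the signal model and noise levels are assumptions, not the instrument's.

```python
import numpy as np

rng = np.random.default_rng(6)

# Two successively recorded complex OCT lines (synthetic): moving
# scatterers add a common phase shift on top of random speckle phase.
n = 256
amp = 1.0 + 0.1 * rng.normal(size=n)
speckle = rng.uniform(-np.pi, np.pi, size=n)
true_shift = 0.4                                  # radians, axial motion
line_a = amp * np.exp(1j * speckle)
line_b = amp * np.exp(1j * (speckle + true_shift)) + 0.01 * (
    rng.normal(size=n) + 1j * rng.normal(size=n))

# Differential phase per pixel, plus an aggregate estimate via the
# argument of the averaged complex correlation (robust to the random
# static phase of each speckle).
dphi = np.angle(line_b * np.conj(line_a))
est = np.angle(np.mean(line_b * np.conj(line_a)))
```

Multiplying by the conjugate rather than subtracting raw phases avoids 2π wrapping problems, which is why Doppler OCT processing is usually written this way.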

  3. An Island of Stability: Art Images and Natural Scenes - but Not Natural Faces - Show Consistent Esthetic Response in Alzheimer's-Related Dementia.

    Science.gov (United States)

    Graham, Daniel J; Stockinger, Simone; Leder, Helmut

    2013-01-01

    Alzheimer's disease (AD) causes severe impairments in cognitive function but there is evidence that aspects of esthetic perception are somewhat spared, at least in early stages of the disease. People with early Alzheimer's-related dementia have been found to show similar degrees of stability over time in esthetic judgment of paintings compared to controls, despite poor explicit memory for the images. Here we expand on this line of inquiry to investigate the types of perceptual judgments involved, and to test whether people in later stages of the disease also show evidence of preserved esthetic judgment. Our results confirm that, compared to healthy controls, there is similar esthetic stability in early stage AD in the absence of explicit memory, and we report here that people with later stages of the disease also show similar stability compared to controls. However, while we find that stability for portrait paintings, landscape paintings, and landscape photographs is not different compared to control group performance, stability for face photographs - which were matched for identity with the portrait paintings - was significantly impaired in the AD group. We suggest that partially spared face-processing systems interfere with esthetic processing of natural faces in ways that are not found for artistic images and landscape photographs. Thus, our work provides a novel form of evidence regarding face-processing in healthy and diseased aging. Our work also gives insights into general theories of esthetics, since people with AD are not encumbered by many of the semantic and emotional factors that otherwise color esthetic judgment. We conclude that, for people with AD, basic esthetic judgment of artistic images represents an "island of stability" in a condition that in most other respects causes profound cognitive disruption. As such, esthetic response could be a promising route to future therapies.

  4. Interocularly merged face percepts eliminate binocular rivalry

    NARCIS (Netherlands)

    Klink, P. Christiaan; Boucherie, Daphne; Denys, Damiaan; Roelfsema, Pieter R.; Self, Matthew W.

    2017-01-01

    Faces are important visual objects for humans and other social animals. A complex network of specialized brain areas is involved in the recognition and interpretation of faces. This network needs to strike a balance between being sensitive enough to distinguish between different faces with similar

  5. Tagging like Humans: Diverse and Distinct Image Annotation

    KAUST Repository

    Wu, Baoyuan

    2018-03-31

    In this work we propose a new automatic image annotation model, dubbed diverse and distinct image annotation (D2IA). The generative model D2IA is inspired by ensembles of human annotations, which create semantically relevant, yet distinct and diverse tags. In D2IA, we generate a relevant and distinct tag subset, in which the tags are relevant to the image contents and semantically distinct from each other, using sequential sampling from a determinantal point process (DPP) model. Multiple such tag subsets, covering diverse semantic aspects or diverse semantic levels of the image contents, are generated by randomly perturbing the DPP sampling process. We leverage a generative adversarial network (GAN) model to train D2IA. Extensive experiments, including quantitative and qualitative comparisons as well as human subject studies, on two benchmark datasets demonstrate that the proposed model can produce more diverse and distinct tags than the state of the art.
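
The key mechanism, drawing a tag subset from a DPP whose kernel trades off relevance against pairwise similarity, can be illustrated with a greedy MAP approximation. The paper uses sequential sampling from a learned model; the toy vocabulary, embeddings, quality scores, and greedy selection below are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy tag vocabulary: "puppy" is nearly a duplicate of "dog", and
# "lawn" of "grass"; quality holds each tag's relevance to the image.
tags = ["dog", "puppy", "grass", "lawn", "frisbee"]
emb = rng.normal(size=(5, 16))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)
for dup, src in [(1, 0), (3, 2)]:
    emb[dup] = 0.95 * emb[src] + 0.05 * rng.normal(size=16)
    emb[dup] /= np.linalg.norm(emb[dup])
quality = np.array([0.9, 0.85, 0.8, 0.75, 0.6])

# DPP kernel L = diag(q) S diag(q): a subset's probability is
# proportional to det(L_subset), which rewards relevant tags on the
# diagonal but penalises redundant (similar) pairs off-diagonal.
L = np.outer(quality, quality) * (emb @ emb.T)

def greedy_dpp(L, k):
    """Greedily add the item that most increases det(L_subset)."""
    chosen = []
    for _ in range(k):
        gains = [(np.linalg.det(L[np.ix_(chosen + [i], chosen + [i])]), i)
                 for i in range(len(L)) if i not in chosen]
        chosen.append(max(gains)[1])
    return chosen

subset = greedy_dpp(L, 3)   # relevant yet mutually distinct tags
```

Because the determinant collapses when two selected rows are nearly parallel, near-duplicate tags such as "dog"/"puppy" are not picked together even though both are individually relevant.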

  6. Early adversity and brain response to faces in young adulthood.

    Science.gov (United States)

    Lieslehto, Johannes; Kiviniemi, Vesa; Mäki, Pirjo; Koivukangas, Jenni; Nordström, Tanja; Miettunen, Jouko; Barnett, Jennifer H; Jones, Peter B; Murray, Graham K; Moilanen, Irma; Paus, Tomáš; Veijola, Juha

    2017-09-01

    Early stressors play a key role in shaping interindividual differences in vulnerability to various psychopathologies, which according to the diathesis-stress model might relate to elevated glucocorticoid secretion and impaired responsiveness to stress. Furthermore, previous studies have shown that individuals exposed to early adversity have deficits in emotion processing from faces. This study aims to explore whether early adversities associate with brain response to faces and whether this association might relate to regional variations in mRNA expression of the glucocorticoid receptor gene (NR3C1). A total of 104 individuals drawn from the Northern Finland Birth Cohort 1986 participated in a face-task functional magnetic resonance imaging (fMRI) study. A large independent dataset (IMAGEN, N = 1739) was utilized for reducing the fMRI data-analytical space in the NFBC 1986 dataset. Early adversities were associated with deviant brain response to fearful faces (MANCOVA, P = 0.006) and with weaker performance in fearful facial expression recognition (P = 0.01). Glucocorticoid receptor gene expression (data from the Allen Human Brain Atlas) correlated with the degree of association between early adversities and brain response to fearful faces (R² = 0.25, P = 0.01) across different brain regions. Our results suggest that early adversities contribute to brain response to faces and that this association is mediated in part by the glucocorticoid system. Hum Brain Mapp 38:4470-4478, 2017. © 2017 Wiley Periodicals, Inc.

  7. Computer vision and soft computing for automatic skull-face overlay in craniofacial superimposition.

    Science.gov (United States)

    Campomanes-Álvarez, B Rosario; Ibáñez, O; Navarro, F; Alemán, I; Botella, M; Damas, S; Cordón, O

    2014-12-01

    Craniofacial superimposition can provide evidence to support that some human skeletal remains belong or not to a missing person. It involves the process of overlaying a skull with a number of ante mortem images of an individual and the analysis of their morphological correspondence. Within the craniofacial superimposition process, the skull-face overlay stage just focuses on achieving the best possible overlay of the skull and a single ante mortem image of the suspect. Although craniofacial superimposition has been in use for over a century, skull-face overlay is still applied by means of a trial-and-error approach without an automatic method. Practitioners finish the process once they consider that a good enough overlay has been attained. Hence, skull-face overlay is a very challenging, subjective, error prone, and time consuming part of the whole process. Though the numerical assessment of the method quality has not been achieved yet, computer vision and soft computing arise as powerful tools to automate it, dramatically reducing the time taken by the expert and obtaining an unbiased overlay result. In this manuscript, we justify and analyze the use of these techniques to properly model the skull-face overlay problem. We also present the automatic technical procedure we have developed using these computational methods and show the four overlays obtained in two craniofacial superimposition cases. This automatic procedure can be thus considered as a tool to aid forensic anthropologists to develop the skull-face overlay, automating and avoiding subjectivity of the most tedious task within craniofacial superimposition. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  8. Challenges for data storage in medical imaging research.

    Science.gov (United States)

    Langer, Steve G

    2011-04-01

    Researchers in medical imaging face multiple challenges for storing, indexing, maintaining viability, and sharing their data. Addressing all these concerns requires a constellation of tools, but not all of them need to be local to the site. In particular, the data storage challenges faced by researchers can begin to require professional information technology skills. With limited human resources and funds, the medical imaging researcher may be better served by an outsourcing strategy for some management aspects. This paper outlines an approach to manage the main objectives faced by medical imaging scientists whose work includes processing and data mining of non-standard file formats, and relating those files to their DICOM-standard descendants. The capacity of the approach scales as the researcher's needs grow by leveraging the on-demand provisioning ability of cloud computing.

  9. Facial biometrics of Yorubas of Nigeria using Akinlolu-Raji image-processing algorithm

    Directory of Open Access Journals (Sweden)

    Adelaja Abdulazeez Akinlolu

    2016-01-01

    Full Text Available Background: Forensic anthropology deals with the establishment of human identity using genetics, biometrics, and face recognition technology. This study aims to compute facial biometrics of Yorubas of Osun State of Nigeria using a novel Akinlolu-Raji image-processing algorithm. Materials and Methods: Three hundred Yorubas of Osun State (150 males and 150 females, aged 15–33 years) were selected as subjects for the study, with informed consent and once established as Yorubas through their parents and grandparents. Height, body weight, and facial biometrics (evaluated on three-dimensional [3D] facial photographs) were measured on all subjects. The novel Akinlolu-Raji image-processing algorithm for forensic face recognition was developed using the modified row method of computer programming. Facial width, total face height, short forehead height, long forehead height, upper face height, nasal bridge length, nose height, morphological face height, and lower face height computed from readings of the Akinlolu-Raji image-processing algorithm were analyzed using the z-test (P ≤ 0.05) in 2010 Microsoft Excel statistical software. Results: Statistical analyses of facial measurements showed nonsignificantly higher mean values (P > 0.05) in Yoruba males compared to females. Yoruba males and females have the leptoprosopic face type based on classifications of face types from facial indices. Conclusions: The Akinlolu-Raji image-processing algorithm can be employed for computing anthropometric, forensic, diagnostic, or any other measurements on 2D and 3D images, and data computed from its readings can be converted to actual or life sizes as obtained in 1D measurements. Furthermore, Yoruba males and females have the leptoprosopic face type.
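
The face-type classification from facial indices mentioned in the conclusions can be sketched as follows. The total facial index formula is standard; the threshold bands are the commonly quoted Martin-Saller style values, an assumption rather than values taken from this paper, and the measurements are illustrative numbers.

```python
def prosopic_index(morph_face_height_mm, facial_width_mm):
    """Total (morphological) facial index = 100 * height / width."""
    return 100.0 * morph_face_height_mm / facial_width_mm

def face_type(index):
    """Classify a total facial index into a face type (the threshold
    bands are the commonly quoted ones; an assumption)."""
    if index < 80.0:
        return "hypereuryprosopic"
    if index < 85.0:
        return "euryprosopic"
    if index < 90.0:
        return "mesoprosopic"
    if index < 95.0:
        return "leptoprosopic"
    return "hyperleptoprosopic"

# Illustrative measurements: morphological face height 121 mm and
# facial width 130 mm give an index in the leptoprosopic band.
idx = prosopic_index(121.0, 130.0)
kind = face_type(idx)
```
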

  10. Face-selective regions differ in their ability to classify facial expressions.

    Science.gov (United States)

    Zhang, Hui; Japee, Shruti; Nolan, Rachel; Chu, Carlton; Liu, Ning; Ungerleider, Leslie G

    2016-04-15

    Recognition of facial expressions is crucial for effective social interactions. Yet, the extent to which the various face-selective regions in the human brain classify different facial expressions remains unclear. We used functional magnetic resonance imaging (fMRI) and support vector machine pattern classification analysis to determine how well face-selective brain regions are able to decode different categories of facial expression. Subjects participated in a slow event-related fMRI experiment in which they were shown 32 face pictures, portraying four different expressions (neutral, fearful, angry, and happy) and belonging to eight different identities. Our results showed that only the amygdala and the posterior superior temporal sulcus (STS) were able to accurately discriminate between these expressions, albeit in different ways: the amygdala discriminated fearful faces from non-fearful faces, whereas STS discriminated neutral from emotional (fearful, angry and happy) faces. In contrast to these findings on the classification of emotional expression, only the fusiform face area (FFA) and anterior inferior temporal cortex (aIT) could discriminate among the various facial identities. Further, the amygdala and STS were better than FFA and aIT at classifying expression, while FFA and aIT were better than the amygdala and STS at classifying identity. Taken together, our findings indicate that the decoding of facial emotion and facial identity occurs in different neural substrates: the amygdala and STS for the former and FFA and aIT for the latter. Published by Elsevier Inc.

  11. [Leonardo da Vinci the first human body imaging specialist. A brief communication on the thorax oseum images].

    Science.gov (United States)

    Cicero, Raúl; Criales, José Luis; Cardoso, Manuel

    2009-01-01

    The impressive development of computed tomography (CT) techniques such as three-dimensional helical CT produces spatial images of the bony thorax. At the beginning of the 16th century, Leonardo da Vinci drew the thorax oseum with great precision. These drawings show an outstanding similarity to the images obtained by three-dimensional helical CT. The cumbersome task of the Renaissance genius is a prime example of the careful study of human anatomy. Modern imaging techniques require perfect anatomic knowledge of the human body in order to generate exact interpretations of images. Leonardo's example lives on for anybody devoted to modern imaging studies.

  12. Similarity measures for face recognition

    CERN Document Server

    Vezzetti, Enrico

    2015-01-01

    Face recognition has several applications, including security (authentication and identification of device users and criminal suspects) and medicine (corrective surgery and diagnosis). Facial recognition programs rely on algorithms that can compare and compute the similarity between two sets of images. This eBook explains some of the similarity measures used in facial recognition systems in a single volume. Readers will learn about various measures including Minkowski distances, Mahalanobis distances, Hausdorff distances, and cosine-based distances, among other methods. The book also summarizes errors that may occur in face recognition methods. Computer scientists "facing face" and looking to select and test different methods of computing similarities will benefit from this book. The book is also a useful tool for students undertaking computer vision courses.

  13. An Experiment Comparing HBSE Graduate Social Work Classes: Face-to-Face and at a Distance

    Science.gov (United States)

    Woehle, Ralph; Quinn, Andrew

    2009-01-01

    This article describes a quasi-experimental comparison of two master's level social work classes delivering content on human behavior in the social environment. One class, delivered face-to-face, was largely synchronous. The other class, delivered using distance technologies, was more asynchronous than the first. The authors hypothesized that…

  14. 3D imaging by serial block face scanning electron microscopy for materials science using ultramicrotomy.

    Science.gov (United States)

    Hashimoto, Teruo; Thompson, George E; Zhou, Xiaorong; Withers, Philip J

    2016-04-01

    Mechanical serial block face scanning electron microscopy (SBFSEM) has emerged as a means of obtaining three dimensional (3D) electron images over volumes much larger than possible by focused ion beam (FIB) serial sectioning and at higher spatial resolution than achievable with conventional X-ray computed tomography (CT). Such high resolution 3D electron images can be employed for precisely determining the shape, volume fraction, distribution and connectivity of important microstructural features. While soft (fixed or frozen) biological samples are particularly well suited for nanoscale sectioning using an ultramicrotome, the technique can also produce excellent 3D images at electron microscope resolution in a time- and resource-efficient manner for engineering materials. Currently, a lack of appreciation of the capabilities of ultramicrotomy and the operational challenges associated with minimising artefacts for different materials is limiting its wider application to engineering materials. Consequently, this paper outlines the current state of the art for SBFSEM, examining in detail how damage is introduced during slicing and highlighting strategies for minimising such damage. A particular focus of the study is the acquisition of 3D images for a variety of metallic and coated systems. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  15. Combined in-depth, 3D, en face imaging of the optic disc, optic disc pits and optic disc pit maculopathy using swept-source megahertz OCT at 1050 nm.

    Science.gov (United States)

    Maertz, Josef; Kolb, Jan Philip; Klein, Thomas; Mohler, Kathrin J; Eibl, Matthias; Wieser, Wolfgang; Huber, Robert; Priglinger, Siegfried; Wolf, Armin

    2018-02-01

    To demonstrate papillary imaging of eyes with optic disc pits (ODP) or optic disc pit associated maculopathy (ODP-M) with ultrahigh-speed swept-source optical coherence tomography (SS-OCT) at 1.68 million A-scans/s. To generate 3D-renderings of the papillary area with 3D volume-reconstructions of the ODP and highly resolved en face images from a single densely-sampled megahertz-OCT (MHz-OCT) dataset for investigation of ODP-characteristics. A 1.68 MHz-prototype SS-MHz-OCT system at 1050 nm based on a Fourier-domain mode-locked laser was employed to acquire high-definition, 3D datasets with a dense sampling of 1600 × 1600 A-scans over a 45° field of view. Six eyes with ODPs, and two further eyes with glaucomatous alteration or without ocular pathology are presented. 3D-rendering of the deep papillary structures, virtual 3D-reconstructions of the ODPs and depth resolved isotropic en face images were generated using semiautomatic segmentation. 3D-rendering and en face imaging of the optic disc, ODPs and ODP associated pathologies showed a broad spectrum regarding ODP characteristics. Between individuals the shape of the ODP and the appending pathologies varied considerably. MHz-OCT en face imaging generates distinct top-view images of ODPs and ODP-M. MHz-OCT generates high resolution images of retinal pathologies associated with ODP-M and allows visualizing ODPs with depths of up to 2.7 mm. Different patterns of ODPs can be visualized in patients for the first time using 3D-reconstructions and co-registered high-definition en face images extracted from a single densely sampled 1050 nm megahertz-OCT (MHz-OCT) dataset. As the immediate vicinity to the SAS and the site of intrapapillary proliferation is located at the bottom of the ODP it is crucial to image the complete structure and the whole depth of ODPs. Especially in very deep pits, where non-swept-source OCT fails to reach the bottom, conventional swept-source devices and the MHz-OCT alike are feasible

  16. Self-Face Recognition Begins to Share Active Region in Right Inferior Parietal Lobule with Proprioceptive Illusion During Adolescence.

    Science.gov (United States)

    Morita, Tomoyo; Saito, Daisuke N; Ban, Midori; Shimada, Koji; Okamoto, Yuko; Kosaka, Hirotaka; Okazawa, Hidehiko; Asada, Minoru; Naito, Eiichi

    2018-04-01

    We recently reported that right-side dominance of the inferior parietal lobule (IPL) in self-body recognition (proprioceptive illusion) task emerges during adolescence in typical human development. Here, we extend this finding by demonstrating that functional lateralization to the right IPL also develops during adolescence in another self-body (specifically a self-face) recognition task. We collected functional magnetic resonance imaging (fMRI) data from 60 right-handed healthy children (8-11 years), adolescents (12-15 years), and adults (18-23 years; 20 per group) while they judged whether a presented face was their own (Self) or that of somebody else (Other). We also analyzed fMRI data collected while they performed proprioceptive illusion task. All participants performed self-face recognition with high accuracy. Among brain regions where self-face-related activity (Self vs. Other) developed, only right IPL activity developed predominantly for self-face processing, with no substantial involvement in other-face processing. Adult-like right-dominant use of IPL emerged during adolescence, but was not yet present in childhood. Adult-like common activation between the tasks also emerged during adolescence. Adolescents showing stronger right-lateralized IPL activity during illusion also showed this during self-face recognition. Our results suggest the importance of the right IPL in neuronal processing of information associated with one's own body in typically developing humans.

  17. Adaptive weighted local textural features for illumination, expression, and occlusion invariant face recognition

    Science.gov (United States)

    Cui, Chen; Asari, Vijayan K.

    2014-03-01

    Biometric features such as fingerprints, iris patterns, and face features help to identify people and restrict access to secure areas by performing advanced pattern analysis and matching. Face recognition is one of the most promising biometric methodologies for human identification in a non-cooperative security environment. However, the recognition results obtained by face recognition systems are affected by several variations that may happen to the patterns in an unrestricted environment. As a result, several algorithms have been developed for extracting different facial features for face recognition. Due to the various possible challenges of data captured at different lighting conditions, viewing angles, facial expressions, and partial occlusions in natural environmental conditions, automatic facial recognition still remains a difficult issue that needs to be resolved. In this paper, we propose a novel approach to tackling some of these issues by analyzing the local textural descriptions for facial feature representation. The textural information is extracted by an enhanced local binary pattern (ELBP) description of all the local regions of the face. The relationship of each pixel with respect to its neighborhood is extracted and employed to calculate the new representation. ELBP reconstructs a much better textural feature extraction vector from an original gray level image in different lighting conditions. The dimensionality of the texture image is reduced by principal component analysis performed on each local face region. Each low dimensional vector representing a local region is now weighted based on the significance of the sub-region. The weight of each sub-region is determined by employing the local variance estimate of the respective region, which represents the significance of the region. The final facial textural feature vector is obtained by concatenating the reduced dimensional weight sets of all the modules (sub-regions) of the face image.
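
The enhanced LBP (ELBP) descriptor is specific to this paper, but the local binary pattern computation it builds on can be sketched in plain Python (a minimal 8-neighbour LBP, not the authors' enhanced variant):

```python
def lbp_code(img, r, c):
    """Basic 8-neighbour LBP: threshold each neighbour against the centre
    pixel and pack the comparison bits (clockwise from top-left) into one
    byte, encoding the pixel's relationship to its neighbourhood."""
    centre = img[r][c]
    neighbours = [img[r-1][c-1], img[r-1][c], img[r-1][c+1],
                  img[r][c+1],   img[r+1][c+1], img[r+1][c],
                  img[r+1][c-1], img[r][c-1]]
    code = 0
    for bit, value in enumerate(neighbours):
        if value >= centre:
            code |= 1 << bit
    return code

# 3x3 patch: bright top row, dark bottom row -> only the top three bits set
patch = [[200, 210, 205],
         [ 90, 100,  95],
         [ 10,  20,  15]]
code = lbp_code(patch, 1, 1)
```

A histogram of these codes over each face sub-region then serves as that region's texture feature vector, which is the representation the PCA and weighting steps above operate on.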

  18. Task-irrelevant emotion facilitates face discrimination learning.

    Science.gov (United States)

    Lorenzino, Martina; Caudek, Corrado

    2015-03-01

    We understand poorly how the ability to discriminate faces from one another is shaped by visual experience. The purpose of the present study is to determine whether face discrimination learning can be facilitated by facial emotions. To answer this question, we used a task-irrelevant perceptual learning paradigm because it closely mimics the learning processes that, in daily life, occur without a conscious intention to learn and without an attentional focus on specific facial features. We measured face discrimination thresholds before and after training. During the training phase (4 days), participants performed a contrast discrimination task on face images. They were not informed that we introduced (task-irrelevant) subtle variations in the face images from trial to trial. For the Identity group, the task-irrelevant features were variations along a morphing continuum of facial identity. For the Emotion group, the task-irrelevant features were variations along an emotional expression morphing continuum. The Control group did not undergo contrast discrimination learning and only performed the pre-training and post-training tests, with the same temporal gap between them as the other two groups. Results indicate that face discrimination improved, but only for the Emotion group. Participants in the Emotion group, moreover, showed face discrimination improvements also for stimulus variations along the facial identity dimension, even if these (task-irrelevant) stimulus features had not been presented during training. The present results highlight the importance of emotions for face discrimination learning. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. On Decomposing Object Appearance using PCA and Wavelet bases with Applications to Image Segmentation

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille; Forchhammer, Søren

    2002-01-01

    the complete object surface using principal component analysis. This typically involves matrices with a few thousand and up to 100,000+ rows. This paper demonstrates applications of such models applied to colour images of human faces and cardiac magnetic resonance images. Further, we devise methods...
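
The principal component analysis mentioned above operates on appearance matrices with up to 100,000+ rows; the core idea can be illustrated at toy scale with power iteration on a 2-D covariance matrix (large appearance models use specialised eigensolvers instead):

```python
def principal_axis(data, iters=100):
    """First principal component of 2-D points via power iteration on the
    2x2 covariance matrix. The same principle scales to the huge appearance
    matrices mentioned above, where dedicated solvers are used."""
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    # Covariance matrix entries
    cxx = sum((x - mx) ** 2 for x, _ in data) / n
    cyy = sum((y - my) ** 2 for _, y in data) / n
    cxy = sum((x - mx) * (y - my) for x, y in data) / n
    # Power iteration: repeatedly apply the matrix and renormalize
    vx, vy = 1.0, 0.0
    for _ in range(iters):
        wx, wy = cxx * vx + cxy * vy, cxy * vx + cyy * vy
        norm = (wx * wx + wy * wy) ** 0.5
        vx, vy = wx / norm, wy / norm
    return vx, vy

# Points scattered along the line y = x: axis converges near (0.707, 0.707)
axis = principal_axis([(0, 0), (1, 1.1), (2, 1.9), (3, 3.05)])
```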

  20. Functional MRI studies of human vision on a clinical imager

    International Nuclear Information System (INIS)

    George, J.S.; Lewine, J.D.; Aine, C.J.; van Hulsteyn, D.; Wood, C.C.; Sanders, J.; Maclin, E.; Belliveau, J.W.; Caprihan, A.

    1992-01-01

    During the past decade, Magnetic Resonance Imaging (MRI) has become the method of choice for imaging the anatomy of the human brain. Recently, Belliveau and colleagues have reported the use of echo planar magnetic resonance imaging (EPI) to image patterns of neural activity. Here, we report functional MR imaging in response to visual stimulation without the use of contrast agents, and without the extensive hardware modifications required for EPI. Regions of activity were observed near the expected locations of V1, V2 and possibly V3 and another active region was observed near the parietal-occipital sulcus on the superior surface of the cerebrum. These locations are consistent with sources observed in neuromagnetic studies of the human visual response

  1. Second Harmonic Generation Imaging Analysis of Collagen Arrangement in Human Cornea.

    Science.gov (United States)

    Park, Choul Yong; Lee, Jimmy K; Chuck, Roy S

    2015-08-01

    To describe the horizontal arrangement of human corneal collagen bundles by using second harmonic generation (SHG) imaging. Human corneas were imaged with an inverted two photon excitation fluorescence microscope. The excitation laser (Ti:Sapphire) was tuned to 850 nm. Backscatter signals of SHG were collected through a 425/30-nm bandpass emission filter. Multiple, consecutive, and overlapping image stacks (z-stacks) were acquired to generate three dimensional data sets. ImageJ software was used to analyze the arrangement pattern (irregularity) of collagen bundles at each image plane. Collagen bundles in the corneal lamellae demonstrated a complex layout merging and splitting within a single lamellar plane. The patterns were significantly different in the superficial and limbal cornea when compared with deep and central regions. Collagen bundles were smaller in the superficial layer and larger in deep lamellae. By using SHG imaging, the horizontal arrangement of corneal collagen bundles was elucidated at different depths and focal regions of the human cornea.

  2. Live face detection based on the analysis of Fourier spectra

    Science.gov (United States)

    Li, Jiangwei; Wang, Yunhong; Tan, Tieniu; Jain, Anil K.

    2004-08-01

    Biometrics is a rapidly developing technology for identifying a person based on his or her physiological or behavioral characteristics. To ensure the correctness of authentication, the biometric system must be able to detect and reject the use of a copy of a biometric instead of the live biometric. This function is usually termed "liveness detection". This paper describes a new method for live face detection. Using structure and movement information of the live face, an effective live face detection algorithm is presented. Compared to existing approaches, which concentrate on the measurement of 3D depth information, this method is based on the analysis of Fourier spectra of a single face image or face image sequences. Experimental results show that the proposed method has an encouraging performance.
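
The intuition behind this spectral approach (a printed or re-photographed face loses fine detail, so its spectrum is biased toward low frequencies) can be illustrated with a 1-D sketch; this frequency-energy ratio is an illustrative descriptor, not the authors' exact measure:

```python
import cmath

def dft(signal):
    """Naive discrete Fourier transform (O(n^2), fine for a sketch)."""
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def high_freq_ratio(signal, cutoff):
    """Fraction of spectral energy at or above frequency index `cutoff`
    (DC excluded) - the kind of descriptor that separates sharp live
    signals from blurred photographic copies."""
    spectrum = [abs(x) ** 2 for x in dft(signal)]
    half = spectrum[1:len(signal) // 2 + 1]  # positive frequencies only
    total = sum(half)
    return sum(half[cutoff - 1:]) / total if total else 0.0

sharp = [0, 255] * 8                   # rapid alternation: high-freq energy
blurry = [i * 16 for i in range(16)]   # smooth ramp: low-freq energy
```

For a live face the image-intensity profile behaves like `sharp` relative to a defocused photo of a photo, so thresholding this ratio gives a crude liveness test.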

  3. Designing a Low-Resolution Face Recognition System for Long-Range Surveillance

    NARCIS (Netherlands)

    Peng, Y.; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.

    2016-01-01

    Most face recognition systems deal well with high-resolution facial images, but perform much worse on low-resolution facial images. In low-resolution face recognition, there is a specific but realistic surveillance scenario: a surveillance camera monitoring a large area. In this scenario, usually

  4. The importance of internal facial features in learning new faces.

    Science.gov (United States)

    Longmore, Christopher A; Liu, Chang Hong; Young, Andrew W

    2015-01-01

    For familiar faces, the internal features (eyes, nose, and mouth) are known to be differentially salient for recognition compared to external features such as hairstyle. Two experiments are reported that investigate how this internal feature advantage accrues as a face becomes familiar. In Experiment 1, we tested the contribution of internal and external features to the ability to generalize from a single studied photograph to different views of the same face. A recognition advantage for the internal features over the external features was found after a change of viewpoint, whereas there was no internal feature advantage when the same image was used at study and test. In Experiment 2, we removed the most salient external feature (hairstyle) from studied photographs and looked at how this affected generalization to a novel viewpoint. Removing the hair from images of the face assisted generalization to novel viewpoints, and this was especially the case when photographs showing more than one viewpoint were studied. The results suggest that the internal features play an important role in the generalization between different images of an individual's face by enabling the viewer to detect the common identity-diagnostic elements across non-identical instances of the face.

  5. Optimizing Fuzzy Rule Base for Illumination Compensation in Face Recognition using Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Bima Sena Bayu Dewantara

    2014-12-01

    Full Text Available Fuzzy rule optimization is a challenging step in the development of a fuzzy model. A simple two-input fuzzy model may have thousands of combinations of fuzzy rules when it deals with a large number of input variations. Intuitive and trial-and-error determination of fuzzy rules is very difficult. This paper addresses the problem of optimizing a fuzzy rule base using a Genetic Algorithm to compensate for illumination effects in face recognition. Since uneven illumination contributes negative effects to the performance of face recognition, those effects must be compensated. We have developed a novel algorithm based on a reflectance model to compensate for the effect of illumination in human face recognition. We build a pair of models from a single image and reason over those models using fuzzy logic. The fuzzy rule base is then optimized using a Genetic Algorithm. This approach reduces computation cost while maintaining high performance. Based on the experimental results, we show that our algorithm is feasible for recognizing the desired person under variable lighting conditions with faster computation time. Keywords: Face recognition, harsh illumination, reflectance model, fuzzy, genetic algorithm
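
A minimal genetic algorithm of the kind used to optimize a rule base can be sketched as follows; the bit-string encoding and the synthetic fitness function are illustrative assumptions, since the paper's actual fitness is recognition performance under varying illumination:

```python
import random

random.seed(0)

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]  # stand-in for an "ideal" rule table

def fitness(rules):
    """Synthetic fitness: agreement with a reference rule table. A real
    system would instead score recognition accuracy on validation faces."""
    return sum(r == t for r, t in zip(rules, TARGET))

def evolve(pop_size=20, generations=40, mutate_p=0.1):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]             # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(TARGET))  # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < mutate_p else g
                     for g in child]                # bit-flip mutation
            children.append(child)
        pop = parents + children                    # elitist replacement
    return max(pop, key=fitness)

best = evolve()
```

Because the top half of the population survives unchanged each generation, the best fitness is monotonically non-decreasing, which is why the search converges on the reference table quickly.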

  6. Image-based occupancy sensor

    Science.gov (United States)

    Polese, Luigi Gentile; Brackney, Larry

    2015-05-19

    An image-based occupancy sensor includes a motion detection module that receives and processes an image signal to generate a motion detection signal, a people detection module that processes the image signal to generate a people detection signal, and a face detection module that processes the image signal to generate a face detection signal. A sensor integration module receives the signals from these three modules and generates an occupancy signal using the motion detection signal, the people detection signal, and the face detection signal. The occupancy signal indicates vacancy or occupancy, with an occupancy indication specifying that one or more people are detected within the monitored volume.
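
The sensor integration step can be sketched as a simple fusion function; the patent describes the modules abstractly, so this any-of-three policy is one plausible reading, not the claimed implementation:

```python
def occupancy_signal(motion, people, face):
    """Fuse the three detector outputs into a single occupancy indication.
    Any-of-three fusion is an assumption for illustration: one positive
    detector output is taken as evidence that the volume is occupied."""
    return "occupied" if (motion or people or face) else "vacant"

# People detector fires even though no motion or face was detected
state = occupancy_signal(motion=False, people=True, face=False)
```

Combining independent detectors this way reduces missed detections (a seated, motionless person still trips the people or face detector) at the cost of more false positives than any single detector alone.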

  7. Dog experts' brains distinguish socially relevant body postures similarly in dogs and humans.

    Science.gov (United States)

    Kujala, Miiamaaria V; Kujala, Jan; Carlson, Synnöve; Hari, Riitta

    2012-01-01

    We read conspecifics' social cues effortlessly, but little is known about our abilities to understand social gestures of other species. To investigate the neural underpinnings of such skills, we used functional magnetic resonance imaging to study the brain activity of experts and non-experts of dog behavior while they observed humans or dogs either interacting with, or facing away from a conspecific. The posterior superior temporal sulcus (pSTS) of both subject groups dissociated humans facing toward each other from humans facing away, and in dog experts, a distinction also occurred for dogs facing toward vs. away in a bilateral area extending from the pSTS to the inferior temporo-occipital cortex: the dissociation of dog behavior was significantly stronger in expert than control group. Furthermore, the control group had stronger pSTS responses to humans than dogs facing toward a conspecific, whereas in dog experts, the responses were of similar magnitude. These findings suggest that dog experts' brains distinguish socially relevant body postures similarly in dogs and humans.

  8. Gender Recognition from Unconstrained and Articulated Human Body

    OpenAIRE

    Wu, Qin; Guo, Guodong

    2014-01-01

    Gender recognition has many useful applications, ranging from business intelligence to image search and social activity analysis. Traditional research on gender recognition focuses on face images in a constrained environment. This paper proposes a method for gender recognition in articulated human body images acquired from an unconstrained environment in the real world. A systematic study of some critical issues in body-based gender recognition, such as which body parts are informative, ho...

  9. Multi-task pose-invariant face recognition.

    Science.gov (United States)

    Ding, Changxing; Xu, Chang; Tao, Dacheng

    2015-03-01

    Face images captured in unconstrained environments usually contain significant pose variation, which dramatically degrades the performance of algorithms designed to recognize frontal faces. This paper proposes a novel face identification framework capable of handling the full range of pose variations within ±90° of yaw. The proposed framework first transforms the original pose-invariant face recognition problem into a partial frontal face recognition problem. A robust patch-based face representation scheme is then developed to represent the synthesized partial frontal faces. For each patch, a transformation dictionary is learnt under the proposed multi-task learning scheme. The transformation dictionary transforms the features of different poses into a discriminative subspace. Finally, face matching is performed at patch level rather than at the holistic level. Extensive and systematic experimentation on FERET, CMU-PIE, and Multi-PIE databases shows that the proposed method consistently outperforms single-task-based baselines as well as state-of-the-art methods for the pose problem. We further extend the proposed algorithm for the unconstrained face verification problem and achieve top-level performance on the challenging LFW data set.

  10. Fingerprint and Face Identification for Large User Population

    Directory of Open Access Journals (Sweden)

    Teddy Ko

    2003-06-01

    Full Text Available The main objective of this paper is to present the state of the art of current biometric (fingerprint and face) technology, lessons learned during the investigative analysis performed to ascertain the benefits of using combined fingerprint and facial technologies, and recommendations for the use of currently available fingerprint and face identification technologies for optimum identification performance in applications with a large user population. Prior fingerprint and face identification test studies have shown that identification accuracies are strongly dependent on the image quality of the biometric inputs. Recommended methodologies for ensuring the capture of acceptable-quality fingerprint and facial images of subjects are also presented in this paper.
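
Combining the two modalities is commonly done with score-level fusion; a minimal sketch, assuming min-max normalization and illustrative weights (the paper does not specify a fusion rule):

```python
def min_max(score, lo, hi):
    """Normalize a raw matcher score to [0, 1]."""
    return (score - lo) / (hi - lo)

def fused_score(finger_score, face_score, w_finger=0.6, w_face=0.4):
    """Weighted-sum fusion of two normalized matcher scores. The weights
    here are illustrative; in practice they are tuned on validation data,
    typically favouring the more accurate modality."""
    return w_finger * finger_score + w_face * face_score

f = min_max(720, 0, 1000)    # hypothetical fingerprint matcher score
v = min_max(0.55, 0.0, 1.0)  # hypothetical face matcher score
s = fused_score(f, v)
```

The fused score is then compared against a single decision threshold, which generally yields better accuracy on large populations than either matcher alone.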

  11. Implicit prosody mining based on the human eye image capture technology

    Science.gov (United States)

    Gao, Pei-pei; Liu, Feng

    2013-08-01

    The technology of eye tracking has become one of the main methods for analyzing recognition issues in human-computer interaction. Human eye image capture is the key problem in eye tracking. Based on further research, a new human-computer interaction method is introduced to enrich the forms of speech synthesis. We propose a method of Implicit Prosody mining based on human eye image capture technology: parameters are extracted from images of the human eyes during reading, used to control and drive prosody generation in speech synthesis, and used to establish a prosodic model with high simulation accuracy. The duration model is a key issue for prosody generation. For the duration model, this paper puts forward a new idea for obtaining the gaze duration of the eyes during reading based on eye image capture technology, and for synchronously controlling this duration and the pronunciation duration in speech synthesis. The movement of the human eyes during reading is a comprehensive multi-factor interactive process involving fixations, saccades, and regressions. Therefore, how to extract the appropriate information from images of the human eyes needs to be considered, and the gaze regularity of the eyes needs to be obtained as a reference for modeling. Based on an analysis of three current eye movement control models and the characteristics of Implicit Prosody reading, the relative independence between the text speech processing system and the eye movement control system is discussed. It was shown that, under the same text familiarity, the gaze duration of the eyes during reading and the internal voice pronunciation duration are synchronous. An eye gaze duration model based on the prosodic structure of the Chinese language is presented to replace previous methods of machine learning and probability forecasting, to obtain readers' real internal reading rhythm, and to synthesize voice with personalized rhythm. This research will enrich human-computer interaction forms, and will have practical significance and application prospects in terms of

  12. ROBUSTNESS OF A FACE-RECOGNITION TECHNIQUE BASED ON SUPPORT VECTOR MACHINES

    OpenAIRE

    Prashanth Harshangi; Koshy George

    2010-01-01

    The ever-increasing requirements of security concerns have placed a greater demand for face recognition surveillance systems. However, most current face recognition techniques are not quite robust with respect to factors such as variable illumination, facial expression and detail, and noise in images. In this paper, we demonstrate that face recognition using support vector machines is sufficiently robust to different kinds of noise, does not require image pre-processing, and can be used with...

  13. MR chemical shift imaging of human atheroma

    International Nuclear Information System (INIS)

    Mohiaddin, R.H.; Underwood, R.; Firmin, D.; Abdulla, A.K.; Rees, S.; Longmore, D.

    1988-01-01

    The lipid content of atheromatous plaques has been measured with chemical shift MR imaging by taking advantage of the different resonance frequencies of protons in lipid and water. Fifteen postmortem aortic specimens of the human descending aorta and the aortae of seven patients with documented peripheral vascular disease were studied at 0.5 T. Spin-echo images were used to localize the lesions before acquisition of the chemical shift images. The specimens were examined histologically, and the lipid distribution in the plaque showed good correlation with the chemical shift data. Validation in vivo and clinical applications remain to be established

  14. AN ILLUMINATION INVARIANT TEXTURE BASED FACE RECOGNITION

    Directory of Open Access Journals (Sweden)

    K. Meena

    2013-11-01

    Full Text Available Automatic face recognition remains an interesting but challenging open problem in computer vision. Poor illumination is considered one of the major issues, since illumination changes cause large variations in the facial features. To resolve this, illumination normalization preprocessing techniques are employed in this paper to enhance the face recognition rate. Methods such as Histogram Equalization (HE), Gamma Intensity Correction (GIC), Normalization chain, and Modified Homomorphic Filtering (MHF) are used for preprocessing. Owing to their great success, texture features are commonly used for face recognition. But these features are severely affected by lighting changes. Hence the texture-based models Local Binary Pattern (LBP), Local Derivative Pattern (LDP), Local Texture Pattern (LTP), and Local Tetra Patterns (LTrPs) are experimented with under different lighting conditions. In this paper, an illumination-invariant face recognition technique is developed based on the fusion of illumination preprocessing with local texture descriptors. The performance has been evaluated using the YALE B and CMU-PIE databases containing more than 1500 images. The results demonstrate that MHF-based normalization gives a significant improvement in recognition rate for face images under large illumination variations.
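
Of the preprocessing methods listed, Histogram Equalization is the simplest to sketch; a minimal grayscale version operating on a flat list of pixel levels:

```python
def equalize(pixels, levels=256):
    """Histogram equalization for a flat list of grey levels: map each
    level through the normalized cumulative histogram so that a narrow,
    low-contrast intensity range is spread across the full scale."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for count in hist:
        total += count
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)  # first non-zero CDF value
    n = len(pixels)
    return [round((cdf[p] - cdf_min) * (levels - 1) / (n - cdf_min))
            for p in pixels]

# A dark, low-contrast patch is stretched across the full 0-255 range
dark = [50, 50, 51, 51, 52, 52, 53, 53]
flat = equalize(dark)
```

Applied to a face image before feature extraction, this reduces the dependence of the LBP-family descriptors on the overall lighting level, which is exactly the role HE plays in the normalization chain above.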

  15. Attentional System for Face Detection and Tracking

    Directory of Open Access Journals (Sweden)

    Leonardo Pinto da Silva Panta Leão

    2011-04-01

    Full Text Available The human visual system quickly performs complex decisions due, in part, to the attentional system, which positions the most relevant targets in the center of the visual field, the region with the greatest concentration of photoreceptor cells. The attentional system involves sensory, cognitive and also mechanical elements, because the eye and head muscles must be activated to produce movement. In this paper we present the proposal of a face detector system that, like the biological system, produces a coordinated movement with the purpose of positioning the target image in the center of the camera's visual field. The developed system has two distinct parts, one responsible for video pattern recognition and the other for controlling the mechanical part, implemented as processes that communicate with each other through sockets.
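The control half of such a system reduces to computing how far the detected face sits from the frame centre and issuing a proportional pan/tilt correction; a minimal sketch (function names, gains and the deadband are illustrative, not taken from the paper):

```python
def centering_error(frame_w, frame_h, box):
    """Pixel offset between the centre of a detected face box
    (x, y, w, h) and the centre of the camera frame. The sign tells
    the pan/tilt controller which way to move to centre the target."""
    x, y, w, h = box
    face_cx, face_cy = x + w / 2, y + h / 2
    return face_cx - frame_w / 2, face_cy - frame_h / 2

def pan_tilt_command(error, gain=0.05, deadband=10):
    """Proportional command per axis; inside the deadband no command is
    issued, mimicking stable fixation once the target is centred."""
    ex, ey = error
    pan = -gain * ex if abs(ex) > deadband else 0.0
    tilt = -gain * ey if abs(ey) > deadband else 0.0
    return pan, tilt
```

In the architecture described above, the recognition process would send the detected box over its socket and the mechanical process would translate the resulting command into motor steps each frame.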

  16. Towards Robust and Accurate Multi-View and Partially-Occluded Face Alignment.

    Science.gov (United States)

    Xing, Junliang; Niu, Zhiheng; Huang, Junshi; Hu, Weiming; Zhou, Xi; Yan, Shuicheng

    2018-04-01

    Face alignment is an important task in computer vision. Regression-based methods currently dominate the approach to solving this problem; they generally employ a series of mapping functions from the face appearance to iteratively update the face shape hypothesis. One key point is thus how to perform the regression procedure. In this work, we formulate this regression procedure as a sparse coding problem. We learn two relational dictionaries, one for the face appearance and the other for the face shape, with coupled reconstruction coefficients to capture their underlying relationships. To deploy this model for face alignment, we derive the relational dictionaries in a stage-wise manner to perform closed-loop refinement of themselves, i.e., the face appearance dictionary is first learned from the face shape dictionary and then used to update the face shape hypothesis, and the updated face shape dictionary from the shape hypothesis is in turn used to refine the face appearance dictionary. To improve the model accuracy, we extend this model hierarchically from the whole face shape to face part shapes, so that both the global and local view variations of a face are captured. To locate facial landmarks under occlusions, we further introduce an occlusion dictionary into the face appearance dictionary to recover the face shape from partially occluded face appearance. The occlusion dictionary is learned in a data-driven manner from background images to represent a set of elemental occlusion patterns, a sparse combination of which models various practical partial face occlusions. By integrating all these technical innovations, we obtain a robust and accurate approach to locating facial landmarks under different face views and possibly severe occlusions for face images in the wild. Extensive experimental analyses and evaluations on different benchmark datasets, as well as two new datasets built by ourselves, have demonstrated the robustness and accuracy of our proposed approach.
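The generic regression loop that such methods share (iteratively mapping appearance features to shape increments) can be sketched as follows; the paper's coupled sparse-coding dictionaries are simplified here to plain linear regressors, purely for illustration:

```python
import numpy as np

def cascaded_alignment(extract_features, regressors, init_shape):
    """Generic regression-based face alignment loop: each stage maps the
    appearance sampled around the current shape hypothesis to a shape
    increment. The paper replaces the plain linear maps used here with
    stage-wise learned, coupled appearance/shape dictionaries."""
    shape = np.asarray(init_shape, float).copy()
    for R in regressors:                # one (2*n_points, n_feat) map per stage
        phi = extract_features(shape)   # appearance features at current shape
        shape = shape + (R @ phi).reshape(shape.shape)
    return shape
```

Each stage shrinks the residual between the hypothesis and the true landmark positions, which is why a short cascade of cheap regressors suffices in practice.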

  17. The part task of the part-spacing paradigm is not a pure measurement of part-based information of faces.

    Directory of Open Access Journals (Sweden)

    Qi Zhu

    Full Text Available BACKGROUND: Faces are arguably one of the most important object categories encountered by human observers, yet they present one of the most difficult challenges to both the human and artificial visual systems. A variety of experimental paradigms have been developed to study how faces are represented and recognized, among which is the part-spacing paradigm. This paradigm is presumed to characterize the processing of both the featural and configural information of faces, and it has become increasingly popular for testing hypotheses on face specificity and in the diagnosis of face perception in cognitive disorders. METHODOLOGY/PRINCIPAL FINDINGS: In two experiments we questioned the validity of the part task of this paradigm by showing that, in this task, measuring pure information about face parts is confounded by the effect of face configuration on the perception of those parts. First, we eliminated or reduced contributions from face configuration by either rearranging face parts into a non-face configuration or by removing the low spatial frequencies of face images. We found that face parts were no longer sensitive to inversion, suggesting that the previously reported inversion effect observed in the part task was due in fact to the presence of face configuration. Second, self-reported prosopagnosic patients who were selectively impaired in the holistic processing of faces failed to detect part changes when face configurations were presented. When face configurations were scrambled, however, their performance was as good as that of normal controls. CONCLUSIONS/SIGNIFICANCE: In sum, consistent evidence from testing both normal and prosopagnosic subjects suggests the part task of the part-spacing paradigm is not an appropriate task for either measuring how face parts alone are processed or for providing a valid contrast to the spacing task. Therefore, conclusions from previous studies using the part-spacing paradigm may need re-evaluation with

  18. Presentation and validation of the Radboud Faces Database

    NARCIS (Netherlands)

    Langer, O.; Dotsch, R.; Bijlstra, G.; Wigboldus, D.H.J.; Hawk, S.T.; van Knippenberg, A.

    2010-01-01

    Many research fields concerned with the processing of information contained in human faces would benefit from face stimulus sets in which specific facial characteristics are systematically varied while other important picture characteristics are kept constant. Specifically, a face database in which

  19. Predicting mortality from human faces.

    Science.gov (United States)

    Dykiert, Dominika; Bates, Timothy C; Gow, Alan J; Penke, Lars; Starr, John M; Deary, Ian J

    2012-01-01

    To investigate whether and to what extent mortality is predictable from facial photographs of older people. High-quality facial photographs of 292 members of the Lothian Birth Cohort 1921, taken at the age of about 83 years, were rated in terms of apparent age, health, attractiveness, facial symmetry, intelligence, and well-being by 12 young-adult raters. Cox proportional hazards regression was used to study associations between these ratings and mortality during a 7-year follow-up period. All ratings had adequate reliability. Concurrent validity was found for facial symmetry and intelligence (as determined by correlations with actual measures of fluctuating asymmetry in the faces and Raven Standard Progressive Matrices score, respectively), but not for the other traits. Age as rated from facial photographs, adjusted for sex and chronological age, was a significant predictor of mortality (hazard ratio = 1.36, 95% confidence interval = 1.12-1.65) and remained significant even after controlling for concurrent, objectively measured health and cognitive ability, and the other ratings. Health as rated from facial photographs, adjusted for sex and chronological age, significantly predicted mortality (hazard ratio = 0.81, 95% confidence interval = 0.67-0.99) but not after adjusting for rated age or objectively measured health and cognition. Rated attractiveness, symmetry, intelligence, and well-being were not significantly associated with mortality risk. Rated age of the face is a significant predictor of mortality risk among older people, with predictive value over and above that of objective or rated health status and cognitive ability.

  20. Comparison of human and automatic segmentations of kidneys from CT images

    International Nuclear Information System (INIS)

    Rao, Manjori; Stough, Joshua; Chi, Y.-Y.; Muller, Keith; Tracton, Gregg; Pizer, Stephen M.; Chaney, Edward L.

    2005-01-01

    Purpose: A controlled observer study was conducted to compare a method for automatic image segmentation with conventional user-guided segmentation of the right and left kidneys from planning computerized tomographic (CT) images. Methods and materials: Deformable shape models called m-reps were used to automatically segment the right and left kidneys from 12 target CT images, and the results were compared with careful manual segmentations performed by two human experts. M-rep models were trained on manual segmentations from a collection of images that did not include the targets. Segmentation using m-reps began with interactive initialization to position the kidney model over the target kidney in the image data. Fully automatic segmentation proceeded through two stages at successively smaller spatial scales. At the first stage, a global similarity transformation of the kidney model was computed to position the model closer to the target kidney. The similarity transformation was followed by large-scale deformations based on principal geodesic analysis (PGA). During the second stage, the medial atoms comprising the m-rep model were deformed one by one. This procedure was iterated until no changes were observed. The transformations and deformations at both stages were driven by optimizing an objective function with two terms. One term penalized the currently deformed m-rep by an amount proportional to its deviation from the mean m-rep derived from PGA of the training segmentations. The second term computed a model-to-image match based on the goodness of match between the trained intensity template for the currently deformed m-rep and the corresponding intensity data in the target image. Human and m-rep segmentations were compared using quantitative metrics provided in a toolset called Valmet. Metrics reported in this article include (1) percent volume overlap; (2) mean surface distance between two segmentations; and (3) maximum surface separation (Hausdorff distance).
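The two-term objective driving the deformation can be sketched schematically; here `match_score` stands in for the trained intensity-template match, and the quadratic prior is an illustrative simplification of the PGA-based statistical penalty:

```python
import numpy as np

def mrep_objective(model_params, mean_params, match_score, alpha=1.0):
    """Schematic two-term objective for m-rep deformation: a geometric
    prior penalising deviation from the mean model derived from PGA,
    minus an image match term (higher match = better fit). Both the
    quadratic prior and the scalar match_score are stand-ins for the
    paper's trained statistical and intensity-template terms."""
    diff = np.asarray(model_params, float) - np.asarray(mean_params, float)
    prior_penalty = alpha * float(np.sum(diff ** 2))
    return prior_penalty - match_score
```

Minimising this trades off staying near the population mean shape against fitting the intensity evidence in the target image, which is the balance the abstract describes.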

  1. Unfamiliar face matching with photographs of infants and children

    Directory of Open Access Journals (Sweden)

    Robin S.S. Kramer

    2018-06-01

    Full Text Available Background Infants and children travel using passports that are typically valid for five years (e.g. Canada, the United Kingdom, the United States and Australia). These individuals may also need to be identified using images taken from videos and other sources in forensic situations, including child exploitation cases. However, few researchers have examined how useful these images are as a means of identification. Methods We investigated the effectiveness of photo identification for infants and children using a face matching task, where participants were presented with two images simultaneously and asked whether the images depicted the same child or two different children. In Experiment 1, both images showed an infant (<1 year old), whereas in Experiment 2, one image again showed an infant but the second image of the child was taken at 4–5 years of age. In Experiments 3a and 3b, we asked participants to complete shortened versions of both these tasks (selecting the most difficult trials), as well as the short-version Glasgow face matching test. Finally, in Experiment 4, we investigated whether information regarding the sex of the infants and children could be accurately perceived from the images. Results In Experiment 1, we found low levels of performance (72% accuracy) for matching two infant photos. For Experiment 2, performance was lower still (64% accuracy) when infant and child images were presented, given the significant changes in appearance that occur over the first five years of life. In Experiments 3a and 3b, when participants completed both these tasks, as well as a measure of adult face matching ability, we found the lowest performance for the two infant tasks, along with mixed evidence of within-person correlations in sensitivities across all three tasks. The use of only same-sex pairings on mismatch trials, in comparison with random pairings, had little effect on performance measures. 
In Experiment 4, accuracy when judging the sex of infants was at
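Matching sensitivity in such tasks is conventionally summarised with the signal-detection measure d'; a sketch using a log-linear correction for extreme rates (a common choice, not necessarily the one used in these experiments):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity for a face-matching task: 'same
    person' responses on match trials are hits, and on mismatch trials
    false alarms. A log-linear correction keeps rates away from 0 and 1
    so the z-transform stays finite."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)
```

A participant at chance gives d' near 0, while better discrimination between match and mismatch trials gives larger d'; correlating these values across tasks is how the within-person comparisons above are made.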

  2. Neural correlates of the eye dominance effect in human face perception: the left-visual-field superiority for faces revisited.

    Science.gov (United States)

    Jung, Wookyoung; Kang, Joong-Gu; Jeon, Hyeonjin; Shim, Miseon; Sun Kim, Ji; Leem, Hyun-Sung; Lee, Seung-Hwan

    2017-08-01

    Faces are processed best when they are presented in the left visual field (LVF), a phenomenon known as LVF superiority. Although one eye contributes more when perceiving faces, it is unclear how the dominant eye (DE), the eye we unconsciously use when performing a monocular task, affects face processing. Here, we examined the influence of the DE on the LVF superiority for faces using event-related potentials. Twenty left-eye-dominant (LDE group) and 23 right-eye-dominant (RDE group) participants performed the experiments. Face stimuli were randomly presented in the LVF or right visual field (RVF). The RDE group exhibited significantly larger N170 amplitudes compared with the LDE group. Faces presented in the LVF elicited N170 amplitudes that were significantly more negative in the RDE group than they were in the LDE group, whereas the amplitudes elicited by stimuli presented in the RVF were equivalent between the groups. The LVF superiority was maintained in the RDE group but not in the LDE group. Our results provide the first neural evidence of the DE's effects on the LVF superiority for faces. We propose that the RDE may be more biologically specialized for face processing. © The Author (2017). Published by Oxford University Press.

  3. Decoding facial expressions based on face-selective and motion-sensitive areas.

    Science.gov (United States)

    Liang, Yin; Liu, Baolin; Xu, Junhai; Zhang, Gaoyan; Li, Xianglin; Wang, Peiyuan; Wang, Bin

    2017-06-01

    Humans can easily recognize others' facial expressions. Among the brain substrates that enable this ability, considerable attention has been paid to face-selective areas; in contrast, whether motion-sensitive areas, which clearly exhibit sensitivity to facial movements, are involved in facial expression recognition remained unclear. The present functional magnetic resonance imaging (fMRI) study used multi-voxel pattern analysis (MVPA) to explore facial expression decoding in both face-selective and motion-sensitive areas. In a block design experiment, participants viewed facial expressions of six basic emotions (anger, disgust, fear, joy, sadness, and surprise) in images, videos, and eyes-obscured videos. Due to the use of multiple stimulus types, the impacts of facial motion and eye-related information on facial expression decoding were also examined. It was found that motion-sensitive areas showed significant responses to emotional expressions and that dynamic expressions could be successfully decoded in both face-selective and motion-sensitive areas. Compared with static stimuli, dynamic expressions elicited consistently higher neural responses and decoding performance in all regions. A significant decrease in both activation and decoding accuracy due to the absence of eye-related information was also observed. Overall, the findings showed that emotional expressions are represented in motion-sensitive areas in addition to conventional face-selective areas, suggesting that motion-sensitive regions may also effectively contribute to facial expression recognition. The results also suggested that facial motion and eye-related information played important roles by carrying considerable expression information that could facilitate facial expression recognition. Hum Brain Mapp 38:3113-3125, 2017. © 2017 Wiley Periodicals, Inc.
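MVPA decoding of this kind reduces to training a classifier on voxel patterns and testing it on held-out data; a minimal leave-one-out nearest-centroid sketch (real studies typically use linear SVMs and leave-one-run-out splits, so this is only an illustration of the scheme):

```python
import numpy as np

def decode_accuracy(patterns, labels):
    """Leave-one-out MVPA with a nearest-centroid classifier: each
    held-out voxel pattern is assigned the expression class whose mean
    training pattern is closest in Euclidean distance."""
    patterns = np.asarray(patterns, float)
    labels = np.asarray(labels)
    classes = np.unique(labels)
    correct = 0
    for i in range(len(labels)):
        train = np.ones(len(labels), bool)
        train[i] = False
        centroids = np.array([patterns[train & (labels == c)].mean(axis=0)
                              for c in classes])
        dists = np.linalg.norm(centroids - patterns[i], axis=1)
        correct += classes[np.argmin(dists)] == labels[i]
    return correct / len(labels)
```

Accuracy reliably above chance across cross-validation folds is the evidence that a region's voxel patterns carry expression information, which is the logic behind the decoding results reported above.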

  4. Sodium lauryl sulfate-induced irritation in the human face: regional and age-related differences.

    Science.gov (United States)

    Marrakchi, S; Maibach, H I

    2006-01-01

    The particular sensitivity of the human face to care products prompted us to study irritation induced by sodium lauryl sulfate (SLS) in its various regions. We examined regional and age-related differences, correlating basal transepidermal water loss (TEWL) and capacitance with SLS irritation. SLS (2% aq.) was applied under occlusion for 1 h to the forehead, cheek, nose, nasolabial and perioral areas, chin, neck and forearm in two groups of 10 subjects each: one with an average age of 25.2 +/- 4.7 years and the other with an average age of 73.7 +/- 3.9 years. TEWL was measured before and 1 h and 23 h after patch removal. Baseline stratum corneum hydration was also measured. Irritation was assessed by the change in TEWL (deltaTEWL = TEWL after patch removal - basal TEWL) after correction against the control. In the younger group, all areas of the face and the neck reacted to SLS, whereas the forearm did not. In the older group, the nose, perioral area and forearm did not react. In both age groups, some significant differences between the regions of the face were detected. The younger group showed greater changes in TEWL than the older group in all the areas studied, but only in the chin and nasolabial area were the differences statistically significant. Significant correlations were found between basal TEWL and deltaTEWL in 5 of the 7 areas that reacted to SLS. Baseline TEWL is thus one parameter that correlates with the susceptibility of the face to this irritant. 2006 S. Karger AG, Basel

  5. A level-set method for pathology segmentation in fluorescein angiograms and en face retinal images of patients with age-related macular degeneration

    Science.gov (United States)

    Mohammad, Fatimah; Ansari, Rashid; Shahidi, Mahnaz

    2013-03-01

    The visibility and continuity of the inner segment outer segment (ISOS) junction layer of the photoreceptors on spectral domain optical coherence tomography images is known to be related to visual acuity in patients with age-related macular degeneration (AMD). Automatic detection and segmentation of lesions and pathologies in retinal images is crucial for the screening, diagnosis, and follow-up of patients with retinal diseases. One of the challenges of using classical level-set algorithms for segmentation involves the placement of the initial contour. Manually defining the contour or randomly placing it in the image may lead to segmentation of erroneous structures. It is important to be able to define the contour automatically by using information provided by image features. We explored a level-set method which is based on the classical Chan-Vese model and which utilizes image feature information for automatic contour placement for the segmentation of pathologies in fluorescein angiograms and en face retinal images of the ISOS layer. This was accomplished by exploiting a priori knowledge of the shape and intensity distribution, allowing the use of projection profiles to detect the presence of pathologies that are characterized by intensity differences with surrounding areas in retinal images. We first tested our method by applying it to fluorescein angiograms. We then applied our method to en face retinal images of patients with AMD. The experimental results demonstrate that the proposed method provided a quick and improved outcome as compared to the classical Chan-Vese method in which the initial contour is randomly placed, thus indicating the potential to provide a more accurate and detailed view of changes in pathologies due to disease progression and treatment.
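The projection-profile idea for automatic contour placement can be sketched as follows: a pathology brighter than its surroundings shows up as a bump in the row and column intensity profiles, whose extent seeds the initial contour. The thresholding rule here (mean plus a multiple of the standard deviation) is an illustrative choice, not the paper's exact criterion:

```python
import numpy as np

def initial_contour_box(image, k=1.5):
    """Locate a candidate pathology region from row/column projection
    profiles and return its bounding box (row_min, row_max, col_min,
    col_max) to seed the level set, instead of placing the initial
    contour at random. Returns None when no salient region is found."""
    rows = image.mean(axis=1)          # horizontal projection profile
    cols = image.mean(axis=0)          # vertical projection profile
    r_hit = rows > rows.mean() + k * rows.std()
    c_hit = cols > cols.mean() + k * cols.std()
    if not r_hit.any() or not c_hit.any():
        return None                    # fall back to another initialisation
    r_idx, c_idx = np.where(r_hit)[0], np.where(c_hit)[0]
    return r_idx[0], r_idx[-1], c_idx[0], c_idx[-1]
```

The returned box (or a mask built from it) then serves as the initial level set for the Chan-Vese evolution, so the contour starts on the structure of interest.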

  6. Diffusion Tensor Imaging-Based Research on Human White Matter Anatomy

    Directory of Open Access Journals (Sweden)

    Ming-guo Qiu

    2012-01-01

    Full Text Available The aim of this study is to investigate the white matter using diffusion tensor imaging and the Chinese Visible Human dataset, and to provide 3D anatomical data on the corticospinal tract for neurosurgical planning by studying the probabilistic maps and the reproducibility of the corticospinal tract. Diffusion tensor images and high-resolution T1-weighted images of 15 healthy volunteers were acquired; the DTI data were processed using DtiStudio and FSL software. The FA and color FA maps were compared with the sectional images of the Chinese Visible Human dataset. The probability maps of the corticospinal tract were generated as a quantitative measure of reproducibility for each voxel of the stereotaxic space. The fibers displayed by diffusion tensor imaging were highly consistent with the sectional images of the Chinese Visible Human dataset and with existing anatomical knowledge. The three-dimensional architecture of the white matter fibers could be clearly visualized on the diffusion tensor tractography. Diffusion tensor tractography can establish 3D probability maps of the corticospinal tract, in which the degree of intersubject reproducibility of the corticospinal tract is consistent with the previous architectonic report. DTI is a reliable method for studying fiber connectivity in the human brain, but it is difficult to identify fine fibers. The probability maps are useful for evaluating and identifying the corticospinal tract in DTI, providing anatomical information for preoperative planning and improving the accuracy of surgical risk assessments.
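A voxel-wise probability map of this kind is simply the across-subject average of binary tract masks in the common stereotaxic space; a minimal sketch (assuming the per-subject masks are already spatially normalised):

```python
import numpy as np

def tract_probability_map(subject_masks):
    """Voxel-wise probability map: the fraction of subjects whose
    tracked corticospinal tract passes through each voxel of the common
    stereotaxic space (binary masks, one per subject)."""
    return np.asarray(subject_masks, float).mean(axis=0)

def reproducible_volume(prob_map, threshold=0.5):
    """Voxel count reproduced in at least `threshold` of subjects, a
    simple scalar summary of intersubject reproducibility."""
    return int((prob_map >= threshold).sum())
```

Thresholding the map at different probabilities is what lets reproducibility be reported per voxel rather than as a single overlap figure.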

  7. Is beauty in the face of the beholder?

    Directory of Open Access Journals (Sweden)

    Bruno Laeng

    Full Text Available Opposing forces influence assortative mating: one seeks a similar mate while at the same time avoiding inbreeding with close relatives. Thus, mate choice may be a balancing of phenotypic similarity and dissimilarity between partners. In the present study, we assessed the role of resemblance to Self's facial traits in judgments of physical attractiveness. Participants chose the most attractive face image of their romantic partner among several variants, where the faces were morphed so as to include only 22% of another face. Participants distinctly preferred a "Self-based morph" (i.e., their partner's face with a small amount of Self's face blended into it) over other morphed images. The Self-based morph was also preferred to the morph of their partner's face blended with the partner's same-sex "prototype", although the latter face was ("objectively") judged more attractive by other individuals. When ranking morphs differing in level of amalgamation (i.e., 11% vs. 22% vs. 33% of another face), the 22% morph was chosen consistently as the preferred one, particularly when Self was blended into the partner's face. A forced-choice signal-detection paradigm showed that the effect of self-resemblance operated at an unconscious level, since the same participants were unable to detect the presence of their own faces in the above morphs. We conclude that individuals, given the opportunity, seek to promote "positive assortment" for Self's phenotype, especially when the level of similarity approaches an optimal point that is similar to Self without causing a conscious acknowledgment of the similarity.

  8. Thresholding magnetic resonance images of human brain

    Institute of Scientific and Technical Information of China (English)

    Qing-mao HU; Wieslaw L NOWINSKI

    2005-01-01

    In this paper, methods are proposed and validated to determine the low and high thresholds used to segment out gray matter and white matter in MR images of different pulse sequences of the human brain. First, a two-dimensional reference image is determined to represent the intensity characteristics of the original three-dimensional data. Then a region of interest of the reference image is determined where brain tissues are present. Unsupervised fuzzy c-means clustering is employed to determine the threshold for obtaining the head mask, the low threshold for T2-weighted and PD-weighted images, and the high threshold for T1-weighted, SPGR and FLAIR images. Supervised range-constrained thresholding is employed to determine the low threshold for T1-weighted, SPGR and FLAIR images. Thresholding based on pairs of boundary pixels is proposed to determine the high threshold for T2- and PD-weighted images. Quantification against public data sets with various noise and inhomogeneity levels shows that the proposed methods yield segmentations robust to noise and intensity inhomogeneity. Qualitatively, the proposed methods work well with real clinical data.
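A minimal 1-D fuzzy c-means on the intensity sample, with thresholds then placed midway between adjacent cluster centres, might look like this (the percentile initialization and midpoint threshold placement are illustrative simplifications of the paper's procedure):

```python
import numpy as np

def fcm_1d(values, n_clusters=3, m=2.0, n_iter=50):
    """Minimal unsupervised fuzzy c-means on a 1-D intensity sample;
    returns the sorted cluster centres. m is the fuzziness exponent."""
    x = np.asarray(values, float).ravel()
    # spread initial centres over the intensity range via percentiles
    centres = np.percentile(x, np.linspace(10, 90, n_clusters))
    for _ in range(n_iter):
        d = np.abs(x[:, None] - centres[None, :]) + 1e-12
        u = 1.0 / d ** (2.0 / (m - 1.0))      # membership ~ d^(-2/(m-1))
        u /= u.sum(axis=1, keepdims=True)     # normalise per sample
        um = u ** m
        centres = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
    return np.sort(centres)

def midpoint_thresholds(centres):
    """Thresholds halfway between adjacent cluster centres."""
    return [(a + b) / 2 for a, b in zip(centres[:-1], centres[1:])]
```

With three clusters roughly corresponding to background/CSF, gray matter and white matter, the two midpoints play the role of the low and high thresholds discussed above.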

  9. Designing of Medium-Size Humanoid Robot with Face Recognition Features

    Directory of Open Access Journals (Sweden)

    Christian Tarunajaya

    2016-02-01

    Full Text Available Nowadays, many robots have been developed that can receive commands and perform speech recognition and face recognition. In this research, we develop a humanoid robot system with a controller based on the Raspberry Pi 2. The methods we use are based on audio recognition and detection, and on face recognition using PCA (Principal Component Analysis) with OpenCV and Python. PCA is an algorithm for face recognition that reduces the dimensionality of the image data; the results of this reduction, known as eigenfaces, are then used in the recognition process. In this research, we still find some false recognitions. These can be caused by many factors, such as the condition of the database (images that are too dark or insufficiently varied) or blurred test images. The accuracy over 3 tests on different people is about 93% (28 correct recognitions out of 30).
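The eigenface pipeline the abstract refers to (PCA projection followed by nearest-neighbour matching) can be sketched in a few lines of numpy; this is a generic textbook version, not the robot's OpenCV implementation:

```python
import numpy as np

def train_eigenfaces(faces, n_components):
    """PCA on flattened, mean-centred face images: the top right-singular
    vectors of the centred data matrix are the 'eigenfaces' that span
    the reduced space."""
    X = np.asarray(faces, float)
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def project(face, mean, eigenfaces):
    """Coordinates of one face in eigenface space."""
    return eigenfaces @ (np.asarray(face, float) - mean)

def recognise(face, gallery, labels, mean, eigenfaces):
    """Nearest neighbour in eigenface space."""
    probe = project(face, mean, eigenfaces)
    coeffs = [project(g, mean, eigenfaces) for g in gallery]
    dists = [np.linalg.norm(probe - c) for c in coeffs]
    return labels[int(np.argmin(dists))]
```

The dark or poorly varied training images mentioned above hurt exactly this step: the eigenfaces then capture lighting variation rather than identity, so probe images project near the wrong gallery entries.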

  10. En-face imaging of the ellipsoid zone in the retina from optical coherence tomography B-scans

    Science.gov (United States)

    Holmes, T.; Larkin, S.; Downing, M.; Csaky, K.

    2015-03-01

    It is generally believed that photoreceptor integrity is related to the ellipsoid zone appearance in optical coherence tomography (OCT) B-scans. Algorithms and software were developed for viewing and analyzing the ellipsoid zone. The software performs the following: (a) automated ellipsoid zone isolation in the B-scans; (b) en-face viewing of the ellipsoid-zone reflectance; (c) alignment and overlay of (b) onto reflectance images of the retina; and (d) alignment and overlay of (c) with microperimetry sensitivity points. Dataset groups from normal and dry age-related macular degeneration (DAMD) subjects were compared. Scalar measurements for correlation against condition included the mean and standard deviation of the ellipsoid zone's reflectance. The image-processing techniques for automatically finding the ellipsoid zone are based upon a calculation of optical flow, which tracks the edges of laminated structures across an image. Statistical significance was shown in t-tests of these measurements with the population pools separated into normal and DAMD subjects. A display of en-face ellipsoid-zone reflectance shows a clear and recognizable difference between the normal and DAMD subjects, in that they show generally uniform and nonuniform reflectance, respectively, over the region near the macula. Regions surrounding points of low microperimetry (μP) sensitivity have irregular and lower levels of ellipsoid-zone reflectance. These findings support the idea that photoreceptor integrity could affect both the ellipsoid-zone reflectance and the sensitivity measurements.
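The scalar reflectance measures and the group comparison can be sketched as follows; the Welch t statistic here is a generic stand-in for whatever exact test the authors ran:

```python
import numpy as np

def ez_reflectance_stats(en_face_img, mask=None):
    """Mean and standard deviation of ellipsoid-zone reflectance over a
    region of interest: roughly uniform reflectance (low SD) in normals,
    patchy loss (high SD) in dry AMD, per the abstract."""
    vals = en_face_img[mask] if mask is not None else np.ravel(en_face_img)
    return float(np.mean(vals)), float(np.std(vals))

def welch_t(group_a, group_b):
    """Two-sample Welch t statistic, e.g. for comparing per-subject
    reflectance SDs between the normal and DAMD groups."""
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    se2 = a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b)
    return float((a.mean() - b.mean()) / np.sqrt(se2))
```

A large-magnitude t on the per-subject standard deviations is what "statistical significance in t-tests of these measurements" amounts to operationally.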

  11. [Comparison of the Haversian system between human and animal bones by imaging analysis].

    Science.gov (United States)

    Lu, Hui-Ling; Zheng, Jing; Yao, Ya-Nan; Chen, Sen; Wang, Hui-Pin; Chen, Li-Xian; Guo, Jing-Yuan

    2006-04-01

    To explore the differences in the Haversian system between human and animal bones through imaging analysis and morphological description, thirty-five ground bone slices from humans as well as dogs, pigs, cows and sheep were observed to compare their structures and then analysed under a research microscope. Plexiform bone and osteon bands were not found in human bones, and there were significant differences in the shape, size, location and density of the Haversian system between human and animal bones. The number of Haversian lamellae and the diameter of the central canal were largest in humans; significant differences in central canal diameter and total area percentage between human and animal bones were shown by imaging analysis. In conclusion: (1) plexiform bone and osteon bands could serve as exclusionary indices for human bone; (2) there were significant differences in the structure of the Haversian system between human and animal bones; (3) the percentage of the central canals' total area is valuable for species identification through imaging analysis.

  12. A Classification Framework for Large-Scale Face Recognition Systems

    OpenAIRE

    Zhou, Ziheng; Deravi, Farzin

    2009-01-01

    This paper presents a generic classification framework for large-scale face recognition systems. Within the framework, a data sampling strategy is proposed to tackle the data imbalance when image pairs are sampled from thousands of face images for preparing a training dataset. A modified kernel Fisher discriminant classifier is proposed to make it computationally feasible to train the kernel-based classification method using tens of thousands of training samples. The framework is tested in an...
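The abstract does not detail the sampling strategy, but the imbalance it addresses is generic: genuine (same-person) pairs are scarce while impostor (different-person) pairs grow quadratically with the number of identities. A hedged sketch of balanced pair sampling (enumerating all impostor pairs, as done here for clarity, would itself need replacing by lazy sampling at the scale of thousands of images):

```python
import itertools
import random

def sample_training_pairs(images_by_id, n_pairs, seed=0):
    """Draw up to n_pairs genuine and n_pairs impostor image pairs so
    the two classes are balanced in the training set. images_by_id maps
    each identity to its list of image references."""
    rng = random.Random(seed)
    genuine = [(a, b) for imgs in images_by_id.values()
               for a, b in itertools.combinations(imgs, 2)]
    impostor = [(a, b)
                for i, j in itertools.combinations(images_by_id, 2)
                for a in images_by_id[i] for b in images_by_id[j]]
    return (rng.sample(genuine, min(n_pairs, len(genuine))),
            rng.sample(impostor, min(n_pairs, len(impostor))))
```

Subsampling impostors to roughly the genuine count keeps the classifier from trivially favouring the majority "different person" class.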

  13. DSP implementation of a fast multi-angle face tracking algorithm

    Institute of Scientific and Technical Information of China (English)

    Jiang Junjin; Wang Zengcai; Zhu Shuliang

    2012-01-01

    In the field of real-time face tracking for driver fatigue detection, the classic algorithms are so complex that a DSP system cannot track a face quickly and accurately across multiple head angles, so a new fast face tracking algorithm is presented. Using a YCbCr skin-color model, the image is first preprocessed and skin color is detected to extract the face region. By computing statistics on the luminance signal Y, the face boundaries are determined, and a symmetry-based similarity measure is then used to verify the tracked face. In this way, the face region can be tracked. Experimental results indicate that this algorithm is simple and robust and can quickly track faces at multiple angles in color images.
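The skin-color step of the algorithm can be sketched with the standard BT.601 RGB-to-YCbCr conversion and a commonly cited chrominance range for skin (the exact bounds vary across papers and are not given in the abstract):

```python
import numpy as np

def skin_mask(rgb):
    """Skin-colour detection in YCbCr: convert RGB (0-255 floats) to
    Cb/Cr via the ITU-R BT.601 equations and keep pixels whose
    chrominance falls in a commonly used skin range."""
    rgb = np.asarray(rgb, float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)

def face_bounding_box(mask):
    """Bounding box of all detected skin pixels as the candidate face
    region (a real tracker would first keep the largest connected
    component); None when no skin is found."""
    rows, cols = np.where(mask)
    if rows.size == 0:
        return None
    return rows.min(), rows.max(), cols.min(), cols.max()
```

The luminance channel Y, which the chrominance test deliberately ignores, is what the algorithm's boundary statistics and symmetry check then operate on.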

  14. The effect of human image in B2C website design: an eye-tracking study

    Science.gov (United States)

    Wang, Qiuzhen; Yang, Yi; Wang, Qi; Ma, Qingguo

    2014-09-01

    On B2C shopping websites, effective visual design can elicit positive emotional experiences in consumers. From this perspective, this article developed a research model to explore the impact of human images, as a visual element, on consumers' online shopping emotions and their subsequent attitudes towards websites. The study used an eye-tracking experiment to collect both eye movement data and questionnaire data to test the research model. Questionnaire data analysis showed that product pictures combined with a human image induced positive emotions among participants, thus promoting their attitudes towards online shopping websites. Specifically, product pictures with a human image produced higher levels of image appeal and perceived social presence, thereby stimulating higher levels of enjoyment and subsequently more positive attitudes towards the websites. Moreover, product type moderated the relationship between the presence of a human image and the level of image appeal: a human image significantly increased image appeal when integrated into entertainment product pictures, whereas this relationship was not significant for utilitarian products. Eye-tracking data analysis further supported these results and provided plausible explanations. The presence of a human image significantly increased participants' pupil size regardless of product type. For entertainment products, participants paid more attention to product pictures integrated with a human image, whereas for utilitarian products more attention was paid to the products' functional information than to the product pictures, whether or not a human image was included.

  15. The human brain and face: mechanisms of cranial, neurological and facial development revealed through malformations of holoprosencephaly, cyclopia and aberrations in chromosome 18.

    Science.gov (United States)

    Gondré-Lewis, Marjorie C; Gboluaje, Temitayo; Reid, Shaina N; Lin, Stephen; Wang, Paul; Green, William; Diogo, Rui; Fidélia-Lambert, Marie N; Herman, Mary M

    2015-09-01

    The study of inborn genetic errors can lend insight into mechanisms of normal human development and congenital malformations. Here, we present the first detailed comparison of cranial pathology and neuropathology in two exceedingly rare human individuals with cyclopia and alobar holoprosencephaly (HPE), in the presence and absence of an aberrant chromosome 18 (aCh18). The aCh18 fetus contained one normal Ch18 and one with a pseudo-isodicentric duplication of chromosome 18q and a partial deletion of 18p, from 18p11.31, where the HPE gene TGIF resides, to the p terminus. In addition to synophthalmia, the aCh18 cyclopic malformations included a failure of induction of most of the telencephalon - closely approximating anencephaly - unchecked development of brain stem structures, near absence of the sphenoid bone, and a malformed neurocranium and viscerocranium that constitute the median face. Although there was complete erasure of the olfactory and superior nasal structures, rudiments of nasal structures derived from the maxillary bone were evident, but pharyngeal structures were absent. The second, non-aCh18 cyclopic fetus was initially classified as a true Cyclops, as it appeared to have a proboscis and one median eye with a single iris, but further analysis revealed two eye globes, as expected for synophthalmic cyclopia. Furthermore, the proboscis was associated with the medial ethmoid ridge, consistent with incomplete induction of these nasal structures, even as the nasal septum and paranasal sinuses were apparently developed. An important conclusion of this study is that it is the brain that predicts the overall configuration of the face, owing to its influence on the development of the surrounding skeletal structures. The present data, obtained using a combination of macroscopic, computed tomography (CT) and magnetic resonance imaging (MRI) techniques, provide an unparalleled analysis of the extent of the effects of median defects, and insight into normal development and patterning of the brain.

  16. Comparison between Face and Object Processing in Youths with Autism Spectrum Disorder: An event related potentials study.

    Directory of Open Access Journals (Sweden)

    Anahita Khorrami

    2013-12-01

    Full Text Available Impaired face perception and recognition is one of the core deficits in autism spectrum disorder (ASD). Event-related potential (ERP) studies have yielded conflicting findings on autistic brain responses to faces and objects. The current investigation examined the ERP components of young people with ASD, compared to a typically developing (TD) group, when looking at upright and inverted images of faces and cars. Fourteen children and adolescents aged 9 to 17 diagnosed with ASD were compared with 18 age- and gender-matched typically developing individuals. ERPs were recorded while participants viewed images of human faces and objects in both upright and inverted positions. N170 latency and amplitude were compared between the two groups in the upright and inverted conditions using repeated-measures analysis. Upright faces were processed faster than inverted faces in the TD group, although the difference was not significant. A significant N170 latency difference was observed between the two groups across stimulus categories such as objects and faces (p<0.05). Moreover, inverted stimuli elicited a greater N170 amplitude than upright stimuli in both groups, an effect that was significantly more prominent in the right hemisphere (p<0.05); the N170 amplitude was greater for inverted than upright stimuli irrespective of stimulus type and group. These data suggest that youths with ASD have difficulty processing information, particularly in face perception, regardless of stimulus orientation.
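A minimal sketch of the N170 measurement this kind of study relies on: given an averaged ERP waveform, take the most negative sample inside a post-stimulus search window as the component's peak, reporting its latency and amplitude. The 130-200 ms window and the function name are assumptions for illustration, not parameters reported in the abstract.

```python
import numpy as np

def n170_peak(erp, sfreq, window=(0.13, 0.20)):
    """Return (latency_s, amplitude) of the N170-like deflection: the most
    negative point of an averaged ERP waveform inside the search window.

    erp    : 1-D array of voltages, time-locked to stimulus onset (t = 0)
    sfreq  : sampling frequency in Hz
    window : (start, end) search window in seconds (assumed 130-200 ms here)
    """
    start = int(round(window[0] * sfreq))
    end = int(round(window[1] * sfreq))
    segment = erp[start:end]
    idx = int(np.argmin(segment))          # most negative sample in the window
    latency = (start + idx) / sfreq
    return latency, float(segment[idx])
```

In practice the peak would be measured per participant and condition and the latencies/amplitudes fed into the repeated-measures analysis; this sketch only covers the peak extraction step.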

  17. Preferential amygdala reactivity to the negative assessment of neutral faces.

    Science.gov (United States)

    Blasi, Giuseppe; Hariri, Ahmad R; Alce, Guilna; Taurisano, Paolo; Sambataro, Fabio; Das, Saumitra; Bertolino, Alessandro; Weinberger, Daniel R; Mattay, Venkata S

    2009-11-01

    Prior studies suggest that the amygdala shapes complex behavioral responses to socially ambiguous cues. We explored human amygdala function during explicit behavioral decision making about discrete emotional facial expressions that can represent socially unambiguous and ambiguous cues. During functional magnetic resonance imaging, 43 healthy adults were required to make complex social decisions (i.e., approach or avoid) about either relatively unambiguous (i.e., angry, fearful, happy) or ambiguous (i.e., neutral) facial expressions. Amygdala activation during this task was compared with that elicited by simple, perceptual decisions (sex discrimination) about the identical facial stimuli. Angry and fearful expressions were more frequently judged as avoidable and happy expressions most often as approachable. Neutral expressions were equally judged as avoidable and approachable. Reaction times to neutral expressions were longer than those to angry, fearful, and happy expressions during social judgment only. Imaging data on stimuli judged to be avoided revealed a significant task by emotion interaction in the amygdala. Here, only neutral facial expressions elicited greater activity during social judgment than during sex discrimination. Furthermore, during social judgment only, neutral faces judged to be avoided were associated with greater amygdala activity relative to neutral faces that were judged as approachable. Moreover, functional coupling between the amygdala and both dorsolateral prefrontal (social judgment > sex discrimination) and cingulate (sex discrimination > social judgment) cortices was differentially modulated by task during processing of neutral faces. Our results suggest that increased amygdala reactivity and differential functional coupling with prefrontal circuitries may shape complex decisions and behavioral responses to socially ambiguous cues.

  18. Neural correlates of face and object perception in an awake chimpanzee (Pan troglodytes) examined by scalp-surface event-related potentials.

    Directory of Open Access Journals (Sweden)

    Hirokata Fukushima

    Full Text Available BACKGROUND: The neural system of our closest living relative, the chimpanzee, is a topic of increasing research interest. However, electrophysiological examinations of neural activity during visual processing in awake chimpanzees are currently lacking. METHODOLOGY/PRINCIPAL FINDINGS: In the present report, skin-surface event-related brain potentials (ERPs) were measured while a fully awake chimpanzee observed photographs of faces and objects in two experiments. In Experiment 1, human faces and stimuli composed of scrambled face images were displayed. In Experiment 2, three types of pictures (faces, flowers, and cars) were presented. The waveforms evoked by face stimuli were distinguished from those of other stimulus types, as reflected by an enhanced early positivity before 200 ms post-stimulus and an enhanced late negativity after 200 ms over posterior and occipito-temporal sites. Face-sensitive activity was clearly observed in both experiments. However, in contrast to the robustly observed face-evoked N170 component in humans, faces did not elicit a peak in the 150-200 ms latency range in either experiment. CONCLUSIONS/SIGNIFICANCE: Although this pilot study examined a single subject and requires further examination, the observed scalp voltage patterns suggest that selective processing of faces in the chimpanzee brain can be detected by recording surface ERPs. In addition, this non-invasive method for examining an awake chimpanzee can be used to extend our knowledge of the characteristics of visual cognition in other primate species.

  19. Face recognition by combining eigenface method with different wavelet subbands

    Institute of Scientific and Technical Information of China (English)

    MA Yan; LI Shun-bao

    2006-01-01

    A method combining eigenfaces with different wavelet subbands for face recognition is proposed. Each training image is decomposed into multiple subbands, from which eigenvector sets and projection vectors are extracted. In the recognition stage, the inner-product distance between the projection vectors of the test image and those of each training image is calculated. The training image with the maximum value under a given threshold condition is taken as the match. Experimental results on the ORL and YALE face databases show that, compared with applying the eigenface method directly to the image domain or to a single wavelet subband, the proposed method improves recognition accuracy by 5% without affecting recognition speed.
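The scheme in this record (wavelet decomposition of each training image, eigenvector extraction per subband, matching by inner product between projection vectors) can be sketched as follows. This is only a sketch under stated assumptions: a one-level Haar transform stands in for whatever wavelet the paper uses, matching is done with a cosine-normalized inner product, and all class and method names are illustrative.

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar decomposition (averaging form, not orthonormal):
    returns the (LL, LH, HL, HH) subbands of an even-sized grayscale image."""
    a = img.astype(np.float64)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0   # row-wise low-pass
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0   # row-wise high-pass
    ll = (lo[0::2] + lo[1::2]) / 2.0
    lh = (lo[0::2] - lo[1::2]) / 2.0
    hl = (hi[0::2] + hi[1::2]) / 2.0
    hh = (hi[0::2] - hi[1::2]) / 2.0
    return ll, lh, hl, hh

class SubbandEigenface:
    """Eigenface model fitted on one wavelet subband of each training image."""
    def __init__(self, band=0, n_components=5):
        self.band = band                    # 0=LL, 1=LH, 2=HL, 3=HH
        self.n_components = n_components

    def fit(self, images):
        X = np.stack([haar2d(im)[self.band].ravel() for im in images])
        self.mean_ = X.mean(axis=0)
        # eigenvectors of the covariance via SVD of the centered data matrix
        _, _, vt = np.linalg.svd(X - self.mean_, full_matrices=False)
        self.components_ = vt[: self.n_components]
        self.train_proj_ = (X - self.mean_) @ self.components_.T
        return self

    def match(self, image):
        """Index of the training image whose projection vector has the largest
        normalized inner product with the test projection."""
        x = haar2d(image)[self.band].ravel() - self.mean_
        p = x @ self.components_.T
        sims = self.train_proj_ @ p / (
            np.linalg.norm(self.train_proj_, axis=1) * np.linalg.norm(p) + 1e-12)
        return int(np.argmax(sims))
```

The paper fuses several subbands; for brevity this sketch fits one subband at a time, and a multi-subband version would combine the per-subband similarity scores.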

  20. Face perception is tuned to horizontal orientation in the N170 time window.

    Science.gov (United States)

    Jacques, Corentin; Schiltz, Christine; Goffaux, Valerie

    2014-02-07

    The specificity of face perception is thought to reside both in its dramatic vulnerability to picture-plane inversion and its strong reliance on horizontally oriented image content. Here we asked when in the visual processing stream face-specific perception is tuned to horizontal information. We measured the behavioral performance and scalp event-related potentials (ERP) when participants viewed upright and inverted images of faces and cars (and natural scenes) that were phase-randomized in a narrow orientation band centered either on vertical or horizontal orientation. For faces, the magnitude of the inversion effect (IE) on behavioral discrimination performance was significantly reduced for horizontally randomized compared to vertically or nonrandomized images, confirming the importance of horizontal information for the recruitment of face-specific processing. Inversion affected the processing of nonrandomized and vertically randomized faces early, in the N170 time window. In contrast, the magnitude of the N170 IE was much smaller for horizontally randomized faces. The present research indicates that the early face-specific neural representations are preferentially tuned to horizontal information and offers new perspectives for a description of the visual information feeding face-specific perception.
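The stimulus manipulation described here, phase randomization restricted to a narrow orientation band of the Fourier spectrum, can be sketched with NumPy FFTs. Drawing the random phases from the spectrum of a white-noise image keeps the output real-valued (the noise phase field is Hermitian-symmetric); the angular convention and the bandwidth parameter are illustrative assumptions, not values taken from the study.

```python
import numpy as np

def orientation_phase_randomize(img, center_deg, halfwidth_deg, rng=None):
    """Randomize Fourier phase inside a narrow orientation band, preserving
    the amplitude spectrum everywhere and the phase outside the band.

    center_deg    : orientation of the frequency band, in degrees mod 180
                    (an assumed convention; note that horizontal image
                    structure lives on the vertical frequency axis)
    halfwidth_deg : half-width of the band around center_deg
    """
    if rng is None:
        rng = np.random.default_rng()
    F = np.fft.fft2(img)
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    theta = np.degrees(np.arctan2(fy, fx)) % 180.0
    # circular angular distance mod 180, so the mask is symmetric under f -> -f
    band = np.abs(((theta - center_deg + 90.0) % 180.0) - 90.0) < halfwidth_deg
    # Hermitian-symmetric random phases: the phase spectrum of real white noise
    noise_phase = np.angle(np.fft.fft2(rng.standard_normal(img.shape)))
    new_phase = np.where(band, noise_phase, np.angle(F))
    return np.real(np.fft.ifft2(np.abs(F) * np.exp(1j * new_phase)))
```

Because the mask is symmetric and both phase fields are antisymmetric, the inverse transform is real up to floating-point noise, which `np.real` discards.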