WorldWideScience

Sample records for human face images

  1. Image Pixel Fusion for Human Face Recognition

    CERN Document Server

    Bhowmik, Mrinal Kanti; Nasipuri, Mita; Basu, Dipak Kumar; Kundu, Mahantapas

    2010-01-01

    In this paper we present a technique for the fusion of optical and thermal face images based on an image pixel fusion approach. Among the several factors that affect face recognition performance with visual images, illumination change is a significant one that needs to be addressed. Thermal images handle illumination conditions better but are not very consistent in capturing the texture details of faces. Other factors such as sunglasses, beards and moustaches also add complications to the recognition process. Fusion of thermal and visual images is a solution to overcome the drawbacks present in the individual thermal and visual face images. Here fused images are projected into an eigenspace and the projected images are classified using a radial basis function (RBF) neural network and also by a multi-layer perceptron (MLP). In the experiments the Object Tracking and Classification Beyond Visible Spectrum (OTCBVS) benchmark database of thermal and visual face images has been used. Compar...
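
    A minimal sketch of the pixel-level fusion and eigenspace projection described above, assuming a simple weighted average for the fusion step and an arbitrary eigenspace size; the abstract gives neither parameter, and the RBF/MLP classifiers are omitted.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    def pixel_fuse(visual, thermal, alpha=0.5):
        """Weighted pixel-level fusion of co-registered visual and thermal face images."""
        return alpha * visual.astype(np.float64) + (1.0 - alpha) * thermal.astype(np.float64)

    def project_to_eigenspace(fused_images, n_components=50):
        """Flatten the fused images and project them onto a PCA eigenspace."""
        X = np.stack([img.ravel() for img in fused_images])
        pca = PCA(n_components=n_components)
        return pca.fit_transform(X), pca
    ```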

  2. Demographic Estimation from Face Images: Human vs. Machine Performance.

    Science.gov (United States)

    Han, Hu; Otto, Charles; Liu, Xiaoming; Jain, Anil K

    2015-06-01

    Demographic estimation entails automatic estimation of the age, gender and race of a person from a face image, and has many potential applications ranging from forensics to social media. Automatic demographic estimation, particularly age estimation, remains a challenging problem because persons belonging to the same demographic group can be vastly different in their facial appearances due to intrinsic and extrinsic factors. In this paper, we present a generic framework for automatic demographic (age, gender and race) estimation. Given a face image, we first extract demographically informative features via a boosting algorithm, and then employ a hierarchical approach consisting of between-group classification and within-group regression. Quality assessment is also developed to identify low-quality face images from which it is difficult to obtain reliable demographic estimates. Experimental results on a diverse set of face image databases, FG-NET (1K images), FERET (3K images), MORPH II (75K images), PCSO (100K images), and a subset of LFW (4K images), show that the proposed approach has superior performance compared to the state of the art. Finally, we use crowdsourcing to study the human ability to estimate demographics from face images. A side-by-side comparison of the demographic estimates from crowdsourced data and the proposed algorithm provides a number of insights into this challenging problem.
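
    A minimal sketch of the hierarchical "between-group classification, within-group regression" idea for age estimation, assuming generic scikit-learn models in place of the paper's boosted features and estimators; X, group_labels and ages are hypothetical placeholders.

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.svm import SVR

    def train_hierarchical(X, group_labels, ages):
        # Between-group classifier: predicts the demographic/age group of a face.
        clf = GradientBoostingClassifier().fit(X, group_labels)
        # Within-group regressors: one age regressor per group.
        regressors = {g: SVR().fit(X[group_labels == g], ages[group_labels == g])
                      for g in np.unique(group_labels)}
        return clf, regressors

    def estimate_age(x, clf, regressors):
        g = clf.predict(x.reshape(1, -1))[0]                  # between-group classification
        return regressors[g].predict(x.reshape(1, -1))[0]     # within-group regression
    ```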

  3. Recognizing age-separated face images: humans and machines.

    Science.gov (United States)

    Yadav, Daksha; Singh, Richa; Vatsa, Mayank; Noore, Afzel

    2014-01-01

    Humans utilize facial appearance, gender, expression, aging pattern, and other ancillary information to recognize individuals. It is interesting to observe how humans perceive facial age. Analyzing these properties can help in understanding the phenomenon of facial aging, and incorporating the findings can help in designing effective algorithms. Such a study has two components: facial age estimation and age-separated face recognition. Age estimation involves predicting the age of an individual given his/her facial image. On the other hand, age-separated face recognition consists of recognizing an individual given his/her age-separated images. In this research, we investigate which facial cues are utilized by humans for estimating the age of people belonging to various age groups, along with analyzing the effect of one's gender, age, and ethnicity on age estimation skills. We also analyze how various facial regions such as the binocular and mouth regions influence age estimation and recognition capabilities. Finally, we propose an age-invariant face recognition algorithm that incorporates the knowledge learned from these observations. Key observations of our research are: (1) the age group of newborns and toddlers is easiest to estimate, (2) gender and ethnicity do not affect the judgment of age group estimation, (3) the face, as a global feature, is essential to achieve good performance in age-separated face recognition, and (4) the proposed algorithm yields improved recognition performance compared to existing algorithms and also outperforms a commercial system when the younger image is used as the probe.

  4. Recognizing age-separated face images: humans and machines.

    Directory of Open Access Journals (Sweden)

    Daksha Yadav

    Full Text Available Humans utilize facial appearance, gender, expression, aging pattern, and other ancillary information to recognize individuals. It is interesting to observe how humans perceive facial age. Analyzing these properties can help in understanding the phenomenon of facial aging, and incorporating the findings can help in designing effective algorithms. Such a study has two components: facial age estimation and age-separated face recognition. Age estimation involves predicting the age of an individual given his/her facial image. On the other hand, age-separated face recognition consists of recognizing an individual given his/her age-separated images. In this research, we investigate which facial cues are utilized by humans for estimating the age of people belonging to various age groups, along with analyzing the effect of one's gender, age, and ethnicity on age estimation skills. We also analyze how various facial regions such as the binocular and mouth regions influence age estimation and recognition capabilities. Finally, we propose an age-invariant face recognition algorithm that incorporates the knowledge learned from these observations. Key observations of our research are: (1) the age group of newborns and toddlers is easiest to estimate, (2) gender and ethnicity do not affect the judgment of age group estimation, (3) the face, as a global feature, is essential to achieve good performance in age-separated face recognition, and (4) the proposed algorithm yields improved recognition performance compared to existing algorithms and also outperforms a commercial system when the younger image is used as the probe.

  5. Digital image watermarking on a special object: the human face

    Science.gov (United States)

    Oh, HwangSeok; Chang, Duk-Ho; Lee, Choong-Hoon; Lee, Heung-Kyu

    2000-05-01

    In this paper, we present a method for protecting digital content by embedding a watermark in a special object, namely the human face. To insert the watermark signal, which is composed of noise-like binary signals, we first localize the face regions within images using color and edge information. The skin color area is filtered out, and an edge detector is then applied to the skin area to find face features. These features are used to decide whether the skin area is a face region or not. The face region is divided into non-overlapping sub-blocks and a watermark bit is inserted into each sub-block by considering the block activity. We insert a watermark bit in the DCT domain of each sub-block. The level of modification of the DCT coefficients is determined by considering the block variance. The non-zero DCT coefficients are selected and modified according to the robustness levels, and the inverse DCT is then performed. The watermark is extracted by comparing against the original image in the DCT domain. The robustness of the watermarking is similar to other DCT-domain methods, but it offers good visual quality and, in psychological terms, discourages intentional external piracy.
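
    A minimal sketch of embedding one watermark bit in the DCT domain of a face sub-block, as the abstract describes; the 8x8 block size, the chosen mid-band coefficient and the fixed strength are assumptions, whereas the paper scales the modification by the block variance.

    ```python
    import numpy as np
    from scipy.fftpack import dct, idct

    def dct2(block):
        return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

    def idct2(coeffs):
        return idct(idct(coeffs, axis=0, norm='ortho'), axis=1, norm='ortho')

    def embed_bit(sub_block, bit, strength=2.0):
        """Embed a single watermark bit in a face sub-block (e.g. 8x8 pixels)."""
        coeffs = dct2(sub_block.astype(np.float64))
        coeffs[3, 4] += strength if bit else -strength   # nudge one mid-band coefficient
        return np.clip(idct2(coeffs), 0, 255)
    ```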

  6. Expressive line drawings of human faces from range images

    Institute of Scientific and Technical Information of China (English)

    HUANG YueZhu; MARTIN Ralph R.; ROSIN Paul L.; MENG XiangXu; YANG ChengLei

    2009-01-01

    We propose a novel technique to extract features from a range image and use them to produce a 3D pen-and-ink style portrait similar to a traditional artistic drawing. Unlike most previous template-based, component-based or example-based face sketching methods, which work from a frontal photograph as input, our system uses a range image as input. Our method runs in real-time for models of moderate complexity, allowing the pose and drawing style to be modified interactively. Portrait drawing in our system makes use of occluding contours and suggestive contours as the most important shape cues. However, current 3D feature line detection methods require a smooth mesh and cannot be reliably applied directly to noisy range images. We thus present an improved silhouette line detection algorithm. Feature edges related to the significant parts of a face are extracted from the range image, connected, and smoothed, allowing us to construct chains of line paths which can then be rendered as desired. We also incorporate various portrait-drawing principles to provide several simple yet effective non-photorealistic portrait renderers such as a pen-and-ink shader, a hatch shader and a sketch shader. These are able to generate various life-like impressions in different styles from a user-chosen viewpoint. To obtain satisfactory results, we refine rendered output by smoothing changes in line thickness and opacity. We are careful to provide appropriate visual cues to enhance the viewer's comprehension of the human face. Our experimental results demonstrate the robustness and effectiveness of our approach, and further suggest that our approach can be extended to other 3D geometric objects.

  7. Face Synthesis (FASY) System for Generation of a Face Image from Human Description

    CERN Document Server

    Halder, Santanu; Nasipuri, Mita; Basu, Dipak Kumar; Kundu, Mahantapas

    2010-01-01

    This paper aims at generating a new face based on a human-like description, using a new concept. The FASY (FAce SYnthesis) System is a face database retrieval and new face generation system that is under development. One of its main features is the generation of the requested face when it is not found in the existing database, which also allows the database to grow continuously.

  8. Image-based modeling of objects and human faces

    Science.gov (United States)

    Zhang, Zhengyou

    2000-12-01

    This paper provides an overview of our project on 3D object and face modeling from images taken by a free-moving camera. We strive to advance the state of the art in 3D computer vision and to develop flexible and robust techniques that allow ordinary users to gain 3D experience from a set of casually collected 2D images. Applications include product advertisement on the Web, virtual conferencing, and interactive games. We briefly cover the following topics: camera calibration, stereo rectification, image matching, 3D photo editing, object modeling, and face modeling. Demos on the last three topics will be shown during the conference.

  9. Recognizing Age-Separated Face Images: Humans and Machines

    OpenAIRE

    Daksha Yadav; Richa Singh; Mayank Vatsa; Afzel Noore

    2014-01-01

    Humans utilize facial appearance, gender, expression, aging pattern, and other ancillary information to recognize individuals. It is interesting to observe how humans perceive facial age. Analyzing these properties can help in understanding the phenomenon of facial aging and incorporating the findings can help in designing effective algorithms. Such a study has two components - facial age estimation and age-separated face recognition. Age estimation involves predicting the age of an individua...

  10. Our Faces in the Dog's Brain: Functional Imaging Reveals Temporal Cortex Activation during Perception of Human Faces.

    Science.gov (United States)

    Cuaya, Laura V; Hernández-Pérez, Raúl; Concha, Luis

    2016-01-01

    Dogs have a rich social relationship with humans. One fundamental aspect of it is how dogs pay close attention to human faces in order to guide their behavior, for example, by recognizing their owner and his/her emotional state using visual cues. It is well known that humans have specific brain regions for the processing of other human faces, yet it is unclear how dogs' brains process human faces. For this reason, our study focuses on describing the brain correlates of perception of human faces in dogs using functional magnetic resonance imaging (fMRI). We trained seven domestic dogs to remain awake, still and unrestrained inside an MRI scanner. We used a visual stimulation paradigm with a block design to compare activity elicited by human faces against everyday objects. Brain activity related to the perception of faces changed significantly in several brain regions, but mainly in the bilateral temporal cortex. The opposite contrast (i.e., everyday objects against human faces) showed no significant brain activity change. The temporal cortex is part of the ventral visual pathway, and our results are consistent with reports in other species, like primates and sheep, that suggest a high degree of evolutionary conservation of this pathway for face processing. This study introduces the temporal cortex as a candidate region for processing human faces, a pillar of social cognition in dogs.

  11. ARTIFICIAL NEURAL NETWORK IN FACE DETECTION HUMAN ON DIGITAL IMAGE

    Directory of Open Access Journals (Sweden)

    Abdusamad Al-Marghilani

    2013-01-01

    Full Text Available The proposed method is formed by a series of filters. Each filter is an independent detection method and allows regions that do not contain face areas to be cut off quickly. For this purpose several different characteristics of the object are used; in addition, each subsequent part of the method processes only the promising areas of the image obtained from the previous parts. The method has been tested on the CMU/MIT test set and compared with other approaches in speed and quality of detection. There are two modifications to the classic use of neural networks in face detection. First, the neural network only tests candidate regions for a face, thus reducing the search space. Second, the window size used when the network scans the input image is adaptive and depends on the size of the candidate region. Both modifications are implemented in MATLAB. The detection quality of the new method is analyzed in comparison with a detection method based on rectangular primitives, and the experimental results show that the proposed method surpasses it in quality. The proposed method, tested on a standard test set, has surpassed all known methods in speed and quality of detection. Our approach requires no pre-treatment, because normalization is handled directly by the weights of the input network.

  12. Brain imaging reveals neuronal circuitry underlying the crow's perception of human faces.

    Science.gov (United States)

    Marzluff, John M; Miyaoka, Robert; Minoshima, Satoshi; Cross, Donna J

    2012-09-25

    Crows pay close attention to people and can remember specific faces for several years after a single encounter. In mammals, including humans, faces are evaluated by an integrated neural system involving the sensory cortex, limbic system, and striatum. Here we test the hypothesis that birds use a similar system by providing an imaging analysis of an awake, wild animal's brain as it performs an adaptive, complex cognitive task. We show that in vivo imaging of crow brain activity during exposure to familiar human faces previously associated with either capture (threatening) or caretaking (caring) activated several brain regions that allow birds to discriminate, associate, and remember visual stimuli, including the rostral hyperpallium, nidopallium, mesopallium, and lateral striatum. Perception of threatening faces activated circuitry including amygdalar, thalamic, and brainstem regions, known in humans and other vertebrates to be related to emotion, motivation, and conditioned fear learning. In contrast, perception of caring faces activated motivation and striatal regions. In our experiments and in nature, when perceiving a threatening face, crows froze and fixed their gaze (decreased blink rate), which was associated with activation of brain regions known in birds to regulate perception, attention, fear, and escape behavior. These findings indicate that, similar to humans, crows use sophisticated visual sensory systems to recognize faces and modulate behavioral responses by integrating visual information with expectation and emotion. Our approach has wide applicability and potential to improve our understanding of the neural basis for animal behavior.

  13. Brain imaging reveals neuronal circuitry underlying the crow’s perception of human faces

    Science.gov (United States)

    Marzluff, John M.; Miyaoka, Robert; Minoshima, Satoshi; Cross, Donna J.

    2012-01-01

    Crows pay close attention to people and can remember specific faces for several years after a single encounter. In mammals, including humans, faces are evaluated by an integrated neural system involving the sensory cortex, limbic system, and striatum. Here we test the hypothesis that birds use a similar system by providing an imaging analysis of an awake, wild animal’s brain as it performs an adaptive, complex cognitive task. We show that in vivo imaging of crow brain activity during exposure to familiar human faces previously associated with either capture (threatening) or caretaking (caring) activated several brain regions that allow birds to discriminate, associate, and remember visual stimuli, including the rostral hyperpallium, nidopallium, mesopallium, and lateral striatum. Perception of threatening faces activated circuitry including amygdalar, thalamic, and brainstem regions, known in humans and other vertebrates to be related to emotion, motivation, and conditioned fear learning. In contrast, perception of caring faces activated motivation and striatal regions. In our experiments and in nature, when perceiving a threatening face, crows froze and fixed their gaze (decreased blink rate), which was associated with activation of brain regions known in birds to regulate perception, attention, fear, and escape behavior. These findings indicate that, similar to humans, crows use sophisticated visual sensory systems to recognize faces and modulate behavioral responses by integrating visual information with expectation and emotion. Our approach has wide applicability and potential to improve our understanding of the neural basis for animal behavior. PMID:22984177

  14. Multi-View Algorithm for Face, Eyes and Eye State Detection in Human Image- Study Paper

    Directory of Open Access Journals (Sweden)

    Latesh Kumari

    2014-07-01

    Full Text Available For fatigue detection, such as in a driver's fatigue monitoring system, eye state analysis is one of the important and deciding steps in determining the fatigue of the driver's eyes. In this study, algorithms for face detection, eye detection and eye state analysis have been studied and presented, and an efficient algorithm for the detection of the face and eyes is proposed. First, an efficient face detection method is presented which finds the face area in human images. Then, novel algorithms for detection of the eye region and eye state are introduced. In this paper we propose multi-view based eye state detection to determine the state of the eye. With the help of a skin color model, the algorithm detects the face regions in the YCbCr color model. By applying skin segmentation, which separates the skin and non-skin pixels of the image, it detects the face regions of the image under various lighting and noise conditions. The eye regions are then extracted within those detected face regions. Our proposed algorithms are fast and robust, as no pattern matching is required.
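
    A minimal sketch of the YCbCr skin-segmentation step described above, assuming commonly cited Cr/Cb thresholds; the paper's actual skin color model and threshold values are not given in the abstract.

    ```python
    import cv2
    import numpy as np

    def skin_mask(bgr_image):
        """Rough skin segmentation in the YCbCr (YCrCb in OpenCV) color space."""
        ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
        lower = np.array([0, 133, 77], dtype=np.uint8)     # Y, Cr, Cb lower bounds (assumed)
        upper = np.array([255, 173, 127], dtype=np.uint8)  # Y, Cr, Cb upper bounds (assumed)
        mask = cv2.inRange(ycrcb, lower, upper)
        # Remove small speckles so that connected skin regions can be taken as face candidates.
        return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    ```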

  15. Retinotopy and attention to the face and house images in the human visual cortex.

    Science.gov (United States)

    Wang, Bin; Yan, Tianyi; Ohno, Seiichiro; Kanazawa, Susumu; Wu, Jinglong

    2016-06-01

    Attentional modulation of the neural activities in human visual areas has been well demonstrated. However, the retinotopic activities that are driven by face and house images, and by attention to face and house images, remain unknown. In the present study, we used images of faces and houses to estimate the retinotopic activities under three conditions: driven by both the images and attention to the images, driven by attention to the images alone, and driven by the images alone. Generally, our results show that both face and house images produced similar retinotopic activities in visual areas, which were only observed in the attention + stimulus and the attention conditions, but not in the stimulus condition. The fusiform face area (FFA) responded to faces that were presented on the horizontal meridian, whereas the parahippocampal place area (PPA) rarely responded to houses at any visual field location. We further analyzed the amplitudes of the neural responses to the target wedge. In V1, V2, V3, V3A, lateral occipital area 1 (LO-1), and hV4, the neural responses to the attended target wedge were significantly greater than those to the unattended target wedge. However, in LO-2, ventral occipital areas 1 and 2 (VO-1 and VO-2), and in the FFA and PPA, the differences were not significant. We proposed that these areas likely have large fields of attentional modulation for face and house images and exhibit responses to both the target wedge and the background stimuli. In addition, we proposed that the absence of retinotopic activity in the stimulus condition might imply no perceived difference between the target wedge and the background stimuli.

  16. Automated Image Segmentation And Characterization Technique For Effective Isolation And Representation Of Human Face

    Directory of Open Access Journals (Sweden)

    Rajesh Reddy N

    2014-01-01

    Full Text Available In areas such as defense and forensics, it is necessary to identify the faces of criminals from an already available database. An automated face recognition system involves face isolation, feature extraction and classification techniques. A key challenge in face recognition systems is isolating the face effectively, as it may be affected by illumination, posture and variation in skin color. Hence it is necessary to develop an effective algorithm that isolates the face from the image. In this paper, an advanced face isolation technique and a feature extraction technique have been proposed.

  17. Ethnicity identification from face images

    Science.gov (United States)

    Lu, Xiaoguang; Jain, Anil K.

    2004-08-01

    Human facial images provide demographic information, such as ethnicity and gender. Conversely, ethnicity and gender also play an important role in face-related applications. The image-based ethnicity identification problem is addressed in a machine learning framework. A Linear Discriminant Analysis (LDA) based scheme is presented for the two-class (Asian vs. non-Asian) ethnicity classification task. Multiscale analysis is applied to the input facial images. An ensemble framework, which integrates the LDA analysis of the input face images at different scales, is proposed to further improve the classification performance. The product rule is used as the combination strategy in the ensemble. Experimental results based on a face database containing 263 subjects (2,630 face images, with equal balance between the two classes) are promising, indicating that LDA and the proposed ensemble framework have sufficient discriminative power for the ethnicity classification problem. The normalized ethnicity classification scores can be helpful in facial identity recognition. Used as a "soft" biometric, face matching scores can be updated based on the output of the ethnicity classification module. In other words, the ethnicity classifier does not have to be perfect to be useful in practice.
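
    A minimal sketch of the per-scale LDA classification combined with the product rule, as described above; the scikit-learn estimator, the label encoding and the way the multiscale inputs are produced are assumptions.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def train_multiscale_lda(images_by_scale, labels):
        """images_by_scale[s] is an (n_samples, n_pixels) matrix of faces at one scale."""
        return [LinearDiscriminantAnalysis().fit(X, labels) for X in images_by_scale]

    def product_rule_predict(ldas, samples_by_scale):
        """Combine the per-scale class posteriors with the product rule."""
        probs = np.ones((samples_by_scale[0].shape[0], 2))
        for lda, X in zip(ldas, samples_by_scale):
            probs *= lda.predict_proba(X)
        return probs.argmax(axis=1)   # index into the two ethnicity classes (label order assumed)
    ```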

  18. Comparison of human face matching behavior and computational image similarity measure

    Institute of Scientific and Technical Information of China (English)

    CHEN WenFeng; LIU ChangHong; LANDER Karen; FU XiaoLan

    2009-01-01

    Computational similarity measures have been evaluated in a variety of ways, but few of the validated computational measures are based on a high-level, cognitive criterion of objective similarity. In this paper, we evaluate two popular objective similarity measures by comparing them with face matching performance in human observers. The results suggest that these measures are still limited in predicting human behavior, especially rejection behavior, but an objective measure taking advantage of global and local face characteristics may improve the prediction. It is also suggested that humans may set different criteria for "hit" and "rejection", and this may provide implications for biologically-inspired computational systems.

  19. Quotient Based Multiresolution Image Fusion of Thermal and Visual Images Using Daubechies Wavelet Transform for Human Face Recognition

    CERN Document Server

    Bhowmik, Mrinal Kanti; Nasipuri, Mita; Basu, Dipak Kumar; Kundu, Mahantapas

    2010-01-01

    This paper investigates multiresolution level-1 and level-2 Quotient based Fusion of thermal and visual images. In the proposed system, method-1, namely "Decompose then Quotient Fuse Level-1", and method-2, namely "Decompose-Reconstruct then Quotient Fuse Level-2", both work on wavelet transformations of the visual and thermal face images. The wavelet transform is well-suited to managing different image resolutions and allows the image decomposition into different kinds of coefficients, while preserving the image information without any loss. This approach is based on a definition of an illumination invariant signature image which enables an analytic generation of the image space with varying illumination. The quotient fused images are passed through Principal Component Analysis (PCA) for dimension reduction and then those images are classified using a multi-layer perceptron (MLP). The performances of both methods have been evaluated using the OTCBVS and IRIS databases. All the different classes have been ...

  20. Are faces of different species perceived categorically by human observers?

    OpenAIRE

    Campbell, R.; Pascalis, O.; Coleman, M.; Wallace, S B; Benson, P. J.

    1997-01-01

    What are the species boundaries of face processing? Using a face-feature morphing algorithm, image series intermediate between human, monkey (macaque), and bovine faces were constructed. Forced-choice judgement of these images showed sharply bounded categories for upright face images of each species. These predicted the perceptual discrimination boundaries for upright monkey-cow and cow-human images, but not human-monkey images. Species categories were also well-judged for inverted face image...

  1. Polar Fusion Technique Analysis for Evaluating the Performances of Image Fusion of Thermal and Visual Images for Human Face Recognition

    CERN Document Server

    Bhowmik, Mrinal Kanti; Basu, Dipak Kumar; Nasipuri, Mita

    2011-01-01

    This paper presents a comparative study of two different methods, which are based on fusion and polar transformation of visual and thermal images. Here, an investigation is carried out to handle the challenges of face recognition, which include pose variations, changes in facial expression, partial occlusions, variations in illumination, rotation through different angles, change in scale, etc. To overcome these obstacles we have implemented and thoroughly examined two different fusion techniques through rigorous experimentation. In the first method, the log-polar transformation is applied to the fused images obtained after fusion of the visual and thermal images, whereas in the second method fusion is applied to log-polar transformed individual visual and thermal images. After this step, Principal Component Analysis (PCA) is applied to the fused image, obtained in one form or the other, to reduce the dimension of the fused images. Log-polar transformed images are capable of handling complications introduced by scaling and rotation. The main objec...
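
    A minimal sketch of the log-polar transformation step mentioned above, using OpenCV's warpPolar (available in OpenCV 3.4+); the output size and the choice of the image centre as the transform origin are assumptions.

    ```python
    import cv2

    def log_polar(face, size=128):
        """Resample a face image into log-polar coordinates about its centre.
        Scaling and rotation of the input become approximate shifts in this domain."""
        face = cv2.resize(face, (size, size))
        centre = (size / 2.0, size / 2.0)
        max_radius = size / 2.0
        return cv2.warpPolar(face, (size, size), centre, max_radius, cv2.WARP_POLAR_LOG)
    ```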

  2. Quotient Based Multiresolution Image Fusion of Thermal and Visual Images Using Daubechies Wavelet Transform for Human Face Recognition

    Directory of Open Access Journals (Sweden)

    Mrinal Kanti Bhowmik

    2010-05-01

    Full Text Available This paper investigates Quotient based Fusion of thermal and visual images, which were individually passed through level-1 and level-2 multiresolution analyses. In the proposed system, the method-1 namely "Decompose then Quotient Fuse Level-1" and the method-2 namely "Decompose-Reconstruct in level-2 and then Fuse Quotients", both work on wavelet transformations of the visual and thermal face images. The wavelet transform is well-suited to manage different image resolutions and allows the image decomposition in different kinds of coefficients, while preserving the image information without any loss. This approach is based on a definition of an illumination invariant signature image which enables an analytic generation of the image space with varying illumination. The quotient fused images are passed through Principal Component Analysis (PCA) for dimension reduction and then those images are classified using a multi-layer perceptron (MLP). The performances of both the methods have been evaluated using OTCBVS and IRIS databases. All the different classes have been tested separately; among them the maximum recognition result for a class is 100% and the minimum recognition rate for a class is 73%.
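
    A minimal sketch of a level-1 Daubechies decomposition followed by a quotient of the approximation bands, assuming PyWavelets and a simple ratio as the quotient; the abstract does not specify the exact quotient definition or wavelet order, and the PCA/MLP stages are omitted.

    ```python
    import numpy as np
    import pywt

    def quotient_fuse_level1(visual, thermal, wavelet='db2', eps=1e-6):
        """Level-1 DWT of each modality, quotient of the approximation bands, then reconstruction."""
        vA, v_details = pywt.dwt2(visual.astype(np.float64), wavelet)
        tA, _ = pywt.dwt2(thermal.astype(np.float64), wavelet)
        quotient = vA / (tA + eps)            # illumination-insensitive approximation band
        return pywt.idwt2((quotient, v_details), wavelet)
    ```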

  3. Decoding of faces and face components in face-sensitive human visual cortex

    Directory of Open Access Journals (Sweden)

    David F Nichols

    2010-07-01

    Full Text Available A great challenge to the field of visual neuroscience is to understand how faces are encoded and represented within the human brain. Here we show evidence from functional magnetic resonance imaging (fMRI) for spatially distributed processing of the whole face and its components in face-sensitive human visual cortex. We used multi-class linear pattern classifiers constructed with a leave-one-scan-out verification procedure to discriminate brain activation patterns elicited by whole faces, the internal features alone, and the external head outline alone. Furthermore, our results suggest that whole faces are represented disproportionately in the fusiform cortex (FFA), whereas the building blocks of faces are represented disproportionately in occipitotemporal cortex (OFA). Faces and face components may therefore be organized with functional clustering within both the FFA and OFA, but with specialization for face components in the OFA and the whole face in the FFA.

  4. An Approach to Face Recognition of 2-D Images Using Eigen Faces and PCA

    Directory of Open Access Journals (Sweden)

    Annapurna Mishra

    2012-04-01

    Full Text Available Face detection is to find any face in a given image. Face recognition is a two-dimensional problem used for detecting faces. The information contained in a face, such as identity, gender, expression, age, race and pose, can be analysed automatically by this system. Normally face detection is done for a single image, but it can also be extended to a video stream. As face images are normally upright, they can be described by a small set of 2-D characteristic views. Here the face images are projected onto a feature space, or face space, to encode the variation between the known face images. The projected feature space, or face space, can be defined by 'eigenfaces', which are formed by the eigenvectors of the face image set. The above process can be used to recognize a new face in an unsupervised manner. This paper introduces an algorithm which is used for effective face recognition. It takes into consideration not only the face extraction but also the mathematical calculations which enable us to bring the image into a simple and technical form. It can also be implemented in real time using data acquisition hardware and a software interface with the face recognition systems. Face recognition can be applied to various domains including security systems, personal identification, image and film processing, and human-computer interaction.
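
    A minimal sketch of the eigenface idea described above: the principal components of a set of vectorized face images form the face space, and a probe is recognized by nearest-neighbour search in that space. The SVD-based computation and the number of components are implementation choices, not taken from the paper.

    ```python
    import numpy as np

    def compute_eigenfaces(face_matrix, k=50):
        """face_matrix: (n_images, n_pixels) array of aligned, vectorized face images."""
        mean_face = face_matrix.mean(axis=0)
        centered = face_matrix - mean_face
        # Right singular vectors of the centered data are the eigenvectors of its covariance.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        return mean_face, vt[:k]

    def project(face, mean_face, eigenfaces):
        return eigenfaces @ (face - mean_face)

    def recognize(probe, gallery_projections, gallery_labels, mean_face, eigenfaces):
        """Nearest-neighbour identification in eigenface space."""
        p = project(probe, mean_face, eigenfaces)
        distances = np.linalg.norm(gallery_projections - p, axis=1)
        return gallery_labels[int(np.argmin(distances))]
    ```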

  5. Human faces are slower than chimpanzee faces.

    Directory of Open Access Journals (Sweden)

    Anne M Burrows

    Full Text Available BACKGROUND: While humans (like other primates) communicate with facial expressions, the evolution of speech added a new function to the facial muscles (facial expression muscles). The evolution of speech required the development of a coordinated action between visual (movement of the lips) and auditory signals in a rhythmic fashion to produce "visemes" (visual movements of the lips that correspond to specific sounds). Visemes depend upon facial muscles to regulate the shape of the lips, which themselves act as speech articulators. This movement necessitates a more controlled, sustained muscle contraction than that produced during spontaneous facial expressions, which occur rapidly and last only a short period of time. Recently, it was found that human tongue musculature contains a higher proportion of slow-twitch myosin fibers than in rhesus macaques, which is related to the slower, more controlled movements of the human tongue in the production of speech. Are there similar unique, evolutionary physiologic biases found in human facial musculature related to the evolution of speech? METHODOLOGY/PRINCIPAL FINDINGS: Using myosin immunohistochemistry, we tested the hypothesis that human facial musculature has a higher percentage of slow-twitch myosin fibers relative to chimpanzees (Pan troglodytes) and rhesus macaques (Macaca mulatta). We sampled the orbicularis oris and zygomaticus major muscles from three cadavers of each species and compared proportions of fiber types. Results confirmed our hypothesis: humans had the highest proportion of slow-twitch myosin fibers while chimpanzees had the highest proportion of fast-twitch fibers. CONCLUSIONS/SIGNIFICANCE: These findings demonstrate that the human face is slower than that of rhesus macaques and our closest living relative, the chimpanzee. They also support the assertion that human facial musculature and speech co-evolved. Further, these results suggest a unique set of evolutionary selective pressures on

  6. An Approach to Face Recognition of 2-D Images Using Eigen Faces and PCA

    Directory of Open Access Journals (Sweden)

    Annapurna Mishra

    2012-05-01

    Full Text Available Face detection is to find any face in a given image. Face recognition is a two-dimension problem used for detecting faces. The information contained in a face can be analysed automatically by this system like identity, gender, expression, age, race and pose. Normally face detection is done for a single image but it can also be extended for video stream. As the face images are normally upright, they can be described by a small set of 2-D characteristics views. Here the face images are projected to a feature space or face space to encode the variation between the known face images. The projected feature space or the face space can be defined as 'eigenfaces' and can be formed by eigenvectors of the face image set. The above process can be used to recognize a new face in unsupervised manner. This paper introduces an algorithm which is used for effective face recognition. It takes into consideration not only the face extraction but also the mathematical calculations which enable us to bring the image into a simple and technical form. It can also be implemented in real-time using data acquisition hardware and software interface with the face recognition systems. Face recognition can be applied to various domains including security systems, personal identification, image and film processing and human computer interaction.

  7. No-reference face image assessment based on deep features

    Science.gov (United States)

    Liu, Guirong; Xu, Yi; Lan, Jinpeng

    2016-09-01

    Face quality assessment is important to improve the performance of face recognition systems. For instance, it is required to select images of good quality to improve the recognition rate for the person of interest. Current methods mostly depend on traditional image assessment, which uses prior knowledge of the human visual system. As a result, the quality scores of face images show consistency with human visual perception but deviate from the processing procedure of a real face recognition system. In fact, state-of-the-art face recognition systems are all built on deep neural networks. It is therefore natural to seek an efficient quality scoring method for face images that shows high consistency with the recognition rate achieved by current face recognition systems. This paper proposes a no-reference face image assessment algorithm based on deep features, which is capable of predicting the recognition rate of face images. The proposed face image assessment algorithm provides a promising tool to select good input images for a real face recognition system so that a high recognition rate is achieved.

  8. Pgu-Face: A dataset of partially covered facial images

    Directory of Open Access Journals (Sweden)

    Seyed Reza Salari

    2016-12-01

    Full Text Available In this article we introduce a human face image dataset. Images were taken in close to real-world conditions using several cameras, often mobile phone cameras. The dataset contains 224 subjects imaged under four different figures (a nearly clean-shaven countenance, a nearly clean-shaven countenance with sunglasses, an unshaven or stubble face countenance, and an unshaven or stubble face countenance with sunglasses) in up to two recording sessions. The existence of partially covered face images in this dataset could reveal the robustness and efficiency of several facial image processing algorithms. In this work we present the dataset and explain the recording method.

  9. Robust Face Image Matching under Illumination Variations

    Directory of Open Access Journals (Sweden)

    Yang Chyuan-Huei Thomas

    2004-01-01

    Full Text Available Face image matching is an essential step for face recognition and face verification. It is difficult to achieve robust face matching under various image acquisition conditions. In this paper, a novel face image matching algorithm robust against illumination variations is proposed. The proposed image matching algorithm is motivated by the characteristics of high image gradient along the face contours. We define a new consistency measure as the inner product between two normalized gradient vectors at the corresponding locations in two images. The normalized gradient is obtained by dividing the computed gradient vector by the corresponding locally maximal gradient magnitude. Then we compute the average consistency measures for all pairs of the corresponding face contour pixels to be the robust matching measure between two face images. To alleviate the problem due to shadow and intensity saturation, we introduce an intensity weighting function for each individual consistency measure to form a weighted average of the consistency measure. This robust consistency measure is further extended to integrate multiple face images of the same person captured under different illumination conditions, thus making our robust face matching algorithm. Experimental results of applying the proposed face image matching algorithm on some well-known face datasets are given in comparison with some existing face recognition methods. The results show that the proposed algorithm consistently outperforms other methods and achieves higher than 93% recognition rate with three reference images for different datasets under different lighting conditions.
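
    A minimal sketch of the consistency measure described above: gradients are normalized by the locally maximal gradient magnitude and the inner products are averaged over face-contour pixels. The Sobel operator, the window size, and the omission of the intensity-weighting step are implementation choices not fixed by the abstract.

    ```python
    import numpy as np
    from scipy import ndimage

    def normalized_gradients(image, window=7):
        """Gradient vectors divided by the locally maximal gradient magnitude."""
        img = image.astype(np.float64)
        gx, gy = ndimage.sobel(img, axis=1), ndimage.sobel(img, axis=0)
        local_max = ndimage.maximum_filter(np.hypot(gx, gy), size=window) + 1e-8
        return gx / local_max, gy / local_max

    def consistency_measure(image_a, image_b, contour_mask):
        """Average inner product of normalized gradients over face-contour pixels."""
        ax, ay = normalized_gradients(image_a)
        bx, by = normalized_gradients(image_b)
        return (ax * bx + ay * by)[contour_mask].mean()
    ```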

  10. Face Detection in Digital Image: A Technical Review

    Directory of Open Access Journals (Sweden)

    Devang C

    2015-01-01

    Full Text Available Face detection, the method of locating faces in an input image, is an important part of any face processing system. In face detection, segmentation plays the major role in detecting the face. There are many challenges to effective and efficient face detection. The aim of this paper is to present a review of several algorithms and methods used for face detection. We review various surveys and relate the techniques according to how they extract features and what learning algorithms they adopt. A face detection system has two major phases: first, to segment skin regions from an image, and second, to decide whether these regions cover a human face or not. A number of algorithms are used in face detection, namely genetic algorithms, Hausdorff distance, etc.

  11. Development of Human Face Detection System Based on Real-time Camera Image

    Institute of Scientific and Technical Information of China (English)

    孙雅琪; 刘羽

    2013-01-01

    Human face detection is an important research topic in the field of computer vision and a foundation for face recognition, expression recognition and related research. First, the real-time camera image is captured; the face detection function is then realized through color-space conversion, skin color modeling, image processing and a face location algorithm. The main steps of developing camera-based face image acquisition and face detection are introduced in detail, and on this basis a face detection system for real-time camera images is developed. The test results show that the proposed method is feasible.

  12. Recognizing disguised faces: human and machine evaluation.

    Directory of Open Access Journals (Sweden)

    Tejas Indulal Dhamecha

    Full Text Available Face verification, though an easy task for humans, is a long-standing open research area. This is largely due to challenging covariates, such as disguise and aging, which make it very hard to accurately verify the identity of a person. This paper investigates human and machine performance for recognizing/verifying disguised faces. Performance is also evaluated under familiarity and match/mismatch with the ethnicity of observers. The findings of this study are used to develop an automated algorithm to verify faces presented under disguise variations. We use automatically localized feature descriptors which can identify disguised face patches and account for this information to achieve improved matching accuracy. The performance of the proposed algorithm is evaluated on the IIIT-Delhi Disguise database, which contains images pertaining to 75 subjects with different kinds of disguise variations. The experiments suggest that the proposed algorithm can outperform a popular commercial system, and the algorithm is also evaluated against humans in matching disguised face images.

  13. Face Recognition in Real-world Images

    OpenAIRE

    Fontaine, Xavier; Achanta, Radhakrishna; Süsstrunk, Sabine

    2017-01-01

    Face recognition systems are designed to handle well-aligned images captured under controlled situations. However real-world images present varying orientations, expressions, and illumination conditions. Traditional face recognition algorithms perform poorly on such images. In this paper we present a method for face recognition adapted to real-world conditions that can be trained using very few training examples and is computationally efficient. Our method consists of performing a novel align...

  14. Personality judgments from everyday images of faces

    OpenAIRE

    Clare AM Sutherland; Rowley, Lauren E.; Amoaku, Unity T.; Ella eDaguzan; Kate A Kidd-Rossiter; Ugne eMaceviciute; Young, Andrew W.

    2015-01-01

    People readily make personality attributions to images of strangers' faces. Here we investigated the basis of these personality attributions as made to everyday, naturalistic face images. In a first study, we used 1000 highly varying “ambient image” face photographs to test the correspondence between personality judgments of the Big Five and dimensions known to underlie a range of facial first impressions: approachability, dominance, and youthful-attractiveness. Interestingly, the facial Big ...

  15. Personality judgments from everyday images of faces

    OpenAIRE

    Clare AM Sutherland; Lauren E Rowley; Unity T Amoaku; Ella eDaguzan; Kate A Kidd-Rossiter; Ugne eMaceviciute; Andrew W Young

    2015-01-01

    People readily make personality attributions to images of strangers' faces. Here we investigated the basis of these personality attributions as made to everyday, naturalistic face images. In a first study, we used 1000 highly varying “ambient image” face photographs to test the correspondence between personality judgments of the Big Five and dimensions known to underlie a range of facial first impressions: approachability, dominance, and youthful-attractiveness. Interestingly, the facial Big ...

  16. Towards Designing Android Faces after Actual Humans

    DEFF Research Database (Denmark)

    Vlachos, Evgenios; Schärfe, Henrik

    2015-01-01

    Using their face as their prior affective interface, android robots and other agents embody emotional facial expressions, and convey messages on their identity, gender, age, race, and attractiveness. We are examining whether androids can convey emotionally relevant information via their static... facial signals, just as humans do. Based on the fact that social information can be accurately identified from still images of nonexpressive unknown faces, a judgment paradigm was employed to discover and compare the style of facial expressions of the Geminoid-DK android (modeled after an actual... initially made for the Original, suggesting that androids inherit the same style of facial expression as their originals. Our findings support the case of designing android faces after specific actual persons who portray facial features that are familiar to the users, and also relevant to the notion

  17. Why the long face? The importance of vertical image structure for biological "barcodes" underlying face recognition.

    Science.gov (United States)

    Spence, Morgan L; Storrs, Katherine R; Arnold, Derek H

    2014-07-29

    Humans are experts at face recognition. The mechanisms underlying this complex capacity are not fully understood. Recently, it has been proposed that face recognition is supported by a coarse-scale analysis of visual information contained in horizontal bands of contrast distributed along the vertical image axis-a biological facial "barcode" (Dakin & Watt, 2009). A critical prediction of the facial barcode hypothesis is that the distribution of image contrast along the vertical axis will be more important for face recognition than image distributions along the horizontal axis. Using a novel paradigm involving dynamic image distortions, a series of experiments are presented examining famous face recognition impairments from selectively disrupting image distributions along the vertical or horizontal image axes. Results show that disrupting the image distribution along the vertical image axis is more disruptive for recognition than matched distortions along the horizontal axis. Consistent with the facial barcode hypothesis, these results suggest that human face recognition relies disproportionately on appropriately scaled distributions of image contrast along the vertical image axis.

  18. Modeling human dynamics of face-to-face interaction networks

    CERN Document Server

    Starnini, Michele; Pastor-Satorras, Romualdo

    2013-01-01

    Face-to-face interaction networks describe social interactions in human gatherings, and are the substrate for processes such as epidemic spreading and gossip propagation. The bursty nature of human behavior characterizes many aspects of empirical data, such as the distribution of conversation lengths, of conversations per person, or of inter-conversation times. Despite several recent attempts, a general theoretical understanding of the global picture emerging from data is still lacking. Here we present a simple model that reproduces quantitatively most of the relevant features of empirical face-to-face interaction networks. The model describes agents which perform a random walk in a two dimensional space and are characterized by an attractiveness whose effect is to slow down the motion of people around them. The proposed framework sheds light on the dynamics of human interactions and can improve the modeling of dynamical processes taking place on the ensuing dynamical social networks.
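
    A minimal sketch of the agent-based model described above: agents random-walk in a 2D box and the probability of moving decreases with the attractiveness of nearby agents. All parameter values are illustrative, not taken from the paper.

    ```python
    import numpy as np

    def simulate(n_agents=100, steps=1000, box=50.0, step_len=1.0, radius=1.0, seed=0):
        """Return a list of (time, i, j) face-to-face contacts between agents within `radius`."""
        rng = np.random.default_rng(seed)
        pos = rng.uniform(0, box, size=(n_agents, 2))
        attractiveness = rng.uniform(0, 1, size=n_agents)
        contacts = []
        for t in range(steps):
            dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
            np.fill_diagonal(dist, np.inf)
            for i in range(n_agents):
                neighbours = np.flatnonzero(dist[i] < radius)
                # An attractive neighbour slows the agent down (lower probability of moving).
                p_move = 1.0 - (attractiveness[neighbours].max() if neighbours.size else 0.0)
                if rng.random() < p_move:
                    angle = rng.uniform(0.0, 2.0 * np.pi)
                    pos[i] = (pos[i] + step_len * np.array([np.cos(angle), np.sin(angle)])) % box
                contacts.extend((t, i, int(j)) for j in neighbours if j > i)
        return contacts
    ```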

  19. Temporal networks of face-to-face human interactions

    CERN Document Server

    Barrat, Alain

    2013-01-01

    The ever-increasing adoption of mobile technologies and ubiquitous services allows human behavior to be sensed at unprecedented levels of detail and scale. Wearable sensors are opening up a new window on human mobility and proximity at the finest resolution of face-to-face proximity. As a consequence, empirical data describing social and behavioral networks are acquiring a longitudinal dimension that brings forth new challenges for analysis and modeling. Here we review recent work on the representation and analysis of temporal networks of face-to-face human proximity, based on large-scale datasets collected in the context of the SocioPatterns collaboration. We show that the raw behavioral data can be studied at various levels of coarse-graining, which turn out to be complementary to one another, with each level exposing different features of the underlying system. We briefly review a generative model of temporal contact networks that reproduces some statistical observables. Then, we shift our focus from surface ...

  20. Human face processing is tuned to sexual age preferences

    DEFF Research Database (Denmark)

    Ponseti, J; Granert, O; van Eimeren, T

    2014-01-01

    Human faces can motivate nurturing behaviour or sexual behaviour when adults see a child or an adult face, respectively. This suggests that face processing is tuned to detecting age cues of sexual maturity to stimulate the appropriate reproductive behaviour: either caretaking or mating. In paedophilia, sexual attraction is directed to sexually immature children. Therefore, we hypothesized that brain networks that normally are tuned to mature faces of the preferred gender show an abnormal tuning to sexually immature faces in paedophilia. Here, we use functional magnetic resonance imaging (fMRI) to test directly for the existence of a network which is tuned to face cues of sexual maturity. During fMRI, participants sexually attracted to either adults or children were exposed to various face images. In individuals attracted to adults, adult faces activated several brain regions significantly more...

  1. Probabilistic recognition of human faces from video

    DEFF Research Database (Denmark)

    Zhou, Saohua; Krüger, Volker; Chellappa, Rama

    2003-01-01

    Recognition of human faces using a gallery of still or video images and a probe set of videos is systematically investigated using a probabilistic framework. In still-to-video recognition, where the gallery consists of still images, a time series state space model is proposed to fuse temporal... of the identity variable produces the recognition result... demonstrate that, due to the propagation of the identity variable over time, a degeneracy in the posterior probability of the identity variable is achieved to give improved recognition. The gallery is generalized to videos in order to realize video-to-video recognition. An exemplar-based learning strategy... The model formulation is very general and it allows a variety of image representations and transformations. Experimental results using videos collected by NIST/USF and CMU illustrate the effectiveness of this approach for both still-to-video and video...

  2. Review on Matching Infrared Face Images to Optical Face Images using LBP

    Directory of Open Access Journals (Sweden)

    Kamakhaya Argulewar

    2014-12-01

    Full Text Available In biometric research and many security areas, it is a very difficult task to match images captured by different devices; a large gap exists between them because they belong to different classes. Matching optical face images to infrared face images is one of the difficult tasks in face biometrics, since a large difference exists between infrared and optical face images belonging to multiple classes. Converting samples from multiple modalities into a common feature space is the main objective of this project. Different classes of images are related by coordinating separate features for the classes; this is mainly used in heterogeneous face recognition. A new method has been developed for heterogeneous face identification. The training set contains images from different modalities. Initially the infrared image is preprocessed by applying a Gaussian filter; difference-of-Gaussian and CSDN filters are then applied to the infrared face image. After preprocessing, the next step is to extract features using LBP (local binary pattern) feature extraction; a relevance machine classifier is then used to identify the best matching optical image for the corresponding infrared image from the optical image dataset. Using this technique, our system efficiently matches infrared and optical face images.
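
    A minimal sketch of the preprocessing-plus-LBP feature extraction pipeline described above, assuming scikit-image's uniform LBP and illustrative Gaussian/DoG filter sizes; the paper's CSDN filter and the classifier stage are omitted.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter
    from skimage.feature import local_binary_pattern

    def infrared_lbp_features(infrared_image, radius=1, n_points=8):
        """Difference-of-Gaussians preprocessing followed by a uniform-LBP histogram."""
        img = infrared_image.astype(np.float64)
        dog = gaussian_filter(img, 1.0) - gaussian_filter(img, 2.0)   # illustrative sigmas
        lbp = local_binary_pattern(dog, n_points, radius, method='uniform')
        hist, _ = np.histogram(lbp, bins=n_points + 2, range=(0, n_points + 2), density=True)
        return hist
    ```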

  3. Automatic Age Estimation System for Face Images

    OpenAIRE

    Chin-Teng Lin; Dong-Lin Li; Jian-Hao Lai; Ming-Feng Han; Jyh-Yeong Chang

    2012-01-01

    Humans are the most important tracking objects in surveillance systems. However, human tracking is not enough to provide the required information for personalized recognition. In this paper, we present a novel and reliable framework for automatic age estimation based on computer vision. It exploits global face features based on the combination of Gabor wavelets and orthogonal locality preserving projections. In addition, the proposed system can extract face aging features automatically in rea...

  4. Generating virtual training samples for sparse representation of face images and face recognition

    Science.gov (United States)

    Du, Yong; Wang, Yu

    2016-03-01

    There are many challenges in face recognition. In real-world scenes, images of the same face vary with changing illuminations, different expressions and poses, multiform ornaments, or even altered mental status. Limited available training samples cannot convey these possible changes in the training phase sufficiently, and this has become one of the restrictions to improve the face recognition accuracy. In this article, we view the multiplication of two images of the face as a virtual face image to expand the training set and devise a representation-based method to perform face recognition. The generated virtual samples really reflect some possible appearance and pose variations of the face. By multiplying a training sample with another sample from the same subject, we can strengthen the facial contour feature and greatly suppress the noise. Thus, more human essential information is retained. Also, uncertainty of the training data is simultaneously reduced with the increase of the training samples, which is beneficial for the training phase. The devised representation-based classifier uses both the original and new generated samples to perform the classification. In the classification phase, we first determine K nearest training samples for the current test sample by calculating the Euclidean distances between the test sample and training samples. Then, a linear combination of these selected training samples is used to represent the test sample, and the representation result is used to classify the test sample. The experimental results show that the proposed method outperforms some state-of-the-art face recognition methods.
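
    A minimal sketch of the virtual-sample idea described above: the element-wise product of two training images of the same subject is rescaled and added to the training set. The rescaling to the 0-255 range is an assumption; the K-nearest representation-based classifier is omitted.

    ```python
    import numpy as np

    def generate_virtual_samples(train_images, labels):
        """Multiply every pair of same-subject images to create extra training samples."""
        virtual, virtual_labels = [], []
        for subject in np.unique(labels):
            idx = np.where(labels == subject)[0]
            for a in idx:
                for b in idx[idx > a]:
                    product = train_images[a].astype(np.float64) * train_images[b].astype(np.float64)
                    virtual.append(255.0 * product / (product.max() + 1e-8))
                    virtual_labels.append(subject)
        return virtual, virtual_labels
    ```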

  5. 3D face database for human pattern recognition

    Science.gov (United States)

    Song, LiMei; Lu, Lu

    2008-10-01

    Face recognition is essential for ensuring human safety and is also an important task in biomedical engineering. A 2D image is not enough for precise face recognition. 3D face data include more exact information, such as the precise size of the eyes, mouth, etc. A 3D face database is an important part of human pattern recognition. There are many methods to obtain 3D data, such as 3D laser scanning systems, 3D phase measurement, shape from shading, shape from motion, etc. This paper introduces a non-orbit, non-contact, non-laser 3D measurement system. The main idea comes from the shape-from-stereo technique. Two cameras are used at different angles, and a sequence of light patterns is projected onto the face. The human face, head, teeth and body can all be measured by the system. The visualization data of each person can form a large 3D face database, which can be used in human recognition. The 3D data provide a vivid copy of a face, so the recognition accuracy can reach 100%. Although 3D data are larger than a 2D image, they can be used in settings that involve only a few people, such as recognition within a family, a small company, etc.

  6. Personality judgments from everyday images of faces.

    Science.gov (United States)

    Sutherland, Clare A M; Rowley, Lauren E; Amoaku, Unity T; Daguzan, Ella; Kidd-Rossiter, Kate A; Maceviciute, Ugne; Young, Andrew W

    2015-01-01

    People readily make personality attributions to images of strangers' faces. Here we investigated the basis of these personality attributions as made to everyday, naturalistic face images. In a first study, we used 1000 highly varying "ambient image" face photographs to test the correspondence between personality judgments of the Big Five and dimensions known to underlie a range of facial first impressions: approachability, dominance, and youthful-attractiveness. Interestingly, the facial Big Five judgments were found to separate to some extent: judgments of openness, extraversion, emotional stability, and agreeableness were mainly linked to facial first impressions of approachability, whereas conscientiousness judgments involved a combination of approachability and dominance. In a second study we used average face images to investigate which main cues are used by perceivers to make impressions of the Big Five, by extracting consistent cues to impressions from the large variation in the original images. When forming impressions of strangers from highly varying, naturalistic face photographs, perceivers mainly seem to rely on broad facial cues to approachability, such as smiling.

  7. SPECFACE - A Dataset of Human Faces Wearing Spectacles

    OpenAIRE

    2015-01-01

    This paper presents a database of human faces for persons wearing spectacles. The database consists of images of faces having significant variations with respect to illumination, head pose, skin color, facial expressions and sizes, and nature of spectacles. The database contains data of 60 subjects. This database is expected to be a precious resource for the development and evaluation of algorithms for face detection, eye detection, head tracking, eye gaze tracking, etc., for subjects wearing...

  8. Personality judgments from everyday images of faces

    Directory of Open Access Journals (Sweden)

    Clare AM Sutherland

    2015-10-01

    Full Text Available People readily make personality attributions to images of strangers’ faces. Here we investigated the basis of these personality attributions as made to everyday, naturalistic face images. In a first study, we used 1,000 highly varying ‘ambient image’ face photographs to test the correspondence between personality judgments of the Big Five and dimensions known to underlie a range of facial first impressions: approachability, dominance and youthful-attractiveness. Interestingly, the facial Big Five judgments were found to separate to some extent: judgments of openness, extraversion, emotional stability and agreeableness were mainly linked to facial first impressions of approachability, whereas conscientiousness judgments involved a combination of approachability and dominance. In a second study we used average face images to investigate which main cues are used by perceivers to make impressions of the Big Five, by extracting consistent cues to impressions from the large variation in the original images. When forming impressions of strangers from highly varying, naturalistic face photographs, perceivers mainly seem to rely on broad facial cues to approachability, such as smiling.

  9. Enhancing face recognition by image warping

    OpenAIRE

    García Bueno, Jorge

    2009-01-01

    This project has been developed as an improvement that could be added to existing computer vision algorithms. It is based on the original idea proposed and published by Rob Jenkins and Mike Burton about the power of face averages in artificial recognition. The present project aims to create a new automated procedure for face recognition that works with average images. Up to now, this algorithm has been used manually. With this study, the averaging and warping process will be done b...

  10. Real Time Detection and Tracking of Human Face using Skin Color Segmentation and Region Properties

    Directory of Open Access Journals (Sweden)

    Prashanth Kumar G.

    2014-07-01

    Full Text Available Real-time face detection and tracking is one of the challenging problems in applications such as human-computer interaction, video surveillance, and biometrics. In this paper we present an algorithm for real-time face detection and tracking using skin colour segmentation and region properties. First, skin regions are segmented from an image using different colour models, and the skin regions are separated from the rest of the image by thresholding. Face features are then used to decide whether these regions contain a human face. Our procedure is based on skin colour segmentation and human face features (a knowledge-based approach). We have used the RGB, YCbCr, and HSV colour models for skin colour segmentation. These colour models, together with thresholds, help to remove non-skin pixels from an image. Each segmented skin region is then tested, using features based on the geometrical properties of the human face, to determine whether it is a human face.
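    A minimal sketch of the skin-colour thresholding step, assuming a BGR input image as loaded by OpenCV; the Cr/Cb ranges are commonly used illustrative values, not the thresholds reported in the paper:

```python
import cv2
import numpy as np

def skin_mask(bgr_image):
    """Return a binary mask of candidate skin pixels using a simple
    YCbCr threshold (illustrative ranges, not the paper's values)."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 133, 77], dtype=np.uint8)    # Y, Cr, Cb lower bounds
    upper = np.array([255, 173, 127], dtype=np.uint8)  # Y, Cr, Cb upper bounds
    mask = cv2.inRange(ycrcb, lower, upper)
    # Clean up small speckles before testing regions for face geometry.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```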

  11. An infrared human face recognition method based on 2DPCA

    Institute of Scientific and Technical Information of China (English)

    LIU Xia; Li Ting-jun

    2009-01-01

    Aimed at the problems of infrared image recognition under varying illumination, face disguise, etc., we propose an infrared human face recognition algorithm based on 2DPCA. The proposed algorithm computes the covariance matrix of the training samples easily and directly, and it takes less time to compute the eigenvectors. Relevant experiments are carried out, and the results indicate that, compared with traditional recognition algorithms, the proposed method is fast and adapts well to changes in human face pose.
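    A compact sketch of the 2DPCA step referred to above: the image covariance matrix is built directly from the 2D image matrices (no vectorization), and features are obtained by projecting each image onto the leading eigenvectors. Array shapes and the number of components are assumptions:

```python
import numpy as np

def two_dpca(images, num_components=10):
    """images: array of shape (n, h, w). Returns the 2DPCA projection
    matrix (w, num_components) and the mean image."""
    mean_img = images.mean(axis=0)
    centered = images - mean_img
    # Image covariance matrix: average of A_i^T A_i over centered images.
    cov = np.einsum('nij,nik->jk', centered, centered) / len(images)
    eigvals, eigvecs = np.linalg.eigh(cov)       # ascending eigenvalues
    proj = eigvecs[:, ::-1][:, :num_components]  # keep top eigenvectors
    return proj, mean_img

def project(image, proj, mean_img):
    """Feature matrix Y = (A - mean) @ proj, of shape (h, num_components)."""
    return (image - mean_img) @ proj
```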

  12. Model-based reconstruction for illumination variation in face images

    NARCIS (Netherlands)

    Boom, B.J.; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.

    2008-01-01

    We propose a novel method to correct for arbitrary illumination variation in the face images. The main purpose is to improve recognition results of face images taken under uncontrolled illumination conditions. We correct the illumination variation in the face images using a face shape model, which

  13. Human face processing is tuned to sexual age preferences.

    Science.gov (United States)

    Ponseti, J; Granert, O; van Eimeren, T; Jansen, O; Wolff, S; Beier, K; Deuschl, G; Bosinski, H; Siebner, H

    2014-05-01

    Human faces can motivate nurturing behaviour or sexual behaviour when adults see a child or an adult face, respectively. This suggests that face processing is tuned to detecting age cues of sexual maturity to stimulate the appropriate reproductive behaviour: either caretaking or mating. In paedophilia, sexual attraction is directed to sexually immature children. Therefore, we hypothesized that brain networks that normally are tuned to mature faces of the preferred gender show an abnormal tuning to sexual immature faces in paedophilia. Here, we use functional magnetic resonance imaging (fMRI) to test directly for the existence of a network which is tuned to face cues of sexual maturity. During fMRI, participants sexually attracted to either adults or children were exposed to various face images. In individuals attracted to adults, adult faces activated several brain regions significantly more than child faces. These brain regions comprised areas known to be implicated in face processing, and sexual processing, including occipital areas, the ventrolateral prefrontal cortex and, subcortically, the putamen and nucleus caudatus. The same regions were activated in paedophiles, but with a reversed preferential response pattern.

  14. Sand Face: Humanism after Antihumanism

    Science.gov (United States)

    Arcilla, René V.

    2015-01-01

    Have the critiques of humanism of the 1960s and 1970s buried this idea once and for all? Or is there a way that humanism can absorb some of this antihumanist thinking and thereby renew itself? Drawing on writings of Michel Foucault, Charles Taylor, Friedrich Nietzsche, and Martin Heidegger in order to illuminate artworks by Robert Smithson and…

  15. Sand Face: Humanism after Antihumanism

    Science.gov (United States)

    Arcilla, René V.

    2015-01-01

    Have the critiques of humanism of the 1960s and 1970s buried this idea once and for all? Or is there a way that humanism can absorb some of this antihumanist thinking and thereby renew itself? Drawing on writings of Michel Foucault, Charles Taylor, Friedrich Nietzsche, and Martin Heidegger in order to illuminate artworks by Robert Smithson and…

  16. Human Face Recognition using Line Features

    CERN Document Server

    Bhowmik, Mrinal Kanti; Nasipuri, Mita; Basu, Dipak Kumar; Kundu, Mahantapas

    2010-01-01

    In this work we investigate a novel approach to handling the challenges of face recognition, which include rotation, scale, occlusion, illumination, etc. Here we have used thermal face images, as they can minimize the effect of illumination changes and of occlusion due to moustaches, beards, adornments, etc. The proposed approach registers the training and testing thermal face images in polar coordinates, which can handle the complications introduced by scaling and rotation. Line features are extracted from the thermal polar images, and feature vectors are constructed from these lines. The feature vectors thus obtained are passed through principal component analysis (PCA) for dimensionality reduction. Finally, the images projected into the eigenspace are classified using a multi-layer perceptron. In the experiments we have used the Object Tracking and Classification Beyond Visible Spectrum (OTCBVS) database. Experimental results show that the proposed approach significantly improves the verificatio...
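    A small sketch of the polar-coordinate registration idea mentioned above: a face image is resampled around its centre so that in-plane rotation becomes a circular shift along the angle axis. Nearest-neighbour interpolation and the sampling resolution are simplifications, not the paper's exact procedure:

```python
import numpy as np

def to_polar(image, num_radii=64, num_angles=128):
    """Resample a grayscale face image into (radius, angle) coordinates
    around the image centre. Rotation of the face then appears as a
    circular shift along the angle axis."""
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    max_r = min(cx, cy)
    radii = np.linspace(0, max_r, num_radii)
    angles = np.linspace(0, 2 * np.pi, num_angles, endpoint=False)
    rr, aa = np.meshgrid(radii, angles, indexing='ij')
    ys = np.clip(np.round(cy + rr * np.sin(aa)).astype(int), 0, h - 1)
    xs = np.clip(np.round(cx + rr * np.cos(aa)).astype(int), 0, w - 1)
    return image[ys, xs]    # shape (num_radii, num_angles)
```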

  17. A novel polar-based human face recognition computational model

    Directory of Open Access Journals (Sweden)

    Y. Zana

    2009-07-01

    Full Text Available Motivated by a recently proposed biologically inspired face recognition approach, we investigated the relation between human behavior and a computational model based on Fourier-Bessel (FB) spatial patterns. We measured human recognition performance of FB filtered face images using an 8-alternative forced-choice method. Test stimuli were generated by converting the images from the spatial to the FB domain, filtering the resulting coefficients with a band-pass filter, and finally taking the inverse FB transformation of the filtered coefficients. The performance of the computational models was tested using a simulation of the psychophysical experiment. In the FB model, face images were first filtered by simulated V1-type neurons and later analyzed globally for their content of FB components. In general, there was a higher human contrast sensitivity to radially than to angularly filtered images, but both functions peaked at the 11.3-16 frequency interval. The FB-based model presented similar behavior with regard to peak position and relative sensitivity, but had a wider frequency bandwidth and a narrower response range. The response pattern of two alternative models, based on local FB analysis and on raw luminance, strongly diverged from the human behavior patterns. These results suggest that human performance can be constrained by the type of information conveyed by polar patterns, and consequently that humans might use FB-like spatial patterns in face processing.

  18. Spatial augmented reality based high accuracy human face projection

    Science.gov (United States)

    Li, Dong; Xie, Jinghui; Li, Yufeng; Weng, Dongdong; Liu, Yue

    2015-08-01

    This paper discusses the imaging principles and the technical difficulties of spatial augmented reality based human face projection. A novel geometry correction method is proposed to realize fast, high-accuracy face model projection. Using a depth camera to reconstruct the projected object, the relative position from the rendered model to the projector can be obtained and the initial projection image is generated. The projected image is then distorted using Bezier interpolation to guarantee that the projected texture matches the object surface. The proposed method follows a simple processing flow and can achieve highly accurate perceptual registration of the virtual and real object. In addition, the method performs well even when the reconstructed model is not exactly the same as the rendered virtual model, which extends its applications in spatial augmented reality based human face projection.

  19. Automatic Age Estimation System for Face Images

    Directory of Open Access Journals (Sweden)

    Chin-Teng Lin

    2012-11-01

    Full Text Available Humans are the most important tracking objects in surveillance systems. However, human tracking is not enough to provide the required information for personalized recognition. In this paper, we present a novel and reliable framework for automatic age estimation based on computer vision. It exploits global face features based on the combination of Gabor wavelets and orthogonal locality preserving projections. In addition, the proposed system can extract face aging features automatically in real-time. This means that the proposed system has more potential in applications compared to other semi-automatic systems. The results obtained from this novel approach could provide clearer insight for operators in the field of age estimation to develop real-world applications.
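    A brief sketch of extracting Gabor-wavelet features of the kind the system above builds on, using OpenCV's Gabor kernels over several scales and orientations; the kernel parameters and downsampling size are illustrative choices, and the OLPP projection and regressor that would follow are not shown:

```python
import cv2
import numpy as np

def gabor_features(gray, scales=(7, 11, 15), orientations=8):
    """Filter a grayscale face image with a bank of Gabor kernels and
    return the concatenated, downsampled magnitude responses."""
    feats = []
    for ksize in scales:
        for k in range(orientations):
            theta = k * np.pi / orientations
            kern = cv2.getGaborKernel((ksize, ksize), sigma=ksize / 3.0,
                                      theta=theta, lambd=ksize / 2.0,
                                      gamma=0.5, psi=0)
            resp = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kern)
            feats.append(cv2.resize(np.abs(resp), (16, 16)).ravel())
    return np.concatenate(feats)   # would be fed to OLPP / an age regressor
```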

  20. Recognition of human face based on improved multi-sample

    Institute of Scientific and Technical Information of China (English)

    LIU Xia; LI Lei-lei; LI Ting-jun; LIU Lu; ZHANG Ying

    2009-01-01

    In order to solve the problem caused by illumination variation in human face recognition, we propose a face recognition algorithm based on improved multi-sample processing. In this algorithm, the face image is processed with Retinex theory, while a Gabor filter is adopted to perform feature extraction. The experimental results show that applying Retinex theory improves recognition accuracy and makes the algorithm more robust to illumination variation. The Gabor filter is more effective and accurate at extracting usable local facial features. It is shown that the proposed algorithm has good recognition accuracy and is stable under illumination variation.
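    As a rough illustration of the Retinex preprocessing step, a single-scale Retinex can be written in a few lines; the abstract does not specify which Retinex variant or parameter values were used, so the sigma below is an assumption:

```python
import cv2
import numpy as np

def single_scale_retinex(gray, sigma=30.0):
    """Single-scale Retinex: log(image) - log(Gaussian-blurred image),
    which suppresses slowly varying illumination."""
    img = gray.astype(np.float32) + 1.0          # avoid log(0)
    blur = cv2.GaussianBlur(img, (0, 0), sigma)  # kernel size derived from sigma
    retinex = np.log(img) - np.log(blur)
    # Rescale to 0..255 for display / further feature extraction.
    retinex = cv2.normalize(retinex, None, 0, 255, cv2.NORM_MINMAX)
    return retinex.astype(np.uint8)
```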

  1. Face Image Quality and its Improvement in a Face Detection System

    DEFF Research Database (Denmark)

    Kamal, Nasrollahi; Moeslund, Thomas B.

    2008-01-01

    When a person passes by a surveillance camera, a sequence of images is obtained. Most of these images are redundant, and keeping only those of better quality is usually sufficient. So before performing any analysis on the face of a person, the face first needs to be detected. ... In the second step the quality of the different face images needs to be evaluated. Finally, after choosing the best image(s) based on this quality assessment, in the third step, if this image(s) does not satisfy a predefined set of measures for good quality images, its quality should be improved. In this work...

  2. Face Identification from Manipulated Facial Images using SIFT

    CERN Document Server

    Chennamma, H R; Veerabhadrappa,

    2011-01-01

    Editing of digital images is ubiquitous, and identification of deliberately modified facial images is a new challenge for face identification systems. In this paper, we address the problem of identifying a face or person from heavily altered facial images. In this face identification problem, the input to the system is a manipulated or transformed face image and the system reports back the determined identity from a database of known individuals. Such a system can be useful in mugshot identification, in which the mugshot database contains two views (frontal and profile) of each criminal. We considered only the frontal view from the available database for face identification, and the query image is a manipulated face generated by a face transformation software tool available online. We propose SIFT features for efficient face identification in this scenario. A further comparative analysis is given with the well-known eigenface approach. Experiments have been conducted with real case images to evaluate the performance of ...
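    A minimal sketch of SIFT-based matching between a query (possibly manipulated) face and one gallery image, using OpenCV's SIFT implementation (available in OpenCV 4.4+) and Lowe's ratio test; in a full system the gallery identity with the most good matches would be reported:

```python
import cv2

def count_sift_matches(query_gray, gallery_gray, ratio=0.75):
    """Return the number of ratio-test-passing SIFT matches between a
    query face image and one gallery face image."""
    sift = cv2.SIFT_create()                       # OpenCV >= 4.4
    _, desc_q = sift.detectAndCompute(query_gray, None)
    _, desc_g = sift.detectAndCompute(gallery_gray, None)
    if desc_q is None or desc_g is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(desc_q, desc_g, k=2)
    good = 0
    for pair in matches:
        # Lowe's ratio test: keep matches clearly better than the runner-up.
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good += 1
    return good
```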

  3. En-face coherence imaging using galvanometer scanner modulation.

    Science.gov (United States)

    Podoleanu, A G; Dobre, G M; Jackson, D A

    1998-02-01

    We introduce a novel optical path-modulation technique for a low-coherence interferometric imaging system based on transverse scanning of the target with a galvanometric scanning-mirror pair. The path modulation arises when the beam that is incident upon one of the scanning mirrors does not fall on its axis of rotation. The method is demonstrated by the production of en-face low-coherence images of different objects such as a fiber-optic tip and a human retina in vivo.

  4. Human Face Recognition Using Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Răzvan-Daniel Albu

    2009-10-01

    Full Text Available In this paper, I present a novel hybrid face recognition approach based on a convolutional neural architecture, designed to robustly detect highly variable face patterns. The convolutional network extracts successively larger features in a hierarchical set of layers. The weights of the trained neural networks are used to create kernel windows for feature extraction in a 3-stage algorithm. I present experimental results illustrating the efficiency of the proposed approach, using a database of 796 images of 159 individuals from Reims University which contains quite a high degree of variability in expression, pose, and facial details.
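    The abstract does not give the exact architecture, so the following is only a rough, modern sketch of a small convolutional classifier for faces, written in PyTorch with an assumed 32x32 grayscale input and 159 classes (matching the Reims database size mentioned above):

```python
import torch
import torch.nn as nn

class SmallFaceCNN(nn.Module):
    """Toy convolutional classifier: two conv/pool stages extracting
    successively larger features, followed by a fully connected layer."""
    def __init__(self, num_classes=159):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):                         # x: (batch, 1, 32, 32)
        x = self.features(x)
        return self.classifier(x.flatten(1))

logits = SmallFaceCNN()(torch.randn(4, 1, 32, 32))   # shape (4, 159)
```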

  5. Robust Algorithm for Face Detection in Color Images

    Directory of Open Access Journals (Sweden)

    Hlaing Htake Khaung Tin

    2012-03-01

    Full Text Available A robust algorithm is presented for frontal face detection in color images. Face detection is an important task in facial analysis systems, in order to have a priori localized faces in a given image. Applications such as face tracking, facial expression recognition, and gesture recognition, for example, have the prerequisite that a face is already located in the given image or image sequence. Facial features such as the eyes, nose and mouth are automatically detected based on properties of the associated image regions. On detecting a mouth, a nose and two eyes, a face verification step based on eigenface theory is applied to a normalized search space in the image, relative to the distance between the eye feature points. The experiments were carried out on test images taken from the internet and various other randomly selected sources. The algorithm has also been tested in practice with a webcam, giving (near) real-time performance and good extraction results.

  6. Discriminating Projections for Estimating Face Age in Wild Images

    Energy Technology Data Exchange (ETDEWEB)

    Tokola, Ryan A [ORNL; Bolme, David S [ORNL; Ricanek, Karl [ORNL; Barstow, Del R [ORNL; Boehnen, Chris Bensing [ORNL

    2014-01-01

    We introduce a novel approach to estimating the age of a human from a single uncontrolled image. Current face age estimation algorithms work well in highly controlled images, and some are robust to changes in illumination, but it is usually assumed that images are close to frontal. This bias is clearly seen in the datasets that are commonly used to evaluate age estimation, which either entirely or mostly consist of frontal images. Using pose-specific projections, our algorithm maps image features into a pose-insensitive latent space that is discriminative with respect to age. Age estimation is then performed using a multi-class SVM. We show that our approach outperforms other published results on the Images of Groups dataset, which is the only age-related dataset with a non-trivial number of off-axis face images, and that we are competitive with recent age estimation algorithms on the mostly-frontal FG-NET dataset. We also experimentally demonstrate that our feature projections introduce insensitivity to pose.

  7. Face Synthesis (FASY) System for Determining the Characteristics of a Face Image

    CERN Document Server

    Halder, Santanu; Nasipuri, Mita; Basu, Dipak Kumar; Kundu, Mahantapas

    2010-01-01

    This paper aims at determining the characteristics of a face image by extracting its components. The FASY (FAce SYnthesis) System is a face database retrieval and new face generation system that is under development. One of its main features is the generation of the requested face when it is not found in the existing database, which also allows the database to grow continuously. To generate the new face image, the face components need to be stored in the database, so we have designed a new technique to extract the face components. After extraction of the facial feature points, we analyzed the components to determine their characteristics. After extraction and analysis, we stored the components along with their characteristics in the face database for later use during face construction.

  8. Original and Mirror Face Images and Minimum Squared Error Classification for Visible Light Face Recognition

    Directory of Open Access Journals (Sweden)

    Rong Wang

    2015-01-01

    Full Text Available In real-world applications, the image of faces varies with illumination, facial expression, and poses. It seems that more training samples are able to reveal possible images of the faces. Though minimum squared error classification (MSEC) is a widely used method, its applications on face recognition usually suffer from the problem of a limited number of training samples. In this paper, we improve MSEC by using the mirror faces as virtual training samples. We obtained the mirror faces generated from original training samples and put these two kinds of samples into a new set. The face recognition experiments show that our method does obtain high accuracy performance in classification.

  9. Original and Mirror Face Images and Minimum Squared Error Classification for Visible Light Face Recognition.

    Science.gov (United States)

    Wang, Rong

    2015-01-01

    In real-world applications, the image of faces varies with illumination, facial expression, and poses. It seems that more training samples are able to reveal possible images of the faces. Though minimum squared error classification (MSEC) is a widely used method, its applications on face recognition usually suffer from the problem of a limited number of training samples. In this paper, we improve MSEC by using the mirror faces as virtual training samples. We obtained the mirror faces generated from original training samples and put these two kinds of samples into a new set. The face recognition experiments show that our method does obtain high accuracy performance in classification.
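    A minimal sketch of the mirror-face augmentation together with a least-squares, minimum-squared-error style classification step, assuming grayscale image arrays and integer labels; this illustrates the idea rather than reproducing the paper's exact MSEC formulation:

```python
import numpy as np

def add_mirror_faces(images, labels):
    """images: (n, h, w). Append horizontally flipped copies as extra
    (virtual) training samples with the same labels."""
    mirrored = images[:, :, ::-1]
    return np.concatenate([images, mirrored]), np.concatenate([labels, labels])

def mse_classify(test_img, images, labels):
    """Represent the test image with each class's samples via least squares
    and return the class with the smallest reconstruction error."""
    t = test_img.ravel().astype(np.float64)
    best, best_err = None, np.inf
    for c in np.unique(labels):
        class_imgs = images[labels == c]
        A = class_imgs.reshape(len(class_imgs), -1).T   # columns: class samples
        coef, *_ = np.linalg.lstsq(A, t, rcond=None)
        err = np.linalg.norm(t - A @ coef)
        if err < best_err:
            best, best_err = c, err
    return best
```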

  10. The morphometrics of "masculinity" in human faces.

    Science.gov (United States)

    Mitteroecker, Philipp; Windhager, Sonja; Müller, Gerd B; Schaefer, Katrin

    2015-01-01

    In studies of social inference and human mate preference, a wide but inconsistent array of tools for computing facial masculinity has been devised. Several of these approaches implicitly assumed that the individual expression of sexually dimorphic shape features, which we refer to as maleness, resembles facial shape features perceived as masculine. We outline a morphometric strategy for estimating separately the face shape patterns that underlie perceived masculinity and maleness, and for computing individual scores for these shape patterns. We further show how faces with different degrees of masculinity or maleness can be constructed in a geometric morphometric framework. In an application of these methods to a set of human facial photographs, we found that shape features typically perceived as masculine are wide faces with a wide inter-orbital distance, a wide nose, thin lips, and a large and massive lower face. The individual expressions of this combination of shape features--the masculinity shape scores--were the best predictor of rated masculinity among the compared methods (r = 0.5). The shape features perceived as masculine only partly resembled the average face shape difference between males and females (sexual dimorphism). Discriminant functions and Procrustes distances to the female mean shape were poor predictors of perceived masculinity.

  11. The morphometrics of "masculinity" in human faces.

    Directory of Open Access Journals (Sweden)

    Philipp Mitteroecker

    Full Text Available In studies of social inference and human mate preference, a wide but inconsistent array of tools for computing facial masculinity has been devised. Several of these approaches implicitly assumed that the individual expression of sexually dimorphic shape features, which we refer to as maleness, resembles facial shape features perceived as masculine. We outline a morphometric strategy for estimating separately the face shape patterns that underlie perceived masculinity and maleness, and for computing individual scores for these shape patterns. We further show how faces with different degrees of masculinity or maleness can be constructed in a geometric morphometric framework. In an application of these methods to a set of human facial photographs, we found that shape features typically perceived as masculine are wide faces with a wide inter-orbital distance, a wide nose, thin lips, and a large and massive lower face. The individual expressions of this combination of shape features--the masculinity shape scores--were the best predictor of rated masculinity among the compared methods (r = 0.5). The shape features perceived as masculine only partly resembled the average face shape difference between males and females (sexual dimorphism). Discriminant functions and Procrustes distances to the female mean shape were poor predictors of perceived masculinity.

  12. Next Level of Data Fusion for Human Face Recognition

    CERN Document Server

    Bhowmik, Mrinal Kanti; Bhattacharjee, Debotosh; Basu, Dipak Kumar; Nasipuri, Mita

    2011-01-01

    This paper demonstrates two different fusion techniques at two different levels of a human face recognition process: data fusion at a lower level, and decision fusion towards the end of the recognition process. First, data fusion is applied to visual and corresponding thermal images to generate a fused image. Data fusion is implemented in the wavelet domain after decomposing the images with Daubechies wavelet coefficients (db2); during data fusion, the maximum of the approximation and of the other three detail coefficients are merged together. Principal Component Analysis (PCA) is then applied to the fused coefficients, and finally two different artificial neural networks, a Multilayer Perceptron (MLP) and a Radial Basis Function (RBF) network, are used separately to classify the images. After that, for decision fusion, the decisions from both classifiers are combined using a Bayesian formulation. For experiments, the IRIS thermal/visible Face Database h...
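    A short sketch of the lower-level data-fusion step, using PyWavelets with db2 and keeping the element-wise maximum of the visual and thermal sub-band coefficients; a single-level decomposition is shown, and the two images are assumed to be registered and of equal size:

```python
import numpy as np
import pywt

def fuse_visual_thermal(visual, thermal, wavelet="db2"):
    """One-level 2D wavelet fusion: decompose both images with db2 and
    keep the element-wise maximum of each sub-band before reconstructing."""
    cA_v, (cH_v, cV_v, cD_v) = pywt.dwt2(visual.astype(np.float32), wavelet)
    cA_t, (cH_t, cV_t, cD_t) = pywt.dwt2(thermal.astype(np.float32), wavelet)
    fused = (np.maximum(cA_v, cA_t),
             (np.maximum(cH_v, cH_t),
              np.maximum(cV_v, cV_t),
              np.maximum(cD_v, cD_t)))
    # The reconstructed fused image would then go to PCA and an MLP/RBF classifier.
    return pywt.idwt2(fused, wavelet)
```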

  13. DIFFERENCE FEATURE NEURAL NETWORK IN RECOGNITION OF HUMAN FACES

    Institute of Scientific and Technical Information of China (English)

    Chen Gang; Qi Feihu

    2001-01-01

    This article discusses the vision recognition process and observes that humans recognize objects not by their isolated features, but by the main difference features obtained by contrasting them. Based on the resolving power of difference features for vision recognition, the difference feature neural network (DFNN), an improved auto-associative neural network, is proposed. Using the ORL database, a comparative face recognition experiment is performed with clean face images and with images corrupted by Gaussian noise; the results show that DFNN outperforms the auto-associative neural network, proving that DFNN is more efficient.

  14. Examplers based image fusion features for face recognition

    CERN Document Server

    James, Alex Pappachen

    2012-01-01

    Examplers of a face are formed from multiple gallery images of a person and are used in the process of classifying a test image. We incorporate such examplers into a biologically inspired, local-binary-decision, similarity-based face recognition method. As opposed to single-model approaches such as face averages, the exampler-based approach results in higher recognition accuracies and stability. Using multiple training samples per person, the method shows the following recognition accuracies: 99.0% on AR, 99.5% on FERET, 99.5% on ORL, 99.3% on EYALE, 100.0% on YALE and 100.0% on CALTECH face databases. In addition to face recognition, the method also detects the natural variability in the face images, which can find application in automatic tagging of face images.

  15. Classification of fused face images using multilayer perceptron neural network

    CERN Document Server

    Bhattacharjee, Debotosh; Nasipuri, Mita; Basu, Dipak Kumar; Kundu, Mahantapas

    2010-01-01

    This paper presents a concept of image pixel fusion of visual and thermal faces, which can significantly improve the overall performance of a face recognition system. Several factors affect face recognition performance including pose variations, facial expression changes, occlusions, and most importantly illumination changes. So, image pixel fusion of thermal and visual images is a solution to overcome the drawbacks present in the individual thermal and visual face images. Fused images are projected into eigenspace and finally classified using a multi-layer perceptron. In the experiments we have used Object Tracking and Classification Beyond Visible Spectrum (OTCBVS) database benchmark thermal and visual face images. Experimental results show that the proposed approach significantly improves the verification and identification performance and the success rate is 95.07%. The main objective of employing fusion is to produce a fused image that provides the most detailed and reliable information. Fusion of multip...

  16. Imprinting and flexibility in human face cognition

    Science.gov (United States)

    Marcinkowska, Urszula M.; Terraube, Julien; Kaminski, Gwenaël

    2016-01-01

    Faces are an important cue to multiple physiological and psychological traits. Human preferences for exaggerated sex typicality (masculinity or femininity) in faces depend on multiple factors and show high inter-subject variability. To gain a deeper understanding of the mechanisms underlying facial femininity preferences in men, we tested the interactive effect of family structure (birth order, sibling sex-ratio and number of siblings) and parenthood status on these preferences. Based on a group of 1304 heterosexual men, we found that preference for feminine faces was not only influenced by sibling age and sex, but also that fatherhood modulated this preference. Men with sisters had a weaker preference for femininity than men with brothers, highlighting a possible effect of a negative imprinting-like mechanism. What is more, fatherhood strongly increased the preference for facial femininity. Finally, for fathers with younger sisters only, the greater the age difference between them, the stronger the femininity preference. Overall our findings bring new insight into how early-acquired experience at the individual level may determine face preference in adulthood, and, what is more, how these preferences are flexible and potentially dependent on parenthood status in adult men. PMID:27680495

  17. Recognizing Celebrity Faces in Lot of Web Images

    Directory of Open Access Journals (Sweden)

    Surekha Naganath Gaikwad

    2014-07-01

    Full Text Available Nowadays, celebrity-related queries rank constantly among all image queries. On the other hand, celebrity images on the web provide a great opportunity for constructing large-scale training datasets to advance face recognition. Collecting and labeling celebrity faces from general web images is a challenging task. In this problem we use the text surrounding web images, such as the name, location, and time; the image is then annotated using an image annotation system and a name assignment system, near-duplicate images are found, and finally the correct result is obtained. In this way the user can identify the person in the web images.

  18. Observed touch on a non-human face is not remapped onto the human observer's own face

    National Research Council Canada - National Science Library

    Beck, Brianna; Bertini, Caterina; Scarpazza, Cristina; Làdavas, Elisabetta

    2013-01-01

    Visual remapping of touch (VRT) is a phenomenon in which seeing a human face being touched enhances detection of tactile stimuli on the observer's own face, especially when the observed face expresses fear...

  19. Effects of pose and image resolution on automatic face recognition

    NARCIS (Netherlands)

    Mahmood, Zahid; Ali, Tauseef; Khan, Samee U.

    The popularity of face recognition systems has increased due to their use in widespread applications. Driven by the enormous number of potential application domains, several algorithms have been proposed for face recognition. Face pose and image resolution are two important factors that

  20. Effects of pose and image resolution on automatic face recognition

    NARCIS (Netherlands)

    Mahmood, Zahid; Ali, Tauseef; Khan, Samee U.

    2015-01-01

    The popularity of face recognition systems has increased due to their use in widespread applications. Driven by the enormous number of potential application domains, several algorithms have been proposed for face recognition. Face pose and image resolution are two important factors that

  1. Discrimination of human and dog faces and inversion responses in domestic dogs (Canis familiaris).

    Science.gov (United States)

    Racca, Anaïs; Amadei, Eleonora; Ligout, Séverine; Guo, Kun; Meints, Kerstin; Mills, Daniel

    2010-05-01

    Although domestic dogs can respond to many facial cues displayed by other dogs and humans, it remains unclear whether they can differentiate individual dogs or humans based on facial cues alone and, if so, whether they would demonstrate the face inversion effect, a behavioural hallmark commonly used in primates to differentiate face processing from object processing. In this study, we first established the applicability of the visual paired comparison (VPC or preferential looking) procedure for dogs using a simple object discrimination task with 2D pictures. The animals demonstrated a clear looking preference for novel objects when simultaneously presented with prior-exposed familiar objects. We then adopted this VPC procedure to assess their face discrimination and inversion responses. Dogs showed a deviation from random behaviour, indicating discrimination capability when inspecting upright dog faces, human faces and object images; but the pattern of viewing preference was dependent upon image category. They directed longer viewing time at novel (vs. familiar) human faces and objects, but not at dog faces, instead, a longer viewing time at familiar (vs. novel) dog faces was observed. No significant looking preference was detected for inverted images regardless of image category. Our results indicate that domestic dogs can use facial cues alone to differentiate individual dogs and humans and that they exhibit a non-specific inversion response. In addition, the discrimination response by dogs of human and dog faces appears to differ with the type of face involved.

  2. Rear-facing car seat (image)

    Science.gov (United States)

    A rear-facing car seat position is recommended for a child who is very young. Extreme injury can occur in an accident because ... child. In a frontal crash a rear-facing car seat is best, because it cradles the head, ...

  3. Improving face image extraction by using deep learning technique

    Science.gov (United States)

    Xue, Zhiyun; Antani, Sameer; Long, L. R.; Demner-Fushman, Dina; Thoma, George R.

    2016-03-01

    The National Library of Medicine (NLM) has made a collection of over 1.2 million research articles containing 3.2 million figure images searchable using the Open-i multimodal (text+image) search engine. Many images are visible light photographs, some of which are images containing faces ("face images"). Some of these face images are acquired in unconstrained settings, while others are studio photos. To extract the face regions in the images, we first applied one of the most widely used face detectors, a pre-trained Viola-Jones detector implemented in Matlab and OpenCV. The Viola-Jones detector was trained for unconstrained face image detection, but the results for the NLM database included many false positives, which resulted in very low precision. To improve this performance, we applied a deep learning technique, which reduced the number of false positives and, as a result, the detection precision was improved significantly. (For example, the classification accuracy for identifying whether the face regions output by this Viola-Jones detector are true positives or not in a test set is about 96%.) By combining these two techniques (Viola-Jones and deep learning) we were able to increase the system precision considerably, while avoiding the need to construct a large training set by manual delineation of the face regions.
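    For reference, the pre-trained Viola-Jones detector mentioned above can be invoked in a few lines with OpenCV's bundled frontal-face Haar cascade; the deep-learning false-positive filter described in the paper is not reproduced here, and the image path is a placeholder:

```python
import cv2

# Load OpenCV's bundled pre-trained frontal-face Haar cascade.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("article_figure.png")            # placeholder path
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    face_region = image[y:y + h, x:x + w]            # candidate face crop
```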

  4. Comparing the face inversion effect in crows and humans.

    Science.gov (United States)

    Brecht, Katharina F; Wagener, Lysann; Ostojić, Ljerka; Clayton, Nicola S; Nieder, Andreas

    2017-09-13

    Humans show impaired recognition of faces that are presented upside down, a phenomenon termed face inversion effect, which is thought to reflect the special relevance of faces for humans. Here, we investigated whether a phylogenetically distantly related avian species, the carrion crow, with similar socio-cognitive abilities to human and non-human primates, exhibits a face inversion effect. In a delayed matching-to-sample task, two crows had to differentiate profiles of crow faces as well as matched controls, presented both upright and inverted. Because crows can discriminate humans based on their faces, we also assessed the face inversion effect using human faces. Both crows performed better with crow faces than with human faces and performed worse when responding to inverted pictures in general compared to upright pictures. However, neither of the crows showed a face inversion effect. For comparative reasons, the tests were repeated with human subjects. As expected, humans showed a face-specific inversion effect. Therefore, we did not find any evidence that crows, like humans, process faces as a special visual stimulus. Instead, individual recognition in crows may be based on cues other than a conspecific's facial profile, such as their body, or on processing of local features rather than holistic processing.

  5. Acne, cystic on the face (image)

    Science.gov (United States)

    The face is the most common location of acne. Here, there are 4 to 6 millimeter red ( ... scars and fistulous tract formation (connecting passages). Severe acne may have a profound psychological impact and may ...

  6. The correction of the distortion of human face based on three-dimensional modeling methods

    Science.gov (United States)

    Ye, Qingmin; Chen, Kuo; Feng, Huajun; Xu, Zhihai; Li, Qi

    2015-08-01

    When the human face is at the edge of the field of view of a camera with a large view angle, serious deformation is captured. To correct the distortion of the human face, we present an approach based on building a 3D model. First, we construct the 3D target face model using the data and depth information of a standard human face, which is built from a piecewise (sectional) three-dimensional Gaussian function. According to the size of the face in the image and the parameters of the camera, we can obtain the relative position and depth information of the human face. Then, by translating the virtual camera axis to the center of the face, we can correct the distortion of the face based on the theory of three-dimensional imaging. Finally, we carried out extensive experiments and studied the influence of the parameters of the 3D human face model. The results indicate that the method presented in this paper can effectively correct the distortion of a face at the edge of the field of view, and better results are obtained when the model more closely approximates the real human face.

  7. Image preprocessing study on KPCA-based face recognition

    Science.gov (United States)

    Li, Xuan; Li, Dehua

    2015-12-01

    Face recognition, as an important biometric identification method with friendly, natural and convenient advantages, has received more and more attention. This paper studies a face recognition system including face detection, feature extraction and face recognition, mainly by examining the related theory and key technology of various preprocessing methods in the face detection process, using the KPCA method and focusing on the different recognition results obtained with different preprocessing methods. In this paper, we choose the YCbCr color space for skin segmentation and integral projection for face location. We preprocess the face images with erosion and dilation (opening and closing operations) and an illumination compensation method, and then analyze them with a face recognition method based on kernel principal component analysis; the experiments were carried out on a typical face database. The algorithms were implemented on the MATLAB platform. Experimental results show that, under certain conditions, integrating the kernel method with the PCA algorithm makes the extracted features represent the original image information better, because a nonlinear feature extraction method is used, and thus a higher recognition rate can be obtained. In the image preprocessing stage, we found that different operations on the images may produce different results, and hence different recognition rates in the recognition stage. At the same time, in kernel principal component analysis, the degree of the polynomial kernel function can affect the recognition result.
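    A compact sketch of the KPCA feature-extraction stage described above, using scikit-learn's KernelPCA with a polynomial kernel (the abstract notes that the polynomial degree affects the result); the data here are random placeholders and the nearest-neighbour classifier is an assumption:

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.neighbors import KNeighborsClassifier

# Placeholder data: preprocessed, vectorized 64x64 face images and labels.
X_train = np.random.rand(100, 64 * 64)
y_train = np.random.randint(0, 10, 100)
X_test = np.random.rand(20, 64 * 64)

# Kernel PCA with a polynomial kernel; the degree is a tunable choice.
kpca = KernelPCA(n_components=40, kernel="poly", degree=3)
Z_train = kpca.fit_transform(X_train)
Z_test = kpca.transform(X_test)

clf = KNeighborsClassifier(n_neighbors=1).fit(Z_train, y_train)
predictions = clf.predict(Z_test)
```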

  8. Adaptive Deep Supervised Autoencoder Based Image Reconstruction for Face Recognition

    Directory of Open Access Journals (Sweden)

    Rongbing Huang

    2016-01-01

    Full Text Available Based on a special type of denoising autoencoder (DAE) and image reconstruction, we present a novel supervised deep learning framework for face recognition (FR). Unlike the existing deep autoencoder, which is an unsupervised face recognition method, the proposed method takes class label information from training samples into account in the deep learning procedure and can automatically discover the underlying nonlinear manifold structures. Specifically, we define an Adaptive Deep Supervised Network Template (ADSNT) with the supervised autoencoder, which is trained to extract characteristic features from corrupted/clean facial images and reconstruct the corresponding similar facial images. The reconstruction is realized by a so-called "bottleneck" neural network that learns to map face images into a low-dimensional vector and reconstruct the respective corresponding face images from the mapped vectors. Having trained the ADSNT, a new face image can then be recognized by comparing its reconstructed image with individual gallery images. Extensive experiments on three databases, including AR, PubFig, and Extended Yale B, demonstrate that the proposed method can significantly improve the accuracy of face recognition under large illumination variation, pose change, and partial occlusion.

  9. Faces in places: humans and machines make similar face detection errors.

    Directory of Open Access Journals (Sweden)

    Bernard Marius 't Hart

    Full Text Available The human visual system seems to be particularly efficient at detecting faces. This efficiency sometimes comes at the cost of wrongfully seeing faces in arbitrary patterns, including famous examples such as a rock configuration on Mars or a toast's roast patterns. In machine vision, face detection has made considerable progress and has become a standard feature of many digital cameras. The arguably most wide-spread algorithm for such applications (the "Viola-Jones" algorithm) achieves high detection rates at high computational efficiency. To what extent do the patterns that the algorithm mistakenly classifies as faces also fool humans? We selected three kinds of stimuli from real-life, first-person perspective movies based on the algorithm's output: correct detections ("real faces"), false positives ("illusory faces") and correctly rejected locations ("non faces"). Observers were shown pairs of these for 20 ms and had to direct their gaze to the location of the face. We found that illusory faces were mistaken for faces more frequently than non faces. In addition, rotation of the real face yielded more errors, while rotation of the illusory face yielded fewer errors. Using colored stimuli increases overall performance, but does not change the pattern of results. When replacing the eye movement by a manual response, however, the preference for illusory faces over non faces disappeared. Taken together, our data show that humans make similar face-detection errors as the Viola-Jones algorithm, when directing their gaze to briefly presented stimuli. In particular, the relative spatial arrangement of oriented filters seems of relevance. This suggests that efficient face detection in humans is likely to be pre-attentive and based on rather simple features as those encoded in the early visual system.

  10. Pose-Invariant Face Recognition via RGB-D Images

    Directory of Open Access Journals (Sweden)

    Gaoli Sang

    2016-01-01

    Full Text Available Three-dimensional (3D) face models can intrinsically handle large pose face recognition problem. In this paper, we propose a novel pose-invariant face recognition method via RGB-D images. By employing depth, our method is able to handle self-occlusion and deformation, both of which are challenging problems in two-dimensional (2D) face recognition. Texture images in the gallery can be rendered to the same view as the probe via depth. Meanwhile, depth is also used for similarity measure via frontalization and symmetric filling. Finally, both texture and depth contribute to the final identity estimation. Experiments on Bosphorus, CurtinFaces, Eurecom, and Kiwi databases demonstrate that the additional depth information has improved the performance of face recognition with large pose variations and under even more challenging conditions.

  11. Human Age Estimation Method Robust to Camera Sensor and/or Face Movement

    OpenAIRE

    Dat Tien Nguyen; So Ra Cho; Tuyen Danh Pham; Kang Ryoung Park

    2015-01-01

    Human age can be employed in many useful real-life applications, such as customer service systems, automatic vending machines, entertainment, etc. In order to obtain age information, image-based age estimation systems have been developed using information from the human face. However, limitations exist for current age estimation systems because of the various factors of camera motion and optical blurring, facial expressions, gender, etc. Motion blurring can usually be presented on face images...

  12. A new method for face detection in colour images for emotional bio-robots

    Institute of Scientific and Technical Information of China (English)

    HAPESHI; Kevin

    2010-01-01

    Emotional bio-robots have become a hot research topic in the last two decades. Although some progress has been made in the research, design and development of various emotional bio-robots, few of them can be used in practical applications. The study of emotional bio-robots demands multi-disciplinary co-operation: it involves computer science, artificial intelligence, 3D computation, engineering system modelling, analysis and simulation, bionics engineering, automatic control, image processing and pattern recognition, etc. Among these, face detection belongs to image processing and pattern recognition. An emotional robot must have the ability to recognize various objects, and in particular it is very important for a bio-robot to be able to recognize human faces in an image. In this paper, a face detection method is proposed for identifying any human faces in colour images using a human skin model and an eye detection method. First, the method detects skin regions in the input colour image after normalizing its luminance. Then, all face candidates are identified using an eye detection method. Compared with existing algorithms, this method relies only on the colour and geometrical data of the human face rather than on training datasets. Experimental results show that this method is effective and fast, and it can be applied to the development of an emotional bio-robot with further improvements in its speed and accuracy.

  13. Face Spoof Attack Recognition Using Discriminative Image Patches

    Directory of Open Access Journals (Sweden)

    Zahid Akhtar

    2016-01-01

    Full Text Available Face recognition systems are now being used in many applications such as border crossings, banks, and mobile payments. The wide scale deployment of facial recognition systems has attracted intensive attention to the reliability of face biometrics against spoof attacks, where a photo, a video, or a 3D mask of a genuine user's face can be used to gain illegitimate access to facilities or services. Though several face antispoofing or liveness detection methods (which determine at the time of capture whether a face is live or spoof) have been proposed, the issue is still unsolved due to difficulty in finding discriminative and computationally inexpensive features and methods for spoof attacks. In addition, existing techniques use the whole face image or complete video for liveness detection. However, often certain face regions (video frames) are redundant or correspond to the clutter in the image (video), thus leading generally to low performances. Therefore, we propose seven novel methods to find discriminative image patches, which we define as regions that are salient, instrumental, and class-specific. Four well-known classifiers, namely, support vector machine (SVM), Naive-Bayes, Quadratic Discriminant Analysis (QDA), and Ensemble, are then used to distinguish between genuine and spoof faces using a voting based scheme. Experimental analysis on two publicly available databases (Idiap REPLAY-ATTACK and CASIA-FASD) shows promising results compared to existing works.
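    A rough sketch of the patch-plus-voting idea: fixed-size patches are extracted from a face image, each patch is classified by a trained SVM, and the live/spoof decision is a majority vote. The grid-based patch selection and random placeholder data below stand in for the paper's seven saliency-based patch-selection methods and real training sets:

```python
import numpy as np
from sklearn.svm import SVC

def grid_patches(gray, patch=32, stride=32):
    """Split a grayscale face image into flattened fixed-size patches."""
    h, w = gray.shape
    return np.array([gray[y:y + patch, x:x + patch].ravel()
                     for y in range(0, h - patch + 1, stride)
                     for x in range(0, w - patch + 1, stride)])

def predict_liveness(gray, svm):
    """Majority vote over per-patch SVM decisions (1 = live, 0 = spoof)."""
    votes = svm.predict(grid_patches(gray))
    return int(votes.mean() >= 0.5)

# Training sketch with random placeholder "live" and "spoof" images.
live = np.random.rand(128, 128)
spoof = np.random.rand(128, 128)
p_live, p_spoof = grid_patches(live), grid_patches(spoof)
X = np.vstack([p_live, p_spoof])
y = np.array([1] * len(p_live) + [0] * len(p_spoof))
svm = SVC(kernel="rbf").fit(X, y)
print(predict_liveness(live, svm))
```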

  14. Assessing paedophilia based on the haemodynamic brain response to face images

    DEFF Research Database (Denmark)

    Ponseti, Jorge; Granert, Oliver; Van Eimeren, Thilo

    2016-01-01

    that human face processing is tuned to sexual age preferences. This observation prompted us to test whether paedophilia can be inferred based on the haemodynamic brain responses to adult and child faces. METHODS: Twenty-four men sexually attracted to prepubescent boys or girls (paedophiles) and 32 men ... sexually attracted to men or women (teleiophiles) were exposed to images of child and adult, male and female faces during a functional magnetic resonance imaging (fMRI) session. RESULTS: A cross-validated, automatic pattern classification algorithm of brain responses to facial stimuli yielded four...

  15. Face Detection in Still Gray Images

    Science.gov (United States)

    2000-05-01

    statistical independence between the components. In our system we started with a manually defined set of facial components and a simple geometrical...there was at least one detection with a higher SVM output value in its neighborhood. The neighborhood in the image plane was defined as a 19x19 box...based classifier is shown in Fig. 16. A similar architecture was used for people detection [Mohan 99]. On the first level, component classifiers

  16. Modelling temporal networks of human face-to-face contacts with public activity and individual reachability

    Science.gov (United States)

    Zhang, Yi-Qing; Cui, Jing; Zhang, Shu-Min; Zhang, Qi; Li, Xiang

    2016-02-01

    Modelling temporal networks of human face-to-face contacts is vital both for understanding the spread of airborne pathogens and for the word-of-mouth spreading of information. Although many efforts have been devoted to modelling these temporal networks, two important social features, public activity and individual reachability, have still been ignored in these models. Here we present a simple model that captures these two features and other typical properties of empirical face-to-face contact networks. The model describes agents which are characterized by an attractiveness that slows down the motion of nearby people, have an event-triggered activity probability, and perform an activity-dependent biased random walk in a square box with periodic boundaries. The model quantitatively reproduces two empirical temporal networks of human face-to-face contacts, as verified by their network properties and the epidemic spreading dynamics on them.
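    A highly simplified, illustrative sketch of an agent-based model in this spirit: agents perform a random walk in a periodic square box, an attractiveness parameter slows down nearby walkers, activity states are toggled over time, and a temporal contact is recorded whenever two active agents come within a given radius. The parameter values and update rules are not those of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, steps, radius = 100, 1.0, 500, 0.02
pos = rng.random((N, 2)) * L
attract = rng.random(N)            # attractiveness of each agent
active = rng.random(N) < 0.5       # activity state (event-triggered in the paper)
contacts = []                      # temporal contact list of (time, i, j)

for t in range(steps):
    for i in range(N):
        if not active[i]:
            continue
        # Minimal-image distances under periodic boundary conditions.
        d = np.linalg.norm((pos - pos[i] + L / 2) % L - L / 2, axis=1)
        near = (d < radius) & (d > 0)
        slow = attract[near].max() if near.any() else 0.0
        step_size = 0.01 * (1.0 - slow)   # attractive neighbours slow motion
        pos[i] = (pos[i] + rng.normal(0, step_size, 2)) % L
        for j in np.flatnonzero(near):
            if active[j] and i < j:
                contacts.append((t, i, j))
    # Toggle a few activity states to mimic event-triggered activation.
    flip = rng.random(N) < 0.05
    active[flip] = ~active[flip]
```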

  17. Human Wagering Behavior Depends on Opponents' Faces

    Science.gov (United States)

    Schlicht, Erik J.; Shimojo, Shinsuke; Camerer, Colin F.; Battaglia, Peter; Nakayama, Ken

    2010-01-01

    Research in competitive games has exclusively focused on how opponent models are developed through previous outcomes and how peoples' decisions relate to normative predictions. Little is known about how rapid impressions of opponents operate and influence behavior in competitive economic situations, although such subjective impressions have been shown to influence cooperative decision-making. This study investigates whether an opponent's face influences players' wagering decisions in a zero-sum game with hidden information. Participants made risky choices in a simplified poker task while being presented opponents whose faces differentially correlated with subjective impressions of trust. Surprisingly, we find that threatening face information has little influence on wagering behavior, but faces relaying positive emotional characteristics impact peoples' decisions. Thus, people took significantly longer and made more mistakes against emotionally positive opponents. Differences in reaction times and percent correct were greatest around the optimal decision boundary, indicating that face information is predominantly used when making decisions during medium-value gambles. Mistakes against emotionally positive opponents resulted from increased folding rates, suggesting that participants may have believed that these opponents were betting with hands of greater value than other opponents. According to these results, the best “poker face” for bluffing may not be a neutral face, but rather a face that contains emotional correlates of trustworthiness. Moreover, it suggests that rapid impressions of an opponent play an important role in competitive games, especially when people have little or no experience with an opponent. PMID:20657772

  18. Intelligent Sensor for Image Control Point of Eigenfaces for Face Recognition

    Directory of Open Access Journals (Sweden)

    Mohamed L. Toure

    2010-01-01

    Full Text Available Problem statement: The sensor for the image control point in Face Recognition (FR) is one of the most active research areas in computer vision and pattern recognition. Its practical applications include forensic identification, access control and human-computer interfaces. The task of an FR system is to compare an input face image against a database containing a set of face samples of known identity and to identify the subject to which the input face belongs. However, a straightforward implementation is difficult, since faces exhibit significant variations in appearance due to acquisition, illumination, pose and aging variations. This research dealt with several images combined through image registration, offering the possibility of improving eigenface recognition. Sensor detection by head orientation for the image control point of the training sets collected in a database is discussed. Approach: The aim of this research was, first, the identification task of face recognition and the possibility of improving eigenface recognition. The eigenface approach therefore focused on three fundamental points: generating eigenfaces, classification, and identification; the method used an image processing toolbox to perform the matrix calculations. Results: Observation showed that the performance of the proposed technique proved to be less affected by registration errors. Conclusion/Recommendations: We presented an intelligent sensor for face recognition using the image control point of eigenfaces. It is important to note that many applications of face recognition do not require perfect identification, although most require a low false-positive rate. In searching a large database of faces, for example, it may be preferable to find a small set of likely matches to present to the user.
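    As a reference sketch of the eigenface pipeline discussed above (generate eigenfaces by PCA, then identify by nearest neighbour in the projected space), assuming vectorized grayscale gallery images; the component count and distance metric are illustrative:

```python
import numpy as np

def train_eigenfaces(train_imgs, num_components=50):
    """train_imgs: (n, d) vectorized faces. Returns the mean face and the
    top principal components (eigenfaces) computed via SVD."""
    mean = train_imgs.mean(axis=0)
    U, S, Vt = np.linalg.svd(train_imgs - mean, full_matrices=False)
    return mean, Vt[:num_components]        # eigenfaces as rows

def identify(test_img, mean, eigenfaces, train_imgs, train_labels):
    """Project gallery and probe onto the eigenfaces and return the
    label of the nearest gallery face."""
    gallery = (train_imgs - mean) @ eigenfaces.T
    probe = (test_img - mean) @ eigenfaces.T
    nearest = np.argmin(np.linalg.norm(gallery - probe, axis=1))
    return train_labels[nearest]
```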

  19. Face recognition with multi-resolution spectral feature images.

    Directory of Open Access Journals (Sweden)

    Zhan-Li Sun

    Full Text Available The one-sample-per-person problem has become an active research topic for face recognition in recent years because of its challenges and significance for real-world applications. However, achieving relatively higher recognition accuracy is still a difficult problem due to, usually, too few training samples being available and variations of illumination and expression. To alleviate the negative effects caused by these unfavorable factors, in this paper we propose a more accurate spectral feature image-based 2DLDA (two-dimensional linear discriminant analysis) ensemble algorithm for face recognition, with one sample image per person. In our algorithm, multi-resolution spectral feature images are constructed to represent the face images; this can greatly enlarge the training set. The proposed method is inspired by our finding that, among these spectral feature images, features extracted from some orientations and scales using 2DLDA are not sensitive to variations of illumination and expression. In order to maintain the positive characteristics of these filters and to make correct category assignments, the strategy of classifier committee learning (CCL) is designed to combine the results obtained from different spectral feature images. Using the above strategies, the negative effects caused by those unfavorable factors can be alleviated efficiently in face recognition. Experimental results on the standard databases demonstrate the feasibility and efficiency of the proposed method.

  20. Face recognition with multi-resolution spectral feature images.

    Science.gov (United States)

    Sun, Zhan-Li; Lam, Kin-Man; Dong, Zhao-Yang; Wang, Han; Gao, Qing-Wei; Zheng, Chun-Hou

    2013-01-01

    The one-sample-per-person problem has become an active research topic for face recognition in recent years because of its challenges and significance for real-world applications. However, achieving relatively higher recognition accuracy is still a difficult problem due to, usually, too few training samples being available and variations of illumination and expression. To alleviate the negative effects caused by these unfavorable factors, in this paper we propose a more accurate spectral feature image-based 2DLDA (two-dimensional linear discriminant analysis) ensemble algorithm for face recognition, with one sample image per person. In our algorithm, multi-resolution spectral feature images are constructed to represent the face images; this can greatly enlarge the training set. The proposed method is inspired by our finding that, among these spectral feature images, features extracted from some orientations and scales using 2DLDA are not sensitive to variations of illumination and expression. In order to maintain the positive characteristics of these filters and to make correct category assignments, the strategy of classifier committee learning (CCL) is designed to combine the results obtained from different spectral feature images. Using the above strategies, the negative effects caused by those unfavorable factors can be alleviated efficiently in face recognition. Experimental results on the standard databases demonstrate the feasibility and efficiency of the proposed method.
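
    The abstract does not spell out how the multi-resolution spectral feature images are built, so the sketch below assumes a Gabor-style filter bank over a few scales and orientations, which is one common way to obtain orientation- and scale-indexed feature images; the kernel parameters and bank size are illustrative only. Each resulting feature image would then feed its own 2DLDA classifier, with the committee combining the per-image decisions.

```python
# Build multi-resolution, multi-orientation feature images for one face with a
# Gabor filter bank (an assumed stand-in for the paper's spectral feature images).
import cv2
import numpy as np

face = np.random.rand(64, 64).astype(np.float32)   # stand-in for one face image

feature_images = []
for sigma in (2.0, 4.0, 8.0):                      # three filter scales
    for k in range(4):                             # four orientations
        theta = k * np.pi / 4
        kernel = cv2.getGaborKernel(ksize=(31, 31), sigma=sigma, theta=theta,
                                    lambd=2.0 * sigma, gamma=0.5)
        feature_images.append(cv2.filter2D(face, cv2.CV_32F, kernel))

# One classifier per feature image, decisions combined by the committee.
print(len(feature_images), "feature images of shape", feature_images[0].shape)
```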

  1. EEG source imaging assists decoding in a face recognition task

    DEFF Research Database (Denmark)

    Andersen, Rasmus S.; Eliasen, Anders U.; Pedersen, Nicolai

    2017-01-01

    EEG based brain state decoding has numerous applications. State of the art decoding is based on processing of the multivariate sensor space signal, however evidence is mounting that EEG source reconstruction can assist decoding. EEG source imaging leads to high-dimensional representations...... of face recognition. This task concerns the differentiation of brain responses to images of faces and scrambled faces and poses a rather difficult decoding problem at the single trial level. We implement the pipeline using spatially focused features and show that this approach is challenged and source...... imaging does not lead to an improved decoding. We design a distributed pipeline in which the classifier has access to brain wide features which in turn does lead to a 15% reduction in the error rate using source space features. Hence, our work presents supporting evidence for the hypothesis that source...

  2. Exploring manifold structure of face images via multiple graphs

    KAUST Repository

    Alghamdi, Masheal

    2013-12-24

    Geometric structure in the data provides important information for face image recognition and classification tasks. Graph regularized non-negative matrix factorization (GrNMF) performs well in this task. However, it is sensitive to parameter selection. Wang et al. proposed multiple graph regularized non-negative matrix factorization (MultiGrNMF) to solve the parameter selection problem by testing it on medical images. In this paper, we introduce the MultiGrNMF algorithm in the context of still face image classification, and conduct a comparative study of NMF, GrNMF, and MultiGrNMF using two well-known face databases. Experimental results show that MultiGrNMF outperforms NMF and GrNMF for most cases.
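
    For orientation, a plain NMF decomposition of vectorised face images (the baseline that GrNMF and MultiGrNMF extend with graph regularisation) can be run with scikit-learn as below; the data here are random stand-ins, and the component count is arbitrary.

```python
# Plain NMF on a matrix of vectorised, non-negative face images.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
X = rng.random((100, 32 * 32))        # 100 flattened face images (non-negative)

model = NMF(n_components=25, init="nndsvda", max_iter=500, random_state=1)
W = model.fit_transform(X)            # per-image coefficients
H = model.components_                 # basis images ("parts" of faces)
print("reconstruction error:", model.reconstruction_err_)
```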

  3. Dogs can discriminate emotional expressions of human faces.

    Science.gov (United States)

    Müller, Corsin A; Schmitt, Kira; Barber, Anjuli L A; Huber, Ludwig

    2015-03-01

    The question of whether animals have emotions and respond to the emotional expressions of others has become a focus of research in the last decade [1-9]. However, to date, no study has convincingly shown that animals discriminate between emotional expressions of heterospecifics, excluding the possibility that they respond to simple cues. Here, we show that dogs use the emotion of a heterospecific as a discriminative cue. After learning to discriminate between happy and angry human faces in 15 picture pairs, whereby for one group only the upper halves of the faces were shown and for the other group only the lower halves of the faces were shown, dogs were tested with four types of probe trials: (1) the same half of the faces as in the training but of novel faces, (2) the other half of the faces used in training, (3) the other half of novel faces, and (4) the left half of the faces used in training. We found that dogs for which the happy faces were rewarded learned the discrimination more quickly than dogs for which the angry faces were rewarded. This would be predicted if the dogs recognized an angry face as an aversive stimulus. Furthermore, the dogs performed significantly above chance level in all four probe conditions and thus transferred the training contingency to novel stimuli that shared with the training set only the emotional expression as a distinguishing feature. We conclude that the dogs used their memories of real emotional human faces to accomplish the discrimination task.

  4. Image Region Selection and Ensemble for Face Recognition

    Institute of Scientific and Technical Information of China (English)

    Xin Geng; Zhi-Hua Zhou

    2006-01-01

    In this paper, a novel framework for face recognition, namely Selective Ensemble of Image Regions (SEIR), is proposed. In this framework, all possible regions in the face image are regarded as a certain kind of features. There are two main steps in SEIR: the first step is to automatically select several regions from all possible candidates; the second step is to construct classifier ensemble from the selected regions. An implementation of SEIR based on multiple eigenspaces, namely SEME, is also proposed in this paper. SEME is analyzed and compared with eigenface, PCA + LDA, eigenfeature, and eigenface + eigenfeature through experiments. The experimental results show that SEME achieves the best performance.
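
    A toy sketch of the SEIR idea follows: treat image regions as features, build one eigenspace (PCA) classifier per selected region, and combine the members by majority vote. The region coordinates, ensemble size, and synthetic data are illustrative and not taken from the paper, which selects regions automatically.

```python
# Region-based ensemble: one PCA + nearest-neighbour classifier per region,
# combined by majority vote over the predicted identities.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(2)
n, h, w = 60, 32, 32
faces = rng.normal(size=(n, h, w))                 # stand-in face images
labels = np.repeat(np.arange(12), 5)               # 12 subjects, 5 images each

regions = [(0, 16, 0, 32), (16, 32, 0, 32), (8, 24, 8, 24)]   # (r0, r1, c0, c1)
members = []
for r0, r1, c0, c1 in regions:
    Xr = faces[:, r0:r1, c0:c1].reshape(n, -1)
    pca = PCA(n_components=10).fit(Xr)
    clf = KNeighborsClassifier(n_neighbors=1).fit(pca.transform(Xr), labels)
    members.append(((r0, r1, c0, c1), pca, clf))

probe = rng.normal(size=(h, w))
votes = [clf.predict(pca.transform(probe[r0:r1, c0:c1].reshape(1, -1)))[0]
         for (r0, r1, c0, c1), pca, clf in members]
print("ensemble vote:", np.bincount(votes).argmax())
```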

  5. Face Detection Using Discrete Gabor Jets and a Probabilistic Model of Colored Image Patches

    Science.gov (United States)

    Hoffmann, Ulrich; Naruniec, Jacek; Yazdani, Ashkan; Ebrahimi, Touradj

    Face detection makes it possible to recognize and locate human faces and provides information about their location in a given image. Many applications such as biometrics, face recognition, and video surveillance employ face detection as one of their main modules. Therefore, improvement in the performance of existing face detection systems and new achievements in this field of research are of significant importance. In this paper a hierarchical classification approach for face detection is presented. In the first step, discrete Gabor jets (DGJ) are used for extracting features related to the brightness information of images and a preliminary classification is made. Afterwards, a skin detection algorithm, based on modeling of colored image patches, is employed as post-processing of the results of DGJ-based classification. It is shown that the use of color efficiently reduces the number of false positives while maintaining a high true positive rate. A comparison is made with the OpenCV implementation of the Viola and Jones face detector and it is concluded that higher correct classification rates can be attained using the proposed face detector.
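
    A rough sketch of the colour post-processing stage is given below: a simple skin mask in YCrCb space is used to reject face candidates containing too little skin. The thresholds are common rules of thumb, not the probabilistic colored-patch model of the paper, and the candidate window is synthetic.

```python
# Skin-colour check used to prune face candidates from a preliminary detector.
import cv2
import numpy as np

def skin_ratio(bgr_patch):
    """Fraction of pixels in a candidate window that fall in a skin colour range."""
    ycrcb = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    return float(np.count_nonzero(mask)) / mask.size

candidate = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)  # stand-in window
keep = skin_ratio(candidate) > 0.4     # reject likely false positives
print("accept candidate:", keep)
```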

  6. The neural code for face orientation in the human fusiform face area.

    Science.gov (United States)

    Ramírez, Fernando M; Cichy, Radoslaw M; Allefeld, Carsten; Haynes, John-Dylan

    2014-09-01

    Humans recognize faces and objects with high speed and accuracy regardless of their orientation. Recent studies have proposed that orientation invariance in face recognition involves an intermediate representation where neural responses are similar for mirror-symmetric views. Here, we used fMRI, multivariate pattern analysis, and computational modeling to investigate the neural encoding of faces and vehicles at different rotational angles. Corroborating previous studies, we demonstrate a representation of face orientation in the fusiform face-selective area (FFA). We go beyond these studies by showing that this representation is category-selective and tolerant to retinal translation. Critically, by controlling for low-level confounds, we found the representation of orientation in FFA to be compatible with a linear angle code. Aspects of mirror-symmetric coding cannot be ruled out when FFA mean activity levels are considered as a dimension of coding. Finally, we used a parametric family of computational models, involving a biased sampling of view-tuned neuronal clusters, to compare different face angle encoding models. The best fitting model exhibited a predominance of neuronal clusters tuned to frontal views of faces. In sum, our findings suggest a category-selective and monotonic code of face orientation in the human FFA, in line with primate electrophysiology studies that observed mirror-symmetric tuning of neural responses at higher stages of the visual system, beyond the putative homolog of human FFA.

  7. Neural representations of faces and body parts in macaque and human cortex: a comparative FMRI study.

    Science.gov (United States)

    Pinsk, Mark A; Arcaro, Michael; Weiner, Kevin S; Kalkus, Jan F; Inati, Souheil J; Gross, Charles G; Kastner, Sabine

    2009-05-01

    Single-cell studies in the macaque have reported selective neural responses evoked by visual presentations of faces and bodies. Consistent with these findings, functional magnetic resonance imaging studies in humans and monkeys indicate that regions in temporal cortex respond preferentially to faces and bodies. However, it is not clear how these areas correspond across the two species. Here, we directly compared category-selective areas in macaques and humans using virtually identical techniques. In the macaque, several face- and body part-selective areas were found located along the superior temporal sulcus (STS) and middle temporal gyrus (MTG). In the human, similar to previous studies, face-selective areas were found in ventral occipital and temporal cortex and an additional face-selective area was found in the anterior temporal cortex. Face-selective areas were also found in lateral temporal cortex, including the previously reported posterior STS area. Body part-selective areas were identified in the human fusiform gyrus and lateral occipitotemporal cortex. In a first experiment, both monkey and human subjects were presented with pictures of faces, body parts, foods, scenes, and man-made objects, to examine the response profiles of each category-selective area to the five stimulus types. In a second experiment, face processing was examined by presenting upright and inverted faces. By comparing the responses and spatial relationships of the areas, we propose potential correspondences across species. Adjacent and overlapping areas in the macaque anterior STS/MTG responded strongly to both faces and body parts, similar to areas in the human fusiform gyrus and posterior STS. Furthermore, face-selective areas on the ventral bank of the STS/MTG discriminated both upright and inverted faces from objects, similar to areas in the human ventral temporal cortex. Overall, our findings demonstrate commonalities and differences in the wide-scale brain organization between the two species.

  8. The Faces in Radiological Images: Fusiform Face Area Supports Radiological Expertise.

    Science.gov (United States)

    Bilalić, Merim; Grottenthaler, Thomas; Nägele, Thomas; Lindig, Tobias

    2016-03-01

    The fusiform face area (FFA) has often been used as an example of a brain module that was developed through evolution to serve a specific purpose-face processing. Many believe, however, that FFA is responsible for holistic processing associated with any kind of expertise. The expertise view has been tested with various stimuli, with mixed results. One of the main stumbling blocks in the FFA controversy has been the fact that the stimuli used have been similar to faces. Here, we circumvent the problem by using radiological images, X-rays, which bear no resemblance to faces. We demonstrate that FFA can distinguish between X-rays and other stimuli by employing multivariate pattern analysis. The sensitivity to X-rays was significantly better in experienced radiologists than that in medical students with limited radiological experience. For the radiologists, it was also possible to use the patterns of FFA activations obtained on faces to differentiate X-ray stimuli from other stimuli. The overlap in the FFA activation is not based on visual similarity of faces and X-rays but rather on the processes necessary for expertise with both kinds of stimulus. Our results support the expertise view that FFA's main function is related to holistic processing.

  9. Robust Multi biometric Recognition Using Face and Ear Images

    CERN Document Server

    Boodoo, Nazmeen Bibi

    2009-01-01

    This study investigates the use of the ear as a biometric for authentication and shows experimental results obtained on a newly created dataset of 420 images. Images are passed to a quality module in order to reduce the False Rejection Rate. The Principal Component Analysis (eigen ear) approach was used, obtaining a 90.7 percent recognition rate. Improvement in recognition results is obtained when the ear biometric is fused with the face biometric. The fusion is done at decision level, achieving a recognition rate of 96 percent.
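
    Decision-level fusion of the kind described can be as simple as the sketch below, where each modality reports an identity and a confidence and the fused decision keeps the agreement or, failing that, the more confident matcher; the exact rule used in the study is not specified here, so this is only one plausible variant.

```python
# Decision-level fusion of independent face and ear matchers.
def fuse_decisions(face_decision, ear_decision):
    """Each decision is a (predicted_identity, confidence in [0, 1]) pair."""
    face_id, face_conf = face_decision
    ear_id, ear_conf = ear_decision
    if face_id == ear_id:
        return face_id                                    # both matchers agree
    return face_id if face_conf >= ear_conf else ear_id   # otherwise trust the stronger one

print(fuse_decisions(("subject_07", 0.81), ("subject_07", 0.64)))  # agreement
print(fuse_decisions(("subject_07", 0.55), ("subject_12", 0.72)))  # disagreement
```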

  10. Whole-face procedures for recovering facial images from memory.

    Science.gov (United States)

    Frowd, Charlie D; Skelton, Faye; Hepton, Gemma; Holden, Laura; Minahil, Simra; Pitchford, Melanie; McIntyre, Alex; Brown, Charity; Hancock, Peter J B

    2013-06-01

    Research has indicated that traditional methods for accessing facial memories usually yield unidentifiable images. Recent research, however, has made important improvements in this area to the witness interview, the method used for constructing the face, and the recognition of finished composites. Here, we investigated whether three of these improvements would produce even more recognisable images when used in conjunction with each other. The techniques are holistic in nature: they involve processes which operate on an entire face. Forty participants first inspected an unfamiliar target face. Nominally 24 h later, they were interviewed using a standard type of cognitive interview (CI) to recall the appearance of the target, or an enhanced 'holistic' interview where the CI was followed by procedures for focussing on the target's character. Participants then constructed a composite using EvoFIT, a recognition-type system that requires repeatedly selecting items from face arrays, with 'breeding', to 'evolve' a composite. They either saw faces in these arrays with blurred external features, or an enhanced method where these faces were presented with masked external features. Then, further participants attempted to name the composites, first by looking at the face front-on, the normal method, and then for a second time by looking at the face side-on, which research demonstrates facilitates recognition. All techniques improved correct naming on their own, but together promoted highly-recognisable composites with mean naming at 74% correct. The implication is that these techniques, if used together by practitioners, should substantially increase the detection of suspects using this forensic method of person identification. Copyright © 2013 Forensic Science Society. Published by Elsevier Ireland Ltd. All rights reserved.

  11. Fusion of Daubechies Wavelet Coefficients for Human Face Recognition

    CERN Document Server

    Bhowmik, Mrinal Kanti; Nasipuri, Mita; Basu, Dipak Kumar; Kundu, Mahantapas

    2010-01-01

    In this paper fusion of visual and thermal images in wavelet transformed domain has been presented. Here, Daubechies wavelet transform, called as D2, coefficients from visual and corresponding coefficients computed in the same manner from thermal images are combined to get fused coefficients. After decomposition up to fifth level (Level 5) fusion of coefficients is done. Inverse Daubechies wavelet transform of those coefficients gives us fused face images. The main advantage of using wavelet transform is that it is well-suited to manage different image resolution and allows the image decomposition in different kinds of coefficients, while preserving the image information. Fused images thus found are passed through Principal Component Analysis (PCA) for reduction of dimensions and then those reduced fused images are classified using a multi-layer perceptron. For experiments IRIS Thermal/Visual Face Database was used. Experimental results show that the performance of the approach presented here achieves a maximum success rate of 100% in many cases.

  12. Ultrahigh speed en face OCT capsule for endoscopic imaging.

    Science.gov (United States)

    Liang, Kaicheng; Traverso, Giovanni; Lee, Hsiang-Chieh; Ahsen, Osman Oguz; Wang, Zhao; Potsaid, Benjamin; Giacomelli, Michael; Jayaraman, Vijaysekhar; Barman, Ross; Cable, Alex; Mashimo, Hiroshi; Langer, Robert; Fujimoto, James G

    2015-04-01

    Depth resolved and en face OCT visualization in vivo may have important clinical applications in endoscopy. We demonstrate a high speed, two-dimensional (2D) distal scanning capsule with a micromotor for fast rotary scanning and a pneumatic actuator for precision longitudinal scanning. Longitudinal position measurement and image registration were performed by optical tracking of the pneumatic scanner. The 2D scanning device enables high resolution imaging over a small field of view and is suitable for OCT as well as other scanning microscopies. Large field of view imaging for screening or surveillance applications can also be achieved by proximally pulling back or advancing the capsule while scanning the distal high-speed micromotor. Circumferential en face OCT was demonstrated in living swine at 250 Hz frame rate and 1 MHz A-scan rate using a MEMS tunable VCSEL light source at 1300 nm. Cross-sectional and en face OCT views of the upper and lower gastrointestinal tract were generated with precision distal pneumatic longitudinal actuation as well as proximal manual longitudinal actuation. These devices could enable clinical studies either as an adjunct to endoscopy, attached to an endoscope, or as a swallowed tethered capsule for non-endoscopic imaging without sedation. The combination of ultrahigh speed imaging and distal scanning capsule technology could enable both screening and surveillance applications.

  13. Face image analysis using a multiple features fitting strategy

    OpenAIRE

    Romdhani, Sami

    2005-01-01

    The main contribution of this thesis is a novel algorithm for fitting a Three-Dimensional Morphable Model of faces to a 2D input image. This fitting algorithm enables the estimation of the 3D shape, the texture, the 3D pose and the light direction from a single input image. Generally, the algorithms tackling the problem of 3D shape estimation from image data use only the pixels intensity as input to drive the estimation process. This was previously achieved using either a simple model, such as ...

  14. Tracking Human Faces in Infrared Video

    Science.gov (United States)

    2006-01-01

    environments, but allow for warm items such as computers in the background, without significant distraction from clutter. Such an ability to differentiate...images show the (posterior) probability-of-skin images without and with adaptation. Note how some background objects such as the warm chassis of a...June 1998. [5] M. Isard and J. MacCormick, "Bramble: A Bayesian multiple-blob tracker," in Proc. 8th Int. Conf. Computer Vision, 2001. [6] L. Wolff, D

  15. [Health and humanization Diploma: the value of reflection and face to face learning].

    Science.gov (United States)

    Martínez-Gutiérrez, Javiera; Magliozzi, Pietro; Torres, Patricio; Soto, Mauricio; Walker, Rosa

    2015-03-01

    In a rapidly changing culture like ours, with emphasis on productivity, there is a strong need to find the meaning of health care work using learning instances that privilege reflection and face to face contact with others. The Diploma in Health and Humanization (DSH) was developed as an interdisciplinary space for training on issues related to humanization. The aim was to analyze the experience of the DSH, identifying the elements that students considered key factors for the success of the program. We conducted a focus group with DSH graduates, identifying factors associated with satisfaction. Transcripts were coded and analyzed by two independent reviewers. DSH graduates valued a safe space, personal interaction, dialogue and respect as learning tools of the DSH. They also appreciated the opportunity to have emotional interactions among students and between them and the teacher, as well as the opportunity to share personal stories and their own search for meaning. The DSH is a learning experience in which its graduates value the ability to think about their vocation and the affective interaction with peers and teachers. We hope to contribute to the development of face to face courses in the area of humanization. Face to face methodology is an excellent teaching technique for contents related to the meaning of work, and more specifically, for a group of learners that require affective communication and a personal connection of their work with their own values and beliefs.

  16. Transparent face recognition in an unconstrained environment using a Sparse representation from multiple still images

    NARCIS (Netherlands)

    B.A.M. Ben Schouten; Dr. Johan Tangelder

    2006-01-01

    In a real-world environment a face detector can be applied to extract multiple face images from multiple video streams without constraints on pose and illumination. The extracted face images will have varying image quality and resolution. Moreover, also the detected faces will not be precisely

  17. Transparent face recognition in an unconstrained environment using a Sparse representation from multiple still images

    NARCIS (Netherlands)

    Tangelder, Johan; Schouten, Ben

    2006-01-01

    In a real-world environment a face detector can be applied to extract multiple face images from multiple video streams without constraints on pose and illumination. The extracted face images will have varying image quality and resolution. Moreover, also the detected faces will not be precisely align

  18. Distinct representations of configural and part information across multiple face-selective regions of the human brain

    OpenAIRE

    Golijeh eGolarai; Dara eGhahremani; Eberhardt, Jennifer L.; John D E Gabrieli

    2015-01-01

    Several regions of the human brain respond more strongly to faces than to other visual stimuli, such as regions in the amygdala (AMG), superior temporal sulcus (STS), and the fusiform face area (FFA). It is unclear if these brain regions are similar in representing the configuration or natural appearance of face parts. We used functional magnetic resonance imaging of healthy adults who viewed natural or schematic faces with internal parts that were either normally configured or randomly rearr...

  19. Sensory competition in the face processing areas of the human brain.

    Directory of Open Access Journals (Sweden)

    Krisztina Nagy

    Full Text Available The concurrent presentation of multiple stimuli in the visual field may trigger mutually suppressive interactions throughout the ventral visual stream. While several studies have been performed on sensory competition effects among non-face stimuli, relatively little is known about the interactions in the human brain for multiple face stimuli. In the present study we analyzed the neuronal basis of sensory competition in an event-related functional magnetic resonance imaging (fMRI) study using multiple face stimuli. We varied the ratio of faces and phase-noise images within a composite display with a constant number of peripheral stimuli, thereby manipulating the competitive interactions between faces. For contralaterally presented stimuli we observed strong competition effects in the fusiform face area (FFA) bilaterally and in the right lateral occipital area (LOC), but not in the occipital face area (OFA), suggesting their different roles in sensory competition. When we increased the spatial distance among pairs of faces the magnitude of suppressive interactions was reduced in the FFA. Surprisingly, the magnitude of competition depended on the visual hemifield of the stimuli: ipsilateral stimulation reduced the competition effects somewhat in the right LOC while it increased them in the left LOC. This suggests a left hemifield dominance of sensory competition. Our results support the sensory competition theory in the processing of multiple faces and suggest that sensory competition occurs in several cortical areas in both cerebral hemispheres.

  20. Adaptive feature-specific imaging: a face recognition example.

    Science.gov (United States)

    Baheti, Pawan K; Neifeld, Mark A

    2008-04-01

    We present an adaptive feature-specific imaging (AFSI) system and consider its application to a face recognition task. The proposed system makes use of previous measurements to adapt the projection basis at each step. Using sequential hypothesis testing, we compare AFSI with static-FSI (SFSI) and static or adaptive conventional imaging in terms of the number of measurements required to achieve a specified probability of misclassification (Pe). The AFSI system exhibits significant improvement compared to SFSI and conventional imaging at low signal-to-noise ratio (SNR). It is shown that for M = 4 hypotheses and a desired Pe = 10^-2, AFSI requires 100 times fewer measurements than the adaptive conventional imager at SNR = -20 dB. We also show a trade-off, in terms of average detection time, between measurement SNR and adaptation advantage, resulting in an optimal value of integration time (equivalent to SNR) per measurement.
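
    The sequential-testing idea behind feature-specific imaging can be illustrated as below: accumulate log-likelihoods of noisy projected measurements for M candidate faces and stop once one hypothesis dominates. The projections here are random (closer to static FSI); the adaptive variant would choose each projection from the measurements so far. All numbers are placeholders.

```python
# Sequential hypothesis testing on projected, noisy measurements.
import numpy as np

rng = np.random.default_rng(3)
M, dim, noise_sigma = 4, 64, 2.0
classes = rng.normal(size=(M, dim))          # class template vectors
truth = classes[2]                           # the face actually being imaged

loglik = np.zeros(M)
for step in range(1, 200):
    p = rng.normal(size=dim)
    p /= np.linalg.norm(p)                   # one projection vector per measurement
    y = p @ truth + rng.normal(scale=noise_sigma)
    loglik += -((y - classes @ p) ** 2) / (2 * noise_sigma ** 2)
    post = np.exp(loglik - loglik.max())
    post /= post.sum()                       # posterior over the M hypotheses
    if post.max() > 0.99:                    # stop when one hypothesis dominates
        break
print(f"decided hypothesis {post.argmax()} after {step} measurements")
```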

  1. “Review on Human Face Detection based on Skin Color and Edge Information”

    Directory of Open Access Journals (Sweden)

    Divyesh S. Gondaliya

    2015-01-01

    Full Text Available Human face detection systems are increasingly used for tracking a human face. A face detection system is mainly used within a face recognition system for detecting the human face. In this review paper we describe how face detection systems work and where they are useful in real-world environments. We describe different techniques such as template matching, face detection from the skin region based on skin color and edge information, symmetry-based face detection, and others.

  2. Putting A Human Face on Equilibrium

    Science.gov (United States)

    Glickstein, Neil

    2005-03-01

    A short biography of chemist Fritz Haber is used to personalize the abstract concepts of equilibrium chemistry for high school students in an introductory course. In addition to giving the Haber Bosch process an historic, an economic, and a scientific background the reading and subsequent discussion allows students for whom the human perspective is of paramount importance a chance to investigate the irony of balance or equilibrium in Haber's life story. Since the inclusion of the Haber biography, performance in the laboratory and on examinations for those students who are usually only partially engaged has dramatically improved.

  3. The effect of image resolution on the performance of a face recognition system

    NARCIS (Netherlands)

    Boom, B.J.; Beumer, G.M.; Spreeuwers, L.J.; Veldhuis, R.N.J.

    2006-01-01

    In this paper we investigate the effect of image resolution on the error rates of a face verification system. We do not restrict ourselves to the face recognition algorithm only, but we also consider the face registration. In our face recognition system, the face registration is done by finding land

  4. Robust statistical frontalization of human and animal faces

    NARCIS (Netherlands)

    Sagonas, Christos; Panagakis, Yannis; Zafeiriou, Stefanos; Pantic, Maja

    2016-01-01

    The unconstrained acquisition of facial data in real-world conditions may result in face images with significant pose variations, illumination changes, and occlusions, affecting the performance of facial landmark localization and recognition methods. In this paper, a novel method, robust to pose,

  5. Image-Based 3D Face Modeling System

    Directory of Open Access Journals (Sweden)

    Vladimir Vezhnevets

    2005-08-01

    Full Text Available This paper describes an automatic system for 3D face modeling using frontal and profile images taken by an ordinary digital camera. The system consists of four subsystems including frontal feature detection, profile feature detection, shape deformation, and texture generation modules. The frontal and profile feature detection modules automatically extract the facial parts such as the eye, nose, mouth, and ear. The shape deformation module utilizes the detected features to deform the generic head mesh model such that the deformed model coincides with the detected features. A texture is created by combining the facial textures augmented from the input images and the synthesized texture and mapped onto the deformed generic head model. This paper provides a practical system for 3D face modeling, which is highly automated by aggregating, customizing, and optimizing a bunch of individual computer vision algorithms. The experimental results show a highly automated process of modeling, which is sufficiently robust to various imaging conditions. The whole model creation including all the optional manual corrections takes only 2∼3 minutes.

  6. An FPGA-based design of a modular approach for integral images in a real-time face detection system

    Science.gov (United States)

    Ngo, Hau T.; Rakvic, Ryan N.; Broussard, Randy P.; Ives, Robert W.

    2009-05-01

    The first step in a facial recognition system is to find and extract human faces in a static image or video frame. Most face detection methods are based on statistical models that can be trained and then used to classify faces. These methods are effective but the main drawback is speed because a massive number of sub-windows at different image scales are considered in the detection procedure. A robust face detection technique based on an encoded image known as an "integral image" has been proposed by Viola and Jones. The use of an integral image helps to reduce the number of operations to access a sub-image to a relatively small and fixed number. Additional speedup is achieved by incorporating a cascade of simple classifiers to quickly eliminate non-face sub-windows. Even with the reduced number of accesses to image data to extract features in the Viola-Jones algorithm, the number of memory accesses is still too high to support real-time operations for high-resolution images or video frames. The proposed hardware design in this research work employs a modular approach to represent the "integral image" for this memory-intensive application. An efficient memory management strategy is also proposed to aggressively utilize embedded memory modules to reduce interaction with external memory chips. The proposed design is targeted for a low-cost FPGA prototype board for a cost-effective face detection/recognition system.
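
    The software analogue of the encoding the design accelerates is shown below: an integral image lets the sum over any rectangular sub-window be read with four array accesses, which is what makes Haar-like feature evaluation cheap.

```python
# Integral image with a zero border; any rectangle sum costs four lookups.
import numpy as np

img = np.random.randint(0, 256, size=(240, 320)).astype(np.int64)
ii = np.pad(img.cumsum(axis=0).cumsum(axis=1), ((1, 0), (1, 0)))

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] computed in O(1) from the padded integral image."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

assert rect_sum(ii, 10, 20, 50, 80) == img[10:50, 20:80].sum()
print("rectangle sum:", rect_sum(ii, 10, 20, 50, 80))
```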

  7. Daubechies Wavelet Tool: Application For Human Face Recognition

    Directory of Open Access Journals (Sweden)

    Ms. Swapna M. Patil,

    2011-03-01

    Full Text Available In this paper fusion of visual and thermal images in wavelet transformed domain has been presented. Here, Daubechies wavelet transform, called as D2, coefficients from visual and corresponding coefficients computed in the same manner from thermal images are combined to get fused coefficients. After decomposition up to fifth level (Level 5) fusion of coefficients is done. Inverse Daubechies wavelet transform of those coefficients gives us fused face images. The main advantage of using wavelet transform is that it is well-suited to manage different image resolution and allows the image decomposition in different kinds of coefficients, while preserving the image information. Fused images thus found are passed through Principal Component Analysis (PCA) for reduction of dimensions and then those reduced fused images are classified using a multi-layer perceptron. For experiments IRIS Thermal/Visual Face Database was used. Experimental results show that the performance of the approach presented here achieves maximum success rate of 100% in many cases.
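
    A sketch of the fusion scheme both abstracts describe is given below: decompose the visual and thermal face images to level 5 with a Daubechies wavelet ('db2' in PyWavelets), combine the coefficients, and reconstruct the fused face. The combination rule used here (mean for the approximation, larger magnitude for details) is a common choice and an assumption, since the abstracts do not spell it out; the fused image would then go to PCA and an MLP.

```python
# Wavelet-domain fusion of a visual and a thermal face image.
import numpy as np
import pywt

visual = np.random.rand(128, 128)       # stand-ins for registered face images
thermal = np.random.rand(128, 128)

cv = pywt.wavedec2(visual, "db2", level=5)
ct = pywt.wavedec2(thermal, "db2", level=5)

fused = [(cv[0] + ct[0]) / 2.0]                             # average the approximation band
for (vh, vv, vd), (th, tv, td) in zip(cv[1:], ct[1:]):
    fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)   # keep stronger detail
                       for a, b in ((vh, th), (vv, tv), (vd, td))))

fused_face = pywt.waverec2(fused, "db2")                    # image passed on to PCA + MLP
print("fused face shape:", fused_face.shape)
```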

  8. Music-Elicited Emotion Identification Using Optical Flow Analysis of Human Face

    Science.gov (United States)

    Kniaz, V. V.; Smirnova, Z. N.

    2015-05-01

    Human emotion identification from image sequences is in high demand nowadays. The range of possible applications can vary from an automatic smile shutter function of consumer-grade digital cameras to Biofied Building technologies, which enable communication between building space and residents. The highly perceptual nature of human emotions leads to the complexity of their classification and identification. The main question arises from the subjective quality of emotional classification of events that elicit human emotions. A variety of methods for formal classification of emotions were developed in musical psychology. This work is focused on identification of human emotions evoked by musical pieces using human face tracking and optical flow analysis. A facial feature tracking algorithm used for facial feature speed and position estimation is presented. Facial features were extracted from each image sequence using human face tracking with local binary patterns (LBP) features. Accurate relative speeds of facial features were estimated using optical flow analysis. The obtained relative positions and speeds were used as the output facial emotion vector. The algorithm was tested using original software and recorded image sequences. The proposed technique proves to give robust identification of human emotions elicited by musical pieces. The estimated models could be used for human emotion identification from image sequences in such fields as emotion-based musical background or mood-dependent radio.
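
    The motion-estimation step can be sketched as follows: track feature points between two frames with pyramidal Lucas-Kanade optical flow and read off their displacements as speeds. The points here come from a generic corner detector rather than the LBP-based face tracker used in the paper, and the frames are synthetic.

```python
# Feature-point speeds from pyramidal Lucas-Kanade optical flow.
import cv2
import numpy as np

prev = (np.random.rand(240, 320) * 255).astype(np.uint8)   # stand-in frames
curr = np.roll(prev, 2, axis=1)                            # small synthetic shift

pts = cv2.goodFeaturesToTrack(prev, maxCorners=50, qualityLevel=0.01, minDistance=5)
nxt, status, err = cv2.calcOpticalFlowPyrLK(prev, curr, pts, None)

ok = status.ravel() == 1
speeds = np.linalg.norm((nxt - pts).reshape(-1, 2)[ok], axis=1)   # pixels per frame
print("mean feature speed:", speeds.mean())
```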

  9. 'Face value': new medical imaging software in commercial view.

    Science.gov (United States)

    Coopmans, Catelijne

    2011-04-01

    Based on three ethnographic vignettes describing the engagements of a small start-up company with prospective competitors, partners and customers, this paper shows how commercial considerations are folded into the ways visual images become 'seeable'. When company members mount demonstrations of prototype mammography software, they seek to generate interest but also to protect their intellectual property. Pivotal to these efforts to manage revelation and concealment is the visual interface, which is variously performed as obstacle and ally in the development of a profitable product. Using the concept of 'face value', the paper seeks to develop further insight into contemporary dynamics of seeing and showing by tracing the way techno-visual presentations and commercial considerations become entangled in practice. It also draws attention to the salience and significance of enactments of surface and depth in image-based practices.

  10. Human Bites of the Face with Tissue Losses in Cosmopolitan ...

    African Journals Online (AJOL)

    Dr. Milaki Asuku

    Abstract. A retrospective series of thirty-six cases of human bites to the face with tissue losses requiring ... other authors [3, 5]. The expression 'snatched lover' featured ... literature is replete with reports on re-implantation of ... review of 22 cases.

  11. Appearance of Symmetry, Beauty, and Health in Human Faces

    Science.gov (United States)

    Zaidel, D.W.; Aarde, S.M.; Baig, K.

    2005-01-01

    Symmetry is an important concept in biology, being related to mate selection strategies, health, and survival of species. In human faces, the relevance of left-right symmetry to attractiveness and health is not well understood. We compared the appearance of facial attractiveness, health, and symmetry in three separate experiments. Participants…

  12. A Database of Registered, Textured Models of the Human Face

    DEFF Research Database (Denmark)

    Sjöstrand, Karl; Lading, Brian

    2005-01-01

    This note describes a data set of 24 registered human faces represented by both shape and texture. The data was collected during 2003 as part of the preparation of the master thesis of Karl Sjöstrand (former name Karl Skoglund). The data is ready to be used in shape, appearance and data analysis....

  13. Can the usage of human growth hormones affect facial appearance and the accuracy of face recognition systems?

    Science.gov (United States)

    Rose, Jake; Martin, Michael; Bourlai, Thirimachos

    2014-06-01

    In law enforcement and security applications, the acquisition of face images is critical in producing key trace evidence for the successful identification of potential threats. The goal of the study is to demonstrate that steroid usage significantly affects human facial appearance and hence, the performance of commercial and academic face recognition (FR) algorithms. In this work, we evaluate the performance of state-of-the-art FR algorithms on two unique face image datasets of subjects before (gallery set) and after (probe set) steroid (or human growth hormone) usage. For the purpose of this study, datasets of 73 subjects were created from multiple sources found on the Internet, containing images of men and women before and after steroid usage. Next, we geometrically pre-processed all images of both face datasets. Then, we applied image restoration techniques on the same face datasets, and finally, we applied FR algorithms in order to match the pre-processed face images of our probe datasets against the face images of the gallery set. Experimental results demonstrate that only a specific set of FR algorithms obtain the most accurate results (in terms of the rank-1 identification rate). This is because there are several factors that influence the efficiency of face matchers including (i) the time lapse between the before and after image pre-processing and restoration face photos, (ii) the usage of different drugs (e.g. Dianabol, Winstrol, and Decabolan), (iii) the usage of different cameras to capture face images, and finally, (iv) the variability of standoff distance, illumination and other noise factors (e.g. motion noise). All of the previously mentioned complicated scenarios make clear that cross-scenario matching is a very challenging problem and, thus, further investigation is required.

  14. The Morphometrics of “Masculinity” in Human Faces

    Science.gov (United States)

    Mitteroecker, Philipp; Windhager, Sonja; Müller, Gerd B.; Schaefer, Katrin

    2015-01-01

    In studies of social inference and human mate preference, a wide but inconsistent array of tools for computing facial masculinity has been devised. Several of these approaches implicitly assumed that the individual expression of sexually dimorphic shape features, which we refer to as maleness, resembles facial shape features perceived as masculine. We outline a morphometric strategy for estimating separately the face shape patterns that underlie perceived masculinity and maleness, and for computing individual scores for these shape patterns. We further show how faces with different degrees of masculinity or maleness can be constructed in a geometric morphometric framework. In an application of these methods to a set of human facial photographs, we found that shape features typically perceived as masculine are wide faces with a wide inter-orbital distance, a wide nose, thin lips, and a large and massive lower face. The individual expressions of this combination of shape features—the masculinity shape scores—were the best predictor of rated masculinity among the compared methods (r = 0.5). The shape features perceived as masculine only partly resembled the average face shape difference between males and females (sexual dimorphism). Discriminant functions and Procrustes distances to the female mean shape were poor predictors of perceived masculinity. PMID:25671667

  15. Emotion identification method using RGB information of human face

    Science.gov (United States)

    Kita, Shinya; Mita, Akira

    2015-03-01

    Recently, the number of single households has drastically increased due to the growth of the aging society and the diversity of lifestyles. Therefore, the evolution of building spaces is demanded. The Biofied Building we propose can help in this situation. It supports interaction between the building and residents' conscious and unconscious information using robots. The unconscious information includes emotion, condition, and behavior. One important piece of information is thermal comfort. We assume it can be estimated from the human face. There is much research on face color analysis, but few studies are conducted in real situations. In other words, the existing methods were not tested with disturbances such as room lamps. In this study, a Kinect was used with face tracking. Room lamps and task lamps were used to verify that our method is applicable to real situations. Two rooms at 22 and 28 degrees C were prepared. We showed that the transition of thermal comfort caused by changing temperature can be observed from the human face. Thus, distinguishing the 22 and 28 degrees C conditions from face color was proved to be possible.
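
    The measurement step can be approximated as below: detect the face and average the RGB values inside the face box as the colour signal from which thermal comfort would be inferred. OpenCV's bundled Haar cascade stands in for the Kinect face tracking used in the study, and the inference model itself is not reproduced.

```python
# Mean face colour from a detected face region.
import cv2
import numpy as np

frame = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)   # stand-in camera frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    roi = frame[y:y + h, x:x + w]
    b, g, r = roi.reshape(-1, 3).mean(axis=0)      # OpenCV stores pixels as BGR
    print(f"face mean colour  R={r:.1f}  G={g:.1f}  B={b:.1f}")
print("faces found:", len(faces))
```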

  16. Visual peripersonal space centred on the face in humans.

    Science.gov (United States)

    Làdavas, E; Zeloni, G; Farnè, A

    1998-12-01

    A convergent series of studies in monkeys and man suggests that the computation of visual space is performed in several brain regions for different behavioural purposes. Among these multiple spatial areas, the ventral intraparietal cortex, the putamen and the ventral aspect of the premotor cortex (area 6) contain a system for representing visual space near the face (peripersonal space). In these cerebral areas some neurons are bimodal: they have tactile receptive fields on the face, and they can also be driven by visual stimuli located near the tactile field. The spatial correspondence between the visual and tactile receptive fields provides a map of near visual space coded in body-part-centred co-ordinates. In the present study we demonstrate for the first time the existence of a visual peripersonal space centred on the face in humans. In patients with right hemispheric lesions, visual stimuli delivered in the space near the ipsilesional side of the face extinguished tactile stimuli on the contralesional side (cross-modal visuotactile extinction) to the same extent as did an ipsilesional tactile stimulation (unimodal tactile extinction). Furthermore, a visual stimulus presented in the proximity of the contralesional side of the face improved the detection of a left tactile stimulus: i.e. under bilateral tactile presentation patients were more accurate to report the presence of a left tactile stimulus when a simultaneous visual stimulus was presented near the left side of the face. However, when visual stimuli were delivered far from the face, visuotactile extinction and visuotactile facilitation effects were dramatically reduced. These findings are consistent with the hypothesis of a representation of visual peripersonal space coded in bodypart-centred co-ordinates, and they provide a striking demonstration of the modularity of human visual space.

  17. Experience Shapes the Development of Neural Substrates of Face Processing in Human Ventral Temporal Cortex.

    Science.gov (United States)

    Golarai, Golijeh; Liberman, Alina; Grill-Spector, Kalanit

    2017-02-01

    In adult humans, the ventral temporal cortex (VTC) represents faces in a reproducible topology. However, it is unknown what role visual experience plays in the development of this topology. Using functional magnetic resonance imaging in children and adults, we found a sequential development, in which the topology of face-selective activations across the VTC was matured by age 7, but the spatial extent and degree of face selectivity continued to develop past age 7 into adulthood. Importantly, own- and other-age faces were differentially represented, both in the distributed multivoxel patterns across the VTC, and also in the magnitude of responses of face-selective regions. These results provide strong evidence that experience shapes cortical representations of faces during development from childhood to adulthood. Our findings have important implications for the role of experience and age in shaping the neural substrates of face processing in the human VTC. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  18. Sub-component modeling for face image reconstruction in video communications

    Science.gov (United States)

    Shiell, Derek J.; Xiao, Jing; Katsaggelos, Aggelos K.

    2008-08-01

    Emerging communications trends point to streaming video as a new form of content delivery. These systems are implemented over wired systems, such as cable or ethernet, and wireless networks, cell phones, and portable game systems. These communications systems require sophisticated methods of compression and error-resilience encoding to enable communications across band-limited and noisy delivery channels. Additionally, the transmitted video data must be of high enough quality to ensure a satisfactory end-user experience. Traditionally, video compression makes use of temporal and spatial coherence to reduce the information required to represent an image. In many communications systems, the communications channel is characterized by a probabilistic model which describes the capacity or fidelity of the channel. The implication is that information is lost or distorted in the channel, and requires concealment on the receiving end. We demonstrate a generative model based transmission scheme to compress human face images in video, which has the advantages of a potentially higher compression ratio, while maintaining robustness to errors and data corruption. This is accomplished by training an offline face model and using the model to reconstruct face images on the receiving end. We propose a sub-component AAM modeling the appearance of sub-facial components individually, and show face reconstruction results under different types of video degradation using a weighted and non-weighted version of the sub-component AAM.
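
    A much-simplified sketch of the sub-component idea follows: fit a separate low-dimensional appearance model to each facial sub-region (plain PCA here rather than a full AAM), transmit only the per-region coefficients, and reconstruct at the receiver. The region boundaries, component counts, and training data are placeholders.

```python
# Per-region PCA models: transmit coefficients, reconstruct at the receiver.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
n, h, w = 200, 64, 64
training_faces = rng.normal(size=(n, h, w))            # stand-in training set

regions = {"eyes": (10, 26), "nose": (26, 44), "mouth": (44, 60)}   # row bands
models = {name: PCA(n_components=8).fit(training_faces[:, r0:r1, :].reshape(n, -1))
          for name, (r0, r1) in regions.items()}

new_face = rng.normal(size=(h, w))
codes = {name: models[name].transform(new_face[r0:r1, :].reshape(1, -1))
         for name, (r0, r1) in regions.items()}        # what would be transmitted

reconstruction = np.zeros((h, w))
for name, (r0, r1) in regions.items():
    reconstruction[r0:r1, :] = models[name].inverse_transform(codes[name]).reshape(r1 - r0, w)
print("coefficients sent per face:", sum(c.size for c in codes.values()))
```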

  19. Distinct representations of configural and part information across multiple face- selective regions of the human brain

    Directory of Open Access Journals (Sweden)

    Golijeh eGolarai

    2015-11-01

    Full Text Available Several regions of the human brain respond more strongly to faces than to other visual stimuli, such as regions in the amygdala (AMG), superior temporal sulcus (STS), and the fusiform face area (FFA). It is unclear if these brain regions are similar in representing the configuration or natural appearance of face parts. We used functional magnetic resonance imaging of healthy adults who viewed natural or schematic faces with internal parts that were either normally configured or randomly rearranged. Response amplitudes were reduced in the AMG and STS when subjects viewed stimuli whose configuration of parts was digitally rearranged, suggesting representation of the 1st order configuration of face parts. In contrast, response amplitudes in the FFA showed little modulation whether face parts were rearranged or if the natural face parts were replaced with lines. Instead, FFA responses were reduced only when both configural and part information were reduced, revealing an interaction between these factors, suggesting distinct representation of 1st order face configuration and parts in the AMG and STS vs. the FFA.

  20. A novel BCI based on ERP components sensitive to configural processing of human faces

    Science.gov (United States)

    Zhang, Yu; Zhao, Qibin; Jing, Jin; Wang, Xingyu; Cichocki, Andrzej

    2012-04-01

    This study introduces a novel brain-computer interface (BCI) based on an oddball paradigm using stimuli of facial images with loss of configural face information (e.g., inversion of face). To the best of our knowledge, until now the configural processing of human faces has not been applied to BCI, although it has been widely studied in cognitive neuroscience research. Our experiments confirm that the face-sensitive event-related potential (ERP) components N170 and vertex positive potential (VPP) reflect early structural encoding of faces and can be modulated by the configural processing of faces. With the proposed novel paradigm, we investigate the effects of ERP components N170, VPP and P300 on target detection for BCI. An eight-class BCI platform is developed to analyze ERPs and evaluate the target detection performance using linear discriminant analysis without complicated feature extraction processing. The online classification accuracy of 88.7% and information transfer rate of 38.7 bits min^-1 using stimuli of inverted faces with only a single trial suggest that the proposed paradigm based on the configural processing of faces is very promising for visual stimuli-driven BCI applications.
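
    The decoding step the abstract relies on (linear discriminant analysis on epoched EEG, without elaborate feature extraction) can be sketched as below on synthetic data; the channel count, window length, and injected ERP-like deflection are placeholders, and the real task is eight-class rather than binary.

```python
# Target vs. non-target classification of flattened EEG epochs with LDA.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(8)
n_epochs, n_channels, n_samples = 400, 8, 150
X = rng.normal(size=(n_epochs, n_channels, n_samples))
y = rng.integers(0, 2, size=n_epochs)               # 1 = attended (target) stimulus
X[y == 1, :, 60:90] += 0.3                          # crude N170/VPP/P300-like deflection

scores = cross_val_score(LinearDiscriminantAnalysis(),
                         X.reshape(n_epochs, -1), y, cv=5)
print("single-trial accuracy:", scores.mean().round(3))
```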

  1. The impact of image quality on the performance of face recognition

    NARCIS (Netherlands)

    Dutta, Abhishek; Veldhuis, Raymond; Spreeuwers, Luuk

    2012-01-01

    The performance of a face recognition system depends on the quality of both test and reference images participating in the face comparison process. In a forensic evaluation case involving face recognition, we do not have any control over the quality of the trace (image captured by a CCTV at a crime

  2. Face image modeling by multilinear subspace analysis with missing values.

    Science.gov (United States)

    Geng, Xin; Smith-Miles, Kate; Zhou, Zhi-Hua; Wang, Liang

    2011-06-01

    Multilinear subspace analysis (MSA) is a promising methodology for pattern-recognition problems due to its ability to decompose the data formed from the interaction of multiple factors. The MSA requires a large training set that is well organized in a single tensor consisting of data samples with all possible combinations of the contributory factors. However, such a "complete" training set is difficult (or impossible) to obtain in many real applications. The missing-value problem is therefore crucial to the practicality of the MSA but has hardly been investigated up to the present. To solve the problem, this paper proposes an algorithm named M(2)SA, which is advantageous in real applications due to the following: 1) it inherits the ability of the MSA to decompose the interlaced semantic factors; 2) it does not depend on any assumptions on the data distribution; and 3) it can deal with a high percentage of missing values. M(2)SA is evaluated by face image modeling on two typical multifactorial applications, i.e., face recognition and facial age estimation. Experimental results show the effectiveness of M(2)SA even when the majority of the values in the training tensor are missing.
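
    M(2)SA itself is multilinear; as a far simpler illustration of the underlying idea of fitting a low-dimensional model while honouring only the observed entries, the sketch below iteratively imputes missing values of a plain matrix with a truncated SVD. It is not the authors' algorithm.

```python
# Iterative low-rank imputation of missing entries (matrix analogue of the idea).
import numpy as np

rng = np.random.default_rng(5)
X_true = rng.normal(size=(50, 8)) @ rng.normal(size=(8, 40))   # rank-8 ground truth
mask = rng.random(X_true.shape) > 0.4                          # True where observed

X = np.where(mask, X_true, 0.0)                                # initialise missing with 0
for _ in range(50):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X_low = (U[:, :8] * s[:8]) @ Vt[:8]                        # rank-8 approximation
    X = np.where(mask, X_true, X_low)                          # keep observed values fixed

err = np.abs(X - X_true)[~mask].mean()
print("mean abs error on missing entries:", round(err, 4))
```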

  3. Advances in face detection and facial image analysis

    CERN Document Server

    Celebi, M; Smolka, Bogdan

    2016-01-01

    This book presents the state-of-the-art in face detection and analysis. It outlines new research directions, including in particular psychology-based facial dynamics recognition, aimed at various applications such as behavior analysis, deception detection, and diagnosis of various psychological disorders. Topics of interest include face and facial landmark detection, face recognition, facial expression and emotion analysis, facial dynamics analysis, face classification, identification, and clustering, and gaze direction and head pose estimation, as well as applications of face analysis.

  4. A Low-dimensional Illumination Space Representation of Human Faces for Arbitrary Lighting Conditions

    Institute of Scientific and Technical Information of China (English)

    胡元奎; 汪增福

    2007-01-01

    The proposed method for low-dimensional illumination space representation (LDISR) of human faces can not only synthesize a virtual face image when given lighting conditions but also estimate lighting conditions when given a face image. The LDISR is based on the observation that 9 basis point light sources can represent almost arbitrary lighting conditions for face recognition application and different human faces have a similar LDISR. The principal component analysis (PCA) and the nearest neighbor clustering method are adopted to obtain the 9 basis point light sources. The 9 basis images under the 9 basis point light sources are then used to construct an LDISR which can represent almost all face images under arbitrary lighting conditions.Illumination ratio image (IRI) is employed to generate virtual face images under different illuminations. The LDISR obtained from face images of one person can be used for other people. Experimental results on image reconstruction and face recognition indicate the efficiency of LDISR.
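
    The representation described above lends itself to a direct least-squares sketch: with 9 basis images of a face (one per basis point light source), an image under arbitrary lighting is approximated as their linear combination, and the fitted coefficients double as a lighting estimate. The basis images below are random placeholders.

```python
# Estimate lighting coefficients from 9 basis images by least squares.
import numpy as np

rng = np.random.default_rng(6)
h, w = 32, 32
basis = rng.normal(size=(9, h * w))                 # 9 basis images, flattened

true_light = rng.random(9)                          # unknown lighting coefficients
observed = true_light @ basis + rng.normal(scale=0.01, size=h * w)

est_light, *_ = np.linalg.lstsq(basis.T, observed, rcond=None)
synthesised = est_light @ basis                     # relit / reconstructed face
print("lighting estimation error:", np.abs(est_light - true_light).max())
```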

  5. Can human eyes prevent perceptual narrowing for monkey faces in human infants?

    Science.gov (United States)

    Damon, Fabrice; Bayet, Laurie; Quinn, Paul C; Hillairet de Boisferon, Anne; Méary, David; Dupierrix, Eve; Lee, Kang; Pascalis, Olivier

    2015-07-01

    Perceptual narrowing has been observed in human infants for monkey faces: 6-month-olds can discriminate between them, whereas older infants from 9 months of age display difficulty discriminating between them. The difficulty infants from 9 months have processing monkey faces has not been clearly identified. It could be due to the structural characteristics of monkey faces, particularly the key facial features that differ from human faces. The current study aimed to investigate whether the information conveyed by the eyes is of importance. We examined whether the presence of Caucasian human eyes in monkey faces allows recognition to be maintained in 6-month-olds and facilitates recognition in 9- and 12-month-olds. Our results revealed that the presence of human eyes in monkey faces maintains recognition for those faces at 6 months of age and partially facilitates recognition of those faces at 9 months of age, but not at 12 months of age. The findings are interpreted in the context of perceptual narrowing and suggest that the attenuation of processing of other-species faces is not reversed by the presence of human eyes.

  6. From local pixel structure to global image super-resolution: a new face hallucination framework.

    Science.gov (United States)

    Hu, Yu; Lam, Kin-Man; Qiu, Guoping; Shen, Tingzhi

    2011-02-01

    We have developed a new face hallucination framework termed from local pixel structure to global image super-resolution (LPS-GIS). Based on the assumption that two similar face images should have similar local pixel structures, the new framework first uses the input low-resolution (LR) face image to search a face database for similar example high-resolution (HR) faces in order to learn the local pixel structures for the target HR face. It then uses the input LR face and the learned pixel structures as priors to estimate the target HR face. We present a three-step implementation procedure for the framework. Step 1 searches the database for K example faces that are the most similar to the input, and then warps the K example images to the input using optical flow. Step 2 uses the warped HR version of the K example faces to learn the local pixel structures for the target HR face. An effective method for learning local pixel structures from an individual face, and an adaptive procedure for fusing the local pixel structures of different example faces to reduce the influence of warping errors, have been developed. Step 3 estimates the target HR face by solving a constrained optimization problem by means of an iterative procedure. Experimental results show that our new method can provide good performances for face hallucination, both in terms of reconstruction error and visual quality; and that it is competitive with existing state-of-the-art methods.

  7. Human Body Image Edge Detection Based on Wavelet Transform

    Institute of Scientific and Technical Information of China (English)

    李勇; 付小莉

    2003-01-01

    Human dress varies in thousands of ways. Human body images have strong noise, poor light-dark contrast and a narrow range of gray-level distribution. Applying a traditional gradient or gray-level method to detect human body image edges cannot produce satisfactory results because of false and missed detections. Taking the peculiarities of human body images into account, the dyadic wavelet transform of a cubic spline is successfully applied in this paper to detect the face and profile edges of human body images, with the Mallat algorithm used for the wavelet decomposition.
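
    A minimal sketch of wavelet-based edge detection in the spirit of this record (my own PyWavelets illustration with a generic spline wavelet, not the authors' dyadic Mallat implementation): edges are taken where the modulus of the horizontal and vertical detail coefficients is large.

```python
import numpy as np
import pywt

def wavelet_edges(image, wavelet="bior2.2", threshold=0.2):
    """image: 2-D float array; returns a boolean edge map at half resolution."""
    _, (cH, cV, _) = pywt.dwt2(image, wavelet)   # one dyadic decomposition level
    modulus = np.hypot(cH, cV)                   # local gradient-magnitude proxy
    return modulus > threshold * modulus.max()

# toy usage: a bright square on a dark background yields edges along its outline
img = np.zeros((128, 128))
img[32:96, 32:96] = 1.0
print(wavelet_edges(img).sum(), "edge pixels detected")
```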

  8. Electrophysiological brain dynamics during the esthetic judgment of human bodies and faces.

    Science.gov (United States)

    Muñoz, Francisco; Martín-Loeches, Manuel

    2015-01-12

    This experiment investigated how the esthetic judgment of human bodies and faces modulates cognitive and affective processes. We hypothesized that judgments of ugliness and beauty would elicit separable event-related brain potential (ERP) patterns, depending on the esthetic value of bodies and faces of both genders. In a pretest session, participants evaluated images on a scale from very ugly to very beautiful, which generated three sets of beautiful, ugly and neutral faces and bodies. In the recording session, they performed a task consisting of a beautiful-neutral-ugly judgment. Cognitive and affective effects were observed on a differential pattern of ERP components (P200, P300 and LPC). The main findings revealed a P200 amplitude increase to ugly images, probably the result of a negativity bias in attentional processes. A P300 increase was found mostly for beautiful images, particularly female bodies, consistent with the salience of these stimuli for stimulus categorization. The LPC was significantly larger for both ugly and beautiful images, probably reflecting later decision processes linked to keeping information in working memory. This finding was especially remarkable for ugly male faces. Our findings are discussed in terms of the evolutionary and adaptive value of esthetics in person evaluation.

  9. Improving the Performance of Machine Learning Based Multi Attribute Face Recognition Algorithm Using Wavelet Based Image Decomposition Technique

    Directory of Open Access Journals (Sweden)

    S. Sakthivel

    2011-01-01

    Full Text Available Problem statement: Recognizing a face based on its attributes is an easy task for a human to perform; it is nearly automatic and requires little mental effort. A computer, on the other hand, has no innate ability to recognize a face or a facial feature and must be programmed with an algorithm to do so. Generally, different kinds of facial features are used, separately or in combination, to recognize a face. In previous work, we developed a machine learning based multi-attribute face recognition algorithm and evaluated it with different sets of weights for each input attribute; its performance was low compared with the wavelet decomposition technique proposed here. Approach: In this study, a wavelet decomposition technique was applied as a preprocessing step to enhance the input face images in order to reduce the loss of classification performance due to changes in facial appearance. The experiment was specifically designed to investigate the gain in robustness against illumination and facial expression changes. Results: The proposed wavelet based image decomposition technique enhanced the performance of the previously designed system by 8.54 percent. Conclusion: The proposed model was tested on face images with differences in expression and illumination, using a dataset obtained from the Olivetti Research Laboratory face image database.

  10. An Efficient Feature Extraction Method with Pseudo-Zernike Moment in RBF Neural Network-Based Human Face Recognition System

    Directory of Open Access Journals (Sweden)

    Ahmadi Majid

    2003-01-01

    Full Text Available This paper introduces a novel method for the recognition of human faces in digital images using a new feature extraction method that combines global and local information in frontal views of facial images. A radial basis function (RBF) neural network with a hybrid learning algorithm (HLA) has been used as a classifier. The proposed feature extraction method includes human face localization derived from shape information. An efficient distance measure, the facial candidate threshold (FCT), is defined to distinguish between face and nonface images. The pseudo-Zernike moment invariant (PZMI), with an efficient method for selecting the moment order, has been used. A newly defined parameter named the axis correction ratio (ACR) of images is introduced for disregarding irrelevant information in face images. In this paper, the effect of these parameters on discarding irrelevant information and improving the recognition rate is studied. We also evaluate the effect of the PZMI order on the recognition rate of the proposed technique as well as on the RBF neural network learning speed. Simulation results on the face database of the Olivetti Research Laboratory (ORL) indicate that the proposed method for human face recognition yielded a recognition rate of 99.3%.
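
    A minimal sketch of an RBF-network classifier of the kind used here (a generic implementation, not the paper's hybrid learning algorithm): hidden units are Gaussian kernels centred on k-means centroids and the output weights are fit by least squares on one-hot labels; the pseudo-Zernike feature vectors themselves are assumed to be computed elsewhere.

```python
import numpy as np
from sklearn.cluster import KMeans

class RBFNet:
    """Gaussian-kernel RBF network; labels are assumed to be integers 0..K-1."""
    def __init__(self, n_centers=20, gamma=1.0):
        self.n_centers, self.gamma = n_centers, gamma

    def _phi(self, X):
        d2 = ((X[:, None, :] - self.centers_[None, :, :]) ** 2).sum(-1)
        return np.exp(-self.gamma * d2)

    def fit(self, X, y):
        self.centers_ = KMeans(self.n_centers, n_init=10, random_state=0).fit(X).cluster_centers_
        targets = np.eye(y.max() + 1)[y]                  # one-hot targets
        self.W_, *_ = np.linalg.lstsq(self._phi(X), targets, rcond=None)
        return self

    def predict(self, X):
        return self._phi(X).dot(self.W_).argmax(axis=1)
```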

  11. Staining and embedding of human chromosomes for 3-d serial block-face scanning electron microscopy.

    Science.gov (United States)

    Yusuf, Mohammed; Chen, Bo; Hashimoto, Teruo; Estandarte, Ana Katrina; Thompson, George; Robinson, Ian

    2014-12-01

    The high-order structure of human chromosomes is an important biological question that is still under investigation. Studies have been done on imaging human mitotic chromosomes using mostly 2-D microscopy methods. To image micron-sized human chromosomes in 3-D, we developed a procedure for preparing samples for serial block-face scanning electron microscopy (SBFSEM). Polyamine chromosomes are first separated using a simple filtration method and then stained with heavy metal. We show that the DNA-specific platinum blue provides higher contrast than osmium tetroxide. A two-step procedure for embedding chromosomes in resin is then used to concentrate the chromosome samples. After stacking the SBFSEM images, a familiar X-shaped chromosome was observed in 3-D.

  12. Does my face FIT?: a face image task reveals structure and distortions of facial feature representation.

    Directory of Open Access Journals (Sweden)

    Christina T Fuentes

    Full Text Available Despite extensive research on face perception, few studies have investigated individuals' knowledge about the physical features of their own face. In this study, 50 participants indicated the location of key features of their own face, relative to an anchor point corresponding to the tip of the nose, and the results were compared to the true location of the same individual's features from a standardised photograph. Horizontal and vertical errors were analysed separately. An overall bias to underestimate vertical distances revealed a distorted face representation, with reduced face height. Factor analyses were used to identify separable subconfigurations of facial features with correlated localisation errors. Independent representations of upper and lower facial features emerged from the data pattern. The major source of variation across individuals was in representation of face shape, with a spectrum from tall/thin to short/wide representation. Visual identification of one's own face is excellent, and facial features are routinely used for establishing personal identity. However, our results show that spatial knowledge of one's own face is remarkably poor, suggesting that face representation may not contribute strongly to self-awareness.

  13. Human Age Estimation Method Robust to Camera Sensor and/or Face Movement.

    Science.gov (United States)

    Nguyen, Dat Tien; Cho, So Ra; Pham, Tuyen Danh; Park, Kang Ryoung

    2015-08-31

    Human age can be employed in many useful real-life applications, such as customer service systems, automatic vending machines, entertainment, etc. In order to obtain age information, image-based age estimation systems have been developed using information from the human face. However, current age estimation systems are limited by various factors such as camera motion and optical blurring, facial expressions, gender, etc. Motion blurring is usually introduced into face images by the movement of the camera sensor and/or the movement of the face during image acquisition. The facial features in captured images can therefore be distorted according to the amount of motion, which degrades the performance of age estimation systems. In this paper, the problem caused by motion blurring is addressed and a solution is proposed in order to make age estimation systems robust to the effects of motion blurring. Experimental results show that our method is more efficient for enhancing age estimation performance compared with systems that do not employ our method.

  14. Human Age Estimation Method Robust to Camera Sensor and/or Face Movement

    Directory of Open Access Journals (Sweden)

    Dat Tien Nguyen

    2015-08-01

    Full Text Available Human age can be employed in many useful real-life applications, such as customer service systems, automatic vending machines, entertainment, etc. In order to obtain age information, image-based age estimation systems have been developed using information from the human face. However, current age estimation systems are limited by various factors such as camera motion and optical blurring, facial expressions, gender, etc. Motion blurring is usually introduced into face images by the movement of the camera sensor and/or the movement of the face during image acquisition. The facial features in captured images can therefore be distorted according to the amount of motion, which degrades the performance of age estimation systems. In this paper, the problem caused by motion blurring is addressed and a solution is proposed in order to make age estimation systems robust to the effects of motion blurring. Experimental results show that our method is more efficient for enhancing age estimation performance compared with systems that do not employ our method.

  15. The Human Face as a Dynamic Tool for Social Communication.

    Science.gov (United States)

    Jack, Rachael E; Schyns, Philippe G

    2015-07-20

    As a highly social species, humans frequently exchange social information to support almost all facets of life. One of the richest and most powerful tools in social communication is the face, from which observers can quickly and easily make a number of inferences - about identity, gender, sex, age, race, ethnicity, sexual orientation, physical health, attractiveness, emotional state, personality traits, pain or physical pleasure, deception, and even social status. With the advent of the digital economy, increasing globalization and cultural integration, understanding precisely which face information supports social communication and which produces misunderstanding is central to the evolving needs of modern society (for example, in the design of socially interactive digital avatars and companion robots). Doing so is challenging, however, because the face can be thought of as comprising a high-dimensional, dynamic information space, and this impacts cognitive science and neuroimaging, and their broader applications in the digital economy. New opportunities to address this challenge are arising from the development of new methods and technologies, coupled with the emergence of a modern scientific culture that embraces cross-disciplinary approaches. Here, we briefly review one such approach that combines state-of-the-art computer graphics, psychophysics and vision science, cultural psychology and social cognition, and highlight the main knowledge advances it has generated. In the light of current developments, we provide a vision of the future directions in the field of human facial communication within and across cultures.

  16. Preference for Averageness in Faces Does Not Generalize to Non-Human Primates

    Directory of Open Access Journals (Sweden)

    Olivia B. Tomeo

    2017-07-01

    Full Text Available Facial attractiveness is a long-standing topic of active study in both neuroscience and social science, motivated by its positive social consequences. Over the past few decades, it has been established that averageness is a major factor influencing judgments of facial attractiveness in humans. Non-human primates share similar social behaviors as well as neural mechanisms related to face processing with humans. However, it is unknown whether monkeys, like humans, also find particular faces attractive and, if so, which kind of facial traits they prefer. To address these questions, we investigated the effect of averageness on preferences for faces in monkeys. We tested three adult male rhesus macaques using a visual paired comparison (VPC) task, in which they viewed pairs of faces (both individual faces, or one individual face and one average face); viewing time was used as a measure of preference. We did find that monkeys looked longer at certain individual faces than others. However, unlike humans, monkeys did not prefer the average face over individual faces. In fact, the more the individual face differed from the average face, the longer the monkeys looked at it, indicating that the average face likely plays a role in face recognition rather than in judgments of facial attractiveness: in models of face recognition, the average face operates as the norm against which individual faces are compared and recognized. Taken together, our study suggests that the preference for averageness in faces does not generalize to non-human primates.

  17. Automatic landmark detection and face recognition for side-view face images

    NARCIS (Netherlands)

    Santemiz, Pinar; Spreeuwers, Luuk J.; Veldhuis, Raymond N.J.; Broemme, Arslan; Busch, Christoph

    2013-01-01

    In real-life scenarios where pose variation is up to side-view positions, face recognition becomes a challenging task. In this paper we propose an automatic side-view face recognition system designed for home-safety applications. Our goal is to recognize people as they pass through doors in order to

  18. A Parallel Framework for Multilayer Perceptron for Human Face Recognition

    CERN Document Server

    Bhowmik, M K; Nasipuri, M; Basu, D K; Kundu, M

    2010-01-01

    Artificial neural networks have already shown their success in face recognition and similar complex pattern recognition tasks. However, a major disadvantage of the technique is that it is extremely slow during training for larger classes and hence not suitable for real-time complex problems such as pattern recognition. This is an attempt to develop a parallel framework for the training algorithm of a perceptron. In this paper, two general architectures for a Multilayer Perceptron (MLP) have been demonstrated. The first architecture is All-Class-in-One-Network (ACON) where all the classes are placed in a single network and the second one is One-Class-in-One-Network (OCON) where an individual single network is responsible for each and every class. Capabilities of these two architectures were compared and verified in solving human face recognition, which is a complex pattern recognition task where several factors affect the recognition performance like pose variations, facial expression changes, occlusions, and ...
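
    A minimal sketch contrasting the two architectures described above (my own scikit-learn illustration, not the paper's parallel implementation): ACON trains a single multi-class MLP over all identities, while OCON trains one small binary MLP per identity, which is what makes OCON straightforward to parallelize.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_acon(X, y):
    """All-Class-in-One-Network: one multi-class MLP for every identity."""
    return MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0).fit(X, y)

def train_ocon(X, y):
    """One-Class-in-One-Network: one binary MLP per identity (trivially parallelizable)."""
    return {c: MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
               .fit(X, (y == c).astype(int))
            for c in np.unique(y)}

def predict_ocon(nets, X):
    """Pick the identity whose network gives the highest positive-class score."""
    classes = sorted(nets)
    scores = np.column_stack([nets[c].predict_proba(X)[:, 1] for c in classes])
    return np.array(classes)[scores.argmax(axis=1)]
```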

  19. Face liveness detection for face recognition based on cardiac features of skin color image

    Science.gov (United States)

    Suh, Kun Ha; Lee, Eui Chul

    2016-07-01

    With the growth of biometric technology, spoofing attacks have emerged as a threat to the security of such systems. The main spoofing scenarios for face recognition systems include the printing attack, the replay attack and the 3D mask attack. To prevent such attacks, techniques that evaluate the liveness of the biometric data can be considered as a solution. In this paper, a novel face liveness detection method based on the cardiac signal extracted from the face is presented. The key point of the proposed method is that a cardiac characteristic is detected in live faces but not in non-live faces. Experimental results showed that the proposed method can be an effective way of detecting printing attacks or 3D mask attacks.
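
    A minimal sketch of the underlying idea (a generic remote-photoplethysmography check, not the authors' method): average the green channel over the face region in each video frame and test whether the dominant power of that signal falls in a plausible heart-rate band; a printed photo or a 3D mask should show no such periodic component. The 0.5 score threshold is an arbitrary assumption for the example.

```python
import numpy as np

def liveness_score(face_frames, fps=30.0, band=(0.7, 3.0)):
    """face_frames: (T, H, W, 3) frames cropped to the face; returns a score in [0, 1]."""
    signal = face_frames[..., 1].mean(axis=(1, 2))            # mean green value per frame
    signal = signal - signal.mean()
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return power[in_band].sum() / (power[1:].sum() + 1e-12)   # cardiac-band power fraction

def is_live(face_frames, fps=30.0, threshold=0.5):
    return liveness_score(face_frames, fps) > threshold
```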

  20. Face-selective regions show invariance to linear, but not to non-linear, changes in facial images.

    Science.gov (United States)

    Baseler, Heidi A; Young, Andrew W; Jenkins, Rob; Mike Burton, A; Andrews, Timothy J

    2016-12-01

    Familiar face recognition is remarkably invariant across huge image differences, yet little is understood concerning how image-invariant recognition is achieved. To investigate the neural correlates of invariance, we localized the core face-responsive regions and then compared the pattern of fMR-adaptation to different stimulus transformations in each region to behavioural data demonstrating the impact of the same transformations on familiar face recognition. In Experiment 1, we compared linear transformations of size and aspect ratio to a non-linear transformation affecting only part of the face. We found that adaptation to facial identity in face-selective regions showed invariance to linear changes, but there was no invariance to non-linear changes. In Experiment 2, we measured the sensitivity to non-linear changes that fell within the normal range of variation across face images. We found no adaptation to facial identity for any of the non-linear changes in the image, including to faces that varied in different levels of caricature. These results show a compelling difference in the sensitivity to linear compared to non-linear image changes in face-selective regions of the human brain that is only partially consistent with their effect on behavioural judgements of identity. We conclude that while regions such as the FFA may well be involved in the recognition of face identity, they are more likely to contribute to some form of normalisation that underpins subsequent recognition than to form the neural substrate of recognition per se. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. USE OF IMAGE ENHANCEMENT TECHNIQUES FOR IMPROVING REAL TIME FACE RECOGNITION EFFICIENCY ON WEARABLE GADGETS

    Directory of Open Access Journals (Sweden)

    MUHAMMAD EHSAN RANA

    2017-01-01

    Full Text Available The objective of this research is to study the effects of image enhancement techniques on the face recognition performance of wearable gadgets, with an emphasis on recognition rate. In this research, a number of image enhancement techniques are selected, including brightness normalization, contrast normalization, sharpening, smoothing, and various combinations of these. Test images are then obtained from the AT&T database and the Yale Face Database B to investigate the effect of these image enhancement techniques under various conditions, such as changes of illumination and of face orientation and expression. The evaluation of the data collected during this research revealed that the effect of image pre-processing techniques on face recognition depends strongly on the illumination conditions under which the images are taken. The benefit of applying image enhancement techniques to face images is seen best when there is high variation in illumination among images. Results also indicate that the highest recognition rate is achieved when images are taken under low-light conditions, image contrast is enhanced using histogram equalization, and image noise is then reduced using a median smoothing filter. Additionally, the combination of contrast normalization and a mean smoothing filter shows good results in all scenarios. Results obtained from the test cases illustrate up to 75% improvement in face recognition rate when image enhancement is applied to images in the given scenarios.
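
    A minimal sketch of the best-performing pipeline reported above, using OpenCV (my own illustration of histogram equalization followed by median smoothing, not the authors' code); the input is assumed to be an 8-bit grayscale face image.

```python
import cv2

def enhance_face(gray_face, median_kernel=3):
    """Equalize contrast, then suppress the noise amplified by equalization."""
    equalized = cv2.equalizeHist(gray_face)           # spread the gray-level histogram
    return cv2.medianBlur(equalized, median_kernel)   # 3x3 median smoothing

# usage (the file name is a placeholder):
# face = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)
# enhanced = enhance_face(face)
```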

  2. Three-dimensional recording of the human face with a 3D laser scanner.

    Science.gov (United States)

    Kovacs, L; Zimmermann, A; Brockmann, G; Gühring, M; Baurecht, H; Papadopulos, N A; Schwenzer-Zimmerer, K; Sader, R; Biemer, E; Zeilhofer, H F

    2006-01-01

    Three-dimensional recording of the surface of the human body or of certain anatomical areas has gained ever increasing importance in recent years. When recording living surfaces, such as the human face, not only does a varying degree of surface complexity have to be accounted for, but also a variety of other factors, such as motion artefacts. It is important to establish standards for the recording procedure which optimise results and allow for better comparison and validation. In the study presented here, the faces of five male test persons were scanned in different experimental settings using non-contact 3D digitisers (type Minolta Vivid 910). Among other factors, the influence of the number of scanners used, the angle of recording, the head position of the test person, and the impact of the examiner and of examination time on the accuracy and precision of the virtual face models generated from the scanner data with specialised software were investigated. Computed data derived from the virtual models were compared to corresponding reference measurements carried out manually between defined landmarks on the test persons' faces. We describe experimental conditions that were of benefit in optimising the quality of scanner recording and the reliability of three-dimensional surface imaging. However, almost 50% of the distances between landmarks derived from the virtual models deviated more than 2 mm from the reference manual measurements on the volunteers' faces.

  3. Human diversity in images

    CERN Multimedia

    Laëtitia Pedroso

    2010-01-01

    A photo contest is being jointly organized by the CERN Equal Opportunities team and the CERN Photo Club. All you need to do is submit a photo or quotation. The contest is open to everyone.   Diversity at CERN You don’t need to be a photographer or to have sophisticated photographic equipment to capture CERN’s diversity of working styles, gender, age, ethnic origin and physical ability. Its many facets are all around you! The emphasis of the initiative is on capturing this diversity in an image using creativity, intuition and cultural empathy. You can also contribute with a quotation (whether or not you specify who said it is optional) telling the organizers what strikes you about diversity at CERN. The photo entries and a collection of the quotations will be displayed in an exhibition to be held in May in the Main Building, as well as on the CERN Photo Club website. The best photos will be awarded prizes. So over to you: dig deep inside human nature, explore individual tal...

  4. A learning framework for age rank estimation based on face images with scattering transform.

    Science.gov (United States)

    Chang, Kuang-Yu; Chen, Chu-Song

    2015-03-01

    This paper presents a cost-sensitive ordinal hyperplanes ranking algorithm for human age estimation based on face images. The proposed approach exploits relative-order information among the age labels for rank prediction. In our approach, the age rank is obtained by aggregating a series of binary classification results, where cost sensitivities among the labels are introduced to improve the aggregation performance. In addition, we give a theoretical analysis of how to design the cost of each individual binary classifier so that the misranking cost is bounded by the total misclassification cost. An efficient descriptor, the scattering transform, which scatters Gabor coefficients and pools them with Gaussian smoothing over multiple layers, is evaluated for facial feature extraction. We show that this descriptor is a generalization of conventional bio-inspired features and is more effective for face-based age inference. Experimental results demonstrate that our method outperforms state-of-the-art age estimation approaches.
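
    A minimal sketch of the ordinal-hyperplanes idea (generic, without the cost-sensitive weighting or scattering-transform features of the paper): one binary classifier per threshold k answers "is the age greater than k?", and the predicted rank is obtained by aggregating the binary decisions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_ordinal_rankers(X, ages, thresholds):
    return [LogisticRegression(max_iter=1000).fit(X, (ages > k).astype(int))
            for k in thresholds]

def predict_age_rank(rankers, X, thresholds):
    votes = np.column_stack([clf.predict(X) for clf in rankers])
    return thresholds[0] + votes.sum(axis=1)     # youngest rank + number of "older than k" votes

# toy usage with random features standing in for the scattering descriptors
rng = np.random.default_rng(0)
X, ages = rng.normal(size=(200, 16)), rng.integers(18, 60, size=200)
thresholds = np.arange(ages.min(), ages.max())   # ensures both classes exist for every threshold
rankers = fit_ordinal_rankers(X, ages, thresholds)
print(predict_age_rank(rankers, X[:5], thresholds))
```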

  5. Testing the Utility of a Data-Driven Approach for Assessing BMI from Face Images

    DEFF Research Database (Denmark)

    Wolffhechel, Karin Marie Brandt; Hahn, Amanda C.; Jarmer, Hanne Østergaard

    2015-01-01

    Several lines of evidence suggest that facial cues of adiposity may be important for human social interaction. However, tests for quantifiable cues of body mass index (BMI) in the face have examined only a small number of facial proportions and these proportions were found to have relatively low predictive power. Here we employed a data-driven approach in which statistical models were built using principal components (PCs) derived from objectively defined shape and color characteristics in face images. The predictive power of these models was then compared with models based on previously studied facial proportions (perimeter-to-area ratio, width-to-height ratio, and cheek-to-jaw width). Models based on 2D shape-only PCs, color-only PCs, and 2D shape and color PCs combined each performed significantly and substantially better than models based on one or more of the previously studied facial proportions.

  6. Defining Face Perception Areas in the Human Brain: A Large-Scale Factorial fMRI Face Localizer Analysis

    Science.gov (United States)

    Rossion, Bruno; Hanseeuw, Bernard; Dricot, Laurence

    2012-01-01

    A number of human brain areas showing a larger response to faces than to objects from different categories, or to scrambled faces, have been identified in neuroimaging studies. Depending on the statistical criteria used, the set of areas can be overextended or minimized, both at the local (size of areas) and global (number of areas) levels. Here…

  8. Untold stories: the human face of poverty dynamics

    DEFF Research Database (Denmark)

    Prowse, Martin

    2008-01-01

    Key Points • Life histories offer an important window for policy makers, and should be brought to the policy table much more frequently. • Life histories show the human face of chronic poverty. Such vignettes provide concrete examples of poverty traps – such as insecurity, social discrimination a...... have ambivalent effects. • Whilst life histories are not representative, they highlight key themes and processes which are ‘typical’ of individuals with similar sets of sociobiographical characteristics who live in similar social, economic and political circumstances....

  9. Learning Local Binary Patterns for Gender Classification on Real-World Face Images

    NARCIS (Netherlands)

    Shan, C.

    2011-01-01

    Gender recognition is one of the fundamental face analysis tasks. Most of the existing studies have focused on face images acquired under controlled conditions. However, real-world applications require gender classification on real-life faces, which is much more challenging due to significant appearance
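
    A minimal sketch of an LBP-based gender classifier in the spirit of this record (my own scikit-image/scikit-learn illustration, not the paper's boosted-LBP method): uniform LBP histograms are extracted from grayscale face crops and fed to a linear SVM.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import LinearSVC

def lbp_histogram(gray_face, P=8, R=1):
    """Uniform LBP histogram; P + 2 bins cover the uniform patterns plus the remainder."""
    codes = local_binary_pattern(gray_face, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=np.arange(P + 3), density=True)
    return hist

def train_gender_classifier(face_images, labels):
    """face_images: iterable of 2-D grayscale arrays; labels: 0/1 gender labels."""
    X = np.array([lbp_histogram(img) for img in face_images])
    return LinearSVC(C=1.0).fit(X, labels)
```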

  10. The other-race effect in face learning: Using naturalistic images to investigate face ethnicity effects in a learning paradigm.

    Science.gov (United States)

    Hayward, William G; Favelle, Simone K; Oxner, Matt; Chu, Ming Hon; Lam, Sze Man

    2017-05-01

    The other-race effect in face identification has been reported in many situations and by many different ethnicities, yet it remains poorly understood. One reason for this lack of clarity may be a limitation in the methodologies that have been used to test it. Experiments typically use an old-new recognition task to demonstrate the existence of the other-race effect, but such tasks are susceptible to different social and perceptual influences, particularly in terms of the extent to which all faces are equally individuated at study. In this paper we report an experiment in which we used a face learning methodology to measure the other-race effect. We obtained naturalistic photographs of Chinese and Caucasian individuals, which allowed us to test the ability of participants to generalize their learning to new ecologically valid exemplars of a face identity. We show a strong own-race advantage in face learning, such that participants required many fewer trials to learn names of own-race individuals than those of other-race individuals and were better able to identify learned own-race individuals in novel naturalistic stimuli. Since our methodology requires individuation of all faces, and generalization over large image changes, our finding of an other-race effect can be attributed to a specific deficit in the sensitivity of perceptual and memory processes to other-race faces.

  11. Testing the Utility of a Data-Driven Approach for Assessing BMI from Face Images.

    Directory of Open Access Journals (Sweden)

    Karin Wolffhechel

    Full Text Available Several lines of evidence suggest that facial cues of adiposity may be important for human social interaction. However, tests for quantifiable cues of body mass index (BMI) in the face have examined only a small number of facial proportions and these proportions were found to have relatively low predictive power. Here we employed a data-driven approach in which statistical models were built using principal components (PCs) derived from objectively defined shape and color characteristics in face images. The predictive power of these models was then compared with models based on previously studied facial proportions (perimeter-to-area ratio, width-to-height ratio, and cheek-to-jaw width). Models based on 2D shape-only PCs, color-only PCs, and 2D shape and color PCs combined each performed significantly and substantially better than models based on one or more of the previously studied facial proportions. A non-linear PC model considering both 2D shape and color PCs was the best predictor of BMI. These results highlight the utility of a "bottom-up", data-driven approach for assessing BMI from face images.

  12. Testing the Utility of a Data-Driven Approach for Assessing BMI from Face Images.

    Science.gov (United States)

    Wolffhechel, Karin; Hahn, Amanda C; Jarmer, Hanne; Fisher, Claire I; Jones, Benedict C; DeBruine, Lisa M

    2015-01-01

    Several lines of evidence suggest that facial cues of adiposity may be important for human social interaction. However, tests for quantifiable cues of body mass index (BMI) in the face have examined only a small number of facial proportions and these proportions were found to have relatively low predictive power. Here we employed a data-driven approach in which statistical models were built using principal components (PCs) derived from objectively defined shape and color characteristics in face images. The predictive power of these models was then compared with models based on previously studied facial proportions (perimeter-to-area ratio, width-to-height ratio, and cheek-to-jaw width). Models based on 2D shape-only PCs, color-only PCs, and 2D shape and color PCs combined each performed significantly and substantially better than models based on one or more of the previously studied facial proportions. A non-linear PC model considering both 2D shape and color PCs was the best predictor of BMI. These results highlight the utility of a "bottom-up", data-driven approach for assessing BMI from face images.
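
    A minimal sketch of the data-driven approach described above (my own illustration, not the authors' pipeline): principal components of per-face shape and colour descriptors are used as predictors of BMI in a regression model; the descriptor matrix is a placeholder for whatever is available (e.g., concatenated landmark coordinates and mean region colours).

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

def fit_bmi_model(shape_color_features, bmi_values, n_components=10):
    """PCA on the raw descriptors followed by a linear regression onto BMI."""
    model = make_pipeline(PCA(n_components=n_components), LinearRegression())
    return model.fit(shape_color_features, bmi_values)

# toy usage with synthetic descriptors
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 40))                          # 100 faces, 40 raw descriptors
bmi = 22 + 2 * X[:, 0] + rng.normal(scale=0.5, size=100)
print(fit_bmi_model(X, bmi).predict(X[:3]))
```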

  13. Multi-face detection based on downsampling and modified subtractive clustering for color images

    Institute of Scientific and Technical Information of China (English)

    KONG Wan-zeng; ZHU Shan-an

    2007-01-01

    This paper presents a multi-face detection method for color images. The method is based on the assumption that faces are well separated from the background by skin color detection. These faces can be located by the proposed method which modifies the subtractive clustering. The modified clustering algorithm proposes a new definition of distance for multi-face detection, and its key parameters can be predetermined adaptively by statistical information of face objects in the image. Downsampling is employed to reduce the computation of clustering and speed up the process of the proposed method. The effectiveness of the proposed method is illustrated by three experiments.
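
    A minimal sketch of Chiu-style subtractive clustering applied to skin-pixel coordinates (a generic version, without the paper's modified distance definition or adaptively chosen parameters): each accepted cluster centre is a candidate face location. The skin mask in the usage comment is assumed to come from a separate skin-colour detector, and the image would normally be downsampled first, as in the paper, to keep the number of points manageable.

```python
import numpy as np

def subtractive_clustering(points, ra=30.0, rb=45.0, eps=0.15):
    """points: (N, 2) pixel coordinates; returns the list of cluster centres."""
    alpha, beta = 4.0 / ra ** 2, 4.0 / rb ** 2
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    potential = np.exp(-alpha * d2).sum(axis=1)            # density potential of each point
    centres, p0 = [], potential.max()
    while potential.max() > eps * p0:
        i = potential.argmax()
        centres.append(points[i])
        # suppress the potential of points near the newly accepted centre
        potential -= potential[i] * np.exp(-beta * ((points - points[i]) ** 2).sum(-1))
    return centres

# usage sketch:
# ys, xs = np.nonzero(skin_mask)
# face_centres = subtractive_clustering(np.column_stack([xs, ys]).astype(float))
```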

  14. USE OF IMAGE ENHANCEMENT TECHNIQUES FOR IMPROVING REAL TIME FACE RECOGNITION EFFICIENCY ON WEARABLE GADGETS

    OpenAIRE

    MUHAMMAD EHSAN RANA; AHMAD AFZAL ZADEH; AHMAD MOHAMMAD MAHMOOD ALQURNEH

    2017-01-01

    The objective of this research is to study the effects of image enhancement techniques on face recognition performance of wearable gadgets with an emphasis on recognition rate.In this research, a number of image enhancement techniques are selected that include brightness normalization, contrast normalization, sharpening, smoothing, and various combinations of these. Subsequently test images are obtained from AT&T database and Yale Face Database B to investigate the effect of these image enhan...

  15. Illumination normalization of face image based on illuminant direction estimation and improved Retinex.

    Directory of Open Access Journals (Sweden)

    Jizheng Yi

    Full Text Available Illumination normalization of face images for face recognition and facial expression recognition is one of the most frequent and difficult problems in image processing. In order to obtain a face image with normal illumination, our method first divides the input face image into sixteen local regions and calculates the edge level percentage in each of them. Second, three local regions that meet the requirements of lower complexity and larger average gray value are selected to calculate the final illuminant direction, according to the error function between the measured intensity and the calculated intensity and the constraint function for an infinite light source model. Once the final illuminant direction of the input face image is known, the Retinex algorithm is improved in two respects: (1) we optimize the surround function; (2) we clip the values at both ends of the face image histogram, determine the range of gray levels, and stretch that range into the dynamic range of the display device. Finally, we achieve illumination normalization and obtain the final face image. Unlike previous illumination normalization approaches, the method proposed in this paper does not require any training step or any knowledge of 3D face and reflective surface models. Experimental results using the extended Yale face database B and CMU-PIE show that our method achieves a better normalization effect compared with existing techniques.
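
    A minimal sketch of the Retinex-with-stretching part of the pipeline (a generic single-scale Retinex; it omits the authors' illuminant-direction estimation and optimized surround function): the reflectance estimate log(image) - log(blurred image) is clipped at both histogram ends and stretched to the display range.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def retinex_normalize(gray, sigma=15.0, low_pct=1.0, high_pct=99.0):
    """gray: 2-D array; returns an illumination-normalized uint8 image."""
    img = gray.astype(np.float64) + 1.0                    # avoid log(0)
    reflectance = np.log(img) - np.log(gaussian_filter(img, sigma))
    lo, hi = np.percentile(reflectance, [low_pct, high_pct])
    clipped = np.clip(reflectance, lo, hi)                 # clip both ends of the histogram
    return ((clipped - lo) / (hi - lo) * 255).astype(np.uint8)
```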

  16. Moving human full body and body parts detection, tracking, and applications on human activity estimation, walking pattern and face recognition

    Science.gov (United States)

    Chen, Hai-Wen; McGurr, Mike

    2016-05-01

    We have developed a new way for detection and tracking of human full-body and body-parts with color (intensity) patch morphological segmentation and adaptive thresholding for security surveillance cameras. An adaptive threshold scheme has been developed for dealing with body size changes, illumination condition changes, and cross camera parameter changes. Tests with the PETS 2009 and 2014 datasets show that we can obtain high probability of detection and low probability of false alarm for full-body. Test results indicate that our human full-body detection method can considerably outperform the current state-of-the-art methods in both detection performance and computational complexity. Furthermore, in this paper, we have developed several methods using color features for detection and tracking of human body-parts (arms, legs, torso, and head, etc.). For example, we have developed a human skin color sub-patch segmentation algorithm by first conducting an RGB to YIQ transformation and then applying a Subtractive I/Q image Fusion with morphological operations. With this method, we can reliably detect and track human skin color related body-parts such as face, neck, arms, and legs. Reliable body-parts (e.g. head) detection allows us to continuously track the individual person even in the case that multiple closely spaced persons are merged. Accordingly, we have developed a new algorithm to split a merged detection blob back to individual detections based on the detected head positions. Detected body-parts also allow us to extract important local constellation features of the body-parts positions and angles related to the full-body. These features are useful for human walking gait pattern recognition and human pose (e.g. standing or falling down) estimation for potential abnormal behavior and accidental event detection, as evidenced with our experimental tests. Furthermore, based on the reliable head (face) tracking, we have applied a super-resolution algorithm to enhance

  17. Do Infants Recognize the Arcimboldo Images as Faces? Behavioral and Near-Infrared Spectroscopic Study

    Science.gov (United States)

    Kobayashi, Megumi; Otsuka, Yumiko; Nakato, Emi; Kanazawa, So; Yamaguchi, Masami K.; Kakigi, Ryusuke

    2012-01-01

    Arcimboldo images induce the perception of faces when shown upright despite the fact that only nonfacial objects such as vegetables and fruits are painted. In the current study, we examined whether infants recognize a face in the Arcimboldo images by using the preferential looking technique and near-infrared spectroscopy (NIRS). In the first…

  18. PET FACE: MECHANISMS UNDERLYING HUMAN-ANIMAL RELATIONSHIPS

    Directory of Open Access Journals (Sweden)

    Marta Borgi

    2016-03-01

    Full Text Available Accumulating behavioral and neurophysiological studies support the idea of infantile (cute) faces as highly biologically relevant stimuli that rapidly and unconsciously capture attention and elicit positive/affectionate behaviors, including willingness to care. It has been hypothesized that the presence of infantile physical and behavioral features in companion (or pet) animals (i.e. dogs and cats) might form the basis of our attraction to these species. Preliminary evidence has indeed shown that the human attentional bias toward the baby schema may extend to animal facial configurations. In this review, the role of facial cues, specifically of infantile traits and facial signals (i.e. eye gaze), as emotional and communicative signals is highlighted and discussed as regulating the human-animal bond, similarly to what can be observed in the adult-infant interaction context. Particular emphasis is given to the neuroendocrine regulation of the social bond between humans and animals through oxytocin secretion. Instead of considering companion animals as mere baby substitutes for their owners, in this review we highlight the central role of cats and dogs in human lives. Specifically, we consider the ability of companion animals to bond with humans as fulfilling the need for attention and emotional intimacy, thus serving similar psychological and adaptive functions as human-human friendships. In this context, facial cuteness is viewed not just as a releaser of care/parental behavior, but more generally as a trait motivating social engagement. To conclude, the impact of this information for applied disciplines is briefly described, particularly in consideration of the increasing evidence of the beneficial effects of contact with animals for human health and wellbeing.

  19. Learning a sparse representation from multiple still images for on-line face recognition in an unconstrained environment

    NARCIS (Netherlands)

    Tangelder, Johan; Schouten, Ben

    2006-01-01

    In a real-world environment a face detector can be applied to extract multiple face images from multiple video streams without constraints on pose and illumination. The extracted face images will have varying image quality and resolution. Moreover, also the detected faces will not be precisely align

  20. 3D FACE RECOGNITION FROM RANGE IMAGES BASED ON CURVATURE ANALYSIS

    Directory of Open Access Journals (Sweden)

    Suranjan Ganguly

    2014-02-01

    Full Text Available In this paper, we present a novel approach for three-dimensional face recognition by extracting curvature maps from range images. Four types of curvature map are considered: Gaussian, Mean, Maximum and Minimum curvature maps. These curvature maps are used as features for 3D face recognition. The dimension of these feature vectors is reduced using the Singular Value Decomposition (SVD) technique. From the three computed SVD components, the non-negative values of the 'S' part are ranked and used as the feature vector. In the proposed method, two pair-wise curvature computations are performed, one on the Mean and Maximum curvature pair and another on the Gaussian and Mean curvature pair, and their results are compared to determine the better recognition rate. This automated 3D face recognition system is evaluated in several scenarios: frontal pose with expression and illumination variation, frontal faces together with registered faces, registered faces only, and faces registered from different pose orientations across the X, Y and Z axes. The 3D face images used for this research work are taken from the FRAV3D database. Posed 3D facial images are registered to the frontal pose by applying a one-to-all registration technique, and curvature mapping is then applied to the registered face images along with the remaining frontal face images. For classification and recognition, a five-layer feed-forward back-propagation neural network classifier is used, and the corresponding results are discussed in Section 4.
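
    A minimal sketch of the curvature-map step (standard differential-geometry formulas for a range image z = f(x, y); my own illustration, not the authors' full pipeline): Gaussian (K) and mean (H) curvature maps are computed from first and second derivatives, and the singular values of a curvature map give a compact feature vector.

```python
import numpy as np

def curvature_maps(range_image):
    """Returns (K, H): Gaussian and mean curvature maps of a range image."""
    fy, fx = np.gradient(range_image.astype(np.float64))
    fxy, fxx = np.gradient(fx)
    fyy, _ = np.gradient(fy)
    g = 1.0 + fx ** 2 + fy ** 2
    K = (fxx * fyy - fxy ** 2) / g ** 2
    H = ((1 + fx ** 2) * fyy - 2 * fx * fy * fxy + (1 + fy ** 2) * fxx) / (2 * g ** 1.5)
    return K, H

def svd_feature(curvature_map, n=20):
    """Top singular values of a curvature map, used as a low-dimensional feature."""
    return np.linalg.svd(curvature_map, compute_uv=False)[:n]
```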

  1. Human bites (image)

    Science.gov (United States)

    Human bites present a high risk of infection. Besides the bacteria which can cause infection, there is ... the wound extends below the skin. Anytime a human bite has broken the skin, seek medical attention.

  2. Featural and configural face processing strategies: evidence from a functional magnetic resonance imaging study.

    Science.gov (United States)

    Lobmaier, Janek S; Klaver, Peter; Loenneker, Thomas; Martin, Ernst; Mast, Fred W

    2008-02-12

    We explored the processing mechanisms of featural and configural face information using event-related functional magnetic resonance imaging. Featural information describes the information contained in the facial parts; configural information conveys the spatial interrelationship between parts. In a delayed matching-to-sample task, participants decided whether an intact test face matched a precedent scrambled or blurred cue face. Scrambled faces primarily contain featural information whereas blurred faces preserve configural information. Scrambled cue faces evoked enhanced activation in the left fusiform gyrus, left parietal lobe, and left lingual gyrus when viewing intact test faces. Following blurred cue faces, test faces enhanced activation bilaterally in the middle temporal gyrus. The results suggest that featural and configural information is processed by following distinct neural pathways.

  3. Putting a Face to a Name: Visualising Human Rights

    Directory of Open Access Journals (Sweden)

    Vera Mackie

    2014-03-01

    Full Text Available In this essay, I focus on a text which attempts to deal with human rights issues in an accessible media format, Kälin, Müller and Wyttenbach’s book, The Face of Human Rights. I am interested in this text as an attempt to translate between different modes of communicating about human rights, which we might call the academic mode, the bureaucratic mode, the activist mode and the popular media mode. There are significant gaps between the academic debates on human rights, the actual language and protocols of the bodies devoted to ensuring the achievement of basic human rights, the language of activists, and the ways in which these issues are discussed in the media. These issues are compounded in a transnational frame where people must find ways of communicating across differences of language and culture. These problems of communicating across difference are inherent to the contemporary machinery of the international human rights system, where global institutions of governance are implicated in the claims of individuals who are located in diverse national contexts. Several commentators have noted the importance of narrative in human rights advocacy, while others have explored the role of art. I am interested in analysing narrative and representational strategies, from a consciousness that texts work not only through vocabulary and propositional content, but also through discursive positioning. It is necessary to look at the structure of texts, the contents of texts, and the narrative strategies and discursive frameworks which inform them. Similar points can be made about photography, which must be analysed in terms of the specific representational possibilities of visual culture.

  4. Efficient Discriminate Component Analysis using Support Vector Machine Classifier on Invariant Pose and Illumination Face Images

    Directory of Open Access Journals (Sweden)

    R. Rajalakshmi

    2015-03-01

    Full Text Available Face recognition is the process of identifying a person in an image by comparison with a known face image library. Pose and illumination variations are two main practical challenges for an automatic face recognition system. This study proposes a novel face recognition algorithm, Efficient Discriminant Component Analysis (EDCA), for face recognition under varying pose and illumination conditions. The EDCA algorithm overcomes the high-dimensionality problem of the feature space by extracting features from the low-dimensional frequency band of the image. It combines the features of both the LDA and PCA algorithms; these features are used for training and are classified using a Support Vector Machine classifier. The experiments were performed on the CMU-PIE dataset. The experimental results show that the proposed algorithm produces a higher recognition rate than existing LDA- and PCA-based face recognition techniques.

  5. Photogrammetric Network for Evaluation of Human Faces for Face Reconstruction Purpose

    Science.gov (United States)

    Schrott, P.; Detrekői, Á.; Fekete, K.

    2012-08-01

    Facial reconstruction is the process of reconstructing the geometry of the faces of persons from skeletal remains. A research group (BME Cooperation Research Center for Biomechanics) was formed, representing several organisations, to combine the knowledge bases of different disciplines such as anthropology, medicine, mechanics and archaeology, and to computerize the face reconstruction process based on a large dataset of 3D face and skull models gathered from living persons: cranial data from CT scans and face models from photogrammetric evaluations. The BUTE Dept. of Photogrammetry and Geoinformatics works on the method and technology of 3D data acquisition for the face models. In this paper we present the research and results of the photogrammetric network design, the modelling carried out to deal with visibility constraints, and the investigation of the developed basic photogrammetric configuration, in order to specify the result characteristics to be expected from the device built for the photogrammetric face measurements.

  6. A Method for En Face OCT Imaging of Subretinal Fluid in Age-Related Macular Degeneration

    Directory of Open Access Journals (Sweden)

    Fatimah Mohammad

    2014-01-01

    Full Text Available Purpose. The purpose of the study is to report a method for en face imaging of subretinal fluid (SRF) due to age-related macular degeneration (AMD) based on spectral domain optical coherence tomography (SDOCT). Methods. High density SDOCT imaging was performed at two visits in 4 subjects with neovascular AMD and one healthy subject. En face OCT images of a retinal layer anterior to the retinal pigment epithelium were generated. Validity, repeatability, and utility of the method were established. Results. En face OCT images generated by manual and automatic segmentation were nearly indistinguishable and displayed similar regions of SRF. En face OCT images displayed uniform intensities and similar retinal vascular patterns in a healthy subject, while the size and appearance of a hypopigmented fibrotic scar in an AMD subject were similar at 2 visits. In AMD subjects, dark regions on en face OCT images corresponded to reduced or absent light reflectance due to SRF. On en face OCT images, a decrease in SRF areas with treatment was demonstrated and this corresponded with a reduction in the central subfield retinal thickness. Conclusion. En face OCT imaging is a promising tool for visualization and monitoring of SRF area due to disease progression and treatment.

  7. A Method for En Face OCT Imaging of Subretinal Fluid in Age-Related Macular Degeneration.

    Science.gov (United States)

    Mohammad, Fatimah; Wanek, Justin; Zelkha, Ruth; Lim, Jennifer I; Chen, Judy; Shahidi, Mahnaz

    2014-01-01

    Purpose. The purpose of the study is to report a method for en face imaging of subretinal fluid (SRF) due to age-related macular degeneration (AMD) based on spectral domain optical coherence tomography (SDOCT). Methods. High density SDOCT imaging was performed at two visits in 4 subjects with neovascular AMD and one healthy subject. En face OCT images of a retinal layer anterior to the retinal pigment epithelium were generated. Validity, repeatability, and utility of the method were established. Results. En face OCT images generated by manual and automatic segmentation were nearly indistinguishable and displayed similar regions of SRF. En face OCT images displayed uniform intensities and similar retinal vascular patterns in a healthy subject, while the size and appearance of a hypopigmented fibrotic scar in an AMD subject were similar at 2 visits. In AMD subjects, dark regions on en face OCT images corresponded to reduced or absent light reflectance due to SRF. On en face OCT images, a decrease in SRF areas with treatment was demonstrated and this corresponded with a reduction in the central subfield retinal thickness. Conclusion. En face OCT imaging is a promising tool for visualization and monitoring of SRF area due to disease progression and treatment.

  8. Variation in the human cannabinoid receptor CNR1 gene modulates gaze duration for happy faces

    Directory of Open Access Journals (Sweden)

    Chakrabarti Bhismadev

    2011-06-01

    Full Text Available Abstract Background From an early age, humans look longer at preferred stimuli and also typically look longer at facial expressions of emotion, particularly happy faces. Atypical gaze patterns towards social stimuli are common in autism spectrum conditions (ASC). However, it is unknown whether gaze fixation patterns have any genetic basis. In this study, we tested whether variations in the cannabinoid receptor 1 (CNR1) gene are associated with gaze duration towards happy faces. This gene was selected because CNR1 is a key component of the endocannabinoid system, which is involved in processing reward, and in our previous functional magnetic resonance imaging (fMRI) study, we found that variations in CNR1 modulate the striatal response to happy (but not disgust) faces. The striatum is involved in guiding gaze to rewarding aspects of a visual scene. We aimed to validate and extend this result in another sample using a different technique (gaze tracking). Methods A total of 30 volunteers (13 males and 17 females) from the general population observed dynamic emotional expressions on a screen while their eye movements were recorded. They were genotyped for the identical four single-nucleotide polymorphisms (SNPs) in the CNR1 gene tested in our earlier fMRI study. Results Two SNPs (rs806377 and rs806380) were associated with differential gaze duration for happy (but not disgust) faces. Importantly, the allelic groups associated with a greater striatal response to happy faces in the fMRI study were associated with longer gaze duration at happy faces. Conclusions These results suggest that CNR1 variations modulate the striatal function that underlies the perception of signals of social reward, such as happy faces. This suggests that CNR1 is a key element in the molecular architecture of perception of certain basic emotions. This may have implications for understanding neurodevelopmental conditions marked by atypical eye contact and facial emotion processing

  9. Sparse Illumination Learning and Transfer for Single-Sample Face Recognition with Image Corruption and Misalignment

    OpenAIRE

    Zhuang, Liansheng; Chan, Tsung-Han; Yang, Allen Y.; Sastry, S. Shankar; Ma, Yi

    2014-01-01

    Single-sample face recognition is one of the most challenging problems in face recognition. We propose a novel algorithm to address this problem based on a sparse representation based classification (SRC) framework. The new algorithm is robust to image misalignment and pixel corruption, and is able to reduce required gallery images to one sample per class. To compensate for the missing illumination information traditionally provided by multiple gallery images, a sparse illumination learning a...

  10. Orientation tuning of human face processing estimated by contrast matching in transparency displays.

    Science.gov (United States)

    Martini, Paolo; McKone, Elinor; Nakayama, Ken

    2006-06-01

    Upright images of faces appear more salient than faces at other orientations. We exploited this effect in a titration experiment in which faces were superimposed in transparency. By manipulating the physical contrast of the component images, we measured the degree of perceptual dominance as a function of the orientation of the face in the image plane. From these measurements, we obtain the orientation tuning of face processing, which is well approximated by a Gaussian function with an SD of about 45 deg and a mean centered on upright. Faces predominantly lit from above and from below produced very similar results. However, when presented with scrambled faces, observers showed no orientation preference. We argue that these results can be explained by the existence of specialized face-processing mechanisms with an orientation tuning bandwidth of approximately 90 deg, predominantly centered on the upright orientation and easily disrupted by alterations of the normal facial configuration.

  11. FPGA Based Assembling of Facial Components for Human Face Construction

    CERN Document Server

    Halder, Santanu; Nasipuri, Mita; Basu, Dipak Kumar; Kundu, Mahantapas

    2010-01-01

    This paper aims at VLSI realization for generation of a new face from textual description. The FASY (FAce SYnthesis) System is a Face Database Retrieval and new Face generation System that is under development. One of its main features is the generation of the requested face when it is not found in the existing database. The new face generation system works in three steps - searching phase, assembling phase and tuning phase. In this paper the tuning phase using hardware description language and its implementation in a Field Programmable Gate Array (FPGA) device is presented.

  12. Face Scanning in Autism Spectrum Disorder and Attention Deficit/Hyperactivity Disorder: Human Versus Dog Face Scanning

    OpenAIRE

    Mauro Muszkat; Claudia Berlim De Melo; Patricia de Oliveira Lima Muñoz; Tania Kiehl Lucci; Vinicius Frayze David; José de Oliveira Siqueira; Emma Otta

    2015-01-01

    This study used eye tracking to explore attention allocation to human and dog faces in children and adolescents with autism spectrum disorder (ASD), attention deficit/hyperactivity disorder (ADHD), and typical development (TD). Significant differences were found among the three groups. TD participants looked longer at the eyes than ASD and ADHD ones, irrespective of the faces presented. In spite of this difference, groups were similar in that they looked more to the eyes than to the mouth are...

  13. Fourier power spectrum characteristics of face photographs: attractiveness perception depends on low-level image properties.

    Directory of Open Access Journals (Sweden)

    Claudia Menzel

    Full Text Available We investigated whether low-level processed image properties that are shared by natural scenes and artworks - but not veridical face photographs - affect the perception of facial attractiveness and age. Specifically, we considered the slope of the radially averaged Fourier power spectrum in a log-log plot. This slope is a measure of the distribution of spatial frequency power in an image. Images of natural scenes and artworks possess - compared to face images - a relatively shallow slope (i.e., increased high spatial frequency power). Since aesthetic perception might be based on the efficient processing of images with natural scene statistics, we assumed that the perception of facial attractiveness might also be affected by these properties. We calculated the Fourier slope and other beauty-associated measurements in face images and correlated them with ratings of attractiveness and age of the depicted persons (Study 1). We found that Fourier slope - in contrast to the other tested image properties - did not predict attractiveness ratings when we controlled for age. In Study 2A, we overlaid face images with random-phase patterns with different statistics. Patterns with a slope similar to those in natural scenes and artworks resulted in lower attractiveness and higher age ratings. In Studies 2B and 2C, we directly manipulated the Fourier slope of face images and found that images with shallower slopes were rated as more attractive. Additionally, attractiveness of unaltered faces was affected by the Fourier slope of a random-phase background (Study 3). Faces in front of backgrounds with statistics similar to natural scenes and faces were rated as more attractive. We conclude that facial attractiveness ratings are affected by specific image properties. An explanation might be the efficient coding hypothesis.

  14. Fourier power spectrum characteristics of face photographs: attractiveness perception depends on low-level image properties.

    Science.gov (United States)

    Menzel, Claudia; Hayn-Leichsenring, Gregor U; Langner, Oliver; Wiese, Holger; Redies, Christoph

    2015-01-01

    We investigated whether low-level processed image properties that are shared by natural scenes and artworks - but not veridical face photographs - affect the perception of facial attractiveness and age. Specifically, we considered the slope of the radially averaged Fourier power spectrum in a log-log plot. This slope is a measure of the distribution of spatial frequency power in an image. Images of natural scenes and artworks possess - compared to face images - a relatively shallow slope (i.e., increased high spatial frequency power). Since aesthetic perception might be based on the efficient processing of images with natural scene statistics, we assumed that the perception of facial attractiveness might also be affected by these properties. We calculated Fourier slope and other beauty-associated measurements in face images and correlated them with ratings of attractiveness and age of the depicted persons (Study 1). We found that Fourier slope - in contrast to the other tested image properties - did not predict attractiveness ratings when we controlled for age. In Study 2A, we overlaid face images with random-phase patterns with different statistics. Patterns with a slope similar to those in natural scenes and artworks resulted in lower attractiveness and higher age ratings. In Studies 2B and 2C, we directly manipulated the Fourier slope of face images and found that images with shallower slopes were rated as more attractive. Additionally, attractiveness of unaltered faces was affected by the Fourier slope of a random-phase background (Study 3). Faces in front of backgrounds with statistics similar to natural scenes and faces were rated as more attractive. We conclude that facial attractiveness ratings are affected by specific image properties. An explanation might be the efficient coding hypothesis.
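
    The slope measure described above can be computed by radially averaging the 2-D power spectrum and fitting a line in log-log coordinates. The sketch below assumes a 2-D grayscale array and makes no claim about the authors' exact binning or frequency range.

    ```python
    import numpy as np

    def fourier_slope(image):
        """Slope of the radially averaged Fourier power spectrum in log-log coordinates."""
        img = image.astype(np.float64) - image.mean()
        power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2

        h, w = power.shape
        yy, xx = np.indices((h, w))
        radius = np.hypot(yy - h / 2.0, xx - w / 2.0).astype(int)

        # Mean power at each integer radial spatial frequency
        radial_mean = (np.bincount(radius.ravel(), weights=power.ravel())
                       / np.bincount(radius.ravel()))

        freqs = np.arange(1, min(h, w) // 2)   # skip DC, stay below Nyquist
        slope, _ = np.polyfit(np.log(freqs), np.log(radial_mean[freqs]), 1)
        return slope
    ```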

  15. Face and eyes localization algorithm in thermal images for temperature measurement of the inner canthus of the eyes

    Science.gov (United States)

    Budzan, Sebastian; Wyżgolik, Roman

    2013-09-01

    In this paper, a novel algorithm for the detection and localization of the face and eyes in thermal images is presented, particularly for measuring the temperature of the human body via the eye corner (inner canthus) temperature. The algorithm uses a combination of template-matching, knowledge-based and morphological methods, in particular a modified Randomized Hough Transform (RHT) in the localization process, as well as region-growing segmentation to increase the accuracy of the localization algorithm. In many existing solutions, the face and/or eye regions are selected manually and the average temperature in the region is then measured. The paper also discusses experimental studies and the results, which allowed the evaluation of the effectiveness of the developed algorithm. The standardization of measurement necessary for proper temperature measurement with infrared thermal imaging is also presented.

  16. Sample based 3D face reconstruction from a single frontal image by adaptive locally linear embedding

    Institute of Scientific and Technical Information of China (English)

    ZHANG Jian; ZHUANG Yue-ting

    2007-01-01

    In this paper, we propose a highly automatic approach for 3D photorealistic face reconstruction from a single frontal image. The key point of our work is the implementation of adaptive manifold learning approach. Beforehand, an active appearance model (AAM) is trained for automatic feature extraction and adaptive locally linear embedding (ALLE) algorithm is utilized to reduce the dimensionality of the 3D database. Then, given an input frontal face image, the corresponding weights between 3D samples and the image are synthesized adaptively according to the AAM selected facial features. Finally, geometry reconstruction is achieved by linear weighted combination of adaptively selected samples. Radial basis function (RBF) is adopted to map facial texture from the frontal image to the reconstructed face geometry. The texture of invisible regions between the face and the ears is interpolated by sampling from the frontal image. This approach has several advantages: (1) Only a single frontal face image is needed for highly automatic face reconstruction; (2) Compared with former works, our reconstruction approach provides higher accuracy; (3) Constraint based RBF texture mapping provides natural appearance for reconstructed face.

  17. An Automatic Framework for Segmentation and Digital Inpainting of 2D Frontal Face Images

    NARCIS (Netherlands)

    Sobiecki, A.; Giraldi, G. A.; Neves, L. A. P.; Thomaz, C. E.

    2012-01-01

    Nowadays applications that use face images as input for people identification have been very common. In general, the input image must be preprocessed in order to fit some normalization and quality criteria. In this paper, we propose a computational framework composed of digital image quality

  18. A Bayesian model for predicting face recognition performance using image quality

    NARCIS (Netherlands)

    Dutta, A.; Veldhuis, Raymond N.J.; Spreeuwers, Lieuwe Jan

    2014-01-01

    Quality of a pair of facial images is a strong indicator of the uncertainty in decision about identity based on that image pair. In this paper, we describe a Bayesian approach to model the relation between image quality (like pose, illumination, noise, sharpness, etc) and corresponding face

  19. A Feature-Based Structural Measure: An Image Similarity Measure for Face Recognition

    Directory of Open Access Journals (Sweden)

    Noor Abdalrazak Shnain

    2017-08-01

    Full Text Available Facial recognition is one of the most challenging and interesting problems within the field of computer vision and pattern recognition. During the last few years, it has gained special attention due to its importance in relation to current issues such as security, surveillance systems and forensics analysis. Despite this high level of attention to facial recognition, the success is still limited by certain conditions; there is no method which gives reliable results in all situations. In this paper, we propose an efficient similarity index that resolves the shortcomings of the existing measures of feature and structural similarity. This measure, called the Feature-Based Structural Measure (FSM), combines the best features of the well-known SSIM (structural similarity index measure) and FSIM (feature similarity index measure) approaches, striking a balance between performance for similar and dissimilar images of human faces. In addition to the statistical structural properties provided by SSIM, edge detection is incorporated in FSM as a distinctive structural feature. Its performance is tested for a wide range of PSNR (peak signal-to-noise ratio), using the ORL (Olivetti Research Laboratory, now AT&T Laboratory Cambridge) and FEI (Faculty of Industrial Engineering, São Bernardo do Campo, São Paulo, Brazil) databases. The proposed measure is tested under conditions of Gaussian noise; simulation results show that the proposed FSM outperforms the well-known SSIM and FSIM approaches in its efficiency of similarity detection and recognition of human faces.
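
    The exact FSM formula is not reproduced in the abstract; the sketch below only illustrates the general idea of mixing an intensity-based SSIM term with an edge-based structural term, with a hypothetical edge_weight parameter balancing the two.

    ```python
    import numpy as np
    from skimage.filters import sobel
    from skimage.metrics import structural_similarity as ssim

    def fsm_like_similarity(img1, img2, edge_weight=0.5):
        """Blend intensity SSIM with SSIM computed on Sobel edge maps (FSM-style sketch)."""
        intensity_term = ssim(img1, img2, data_range=float(img1.max() - img1.min()))

        e1, e2 = sobel(img1), sobel(img2)
        edge_term = ssim(e1, e2, data_range=float(max(e1.max() - e1.min(), 1e-8)))

        return (1.0 - edge_weight) * intensity_term + edge_weight * edge_term
    ```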

  20. Differences in the Pattern of Hemodynamic Response to Self-Face and Stranger-Face Images in Adolescents with Anorexia Nervosa: A Near-Infrared Spectroscopic Study.

    Science.gov (United States)

    Inoue, Takeshi; Sakuta, Yuiko; Shimamura, Keiichi; Ichikawa, Hiroko; Kobayashi, Megumi; Otani, Ryoko; Yamaguchi, Masami K; Kanazawa, So; Kakigi, Ryusuke; Sakuta, Ryoichi

    2015-01-01

    There have been no reports concerning the self-face perception in patients with anorexia nervosa (AN). The purpose of this study was to compare the neuronal correlates of viewing self-face images (i.e. images of familiar face) and stranger-face images (i.e. images of an unfamiliar face) in female adolescents with and without AN. We used near-infrared spectroscopy (NIRS) to measure hemodynamic responses while the participants viewed full-color photographs of self-face and stranger-face. Fifteen females with AN (mean age, 13.8 years) and 15 age- and intelligence quotient (IQ)-matched female controls without AN (mean age, 13.1 years) participated in the study. The responses to photographs were compared with the baseline activation (response to white uniform blank). In the AN group, the concentration of oxygenated hemoglobin (oxy-Hb) significantly increased in the right temporal area during the presentation of both the self-face and stranger-face images compared with the baseline level. In contrast, in the control group, the concentration of oxy-Hb significantly increased in the right temporal area only during the presentation of the self-face image. To our knowledge the present study is the first report to assess brain activities during self-face and stranger-face perception among female adolescents with AN. There were different patterns of brain activation in response to the sight of the self-face and stranger-face images in female adolescents with AN and controls.

  1. Differences in the Pattern of Hemodynamic Response to Self-Face and Stranger-Face Images in Adolescents with Anorexia Nervosa: A Near-Infrared Spectroscopic Study.

    Directory of Open Access Journals (Sweden)

    Takeshi Inoue

    Full Text Available There have been no reports concerning the self-face perception in patients with anorexia nervosa (AN). The purpose of this study was to compare the neuronal correlates of viewing self-face images (i.e. images of familiar face) and stranger-face images (i.e. images of an unfamiliar face) in female adolescents with and without AN. We used near-infrared spectroscopy (NIRS) to measure hemodynamic responses while the participants viewed full-color photographs of self-face and stranger-face. Fifteen females with AN (mean age, 13.8 years) and 15 age- and intelligence quotient (IQ)-matched female controls without AN (mean age, 13.1 years) participated in the study. The responses to photographs were compared with the baseline activation (response to white uniform blank). In the AN group, the concentration of oxygenated hemoglobin (oxy-Hb) significantly increased in the right temporal area during the presentation of both the self-face and stranger-face images compared with the baseline level. In contrast, in the control group, the concentration of oxy-Hb significantly increased in the right temporal area only during the presentation of the self-face image. To our knowledge the present study is the first report to assess brain activities during self-face and stranger-face perception among female adolescents with AN. There were different patterns of brain activation in response to the sight of the self-face and stranger-face images in female adolescents with AN and controls.

  2. Short faces, big tongues: developmental origin of the human chin.

    Directory of Open Access Journals (Sweden)

    Michael Coquerelle

    Full Text Available During the course of human evolution, the retraction of the face underneath the braincase, and closer to the cervical column, has reduced the horizontal dimension of the vocal tract. By contrast, the relative size of the tongue has not been reduced, implying a rearrangement of the space at the back of the vocal tract to allow breathing and swallowing. This may have left a morphological signature such as a chin (mental prominence) that can potentially be interpreted in Homo. Long considered an autapomorphic trait of Homo sapiens, various extinct hominins show different forms of mental prominence. These features may be the evolutionary by-product of equivalent developmental constraints correlated with an enlarged tongue. In order to investigate developmental mechanisms related to this hypothesis, we compare 34 modern human infants against 8 chimpanzee fetuses, in whom development of the mandibular symphysis passes through similar stages. The study sets out to test whether the shared ontogenetic shape changes of the symphysis observed in both species are driven by the same factor--space restriction at the back of the vocal tract and the associated arrangement of the tongue and hyoid bone. We apply geometric morphometric methods to an extensive three-dimensional configuration of anatomical landmarks and semilandmarks, capturing the geometry of the cervico-craniofacial complex including the hyoid bone, tongue muscle and the mandible. We demonstrate that in both species, the forward displacement of the mental region derives from the arrangement of the tongue and hyoid bone, in order to cope with the relative horizontal narrowing of the oral cavity. Because humans and chimpanzees share this pattern of developmental integration, the different forms of mental prominence seen in some extinct hominins likely originate from equivalent ontogenetic constraints. Variations in this process could account for similar morphologies.

  3. Technique for real-time frontal face image acquisition using stereo system

    Science.gov (United States)

    Knyaz, Vladimir A.; Vizilter, Yuri V.; Kudryashov, Yuri I.

    2013-04-01

    Most existing face recognition systems are based on two-dimensional images, and recognition quality is rather high for frontal face images. For other kinds of images, however, the quality decreases significantly. It is necessary to compensate for the effect of a change in the posture of a person (the camera angle) for correct operation of such systems. There are methods for transforming a 2D image of a person to a canonical orientation. The efficiency of these methods depends on the accuracy of determining specific anthropometric points. Problems can arise in cases of partial occlusion of the person's face. Another approach is to have a set of images of the person at different view angles for further processing, but the need to store and process a large number of two-dimensional images makes this method considerably time-consuming. The proposed technique uses a stereo system for fast generation of a 3D model of the person's face and for obtaining a face image in a given orientation from this 3D model. Real-time performance is provided by implementing graph cut methods for 3D reconstruction of the face surface and applying the CUDA software library for parallel computation.

  4. Predicting performance of a face recognition system based on image quality

    NARCIS (Netherlands)

    Dutta, Abhishek

    2015-01-01

    In this dissertation, we present a generative model to capture the relation between facial image quality features (like pose, illumination direction, etc) and face recognition performance. Such a model can be used to predict the performance of a face recognition system. Since the model is based sole

  5. Principal component analysis of image gradient orientations for face recognition

    NARCIS (Netherlands)

    Tzimiropoulos, Georgios; Zafeiriou, Stefanos; Pantic, Maja

    We introduce the notion of Principal Component Analysis (PCA) of image gradient orientations. As image data is typically noisy, but noise is substantially different from Gaussian, traditional PCA of pixel intensities very often fails to estimate reliably the low-dimensional subspace of a given data
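
    One common way to let PCA operate on gradient orientations, in the spirit of the idea above, is to map each orientation angle onto the unit circle (cosine and sine) before the decomposition. The feature layout and component count below are illustrative choices, not the authors' exact formulation.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    def gradient_orientation_features(images):
        """Map each 2-D image to the [cos(phi), sin(phi)] representation of its gradient orientations."""
        feats = []
        for img in images:
            gy, gx = np.gradient(img.astype(np.float64))
            phi = np.arctan2(gy, gx)
            feats.append(np.concatenate([np.cos(phi).ravel(), np.sin(phi).ravel()]))
        return np.vstack(feats)

    # images: iterable of equally sized grayscale face images (assumed available)
    # feats = gradient_orientation_features(images)
    # subspace = PCA(n_components=50).fit_transform(feats)
    ```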

  6. Facing the challenges in human resources for humanitarian health.

    Science.gov (United States)

    Mowafi, Hani; Nowak, Kristin; Hein, Karen

    2007-01-01

    The human resources crisis in humanitarian health care parallels that seen in the broader area of health care. This crisis is exacerbated by the lack of resources in areas in which humanitarian action is needed--difficult environments that often are remote and insecure--and the requirement of specific skill sets is not routinely gained during traditional medical training. While there is ample data to suggest that health outcomes improve when worker density is increased, this remains an area of critical under-investment in humanitarian health care. In addition to under-investment, other factors limit the availability of human resources for health (HRH) in humanitarian work including: (1) over-reliance on degrees as surrogates for specific competencies; (2) under-development and under-utilization of national staff and beneficiaries as humanitarian health workers; (3) lack of standardized training modules to ensure adequate preparation for work in complex emergencies; (4) and the draining of limited available HRH from countries with low prevalence and high need to wealthier, developed nations also facing HRH shortages. A working group of humanitarian health experts from implementing agencies, United Nations agencies, private and governmental financiers, and members of academia gathered at Hanover, New Hampshire for a conference to discuss elements of the HRH problem in humanitarian health care and how to solve them. Several key elements of successful solutions were highlighted, including: (1) the need to develop a set of standards of what would constitute "adequate training" for humanitarian health work; (2) increasing the utilization and professional development of national staff; (3) "training with a purpose" specific to humanitarian health work (not simply relying on professional degrees as surrogates); (4) and developing specific health task-based competencies thereby increasing the pool of potential workers. Such steps would accomplish several key goals, such as

  7. Both dog and human faces are explored abnormally by young children with autism spectrum disorders.

    Science.gov (United States)

    Guillon, Quentin; Hadjikhani, Nouchine; Baduel, Sophie; Kruck, Jeanne; Arnaud, Mado; Rogé, Bernadette

    2014-10-22

    When looking at faces, typical individuals tend to have a right hemispheric bias manifested by a tendency to look first toward the left visual hemifield. Here, we tested for the presence of this bias in young children with autism spectrum disorders (ASD) for both human and dog faces. We show that children with ASD do not show a left visual hemifield (right hemispheric) bias for human faces. In addition, we show that this effect extends to faces of dogs, suggesting that the absence of bias is not specific to human faces, but applies to all faces with the first-order configuration, pointing to an anomaly at an early stage of visual analysis of faces. The lack of right hemispheric dominance for face processing may reflect a more general disorder of cerebral specialization of social functions in ASD.

  8. Embracing humanity in the face of death: why do existential concerns moderate ingroup humanization?

    Science.gov (United States)

    Vaes, Jeroen; Bain, Paul G; Bastian, Brock

    2014-01-01

    People humanize their ingroup to address existential concerns about their mortality, but the reasons why they do so remain ambiguous. One explanation is that people humanize their ingroup to bolster their social identity in the face of their mortality. Alternatively, people might be motivated to see their ingroup as more uniquely human (UH) to distance themselves from their corporeal "animal" nature. These explanations were tested in Australia, where social identity is tied less to UH and more to human nature (HN) which does not distinguish humans from animals. Australians attributed more HN traits to the ingroup when mortality was salient, while the attribution of UH traits remained unchanged. This indicates that the mortality-buffering function of ingroup humanization lies in reinforcing the humanness of our social identity, rather than just distancing ourselves from our animal nature. Implications for (de)humanization in intergroup relations are discussed.

  9. Application of 3D Morphable Models to faces in video images

    NARCIS (Netherlands)

    van Rootseler, R.T.A.; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.; van den Biggelaar, Olivier

    2011-01-01

    The 3D Morphable Face Model (3DMM) has been used for over a decade for creating 3D models from single images of faces. This model is based on a PCA model of the 3D shape and texture generated from a limited number of 3D scans. The goal of fitting a 3DMM to an image is to find the model coefficients,

  10. Face Scanning in Autism Spectrum Disorder and Attention Deficit/Hyperactivity Disorder: Human Versus Dog Face Scanning

    Science.gov (United States)

    Muszkat, Mauro; de Mello, Claudia Berlim; Muñoz, Patricia de Oliveira Lima; Lucci, Tania Kiehl; David, Vinicius Frayze; Siqueira, José de Oliveira; Otta, Emma

    2015-01-01

    This study used eye tracking to explore attention allocation to human and dog faces in children and adolescents with autism spectrum disorder (ASD), attention deficit/hyperactivity disorder (ADHD), and typical development (TD). Significant differences were found among the three groups. TD participants looked longer at the eyes than ASD and ADHD ones, irrespective of the faces presented. In spite of this difference, groups were similar in that they looked more to the eyes than to the mouth areas of interest. The ADHD group gazed longer at the mouth region than the other groups. Furthermore, groups were also similar in that they looked more to the dog than to the human faces. The eye-tracking technology proved to be useful for behavioral investigation in different neurodevelopmental disorders. PMID:26557097

  12. Face scanning in autism spectrum disorder (ASD) and attention deficit/hyperactivity disorder (ADHD): human versus dog face scanning

    Directory of Open Access Journals (Sweden)

    Mauro Muszkat

    2015-10-01

    Full Text Available This study used eye-tracking to explore attention allocation to human and dog faces in children and adolescents with autism spectrum disorder (ASD), attention deficit/hyperactivity disorder (ADHD), and typical development (TD). Significant differences were found among the three groups. TD participants looked longer at the eyes than ASD and ADHD ones, irrespective of the faces presented. In spite of this difference, groups were similar in that they looked more to the eyes than to the mouth areas of interest. The ADHD group gazed longer at the mouth region than the other groups. Furthermore, groups were also similar in that they looked more to the dog than to the human faces. The eye-tracking technology proved to be useful for behavioral investigation in different neurodevelopmental disorders.

  13. A Novel Approach of Low-Light Image Denoising for Face Recognition

    Directory of Open Access Journals (Sweden)

    Yimei Kang

    2014-04-01

    Full Text Available Illumination variation makes automatic face recognition a challenging task, especially in low-light environments. A very simple and efficient novel low-light image denoising method for low-frequency noise (DeLFN) is proposed. The noise frequency distribution of low-light images is presented based on massive experimental results. Low and very low frequency noise are dominant in low-light conditions. DeLFN is a three-level image denoising method. The first level denoises mixed noise by histogram equalization (HE) to improve overall contrast. The second level denoises low-frequency noise by logarithmic transformation (LOG) to enhance image detail. The third level denoises residual very-low-frequency noise by high-pass filtering to recover more features of the true images. The PCA (Principal Component Analysis) recognition method is applied to test the recognition rate of face images preprocessed with DeLFN. DeLFN is compared with several representative illumination preprocessing methods on the Yale Face Database B, the Extended Yale Face Database B, and the CMU PIE face database, respectively. DeLFN not only outperformed other algorithms in improving visual quality and face recognition rate, but is also simpler and more computationally efficient for real-time applications.
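
    A rough sketch of the three-level pipeline described above (histogram equalization, logarithmic transform, high-pass filtering) is given below; the kernel size and normalization steps are hypothetical choices, since the paper's parameters are not stated in the abstract.

    ```python
    import cv2
    import numpy as np

    def low_light_preprocess(gray):
        """Three-stage low-light preprocessing in the spirit of DeLFN (gray: uint8 image)."""
        # 1) Histogram equalisation to improve overall contrast
        eq = cv2.equalizeHist(gray)

        # 2) Logarithmic transform to enhance detail in dark regions
        logt = np.log1p(eq.astype(np.float32))
        logt = cv2.normalize(logt, None, 0, 255, cv2.NORM_MINMAX)

        # 3) High-pass filtering (image minus its low-pass version) against residual
        #    very-low-frequency noise; the 31x31 Gaussian is an illustrative choice
        lowpass = cv2.GaussianBlur(logt, (31, 31), 0)
        highpass = cv2.normalize(logt - lowpass, None, 0, 255, cv2.NORM_MINMAX)
        return highpass.astype(np.uint8)
    ```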

  14. Face Recognition for Access Control Systems Combining Image-Difference Features Based on a Probabilistic Model

    Science.gov (United States)

    Miwa, Shotaro; Kage, Hiroshi; Hirai, Takashi; Sumi, Kazuhiko

    We propose a probabilistic face recognition algorithm for Access Control Systems (ACSs). Compared with existing ACSs that use low-cost IC cards, face recognition has advantages in usability and security: it does not require people to hold cards over scanners and does not accept impostors carrying authorized cards. Face recognition therefore attracts more interest in security markets than IC cards. But in security markets where low-cost ACSs exist, price competition is important, and there is a limitation on the quality of available cameras and image control. Therefore, ACSs using face recognition are required to handle much lower-quality images, such as defocused and poorly gain-controlled images, than high-security systems such as immigration control. To tackle such image quality problems, we developed a face recognition algorithm based on a probabilistic model which combines a variety of image-difference features, trained by Real AdaBoost, with their prior probability distributions. This makes it possible to evaluate and utilize only reliable features among the trained ones during each authentication and to achieve high recognition performance. A field evaluation using a pseudo Access Control System installed in our office shows that the proposed system achieves a constantly high recognition rate independent of face image quality, with an EER (Equal Error Rate) about four times lower under a variety of image conditions than a system without prior probability distributions. In contrast, using image-difference features without prior probabilities is sensitive to image quality. We also evaluated PCA, which has worse but constant performance because of its general optimization over all the data. Compared with PCA, Real AdaBoost without prior distributions performs twice as well under good image conditions, but degrades to PCA-level performance under poor image conditions.

  15. Baby schema in human and animal faces induces cuteness perception and gaze allocation in children

    Directory of Open Access Journals (Sweden)

    Marta Borgi

    2014-05-01

    Full Text Available The baby schema concept was originally proposed as a set of infantile traits with high appeal for humans, subsequently shown to elicit caretaking behavior and to affect cuteness perception and attentional processes. However, it is unclear whether the response to the baby schema may be extended to the human-animal bond context. Moreover, questions remain as to whether the cute response is constant and persistent or whether it changes with development. In the present study we parametrically manipulated the baby schema in images of humans, dogs and cats. We analyzed responses of 3-6-year-old children, using both explicit (i.e. cuteness ratings) and implicit (i.e. eye gaze patterns) measures. By means of eye-tracking, we assessed children’s preferential attention to images varying only for the degree of baby schema and explored participants’ fixation patterns during a cuteness task. For comparative purposes, cuteness ratings were also obtained in a sample of adults. Overall our results show that the response to an infantile facial configuration emerges early during development. In children, the baby schema affects both cuteness perception and gaze allocation to infantile stimuli and to specific facial features, an effect not simply limited to human faces. In line with previous research, results confirm human positive appraisal towards animals and inform both educational and therapeutic interventions involving pets, helping to minimize risk factors (e.g. dog bites).

  16. A novel pose and illumination robust face recognition with a single training image per person algorithm

    Institute of Scientific and Technical Information of China (English)

    Junbao Li; Jeng-Shyang Pan

    2008-01-01

    In the real-world application of face recognition systems, owing to the difficulty of collecting samples or the limited storage space of systems, only one sample image per person is stored in the system, which is the so-called one-sample-per-person problem. Moreover, pose and illumination have an impact on recognition performance. We propose a novel pose- and illumination-robust algorithm for face recognition with a single training image per person to address these limitations. Experimental results show that the proposed algorithm is an efficient and practical approach for face recognition.

  17. Image Generation Using Bidirectional Integral Features for Face Recognition with a Single Sample per Person.

    Directory of Open Access Journals (Sweden)

    Yonggeol Lee

    Full Text Available In face recognition, most appearance-based methods require several images of each person to construct the feature space for recognition. However, in the real world it is difficult to collect multiple images per person, and in many cases there is only a single sample per person (SSPP). In this paper, we propose a method to generate new images with various illuminations from a single image taken under frontal illumination. Motivated by the integral image, which was developed for face detection, we extract the bidirectional integral feature (BIF) to obtain the characteristics of the illumination condition at the time of the picture being taken. The experimental results for various face databases show that the proposed method results in improved recognition performance under illumination variation.

  18. Image Generation Using Bidirectional Integral Features for Face Recognition with a Single Sample per Person

    Science.gov (United States)

    Lee, Yonggeol; Lee, Minsik; Choi, Sang-Il

    2015-01-01

    In face recognition, most appearance-based methods require several images of each person to construct the feature space for recognition. However, in the real world it is difficult to collect multiple images per person, and in many cases there is only a single sample per person (SSPP). In this paper, we propose a method to generate new images with various illuminations from a single image taken under frontal illumination. Motivated by the integral image, which was developed for face detection, we extract the bidirectional integral feature (BIF) to obtain the characteristics of the illumination condition at the time of the picture being taken. The experimental results for various face databases show that the proposed method results in improved recognition performance under illumination variation. PMID:26414018
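
    The abstract does not spell out the exact bidirectional integral feature, so the sketch below only illustrates the underlying building block: cumulative intensity sums (integral images) accumulated from two opposite image corners.

    ```python
    import numpy as np

    def integral_image(img):
        """Standard summed-area table: entry (i, j) holds the sum of img[:i+1, :j+1]."""
        return img.astype(np.float64).cumsum(axis=0).cumsum(axis=1)

    def bidirectional_integrals(img):
        """Integral images swept from the top-left and from the bottom-right corner.

        A rough stand-in for the 'bidirectional' idea; the paper's exact feature
        definition is not given in the abstract.
        """
        img = img.astype(np.float64)
        forward = img.cumsum(axis=0).cumsum(axis=1)
        backward = img[::-1, ::-1].cumsum(axis=0).cumsum(axis=1)[::-1, ::-1]
        return forward, backward
    ```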

  19. From the Form to the Face to Face: IRBs, Ethnographic Researchers, and Human Subjects Translate Consent

    Science.gov (United States)

    Metro, Rosalie

    2014-01-01

    Based on my fieldwork with Burmese teachers in Thailand, I describe the drawbacks of using IRB-mandated written consent procedures in my cross-cultural collaborative ethnographic research on education. Drawing on theories of intersubjectivity (Mikhail Bakhtin), ethics (Emmanuel Levinas), and translation (Naoki Sakai), I describe face-to-face…

  1. A New Viewpoint on the Evolution of Sexually Dimorphic Human Faces

    Directory of Open Access Journals (Sweden)

    Darren Burke

    2010-10-01

    Full Text Available Human faces show marked sexual shape dimorphism, and this affects their attractiveness. Humans also show marked height dimorphism, which means that men typically view women's faces from slightly above and women typically view men's faces from slightly below. We tested the idea that this perspective difference may be the evolutionary origin of the face shape dimorphism by having males and females rate the masculinity/femininity and attractiveness of male and female faces that had been manipulated in pitch (forward or backward tilt), simulating viewing the face from slightly above or below. As predicted, tilting female faces upwards decreased their perceived femininity and attractiveness, whereas tilting them downwards increased their perceived femininity and attractiveness. Male faces tilted up were judged to be more masculine, and tilted down judged to be less masculine. This suggests that sexual selection may have embodied this viewpoint difference into the actual facial proportions of men and women.

  2. Face recognition via edge-based Gabor feature representation for plastic surgery-altered images

    Science.gov (United States)

    Chude-Olisah, Chollette C.; Sulong, Ghazali; Chude-Okonkwo, Uche A. K.; Hashim, Siti Z. M.

    2014-12-01

    Plastic surgery procedures on the face introduce skin texture variations between images of the same person (intra-subject), thereby making the task of face recognition more difficult than in the normal scenario. Usually, in contemporary face recognition systems, the original gray-level face image is used as input to the Gabor descriptor, which translates to encoding some texture properties of the face image. The texture-encoding process significantly degrades the performance of such systems in the case of plastic surgery due to the presence of surgically induced intra-subject variations. Based on the proposition that the shape of significant facial components such as eyes, nose, eyebrow, and mouth remains unchanged after plastic surgery, this paper employs an edge-based Gabor feature representation approach for the recognition of surgically altered face images. We use the edge information, which is dependent on the shapes of the significant facial components, to address the plastic surgery-induced texture variation problems. To ensure that the significant facial components represent useful edge information with little or no false edges, a simple illumination normalization technique is proposed for preprocessing. A Gabor wavelet is applied to the edge image to accentuate the uniqueness of the significant facial components for discriminating among different subjects. The performance of the proposed method is evaluated on the Georgia Tech (GT) and the Labeled Faces in the Wild (LFW) databases with illumination and expression problems, and on a plastic surgery database with texture changes. Results show that the proposed edge-based Gabor feature representation approach is robust against plastic surgery-induced face variations amidst expression and illumination problems and outperforms the existing plastic surgery face recognition methods reported in the literature.
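
    The core idea above, applying a Gabor bank to an edge map rather than to raw intensities, can be sketched as follows; the filter sizes, orientations and pooled-mean statistic are illustrative settings, not the authors' parameters.

    ```python
    import cv2
    import numpy as np

    def edge_gabor_features(gray, scales=(7, 11, 15), n_orientations=8):
        """Pooled Gabor responses computed on a gradient-magnitude edge image."""
        gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
        edges = cv2.magnitude(gx, gy)          # simple edge map of the face image

        features = []
        for ksize in scales:
            for k in range(n_orientations):
                theta = k * np.pi / n_orientations
                # (ksize, sigma, theta, lambda, gamma, psi) -- illustrative values
                kernel = cv2.getGaborKernel((ksize, ksize), ksize / 3.0, theta,
                                            ksize / 2.0, 0.5, 0)
                response = cv2.filter2D(edges, cv2.CV_32F, kernel)
                features.append(float(response.mean()))
        return np.array(features)
    ```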

  3. Distinct representations of configural and part information across multiple face-selective regions of the human brain.

    Science.gov (United States)

    Golarai, Golijeh; Ghahremani, Dara G; Eberhardt, Jennifer L; Gabrieli, John D E

    2015-01-01

    Several regions of the human brain respond more strongly to faces than to other visual stimuli, such as regions in the amygdala (AMG), superior temporal sulcus (STS), and the fusiform face area (FFA). It is unclear if these brain regions are similar in representing the configuration or natural appearance of face parts. We used functional magnetic resonance imaging of healthy adults who viewed natural or schematic faces with internal parts that were either normally configured or randomly rearranged. Response amplitudes were reduced in the AMG and STS when subjects viewed stimuli whose configuration of parts were digitally rearranged, suggesting that these regions represent the 1st order configuration of face parts. In contrast, response amplitudes in the FFA showed little modulation whether face parts were rearranged or if the natural face parts were replaced with lines. Instead, FFA responses were reduced only when both configural and part information were reduced, revealing an interaction between these factors, suggesting distinct representation of 1st order face configuration and parts in the AMG and STS vs. the FFA.

  4. High Performance Human Face Recognition using Independent High Intensity Gabor Wavelet Responses: A Statistical Approach

    CERN Document Server

    Kar, Arindam; Basu, Dipak Kumar; Nasipuri, Mita; Kundu, Mahantapas

    2011-01-01

    In this paper, we present a technique by which high-intensity feature vectors extracted from the Gabor wavelet transformation of frontal face images are combined with Independent Component Analysis (ICA) for enhanced face recognition. Firstly, the high-intensity feature vectors are automatically extracted using the local characteristics of each individual face from the Gabor-transformed images. Then ICA is applied to these locally extracted high-intensity feature vectors of the facial images to obtain the independent high-intensity feature (IHIF) vectors. These IHIF vectors form the basis of the work. Finally, image classification is done using these IHIF vectors, which are considered representatives of the images. The importance of implementing ICA along with the high-intensity features of the Gabor wavelet transformation is twofold. On the one hand, the selected peaks of the Gabor-transformed face images exhibit strong characteristics of spatial locality, scale, and orientation selectivity. Thus these...

  5. Fixations Gate Species-Specific Responses to Free Viewing of Faces in the Human and Macaque Amygdala

    Directory of Open Access Journals (Sweden)

    Juri Minxha

    2017-01-01

    Full Text Available Neurons in the primate amygdala respond prominently to faces. This implicates the amygdala in the processing of socially significant stimuli, yet its contribution to social perception remains poorly understood. We evaluated the representation of faces in the primate amygdala during naturalistic conditions by recording from both human and macaque amygdala neurons during free viewing of identical arrays of images with concurrent eye tracking. Neurons responded to faces only when they were fixated, suggesting that neuronal activity was gated by visual attention. Further experiments in humans utilizing covert attention confirmed this hypothesis. In both species, the majority of face-selective neurons preferred faces of conspecifics, a bias also seen behaviorally in first fixation preferences. Response latencies, relative to fixation onset, were shortest for conspecific-selective neurons and were ∼100 ms shorter in monkeys compared to humans. This argues that attention to faces gates amygdala responses, which in turn prioritize species-typical information for further processing.

  6. An Illumination Invariant Face Detection Based on Human Shape Analysis and Skin Color Information

    Directory of Open Access Journals (Sweden)

    Dibakar Chakraborty

    2012-06-01

    Full Text Available This paper provides a novel approach towards face area localization through analyzing the shape characteristics of the human body. The face region is extracted by determining the sharp increase in body pixels from the neck region to the shoulder area. To confirm the face area, skin color information is also analyzed. The experimental analysis shows that the proposed algorithm detects the face area effectively and that its performance is quite satisfactory.
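
    A hedged sketch of the two cues described above (a jump in body-pixel width near the shoulders plus skin color) is given below; the Cr/Cb thresholds and the single-argmax heuristic are common approximations, not the paper's exact rules.

    ```python
    import cv2
    import numpy as np

    def locate_face_region(bgr):
        """Rough face-region localisation from skin colour and a body-width jump."""
        ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
        # Widely used (approximate) skin range in the Cr/Cb plane
        skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

        rows = skin.sum(axis=1) / 255.0                # number of skin/body pixels per row
        shoulder_row = int(np.argmax(np.diff(rows)))   # row with the sharpest width increase
        face_mask = skin[:shoulder_row, :]             # region above the shoulder line
        return face_mask, shoulder_row
    ```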

  7. Face Detection Based on Image Segmentation

    Institute of Scientific and Technical Information of China (English)

    李艳; 陈虹洁

    2011-01-01

    As one of the key problems in face recognition, face detection has received increasing attention in recent years. However, the collected image information is usually very rich, and the face cannot be directly distinguished from the background. An effective method is therefore needed to solve this image classification problem. Cluster analysis methods from data mining can effectively partition large amounts of data and provide a new research direction for image segmentation in face detection.

  8. Direct imaging of haloes and truncations in face-on nearby galaxies

    NARCIS (Netherlands)

    Knapen, Johan; Peters, Stephan; van der Kruit, Piet; Trujillo, Ignacio; Fliri, Juergen; Cisternas, Mauricio

    2015-01-01

    We use ultra-deep imaging from the IAC Stripe82 Legacy Project to study the surface photometry of 22 nearby, face-on to moderately inclined spiral galaxies. The reprocessed and co-added SDSS/Stripe82 imaging allows us to probe the galaxy down to 29-30 r′ magnitudes/arcsec² and thus reach into the ve

  9. Efficient Recognition of Human Faces from Video in Particle Filter

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Face recognition from video requires dealing with uncertainty in both tracking and recognition. This paper proposes an effective method for face recognition from video. In order to realize simultaneous tracking and recognition, fisherface-based recognition is combined with tracking into one model. This model is then embedded into a particle filter to perform face recognition from video. In order to improve the robustness of tracking, an expectation maximization (EM) algorithm is adopted to update the appearance model. The experimental results show that the proposed method performs well in tracking and recognition even in poor conditions such as occlusion and marked changes in lighting.

  10. Unified Probabilistic Models for Face Recognition from a Single Example Image per Person

    Institute of Scientific and Technical Information of China (English)

    Pin Liao; Li Shen

    2004-01-01

    This paper presents a new technique of unified probabilistic models for face recognition from only a single example image per person. The unified models, trained on a training set with multiple samples per person, are used to recognize facial images from another disjoint database with a single sample per person. Variations between facial images are modeled as two unified probabilistic models: within-class variations and between-class variations. Gaussian Mixture Models are used to approximate the distributions of the two variations, and a classifier combination method is exploited to improve performance. Extensive experimental results on the ORL face database and the authors' database (the ICT-JDL database), including a total of 1,750 facial images of 350 individuals, demonstrate that the proposed technique, compared with the traditional eigenface method and some well-known traditional algorithms, is a significantly more effective and robust approach for face recognition.
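
    The two-distribution idea above can be sketched with off-the-shelf Gaussian Mixture Models: one fitted to within-class difference vectors, one to between-class difference vectors, and a decision taken from their log-likelihood ratio. Component counts and feature choices are assumptions, not the paper's settings.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def fit_variation_models(diff_within, diff_between, n_components=3):
        """Fit GMMs to within-class and between-class feature-difference vectors."""
        gmm_within = GaussianMixture(n_components=n_components).fit(diff_within)
        gmm_between = GaussianMixture(n_components=n_components).fit(diff_between)
        return gmm_within, gmm_between

    def same_person_score(gmm_within, gmm_between, probe_feat, gallery_feat):
        """Log-likelihood ratio that the probe/gallery difference is a within-class variation."""
        diff = np.asarray(probe_feat - gallery_feat).reshape(1, -1)
        return gmm_within.score(diff) - gmm_between.score(diff)
    ```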

  12. Research on Face Detection Algorithms Based on Digital Image Processing

    Institute of Scientific and Technical Information of China (English)

    刘笃晋; 邓小亚; 蒲国林

    2012-01-01

    Face detection is the premise and foundation of face recognition, and it also has very important application value in digital video processing, identity authentication, content-based retrieval and visual inspection. This paper reviews the current state of each step of color face detection based on digital image processing, including face image denoising, face image edge detection, face image segmentation and removal of illumination effects, and points out future development directions for each step.

  13. A Preliminary Study on Face Beauty Degree Based on Face Images

    Institute of Scientific and Technical Information of China (English)

    张天刚; 张景安; 康苏明

    2011-01-01

    In order to realize ascending or descending sorting in a relational database with the face image as its key field, a preliminary study is performed on automatic computer assessment of face beauty degree. Face features are extracted with an active appearance model and are then divided into public features and private features. Public features are used for gender identification, while private features serve two functions: calculating the proportions of the internal parts of the face and obtaining face texture information. Private features are used to assess the face beauty degree. Experiments are carried out with a back-propagation neural network classifier on the OID face database built by the authors themselves, and the results are consistent with human visual perception. The method not only produces an exact result for a single tester, but also, for multiple testers, first classifies by gender and then automatically sorts testers of the same gender by beauty degree.

  14. [Decrease in N170 evoked potential component latency during repeated presentation of face images].

    Science.gov (United States)

    Verkhliutov, V M; Ushakov, V L; Strelets, V B

    2009-01-01

    EEG was recorded from 28 channels in 15 healthy volunteers during the presentation of visual stimuli in the form of face and building images. The stimuli were presented in two series. The first series consisted of 60 face and 60 building images presented in random order. The second series consisted of 30 face and 30 building images and began 1.5-2 min after the end of the first one. No instruction was given to the participants. P1, N170 and VPP EP components were identified for both stimulus categories. These components were located in the medial parietal area (Brodmann area 40). P1 and N170 components were recorded in the superior temporal sulcus (Brodmann area 21, STS region), with latencies of 120 ms and 155 ms, respectively. VPP was recorded with a latency of 190 ms (Brodmann area 19). Dynamic mapping of EP components with latencies from 97 to 242 ms revealed movement of the positive maxima from occipital to frontal areas through the temporal areas and their subsequent return to occipital areas through the central ones. Comparison of EP components to face and building images revealed amplitude differences in the following areas: P1 in frontal, central and anterior temporal areas; N170 in frontal, central, temporal and parietal areas; VPP in all areas. It was also revealed that N170 latency was 12 ms shorter for face than for building images. It is proposed that this N170 latency decrease for face compared with building images is connected with the different spatial locations of the fusiform areas responsible for face and building image recognition. Priming, the effect revealed during repeated presentation of face images, is interpreted as a manifestation of the functional heterogeneity of the fusiform area responsible for face image recognition. The hypothesis is put forward that the parts of extrastriate cortex which are located closer to the central retinotopical

  15. Unified framework for automated iris segmentation using distantly acquired face images.

    Science.gov (United States)

    Tan, Chun-Wei; Kumar, Ajay

    2012-09-01

    Remote human identification using iris biometrics has important civilian and surveillance applications, and its success requires the development of robust segmentation algorithms to automatically extract the iris region. This paper presents a new iris segmentation framework which can robustly segment iris images acquired using near-infrared or visible illumination. The proposed approach exploits multiple higher-order local pixel dependencies to robustly classify the eye region pixels into iris or non-iris regions. Face and eye detection modules have been incorporated into the unified framework to automatically provide the localized eye region from the facial image for iris segmentation. We develop robust postprocessing operations to effectively mitigate noisy pixels caused by misclassification. Experimental results presented in this paper suggest significant improvement in the average segmentation errors over previously proposed approaches, i.e., 47.5%, 34.1%, and 32.6% on the UBIRIS.v2, FRGC, and CASIA.v4 at-a-distance databases, respectively. The usefulness of the proposed approach is also ascertained from recognition experiments on three different publicly available databases.

  16. A Novel Permutation Based Approach for Effective and Efficient Representation of Face Images under Varying Illuminations

    Directory of Open Access Journals (Sweden)

    S Natarajan

    2013-08-01

    Full Text Available Of paramount importance for an automated face recognition system is the ability to enhance discriminatory power with a low-dimensional feature representation. Keeping this as a focal point, we present a novel approach to face recognition by formulating the problem of face tagging in terms of permutation. Using the fundamental observation that the dominant pixels of a person's face remain dominant under varying illumination, we develop a Permutation Matrix (PM) based approach for representing face images. The proposed method is extensively evaluated on several benchmark databases under different exemplary evaluation protocols reported in the literature. Experimental results and a comparative study with state-of-the-art methods suggest that the proposed approach provides a better representation of the face, thereby achieving higher efficacy and lower error rates.
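
    The paper's exact permutation-matrix construction is not reproduced in the abstract; the sketch below only captures the underlying intuition that the rank order of pixel intensities is unchanged by monotonic illumination changes, compared here with a Spearman rank correlation.

    ```python
    import numpy as np
    from scipy.stats import spearmanr

    def rank_representation(img):
        """Rank (permutation) of pixel intensities; monotone intensity changes leave it intact."""
        flat = img.astype(np.float64).ravel()
        return flat.argsort().argsort()        # rank of each pixel

    def rank_similarity(img1, img2):
        """Spearman correlation between the two permutation representations."""
        rho, _ = spearmanr(rank_representation(img1), rank_representation(img2))
        return rho
    ```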

  17. Face recognition across makeup and plastic surgery from real-world images

    Science.gov (United States)

    Moeini, Ali; Faez, Karim; Moeini, Hossein

    2015-09-01

    A study for feature extraction is proposed to handle the problem of facial appearance changes including facial makeup and plastic surgery in face recognition. To extend a face recognition method robust to facial appearance changes, features are individually extracted from facial depth on which facial makeup and plastic surgery have no effect. Then facial depth features are added to facial texture features to perform feature extraction. Accordingly, a three-dimensional (3-D) face is reconstructed from only a single two-dimensional (2-D) frontal image in real-world scenarios. Then the facial depth is extracted from the reconstructed model. Afterward, the dual-tree complex wavelet transform (DT-CWT) is applied to both texture and reconstructed depth images to extract the feature vectors. Finally, the final feature vectors are generated by combining 2-D and 3-D feature vectors, and are then classified by adopting the support vector machine. Promising results have been achieved for makeup-invariant face recognition on two available image databases including YouTube makeup and virtual makeup, and plastic surgery-invariant face recognition on a plastic surgery face database is compared to several state-of-the-art feature extraction methods. Several real-world scenarios are also planned to evaluate the performance of the proposed method on a combination of these three databases with 1102 subjects.

  18. Foveation: an alternative method to simultaneously preserve privacy and information in face images

    Science.gov (United States)

    Alonso, Víctor E.; Enríquez-Caldera, Rogerio; Sucar, Luis Enrique

    2017-03-01

    This paper presents a real-time foveation technique proposed as an alternative method for image obfuscation that simultaneously preserves privacy in face de-identification. The relevance of the proposed technique is discussed through a comparative study of the most common distortion methods for face images and an assessment of the performance and effectiveness of privacy protection. All the techniques presented here are evaluated by passing the obfuscated images through face recognition software. Data utility preservation was evaluated using gender and facial expression classification. Results quantifying the tradeoff between privacy protection and image information preservation at different obfuscation levels are presented. Comparative results using the facial expression subset of the FERET database show that the technique achieves a good tradeoff between privacy and awareness, with a 30% recognition rate and a classification accuracy as high as 88% obtained from the common figures of merit using the privacy-awareness map.
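
    A rough sketch of foveation-style obfuscation, assuming the fovea is placed at a chosen fixation point (e.g., between the eyes): the fixation region stays sharp and the image is blended towards a heavily blurred copy as distance from the fovea grows. The radius, blur strength, and blending rule are illustrative choices, not the published filter.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def foveate(image, center, radius=40.0, max_sigma=8.0):
    """Keep the region around `center` (row, col) sharp and blend towards a
    blurred copy with increasing distance from the fovea."""
    sigma = (max_sigma, max_sigma, 0) if image.ndim == 3 else max_sigma
    blurred = gaussian_filter(image.astype(float), sigma=sigma)
    rows, cols = np.indices(image.shape[:2])
    dist = np.hypot(rows - center[0], cols - center[1])
    weight = np.clip((dist - radius) / radius, 0.0, 1.0)  # 0 inside fovea, 1 far away
    if image.ndim == 3:
        weight = weight[..., None]
    return (1.0 - weight) * image + weight * blurred
```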

  19. Kernel TV-Based Quotient Image Employing Gabor Analysis and Its Application to Face Recognition

    Science.gov (United States)

    An, Gaoyun; Wu, Jiying; Ruan, Qiuqi

    In order to overcome the drawbacks of TVQI and to utilize the properties of dimensionality-increasing techniques, a novel Kernel TV-based Quotient Image model employing Gabor analysis is proposed and applied to face recognition with only one sample per subject. To deal with illumination outliers, an enhanced TV-based quotient image (ETVQI) model is first adopted. Then, for images preprocessed by ETVQI, a bank of Gabor filters is built to extract features at specified scales and orientations. Lastly, KPCA is introduced to extract the final high-order, nonlinear features from the extracted Gabor features. According to experiments on the CAS-PEAL face database, our model outperforms Gabor-based KPCA, TVQI, and Gabor-based TVQI in the presence of most outliers (illumination, expression, masking, etc.).
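
    The Gabor-then-KPCA part of this pipeline can be sketched with OpenCV and scikit-learn, assuming the inputs are small, ETVQI-preprocessed face crops. The number of scales and orientations, the kernel size, and the KPCA kernel parameters are illustrative and not those used in the paper.

```python
import cv2
import numpy as np
from sklearn.decomposition import KernelPCA

def gabor_features(image, scales=(4, 8, 16), n_orientations=8):
    """Filter a (preprocessed) face crop with a bank of Gabor kernels and
    return the concatenated magnitude responses as one long vector."""
    responses = []
    for lam in scales:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            kernel = cv2.getGaborKernel((31, 31), lam / 2.0, theta, lam, 0.5)
            resp = cv2.filter2D(image.astype(np.float64), cv2.CV_64F, kernel)
            responses.append(np.abs(resp).ravel())
    return np.concatenate(responses)

def kpca_features(gabor_matrix, n_components=100):
    """KPCA over the Gabor features of the training images (one row per image)
    extracts the final high-order, nonlinear features."""
    kpca = KernelPCA(n_components=n_components, kernel="rbf", gamma=1e-4)
    return kpca, kpca.fit_transform(gabor_matrix)
```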

  20. Extracting a Good Quality Frontal Face Image from a Low-Resolution Video Sequence

    DEFF Research Database (Denmark)

    Nasrollahi, Kamal; Moeslund, Thomas B.

    2011-01-01

    Feeding low-resolution and low-quality images, from inexpensive surveillance cameras, to systems like, e.g., face recognition, produces erroneous and unstable results. Therefore, there is a need for a mechanism to bridge the gap between, on the one hand, low-resolution and low-quality images and, on the other hand, facial analysis systems. The proposed system in this paper deals with exactly this problem. Our approach is to apply a reconstruction-based super-resolution algorithm. Such an algorithm, however, has two main problems: first, it requires relatively similar images with not too much noise...

  1. Extracting a Good Quality Frontal Face Image from a Low-Resolution Video Sequence

    DEFF Research Database (Denmark)

    Nasrollahi, Kamal; Moeslund, Thomas B.

    2011-01-01

    Feeding low-resolution and low-quality images, from inexpensive surveillance cameras, to systems like, e.g., face recognition, produces erroneous and unstable results. Therefore, there is a need for a mechanism to bridge the gap between, on the one hand, low-resolution and low-quality images and, on the other hand, facial analysis systems. The proposed system in this paper deals with exactly this problem. Our approach is to apply a reconstruction-based super-resolution algorithm. Such an algorithm, however, has two main problems: first, it requires relatively similar images with not too much noise...

  2. Face relighting from a single image under arbitrary unknown lighting conditions.

    Science.gov (United States)

    Wang, Yang; Zhang, Lei; Liu, Zicheng; Hua, Gang; Wen, Zhen; Zhang, Zhengyou; Samaras, Dimitris

    2009-11-01

    In this paper, we present a new method to modify the appearance of a face image by manipulating the illumination condition, when the face geometry and albedo information is unknown. This problem is particularly difficult when there is only a single image of the subject available. Recent research demonstrates that the set of images of a convex Lambertian object obtained under a wide variety of lighting conditions can be approximated accurately by a low-dimensional linear subspace using a spherical harmonic representation. Moreover, morphable models are statistical ensembles of facial properties such as shape and texture. In this paper, we integrate spherical harmonics into the morphable model framework by proposing a 3D spherical harmonic basis morphable model (SHBMM). The proposed method can represent a face under arbitrary unknown lighting and pose simply by three low-dimensional vectors, i.e., shape parameters, spherical harmonic basis parameters, and illumination coefficients, which are called the SHBMM parameters. However, when the image was taken under an extreme lighting condition, the approximation error can be large, thus making it difficult to recover albedo information. In order to address this problem, we propose a subregion-based framework that uses a Markov random field to model the statistical distribution and spatial coherence of face texture, which makes our approach not only robust to extreme lighting conditions, but also insensitive to partial occlusions. The performance of our framework is demonstrated through various experimental results, including the improved rates for face recognition under extreme lighting conditions.
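
    For a convex Lambertian surface, the low-dimensional lighting subspace mentioned above is spanned, to a good approximation, by nine spherical-harmonic basis images computed from the surface normals and albedo. The sketch below uses the usual real-spherical-harmonic constants; normalisation conventions vary, and this illustrates the basis rather than the SHBMM fitting procedure itself.

```python
import numpy as np

def sh_basis_images(normals, albedo):
    """First nine spherical-harmonic basis images for a Lambertian surface:
    `normals` is (H, W, 3) with unit surface normals, `albedo` is (H, W).
    An image under distant lighting is then approximately a linear
    combination of these nine basis images."""
    albedo = np.asarray(albedo, dtype=float)
    nx, ny, nz = normals[..., 0], normals[..., 1], normals[..., 2]
    basis = np.stack([
        np.full(albedo.shape, 1.0 / np.sqrt(4 * np.pi)),   # l = 0
        np.sqrt(3.0 / (4 * np.pi)) * ny,                    # l = 1
        np.sqrt(3.0 / (4 * np.pi)) * nz,
        np.sqrt(3.0 / (4 * np.pi)) * nx,
        np.sqrt(15.0 / (4 * np.pi)) * nx * ny,              # l = 2
        np.sqrt(15.0 / (4 * np.pi)) * ny * nz,
        np.sqrt(5.0 / (16 * np.pi)) * (3 * nz ** 2 - 1),
        np.sqrt(15.0 / (4 * np.pi)) * nx * nz,
        np.sqrt(15.0 / (16 * np.pi)) * (nx ** 2 - ny ** 2),
    ], axis=-1)
    return albedo[..., None] * basis   # shape (H, W, 9)
```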

  3. Electrocortical reactivity to emotional images and faces in middle childhood to early adolescence.

    Science.gov (United States)

    Kujawa, Autumn; Klein, Daniel N; Hajcak, Greg

    2012-10-01

    The late positive potential (LPP) is an event-related potential (ERP) component that indexes sustained attention toward motivationally salient information. The LPP has been observed in children and adults; however, little is known about its development from childhood into adolescence. In addition, whereas LPP studies examine responses to images from the International Affective Picture System (IAPS; Lang et al., 2008) or emotional faces, no previous studies have compared responses in youth across stimuli. To examine how emotion interacts with attention across development, the current study used an emotional-interrupt task to measure LPP and behavioral responses in 8- to 13-year-olds using unpleasant, pleasant, and neutral IAPS images, as well as sad, happy, and neutral faces. Compared to older youth, younger children exhibited enhanced LPPs over occipital sites. In addition, sad but not happy faces elicited a larger LPP than neutral faces; behavioral measures did not vary across facial expressions. Both unpleasant and pleasant IAPS images were associated with increased LPPs and behavioral interference compared to neutral images. Results suggest that there may be developmental differences in the scalp distribution of the LPP and that, compared to faces, IAPS images elicit more robust behavioral and electrocortical measures of attention to emotional stimuli.

  4. Classification of Polar-Thermal Eigenfaces using Multilayer Perceptron for Human Face Recognition

    CERN Document Server

    Bhowmik, Mrinal Kanti; Nasipuri, Mita; Basu, Dipak Kumar; Kundu, Mahantapas

    2010-01-01

    This paper presents a novel approach to handle the challenges of face recognition. In this work thermal face images are considered, which minimizes the effect of illumination changes and of occlusion due to moustaches, beards, adornments, etc. The proposed approach registers the training and testing thermal face images in polar coordinates, which handles the complications introduced by scaling and rotation. Polar images are projected into an eigenspace and finally classified using a multi-layer perceptron. In the experiments we have used thermal face images from the Object Tracking and Classification Beyond Visible Spectrum (OTCBVS) benchmark database. Experimental results show that the proposed approach significantly improves verification and identification performance, with a success rate of 97.05%.
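
    A compact sketch of the pipeline described above, assuming already-registered thermal face crops: resample each image onto a polar grid centred on the image centre, project into a PCA eigenspace, and classify with a multi-layer perceptron. The grid size, number of eigenvectors, and network size are illustrative.

```python
import numpy as np
from scipy.ndimage import map_coordinates
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

def to_polar(image, out_shape=(64, 64)):
    """Resample a registered thermal face image onto a polar (radius, angle)
    grid, which lends some tolerance to rotation and scale."""
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    radii = np.linspace(0, min(cy, cx), out_shape[0])
    angles = np.linspace(0, 2 * np.pi, out_shape[1], endpoint=False)
    r, a = np.meshgrid(radii, angles, indexing="ij")
    rows, cols = cy + r * np.sin(a), cx + r * np.cos(a)
    return map_coordinates(image.astype(float), [rows, cols], order=1)

def train_polar_eigenface_mlp(images, labels, n_components=50):
    """Project polar images into an eigenspace and classify with an MLP."""
    X = np.stack([to_polar(img).ravel() for img in images])
    pca = PCA(n_components=n_components).fit(X)
    clf = MLPClassifier(hidden_layer_sizes=(100,), max_iter=500)
    clf.fit(pca.transform(X), labels)
    return pca, clf
```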

  5. A Scale and Pose Invariant Algorithm for Fast Detecting Human Faces in a Complex Background

    Institute of Scientific and Technical Information of China (English)

    XING Xin; SHEN Lansun; JIA Kebin

    2001-01-01

    Human face detection is an interesting and challenging task in computer vision. A scale and pose invariant algorithm is proposed in this paper. The algorithm is able to detect human faces in a complex background in about 400 ms with a detection rate of 92%. The algorithm can be used in a wide range of applications such as human-computer interfaces, video coding, etc.

  6. Humanity faced with climatic change; L'Humanite face au changement climatique

    Energy Technology Data Exchange (ETDEWEB)

    Dautray, R.; Lesourne, J

    2009-07-01

    Humanity is for the first time confronted with a global change of the ecosphere, which makes it enter a long transition era with simultaneous economic, social, and political impacts. Every country and every human activity will be affected. The resulting problems may be solved by changing our way of life, limiting transport, and implementing, on a large scale, existing technologies or technologies under development. The challenge is not to reject technology but to intensify the efforts to develop and adapt it according to the real needs of populations. (J.S.)

  7. Recognition of Expressions on Human Face using AI Techniques

    Directory of Open Access Journals (Sweden)

    Arpita Nagpal

    2011-08-01

    Facial expressions convey non-verbal cues, which play an important role in interpersonal relations. Facial expression recognition technology helps in designing intelligent human-computer interfaces. This paper discusses a three-phase technique for facial expression recognition of Indian faces. In the first phase, faces are tracked using a Haar classifier in live videos of an Indian student community. In the second phase, 38 facial feature points are detected using the Active Appearance Model (AAM) technique. In the last step, a support vector machine (SVM) is used to classify four primary facial expressions. Integrating these broader techniques and obtaining reasonably good performance is a big challenge. The performance of the proposed facial expression recognizer is 82.7%.
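
    The detection and classification stages of such a pipeline can be sketched with OpenCV and scikit-learn. The AAM landmark extraction is only indicated as an upstream step (the 38-point arrays are assumed to come from it), the cascade file is OpenCV's stock frontal-face model rather than the one used in the paper, and the SVM settings are illustrative.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

# Phase 1: track faces with a Haar cascade (OpenCV ships a frontal-face model).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(gray_frame):
    """Return bounding boxes (x, y, w, h) of faces in a grayscale video frame."""
    return face_cascade.detectMultiScale(gray_frame, scaleFactor=1.1, minNeighbors=5)

# Phase 2 (assumed upstream): an AAM provides 38 landmark points per face,
# giving one (38, 2) array per training sample.

# Phase 3: classify expressions from flattened landmark coordinates with an SVM.
def train_expression_svm(landmark_sets, expression_labels):
    X = np.stack([np.asarray(lm).ravel() for lm in landmark_sets])
    clf = SVC(kernel="rbf", C=1.0)
    clf.fit(X, expression_labels)
    return clf
```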

  8. Pleasant and unpleasant odors influence hedonic evaluations of human faces: an event-related potential study.

    Directory of Open Access Journals (Sweden)

    Stephanie Jane Cook

    2015-12-01

    Odors can alter hedonic evaluations of human faces, but the neural mechanisms of such effects are poorly understood. The present study aimed to analyze the neural underpinning of odor-induced changes in evaluations of human faces in an odor-priming paradigm, using event-related potentials (ERPs). Healthy, young participants (N = 20) rated neutral faces presented after a three-second pulse of a pleasant odor (jasmine), an unpleasant odor (methylmercaptan), or a no-odor control (clean air). Neutral faces presented in the pleasant odor condition were rated more pleasant than the same faces presented in the no-odor control condition, which in turn were rated more pleasant than faces in the unpleasant odor condition. Analysis of face-related potentials revealed four clusters of electrodes significantly affected by odor condition at specific time points during long-latency epochs (600−950 ms). In the 620−640 ms interval, two scalp-time clusters showed greater negative potential in the right parietal electrodes in response to faces in the pleasant odor condition, compared to those in the no-odor and unpleasant odor conditions. At 926 ms, face-related potentials showed greater positivity in response to faces in the pleasant and unpleasant odor conditions at the left and right lateral frontal-temporal electrodes, respectively. Our data show that odor-induced shifts in evaluations of faces were associated with amplitude changes in the late (>600 ms) and ultra-late (>900 ms) latency epochs. The observed amplitude changes during the ultra-late epoch are consistent with a left/right hemisphere bias towards pleasant/unpleasant odor effects. Odors alter evaluations of human faces, even when there is a temporal lag between presentation of odors and faces. Our results provide an initial understanding of the neural mechanisms underlying effects of odors on hedonic evaluations.

  9. Pleasant and Unpleasant Odors Influence Hedonic Evaluations of Human Faces: An Event-Related Potential Study

    Science.gov (United States)

    Cook, Stephanie; Fallon, Nicholas; Wright, Hazel; Thomas, Anna; Giesbrecht, Timo; Field, Matt; Stancak, Andrej

    2015-01-01

    Odors can alter hedonic evaluations of human faces, but the neural mechanisms of such effects are poorly understood. The present study aimed to analyze the neural underpinning of odor-induced changes in evaluations of human faces in an odor-priming paradigm, using event-related potentials (ERPs). Healthy, young participants (N = 20) rated neutral faces presented after a 3 s pulse of a pleasant odor (jasmine), unpleasant odor (methylmercaptan), or no-odor control (clean air). Neutral faces presented in the pleasant odor condition were rated more pleasant than the same faces presented in the no-odor control condition, which in turn were rated more pleasant than faces in the unpleasant odor condition. Analysis of face-related potentials revealed four clusters of electrodes significantly affected by odor condition at specific time points during long-latency epochs (600−950 ms). In the 620−640 ms interval, two scalp-time clusters showed greater negative potential in the right parietal electrodes in response to faces in the pleasant odor condition, compared to those in the no-odor and unpleasant odor conditions. At 926 ms, face-related potentials showed greater positivity in response to faces in the pleasant and unpleasant odor conditions at the left and right lateral frontal-temporal electrodes, respectively. Our data show that odor-induced shifts in evaluations of faces were associated with amplitude changes in the late (>600 ms) and ultra-late (>900 ms) latency epochs. The observed amplitude changes during the ultra-late epoch are consistent with a left/right hemisphere bias towards pleasant/unpleasant odor effects. Odors alter evaluations of human faces, even when there is a temporal lag between presentation of odors and faces. Our results provide an initial understanding of the neural mechanisms underlying effects of odors on hedonic evaluations. PMID:26733843

  10. A truly human interface: Interacting face-to-face with someone whose words are determined by a computer program

    Directory of Open Access Journals (Sweden)

    Kevin Corti

    2015-05-01

    We use speech shadowing to create situations wherein people converse in person with a human whose words are determined by a conversational agent computer program. Speech shadowing involves a person (the shadower) repeating vocal stimuli originating from a separate communication source in real time. Humans shadowing for conversational agent sources (e.g., chat bots) become hybrid agents (echoborgs) capable of face-to-face interlocution. We report three studies that investigated people’s experiences interacting with echoborgs and the extent to which echoborgs pass as autonomous humans. First, participants in a Turing Test spoke with a chat bot via either a text interface or an echoborg. Human shadowing did not improve the chat bot’s chance of passing but did increase interrogators’ ratings of how human-like the chat bot seemed. In our second study, participants had to decide whether their interlocutor produced words generated by a chat bot or simply pretended to be one. Compared to those who engaged a text interface, participants who engaged an echoborg were more likely to perceive their interlocutor as pretending to be a chat bot. In our third study, participants were naïve to the fact that their interlocutor produced words generated by a chat bot. Unlike those who engaged a text interface, the vast majority of participants who engaged an echoborg neither sensed nor suspected a robotic interaction. These findings have implications for android science, the Turing Test paradigm, and human-computer interaction. The human body, as the delivery mechanism of communication, fundamentally alters the social psychological dynamics of interactions with machine intelligence.

  11. Self-compassion in the face of shame and body image dissatisfaction: implications for eating disorders.

    Science.gov (United States)

    Ferreira, Cláudia; Pinto-Gouveia, José; Duarte, Cristiana

    2013-04-01

    The current study examines the role of self-compassion in the face of shame and body image dissatisfaction in 102 female eating disorder patients and 123 women from the general population. Self-compassion was negatively associated with external shame, general psychopathology, and eating disorder symptomatology. In women from the general population, increased external shame predicted drive for thinness partially through lower self-compassion; body image dissatisfaction also directly predicted drive for thinness. However, in the patient sample, increased shame and body image dissatisfaction predicted increased drive for thinness through decreased self-compassion. These results highlight the importance of the affiliative emotion dimensions of self-compassion in the face of external shame, body image dissatisfaction, and drive for thinness, emphasising the relevance of cultivating a self-compassionate relationship in eating disorder patients.

  12. DEWA: A Multiaspect Approach for Multiple Face Detection in Complex Scene Digital Image

    Directory of Open Access Journals (Sweden)

    Setiawan Hadi

    2013-09-01

    A new approach for detecting faces in a digital image with unconstrained background has been developed. The approach is composed of three phases: a segmentation phase, a filtering phase, and a localization phase. In the segmentation phase, we utilized both training and non-training methods, which are implemented in a user-selectable color space. In the filtering phase, Minkowski addition-based object removal has been used for image cleaning. In the last phase, an image processing method and a data mining method are employed for grouping and localizing objects, combined with geometric-based image analysis. Several experiments have been conducted using our special face database, which consists of simple objects and complex objects. The experimental results demonstrated that the detection accuracy is around 90% and the detection speed is less than 1 second on average.

  13. DEWA: A Multiaspect Approach for Multiple Face Detection in Complex Scene Digital Image

    Directory of Open Access Journals (Sweden)

    Setiawan Hadi

    2007-05-01

    A new approach for detecting faces in a digital image with unconstrained background has been developed. The approach is composed of three phases: a segmentation phase, a filtering phase, and a localization phase. In the segmentation phase, we utilized both training and non-training methods, which are implemented in a user-selectable color space. In the filtering phase, Minkowski addition-based object removal has been used for image cleaning. In the last phase, an image processing method and a data mining method are employed for grouping and localizing objects, combined with geometric-based image analysis. Several experiments have been conducted using our special face database, which consists of simple objects and complex objects. The experimental results demonstrated that the detection accuracy is around 90% and the detection speed is less than 1 second on average.

  14. Image disparity in cross-spectral face recognition: mitigating camera and atmospheric effects

    Science.gov (United States)

    Cao, Zhicheng; Schmid, Natalia A.; Li, Xin

    2016-05-01

    Matching facial images acquired in different electromagnetic spectral bands remains a challenge. An example of this type of comparison is matching active or passive infrared (IR) images against a gallery of visible face images. When combined with cross-distance, this problem becomes even more challenging due to the deteriorated quality of the IR data. As an example, we consider a scenario where visible light images are acquired at a short standoff distance while IR images are long-range data. To address the difference in image quality due to atmospheric and camera effects, typical degrading factors observed in long-range data, we propose two approaches that allow the image quality of visible and IR face images to be coordinated. The first approach involves Gaussian-based smoothing functions applied to images acquired at a short distance (visible light images in the case we analyze). The second approach involves denoising and enhancement applied to low-quality IR face images. A quality measure tool called the Adaptive Sharpness Measure, an improvement over the well-known Tenengrad method, is utilized as guidance for the quality parity process. For the recognition algorithm, a composite operator combining Gabor filters, Local Binary Patterns (LBP), generalized LBP, and the Weber Local Descriptor (WLD) is used. The composite operator encodes both the magnitude and phase responses of the Gabor filters. The combination of LBP and WLD utilizes both the orientation and intensity information of edges. Different IR bands, short-wave infrared (SWIR) and near-infrared (NIR), and different long standoff distances are considered. The experimental results show that in all cases the proposed technique of image quality parity (both approaches) benefits the final recognition performance.
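
    The Tenengrad measure that the Adaptive Sharpness Measure improves on is simply the mean (optionally thresholded) squared Sobel gradient magnitude of the image; a minimal version is sketched below. The adaptive refinements described in the paper are not reproduced here.

```python
import cv2
import numpy as np

def tenengrad_sharpness(gray_image, threshold=0.0):
    """Classic Tenengrad focus/sharpness measure: mean squared Sobel gradient
    magnitude, ignoring gradients weaker than `threshold`."""
    gx = cv2.Sobel(gray_image, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray_image, cv2.CV_64F, 0, 1, ksize=3)
    mag2 = gx ** 2 + gy ** 2
    mag2[mag2 < threshold ** 2] = 0.0
    return float(mag2.mean())
```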

  15. Two-dimensional maximum local variation based on image euclidean distance for face recognition.

    Science.gov (United States)

    Gao, Quanxue; Gao, Feifei; Zhang, Hailin; Hao, Xiu-Juan; Wang, Xiaogang

    2013-10-01

    Manifold learning concerns the local manifold structure of high-dimensional data, and many related algorithms have been developed to improve image classification performance. None of them, however, considers both the relationships among pixels in images and the geometrical properties of the various images while learning the reduced space. In this paper, we propose a linear approach, called two-dimensional maximum local variation (2DMLV), for face recognition. In 2DMLV, we encode the relationships among pixels using the image Euclidean distance instead of the conventional Euclidean distance when estimating the variation of image values, and then incorporate the local variation, which characterizes the diversity of images and their discriminating information, into the objective function of dimensionality reduction. Extensive experiments demonstrate the effectiveness of our approach.
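
    The image Euclidean distance (IMED) referred to above replaces the plain pixel-wise Euclidean distance with a distance weighted by a Gaussian matrix that couples spatially neighbouring pixels, so small misalignments are penalised less. A direct sketch is given below; it builds the full coupling matrix and is therefore only practical for small crops, and the sigma value is illustrative.

```python
import numpy as np

def imed_distance(img_a, img_b, sigma=1.0):
    """Image Euclidean distance between two equally sized grayscale images:
    sqrt((a - b)^T G (a - b)) with G a Gaussian function of pixel locations.
    Builds an (H*W, H*W) matrix, so use small crops (e.g. 32 x 32)."""
    h, w = img_a.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    sq_dists = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    G = np.exp(-sq_dists / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)
    diff = (img_a.astype(float) - img_b.astype(float)).ravel()
    return float(np.sqrt(diff @ G @ diff))
```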

  16. Robust Face Recognition using Voting by Bit-plane Images based on Sparse Representation

    Directory of Open Access Journals (Sweden)

    Dongmei Wei

    2015-08-01

    Plurality voting is widely employed as a combination strategy in pattern recognition. Sparse representation based classification, a recently proposed technique, codes the query image as a sparse linear combination of all training images and classifies the query sample class by class by exploiting the class representation error. In this paper, an improved face recognition approach using sparse representation and plurality voting based on binary bit-plane images is proposed. After histogram equalization, gray images are decomposed into eight bit-plane images, and sparse representation based classification is applied separately to the five bit-plane images that carry the most discriminative information. Finally, the identity of the query image is decided by a vote over the five identities obtained. Experimental results show that the proposed approach is preferable in both recognition accuracy and recognition speed.
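
    Bit-plane decomposition and the final plurality vote are easy to sketch; which five planes are kept, and the sparse-representation classifier that is run on each plane, are assumed to follow the paper and are not reproduced here.

```python
import numpy as np

def bit_planes(gray_image):
    """Decompose an 8-bit grayscale image into its eight binary bit-plane
    images; planes[7] is the most significant bit, planes[0] the least."""
    img = gray_image.astype(np.uint8)
    return [((img >> b) & 1).astype(np.uint8) for b in range(8)]

def plurality_vote(labels):
    """Return the label predicted most often across the per-plane classifiers."""
    values, counts = np.unique(np.asarray(labels), return_counts=True)
    return values[np.argmax(counts)]
```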

  17. NEURO FUZZY MODEL FOR FACE RECOGNITION WITH CURVELET BASED FEATURE IMAGE

    OpenAIRE

    SHREEJA R; KHUSHALI DEULKAR; SHALINI BHATIA

    2011-01-01

    A facial recognition system is a computer application for automatically identifying or verifying a person from a digital image or a video frame from a video source. One of the ways to do this is by comparing selected facial features from the image and a facial database. It is typically used in security systems and can be compared to other biometric techniques such as fingerprint or iris recognition systems. Every face has approximately 80 nodal points like (Distance between the eyes, Width of...

  18. A truly human interface: interacting face-to-face with someone whose words are determined by a computer program

    Science.gov (United States)

    Corti, Kevin; Gillespie, Alex

    2015-01-01

    We use speech shadowing to create situations wherein people converse in person with a human whose words are determined by a conversational agent computer program. Speech shadowing involves a person (the shadower) repeating vocal stimuli originating from a separate communication source in real-time. Humans shadowing for conversational agent sources (e.g., chat bots) become hybrid agents (“echoborgs”) capable of face-to-face interlocution. We report three studies that investigated people’s experiences interacting with echoborgs and the extent to which echoborgs pass as autonomous humans. First, participants in a Turing Test spoke with a chat bot via either a text interface or an echoborg. Human shadowing did not improve the chat bot’s chance of passing but did increase interrogators’ ratings of how human-like the chat bot seemed. In our second study, participants had to decide whether their interlocutor produced words generated by a chat bot or simply pretended to be one. Compared to those who engaged a text interface, participants who engaged an echoborg were more likely to perceive their interlocutor as pretending to be a chat bot. In our third study, participants were naïve to the fact that their interlocutor produced words generated by a chat bot. Unlike those who engaged a text interface, the vast majority of participants who engaged an echoborg did not sense a robotic interaction. These findings have implications for android science, the Turing Test paradigm, and human–computer interaction. The human body, as the delivery mechanism of communication, fundamentally alters the social psychological dynamics of interactions with machine intelligence. PMID:26042066

  19. Predicting Performance of a Face Recognition System Based on Image Quality

    NARCIS (Netherlands)

    Dutta, A.

    2015-01-01

    In this dissertation, we focus on several aspects of models that aim to predict performance of a face recognition system. Performance prediction models are commonly based on the following two types of performance predictor features: a) image quality features; and b) features derived solely from

  20. Social Inferences from Faces: Ambient Images Generate a Three-Dimensional Model

    Science.gov (United States)

    Sutherland, Clare A. M.; Oldmeadow, Julian A.; Santos, Isabel M.; Towler, John; Burt, D. Michael; Young, Andrew W.

    2013-01-01

    Three experiments are presented that investigate the two-dimensional valence/trustworthiness by dominance model of social inferences from faces (Oosterhof & Todorov, 2008). Experiment 1 used image averaging and morphing techniques to demonstrate that consistent facial cues subserve a range of social inferences, even in a highly variable sample of…

  1. Pose-Encoded Spherical Harmonics for Face Recognition and Synthesis Using a Single Image

    Directory of Open Access Journals (Sweden)

    Rama Chellappa

    2007-12-01

    Face recognition under varying pose is a challenging problem, especially when illumination variations are also present. In this paper, we propose to address one of the most challenging scenarios in face recognition: identifying a subject from a test image that is acquired under a different pose and illumination condition from only one training sample (also known as a gallery image) of this subject in the database. For example, the test image could be semifrontal and illuminated by multiple lighting sources while the corresponding training image is frontal under a single lighting source. Under the assumption of Lambertian reflectance, the spherical harmonics representation has proved to be effective in modeling illumination variations for a fixed pose. In this paper, we extend the spherical harmonics representation to encode pose information. More specifically, we utilize the fact that 2D harmonic basis images at different poses are related by closed-form linear transformations, and give a more convenient transformation matrix to be used directly on the basis images. An immediate application is that we can easily synthesize a different view of a subject under arbitrary lighting conditions by changing the coefficients of the spherical harmonics representation. A more important result is an efficient face recognition method, based on the orthonormality of the linear transformations, for solving the above-mentioned challenging scenario: we directly project a nonfrontal-view test image onto the space of frontal-view harmonic basis images. The impact of some empirical factors due to the projection is embedded in a sparse warping matrix; for most cases, we show that the recognition performance does not deteriorate after warping the test image to the frontal view. Very good recognition results are obtained using this method for both synthetic and challenging real images.

  2. Quantifying nonverbal communicative behavior in face-to-face human dialogues

    Science.gov (United States)

    Skhiri, Mustapha; Cerrato, Loredana

    2002-11-01

    This study is based on the assumption that understanding how humans use nonverbal behavior in dialogues can be very useful in the design of more natural-looking animated talking heads. The goal of the study is twofold: (1) to explore how people use specific facial expressions and head movements to serve important dialogue functions, and (2) to show evidence that it is possible to measure and quantify the extent of these movements with the Qualisys MacReflex motion tracking system. Naturally elicited dialogues between humans have been analyzed, with attention focused on those nonverbal behaviors that serve the very relevant functions of regulating the conversational flux (i.e., turn taking) and producing information about the state of communication (i.e., feedback). The results show that eyebrow raising, head nods, and head shakes are typical signals involved during the exchange of speaking turns, as well as in the production and elicitation of feedback. These movements can be easily measured and quantified, and this measure can be implemented in animated talking heads.

  3. Closed-loop dialog model of face-to-face communication with a photo-real virtual human

    Science.gov (United States)

    Kiss, Bernadette; Benedek, Balázs; Szijárto, Gábor; Takács, Barnabás

    2004-01-01

    We describe an advanced Human Computer Interaction (HCI) model that employs photo-realistic virtual humans to provide digital media users with information, learning services and entertainment in a highly personalized and adaptive manner. The system can be used as a computer interface or as a tool to deliver content to end-users. We model the interaction process between the user and the system as part of a closed loop dialog taking place between the participants. This dialog, exploits the most important characteristics of a face-to-face communication process, including the use of non-verbal gestures and meta communication signals to control the flow of information. Our solution is based on a Virtual Human Interface (VHI) technology that was specifically designed to be able to create emotional engagement between the virtual agent and the user, thus increasing the efficiency of learning and/or absorbing any information broadcasted through this device. The paper reviews the basic building blocks and technologies needed to create such a system and discusses its advantages over other existing methods.

  4. Neanderthal paintings? Production of prototypical human (Homo sapiens) faces shows systematic distortions.

    Science.gov (United States)

    Carbon, Claus-Christian; Wirth, Benedikt Emanuel

    2014-01-01

    People's sketches of human faces seem to be systematically distorted: the eye position is always higher than in reality. This bias was experimentally analyzed by a series of experiments varying drawing conditions. Participants either drew prototypical faces from memory (studies 1 and 2: free reconstruction; study 3: cued reconstruction) or directly copied average faces (study 4). Participants consistently showed this positioning bias, which is even in accord with facial depictions published in influential research articles by famous face researchers (study 5). We discuss plausible explanations for this reliable and stable bias, which is coincidentally similar to the morphology of Neanderthals.

  5. Face Recognition from Still Images to Video Sequences: A Local-Feature-Based Framework

    Directory of Open Access Journals (Sweden)

    Chen Shaokang

    2011-01-01

    Although automatic face recognition has shown success for high-quality images under controlled conditions, it is hard to attain similar levels of performance for video-based recognition. We describe in this paper recent advances in a project being undertaken to trial and develop advanced surveillance systems for public safety. We propose a local facial feature based framework for both still-image and video-based face recognition. The evaluation is performed on a still-image dataset, LFW, and a video-sequence dataset, MOBIO, to compare four methods of operating on features: feature averaging (Avg-Feature), the Mutual Subspace Method (MSM), Manifold-to-Manifold Distance (MMD), and the Affine Hull Method (AHM), as well as four methods of operating on distances, on three different features. The experimental results show that the Multi-Region Histogram (MRH) feature is more discriminative for face recognition than Local Binary Patterns (LBP) and raw pixel intensity. Under the limitation of a small number of images available per person, feature averaging is more reliable than MSM, MMD, and AHM and is much faster. Thus, our proposed framework of averaging MRH features is more suitable for CCTV surveillance systems with constraints on the number of images and on processing speed.
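
    The Avg-Feature operation, averaging one descriptor per frame into a single per-person feature before matching, is sketched below. The regional histogram uses uniform LBP purely as a stand-in descriptor; the MRH feature evaluated in the paper is a different (probabilistic) regional histogram, and the grid and parameters here are illustrative.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def region_histogram_feature(gray_face, grid=(4, 4), n_points=8, radius=1):
    """Concatenate per-region uniform-LBP histograms over a grid of face blocks."""
    lbp = local_binary_pattern(gray_face, n_points, radius, method="uniform")
    n_bins = n_points + 2
    h, w = lbp.shape
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = lbp[i * h // grid[0]:(i + 1) * h // grid[0],
                        j * w // grid[1]:(j + 1) * w // grid[1]]
            hist, _ = np.histogram(block, bins=n_bins, range=(0, n_bins), density=True)
            feats.append(hist)
    return np.concatenate(feats)

def average_feature(frames):
    """Avg-Feature: average the per-frame descriptors of one person's video."""
    return np.mean([region_histogram_feature(f) for f in frames], axis=0)
```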

  6. Overlay of conventional angiographic and en-face OCT images enhances their interpretation

    Directory of Open Access Journals (Sweden)

    Pool Chris W

    2005-06-01

    Background: Combining characteristic morphological and functional information in one image increases pathophysiologic understanding as well as diagnostic accuracy in most clinical settings. En-face optical coherence tomography (OCT) provides a high-resolution, transversal OCT image of the macular area combined with a confocal image of the same area (OCT C-scans). Creating an overlay of a conventional angiographic image onto an OCT image, using the confocal part to facilitate transformation, combines structural and functional information of the retinal area of interest. This paper describes the construction of such overlay images and their aid in improving the interpretation of OCT C-scans. Methods: In various patients, en-face OCT C-scans (made with a prototype OCT-Ophthalmoscope (OTI, Canada) in use at the Department of Ophthalmology, Academic Medical Centre, Amsterdam, The Netherlands) and conventional fluorescein angiography (FA) were performed. ImagePro, with a custom-made plug-in, was used to make an overlay image. The confocal part of the OCT C-scan was used to spatially transform the FA image onto the OCT C-scan, using the vascular arcades as a reference. To facilitate visualization, the transformed angiographic image and the OCT C-scan were combined in an RGB image. Results: The confocal part of the OCT C-scan could easily be fused with angiographic images. The overlay showed a direct correspondence between retinal thickening and FA leakage in Birdshot retinochoroiditis, localized the subretinal neovascular membrane and correlated anatomic and vascular leakage features in myopia, and showed the extent of retinal and pigment epithelial detachment in retinal angiomatous proliferation as FA leakage was subject to blocked fluorescence. The overlay mode provided additional insight not readily available in either mode alone. Conclusion: Combining conventional angiographic images and en-face OCT C-scans assists in the interpretation of both

  7. What affects facing direction in human facial profile drawing? A meta-analytic inquiry.

    Science.gov (United States)

    Tosun, Sümeyra; Vaid, Jyotsna

    2014-01-01

    Two meta-analyses were conducted to examine two potential sources of spatial orientation biases in human profile drawings by brain-intact individuals. The first examined profile facing direction as a function of the hand used to draw. The second examined profile facing direction in relation to directional scanning biases related to reading/writing habits. Results of the first meta-analysis, based on 27 study samples with 4171 participants, showed that leftward facing of profiles (from the viewer's perspective) was significantly associated with using the right hand to draw. The reading/writing direction meta-analysis, based on 10 study samples with 1552 participants, suggested a modest relationship between leftward profile facing and primary use of a left-to-right reading/writing direction. These findings suggest that biomechanical and cultural factors jointly influence hand movement preferences and, in turn, the direction of facing of human profile drawings.

  8. Robust selectivity for faces in the human amygdala in the absence of expressions.

    Science.gov (United States)

    Mende-Siedlecki, Peter; Verosky, Sara C; Turk-Browne, Nicholas B; Todorov, Alexander

    2013-12-01

    There is a well-established posterior network of cortical regions that plays a central role in face processing and that has been investigated extensively. In contrast, although responsive to faces, the amygdala is not considered a core face-selective region, and its face selectivity has never been a topic of systematic research in human neuroimaging studies. Here, we conducted a large-scale group analysis of fMRI data from 215 participants. We replicated the posterior network observed in prior studies but found equally robust and reliable responses to faces in the amygdala. These responses were detectable in most individual participants, but they were also highly sensitive to the initial statistical threshold and habituated more rapidly than the responses in posterior face-selective regions. A multivariate analysis showed that the pattern of responses to faces across voxels in the amygdala had high reliability over time. Finally, functional connectivity analyses showed stronger coupling between the amygdala and posterior face-selective regions during the perception of faces than during the perception of control visual categories. These findings suggest that the amygdala should be considered a core face-selective region.

  9. A Two-Stage Framework for 3D Face Reconstruction from RGBD Images.

    Science.gov (United States)

    Wang, Kangkan; Wang, Xianwang; Pan, Zhigeng; Liu, Kai

    2014-08-01

    This paper proposes a new approach for 3D face reconstruction with RGBD images from an inexpensive commodity sensor. The challenges we face are: 1) substantial random noise and corruption are present in low-resolution depth maps; and 2) there is a high degree of variability in pose and facial expression. We develop a novel two-stage algorithm that effectively maps low-quality depth maps to realistic face models. Each stage is targeted toward a certain type of noise. The first stage extracts sparse errors from depth patches through data-driven local sparse coding, while the second stage smooths noise on the boundaries between patches and reconstructs the global shape by combining local shapes using our template-based surface refinement. Our approach does not require any markers or user interaction. We perform quantitative and qualitative evaluations on both synthetic and real test sets. Experimental results show that the proposed approach is able to produce high-resolution 3D face models with high accuracy, even if the inputs are of low quality and have large variations in viewpoint and facial expression.

  10. Virtual images inspired consolidate collaborative representation-based classification method for face recognition

    Science.gov (United States)

    Liu, Shigang; Zhang, Xinxin; Peng, Yali; Cao, Han

    2016-07-01

    The collaborative representation-based classification method performs well in the classification of high-dimensional images such as faces. It utilizes training samples from all classes to represent a test sample and assigns a class label to the test sample using the representation residuals. However, this method still suffers from the problem that a limited number of training samples reduces the classification accuracy when applied to image classification. In this paper, we propose a modified collaborative representation-based classification method (MCRC), which exploits novel virtual images and can obtain high classification accuracy. The procedure to produce the virtual images is very simple, but using them can bring a surprising performance improvement. The virtual images can sufficiently represent the features of the original face images in some cases. Extensive experimental results demonstrate that the proposed method can effectively improve the classification accuracy. This is mainly attributed to the integration of collaborative representation and the proposed feature-information dominated virtual images.
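
    The baseline collaborative representation classifier that MCRC builds on can be sketched as ridge-regression coding over all training columns followed by per-class residual scoring (CRC-RLS style). The construction of the virtual training images is assumed to happen before the dictionary is formed and is not shown; the regularisation value and residual normalisation follow common practice rather than the paper's exact settings.

```python
import numpy as np

def crc_classify(X_train, y_train, x_test, lam=0.01):
    """Collaborative representation-based classification: code the test sample
    over *all* training samples with ridge regression, then assign the class
    whose part of the code reconstructs it best.
    X_train: (d, n) matrix, one L2-normalised training image per column.
    y_train: length-n array of class labels.  x_test: length-d vector."""
    n = X_train.shape[1]
    alpha = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n),
                            X_train.T @ x_test)            # collaborative code
    best_class, best_score = None, np.inf
    for c in np.unique(y_train):
        idx = (y_train == c)
        residual = np.linalg.norm(x_test - X_train[:, idx] @ alpha[idx])
        score = residual / (np.linalg.norm(alpha[idx]) + 1e-12)
        if score < best_score:
            best_class, best_score = c, score
    return best_class
```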

  11. Intraocular lens alignment from an en face optical coherence tomography image Purkinje-like method

    Science.gov (United States)

    Sun, Mengchan; de Castro, Alberto; Ortiz, Sergio; Perez-Merino, Pablo; Birkenfeld, Judith; Marcos, Susana

    2014-06-01

    Measurement of the alignment of intraocular lenses (IOLs) implanted in cataract surgery is important for understanding their optical performance. We present a method to estimate the tilt and decentration of IOLs based on optical coherence tomography (OCT) images. En-face OCT images show Purkinje-like images that correspond to the specular reflections from the corneal and IOL surfaces. Unlike in standard Purkinje imaging, the tomographic nature of OCT allows unequivocal association of each reflection with the corresponding surface. The locations of the Purkinje-like images are linear combinations of IOL tilt, IOL decentration, and eye rotation. The weighting coefficients depend on the individual anterior segment geometry, obtained from the same OCT datasets. The methodology was demonstrated on an artificial model eye with set amounts of lens tilt and decentration and on five pseudophakic eyes. Measured tilt and decentration in the artificial eye differed by 3.7% and 0.9%, respectively, from nominal values. In patients, average IOL tilt and decentration from Purkinje were 3.30±4.68 deg and 0.16±0.16 mm, respectively, and differed on average by 0.5 deg and 0.09 mm, respectively, from direct measurements on distortion-corrected OCT images. Purkinje-based methodology from anterior segment en-face OCT imaging therefore provided reliable measurements of IOL tilt and decentration.

  12. Serial block face scanning electron microscopy--the future of cell ultrastructure imaging.

    Science.gov (United States)

    Hughes, Louise; Hawes, Chris; Monteith, Sandy; Vaughan, Sue

    2014-03-01

    One of the major drawbacks of transmission electron microscopy has been the difficulty of producing three-dimensional views of cells and tissues. Currently, there is no single 3D microscopy technique that answers all questions, and serial block face scanning electron microscopy (SEM) fills the gap between 3D imaging using high-end fluorescence microscopy and the high resolution offered by electron tomography. In this review, we discuss the potential of the serial block face SEM technique for studying the three-dimensional organisation of animal, plant and microbial cells.

  13. Face Detection Based on 3×3 Block Gradient Image Partition and Face Geometric Model

    Institute of Scientific and Technical Information of China (English)

    盛光磊; 张腾; 裴铮

    2013-01-01

    This paper presents a face detection algorithm that uses a gradient image partitioned into 3×3 blocks together with a geometric face model. The 3×3 block partition is used for a preliminary check of whether a face is present in a candidate region, after which the geometric face model is used to locate the face. Experimental results show that the proposed algorithm gives good detection results and is not sensitive to illumination.
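
    A minimal sketch of the 3×3 block gradient cue: compute the gradient-magnitude image of a candidate region, partition it into a 3×3 grid, and use the per-block gradient energy as a coarse face/non-face indicator before the geometric face model is applied. The decision rule applied to these block values in the paper is not specified here.

```python
import numpy as np

def block_gradient_energy(gray_region):
    """Mean gradient magnitude of each block in a 3x3 partition of the region."""
    img = gray_region.astype(float)
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    h, w = mag.shape
    energies = np.zeros((3, 3))
    for i in range(3):
        for j in range(3):
            block = mag[i * h // 3:(i + 1) * h // 3, j * w // 3:(j + 1) * w // 3]
            energies[i, j] = block.mean()
    return energies
```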

  14. Thermal signature analysis of human face during jogging activity using infrared thermography technique

    Science.gov (United States)

    Budiarti, Putria W.; Kusumawardhani, Apriani; Setijono, Heru

    2016-11-01

    Thermal imaging has been widely used for many applications. A thermal camera measures an object's temperature via the infrared radiation emitted by any object above absolute zero, and thermal images are false-color maps that represent temperature. The human body is one of the objects that emits infrared radiation, and the radiation it emits varies with the activity being performed. Jogging is among the most common physical activities, so this experiment investigates the thermal signature profile of jogging, especially on the face. The results show a significant increase of 7.5% in the periorbital area, near the eyes and forehead. Graphical temperature distributions for all regions (eyes, nose, cheeks, and chin) show that at 28.5-30.2°C the pixel area tends to be constant, since this is the ambient temperature. At 30.2-34.7°C the pixel area tends to increase, while at 34.7-37.1°C it tends to decrease, because after jogging pixels at 34.7-37.1°C shift into the 30.2-34.7°C range, so that band's area increases. The trend line over the 10-minute jogging period also shows increasing temperature. Results also vary between individuals due to physiological differences, such as sweat production during physical activity.

  15. Quantitatively Plotting the Human Face for Multivariate Data Visualisation Illustrated by Health Assessments Using Laboratory Parameters

    Directory of Open Access Journals (Sweden)

    Wang Hongwei

    2013-01-01

    Objective. The purpose of this study was to describe a new data visualisation system by plotting the human face to observe the comprehensive effects of multivariate data. Methods. The Graphics Device Interface (GDI+) in the Visual Studio.NET development platform was used to write a program that enables facial image parameters to be recorded, such as cropping and rotation, and can generate a new facial image according to Z values from sets of normal data (Z > 3 was still counted as 3). The measured clinical laboratory parameters related to health status were obtained from senile people, glaucoma patients, and fatty liver patients to illustrate the facial data visualisation system. Results. When the eyes, nose, and mouth were rotated around their own axes at the same angle, the deformation effects were similar. The deformation effects for any abnormality of the eyes, nose, or mouth should be slightly higher than those for simultaneous abnormalities. The facial changes in the populations with different health statuses were significant compared with a control population. Conclusions. The comprehensive effects of multivariate data may not equal the sum of the individual variables. The 3Z facial data visualisation system can effectively distinguish people with poor health status from healthy people.

  16. Quantitatively plotting the human face for multivariate data visualisation illustrated by health assessments using laboratory parameters.

    Science.gov (United States)

    Hongwei, Wang; Hui, Liu

    2013-01-01

    The purpose of this study was to describe a new data visualisation system by plotting the human face to observe the comprehensive effects of multivariate data. The Graphics Device Interface (GDI+) in the Visual Studio.NET development platform was used to write a program that enables facial image parameters to be recorded, such as cropping and rotation, and can generate a new facial image according to Z values from sets of normal data (Z > 3 was still counted as 3). The measured clinical laboratory parameters related to health status were obtained from senile people, glaucoma patients, and fatty liver patients to illustrate the facial data visualisation system. When the eyes, nose, and mouth were rotated around their own axes at the same angle, the deformation effects were similar. The deformation effects for any abnormality of the eyes, nose, or mouth should be slightly higher than those for simultaneous abnormalities. The facial changes in the populations with different health statuses were significant compared with a control population. The comprehensive effects of multivariate data may not equal the sum of the individual variables. The 3Z facial data visualisation system can effectively distinguish people with poor health status from healthy people.

  17. Facing Freeze: Social Threat Induces Bodily Freeze in Humans

    NARCIS (Netherlands)

    Roelofs, K.; Hagenaars, M.A.; Stins, J.F.

    2010-01-01

    Freezing is a common defensive response in animals threatened by predators. It is characterized by reduced body motion and decreased heart rate (bradycardia). However, despite the relevance of animal defense models in human stress research, studies have not shown whether social threat cues elicit si

  18. Facing History and Ourselves: Holocaust and Human Behavior.

    Science.gov (United States)

    Strom, Margot Stern; Parsons, William S.

    This unit for junior and senior high school students presents techniques and materials for studying about the holocaust of World War II. Emphasis in the guide is on human behavior and the role of the individual within society. Among the guide's 18 objectives are for students to examine society's influence on individual behavior, place Hitler's…

  19. Diverting attention suppresses human amygdala responses to faces

    Directory of Open Access Journals (Sweden)

    Carmen Morawetz

    2010-12-01

    Recent neuroimaging studies disagree as to whether the processing of emotion-laden visual stimuli is dependent upon the availability of attentional resources or entirely capacity-free. Two main factors have been proposed to be responsible for the discrepancies: the differences in the perceptual attentional demands of the tasks used to divert attentional resources from emotional stimuli and the spatial location of the affective stimuli in the visual field. To date, no neuroimaging report addressed these two issues in the same set of subjects. Therefore, the aim of the study was to investigate the effects of high and low attentional load as well as different stimulus locations on face processing in the amygdala using fMRI to provide further evidence for one of the two opposing theories. We were able for the first time to directly test the interaction of attentional load and spatial location. The results revealed a strong attenuation of amygdala activity when the attentional load was high. The eccentricity of the emotional stimuli did not affect responses in the amygdala and no interaction effect between attentional load and spatial location was found. We conclude that the processing of emotional stimuli in the amygdala is strongly dependent on the availability of attentional resources without a preferred processing of stimuli presented in the periphery and provide firm evidence for the concept of the attentional load theory of emotional processing in the amygdala.

  20. Perception and Processing of Faces in the Human Brain Is Tuned to Typical Feature Locations

    Science.gov (United States)

    Schwarzkopf, D. Samuel; Alvarez, Ivan; Lawson, Rebecca P.; Henriksson, Linda; Kriegeskorte, Nikolaus; Rees, Geraint

    2016-01-01

    Faces are salient social stimuli whose features attract a stereotypical pattern of fixations. The implications of this gaze behavior for perception and brain activity are largely unknown. Here, we characterize and quantify a retinotopic bias implied by typical gaze behavior toward faces, which leads to eyes and mouth appearing most often in the upper and lower visual field, respectively. We found that the adult human visual system is tuned to these contingencies. In two recognition experiments, recognition performance for isolated face parts was better when they were presented at typical, rather than reversed, visual field locations. The recognition cost of reversed locations was equal to ∼60% of that for whole face inversion in the same sample. Similarly, an fMRI experiment showed that patterns of activity evoked by eye and mouth stimuli in the right inferior occipital gyrus could be separated with significantly higher accuracy when these features were presented at typical, rather than reversed, visual field locations. Our findings demonstrate that human face perception is determined not only by the local position of features within a face context, but by whether features appear at the typical retinotopic location given normal gaze behavior. Such location sensitivity may reflect fine-tuning of category-specific visual processing to retinal input statistics. Our findings further suggest that retinotopic heterogeneity might play a role for face inversion effects and for the understanding of conditions affecting gaze behavior toward faces, such as autism spectrum disorders and congenital prosopagnosia. SIGNIFICANCE STATEMENT Faces attract our attention and trigger stereotypical patterns of visual fixations, concentrating on inner features, like eyes and mouth. Here we show that the visual system represents face features better when they are shown at retinal positions where they typically fall during natural vision. When facial features were shown at typical (rather

  1. NEURO FUZZY MODEL FOR FACE RECOGNITION WITH CURVELET BASED FEATURE IMAGE

    Directory of Open Access Journals (Sweden)

    SHREEJA R

    2011-06-01

    A facial recognition system is a computer application for automatically identifying or verifying a person from a digital image or a video frame from a video source. One of the ways to do this is by comparing selected facial features from the image with a facial database. It is typically used in security systems and can be compared to other biometric techniques such as fingerprint or iris recognition systems. Every face has approximately 80 nodal points (distance between the eyes, width of the nose, etc.). The basic face recognition system captures the sample, extracts features, compares templates, and performs matching. In this paper two methods of face recognition are compared: neural networks and a neuro-fuzzy method. For this, the curvelet transform is used for feature extraction. The feature vector is formed by extracting statistical quantities of the curvelet coefficients. From the statistical results it is concluded that the neuro-fuzzy method is the better technique for face recognition compared to the neural network.

  2. Testing the connections within face processing circuitry in Capgras delusion with diffusion imaging tractography.

    Science.gov (United States)

    Bobes, Maria A; Góngora, Daylin; Valdes, Annette; Santos, Yusniel; Acosta, Yanely; Fernandez Garcia, Yuriem; Lage, Agustin; Valdés-Sosa, Mitchell

    2016-01-01

    Although Capgras delusion (CD) patients are capable of recognizing familiar faces, they present a delusional belief that some relatives have been replaced by impostors. CD has been explained as a selective disruption of a pathway processing affective values of familiar faces. To test the integrity of connections within face processing circuitry, diffusion tensor imaging was performed in a CD patient and 10 age-matched controls. Voxel-based morphometry indicated gray matter damage in right frontal areas. Tractography was used to examine two important tracts of the face processing circuitry: the inferior fronto-occipital fasciculus (IFOF) and the inferior longitudinal fasciculus (ILF). The superior longitudinal fasciculus (SLF) and commissural tracts were also assessed. The CD patient did not differ from controls in the commissural fibers or the SLF. The right and left ILF and the right IFOF were also equivalent to those of controls. However, the left IFOF was significantly reduced with respect to controls and also showed a significant dissociation with the ILF, which represents a selective impairment in the fiber tract connecting occipital and frontal areas. This suggests a possible involvement of the IFOF in affective processing of faces in typical observers and in covert recognition in some cases of prosopagnosia.

  3. Testing the connections within face processing circuitry in Capgras delusion with diffusion imaging tractography

    Directory of Open Access Journals (Sweden)

    Maria A. Bobes

    2016-01-01

    Full Text Available Although Capgras delusion (CD) patients are capable of recognizing familiar faces, they present a delusional belief that some relatives have been replaced by impostors. CD has been explained as a selective disruption of a pathway processing affective values of familiar faces. To test the integrity of connections within face processing circuitry, diffusion tensor imaging was performed in a CD patient and 10 age-matched controls. Voxel-based morphometry indicated gray matter damage in right frontal areas. Tractography was used to examine two important tracts of the face processing circuitry: the inferior fronto-occipital fasciculus (IFOF) and the inferior longitudinal fasciculus (ILF). The superior longitudinal fasciculus (SLF) and commissural tracts were also assessed. The CD patient did not differ from controls in the commissural fibers or the SLF. The right and left ILF and the right IFOF were also equivalent to those of controls. However, the left IFOF was significantly reduced with respect to controls and also showed a significant dissociation with the ILF, which represents a selective impairment in the fiber tract connecting occipital and frontal areas. This suggests a possible involvement of the IFOF in affective processing of faces in typical observers and in covert recognition in some cases of prosopagnosia.

  4. Testing the connections within face processing circuitry in Capgras delusion with diffusion imaging tractography

    Science.gov (United States)

    Bobes, Maria A.; Góngora, Daylin; Valdes, Annette; Santos, Yusniel; Acosta, Yanely; Fernandez Garcia, Yuriem; Lage, Agustin; Valdés-Sosa, Mitchell

    2016-01-01

    Although Capgras delusion (CD) patients are capable of recognizing familiar faces, they present a delusional belief that some relatives have been replaced by impostors. CD has been explained as a selective disruption of a pathway processing affective values of familiar faces. To test the integrity of connections within face processing circuitry, diffusion tensor imaging was performed in a CD patient and 10 age-matched controls. Voxel-based morphometry indicated gray matter damage in right frontal areas. Tractography was used to examine two important tracts of the face processing circuitry: the inferior fronto-occipital fasciculus (IFOF) and the inferior longitudinal fasciculus (ILF). The superior longitudinal fasciculus (SLF) and commissural tracts were also assessed. The CD patient did not differ from controls in the commissural fibers or the SLF. The right and left ILF and the right IFOF were also equivalent to those of controls. However, the left IFOF was significantly reduced with respect to controls and also showed a significant dissociation with the ILF, which represents a selective impairment in the fiber tract connecting occipital and frontal areas. This suggests a possible involvement of the IFOF in affective processing of faces in typical observers and in covert recognition in some cases of prosopagnosia. PMID:26909325

  5. Assessing knowledge of human papillomavirus and collecting data on sexual behavior: computer assisted telephone versus face to face interviews

    Directory of Open Access Journals (Sweden)

    Garland Suzanne

    2009-11-01

    Full Text Available Abstract Background Education campaigns seeking to raise awareness of human papillomavirus (HPV) and promoting HPV vaccination depend on accurate surveys of public awareness and knowledge of HPV and related sexual behavior. However, the most recent population-based studies have relied largely on computer-assisted telephone interviews (CATI) as opposed to face to face interviews (FTFI). It is currently unknown how these survey modes differ, and in particular whether they attract similar demographics and therefore lead to similar overall findings. Methods A comprehensive survey of HPV awareness and knowledge, including sexual behavior, was conducted among 3,045 Singaporean men and women, half of whom participated via CATI, the other half via FTFI. Results Overall levels of awareness and knowledge of HPV differed between CATI and FTFI, attributable in part to demographic variations between these survey modes. Although disclosure of sexual behavior was greater when using CATI, few differences between survey modes were found in the actual information disclosed. Conclusion Although CATI is a cheaper, faster alternative to FTFI and people appear more willing to provide information about sexual behavior when surveyed using CATI, thorough assessments of HPV awareness and knowledge depend on multiple survey modes.

  6. Neglect in Human Communication: Quantifying the Cost of Cell-Phone Interruptions in Face to Face Dialogs

    Science.gov (United States)

    Lopez-Rosenfeld, Matías; Calero, Cecilia I.; Fernandez Slezak, Diego; Garbulsky, Gerry; Bergman, Mariano; Trevisan, Marcos; Sigman, Mariano

    2015-01-01

    There is a prevailing belief that interruptions using cellular phones during face to face interactions may severely affect how people relate to and perceive each other. We set out to determine this cost quantitatively through an experiment performed in dyads, in a large audience at a TEDx event. One of the two participants (the speaker) narrates a story vividly. The listener is asked to deliberately ignore the speaker during part of the story (for instance, attending to their cell-phone). The speaker is not aware of this treatment. We show that the total amount of attention is the major factor driving subjective beliefs about the story and the conversational partner. The effects are mostly independent of how attention is distributed in time. All social parameters of human communication are affected by attention time with a sole exception: the perceived emotion of the story. Interruptions during day-to-day communication between peers are extremely frequent. Our data should provide a note of caution, by indicating that they have a major effect on the perception people have about what they say (whether it is interesting or not . . .) and about the virtues of the people around them. PMID:26039326

  7. Neglect in human communication: quantifying the cost of cell-phone interruptions in face to face dialogs.

    Science.gov (United States)

    Lopez-Rosenfeld, Matías; Calero, Cecilia I; Fernandez Slezak, Diego; Garbulsky, Gerry; Bergman, Mariano; Trevisan, Marcos; Sigman, Mariano

    2015-01-01

    There is a prevailing belief that interruptions using cellular phones during face to face interactions may severely affect how people relate to and perceive each other. We set out to determine this cost quantitatively through an experiment performed in dyads, in a large audience at a TEDx event. One of the two participants (the speaker) narrates a story vividly. The listener is asked to deliberately ignore the speaker during part of the story (for instance, attending to their cell-phone). The speaker is not aware of this treatment. We show that the total amount of attention is the major factor driving subjective beliefs about the story and the conversational partner. The effects are mostly independent of how attention is distributed in time. All social parameters of human communication are affected by attention time with a sole exception: the perceived emotion of the story. Interruptions during day-to-day communication between peers are extremely frequent. Our data should provide a note of caution, by indicating that they have a major effect on the perception people have about what they say (whether it is interesting or not . . .) and about the virtues of the people around them.

  8. Neglect in human communication: quantifying the cost of cell-phone interruptions in face to face dialogs.

    Directory of Open Access Journals (Sweden)

    Matías Lopez-Rosenfeld

    Full Text Available There is a prevailing belief that interruptions using cellular phones during face to face interactions may severely affect how people relate to and perceive each other. We set out to determine this cost quantitatively through an experiment performed in dyads, in a large audience at a TEDx event. One of the two participants (the speaker) narrates a story vividly. The listener is asked to deliberately ignore the speaker during part of the story (for instance, attending to their cell-phone). The speaker is not aware of this treatment. We show that the total amount of attention is the major factor driving subjective beliefs about the story and the conversational partner. The effects are mostly independent of how attention is distributed in time. All social parameters of human communication are affected by attention time with a sole exception: the perceived emotion of the story. Interruptions during day-to-day communication between peers are extremely frequent. Our data should provide a note of caution, by indicating that they have a major effect on the perception people have about what they say (whether it is interesting or not . . .) and about the virtues of the people around them.

  9. Disability approach in face of expansion of human rights

    Directory of Open Access Journals (Sweden)

    Joyceane Bezerra de Menezes

    2016-12-01

    Full Text Available This article analyzes the social model of disability adopted by the Convention on the Rights of Persons with Disabilities. Unlike the medical model, disability is understood as the interaction between a person's physical, mental and/or intellectual limitations and social barriers. The paper follows a qualitative analysis based on bibliographical and documentary research, which shows the paradigm shift in international human rights documents toward the inclusion of people with disabilities and the mitigation of the social barriers to their participation in community, social and political life.

  10. The Many Faces of Human Leukocyte Antigen-G

    DEFF Research Database (Denmark)

    Dahl, Mette; Djurisic, Snezana; Hviid, Thomas Vauvert F

    2014-01-01

    is the human leukocyte antigen (HLA)-G, a nonclassical HLA protein displaying limited polymorphism, restricted tissue distribution, and a unique alternative splice pattern. HLA-G is primarily expressed in placenta and plays multifaceted roles during pregnancy, both as a soluble and a membrane-bound molecule...... pregnancy and pregnancy complications, such as preeclampsia, recurrent spontaneous abortions, and subfertility or infertility. This review aims to clarify the multifunctional role of HLA-G in pregnancy-related disorders by focusing on genetic variation, differences in mRNA stability between HLA-G alleles...

  11. Heritability maps of human face morphology through large-scale automated three-dimensional phenotyping

    Science.gov (United States)

    Tsagkrasoulis, Dimosthenis; Hysi, Pirro; Spector, Tim; Montana, Giovanni

    2017-04-01

    The human face is a complex trait under strong genetic control, as evidenced by the striking visual similarity between twins. Nevertheless, heritability estimates of facial traits have often been surprisingly low or difficult to replicate. Furthermore, the construction of facial phenotypes that correspond to naturally perceived facial features remains largely a mystery. We present here a large-scale heritability study of face geometry that aims to address these issues. High-resolution, three-dimensional facial models have been acquired on a cohort of 952 twins recruited from the TwinsUK registry, and processed through a novel landmarking workflow, GESSA (Geodesic Ensemble Surface Sampling Algorithm). The algorithm places thousands of landmarks throughout the facial surface and automatically establishes point-wise correspondence across faces. These landmarks enabled us to intuitively characterize facial geometry at a fine level of detail through curvature measurements, yielding accurate heritability maps of the human face (www.heritabilitymaps.info).

  12. Images of war: using satellite images for human rights monitoring in Turkish Kurdistan.

    Science.gov (United States)

    de Vos, Hugo; Jongerden, Joost; van Etten, Jacob

    2008-09-01

    In areas of war and armed conflict it is difficult to get trustworthy and coherent information. Civil society and human rights groups often face problems of dealing with fragmented witness reports, disinformation of war propaganda, and difficult direct access to these areas. Turkish Kurdistan was used as a case study of armed conflict to evaluate the potential use of satellite images for verification of witness reports collected by human rights groups. The Turkish army was reported to be burning forests, fields and villages as a strategy in the conflict against guerrilla uprising. This paper concludes that satellite images are useful to validate witness reports of forest fires. Even though the use of this technology for human rights groups will depend on some feasibility factors such as prices, access and expertise, the images proved to be key for analysis of spatial aspects of conflict and valuable for reconstructing a more trustworthy picture.

  13. Single face image reconstruction for super resolution using support vector regression

    Science.gov (United States)

    Lin, Haijie; Yuan, Qiping; Chen, Zhihong; Yang, Xiaoping

    2016-10-01

    In recent years, we have witnessed the prosperity of face image super-resolution (SR) reconstruction, especially learning-based techniques. In this paper, a novel super-resolution face reconstruction framework for a single image, based on support vector regression (SVR), is presented. Given some input data, SVR can precisely predict output class labels, and we regard the SR problem as the estimation of pixel labels in the high-resolution version of the image. In our work, local binary pattern (LBP) codes and partial pixels are placed into the input vectors during model training, and models are learnt from a set of high- and low-resolution face images. By optimizing the vector pairs used for learning the model, the final reconstruction results are improved. Notably, more high-frequency information can be obtained by exploiting cyclical scan actions during both training and prediction. A large number of experiments and visual observations have shown that our method outperforms bicubic interpolation and some state-of-the-art super-resolution algorithms.
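    The sketch below illustrates, under stated assumptions, the kind of SVR regression described above: a low-resolution patch augmented with local binary pattern (LBP) codes is mapped to the corresponding high-resolution pixel. The patch size, LBP settings and the stand-in test image are assumptions, not the parameters used in the paper.

```python
# SVR maps (interpolated low-res patch + LBP codes) to the true high-res centre pixel.
import numpy as np
from skimage import data, transform
from skimage.feature import local_binary_pattern
from sklearn.svm import SVR

hr = data.camera().astype(float) / 255.0                      # stand-in "face" image
lr = transform.resize(hr, (hr.shape[0] // 2, hr.shape[1] // 2), anti_aliasing=True)
up = transform.resize(lr, hr.shape)                           # interpolated low-res input
lbp = local_binary_pattern((up * 255).astype(np.uint8), 8, 1, method="uniform")

def sample_features(img, codes, target, n=2000, half=2, seed=0):
    """Pair each (patch + LBP codes) input vector with the true high-res centre pixel."""
    rng = np.random.default_rng(seed)
    X, y = [], []
    for _ in range(n):
        r = int(rng.integers(half, img.shape[0] - half))
        c = int(rng.integers(half, img.shape[1] - half))
        patch = img[r - half:r + half + 1, c - half:c + half + 1].ravel()
        code = codes[r - half:r + half + 1, c - half:c + half + 1].ravel()
        X.append(np.concatenate([patch, code]))
        y.append(target[r, c])
    return np.array(X), np.array(y)

X, y = sample_features(up, lbp, hr)
model = SVR(C=1.0, epsilon=0.01).fit(X, y)                    # regress high-res pixel values
print("example prediction:", model.predict(X[:1])[0])
```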

  14. Study on image feature extraction and classification for human colorectal cancer using optical coherence tomography

    Science.gov (United States)

    Huang, Shu-Wei; Yang, Shan-Yi; Huang, Wei-Cheng; Chiu, Han-Mo; Lu, Chih-Wei

    2011-06-01

    Most colorectal cancers grow from adenomatous polyps, and adenomatous lesions have a well-documented relationship to colorectal cancer in previous studies. Detecting the morphological changes between polyp and tumor can therefore allow early diagnosis of colorectal cancer and simultaneous removal of lesions. Optical coherence tomography (OCT) has several advantages, including high resolution and non-invasive cross-sectional imaging in vivo. In this study, we investigated the relationship between B-scan OCT image features and the histology of malignant human colorectal tissues, as well as between the en-face OCT images and the endoscopic image pattern. The in-vitro experiments were performed with a swept-source optical coherence tomography (SS-OCT) system; the swept source has a center wavelength of 1310 nm and a 160 nm wavelength scanning range, which produced 6 um axial resolution. In the study, the en-face images were reconstructed by integrating the axial values in the 3D OCT images. The reconstructed en-face images show the same roundish or gyrus-like pattern as the endoscopy images, and the pattern of the en-face images relates to the stage of colon cancer. An endoscopic OCT technique would provide three-dimensional imaging and rapidly reconstructed en-face images, which can increase the speed of colon cancer diagnosis. Our results indicate a great potential for early detection of colorectal adenomas by using OCT imaging.
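    The en-face reconstruction step mentioned above (integrating axial values of the 3D OCT volume) can be expressed in a few lines of NumPy; the axis ordering and depth window in this sketch are assumptions for illustration.

```python
# En-face image by integrating an OCT volume over a depth window along the axial axis.
import numpy as np

def en_face_projection(volume, z_start, z_stop, depth_axis=0):
    """Integrate a 3D OCT volume over [z_start, z_stop) along the depth axis."""
    sl = [slice(None)] * volume.ndim
    sl[depth_axis] = slice(z_start, z_stop)
    return volume[tuple(sl)].sum(axis=depth_axis)

volume = np.random.rand(512, 256, 256)             # depth x fast x slow axes (toy data)
en_face = en_face_projection(volume, 100, 140)
print(en_face.shape)                               # (256, 256)
```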

  15. Sensitive periods for the functional specialization of the neural system for human face processing.

    Science.gov (United States)

    Röder, Brigitte; Ley, Pia; Shenoy, Bhamy H; Kekunnaya, Ramesh; Bottari, Davide

    2013-10-15

    The aim of the study was to identify possible sensitive phases in the development of the processing system for human faces. We tested the neural processing of faces in 11 humans who had been blind from birth and had undergone cataract surgery between 2 mo and 14 y of age. Pictures of faces and houses, scrambled versions of these pictures, and pictures of butterflies were presented while event-related potentials were recorded. Participants had to respond to the pictures of butterflies (targets) only. All participants, even those who had been blind from birth for several years, were able to categorize the pictures and to detect the targets. In healthy controls and in a group of visually impaired individuals with a history of developmental or incomplete congenital cataracts, the well-known enhancement of the N170 (negative peak around 170 ms) event-related potential to faces emerged, but a face-sensitive response was not observed in humans with a history of congenital dense cataracts. By contrast, this group showed a similar N170 response to all visual stimuli, which was indistinguishable from the N170 response to faces in the controls. The face-sensitive N170 response has been associated with the structural encoding of faces. Therefore, these data provide evidence for the hypothesis that the functional differentiation of category-specific neural representations in humans, presumably involving the elaboration of inhibitory circuits, is dependent on experience and linked to a sensitive period. Such functional specialization of neural systems seems necessary to achieve high processing proficiency.

  16. Atypical Asymmetry for Processing Human and Robot Faces in Autism Revealed by fNIRS.

    Directory of Open Access Journals (Sweden)

    Corinne E Jung

    Full Text Available Deficits in the visual processing of faces in autism spectrum disorder (ASD individuals may be due to atypical brain organization and function. Studies assessing asymmetric brain function in ASD individuals have suggested that facial processing, which is known to be lateralized in neurotypical (NT individuals, may be less lateralized in ASD. Here we used functional near-infrared spectroscopy (fNIRS to first test this theory by comparing patterns of lateralized brain activity in homologous temporal-occipital facial processing regions during observation of faces in an ASD group and an NT group. As expected, the ASD participants showed reduced right hemisphere asymmetry for human faces, compared to the NT participants. Based on recent behavioral reports suggesting that robots can facilitate increased verbal interaction over human counterparts in ASD, we also measured responses to faces of robots to determine if these patterns of activation were lateralized in each group. In this exploratory test, both groups showed similar asymmetry patterns for the robot faces. Our findings confirm existing literature suggesting reduced asymmetry for human faces in ASD and provide a preliminary foundation for future testing of how the use of categorically different social stimuli in the clinical setting may be beneficial in this population.

  17. Capturing specific abilities as a window into human individuality: the example of face recognition.

    Science.gov (United States)

    Wilmer, Jeremy B; Germine, Laura; Chabris, Christopher F; Chatterjee, Garga; Gerbasi, Margaret; Nakayama, Ken

    2012-01-01

    Proper characterization of each individual's unique pattern of strengths and weaknesses requires good measures of diverse abilities. Here, we advocate combining our growing understanding of neural and cognitive mechanisms with modern psychometric methods in a renewed effort to capture human individuality through a consideration of specific abilities. We articulate five criteria for the isolation and measurement of specific abilities, then apply these criteria to face recognition. We cleanly dissociate face recognition from more general visual and verbal recognition. This dissociation stretches across ability as well as disability, suggesting that specific developmental face recognition deficits are a special case of a broader specificity that spans the entire spectrum of human face recognition performance. Item-by-item results from 1,471 web-tested participants, included as supplementary information, fuel item analyses, validation, norming, and item response theory (IRT) analyses of our three tests: (a) the widely used Cambridge Face Memory Test (CFMT); (b) an Abstract Art Memory Test (AAMT), and (c) a Verbal Paired-Associates Memory Test (VPMT). The availability of this data set provides a solid foundation for interpreting future scores on these tests. We argue that the allied fields of experimental psychology, cognitive neuroscience, and vision science could fuel the discovery of additional specific abilities to add to face recognition, thereby providing new perspectives on human individuality.

  18. A case of persistent visual hallucinations of faces following LSD abuse: a functional Magnetic Resonance Imaging study.

    Science.gov (United States)

    Iaria, Giuseppe; Fox, Christopher J; Scheel, Michael; Stowe, Robert M; Barton, Jason J S

    2010-04-01

    In this study, we report the case of a patient experiencing hallucinations of faces that could be reliably precipitated by looking at trees. Using functional Magnetic Resonance Imaging (fMRI), we found that face hallucinations were associated with increased and decreased neural activity in a number of cortical regions. Within the same fusiform face area, however, we found significant decreased and increased neural activity according to whether the patient was experiencing hallucinations or veridical perception of faces, respectively. These findings may indicate key differences in how hallucinatory and veridical perceptions lead to the same phenomenological experience of seeing faces.

  19. A closed-loop algorithm to detect human face using color and reinforcement learning

    Institute of Scientific and Technical Information of China (English)

    吴东晖; 叶秀清; 顾伟康

    2002-01-01

    A closed-loop algorithm to detect human face using color information and reinforcement learning is presented in this paper. By using a skin-color selector, the regions with color "like" that of human skin are selected as candidates for human face. In the next stage, the candidates are matched with a face model and given an evaluation of the match degree by the matching module. If the evaluation of the match result is too low, a reinforcement learning stage starts to search for the best parameters of the skin-color selector. The algorithm has been tested using many photos of various ethnic groups under various lighting conditions, such as different light sources, highlights and shadows. The experimental results prove that this algorithm is robust to varying lighting conditions and personal conditions.
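    A minimal sketch of the closed-loop idea described above follows: a skin-colour selector proposes candidate regions, a stand-in face model scores them, and if the best score is too low the selector's thresholds are adjusted and the search repeats. The YCrCb bounds, the aspect-ratio scoring heuristic and the adjustment rule are illustrative assumptions, not the authors' learned parameters.

```python
# Skin-colour candidate selection with a simple feedback loop on the colour thresholds.
import numpy as np
import cv2

def skin_candidates(bgr, lower, upper, min_area=400):
    """Return bounding boxes (x, y, w, h) of sufficiently large skin-coloured blobs."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, lower, upper)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    return [tuple(stats[i, :4]) for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] >= min_area]

def face_score(box):
    # Stand-in for the paper's face-model matching: prefer roughly 3:4 (w:h) boxes.
    x, y, w, h = box
    return 1.0 - abs(w / float(h) - 0.75)

def detect_with_feedback(bgr, max_rounds=5, accept=0.8):
    lower, upper = [0, 140, 100], [255, 170, 130]     # initial YCrCb skin bounds (assumed)
    for _ in range(max_rounds):
        boxes = skin_candidates(bgr, np.array(lower, np.uint8), np.array(upper, np.uint8))
        if boxes:
            best = max(boxes, key=face_score)
            if face_score(best) >= accept:
                return best
        # Feedback step: relax the chroma bounds and search again.
        lower = [lower[0]] + [max(v - 5, 0) for v in lower[1:]]
        upper = [upper[0]] + [min(v + 5, 255) for v in upper[1:]]
    return None

# Toy usage on a flat grey image (no face present, so the detector returns None).
print(detect_with_feedback(np.full((240, 320, 3), 128, np.uint8)))
```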

  20. A closed-loop algorithm to detect human face using color and reinforcement learning

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    A closed-loop algorithm to detect human face using color information and reinforcement learning is presented in this paper. By using a skin-color selector, the regions with color "like" that of human skin are selected as candidates for human face. In the next stage, the candidates are matched with a face model and given an evaluation of the match degree by the matching module. If the evaluation of the match result is too low, a reinforcement learning stage starts to search for the best parameters of the skin-color selector. The algorithm has been tested using many photos of various ethnic groups under various lighting conditions, such as different light sources, highlights and shadows. The experimental results prove that this algorithm is robust to varying lighting conditions and personal conditions.

  1. The relationship between body image, age, and distress in women facing breast cancer surgery.

    Science.gov (United States)

    Miller, Sarah J; Schnur, Julie B; Weinberger-Litman, Sarah L; Montgomery, Guy H

    2014-10-01

    Research suggests that the strength of the relationship between body image and emotional distress decreases with age. Past research has focused on expected aging-related body changes and has not yet examined unexpected body changes (e.g., breast cancer surgery). The present post-hoc study assessed relationships between age, body image, and emotional distress in women facing breast cancer surgery. Older (≥ 65 years, n = 40) and younger women were matched on race/ethnicity, marital status, and surgery type. Within one week prior to surgery, participants completed measures of demographics, aspects of body image, and emotional distress (general and surgery-specific). Results indicated that: (1) body image did not differ by age (p > 0.999); (2) older women reported less pre-surgical emotional distress than younger women; and (3) there were significant relationships between body image and emotional distress. The findings suggest that younger women, particularly those with poor body image, are at an increased risk for pre-surgical emotional distress. These women may benefit from pre-surgical interventions designed to improve body image or to reduce pre-surgical emotional distress.

  2. Enhanced patterns of oriented edge magnitudes for face recognition and image matching.

    Science.gov (United States)

    Vu, Ngoc-Son; Caplier, Alice

    2012-03-01

    A good feature descriptor is desired to be discriminative, robust, and computationally inexpensive in terms of both time and storage requirements. In the domain of face recognition, these properties allow the system to quickly deliver high recognition rates to the end user. Motivated by the recent feature descriptor called Patterns of Oriented Edge Magnitudes (POEM), which balances these three concerns, this paper aims at enhancing its performance with respect to all of these criteria. To this end, we first optimize the parameters of POEM and then apply the whitened principal-component-analysis dimensionality reduction technique to get a more compact, robust, and discriminative descriptor. For face recognition, the efficiency of our algorithm is proved by strong results obtained on both constrained (Face Recognition Technology, FERET) and unconstrained (Labeled Faces in the Wild, LFW) data sets, in addition to its low complexity. Impressively, our algorithm is about 30 times faster than those based on Gabor filters. Furthermore, by proposing an additional technique that makes our descriptor robust to rotation, we validate its efficiency for the task of image matching.
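    The whitened-PCA step described above can be sketched with scikit-learn as below; random vectors stand in for the POEM descriptors, which are not reimplemented here, and all dimensions are assumptions.

```python
# Whitened PCA compresses and decorrelates descriptors before matching.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
descriptors = rng.random((500, 4096))              # one POEM-like vector per face (toy data)
wpca = PCA(n_components=128, whiten=True).fit(descriptors)
compact = wpca.transform(descriptors)              # compact, decorrelated features

def cosine(a, b):
    """Cosine similarity, a common matching score for whitened descriptors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(compact.shape, cosine(compact[0], compact[1]))
```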

  3. Multipose Face Image Recognition Based on Image Synthesis

    Institute of Scientific and Technical Information of China (English)

    王亚南; 苏剑波

    2015-01-01

    Pose variations of face images cause recognition rates to decrease significantly. The fused face image obtained from several multi-pose face images of the same person is used to identify that person. The proposed fusion method combines texture information and geometry information, and the face image sets for fusion are selected using the geometric information of the face images to guarantee the completeness of the face information. On the basis of existing face databases, the recognition rates of the original face images and of the fused images composed from multi-pose face images collected from the internet are tested. The recognition results show that the fusion method achieves a higher recognition rate.

  4. The compassionate brain: humans detect intensity of pain from another's face.

    Science.gov (United States)

    Saarela, Miiamaaria V; Hlushchuk, Yevhen; Williams, Amanda C de C; Schürmann, Martin; Kalso, Eija; Hari, Riitta

    2007-01-01

    Understanding another person's experience draws on "mirroring systems," brain circuitries shared by the subject's own actions/feelings and by similar states observed in others. Lately, also the experience of pain has been shown to activate partly the same brain areas in the subjects' own and in the observer's brain. Recent studies show remarkable overlap between brain areas activated when a subject undergoes painful sensory stimulation and when he/she observes others suffering from pain. Using functional magnetic resonance imaging, we show that not only the presence of pain but also the intensity of the observed pain is encoded in the observer's brain-as occurs during the observer's own pain experience. When subjects observed pain from the faces of chronic pain patients, activations in bilateral anterior insula (AI), left anterior cingulate cortex, and left inferior parietal lobe in the observer's brain correlated with their estimates of the intensity of observed pain. Furthermore, the strengths of activation in the left AI and left inferior frontal gyrus during observation of intensified pain correlated with subjects' self-rated empathy. These findings imply that the intersubjective representation of pain in the human brain is more detailed than has been previously thought.

  5. A retrospective look at replacing face-to-face embryology instruction with online lectures in a human anatomy course.

    Science.gov (United States)

    Beale, Elmus G; Tarwater, Patrick M; Lee, Vaughan H

    2014-01-01

    Embryology is integrated into the Clinically Oriented Anatomy course at the Texas Tech University Health Sciences Center School of Medicine. Before 2008, the same instructor presented embryology in 13 face-to-face lectures distributed by organ systems throughout the course. For the 2008 and 2009 offerings of the course, a hybrid embryology instruction model with four face-to-face classes that supplemented online recorded lectures was used. One instructor delivered the lectures face-to-face in 2007 and by online videos in 2008-2009, while a second instructor provided the supplemental face-to-face classes in 2008-2009. The same embryology learning objectives and selected examination questions were used for each of the three years. This allowed direct comparison of learning outcomes, as measured by examination performance, for students receiving only face-to-face embryology instruction versus the hybrid approach. Comparison of the face-to-face lectures to the hybrid approach showed no difference in overall class performance on embryology questions that were used all three years. Moreover, there was no differential effect of the delivery method on the examination scores for bottom quartile students. Students completed an end-of-course survey to assess their opinions. They rated the two forms of delivery similarly on a six-point Likert scale and reported that face-to-face lectures have the advantage of allowing them to interact with the instructor, whereas online lectures could be paused, replayed, and viewed at any time. These experiences suggest the need for well-designed prospective studies to determine whether online lectures can be used to enhance the efficacy of embryology instruction. © 2013 American Association of Anatomists.

  6. Robust gray-image face detector based on local statistical features

    Institute of Scientific and Technical Information of China (English)

    王琳; 冯正进; 崔光亮

    2004-01-01

    An efficient training framework for gray-image face detection is presented. Our system includes two stages. In the first stage, pattern rejection theory is used for feature selection: local Haar-like wavelet features are used as rejection features to reject patterns that are obviously not faces. In the second stage, the Kullback-Leibler divergence from information theory is applied to choose more effective features and to construct a hierarchical classifier. The probability functions of the two classes are estimated by joint histograms, and final decisions are made according to the likelihood ratios between the two classes. The experimental results show that our system is as robust and efficient as the best reported methods, while the training efficiency is higher than others.
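    The feature-selection and likelihood-ratio ideas above can be sketched as follows: per-feature class histograms are estimated, features are ranked by Kullback-Leibler divergence, and a test pattern is scored by a summed log-likelihood ratio. The synthetic training data, histogram binning and number of selected features are assumptions for illustration.

```python
# Rank features by KL divergence between class histograms, then classify by log-likelihood ratio.
import numpy as np
from scipy.stats import entropy

rng = np.random.default_rng(0)
faces = rng.normal(0.6, 0.1, size=(1000, 50))      # 50 toy feature responses per face pattern
nonfaces = rng.normal(0.4, 0.2, size=(1000, 50))
bins = np.linspace(0.0, 1.0, 33)

def class_histograms(X):
    """Per-feature histogram with add-one smoothing."""
    return np.array([np.histogram(X[:, j], bins=bins)[0] + 1 for j in range(X.shape[1])],
                    dtype=float)

h_face, h_non = class_histograms(faces), class_histograms(nonfaces)
h_face /= h_face.sum(axis=1, keepdims=True)
h_non /= h_non.sum(axis=1, keepdims=True)

kl = np.array([entropy(h_face[j], h_non[j]) for j in range(h_face.shape[0])])
selected = np.argsort(kl)[::-1][:10]               # keep the 10 most discriminative features

def log_likelihood_ratio(x):
    """Sum of per-feature log-likelihood ratios over the selected features."""
    idx = np.clip(np.digitize(x, bins) - 1, 0, len(bins) - 2)
    return float(np.sum(np.log(h_face[selected, idx[selected]])
                        - np.log(h_non[selected, idx[selected]])))

print("LLR of a face-like pattern:", log_likelihood_ratio(rng.normal(0.6, 0.1, 50)))
```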

  7. Enhanced Visualization of Subtle Outer Retinal Pathology by En Face Optical Coherence Tomography and Correlation with Multi-Modal Imaging

    Science.gov (United States)

    Chew, Avenell L.; Lamey, Tina; McLaren, Terri; De Roach, John

    2016-01-01

    Purpose To present en face optical coherence tomography (OCT) images generated by graph-search theory algorithm-based custom software and examine correlation with other imaging modalities. Methods En face OCT images derived from high density OCT volumetric scans of 3 healthy subjects and 4 patients using a custom algorithm (graph-search theory) and commercial software (Heidelberg Eye Explorer software (Heidelberg Engineering)) were compared and correlated with near infrared reflectance, fundus autofluorescence, adaptive optics flood-illumination ophthalmoscopy (AO-FIO) and microperimetry. Results Commercial software was unable to generate accurate en face OCT images in eyes with retinal pigment epithelium (RPE) pathology due to segmentation error at the level of Bruch’s membrane (BM). Accurate segmentation of the basal RPE and BM was achieved using custom software. The en face OCT images from eyes with isolated interdigitation or ellipsoid zone pathology were of similar quality between custom software and Heidelberg Eye Explorer software in the absence of any other significant outer retinal pathology. En face OCT images demonstrated angioid streaks, lesions of acute macular neuroretinopathy, hydroxychloroquine toxicity and Bietti crystalline deposits that correlated with other imaging modalities. Conclusions Graph-search theory algorithm helps to overcome the limitations of outer retinal segmentation inaccuracies in commercial software. En face OCT images can provide detailed topography of the reflectivity within a specific layer of the retina which correlates with other forms of fundus imaging. Our results highlight the need for standardization of image reflectivity to facilitate quantification of en face OCT images and longitudinal analysis. PMID:27959968
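    The graph-search idea underlying such segmentation software can be sketched as a minimum-cost path problem on a single B-scan, as below; skimage's route_through_array is used as a generic shortest-path solver, and the toy B-scan and cost definition are assumptions, not the authors' custom algorithm.

```python
# Trace a layer boundary as a minimum-cost path across a gradient-based cost image.
import numpy as np
from skimage.graph import route_through_array

rng = np.random.default_rng(0)
bscan = np.cumsum(rng.random((200, 300)), axis=0)   # toy B-scan: intensity grows with depth
grad = np.abs(np.gradient(bscan, axis=0))           # vertical gradient marks layer boundaries
cost = grad.max() - grad + 1e-6                     # strong edges become cheap to traverse

# Search from the middle row of the left edge to the middle row of the right edge.
path, total_cost = route_through_array(cost,
                                       (cost.shape[0] // 2, 0),
                                       (cost.shape[0] // 2, cost.shape[1] - 1),
                                       fully_connected=True, geometric=True)
boundary_rows = np.array([r for r, c in path])
print("traced boundary spans rows", boundary_rows.min(), "to", boundary_rows.max())
```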

  8. Research on Facial Image Processing Algorithms in Face Recognition

    Institute of Scientific and Technical Information of China (English)

    韩增锟

    2012-01-01

    On the premise that the face has been located, grey-scale information is used to locate the major facial organs, such as the eyes, nose and mouth. Bicubic interpolation is introduced for image rotation and zooming during the preprocessing phase of the face images; moreover, the grey values of the image are normalized by histogram enhancement.
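    The two preprocessing steps described above, bicubic interpolation for rotation/zoom and histogram equalization (a common form of the histogram enhancement mentioned), can be sketched with OpenCV as follows; the rotation angle, scale factor and stand-in image are assumptions.

```python
# Bicubic rotation/zoom followed by histogram equalization of the grey values.
import numpy as np
import cv2

face = np.random.randint(0, 256, (128, 128), np.uint8)   # stand-in grey face image

# Rotate by 15 degrees about the centre and zoom by 1.2x, both with bicubic interpolation.
M = cv2.getRotationMatrix2D((face.shape[1] / 2, face.shape[0] / 2), 15, 1.2)
aligned = cv2.warpAffine(face, M, face.shape[::-1], flags=cv2.INTER_CUBIC)

# Normalize the grey-value distribution by histogram equalization.
normalized = cv2.equalizeHist(aligned)
print(aligned.shape, normalized.dtype)
```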

  9. An objective signature for visual binding of face parts in the human brain.

    Science.gov (United States)

    Boremanse, Adriano; Norcia, Anthony M; Rossion, Bruno

    2013-09-10

    Whether and how the parts of a visual object are grouped together to form an integrated ("holistic") representation is a central question in cognitive neuroscience. Although the face is considered to be the quintessential example of holistic representation, this issue has been the subject of much debate in face perception research. The implication of holistic processing is that the response to the whole cannot be predicted from the sum of responses to the parts. Here we apply techniques from nonlinear systems analysis to provide an objective measure of the nonlinear integration of parts into a whole, using the left and right halves of a face stimulus as the parts. High-density electroencephalogram (EEG) was recorded in 15 human participants presented with two halves of a face stimulus, flickering at different frequencies (5.88 vs. 7.14 Hz). Besides specific responses at these fundamental frequencies, reflecting part-based responses, we found intermodulation components (e.g., 7.14 - 5.88 = 1.26 Hz) over the right occipito-temporal hemisphere, reflecting nonlinear integration of the face halves. Part-based responses did not depend on the relative alignment of the two face halves, their spatial separation, or whether the face was presented upright or inverted. By contrast, intermodulations were virtually absent when the two halves were spatially misaligned and separated. Inversion of the whole face configuration also reduced specifically the intermodulation components over the right occipito-temporal cortex. These observations indicate that the intermodulation components constitute an objective, configuration-specific signature of an emergent neural representation of the whole face that is distinct from that generated by the parts themselves.
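    The frequency-tagging analysis described above can be illustrated with a short simulation: the amplitude spectrum of a signal is read out at the two stimulation frequencies and at their difference, the intermodulation component. The simulated signal, noise level and sampling rate are assumptions, not the recorded EEG.

```python
# Read amplitudes at the tagged frequencies and at their intermodulation difference.
import numpy as np

fs, dur = 512.0, 50.0                                # sampling rate (Hz) and duration (s)
t = np.arange(0, dur, 1.0 / fs)
f1, f2 = 5.88, 7.14                                  # flicker frequencies of the two face halves
eeg = (np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
       + 0.3 * np.sin(2 * np.pi * (f2 - f1) * t)     # nonlinear integration term
       + 0.5 * np.random.default_rng(0).standard_normal(t.size))

spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)

for label, f in [("f1", f1), ("f2", f2), ("f2 - f1", f2 - f1)]:
    k = np.argmin(np.abs(freqs - f))
    print(f"{label}: {freqs[k]:.2f} Hz, amplitude {spectrum[k]:.3f}")
```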

  10. Humanity in God's Image: An Interdisciplinary Exploration

    DEFF Research Database (Denmark)

    Welz, Claudia

    How can we, in our times, understand the biblical concept that human beings have been created in the image of an invisible God? This is a perennial but increasingly pressing question that lies at the heart of theological anthropology. Humanity in God's Image: An Interdisciplinary Exploration....... Claudia Welz offers an interdisciplinary exploration of theological and ethical 'visions' of the invisible. By analysing poetry and art, Welz exemplifies human self-understanding in the interface between the visual and the linguistic. The content of the imago Dei cannot be defined apart from the image...

  11. 3D quantitative analysis of early decomposition changes of the human face.

    Science.gov (United States)

    Caplova, Zuzana; Gibelli, Daniele Maria; Poppa, Pasquale; Cummaudo, Marco; Obertova, Zuzana; Sforza, Chiarella; Cattaneo, Cristina

    2017-07-13

    Decomposition of the human body and human face is influenced, among other things, by environmental conditions. The early decomposition changes that modify the appearance of the face may hamper the recognition and identification of the deceased. Quantitative assessment of those changes may provide important information for forensic identification. This report presents a pilot 3D quantitative approach of tracking early decomposition changes of a single cadaver in controlled environmental conditions by summarizing the change with weekly morphological descriptions. The root mean square (RMS) value was used to evaluate the changes of the face after death. The results showed a high correlation (r = 0.863) between the measured RMS and the time since death. RMS values of each scan are presented, as well as the average weekly RMS values. The quantification of decomposition changes could improve the accuracy of antemortem facial approximation and potentially could allow the direct comparisons of antemortem and postmortem 3D scans.
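    The RMS-based quantification described above can be sketched as follows: the root mean square distance between corresponding points of each follow-up scan and a baseline scan is computed and correlated with time. The synthetic point clouds and timeline are assumptions standing in for the real 3D scans.

```python
# RMS distance between corresponding scan vertices, correlated with time since baseline.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
baseline = rng.random((5000, 3))                       # corresponding vertices of the face scan
weeks = np.arange(1, 9)
rms_values = []
for w in weeks:
    scan = baseline + rng.normal(scale=0.002 * w, size=baseline.shape)  # growing change
    rms_values.append(np.sqrt(np.mean(np.sum((scan - baseline) ** 2, axis=1))))

r, p = pearsonr(weeks, rms_values)
print("weekly RMS:", np.round(rms_values, 4), "correlation with time r =", round(r, 3))
```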

  12. Human infant faces provoke implicit positive affective responses in parents and non-parents alike.

    Directory of Open Access Journals (Sweden)

    Vincenzo Paolo Senese

    Full Text Available Human infants' complete dependence on adult caregiving suggests that mechanisms associated with adult responsiveness to infant cues might be deeply embedded in the brain. Behavioural and neuroimaging research has produced converging evidence for adults' positive disposition to infant cues, but these studies have not investigated directly the valence of adults' reactions, how they are moderated by biological and social factors, and if they relate to child caregiving. This study examines implicit affective responses of 90 adults toward faces of human and non-human (cats and dogs) infants and adults. Implicit reactions were assessed with Single Category Implicit Association Tests, and reports of childrearing behaviours were assessed by the Parental Style Questionnaire. The results showed that human infant faces represent highly biologically relevant stimuli that capture attention and are implicitly associated with positive emotions. This reaction holds independent of gender and parenthood status and is associated with ideal parenting behaviors.

  13. Analysis and Segmentation of Face Images using Point Annotations and Linear Subspace Techniques

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille

    2002-01-01

    This report provides an analysis of 37 annotated frontal face images. All results presented have been obtained using our freely available Active Appearance Model (AAM) implementation. To ensure the reproducibility of the presented experiments, the data set has also been made available. As such, the data and this report may serve as a point of reference to compare other AAM implementations against. In addition, we address the problem of AAM model truncation using parallel analysis along with a comparable study of the two prevalent AAM learning methods; principal component regression and estimation

  14. Management of human bites of the face in Enugu, Nigeria | Olaitan ...

    African Journals Online (AJOL)

    Methods: A retrospective review of the cases of human bites of the face that presented within a ten-year period was carried out. ... Information obtained includes age and gender of the patients as well as that of the assailants and the relationship of the assailants to the ...

  15. Lurking on the Internet: A Small-Group Assignment that Puts a Human Face on Psychopathology

    Science.gov (United States)

    Lowman, Joseph; Judge, Abigail M.; Wiss, Charles

    2010-01-01

    Lurking on the Internet aims to put a human face on psychopathology for the abnormal psychology course. Student groups are assigned major diagnostic categories and instructed to search the Internet for discussion forums, individual blogs, or YouTube videos where affected individuals discuss their symptoms and lives. After discussing the ethics of…

  16. Impairments in Monkey and Human Face Recognition in 2-Year-Old Toddlers with Autism Spectrum Disorder and Developmental Delay

    Science.gov (United States)

    Chawarska, Katarzyna; Volkmar, Fred

    2007-01-01

    Face recognition impairments are well documented in older children with Autism Spectrum Disorders (ASD); however, the developmental course of the deficit is not clear. This study investigates the progressive specialization of face recognition skills in children with and without ASD. Experiment 1 examines human and monkey face recognition in…

  17. Data Mining Based Skin Pixel Detection Applied On Human Images: A Study Paper

    Directory of Open Access Journals (Sweden)

    Gagandeep Kaur

    2014-07-01

    Full Text Available Skin segmentation is the process of identifying the skin pixels of an image in a particular color model and dividing the image into skin and non-skin regions. Deciding whether pixel regions of an image or video belong to skin is typically a preprocessing step for skin detection in computer vision, face detection, or multi-view face detection. A skin pixel detection model converts the image into an appropriate color space and then uses a classification process to label skin and non-skin pixels. A skin classifier identifies the boundary of the skin in a skin color model based on a training dataset. In this paper, we present a survey of skin pixel segmentation using learning algorithms.
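    A minimal sketch of the learning-based skin-pixel classification surveyed above is given below: a decision tree is trained on labelled RGB samples and applied to every pixel of a new image. The tiny synthetic training set is an assumption standing in for a real labelled corpus.

```python
# Train a per-pixel skin/non-skin classifier on RGB samples, then label a whole image.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
skin = rng.normal([200, 150, 130], 20, size=(500, 3))       # toy skin-tone RGB samples
nonskin = rng.uniform(0, 255, size=(500, 3))                # toy background samples
X = np.vstack([skin, nonskin]).clip(0, 255)
y = np.r_[np.ones(500), np.zeros(500)]

clf = DecisionTreeClassifier(max_depth=8, random_state=0).fit(X, y)

image = rng.uniform(0, 255, size=(120, 160, 3))
mask = clf.predict(image.reshape(-1, 3)).reshape(120, 160)  # 1 = skin, 0 = non-skin
print("skin pixels found:", int(mask.sum()))
```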

  18. Audio-Visual Speech Recognition Using Lip Information Extracted from Side-Face Images

    Directory of Open Access Journals (Sweden)

    Iwano Koji

    2007-01-01

    Full Text Available This paper proposes an audio-visual speech recognition method using lip information extracted from side-face images as an attempt to increase noise robustness in mobile environments. Our proposed method assumes that lip images can be captured using a small camera installed in a handset. Two different kinds of lip features, lip-contour geometric features and lip-motion velocity features, are used individually or jointly, in combination with audio features. Phoneme HMMs modeling the audio and visual features are built based on the multistream HMM technique. Experiments conducted using Japanese connected digit speech contaminated with white noise in various SNR conditions show effectiveness of the proposed method. Recognition accuracy is improved by using the visual information in all SNR conditions. These visual features were confirmed to be effective even when the audio HMM was adapted to noise by the MLLR method.

  19. Audio-Visual Speech Recognition Using Lip Information Extracted from Side-Face Images

    Directory of Open Access Journals (Sweden)

    Koji Iwano

    2007-03-01

    Full Text Available This paper proposes an audio-visual speech recognition method using lip information extracted from side-face images as an attempt to increase noise robustness in mobile environments. Our proposed method assumes that lip images can be captured using a small camera installed in a handset. Two different kinds of lip features, lip-contour geometric features and lip-motion velocity features, are used individually or jointly, in combination with audio features. Phoneme HMMs modeling the audio and visual features are built based on the multistream HMM technique. Experiments conducted using Japanese connected digit speech contaminated with white noise in various SNR conditions show effectiveness of the proposed method. Recognition accuracy is improved by using the visual information in all SNR conditions. These visual features were confirmed to be effective even when the audio HMM was adapted to noise by the MLLR method.

  20. 3D imaging by serial block face scanning electron microscopy for materials science using ultramicrotomy.

    Science.gov (United States)

    Hashimoto, Teruo; Thompson, George E; Zhou, Xiaorong; Withers, Philip J

    2016-04-01

    Mechanical serial block face scanning electron microscopy (SBFSEM) has emerged as a means of obtaining three dimensional (3D) electron images over volumes much larger than possible by focused ion beam (FIB) serial sectioning and at higher spatial resolution than achievable with conventional X-ray computed tomography (CT). Such high resolution 3D electron images can be employed for precisely determining the shape, volume fraction, distribution and connectivity of important microstructural features. While soft (fixed or frozen) biological samples are particularly well suited for nanoscale sectioning using an ultramicrotome, the technique can also produce excellent 3D images at electron microscope resolution in a time and resource-efficient manner for engineering materials. Currently, a lack of appreciation of the capabilities of ultramicrotomy and the operational challenges associated with minimising artefacts for different materials is limiting its wider application to engineering materials. Consequently, this paper outlines the current state of the art for SBFSEM examining in detail how damage is introduced during slicing and highlighting strategies for minimising such damage. A particular focus of the study is the acquisition of 3D images for a variety of metallic and coated systems.

  1. Facing the phase problem in Coherent Diffractive Imaging via Memetic Algorithms

    Science.gov (United States)

    Colombo, Alessandro; Galli, Davide Emilio; de Caro, Liberato; Scattarella, Francesco; Carlino, Elvio

    2017-02-01

    Coherent Diffractive Imaging is a lensless technique that allows imaging of matter at a spatial resolution not limited by lens aberrations. This technique exploits the measured diffraction pattern of a coherent beam scattered by periodic and non–periodic objects to retrieve spatial information. The diffracted intensity, for weak–scattering objects, is proportional to the modulus of the Fourier Transform of the object scattering function. Any phase information, needed to retrieve its scattering function, has to be retrieved by means of suitable algorithms. Here we present a new approach, based on a memetic algorithm, i.e. a hybrid genetic algorithm, to face the phase problem, which exploits the synergy of deterministic and stochastic optimization methods. The new approach has been tested on simulated data and applied to the phasing of transmission electron microscopy coherent electron diffraction data of a SrTiO3 sample. We have been able to quantitatively retrieve the projected atomic potential, and also image the oxygen columns, which are not directly visible in the relevant high-resolution transmission electron microscopy images. Our approach proves to be a new powerful tool for the study of matter at atomic resolution and opens new perspectives in those applications in which effective phase retrieval is necessary.
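    The deterministic ingredient typically used inside such hybrid phase-retrieval schemes can be sketched as an error-reduction loop that alternates between the measured Fourier modulus and a real-space support constraint, as below. The memetic (genetic) layer of the paper is not reproduced, and the toy object and support are assumptions.

```python
# Error-reduction phase retrieval: enforce the measured modulus in Fourier space,
# then a support-plus-positivity constraint in real space, and iterate.
import numpy as np

rng = np.random.default_rng(0)
obj = np.zeros((64, 64))
obj[24:40, 24:40] = rng.random((16, 16))              # unknown object inside a known support
support = obj > 0
measured_modulus = np.abs(np.fft.fft2(obj))           # "diffraction" data (phases lost)

estimate = rng.random(obj.shape) * support            # random start inside the support
for _ in range(500):
    F = np.fft.fft2(estimate)
    F = measured_modulus * np.exp(1j * np.angle(F))   # keep phases, impose measured modulus
    estimate = np.real(np.fft.ifft2(F))
    estimate = np.where(support & (estimate > 0), estimate, 0.0)

print("relative residual:", np.linalg.norm(estimate - obj) / np.linalg.norm(obj))
```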

  2. Fractal analysis of en face tomographic images obtained with full field optical coherence tomography

    Energy Technology Data Exchange (ETDEWEB)

    Gao, Wanrong; Zhu, Yue [Department of Optical Engineering, Nanjing University of Science and Technology, Jiangsu (China)

    2017-03-15

    The quantitative modeling of the imaging signal of pathological areas and healthy areas is necessary to improve the specificity of diagnosis with tomographic en face images obtained with full field optical coherence tomography (FFOCT). In this work, we propose to use the depth-resolved change in the fractal parameter as a quantitative specific biomarker of the stages of disease. The idea is based on the fact that tissue is a random medium and only statistical parameters that characterize tissue structure are appropriate. We successfully relate the imaging signal in FFOCT to the tissue structure in terms of the scattering function and the coherent transfer function of the system. The formula is then used to analyze the ratio of the Fourier transforms of the cancerous tissue to the normal tissue. We found that when the tissue changes from the normal to cancerous the ratio of the spectrum of the index inhomogeneities takes the form of an inverse power law and the changes in the fractal parameter can be determined by estimating slopes of the spectra of the ratio plotted on a log-log scale. The fresh normal and cancer liver tissues were imaged to demonstrate the potential diagnostic value of the method at early stages when there are no significant changes in tissue microstructures. (copyright 2016 by WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)
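    The slope-fitting analysis described above can be sketched as follows: the ratio of radially averaged Fourier spectra of two images is fitted with a power law on a log-log scale and the fitted exponent is taken as the change in the fractal parameter. The toy textures and the fitted frequency range are assumptions.

```python
# Power-law exponent of the ratio of radially averaged power spectra of two textures.
import numpy as np

def radial_spectrum(img, n_bins=60):
    """Radially averaged power spectrum of a 2D image."""
    F = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    cy, cx = np.array(img.shape) // 2
    yy, xx = np.indices(img.shape)
    r = np.hypot(yy - cy, xx - cx)
    bins = np.linspace(1, r.max(), n_bins + 1)
    which = np.digitize(r, bins)
    return bins[:-1], np.array([F[which == i].mean() for i in range(1, n_bins + 1)])

rng = np.random.default_rng(0)
normal = rng.random((256, 256))
cancer = np.cumsum(np.cumsum(rng.random((256, 256)), axis=0), axis=1)  # smoother texture

f, s_normal = radial_spectrum(normal)
_, s_cancer = radial_spectrum(cancer)
ratio = s_cancer / s_normal
keep = (f > 2) & (f < 80)                              # avoid the DC peak and the noisy tail
slope, _ = np.polyfit(np.log(f[keep]), np.log(ratio[keep]), 1)
print("estimated power-law exponent of the spectrum ratio:", slope)
```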

  3. Image Quality Assessment for Fake Biometric Detection: Application to Iris, Fingerprint, and Face Recognition.

    Science.gov (United States)

    Galbally, Javier; Marcel, Sébastien; Fierrez, Julian

    2014-02-01

    To ensure the actual presence of a real legitimate trait in contrast to a fake self-manufactured synthetic or reconstructed sample is a significant problem in biometric authentication, which requires the development of new and efficient protection measures. In this paper, we present a novel software-based fake detection method that can be used in multiple biometric systems to detect different types of fraudulent access attempts. The objective of the proposed system is to enhance the security of biometric recognition frameworks, by adding liveness assessment in a fast, user-friendly, and non-intrusive manner, through the use of image quality assessment. The proposed approach presents a very low degree of complexity, which makes it suitable for real-time applications, using 25 general image quality features extracted from one image (i.e., the same acquired for authentication purposes) to distinguish between legitimate and impostor samples. The experimental results, obtained on publicly available data sets of fingerprint, iris, and 2D face, show that the proposed method is highly competitive compared with other state-of-the-art approaches and that the analysis of the general image quality of real biometric samples reveals highly valuable information that may be very efficiently used to discriminate them from fake traits.
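    A minimal sketch of the image-quality-based liveness idea is given below: a few full-reference quality measures are computed between an image and a low-pass filtered version of itself and fed to a simple classifier. Only three of the 25 features are shown, and the training data are synthetic stand-ins, so this is an illustration of the approach rather than the published method.

```python
# Full-reference quality features against a smoothed copy, then a linear classifier.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.metrics import mean_squared_error, peak_signal_noise_ratio, structural_similarity
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def quality_features(img):
    """Three example image-quality measures between the image and a blurred reference."""
    ref = gaussian_filter(img, sigma=1.0)
    return np.array([
        mean_squared_error(img, ref),
        peak_signal_noise_ratio(img, ref, data_range=1.0),
        structural_similarity(img, ref, data_range=1.0),
    ])

rng = np.random.default_rng(0)
real = [rng.random((64, 64)) for _ in range(50)]                        # toy "real" samples
fake = [gaussian_filter(rng.random((64, 64)), 2.0) for _ in range(50)]  # toy "fake" samples
X = np.array([quality_features(i) for i in real + fake])
y = np.r_[np.zeros(50), np.ones(50)]

clf = LinearDiscriminantAnalysis().fit(X, y)
print("training accuracy:", clf.score(X, y))
```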

  4. Galactose uncovers face recognition and mental images in congenital prosopagnosia: the first case report.

    Science.gov (United States)

    Esins, Janina; Schultz, Johannes; Bülthoff, Isabelle; Kennerknecht, Ingo

    2014-09-01

    A woman in her early 40s with congenital prosopagnosia and attention deficit hyperactivity disorder observed for the first time sudden and extensive improvement of her face recognition abilities, mental imagery, and sense of navigation after galactose intake. This effect of galactose on prosopagnosia has never been reported before. Even if this effect is restricted to a subform of congenital prosopagnosia, galactose might improve the condition of other prosopagnosics. Congenital prosopagnosia, the inability to recognize other people by their face, has extensive negative impact on everyday life. It has a high prevalence of about 2.5%. Monosaccharides are known to have a positive impact on cognitive performance. Here, we report the case of a prosopagnosic woman for whom the daily intake of 5 g of galactose resulted in a remarkable improvement of her lifelong face blindness, along with improved sense of orientation and more vivid mental imagery. All these improvements vanished after discontinuing galactose intake. The self-reported effects of galactose were wide-ranging and remarkably strong but could not be reproduced for 16 other prosopagnosics tested. Indications about heterogeneity within prosopagnosia have been reported; this could explain the difficulty to find similar effects in other prosopagnosics. Detailed analyses of the effects of galactose in prosopagnosia might give more insight into the effects of galactose on human cognition in general. Galactose is cheap and easy to obtain, therefore, a systematic test of its positive effects on other cases of congenital prosopagnosia may be warranted.

  5. Perfusion harmonic imaging of the human brain

    Science.gov (United States)

    Metzler, Volker H.; Seidel, Guenter; Wiesmann, Martin; Meyer, Karsten; Aach, Til

    2003-05-01

    The fast visualisation of cerebral microcirculation supports diagnosis of acute cerebrovascular diseases. However, the commonly used CT/MRI-based methods are time consuming and, moreover, costly. Therefore we propose an alternative approach to brain perfusion imaging by means of ultrasonography. In spite of the low signal/noise-ratio of transcranial ultrasound and the high impedance of the skull, flow images of cerebral blood flow can be derived by capturing the kinetics of appropriate contrast agents by harmonic ultrasound image sequences. In this paper we propose three different methods for human brain perfusion imaging, each of which yielding flow images indicating the status of the patient's cerebral microcirculation by visualising local flow parameters. Bolus harmonic imaging (BHI) displays the flow kinetics of bolus injections, while replenishment (RHI) and diminution harmonic imaging (DHI) compute flow characteristics from contrast agent continuous infusions. RHI measures the contrast agents kinetics in the influx phase and DHI displays the diminution kinetics of the contrast agent acquired from the decay phase. In clinical studies, BHI- and RHI-parameter images were found to represent comprehensive and reproducible distributions of physiological cerebral blood flow. For DHI it is shown, that bubble destruction and hence perfusion phenomena principally can be displayed. Generally, perfusion harmonic imaging enables reliable and fast bedside imaging of human brain perfusion. Due to its cost efficiency it complements cerebrovascular diagnostics by established CT/MRI-based methods.
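    The parameter-image idea behind bolus harmonic imaging can be sketched as follows: per-pixel flow parameters such as peak intensity and time-to-peak are extracted from a time series of contrast-enhanced frames. The simulated bolus kinetics are assumptions, not clinical data.

```python
# Per-pixel perfusion parameter images (peak intensity, time-to-peak) from a frame series.
import numpy as np

frames, h, w = 60, 64, 64
t = np.arange(frames, dtype=float)
rng = np.random.default_rng(0)
arrival = rng.uniform(10, 30, size=(h, w))                          # per-pixel bolus arrival time
series = np.exp(-0.5 * ((t[:, None, None] - arrival) / 5.0) ** 2)   # bell-shaped bolus curves
series += 0.05 * rng.standard_normal(series.shape)                  # acquisition noise

peak_intensity = series.max(axis=0)                                 # parameter image 1
time_to_peak = series.argmax(axis=0).astype(float)                  # parameter image 2
print(peak_intensity.shape, time_to_peak.mean())
```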

  6. Human Lives Are More Valuable Than Material Possessions--My Reflections on "Face to Face with Hurricane Camille"

    Institute of Scientific and Technical Information of China (English)

    顾嘉祖

    1985-01-01

    @@ "Face to face with HurricaneCamille" is a piece of narration writ-ten by Joseph P. Blank. It has beenadopted by text-book compilers ofvarious countries ever since its firstpublication in The Readers Digest,March 1970. As early as 1973, Nat-ali C. Moreda, Katherlne M. Sin-clair andNancy J.Sparks put it intoa set of text-books entitled New Advanced.

  7. Common cortical responses evoked by appearance, disappearance and change of the human face

    Directory of Open Access Journals (Sweden)

    Kida Tetsuo

    2009-04-01

    Full Text Available Abstract. Background: To segregate luminance-related, face-related and non-specific components involved in spatio-temporal dynamics of cortical activations to a face stimulus, we recorded cortical responses to face appearance (Onset), disappearance (Offset), and change (Change) using magnetoencephalography. Results: Activity in and around the primary visual cortex (V1/V2) showed luminance-dependent behavior. Any of the three events evoked activity in the middle occipital gyrus (MOG) at 150 ms and the temporo-parietal junction (TPJ) at 250 ms after the onset of each event. Onset and Change activated the fusiform gyrus (FG), while Offset did not. This FG activation showed a triphasic waveform, consistent with results of intracranial recordings in humans. Conclusion: The analysis employed in this study successfully segregated four different elements involved in the spatio-temporal dynamics of cortical activations in response to a face stimulus. The results show the responses of MOG and TPJ to be associated with non-specific processes, such as the detection of abrupt changes or exogenous attention. Activity in FG corresponds to a face-specific response recorded by intracranial studies, and that in V1/V2 is related to a change in luminance.

  8. Coarse-grained and fine-grained parallel optimization for real-time en-face OCT imaging

    Science.gov (United States)

    Kapinchev, Konstantin; Bradu, Adrian; Barnes, Frederick; Podoleanu, Adrian

    2016-03-01

    This paper presents parallel optimizations in the en-face (C-scan) optical coherence tomography (OCT) display. Compared with the cross-sectional (B-scan) imagery, the production of en-face images is more computationally demanding, due to the increased size of the data handled by the digital signal processing (DSP) algorithms. A sequential implementation of the DSP leads to a limited number of real-time generated en-face images. There are OCT applications where the simultaneous production of a large number of en-face images from multiple depths is required, such as real-time diagnostics and monitoring of surgery and ablation. In sequential computing, this requirement leads to a significant increase in the time needed to process the data and to generate the images. As a result, the processing time exceeds the acquisition time and the image generation is not in real-time. In these cases, not producing en-face images in real-time makes the OCT system ineffective. Parallel optimization of the DSP algorithms provides a solution to this problem. Coarse-grained central processing unit (CPU) based and fine-grained graphics processing unit (GPU) based parallel implementations of the conventional Fourier domain (CFD) OCT method and the Master-Slave Interferometry (MSI) OCT method are studied. In the coarse-grained CPU implementation, each parallel thread processes the whole OCT frame and generates a single en-face image. The corresponding fine-grained GPU implementation launches one parallel thread for every data point from the OCT frame and thus achieves maximum parallelism. The performance and scalability of the CPU-based and GPU-based parallel approaches are analyzed and compared. The quality and the resolution of the images generated by the CFD method and the MSI method are also discussed and compared.
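    The coarse-grained scheme described above (one CPU worker per en-face image) can be illustrated with a short sketch. This is a toy illustration only: the NumPy array shapes, the Hanning-windowed FFT used as a stand-in for conventional Fourier-domain processing, and the use of Python's multiprocessing in place of native CPU threads are all assumptions, not details taken from the paper.

```python
"""Coarse-grained parallel en-face image generation (illustrative sketch).

Assumptions (not from the paper): the spectral OCT volume is a NumPy array of
shape (n_x, n_y, n_samples); processing is reduced to a windowed FFT per
A-scan; one process per requested depth plays the role of the coarse-grained
CPU thread that produces a single en-face image.
"""
from multiprocessing import Pool

import numpy as np

def axial_profiles(volume):
    """FFT every A-scan to obtain depth-resolved reflectivity profiles."""
    window = np.hanning(volume.shape[-1])
    return np.abs(np.fft.rfft(volume * window, axis=-1))

def en_face_at_depth(args):
    """Worker: extract one en-face (C-scan) image at a given depth index."""
    profiles, depth = args
    return depth, profiles[:, :, depth]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    volume = rng.standard_normal((128, 128, 1024))   # synthetic spectral data
    profiles = axial_profiles(volume)
    depths = range(50, 200, 10)                      # depths requested by the user

    # Coarse-grained parallelism: one task per en-face image. A real
    # implementation would keep the volume in shared memory instead of
    # pickling it to each worker.
    with Pool(processes=4) as pool:
        images = dict(pool.map(en_face_at_depth, [(profiles, d) for d in depths]))

    print(len(images), "en-face images of shape", images[50].shape)
```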

  9. Imaging's insights into human violence.

    Science.gov (United States)

    Church, Elizabeth J

    2014-01-01

    Following every well-publicized act of incomprehensible violence, the news media rush to interview neighbors, family members, and experts in an attempt to discover what could have led an individual to commit such a barbarous act. Certain stock answers are reiterated: video games, bullying, violent films, mental illness, the availability of guns, and a society that is increasingly both anonymous and callous. Might imaging be one of the more valuable keys to unlocking the mysteries of violent, aggressive people? This article explores these questions and their complex answers in the context of violent individuals.

  10. Symmetrical Two-Dimensional PCA with Image Measures in Face Recognition

    Directory of Open Access Journals (Sweden)

    Jicheng Meng

    2012-12-01

    Full Text Available In this paper, we extensively investigate symmetrical two-dimensional principal component analysis (S2DPCA) and introduce two image measures for S2DPCA-based face recognition, the volume measure (VM) and the subspace distance measure (SM). Although symmetry is an obvious but not absolute facial characteristic, it has been successfully exploited in PCA and 2DPCA. The paper gives detailed evidence that the even and odd subspaces in S2DPCA are mutually orthogonal, and in particular that S2DPCA can be constructed using a quarter of the conventional S2DPCA even/odd covariance matrix. Based on these results, we further investigate the time and memory complexities of S2DPCA, and find that S2DPCA can in fact be computed using a quarter of the time and memory required by conventional S2DPCA. Finally, VM and SM are introduced to S2DPCA for the final classification. Our experiments compare S2DPCA with 2DPCA on the YALE, AR and FERET face databases, and the results indicate that S2DPCA+VM generally outperforms the other algorithms.
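    As background for the symmetrical variant discussed above, the following sketch shows plain 2DPCA, whose image covariance matrix is the object that S2DPCA decomposes into even/odd parts. The array shapes and the number of components are illustrative assumptions; the symmetrical even/odd construction and the VM/SM measures of the paper are not reproduced.

```python
"""Minimal 2DPCA sketch (the baseline that S2DPCA extends).

Assumptions: face images arrive as a NumPy array of shape (n_images, h, w);
this is plain 2DPCA, not the even/odd symmetrical decomposition of the paper.
"""
import numpy as np

def fit_2dpca(images, n_components=8):
    """Return the top right-projection vectors of the image covariance matrix."""
    mean = images.mean(axis=0)
    centered = images - mean
    # Image (column) covariance: average over images of A^T A, shape (w, w).
    cov = np.einsum("khw,khv->wv", centered, centered) / len(images)
    _, eigvecs = np.linalg.eigh(cov)                  # eigenvalues in ascending order
    return mean, eigvecs[:, ::-1][:, :n_components]   # keep the leading components

def project(images, mean, components):
    """Feature matrices Y = (A - mean) @ X, one (h, d) matrix per image."""
    return (images - mean) @ components

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    faces = rng.random((40, 112, 92))                 # stand-in for a face database
    mean, comps = fit_2dpca(faces, n_components=8)
    feats = project(faces, mean, comps)
    print(feats.shape)                                # (40, 112, 8)
```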

  11. The image of the body-face: The case of Franz X. Messerschmidt and Bill Viola

    Directory of Open Access Journals (Sweden)

    Maria POPCZYK

    2015-06-01

    Full Text Available In this paper, I am predominantly interested in interpretations of emotional states portrayed in images of the face. In particular, the interpretations which have grown around the series of busts by Franz Xaver Messerschmidt, as well as those which attempt to expound Bill Viola’s video works. I will refer to aspects of physiognomy, artistic practices and aesthetics, in order to show what each of these tells us about our attitude to the body and emotions and what happens to the body while a person is experiencing an emotion. My aim is to demonstrate how the act of depicting the body, regarded as a cognitive process in an artistic medium accompanied by a special kind of aesthetic experience, becomes a means of communication which is capable of conveying a universal message and of allowing us to define our attitude to the body.

  12. The Processing of Human Emotional Faces by Pet and Lab Dogs: Evidence for Lateralization and Experience Effects.

    Science.gov (United States)

    Barber, Anjuli L A; Randi, Dania; Müller, Corsin A; Huber, Ludwig

    2016-01-01

    Of all non-human animals, dogs are very likely the best decoders of human behavior. In addition to a high sensitivity to human attentive status and to ostensive cues, they are able to distinguish between individual human faces and even between human facial expressions. However, so far little is known about how they process human faces and to what extent this is influenced by experience. Here we present an eye-tracking study with dogs from two different living environments and with varying experience with humans: pet and lab dogs. The dogs were shown pictures of familiar and unfamiliar human faces expressing four different emotions. The results, extracted from several different eye-tracking measurements, revealed pronounced differences in the face processing of pet and lab dogs, thus indicating an influence of the amount of exposure to humans. In addition, there was some evidence for influences of both the familiarity and the emotional expression of the face, and strong evidence for a left gaze bias. These findings, together with recent evidence for the dog's ability to discriminate human facial expressions, indicate that dogs are sensitive to some emotions expressed in human faces.

  13. Investigation of New Techniques for Face detection

    OpenAIRE

    Abdallah, Abdallah Sabry

    2007-01-01

    The task of detecting human faces within either a still image or a video frame is one of the most popular object detection problems. For the last twenty years researchers have shown great interest in this problem because it is an essential pre-processing stage for computing systems that process human faces as input data. Example applications include face recognition systems, vision systems for autonomous robots, human computer interaction systems (HCI), surveillance systems, biometric based a...

  14. False memory for face in short-term memory and neural activity in human amygdala.

    Science.gov (United States)

    Iidaka, Tetsuya; Harada, Tokiko; Sadato, Norihiro

    2014-12-03

    Human memory is often inaccurate. Similar to words and figures, new faces are often recognized as seen or studied items in long- and short-term memory tests; however, the neural mechanisms underlying this false memory remain elusive. In a previous fMRI study using morphed faces and a standard false memory paradigm, we found that there was a U-shaped response curve of the amygdala to old, new, and lure items. This indicates that the amygdala is more active in response to items that are salient (hit and correct rejection) compared to items that are less salient (false alarm), in terms of memory retrieval. In the present fMRI study, we determined whether false memory for faces occurs within the short-term memory range (a few seconds), and assessed which neural correlates are involved in veridical and illusory memories. Nineteen healthy participants were scanned by 3T MRI during a short-term memory task using morphed faces. The behavioral results indicated that the occurrence of false memories was within the short-term range. We found that the amygdala displayed a U-shaped response curve to memory items, similar to that observed in our previous study. These results suggest that the amygdala plays a common role in both long- and short-term false memory for faces. We made the following conclusions: First, the amygdala is involved in detecting the saliency of items, in addition to fear, and supports goal-oriented behavior by modulating memory. Second, amygdala activity and response time might be related to a subject's response criterion for similar faces. Copyright © 2014 Elsevier B.V. All rights reserved.

  15. Average Gait Differential Image Based Human Recognition

    Directory of Open Access Journals (Sweden)

    Jinyan Chen

    2014-01-01

    Full Text Available The difference between adjacent frames of human walking contains useful information for human gait identification. Based on this idea, a silhouette-difference-based human gait recognition method, named the average gait differential image (AGDI), is proposed in this paper. The AGDI is generated by accumulating the differences between silhouettes of adjacent frames. The advantage of this method is that, as a feature image, it preserves both the kinetic and the static information of walking. Compared to the gait energy image (GEI), the AGDI is better suited to representing the variation of silhouettes during walking. Two-dimensional principal component analysis (2DPCA) is used to extract features from the AGDI. Experiments on the CASIA dataset show that the AGDI has better identification and verification performance than the GEI. Compared to PCA, 2DPCA is a more efficient feature extraction method with lower memory consumption in gait-based recognition.
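    The AGDI accumulation itself reduces to a few array operations; a minimal sketch follows, with the GEI included for comparison. The silhouette array shape and the preprocessing (alignment, cropping) are assumptions rather than details from the paper.

```python
"""Average gait differential image (AGDI) accumulation, as described above.

Assumptions: `silhouettes` is a NumPy array of shape (n_frames, h, w) holding
aligned binary silhouettes for one gait sequence; alignment and normalization
follow the usual gait-recognition preprocessing, not this particular paper.
"""
import numpy as np

def average_gait_differential_image(silhouettes):
    """Accumulate |frame[t+1] - frame[t]| over the sequence and average."""
    diffs = np.abs(np.diff(silhouettes.astype(float), axis=0))
    return diffs.mean(axis=0)

def gait_energy_image(silhouettes):
    """GEI baseline for comparison: the mean silhouette over the sequence."""
    return silhouettes.astype(float).mean(axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    seq = (rng.random((30, 64, 44)) > 0.5).astype(np.uint8)  # toy silhouettes
    print(average_gait_differential_image(seq).shape, gait_energy_image(seq).shape)
```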

  16. From Parts to Identity: Invariance and Sensitivity of Face Representations to Different Face Halves.

    Science.gov (United States)

    Anzellotti, Stefano; Caramazza, Alfonso

    2016-05-01

    Recognizing the identity of a face is computationally challenging, because it requires distinguishing between similar images depicting different people, while recognizing even very different images depicting the same person. Previous human fMRI studies investigated representations of face identity in the presence of changes in viewpoint and in expression. Despite the importance of holistic processing for face recognition, an investigation of representations of face identity across different face parts is missing. To fill this gap, we investigated representations of face identity and their invariance across different face halves. Information about face identity with invariance across changes in the face half was individuated in the right anterior temporal lobe, indicating this region as the most plausible candidate brain area for the representation of face identity. In a complementary analysis, information distinguishing between different face halves was found to decline along the posterior-to-anterior axis in the ventral stream.

  17. Photothermal image cytometry of human neutrophils

    Science.gov (United States)

    Lapotko, Dmitry

    2001-07-01

    Photothermal imaging, when applied to the study of living cells, provides morpho-functional information about cell populations. In technical terms, the method is complementary to optical microscopy. The photothermal method was used for cell imaging and quantitative studies. Preliminary results of studies on living human neutrophils are presented. Differences between normal and pathological neutrophil populations from the blood of healthy donors and of patients with sarcoidosis and pleuritis are demonstrated.

  18. Chemical Achievers: The Human Face of the Chemical Sciences (by Mary Ellen Bowden)

    Science.gov (United States)

    Kauffman, George B.

    1999-02-01

    Chemical Heritage Foundation: Philadelphia, PA, 1997. viii + 180 pp. 21.6 x 27.8 cm. ISBN 0-941901-15-1. Paper. 20.00 (10.00 for high school teachers who provide documentation). At a 1991 summer workshop sponsored by the Chemical Heritage Foundation and taught by Derek A. Davenport and William B. Jensen, high school and college teachers of introductory chemistry requested a source of pictorial material about famous chemical scientists suitable as a classroom aid. CHF responded by publishing this attractive, inexpensive paperback volume, which reflects the considerable research effort needed to locate appropriate images and to write the biographical essays. Printed on heavy, glossy paper and spiral bound to facilitate conversion to overhead transparencies, it contains 157 images from pictorial collections at CHF and many other institutions on two types of achievers: the historical "greats" most often referred to in introductory courses, and scientists who made contributions in areas of the chemical sciences that are of special relevance to modern life and the career choices students will make. The pictures are intended to provide the "human face" of the book's subtitle- "to point to the human beings who had the insights and made the major advances that [teachers] ask students to master." Thus, for example, Boyle's law becomes less cold and abstract if the student can connect it with the two portraits of the Irish scientist even if his face is topped with a wig. Marie Curie can be seen in the role of wife and mother as well as genius scientist in the photographs of her with her two daughters, one of whom also became a Nobel laureate. And students are reminded of the ubiquity of the contribution of the chemical scientists to all aspects of our everyday life by the stories and pictures of Wallace Hume Carothers' path to nylon, Percy Lavon Julian's work on hormones, and Charles F. Chandler and Rachel Carson's efforts to preserve the environment. In addition to portraits

  19. Human Face Recognition Based on Fisher Discriminant Analysis

    Institute of Scientific and Technical Information of China (English)

    赵丽; 马银雪

    2012-01-01

    Facial features are the most natural and direct biometric characteristics: they are straightforward, friendly, and convenient, and are easily accepted by users. Because of its wide potential applications in surveillance, criminal identification, and human-computer interaction, face recognition has become one of the most active research areas in image processing, pattern recognition, and computer vision. Linear discriminant analysis is one of the most classical and widely used feature extraction methods. In recent years, how to extract Fisher optimal discriminant features in the small-sample-size situation has been a concern of many researchers. This paper describes the application of the Fisher discriminant method to the classification of face image samples. Experimental results on the standard ORL and Yale face databases confirm the effectiveness and stability of the method.
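    A Fisherfaces-style sketch of this kind of approach follows: PCA is applied first as the usual remedy for the small-sample-size problem, then Fisher linear discriminant analysis projects and classifies. The scikit-learn pipeline, the number of PCA components, and the synthetic data are assumptions; the paper's exact procedure is not reproduced.

```python
"""PCA + Fisher LDA (Fisherfaces-style) classification sketch.

Assumptions: scikit-learn is available; X holds flattened face images and y
the person labels; the PCA step is the standard small-sample-size workaround
and is not necessarily the method proposed in the paper above.
"""
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_people, per_person, dim = 10, 10, 10304          # e.g. ORL-sized 112x92 images
X = rng.random((n_people * per_person, dim))       # stand-in for real face data
y = np.repeat(np.arange(n_people), per_person)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.5, stratify=y, random_state=0
)

model = make_pipeline(
    PCA(n_components=30),            # reduce dimensionality below the sample count
    LinearDiscriminantAnalysis(),    # Fisher discriminant projection + classifier
)
model.fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))
```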

  20. Determining human target facing orientation using bistatic radar micro-Doppler signals

    Science.gov (United States)

    Fairchild, Dustin P.; Narayanan, Ram M.

    2014-06-01

    Micro-Doppler radar signals can be used to separate moving human targets from stationary clutter and also to identify and classify human movements. Traditional micro-Doppler radar systems, which use a single-sensor monostatic configuration, suffer from the drawback that only the radial component of the micro-Doppler signal is observed by the radar operator. This reduces the sensitivity of human activity recognition if the movements are not directed toward or away from the radar along its line of sight. In this paper, we propose the use of two bistatic micro-Doppler sensors to overcome this limitation. By using multiple sensors, the orientation of oscillating targets with respect to the radar line of sight can be inferred, thereby providing additional information to the radar operator. This approach can be used to infer the facing direction of the human with respect to the radar beam.

  1. Multiphoton fluorescence lifetime imaging of human hair.

    Science.gov (United States)

    Ehlers, Alexander; Riemann, Iris; Stark, Martin; König, Karsten

    2007-02-01

    In vivo and in vitro multiphoton imaging was used to perform high resolution optical sectioning of human hair by nonlinear excitation of endogenous as well as exogenous fluorophores. Multiphoton fluorescence lifetime imaging (FLIM) based on time-resolved single photon counting and near-infrared femtosecond laser pulse excitation was employed to analyze the various fluorescent hair components. Time-resolved multiphoton imaging of intratissue pigments has the potential (i) to identify endogenous keratin and melanin, (ii) to obtain information on intrahair dye accumulation, (iii) to study bleaching effects, and (iv) to monitor the intratissue diffusion of pharmaceutical and cosmetical components along hair shafts.

  2. The Mobility of the Human Face: More than Just the Musculature.

    Science.gov (United States)

    Burrows, Anne M; Rogers-Vizena, Carolyn R; Li, Ly; Mendelson, Bryan

    2016-12-01

    The human face has the greatest mobility and facial display repertoire among all primates. However, the variables that account for this are not clear. Humans and other anthropoids have remarkably similar mimetic musculature. This suggests that differences among the mimetic muscles alone may not account for the increased mobility and facial display repertoire seen in humans. Furthermore, anthropoids themselves outpace prosimians in these categories: humans > other anthropoids > prosimians. This study was undertaken to clarify the morphological underpinnings of the increased mobility and display repertoire of the human face by investigating the SMAS (the superficial musculo-aponeurotic system), a connective tissue layer enclosing the mimetic musculature located between the skin and deep fascia/periosteum. Full-thickness samples from the face near the zygoma region from the anthropoids Homo sapiens (humans, N = 3), Pan troglodytes (chimpanzees, N = 3), Hylobates muelleri (gibbons, N = 1), and Macaca mulatta (rhesus macaque, N = 3) and the prosimians Tarsius bancanus (tarsiers, N = 1), and Otolemur crassicaudatus (galagos, N = 2) were used. All samples were processed for paraffin-based histology and stained sections were viewed under light microscopy to determine if a SMAS layer could be identified. Results indicate that a SMAS layer was present in all anthropoid species but neither of the prosimian species. This connective tissue layer may be a factor in the increased facial mobility and facial display repertoire present in these species. Anat Rec, 299:1779-1788, 2016. © 2016 Wiley Periodicals, Inc.

  3. I Reach Faster When I See You Look: Gaze Effects in Human-Human and Human-Robot Face-to-Face Cooperation.

    Science.gov (United States)

    Boucher, Jean-David; Pattacini, Ugo; Lelong, Amelie; Bailly, Gerrard; Elisei, Frederic; Fagel, Sascha; Dominey, Peter Ford; Ventre-Dominey, Jocelyne

    2012-01-01

    Human-human interaction in natural environments relies on a variety of perceptual cues. Humanoid robots are becoming increasingly refined in their sensorimotor capabilities, and thus should now be able to manipulate and exploit these social cues in cooperation with their human partners. Previous studies have demonstrated that people follow human and robot gaze, and that it can help them to cope with spatially ambiguous language. Our goal is to extend these findings into the domain of action, to determine how human and robot gaze can influence the speed and accuracy of human action. We report on results from a human-human cooperation experiment demonstrating that an agent's vision of her/his partner's gaze can significantly improve that agent's performance in a cooperative task. We then implement a heuristic capability to generate such gaze cues by a humanoid robot that engages in the same cooperative interaction. The subsequent human-robot experiments demonstrate that a human agent can indeed exploit the predictive gaze of their robot partner in a cooperative task. This allows us to render the humanoid robot more human-like in its ability to communicate with humans. The long term objectives of the work are thus to identify social cooperation cues, and to validate their pertinence through implementation in a cooperative robot. The current research provides the robot with the capability to produce appropriate speech and gaze cues in the context of human-robot cooperation tasks. Gaze is manipulated in three conditions: Full gaze (coordinated eye and head), eyes hidden with sunglasses, and head fixed. We demonstrate the pertinence of these cues in terms of statistical measures of action times for humans in the context of a cooperative task, as gaze significantly facilitates cooperation as measured by human response times.

  4. A Comparative Study of Human Thermal Face Recognition Based on Haar Wavelet Transform and Local Binary Pattern

    Directory of Open Access Journals (Sweden)

    Debotosh Bhattacharjee

    2012-01-01

    Full Text Available Thermal infrared (IR) images focus on changes of temperature distribution on facial muscles and blood vessels. These temperature changes can be regarded as texture features of the images. A comparative study of two face recognition methods working in the thermal spectrum is carried out in this paper. In the first approach, the training images and the test images are processed with the Haar wavelet transform, and the LL band and the average of the LH/HL/HH band subimages are created for each face image. Then a total confidence matrix is formed for each face image by taking a weighted sum of the corresponding pixel values of the LL band and the average band. For LBP feature extraction, each of the face images in the training and test datasets is divided into 161 subimages, each of size 8 × 8 pixels. For each such subimage, LBP features are extracted and then concatenated. PCA is performed separately on each feature set for dimensionality reduction. Finally, two different classifiers, namely a multilayer feed-forward neural network and a minimum distance classifier, are used to classify the face images. The experiments have been performed on the database created at our own laboratory and the Terravic Facial IR Database.

  5. A comparative study of human thermal face recognition based on Haar wavelet transform and local binary pattern.

    Science.gov (United States)

    Bhattacharjee, Debotosh; Seal, Ayan; Ganguly, Suranjan; Nasipuri, Mita; Basu, Dipak Kumar

    2012-01-01

    Thermal infrared (IR) images focus on changes of temperature distribution on facial muscles and blood vessels. These temperature changes can be regarded as texture features of the images. A comparative study of two face recognition methods working in the thermal spectrum is carried out in this paper. In the first approach, the training images and the test images are processed with the Haar wavelet transform, and the LL band and the average of the LH/HL/HH band subimages are created for each face image. Then a total confidence matrix is formed for each face image by taking a weighted sum of the corresponding pixel values of the LL band and the average band. For LBP feature extraction, each of the face images in the training and test datasets is divided into 161 subimages, each of size 8 × 8 pixels. For each such subimage, LBP features are extracted and then concatenated. PCA is performed separately on each feature set for dimensionality reduction. Finally, two different classifiers, namely a multilayer feed-forward neural network and a minimum distance classifier, are used to classify the face images. The experiments have been performed on the database created at our own laboratory and the Terravic Facial IR Database.
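    The two feature pipelines described in the two records above (Haar LL/average-band fusion and block-wise LBP histograms) can be sketched as follows. The fusion weights, the assumed image size, and the use of PyWavelets and scikit-image are illustrative choices, not values or tools named by the authors.

```python
"""Feature extraction along the lines described above (illustrative sketch).

Assumptions: PyWavelets and scikit-image are installed; `face` is a 2-D
grayscale (thermal) image as a NumPy array; the fusion weights are
placeholders, while the 8x8 block size follows the abstract.
"""
import numpy as np
import pywt
from skimage.feature import local_binary_pattern

def haar_confidence_matrix(face, w_ll=0.6, w_avg=0.4):
    """Weighted sum of the Haar LL band and the average of the LH/HL/HH bands."""
    ll, (lh, hl, hh) = pywt.dwt2(face.astype(float), "haar")
    avg = (lh + hl + hh) / 3.0
    return w_ll * ll + w_avg * avg          # the "total confidence matrix"

def blockwise_lbp_features(face, block=8, n_points=8, radius=1):
    """Concatenate LBP histograms computed on non-overlapping block x block tiles."""
    codes = local_binary_pattern(face, n_points, radius, method="uniform")
    n_bins = n_points + 2                   # number of 'uniform' LBP labels
    feats = []
    for r in range(0, codes.shape[0] - block + 1, block):
        for c in range(0, codes.shape[1] - block + 1, block):
            tile = codes[r:r + block, c:c + block]
            hist, _ = np.histogram(tile, bins=n_bins, range=(0, n_bins))
            feats.append(hist / hist.sum())
    return np.concatenate(feats)

if __name__ == "__main__":
    face = np.random.default_rng(0).random((112, 92))   # stand-in thermal image
    print(haar_confidence_matrix(face).shape, blockwise_lbp_features(face).shape)
```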

  6. Microtubule organization within mitotic spindles revealed by serial block face scanning electron microscopy and image analysis.

    Science.gov (United States)

    Nixon, Faye M; Honnor, Thomas R; Clarke, Nicholas I; Starling, Georgina P; Beckett, Alison J; Johansen, Adam M; Brettschneider, Julia A; Prior, Ian A; Royle, Stephen J

    2017-05-15

    Serial block face scanning electron microscopy (SBF-SEM) is a powerful method to analyze cells in 3D. Here, working at the resolution limit of the method, we describe a correlative light-SBF-SEM workflow to resolve microtubules of the mitotic spindle in human cells. We present four examples of uses for this workflow that are not practical by light microscopy and/or transmission electron microscopy. First, distinguishing closely associated microtubules within K-fibers; second, resolving bridging fibers in the mitotic spindle; third, visualizing membranes in mitotic cells, relative to the spindle apparatus; and fourth, volumetric analysis of kinetochores. Our workflow also includes new computational tools for exploring the spatial arrangement of microtubules within the mitotic spindle. We use these tools to show that microtubule order in mitotic spindles is sensitive to the level of TACC3 on the spindle. © 2017. Published by The Company of Biologists Ltd.

  7. Image reconstruction techniques for high resolution human brain PET imaging

    Energy Technology Data Exchange (ETDEWEB)

    Comtat, C.; Bataille, F.; Sureau, F. [Service Hospitalier Frederic Joliot (CEA/DSV/DRM), 91 - Orsay (France)

    2006-07-01

    High resolution PET imaging is now a well established technique not only for small animal but also for human brain studies. The ECAT HRRT brain PET scanner (Siemens Molecular Imaging) is characterized by an effective isotropic spatial resolution of 2.5 mm, about a factor of 2 better than state-of-the-art whole-body clinical PET scanners. Although the absolute sensitivity of the HRRT (6.5%) for a point source in the center of the field-of-view is increased relative to whole-body scanners (typically 4.5%) thanks to a larger co-polar aperture, the sensitivity in terms of volumetric resolution (75 mm³ at best for whole-body scanners and 16 mm³ for the HRRT) is much lower. This constraint has an impact on the performance of image reconstruction techniques, in particular for dynamic studies. Standard reconstruction methods used with clinical whole-body PET scanners are not optimal for this application. Specific methods had to be developed, based on fully 3D iterative techniques. Different refinements can be added in the reconstruction process to improve image quality: more accurate modeling of the acquisition system, more accurate modeling of the statistical properties of the acquired data, and anatomical side information to guide the reconstruction. We will present the performance of these developments for neuronal imaging in humans. (author)
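    The fully 3D iterative techniques mentioned above are commonly of the MLEM/OSEM family; the toy sketch below shows one MLEM loop with a random system matrix standing in for a real scanner model. Everything about the setup (matrix size, counts, number of iterations) is an assumption for illustration and does not reproduce the authors' specific modeling.

```python
"""Toy MLEM iteration to illustrate the iterative reconstruction loop.

Assumptions: MLEM is used as a stand-in for the "fully 3D iterative
techniques" mentioned above; A is a random non-negative system matrix rather
than a real scanner model; y are Poisson-distributed counts.
"""
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_bins = 64, 256
A = rng.random((n_bins, n_voxels))          # toy system (projection) matrix
x_true = 50.0 * rng.random(n_voxels)        # unknown activity distribution
y = rng.poisson(A @ x_true)                 # measured counts

x = np.ones(n_voxels)                       # strictly positive initial image
sensitivity = A.T @ np.ones(n_bins)         # back-projection of ones
for _ in range(100):
    expected = A @ x                        # forward projection of current image
    ratio = y / np.maximum(expected, 1e-12) # measured / expected counts
    x *= (A.T @ ratio) / sensitivity        # multiplicative MLEM update

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```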

  8. Optimal Feature Extraction Using Greedy Approach for Random Image Components and Subspace Approach in Face Recognition

    Institute of Scientific and Technical Information of China (English)

    Mathu Soothana S.Kumar Retna Swami; Muneeswaran Karuppiah

    2013-01-01

    An innovative and uniform framework based on a combination of Gabor wavelets with principal component analysis (PCA) and multiple discriminant analysis (MDA) is presented in this paper. In this framework, features are extracted from the optimal random image components using a greedy approach. These feature vectors are then projected to subspaces for dimensionality reduction, which is used for solving linear problems. The design of the Gabor filters, PCA and MDA are crucial processes used for facial feature extraction. The FERET, ORL and YALE face databases are used to generate the results. Experiments show that optimal random image component selection (ORICS) plus MDA outperforms ORICS alone and subspace projection approaches such as ORICS plus PCA. Our method achieves 96.25%, 99.44% and 100% recognition accuracy on the FERET, ORL and YALE databases with 30% training data, respectively. This is a considerably improved performance compared with other standard methodologies described in the literature.
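    A partial sketch of such a pipeline is given below: a Gabor filter bank produces feature vectors that are then projected to a subspace with PCA. The greedy optimal random image component selection (ORICS) and the MDA stage of the paper are omitted, and the filter-bank frequencies, orientations, and down-sampling factor are illustrative assumptions.

```python
"""Gabor filter-bank features followed by PCA (partial sketch of the pipeline).

Assumptions: scikit-image and scikit-learn are installed; the ORICS and MDA
stages of the paper are not reproduced; filter-bank parameters are
illustrative choices only.
"""
import numpy as np
from skimage.filters import gabor
from sklearn.decomposition import PCA

def gabor_features(image, frequencies=(0.1, 0.2, 0.3), n_orientations=4):
    """Concatenate down-sampled Gabor magnitude responses over the filter bank."""
    feats = []
    for f in frequencies:
        for k in range(n_orientations):
            real, imag = gabor(image, frequency=f, theta=k * np.pi / n_orientations)
            magnitude = np.hypot(real, imag)
            feats.append(magnitude[::4, ::4].ravel())   # coarse down-sampling
    return np.concatenate(feats)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    faces = rng.random((20, 64, 64))                    # toy face images
    X = np.array([gabor_features(f) for f in faces])
    X_reduced = PCA(n_components=10).fit_transform(X)   # subspace projection
    print(X.shape, "->", X_reduced.shape)
```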

  9. Detecting Human Faces Based on HCbCr

    Institute of Scientific and Technical Information of China (English)

    赵怀勋; 徐锋

    2011-01-01

    An HCbCr skin-color model is proposed based on an analysis of the HSV and YCbCr subspaces. Face detection is then achieved through mathematical morphology processing and connectivity analysis. Experiments compare the proposed method with several other skin-color-based face detection methods and confirm that the HCbCr-based face detection method is highly robust to illumination and facial expression and achieves a high detection success rate.
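    A minimal sketch of such a combined H/Cb/Cr skin mask with morphological cleanup and connectivity analysis is shown below. The threshold ranges are common illustrative skin-color bounds, not the values derived in the paper, and OpenCV is an assumed dependency.

```python
"""Skin-color face-candidate detection combining H, Cb and Cr (sketch).

Assumptions: OpenCV is installed; the threshold ranges below are common
illustrative skin-color bounds, not the ones used in the paper; the input is a
BGR image as loaded by cv2.imread.
"""
import cv2
import numpy as np

def skin_candidates(bgr):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)       # H in [0, 179] in OpenCV
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)   # note the Y, Cr, Cb channel order

    h, cr, cb = hsv[:, :, 0], ycrcb[:, :, 1], ycrcb[:, :, 2]
    skin = (h <= 25) & (cr >= 133) & (cr <= 173) & (cb >= 77) & (cb <= 127)
    mask = skin.astype(np.uint8) * 255

    # Mathematical morphology + connectivity analysis, as outlined above.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    # Keep reasonably large components as face candidates (area threshold assumed).
    boxes = [stats[i, :4] for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] > 500]
    return mask, boxes

if __name__ == "__main__":
    image = np.full((240, 320, 3), 200, np.uint8)    # placeholder image
    mask, boxes = skin_candidates(image)
    print(len(boxes), "candidate regions")
```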

  10. Face Recognition using Eigenfaces and Neural Networks

    Directory of Open Access Journals (Sweden)

    Mohamed Rizon

    2006-01-01

    Full Text Available In this study, we develop a computational model to identify the face of an unknown person by applying eigenfaces. Eigenfaces are used to extract the basic features of human face images. Face images are then projected onto the eigenfaces to obtain distinctive feature vectors. These feature vectors can be used to identify an unknown face by means of a backpropagation neural network that uses Euclidean distance for classification and recognition. The ORL database used for this investigation consists of 400 face images of 40 people and was used for learning. The eigenface computation, including an implementation of Jacobi's method for eigenvalues and eigenvectors, has been performed. Classification and recognition using the backpropagation neural network showed impressive results in classifying face images.
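    The eigenface projection step can be sketched in a few lines of NumPy; here a nearest-neighbour match in eigenspace stands in for the backpropagation network of the paper (both rely on Euclidean distance in the projected space). The image sizes and component count are assumptions.

```python
"""Eigenfaces sketch: PCA of flattened face images via SVD.

Assumptions: `gallery` is a NumPy array of flattened grayscale face images;
nearest-neighbour matching in eigenspace replaces the backpropagation network
used in the paper above.
"""
import numpy as np

def fit_eigenfaces(faces, n_components=20):
    mean = faces.mean(axis=0)
    # Economy SVD of the mean-centered data: rows of vt are the eigenfaces.
    _, _, vt = np.linalg.svd(faces - mean, full_matrices=False)
    return mean, vt[:n_components]

def project(faces, mean, eigenfaces):
    return (faces - mean) @ eigenfaces.T

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gallery = rng.random((40, 112 * 92))             # stand-in for enrolled faces
    probe = gallery[7] + 0.01 * rng.standard_normal(112 * 92)

    mean, eig = fit_eigenfaces(gallery, n_components=20)
    g_proj = project(gallery, mean, eig)
    p_proj = project(probe[None, :], mean, eig)
    match = np.argmin(np.linalg.norm(g_proj - p_proj, axis=1))
    print("closest gallery identity:", match)        # expected: 7
```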

  11. Comparative Study of Statistical Skin Detection Algorithms for Sub-Continental Human Images

    CERN Document Server

    Tabassum, Mirza Rehenuma; Kamal, Md Mostafa; Muctadir, Hossain Muhammad; Ibrahim, Muhammad; Shakir, Asif Khan; Imran, Asif; Islamm, Saiful; Rabbani, Md Golam; Khaled, Shah Mostafa; Islam, Md Saiful; Begum, Zerina; 10.3923/itj.2010.811.817

    2010-01-01

    Object detection has been a focus of research in human-computer interaction. Skin area detection has been a key to different recognition tasks such as face recognition, human motion detection, and pornographic and nude image prediction. Most of the research done in the field of skin detection has been trained and tested on human images of African, Mongolian and Anglo-Saxon ethnic origins. Although there are several intensity-invariant approaches to skin detection, the skin color of Indian sub-continentals has not been studied separately. The approach of this research is to make a comparative study between three image segmentation approaches using Indian sub-continental human images, to optimize the detection criteria, and to find efficient parameters to detect the skin area in these images. The experiments show that an HSV color model based approach to Indian sub-continental skin detection is more suitable, with a considerable success rate of 91.1% true positives and 88.1% true negatives.

  12. 2D Methods for pose invariant face recognition

    CSIR Research Space (South Africa)

    Mokoena, Ntabiseng

    2016-12-01

    Full Text Available The ability to recognise face images under random pose is a task that is done effortlessly by human beings. However, for a computer system, recognising face images under varying poses still remains an open research area. Face recognition across pose...

  13. Human gene therapy and imaging: cardiology

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Joseph C. [Stanford University School of Medicine, Department of Medicine, Stanford, CA (United States); Yla-Herttuala, Seppo [University of Kuopio, A.I.Virtanen Institute, Kuopio (Finland)

    2005-12-01

    This review discusses the basics of cardiovascular gene therapy, the results of recent human clinical trials, and the rapid progress in imaging techniques in cardiology. Improved understanding of the molecular and genetic basis of coronary heart disease has made gene therapy a potential new alternative for the treatment of cardiovascular diseases. Experimental studies have established the proof-of-principle that gene transfer to the cardiovascular system can achieve therapeutic effects. First human clinical trials provided initial evidence of feasibility and safety of cardiovascular gene therapy. However, phase II/III clinical trials have so far been rather disappointing and one of the major problems in cardiovascular gene therapy has been the inability to verify gene expression in the target tissue. New imaging techniques could significantly contribute to the development of better gene therapeutic approaches. Although the exact choice of imaging modality will depend on the biological question asked, further improvement in image resolution and detection sensitivity will be needed for all modalities as we move from imaging of organs and tissues to imaging of cells and genes. (orig.)

  14. Age- and fatigue-related markers of human faces: an eye-tracking study.

    Science.gov (United States)

    Nguyen, Huy Tu; Isaacowitz, Derek M; Rubin, Peter A D

    2009-02-01

    To investigate the facial cues that are used when making judgments about how old or tired a face appears. Experimental study. Forty-seven subjects: 15 male and 32 female participants, ranging from age 18 to 30 years. Forty-eight full-face digital images of "normal-appearing" patients were collected and uploaded to an eye-tracking system. We used an Applied Science Laboratories (Bedford, MA) Eye Tracker device associated with gaze-tracking software to record and calculate the gaze and fixation of the participants' left eye as they viewed images on a computer screen. After seeing each picture, participants were asked to assess the age of the face in the picture by making a selection on a rating scale divided into 5-year intervals; for fatigue judgments we used a rating scale from 1 (not tired) to 7 (most tired). The main outcome measure was gaze fixation, as assessed by tracking the eye movements of participants as they viewed full-face digital pictures. For fatigue judgments, participants spent the most time looking at the eye region (31.81%), then the forehead and the nose regions (14.99% and 14.12%, respectively); in the eye region, participants looked most at the brows (13.1%) and lower lids (9.4%). Participants spent more time looking at the cheeks on faces they rated as least tired than they did on those they rated as most tired (t = 2.079, P < .05). For age judgments, the eye region (27.22%) and then the forehead (15.71%) and the nose (14.30%) had the highest frequencies of interest; in the eye region, the brows and lower lids also had the highest frequencies of interest (11.40% and 8.90%, respectively). Participants looked more at the brows (t = -2.63, P < .05) in the eye region. Consequently, these results suggest that aesthetic or functional surgery to the eye region may be one of the most effective interventions in enhancing the appearance of an individual. The author(s) have no proprietary or commercial interest in any materials discussed in this article.

  15. A comparison of student performance in human development classes using three different modes of delivery: Online, face-to-face, and combined

    Science.gov (United States)

    Kalsow, Susan Christensen

    1999-11-01

    The problem. The dual purposes of this research were to determine if there is a difference in student performance in three Human Development classes when the modes of delivery are different and to analyze student perceptions of using Web-based learning as all or part of their course experience. Procedures. Data for this study were collected from three Human Development courses taught at Drake University. Grades from five essays, projects, and overall grades were used in the three classes and analyzed using a single factor analysis of variance to determine if there was a significant difference. Content analysis was used on the evaluation comments of the participants in the online and combined classes to determine their perceptions of Web-based learning. Findings. The single factor analysis of variance measuring student performance showed no significant difference among the online, face-to-face, and combined scores at the .05 level of significance; however, the difference was significant at the .06 level. The content analysis of the online and combined courses showed the three major strengths of learning totally or partly online to be increased comfort in using the computer, the quality of the overall experience, and convenience in terms of increased access to educational opportunities. The barriers included lack of human interaction and access to the professor. Conclusions. The study indicates that Web-based learning is a viable option for postsecondary educational delivery in terms of student performance and learning. On average, performance is at least as good as performance in traditional face-to-face classrooms. Improved performance, however, is contingent on adequate access to equipment, faculty skill in teaching using a new mode of delivery, and the personality of the student. The convenient access to educational opportunities and becoming more comfortable with technology are benefits that were important to these two groups. Web-based learning is not for everyone

  16. Survey on Face Detection Methods in Gray-Level Still Images

    Institute of Scientific and Technical Information of China (English)

    唐伟; 陈兆乾; 吴建鑫; 周志华

    2002-01-01

    In the past twenty years, the technique of face detection and face recognition, as one of the important research areas of computer vision and image understanding, has attracted more and more attention. In general, face detection in gray-level still images is more difficult than in color images. This paper therefore briefly surveys the area and indicates some issues for further exploration.

  17. Face and eye scanning in gorillas (Gorilla gorilla), orangutans (Pongo abelii), and humans (Homo sapiens): unique eye-viewing patterns in humans among hominids.

    Science.gov (United States)

    Kano, Fumihiro; Call, Josep; Tomonaga, Masaki

    2012-11-01

    Because the faces and eyes of primates convey a rich array of social information, the way in which primates view faces and eyes reflects species-specific strategies for facial communication. How are humans and closely related species such as great apes similar and different in their viewing patterns for faces and eyes? Following previous studies comparing chimpanzees (Pan troglodytes) with humans (Homo sapiens), this study used the eye-tracking method to directly compare the patterns of face and eye scanning by humans, gorillas (Gorilla gorilla), and orangutans (Pongo abelii). Human and ape participants freely viewed pictures of whole bodies and full faces of conspecifics and allospecifics under the same experimental conditions. All species were strikingly similar in that they viewed predominantly faces and eyes. No particular difference was identified between gorillas and orangutans, and they also did not differ from the chimpanzees tested in previous studies. However, humans were somewhat different from apes, especially with respect to prolonged eye viewing. We also examined how species-specific facial morphologies, such as the male flange of orangutans and the black-white contrast of human eyes, affected viewing patterns. Whereas the male flange of orangutans affected viewing patterns, the color contrast of human eyes did not. Humans showed prolonged eye viewing independently of the eye color of presented faces, indicating that this pattern is internally driven rather than stimulus dependent. Overall, the results show general similarities among the species and also identify unique eye-viewing patterns in humans.

  18. Thresholding magnetic resonance images of human brain

    Institute of Scientific and Technical Information of China (English)

    Qing-mao HU; Wieslaw L NOWINSKI

    2005-01-01

    In this paper, methods are proposed and validated to determine low and high thresholds to segment out gray matter and white matter for MR images of different pulse sequences of human brain. First, a two-dimensional reference image is determined to represent the intensity characteristics of the original three-dimensional data. Then a region of interest of the reference image is determined where brain tissues are present. The non-supervised fuzzy c-means clustering is employed to determine: the threshold for obtaining head mask, the low threshold for T2-weighted and PD-weighted images, and the high threshold for T1-weighted, SPGR and FLAIR images. Supervised range-constrained thresholding is employed to determine the low threshold for T1-weighted, SPGR and FLAIR images. Thresholding based on pairs of boundary pixels is proposed to determine the high threshold for T2- and PD-weighted images. Quantification against public data sets with various noise and inhomogeneity levels shows that the proposed methods can yield segmentation robust to noise and intensity inhomogeneity. Qualitatively the proposed methods work well with real clinical data.
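    The unsupervised fuzzy c-means step applied to intensities can be sketched as follows; this is the textbook 1-D FCM update, and the synthetic intensity mixture, cluster count, and threshold placement between adjacent centers are illustrative assumptions rather than the paper's full reference-image and range-constrained procedure.

```python
"""1-D fuzzy c-means on image intensities, used to place a threshold (sketch).

Assumptions: textbook FCM applied to a sample of gray levels; the
reference-image selection, head-mask step and range-constrained refinements
described above are not reproduced.
"""
import numpy as np

def fuzzy_c_means_1d(x, n_clusters=3, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, n_clusters, replace=False).astype(float)
    for _ in range(n_iter):
        # Membership update: u_ij proportional to 1 / d_ij^(2/(m-1)), normalized per sample.
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u = 1.0 / (d ** (2.0 / (m - 1.0)))
        u /= u.sum(axis=1, keepdims=True)
        # Center update: weighted mean with weights u^m.
        w = u ** m
        centers = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)
    return np.sort(centers)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Synthetic intensities standing in for background, gray matter, white matter.
    x = np.concatenate([rng.normal(20, 5, 3000),
                        rng.normal(90, 10, 3000),
                        rng.normal(150, 10, 3000)])
    c = fuzzy_c_means_1d(x, n_clusters=3)
    low_threshold = (c[0] + c[1]) / 2.0       # midpoint between adjacent centers
    print("centers:", np.round(c, 1), "low threshold:", round(low_threshold, 1))
```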

  19. Finger and face representations in the ipsilateral precentral motor areas in humans

    OpenAIRE

    Hanakawa, Takashi; Parikh, Sachin; Bruno, Michiko K.; Hallett, Mark

    2004-01-01

    Many human neuroimaging studies reported activity in the precentral gyrus (PcG) ipsilateral to the side of hand movements. This activity has been interpreted as the part of the primary motor cortex (M1) that controls bilateral or ipsilateral hand movements. For the better understanding of hand ipsilateral-PcG activity, we performed a functional MRI experiment in 8 healthy right-handed adults. Behavioral tasks involved hand or lower face movements on each side, or motor imagery of the same mov...

  20. MALDI-MS-imaging of whole human lens capsule.

    Science.gov (United States)

    Ronci, Maurizio; Sharma, Shiwani; Chataway, Tim; Burdon, Kathryn P; Martin, Sarah; Craig, Jamie E; Voelcker, Nicolas H

    2011-08-05

    The ocular lens capsule is a smooth, transparent basement membrane that encapsulates the lens and is composed of a rigid network of interacting structural proteins and glycosaminoglycans. During cataract surgery, the anterior lens capsule is routinely removed in the form of a circular disk. We considered that the excised capsule could be easily prepared for matrix-assisted laser desorption/ionization time-of-flight mass spectrometry imaging (MALDI-MSI) analysis. MALDI-MSI is a powerful tool to elucidate the spatial distribution of small molecules, peptides, and proteins within tissues. Here, we apply this molecular imaging technique to analyze the freshly excised human lens capsule en face. We demonstrate that novel information about the distribution of proteins by MALDI-MSI can be obtained from this highly compact connective tissue, having no evident histo-morphological characteristics. Trypsin digestion carried out on-tissue is shown to improve MALDI-MSI analysis of human lens capsules and affords high repeatability. Most importantly, MALDI-MSI analysis reveals a concentric distribution pattern of proteins such as apolipoprotein E (ApoE) and collagen IV alpha-1 on the anterior surface of surgically removed lens capsule, which may indicate direct or indirect effects of environmental and mechanical stresses on the human ocular lens.

  1. Characterization and recognition of mixed emotional expressions in thermal face image

    Science.gov (United States)

    Saha, Priya; Bhattacharjee, Debotosh; De, Barin K.; Nasipuri, Mita

    2016-05-01

    Facial expressions in infrared imaging have been introduced to solve the problem of illumination, which is an integral constituent of visual imagery. The paper investigates the facial skin temperature distribution for mixed thermal facial expressions in our own face database, in which six expressions are basic and the remaining 12 are mixtures of those basic expressions. Temperature analysis has been performed on three facial regions of interest (ROIs): periorbital, supraorbital and mouth. The temperature variability of the ROIs in different expressions has been measured using statistical parameters. The temperature variation measurements in the ROIs of a particular expression form a vector, which is later used in the recognition of mixed facial expressions. Investigations show that facial features in mixed facial expressions can be characterized by positive-emotion-induced facial features and negative-emotion-induced facial features. The supraorbital region is a useful facial region that can differentiate basic expressions from mixed expressions. Analysis and interpretation of mixed expressions have been conducted with the help of box and whisker plots. A facial region containing a mixture of two expressions generally induces less temperature change than the corresponding facial region for basic expressions.

  2. High-accuracy and robust face recognition system based on optical parallel correlator using a temporal image sequence

    Science.gov (United States)

    Watanabe, Eriko; Ishikawa, Mami; Ohta, Maiko; Kodate, Kashiko

    2005-09-01

    Face recognition is used in a wide range of security systems, such as monitoring credit card use, searching for individuals with street cameras via the Internet, and maintaining immigration control. There are still many technical subjects under study. For instance, the number of images that can be stored is limited under the current system, and the rate of recognition must be improved to account for photo shots taken at different angles under various conditions. We implemented a fully automatic Fast Face Recognition Optical Correlator (FARCO) system by using a 1000 frame/s optical parallel correlator designed and assembled by us. Operational speed for the 1:N (i.e. matching a pair of images among N, where N refers to the number of images in the database) identification experiment (4000 face images) amounts to less than 1.5 seconds, including the pre/post processing. From trial 1:N identification experiments using FARCO, we acquired low error rates of 2.6% False Reject Rate and 1.3% False Accept Rate. By making the most of the high-speed data-processing capability of this system, much more robustness can be achieved for various recognition conditions when large-category data are registered for a single person. We propose a face recognition algorithm for the FARCO that employs a temporal sequence of moving images. Applying this algorithm to natural postures, we obtained a recognition rate twice as high as that of our conventional system. The system has high potential for future use in a variety of applications, such as searching for criminal suspects using street and airport video cameras, registering babies at hospitals, or handling very large numbers of images in a database.

  3. Matching novel face and voice identity using static and dynamic facial images.

    Science.gov (United States)

    Smith, Harriet M J; Dunn, Andrew K; Baguley, Thom; Stacey, Paula C

    2016-04-01

    Research investigating whether faces and voices share common source identity information has offered contradictory results. Accurate face-voice matching is consistently above chance when the facial stimuli are dynamic, but not when the facial stimuli are static. We tested whether procedural differences might help to account for the previous inconsistencies. In Experiment 1, participants completed a sequential two-alternative forced choice matching task. They either heard a voice and then saw two faces or saw a face and then heard two voices. Face-voice matching was above chance when the facial stimuli were dynamic and articulating, but not when they were static. In Experiment 2, we tested whether matching was more accurate when faces and voices were presented simultaneously. The participants saw two face-voice combinations, presented one after the other. They had to decide which combination was the same identity. As in Experiment 1, only dynamic face-voice matching was above chance. In Experiment 3, participants heard a voice and then saw two static faces presented simultaneously. With this procedure, static face-voice matching was above chance. The overall results, analyzed using multilevel modeling, showed that voices and dynamic articulating faces, as well as voices and static faces, share concordant source identity information. It seems, therefore, that above-chance static face-voice matching is sensitive to the experimental procedure employed. In addition, the inconsistencies in previous research might depend on the specific stimulus sets used; our multilevel modeling analyses show that some people look and sound more similar than others.

  4. Infants' ability to respond to depth from the retinal size of human faces: comparing monocular and binocular preferential-looking.

    Science.gov (United States)

    Tsuruhara, Aki; Corrow, Sherryse; Kanazawa, So; Yamaguchi, Masami K; Yonas, Albert

    2014-11-01

    To examine sensitivity to pictorial depth cues in young infants (4 and 5 months-of-age), we compared monocular and binocular preferential looking to a display on which two faces were equidistantly presented and one was larger than the other, depicting depth from the size of human faces. Because human faces vary little in size, the correlation between retinal size and distance can provide depth information. As a result, adults perceive a larger face as closer than a smaller one. Although binocular information for depth provided information that the faces in our display were equidistant, under monocular viewing, no such information was provided. Rather, the size of the faces indicated that one was closer than the other. Infants are known to look longer at apparently closer objects. Therefore, we hypothesized that infants would look longer at a larger face in the monocular than in the binocular condition if they perceived depth from the size of human faces. Because the displays were identical in the two conditions, any difference in looking-behavior between monocular and binocular viewing indicated sensitivity to depth information. Results showed that 5-month-old infants preferred the larger, apparently closer, face in the monocular condition compared to the binocular condition when static displays were presented. In addition, when presented with a dynamic display, 4-month-old infants showed a stronger 'closer' preference in the monocular condition compared to the binocular condition. This was not the case when the faces were inverted. These results suggest that even 4-month-old infants respond to depth information from a depth cue that may require learning, the size of faces. Copyright © 2014 Elsevier Inc. All rights reserved.

  5. Infants’ ability to respond to depth from the retinal size of human faces: Comparing monocular and binocular preferential-looking

    Science.gov (United States)

    Tsuruhara, Aki; Corrow, Sherryse; Kanazawa, So; Yamaguchi, Masami K.; Yonas, Albert

    2014-01-01

    To examine sensitivity to pictorial depth cues in young infants (4 and 5 months-of-age), we compared monocular and binocular preferential looking to a display on which two faces were equidistantly presented and one was larger than the other, depicting depth from the size of human faces. Because human faces vary little in size, the correlation between retinal size and distance can provide depth information. As a result, adults perceive a larger face as closer than a smaller one. Although binocular information for depth provided information that the faces in our display were equidistant, under monocular viewing, no such information was provided. Rather, the size of the faces indicated that one was closer than the other. Infants are known to look longer at apparently closer objects. Therefore, we hypothesized that infants would look longer at a larger face in the monocular than in the binocular condition if they perceived depth from the size of human faces. Because the displays were identical in the two conditions, any difference in looking-behavior between monocular and binocular viewing indicated sensitivity to depth information. Results showed that 5-month-old infants preferred the larger, apparently closer, face in the monocular condition compared to the binocular condition when static displays were presented. In addition, when presented with a dynamic display, 4-month-old infants showed a stronger ‘closer’ preference in the monocular condition compared to the binocular condition. This was not the case when the faces were inverted. These results suggest that even 4-month-old infants respond to depth information from a depth cue that may require learning, the size of faces. PMID:25113916

  6. Research on automatic human chromosome image analysis

    Science.gov (United States)

    Ming, Delie; Tian, Jinwen; Liu, Jian

    2007-11-01

    Human chromosome karyotyping is one of the essential tasks in cytogenetics, especially in genetic syndrome diagnoses. In this paper, an automatic procedure for human chromosome image analysis is introduced. According to the different states of touching and overlapping chromosomes, several segmentation methods are proposed to achieve the best results. The medial axis is extracted by the middle point algorithm. The chromosome band is enhanced by an algorithm based on multiscale B-spline wavelets; band features are extracted from the average gray profile, gradient profile and shape profile, and calculated using WDD (Weighted Density Distribution) descriptors. A multilayer classifier is used for classification. Experimental results demonstrate that the algorithms perform well.
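    A small sketch of medial-axis extraction and a simple intensity profile is given below, using scikit-image's skeletonization as a stand-in for the middle point algorithm; the synthetic blob, the plain column-wise profile, and the omission of the B-spline wavelet enhancement and WDD descriptors are all simplifying assumptions.

```python
"""Medial axis and a simple density profile for a chromosome-like blob (sketch).

Assumptions: scikit-image is available; medial_axis stands in for the paper's
middle point algorithm, and the column-wise mean below is a plain profile,
not the WDD descriptors.
"""
import numpy as np
from skimage.morphology import medial_axis

if __name__ == "__main__":
    # Synthetic elongated binary object standing in for a segmented chromosome.
    mask = np.zeros((60, 200), dtype=bool)
    mask[20:40, 10:190] = True

    skeleton = medial_axis(mask)                 # skeletal (medial-axis) pixels
    axis_points = np.argwhere(skeleton)
    print("medial-axis pixels:", len(axis_points))

    # Average gray profile along the object (toy grayscale image).
    gray = mask.astype(float) * 0.8
    profile = gray[:, 10:190].mean(axis=0)
    print("profile length:", profile.shape[0])
```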

  7. Using Frogs Faces to Dissect the Mechanisms Underlying Human Orofacial Defects

    Science.gov (United States)

    Dickinson, Amanda J.G.

    2016-01-01

    In this review I discuss how Xenopus laevis is an effective model to dissect the mechanisms underlying orofacial defects. This species has been particularly useful in studying the understudied structures of the developing face including the embryonic mouth and primary palate. The embryonic mouth is the first opening between the foregut and the environment and is critical for adult mouth development. The final step in embryonic mouth formation is the perforation of a thin layer of tissue covering the digestive tube called the buccopharyngeal membrane. When this tissue does not perforate in humans it can pose serious health risks for the fetus and child. The primary palate forms just dorsal to the embryonic mouth and in non-amniotes it functions as the roof of the adult mouth. Defects in the primary palate result in a median oral cleft that appears similar across the vertebrates. In humans, these median clefts are often severe and surgically difficult to repair. Xenopus has several qualities that make it advantageous for craniofacial research. The free living embryo has an easily accessible face and we have also developed several new tools to analyze the development of the region. Further, Xenopus is readily amenable to chemical screens allowing us to uncover novel gene-environment interactions during orofacial development, as well as to define underlying mechanisms governing such interactions. In conclusion, we are utilizing Xenopus in new and innovative ways to contribute to craniofacial research. PMID:26778163

  8. Patterns of Visual Attention to Faces and Objects in Autism Spectrum Disorder

    Science.gov (United States)

    McPartland, James C.; Webb, Sara Jane; Keehn, Brandon; Dawson, Geraldine

    2011-01-01

    This study used eye-tracking to examine visual attention to faces and objects in adolescents with autism spectrum disorder (ASD) and typical peers. Point of gaze was recorded during passive viewing of images of human faces, inverted human faces, monkey faces, three-dimensional curvilinear objects, and two-dimensional geometric patterns.…

  9. Fast optical imaging of human brain function

    Directory of Open Access Journals (Sweden)

    Gabriele Gratton

    2010-06-01

    Full Text Available Great advancements in brain imaging during the last few decades have opened a large number of new possibilities for neuroscientists. The most dominant methodologies (electrophysiological and magnetic resonance-based methods) emphasize temporal and spatial information, respectively. However, theorizing about brain function has recently emphasized the importance of rapid (within 100 ms or so) interactions between different elements of complex neuronal networks. Fast optical imaging, and in particular the event-related optical signal (EROS), a technology that has emerged over the last 15 years, may provide descriptions of localized (to the sub-cm level) brain activity with a temporal resolution of less than 100 ms. The main limitations of EROS are its limited penetration, which allows us to image cortical structures no deeper than 3 cm from the surface of the head, and its low signal-to-noise ratio. Advantages include the fact that EROS is compatible with most other imaging methods, including electrophysiological, magnetic resonance, and trans-cranial magnetic stimulation techniques, with which it can be recorded concurrently. In this paper we present a summary of the research that has been conducted so far on fast optical imaging, including evidence for the possibility of recording neuronal signals with this method, the properties of the signals, and various examples of applications to the study of human cognitive neuroscience. Extant issues, controversies, and possible future developments are also discussed.

  10. Thermal imaging to detect physiological indicators of stress in humans

    Science.gov (United States)

    Cross, Carl B.; Skipper, Julie A.; Petkie, Douglas T.

    2013-05-01

    Real-time, stand-off sensing of human subjects to detect emotional state would be valuable in many defense, security and medical scenarios. We are developing a multimodal sensor platform that incorporates high-resolution electro-optical and mid-wave infrared (MWIR) cameras and a millimeter-wave radar system to identify individuals who are psychologically stressed. Recent experiments have aimed to: 1) assess responses to physical versus psychological stressors; 2) examine the impact of topical skin products on thermal signatures; and 3) evaluate the fidelity of vital signs extracted from thermal imagery and radar signatures. Registered image and sensor data were collected as subjects (n=32) performed mental and physical tasks. In each image, the face was segmented into 29 non-overlapping segments based on fiducial points automatically output by our facial feature tracker. Image features were defined that facilitated discrimination between psychological and physical stress states. To test the ability to intentionally mask thermal responses indicative of anxiety or fear, subjects applied one of four topical skin products to one half of their face before performing tasks. Finally, we evaluated the performance of two non-contact techniques for detecting respiration and heart rate: chest displacement extracted from the radar signal, and temperature fluctuations extracted from the MWIR imagery at the nose tip and at regions near superficial arteries, respectively. Our results are very satisfactory: classification accuracy for physical versus psychological stressors is consistently greater than 90%, thermal masking was almost always ineffective, and accurate heart and respiration rates are detectable in both thermal and radar signatures.
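
    The classification step described above could be prototyped along the following lines; this is a hedged sketch only, with the per-segment thermal feature matrix, labels, and file names assumed rather than taken from the study, and a generic RBF support vector machine standing in for whatever classifier the authors used.

      # Hedged sketch of classifying physical vs. psychological stress from
      # per-segment thermal features; X, y and the file names are assumptions.
      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      X = np.load("thermal_segment_features.npy")   # (n_trials, n_features), hypothetical
      y = np.load("stressor_labels.npy")            # 0 = physical, 1 = psychological, hypothetical

      clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
      print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())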

  11. Age Dependent Face Recognition using Eigenface

    Directory of Open Access Journals (Sweden)

    Hlaing Htake Khaung Tin

    2013-10-01

    Full Text Available Face recognition is the most successful form of human surveillance. Face recognition technology, which is being used to improve human efficiency when recognizing faces, is one of the fastest growing fields in the biometric industry. In the first stage, the age is classified into eleven categories that distinguish how old the person is. The second stage of the process is face recognition based on the predicted age. Age prediction has considerable potential applications in human computer interaction and multimedia communication. This paper proposes an eigenface-based age estimation algorithm for estimating the age of an image from the database. Eigenface has proven to be a useful and robust cue for age prediction, age simulation, face recognition, localization and tracking. The scheme is based on an information theory approach that decomposes face images into a small set of characteristic feature images called eigenfaces, which may be thought of as the principal components of the initial training set of face images. The eigenface approach used in this scheme has advantages over other face recognition methods in its speed, simplicity, learning capability and robustness to small changes in the face image.
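
    A minimal sketch of the eigenface idea referred to above is given below, assuming a pre-aligned face matrix and the eleven age-group labels stored in hypothetical files; PCA components play the role of the eigenfaces, and a simple nearest-neighbour rule stands in for the paper's estimator.

      # Hedged sketch: eigenfaces via PCA, then a nearest-neighbour age-group guess.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.neighbors import KNeighborsClassifier

      faces = np.load("aligned_faces.npy")       # (n_images, height*width), hypothetical
      age_group = np.load("age_groups.npy")      # one of 11 categories per image, hypothetical

      pca = PCA(n_components=50, whiten=True)    # pca.components_ are the eigenfaces
      weights = pca.fit_transform(faces)         # each face as a small weight vector

      knn = KNeighborsClassifier(n_neighbors=3).fit(weights, age_group)
      probe = pca.transform(faces[:1])           # project a probe image the same way
      print("predicted age group:", knn.predict(probe)[0])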

  12. Does masculinity matter? The contribution of masculine face shape to male attractiveness in humans.

    Directory of Open Access Journals (Sweden)

    Isabel M L Scott

    Full Text Available BACKGROUND: In many animals, exaggerated sex-typical male traits are preferred by females, and may be a signal of both past and current disease resistance. The proposal that the same is true in humans--i.e., that masculine men are immunocompetent and attractive--underpins a large literature on facial masculinity preferences. Recently, theoretical models have suggested that current condition may be a better index of mate value than past immunocompetence. This is particularly likely in populations where pathogenic fluctuation is fast relative to host life history. As life history is slow in humans, there is reason to expect that, among humans, condition-dependent traits might contribute more to attractiveness than relatively stable traits such as masculinity. To date, however, there has been little rigorous assessment of whether, in the presence of variation in other cues, masculinity predicts attractiveness or not. METHODOLOGY/PRINCIPAL FINDINGS: The relationship between masculinity and attractiveness was assessed in two samples of male faces. Most previous research has assessed masculinity either with subjective ratings or with simple anatomical measures. Here, we used geometric morphometric techniques to assess facial masculinity, generating a morphological masculinity measure based on a discriminant function that correctly classified >96% faces as male or female. When assessed using this measure, there was no relationship between morphological masculinity and rated attractiveness. In contrast, skin colour--a fluctuating, condition-dependent cue--was a significant predictor of attractiveness. CONCLUSIONS/SIGNIFICANCE: These findings suggest that facial morphological masculinity may contribute less to men's attractiveness than previously assumed. Our results are consistent with the hypothesis that current condition is more relevant to male mate value than past disease resistance, and hence that temporally fluctuating traits (such as colour) contribute more to male attractiveness than stable cues of sexual dimorphism.

  13. Does masculinity matter? The contribution of masculine face shape to male attractiveness in humans.

    Science.gov (United States)

    Scott, Isabel M L; Pound, Nicholas; Stephen, Ian D; Clark, Andrew P; Penton-Voak, Ian S

    2010-10-27

    In many animals, exaggerated sex-typical male traits are preferred by females, and may be a signal of both past and current disease resistance. The proposal that the same is true in humans--i.e., that masculine men are immunocompetent and attractive--underpins a large literature on facial masculinity preferences. Recently, theoretical models have suggested that current condition may be a better index of mate value than past immunocompetence. This is particularly likely in populations where pathogenic fluctuation is fast relative to host life history. As life history is slow in humans, there is reason to expect that, among humans, condition-dependent traits might contribute more to attractiveness than relatively stable traits such as masculinity. To date, however, there has been little rigorous assessment of whether, in the presence of variation in other cues, masculinity predicts attractiveness or not. The relationship between masculinity and attractiveness was assessed in two samples of male faces. Most previous research has assessed masculinity either with subjective ratings or with simple anatomical measures. Here, we used geometric morphometric techniques to assess facial masculinity, generating a morphological masculinity measure based on a discriminant function that correctly classified >96% faces as male or female. When assessed using this measure, there was no relationship between morphological masculinity and rated attractiveness. In contrast, skin colour--a fluctuating, condition-dependent cue--was a significant predictor of attractiveness. These findings suggest that facial morphological masculinity may contribute less to men's attractiveness than previously assumed. Our results are consistent with the hypothesis that current condition is more relevant to male mate value than past disease resistance, and hence that temporally fluctuating traits (such as colour) contribute more to male attractiveness than stable cues of sexual dimorphism.
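
    The morphometric masculinity score described above can be approximated as the signed output of a linear discriminant trained to separate male from female landmark configurations; the sketch below is a hedged illustration with assumed input files, not the authors' geometric morphometric pipeline.

      # Hedged sketch: a discriminant-based masculinity score correlated with ratings.
      import numpy as np
      from scipy.stats import pearsonr
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      shapes = np.load("procrustes_landmarks.npy")   # (n_faces, 2*k) aligned coordinates, hypothetical
      sex = np.load("sex_labels.npy")                # 0 = female, 1 = male, hypothetical
      ratings = np.load("attractiveness.npy")        # rated attractiveness per face, hypothetical

      lda = LinearDiscriminantAnalysis().fit(shapes, sex)
      masculinity = lda.decision_function(shapes)    # signed distance from the male/female boundary

      male = sex == 1
      r, p = pearsonr(masculinity[male], ratings[male])
      print(f"masculinity vs. attractiveness (male faces): r={r:.2f}, p={p:.3f}")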

  14. Radiological evaluation of the fetal face using three-dimensional ultrasound imaging

    Directory of Open Access Journals (Sweden)

    Bäumler M

    2012-12-01

    Full Text Available Marcel Bäumler (1–3), Michèle Bigorre (1,4), Jean-Michel Faure (1,5). Affiliations: 1 CHU Montpellier, Centre de Compétence des Fentes Faciales, Hôpital Lapeyronie, Montpellier; 2 Clinique du Parc, Imagerie de la Femme, Castelnau-le-Lez; 3 Cabinet de Radiologie du Trident, Lunel; 4 CHU Service de Chirurgie Plastique Pédiatrique, Hôpital Lapeyronie, Montpellier; 5 CHU Montpellier, Service de Gynécologie-Obstétrique, Hôpital Arnaud de Villeneuve, Montpellier, France. Abstract: This paper reviews screening and three-dimensional diagnostic ultrasound imaging of the fetal face. The different techniques available for analyzing biometric and morphological items of the profile, eyes, ears, lips, and hard and soft palate are commented on and briefly compared with the respective bi-dimensional techniques. The available literature supports the use of three-dimensional ultrasound in difficult prenatal diagnostic conditions because of its diagnostic accuracy, enabling improved safety of perinatal care. Globally, a marked increase has been observed in the accuracy of three-dimensional ultrasound in comparison with the bi-dimensional approach. Because there is no consensus about the performance of the different three-dimensional techniques, future studies are needed in order to compare them and to find the best technique for analysis of each of the respective facial elements. Universal prenatal standards may integrate these potential new findings in the future. At this time, the existing guidelines for prenatal facial screening should not be changed. Keywords: prenatal three-dimensional ultrasound, prenatal screening, prenatal diagnosis, cleft lip and palate, fetal profile, retrognathism

  15. Education of a Future Human is the Key to Solving the Global Problems Facing Humanity

    Directory of Open Access Journals (Sweden)

    Olga Khrystenko

    2016-07-01

    Full Text Available The present research considers two global problems of humanity: intercivilizational contradictions and the pandemic of abortion, as serious conflicts whose solution depends on the relevant public educational policies. The tension in the relationship between the Islamic World and the West, caused by the so-called “caricature scandal”, encourages an understanding of the conflict and the ways to resolve it. There is also the problem of massive numbers of abortions in the world, which requires scientific analysis and relevant conclusions. The research revealed that both sides of intercivilizational conflicts are responsible for them. Freedom of speech, as an ingredient of democracy, cannot exist only for itself; it should be based on human values, including respect for other nations, religions, and cultures, as well as the protection of human life. The second part of the research concerns the pandemic of abortion. Based on the achievements of modern embryology, sociology and bioethics, four levels of this conflict were defined. The first level is a conflict concerning the life of the unborn child. The second one is a conflict concerning the mother. The third one is a conflict with the nation. The fourth one is a conflict with God. On these issues, a survey was conducted among first-year medical students at Ternopil State Medical University. It was also concluded that it would be useful to present to the students a model of state policy aimed at preventing conflicts between civilizations, as well as the pandemic of abortion. This policy should include: information policy (promotion of the idea that human life is the highest value, and that human relationships should be based on the principles of tolerance); education policy (educating today's youth in a culture of interpersonal relationships based on honesty and responsibility); social policy (creation of material conditions for young families and single mothers); and policy in the health sector

  16. Frontal Face Detection using Haar Wavelet Coefficients and Local Histogram Correlation

    Directory of Open Access Journals (Sweden)

    Iwan Setyawan

    2011-12-01

    Full Text Available Face detection is the main building block on which all automatic systems dealing with human faces are built. For example, a face recognition system must rely on face detection to process an input image and determine which areas contain human faces. These areas then become the input for the face recognition system for further processing. This paper presents a face detection system designed to detect frontal faces. The system uses Haar wavelet coefficients and local histogram correlation as differentiating features. Our proposed system is trained using 100 training images. Our experiments show that the proposed system performed well during testing, achieving a detection rate of 91.5%.
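
    For comparison, frontal face detection with a stock boosted Haar cascade can be run in a few lines with OpenCV; this is a related but different detector from the Haar-wavelet plus local-histogram-correlation system described above, shown only as a hedged illustration of the detection pipeline, and the input file name is an assumption.

      # Hedged sketch: OpenCV's pretrained frontal-face Haar cascade, not the
      # Haar-wavelet + histogram-correlation detector proposed in the paper.
      import cv2

      cascade = cv2.CascadeClassifier(
          cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

      img = cv2.imread("input_photo.jpg")                      # hypothetical input
      gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
      faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

      for (x, y, w, h) in faces:                               # box every detection
          cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
      cv2.imwrite("detections.jpg", img)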

  17. Self Realization and Meaning Making in the Face of Adversity: A Eudaimonic Approach to Human Resilience.

    Science.gov (United States)

    Ryff, Carol D

    2014-01-01

    This article considers a eudaimonic approach to psychological well-being built on the integration of developmental, existential and humanistic formulations as well as distant writings of Aristotle. Eudaimonia emphasizes meaning-making, self realization and growth, quality connections to others, self-knowledge, managing life, and marching to one's own drummer. These qualities may be of particular importance in the confrontation with significant life challenges. Prior formulations of resilience are reviewed to underscore the unique features of a eudaimonic approach. Empirical findings on meaning making and self realization are then reviewed to document the capacity of some to maintain high well-being in the face of socioeconomic inequality, the challenges of aging, and in dealing with specific challenges (child abuse, cancer, loss of spouse). Moreover, those who sustain or deepen their well-being as they deal with adversity, show better health profiles, thereby underscoring broader benefits of eudaimonia. How meaning is made and personal capacities realized in the confrontation with challenge is revealed by narrative accounts. Thus, the latter half of the article illustrates human resilience in action via the personal stories of three individuals (Mark Mathabane, Ben Mattlin, Victor Frankl) who endured unimaginable hardship, but prevailed and grew in the face of it. The essential roles of strong social ties and the capacity to derive meaning and realize personal growth in grappling with adversity are unmistakable in all three cases.

  18. Facing global environmental change. Environmental, human, energy, food, health and water security concepts

    Energy Technology Data Exchange (ETDEWEB)

    Brauch, Hans Guenter [Freie Univ. Berlin (Germany). Dept. of Political and Social Sciences; United Nations Univ., Bonn (DE). Inst. for Environment and Human Security (UNU-EHS); AFES-Press, Mosbach (Germany); Oswald Spring, Ursula [National Univ. of Mexico (UNAM), Cuernavaca, MOR (MX). Centro Regional de Investigaciones Multidiscipinarias (CRIM); United Nations Univ., Bonn (DE). Inst. for Environment and Human Security (UNU-EHS); Grin, John [Amsterdam Univ. (Netherlands). Amsterdam School for Social Science Research; Mesjasz, Czeslaw [Cracow Univ. of Economics (Poland). Faculty of Management; Kameri-Mbote, Patricia [Nairobi Univ. (Kenya). School of Law; International Environmental Law Research Centre, Nairobi (Kenya); Behera, Navnita Chadha [Jamia Millia Islamia Univ., New Delhi (India). Nelson Mandela Center for Peace and Conflict Resolution; Chourou, Bechir [Tunis-Carthage Univ., Hammam-Chatt (Tunisia); Krummenacher, Heinz (eds.) [swisspeace, Bern (Switzerland). FAST International

    2009-07-01

    This policy-focused, global and multidisciplinary security handbook on Facing Global Environmental Change addresses new security threats of the 21st century posed by climate change, desertification, water stress, population growth and urbanization. These security dangers and concerns lead to migration, crises and conflicts. They are on the agenda of the UN, OECD, OSCE, NATO and EU. In 100 chapters, 132 authors from 49 countries analyze the global debate on environmental, human and gender, energy, food, livelihood, health and water security concepts and policy problems. In 10 parts they discuss the context and the securitization of global environmental change and of extreme natural and societal outcomes. They suggest a new research programme to move from knowledge to action, from reactive to proactive policies and to explore the opportunities of environmental cooperation for a new peace policy. (orig.)

  19. CURRENT ISSUES FACING THE INTRODUCTION OF HUMAN PAPILLOMAVIRUS VACCINE IN MALAYSIA

    Directory of Open Access Journals (Sweden)

    I-Ching Sam

    2007-01-01

    Full Text Available Certain human papillomavirus (HPV types are strongly associated with cervical cancer. Recently-described effective vaccines against these HPV types represent a great medical breakthrough in preventing cervical cancer. In Malaysia, the vaccine has just received regulatory approval. We are likely to face similar barriers to implementing HPV vaccination as reported by countries where vaccination has been introduced. Most women have poor understanding of HPV and its link to cervical cancer. Physicians who will be recommending HPV vaccines may not have extensive knowledge or experience with HPV-related disease. Furthermore, a vaccine against a sexually-transmitted infection may elicit negative reactions from potential recipients or their carers, particularly in a conservative society. Given the high cost of the vaccine, reaching the most vulnerable women is a concern. To foster broad acceptance of HPV vaccine, education must be provided to health care providers, parents and young women about the risks of HPV infection and the benefits of vaccination.

  20. High-Frequency EEG Variations in Children with Autism Spectrum Disorder during Human Faces Visualization

    Directory of Open Access Journals (Sweden)

    Celina A. Reis Paula

    2017-01-01

    Full Text Available Autism spectrum disorder (ASD) is a neuropsychiatric disorder characterized by impairment in social reciprocity, interaction/language, and behavior, with stereotypies and signs of sensory function deficits. Electroencephalography (EEG) is a well-established and noninvasive tool for neurophysiological characterization and monitoring of the brain electrical activity, able to identify abnormalities related to frequency range, connectivity, and lateralization of brain functions. This research aims to demonstrate quantitative differences in the frequency spectrum pattern between EEG signals of children with and without ASD during visualization of human faces in three different expressions: neutral, happy, and angry. Quantitative clinical evaluations, neuropsychological evaluation, and EEG of children with and without ASD were analyzed, paired by age and gender. The results showed stronger activation in higher frequencies (above 30 Hz) in frontal, central, parietal, and occipital regions in the ASD group. This pattern of activation may correlate with developmental characteristics in the children with ASD.
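
    The spectral comparison reported above (stronger activation above 30 Hz) can be reproduced in outline with Welch's method; the sketch below is hedged, with the sampling rate, channel layout, and file name assumed for illustration rather than taken from the study.

      # Hedged sketch: per-channel power above 30 Hz from an EEG epoch via Welch's method.
      import numpy as np
      from scipy.signal import welch

      fs = 250.0                               # sampling rate in Hz (assumed)
      eeg = np.load("epoch.npy")               # (n_channels, n_samples), hypothetical

      freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs), axis=-1)
      band = (freqs >= 30) & (freqs <= 45)     # "high frequency" band of interest
      gamma_power = psd[:, band].mean(axis=-1) # mean high-frequency power per channel
      print(gamma_power)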

  1. The influence of banner advertisements on attention and memory: human faces with averted gaze can enhance advertising effectiveness.

    Science.gov (United States)

    Sajjacholapunt, Pitch; Ball, Linden J

    2014-01-01

    Research suggests that banner advertisements used in online marketing are often overlooked, especially when positioned horizontally on webpages. Such inattention invariably gives rise to an inability to remember advertising brands and messages, undermining the effectiveness of this marketing method. Recent interest has focused on whether human faces within banner advertisements can increase attention to the information they contain, since the gaze cues conveyed by faces can influence where observers look. We report an experiment that investigated the efficacy of faces located in banner advertisements to enhance the attentional processing and memorability of banner contents. We tracked participants' eye movements when they examined webpages containing either bottom-right vertical banners or bottom-center horizontal banners. We also manipulated facial information such that banners either contained no face, a face with mutual gaze or a face with averted gaze. We additionally assessed people's memories for brands and advertising messages. Results indicated that relative to other conditions, the condition involving faces with averted gaze increased attention to the banner overall, as well as to the advertising text and product. Memorability of the brand and advertising message was also enhanced. Conversely, in the condition involving faces with mutual gaze, the focus of attention was localized more on the face region rather than on the text or product, weakening any memory benefits for the brand and advertising message. This detrimental impact of mutual gaze on attention to advertised products was especially marked for vertical banners. These results demonstrate that the inclusion of human faces with averted gaze in banner advertisements provides a promising means for marketers to increase the attention paid to such adverts, thereby enhancing memory for advertising information.

  2. The influence of banner advertisements on attention and memory: Human faces with averted gaze can enhance advertising effectiveness

    Directory of Open Access Journals (Sweden)

    Pitch eSajjacholapunt

    2014-03-01

    Full Text Available Research suggests that banner advertisements used in online marketing are often overlooked, especially when positioned horizontally on webpages. Such inattention invariably gives rise to an inability to remember advertising brands and messages, undermining the effectiveness of this marketing method. Recent interest has focused on whether human faces within banner advertisements can increase attention to the information they contain, since the gaze cues conveyed by faces can influence where observers look. We report an experiment that investigated the efficacy of faces located in banner advertisements to enhance the attentional processing and memorability of banner contents. We tracked participants’ eye movements when they examined webpages containing either bottom-right vertical banners or bottom-centre horizontal banners. We also manipulated facial information such that banners either contained no face, a face with mutual gaze or a face with averted gaze. We additionally assessed people’s memories for brands and advertising messages. Results indicated that relative to other conditions, the condition involving faces with averted gaze increased attention to the banner overall, as well as to the advertising text and product. Memorability of the brand and advertising message was also enhanced. Conversely, in the condition involving faces with mutual gaze, the focus of attention was localised more on the face region rather than on the text or product, weakening any memory benefits for the brand and advertising message. This detrimental impact of mutual gaze on attention to advertised products was especially marked for vertical banners. These results demonstrate that the inclusion of human faces with averted gaze in banner advertisements provides a promising means for marketers to increase the attention paid to such adverts, thereby enhancing memory for advertising information.

  3. Quantitative Analysis of Face Symmetry.

    Science.gov (United States)

    Tamir, Abraham

    2015-06-01

    The major objective of this article was to report quantitatively the degree of human face symmetry for face images taken from the Internet. From the original image of a certain person that appears in the center of each triplet, 2 symmetric combinations were constructed that are based on the left part of the image and its mirror image (left-left) and on the right part of the image and its mirror image (right-right). By applying computer software that can determine the length, surface area, and perimeter of any geometric shape, the following measurements were obtained for each triplet: face perimeter and area; distance between the pupils; mouth length, perimeter, and area; nose length; face length, usually below the ears; as well as the area and perimeter of the pupils. Then, for each of the above measurements, the value C, which characterizes the degree of symmetry of the real image with respect to the combinations right-right and left-left, was calculated. C appears on the right-hand side below each image. A high value of C indicates low symmetry, and as the value decreases, the symmetry increases. The magnitude on the left relates to the pupils and compares the difference between the area and perimeter of the 2 pupils. The major conclusion arrived at here is that the human face is asymmetric to some degree; the degree of asymmetry is reported quantitatively under each portrait.
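
    The article does not give the formula for C, so the sketch below shows one plausible way to build the left-left and right-right composites and to express an asymmetry index from a measurement taken on each composite; it is an assumption-laden illustration, not the author's definition.

      # Hedged sketch: mirrored composites and a relative-difference asymmetry index.
      import numpy as np

      def mirrored_composites(face):
          """Left-left and right-right composites of a face image array."""
          h, w = face.shape[:2]
          left, right = face[:, : w // 2], face[:, w - w // 2 :]
          left_left = np.concatenate([left, left[:, ::-1]], axis=1)
          right_right = np.concatenate([right[:, ::-1], right], axis=1)
          return left_left, right_right

      def asymmetry_index(measure_ll, measure_rr):
          """Higher value = less symmetric, in the spirit of the article's C."""
          return abs(measure_ll - measure_rr) / ((measure_ll + measure_rr) / 2.0)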

  4. Face feature processor on mobile service robot

    Science.gov (United States)

    Ahn, Ho Seok; Park, Myoung Soo; Na, Jin Hee; Choi, Jin Young

    2005-12-01

    In recent years, many mobile service robots have been developed. These robots are different from industrial robots: service robots are confronted with unexpected changes in the human environment. Many capabilities are therefore needed for a mobile service robot, for example, the capability to recognize people's faces and voices, the capability to understand people's conversation, and the capability to express the robot's own thinking. This research considered face detection, face tracking and face recognition from a continuous camera image stream. The face detection module used the CBCH algorithm from the OpenCV library originally developed by Intel Corporation. The face tracking module used a fuzzy controller to control the pan-tilt camera movement smoothly based on the face detection result. PCA-FX, which adds class information to PCA, was used for the face recognition module. These three procedures, together called the face feature processor, were implemented on the mobile service robot OMR for verification.

  5. An Efficient Secure Multimodal Biometric Fusion Using Palmprint and Face Image

    CERN Document Server

    Nageshkumar, M; Swamy, M N S

    2009-01-01

    Biometrics-based personal identification is regarded as an effective method for automatically recognizing a person's identity with high confidence. A multimodal biometric system consolidates the evidence presented by multiple biometric sources and typically provides better recognition performance compared to systems based on a single biometric modality. This paper proposes an authentication method for a multimodal biometric identification system using two traits, i.e., face and palmprint. The proposed system is designed for applications where the training data contain a face and a palmprint. Integrating the palmprint and face features increases the robustness of person authentication. The final decision is made by fusion at the matching-score level, in which feature vectors are created independently for the query measures and are then compared to the enrolment templates stored during database preparation. The multimodal biometric system is developed through fusion of face and palmprint recognition.
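
    A hedged sketch of match-score-level fusion in the spirit described above follows; the min-max normalisation, equal weights, and decision threshold are illustrative choices rather than the paper's exact design, and the score files are assumptions.

      # Hedged sketch: normalise face and palmprint match scores, fuse, and decide.
      import numpy as np

      def min_max(scores):
          return (scores - scores.min()) / (scores.max() - scores.min() + 1e-12)

      face_scores = np.load("face_match_scores.npy")   # per-claim scores, hypothetical
      palm_scores = np.load("palm_match_scores.npy")   # per-claim scores, hypothetical

      fused = 0.5 * min_max(face_scores) + 0.5 * min_max(palm_scores)
      accepted = fused >= 0.6                          # illustrative threshold
      print("accept rate:", accepted.mean())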

  6. Human body region enhancement method based on Kinect infrared imaging

    Science.gov (United States)

    Yang, Lei; Fan, Yubo; Song, Xiaowei; Cai, Wenjing

    2016-10-01

    To effectively improve the low contrast of the human body region in infrared images, a combination of several enhancement methods is utilized. Firstly, for the infrared images acquired by Kinect, an Optimal Contrast-Tone Mapping (OCTM) method with multiple iterations is applied to balance the contrast of low-luminosity infrared images and improve their overall contrast. Secondly, a Level Set algorithm is employed to refine the contour edges of the human body region. Finally, Laplacian Pyramid decomposition is adopted to further enhance the contour-refined human body region. Meanwhile, the background area outside the human body region is processed by bilateral filtering to improve the overall effect. With theoretical analysis and experimental verification, the results show that the proposed method can effectively enhance the human body region of such infrared images.
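
    The recomposition idea (enhance the segmented body region, smooth the background, then merge) can be sketched as below; CLAHE stands in for the paper's OCTM step, and the body mask is assumed to come from a prior Level Set segmentation, so this is a hedged approximation rather than the proposed method.

      # Hedged sketch: enhanced body region + bilaterally filtered background.
      import cv2
      import numpy as np

      ir = cv2.imread("kinect_ir.png", cv2.IMREAD_GRAYSCALE)             # hypothetical frame
      body_mask = cv2.imread("body_mask.png", cv2.IMREAD_GRAYSCALE) > 0  # from segmentation (assumed)

      clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))        # OCTM stand-in
      enhanced = clahe.apply(ir)
      background = cv2.bilateralFilter(ir, d=9, sigmaColor=75, sigmaSpace=75)

      out = np.where(body_mask, enhanced, background).astype(np.uint8)
      cv2.imwrite("enhanced_ir.png", out)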

  7. Real time imaging of human progenitor neurogenesis.

    Directory of Open Access Journals (Sweden)

    Thomas M Keenan

    Full Text Available Human neural progenitors are increasingly being employed in drug screens and emerging cell therapies targeted towards neurological disorders where neurogenesis is thought to play a key role including developmental disorders, Alzheimer's disease, and depression. Key to the success of these applications is understanding the mechanisms by which neurons arise. Our understanding of development can provide some guidance but since little is known about the specifics of human neural development and the requirement that cultures be expanded in vitro prior to use, it is unclear whether neural progenitors obey the same developmental mechanisms that exist in vivo. In previous studies we have shown that progenitors derived from fetal cortex can be cultured for many weeks in vitro as undifferentiated neurospheres and then induced to undergo neurogenesis by removing mitogens and exposing them to supportive substrates. Here we use live time lapse imaging and immunocytochemical analysis to show that neural progenitors use developmental mechanisms to generate neurons. Cells with morphologies and marker profiles consistent with radial glia and recently described outer radial glia divide asymmetrically and symmetrically to generate multipolar intermediate progenitors, a portion of which express ASCL1. These multipolar intermediate progenitors subsequently divide symmetrically to produce CTIP2(+ neurons. This 3-cell neurogenic scheme echoes observations in rodents in vivo and in human fetal slice cultures in vitro, providing evidence that hNPCs represent a renewable and robust in vitro assay system to explore mechanisms of human neurogenesis without the continual need for fresh primary human fetal tissue. Knowledge provided by this and future explorations of human neural progenitor neurogenesis will help maximize the safety and efficacy of new stem cell therapies by providing an understanding of how to generate physiologically-relevant cell types that maintain their

  8. Face Detection and Modeling for Recognition

    Science.gov (United States)

    2002-01-01

    [Record excerpt from the report's list of figures: one caption notes the important role of hair and face outlines in human face recognition; another lists caricatures of (a) Vincent Van Gogh, (b) Jim Carrey, (c) Arnold Schwarzenegger, (d) Einstein, (e) G. W. Bush, and (f) Bill Gates, with images downloaded from the report's cited sources.]

  9. Recent advances in human viruses imaging studies.

    Science.gov (United States)

    Florian, Paula Ecaterina; Rouillé, Yves; Ruta, Simona; Nichita, Norica; Roseanu, Anca

    2016-06-01

    Microscopy techniques are often exploited by virologists to investigate molecular details of critical steps in viruses' life cycles such as host cell recognition and entry, genome replication, intracellular trafficking, and release of mature virions. Fluorescence microscopy is the most attractive tool employed to detect intracellular localizations of various stages of the viral infection and monitor the pathogen-host interactions associated with them. Super-resolution microscopy techniques have overcome the technical limitations of conventional microscopy and offered new exciting insights into the formation and trafficking of human viruses. In addition, the development of state-of-the-art electron microscopy techniques has become particularly important in studying virus morphogenesis by revealing ground-breaking ultrastructural details of this process. This review presents recent advances in imaging of human viruses, both in in vitro cell culture systems and in vivo, in recently developed animal models. The newly available imaging technologies bring a major contribution to our understanding of virus pathogenesis and will become an important tool in early diagnosis of viral infection and the development of novel therapeutics to combat the disease.

  10. Detection of hypercholesterolemia using hyperspectral imaging of human skin

    Science.gov (United States)

    Milanic, Matija; Bjorgan, Asgeir; Larsson, Marcus; Strömberg, Tomas; Randeberg, Lise L.

    2015-07-01

    Hypercholesterolemia is characterized by high blood levels of cholesterol and is associated with increased risk of atherosclerosis and cardiovascular disease. Xanthelasma is a subcutaneous lesion appearing in the skin around the eyes. Xanthelasma is related to hypercholesterolemia. Identifying micro-xanthelasma can therefore provide a means for early detection of hypercholesterolemia and prevent the onset and progress of disease. The goal of this study was to investigate spectral and spatial characteristics of hypercholesterolemia in facial skin. Optical techniques like hyperspectral imaging (HSI) might be a suitable tool for such characterization as it simultaneously provides high resolution spatial and spectral information. In this study a 3D Monte Carlo model of lipid inclusions in human skin was developed to create hyperspectral images in the spectral range 400-1090 nm. Four lesions with diameters 0.12-1.0 mm were simulated for three different skin types. The simulations were analyzed using three algorithms: the Tissue Indices (TI), the two-layer Diffusion Approximation (DA), and the Minimum Noise Fraction transform (MNF). The simulated lesions were detected by all methods, but the best performance was obtained by the MNF algorithm. The results were verified using data from 11 volunteers with known cholesterol levels. The face of the volunteers was imaged by an LCTF system (400-720 nm), and the images were analyzed using the previously mentioned algorithms. The identified features were then compared to the known cholesterol levels of the subjects. Significant correlation was obtained for the MNF algorithm only. This study demonstrates that HSI can be a promising, rapid modality for detection of hypercholesterolemia.

  11. Towards the imaging of Weibel–Palade body biogenesis by serial block face-scanning electron microscopy

    Science.gov (United States)

    Mourik, MJ; Faas, FGA; Zimmermann, H; Eikenboom, J; Koster, AJ

    2015-01-01

    Electron microscopy is used in biological research to study the ultrastructure at high resolution to obtain information on specific cellular processes. Serial block face-scanning electron microscopy is a relatively novel electron microscopy imaging technique that allows three-dimensional characterization of the ultrastructure in both tissues and cells by measuring volumes of thousands of cubic micrometres yet at nanometre-scale resolution. In the scanning electron microscope, an image is repeatedly acquired, followed by the removal of a thin layer of resin-embedded biological material by either a microtome or a focused ion beam. In this way, each recorded image contains novel structural information which can be used for three-dimensional analysis. Here, we explore focused ion beam facilitated serial block face-scanning electron microscopy to study the endothelial cell–specific storage organelles, the Weibel–Palade bodies, during their biogenesis at the Golgi apparatus. Weibel–Palade bodies predominantly contain the coagulation protein Von Willebrand factor which is secreted by the cell upon vascular damage. Using focused ion beam facilitated serial block face-scanning electron microscopy we show that the technique has the sensitivity to clearly reveal subcellular details like mitochondrial cristae and small vesicles with a diameter of about 50 nm. Also, we reveal numerous associations between Weibel–Palade bodies and Golgi stacks which only become apparent in large-scale three-dimensional data. We demonstrate that serial block face-scanning electron microscopy is a promising tool that offers an alternative to electron tomography to study subcellular organelle interactions in the context of a complete cell. PMID:25644989

  12. Towards the imaging of Weibel-Palade body biogenesis by serial block face-scanning electron microscopy.

    Science.gov (United States)

    Mourik, M J; Faas, F G A; Zimmermann, H; Eikenboom, J; Koster, A J

    2015-08-01

    Electron microscopy is used in biological research to study the ultrastructure at high resolution to obtain information on specific cellular processes. Serial block face-scanning electron microscopy is a relatively novel electron microscopy imaging technique that allows three-dimensional characterization of the ultrastructure in both tissues and cells by measuring volumes of thousands of cubic micrometres yet at nanometre-scale resolution. In the scanning electron microscope, an image is repeatedly acquired, followed by the removal of a thin layer of resin-embedded biological material by either a microtome or a focused ion beam. In this way, each recorded image contains novel structural information which can be used for three-dimensional analysis. Here, we explore focused ion beam facilitated serial block face-scanning electron microscopy to study the endothelial cell-specific storage organelles, the Weibel-Palade bodies, during their biogenesis at the Golgi apparatus. Weibel-Palade bodies predominantly contain the coagulation protein Von Willebrand factor which is secreted by the cell upon vascular damage. Using focused ion beam facilitated serial block face-scanning electron microscopy we show that the technique has the sensitivity to clearly reveal subcellular details like mitochondrial cristae and small vesicles with a diameter of about 50 nm. Also, we reveal numerous associations between Weibel-Palade bodies and Golgi stacks which only become apparent in large-scale three-dimensional data. We demonstrate that serial block face-scanning electron microscopy is a promising tool that offers an alternative to electron tomography to study subcellular organelle interactions in the context of a complete cell.

  13. Social contact and other-race face processing in the human brain

    Science.gov (United States)

    Silvert, Laetitia; Hewstone, Miles; Nobre, Anna C.

    2008-01-01

    The present study investigated the influence of social factors upon the neural processing of faces of other races, using event-related potentials. A multi-tiered approach was used to identify face-specific stages of processing, to test for effects of race-of-face upon processing at these stages and to evaluate the impact of social contact and individuating experience upon these effects. The results showed that race-of-face has significant effects upon face processing, starting from early perceptual stages of structural encoding, and that social factors may play an important role in mediating these effects. PMID:19015091

  14. Real-time face and gesture analysis for human-robot interaction

    Science.gov (United States)

    Wallhoff, Frank; Rehrl, Tobias; Mayer, Christoph; Radig, Bernd

    2010-05-01

    Human communication relies on a large number of different communication mechanisms like spoken language, facial expressions, or gestures. Facial expressions and gestures are among the main nonverbal communication mechanisms and pass large amounts of information between human dialog partners. Therefore, to allow for intuitive human-machine interaction, real-time capable processing and recognition of facial expressions and of hand and head gestures is of great importance. We present a system that tackles these challenges. The input features for the dynamic head gestures and facial expressions are obtained from a sophisticated three-dimensional model, which is fitted to the user in a real-time capable manner. Applying this model, different kinds of information are extracted from the image data and afterwards handed over to a real-time capable data-transferring framework, the so-called Real-Time DataBase (RTDB). In addition to the head- and facial-related features, low-level image features regarding the human hand (optical flow, Hu moments) are also stored in the RTDB for the evaluation of hand gestures. In general, the input of a single camera is sufficient for the parallel evaluation of the different gestures and facial expressions. The real-time capable recognition of the dynamic hand and head gestures is performed via different Hidden Markov Models, which have proven to be a quick and real-time capable classification method. For the facial expressions, on the other hand, classical decision trees or more sophisticated support vector machines are used for the classification process. The results of these classification processes are again handed over to the RTDB, where other processes (like a Dialog Management Unit) can easily access them without any blocking effects. In addition, an adjustable amount of history can be stored by the RTDB buffer unit.
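
    The gesture-classification step can be sketched with one Gaussian HMM per gesture class scored at test time; the sketch below uses the hmmlearn package and assumed training data, so it illustrates the Hidden-Markov-Model idea rather than the authors' implementation.

      # Hedged sketch: per-class Gaussian HMMs over hand-feature sequences.
      import numpy as np
      from hmmlearn import hmm

      def fit_gesture_model(sequences):
          """Fit one HMM on a list of (T_i, n_features) feature sequences."""
          X = np.vstack(sequences)
          lengths = [len(s) for s in sequences]
          model = hmm.GaussianHMM(n_components=4, covariance_type="diag", n_iter=50)
          model.fit(X, lengths)
          return model

      # training_data: {gesture_name: [sequence, ...]}, a hypothetical pickled dict
      training_data = np.load("gesture_training.npy", allow_pickle=True).item()
      models = {name: fit_gesture_model(seqs) for name, seqs in training_data.items()}

      def classify(sequence):
          """Return the gesture whose HMM assigns the highest log-likelihood."""
          return max(models, key=lambda name: models[name].score(sequence))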

  15. Emotional expectations influence neural sensitivity to fearful faces in humans: An event-related potential study

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    The present study tested whether neural sensitivity to salient emotional facial expressions was influenced by emotional expectations induced by a cue that validly predicted the expression of a subsequently presented target face. Event-related potentials (ERPs) elicited by fearful and neutral faces were recorded while participants performed a gender discrimination task under cued (‘expected’) and uncued (‘unexpected’) conditions. The behavioral results revealed that accuracy was lower for fearful compared with neutral faces in the unexpected condition, while accuracy was similar for fearful and neutral faces in the expected condition. ERP data revealed increased amplitudes in the P2 component and 200–250 ms interval for unexpected fearful versus neutral faces. By contrast, ERP responses were similar for fearful and neutral faces in the expected condition. These findings indicate that human neural sensitivity to fearful faces is modulated by emotional expectations. Although the neural system is sensitive to unpredictable emotionally salient stimuli, sensitivity to salient stimuli is reduced when these stimuli are predictable.

  16. Body image and face image in Asian American and white women: Examining associations with surveillance, construal of self, perfectionism, and sociocultural pressures.

    Science.gov (United States)

    Frederick, David A; Kelly, Mackenzie C; Latner, Janet D; Sandhu, Gaganjyot; Tsong, Yuying

    2016-03-01

    Asian American women experience sociocultural pressures that could place them at increased risk for experiencing body and face dissatisfaction. Asian American and White women completed measures of appearance evaluation, overweight preoccupation, face satisfaction, face dissatisfaction frequency, perfectionism, surveillance, interdependent and independent self-construal, and perceived sociocultural pressures. In Study 1 (N=182), Asian American women were more likely than White women to report low appearance evaluation (24% vs. 12%; d=-0.50) and to be sometimes-always dissatisfied with the appearance of their eyes (38% vs. 6%; d=0.90) and face overall (59% vs. 34%; d=0.41). In Study 2 (N=488), they were more likely to report low appearance evaluation (36% vs. 23%; d=-0.31) and were less likely to report high eye appearance satisfaction (59% vs. 88%; d=-0.84). The findings highlight the importance of considering ethnic differences when assessing body and face image. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Model-Based Illumination Correction for Face Images in Uncontrolled Scenarios

    NARCIS (Netherlands)

    Boom, Bas; Spreeuwers, Luuk; Veldhuis, Raymond

    2009-01-01

    Face Recognition under uncontrolled illumination conditions is partly an unsolved problem. Several illumination correction methods have been proposed, but these are usually tested on illumination conditions created in a laboratory. Our focus is more on uncontrolled conditions. We use the Phong model
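
    Since the record is truncated after naming the Phong model, the sketch below only recalls that model itself (ambient plus diffuse plus specular terms for a point light); the coefficients are illustrative assumptions and the correction procedure built on top of the model is not reproduced.

      # Hedged sketch of the Phong reflection model (not the correction method itself).
      import numpy as np

      def phong_intensity(normal, light_dir, view_dir,
                          ka=0.1, kd=0.7, ks=0.2, shininess=10.0):
          """Reflected intensity for unit-length normal, light and view vectors."""
          n, l, v = (np.asarray(x, dtype=float) for x in (normal, light_dir, view_dir))
          diffuse = max(np.dot(n, l), 0.0)
          reflect = 2.0 * np.dot(n, l) * n - l        # mirror reflection of the light
          specular = max(np.dot(reflect, v), 0.0) ** shininess
          return ka + kd * diffuse + ks * specular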

  18. High-resolution imaging of expertise reveals reliable object selectivity in the fusiform face area related to perceptual performance.

    Science.gov (United States)

    McGugin, Rankin Williams; Gatenby, J Christopher; Gore, John C; Gauthier, Isabel

    2012-10-16

    The fusiform face area (FFA) is a region of human cortex that responds selectively to faces, but whether it supports a more general function relevant for perceptual expertise is debated. Although both faces and objects of expertise engage many brain areas, the FFA remains the focus of the strongest modular claims and the clearest predictions about expertise. Functional MRI studies at standard-resolution (SR-fMRI) have found responses in the FFA for nonface objects of expertise, but high-resolution fMRI (HR-fMRI) in the FFA [Grill-Spector K, et al. (2006) Nat Neurosci 9:1177-1185] and neurophysiology in face patches in the monkey brain [Tsao DY, et al. (2006) Science 311:670-674] reveal no reliable selectivity for objects. It is thus possible that FFA responses to objects with SR-fMRI are a result of spatial blurring of responses from nonface-selective areas, potentially driven by attention to objects of expertise. Using HR-fMRI in two experiments, we provide evidence of reliable responses to cars in the FFA that correlate with behavioral car expertise. Effects of expertise in the FFA for nonface objects cannot be attributed to spatial blurring beyond the scale at which modular claims have been made, and within the lateral fusiform gyrus, they are restricted to a small area (200 mm(2) on the right and 50 mm(2) on the left) centered on the peak of face selectivity. Experience with a category may be sufficient to explain the spatially clustered face selectivity observed in this region.

  19. Composite multi-lobe descriptor for cross spectral face recognition: matching active IR to visible light images

    Science.gov (United States)

    Cao, Zhicheng; Schmid, Natalia A.

    2015-05-01

    Matching facial images across the electromagnetic spectrum presents a challenging problem in the field of biometrics and identity management. An example of this problem includes cross spectral matching of active infrared (IR) face images or thermal IR face images against a dataset of visible light images. This paper describes a new operator named Composite Multi-Lobe Descriptor (CMLD) for facial feature extraction in cross spectral matching of near-infrared (NIR) or short-wave infrared (SWIR) against visible light images. The new operator is inspired by the design of ordinal measures. The operator combines Gaussian-based multi-lobe kernel functions, Local Binary Pattern (LBP), generalized LBP (GLBP) and Weber Local Descriptor (WLD) and modifies them into multi-lobe functions with smoothed neighborhoods. The new operator encodes both the magnitude and phase responses of Gabor filters. The combining of LBP and WLD utilizes both the orientation and intensity information of edges. Introduction of multi-lobe functions with smoothed neighborhoods further makes the proposed operator robust against noise and poor image quality. Output templates are transformed into histograms and then compared by means of a symmetric Kullback-Leibler metric resulting in a matching score. The performance of the multi-lobe descriptor is compared with that of other operators such as LBP, Histogram of Oriented Gradients (HOG), ordinal measures, and their combinations. The experimental results show that in many cases the proposed method, CMLD, outperforms the other operators and their combinations. In addition to different infrared spectra, various standoff distances from close-up (1.5 m) to intermediate (50 m) and long (106 m) are also investigated in this paper. The performance of CMLD is evaluated for each of the three distance cases.
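
    Two ingredients of the matching pipeline, a local-pattern histogram template and the symmetric Kullback-Leibler comparison, can be sketched as follows; plain uniform LBP from scikit-image stands in for the full CMLD operator, so this is a hedged simplification rather than the proposed descriptor.

      # Hedged sketch: LBP histogram templates compared with a symmetric KL distance.
      import numpy as np
      from skimage.feature import local_binary_pattern

      def lbp_histogram(gray, P=8, R=1.0):
          """Normalised uniform-LBP histogram of a grayscale face image."""
          codes = local_binary_pattern(gray, P, R, method="uniform")
          hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
          return hist + 1e-12                     # avoid zeros inside the logarithms

      def symmetric_kl(p, q):
          """Symmetric Kullback-Leibler distance between two histogram templates."""
          return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))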

  20. Automated regional behavioral analysis for human brain images

    National Research Council Canada - National Science Library

    Lancaster, Jack L; Laird, Angela R; Eickhoff, Simon B; Martinez, Michael J; Fox, P Mickle; Fox, Peter T

    2012-01-01

    Behavioral categories of functional imaging experiments along with standardized brain coordinates of associated activations were used to develop a method to automate regional behavioral analysis of human brain images...

  1. Fusion of Appearance Image and Passive Stereo Depth Map for Face Recognition Based on the Bilateral 2DLDA

    Directory of Open Access Journals (Sweden)

    Jian-Gang Wang

    2007-08-01

    Full Text Available This paper presents a novel approach for face recognition based on the fusion of the appearance and depth information at the match score level. We apply passive stereoscopy instead of active range scanning as popularly used by others. We show that present-day passive stereoscopy, though less robust and accurate, does make a positive contribution to face recognition. By combining the appearance and disparity in a linear fashion, we verified experimentally that the combined results are noticeably better than those for each individual modality. We also propose an original learning method, the bilateral two-dimensional linear discriminant analysis (B2DLDA), to extract facial features of the appearance and disparity images. We compare B2DLDA with some existing 2DLDA methods on both the XM2VTS database and our own database. The results show that B2DLDA can achieve better results than the others.

  2. High-Resolution En Face Images of Microcystic Macular Edema in Patients with Autosomal Dominant Optic Atrophy

    Directory of Open Access Journals (Sweden)

    Kiyoko Gocho

    2013-01-01

    Full Text Available The purpose of this study was to investigate the characteristics of microcystic macular edema (MME) determined from the en face images obtained by an adaptive optics (AO) fundus camera in patients with autosomal dominant optic atrophy (ADOA) and to try to determine the mechanisms underlying the degeneration of the inner retinal cells and RNFL by taking advantage of AO. Six patients from 4 families with ADOA underwent detailed ophthalmic examinations including spectral domain optical coherence tomography (SD-OCT). Mutational screening of all coding and flanking intron sequences of the OPA1 gene was performed by DNA sequencing. SD-OCT showed a severe reduction in the retinal nerve fiber layer (RNFL) thickness in all patients. A new splicing defect and two new frameshift mutations with premature termination of the Opa1 protein were identified in three families. A reported nonsense mutation was identified in one family. SD-OCT of one patient showed MME in the inner nuclear layer (INL) of the retina. AO images showed microcysts in the en face images of the INL. Our data indicate that AO is a useful method to identify MME in neurodegenerative diseases and may also help determine the mechanisms underlying the degeneration of the inner retinal cells and RNFL.

  3. Advanced human machine interaction for an image interpretation workstation

    Science.gov (United States)

    Maier, S.; Martin, M.; van de Camp, F.; Peinsipp-Byma, E.; Beyerer, J.

    2016-05-01

    In recent years, many new interaction technologies have been developed that enhance the usability of computer systems and allow for novel types of interaction. The areas of application for these technologies have mostly been in gaming and entertainment. However, in professional environments, there are especially demanding tasks that would greatly benefit from improved human machine interfaces as well as an overall improved user experience. We, therefore, envisioned and built an image-interpretation workstation of the future, a multi-monitor workplace composed of four screens. Each screen is dedicated to a complex software product such as a geo-information system to provide geographic context, an image annotation tool, software to generate standardized reports and a tool to aid in the identification of objects. Using self-developed systems for hand tracking, pointing gestures and head pose estimation in addition to touchscreens, face identification, and speech recognition systems, we created a novel approach to this complex task. For example, head pose information is used to save the position of the mouse cursor on the currently focused screen and to restore it as soon as the same screen is focused again, while hand gestures allow for intuitive manipulation of 3D objects in mid-air. While the primary focus is on the task of image interpretation, all of the technologies involved provide generic ways of efficiently interacting with a multi-screen setup and could be utilized in other fields as well. In preliminary experiments, we received promising feedback from users in the military and started to tailor the functionality to their needs.

  4. Galactose uncovers face recognition and mental images in congenital prosopagnosia: the first case report.

    OpenAIRE

    Esins, J.; Schultz, J; Bülthoff, I.; Kennerknecht, I.

    2014-01-01

    A woman in her early 40s with congenital prosopagnosia and attention deficit hyperactivity disorder observed for the first time sudden and extensive improvement of her face recognition abilities, mental imagery, and sense of navigation after galactose intake. This effect of galactose on prosopagnosia has never been reported before. Even if this effect is restricted to a subform of congenital prosopagnosia, galactose might improve the condition of other prosopagnosics. Congenital prosopagnosia...

  5. PET imaging of human cardiac opioid receptors

    Energy Technology Data Exchange (ETDEWEB)

    Villemagne, Patricia S.R.; Dannals, Robert F. [Department of Radiology, The Johns Hopkins University School of Medicine, 605 N Caroline St., Baltimore, Maryland (United States); Department of Environmental Health Sciences, The Johns Hopkins University School of Medicine, Baltimore, Maryland (United States); Ravert, Hayden T. [Department of Radiology, The Johns Hopkins University School of Medicine, 605 N Caroline St., Baltimore, Maryland (United States); Frost, James J. [Department of Radiology, The Johns Hopkins University School of Medicine, 605 N Caroline St., Baltimore, Maryland (United States); Department of Environmental Health Sciences, The Johns Hopkins University School of Medicine, Baltimore, Maryland (United States); Department of Neuroscience, The Johns Hopkins University School of Medicine, Baltimore, Maryland (United States)

    2002-10-01

    The presence of opioid peptides and receptors and their role in the regulation of cardiovascular function has been previously demonstrated in the mammalian heart. The aim of this study was to image {mu} and {delta} opioid receptors in the human heart using positron emission tomography (PET). Five subjects (three females, two males, 65{+-}8 years old) underwent PET scanning of the chest with [{sup 11}C]carfentanil ([{sup 11}C]CFN) and [{sup 11}C]-N-methyl-naltrindole ([{sup 11}C]MeNTI) and the images were analyzed for evidence of opioid receptor binding in the heart. Either [{sup 11}C]CFN or [{sup 11}C]MeNTI (20 mCi) was injected i.v. with subsequent dynamic acquisitions over 90 min. For the blocking studies, either 0.2 mg/kg or 1 mg/kg of naloxone was injected i.v. 5 min prior to the injection of [{sup 11}C]CFN and [{sup 11}C]MeNTI, respectively. Regions of interest were placed over the left ventricle, left ventricular chamber, lung and skeletal muscle. Graphical analysis demonstrated average baseline myocardial binding potentials (BP) of 4.37{+-}0.91 with [{sup 11}C]CFN and 3.86{+-}0.60 with [{sup 11}C]MeNTI. Administration of 0.2 mg/kg naloxone prior to [{sup 11}C]CFN produced a 25% reduction in BP in one subject in comparison with baseline values, and a 19% decrease in myocardial distribution volume (DV). Administration of 1 mg/kg of naloxone before [{sup 11}C]MeNTI in another subject produced a 14% decrease in BP and a 21% decrease in the myocardial DV. These results demonstrate the ability to image these receptors in vivo by PET. PET imaging of cardiac opioid receptors may help to better understand their role in cardiovascular pathophysiology and the effect of abuse of opioids and drugs on heart function. (orig.)

  6. IMAGING WHITE MATTER IN HUMAN BRAINSTEM

    Directory of Open Access Journals (Sweden)

    Anastasia A Ford

    2013-07-01

    Full Text Available The human brainstem is critical for the control of many life-sustaining functions, such as consciousness, respiration, sleep, and transfer of sensory and motor information between the brain and the spinal cord. Most of our knowledge about structure and organization of white and gray matter within the brainstem is derived from ex vivo dissection and histology studies. However, these methods cannot be applied to study structural architecture in live human participants. Tractography from diffusion-weighted MRI may provide valuable insights about white matter organization within the brainstem in vivo. However, this method presents technical challenges in vivo due to susceptibility artifacts, functionally dense anatomy, as well as pulsatile and respiratory motion. To investigate the limits of MR tractography, we present results from high angular resolution diffusion imaging (HARDI of an intact excised human brainstem performed at 11.1T using isotropic resolution of 0.333, 1, and 2 mm, with the latter reflecting resolution currently used clinically. At the highest resolution, the dense fiber architecture of the brainstem is evident, but the definition of structures degrades as resolution decreases. In particular, the inferred corticopontine/corticospinal tracts (CPT/CST, superior (SCP and middle cerebellar peduncle (MCP, and medial lemniscus (ML pathways are clearly discernable and follow known anatomical trajectories at the highest spatial resolution. At lower resolutions, the CST/CPT, SCP, and MCP pathways are artificially enlarged due to inclusion of collinear and crossing fibers not inherent to these three pathways. The inferred ML pathways appear smaller at lower resolutions, indicating insufficient spatial information to successfully resolve smaller fiber pathways. Our results suggest that white matter tractography maps derived from the excised brainstem can be used to guide the study of the brainstem architecture using diffusion MRI in vivo.

  7. The IMM Frontal Face Database

    DEFF Research Database (Denmark)

    Fagertun, Jens; Stegmann, Mikkel Bille

    2005-01-01

    This note describes a data set consisting of 120 annotated monocular images of 12 different frontal human faces. Points of correspondence are placed on each image so the data set can be readily used for building statistical models of shape. Format specifications and terms of use are also given in this note. The data set is available in two versions: i) low resolution, given in the zip-file electronic version, and ii) high resolution, given in the publication link.

  8. Image Magnification Based on the Human Visual Processing

    OpenAIRE

    Je, Sung-Kwan; Kim, Kwang-Baek; Cho, Jae-Hyun; Song, Doo-Heon

    2007-01-01

    In image processing, interpolation-based magnification brings about problems of image degradation such as blocking and blurring when the image is enlarged. In this paper, we propose a magnification method that considers the properties of human visual processing to solve such problems. As a result, our method is faster than other algorithms capable of removing the blocking and blurring phenomena when the image is enlarged. The cubic convolution interpolation in image...
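
    The abstract is cut off before the proposed method is described, so no attempt is made here to reproduce it. As a baseline reference only, the hedged sketch below (Python/OpenCV assumed) produces the two standard interpolation results such papers typically compare against; the image and scale factor are arbitrary stand-ins.

```python
import cv2
import numpy as np

# Small synthetic grayscale array as a stand-in for a real photograph.
img = (np.random.default_rng(0).random((64, 64)) * 255).astype(np.uint8)

# Enlarge 4x with two standard kernels: nearest-neighbour tends to produce
# blocking, while bicubic (cubic convolution) tends to blur fine edges.
up_nearest = cv2.resize(img, None, fx=4, fy=4, interpolation=cv2.INTER_NEAREST)
up_cubic = cv2.resize(img, None, fx=4, fy=4, interpolation=cv2.INTER_CUBIC)
```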

  9. A New Medical Image Enhancement Based on Human Visual Characteristics

    Institute of Scientific and Technical Information of China (English)

    DONG Ai-bin; HE Jun

    2013-01-01

    Studies of image enhancement show that the perceived quality of an image heavily relies on the human visual system. In this paper, we apply this fact to design a new image enhancement method for medical images that improves the detail regions. First, the eye region of interest (ROI) is segmented; then Un-sharp Masking (USM) is used to enhance the detail regions. Experiments show that the proposed method can effectively improve the accuracy of medical image enhancement and has a significant effect.
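
    Unsharp masking itself is a standard operation, so a minimal sketch is easy to give; note that the paper's ROI segmentation and its specific parameter choices are not reproduced here, and the sigma/amount values below are illustrative assumptions only.

```python
import cv2
import numpy as np

def unsharp_mask(img, sigma=2.0, amount=1.5):
    """Classic unsharp masking: add back a scaled high-frequency residual."""
    blurred = cv2.GaussianBlur(img, (0, 0), sigma)
    return cv2.addWeighted(img, 1.0 + amount, blurred, -amount, 0)

# Synthetic stand-in for a segmented region of interest from a medical image.
roi = (np.random.default_rng(0).random((128, 128)) * 255).astype(np.uint8)
enhanced = unsharp_mask(roi)
```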

  10. The many faces of pulmonary aspergillosis: Imaging findings with pathologic correlation

    Directory of Open Access Journals (Sweden)

    Prasad Panse

    2016-12-01

    Conclusion: In this article we correlate the radiologic findings of the various pulmonary manifestations of Aspergillus infection with their pathologic features to better understand the disease process and better comprehend the associated imaging patterns.

  11. Hierarchical imaging of the human knee

    Science.gov (United States)

    Schulz, Georg; Götz, Christian; Deyhle, Hans; Müller-Gerbl, Magdalena; Zanette, Irene; Zdora, Marie-Christine; Khimchenko, Anna; Thalmann, Peter; Rack, Alexander; Müller, Bert

    2016-10-01

    Among the clinically relevant imaging techniques, computed tomography (CT) reaches the best spatial resolution. Sub-millimeter voxel sizes are regularly obtained. For investigations on the true micrometer level, lab-based μCT has become the gold standard. The aim of the present study is the hierarchical investigation of a human knee post mortem using hard X-ray μCT. After the visualization of the entire knee using a clinical CT with a spatial resolution in the sub-millimeter range, a hierarchical imaging study was performed using a laboratory μCT system nanotom m. Due to the size of the whole knee, the pixel length could not be reduced below 65 μm. These first two data sets were directly compared after a rigid registration using a cross-correlation algorithm. The μCT data set allowed an investigation of the trabecular structures of the bones. The further reduction of the pixel length down to 25 μm could be achieved by removing the skin and soft tissues and measuring the tibia and the femur separately. True micrometer resolution could be achieved after extracting cylinders of several millimeters diameter from the two bones. The high resolution scans revealed the mineralized cartilage zone including the tide mark line as well as individual calcified chondrocytes. The visualization of soft tissues, including cartilage, was achieved by X-ray grating interferometry (XGI) at ESRF and the Diamond Light Source. Whereas the high-energy measurements at ESRF allowed the simultaneous visualization of soft and hard tissues, the low-energy results from the Diamond Light Source made individual chondrocytes within the cartilage visible.
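
    The registration step above is described only as "rigid registration using a cross-correlation algorithm"; the sketch below is a loose stand-in using scikit-image's Fourier-domain cross-correlation, which recovers only the translational component on synthetic volumes. A real clinical-CT-to-µCT alignment would additionally require resampling to a common voxel size and estimating rotation.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

rng = np.random.default_rng(0)
reference = rng.random((64, 64, 64))            # stand-in for the clinical CT volume
moving = nd_shift(reference, (3.0, -2.0, 1.0))  # stand-in for the µCT volume, offset by a known shift

# Estimate the translation that registers `moving` onto `reference`.
offset, error, _ = phase_cross_correlation(reference, moving)
aligned = nd_shift(moving, offset)
print("estimated shift:", offset)               # approximately (-3, 2, -1)
```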

  12. Effects of symmetry and familiarity on the attractiveness of human faces

    Directory of Open Access Journals (Sweden)

    Mentus Tatjana

    2016-01-01

    Full Text Available The effects of both symmetry (a perceptual factor) and familiarity (a cognitive factor) on facial attractiveness were investigated. From the photographs of original slightly asymmetric faces, symmetric left-left (LL) and right-right (RR) versions were generated. Familiarity was induced in the learning block using the repetitive presentation of original faces. In the test block participants rated the attractiveness of original, previously seen (familiar) faces, original, not previously seen faces, and both LL and RR versions of all faces. The analysis of variance showed main effects of symmetry. Post hoc tests revealed that asymmetric original faces were rated as more attractive than both LL and RR symmetric versions. Familiarity did not have a significant main effect, but a symmetry-familiarity interaction was obtained. Additional post hoc tests indicated that facial attractiveness is positively associated with natural slight asymmetry rather than with perfect symmetry. Also, unfamiliar LL symmetric versions were rated as more attractive than familiar LL versions, whereas familiar RR versions were rated as more attractive than unfamiliar RR faces. These results suggest that symmetry (a perceptual factor) and familiarity (a cognitive or memorial factor) play differential roles in facial attractiveness, and indicate a relatively stronger effect of the perceptual compared to the cognitive factor. [Project of the Ministry of Science of the Republic of Serbia, no. ON179018 and no. ON179033]
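
    The left-left and right-right stimuli described above are straightforward to construct once a face photograph is aligned so that its midline falls on the image centre; the sketch below makes that (strong) assumption and uses a synthetic array as a stand-in, whereas real stimuli would first be aligned on facial landmarks.

```python
import numpy as np

# Stand-in for an aligned frontal face image whose vertical midline is the image centre.
face = (np.random.default_rng(0).random((100, 90)) * 255).astype(np.uint8)
h, w = face.shape
half = w // 2

left = face[:, :half]          # left half of the photograph
right = face[:, w - half:]     # right half of the photograph

# Chimeric composites: each half joined to its own mirror image.
ll = np.hstack([left, left[:, ::-1]])
rr = np.hstack([right[:, ::-1], right])
```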

  13. Visual adaptation of the perception of "life": animacy is a basic perceptual dimension of faces.

    Science.gov (United States)

    Koldewyn, Kami; Hanus, Patricia; Balas, Benjamin

    2014-08-01

    One critical component of understanding another's mind is the perception of "life" in a face. However, little is known about the cognitive and neural mechanisms underlying this perception of animacy. Here, using a visual adaptation paradigm, we ask whether face animacy is (1) a basic dimension of face perception and (2) supported by a common neural mechanism across distinct face categories defined by age and species. Observers rated the perceived animacy of adult human faces before and after adaptation to (1) adult faces, (2) child faces, and (3) dog faces. When testing the perception of animacy in human faces, we found significant adaptation to both adult and child faces, but not dog faces. We did, however, find significant adaptation when morphed dog images and dog adaptors were used. Thus, animacy perception in faces appears to be a basic dimension of face perception that is species specific but not constrained by age categories.

  14. Enlarge the training set based on inter-class relationship for face recognition from one image per person.

    Directory of Open Access Journals (Sweden)

    Qin Li

    Full Text Available In some large-scale face recognition tasks, such as driver license identification and law enforcement, the training set only contains one image per person. This situation is referred to as the one sample problem. Because many face recognition techniques implicitly assume that several (at least two) images per person are available for training, they cannot deal with the one sample problem. This paper investigates principal component analysis (PCA), Fisher linear discriminant analysis (LDA), and locality preserving projections (LPP) and shows why they cannot perform well in the one sample problem. After that, this paper presents four reasons that make the one sample problem itself difficult: the small sample size problem; the lack of representative samples; the underestimated intra-class variation; and the overestimated inter-class variation. Based on the analysis, this paper proposes to enlarge the training set based on the inter-class relationship. This paper also extends LDA and LPP to extract features from the enlarged training set. The experimental results show the effectiveness of the proposed method.
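
    The abstract does not spell out how the inter-class relationship is used to synthesise extra samples, so the sketch below is only a hypothetical illustration of the general idea: each person's single image is perturbed along difference vectors between other classes. The function name, the alpha scale and the number of virtual samples are all assumptions, not the paper's method.

```python
import numpy as np

def enlarge_one_sample_set(X, n_virtual=3, alpha=0.1, seed=0):
    """X: (n_persons, n_features), one flattened image per person.
    Adds virtual samples built from scaled inter-class difference vectors."""
    rng = np.random.default_rng(seed)
    n, _ = X.shape
    samples, labels = [], []
    for i in range(n):
        samples.append(X[i]); labels.append(i)
        for _ in range(n_virtual):
            j, k = rng.choice(n, size=2, replace=False)
            samples.append(X[i] + alpha * (X[j] - X[k]))  # perturb along an inter-class direction
            labels.append(i)
    return np.vstack(samples), np.array(labels)

X = np.random.rand(40, 32 * 32)    # 40 persons, one 32x32 image each (synthetic stand-in)
X_aug, y_aug = enlarge_one_sample_set(X)
print(X_aug.shape, y_aug.shape)    # (160, 1024) (160,)
```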

  15. Phase resolved and coherence gated en face reflection imaging of multilayered embryonal carcinoma cells

    Science.gov (United States)

    Yamauchi, Toyohiko; Fukami, Tadashi; Iwai, Hidenao; Yamashita, Yutaka

    2012-03-01

    Embryonal carcinoma (EC) cells, which are cell lines derived from teratocarcinomas, have characteristics in common with stem cells and differentiate into many kinds of functional cells. Similar to embryonic stem (ES) cells, undifferentiated EC cells form multi-layered spheroids. In order to visualize the three-dimensional structure of multilayered EC cells without labeling, we employed full-field interference microscopy with the aid of a low-coherence quantitative phase microscope, which is a reflection-type interference microscope employing the digital holographic technique with a low-coherence light source. Owing to the low coherency of the light source (halogen lamp), only the light reflected from the reflective surface at a specific sectioning height generates an interference image on the CCD camera. P19CL6 EC cells, derived from mouse teratocarcinomas, formed spheroids that are about 50 to 200 micrometers in diameter. Since the height of each cell is around 10 micrometers, it is assumed that each spheroid has 5 to 20 cell layers. The P19CL6 spheroids were imaged in an upright configuration and the horizontally sectioned reflection images of the sample were obtained by sequentially and vertically scanning the zero-path-length height. Our results show the three-dimensional structure of the spheroids, in which plasma and nuclear membranes were distinguishably imaged. The results imply that our technique is further capable of imaging induced pluripotent stem (iPS) cells for the assessment of cell properties including their pluripotency.

  16. Face-to-face: Perceived personal relevance amplifies face processing.

    Science.gov (United States)

    Bublatzky, Florian; Pittig, Andre; Schupp, Harald T; Alpers, Georg W

    2017-05-01

    The human face conveys emotional and social information, but it is not well understood how these two aspects influence face perception. In order to model a group situation, two faces displaying happy, neutral or angry expressions were presented. Importantly, faces were either facing the observer, or they were presented in profile view directed towards, or looking away from each other. In Experiment 1 (n = 64), face pairs were rated regarding perceived relevance, wish-to-interact, and displayed interactivity, as well as valence and arousal. All variables revealed main effects of facial expression (emotional > neutral), face orientation (facing observer > towards > away) and interactions showed that evaluation of emotional faces strongly varies with their orientation. Experiment 2 (n = 33) examined the temporal dynamics of perceptual-attentional processing of these face constellations with event-related potentials. Processing of emotional and neutral faces differed significantly in N170 amplitudes, early posterior negativity (EPN), and sustained positive potentials. Importantly, selective emotional face processing varied as a function of face orientation, indicating early emotion-specific (N170, EPN) and late threat-specific effects (LPP, sustained positivity). Taken together, perceived personal relevance to the observer-conveyed by facial expression and face direction-amplifies emotional face processing within triadic group situations. © The Author (2017). Published by Oxford University Press.

  17. Matched filtering determines human visual search in natural images

    NARCIS (Netherlands)

    Toet, A.

    2011-01-01

    The structural image similarity index (SSIM), introduced by Wang and Bovik (IEEE Signal Processing Letters 9-3, pp. 81-84, 2002), measures the similarity between images in terms of luminance, contrast and structure. It has successfully been deployed to model human visual perception of image
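
    SSIM is available directly in scikit-image, so a minimal usage sketch is easy to give; the images below are synthetic stand-ins rather than the natural search images used in the study.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

rng = np.random.default_rng(1)
scene = rng.random((128, 128))                      # stand-in for a natural image
degraded = scene + 0.1 * rng.standard_normal(scene.shape)

# data_range must be supplied explicitly for floating-point images.
score = ssim(scene, degraded, data_range=degraded.max() - degraded.min())
print(f"SSIM = {score:.3f}")
```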

  18. Human-Centered Object-Based Image Retrieval

    NARCIS (Netherlands)

    Broek, E.L. van den; Rikxoort, E.M. van; Schouten, T.E.

    2005-01-01

    A new object-based image retrieval (OBIR) scheme is introduced. The images are analyzed using the recently developed, human-based 11 colors quantization scheme and the color correlogram. Their output served as input for the image segmentation algorithm: agglomerative merging, which is extended to co

  19. HUMAN FACE RECOGNITION BY PSEUDO ZERNIKE MOMENT AND PROBABILISTIC NEURAL NETWORK

    Directory of Open Access Journals (Sweden)

    ESMAEEL FATEMI BEHBAHANI

    2011-07-01

    Full Text Available In this paper a new face recognition method is introduced. In this method, face features are extracted by pseudo-Zernike moments (PZM). Then a probabilistic neural network (PNN) is applied to classify these moments as feature vectors. Moment features are invariant under scaling, translation, rotation and reflection. Probabilistic neural networks have a fast computational time. Results show that PZM of order 14 with PNN have the best performance among all the moments.
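
    A probabilistic neural network is essentially a Parzen-window classifier with one Gaussian kernel per training vector, which is simple to sketch; the feature vectors below are random stand-ins for the pseudo-Zernike moments (computing the moments themselves is not shown), and the sigma smoothing parameter is an assumed value.

```python
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=0.5):
    """PNN: class score = mean Gaussian kernel response over that class's training vectors."""
    classes = np.unique(y_train)
    preds = []
    for x in X_test:
        d2 = np.sum((X_train - x) ** 2, axis=1)
        k = np.exp(-d2 / (2.0 * sigma ** 2))
        scores = [k[y_train == c].mean() for c in classes]
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)

rng = np.random.default_rng(0)
X_train = rng.random((60, 14))                 # stand-ins for order-14 moment feature vectors
y_train = np.repeat(np.arange(6), 10)          # six identities, ten samples each
X_test = X_train[:5] + 0.01 * rng.standard_normal((5, 14))
print(pnn_predict(X_train, y_train, X_test))   # expected: [0 0 0 0 0]
```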

  20. Are patients with schizophrenia impaired in processing non-emotional features of human faces?

    Directory of Open Access Journals (Sweden)

    Hayley eDarke

    2013-08-01

    Full Text Available It is known that individuals with schizophrenia exhibit signs of impaired face processing; however, the exact perceptual and cognitive mechanisms underlying these deficits are yet to be elucidated. One possible source of confusion in the current literature is the methodological and conceptual inconsistencies that can arise from the varied treatment of different aspects of face processing relating to emotional and non-emotional aspects of face perception. This review aims to disentangle the literature by focusing on the performance of patients with schizophrenia in a range of tasks that required processing of non-emotional features of face stimuli (e.g. identity or gender). We also consider the performance of patients on non-face stimuli that share common elements such as familiarity (e.g. cars) and social relevance (e.g. gait). We conclude by exploring whether observed deficits are best considered as face-specific and note that further investigation is required to properly assess the potential contribution of more generalised attentional or perceptual impairments.

  1. Early integration processing between faces and vowel sounds in human brain: an MEG investigation.

    Science.gov (United States)

    Nakamura, Itta; Hirano, Yoji; Ohara, Naotoshi; Hirano, Shogo; Ueno, Takefumi; Tsuchimoto, Rikako; Kanba, Shigenobu; Onitsuka, Toshiaki

    2015-01-01

    Unconscious fast integration of face and voice information is a crucial brain function necessary for communicating effectively with others. Here, we investigated for evidence of rapid face-voice integration in the auditory cortex. Magnetic fields (P50m and N100m) evoked by visual stimuli (V), auditory stimuli (A) and audiovisual stimuli (VA), i.e. by face, vowel and simultaneous vowel-face stimuli, were recorded in 22 healthy subjects. Magnetoencephalographic data from 28 channels around bilateral auditory cortices were analyzed. In both hemispheres, AV - V showed significantly larger P50m amplitudes than A. Additionally, compared with A, the N100m amplitudes and dipole moments of AV - V were significantly smaller in the left hemisphere, but not in the right hemisphere. Differential changes in P50m (bilateral) and N100m (left hemisphere) that occur when V (faces) are associated with A (vowel sounds) indicate that AV (face-voice) integration occurs in early processing, likely enabling us to communicate effectively in our lives. © 2015 S. Karger AG, Basel.

  2. Brain and face: communicating signals of health in the left and right sides of the face.

    Science.gov (United States)

    Reis, V A; Zaidel, D W

    2001-01-01

    In human communication and mate selection the appearance of health sends signals regarding biological fitness. We compared the appearance of health in the sides of the face to previous results on left-right facial asymmetry in the appearance of beauty (1). The stimuli were created by aligning the left and right sides of the face each with its own mirror image. Here, participants viewed 38 pairs of left-left and right-right faces and judged which member of the pair looked healthier. No significant interaction emerged between decision (health vs attractiveness) and face side. Rather, in women's faces right-right was significantly more healthy and attractive than left-left, while in men's faces there was no significant left-right difference. In biology and evolution, health and beauty are closely linked and the findings here confirm this relationship in human faces.

  3. A color based face detection system using multiple templates

    Institute of Scientific and Technical Information of China (English)

    王涛; 卜佳俊; 陈纯

    2003-01-01

    A color based system using multiple templates was developed and implemented for detecting human faces in color images. The algorithm consists of three image processing steps. The first step is human skin color statistics. Then it separates skin regions from non-skin regions. After that, it locates the frontal human face(s) within the skin regions. In the first step, 250 skin samples from persons of different ethnicities are used to determine the color distribution of human skin in chromatic color space in order to get a chroma chart showing likelihoods of skin colors. This chroma chart is used to generate, from the original color image, a gray scale image whose gray value at a pixel shows its likelihood of representing the skin. The algorithm uses an adaptive thresholding process to achieve the optimal threshold value for dividing the gray scale image into separate skin regions from non skin regions. Finally, multiple face templates matching is used to determine if a given skin region represents a frontal human face or not. Test of the system with more than 400 color images showed that the resulting detection rate was 83%, which is better than most color-based face detection systems. The average speed for face detection is 0.8 second/image (400×300 pixels) on a Pentium 3 (800MHz) PC.
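
    A minimal sketch of the skin-likelihood idea is given below, assuming a Gaussian model in the normalised (r, g) chromatic space; the mean and covariance are illustrative placeholders rather than values fitted to the 250 skin samples, Otsu's method stands in for the paper's adaptive thresholding, and the template-matching stage is omitted.

```python
import cv2
import numpy as np

def skin_likelihood(img_bgr, mean, cov_inv):
    """Gaussian skin-colour likelihood evaluated per pixel in normalised (r, g) space."""
    img = img_bgr.astype(np.float64) + 1e-6
    s = img.sum(axis=2)
    r = img[:, :, 2] / s
    g = img[:, :, 1] / s
    dr, dg = r - mean[0], g - mean[1]
    m = dr * dr * cov_inv[0, 0] + 2 * dr * dg * cov_inv[0, 1] + dg * dg * cov_inv[1, 1]
    return np.exp(-0.5 * m)

# Illustrative skin-colour model (normally estimated from labelled skin pixels).
mean = np.array([0.44, 0.31])
cov_inv = np.linalg.inv(np.array([[0.0015, 0.0008], [0.0008, 0.0011]]))

img = (np.random.default_rng(0).random((120, 160, 3)) * 255).astype(np.uint8)  # stand-in colour image
likelihood = skin_likelihood(img, mean, cov_inv)
_, mask = cv2.threshold((likelihood * 255).astype(np.uint8), 0, 255,
                        cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # stand-in for the adaptive threshold
```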

  5. Facing "the Curse of Dimensionality": Image Fusion and Nonlinear Dimensionality Reduction for Advanced Data Mining and Visualization of Astronomical Images

    Science.gov (United States)

    Pesenson, Meyer; Pesenson, I. Z.; McCollum, B.

    2009-05-01

    The complexity of multitemporal/multispectral astronomical data sets together with the approaching petascale of such datasets and large astronomical surveys require automated or semi-automated methods for knowledge discovery. Traditional statistical methods of analysis may break down not only because of the amount of data, but mostly because of the increase of the dimensionality of data. Image fusion (combining information from multiple sensors in order to create a composite enhanced image) and dimension reduction (finding lower-dimensional representation of high-dimensional data) are effective approaches to "the curse of dimensionality", thus facilitating automated feature selection, classification and data segmentation. Dimension reduction methods greatly increase computational efficiency of machine learning algorithms, improve statistical inference and together with image fusion enable effective scientific visualization (as opposed to mere illustrative visualization). The main approach of this work utilizes recent advances in multidimensional image processing, as well as representation of essential structure of a data set in terms of its fundamental eigenfunctions, which are used as an orthonormal basis for the data visualization and analysis. We consider multidimensional data sets and images as manifolds or combinatorial graphs and construct variational splines that minimize certain Sobolev norms. These splines allow us to reconstruct the eigenfunctions of the combinatorial Laplace operator by using only a small portion of the graph. We use the first two or three eigenfunctions for embedding large data sets into two- or three-dimensional Euclidean space. Such reduced data sets allow efficient data organization, retrieval, analysis and visualization. We demonstrate applications of the algorithms to test cases from the Spitzer Space Telescope. This work was carried out with funding from the National Geospatial-Intelligence Agency University Research Initiative.
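
    The paper's own contribution is the variational-spline approximation of the Laplacian eigenfunctions from a small portion of the graph; the sketch below shows only the standard version of that idea, i.e. Laplacian-eigenmap (spectral) embedding with scikit-learn on synthetic data, to illustrate what the two- or three-dimensional coordinates look like in practice.

```python
import numpy as np
from sklearn.manifold import SpectralEmbedding

# Synthetic stand-in for a high-dimensional multispectral/multitemporal data set
# (rows = objects or pixels, columns = spectral/temporal features).
rng = np.random.default_rng(0)
X = rng.random((500, 64))

# Coordinates from the first eigenvectors of the graph Laplacian of a k-NN graph.
embedding = SpectralEmbedding(n_components=3, n_neighbors=10, random_state=0)
X_low = embedding.fit_transform(X)
print(X_low.shape)   # (500, 3): usable for 3D scientific visualization
```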

  6. Microwave Imaging of Human Forearms: Pilot Study and Image Enhancement

    Directory of Open Access Journals (Sweden)

    Colin Gilmore

    2013-01-01

    Full Text Available We present a pilot study using a microwave tomography system in which we image the forearms of 5 adult male and female volunteers between the ages of 30 and 48. Microwave scattering data were collected at 0.8 to 1.2 GHz with 24 transmitting and receiving antennas located in a matching fluid of deionized water and table salt. Inversion of the microwave data was performed with a balanced version of the multiplicative-regularized contrast source inversion algorithm formulated using the finite-element method (FEM-CSI). T1-weighted MRI images of each volunteer’s forearm were also collected in the same plane as the microwave scattering experiment. Initial “blind” imaging results from the utilized inversion algorithm show that the image quality is dependent on the thickness of the arm’s peripheral adipose tissue layer; thicker layers of adipose tissue lead to poorer overall image quality. Due to the flexible nature of the FEM-CSI algorithm used, prior information can be readily incorporated into the microwave imaging inversion process. We show that by introducing prior information into the FEM-CSI algorithm, the internal anatomical features of all the arms are resolved, significantly improving the images. The prior information was estimated manually from the blind inversions using an ad hoc procedure.

  7. Non-invasive Imaging of Human Embryonic Stem Cells

    OpenAIRE

    Hong, Hao; Yang, Yunan; Zhang, Yin; Cai, Weibo

    2010-01-01

    Human embryonic stem cells (hESCs) hold tremendous therapeutic potential in a variety of diseases. Over the last decade, non-invasive imaging techniques have proven to be of great value in tracking transplanted hESCs. This review article will briefly summarize the various techniques used for non-invasive imaging of hESCs, which include magnetic resonance imaging (MRI), bioluminescence imaging (BLI), fluorescence, single-photon emission computed tomography (SPECT), positron emission tomography...

  8. Ultra-rapid categorization of fourier-spectrum equalized natural images: macaques and humans perform similarly.

    Directory of Open Access Journals (Sweden)

    Pascal Girard

    Full Text Available BACKGROUND: Comparative studies of cognitive processes find similarities between humans and apes but also monkeys. Even high-level processes, like the ability to categorize classes of object from any natural scene under ultra-rapid time constraints, seem to be present in rhesus macaque monkeys (despite a smaller brain and the lack of language and a cultural background). An interesting and still open question concerns the degree to which the same images are treated with the same efficacy by humans and monkeys when a low level cue, the spatial frequency content, is controlled. METHODOLOGY/PRINCIPAL FINDINGS: We used a set of natural images equalized in Fourier spectrum and asked whether it is still possible to categorize them as containing an animal and at what speed. One rhesus macaque monkey performed a forced-choice saccadic task with good accuracy (67.5% and 76% for new and familiar images, respectively), although performance was lower than with non-equalized images. Importantly, the minimum reaction time was still very fast (100 ms). We compared the performances of human subjects with the same setup and the same set of (new) images. Overall mean performance of humans was also lower than with original images (64% correct), but the minimum reaction time was still short (140 ms). CONCLUSION: Performances on individual images (% correct) but not reaction times for both humans and the monkey were significantly correlated, suggesting that both species use similar features to perform the task. A similar advantage for full-face images was seen for both species. The results also suggest that local low spatial frequency information could be important, a finding that fits the theory that fast categorization relies on a rapid feedforward magnocellular signal.
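
    One common way to equalise the Fourier spectrum of a stimulus set is to give every image the average amplitude spectrum while keeping its own phase; the sketch below implements that generic recipe on synthetic arrays and is not guaranteed to match the exact procedure used in the study.

```python
import numpy as np

def equalize_spectrum(images):
    """Replace each image's Fourier amplitude spectrum with the set average, keeping its phase."""
    ffts = [np.fft.fft2(im) for im in images]
    mean_amp = np.mean([np.abs(f) for f in ffts], axis=0)
    return [np.real(np.fft.ifft2(mean_amp * np.exp(1j * np.angle(f)))) for f in ffts]

rng = np.random.default_rng(0)
images = [rng.random((128, 128)) for _ in range(4)]   # stand-ins for natural scenes
equalized = equalize_spectrum(images)
```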

  9. The Changing Face of Vascular Interventional Radiology: The Future Role of Pharmacotherapies and Molecular Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Tapping, Charles R., E-mail: crtapping@doctors.org.uk; Bratby, Mark J., E-mail: mark.bratby@ouh.nhs.uk [Oxford University Hospitals, John Radcliffe Hospital, Department of Radiology (United Kingdom)

    2013-08-01

    Interventional radiology has had to evolve constantly because there is the ever-present competition and threat from other specialties within medicine, surgery, and research. The development of new technologies, techniques, and therapies is vital to broaden the horizon of interventional radiology and to ensure its continued success in the future. In part, this change will be due to improved chronic disease prevention altering what we treat and in whom. The most important of these strategies are the therapeutic use of statins, Beta-blockers, angiotensin-converting enzyme inhibitors, and substances that interfere with mast cell degeneration. Molecular imaging and therapeutic strategies will move away from conventional techniques and nano and microparticle molecular technology, tissue factor imaging, gene therapy, endothelial progenitor cells, and photodynamic therapy will become an important part of interventional radiology of the future. This review looks at these new and exciting technologies.

  10. The public face of zoos: Images of entertainment, education, and conservation

    OpenAIRE

    Carr, N; Cohen, SA

    2011-01-01

    The contemporary justification for zoos is based on their ability to act as sites of wildlife conservation. Alongside this is the reality that zoos have historically been defined as sites for the entertainment of the general public and continue to be dependent on the revenue raised through visitor receipts. Consequently, zoos are, today, identified as sites of conservation, research, education, and entertainment. In recognition of this, the aim of our research was to assess the image that zoo...

  11. REAL TIME FACE RECOGNITION USING ADABOOST IMPROVED FAST PCA ALGORITHM

    Directory of Open Access Journals (Sweden)

    K. Susheel Kumar

    2011-08-01

    Full Text Available This paper presents an automated system for human face recognition against a real time background, for a large homemade dataset of persons' faces. The task is very difficult as real time background subtraction in an image is still a challenge. In addition, there is a huge variation in human face images in terms of size, pose and expression. The proposed system collapses most of this variance. To detect real time human faces, AdaBoost with a Haar cascade is used, and a simple fast PCA and LDA is used to recognize the detected faces. The matched face is then used to mark attendance in the laboratory, in our case. This biometric system is a real time attendance system based on human face recognition with simple and fast algorithms and a high accuracy rate.
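
    A minimal sketch of the detect-then-recognise pipeline is given below, assuming OpenCV's bundled Viola-Jones (AdaBoost + Haar) frontal-face cascade for detection and scikit-learn PCA followed by LDA for recognition; the frame and the enrolled gallery are synthetic stand-ins, and the paper's fast-PCA variant and attendance-marking logic are not reproduced.

```python
import cv2
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# AdaBoost + Haar-feature face detector shipped with OpenCV.
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

frame = (np.random.default_rng(0).random((240, 320, 3)) * 255).astype(np.uint8)  # stand-in camera frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Recognition stage: PCA then LDA on flattened 64x64 face crops (synthetic gallery).
rng = np.random.default_rng(1)
X_train = rng.random((100, 64 * 64))
y_train = np.repeat(np.arange(10), 10)
pca = PCA(n_components=50).fit(X_train)
lda = LinearDiscriminantAnalysis().fit(pca.transform(X_train), y_train)

for (x, y, w, h) in faces:
    crop = cv2.resize(gray[y:y + h, x:x + w], (64, 64)).astype(np.float64).flatten()[None, :] / 255.0
    person_id = lda.predict(pca.transform(crop))[0]
```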

  12. Data merging of infrared and ultrasonic images for plasma facing components inspection

    Energy Technology Data Exchange (ETDEWEB)

    Richou, M. [CEA, IRFM, F-13108 Saint Paul-lez-Durance (France)], E-mail: marianne.richou@cea.fr; Durocher, A. [CEA, IRFM, F-13108 Saint Paul-lez-Durance (France); Medrano, M. [Association EURATOM - CIEMAT, Avda. Complutense 22, 28040 Madrid (Spain); Martinez-Ona, R. [Tecnatom, 28703 S. Sebastian de los Reyes, Madrid (Spain); Moysan, J. [LCND, Universite de la Mediterranee, F-13625 Aix-en-Provence (France); Riccardi, B. [Fusion For Energy, 08019 Barcelona (Spain)

    2009-06-15

    For steady-state magnetic thermonuclear fusion devices which need large power exhaust capability, actively cooled plasma facing components have been developed. In order to guarantee the integrity of these components during the required lifetime, their thermal and mechanical behaviour must be assessed. Before the procurement of the ITER Divertor, the examination of the heat sink to armour joints with non-destructive techniques is an essential topic to be addressed. Defects may be localised at different bonding interfaces. In order to improve the defect detection capability of the SATIR technique, the possibility of merging the infrared thermography test data coming from SATIR results with the ultrasonic test data has been identified. The data merging of SATIR and ultrasonic results has been performed on Carbon Fiber Composite (CFC) monoblocks with calibrated defects, identified by their position and extension. These calibrated defects were realised with machining, with 'stop-off', or by a lack of CFC activation, the last two techniques representing a real defect more accurately. A batch of 56 samples was produced to simulate each possible combination with regard to interface location, defect position and extension, and the way of realising the defect. The use of a data merging method based on Dempster-Shafer theory significantly improves the detection sensitivity and the reliability of defect location and size.

  13. Learning to Discriminate Face Views

    Directory of Open Access Journals (Sweden)

    Fang Fang

    2011-05-01

    Full Text Available Although visual feature learning has been well studied, we still know little about the mechanisms of perceptual learning of complex objects. Here, human perceptual learning in discrimination of the in-depth orientation of face views was studied using psychophysics, EEG and fMRI. We trained subjects to discriminate face orientations around a face view (i.e. 30°) over eight daily sessions, which resulted in a significant improvement in sensitivity to the face view orientation. This improved sensitivity was highly specific to the trained orientation and persisted up to six months. Different from perceptual learning of simple visual features, this orientation-specific learning effect could completely transfer across changes in face size, visual field and face identity. A complete transfer also occurred between two partial face images that were mutually exclusive but constituted a complete face. However, the transfer of the learning effect between upright and inverted faces and between a face and a paperclip object was very weak. Before and after training, we measured EEG and fMRI BOLD signals responding to both the trained and the untrained face views. Analyses of ERPs and induced gamma activity showed that face view discrimination training led to a larger reduction of N170 latency at the left occipital-temporal area and a concurrent larger decrease of induced gamma activity at the left frontal area with the trained face view, compared with the untrained ones. BOLD signal amplitude and MVPA analyses showed that, in face-selective cortical areas, training did not lead to a significant amplitude change, but induced a more reliable spatial pattern of neural activity in the left FFA. These results suggest that the visual system had learned how to compute face orientation from face configural information more accurately and that a large amount of plastic changes took place at a level of higher visual processing where size-, location-, and identity

  14. The fusiform face area is engaged in holistic, not parts-based, representation of faces.

    Directory of Open Access Journals (Sweden)

    Jiedong Zhang

    Full Text Available Numerous studies with functional magnetic resonance imaging have shown that the fusiform face area (FFA) in the human brain plays a key role in face perception. Recent studies have found that both the featural information of faces (e.g., eyes, nose, and mouth) and the configural information of faces (i.e., spatial relation among features) are encoded in the FFA. However, little is known about whether the featural information is encoded independent of or combined with the configural information in the FFA. Here we used multi-voxel pattern analysis to examine holistic representation of faces in the FFA by correlating spatial patterns of activation with behavioral performance in discriminating face parts with face configurations either present or absent. Behaviorally, the absence of face configurations (versus presence) impaired discrimination of face parts, suggesting a holistic representation in the brain. Neurally, spatial patterns of activation in the FFA were more similar among correct than incorrect trials only when face parts were presented in a veridical face configuration. In contrast, spatial patterns of activation in the occipital face area, as well as the object-selective lateral occipital complex, were more similar among correct than incorrect trials regardless of the presence of veridical face configurations. This finding suggests that in the FFA faces are represented not on the basis of individual parts but in terms of the whole that emerges from the parts.

  15. Rapid specimen preparation to improve the throughput of electron microscopic volume imaging for three-dimensional analyses of subcellular ultrastructures with serial block-face scanning electron microscopy.

    Science.gov (United States)

    Thai, Truc Quynh; Nguyen, Huy Bang; Saitoh, Sei; Wu, Bao; Saitoh, Yurika; Shimo, Satoshi; Elewa, Yaser Hosny Ali; Ichii, Osamu; Kon, Yasuhiro; Takaki, Takashi; Joh, Kensuke; Ohno, Nobuhiko

    2016-09-01

    Serial block-face imaging using scanning electron microscopy enables rapid observations of three-dimensional ultrastructures in a large volume of biological specimens. However, such imaging usually requires days for sample preparation to reduce charging and increase image contrast. In this study, we report a rapid procedure to acquire serial electron microscopic images within 1 day for three-dimensional analyses of subcellular ultrastructures. This procedure is based on serial block-face with two major modifications, including a new sample treatment device and direct polymerization on the rivets, to reduce the time and workload needed. The modified procedure without uranyl acetate can produce tens of embedded samples observable under serial block-face scanning electron microscopy within 1 day. The serial images obtained are similar to the block-face images acquired by common procedures, and are applicable to three-dimensional reconstructions at a subcellular resolution. Using this approach, regional immune deposits and the double contour or heterogeneous thinning of basement membranes were observed in the glomerular capillary loops of an autoimmune nephropathy model. These modifications provide options to improve the throughput of three-dimensional electron microscopic examinations, and will ultimately be beneficial for the wider application of volume imaging in life science and clinical medicine.

  16. New enhancement of infrared image based on human visual system

    Institute of Scientific and Technical Information of China (English)

    Tianhe Yu; Qiuming Li; Jingmin Dai

    2009-01-01

    Infrared images are firstly analyzed using the multifractal theory so that the singularity of each pixel can be extracted from the images. The multifractal spectrum is then estimated, which can reflect overall characteristic of an infrared image. Thus the edge and texture of an infrared image can be accurately extracted based on the singularity of each pixel and the multifractal spectrum. Finally the edge pixels are classified and enhanced in accordance with the sensitivity of human visual system to the edge profile of an infrared image. The experimental results obtained by this approach are compared with those obtained by other methods. It is found that the proposed approach can be used to highlight the edge area of an infrared image to make an infrared image more suitable for observation by human eyes.

  17. Putative sex-specific human pheromones do not affect gender perception, attractiveness ratings or unfaithfulness judgements of opposite sex faces

    Science.gov (United States)

    Hare, Robin M.; Schlatter, Sophie; Rhodes, Gillian

    2017-01-01

    Debate continues over the existence of human sex pheromones. Two substances, androstadienone (AND) and estratetraenol (EST), were recently reported to signal male and female gender, respectively, potentially qualifying them as human sex pheromones. If AND and EST truly signal gender, then they should affect reproductively relevant behaviours such as mate perception. To test this hypothesis, heterosexual, Caucasian human participants completed two computer-based tasks twice, on two consecutive days, exposed to a control scent on one day and a putative pheromone (AND or EST) on the other. In the first task, 46 participants (24 male, 22 female) indicated the gender (male or female) of five gender-neutral facial morphs. Exposure to AND or EST had no effect on gender perception. In the second task, 94 participants (43 male, 51 female) rated photographs of opposite-sex faces for attractiveness and probable sexual unfaithfulness. Exposure to the putative pheromones had no effect on either attractiveness or unfaithfulness ratings. These results are consistent with those of other experimental studies and reviews that suggest AND and EST are unlikely to be human pheromones. The double-blind nature of the current study lends increased support to this conclusion. If human sex pheromones affect our judgements of gender, attractiveness or unfaithfulness from faces, they are unlikely to be AND or EST.

  18. En Face Optical Coherence Tomography Imaging of the Choroid in a Case with Central Serous Chorioretinopathy during the Course of Vogt-Koyanagi-Harada Disease: A Case Report

    Directory of Open Access Journals (Sweden)

    Yuki Komuku

    2015-12-01

    Full Text Available Vogt-Koyanagi-Harada (VKH) disease and central serous chorioretinopathy (CSC) both cause serous retinal detachment; however, the treatment of each disease is totally different. Steroids treat VKH but worsen CSC; therefore, it is important to distinguish these diseases. Here, we report a case of CSC that was diagnosed by en face optical coherence tomography (OCT) imaging during the course of VKH disease. A 50-year-old man was referred with blurring of vision in his right eye. Fundus examination showed bilateral optic disc swelling and macular fluid in the right eye. OCT showed a thick choroid, and en face OCT images depicted a blurry choroid without clear delineation of the choroidal vessels. Combined with angiography findings, this patient was diagnosed with VKH disease and treated with steroids. Promptly, the fundus abnormalities resolved with the reduction of the choroidal thickness, and the choroidal vessels became visible on the en face images. During the tapering of the steroid, serous macular detachment in the right eye recurred several times. Steroid treatment was effective at first; however, at the fourth appearance of submacular fluid, the patient did not respond. At that time, the choroidal vessels on the en face OCT images were clear, which significantly differed from the images at the time of recurrence of VKH. Angiography also suggested CSC-like leakage. The tapering of the steroids was effective in resolving the fluid. Secondary CSC may develop in an eye with VKH after steroid treatment. En face OCT observation of the choroid may be helpful to distinguish each condition.

  19. [INVITED] Non-intrusive optical imaging of face to probe physiological traits in Autism Spectrum Disorder

    Science.gov (United States)

    Samad, Manar D.; Bobzien, Jonna L.; Harrington, John W.; Iftekharuddin, Khan M.

    2016-03-01

    Autism Spectrum Disorders (ASD) can impair non-verbal communication including the variety and extent of facial expressions in social and interpersonal communication. These impairments may appear as differential traits in the physiology of facial muscles of an individual with ASD when compared to a typically developing individual. The differential traits in the facial expressions, as shown by facial muscle-specific changes (also known as 'facial oddity' for subjects with ASD), may be measured visually. However, this mode of measurement may not discern the subtlety in facial oddity distinctive to ASD. Earlier studies have used intrusive electrophysiological sensors on the facial skin to gauge facial muscle actions from quantitative physiological data. This study demonstrates, for the first time in the literature, novel quantitative measures for facial oddity recognition using non-intrusive facial imaging sensors such as video and 3D optical cameras. An Institutional Review Board (IRB)-approved pilot study was conducted on a group of individuals consisting of eight participants with ASD and eight typically developing participants in a control group to capture their facial images in response to visual stimuli. The proposed computational techniques and statistical analyses reveal a higher mean of actions in the facial muscles of the ASD group versus the control group. The facial muscle-specific evaluation reveals intense yet asymmetric facial responses as facial oddity in participants with ASD. This finding about the facial oddity may objectively define measurable differential markers in the facial expressions of individuals with ASD.

  20. Choosing parameters of kernel subspace LDA for recognition of face images under pose and illumination variations.

    Science.gov (United States)

    Huang, Jian; Yuen, Pong C; Chen, Wen-Sheng; Lai, Jian Huang

    2007-08-01

    This paper addresses the problem of automatically tuning multiple kernel parameters for the kernel-based linear discriminant analysis (LDA) method. The kernel approach has been proposed to solve face recognition problems under complex distribution by mapping the input space to a high-dimensional feature space. Some recognition algorithms such as the kernel principal components analysis, kernel Fisher discriminant, generalized discriminant analysis, and kernel direct LDA have been developed in the last five years. The experimental results show that the kernel-based method is a good and feasible approach to tackle the pose and illumination variations. One of the crucial factors in the kernel approach is the selection of kernel parameters, which highly affects the generalization capability and stability of the kernel-based learning methods. In view of this, we propose an eigenvalue-stability-bounded margin maximization (ESBMM) algorithm to automatically tune the multiple parameters of the Gaussian radial basis function kernel for the kernel subspace LDA (KSLDA) method, which is developed based on our previously developed subspace LDA method. The ESBMM algorithm improves the generalization capability of the kernel-based LDA method by maximizing the margin maximization criterion while maintaining the eigenvalue stability of the kernel-based LDA method. An in-depth investigation on the generalization performance on pose and illumination dimensions is performed using the YaleB and CMU PIE databases. The FERET database is also used for benchmark evaluation. Compared with the existing PCA-based and LDA-based methods, our proposed KSLDA method, with the ESBMM kernel parameter estimation algorithm, gives superior performance.
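
    The ESBMM criterion itself is not available in standard libraries, so the sketch below substitutes a much simpler surrogate: a cross-validated grid search over the Gaussian-kernel width of a KernelPCA + LDA pipeline. It only illustrates why the kernel parameter matters and how it can be tuned automatically; it is not the authors' KSLDA/ESBMM algorithm, and the data are synthetic stand-ins for flattened face images.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.decomposition import KernelPCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.random((200, 100))              # flattened face images (synthetic stand-in)
y = np.repeat(np.arange(20), 10)        # 20 identities, 10 images each

pipe = Pipeline([
    ("kpca", KernelPCA(kernel="rbf", n_components=40)),
    ("lda", LinearDiscriminantAnalysis()),
])

# Tune the RBF kernel width by cross-validated classification accuracy.
grid = GridSearchCV(pipe, {"kpca__gamma": np.logspace(-4, 0, 9)}, cv=5)
grid.fit(X, y)
print(grid.best_params_)
```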

  1. Research on Human Face Detection

    Institute of Scientific and Technical Information of China (English)

    盛仲飙

    2012-01-01

    Face detection, a key technology in face information processing, has become a research hotspot in the field of artificial intelligence in recent years. This paper briefly introduces the theoretical basis of face detection and then describes three commonly used methods: pretreatment, threshold detection and edge detection. It focuses on two key techniques of face detection, histogram threshold segmentation and edge detection, and simulates them in the MATLAB environment. The results show that only by combining edge detection with other methods can the desired detection results be achieved.
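
    The paper's simulations were done in MATLAB; the sketch below only mirrors the two techniques it highlights, histogram (Otsu) threshold segmentation and Canny edge detection, using Python/OpenCV on a synthetic stand-in image, with a trivial OR combination standing in for whatever fusion of the two maps the paper actually uses.

```python
import cv2
import numpy as np

# Synthetic stand-in for a grayscale image containing a face.
img = (np.random.default_rng(0).random((128, 128)) * 255).astype(np.uint8)

# Histogram-based threshold segmentation (Otsu chooses the threshold automatically).
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Edge detection on the same image.
edges = cv2.Canny(img, 100, 200)

# Naive combination of the two cues (illustration only).
combined = cv2.bitwise_or(binary, edges)
```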

  2. Study of Different Face Recognition Algorithms and Challenges

    Directory of Open Access Journals (Sweden)

    Uma Shankar Kurmi

    2014-03-01

    Full Text Available At present, face recognition has a wide range of applications such as security and law enforcement. Imaging conditions, orientation, pose and the presence of occlusion are major problems associated with face recognition, and the performance of face recognition systems decreases because of them. Linear discriminant analysis (LDA) or principal components analysis (PCA) is used to get better recognition results. The human face contains relevant information that can be extracted from a face model developed with the PCA technique. The PCA method uses the eigenface approach to describe face image variation. A face recognition technique that is robust to all situations is not available: some techniques are better in case of illumination, some for the pose problem and some for the occlusion problem. This paper presents some algorithms for face recognition.
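
    The eigenface idea mentioned above amounts to PCA on flattened, aligned face images followed by nearest-neighbour matching in the projected space; the sketch below shows that recipe with scikit-learn on synthetic data (a real system would use aligned face crops and a held-out probe set).

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
faces = rng.random((200, 64 * 64))       # rows = flattened, aligned face images (synthetic)

pca = PCA(n_components=30, whiten=True).fit(faces)
eigenfaces = pca.components_.reshape((30, 64, 64))   # each principal component is an "eigenface"

# Describe a probe face by its eigenface coefficients and match by nearest neighbour.
probe = faces[0:1]
gallery_coeffs = pca.transform(faces)
probe_coeffs = pca.transform(probe)
match = int(np.argmin(np.linalg.norm(gallery_coeffs - probe_coeffs, axis=1)))
print(match)   # 0: the probe matches its own gallery entry
```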

  3. Quantified Faces

    DEFF Research Database (Denmark)

    Sørensen, Mette-Marie Zacher

    2016-01-01

    Abstract: The article presents three contemporary art projects that, in various ways, thematise questions regarding numerical representation of the human face in relation to the identification of faces, for example through the use of biometric video analysis software, or DNA technology. The Dutch...... and critically examine bias in surveillance technologies, as well as scientific investigations, regarding the stereotyping mode of the human gaze. The American artist Heather Dewey-Hagborg creates three-dimensional portraits of persons she has “identified” from their garbage. Her project from 2013 entitled....... The three works are analysed with perspectives to historical physiognomy and Francis Galton's composite portraits from the 1800s. It is argued that, rather than being a statistical compression like the historical composites, contemporary statistical visual portraits (composites) are irreversible...

  4. A Statistical Nonparametric Approach of Face Recognition: Combination of Eigenface & Modified k-Means Clustering

    CERN Document Server

    Bag, Soumen; Sen, Prithwiraj; Sanyal, Gautam

    2011-01-01

    Facial expressions convey non-verbal cues, which play an important role in interpersonal relations. Automatic recognition of the human face based on facial expression can be an important component of a natural human-machine interface. It may also be used in behavioural science. Although humans can recognize faces practically without any effort, reliable face recognition by machine is a challenge. This paper presents a new approach for recognizing the face of a person considering the expressions of the same human face at different instances of time. The methodology is developed by combining the eigenface method for feature extraction with modified k-means clustering for identification of the human face. This method performs face recognition without using conventional distance-measure classifiers. Simulation results show that the proposed face recognition using k-means clustering is useful for face images with different facial expressions.

  5. Microwave non-contact imaging of subcutaneous human body tissues

    Science.gov (United States)

    Chernokalov, Alexander; Khripkov, Alexander; Cho, Jaegeol; Druchinin, Sergey

    2015-01-01

    A small-size microwave sensor is developed for non-contact imaging of a human body structure in 2D, enabling fitness and health monitoring using mobile devices. A method for human body tissue structure imaging is developed and experimentally validated. Subcutaneous fat tissue reconstruction depth of up to 70 mm and maximum fat thickness measurement error below 2 mm are demonstrated by measurements with a human body phantom and human subjects. Electrically small antennas are developed for integration of the microwave sensor into a mobile device. Usability of the developed microwave sensor for fitness applications, healthcare, and body weight management is demonstrated. PMID:26609415

  7. Fatigue pattern recognition of human face based on Gabor wavelet transform

    Institute of Scientific and Technical Information of China (English)

    成奋华; 杨海燕

    2011-01-01

    Fatigue is one of the main factors that cause traffic accidents. A new method for monitoring the fatigue state based on the Gabor wavelet transform is proposed. First, during the training phase, a frequent pattern mining algorithm is used to mine the fatigue patterns of fatigue facial image sequences. Then, during the fatigue recognition phase, the face image sequence to be detected is represented as a fused feature sequence through the Gabor wavelet transform. Finally, a classification algorithm is used for fatigue detection on the human face sequence. Simulation results on 500 fatigue images collected by the authors within one day show that the proposed algorithm achieves a 92.8% correct detection rate and a 0.02% false detection rate, outperforming the comparison methods.
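
    The paper represents each face image through Gabor wavelet responses before pattern mining; the sketch below shows a generic Gabor filter-bank feature extractor in Python/OpenCV (filter size, scale and the mean/std pooling are assumptions), while the frequent-pattern mining and classification stages are not reproduced.

```python
import cv2
import numpy as np

def gabor_features(gray, ksize=21, sigma=4.0, lambd=10.0, gamma=0.5):
    """Mean/std of responses to a small bank of Gabor filters at eight orientations."""
    feats = []
    for theta in np.arange(0, np.pi, np.pi / 8):
        kernel = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma)
        resp = cv2.filter2D(gray, cv2.CV_32F, kernel)
        feats.extend([resp.mean(), resp.std()])
    return np.array(feats)

# Stand-in for one face frame; a video yields one such vector per frame,
# and the sequence of vectors feeds the mining/classification stages.
frame = (np.random.default_rng(0).random((128, 128)) * 255).astype(np.uint8)
feature_vector = gabor_features(frame)
```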

  8. Image enhancement using thermal-visible fusion for human detection

    Science.gov (United States)

    Zaihidee, Ezrinda Mohd; Hawari Ghazali, Kamarul; Zuki Saleh, Mohd

    2017-09-01

    An increased interest in detecting human beings in video surveillance systems has emerged in recent years. Multisensory image fusion deserves more research attention due to its capability to improve the visual interpretability of an image. This study proposes a fusion technique for human detection based on a multiscale transform, using grayscale visible-light and infrared images. The samples for this study were taken from an online dataset. The images captured by the two sensors were decomposed into high- and low-frequency coefficients using the Stationary Wavelet Transform (SWT), an appropriate fusion rule was used to merge the coefficients, and the final fused image was obtained using the inverse SWT. The qualitative and quantitative results show that the proposed method is superior to the two other methods in terms of enhancement of the target region and preservation of the detail information of the image.
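
    A minimal sketch of the fusion step is given below, assuming PyWavelets is available and that the visual and thermal inputs are grayscale images of identical, even dimensions. The fusion rules here (averaging for approximation coefficients, maximum absolute value for detail coefficients) are common choices and may differ from the rules used in the paper.

        # Illustrative SWT-based fusion of a visual and a thermal image.
        import numpy as np
        import pywt

        def swt_fuse(visual, thermal, wavelet="db2", level=1):
            cv = pywt.swt2(visual.astype(float), wavelet, level=level)
            ct = pywt.swt2(thermal.astype(float), wavelet, level=level)
            fused = []
            for (cAv, (cHv, cVv, cDv)), (cAt, (cHt, cVt, cDt)) in zip(cv, ct):
                cA = (cAv + cAt) / 2.0                     # low-pass: average
                details = tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                                for a, b in ((cHv, cHt), (cVv, cVt), (cDv, cDt)))
                fused.append((cA, details))
            return pywt.iswt2(fused, wavelet)              # inverse SWT -> fused image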

  9. Human gesture recognition using three-dimensional integral imaging.

    Science.gov (United States)

    Javier Traver, V; Latorre-Carmona, Pedro; Salvador-Balaguer, Eva; Pla, Filiberto; Javidi, Bahram

    2014-10-01

    Three-dimensional (3D) integral imaging allows one to reconstruct a 3D scene, including range information, and provides sectional refocused imaging of 3D objects at different ranges. This paper explores the potential use of 3D passive sensing integral imaging for human gesture recognition tasks from sequences of reconstructed 3D video scenes. As a preliminary testbed, the 3D integral imaging sensing is implemented using an array of cameras with the appropriate algorithms for 3D scene reconstruction. Recognition experiments are performed by acquiring 3D video scenes of multiple hand gestures performed by ten people. We analyze the capability and performance of gesture recognition using 3D integral imaging representations at given distances and compare its performance with the use of standard two-dimensional (2D) single-camera videos. To the best of our knowledge, this is the first report on using 3D integral imaging for human gesture recognition.

  10. A Paradigm Shift: Detecting Human Rights Violations Through Web Images

    OpenAIRE

    Kalliatakis, Grigorios; Ehsan, Shoaib; McDonald-Maier, Klaus D.

    2017-01-01

    The growing presence of devices carrying digital cameras, such as mobile phones and tablets, combined with ever improving internet networks have enabled ordinary citizens, victims of human rights abuse, and participants in armed conflicts, protests, and disaster situations to capture and share via social media networks images and videos of specific events. This paper discusses the potential of images in human rights context including the opportunities and challenges they present. This study d...

  11. Three-dimensional visualization of the human face using DICOM data and its application to facial contouring surgery using free anterolateral thigh flap transfer.

    Science.gov (United States)

    Shimizu, Fumiaki; Uehara, Miyuki; Oatari, Miwako; Kusatsu, Manami

    2016-01-01

    One of the main challenges faced by surgeons performing reconstructive surgery in cases of facial asymmetry due to hemifacial atrophy or tumor surgery is the restoration of the natural contour of the face. Soft-tissue augmentation using free-flap transfer is one of the most commonly used methods for facial reconstruction. The most important part of a successful reconstruction is the preoperative assessment of the volume, position, and shape of the flap to be transplanted. This study focuses on three cases of facial deformity due to hemifacial progressive atrophy or tumor excision. For the preoperative assessment, digital imaging and communications in medicine (DICOM) data obtained from computed tomography was used and applied to a three-dimensional (3D) picture software program (ZedView, LEXI, Tokyo, Japan). Using computer simulation, a mirror image of the unaffected side of the face was applied to the affected side, and 3D visualization was performed. Using this procedure, a postoperative image of the face and precise shape, position, and amount of the flap that was going to be transferred was simulated preoperatively. In all cases, the postoperative shape of the face was acceptable, and a natural shape of the face could be obtained. Preoperative 3D visualization using computer simulation was helpful for estimating the reconstructive procedure and postoperative shape of the face. Using free-flap transfer, this procedure facilitates the natural shape after reconstruction of the face in facial contouring surgery.

  12. Fear of evaluation in social anxiety: mediation of attentional bias to human faces.

    Science.gov (United States)

    Sluis, Rachel A; Boschen, Mark J

    2014-12-01

    Social anxiety disorder (SAD) is a debilitating psychological disorder characterised by excessive fears of one or more social or performance situations, where there is potential for evaluation by others. A recently expanded cognitive-behavioural model of SAD emphasizes that both the fear of negative evaluation (FNE) and the fear of positive evaluation (FPE) contribute to enduring symptoms of SAD. Research also suggests that socially anxious individuals may show biases toward threat relevant stimuli, such as angry faces. The current study utilised a modified version of the pictorial dot-probe task in order to examine whether FNE and FPE mediate the relationship between social anxiety and an attentional bias. A group of 38 participants with moderate to high levels of self-reported social anxiety were tested in groups of two to four people and were advised that they would be required to deliver an impromptu speech. All participants then completed an assessment of attentional bias using angry-neutral, happy-neutral, and angry-happy face pairs. Conditions were satisfied for only one mediation model, indicating that the relationship between social anxiety and attentional avoidance of angry faces was mediated by FPE. These findings have important clinical implications for types of treatment concerning cognitive symptoms of SAD, along with advancing models of social anxiety. Limitations and ideas for future research from the current study were also discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. Human race as indicator of 3D planning of soft tissue of face and multidisciplinary approach.

    Science.gov (United States)

    Nadazdyova, A; Samohyl, M; Stefankova, E; Pintesova, S; Stanko, P

    2017-01-01

    The aim of this study was to determine the optimal parameters for 3D soft-tissue planning for orthognathic treatment by gender and to increase the effectiveness of multidisciplinary cooperation. The craniofacial parameters analysed were: nose breadth (al-al), bi-entocanthion breadth (en-en), bi-zygomatic breadth (zy-zy), bi-gonial breadth (go-go), total facial height (n-gn), mouth breadth (ch-ch), morphologic face height (sn-gn), upper-lip height (Ls-Stm), lower-lip height (Stm-Li) and pupils - mid-face (right). The statistically significant level was determined at p values maxillofacial surgeon. Gender and age influenced the variability of the following parameters: bi-gonial breadth, total facial height and morphologic face height. The soft-tissue values for the craniofacial parameters can be used to identify the surgical-orthodontic goal for patients of the Europoid race. Due to immigration and the mixing of races it is necessary to take this fact into account (Tab. 3, Fig. 1, Ref. 41).

  14. A three-dimensional study of sexual dimorphism in the human face.

    Science.gov (United States)

    Ferrario, V F; Sforza, C; Poggio, C E; Serrao, G; Miani, A

    1994-01-01

    The sexual dimorphism in three-dimensional facial form (size plus shape) was investigated in a sample of 40 men and 36 women by using Euclidean-distance matrix analysis. Subjects ranged in age from 19 to 32 years, had excellent dentitions, and had no craniocervical disorders. For each subject, 16 facial landmarks were automatically collected using a computerized system consisting of two infrared CCD cameras, real-time hardware for the recognition of markers, and software for the three-dimensional reconstruction of landmarks' x, y, z coordinates. Euclidean-distance matrix analysis confirmed the well-known size difference between adult male and female faces (men's faces being 6% to 7% larger than women's faces), while it demonstrated no significant gender differences in three-dimensional facial shape. This result contrasted with the shape differences previously found when separate two-dimensional frontal and sagittal plane projections were analyzed. It could be explained by a relative three-dimensional compensation between the different facial dimensions.

  15. Putting the face in context: Body expressions impact facial emotion processing in human infants

    Directory of Open Access Journals (Sweden)

    Purva Rajhans

    2016-06-01

    Full Text Available Body expressions exert strong contextual effects on facial emotion perception in adults. Specifically, conflicting body cues hamper the recognition of emotion from faces, as evident on both the behavioral and neural level. We examined the developmental origins of the neural processes involved in emotion perception across body and face in 8-month-old infants by measuring event-related brain potentials (ERPs). We primed infants with body postures (fearful, happy) that were followed by either congruent or incongruent facial expressions. Our results revealed that body expressions impact facial emotion processing and that incongruent body cues impair the neural discrimination of emotional facial expressions. Priming effects were associated with attentional and recognition memory processes, as reflected in a modulation of the Nc and Pc evoked at anterior electrodes. These findings demonstrate that 8-month-old infants possess neural mechanisms that allow for the integration of emotion across body and face, providing evidence for the early developmental emergence of context-sensitive facial emotion perception.

  16. Images of war: using satellite images for human rights monitoring in Turkish Kurdistan

    NARCIS (Netherlands)

    Vos, de H.; Jongerden, J.P.; Etten, van J.

    2008-01-01

    In areas of war and armed conflict it is difficult to get trustworthy and coherent information. Civil society and human rights groups often face problems of dealing with fragmented witness reports, disinformation of war propaganda, and difficult direct access to these areas. Turkish Kurdistan was us

  17. Human gene therapy and imaging in neurological diseases

    Energy Technology Data Exchange (ETDEWEB)

    Jacobs, Andreas H.; Winkler, Alexandra [Max Planck-Institute for Neurological Research, Center of Molecular Medicine (CMMC) and Department of Neurology, Cologne (Germany); MPI for Neurological Research, Laboratory for Gene Therapy and Molecular Imaging, Cologne (Germany); Castro, Maria G.; Lowenstein, Pedro [University of California Los Angeles (United States). Department of Medicine

    2005-12-01

    Molecular imaging aims to assess non-invasively disease-specific biological and molecular processes in animal models and humans in vivo. Apart from precise anatomical localisation and quantification, the most intriguing advantage of such imaging is the opportunity it provides to investigate the time course (dynamics) of disease-specific molecular events in the intact organism. Further, molecular imaging can be used to address basic scientific questions, e.g. transcriptional regulation, signal transduction or protein/protein interaction, and will be essential in developing treatment strategies based on gene therapy. Most importantly, molecular imaging is a key technology in translational research, helping to develop experimental protocols which may later be applied to human patients. Over the past 20 years, imaging based on positron emission tomography (PET) and magnetic resonance imaging (MRI) has been employed for the assessment and "phenotyping" of various neurological diseases, including cerebral ischaemia, neurodegeneration and brain gliomas. While in the past neuro-anatomical studies had to be performed post mortem, molecular imaging has ushered in the era of in vivo functional neuro-anatomy by allowing neuroscience to image structure, function, metabolism and molecular processes of the central nervous system in vivo in both health and disease. Recently, PET and MRI have been successfully utilised together in the non-invasive assessment of gene transfer and gene therapy in humans. To assess the efficiency of gene transfer, the same markers are being used in animals and humans, and have been applied for phenotyping human disease. Here, we review the imaging hallmarks of focal and disseminated neurological diseases, such as cerebral ischaemia, neurodegeneration and glioblastoma multiforme, as well as the attempts to translate gene therapy's experimental knowledge into clinical applications and the way in which this process is being

  18. The hierarchical brain network for face recognition.

    Science.gov (United States)

    Zhen, Zonglei; Fang, Huizhen; Liu, Jia

    2013-01-01

    Numerous functional magnetic resonance imaging (fMRI) studies have identified multiple cortical regions that are involved in face processing in the human brain. However, few studies have characterized the face-processing network as a functioning whole. In this study, we used fMRI to identify face-selective regions in the entire brain and then explore the hierarchical structure of the face-processing network by analyzing functional connectivity among these regions. We identified twenty-five regions mainly in the occipital, temporal and frontal cortex that showed a reliable response selective to faces (versus objects) across participants and across scan sessions. Furthermore, these regions were clustered into three relatively independent sub-networks in a face-recognition task on the basis of the strength of functional connectivity among them. The functionality of the sub-networks likely corresponds to the recognition of individual identity, retrieval of semantic knowledge and representation of emotional information. Interestingly, when the task was switched to object recognition from face recognition, the functional connectivity between the inferior occipital gyrus and the rest of the face-selective regions was significantly reduced, suggesting that this region may serve as an entry node in the face-processing network. In sum, our study provides empirical evidence for cognitive and neural models of face recognition and helps elucidate the neural mechanisms underlying face recognition at the network level.

  19. Attentional bias to affective faces and complex IAPS images in early visual cortex follows emotional cue extraction.

    Science.gov (United States)

    Bekhtereva, Valeria; Craddock, Matt; Müller, Matthias M

    2015-05-15

    Emotionally arousing stimuli are known to rapidly draw the brain's processing resources, even when they are task-irrelevant. The steady-state visual evoked potential (SSVEP) response, a neural response to a flickering stimulus which effectively allows measurement of the processing resources devoted to that stimulus, has been used to examine this process of attentional shifting. Previous studies have used a task in which participants detected periods of coherent motion in flickering random dot kinematograms (RDKs) which generate an SSVEP, and found that task-irrelevant emotional stimuli withdraw more attentional resources from the task-relevant RDKs than task-irrelevant neutral stimuli. However, it is not clear whether the emotion-related differences in the SSVEP response are conditional on higher-level extraction of emotional cues as indexed by well-known event-related potential (ERPs) components (N170, early posterior negativity, EPN), or if affective bias in competition for visual attention resources is a consequence of a time-invariant shifting process. In the present study, we used two different types of emotional distractors - IAPS pictures and facial expressions - for which emotional cue extraction occurs at different speeds, being typically earlier for faces (at ~170ms, as indexed by the N170) than for IAPS images (~220-280ms, EPN). We found that emotional modulation of attentional resources as measured by the SSVEP occurred earlier for faces (around 180ms) than for IAPS pictures (around 550ms), after the extraction of emotional cues as indexed by visual ERP components. This is consistent with emotion related re-allocation of attentional resources occurring after emotional cue extraction rather than being linked to a time-fixed shifting process. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. Trichinella spiralis in human muscle (image)

    Science.gov (United States)

    This is the parasite Trichinella spiralis in human muscle tissue. The parasite is transmitted by eating undercooked ... produce large numbers of larvae that migrate into muscle tissue. The cysts may cause muscle pain and ...

  1. Imaging of foreign bodies in the face and ultrasound guided surgical removal; Diagnostico por imagem de corpos estranhos da face e retirada cirurgica guiada por ultra-sonografia

    Energy Technology Data Exchange (ETDEWEB)

    Lima, Claudio Marcio Amaral de Oliveira; Gambin, Moises; Ribeiro, Erica Barreiros; Amarante Junior, Jose Luiz de Medeiros [Hospital Naval Marcilio Dias, Rio de Janeiro, RJ (Brazil)]. E-mail: cmaol@br.inter.net; cmaolima@hotmail.com; Monteiro, Alexandra Maria Vieira [Pontificia Univ. Catolica do Rio de Janeiro (PUC/Rio), RJ (Brazil). Curso de Pos-Graduacao em Radiologia

    2006-10-15

    The identification and surgical removal of foreign bodies is a complex procedure in medical practice, especially when the involved material is radiolucent. Technological advances in ultrasonography equipment continue to widen the field of application of this method in medical practice. The authors describe a case of ultrasound-guided surgical removal of glass fragments from the face of a patient. The foreign bodies were previously diagnosed by ultrasound and computed tomography. The guided technique proved to be safe, minimally invasive and efficient, allowing removal of all the fragments. (author)

  2. Human eye visual hyperacuity: Controlled diffraction for image resolution improvement

    Science.gov (United States)

    Lagunas, A.; Domínguez, O.; Martinez-Conde, S.; Macknik, S. L.; Del-Río, C.

    2017-09-01

    The human visual system appears to use a low number of sensors for image capture; moreover, the cones, the photoreceptors responsible for sharp central vision, are of relatively small size and area. Nonetheless, the human eye is capable of resolving fine details thanks to visual hyperacuity, and it presents impressive sensitivity and dynamic range when set against conventional digital cameras of similar characteristics. This article is based on the hypothesis that the human eye may be benefiting from diffraction to improve both image resolution and the acquisition process. The developed method involves the introduction of a controlled diffraction pattern at an initial stage, which enables the use of a limited number of sensors for capturing the image and makes possible a subsequent post-processing step to improve the final image resolution.

  3. Static Image Face Recognition Based on the Strength of PCNN

    Institute of Scientific and Technical Information of China (English)

    常莎; 邓红霞; 李海芳

    2015-01-01

    In order to reduce the influence of changes in face pose, facial expression and illumination on face recognition, a feature-extraction method based on the pulse firing strength of a Pulse Coupled Neural Network (PCNN) is employed. Different face images have different grayscale characteristics, and after a face image is fed into the PCNN model it yields an image-specific pulse intensity matrix. The experiments use the pulse intensity matrix as the facial feature and combine it with a cosine-distance classifier for face recognition. Simulation results show that the features extracted with the strength-based PCNN model capture facial detail and yield good recognition results for face images with different poses, expressions and obvious facial occlusions. The method is robust for feature extraction from complex face images.
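
    The sketch below is a rough illustration of a simplified PCNN in which the accumulated firing count of each neuron over the iterations is taken as the pulse intensity matrix used as the face feature; the paper's exact network model, parameters and cosine-distance matching are not reproduced, and all parameter values here are assumptions.

        # Simplified PCNN; img is a grayscale face image, used as the feeding input.
        import numpy as np
        from scipy.ndimage import convolve

        def pcnn_intensity(img, iterations=20, beta=0.2, aL=1.0, aE=0.3, vL=1.0, vE=20.0):
            img = np.asarray(img, dtype=float)
            img = img / (img.max() + 1e-9)                 # normalize to [0, 1]
            w = np.array([[0.5, 1.0, 0.5],
                          [1.0, 0.0, 1.0],
                          [0.5, 1.0, 0.5]])                # linking kernel
            L = np.zeros_like(img); E = np.ones_like(img); Y = np.zeros_like(img)
            intensity = np.zeros_like(img)
            for _ in range(iterations):
                L = np.exp(-aL) * L + vL * convolve(Y, w, mode="constant")
                U = img * (1.0 + beta * L)                 # internal activity
                Y = (U > E).astype(float)                  # pulse output
                E = np.exp(-aE) * E + vE * Y               # dynamic threshold
                intensity += Y                             # accumulate firing counts
            return intensity                               # pulse intensity matrix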

  4. The study of face image preprocessing based on OpenCV

    Institute of Scientific and Technical Information of China (English)

    梁永霖

    2012-01-01

    The collected face images are preprocessed and used for training in order to improve their visual quality and clarity and to make them more amenable to computer processing, facilitating image segmentation and edge detection and thereby improving the accuracy of face recognition, in preparation for feature extraction and identification. The PCA face recognition method is used because it is simple to implement and gives a high recognition rate, and OpenCV provides implementations of many common algorithms in image processing and computer vision. The experimental results show that, after preprocessing, face recognition is more accurate and faster.
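
    A hedged sketch of a typical OpenCV preprocessing chain of the kind the abstract describes (face cropping, histogram equalization, smoothing, edge detection) is shown below; the exact steps and parameters used by the author are not specified in the abstract, so those chosen here are assumptions.

        # Illustrative preprocessing before PCA-based recognition.
        import cv2

        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

        def preprocess(bgr_image, size=(92, 112)):
            gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
            faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            if len(faces) == 0:
                return None
            x, y, w, h = faces[0]
            face = cv2.resize(gray[y:y + h, x:x + w], size)
            face = cv2.equalizeHist(face)                  # improve contrast
            face = cv2.GaussianBlur(face, (3, 3), 0)       # suppress noise
            edges = cv2.Canny(face, 50, 150)               # optional edge map
            return face, edges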

  5. Human Pose Estimation from Monocular Images: A Comprehensive Survey

    Directory of Open Access Journals (Sweden)

    Wenjuan Gong

    2016-11-01

    Full Text Available Human pose estimation refers to the estimation of the location of body parts and how they are connected in an image. Human pose estimation from monocular images has wide applications (e.g., image indexing. Several surveys on human pose estimation can be found in the literature, but they focus on a certain category; for example, model-based approaches or human motion analysis, etc. As far as we know, an overall review of this problem domain has yet to be provided. Furthermore, recent advancements based on deep learning have brought novel algorithms for this problem. In this paper, a comprehensive survey of human pose estimation from monocular images is carried out including milestone works and recent advancements. Based on one standard pipeline for the solution of computer vision problems, this survey splits the problem into several modules: feature extraction and description, human body models, and modeling methods. Problem modeling methods are approached based on two means of categorization in this survey. One way to categorize includes top-down and bottom-up methods, and another way includes generative and discriminative methods. Considering the fact that one direct application of human pose estimation is to provide initialization for automatic video surveillance, there are additional sections for motion-related methods in all modules: motion features, motion models, and motion-based methods. Finally, the paper also collects 26 publicly available data sets for validation and provides error measurement methods that are frequently used.

  6. Editorial Commentary: Lesions of the Superior Labrum From Anterior to Posterior (SLAP) Are a Slap in the Face to the Traditional Trinity of History, Examination, and Imaging.

    Science.gov (United States)

    Lubowitz, James H

    2015-12-01

    Superior labrum from anterior to posterior (SLAP) lesions are a slap in the face to the revered trinity of history, physical examination, and imaging. SLAP lesions are difficult to diagnose, and arthroscopy is not only the gold standard, but the current method on which expert arthroscopic shoulder subspecialists rely.

  7. Comparing Face Detection and Recognition Techniques

    OpenAIRE

    Korra, Jyothi

    2016-01-01

    This paper implements and compares different techniques for face detection and recognition. The first task is face detection, finding where the face is located in an image; the second is face recognition, identifying the person. We study three techniques in this paper: face detection using a self-organizing map (SOM), face recognition by projection and nearest neighbor, and face recognition using an SVM.

  8. Investigating Human Evolution Using Digital Imaging & Craniometry

    Science.gov (United States)

    Robertson, John C.

    2007-01-01

    Human evolution is an important and intriguing area of biology. The significance of evolution as a component of biology curricula, at all levels, can not be overstated; the need to make the most of opportunities to effectively educate students in evolution as a central and unifying realm of biology is paramount. Developing engaging laboratory or…

  10. Implementing Tumor Detection and Area Calculation in Mri Image of Human Brain Using Image Processing Techniques

    OpenAIRE

    Sunil L. Bangare; Madhura Patil

    2015-01-01

    This paper is based on research on human brain tumors using MRI to capture the images. In the proposed work, the brain tumor area is calculated to define the stage, or level of seriousness, of the tumor. Image processing techniques are used for the tumor area calculation and neural network algorithms for the tumor position calculation. As a further advancement, classification of the tumor based on a few parameters is also expected. Proposed wor...
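
    As a minimal illustration of the area-calculation step only, the sketch below thresholds a segmented MRI slice, keeps the largest connected component and converts its pixel count into a physical area; the paper's own segmentation and neural-network stages are not reproduced, and the threshold and pixel spacing are assumed inputs.

        # Illustrative tumor area estimate from a single MRI slice.
        import numpy as np
        from scipy import ndimage

        def tumor_area_mm2(slice_img, threshold, pixel_spacing_mm=(1.0, 1.0)):
            mask = slice_img > threshold
            labels, n = ndimage.label(mask)
            if n == 0:
                return 0.0
            sizes = ndimage.sum(mask, labels, range(1, n + 1))
            largest = labels == (np.argmax(sizes) + 1)     # assume tumor = largest blob
            return largest.sum() * pixel_spacing_mm[0] * pixel_spacing_mm[1]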

  11. Reduction of Feature Vectors Using Rough Set Theory for Human Face Recognition

    CERN Document Server

    Bhattacharjee, Debotosh; Nasipuri, Mita; Kundu, M

    2010-01-01

    In this paper we describe a procedure to reduce the size of the input feature vector. A complex pattern recognition problem like face recognition involves input feature vectors of huge dimension. To reduce that dimension we first use eigenspace projection (also called Principal Component Analysis), which is basically a transformation of the space. To reduce it further, we apply a feature selection method that retains only indispensable features in the final feature vectors; features that are not selected are removed as redundant or superfluous. For the selection of features we use the concepts of reduct and core from rough set theory. This method has shown very good performance, and it is worth mentioning that in some cases the recognition rate increases as the feature vector dimension decreases.
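
    The sketch below illustrates the two-stage reduction in spirit only: eigenspace projection followed by a feature-selection step. A mutual-information ranking is used here as a stand-in for the rough-set reduct/core computation, which is not reproduced; X is assumed to be an array of flattened face images and y the identity labels.

        # Illustrative two-stage feature reduction (PCA + selection).
        from sklearn.decomposition import PCA
        from sklearn.feature_selection import SelectKBest, mutual_info_classif

        def reduce_features(X, y, n_components=100, k_keep=30):
            pca = PCA(n_components=n_components).fit(X)
            Z = pca.transform(X)                           # eigenspace projection
            selector = SelectKBest(mutual_info_classif, k=k_keep).fit(Z, y)
            return pca, selector, selector.transform(Z)    # reduced feature vectors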

  12. Toward Perceiving Robots as Humans: Three Handshake Models Face the Turing-Like Handshake Test.

    Science.gov (United States)

    Avraham, G; Nisky, I; Fernandes, H L; Acuna, D E; Kording, K P; Loeb, G E; Karniel, A

    2012-01-01

    In the Turing test a computer model is deemed to "think intelligently" if it can generate answers that are indistinguishable from those of a human. We developed an analogous Turing-like handshake test to determine if a machine can produce similarly indistinguishable movements. The test is administered through a telerobotic system in which an interrogator holds a robotic stylus and interacts with another party - artificial or human with varying levels of noise. The interrogator is asked which party seems to be more human. Here, we compare the human-likeness levels of three different models for handshake: (1) Tit-for-Tat model, (2) λ model, and (3) Machine Learning model. The Tit-for-Tat and the Machine Learning models generated handshakes that were perceived as the most human-like among the three models that were tested. Combining the best aspects of each of the three models into a single robotic handshake algorithm might allow us to advance our understanding of the way the nervous system controls sensorimotor interactions and further improve the human-likeness of robotic handshakes.

  13. Quality labeled faces in the wild (QLFW): a database for studying face recognition in real-world environments

    Science.gov (United States)

    Karam, Lina J.; Zhu, Tong

    2015-03-01

    The varying quality of face images is an important challenge that limits the effectiveness of face recognition technology when applied in real-world applications. Existing face image databases do not consider the effect of distortions that commonly occur in real-world environments. This database (QLFW) represents an initial attempt to provide a set of labeled face images spanning the wide range of quality, from no perceived impairment to strong perceived impairment for face detection and face recognition applications. Types of impairment include JPEG2000 compression, JPEG compression, additive white noise, Gaussian blur and contrast change. Subjective experiments are conducted to assess the perceived visual quality of faces under different levels and types of distortions and also to assess the human recognition performance under the considered distortions. One goal of this work is to enable automated performance evaluation of face recognition technologies in the presence of different types and levels of visual distortions. This will consequently enable the development of face recognition systems that can operate reliably on real-world visual content in the presence of real-world visual distortions. Another goal is to enable the development and assessment of visual quality metrics for face images and for face detection and recognition applications.
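
    The sketch below shows one way the listed distortion types could be generated for test images; the levels and implementations are illustrative and are not the database's actual generation protocol (JPEG2000 is omitted for brevity). The input img is assumed to be a PIL image.

        # Illustrative generation of common image distortions.
        import io
        import numpy as np
        from PIL import Image, ImageFilter, ImageEnhance

        def jpeg_compress(img, quality=10):
            buf = io.BytesIO()
            img.save(buf, format="JPEG", quality=quality)
            buf.seek(0)
            return Image.open(buf)

        def add_white_noise(img, sigma=25):
            arr = np.asarray(img).astype(float)
            noisy = np.clip(arr + np.random.normal(0, sigma, arr.shape), 0, 255)
            return Image.fromarray(noisy.astype(np.uint8))

        def gaussian_blur(img, radius=3):
            return img.filter(ImageFilter.GaussianBlur(radius))

        def contrast_change(img, factor=0.4):
            return ImageEnhance.Contrast(img).enhance(factor)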

  14. Fetal magnetic resonance imaging and human genetics

    Energy Technology Data Exchange (ETDEWEB)

    Hengstschlaeger, Markus [Medical Genetics, Obstetrics and Gynecology, Medical University of Vienna, Waehringer Guertel 18-20, 1090 Vienna (Austria)]. E-mail: markus.hengstschlaeger@meduniwien.ac.at

    2006-02-15

    The use of fetal magnetic resonance imaging (MRI), in addition to prenatal genetic testing and sonography, has the potential to improve prenatal diagnosis of genetic disorders. MRI plays an important role in the evaluation of fetal abnormalities and malformations. Fetal MRI often enables a differential diagnosis, a determination of the extent of the disorder, the prognosis, and an improvement in therapeutic management. For counseling of parents, as well as to basically understand how genetic aberrations affect fetal development, it is of great importance to correlate different genotypes with fetal MRI data.

  15. Multilevel depth and image fusion for human activity detection.

    Science.gov (United States)

    Ni, Bingbing; Pei, Yong; Moulin, Pierre; Yan, Shuicheng

    2013-10-01

    Recognizing complex human activities usually requires the detection and modeling of individual visual features and the interactions between them. Current methods only rely on the visual features extracted from 2-D images, and therefore often lead to unreliable salient visual feature detection and inaccurate modeling of the interaction context between individual features. In this paper, we show that these problems can be addressed by combining data from a conventional camera and a depth sensor (e.g., Microsoft Kinect). We propose a novel complex activity recognition and localization framework that effectively fuses information from both grayscale and depth image channels at multiple levels of the video processing pipeline. In the individual visual feature detection level, depth-based filters are applied to the detected human/object rectangles to remove false detections. In the next level of interaction modeling, 3-D spatial and temporal contexts among human subjects or objects are extracted by integrating information from both grayscale and depth images. Depth information is also utilized to distinguish different types of indoor scenes. Finally, a latent structural model is developed to integrate the information from multiple levels of video processing for an activity detection. Extensive experiments on two activity recognition benchmarks (one with depth information) and a challenging grayscale + depth human activity database that contains complex interactions between human-human, human-object, and human-surroundings demonstrate the effectiveness of the proposed multilevel grayscale + depth fusion scheme. Higher recognition and localization accuracies are obtained relative to the previous methods.

  16. Self-face recognition in social context.

    Science.gov (United States)

    Sugiura, Motoaki; Sassa, Yuko; Jeong, Hyeonjeong; Wakusawa, Keisuke; Horie, Kaoru; Sato, Shigeru; Kawashima, Ryuta

    2012-06-01

    The concept of "social self" is often described as a representation of the self-reflected in the eyes or minds of others. Although the appearance of one's own face has substantial social significance for humans, neuroimaging studies have failed to link self-face recognition and the likely neural substrate of the social self, the medial prefrontal cortex (MPFC). We assumed that the social self is recruited during self-face recognition under a rich social context where multiple other faces are available for comparison of social values. Using functional magnetic resonance imaging (fMRI), we examined the modulation of neural responses to the faces of the self and of a close friend in a social context. We identified an enhanced response in the ventral MPFC and right occipitoparietal sulcus in the social context specifically for the self-face. Neural response in the right lateral parietal and inferior temporal cortices, previously claimed as self-face-specific, was unaffected for the self-face but unexpectedly enhanced for the friend's face in the social context. Self-face-specific activation in the pars triangularis of the inferior frontal gyrus, and self-face-specific reduction of activation in the left middle temporal gyrus and the right supramarginal gyrus, replicating a previous finding, were not subject to such modulation. Our results thus demonstrated the recruitment of a social self during self-face recognition in the social context. At least three brain networks for self-face-specific activation may be dissociated by different patterns of response-modulation in the social context, suggesting multiple dynamic self-other representations in the human brain.

  17. Imaging Monoamine Oxidase in the Human Brain

    Energy Technology Data Exchange (ETDEWEB)

    Fowler, J. S.; Volkow, N. D.; Wang, G-J.; Logan, Jean

    1999-11-10

    Positron emission tomography (PET) studies mapping monoamine oxidase in the human brain have been used to measure the turnover rate for MAO B; to determine the minimum effective dose of a new MAO inhibitor drug lazabemide and to document MAO inhibition by cigarette smoke. These studies illustrate the power of PET and radiotracer chemistry to measure normal biochemical processes and to provide information on the effect of drug exposure on specific molecular targets.

  18. En-face optical coherence tomography revival

    Science.gov (United States)

    Bradu, Adrian; Kapinchev, Konstantin; Barnes, Frederick; Podoleanu, Adrian Gh.

    2016-03-01

    Quite recently, we introduced a novel Optical Coherence Tomography (OCT) method, termed Master Slave OCT (MS-OCT), especially to deliver en-face images. MS-OCT operates like a time domain OCT, selecting signal from a selected depth only while scanning the laser beam across the sample. Time domain OCT allows real time production of an en-face image, although relatively slowly. As a major advance, the Master Slave method allows collection of signals from any number of depths, as required by the user. MS-OCT is an OCT method that does not require resampling of data and can be used to deliver en-face images from several depths simultaneously. However, as the MS-OCT method requires important computational resources, the number of multiple depth en-face images produced in real-time is limited. Here, we demonstrate that, by taking advantage of the parallel processing feature of the MS-OCT technology and harnessing the capabilities of graphics processing units (GPUs), information from 384 depth positions can be acquired in one raster with real time display of 40 en-face OCT images. These exhibit comparable resolution and sensitivity to the images produced using the traditional Fourier domain based method. The GPU facilitates versatile real time selection of parameters, such as the depth positions of the 40 images out of a set of 384 depth locations, as well as their axial resolution. Here, we present, in parallel with the 40 en-face OCT images of a human tooth, a confocal microscopy lookalike image, together with two B-scan OCT images along rectangular directions.

  19. Modulation of face-sensitive event-related potentials by canonical and distorted human faces: the role of vertical symmetry and up-down featural arrangement.

    Science.gov (United States)

    Macchi Cassia, Viola; Kuefner, Dana; Westerlund, Alissa; Nelson, Charles A

    2006-08-01

    This study examined the sensitivity of early face-sensitive event-related potential (ERP) components to the disruption of two structural properties embedded in faces, namely, "up-down featural arrangement" and "vertical symmetry." Behavioral measures and ERPs were recorded as adults made an orientation judgment for canonical faces and distorted faces that had been manipulated for either or both of the mentioned properties. The P1, the N170, and the vertex positive potential (VPP) exhibited a similar gradient in sensitivity to the two investigated properties, in that they all showed a linear increase in amplitude or latency as the properties were selectively disrupted in the order of (1) up-down featural arrangement, (2) vertical symmetry, and (3) both up-down featural arrangement and vertical symmetry. Exceptions to this finding were seen for the amplitudes of the N170 and VPP, which were largest for the stimulus in which solely vertical symmetry was disrupted. Interestingly, the enhanced amplitudes of the N170 and VPP are consistent with a drop in behavioral performance on the orientation judgment for this stimulus.

  20. [Human physiology: images and practices of the reflex].

    Science.gov (United States)

    Wübben, Yvonne

    2010-01-01

    The essay examines the function of visualizations and practices in the formation of the reflex concept from Thomas Willis to Marshall Hall. It focuses on the specific form of reflex knowledge that images and practices can contain. In addition, the essay argues that it is through visual representations and experimental practices that technical knowledge is transferred to the field of human reflex physiology. When using technical metaphors in human physiology authors often seem to feel obliged to draw distinctions between humans, machines and animals. On closer scrutiny, these distinctions sometimes fail to establish firm borders between the human and the technical.

  1. Diagnose human colonic tissues by terahertz near-field imaging

    Science.gov (United States)

    Chen, Hua; Ma, Shihua; Wu, Xiumei; Yang, Wenxing; Zhao, Tian

    2015-03-01

    Based on a terahertz (THz) pipe-based near-field imaging system, we demonstrate the capability of THz imaging to diagnose freshly surgically excised human colonic tissues. Through THz near-field scanning the absorbance of the colonic tissues, the acquired images can clearly distinguish cancerous tissues from healthy tissues fast and automatically without pathological hematoxylin and eosin stain diagnosis. A statistical study on 58 specimens (20 healthy tissues and 38 tissues with tumor) from 31 patients (mean age: 59 years; range: 46 to 79 years) shows that the corresponding diagnostic sensitivity and specificity on colonic tissues are both 100%. Due to its capability to perform quantitative analysis, our study indicates the potential of the THz pipe-based near-field imaging for future automation on human tumor pathological examinations.

  2. Variation at genes influencing facial morphology are not associated with developmental imprecision in human faces.

    Directory of Open Access Journals (Sweden)

    Sonja Windhager

    Full Text Available Facial asymmetries are commonly used as a proxy for human developmental imprecision resulting from inbreeding, and thus reduced genetic heterozygosity. Several environmental factors influence human facial asymmetry (e.g., health care, parasites), but the generalizability of findings on genetic stressors has been limited in humans by sample characteristics (island populations, endogamy) and indirect genetic assessment (inference from pedigrees). In a sample of 3215 adult humans from the Rotterdam Study, we therefore studied the relationship of facial asymmetry, estimated from nine mid-facial landmarks, with genetic variation at 102 single nucleotide polymorphism (SNP) loci recently associated with facial shape variation. We further tested whether the degree of individual heterozygosity is negatively correlated with facial asymmetry. An ANOVA tree regression did not identify any SNP relating to either fluctuating asymmetry or total asymmetry. In a general linear model, only age and sex, but neither heterozygosity nor any SNP previously reported to covary with facial shape, were significantly related to total or fluctuating asymmetry of the midface. Our study does not corroborate the common assumption in evolutionary and behavioral biology that morphological asymmetries reflect heterozygosity. Our results, however, may be affected by a relatively small degree of inbreeding, a relatively stable environment, and an advanced age in the Rotterdam sample. Further large-scale genetic studies, including gene expression studies, are necessary to validate the genetic and developmental origin of morphological asymmetries.

  3. Silhouette extraction from human gait images sequence using cosegmentation

    Science.gov (United States)

    Chen, Jinyan; Zhang, Yi

    2012-11-01

    Gait-based human identification is very useful for automatic person recognition through visual surveillance and has attracted more and more researchers. A key step in gait-based human identification is to extract the human silhouette from an image sequence. Current silhouette extraction methods are mainly based on simple color subtraction, and they perform poorly when the color of some body parts is similar to the background. In this paper a cosegmentation-based human silhouette extraction method is proposed. Cosegmentation is typically defined as the task of jointly segmenting "something similar" in a given set of images. The human gait image sequence can be divided into several step cycles, and every step cycle consists of 10-15 frames. The frames in a human gait image sequence have the following similarities: every frame is similar to the next or previous frame; every frame is similar to the corresponding frame in the next or previous step cycle; and every pixel can find similar pixels in other frames. The process of cosegmentation-based human silhouette extraction can be described as follows: initially, only points with high contrast to the background are used as foreground kernel points and points in the background are used as background kernel points; then points similar to the foreground points are added to the foreground set and points similar to the background points are added to the background set. The definition of similarity considers the context of each point. Experimental results show that our method performs better than traditional human silhouette extraction methods. Keywords: Human gait
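
    The sketch below loosely mirrors the seeding idea described above, using OpenCV's GrabCut as a stand-in for the cosegmentation: pixels with high contrast to a background frame seed the foreground, low-contrast pixels seed the background, and similar pixels are then absorbed into each set. It is an approximation for illustration, not the authors' algorithm, and the thresholds are assumptions.

        # Illustrative seeded silhouette extraction with GrabCut.
        import cv2
        import numpy as np

        def silhouette(frame_bgr, background_bgr, high_thresh=60, low_thresh=15):
            diff = cv2.absdiff(frame_bgr, background_bgr).max(axis=2)
            mask = np.full(diff.shape, cv2.GC_PR_BGD, np.uint8)
            mask[diff > low_thresh] = cv2.GC_PR_FGD        # uncertain pixels
            mask[diff > high_thresh] = cv2.GC_FGD          # confident foreground seeds
            mask[diff <= low_thresh] = cv2.GC_BGD          # confident background seeds
            bgd = np.zeros((1, 65), np.float64)
            fgd = np.zeros((1, 65), np.float64)
            cv2.grabCut(frame_bgr, mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)
            fg = (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)
            return (fg * 255).astype(np.uint8)             # binary silhouette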

  4. Automatic 2D-to-3D video conversion by monocular depth cues fusion and utilizing human face landmarks

    Science.gov (United States)

    Fard, Mani B.; Bayazit, Ulug

    2013-12-01

    In this paper, we propose a hybrid 2D-to-3D video conversion system to recover the 3D structure of the scene. Depending on the scene characteristics, geometric or height depth information is adopted to form the initial depth map. This depth map is fused with color-based depth cues to construct the final depth map of the scene background. The depths of the foreground objects are estimated after their classification into human and non-human regions. Specifically, the depth of a non-human foreground object is directly calculated from the depth of the region behind it in the background. To acquire more accurate depth for the regions containing a human, the estimation of the distance between face landmarks is also taken into account. Finally, the computed depth information of the foreground regions is superimposed on the background depth map to generate the complete depth map of the scene which is the main goal in the process of converting 2D video to 3D.

  5. Unaware person recognition from the body when face identification fails.

    Science.gov (United States)

    Rice, Allyson; Phillips, P Jonathon; Natu, Vaidehi; An, Xiaobo; O'Toole, Alice J

    2013-11-01

    How does one recognize a person when face identification fails? Here, we show that people rely on the body but are unaware of doing so. State-of-the-art face-recognition algorithms were used to select images of people with almost no useful identity information in the face. Recognition of the face alone in these cases was near chance level, but recognition of the person was accurate. Accuracy in identifying the person without the face was identical to that in identifying the whole person. Paradoxically, people reported relying heavily on facial features over noninternal face and body features in making their identity decisions. Eye movements indicated otherwise, with gaze duration and fixations shifting adaptively toward the body and away from the face when the body was a better indicator of identity than the face. This shift occurred with no cost to accuracy or response time. Human identity processing may be partially inaccessible to conscious awareness.

  6. Awake fMRI reveals a specialized region in dog temporal cortex for face processing

    Science.gov (United States)

    Dilks, Daniel D.; Cook, Peter; Weiller, Samuel K.; Berns, Helen P.; Spivak, Mark

    2015-01-01

    Recent behavioral evidence suggests that dogs, like humans and monkeys, are capable of visual face recognition. But do dogs also exhibit specialized cortical face regions similar to humans and monkeys? Using functional magnetic resonance imaging (fMRI) in six dogs trained to remain motionless during scanning without restraint or sedation, we found a region in the canine temporal lobe that responded significantly more to movies of human faces than to movies of everyday objects. Next, using a new stimulus set to investigate face selectivity in this predefined candidate dog face area, we found that this region responded similarly to images of human faces and dog faces, yet significantly more to both human and dog faces than to images of objects. Such face selectivity was not found in dog primary visual cortex. Taken together, these findings: (1) provide the first evidence for a face-selective region in the temporal cortex of dogs, which cannot be explained by simple low-level visual feature extraction; (2) reveal that neural machinery dedicated to face processing is not unique to primates; and (3) may help explain dogs’ exquisite sensitivity to human social cues. PMID:26290784

  7. Human Posture Recognition Based on Images Captured by the Kinect Sensor

    Directory of Open Access Journals (Sweden)

    Wen-June Wang

    2016-03-01

    Full Text Available In this paper we combine several image processing techniques with the depth images captured by a Kinect sensor to successfully recognize the five distinct human postures of sitting, standing, stooping, kneeling, and lying. The proposed recognition procedure first uses background subtraction on the depth image to extract a silhouette contour of a human. Then, a horizontal projection of the silhouette contour is employed to ascertain whether or not the human is kneeling. If the figure is not kneeling, the star skeleton technique is applied to the silhouette contour to obtain its feature points. We can then use the feature points together with the centre of gravity to calculate the feature vectors and depth values of the body. Next, we input the feature vectors and the depth values into a pre-trained LVQ (learning vector quantization) neural network; the outputs of this will determine the postures of sitting (or standing), stooping, and lying. Lastly, if an output indicates sitting or standing, one further, similar feature identification technique is needed to confirm this output. Based on the results of many experiments, using the proposed method, the rate of successful recognition is higher than 97% in the test data, even though the subjects of the experiments may not have been facing the Kinect sensor and may have had different statures. The proposed method can be called a "hybrid recognition method", as many techniques are combined in order to achieve a very high recognition rate paired with a very short processing time.

  8. Imaging Cytometry of Human Leukocytes with Third Harmonic Generation Microscopy

    Science.gov (United States)

    Wu, Cheng-Ham; Wang, Tzung-Dau; Hsieh, Chia-Hung; Huang, Shih-Hung; Lin, Jong-Wei; Hsu, Szu-Chun; Wu, Hau-Tieng; Wu, Yao-Ming; Liu, Tzu-Ming

    2016-11-01

    Based on third-harmonic-generation (THG) microscopy and a k-means clustering algorithm, we developed a label-free imaging cytometry method to differentiate and determine the types of human leukocytes. According to the size and average intensity of cells in THG images, in a two-dimensional scatter plot, the neutrophils, monocytes, and lymphocytes in peripheral blood samples from healthy volunteers were clustered into three differentiable groups. Using these features in THG images, we could count the number of each of the three leukocyte types both in vitro and in vivo. The THG imaging-based counting results agreed well with conventional blood count results. In the future, we believe that the combination of this THG microscopy-based imaging cytometry approach with advanced texture analysis of sub-cellular features can differentiate and count more types of blood cells with smaller quantities of blood.
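
    A small sketch of the clustering step is given below: each segmented cell contributes a (size, mean THG intensity) pair, and k-means with k = 3 separates the three leukocyte groups. Cell segmentation and the mapping from cluster index to cell type are assumed to be handled elsewhere; this is an illustration, not the authors' implementation.

        # Illustrative k-means clustering on per-cell size and intensity features.
        import numpy as np
        from sklearn.cluster import KMeans

        def cluster_leukocytes(cell_sizes, cell_intensities):
            X = np.column_stack([cell_sizes, cell_intensities]).astype(float)
            X = (X - X.mean(axis=0)) / X.std(axis=0)       # scale the two features
            return KMeans(n_clusters=3, n_init=10).fit_predict(X)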

  9. Early Stage Disease Diagnosis System Using Human Nail Image Processing

    Directory of Open Access Journals (Sweden)

    Trupti S. Indi

    2016-07-01

    Full Text Available The human fingernail can be analyzed to identify many diseases at an early stage of diagnosis. Studying the color of a person's nails helps in identifying particular diseases in the healthcare domain. The proposed system supports decision-making in such disease-diagnosis scenarios. The input to the proposed system is an image of the person's nail. The system processes the nail image and extracts nail features that are used for disease diagnosis. The human nail has various features, of which the proposed system uses color changes for diagnosis. First, a training data set is prepared with the Weka tool from nail images of patients with specific diseases. Features extracted from an input nail image are then compared with the training data set to obtain the result. In this experiment we found that, using the color feature of the nail image, on average 65% of results were correctly matched with the training set data across the three tests conducted.

  10. Blurred Face Image Recovery Algorithm Based on Total Variation

    Institute of Scientific and Technical Information of China (English)

    魏雪飞; 葛成伟

    2013-01-01

    During acquisition, face images are affected by many factors that degrade their quality, so recovering clear images from blurred ones has long been a topic of interest in image processing. Building on a total-variation regularized image-restoration algorithm, and introducing compatible and incompatible point sets together with some auxiliary constraints, this paper proposes a constrained optimization model for face image recovery, which is solved iteratively with the steepest descent method to sharpen blurred face images. The experimental results demonstrate that the model is feasible when its parameters are tuned and that it substantially recovers the original image.
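
    As a hedged illustration only, the sketch below applies off-the-shelf total-variation denoising from scikit-image as a stand-in for the paper's constrained TV model and steepest-descent solver; the compatible/incompatible point-set constraints described above are not reproduced, and TV denoising is not a full deblurring method. The input is assumed to be a grayscale face image scaled to [0, 1].

        # Illustrative TV-regularized smoothing of a degraded face image.
        from skimage.restoration import denoise_tv_chambolle

        def tv_restore(degraded, weight=0.1):
            # Smaller weight preserves more detail; larger weight smooths more.
            return denoise_tv_chambolle(degraded, weight=weight)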

  11. Bones and humanity. On Forensic Anthropology and its constitutive power facing forced disappearance

    Directory of Open Access Journals (Sweden)

    Anne Huffschmid

    2015-11-01

    Full Text Available Forensic anthropologists seek to decipher the traces of anonymous dead, to restore the identities of human remains and to provide their families with the possibility of concluding mourning and even of obtaining justice. The article explores the contributions and meanings of forensic anthropology as a state-independent practice that goes beyond a merely criminalistic approach, as it was conceptualized by the Argentine pioneers after the last dictatorship in that nation. I conceive this practice as a sort of archaeology of contemporary terror that seeks to confront a specific form of violence, the forced disappearance of persons and the dehumanization of their dead bodies. The article proposes reading forensic anthropology as a 'situated science', with its complexities and ambiguities, that operates between nameless bones (the human remains) and names without bodies (the so-called disappeared) in settings with violent pasts such as Argentina or Guatemala, and especially in Mexico, where mass graves have become the new symbol of a horrified present.

  12. Human Activity Detection from RGBD Images

    CERN Document Server

    Sung, Jaeyong; Selman, Bart; Saxena, Ashutosh

    2011-01-01

    Being able to detect and recognize human activities is important for making personal assistant robots useful in performing assistive tasks. The challenge is to develop a system that is low-cost, reliable in unstructured home settings, and also straightforward to use. In this paper, we use a RGBD sensor (Microsoft Kinect) as the input sensor, and present learning algorithms to infer the activities. Our algorithm is based on a hierarchical maximum entropy Markov model (MEMM). It considers a person's activity as composed of a set of sub-activities, and infers the two-layered graph structure using a dynamic programming approach. We test our algorithm on detecting and recognizing twelve different activities performed by four people in different environments, such as a kitchen, a living room, an office, etc., and achieve an average performance of 84.3% when the person was seen before in the training set (and 64.2% when the person was not seen before).

  13. Facing the Challenge of Data Transfer from Animal Models to Humans: the Case of Persistent Organohalogens

    Directory of Open Access Journals (Sweden)

    Takser Larissa

    2008-11-01

    Full Text Available A well-documented fact for a group of persistent, bioaccumulating organohalogen contaminants, namely polychlorinated biphenyls (PCBs), is that appropriate regulation was delayed, on average, by up to 50 years. Some of the delay may be attributed to the fact that the science of toxicology was in its infancy when PCBs were introduced in the 1920s. Nevertheless, even following the development of modern toxicology this story repeats itself 45 years later with polybrominated diphenyl ethers (PBDEs), another compound of concern for public health. The question is why? One possible explanation may be the low coherence between experimental studies of toxic effects in animal models and human studies. To explore this further, we reviewed a total of 807 PubMed abstracts and full texts reporting studies of toxic effects of PCB and PBDE in animal models. Our analysis documents that human epidemiological studies of PBDE stand to gain little from animal studies due to the following: (1) the significant delay between the commercialisation of a substance and studies with animal models; (2) experimental exposure levels in animals that are several orders of magnitude higher than exposures in the general human population; (3) the limited set of evidence-based endocrine endpoints; (4) the traditional testing sequence (adult animals – neonates – foetuses), which postpones investigation of the critical developmental stages; (5) the limited number of animal species with human-like toxicokinetics and physiology of development and pregnancy; (6) the lack of suitable experimental outcomes for the purpose of epidemiological studies. Our comparison of published PCB and PBDE studies underscores an important shortcoming: history has, unfortunately, repeated itself. Broadening the crosstalk between the various branches of toxicology should therefore accelerate the accumulation of data to enable timely and appropriate regulatory action.

  14. Does Masculinity Matter? The Contribution of Masculine Face Shape to Male Attractiveness in Humans

    OpenAIRE

    Isabel M L Scott; Nicholas Pound; Stephen, Ian D.; Clark, Andrew P; Penton-Voak, Ian S

    2010-01-01

    Background: In many animals, exaggerated sex-typical male traits are preferred by females, and may be a signal of both past and current disease resistance. The proposal that the same is true in humans - i.e., that masculine men are immunocompetent and attractive - underpins a large literature on facial masculinity preferences. Recently, theoretical models have suggested that current condition may be a better index of mate value than past immunocompetence. Thi...

  15. Visible Korean human images on MIOS system

    Science.gov (United States)

    Har, Donghwan; Son, Young-Ho; Lee, Sung-Won; Lee, Jung Beom

    2004-05-01

    Photography combines the attributes of reason, encompassing the scientific knowledge of optics, physics and chemistry, with the delicate sensibility of individuals. Ultimately, the photograph pursues "effective communication." Communication is "mental and psychosocial exchange mediated by material symbols, such as language, gesture and picture," and it has four components: sender, receiver, message and channel. Recently, a change in the method of communication has been emerging in the field of art and culture, including photography. Until now, communication was mainly achieved through messages transferred unilaterally from senders to receivers; nowadays, an interactive method in which the boundary between sender and receiver is blurred is on the increase. This new method of communication may be said to have arisen from the desire of the art and culture communities for something new and creative, against the background of a wide variety of information media. The multi-view screen we developed is likewise a communication tool capable of effective interaction using photographs or motion pictures: the viewer sees different images at different viewing positions. It exploits the basic lenticular characteristics long used in printing, and each motion picture is displayed on the screen without crosstalk. The multi-view screen differs in many respects from other display media and is expected to be used in many fields, including advertising, display and education.

  16. SEM and microCT validation for en face OCT imagistic evaluation of endodontically treated human teeth

    Science.gov (United States)

    Negrutiu, Meda L.; Nica, Luminita; Sinescu, Cosmin; Topala, Florin; Ionita, Ciprian; Bradu, Adrian; Petrescu, Emanuela L.; Pop, Daniela M.; Rominu, Mihai; Podoleanu, Adrian Gh.

    2011-03-01

    Successful root canal treatment is based on diagnosis, treatment planning, knowledge of tooth anatomy, endodontic access cavity design, controlling the infection by thorough cleaning and shaping, and the methods and materials used in root canal obturation. An endodontic obturation must be a complete, three-dimensional filling of the root canal system, as close as possible to the cemento-dentinal junction, without massive overfilling or underfilling. There are several known methods used to assess the quality of the endodontic seal, but most are invasive: they lead to the destruction of the samples, and often no conclusion can be drawn regarding the existence of any microleakage in the investigated areas of interest. Using a time domain en-face OCT system, we have recently demonstrated thorough real-time evaluation of the quality of root canal fillings. The purpose of this in vitro study was to validate the en face OCT imagistic evaluation of endodontically treated human teeth by using scanning electron microscopy (SEM) and microcomputed tomography (μCT). SEM investigations evidenced the nonlinear aspect of the interface between the endodontic filling material and the root canal walls, as well as material defects in some samples. The results obtained by μCT also revealed defects inside the root-canal filling and at the interfaces between the material and the root canal walls. The advantages of the OCT method are its non-invasiveness and high resolution. In addition, en face OCT investigations permit visualization of the more complex stratified structure at the interface between the filling material and the dental hard tissue.

  17. Sensations evoked by microstimulation of single mechanoreceptive afferents innervating the human face and mouth.

    Science.gov (United States)

    Trulsson, M; Essick, G K

    2010-04-01

    Intraneural microneurography and microstimulation were performed on single afferent axons in the inferior alveolar and lingual nerves innervating the face, teeth, and labial or oral mucosa. Using natural mechanical stimuli, 35 single mechanoreceptive afferents were characterized with respect to unit type [fast adapting type I (FA I), FA hair, slowly adapting type I and II (SA I and SA II), periodontal, and deep tongue units] as well as the size and shape of the receptive field. All afferents were subsequently microstimulated with pulse trains at 30 Hz lasting 1.0 s. Afferents whose recordings remained stable thereafter were also tested with single pulses and pulse trains at 5 and 60 Hz. The results revealed that electrical stimulation of single FA I, FA hair, and SA I afferents from the orofacial region can evoke a percept that is spatially matched to the afferent's receptive field and consistent with the afferent's response properties as observed on natural mechanical stimulation. Stimulation of FA afferents typically evoked sensations that were vibratory in nature, whereas stimulation of SA I afferents was felt as constant pressure. These afferents terminate superficially in the orofacial tissues and seem to have particularly powerful access to perceptual levels. In contrast, microstimulation of single periodontal, SA II, and deep tongue afferents failed to evoke a sensation that matched the receptive field of the afferent. These afferents terminate more deeply in the tissues, are often active in the absence of external stimulation, and probably access perceptual levels only when multiple afferents are stimulated. It is suggested that the spontaneously active afferents that monitor tension in collagen fibers (SA II and periodontal afferents) may serve to register the mechanical state of the soft tissues, which has been hypothesized to help maintain the body's representation in the central somatosensory system.

  18. Robust video foreground segmentation and face recognition

    Institute of Scientific and Technical Information of China (English)

    GUAN Ye-peng

    2009-01-01

    Face recognition provides a natural visual interface for human computer interaction (HCI) applications. The process of face recognition, however, is hindered by variations in the appearance of face images caused by changes in lighting, expression, viewpoint, aging and the introduction of occlusion. Although various algorithms have been presented for face recognition, it remains a very challenging topic. A novel approach to real-time face recognition for HCI is proposed in this paper. In view of the limits of the popular approaches to foreground segmentation, a wavelet multi-scale transform based background subtraction is developed to extract foreground objects. The optimal threshold is selected automatically, without requiring any complex supervised training or manual experimental calibration. A robust real-time face recognition algorithm is presented, which combines projection matrices without iteration and kernel Fisher discriminant analysis (KFDA) to overcome some of the difficulties in real face recognition. Superior performance of the proposed algorithm is demonstrated by comparison with other algorithms through experiments. The proposed algorithm can also be applied to the video image sequences of natural HCI.
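
    As a rough illustration of the wavelet multi-scale background subtraction idea described above, the Python sketch below decomposes the frame/background difference with PyWavelets and thresholds the coarse approximation. It is a minimal sketch under stated assumptions, not the paper's implementation; in particular, the automatic threshold here is a simple robust estimate rather than the paper's optimal threshold selection.

        # Minimal sketch: background subtraction in a wavelet multi-scale domain.
        # The threshold rule is an illustrative assumption, not the paper's method.
        import numpy as np
        import pywt

        def wavelet_foreground_mask(frame: np.ndarray, background: np.ndarray,
                                    wavelet: str = "haar", level: int = 2) -> np.ndarray:
            """Return a boolean foreground mask for a grayscale frame."""
            diff = np.abs(frame.astype(float) - background.astype(float))
            # decompose the difference image; the coarse approximation suppresses
            # high-frequency noise across scales
            coeffs = pywt.wavedec2(diff, wavelet, level=level)
            approx = coeffs[0]
            # data-driven threshold (median absolute deviation style estimate)
            med = np.median(approx)
            thresh = med + 3.0 * np.median(np.abs(approx - med))
            mask_small = (approx > thresh).astype(np.uint8)
            # upsample the coarse mask back to frame resolution
            block = np.ones((2 ** level, 2 ** level), dtype=np.uint8)
            mask = np.kron(mask_small, block).astype(bool)
            return mask[:frame.shape[0], :frame.shape[1]]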

  19. Processed images in human perception: A case study in ultrasound breast imaging

    Energy Technology Data Exchange (ETDEWEB)

    Yap, Moi Hoon [Department of Computer Science, Loughborough University, FH09, Ergonomics and Safety Research Institute, Holywell Park (United Kingdom)], E-mail: M.H.Yap@lboro.ac.uk; Edirisinghe, Eran [Department of Computer Science, Loughborough University, FJ.05, Garendon Wing, Holywell Park, Loughborough LE11 3TU (United Kingdom); Bez, Helmut [Department of Computer Science, Loughborough University, Room N.2.26, Haslegrave Building, Loughborough University, Loughborough LE11 3TU (United Kingdom)

    2010-03-15

    Two main research efforts in the early detection of breast cancer are the development of software tools to assist radiologists in identifying abnormalities and the development of training tools to enhance their skills. Medical image analysis systems, widely known as Computer-Aided Diagnosis (CADx) systems, play an important role in this respect. Often it is important to determine whether there is a benefit in including computer-processed images in the development of such software tools. In this paper, we investigate the effects of computer-processed images in improving human performance in ultrasound breast cancer detection (a perceptual task) and classification (a cognitive task). A survey was conducted on a group of expert radiologists and a group of non-radiologists. In our experiments, random test images from a large database of ultrasound images were presented to subjects. In order to gather appropriate formal feedback, questionnaires were prepared to comment on random selections of original images only, and on image pairs consisting of original images displayed alongside computer-processed images. We critically compare and contrast the performance of the two groups on the perceptual and cognitive tasks. From a Receiver Operating Characteristic (ROC) analysis, we conclude that providing computer-processed images alongside the original ultrasound images significantly improves the perceptual performance of non-radiologists, whereas only marginal improvements are shown in the perceptual and cognitive performance of the group of expert radiologists.
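
    The kind of ROC comparison described above can be reproduced in spirit with standard tooling; the Python sketch below scores reader confidence ratings against ground-truth labels using scikit-learn's roc_auc_score. The data are simulated placeholders, not the study's survey results.

        # Minimal sketch: ROC AUC for reader ratings with vs. without processed
        # images. Ratings and labels below are hypothetical, simulated data.
        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(1)
        truth = rng.integers(0, 2, size=50)                           # 1 = malignant, 0 = benign
        ratings_original = truth * 0.4 + rng.normal(0.5, 0.25, 50)    # ratings on originals only
        ratings_with_cadx = truth * 0.7 + rng.normal(0.5, 0.25, 50)   # ratings with processed images

        print("AUC, original images only :", roc_auc_score(truth, ratings_original))
        print("AUC, with processed images:", roc_auc_score(truth, ratings_with_cadx))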

  20. A special purpose knowledge-based face localization method

    Science.gov (United States)

    Hassanat, Ahmad; Jassim, Sabah

    2008-04-01

    This paper is concerned with face localization for a visual speech recognition (VSR) system. Face detection and localization have received a great deal of attention in the last few years, because they are an essential pre-processing step in many techniques that handle or deal with faces (e.g., age, face, gender, race and visual speech recognition). We shall present an efficient method for localizing human faces in video images captured on constrained mobile devices, under a wide variation in lighting conditions. We use a multiphase method that may include all or some of the following steps: image pre-processing, followed by a special-purpose edge detection and then an image refinement step. The output image is passed through a discrete wavelet decomposition procedure, and the computed LL sub-band at a certain level is transformed into a binary image that is scanned using a special template to select a number of possible candidate locations. Finally, we fuse the scores from the wavelet step with scores determined by color information for the candidate locations and employ a form of fuzzy logic to distinguish face from non-face locations. We shall present results from a large number of experiments to demonstrate that the proposed face localization method is efficient and achieves a high level of accuracy, outperforming existing general-purpose face detection methods.
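
    One step of the pipeline described above, binarizing the LL sub-band of a discrete wavelet decomposition before template scanning, could look roughly like the Python sketch below. This is an assumption-laden illustration, not the authors' implementation; in particular, the simple mean threshold stands in for whatever binarization rule the method actually uses.

        # Minimal sketch: compute the level-2 LL sub-band of a preprocessed edge
        # image and binarize it. The mean threshold is an illustrative assumption.
        import numpy as np
        import pywt

        def binarized_ll_subband(edge_image: np.ndarray, wavelet: str = "haar",
                                 level: int = 2) -> np.ndarray:
            """Return a binary (0/1) version of the level-`level` LL sub-band."""
            coeffs = pywt.wavedec2(edge_image.astype(float), wavelet, level=level)
            ll = coeffs[0]                       # coarse approximation (LL sub-band)
            return (ll > ll.mean()).astype(np.uint8)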