Ekman, Paul; Friesen, Wallace V.
The Facial Action Code (FAC) was derived from an analysis of the anatomical basis of facial movement. The development of the method is explained, contrasting it to other methods of measuring facial behavior. An example of how facial behavior is measured is provided, and ideas about research applications are discussed. (Author)
Full Text Available We report current findings from an enriched multimodal emotion detection approach that uses video recordings of facial expressions and body movements to provide personalized affective support in an educational context. In particular, we describe an annotation methodology for tagging the facial expressions and body movements that correspond to changes in learners' affective states while they deal with cognitive tasks during learning. The ultimate goal is to combine these annotations with additional affective information collected during experimental learning sessions from different sources, such as qualitative, self-reported, physiological, and behavioral data. Together, these data are used to train data mining algorithms that automatically identify changes in learners' affective states when dealing with cognitive tasks, which helps to provide personalized emotional support.
Pisani, Francesco; Pavlidis, Elena; Cattani, Luca; Ferrari, Gianluigi; Raheli, Riccardo; Spagnoli, Carlotta
Objectives We retrospectively analyzed the diagnostic accuracy for paroxysmal abnormal facial movements, comparing a one-camera versus a multi-camera approach. Background Polygraphic video-electroencephalogram (vEEG) recording is the current gold standard for brain monitoring in high-risk newborns, especially when neonatal seizures are suspected. One camera synchronized with the EEG is commonly used. Methods Since mid-June 2012, we have used multiple cameras, one of which points toward the newborn's face. We evaluated vEEGs recorded in newborns between mid-June 2012 and the end of September 2014 and compared, for each recording, the diagnostic accuracies obtained with the one-camera and multi-camera approaches. Results We recorded 147 vEEGs from 87 newborns and found 73 episodes of paroxysmal abnormal facial movements in 18 vEEGs of 11 newborns with the multi-camera approach. With the single-camera approach, only 28.8% of these events were identified (21/73). Ten vEEGs that were positive with the multi-camera approach, containing 52 paroxysmal abnormal facial movements (52/73, 71.2%), would have been considered negative with the single-camera approach. Conclusions The use of one additional facial camera can significantly increase the diagnostic accuracy of vEEGs in the detection of paroxysmal abnormal facial movements in newborns.
Full Text Available In this review, we introduce our three studies that focused on facial movements. In the first study, we examined the temporal characteristics of neural responses elicited by viewing mouth movements, and assessed differences between the responses to mouth opening and closing movements and an averting-eyes condition. Our results showed that the occipitotemporal area, the human MT/V5 homologue, was active in the perception of both mouth and eye motions. Viewing mouth and eye movements did not elicit significantly different activity in the occipitotemporal area, which indicated that perception of the movement of facial parts may be processed in the same manner, and this is different from motion in general. In the second study, we investigated whether early activity in the occipitotemporal region evoked by eye movements was influenced by a face contour and/or features such as the mouth. Our results revealed specific information processing for eye movements in the occipitotemporal region, and this activity was significantly influenced by whether the movements appeared with the facial contour and/or features, in other words, whether the eyes moved, even if the movement itself was the same. In the third study, we examined the effects of inverting the facial contour (hair and chin) and features (eyes, nose, and mouth) on processing for static and dynamic face perception. Our results showed the following: (1) In static face perception, activity in the right fusiform area was affected more by the inversion of features, while that in the left fusiform area was affected more by a disruption in the spatial relationship between the contour and features; and (2) In dynamic face perception, activity in the right occipitotemporal area was affected by the inversion of the facial contour.
Krippl, Martin; Karim, Ahmed A; Brechmann, André
Whereas the somatotopy of finger movements has been extensively studied with neuroimaging, the neural foundations of facial movements remain elusive. Therefore, we systematically studied the neuronal correlates of voluntary facial movements using the Facial Action Coding System (FACS; Ekman et al., 2002). The facial movements performed in the MRI scanner were defined as Action Units (AUs) and were controlled by a certified FACS coder. The main goal of the study was to investigate the detailed somatotopy of the facial primary motor area (facial M1). Eighteen participants were asked to produce the following four facial movements in the fMRI scanner: AU1+2 (brow raiser), AU4 (brow lowerer), AU12 (lip corner puller) and AU24 (lip presser), each in alternation with a resting phase. Our facial movement task induced generally high activation in brain motor areas (e.g., M1, premotor cortex, supplementary motor area, putamen), as well as in the thalamus, insula, and visual cortex. BOLD activations revealed overlapping representations for the four facial movements. However, within the activated facial M1 areas, we found distinct peak activities in the left and right hemispheres, supporting a rough somatotopic upper-to-lower-face organization within the right facial M1 area and a somatotopic organization within the upper-face part of the right M1. In both hemispheres, the lower-face representations showed an inverse somatotopic order. In contrast to the right hemisphere, in the left hemisphere the representation of AU4 was more lateral and anterior than the other facial movements. Our findings support the notion of a partial somatotopic order within the M1 face area, confirming the "like attracts like" principle (Donoghue et al., 1992): AUs that are often used together, or are similar, are located close to each other in the motor cortex.
Full Text Available This paper aims to detect multiple features of a facial sketch using a novel approach. The detection of multiple features of facial sketches has been studied by several researchers, but they mainly considered frontal face sketches as object samples. In fact, detecting facial-sketch features at a certain angle is very important for assisting police in describing a criminal's face when it appears only at that angle. Integration of maximum line gradient value enhancement and level set methods was implemented to detect facial-sketch features at tilt angles of up to 15 degrees. However, these methods tend to move toward non-features when there is a lot of graffiti around the shape. To overcome this weakness, the author proposes a novel approach that moves the shape by adding a parameter to control the movement, based on enhancement of the average values of adaptive shape variants in eight movement directions. The experimental results show that the proposed method can improve the detection accuracy to up to 92.74%.
Dobs, Katharina; Bülthoff, Isabelle; Schultz, Johannes
Facial movements convey information about many social cues, including identity. However, how much information about a person’s identity is conveyed by different kinds of facial movements is unknown. We addressed this question using a recent motion capture and animation system, with which we animated one avatar head with facial movements of three types: (1) emotional, (2) emotional in social interaction and (3) conversational, all recorded from several actors. In a delayed match-to-sample task, observers were best at matching actor identity across conversational movements, worse with emotional movements in social interactions, and at chance level with emotional facial expressions. Model observers performing this task showed similar performance profiles, indicating that performance variation was due to differences in information content, rather than processing. Our results suggest that conversational facial movements transmit more dynamic identity information than emotional facial expressions, thus suggesting different functional roles and processing mechanisms for different types of facial motion.
Full Text Available Change in a speaker's emotion is a fundamental component of human communication. Automatic recognition of spontaneous emotion would significantly impact human-computer interaction and emotion-related studies in education, psychology and psychiatry. In this paper, we explore methods for detecting emotional facial expressions occurring in a realistic human conversation setting, the Adult Attachment Interview (AAI). Because non-emotional facial expressions have no distinct description and are expensive to model, we treat emotional facial expression detection as a one-class classification problem, which is to describe target objects (i.e., emotional facial expressions) and distinguish them from outliers (i.e., non-emotional ones). Our preliminary experiments on AAI data suggest that one-class classification methods can reach a good balance between cost (labeling and computing) and recognition performance by avoiding non-emotional expression labeling and modeling.
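As an illustration of the one-class formulation described above, a minimal sketch (not the authors' method): the target class is modeled only by its centroid and a distance threshold fitted on target examples, and anything beyond the threshold is rejected as an outlier.

```python
import numpy as np

def fit_one_class(X, quantile=0.95):
    """Fit a minimal one-class model: the centroid of the target class
    plus a distance threshold covering `quantile` of the training points."""
    centroid = X.mean(axis=0)
    dists = np.linalg.norm(X - centroid, axis=1)
    threshold = np.quantile(dists, quantile)
    return centroid, threshold

def predict_one_class(X, centroid, threshold):
    """Return True for points accepted as members of the target class."""
    return np.linalg.norm(X - centroid, axis=1) <= threshold

# Toy data: a tight cluster stands in for "emotional expression" features
rng = np.random.default_rng(0)
target = rng.normal(0.0, 1.0, size=(200, 5))
centroid, thr = fit_one_class(target)
print(predict_one_class(np.zeros((1, 5)), centroid, thr)[0])       # near the centroid: accepted
print(predict_one_class(np.full((1, 5), 10.0), centroid, thr)[0])  # far away: rejected
```

Only target-class examples are needed for fitting, which mirrors the paper's motivation of avoiding labeling and modeling of non-emotional expressions.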
Wang, Ting; Dong, Junyu; Sun, Xin; Zhang, Shu; Wang, Shengke
Facial nerve paralysis is a common disease due to nerve damage. Most approaches for evaluating the degree of facial paralysis rely on a set of different facial movements as commanded by doctors. Therefore, automatic recognition of the patterns of facial movement is fundamental to the evaluation of the degree of facial paralysis. In this paper, a novel method named Active Shape Models plus Local Binary Patterns (ASMLBP) is presented for recognizing facial movement patterns. Firstly, the Active Shape Models (ASMs) are used in the method to locate facial key points. According to these points, the face is divided into eight local regions. Then the descriptors of these regions are extracted by using Local Binary Patterns (LBP) to recognize the patterns of facial movement. The proposed ASMLBP method is tested on both the collected facial paralysis database with 57 patients and another publicly available database named the Japanese Female Facial Expression (JAFFE). Experimental results demonstrate that the proposed method is efficient for both paralyzed and normal faces.
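A minimal sketch of the LBP descriptor step used for each local face region (an illustration of the general technique, not the authors' implementation): every interior pixel is compared with its eight neighbours, and the resulting 8-bit codes are pooled into a 256-bin histogram.

```python
import numpy as np

def lbp_histogram(region):
    """Basic 8-neighbour Local Binary Pattern histogram for one image region.
    Each interior pixel is compared with its 8 neighbours; the resulting
    8-bit pattern indexes a 256-bin histogram used as the region descriptor."""
    g = region.astype(np.int32)
    center = g[1:-1, 1:-1]
    # neighbour offsets in clockwise order starting at the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        codes |= (neigh >= center).astype(np.int32) << bit
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()  # normalised so regions of different sizes compare

# A flat region yields one dominant pattern: all eight comparison bits set
flat = np.full((8, 8), 7, dtype=np.uint8)
print(lbp_histogram(flat).argmax())  # 255
```

Concatenating such histograms over the eight ASM-defined regions gives a face descriptor suitable for a standard classifier.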
Herfst, Lucas J; Brecht, Michael
The lateral facial nucleus is the sole output structure whose neuronal activity leads to whisker movements. To understand how single facial nucleus neurons contribute to whisker movement we combined single-cell stimulation and high-precision whisker tracking. Half of the 44 stimulated neurons gave rise to fast whisker protraction or retraction movement, whereas no stimulation-evoked movements could be detected for the remainder. Direction, speed, and amplitude of evoked movements varied across neurons. Protraction movements were more common than retraction movements (n = 16 vs. n = 4), had larger amplitudes (1.8 vs. 0.3 degrees for single spike events), and most protraction movements involved only a single whisker, whereas most retraction movements involved multiple whiskers. We found a large range in the amplitude of single spike-evoked whisker movements (0.06-5.6 degrees). Onset of the movement occurred at 7.6 (SD 2.5) ms after the spike and the time to peak deflection was 18.2 (SD 4.3) ms. Each spike reliably evoked a stereotyped movement. In two of five cases, peak whisker deflection resulting from consecutive spikes was larger than expected based on linear summation of single spike-evoked movement profiles. Our data suggest the following coding scheme for whisker movements in the facial nucleus: 1) evoked movement characteristics depend on the identity of the stimulated neuron (a labeled line code); 2) the facial nucleus neurons are heterogeneous with respect to the movement properties they encode; 3) facial nucleus spikes are translated in a one-to-one manner into whisker movements.
Gill, Daniel; Garrod, Oliver G B; Jack, Rachael E; Schyns, Philippe G
Animals use social camouflage as a tool of deceit to increase the likelihood of survival and reproduction. We tested whether humans can also strategically deploy transient facial movements to camouflage the default social traits conveyed by the phenotypic morphology of their faces. We used the responses of 12 observers to create models of the dynamic facial signals of dominance, trustworthiness, and attractiveness. We applied these dynamic models to facial morphologies differing on perceived dominance, trustworthiness, and attractiveness to create a set of dynamic faces; new observers rated each dynamic face according to the three social traits. We found that specific facial movements camouflage the social appearance of a face by modulating the features of phenotypic morphology. A comparison of these facial expressions with those similarly derived for facial emotions showed that social-trait expressions, rather than being simple one-to-one overgeneralizations of emotional expressions, are a distinct set of signals composed of movements from different emotions. Our generative face models represent novel psychophysical laws for social sciences; these laws predict the perception of social traits on the basis of dynamic face identities.
Sawada, Reiko; Sato, Wataru; Uono, Shota; Kochiyama, Takanori; Kubota, Yasutaka; Yoshimura, Sayaka; Toichi, Motomi
The rapid detection of emotional signals from facial expressions is fundamental for human social interaction. The personality factor of neuroticism modulates the processing of various types of emotional facial expressions; however, its effect on the detection of emotional facial expressions remains unclear. In this study, participants with high- and low-neuroticism scores performed a visual search task to detect normal expressions of anger and happiness, and their anti-expressions within a crowd of neutral expressions. Anti-expressions contained an amount of visual changes equivalent to those found in normal expressions compared to neutral expressions, but they were usually recognized as neutral expressions. Subjective emotional ratings in response to each facial expression stimulus were also obtained. Participants with high-neuroticism showed an overall delay in the detection of target facial expressions compared to participants with low-neuroticism. Additionally, the high-neuroticism group showed higher levels of arousal to facial expressions compared to the low-neuroticism group. These data suggest that neuroticism modulates the detection of emotional facial expressions in healthy participants; high levels of neuroticism delay overall detection of facial expressions and enhance emotional arousal in response to facial expressions.
Full Text Available In order to obtain correct facial recognition results, one needs to adopt appropriate facial detection techniques. Moreover, the effectiveness of facial detection is usually affected by environmental conditions such as background, illumination, and the complexity of objects. In this paper, the proposed facial detection scheme, which is based on depth map analysis, aims to improve the effectiveness of facial detection and recognition under different environmental illumination conditions. The proposed procedure consists of scene depth determination, outline analysis, Haar-like classification, and related image processing operations. Since infrared light sources can be used to increase dark visibility, active infrared visual images captured by a structured-light sensory device such as Kinect are less influenced by environmental light, which benefits the accuracy of facial detection. The proposed system therefore first detects the human subject and face and obtains their relative position by structured-light analysis; the face is then determined by image processing operations. The experimental results demonstrate that the proposed scheme not only improves facial detection under varying light conditions but also benefits facial recognition.
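As a hedged sketch of the depth-based segmentation idea (the `near`/`far` band values below are illustrative assumptions, not taken from the paper): pixels whose structured-light depth falls in a plausible person-distance band are kept as foreground, and face detection then runs only inside that mask, independently of visible-light conditions.

```python
import numpy as np

def foreground_mask(depth_mm, near=500, far=1200):
    """Segment the candidate subject from a depth map (millimetres) by
    keeping pixels within a plausible person-distance band; a value of
    zero is treated as a missing depth reading."""
    valid = depth_mm > 0
    return valid & (depth_mm >= near) & (depth_mm <= far)

# Synthetic depth frame: background at 3 m, a subject region at 0.8 m
depth = np.full((10, 10), 3000, dtype=np.int32)
depth[2:8, 3:7] = 800
mask = foreground_mask(depth)
print(mask.sum())  # 24 pixels: the 6 x 4 subject region
```

Restricting the subsequent Haar-like classification to the masked region is what makes the pipeline robust to background clutter and illumination changes.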
Full Text Available The effects of facial expressions on recognizing emotions expressed in dance movements were investigated. Dancers expressed three emotions, joy, sadness, and anger, through dance movements. We used digital video cameras and a 3D motion capture system to record and capture the movements. We then created full-video displays with an expressive face, full-video displays with an unexpressive face, stick-figure displays (no face), or point-light displays (no face) from these data using 3D animation software. To make the point-light displays, 13 markers were attached to the body of each dancer. We examined how accurately observers were able to identify the expression that the dancers intended to create through their dance movements. Dance-experienced and inexperienced observers participated in the experiment. They watched the movements and rated the compatibility of each emotion with each movement on a 5-point Likert scale. The results indicated that both experienced and inexperienced observers could identify all the emotions that the dancers intended to express. Identification scores for dance movements with an expressive face were higher than for the other displays. This finding indicates that facial expressions affect the identification of emotions in dance movements, whereas bodily expressions alone provide sufficient information to recognize emotions.
Vick, Sarah-Jane; Waller, Bridget M; Parr, Lisa A; Smith Pasqualini, Marcia C; Bard, Kim A
A comparative perspective has remained central to the study of human facial expressions since Darwin's [(1872/1998). The expression of the emotions in man and animals (3rd ed.). New York: Oxford University Press] insightful observations on the presence and significance of cross-species continuities and species-unique phenomena. However, cross-species comparisons are often difficult to draw due to methodological limitations. We report the application of a common methodology, the Facial Action Coding System (FACS) to examine facial movement across two species of hominoids, namely humans and chimpanzees. FACS [Ekman & Friesen (1978). Facial action coding system. CA: Consulting Psychology Press] has been employed to identify the repertoire of human facial movements. We demonstrate that FACS can be applied to other species, but highlight that any modifications must be based on both underlying anatomy and detailed observational analysis of movements. Here we describe the ChimpFACS and use it to compare the repertoire of facial movement in chimpanzees and humans. While the underlying mimetic musculature shows minimal differences, important differences in facial morphology impact upon the identification and detection of related surface appearance changes across these two species.
Woody, C D
The motor cortex plays a role in determining which of three different facial movements is acquired in Pavlovian conditioning experiments. Three separate facial reflexes can be distinguished by recording electromyographic activity from the orbicularis oculi (eye blink) and levator oris (nose twitch) muscles. One is a pure eye blink; a second is a nose twitch; the third is a compound eye blink and nose twitch. Which of these movements is elicited by a click (conditioned stimulus) following associative conditioning is reflected by the pattern of unit activity elicited by the click at the motor cortex. Activity is enhanced, after conditioning, in those units that project polysynaptically to the specific muscles performing the learned movement. This enhancement of activity is, in turn, relatable to an enhanced electrical excitability of the involved neurons. Analogous changes in the excitability of motor cortex neurons to applied currents can be produced by local application of cholinergic agents. Iontophoresis of acetylcholine, aceclidine (a cholinomimetic drug), or intracellularly applied cyclic GMP produces changes in single-neuron membrane resistance that increase neuronal excitability. The units of the motor cortex that respond preferentially to these agents and to the click conditioned stimuli with short latencies have been identified as pyramidal cells of layer V. The axons of these neurons form the pyramidal tract, a pathway characterized as serving voluntary movement. It appears that this system supports rapid transmission and processing of auditory-motor information used to perform learned movements adaptively, selectively, and discriminatively.
Bilodeau-Mercure, Mylène; Kirouac, Vanessa; Langlois, Nancy; Ouellet, Claudie; Gasse, Isabelle; Tremblay, Pascale
The manner and extent to which normal aging affects the ability to speak are not fully understood. While age-related changes in voice fundamental frequency and intensity have been documented, changes affecting the planning and articulation of speech are less well understood. In the present study, 76 healthy, cognitively normal participants aged between 18 and 93 years were asked to produce auditorily and visually triggered sequences of finely controlled movements (speech, oro-facial, and manual movements). These sequences of movements were either (1) simple, in which at least two of the three movements were the same, or (2) complex, in which three different movements were produced. Accuracy was calculated for each of the resulting experimental conditions. The results show that, for speech and oro-facial movements, accuracy declined as a function of age and complexity; for these movements, the negative effect of complexity on performance accuracy increased with age. For the manual movements, no effects of aging or complexity on accuracy were found, but a significant slowing of movement was observed, particularly for the complex sequences. These results demonstrate that there is a significant deterioration of fine motor control in normal aging across different response modalities.
Cula, Gabriela O.; Bargo, Paulo R.; Kollias, Nikiforos
Nowadays, documenting the face appearance through imaging is prevalent in skin research, therefore detection and quantitative assessment of the degree of facial wrinkling is a useful tool for establishing an objective baseline and for communicating benefits to facial appearance due to cosmetic procedures or product applications. In this work, an algorithm for automatic detection of facial wrinkles is developed, based on estimating the orientation and the frequency of elongated features apparent on faces. By over-filtering the skin texture image with finely tuned oriented Gabor filters, an enhanced skin image is created. The wrinkles are detected by adaptively thresholding the enhanced image, and the degree of wrinkling is estimated based on the magnitude of the filter responses. The algorithm is tested against a clinically scored set of images of periorbital lines of different severity and we find that the proposed computational assessment correlates well with the corresponding clinical scores.
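As a sketch of the general oriented-Gabor-filtering idea (the kernel parameters below are illustrative assumptions, not the authors' tuned values): an oriented Gabor kernel responds strongly to elongated features aligned with its orientation, and the response magnitude map is what an adaptive threshold would then turn into a wrinkle mask.

```python
import numpy as np

def gabor_kernel(theta, wavelength, size=15, sigma=3.0):
    """Real Gabor kernel tuned to orientation `theta` (radians) and
    spatial `wavelength` (pixels)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength)
    return envelope * carrier

def filter_response(image, kernel):
    """Naive valid-mode 2D correlation; returns the response magnitude map."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return np.abs(out)

# A vertical line responds far more to the vertically tuned kernel
img = np.zeros((31, 31))
img[:, 15] = 1.0
vert = filter_response(img, gabor_kernel(theta=0.0, wavelength=8.0))
horiz = filter_response(img, gabor_kernel(theta=np.pi / 2, wavelength=8.0))
print(vert.max() > horiz.max())  # True
```

Running a bank of such kernels over orientations and wavelengths, then thresholding the maximal responses, is the general shape of the wrinkle-detection step described above.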
Schmidt, Karen L; VanSwearingen, Jessie M; Levenstein, Rachel M
The context of voluntary movement during facial assessment has significant effects on the activity of facial muscles. Using automated facial analysis, we found that healthy subjects instructed to blow produced lip movements that were longer in duration and larger in amplitude than when subjects were instructed to pucker. We also determined that lip movement for puckering expressions was more asymmetric than lip movement in blowing. Differences in characteristics of lip movement were noted using facial movement analysis and were associated with the context of the movement. The impact of the instructions given for voluntary movement on the characteristics of facial movement might have important implications for assessing the capabilities and deficits of movement control in individuals with facial movement disorders. If results generalize to the clinical context, assessment of generally focused voluntary facial expressions might inadequately demonstrate the full range of facial movement capability of an individual patient.
Heaton, J.T.; Sheu, S.H.; Hohman, M.H.; Knox, C.J.; Weinberg, J.S.; Kleiss, I.J.; Hadlock, T.A.
Vibrissal whisking is often employed to track facial nerve regeneration in rats; however, we have observed similar degrees of whisking recovery after facial nerve transection with or without repair. We hypothesized that the source of non-facial nerve-mediated whisker movement after chronic denervati
Celebi, M; Smolka, Bogdan
This book presents the state-of-the-art in face detection and analysis. It outlines new research directions, including in particular psychology-based facial dynamics recognition, aimed at various applications such as behavior analysis, deception detection, and diagnosis of various psychological disorders. Topics of interest include face and facial landmark detection, face recognition, facial expression and emotion analysis, facial dynamics analysis, face classification, identification, and clustering, and gaze direction and head pose estimation, as well as applications of face analysis.
Dong, Yanchao; Wang, Yanming; Yue, Jiguang; Hu, Zhencheng
The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid and mouth movements. The framework first uses the Discriminative Shape Regression method to locate facial feature points on the 2D image, and then fuses the 2D data with a 3D face model using an Extended Kalman Filter to yield 3D facial movement information. An alternating optimization strategy is adopted to adapt to different persons automatically. Experiments show that the proposed framework can track 3D facial movement across various poses and illumination conditions. Given the real face scale, the framework can track the eyelid with an error of 1 mm and the mouth with an error of 2 mm. The tracking result is reliable enough for expression analysis or mental state inference.
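A minimal sketch of the Kalman predict/update cycle underlying such fusion (a 1D linear filter over a single landmark coordinate, for illustration only; the paper's framework uses an Extended Kalman Filter over a full 3D face model):

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-3, r=0.05):
    """Linear constant-velocity Kalman filter over a 1D landmark coordinate,
    showing the same predict/update cycle an EKF runs in linearised form."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)                       # process noise
    R = np.array([[r]])                     # measurement noise
    x = np.array([measurements[0], 0.0])    # state: [position, velocity]
    P = np.eye(2)
    filtered = []
    for z in measurements:
        x = F @ x                           # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                 # update with measurement z
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        filtered.append(x[0])
    return np.array(filtered)

rng = np.random.default_rng(1)
truth = np.linspace(0.0, 2.0, 50)           # landmark drifting steadily
noisy = truth + rng.normal(0.0, 0.2, size=50)
smooth = kalman_track(noisy)
print(np.abs(smooth - truth).mean() < np.abs(noisy - truth).mean())  # True
```

The filtered track has a lower mean error than the raw measurements, which is the basic benefit of fusing per-frame 2D detections with a dynamic model.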
Hontanilla, B; Aubá, C
The aim of this study is to present a new 3D capture system for facial movements called FACIAL CLIMA. It is an automatic optical motion system that involves placing special reflective dots on the subject's face and video recording the subject with three infrared-light cameras while they perform several facial movements, such as smiling, mouth puckering, eye closure and forehead elevation. Images from the cameras are automatically processed with a software program that generates customised information such as 3D data on velocities and areas. The study was performed in 20 healthy volunteers. The accuracy of the measurement process and the intrarater and interrater reliabilities were evaluated. Comparison of a known distance and angle with those obtained by FACIAL CLIMA shows that the system is accurate to within 0.13 mm and 0.41 degrees. In conclusion, the accuracy of the FACIAL CLIMA system for the evaluation of facial movements is demonstrated, as well as its high intrarater and interrater reliability. It has advantages over other systems developed for the evaluation of facial movements, such as a short calibration time, a short measuring time and ease of use, and it provides not only distances but also velocities and areas. The FACIAL CLIMA system can thus be considered an adequate tool for assessing the outcome of facial paralysis reanimation surgery, allowing patients with facial paralysis to be compared between surgical centres so that the effectiveness of facial reanimation operations can be evaluated.
De Silva, Liyanage C.; Aizawa, Kiyoharu; Hatori, Mitsutoshi
Detection and tracking of facial features without any head-mounted devices may be required in various future visual communication applications, such as teleconferencing and virtual reality. In this paper we propose an automatic method of face feature detection using a method called edge pixel counting. Instead of utilizing color or gray-scale information of the facial image, the proposed edge pixel counting method uses edge information to estimate the positions of face features such as the eyes, nose and mouth in the first frame of a moving facial image sequence, using a variable-size face feature template. For the remaining frames, feature tracking is carried out alternately using a method called deformable template matching and edge pixel counting. One main advantage of using edge pixel counting in feature tracking is that it does not require high inter-frame correlation around the feature areas, as template matching does. Some experimental results are shown to demonstrate the effectiveness of the proposed method.
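The edge pixel counting idea can be sketched as follows: build a binary edge map, then slide a feature-sized window over it and keep the position containing the most edge pixels. This is a simplified illustration, not the authors' implementation (their templates are variable-size and alternate with deformable template matching):

```python
def edge_map(img, thresh=30):
    """Binary edge map via simple horizontal/vertical gradient thresholding."""
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            gx = abs(img[y][x + 1] - img[y][x])  # horizontal gradient
            gy = abs(img[y + 1][x] - img[y][x])  # vertical gradient
            if gx + gy > thresh:
                edges[y][x] = 1
    return edges

def best_window(edges, win):
    """Slide a win x win template and return the top-left corner with the
    highest edge-pixel count -- the presumed feature position."""
    h, w = len(edges), len(edges[0])
    best, best_pos = -1, (0, 0)
    for y in range(h - win + 1):
        for x in range(w - win + 1):
            count = sum(edges[yy][xx]
                        for yy in range(y, y + win)
                        for xx in range(x, x + win))
            if count > best:
                best, best_pos = count, (y, x)
    return best_pos

# Synthetic test image: a bright 3x3 square on a dark background
img = [[0] * 8 for _ in range(8)]
for y in range(3, 6):
    for x in range(3, 6):
        img[y][x] = 200
pos = best_window(edge_map(img), 3)  # lands on the square's edge ring
```

Because the score is a count of edge pixels rather than a pixel-wise match, the window tolerates moderate appearance change between frames, which is the advantage the abstract notes over template matching.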
De Letter, Miet; Vanhoutte, Sarah; Aerts, Annelies; Santens, Patrick; Vermeersch, Hubert; Roche, Nathalie; Stillaert, Filip; Blondeel, Philip; Van Lierde, Kristiane
Facial allotransplantation constitutes a reconstructive option after extensive damage to facial structures. Functional recovery has been reported but remains an issue. A patient underwent facial allotransplantation after a ballistic injury with extensive facial tissue damage. Speech motor function was sequentially assessed clinically, along with repeated electromyography of lip movements during a follow-up of 3 years. Facial nerve recovery could be demonstrated within the first month, followed by a gradual increase in electromyographic amplitude and decrease in reaction times. These were accompanied by gradual improvement of clinical assessments. Axonal recovery starts early after transplantation. Electromyographic testing is sensitive in demonstrating this early recovery, which ultimately results in clinical improvements. Copyright © 2017 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
Frigerio, Alice; Hadlock, Tessa A; Murray, Elizabeth H; Heaton, James T
IMPORTANCE Facial paralysis remains one of the most challenging conditions to effectively manage, often causing life-altering deficits in both function and appearance. Facial rehabilitation via pacing and robotic technology has great yet unmet potential. A critical first step toward reanimating symmetrical facial movement in cases of unilateral paralysis is the detection of healthy movement to use as a trigger for stimulated movement. OBJECTIVE To test a blink detection system that can be attached to standard eyeglasses and used as part of a closed-loop facial pacing system. DESIGN, SETTING, AND PARTICIPANTS Standard safety glasses were equipped with an infrared (IR) emitter-detector unit, oriented horizontally across the palpebral fissure, creating a monitored IR beam that became interrupted when the eyelids closed, and were tested in 24 healthy volunteers from a tertiary care facial nerve center community. MAIN OUTCOMES AND MEASURES Video-quantified blinking was compared with both IR sensor signal magnitude and rate of change in healthy participants with their gaze in repose, while they shifted their gaze from central to far-peripheral positions, and during the production of particular facial expressions. RESULTS Blink detection based on signal magnitude achieved 100% sensitivity in forward gaze but generated false detections on downward gaze. Calculations of peak rate of signal change (first derivative) typically distinguished blinks from gaze-related eyelid movements. During forward gaze, 87% of detected blink events were true positives, 11% were false positives, and 2% were false negatives. Of the 11% false positives, 6% were associated with partial eyelid closures. During gaze changes, false blink detection occurred 6% of the time during lateral eye movements, 10% of the time during upward movements, 47% of the time during downward movements, and 6% of the time for movements from an upward or downward gaze back to the primary gaze. Facial expressions
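The reported advantage of the first-derivative criterion over raw signal magnitude can be sketched as a threshold on the sample-to-sample drop of the IR signal: a slow gaze-related drift produces small steps, while a blink produces one abrupt fall. This toy version is not the authors' device code; the threshold and signal values are illustrative:

```python
def detect_blinks(signal, rate_thresh):
    """Flag sample indices where the one-step drop in the IR signal exceeds
    rate_thresh -- a crude stand-in for a first-derivative blink criterion."""
    blinks = []
    for i in range(1, len(signal)):
        if signal[i - 1] - signal[i] > rate_thresh:  # fast fall = lid closing
            blinks.append(i)
    return blinks

# A slow downward drift (small steps) vs. an abrupt blink (large step at index 4)
trace = [1.0, 0.98, 0.96, 0.94, 0.3, 0.92, 0.9]
print(detect_blinks(trace, 0.3))  # -> [4]
```

A pure magnitude threshold on the same trace would also fire during sustained downward gaze, which matches the false-positive pattern the study reports.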
Tardif, Carole; Lainé, France; Rodriguez, Mélissa; Gepner, Bruno
This study examined the effects of slowing down presentation of facial expressions and their corresponding vocal sounds on facial expression recognition and facial and/or vocal imitation in children with autism. Twelve autistic children and twenty-four normal control children were presented with emotional and non-emotional facial expressions on CD-Rom, under audio or silent conditions, and under dynamic visual conditions (slowly, very slowly, at normal speed) plus a st...
Kun Ha Suh
Full Text Available Facial muscle micro-movements for eight emotions were induced via visual and auditory stimuli and were analyzed according to sex. Thirty-one main facial features were chosen from the Kinect API out of 121 initially obtained facial features; the average change in pixel value was measured after image alignment. The proposed method is advantageous in that it allows micro-movements to be compared across emotions and between sexes. Facial micro-expressions are analyzed in real time using 31 facial feature points. The amount of micro-expression for the various emotion stimuli was comparatively analyzed for differences according to sex. Men's facial movements were similar for each emotion, whereas women's facial movements differed for each emotion. Six feature positions were significantly different according to sex; in particular, the inner eyebrow of the right eye had a confidence level of p < 0.01. Consequently, discriminative power showed that men's ability to separate one emotion from the others in terms of facial expression was lower than women's, despite men's average movements being larger. Additionally, asymmetric phenomena around the left eye region of women appeared more strongly for positive emotions.
Julle-Danière, Églantine; Micheletta, Jérôme; Whitehouse, Jamie; Joly, Marine; Gass, Carolin; Burrows, Anne M; Waller, Bridget M
Human and non-human primates exhibit facial movements or displays to communicate with one another. The evolution of form and function of those displays could be better understood through multispecies comparisons. Anatomically based coding systems (Facial Action Coding Systems: FACS) are developed to enable such comparisons because they are standardized and systematic and aid identification of homologous expressions underpinned by similar muscle contractions. To date, FACS has been developed for humans, and subsequently modified for chimpanzees, rhesus macaques, orangutans, hylobatids, dogs, and cats. Here, we wanted to test whether the MaqFACS system developed in rhesus macaques (Macaca mulatta) could be used to code facial movements in Barbary macaques (M. sylvanus), a species phylogenetically close to the rhesus macaques. The findings show that the facial movement capacity of Barbary macaques can be reliably coded using the MaqFACS. We found differences in use and form of some movements, most likely due to specializations in the communicative repertoire of each species, rather than morphological differences.
Lucey, Patrick; Cohn, Jeffrey F; Matthews, Iain; Lucey, Simon; Sridharan, Sridha; Howlett, Jessica; Prkachin, Kenneth M
In a clinical setting, pain is reported either through patient self-report or via an observer. Such measures are problematic as they are 1) subjective and 2) provide no specific timing information. Coding pain as a series of facial action units (AUs) can avoid these issues, as it yields an objective measure of pain on a frame-by-frame basis. Using video data from patients with shoulder injuries, in this paper we describe an active appearance model (AAM)-based system that can automatically detect the frames in a video in which a patient is in pain. This pain data set highlights the many challenges associated with spontaneous emotion detection, particularly expression and head movement due to the patient's reaction to pain. We show that the AAM can deal with these movements and can achieve significant improvements in both AU and pain detection performance compared to current state-of-the-art approaches, which utilize similarity-normalized appearance features only.
Sandbach, Georgia; Zafeiriou, Stefanos; Pantic, Maja
In this paper we propose new binary pattern features for use in the problem of 3D facial action unit (AU) detection. Two representations of 3D facial geometries are employed, the depth map and the Azimuthal Projection Distance Image (APDI). To these the traditional Local Binary Pattern is applied,
Schaede, Rebecca Anna; Volk, Gerd Fabian; Modersohn, Luise; Barth, Jodi Maron; Denzler, Joachim; Guntinas-Lichius, Orlando
Photography and video are necessary to record the severity of a facial palsy and to allow offline grading with a grading system. There is no international standard for the video recording, although one is urgently needed to allow standardized comparison of different patient cohorts. A video instruction was developed. The instruction is shown to the patient and presents several mimic movements. At the same time the patient is recorded while repeating the presented movement, using commercial hardware. Facial movements were selected in such a way that the recordings could afterwards be evaluated with standard grading systems (House-Brackmann, Sunnybrook, Stennert, Yanagihara) or even with (semi)automatic software. For quality control, the patients evaluated the instruction using a questionnaire. The video instruction takes 11 min 05 s and is divided into 3 parts: 1) explanation of the procedure; 2) demonstration and re-creation of the facial movements; 3) repetition of sentences to analyze communication skills. So far 13 healthy subjects and 10 patients with acute or chronic facial palsy have been recorded. All recordings could be assessed with the above-mentioned grading systems. The instruction was rated as clearly explained and easy to follow by healthy persons and patients. A video instruction is now available for standardized recording of facial movement. It is recommended for use in clinical routine and in clinical trials, allowing standardized comparison of patients within Germany and of international patient cohorts. © Georg Thieme Verlag KG Stuttgart · New York.
Full Text Available For robot systems, robust facial landmark detection is the first and critical step for face-based human identification and facial expression recognition. In recent years, cascaded-regression-based methods have achieved excellent performance in facial landmark detection. Nevertheless, they still have certain weaknesses, such as high sensitivity to the initialization. To address this problem, regression based on multiple initializations is established in a unified model; face shapes are then estimated independently from these initializations. With a ranking strategy, the best estimate is selected as the final output. Moreover, a face shape model based on restricted Boltzmann machines is built as a constraint to improve the robustness of ranking. Experiments on three challenging datasets demonstrate the effectiveness of the proposed facial landmark detection method against state-of-the-art methods.
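The multiple-initialization-plus-ranking strategy can be sketched generically: run the regressor from several starting shapes and keep the candidate the ranking function scores highest. The regressor and scoring function below are toy stand-ins (the paper uses cascaded regression and an RBM-based shape model as the ranking constraint):

```python
def estimate_from(init, target):
    """Hypothetical single-initialization regressor: a few damped steps
    toward the target position, standing in for a regression cascade."""
    x = init
    for _ in range(5):
        x = x + 0.5 * (target - x)
    return x

def best_estimate(inits, target, score):
    """Run the regressor from several initializations and keep the
    estimate that the ranking function scores highest."""
    candidates = [estimate_from(i, target) for i in inits]
    return max(candidates, key=score)

# Rank by closeness to the (here, known) target; in practice the ranking
# must rely on a learned shape model, since the target is unknown.
result = best_estimate([0.0, 50.0, 200.0], 100.0, lambda x: -abs(x - 100.0))
```

The point of the scheme is that even if some initializations converge poorly, the ranking step can discard them, which reduces the initialization sensitivity the abstract mentions.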
Contreras, Viridiana; Díaz-Ramírez, Víctor H.
An algorithm for facial landmark detection based on template matched filtering is presented. The algorithm is able to detect and estimate the position of a set of prespecified landmarks by employing a bank of linear filters. Each filter in the bank is trained to detect a single landmark that is located in a small region of the input face image. The filter bank is implemented in parallel on a graphics processing unit to perform facial landmark detection in real-time. Computer simulation results obtained with the proposed algorithm are presented and discussed in terms of detection rate, accuracy of landmark location estimation, and real-time efficiency.
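The core of such a matched-filter detector, scanning one linear filter of the bank over the image and returning the location of maximal correlation response, can be sketched as follows. This is a sequential toy version; the paper's implementation runs the whole filter bank in parallel on a GPU:

```python
def correlate_at(image, template, y, x):
    """Correlation score of the template placed with top-left corner at (y, x)."""
    return sum(image[y + i][x + j] * template[i][j]
               for i in range(len(template))
               for j in range(len(template[0])))

def detect_landmark(image, template):
    """Scan the image with one linear filter of the bank and return the
    position with the maximal response."""
    h, w = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    positions = [(y, x) for y in range(h - th + 1) for x in range(w - tw + 1)]
    return max(positions, key=lambda p: correlate_at(image, template, *p))

# Embed the template pattern at (1, 2) in an otherwise empty image
img = [[0.0] * 5 for _ in range(5)]
tpl = [[1.0, 2.0], [3.0, 4.0]]
for i in range(2):
    for j in range(2):
        img[1 + i][2 + j] = tpl[i][j]
loc = detect_landmark(img, tpl)  # -> (1, 2)
```

Restricting each filter's scan to a small region around its expected landmark, as the abstract describes, cuts the search space and is what makes the real-time GPU implementation feasible.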
Heaton, James T; Sheu, Shu Hsien; Hohman, Marc H; Knox, Christopher J; Weinberg, Julie S; Kleiss, Ingrid J; Hadlock, Tessa A
Vibrissal whisking is often employed to track facial nerve regeneration in rats; however, we have observed similar degrees of whisking recovery after facial nerve transection with or without repair. We hypothesized that the source of non-facial-nerve-mediated whisker movement after chronic denervation was autonomic, cholinergic axons traveling within the infraorbital branch of the trigeminal nerve (ION). Rats underwent unilateral facial nerve transection with repair (N=7) or resection without repair (N=11). Post-operative whisking amplitude was measured weekly across 10 weeks, and during intraoperative stimulation of the ION and facial nerves at ⩾18 weeks. Whisking was also measured after subsequent ION transection (N=6) or pharmacologic blocking of the autonomic ganglia using hexamethonium (N=3), and after snout cooling intended to elicit a vasodilation reflex (N=3). Whisking recovered more quickly and with greater amplitude in rats that underwent facial nerve repair compared to resection. Whisker movements decreased in all rats during the initial recovery period (indicative of reinnervation), but re-appeared in the resected rats after undergoing ION transection (indicative of motor denervation). Cholinergic, parasympathetic axons traveling within the ION innervate whisker pad vasculature, and immunohistochemistry for vasoactive intestinal peptide revealed these axons branching extensively over whisker pad muscles and contacting neuromuscular junctions after facial nerve resection. This study provides the first behavioral and anatomical evidence of spontaneous autonomic innervation of skeletal muscle after motor nerve lesion, which not only has implications for interpreting facial nerve reinnervation results, but also calls into question whether autonomic-mediated innervation of striated muscle occurs naturally in other forms of neuropathy. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.
Morrison, Edward R; Clark, Andrew P; Gralewski, Lisa; Campbell, Neill; Penton-Voak, Ian S
Women's preferences for facial structure vary over the menstrual cycle. Little is known, however, as to how preferences for behavior may be influenced by hormonal factors. Here, we demonstrate that social properties of facial motion influence attractiveness judgments in the absence of other cues, and that women's preferences for these displays vary over the menstrual cycle, as has been demonstrated for structural traits of men's faces in static stimuli. We produced shape-standardized facial models that were animated with male movement and assessed for flirtatiousness by 16 women and attractiveness by 47 women. In fertile phases of the menstrual cycle, women showed stronger preferences for flirtatious movement, but not for absolute movement. These data show that women (1) recognize specific mating-relevant social cues in male facial movement and (2) are differentially influenced by these cues at different phases of the menstrual cycle. This preference for flirtatiousness may promote the adaptive allocation of mating effort towards men who are, in turn, likely to respond positively.
Soleymani, Mohammad; Asghari-Esfeden, Sadjad; Pantic, Maja; Fu, Yun
Emotions play an important role in how we select and consume multimedia. Recent advances on affect detection are focused on detecting emotions continuously. In this paper, for the first time, we continuously detect valence from electroencephalogram (EEG) signals and facial expressions in response to
Sawada, Reiko; Sato, Wataru; Uono, Shota; Kochiyama, Takanori; Toichi, Motomi
Behavioral studies have shown that emotional facial expressions are detected more rapidly and accurately than are neutral expressions. However, the neural mechanism underlying this efficient detection has remained unclear. To investigate this mechanism, we measured event-related potentials (ERPs) during a visual search task in which participants detected the normal emotional facial expressions of anger and happiness or their control stimuli, termed "anti-expressions," within crowds of neutral expressions. The anti-expressions, which were created using a morphing technique that produced changes equivalent to those in the normal emotional facial expressions compared with the neutral facial expressions, were most frequently recognized as emotionally neutral. Behaviorally, normal expressions were detected faster and more accurately and were rated as more emotionally arousing than were the anti-expressions. Regarding ERPs, the normal expressions elicited larger early posterior negativity (EPN) at 200-400ms compared with anti-expressions. Furthermore, larger EPN was related to faster and more accurate detection and higher emotional arousal. These data suggest that the efficient detection of emotional facial expressions is implemented via enhanced activation of the posterior visual cortices at 200-400ms based on their emotional significance. Copyright © 2014 Elsevier B.V. All rights reserved.
Parr, L A; Waller, B M; Burrows, A M; Gothard, K M; Vick, S J
Over 125 years ago, Charles Darwin (1872) suggested that the only way to fully understand the form and function of human facial expression was to make comparisons with other species. Nevertheless, it has been only recently that facial expressions in humans and related primate species have been compared using systematic, anatomically based techniques. Through this approach, large-scale evolutionary and phylogenetic analyses of facial expressions, including their homology, can now be addressed. Here, the development of a muscular-based system for measuring facial movement in rhesus macaques (Macaca mulatta) is described based on the well-known FACS (Facial Action Coding System) and ChimpFACS. These systems describe facial movement according to the action of the underlying facial musculature, which is highly conserved across primates. The coding systems are standardized; thus, their use is comparable across laboratories and study populations. In the development of MaqFACS, several species differences in the facial movement repertoire of rhesus macaques were observed in comparison with chimpanzees and humans, particularly with regard to brow movements, puckering of the lips, and ear movements. These differences do not seem to be the result of constraints imposed by morphological differences in the facial structure of these three species. It is more likely that they reflect unique specializations in the communicative repertoire of each species.
Chen, Zhanli; Ansari, Rashid; Wilkie, Diana J.
In this paper, we examine the problem of using video analysis to assess pain, an important problem especially for critically ill, non-communicative patients and people with dementia. We propose and evaluate an automated method to detect the presence of pain manifested in patient videos, using a unique and large collection of cancer patient videos captured in patient homes. The method is based on detecting pain-related facial action units defined in the Facial Action Coding System (FACS), which is widely used for objective assessment in pain analysis. In our research, a person-specific Active Appearance Model (AAM) based on the Project-Out Inverse Compositional Method is trained for each patient individually for modeling purposes. A flexible representation of the shape model is used in a rule-based method that is better suited than the more commonly used classifier-based methods for application to the cancer patient videos, in which pain-related facial actions occur infrequently and more subtly. The rule-based method relies on feature points that provide facial action cues and are extracted from the shape vertices of the AAM, which have a natural correspondence to facial muscular movement. In this paper, we investigate the detection of a commonly used set of pain-related action units in both the upper and lower face. Our detection results show good agreement with the results obtained by three trained FACS coders who independently reviewed and scored the action units in the cancer patient videos.
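A rule-based AU cue of the kind described, derived from AAM shape vertices rather than a trained classifier, might look like this sketch, where the vertex indices, neutral baseline, and threshold ratio are all illustrative assumptions rather than the paper's actual rules:

```python
def au_active(shape, idx_a, idx_b, baseline, ratio=0.9):
    """Rule-based cue: flag an action unit when the distance between two
    shape vertices (e.g. brow and eyelid points) shrinks below a fraction
    of its neutral-face baseline."""
    (ya, xa), (yb, xb) = shape[idx_a], shape[idx_b]
    dist = ((ya - yb) ** 2 + (xa - xb) ** 2) ** 0.5
    return dist < ratio * baseline

# Neutral brow-to-lid distance 10.0; a drop to 8.0 triggers the rule
print(au_active([(0.0, 0.0), (8.0, 0.0)], 0, 1, 10.0))  # -> True
```

Rules of this form fire only on clear geometric changes, which is why they suit data where the target actions are rare and subtle and a classifier would see few positive training examples.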
Zhao, Qian; Rosenbaum, Kenneth; Sze, Raymond; Zand, Dina; Summar, Marshall; Linguraru, Marius George
Down syndrome is the most commonly occurring chromosomal condition; one in every 691 babies in the United States is born with it. Patients with Down syndrome have an increased risk of heart defects and respiratory and hearing problems, and early detection of the syndrome is fundamental for managing the condition. Clinically, facial appearance is an important indicator in diagnosing Down syndrome, and it paves the way for computer-aided diagnosis based on facial image analysis. In this study, we propose a novel method to detect Down syndrome using photography for computer-assisted, image-based facial dysmorphology. Geometric features based on facial anatomical landmarks, local texture features based on the Contourlet transform, and local binary patterns are investigated to represent facial characteristics. A support vector machine classifier is then used to discriminate between normal and abnormal cases; accuracy, precision and recall are used to evaluate the method. The comparison among the geometric, local texture and combined features was performed using leave-one-out validation. Our method achieved 97.92% accuracy with high precision and recall for the combined features; the detection results were higher than with geometric or texture features alone. The promising results indicate that our method has the potential for automated assessment of Down syndrome from simple, noninvasive imaging data.
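The classification stage, concatenated geometric and texture feature vectors fed to a discriminative classifier, can be sketched with a tiny linear perceptron standing in for the paper's support vector machine. The data, feature layout, and hyperparameters below are toy values, not the study's:

```python
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Tiny linear classifier (a stand-in for the SVM in the abstract)
    trained on concatenated geometric + texture feature vectors."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):  # y in {-1, +1}
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(model, x):
    w, b = model
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# Toy data: first two dims play the role of "geometric" features,
# the last two of "texture" features
geom_tex = [[2.0, 1.0, 0.5, 0.1], [1.8, 1.1, 0.4, 0.2],
            [0.2, 0.1, 2.0, 1.5], [0.1, 0.3, 1.9, 1.4]]
labels = [1, 1, -1, -1]
model = train_perceptron(geom_tex, labels)
```

Concatenating the two feature families before training is the simplest way to let the classifier weight geometric and texture cues jointly, which is the combination the study found to outperform either family alone.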
Dimitri J Bayle
Full Text Available BACKGROUND: In everyday life, signals of danger, such as aversive facial expressions, usually appear in the peripheral visual field. Although facial expression processing in central vision has been extensively studied, this processing in peripheral vision has been poorly studied. METHODOLOGY/PRINCIPAL FINDINGS: Using behavioral measures, we explored the human ability to detect fear and disgust vs. neutral expressions, and compared it to the ability to discriminate between genders, at eccentricities up to 40°. Responses were faster for the detection of emotion than for gender. Emotion was detected from fearful faces up to 40° of eccentricity. CONCLUSIONS: Our results demonstrate the human ability to detect facial expressions presented in the far periphery, up to 40° of eccentricity. The increasing advantage of emotion over gender processing with increasing eccentricity might reflect greater involvement of the magnocellular visual pathway in facial expression processing. This advantage may suggest that emotion detection, relative to gender identification, is less impacted by visual acuity and within-face crowding in the periphery. These results are consistent with specific and automatic processing of danger-related information, which may drive attention to those messages and allow for a fast behavioral reaction.
Is empathy necessary to comprehend the emotional faces? The empathic effect on attentional mechanisms (eye movements), cortical correlates (N200 event-related potentials) and facial behaviour (electromyography) in face processing.
Balconi, Michela; Canavesio, Ylenia
The present research explored the effect of social empathy on the processing of emotional facial expressions. Previous evidence suggested a close relationship between emotional empathy and both the ability to detect facial emotions and the attentional mechanisms involved. A multi-measure approach was adopted: we investigated the association between trait empathy (Balanced Emotional Empathy Scale) and individuals' performance (response times; RTs), attentional mechanisms (eye movements; number and duration of fixations), correlates of cortical activation (event-related potential (ERP) N200 component), and facial responsiveness (facial zygomatic and corrugator activity). Trait empathy was found to affect face detection performance (reduced RTs), attentional processes (more scanning eye movements in specific areas of interest), the ERP salience effect (increased N200 amplitude), and electromyographic activity (more facial responses). A second important result was the demonstration of strong, direct correlations among these measures. We suggest that empathy may function as a social facilitator of the processes underlying the detection of facial emotion, and a general "facial response effect" is proposed to explain these results. We assume that empathy influences both cognitive processing and facial responsiveness, such that empathic individuals are more skilful in processing facial emotion.
Arteaga, Carmina; Poblano, Adrián
BACKGROUND: Despite repeated demonstrations of asymmetries in several brain functions, the biological bases of such asymmetries have remained obscure. OBJECTIVE: To investigate development of lateralized facial and eye movements evoked by hemispheric stimulation in right-handed and left-handed children. METHOD: Fifty children were tested according to handedness by means of four tests: I. Mono-syllabic non-sense words, II. Tri-syllabic sense words, III. Visual field occlusion by black wall, an...
by digital imaging, which is an active area of research for the detection of facial ... The term anthropometry refers to the measurement of any aspect of ... right lower mid eye ridge, 23 = left upper mid eye ridge, 24 = left lower mid eye ridge, 25 = mid ...
Skwarczynski, M.A. [Faculty of Environmental Engineering, Institute of Environmental Protection Engineering, Department of Indoor Environment Engineering, Lublin University of Technology, Lublin (Poland); International Centre for Indoor Environment and Energy, Department of Civil Engineering, Technical University of Denmark, Copenhagen (Denmark); Melikov, A.K.; Lyubenova, V. [International Centre for Indoor Environment and Energy, Department of Civil Engineering, Technical University of Denmark, Copenhagen (Denmark); Kaczmarczyk, J. [Faculty of Energy and Environmental Engineering, Department of Heating, Ventilation and Dust Removal Technology, Silesian University of Technology, Gliwice (Poland)
The effect of facially applied air movement on perceived air quality (PAQ) at high humidity was studied. Thirty subjects (21 males and 9 females) participated in three 3-h experiments performed in a climate chamber. The experimental conditions covered three combinations of relative humidity and local air velocity at a constant air temperature of 26 °C, namely: 70% relative humidity without air movement, 30% relative humidity without air movement, and 70% relative humidity with air movement under isothermal conditions. Personalized ventilation was used to supply room air from the front toward the upper part of the body (upper chest, head). The subjects could control the flow rate (velocity) of the supplied air in the vicinity of their bodies. The results indicate that an airflow with elevated velocity applied to the face significantly improves the acceptability of the air quality at a room air temperature of 26 °C and relative humidity of 70%. (author)
Full Text Available The opossum, Monodelphis domestica, is born very immature but crawls, unaided, with its forelimbs (FL) from the mother's birth canal to a nipple, where it attaches to pursue its development. What sensory cues guide the newborn to the nipple and trigger its attachment to it? Previous experiments showed that low-intensity electrical stimulation of the trigeminal ganglion induces FL movement in in vitro preparations and that trigeminal innervation of the facial skin is well developed in the newborn. The skin does not contain Vater-Pacini or Meissner touch corpuscles at this age, but it contains cells which appear to be Merkel cells (MC). We sought to determine whether touch perceived by MC could exert an influence on FL movements. Application of the fluorescent dye AM1-43, which labels sensory cells such as MC, revealed the presence of a large number of labeled cells in the facial epidermis, especially in the snout skin, in newborn opossums. Moreover, calibrated pressure applied to the snout induced bilateral and simultaneous electromyographic responses of the triceps muscle in in vitro preparations of the neuraxis and FL from newborns. These responses increased with stimulation intensity and tended to decrease over time. Removing the facial skin nearly abolished these responses. Because metabotropic glutamate 1 receptors are involved in MC neurotransmission, an antagonist of these receptors was applied to the bath, which decreased the EMG responses in a reversible manner. Likewise, bath application of a blocker of the purinergic type 2 receptors, which AM1-43 uses to penetrate sensory cells, also decreased the triceps EMG responses. The combined results support a strong influence of facial mechanosensation on FL movement in newborn opossums, and suggest that this influence could be exerted via MC.
Liao, Lina; Long, Hu; Zhang, Li; Chen, Helin; Zhou, Yang; Ye, Niansong; Lai, Wenli
This study was carried out to evaluate pain in rats by monitoring their facial expressions following experimental tooth movement. Male Sprague-Dawley rats were divided into the following five groups based on the magnitude of orthodontic force applied and administration of analgesics: control; 20 g; 40 g; 80 g; and morphine + 40 g. Closed-coil springs were used to mimic orthodontic forces. The facial expressions of each rat were videotaped, and the resulting rat grimace scale (RGS) coding was employed for pain quantification. The RGS score increased on day 1 but showed no significant change thereafter in the control and 20-g groups. In the 40- and 80-g groups, the RGS scores increased on day 1, peaked on day 3, and started to decrease on day 5. At 14 d, the RGS scores were similar in control and 20-, 40-, and 80-g groups and did not return to baseline. The RGS scores in the morphine + 40-g group were significantly lower than those in the control group. Our results reveal that coding of facial expression is a valid method for evaluation of pain in rats following experimental tooth movement. Inactivated springs (no force) still cause discomfort and result in an increase in the RGS. The threshold force magnitude required to evoke orthodontic pain in rats is between 20 and 40 g. © 2014 Eur J Oral Sci.
Chartrand, Josée; Gosselin, Pierre
The smile is one of the most often expressed emotions during social interactions. It can be authentic, that is, associated with a joyful emotional state in the person expressing it, but it can also be false, that is, deliberately produced in the absence of that emotional state in order to deceive one or more individuals (Ekman, 1993). Even though the fake smile closely resembles the authentic smile, it is generally not a perfect imitation. The fake smile more often shows a certain degree of asymmetry than the authentic smile does (Ekman, Hager, & Friesen, 1981), and it involves the cheek raiser action less often than the authentic smile does (Ekman, Friesen, & O'Sullivan, 1988; Frank, Ekman, & Friesen, 1993). This study looked at the knowledge that adults have of these differences as well as their perceptive ability to detect them. The visual stimuli presented to participants were prepared using the Facial Action Coding System (Ekman & Friesen, 1978). Results show that participants detected the differences between the two types of smile and that detection was better using smile asymmetry than the cheek raiser action. Analysis of the use of response categories in the detection task indicated that participants underestimated the differences between smiles when they were different, and that this tendency was more apparent for cheek raiser detection than for asymmetry detection. Participants also demonstrated better knowledge of smile asymmetry than of the cheek raiser action. The knowledge gathered suggests that the ability of the receiver to judge smile authenticity is limited by perceptive factors. However, the mediation analyses that we conducted show that judging smile authenticity is not limited to simple perceptive detection of facial clues. Detecting facial clues is a necessary condition for correctly assessing smile authenticity, but it does not explain the variance in these assessments. We believe that this variance would be due more to the
Daniel LÓPEZ SÁNCHEZ
Full Text Available The problem of face recognition has been extensively studied in the available literature; however, some aspects of this field require further research. The design and implementation of face recognition systems that can efficiently handle unconstrained conditions (e.g. pose variations, illumination, partial occlusion...) is still an area under active research. This work focuses on the design of a new nonparametric occlusion detection technique. In addition, we present some preliminary results that indicate that the proposed technique might be useful to face recognition systems, allowing them to dynamically discard occluded face parts.
Skwarczynski, Mariusz; Melikov, Arsen Krikor; Kaczmarczyk, J.
The effect of facially applied air movement on perceived air quality (PAQ) at high humidity was studied. Thirty subjects (21 males and 9 females) participated in three 3-h experiments performed in a climate chamber. The experimental conditions covered three combinations of relative humidity...... toward the upper part of the body (upper chest, head). The subjects could control the flow rate (velocity) of the supplied air in the vicinity of their bodies. The results indicate that an airflow with elevated velocity applied to the face significantly improves the acceptability of the air quality......
Shimizu, T; Shimizu, A; Yamashita, K; Iwase, M; Kajimoto, O; Kawasaki, T
Patients with schizophrenia are known to have deficits in facial affect recognition. Subjects were 25 schizophrenic patients and 25 normal subjects who were shown pairs of slides of laughing faces and asked to compare the intensity of laughter in the two slides. Eye movements were recorded using an infrared scleral reflection technique. Normal subjects efficiently compared the same facial features in the two slides, examining the eyes and mouth, important areas for recognizing laughter, for a longer time than other regions of the face. Schizophrenic patients spent less time examining the eyes and mouth and often examined other regions of the face or areas other than the face. Similar results were obtained for the number of fixation points. That schizophrenic patients may have employed an inefficient strategy with few effective eye movements in facial comparison and recognition may help to explain the deficits in facial recognition observed in schizophrenic patients.
Aviezer, Hillel; Messinger, Daniel S; Zangvil, Shiri; Mattson, Whitney I; Gangi, Devon N; Todorov, Alexander
Although the distinction between positive and negative facial expressions is assumed to be clear and robust, recent research with intense real-life faces has shown that viewers are unable to reliably differentiate the valence of such expressions (Aviezer, Trope, & Todorov, 2012). Yet, the fact that viewers fail to distinguish these expressions does not in itself testify that the faces are physically identical. In Experiment 1, the muscular activity of victorious and defeated faces was analyzed. Higher numbers of individually coded facial actions--particularly smiling and mouth opening--were more common among winners than losers, indicating an objective difference in facial activity. In Experiment 2, we asked whether supplying participants with valid or invalid information about objective facial activity and valence would alter their ratings. Notwithstanding these manipulations, valence ratings were virtually identical in all groups, and participants failed to differentiate between positive and negative faces. While objective differences between intense positive and negative faces are detectable, human viewers do not utilize these differences in determining valence. These results suggest a surprising dissociation between information present in expressions and information used by perceivers.
Xiaohong W. Gao
Full Text Available A new approach to determining head movement is presented, based on pictures recorded by digital cameras monitoring the scanning process of PET. Two human vision models, CIECAMs and BMV, are applied to segment the face region via skin colour and to detect local facial landmarks, respectively. The developed algorithms are evaluated on pictures (n=12) monitoring a subject's head while simulating PET scanning, captured by two calibrated cameras (located in front of and to the left side of the subject). It is shown that the centers of the chosen facial landmarks, the eye corners and the middle point of the nose base, have been detected with very high precision (±0.64 pixels). Three landmarks were identified on pictures received by the front camera and two by the side camera. Preliminary results on 2D images with known movement parameters show that rotations and translations along the X, Y, and Z directions can be obtained very accurately via the described methods.
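The record above recovers head rotations and translations from tracked facial landmarks. As an illustration only (the authors' exact estimation method is not described in the abstract, and the landmark coordinates below are invented for the demo), a rigid transform between two matched sets of 3D landmark positions can be recovered with the standard Kabsch algorithm:

```python
import numpy as np

def rigid_transform(src, dst):
    """Estimate rotation R and translation t such that dst ≈ src @ R.T + t,
    from matched 3D landmark sets of shape (N, 3), via the Kabsch algorithm."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# toy check: rotate four hypothetical landmarks (eye corners, nose base,
# forehead point) by 10 degrees about the Z axis and shift them
theta = np.deg2rad(10)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
landmarks = np.array([[-3., 1., 0.], [3., 1., 0.], [0., -2., 1.], [0., 1., 3.]])
moved = landmarks @ R_true.T + np.array([1., 2., 0.])
R_est, t_est = rigid_transform(landmarks, moved)
```

With at least three non-collinear landmarks visible in calibrated cameras, the same fit yields the head's rotation and translation between frames.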
Liu, Tongran; Xiao, Tong; Jiannong, Shi
Adolescence is a critical period for the neurodevelopment of social-emotional processing, wherein the automatic detection of changes in facial expressions is crucial for the development of interpersonal communication. Two groups of participants (an adolescent group and an adult group) were...... recruited to complete an emotional oddball task featuring one happy and one fearful condition. The measurement of event-related potential was carried out via electroencephalography and electrooculography recording, to detect visual mismatch negativity (vMMN) with regard to the automatic detection of changes...... automatic processing of fearful faces than happy faces. The present study indicated that adolescents possess stronger automatic detection of changes in emotional expression relative to adults, and sheds light on the neurodevelopment of automatic processes concerning social-emotional information....
Full Text Available Blindsight denotes unconscious residual visual capacities in the context of an inability to consciously recollect or identify visual information. It has been described for color and shape discrimination, movement or facial emotion recognition. The present study investigates a patient suffering from cortical blindness whilst maintaining select residual abilities in face detection. Our patient presented the capacity to distinguish between jumbled/normal faces, known/unknown faces or famous people’s categories although he failed to explicitly recognize or describe them. Conversely, performance was at chance level when asked to categorize non-facial stimuli. Our results provide clinical evidence for the notion that some aspects of facial processing can occur without perceptual awareness, possibly using direct tracts from the thalamus to associative visual cortex, bypassing the primary visual cortex.
V.K. NARENDIRA KUMAR
Full Text Available Biometrics refers to measurable characteristics specific to an individual. Face detection has diverse applications, especially as an identification solution that can meet pressing needs in security areas. While traditionally 2D images of faces have been used, 3D scans that contain both 3D data and registered color are becoming easier to acquire. Before 3D face images can be used to identify an individual, they require some form of initial alignment information, typically based on facial feature locations. We follow this with a discussion of the algorithm's performance when constrained to frontal images and an analysis of its performance on a more complex dataset with significant head pose variation. Using 3D face data for detection provides a promising route to improved performance.
Martín-Ruiz, María-Luisa; Máximo-Bocanegra, Nuria; Luna-Oliva, Laura
The importance of an early rehabilitation process in children with cerebral palsy (CP) is widely recognized. On the one hand, new and useful treatment tools such as rehabilitation systems based on interactive technologies have appeared for rehabilitation of gross motor movements. On the other hand, from the therapeutic point of view, performing rehabilitation exercises with the facial muscles can improve the swallowing process, the facial expression through the management of muscles in the face, and even the speech of children with cerebral palsy. However, it is difficult to find interactive games to improve the detection and evaluation of oral-facial musculature dysfunctions in children with CP. This paper describes a framework based on strategies developed for interactive serious games that is created both for typically developed children and children with disabilities. Four interactive games are the core of a Virtual Environment called SONRIE. This paper demonstrates the benefits of SONRIE to monitor children’s oral-facial difficulties. The next steps will focus on the validation of SONRIE to carry out the rehabilitation process of oral-facial musculature in children with cerebral palsy. PMID:27023561
Chickerur, Satyadhyan; Joshi, Kartik
Emotion detection using facial images is a technique that researchers have been using for the last two decades to try to analyze a person's emotional state given his/her image. Detection of various kinds of emotion using facial expressions of students in educational environment is useful in providing insight into the effectiveness of tutoring…
Gentsch, Kornelia; Grandjean, Didier; Scherer, Klaus R
facial expressions dynamically over time, immediately after an event is perceived. In addition, our results provide further indications for the chronography of appraisal-driven facial movements and the underlying cognitive processes.
Matsugu, Masakazu; Mori, Katsuhiko; Mitari, Yusuke; Kaneda, Yuji
Reliable detection of ordinary facial expressions (e.g. smiling) despite variability among individuals as well as in face appearance is an important step toward the realization of a perceptual user interface with autonomous perception of persons. We describe a rule-based algorithm for robust facial expression recognition combined with robust face detection using a convolutional neural network. In this study, we address the problem of subject independence as well as translation, rotation, and scale invariance in the recognition of facial expression. The result shows reliable detection of smiles with a recognition rate of 97.6% for 5600 still images of more than 10 subjects. The proposed algorithm demonstrated the ability to discriminate smiling from talking based on the saliency score obtained from voting visual cues. To the best of our knowledge, it is the first facial expression recognition model with the property of subject independence combined with robustness to variability in facial appearance.
Jiang, Bihan; Martinez, Brais; Pantic, Maja
In this paper we propose the very first weakly supervised approach for detecting facial action unit temporal segments. This is achieved by means of behaviour similarity matching, where no training of dedicated classifiers is needed and the input facial behaviour episode is compared to a template.
Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja
We propose a novel multi-conditional latent variable model for simultaneous facial feature fusion and detection of facial action units. In our approach we exploit the structure-discovery capabilities of generative models such as Gaussian processes, and the discriminative power of classifiers such as
Banks, Caroline A; Hadlock, Tessa A
Facial paralysis is a rare but severe condition in the pediatric population. Impaired facial movement has multiple causes and varied presentations, therefore individualized treatment plans are essential for optimal results. Advances in facial reanimation over the past 4 decades have given rise to new treatments designed to restore balance and function in pediatric patients with facial paralysis. This article provides a comprehensive review of pediatric facial rehabilitation and describes a zone-based approach to assessment and treatment of impaired facial movement.
Full Text Available Previous studies have demonstrated that the serotonin transporter gene-linked polymorphic region (5-HTTLPR) affects the recognition of facial expressions and attention to them. However, the relationship between 5-HTTLPR and the perceptual detection of others' facial expressions, the process which takes place prior to emotional labeling (i.e., recognition), is not clear. To examine whether the perceptual detection of emotional facial expressions is influenced by the allelic variation (short/long) of 5-HTTLPR, happy and sad facial expressions were presented at weak and mid intensities (25% and 50%). Ninety-eight participants, genotyped for 5-HTTLPR, judged whether emotion in images of faces was present. Participants with short alleles showed higher sensitivity (d') to happy than to sad expressions, while participants with long allele(s) showed no such positivity advantage. This effect of 5-HTTLPR was found at different facial expression intensities among males and females. The results suggest that at the perceptual stage, a short allele enhances the processing of positive facial expressions rather than that of negative facial expressions.
Sanders, Richard D.
There are close functional and anatomical relationships between cranial nerves V and VII in both their sensory and motor divisions. Sensation on the face is innervated by the trigeminal nerves (V) as are the muscles of mastication, but the muscles of facial expression are innervated mainly by the facial nerve (VII) as is the sensation of taste. This article briefly reviews the anatomy of these cranial nerves, disorders of these nerves that are of particular importance to psychiatry, and some ...
Tsoi, Daniel T; Lee, Kwang-Hyuk; Khokhar, Waqqas A; Mir, Nusrat U; Swalli, Jaspal S; Gee, Kate A; Pluck, Graham; Woodruff, Peter W R
Patients with schizophrenia have difficulty recognising the emotion that corresponds to a given facial expression. According to signal detection theory, two separate processes are involved in facial emotion perception: a sensory process (measured by sensitivity, which is the ability to distinguish one facial emotion from another facial emotion) and a cognitive decision process (measured by response criterion, which is the tendency to judge a facial emotion as a particular emotion). It is uncertain whether facial emotion recognition deficits in schizophrenia are primarily due to impaired sensitivity or response bias. In this study, we hypothesised that individuals with schizophrenia would have both diminished sensitivity and different response criteria in facial emotion recognition across different emotions compared with healthy controls. Twenty-five individuals with a DSM-IV diagnosis of schizophrenia were compared with age- and IQ-matched healthy controls. Participants performed a "yes-no" task by indicating whether the 88 Ekman faces shown briefly expressed one of the target emotions in three randomly ordered runs (happy, sad and fear). Sensitivity and response criteria for facial emotion recognition were calculated as d-prime and ln(beta) respectively using signal detection theory. Patients with schizophrenia showed diminished sensitivity (d-prime) in recognising happy faces, but not faces that expressed fear or sadness. By contrast, patients exhibited a significantly less strict response criterion (ln(beta)) in recognising fearful and sad faces. Our results suggest that patients with schizophrenia have a specific deficit in recognising happy faces, whereas they were more inclined to attribute any facial emotion as fearful or sad.
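The signal detection indices named in this record can be computed directly from hit and false-alarm counts in a yes-no task. A minimal sketch (the log-linear correction and the toy counts below are illustrative assumptions, not figures from the study):

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Signal detection theory indices for a yes-no task.

    d-prime  : sensitivity (ability to tell the target emotion apart)
    ln(beta) : response criterion (bias toward answering "yes")
    A log-linear correction (+0.5 / +1.0) avoids infinite z-scores
    when a rate is exactly 0 or 1.
    """
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z_hit, z_fa = z(hit_rate), z(fa_rate)
    d_prime = z_hit - z_fa
    ln_beta = (z_fa ** 2 - z_hit ** 2) / 2.0     # log likelihood ratio at criterion
    return d_prime, ln_beta

# hypothetical observer: 40 hits, 4 misses, 10 false alarms, 34 correct rejections
d, b = sdt_measures(hits=40, misses=4, false_alarms=10, correct_rejections=34)
```

A negative ln(beta) indicates a liberal criterion (a tendency to report the emotion as present), which is the kind of bias the study reports for fearful and sad faces.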
Korb, Sebastian; With, Stéphane; Niedenthal, Paula; Kaiser, Susanne; Grandjean, Didier
The mechanisms through which people perceive different types of smiles and judge their authenticity remain unclear. Here, 19 different types of smiles were created based on the Facial Action Coding System (FACS), using highly controlled, dynamic avatar faces. Participants observed short videos of smiles while their facial mimicry was measured with electromyography (EMG) over four facial muscles. Smile authenticity was judged after each trial. Avatar attractiveness was judged once in response to each avatar's neutral face. Results suggest that, in contrast to most earlier work using static pictures as stimuli, participants relied less on the Duchenne marker (the presence of crow's feet wrinkles around the eyes) in their judgments of authenticity. Furthermore, mimicry of smiles occurred in the Zygomaticus Major, Orbicularis Oculi, and Corrugator muscles. Consistent with theories of embodied cognition, activity in these muscles predicted authenticity judgments, suggesting that facial mimicry influences the perception of smiles. However, no significant mediation effect of facial mimicry was found. Avatar attractiveness did not predict authenticity judgments or mimicry patterns.
Tic - facial; Mimic spasm. Tics may involve repeated, uncontrolled, spasm-like muscle movements, such as: eye blinking, grimacing, mouth twitching, nose wrinkling, and squinting. Repeated throat clearing or grunting may also be ...
Full Text Available The facial nerve is the most frequently damaged nerve in head and neck trauma. Patients undergoing facial nerve reconstruction often complain about disturbing abnormal synkinetic movements of the facial muscles (mass movements, synkinesis), which are thought to result from misguided collateral branching of regenerating motor axons and reinnervation of inappropriate muscles. Here, we examined whether use of an aorta Y-tube conduit during reconstructive surgery after facial nerve injury reduces synkinesis of the orbicularis oris (blink reflex) and vibrissal (whisking) musculature. The abdominal aorta plus its bifurcation was harvested (N = 12) for Y-tube conduits. Animal groups comprised intact animals (Group 1), those receiving hypoglossal-facial nerve end-to-end coaptation alone (HFA; Group 2), and those receiving hypoglossal-facial nerve reconstruction using a Y-tube (HFA-Y-tube; Group 3). Videotape motion analysis at 4 months showed that the HFA-Y-tube group exhibited reduced synkinesis of eyelid and whisker movements compared to HFA alone.
Lange, W.G.; Heuer, K.; Langner, O.; Keijsers, G.P.J.; Becker, E.S.; Rinck, M.
Scientific evidence is equivocal on whether Social Anxiety Disorder (SAD) is characterized by a biased negative evaluation of (grouped) facial expressions, even though it is assumed that such a bias plays a crucial role in the maintenance of the disorder. To shed light on the underlying mechanisms ...
Dicarla Motta Magnani
Full Text Available OBJECTIVES: The purpose of this study was to analyze the characteristics of oral-motor movements and facial mimic in patients with head and neck burns. METHODS: An observational descriptive cross-sectional study was conducted with patients who suffered burns to the head and neck and who were referred to the Division of Orofacial Myology of a public hospital for assessment and rehabilitation. Only patients presenting deep partial-thickness and full-thickness burns to areas of the face and neck were included in the study. Patients underwent clinical assessment that involved an oral-motor evaluation, mandibular range of movement assessment, and facial mimic assessment. Patients were divided into two groups: G1 - patients with deep partial-thickness burns; G2 - patients with full-thickness burns. RESULTS: Our final study sample comprised 40 patients: G1 with 19 individuals and G2 with 21 individuals. The overall scores obtained in the clinical assessment of oral-motor organs indicated that patients with both second- and third-degree burns presented deficits related to posture, position and mobility of the oral-motor organs. Considering facial mimic, groups significantly differed when performing voluntary facial movements. Patients also presented limited maximal incisor opening. Deficits were greater for individuals in G2 in all assessments. CONCLUSION: Patients with head and neck burns present significant deficits related to posture, position and mobility of the oral myofunctional structures, including facial movements.
Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja
Automated analysis of facial expressions can benefit many domains, from marketing to clinical diagnosis of neurodevelopmental disorders. Facial expressions are typically encoded as a combination of facial muscle activations, i.e., action units. Depending on context, these action units co-occur in specific patterns, and rarely in isolation. Yet, most existing methods for automatic action unit detection fail to exploit dependencies among them, and the corresponding facial features. To address this, we propose a novel multi-conditional latent variable model for simultaneous fusion of facial features and joint action unit detection. Specifically, the proposed model performs feature fusion in a generative fashion via a low-dimensional shared subspace, while simultaneously performing action unit detection using a discriminative classification approach. We show that by combining the merits of both approaches, the proposed methodology outperforms existing purely discriminative/generative methods for the target task. To reduce the number of parameters, and avoid overfitting, a novel Bayesian learning approach based on Monte Carlo sampling is proposed, to integrate out the shared subspace. We validate the proposed method on posed and spontaneous data from three publicly available datasets (CK+, DISFA and Shoulder-pain), and show that both feature fusion and joint learning of action units leads to improved performance compared to the state-of-the-art methods for the task.
Lam Thanh Hien
Full Text Available Many scholars worldwide have made special efforts to find advanced approaches for efficiently estimating human head direction, which has been successfully applied in numerous applications such as human-computer interaction, teleconferencing, virtual reality, and 3D audio rendering. However, one of the existing shortcomings in the current literature is the violation of some ideal assumptions in practice. Hence, this paper proposes a novel algorithm based on the normal of the human face to recognize head direction by optimizing a 3D face model combined with a facial normal model. In our experiments, a computational program was developed based on the proposed algorithm and integrated with a surveillance system to alert on driver drowsiness. The program takes data from either video or a webcam, automatically identifies the critical points of facial features based on an analysis of the major components of the face, closely monitors the slant angle of the head, and issues an alarm signal whenever the driver dozes off. From our empirical experiments, we found that the proposed algorithm works effectively in real time and provides highly accurate results.
Niazi, Imran Khan; Jiang, Ning; Tiberghien, Olivier; Feldbæk Nielsen, Jørgen; Dremstrup, Kim; Farina, Dario
Detection of movement intention from neural signals combined with assistive technologies may be used for effective neurofeedback in rehabilitation. In order to promote plasticity, a causal relation between intended actions (detected for example from the EEG) and the corresponding feedback should be established. This requires reliable detection of motor intentions. In this study, we propose a method to detect movements from EEG with limited latency. In a self-paced asynchronous BCI paradigm, the initial negative phase of the movement-related cortical potentials (MRCPs), extracted from multi-channel scalp EEG was used to detect motor execution/imagination in healthy subjects and stroke patients. For MRCP detection, it was demonstrated that a new optimized spatial filtering technique led to better accuracy than a large Laplacian spatial filter and common spatial pattern. With the optimized spatial filter, the true positive rate (TPR) for detection of movement execution in healthy subjects (n = 15) was 82.5 ± 7.8%, with latency of -66.6 ± 121 ms. Although TPR decreased with motor imagination in healthy subject (n = 10, 64.5 ± 5.33%) and with attempted movements in stroke patients (n = 5, 55.01 ± 12.01%), the results are promising for the application of this approach to provide patient-driven real-time neurofeedback.
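The large Laplacian spatial filter used as a baseline in this record subtracts the average of surrounding electrodes from a center electrode, attenuating activity common to that scalp region and sharpening the slow negativity of the MRCP. A toy sketch with synthetic data (the channel layout and signal values are invented; this is not the authors' optimized filter):

```python
import numpy as np

def large_laplacian(eeg, center, surround):
    """Large Laplacian spatial filter: the center channel minus the mean
    of its surrounding channels. eeg has shape (channels, samples)."""
    return eeg[center] - eeg[surround].mean(axis=0)

# toy demo: a slow negative shift (MRCP-like) on the center channel,
# plus noise shared by all channels (e.g. a common reference artifact)
rng = np.random.default_rng(0)
t = np.arange(0, 2, 1 / 256)                      # 2 s at 256 Hz
common_noise = rng.normal(0, 5, t.size)           # activity shared by all channels
mrcp = -10 * t                                    # slow negativity, center only
eeg = np.tile(common_noise, (5, 1))
eeg[0] = common_noise + mrcp                      # channel 0 = center (e.g. Cz)
filtered = large_laplacian(eeg, center=0, surround=[1, 2, 3, 4])
# the shared component cancels, leaving the movement-related negativity
```

In this idealized case the shared noise cancels exactly; on real EEG the filter only attenuates spatially broad activity, which is why the study compares it against optimized, data-driven spatial filters.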
Full Text Available We present SONRIE, a serious game based on virtual reality and comprising four games which act as tests where children must perform gestures in order to progress through several screens (raising eyebrows, kissing, blowing, and smiling). The aims of this pilot study were to evaluate the overall acceptance of the game and the capacity for detecting anomalies in motor execution and, lastly, to establish motor control benchmarks in orofacial muscles. For this purpose, tests were performed in school settings with 96 typically developing children aged between five and seven years. Regarding the different games, in the kissing game, children were able to execute the correct movement at six years of age and a precise movement at the age of seven years. Blowing actions required more maturity, starting from the age of five and achievable by the age of six years. The smiling game was performed correctly among all ages evaluated. The percentage of children who mastered this gesture with both precision and speed was progressively greater, reaching more than 75% of values above 100 for children aged seven years. SONRIE was accepted enthusiastically among the population under study. In the future, SONRIE could be used as a tool for detecting difficulties regarding self-control and for influencing performance and the ability to produce fine-tuned facial movements.
Full Text Available BACKGROUND: Recent research has shown that the presence of a task-irrelevant attractive face can induce a transient diversion of attention from a perceptual task that requires covert deployment of attention to one of two locations. However, it is not known whether this spontaneous appraisal for facial beauty also modulates attention in change detection among multiple locations, where a slower and more controlled search process is simultaneously affected by the magnitude of a change and the facial distinctiveness. Using the flicker paradigm, this study examines how spontaneous appraisal for facial beauty affects the detection of identity change among multiple faces. METHODOLOGY/PRINCIPAL FINDINGS: Participants viewed a display consisting of two alternating frames of four faces separated by a blank frame. In half of the trials, one of the faces (the target face) changed to a different person. The task of the participant was to indicate whether a change of face identity had occurred. The results showed that (1) observers were less efficient at detecting identity change among multiple attractive faces relative to unattractive faces when the target and distractor faces were not highly distinctive from one another; and (2) it is difficult to detect a change if the new face is similar to the old. CONCLUSIONS/SIGNIFICANCE: The findings suggest that attractive faces may interfere with the attention-switch process in change detection. The results also show that attention in change detection was strongly modulated by physical similarity between the alternating faces. Although facial beauty is a powerful stimulus that has well-demonstrated priority, its influence on change detection is easily superseded by low-level image similarity. The visual system appears to take a different approach to facial beauty when a task requires resource-demanding feature comparisons.
Hentschel, Juliane; Ruff, Ruth; Juette, Frauke; von Gontard, Alexander; Gortner, Ludwig
Infants born by caesarean section (CS) near or at term were observed to display spontaneous facial movements in their first minutes. We hypothesized that those are reproducible. Up to now, nothing was known about the significance, frequency, and determinants of such facial activity. Repetitive eye opening (EO) and tongue thrust (TT) actions were documented during minutes 1 to 5, 5 to 10, and 10 to 15 in 102 infants. In addition, 32 infants were recorded on video from minute 2 to minute 10. Infant and maternal influencing factors were noted and the videos were analyzed using Interact (Version 7.1, Mangold International, Arnstorf, Germany). According to our results, 99 of 102 newborns (gestational age, 33 to 42 weeks) performed EO or TT during the first 15 minutes. Preterm infants, infants with lower Apgar scores, and infants born under general anesthesia showed less EO. Infants of smoking mothers, newborns admitted to special care, and infants with lower umbilical artery pH had significantly fewer TT episodes. Within a "normal" population of newborns of > 37 weeks at delivery (n = 57), 97% showed EO and 95% showed TT. In the filmed 32 newborns, infants began EO at 2:40 and TT at 2:34 minutes of life on average. Crying had no influence, but suctioning/intervention reduced EO frequency. In conclusion, EO and TT occur regularly during neonatal adaptation. TT seems to be an inborn automatic behavior; numerous occurrences of EO argue for neurological well-being. Both facial actions may initiate maternal-infant attachment.
Coulson, Susan E; O'Dwyer, Nicholas J; Adams, Roger D; Croxson, Glen R
Voluntary eyelid closure and smiling were studied in 11 normal subjects and 11 patients with long-term unilateral facial nerve palsy (FNP). The conjugacy of eyelid movements shown previously for blinks was maintained for voluntary eye closures in normal subjects, with movement onset being synchronous in both eyes. Bilateral onset synchrony of the sides of the mouth was also observed in smiling movements in normal subjects. In FNP patients, initiation of movement of the paretic and non-paretic eyelids was also synchronous, but markedly delayed relative to normal (by 136 ms = 32%). The initiation of bilateral movements at the mouth was similarly delayed, but in contrast to the eyes, it was not synchronous. Central neural processing in the FNP subjects was normal, however, since unilateral movements at the mouth were not delayed. The delays therefore point to considerable additional information processing needed for initiating bilateral facial movements after FNP. The maintenance of bilateral onset synchrony in eyelid closure and its loss in smiling following FNP is an important difference in the neural control of these facial regions. Bilateral conjugacy of eyelid movements is probably crucial for coordinating visual input and was achieved apparently without conscious effort on the part of the patients. Bilateral conjugacy of movements at the sides of the mouth may be less critical for normal function, although patients would very much like to achieve it in order to improve the appearance of their smile. Since the everyday frequency of eyelid movements is considerably greater than that of smiling, it is possible that the preserved eyelid conjugacy in these patients with long-term FNP is merely a product of greater experience. However, if synchrony of movement onset is found to be preserved in patients with acute FNP, then it would suggest that eyelid conjugacy has a privileged status in the neural organisation of the face.
Simone Damasceno de Faria
AIM: Standardization of the technique to section the extratemporal facial nerve in rats and creation of a scale to evaluate facial movements in these animals before and after surgery. STUDY DESIGN: Experimental. METHOD: Twenty Wistar rats were anesthetized with ketamine/xylazine and submitted to sectioning of the facial nerve near its emergence through the mastoid foramen. Eye closure and blinking reflex, vibrissae movement and positioning were observed in all animals, and a scale to evaluate these parameters was then created. RESULTS: The facial nerve trunk was found between the tendinous margin of the clavotrapezius muscle and the auricular cartilage. The trunk was proximally sectioned as it exits the mastoid foramen and the stumps were sutured with 9-0 nylon thread. An evaluation and graduation scale of facial movements, independent for eye and vibrissae, was elaborated, together with a sum of the parameters, as a means to evaluate facial palsy. Absence of eye blinking and closure scored 1; the presence of orbicular muscle contraction, without blinking reflex, scored 2; 50% of eye closure through blinking reflex scored 3; 75% of closure scored 4; the presence of complete eye closure and blinking reflex scored 5. The absence of movement and posterior position of the vibrissae scored 1; slight shivering and posterior position scored 2; greater shivering and posterior position scored 3; normal movement with posterior position scored 4; symmetrical movement of the vibrissae, with anterior position, scored 5. CONCLUSION: The rat anatomy allows easy access to the extratemporal facial nerve, allowing its sectioning and standardized suture. It was also possible to establish an evaluation and graduation scale of rat facial movements with facial palsy based on the clinical observation of these animals.
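The grading scale described above lends itself to a direct encoding. The sketch below is illustrative only; the function names, argument conventions, and category labels are mine, not the paper's.

```python
# Hypothetical encoding of the eye and vibrissae grading scale described
# in the abstract (each subscale 1-5, summed for an overall palsy score).

def eye_score(closure_fraction, orbicular_contraction, blink_reflex):
    """Score eye function 1-5 per the described scale."""
    if blink_reflex and closure_fraction >= 1.0:
        return 5  # complete closure with blink reflex
    if blink_reflex and closure_fraction >= 0.75:
        return 4
    if blink_reflex and closure_fraction >= 0.5:
        return 3
    if orbicular_contraction:
        return 2  # orbicular contraction without blink reflex
    return 1      # no blinking or closure

def vibrissae_score(movement, position):
    """Score vibrissae function 1-5. movement is one of
    'none', 'slight_shiver', 'greater_shiver', 'normal', 'symmetric';
    position is 'posterior' or 'anterior'."""
    order = ['none', 'slight_shiver', 'greater_shiver', 'normal', 'symmetric']
    score = order.index(movement) + 1
    # the top score additionally requires anterior positioning
    if score == 5 and position != 'anterior':
        score = 4
    return score

def palsy_score(eye, vibrissae):
    return eye + vibrissae  # summed score: 2 (worst) to 10 (normal)
```

A fully recovered animal would score 5 + 5 = 10; complete palsy scores 1 + 1 = 2.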
Moreno, J; Ramos-Castro, J; Movellan, J; Parrado, E; Rodas, G; Capdevila, L
Our aim is to demonstrate the usefulness of photoplethysmography (PPG) for analyzing heart rate variability (HRV) using a standard 5-min test at rest with paced breathing, comparing the results with real RR intervals and testing supine and sitting positions. Simultaneous recordings of R-R intervals were conducted with a Polar system and a non-contact PPG, based on facial video recordings of 20 individuals. Data analysis and editing were performed with dedicated software for each instrument. Agreement on HRV parameters was assessed with concordance correlations, effect size from ANOVA, and Bland and Altman plots. For the supine position, differences between the video and Polar systems showed a small effect size in most HRV parameters. For the sitting position, these differences showed a moderate effect size in most HRV parameters. A new procedure, based on the pixels that contained the most heart beat information, is proposed for improving the signal-to-noise ratio in the PPG video signal. Results were acceptable in both positions but better in the supine position. Our approach could be relevant for applications that require monitoring of stress or cardio-respiratory health, such as effort/recuperation states in sports. © Georg Thieme Verlag KG Stuttgart · New York.
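The agreement analysis above rests on standard time-domain HRV measures and Bland-Altman limits of agreement. A minimal sketch under conventional definitions (the abstract does not list the exact parameter set used):

```python
import numpy as np

def hrv_time_domain(rr_ms):
    """Basic time-domain HRV measures from a series of RR intervals (ms)."""
    rr = np.asarray(rr_ms, dtype=float)
    diffs = np.diff(rr)
    return {
        'mean_rr': rr.mean(),
        'sdnn': rr.std(ddof=1),              # overall variability
        'rmssd': np.sqrt(np.mean(diffs**2)),  # beat-to-beat variability
    }

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two instruments."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, (bias - loa, bias + loa)
```

Each HRV parameter computed from the PPG-derived intervals would be compared against the Polar-derived value with `bland_altman` to assess agreement.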
Shasteen, Jonathon R; Sasson, Noah J; Pinkham, Amy E
Quickly and accurately perceiving the potential for aggression in others is adaptive and beneficial for self-protection. Superior detection of facial threat is demonstrated by studies in which transient threat indices (i.e., angry expressions) are identified more efficiently than are transient approach indices (i.e., happy expressions). Not all signs of facial threat are temporary, however: Persistent, biologically based craniofacial attributes (e.g., low eyebrow ridge) are also associated with a perceived propensity for aggression. It remains unclear whether such static properties of the face elicit comparable attentional biases. We used a novel visual search task of faces for the present study that lacked explicit displays of emotion, but varied on perceived threat via manipulated craniofacial structure. A search advantage for threatening facial elements surfaced, suggesting that efficient detection of threat is not limited to the perception of anger, but rather extends to more latent facial signals of aggressive potential. Although all stimuli were primarily identified as emotionally neutral, thus confirming that the effect does not require emotional content, individual variation in the perception of structurally threatening faces as angry was associated with a greater detection advantage. These results indicate that attributing anger to objectively emotionless faces may serve as a mechanism for their heightened salience and influence important facets of social perception and interaction.
Olsen, Mikkel Damgaard; Herskind, Anna; Nielsen, Jens Bo;
We study the characteristics of infants’ spontaneous movements, based on data obtained from a markerless motion tracking system. From the pose data, a set of features is generated from the raw joint angles of the infants, and different classifiers are trained and evaluated using annotated data. … Furthermore, we look at the importance of different features and outline the most significant features for detecting spontaneous movements of infants. Using these findings for further analysis of infants’ movements might help to identify infants at risk of cerebral palsy. …
PENG ZhenYun(彭振云); AI HaiZhou(艾海舟); Hong Wei(洪微); LIANG LuHong(梁路宏); XU GuangYou(徐光祐)
An approach is presented to detect faces and facial features on a video segment based on multiple cues, including gray-level distribution, color, motion, templates, algebraic features and so on. Faces are first detected across the frames by using color segmentation, template matching and artificial neural networks. A PCA-based (Principal Component Analysis) feature detector for still images is then used to detect facial features on each single frame until the resulting features of three adjacent frames, named base frames, are consistent with each other. The features of frames neighboring the base frames are first detected by the still-image feature detector, then verified and corrected according to the smoothness constraint and the planar surface motion constraint. Experiments have been performed on video segments captured under different environments, and the presented method is proved to be robust and accurate over variable poses, ages and illumination conditions.
Santos Filho, Sady Antônio; Tierra-Criollo, Carlos Julio; Souza, Ana Paula; Silva Pinto, Marcos Antonio; Cunha Lima, Maria Luiza; Manzano, Gilberto Mastrocola
This work investigates the Magnitude Squared of Coherence (MSC) for detection of Event Related Potentials (ERPs) related to left-hand index finger movement. Initially, ERP presence was examined in different brain areas. To accomplish that, 20 EEG channels were used, positioned according to the 10-20 international system. The grand average, resulting from 10 normal subjects showed, as expected, responses at frontal, central, and parietal areas, particularly evident at the central area (C3, C4, Cz). The MSC, applied to movement imagination related EEG signals, detected a consistent response in frequencies around 0.3-1 Hz (delta band), mainly at central area (C3, Cz, and C4). Ability differences in control imagination among subjects produced different detection performance. Some subjects needed up to 45 events for a detectable response, while for some others only 10 events proved sufficient. Some subjects also required two or three experimental sessions in order to achieve detectable responses. For one subject, response detection was not possible at all. However, due to brain plasticity, it is plausible to expect that training sessions (to practice movement imagination) improve signal-noise ratio and lead to better detection using MSC. Results are sufficiently encouraging as to suggest further exploration of MSC for future BCI application.
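The MSC detector for event-related responses is conventionally computed across stimulus-locked epochs: epochs are Fourier-transformed and phase/amplitude consistency is evaluated per frequency bin. A minimal sketch under that assumption (windowing, epoch rejection, and statistical critical values for the detection decision are omitted):

```python
import numpy as np

def msc(epochs):
    """Magnitude squared coherence across M event-locked EEG epochs.

    epochs: (M, N) array of M epochs, N samples each.
    Returns a value in [0, 1] per FFT bin; values near 1 indicate a
    response phase-locked to the event, while bins containing only
    independent noise stay near 1/M on average.
    """
    Y = np.fft.rfft(np.asarray(epochs, float), axis=1)  # (M, N//2 + 1)
    M = Y.shape[0]
    num = np.abs(Y.sum(axis=0)) ** 2
    den = M * (np.abs(Y) ** 2).sum(axis=0)
    return num / np.maximum(den, 1e-12)
```

A component identical across all epochs (perfectly phase-locked) yields MSC = 1 at its frequency bin; detection then amounts to comparing each bin against a critical value for the chosen false-alarm rate.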
Lundtoft, Dennis Holm; Nasrollahi, Kamal; Moeslund, Thomas B.
…show that by employing super-pixels we can divide the face into three regions, in a way that only one of these regions (about one third of the face) contributes to the pain estimation and the other two regions can be discarded. The experimental results on the UNBC-McMaster database show that the proposed system using this single region outperforms state-of-the-art systems in detecting no-pain scenarios, while it reaches comparable results in detecting weak and severe pain scenarios…
Eyes are the most salient and stable features in the human face, and hence automatic extraction or detection of eyes is often considered the most important step in many applications, such as face identification and recognition. This paper presents a method for eye detection in still grayscale images. The method is based on two facts: eye regions exhibit unpredictable local intensity, so entropy in eye regions is high; and the center of the eye (the iris) is a dark circle (low intensity) compared to the neighboring regions. A score based on the entropy of the eye and the darkness of the iris is used to detect eye center coordinates. Experimental results on two databases, namely FERET (with variations in views) and BioID (with variations in gaze directions and uncontrolled conditions), show that the proposed method is robust against gaze direction, variations in views and a variety of illumination. It can achieve a correct detection rate of 97.8% and 94.3% on a set containing 2500 images of the FERET and BioID databases, respectively. Moreover, in cases with glasses and severe conditions, the performance is still acceptable.
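The two cues named above, high local entropy and a dark iris center, can be combined into a simple candidate score. The sketch below is illustrative; the window partition, bin count, and multiplicative combination are assumptions, not the paper's exact formulation.

```python
import numpy as np

def patch_entropy(patch, bins=32):
    """Shannon entropy of the gray-level distribution in a patch (0-255)."""
    hist, _ = np.histogram(patch, bins=bins, range=(0, 255))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def eye_candidate_score(patch):
    """Higher for high-entropy patches with a dark center (iris-like)."""
    h, w = patch.shape
    center = patch[h // 3:2 * h // 3, w // 3:2 * w // 3]
    darkness = 1.0 - center.mean() / 255.0  # dark iris -> close to 1
    return patch_entropy(patch) * darkness
```

Scanning such a score over candidate windows and taking the maximum would yield the estimated eye center coordinates.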
Aliakbaryhosseinabadi, Susan; Jiang, Ning; Petrini, Laura;
Alterations in attention are known to modify excitability of underlying cortical structures and thus the activity recorded during non-invasive electroencephalography (EEG). Brain-Computer-Interface systems for neuromodulation are based on reliable detection of intended movements from continuous EEG...
Marian Stewart Bartlett
Spontaneous facial expressions differ from posed expressions both in which muscles are moved and in the dynamics of the movement. Advances in the field of automatic facial expression measurement will require development and assessment on spontaneous behavior. Here we present preliminary results on a task of facial action detection in spontaneous facial expressions. We employ a user-independent, fully automatic system for real-time recognition of facial actions from the Facial Action Coding System (FACS). The system automatically detects frontal faces in the video stream and codes each frame with respect to 20 action units. The approach applies machine learning methods, such as support vector machines and AdaBoost, to texture-based image representations. The output margin of the learned classifiers predicts action unit intensity. Frame-by-frame intensity measurements will enable investigations into facial expression dynamics which were previously intractable by human coding.
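The margin-as-intensity idea can be illustrated with a linear SVM: the signed distance from the decision hyperplane serves as a per-frame intensity estimate for one action unit. The features below are synthetic stand-ins for the texture-based representations the abstract mentions.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
# Synthetic "texture features": frames with the AU present are drawn
# from a shifted distribution relative to AU-absent frames.
X_absent = rng.normal(0.0, 1.0, (200, 20))
X_present = rng.normal(1.0, 1.0, (200, 20))
X = np.vstack([X_absent, X_present])
y = np.array([0] * 200 + [1] * 200)

# One binary classifier per action unit; here, a single AU.
clf = LinearSVC(C=1.0).fit(X, y)

# The output margin (signed distance to the hyperplane) is used as a
# frame-by-frame intensity estimate rather than just a binary label.
margins = clf.decision_function(X)
```

Frames with strong AU activation sit far on the positive side of the hyperplane, so their margins are systematically larger than those of AU-absent frames.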
High-acceleration loss of consciousness is a serious problem for military pilots. In this laboratory, a video cognizer has been developed that in real time detects facial changes closely coupled to the onset of loss of consciousness. Efficient algorithms are compatible with video digital signal processing hardware and are thus configurable on an autonomous single board that generates alarm triggers to activate autopilot, and is avionics-compatible.
CANOMAD is a rare chronic neuropathy characterized by chronic sensory ataxia and intermittent brain stem symptoms due to antidisialosyl antibodies. The disorder results in significant morbidity but is poorly understood and often misdiagnosed. We describe a unique case of CANOMAD associated with involuntary movements of the face, patient-reported exacerbations with citrus and chocolate, and respiratory muscle weakness. Our patient was initially misdiagnosed with Miller Fisher Syndrome, highlighting the need for vigilance should neurological symptoms recur in patients initially diagnosed with a Guillain-Barré variant. Moreover, the optimal treatment is unknown. This patient responded remarkably to intravenous immunoglobulin and has been maintained on this treatment, without further exacerbations.
Yamamoto, Kouichi; Tatsutani, Soichi; Ishida, Takayuki
Patients receiving cancer chemotherapy experience nausea and vomiting. They are not life-threatening symptoms, but their insufficient control reduces the patients' quality of life. To identify methods for the management of nausea and vomiting in preclinical studies, objective evaluation of these symptoms in laboratory animals is required. Unlike vomiting, nausea is defined as a subjective feeling described as recognition of the need to vomit; thus, determination of the severity of nausea in laboratory animals is considered to be difficult. However, since we observed that rats grimace after the administration of cisplatin, we hypothesized that changes in facial expression can be used as a method to detect nausea. In this study, we monitored the changes in the facial expression of rats after the administration of cisplatin and investigated the effect of anti-emetic drugs on the prevention of cisplatin-induced changes in facial expression. Rats were housed in individual cages with free access to food and tap water, and their facial expressions were continuously recorded by infrared video camera. On the day of the experiment, rats received cisplatin (0, 3, and 6 mg/kg, i.p.) with or without a daily injection of a 5-HT3 receptor antagonist (granisetron: 0.1 mg/kg, i.p.) or a neurokinin NK1 receptor antagonist (fosaprepitant: 2 mg/kg, i.p.), and their eye-opening index (the ratio between longitudinal and axial lengths of the eye) in the recorded video image was calculated. Cisplatin significantly and dose-dependently induced a decrease of the eye-opening index 6 h after the cisplatin injection, and the decrease continued for 2 days. The acute phase (day 1), but not the delayed phase (day 2), of the decreased eye-opening index was inhibited by treatment with granisetron; however, fosaprepitant abolished both phases of changes. The time-course of changes in facial expression is similar to clinical evidence of cisplatin-induced nausea in humans. These findings indicate…
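The eye-opening index is a simple ratio of eye dimensions measured in the video frame. A minimal sketch with hypothetical landmark points, assuming "longitudinal" denotes the lid-to-lid (vertical) extent and "axial" the corner-to-corner extent:

```python
import math

def eye_opening_index(upper_lid, lower_lid, inner_corner, outer_corner):
    """Ratio of the eye's longitudinal (lid-to-lid) to axial
    (corner-to-corner) length; each argument is an (x, y) point.
    The index decreases as the eye narrows, e.g. during a grimace."""
    longitudinal = math.dist(upper_lid, lower_lid)
    axial = math.dist(inner_corner, outer_corner)
    return longitudinal / axial
```

Tracking this index over time for each animal gives the dose- and phase-dependent curves the abstract describes.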
Previous studies consistently reported abnormal recognition of facial expressions in depression. However, it is still not clear whether this abnormality is due to an enhanced or impaired ability to recognize facial expressions, and which underlying cognitive systems are involved. The present study aimed to examine how individuals with elevated levels of depressive symptoms differ from controls on facial expression recognition, and to assess attention and information processing using eye tracking. Forty participants (18 with elevated depressive symptoms) were instructed to label facial expressions depicting one of seven emotions. Results showed that the high-depression group, in comparison with the low-depression group, recognized facial expressions faster and with comparable accuracy. Furthermore, the high-depression group demonstrated a greater leftwards attention bias, which has been argued to be an indicator of hyperactivation of the right hemisphere during facial expression recognition.
Haque, Mohammad Ahsanul; Irani, Ramin; Nasrollahi, Kamal;
Physical fatigue reveals the health condition of a person at, for example, health checkup, fitness assessment or rehabilitation training. This paper presents an efficient noncontact system for detecting non-localized physical fatigue from maximal muscle activity using facial videos acquired … the challenges originate from realistic scenarios. A face quality assessment system was also incorporated in the proposed system to reduce erroneous results by discarding low-quality faces that occurred in a video sequence due to problems in realistic lighting, head motion and pose variation. Experimental … results show that the proposed system outperforms existing video-based systems for physical fatigue detection…
ZHANG Xiao-wen; YANG Yu-pu; XU Xiao-ming; HU Tian-pei; GAO Zhong-hua; ZHANG Jian; CHEN Tong-yi; CHEN Zhong-wei
Neural signals have many more advantages than myoelectric signals in providing information for prosthesis control, and can be an ideal source for developing new prostheses. In this work, by clinically implanting an intrafascicular electrode in the amputee's upper extremity, collective signals from fascicles of three main nerves (radial nerve, ulnar nerve and median nerve) were successfully detected with sufficient fidelity and without infection. Initial analysis of features under different actions was performed and movement recognition of detected samples was attempted. Singular value decomposition (SVD) features extracted from wavelet coefficients were used as inputs to a neural network classifier to predict the amputee's movement intentions. The overall training rate was up to 80.94% and the test rate was 56.87% without over-training. This result gives an inspiring prospect that collective signals from fascicles of the three main nerves are feasible sources for controlling a prosthesis. Ways of improving accuracy in developing prostheses controlled by neural signals are discussed at the end.
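The feature pipeline, wavelet coefficients followed by SVD features, can be sketched as follows. The one-level Haar decomposition and the choice of leading singular values as the feature vector are assumptions about details the abstract leaves open.

```python
import numpy as np

def haar_level(x):
    """One level of the Haar wavelet transform (approximation, detail)."""
    x = np.asarray(x, float)
    n = len(x) // 2 * 2
    a = (x[0:n:2] + x[1:n:2]) / np.sqrt(2)  # approximation coefficients
    d = (x[0:n:2] - x[1:n:2]) / np.sqrt(2)  # detail coefficients
    return a, d

def svd_features(channels, k=2):
    """channels: (n_channels, n_samples) neural recordings.
    Stack per-channel wavelet coefficients into a matrix and keep the
    top-k singular values as a compact feature vector for a classifier."""
    rows = []
    for ch in channels:
        a, d = haar_level(ch)
        rows.append(np.concatenate([a, d]))
    s = np.linalg.svd(np.array(rows), compute_uv=False)
    return s[:k]
```

The resulting low-dimensional vectors would then be fed to the neural network classifier for movement-intention prediction.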
Hohensinn, Roland; Geiger, Alain
High-alpine terrain reacts very sensitively to varying environmental conditions. As an example, increasing temperatures cause thawing of permafrost areas. This, in turn, causes an increasing threat from natural hazards such as debris flows (e.g. rock glaciers) or rockfalls. The Institute of Geodesy and Photogrammetry is contributing to alpine mass-movement monitoring systems in different project areas in the Swiss Alps. A main focus lies on providing geodetic mass-movement information derived from GNSS static solutions on a daily and a sub-daily basis, obtained with low-cost and autonomous GNSS stations. Another focus is set on rapidly providing reliable geodetic information in real time, i.e. for integration in early warning systems. One way to achieve this is the estimation of accurate station velocities from observations of range rates, which can be obtained as Doppler observables from time derivatives of carrier phase measurements. The key to this method lies in precise modeling of the prominent effects contributing to the observed range rates, which are satellite velocity, atmospheric delay rates and relativistic effects. A suitable observation model is then devised, which accounts for these predictions. The observation model, combined with a simple kinematic movement model, forms the basis for the parameter estimation. Based on the estimated station velocities, movements are then detected using a statistical test. To improve the reliability of the estimated parameters, another spotlight is set on an on-line quality control procedure. We will present the basic algorithms as well as results from first tests which were carried out with a low-cost GPS L1 phase receiver. With a u-blox module and a sampling rate of 5 Hz, accuracies at the mm/s level can be obtained and velocities down to 1 cm/s can be detected. Reliable and accurate station velocities and movement information can be provided within seconds.
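The core estimation step, solving for the station velocity from corrected range rates, reduces to a small least-squares problem: each corrected range rate is the projection of the station velocity onto the satellite line of sight. A sketch, assuming the modeled terms (satellite velocity, atmospheric delay rates, relativistic effects) have already been subtracted and omitting the receiver clock-drift term:

```python
import numpy as np

def velocity_from_range_rates(unit_vectors, corrected_range_rates):
    """Estimate station velocity (m/s) by least squares.

    unit_vectors: (n, 3) line-of-sight unit vectors (receiver -> satellite).
    corrected_range_rates: (n,) observed range rates minus all modeled
        contributions, leaving only the receiver-motion term.
    Sign convention assumed here: rho_dot = -u . v, i.e. the range rate
    decreases when the receiver moves toward the satellite.
    """
    A = -np.asarray(unit_vectors, float)
    v, *_ = np.linalg.lstsq(A, np.asarray(corrected_range_rates, float),
                            rcond=None)
    return v
```

With four or more satellites in view, the full formulation would add a fourth column for receiver clock drift; the statistical movement test then operates on the estimated velocity and its covariance.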
Adolescence is a critical period for the neurodevelopment of social-emotional processing, wherein the automatic detection of changes in facial expressions is crucial for the development of interpersonal communication. Two groups of participants (an adolescent group and an adult group) were recruited to complete an emotional oddball task featuring one happy and one fearful condition. Event-related potentials (ERPs) were measured via electroencephalography (EEG) and electrooculography (EOG) recording, to detect visual mismatch negativity (vMMN) with regard to the automatic detection of changes in facial expressions between the two age groups. The current findings demonstrated that the adolescent group featured more negative vMMN amplitudes than the adult group in the fronto-central region during the 120-200 ms interval. During the time window of 370-450 ms, only the adult group showed better automatic processing of fearful faces than happy faces. The present study indicated that adolescents possess stronger automatic detection of changes in emotional expression relative to adults, and sheds light on the neurodevelopment of automatic processes concerning social-emotional information.
Kovarski, Klara; Latinus, Marianne; Charpentier, Judith; Cléry, Helen; Roux, Sylvie; Houy-Durand, Emmanuelle; Saby, Agathe; Bonnet-Brilhault, Frédérique; Batty, Magali; Gomot, Marie
Detection of changes in facial emotional expressions is crucial to communicate and to rapidly and automatically process possible threats in the environment. Recent studies suggest that expression-related visual mismatch negativity (vMMN) reflects automatic processing of emotional changes. In the present study we used a controlled paradigm to investigate the specificity of emotional change-detection. In order to disentangle specific responses to emotional deviants from that of neutral deviants, we presented neutral expression as standard stimulus (p = 0.80) and both angry and neutral expressions as deviants (p = 0.10, each). In addition to an oddball sequence, an equiprobable sequence was presented, to control for refractoriness and low-level differences. Our results showed that in an early time window (100–200 ms), the controlled vMMN was greater than the oddball vMMN only for the angry deviant, suggesting the importance of controlling for refractoriness and stimulus physical features in emotion related studies. Within the controlled vMMN, angry and neutral deviants both elicited early and late peaks occurring at 140 and 310 ms, respectively, but only the emotional vMMN presented sustained amplitude after each peak. By directly comparing responses to emotional and neutral deviants, our study provides evidence of specific activity reflecting the automatic detection of emotional change. This differs from broader “visual” change processing, and suggests the involvement of two partially-distinct pre-attentional systems in the detection of changes in facial expressions. PMID:28194102
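The deviant-minus-control logic behind the controlled vMMN described above can be sketched as a difference wave over averaged epochs: the ERP to a deviant is compared against the ERP to the same stimulus presented in the equiprobable control sequence, rather than against the standard (electrode selection, filtering, and baseline correction are omitted here):

```python
import numpy as np

def erp(epochs):
    """Average event-locked epochs (trials, samples) into an ERP."""
    return np.asarray(epochs, float).mean(axis=0)

def controlled_vmmn(deviant_epochs, control_epochs):
    """Controlled vMMN: deviant-minus-control difference wave,
    which factors out refractoriness and low-level stimulus differences."""
    return erp(deviant_epochs) - erp(control_epochs)
```

Peaks in the resulting difference wave (e.g. the early and late components around 140 and 310 ms reported above) index automatic change detection.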
Shu, Ting; Zhang, Bob; Yan Tang, Yuan
Researchers have recently discovered that Diabetes Mellitus can be detected through non-invasive computerized methods. However, the focus has been on facial block color features. In this paper, we extensively study the effects of texture features extracted from specific facial regions on detecting Diabetes Mellitus, using eight texture extractors. The eight methods are from four texture feature families: (1) statistical texture features: Image Gray-scale Histogram, Gray-level Co-occurrence Matrix, and Local Binary Pattern; (2) structural texture features: Voronoi Tessellation; (3) signal processing based texture features: Gaussian, Steerable, and Gabor filters; and (4) model based texture features: Markov Random Field. In order to determine the most appropriate extractor with optimal parameter(s), various parameters of each extractor were tested. For each extractor, the same dataset (284 Diabetes Mellitus and 231 Healthy samples), classifiers (k-Nearest Neighbors and Support Vector Machines), and validation method (10-fold cross validation) were used. According to the experiments, the first and third families achieved a better outcome at detecting Diabetes Mellitus than the other two. The best texture feature extractor for Diabetes Mellitus detection is the Image Gray-scale Histogram with bin number = 256, obtaining an accuracy of 99.02%, a sensitivity of 99.64%, and a specificity of 98.26% using SVM. Copyright © 2017 Elsevier Ltd. All rights reserved.
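The winning configuration named above, 256-bin gray-scale histogram features classified with an SVM under 10-fold cross validation, can be sketched as follows. The data here are synthetic placeholders, not the facial-region patches used in the study.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def histogram_features(image, bins=256):
    """Normalized gray-scale histogram of an image patch (0-255)."""
    hist, _ = np.histogram(image, bins=bins, range=(0, 256))
    return hist / hist.sum()

rng = np.random.default_rng(0)
# Synthetic patches: the two classes occupy different intensity ranges,
# standing in for the Diabetes Mellitus vs. Healthy facial regions.
imgs_a = rng.integers(0, 128, (60, 16, 16))
imgs_b = rng.integers(128, 256, (60, 16, 16))
X = np.array([histogram_features(im) for im in np.vstack([imgs_a, imgs_b])])
y = np.array([0] * 60 + [1] * 60)

# 10-fold cross validation with an SVM, as in the study's protocol.
scores = cross_val_score(SVC(kernel='linear'), X, y, cv=10)
```

With histogram support fully separating the synthetic classes, each fold classifies nearly perfectly; on real data the reported figures (99.02% accuracy) come from the same evaluation scheme.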
Pantic, Maja; Li, S.; Jain, A.
Facial expression recognition is a process performed by humans or computers, which consists of: 1. Locating faces in the scene (e.g., in an image; this step is also referred to as face detection), 2. Extracting facial features from the detected face region (e.g., detecting the shape of facial components)…
Kim, Keonwook; Choi, Anthony
Sound localization can be realized by utilizing the physics of acoustics in various methods. This paper investigates a novel detection architecture for the azimuthal movement of a sound source based on the interaural level difference (ILD) between two receivers. One of the microphones in the system is surrounded by barriers of various heights in order to cast direction-dependent diffraction of the incoming signal. Gradient analysis of the ILD between the structured and unstructured microphone indicates the rotation direction as clockwise, counterclockwise, or no rotation of the sound source. Acoustic experiments with different types of sound source over a wide range of target movements show that the average true positive and false positive rates are 67% and 16%, respectively. Spectral analysis demonstrates that low frequencies yield decreased true and false positive rates, while high frequencies increase both rates overall.
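The ILD gradient analysis can be illustrated per frame; the frame length, dB formulation, and stationarity threshold below are assumptions, not the paper's parameters.

```python
import numpy as np

def ild_db(structured, unstructured, frame=1024):
    """Frame-wise level difference in dB between the barrier-shaded
    (structured) and open (unstructured) microphone channels."""
    n = min(len(structured), len(unstructured)) // frame * frame
    s = np.asarray(structured[:n], float).reshape(-1, frame)
    u = np.asarray(unstructured[:n], float).reshape(-1, frame)
    ps = (s ** 2).mean(axis=1) + 1e-12
    pu = (u ** 2).mean(axis=1) + 1e-12
    return 10 * np.log10(ps / pu)

def rotation_sign(ild):
    """Sign of the mean ILD gradient: +1 / -1 for the two rotation
    directions, 0 for a (near-)stationary source."""
    g = np.gradient(ild).mean()
    return 0 if abs(g) < 1e-3 else int(np.sign(g))
```

A source rotating toward the shaded side raises the structured channel's level frame by frame, producing a consistently positive ILD gradient; rotation the other way yields a negative one.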
Achaibou, Amal; Loth, Eva; Bishop, Sonia J
Recruitment of ‘top-down’ frontal attentional mechanisms is held to support detection of changes in task-relevant stimuli. Fluctuations in intrinsic frontal activity have been shown to impact task performance more generally. Meanwhile, the amygdala has been implicated in ‘bottom-up’ attentional capture by threat. Here, 22 adult human participants took part in a functional magnetic resonance change detection study aimed at investigating the correlates of successful (vs failed) detection of changes in facial identity vs expression. For identity changes, we expected prefrontal recruitment to differentiate ‘hit’ from ‘miss’ trials, in line with previous reports. Meanwhile, we postulated that a different mechanism would support detection of emotionally salient changes. Specifically, elevated amygdala activation was predicted to be associated with successful detection of threat-related changes in expression, over-riding the influence of fluctuations in top-down attention. Our findings revealed that fusiform activity tracked change detection across conditions. Ventrolateral prefrontal cortical activity was uniquely linked to detection of changes in identity not expression, and amygdala activity to detection of changes from neutral to fearful expressions. These results are consistent with distinct mechanisms supporting detection of changes in face identity vs expression, the former potentially reflecting top-down attention, the latter bottom-up attentional capture by stimulus emotional salience. PMID:26245835
Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja
Automated analysis of facial expressions can benefit many domains, from marketing to clinical diagnosis of neurodevelopmental disorders. Facial expressions are typically encoded as a combination of facial muscle activations, i.e., action units. Depending on context, these action units co-occur in sp
Pyroelectric infrared (PIR) sensors are widely used as a presence trigger, but the analog output of PIR sensors depends on several other aspects, including the distance of the body from the PIR sensor, the direction and speed of movement, the body shape, and gait. In this paper, we present an empirical study of human movement detection and identification using a set of PIR sensors. We have developed a data collection module having two pairs of orthogonally aligned PIR sensors and modified Fresnel lenses. We have placed three PIR-based modules in a hallway for monitoring people: one module on the ceiling and two modules on opposite walls facing each other. We have collected a data set from eight subjects walking in three different conditions: two directions (back and forth), three distance intervals (close to one wall sensor, in the middle, close to the other wall sensor), and three speed levels (slow, moderate, fast). We have used two types of feature sets: a raw data set, and a reduced feature set composed of amplitude, time to peaks, and passage duration extracted from each PIR sensor. We have performed classification analysis with well-known machine learning algorithms, including instance-based learning and support vector machines. Our findings show that with the raw data set captured from a single PIR sensor of each of the three modules, we could achieve more than 92% accuracy in classifying the direction and speed of movement, the distance interval, and identifying subjects. We could also achieve more than 94% accuracy in classifying the direction, speed, and distance and identifying subjects using the reduced feature set extracted from the two pairs of PIR sensors of each of the three modules.
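The reduced feature set described above (amplitude, time to peak, and passage duration per sensor) lends itself to instance-based learning in its simplest form. A stdlib-only sketch; the feature values and the `classify_1nn` helper are hypothetical illustrations, not the authors' implementation:

```python
import math

def classify_1nn(train, query):
    """Instance-based learning at its simplest: assign the label of the
    nearest training example under Euclidean distance in feature space."""
    best_label, best_dist = None, math.inf
    for features, label in train:
        dist = math.dist(features, query)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Hypothetical (amplitude, time-to-peak, passage-duration) features per walk.
train = [
    ((0.9, 0.4, 1.2), "slow"),
    ((1.4, 0.2, 0.6), "fast"),
    ((1.1, 0.3, 0.9), "moderate"),
]
print(classify_1nn(train, (1.35, 0.22, 0.65)))  # nearest to the "fast" example
```

In practice the study used more capable learners (e.g. support vector machines), but the nearest-neighbour rule shows how the low-dimensional reduced features already separate the classes.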
Cheng, Yong; Li, Zuoyong; Jiao, Liangbao; Lu, Hong; Cao, Xuehong
We improved classic retinal modeling to alleviate the adverse effect of complex illumination on face recognition and extracted robust image features. Our improvements on classic retinal modeling included three aspects. First, a combined filtering scheme was applied to simulate the functions of horizontal and amacrine cells for accurate local illumination estimation. Second, we developed an optimal threshold method for illumination classification. Finally, we proposed an adaptive factor acquisition model based on the arctangent function. Experimental results on the combined Yale B; the Carnegie Mellon University Pose, Illumination, and Expression (PIE); and the Labeled Face Parts in the Wild databases show that the proposed method can effectively alleviate illumination differences between images under complex illumination conditions, which is helpful for improving the accuracy of face recognition and of facial feature point detection.
This study examined facial expression in the presentation of sarcasm. 60 responses (sarcastic responses = 30, nonsarcastic responses = 30) from 40 different speakers were coded by two trained coders. Expressions in three facial areas--eyebrow, eyes, and mouth--were evaluated. Only movement in the mouth area significantly differentiated ratings of sarcasm from nonsarcasm.
Truong, K.P.; Leeuwen, D.A. van; Neerincx, M.A.
Two unobtrusive modalities for automatic emotion recognition are discussed: speech and facial expressions. First, an overview is given of emotion recognition studies based on a combination of speech and facial expressions. We will identify difficulties concerning data collection, data fusion, system
Fujinawa, Yukio; Matsumoto, Takumi; Iitaka, Hiroshi; Takahashi, Kozo; Nakano, Hiroshi; Doi, Takuya; Saito, Toshiyuki; Kasai, Naoko; Sato, Sohjun
Volcanic eruptions are generally preceded by magma intrusion, so volcanic forecasting would make considerable progress given a practical means to detect magma movements. Electric potential variations have been observed since April 1999 at Miyake Island, a volcanic island in Japan. Measurements have been conducted with a special long vertical antenna using a steel casing pipe and with a short horizontal dipole. Beginning about half a day before, and continuing during, the largest eruption of Miyake-jima volcano on August 18, 2000, conspicuous electric field variations were observed on the horizontal and vertical components in the DC, ULF, and ELF/VLF frequency bands. Several types of anomalies were found to occur in association with different stages of volcanic activity. We suggest that transient self-potential variations are induced by confined ground-water pressure fluctuations, through the electro-kinetic effect, arising from the interaction between intruding magma and hydrothermal circulation. Subsurface transient self-potential measurement is thus suggested to be a useful means for monitoring volcanic eruptions and to provide an efficient window into modifications of hydrothermal circulation induced by volcanic activity.
Diels, H J; Combs, D
Neuromuscular retraining is an effective method for rehabilitating facial musculature in patients with facial paralysis. This nonsurgical therapy has demonstrated improved functional outcomes and is an important adjunct to surgical treatment for restoring facial movement. Treatment begins with an intensive clinical evaluation and incorporates appropriate sensory feedback techniques into a patient-specific, comprehensive, home therapy program. This article discusses appropriate patients, timelines for referral, and basic treatment practices of facial neuromuscular retraining for restoring function and expression to the highest level possible.
Larsson, Linnéa; Schwaller, Andrea; Nyström, Marcus; Stridh, Martin
The complexity of analyzing eye-tracking signals increases as eye-trackers become more mobile. The signals from a mobile eye-tracker are recorded in relation to the head coordinate system, and when the head and body move, the recorded eye-tracking signal is influenced by these movements, which renders subsequent event detection difficult. The purpose of the present paper is to develop a method that performs robust event detection in signals recorded using a mobile eye-tracker. The proposed method compensates for head movements, recorded using an inertial measurement unit, and employs a multi-modal event detection algorithm. The event detection algorithm is based on the head-compensated eye-tracking signal combined with information about detected objects extracted from the scene camera of the mobile eye-tracker. The method is evaluated with participants seated 2.6 m in front of a big screen, and is therefore only valid for distant targets. The proposed head compensation decreases the standard deviation during fixation intervals from 8° to 3.3° for eye-tracking signals recorded during large head movements. The multi-modal event detection algorithm outperforms both an existing algorithm (I-VDT) and the built-in algorithm of the mobile eye-tracker, with an average balanced accuracy, calculated over all types of eye movements, of 0.90, compared to 0.85 and 0.75, respectively, for the compared algorithms. The proposed event detector, which combines head movement compensation and information about detected objects in the scene video, enables improved classification of events in mobile eye-tracking data.
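Balanced accuracy, the evaluation metric quoted above, is the mean of per-class recalls, so each event type counts equally even when one type (typically fixations) dominates the recording. A stdlib-only sketch; the label sequences below are invented for illustration:

```python
def balanced_accuracy(y_true, y_pred):
    """Mean recall over classes: each event type contributes equally,
    regardless of how many samples it has."""
    recalls = []
    for c in set(y_true):
        correct = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        total = sum(1 for t in y_true if t == c)
        recalls.append(correct / total)
    return sum(recalls) / len(recalls)

# Many fixations ("fix"), few saccades ("sac"): plain accuracy would be 5/6,
# but the one missed saccade halves that class's recall.
y_true = ["fix", "fix", "fix", "fix", "sac", "sac"]
y_pred = ["fix", "fix", "fix", "fix", "sac", "fix"]
print(balanced_accuracy(y_true, y_pred))  # (1.0 + 0.5) / 2 = 0.75
```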
Shakeel, Aqsa; Navid, Muhammad Samran; Anwar, Muhammad Nabeel; Mazhar, Suleman; Jochumsen, Mads; Niazi, Imran Khan
The movement-related cortical potential (MRCP) is a low-frequency negative shift in the electroencephalography (EEG) recording that takes place about 2 seconds prior to voluntary movement production. The MRCP reflects the cortical processes involved in movement planning and preparation. In this study, we review signal acquisition, processing, and enhancement, as well as the different electrode montages used for EEG data recording, across studies that used MRCPs to predict upcoming real or imagined movements. Reliable identification of human movement intention, together with knowledge of which limb is engaged and its direction of movement, has potential implications for the control of external devices. This information could help in the development of proficient patient-driven rehabilitation tools based on brain-computer interfaces (BCIs). Such a BCI paradigm, with a shorter response time, appears more natural to amputees and can also induce plasticity in the brain. Combined with appropriate training schedules, this can lead to restoration of motor control in stroke patients.
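Because the MRCP is a slow negative shift, a toy detector can smooth the EEG and report where the smoothed trace first crosses below a negativity threshold. This is a stdlib-only illustration of the idea only; the sampling rate, threshold, window, and the `detect_mrcp_onsets` helper are assumptions, not taken from the reviewed studies:

```python
def detect_mrcp_onsets(eeg, fs, threshold_uv=-5.0, window_s=0.5):
    """Smooth the EEG (microvolts) with a trailing moving average, then
    report the samples where the trace first crosses below the threshold."""
    w = max(1, int(window_s * fs))
    smoothed = []
    for i in range(len(eeg)):
        seg = eeg[max(0, i - w + 1):i + 1]
        smoothed.append(sum(seg) / len(seg))
    return [i for i in range(1, len(smoothed))
            if smoothed[i] <= threshold_uv < smoothed[i - 1]]

# Synthetic trace at fs = 10 Hz: flat baseline, then a sustained negative
# shift; the detector flags the sample where the smoothed trace crosses.
print(detect_mrcp_onsets([0.0] * 20 + [-10.0] * 20, fs=10))
```

Real MRCP detectors use matched templates or classifiers on properly filtered multi-channel EEG; the sketch only conveys the threshold-crossing intuition.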
Huang, Charles Lung-Cheng; Hsiao, Sigmund; Hwu, Hai-Gwo; Howng, Shen-Long
This study assessed facial emotion recognition abilities using signal detection theory, exploring differential deficits in 44 paranoid patients with schizophrenia (PS) and 30 non-paranoid patients with schizophrenia (NPS), compared to 80 healthy controls. We used morphed faces with different intensities of emotion and computed the sensitivity index (d') for each emotion. The results showed that performance differed between the schizophrenia and healthy control groups in the recognition of both negative and positive affects. The PS group performed worse than the healthy control group but better than the NPS group in overall performance. Performance differed between the NPS and healthy control groups in the recognition of all basic emotions and neutral faces; between the PS and healthy control groups in the recognition of angry faces; and between the PS and NPS groups in the recognition of happiness, anger, sadness, disgust, and neutral affects. The facial emotion recognition impairment in schizophrenia may reflect a generalized deficit rather than a negative-emotion-specific deficit. The PS group performed worse than the control group, but better than the NPS group, in facial expression recognition, with differential deficits between PS and NPS patients.
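The sensitivity index d' used above comes from signal detection theory: it is the difference between the z-transformed hit rate and false-alarm rate. A minimal stdlib-only sketch; the trial counts are invented, and the log-linear correction shown is one common convention for avoiding infinite z-scores, not necessarily the one used in the study:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate), with a log-linear
    correction so rates of exactly 0 or 1 stay finite."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Chance performance (equal hit and false-alarm rates) gives d' = 0;
# mostly-correct responding gives a clearly positive d'.
print(d_prime(25, 25, 25, 25))
print(d_prime(45, 5, 5, 45))
```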
Dunes on Earth move downwind at different speeds depending upon the local wind conditions, the amount of loose sand available to be transported by wind, the shape and volume of the dunes, and overgrowths of vegetation. Typically, smaller dunes move faster than larger dunes. On Earth, some of the fastest-moving dunes that have been measured (e.g., in the deserts of Peru) move 10 to 30 meters (33 to 100 feet) per year. Small dunes usually have an almost crescent shape to them, and are known to geologists as barchan dunes. To look for evidence of dune movement on Mars, the Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) has been used to re-visit some areas of known barchan dunes--because these types move the fastest--that were observed by the Mariner 9 orbiter in 1972 and the Viking 1 and 2 orbiters between 1976 and 1980. The picture above, left, shows a MOC high-resolution image taken December 25, 1999. The classic, crescentic shape of the dark barchan dunes can be seen in this picture. The steep slopes, also known as the dune slip faces, on these dunes are facing toward the southwest (north is up in both pictures). Thus, the shape of the dunes indicates that they are moving toward the southwest. The picture above right shows the MOC image from December 1999 superimposed on a Viking 1 image taken May 27, 1978. During the 11 1/2 Mars years that passed between these two dates, it turns out that no difference can be detected in the position of the dunes seen in the MOC image and the Viking image. The earlier Viking image had a resolution of about 17 meters (56 ft) per pixel, while the MOC image had a resolution of about 3.8 meters (12 ft) per pixel. Although it looks like the dunes didn't move between the Viking and MOC images, this observation is limited by the resolution of the Viking image. It is entirely possible that the dunes have moved as much as 17-20 meters (56-66 ft) and one would not be able to tell by comparing the images. As it is, movement of less than
Blanco Martínez, Jennifer
The effect of sex and age on the ability to detect slight facial changes in pairs of photographs was evaluated in a group of people. Each pair of photographs was displayed for 1.5 s. Two treatments were used, one without training and one with training, in which a pair of photographs was shown to the person just before the test as an example of the changes that could be expected. Men and women showed significant differences in the test results: women obtained higher scores, indicating a more detailed visual perception of faces. An effect of age on perception was also found, with the greatest number of correct choices between 21 and 30 years of age; before this range, scores are lower, possibly because perceptual capacity is still developing, while afterwards scores decrease following the normal pattern of aging. A greater number of correct choices was found for the treatment with training, suggesting that this method (demonstration and example) is effective in facilitating the perception of facial differences.
Li, Yongqiang; Wang, Shangfei; Zhao, Yongping; Ji, Qiang
The tracking and recognition of facial activities from images or videos have attracted great attention in the computer vision field. Facial activities are characterized at three levels. First, in the bottom level, facial feature points around each facial component (e.g., eyebrow, mouth) capture the detailed face shape information. Second, in the middle level, facial action units, defined in the facial action coding system, represent the contraction of a specific set of facial muscles (e.g., lid tightener, eyebrow raiser). Finally, in the top level, six prototypical facial expressions represent the global facial muscle movement and are commonly used to describe the human emotion states. In contrast to the mainstream approaches, which usually focus on only one or two levels of facial activities and track (or recognize) them separately, this paper introduces a unified probabilistic framework based on the dynamic Bayesian network to simultaneously and coherently represent the facial evolvement at different levels, their interactions, and their observations. Advanced machine learning methods are introduced to learn the model based on both training data and subjective prior knowledge. Given the model and the measurements of facial motions, all three levels of facial activities are simultaneously recognized through probabilistic inference. Extensive experiments are performed to illustrate the feasibility and effectiveness of the proposed model on all three levels of facial activities.
Yoshida, Masaki; Kawai, Kazushige; Kitahara, Kenji [Jikei Univ., Tokyo (Japan). School of Medicine]; Soulie, D.; Cordoliani, Y.S.; Iba-Zizen, M.T.; Cabanis, E.A.
Cortical activity during eye movement was examined with functional magnetic resonance imaging. Horizontal saccadic eye movements and smooth pursuit eye movements were elicited in normal subjects. Activity in the frontal eye field was found during both saccadic and smooth pursuit eye movements at the posterior margin of the middle frontal gyrus and in parts of the precentral sulcus and precentral gyrus bordering the middle frontal gyrus (Brodmann's areas 8, 6, and 9). In addition, activity in the parietal eye field was found in the deep, upper margin of the angular gyrus and of the supramarginal gyrus (Brodmann's areas 39 and 40) during saccadic eye movement. Activity of V5 was found at the intersection of the ascending limb of the inferior temporal sulcus and the lateral occipital sulcus during smooth pursuit eye movement. Our results suggest that functional magnetic resonance imaging is useful for detecting cortical activity during eye movement. (author)
Sanchez-Pages, Santiago; Rodriguez-Ruiz, Claudia; Turiegano, Enrique
Recent research has explored the relationship between facial masculinity, human male behaviour and males' perceived features (i.e. attractiveness). The methods of measurement of facial masculinity employed in the literature are quite diverse. In the present paper, we use several methods of measuring facial masculinity to study the effect of this feature on risk attitudes and trustworthiness. We employ two strategic interactions to measure these two traits, a first-price auction and a trust game. We find that facial width-to-height ratio is the best predictor of trustworthiness, and that measures of masculinity which use Geometric Morphometrics are the best suited to link masculinity and bidding behaviour. However, we observe that the link between masculinity and bidding in the first-price auction might be driven by competitiveness and not by risk aversion only. Finally, we test the relationship between facial measures of masculinity and perceived masculinity. As a conclusion, we suggest that researchers in the field should measure masculinity using one of these methods in order to obtain comparable results. We also encourage researchers to revise the existing literature on this topic following these measurement methods. PMID:25389770
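Facial width-to-height ratio (fWHR), the best predictor of trustworthiness in the study above, is conventionally computed from four landmarks: bizygomatic width (between the cheekbones) divided by upper-face height (brow to upper lip). A sketch under that convention; the landmark coordinates below are invented for illustration:

```python
def fwhr(left_zygion, right_zygion, brow, upper_lip):
    """fWHR = bizygomatic width / upper-face height, from (x, y) landmarks."""
    width = abs(right_zygion[0] - left_zygion[0])
    height = abs(brow[1] - upper_lip[1])
    return width / height

# Invented landmark coordinates in pixels (x, y), y increasing upward.
print(fwhr((40, 100), (180, 100), (110, 130), (110, 60)))  # 140 / 70 = 2.0
```

Geometric Morphometrics measures, by contrast, work on full landmark configurations rather than two distances, which is why the paper treats them as a separate family of masculinity measures.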
Guntinas-Lichius, Orlando; Genther, Dane J; Byrne, Patrick J
Extracranial infiltration of the facial nerve by salivary gland tumors is the most frequent cause of facial palsy secondary to malignancy. Nevertheless, facial palsy related to salivary gland cancer is uncommon. Therefore, reconstructive facial reanimation surgery is not a routine undertaking for most head and neck surgeons. The primary aims of facial reanimation are to restore tone, symmetry, and movement to the paralyzed face. Such restoration should improve the patient's objective motor function and subjective quality of life. The surgical procedures for facial reanimation rely heavily on long-established techniques, but many advances and improvements have been made in recent years. In the past, published experiences on strategies for optimizing functional outcomes in facial paralysis patients were primarily based on small case series and described a wide variety of surgical techniques. More recently, however, larger series have been published from high-volume centers with significant and specialized experience in surgical and nonsurgical reanimation of the paralyzed face that have informed modern treatment. This chapter reviews the most important diagnostic methods used for the evaluation of facial paralysis to optimize the planning of each individual's treatment and discusses surgical and nonsurgical techniques for facial rehabilitation based on the contemporary literature.
Caudek, Corrado; Ceccarini, Francesco; Sica, Claudio
The facial dot-probe task is one of the most common experimental paradigms used to assess attentional bias toward emotional information. In recent years, however, the psychometric properties of this paradigm have been questioned. In the present study, attentional bias to emotional face stimuli was measured with dynamic and static images of realistic human faces in 97 college students (63 women) who underwent either a positive or a negative mood-induction prior to the experiment. We controlled the bottom-up salience of the stimuli in order to dissociate the top-down orienting of attention from the effects of the bottom-up physical properties of the stimuli. A Bayesian analysis of our results indicates that 1) the traditional global attentional bias index shows a low reliability, 2) reliability increases dramatically when biased attention is analyzed by extracting a series of bias estimations from trial-to-trial (Zvielli, Bernstein, & Koster, 2015), 3) dynamic expression of emotions strengthens biased attention to emotional information, and 4) mood-congruency facilitates the measurement of biased attention to emotional stimuli. These results highlight the importance of using ecologically valid stimuli in attentional bias research, together with the importance of estimating biased attention at the trial level.
Lobo-Prat, J.; Kooren, P.N.; Stienen, A.H.A.; Herder, J.L.; Koopman, B.F.J.M.; Veltink, P.H.
Active movement-assistive devices aim to increase the quality of life for patients with neuromusculoskeletal disorders. This technology requires interaction between the user and the device through a control interface that detects the user’s movement intention. Researchers have explored a wide variet
Jowett, Nate; Hadlock, Tessa A
The management of acute facial nerve insult may entail medical therapy, surgical exploration, decompression, or repair depending on the etiology. When recovery is not complete, facial mimetic function lies on a spectrum ranging from flaccid paralysis to hyperkinesis resulting in facial immobility. Through systematic assessment of the face at rest and with movement, one may tailor the management to the particular pattern of dysfunction. Interventions for long-standing facial palsy include physical therapy, injectables, and surgical reanimation procedures. The goal of the management is to restore facial balance and movement. This article summarizes a contemporary approach to the management of facial nerve insults.
Nowadays, in aeronautical environments, the use of mobile communication and other wireless technologies is restricted. More specifically, the Federal Communications Commission (FCC) and the Federal Aviation Administration (FAA) prohibit the use of cellular phones and other wireless devices on airborne aircraft because of potential interference with wireless networks on the ground and with the aircraft's navigation and communication systems. Within this context, we propose in this paper a movement recognition algorithm that will switch off a module including a GSM (Global System for Mobile Communications) device, or any other mobile cellular technology, as soon as it senses movement, and thereby will prevent any forbidden transmissions that could occur in a moving airplane. The algorithm is based solely on measurements of a low-cost accelerometer and is easy to implement with a high degree of reliability.
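A movement trigger of the kind described can be sketched as a variance test on the accelerometer magnitude: at rest the magnitude is roughly constant (gravity only), while carrying the module makes it fluctuate. The threshold, sample format, and `is_moving` name are illustrative assumptions, not the paper's algorithm:

```python
def is_moving(samples, threshold=0.02):
    """Variance of the acceleration magnitude: near zero at rest (gravity
    only), clearly positive while the module is being moved."""
    mags = [(ax * ax + ay * ay + az * az) ** 0.5 for ax, ay, az in samples]
    mean = sum(mags) / len(mags)
    var = sum((m - mean) ** 2 for m in mags) / len(mags)
    return var > threshold

# Readings in g: a stationary module (gravity only) vs. a hand-carried one.
print(is_moving([(0.0, 0.0, 1.0)] * 10))
print(is_moving([(0, 0, 1), (0.5, 0, 1), (0, 0.5, 1.5), (0.8, 0.2, 0.7)] * 3))
```

Using the magnitude makes the test independent of the module's orientation, which matters for a device that may be stowed at any angle.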
O'Regan, Simon; Faul, Stephen; Marnane, William
Contamination of EEG signals by artefacts arising from head movements has been a serious obstacle in the deployment of automatic neurological event detection systems in ambulatory EEG. In this paper, we present work on categorizing these head-movement artefacts as one distinct class and on using support vector machines to automatically detect their presence. The use of additional physical signals in detecting head-movement artefacts is also investigated by means of support vector machine classifiers implemented with gyroscope waveforms. Finally, the combination of features extracted from EEG and gyroscope signals is explored in order to design an algorithm which incorporates both physical and physiological signals in accurately detecting artefacts arising from head movements.
Yu, Hui; Garrod, Oliver; Jack, Rachael; Schyns, Philippe
Facial expressions reflect a character's internal emotional state or its response to social communication. Though much effort has been devoted to generating realistic facial expressions, this remains a challenging topic due to humans' sensitivity to subtle facial movements. In this paper, we present a method for facial animation generation which reflects true facial muscle movements with high fidelity. An intermediate model space is introduced to transfer captured static AU peak frames, based on FACS, to the conformed target face. Dynamic parameters derived using a psychophysics method are then integrated to generate facial animation, which is assumed to represent the natural correlation of multiple AUs. Finally, the animation sequence in the intermediate model space is mapped to the target face to produce the final animation.
Muralidharan, A.; Chae, J.; Taylor, D. M.
Movement-assist devices such as neuromuscular stimulation systems can be used to generate movements in people with chronic hand paralysis due to stroke. If detectable, motor planning activity in the cortex could be used in real time to trigger a movement-assist device and restore a person's ability to perform many of the activities of daily living. Additionally, re-coupling motor planning in the cortex with assisted movement generation in the periphery may provide an even greater benefit—strengthening relevant synaptic connections over time to promote natural motor recovery. This study examined the potential for using electroencephalograms (EEGs) as a means of rapidly detecting the intent to open the hand during movement planning in individuals with moderate chronic hand paralysis following a subcortical ischemic stroke. On average, attempts to open the hand could be detected from EEGs approximately 100-500 ms prior to the first signs of movement onset. This earlier detection would minimize device activation delays and allow for tighter coupling between initial formation of the motor plan in the cortex and augmentation of that plan in the periphery by a movement-assist device. This tight temporal coupling may be important or even essential for strengthening synaptic connections and enhancing natural motor recovery.
Hemming, J.; Henten, van E.J.; Tuijl, van B.A.J.; Bontsema, J.
Besides harvesting the fruits, a very time demanding task is removing old leaves from cucumber and tomato plants grown in greenhouses. To be able to automate this process by a robot, a leaf detection method is required. One possibility for the detection is to exploit the different dynamic behaviour
Ghent, John; McDonald, J.
This paper details a procedure for classifying facial expressions. This is a growing and relatively new type of problem within computer vision. One of the fundamental problems when classifying facial expressions in previous approaches is the lack of a consistent method of measuring expression. This paper addresses this problem through the computation of the Facial Expression Shape Model (FESM). This statistical model of facial expression is based on an anatomical analysis of facial expression called the Facial Action Coding System (FACS). We use the term Action Unit (AU) to describe a movement of one or more muscles of the face, and all expressions can be described using the AUs defined by FACS. The shape model is calculated by marking the face with 122 landmark points. We use Principal Component Analysis (PCA) to analyse how the landmark points move with respect to each other and to reduce the dimensionality of the problem. Using the FESM in conjunction with Support Vector Machines (SVMs), we classify facial expressions. SVMs are a powerful machine learning technique based on optimisation theory. This project is largely concerned with statistical models, machine learning techniques, and psychological tools used in the classification of facial expression. This holistic approach to expression classification provides a means for a level of interaction with a computer that is a significant step forward in human-computer interaction.
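The PCA step on landmark coordinates can be illustrated with a power iteration for the first principal component: center the landmark vectors, form the covariance matrix, and iterate to the dominant eigenvector. This is a stdlib-only sketch of the technique, not the FESM implementation; a real shape model uses many landmarks and retains several components:

```python
def first_principal_component(data, iters=200):
    """data: list of equal-length feature vectors (flattened landmark
    coordinates). Returns (mean vector, first principal direction)."""
    n, d = len(data), len(data[0])
    mean = [sum(x[j] for x in data) / n for j in range(d)]
    centered = [[x[j] - mean[j] for j in range(d)] for x in data]
    # Sample covariance matrix of the centered data.
    cov = [[sum(c[i] * c[j] for c in centered) / (n - 1) for j in range(d)]
           for i in range(d)]
    # Power iteration converges to the dominant eigenvector of cov.
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return mean, v

# Toy 2-D "landmarks" whose variation is almost entirely along x:
mean, v = first_principal_component([(0.0, 0.0), (2.0, 0.1), (4.0, -0.1), (6.0, 0.0)])
print(mean, v)  # direction close to the x axis
```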
Kawamura, Mitsuru; Sugimoto, Azusa; Kobayakawa, Mutsutaka; Tsuruya, Natsuko
To discuss the neurological basis of facial recognition, we present our case reports of impaired recognition and a review of previous literature. First, we present a case of infarction and discuss prosopagnosia, which has had a large impact on face recognition research. From a study of patient symptoms, we assume that prosopagnosia may be caused by a unilateral right occipitotemporal lesion and right cerebral dominance of facial recognition. Further, circumscribed lesions and degenerative disease may also cause progressive prosopagnosia. Apperceptive prosopagnosia is observed in patients with posterior cortical atrophy (PCA), pathologically considered as Alzheimer's disease, and associative prosopagnosia in frontotemporal lobar degeneration (FTLD). Second, we discuss face recognition as part of communication. Patients with Parkinson disease show social cognitive impairments, such as difficulty in facial expression recognition and deficits in theory of mind as detected by the Reading the Mind in the Eyes Test. Pathological and functional imaging studies indicate that social cognitive impairment in Parkinson disease is possibly related to damage to the amygdalae and surrounding limbic system. The social cognitive deficits can be observed in the early stages of Parkinson disease, and even in the prodromal stage; for example, patients with rapid eye movement (REM) sleep behavior disorder (RBD) show impairment in facial expression recognition. Further, patients with myotonic dystrophy type 1 (DM1), which is a multisystem disease that mainly affects the muscles, show social cognitive impairment similar to that of Parkinson disease. Our previous study showed that the facial expression recognition impairment of DM1 patients is associated with lesions in the amygdalae and insulae. Our study results indicate that behaviors and personality traits in DM1 patients, which are revealed by social cognitive impairment, are attributable to dysfunction of the limbic system.
Ernesto Pablo Lana
Full Text Available Introduction This paper presents a detection method for upper limb movement intention as part of a brain-machine interface using EEG signals, whose final goal is to assist disabled or vulnerable people with activities of daily living. Methods EEG signals were recorded from six naïve healthy volunteers while performing a motor task. Every volunteer remained in an acoustically isolated recording room. The robot was placed in front of the volunteers such that it seemed to be a mirror of their right arm, emulating a brain-machine interface environment. The volunteers were seated in an armchair throughout the experiment, outside the reaching area of the robot to guarantee safety. Three conditions were studied: observation, execution, and imagery of right arm flexion and extension movements paced by an anthropomorphic manipulator robot. The detector of movement intention uses the spectral F test for discrimination of conditions, and uses as features the desynchronization patterns found in the volunteers. Using a detector provides an objective method of establishing the occurrence of movement intention. Results When using four realizations of the task, detection rates ranging from 53 to 97% were found in five of the volunteers when the movement was executed, in three of them when the movement was imagined, and in two of them when the movement was observed. Conclusions The detection rates for movement observation raise the question of how visual feedback may affect the performance of a working brain-machine interface, posing another challenge for the upcoming interface implementation. Future developments will focus on the improvement of feature extraction and detection accuracy for movement intention using EEG data.
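The spectral F test used by the detector amounts to comparing the power spectra of two conditions, bin by bin. The sketch below illustrates the idea on synthetic data; the sampling rate, the mu band, and the decision threshold of 4.0 are assumptions for illustration, not values from the paper:

```python
import numpy as np

def spectral_f_test(rest, task, fs=256.0):
    """Ratio of the rest spectrum to the task spectrum; a large ratio in the
    mu band (8-12 Hz) suggests event-related desynchronization (ERD)."""
    n = min(len(rest), len(task))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    p_rest = np.abs(np.fft.rfft(rest[:n])) ** 2
    p_task = np.abs(np.fft.rfft(task[:n])) ** 2
    return freqs, p_rest / (p_task + 1e-12)

# Synthetic example: a 10 Hz mu rhythm that attenuates during the task.
rng = np.random.default_rng(0)
t = np.arange(0, 2.0, 1.0 / 256.0)
rest = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)
task = 0.2 * np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(t.size)
freqs, ratio = spectral_f_test(rest, task)
mu = (freqs >= 8) & (freqs <= 12)
detected = bool(ratio[mu].max() > 4.0)  # hypothetical decision threshold
```

In a real detector the ratio would be compared against a critical value of the F distribution rather than a fixed threshold.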
Bonneton-Botté, Nathalie; De La Haye, Fanny; Marec-Breton, Nathalie; Bara, Florence
This research focuses on the ability of young children to detect and identify the continuity or discontinuity of a cursive handwriting movement. The evolution of this ability has been studied by comparing the performance of nonscripters (kindergarten pupils) and scripters (students in the 2nd and 5th years of primary school). Results showed that the perception of information relating to the continuity of the writing movement precedes the formal learning of cursive handwriting. Analysis of the justifications produced by the youngest participants suggests that early knowledge of handwriting movement cannot yet be made explicit.
Full Text Available Objective techniques to evaluate facial movement are indispensable for the contemporary treatment of patients with motor disorders such as facial paralysis, cleft lip, postoperative head and neck cancer, and so on. Recently, computer-assisted, video-based techniques have been devised and reported as measuring systems in which facial movements can be evaluated quantitatively. Commercially available motion analysis systems, which use a stereo-measuring technique with multiple cameras and markers to facilitate the matching of corresponding points among the images from all cameras, are also utilized, as in many video-based measuring systems. The key questions are how the problems of facial movement can be extracted precisely, and how useful information for the diagnosis and decision-making process can be derived from analyses of facial movement. Therefore, it is important to discuss which facial animations should be examined, and whether fixation of the head and markers attached to the face can hamper natural facial movement.
Baldi, Paolo; Bitelli, Gabriele; Carrara, Alberto; Zanutta, Antonio
The identification of the spatial and temporal evolution of landslides requires the integration of geomorphological, topographical and geophysical surveys. Aerial photographs flown over a long time span make it possible to evaluate the morphological changes of the landscape, qualitatively (by photo-interpretation) and quantitatively (by photogrammetric techniques). In particular, the comparison of detailed DTMs, derived from aerial photographs and inserted in a unique reference system, may permit a quantitative reconstruction of long-term landslide movements. To generate high-resolution DTMs, it is necessary to have a set of photogrammetric ground control points with adequate accuracy, located in an optimal way. However, in historical surveys, calibration certificates for the photogrammetric cameras employed and ground control points are not available; it is therefore not possible to calculate the external orientation parameters of the photographs with the traditional methods. In such circumstances it is difficult to orient the stereoscopic models in a unique reference system, and approximate techniques are usually adopted (archival photogrammetric techniques). In the present research an archival photogrammetric technique has been applied to investigate a landslide located in Vergato (Bologna, Italy). Three DTMs obtained from three sets of aerial photographs, flown in 1971, 1976 and 2001, were generated through both an analytic stereoplotter and two different digital workstations. The technique adopted to produce DTMs from historical photogrammetric models, the quantitative comparisons of the DTMs, and some considerations concerning the main problems that arose are illustrated and discussed in the framework of setting forth a feasible procedure for monitoring landslide evolution over wide areas.
Individuals with facial paralysis and distorted facial expressions and movements secondary to a facial neuromotor disorder experience substantial physical, psychological, and social disability. Previously, facial rehabilitation has not been widely available or considered to be of much benefit. An emerging rehabilitation science of neuromuscular reeducation, and evidence for the efficacy of facial neuromuscular reeducation, a process of facilitating the return of intended facial movement patterns and eliminating unwanted patterns of facial movement and expression, may provide patients with facial paralysis or disordered facial movement control an opportunity for the recovery of facial movement and function. We provide a brief overview of the scientific rationale for facial neuromuscular reeducation in the structure and function of the facial neuromotor system, the neuropsychology of facial expression, and the relations among expression, movement, and emotion. The primary purpose is to describe the principles of neuromuscular reeducation; assessment and outcome measures; the approach to treatment, including surface-electromyographic biofeedback as an adjunct to reeducation; and the goal of enhancing the recovery of facial expression and function in a patient-centered approach to facial rehabilitation.
Hargreaves, A; Mothersill, O; Anderson, M; Lawless, S; Corvin, A; Donohoe, G
Deficits in facial emotion recognition have been associated with functional impairments in patients with Schizophrenia (SZ). Whilst a strong ecological argument has been made for the use of both dynamic facial expressions and varied emotion intensities in research, SZ emotion recognition studies to date have primarily used static stimuli of a singular, 100%, intensity of emotion. To address this issue, the present study aimed to investigate accuracy of emotion recognition amongst patients with SZ and healthy subjects using dynamic facial emotion stimuli of varying intensities. To this end an emotion recognition task (ERT) designed by Montagne (2007) was adapted and employed. 47 patients with a DSM-IV diagnosis of SZ and 51 healthy participants were assessed for emotion recognition. Results of the ERT were tested for correlation with performance in areas of cognitive ability typically found to be impaired in psychosis, including IQ, memory, attention and social cognition. Patients were found to perform less well than healthy participants at recognising each of the 6 emotions analysed. Surprisingly, however, groups did not differ in terms of impact of emotion intensity on recognition accuracy; for both groups higher intensity levels predicted greater accuracy, but no significant interaction between diagnosis and emotional intensity was found for any of the 6 emotions. Accuracy of emotion recognition was, however, more strongly correlated with cognition in the patient cohort. Whilst this study demonstrates the feasibility of using ecologically valid dynamic stimuli in the study of emotion recognition accuracy, varying the intensity of the emotion displayed was not demonstrated to impact patients and healthy participants differentially, and thus may not be a necessary variable to include in emotion recognition research. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Full Text Available One of the crucial problems found in the scientific community of assistive/rehabilitation robotics nowadays is that of automatically detecting what a disabled subject (for instance, a hand amputee) wants to do, exactly when she wants to do it, and strictly for the time she wants to do it. This problem, commonly called intent detection, has traditionally been tackled using surface electromyography, a technique which suffers from a number of drawbacks, including the changes in the signal induced by sweat and muscle fatigue. With the advent of realistic, physically plausible augmented- and virtual-reality environments for rehabilitation, this approach does not suffice anymore. In this paper we explore a novel method to solve the problem, which we call Optical Myography (OMG). The idea is to visually inspect the human forearm (or stump) to reconstruct which fingers are moving and to what extent. In a psychophysical experiment involving ten intact subjects, we used visual fiducial markers (AprilTags) and a standard web camera to visualize the deformations of the surface of the forearm, which were then mapped to the intended finger motions. As ground truth, a visual stimulus was used, avoiding the need for finger sensors (force/position sensors, datagloves, etc.). Two machine-learning approaches, a linear and a non-linear one, were comparatively tested in settings of increasing realism. The results indicate an average error in the range of 0.05 to 0.22 (root mean square error normalized over the signal range), in line with similar results obtained with more mature techniques such as electromyography. If further successfully tested at scale, this approach could lead to vision-based intent detection for amputees, with the main application of letting such disabled persons dexterously and reliably interact in an augmented-/virtual-reality setup.
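The linear machine-learning approach can be pictured as a least-squares map from marker displacements to finger extents. The following is a minimal sketch on synthetic data; the marker count, finger count, and noise level are made-up assumptions, not the study's setup:

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_markers, n_fingers = 200, 8, 5

# Synthetic "ground truth": a hidden linear relation between forearm-marker
# displacements and finger extents, plus a little measurement noise.
true_map = rng.standard_normal((n_markers, n_fingers))
X = rng.standard_normal((n_samples, n_markers))            # marker displacements
Y = X @ true_map + 0.01 * rng.standard_normal((n_samples, n_fingers))

# Least-squares fit of the linear map, then RMSE normalized over the
# signal range, the error measure reported in the abstract.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
pred = X @ W
nrmse = float(np.sqrt(np.mean((pred - Y) ** 2)) / (Y.max() - Y.min()))
```

On this idealized linear data the normalized RMSE is far below the 0.05 to 0.22 range reported for real forearm imagery, which reflects how much harder the real mapping is.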
Full Text Available BACKGROUND: Brain-machine interfaces (BMIs) can translate the neuronal activity underlying a user's movement intention into movements of an artificial effector. In spite of continuous improvements, errors in movement decoding are still a major problem of current BMI systems. If the difference between the decoded and intended movements becomes noticeable, it may lead to an execution error. Outcome errors, where subjects fail to reach a certain movement goal, are also present during online BMI operation. Detecting such errors can be beneficial for BMI operation: (i) errors can be corrected online after being detected, and (ii) adaptive BMI decoding algorithms can be updated to make fewer errors in the future. METHODOLOGY/PRINCIPAL FINDINGS: Here, we show that error events can be detected from human electrocorticography (ECoG) during a continuous task with high precision, given a temporal tolerance of 300-400 milliseconds. We quantified the error detection accuracy and showed that, using only a small subset of 2×2 ECoG electrodes, 82% of the detection information for outcome errors and 74% of the detection information for execution errors available from all ECoG electrodes could be retained. CONCLUSIONS/SIGNIFICANCE: The error detection method presented here could be used to correct errors made during BMI operation or to adapt a BMI algorithm to make fewer errors in the future. Furthermore, our results indicate that a smaller ECoG implant could be used for error detection. Reducing the size of an ECoG electrode implant used for BMI decoding and error detection could significantly reduce the medical risk of implantation.
Bethany R. Raiff
Full Text Available Cigarette smoking remains the leading cause of preventable death in the United States. Traditional in-clinic cessation interventions may fail to intervene and interrupt the rapid progression to relapse that typically occurs following a quit attempt. The ability to detect actual smoking behavior in real time is a measurement challenge for health behavior research and intervention. The successful detection of real-time smoking through mobile health (mHealth) methodology has substantial implications for developing highly efficacious treatment interventions. The current study was aimed at further developing and testing the ability of inertial sensors to detect cigarette smoking arm movements among smokers. The current study involved four smokers who smoked six cigarettes each in a laboratory-based assessment. Participants were outfitted with four inertial body movement sensors on the arms, which were used to detect smoking events at two levels: the puff level and the cigarette level. Two different algorithms (Support Vector Machine (SVM) and Edge-Detection based learning) were trained to detect the features of arm movement sequences transmitted by the sensors that corresponded with each level. The results showed that performance of the SVM algorithm at the cigarette level exceeded detection at the individual puff level, with low rates of false positive puff detection. The current study is the second in a line of programmatic research demonstrating the proof of concept for sensor-based tracking of smoking, based on movements of the arm and wrist. This study demonstrates efficacy in a real-world clinical inpatient setting and is the first to provide a detection rate against direct observation, enabling calculation of true and false positive rates. The study results indicate that the approach performs very well with some participants, whereas some challenges remain with participants who generate more frequent non-smoking movements near the face. Future
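The edge-detection idea at the puff level can be sketched as threshold crossings on a wrist-elevation signal, a sustained hand-to-mouth rise followed by a fall. This is a hypothetical illustration on synthetic data; the threshold, the refractory gap, and the elevation feature itself are assumptions, not the study's algorithm:

```python
import numpy as np

def detect_puffs(elev, rise=0.5, min_gap=20):
    """Return indices where `elev` crosses `rise` upward, merging
    crossings closer than `min_gap` samples (a crude refractory period)."""
    up = np.flatnonzero((elev[1:] >= rise) & (elev[:-1] < rise)) + 1
    events = []
    for i in up:
        if not events or i - events[-1] >= min_gap:
            events.append(int(i))
    return events

# Synthetic wrist-elevation trace with three hand-to-mouth gestures.
elev = np.zeros(600)
for start in (100, 250, 400):
    elev[start:start + 30] = 1.0
events = detect_puffs(elev)
```

A real pipeline would work on noisy multi-axis inertial features, which is why the study also trains an SVM rather than relying on a single threshold.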
Central nervous system abnormalities on midline facial defects with hypertelorism detected by magnetic resonance image and computed tomography; Anomalias de sistema nervoso central em defeitos de linha media facial com hipertelorismo detectados por ressonancia magnetica e tomografia computadorizada
Lopes, Vera Lucia Gil da Silva; Giffoni, Silvio David Araujo [Universidade Estadual de Campinas (UNICAMP), SP (Brazil). Faculdade de Ciencias Medicas. Dep. de Genetica Medica]. E-mail: email@example.com
The aims of this study were to describe and compare structural central nervous system (CNS) anomalies detected by magnetic resonance imaging (MRI) and computed tomography (CT) in individuals affected by midline facial defects with hypertelorism (MFDH), isolated or associated with multiple congenital anomalies (MCA). The investigation protocol included dysmorphological examination, skull and facial X-rays, and brain CT and/or MRI. We studied 24 individuals, 12 of whom had an isolated form (Group I) and the others MCA of unknown etiology (Group II). There was no significant difference between Groups I and II, and the results are presented together. In addition to the several CNS anomalies previously described, MRI (n=18) was useful for the detection of neuronal migration errors. These data suggested that structural CNS anomalies and MFDH seem to have an intrinsic embryological relationship, which should be taken into account during clinical follow-up. (author)
Zhao, Wei; Park, Jeong-Seon; Choi, Dong-You; Lee, Sang-Woong
In daily life, language is an important tool of communication between people. Besides language, facial actions can also provide a great amount of information. Therefore, facial action recognition has become a popular research topic in the field of human-computer interaction (HCI). However, facial action recognition is quite a challenging task due to its complexity. In a literal sense, there are thousands of facial muscular movements, many of which have very subtle differences. Moreover, muscular movements always occur simultaneously when the pose is changed. To address this problem, we first build a fully automatic facial point detection system based on a local Gabor filter bank and principal component analysis. Then, novel dynamic Bayesian networks are proposed to perform facial action recognition using the junction tree algorithm over a limited number of feature points. In order to evaluate the proposed method, we have used the Korean face database for model training. For testing, we used the CUbiC FacePix facial expressions and emotion database, the Japanese female facial expression database, and our own database. Our experimental results clearly demonstrate the feasibility of the proposed approach.
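The feature-extraction stage, a local Gabor filter bank followed by principal component analysis, can be sketched as below. The kernel size, the four orientations, and the random patch data are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def gabor_kernel(size, theta, lam=4.0, sigma=2.0):
    """Real part of a Gabor filter at orientation `theta`:
    a Gaussian envelope times an oriented cosine carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * xr / lam)

# A small bank of 4 orientations applied to fake 9x9 image patches.
bank = [gabor_kernel(9, th) for th in np.linspace(0, np.pi, 4, endpoint=False)]
rng = np.random.default_rng(2)
patches = rng.standard_normal((50, 9, 9))
feats = np.array([[float(np.sum(p * k)) for k in bank] for p in patches])

# PCA via SVD of the centered feature matrix; keep 2 components.
centered = feats - feats.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
reduced = centered @ vt[:2].T
```

Each patch is reduced to a short, decorrelated feature vector on which a point detector can then be trained.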
Edelhoff, Hendrik; Signer, Johannes; Balkenhol, Niko
Increased availability of high-resolution movement data has led to the development of numerous methods for studying changes in animal movement behavior. Path segmentation methods provide a basis for detecting movement changes and the behavioral mechanisms driving them. However, available path segmentation methods differ vastly with respect to underlying statistical assumptions and the output produced. Consequently, it is currently difficult for researchers new to path segmentation to gain an overview of the different methods and choose one that is appropriate for their data and research questions. Here, we provide an overview of different methods for segmenting movement paths according to potential changes in underlying behavior. To structure our overview, we outline three broad types of research questions that are commonly addressed through path segmentation: 1) the quantitative description of movement patterns, 2) the detection of significant change-points, and 3) the identification of underlying processes or 'hidden states'. We discuss advantages and limitations of different approaches for addressing these research questions using path-level movement data, and present general guidelines for choosing methods based on data characteristics and questions. Our overview illustrates the large diversity of available path segmentation approaches, highlights the need for studies that compare the utility of different methods, and identifies opportunities for future developments in path-level data analysis.
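As a concrete instance of the second research question type, a minimal single change-point detector on a step-length series might look like this. It is illustrative only; real path segmentation methods handle multiple change-points and richer movement metrics than step length:

```python
import numpy as np

def best_changepoint(x):
    """Index that best splits `x` into two segments with different means,
    by minimizing the total within-segment sum of squared errors."""
    n = len(x)
    best_i, best_cost = None, np.inf
    for i in range(2, n - 1):
        cost = x[:i].var() * i + x[i:].var() * (n - i)
        if cost < best_cost:
            best_i, best_cost = i, cost
    return best_i

# Step lengths switching from slow, localized movement to fast transit.
rng = np.random.default_rng(3)
steps = np.concatenate([rng.normal(1.0, 0.1, 100), rng.normal(5.0, 0.1, 100)])
cp = best_changepoint(steps)
```

Hidden-state methods (the third question type) generalize this by modeling the segments probabilistically, for example with hidden Markov models.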
Solberg, Lars Erik; Fosse, Erik; Hol, Per Kristian
In order to provide information for the use of radar in diagnostics, a qualitative map of movements in the thorax has been obtained. This map was based on magnetic resonance image sequences of a human thorax during suspended respiration. The movements were measured using two distinct techniques. Segmentation provided measures of aorta dilatation and displacements, and image edge detection indicated other movements. The largest heart movements were found in the anterior and left regions of the heart, with in-plane displacements on the order of 1 cm, which caused lung vessel displacements on the order of 2-3 mm, especially on the left side, due to the heart's ventricular motion. Mechanical coupling between the heart and aorta caused aorta displacements and shape distortions. Despite this coupling, aorta dilatations most likely reflected blood pressure variations.
Full Text Available Mu/beta rhythms are well-studied brain activities that originate from sensorimotor cortices. These rhythms reveal spectral changes in alpha and beta bands induced by movements of different body parts, e.g., hands and limbs, in electroencephalography (EEG) signals. However, less can be revealed in them about movements of different fine body parts that activate adjacent brain regions, such as individual fingers from one hand. Several studies have reported spatial and temporal couplings of rhythmic activities at different frequency bands, suggesting the existence of well-defined spectral structures across multiple frequency bands. In the present study, spectral principal component analysis (PCA) was applied to EEG data, obtained from a finger movement task, to identify cross-frequency spectral structures. Features from the identified spectral structures were examined in their spatial patterns, cross-condition pattern changes, capability to detect finger movements from resting, and decoding performance for individual finger movements, in comparison with classic mu/beta rhythms. These new features reveal spatial and spectral patterns that are partly similar to, but largely distinct from, classic mu/beta rhythms. Decoding results further indicate that these new features (91%) can detect finger movements much better than classic mu/beta rhythms (75.6%). More importantly, these new features reveal discriminative information about movements of different fingers (fine body-part movements), which is not available in classic mu/beta rhythms. The capability to decode fingers (and hand gestures in the future) from EEG will contribute significantly to the development of noninvasive brain-computer interfaces (BCI) and neuroprostheses with intuitive and flexible controls.
Scheider, Linda; Liebal, Katja; Oña, Leonardo; Burrows, Anne; Waller, Bridget
Little is known about facial communication of lesser apes (family Hylobatidae) and how their facial expressions (and use of) relate to social organization. We investigated facial expressions (defined as combinations of facial movements) in social interactions of mated pairs in five different hylobat
Chan, Christy KY; Li, Christien KH; To, Olivia TL; Lai, William HS; Tse, Gary; Poh, Yukkee C; Poh, Ming-Zher
Background Modern smartphones allow measurement of heart rate (HR) by detecting pulsatile photoplethysmographic (PPG) signals with built-in cameras from the fingertips or the face, without physical contact, by extracting subtle beat-to-beat variations of skin color. Objective The objective of our study was to evaluate the accuracy of HR measurements at rest and after exercise using a smartphone-based PPG detection app. Methods A total of 40 healthy participants (20 men; mean age 24.7, SD 5.2 years; von Luschan skin color range 14-27) underwent treadmill exercise using the Bruce protocol. We recorded simultaneous PPG signals for each participant by having them (1) facing the front camera and (2) placing their index fingertip over an iPhone’s back camera. We analyzed the PPG signals from the Cardiio-Heart Rate Monitor + 7 Minute Workout (Cardiio) smartphone app for HR measurements compared with a continuous 12-lead electrocardiogram (ECG) as the reference. Recordings of 20 seconds’ duration each were acquired at rest, and immediately after moderate- (50%-70% maximum HR) and vigorous- (70%-85% maximum HR) intensity exercise, and repeated successively until return to resting HR. We used Bland-Altman plots to examine agreement between ECG and PPG-estimated HR. The accuracy criterion was root mean square error (RMSE) ≤5 beats/min or ≤10%, whichever was greater, according to the American National Standards Institute/Association for the Advancement of Medical Instrumentation EC-13 standard. Results We analyzed a total of 631 fingertip and 626 facial PPG measurements. Fingertip PPG-estimated HRs were strongly correlated with resting ECG HR (r=.997, RMSE=1.03 beats/min or 1.40%), postmoderate-intensity exercise (r=.994, RMSE=2.15 beats/min or 2.53%), and postvigorous-intensity exercise HR (r=.995, RMSE=2.01 beats/min or 1.93%). The correlation of facial PPG-estimated HR was stronger with resting ECG HR (r=.997, RMSE=1.02 beats/min or 1.44%) than with postmoderate
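The accuracy criterion used in the study (the ANSI/AAMI EC-13 standard: RMSE no greater than 5 beats/min or 10% of the reference, whichever is greater) is straightforward to express in code; the heart-rate values below are made up for illustration:

```python
import numpy as np

def passes_ec13(hr_ref, hr_est):
    """Check the ANSI/AAMI EC-13 style criterion used in the study:
    RMSE <= 5 beats/min or <= 10% of the mean reference HR,
    whichever limit is greater."""
    hr_ref = np.asarray(hr_ref, dtype=float)
    hr_est = np.asarray(hr_est, dtype=float)
    rmse = float(np.sqrt(np.mean((hr_est - hr_ref) ** 2)))
    limit = max(5.0, 0.10 * float(hr_ref.mean()))
    return rmse <= limit, rmse

# Hypothetical ECG-reference vs. PPG-estimated HR readings (beats/min).
ok, rmse = passes_ec13([72, 75, 80, 78], [73, 74, 81, 77])
```

The reported fingertip RMSE of 1.03 beats/min at rest sits comfortably inside this limit.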
An efficient algorithm for facial feature extraction is proposed. The facial features we segment are the two eyes, nose and mouth. The algorithm is based on an improved Gabor wavelets edge detector, a morphological approach to detect the face region and facial feature regions, and an improved T-shape face mask to locate the exact location of the facial features. The experimental results show that the proposed method is robust against variations in facial expression and illumination, and remains effective when the person is wearing glasses, and so on.
Mehu, Marc; Scherer, Klaus R
We investigated the role of facial behavior in emotional communication, using both categorical and dimensional approaches. We used a corpus of enacted emotional expressions (GEMEP) in which professional actors are instructed, with the help of scenarios, to communicate a variety of emotional experiences. The results of Study 1 replicated earlier findings showing that only a minority of facial action units are associated with specific emotional categories. Likewise, facial behavior did not show a specific association with particular emotional dimensions. Study 2 showed that facial behavior plays a significant role both in the detection of emotions and in the judgment of their dimensional aspects, such as valence, arousal, dominance, and unpredictability. In addition, a mediation model revealed that the association between facial behavior and recognition of the signaler's emotional intentions is mediated by perceived emotional dimensions. We conclude that, from a production perspective, facial action units convey neither specific emotions nor specific emotional dimensions, but are associated with several emotions and several dimensions. From the perceiver's perspective, facial behavior facilitated both dimensional and categorical judgments, and the former mediated the effect of facial behavior on recognition accuracy. The classification of emotional expressions into discrete categories may, therefore, rely on the perception of more general dimensions such as valence and arousal and, presumably, the underlying appraisals that are inferred from facial movements.
Lo, L Y; Cheng, M Y
Detection of angry and happy faces is generally found to be easier and faster than that of faces expressing emotions other than anger or happiness. This can be explained by the threatening account and the feature account. Few empirical studies have explored the interaction between these two accounts which are seemingly, but not necessarily, mutually exclusive. The present studies hypothesised that prominent facial features are important in facilitating the detection process of both angry and happy expressions; yet the detection of happy faces was more facilitated by the prominent features than angry faces. Results confirmed the hypotheses and indicated that participants reacted faster to the emotional expressions with prominent features (in Study 1) and the detection of happy faces was more facilitated by the prominent feature than angry faces (in Study 2). The findings are compatible with evolutionary speculation which suggests that the angry expression is an alarming signal of potential threats to survival. Compared to the angry faces, the happy faces need more salient physical features to obtain a similar level of processing efficiency.
Full Text Available Further development of an EEG-based communication device for patients with disorders of consciousness (DoC) could benefit from addressing the following gaps in knowledge: first, an evaluation of different types of motor imagery; second, an evaluation of passive feet movement as a means of initial classifier setup; and third, rapid delivery of biased feedback. To that end we investigated whether complex and/or familiar mental imagery, passive, and attempted feet movement can be reliably detected in patients with DoC using EEG recordings, aiming to provide them with a means of communication. Six patients in a minimally conscious state (MCS) took part in this study. The patients were verbally instructed to perform different mental imagery tasks (sport, navigation), as well as attempted feet movements, to induce distinctive event-related (de)synchronization (ERD/S) patterns in the EEG. Offline classification accuracies above chance level were reached in all three tasks (i.e., attempted feet, sport, and navigation), with motor tasks yielding significant (p<0.05) results more often than navigation (sport: 10 out of 18 sessions; attempted feet: 7 out of 14 sessions; navigation: 4 out of 12 sessions). The passive feet movements, evaluated in one patient, yielded mixed results: whereas time-frequency analysis revealed task-related EEG changes over neurophysiologically plausible cortical areas, the classification results were not significant enough (p<0.05) to set up an initial classifier for the detection of attempted movements. In conclusion, the results presented in this study are consistent with the current state of the art in similar studies, to which we contributed by comparing different types of mental tasks, notably complex motor imagery and attempted feet movements, within patients. Furthermore, we explored new avenues, such as an evaluation of passive feet movement as a means of initial classifier setup, and rapid delivery of biased feedback.
Donato, Gianluca; Bartlett, Marian Stewart; Hager, Joseph C.; Ekman, Paul; Sejnowski, Terrence J.
The Facial Action Coding System (FACS) is an objective method for quantifying facial movement in terms of component actions. This system is widely used in behavioral investigations of emotion, cognitive processes, and social interaction. The coding is presently performed by highly trained human experts. This paper explores and compares techniques for automatically recognizing facial actions in sequences of images. These techniques include analysis of facial motion through estimation of optical flow; holistic spatial analysis, such as principal component analysis, independent component analysis, local feature analysis, and linear discriminant analysis; and methods based on the outputs of local filters, such as Gabor wavelet representations and local principal components. Performance of these systems is compared to naive and expert human subjects. Best performances were obtained using the Gabor wavelet representation and the independent component representation, both of which achieved 96 percent accuracy for classifying 12 facial actions of the upper and lower face. The results provide converging evidence for the importance of using local filters, high spatial frequencies, and statistical independence for classifying facial actions. PMID:21188284
Full Text Available Although coordinated patterns of body movement can be used to communicate action intention, they can also be used to deceive. Often known as deceptive movements, these unpredictable patterns of body movement can give a competitive advantage to an attacker when trying to outwit a defender. In this particular study, we immersed novice and expert rugby players in an interactive virtual rugby environment to understand how the dynamics of deceptive body movement influence a defending player's decisions about how and when to act. When asked to judge final running direction, expert players, who were found to tune into prospective tau-based information specified in the dynamics of 'honest' movement signals (Centre of Mass), performed significantly better than novices, who tuned into the dynamics of 'deceptive' movement signals (upper trunk yaw and out-foot placement) (p<.001). These findings were further corroborated in a second experiment where players were able to move as if to intercept or 'tackle' the virtual attacker. An analysis of action responses showed that experts waited significantly longer before initiating movement (p<.001). By waiting longer and picking up more information that would inform about future running direction, these experts made significantly fewer errors (p<.05). In this paper we not only present a mathematical model that describes how deception in body-based movement is detected, but we also show how perceptual expertise is manifested in action expertise. We conclude that being able to tune into the 'honest' information specifying true running action intention gives a strong competitive advantage.
This paper presents a 24 GHz FMCW radar system for detection of movement and respiration using changes in the statistical properties of the received radar signal, both amplitude and phase. We present the hardware and software segments of the radar system as well as algorithms with measurement results for two distinct use-cases: (1) the FMCW radar as a respiration monitor, and (2) dual use of the same radar system for smart lighting and intrusion detection. By using changes in the statistical properties of the signal for detection, several system parameters can be relaxed, including, for example, pulse repetition rate, power consumption, computational load, processor speed, and memory space. We also demonstrate that the capability to switch between received signal strength and phase difference enables dual-use cases, one requiring extreme sensitivity to movement and the other robustness against small sources of interference. © 2016 IEEE.
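The detection principle above — flag movement whenever the statistical properties of the received signal depart from a quiet reference — can be sketched in a few lines. This is an illustrative Python sketch, not the authors' 24 GHz implementation; the window length and threshold factor are assumed values:

```python
import math
import random

def detect_movement(signal, window=50, k=4.0):
    """Flag movement wherever the rolling variance exceeds k times the
    variance of an initial 'no movement' reference window."""
    ref = signal[:window]
    ref_mean = sum(ref) / window
    ref_var = sum((x - ref_mean) ** 2 for x in ref) / window
    flags = []
    for i in range(window, len(signal)):
        seg = signal[i - window:i]
        m = sum(seg) / window
        var = sum((x - m) ** 2 for x in seg) / window
        flags.append(var > k * ref_var)
    return flags

# synthetic received signal: low-variance noise, then large phase swings
random.seed(0)
quiet = [random.gauss(0.0, 0.05) for _ in range(200)]
moving = [math.sin(0.3 * n) + random.gauss(0.0, 0.05) for n in range(200)]
flags = detect_movement(quiet + moving)
```

The same statistic can be computed on amplitude or on phase differences, which is what enables the dual-use trade-off between sensitivity and robustness described in the abstract.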
Lin, Chuang; Wang, Bing-Hui; Jiang, Ning; Xu, Ren; Mrachacz-Kersting, Natalie; Farina, Dario
The detection of voluntary motor intention from EEG has been applied to closed-loop brain-computer interfacing (BCI). The movement-related cortical potential (MRCP) is a low frequency component of the EEG signal, which represents movement intention, preparation, and execution. In this study, we aim at detecting MRCPs from single-trial EEG traces. For this purpose, we propose a detector based on a discriminant manifold learning method, called locality sensitive discriminant analysis (LSDA), and we test it in both online and offline experiments with executed and imagined movements. The online and offline experimental results demonstrated that the proposed LSDA approach for MRCP detection outperformed the Locality Preserving Projection (LPP) approach, which was previously shown to be the most accurate algorithm so far tested for MRCP detection. For example, in the online tests, the performance of LSDA was superior to that of LPP in terms of a significant reduction in false positives (FP) (passive FP: 1.6 ± 0.9/min versus 2.9 ± 1.0/min, p = 0.002; active FP: 2.2 ± 0.8/min versus 2.7 ± 0.6/min, p = 0.03), for a similar rate of true positives. In conclusion, the proposed LSDA-based MRCP detection method is superior to previous approaches and is promising for developing patient-driven BCI systems for motor function rehabilitation as well as for neuroscience research.
Facial palsy is a daily challenge for clinicians. Determining whether facial nerve palsy is peripheral or central is a key step in the diagnosis. Central nervous system lesions can cause facial palsy that may be easily differentiated from peripheral palsy. The next question is whether the peripheral facial paralysis is idiopathic or symptomatic. A good knowledge of the anatomy of the facial nerve is helpful. A structured approach is given to identify additional features that distinguish symptomatic facial palsy from the idiopathic form. The most common peripheral facial palsy is the idiopathic form, or Bell's palsy, which remains a diagnosis of exclusion. The most common cause of symptomatic peripheral facial palsy is Ramsay-Hunt syndrome. Early identification of symptomatic facial palsy is important because of its often worse outcome and different management. The prognosis of Bell's palsy is on the whole favorable and is improved with a prompt tapering course of prednisone. In Ramsay-Hunt syndrome, antiviral therapy is added to prednisone. We also discuss current treatment recommendations and review the short- and long-term complications of peripheral facial palsy.
Bergman, R T
My objective is to present a cephalometric-based facial analysis to correlate with an article published previously in the American Journal of Orthodontics and Dentofacial Orthopedics. Eighteen facial or soft tissue traits are discussed in this article. All are significant for a successful orthodontic outcome, and none depends on skeletal landmarks for measurement. Orthodontic analysis most commonly relies on skeletal and dental measurement, placing far less emphasis on facial feature measurement, particularly the relationship of facial features to each other. Yet a thorough examination of the face is critical for understanding the changes in facial appearance that result from orthodontic treatment. A cephalometric approach to facial examination can also benefit the diagnosis and treatment plan. Individual facial traits and their balance with one another should be identified before treatment. Relying solely on skeletal analysis, assuming that the face will balance if the skeletal/dental cephalometric values are normalized, may not yield the desired outcome. Good occlusion does not necessarily mean good facial balance. Orthodontic norms for facial traits permit their measurement. Further, with a knowledge of standard facial traits and the patient's soft tissue features, an individualized norm can be established for each patient to optimize facial attractiveness. Four questions should be asked regarding each facial trait before treatment: (1) What is the quality and quantity of the trait? (2) How will future growth affect the trait? (3) How will orthodontic tooth movement affect the existing trait (positively or negatively)? (4) How will surgical bone movement to correct the bite affect the trait (positively or negatively)?
Oda, Kazuo; Hattori, Satoko; Takayama, Toko
This paper proposes a method for detecting movement between point clouds created by SfM software, without setting any onsite georeferenced points. SfM software, such as Smart3DCapture, PhotoScan, and Pix4D, is convenient for non-professional operators of photogrammetry, because these systems simply require a sequence of photos and output point clouds with a colour index corresponding to the colour of the original image pixel onto which each point is projected. SfM software can execute aerial triangulation and create dense point clouds fully automatically. This is useful when monitoring the motion of unstable slopes, or of loose rocks on slopes along roads or railroads. Most existing methods, however, use mesh-based DSMs to compare point clouds before and after movement, and they cannot be applied when part of a slope forms an overhang. In some cases the movement is also smaller than the precision of the ground control points, so registering the two point clouds with GCPs is not appropriate. The change detection method in this paper adopts the CCICP (Classification and Combined ICP) algorithm for registering point clouds before and after movement. The CCICP algorithm is a type of ICP (Iterative Closest Point) that minimizes point-to-plane and point-to-point distances simultaneously, and also rejects incorrect correspondences based on point classification by PCA (Principal Component Analysis). A precision test shows that the CCICP method can register two point clouds to within roughly one pixel of the original images. Ground control points set on site are useful for the initial alignment of the two point clouds. If there are no GCPs at the slope site, initial alignment is achieved by measuring feature points as ground control points in the point clouds before movement, and creating the point clouds after movement with these ground control points. When the motion is a rigid transformation, as when a loose rock is moving on a slope, motion including rotation can be analysed by executing CCICP for a loose rock and
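As a minimal illustration of the registration idea, the translation-only least-squares alignment step — one ingredient of an ICP-style pipeline — reduces to a difference of centroids once correspondences are fixed. This is a hypothetical Python sketch; the CCICP method additionally estimates rotation, mixes point-to-plane residuals, and rejects correspondences via PCA-based classification, none of which is reproduced here:

```python
def estimate_translation(before, after):
    """Least-squares rigid shift between corresponding 3D points:
    with correspondences fixed, it is the difference of centroids."""
    n = len(before)
    centroid_b = [sum(p[i] for p in before) / n for i in range(3)]
    centroid_a = [sum(p[i] for p in after) / n for i in range(3)]
    return [centroid_a[i] - centroid_b[i] for i in range(3)]

# a toy 'loose rock': four points moved 0.3 m downslope in x, -0.1 m in z
rock_before = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0),
               (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
rock_after = [(x + 0.3, y, z - 0.1) for x, y, z in rock_before]
shift = estimate_translation(rock_before, rock_after)
```

In a full ICP loop, correspondences are re-estimated by nearest-neighbour search after each such alignment step until the residual stops decreasing.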
Hu, Xie; Wang, Teng; Pierson, Thomas C.; Lu, Zhong; Kim, Jin-Woo; Cecere, Thomas H.
Detection of slow or limited landslide movement within broad areas of forested terrain has long been problematic, particularly for the Cascade landslide complex (Washington) located along the Columbia River Gorge. Although parts of the landslide complex have been found reactivated in recent years, the timing and magnitude of motion have not been systematically monitored or interpreted. Here we apply novel time-series strategies to study the spatial distribution and temporal behavior of the landslide movement between 2007 and 2011 using InSAR images from two overlapping L-band ALOS PALSAR-1 satellite tracks. Our results show that the reactivated part has moved approximately 700 mm downslope during the 4-year observation period, while other parts of the landslide complex have generally remained stable. However, we also detect about 300 mm of seasonal downslope creep in a terrain block upslope of the Cascade landslide complex—terrain previously thought to be stable. The temporal oscillation of the seasonal movement can be correlated with precipitation, implying that seasonal movement here is hydrology-driven. The seasonal movement also has a frequency similar to GPS-derived regional ground oscillations due to mass loading by stored rainfall and subsequent rebound but with much smaller magnitude, suggesting different hydrological loading effects. From the time-series amplitude information on terrain upslope of the headscarp, we also re-evaluate the incipient motion related to the 2008 Greenleaf Basin rock avalanche, not previously recognized by traditional SAR/InSAR methods. The approach used in this study can be used to identify active landslides in forested terrain, to track the seasonal movement of landslides, and to identify previously unknown landslide hazards.
Horki, Petar; Bauernfeind, Günther; Klobassa, Daniela S; Pokorny, Christoph; Pichler, Gerald; Schippinger, Walter; Müller-Putz, Gernot R
Further development of an EEG-based communication device for patients with disorders of consciousness (DoC) could benefit from addressing the following gaps in knowledge: first, an evaluation of different types of motor imagery; second, an evaluation of passive feet movement as a means of initial classifier setup; and third, rapid delivery of biased feedback. To that end we investigated whether complex and/or familiar mental imagery, passive feet movement, and attempted feet movement can be reliably detected in patients with DoC using EEG recordings, aiming to provide them with a means of communication. Six patients in a minimally conscious state (MCS) took part in this study. The patients were verbally instructed to perform different mental imagery tasks (sport, navigation), as well as attempted feet movements, to induce distinctive event-related (de)synchronization (ERD/S) patterns in the EEG. Offline classification accuracies above chance level were reached in all three tasks (i.e., attempted feet, sport, and navigation), with motor tasks yielding significant (p art in similar studies, to which we contributed by comparing different types of mental tasks, notably complex motor imagery and attempted feet movements, within patients. Furthermore, we explored new avenues, such as an evaluation of passive feet movement as a means of initial classifier setup, and rapid delivery of biased feedback.
Eye-gaze detection and tracking have been an active research field in recent years, as they add convenience to a variety of applications. They are considered a significant non-traditional method of human-computer interaction. Head movement detection has also received researchers' attention and interest, as it has been found to be a simple and effective interaction method. Both technologies are considered among the easiest alternative interface methods. They serve a wide range of severely disabled people who are left with minimal motor abilities. For both eye tracking and head movement detection, several different approaches have been proposed and used to implement different algorithms for these technologies. Despite the amount of research done on both technologies, researchers are still trying to find robust methods to use effectively in various applications. This paper presents a state-of-the-art survey of eye tracking and head movement detection methods proposed in the literature. Examples of different fields of application for both technologies, such as human-computer interaction, driving assistance systems, and assistive technologies, are also investigated. PMID:27170851
Chamanzar, Alireza; Malekmohammadi, Alireza; Bahrani, Masih; Shabany, Mahdi
The outlook of brain-computer interfacing (BCI) is very bright. Real-time, accurate detection of a motor movement task is critical in BCI systems. The poor signal-to-noise ratio (SNR) of EEG signals and the ambiguity of noise generator sources in the brain render this task quite challenging. In this paper, we demonstrate a novel algorithm for precise detection of the onset of a motor movement through identification of event-related desynchronization (ERD) patterns. Using an adaptive matched filter technique implemented with an optimized continuous wavelet transform and an appropriate choice of basis, we can detect single-trial ERDs. Moreover, we use a maximum-likelihood (ML) electrooculography (EOG) artifact removal method to remove eye-related artifacts, which significantly improves the detection performance. We have applied this technique to our locally recorded Emotiv® data set of 6 healthy subjects, where an average detection selectivity of 85 ± 6% and sensitivity of 88 ± 7.7% is achieved, with a temporal precision in the range of -1250 to 367 ms in onset detection of single trials.
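The matched-filter idea — correlating the incoming trace against a template of the expected ERD power dip — can be illustrated with plain normalized cross-correlation. This is a simplified Python sketch: the paper's detector builds its template via an optimized continuous wavelet transform and adds ML-based EOG removal, neither of which is reproduced here, and the template shape below is invented for illustration:

```python
def matched_filter(signal, template):
    """Normalized cross-correlation of a trace with a movement template;
    the peak marks the candidate onset."""
    n = len(template)
    t_mean = sum(v for v in template) / n
    t = [v - t_mean for v in template]
    t_energy = sum(v * v for v in t) ** 0.5
    scores = []
    for i in range(len(signal) - n + 1):
        seg = signal[i:i + n]
        s_mean = sum(seg) / n
        s = [v - s_mean for v in seg]
        s_energy = sum(v * v for v in s) ** 0.5 or 1e-12  # guard flat segments
        scores.append(sum(a * b for a, b in zip(s, t)) / (s_energy * t_energy))
    return scores

template = [0.0, -1.0, -2.0, -1.0, 0.0]      # toy band-power 'dip' shape
signal = [0.0] * 30 + template + [0.0] * 30  # dip embedded at sample 30
scores = matched_filter(signal, template)
onset = max(range(len(scores)), key=scores.__getitem__)
```

A real detector would threshold the score trace rather than take its global maximum, trading the reported selectivity against sensitivity.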
Phua, S H; Dodds, K G; Morris, C A; Henry, H M; Beattie, A E; Garmonsway, H G; Towers, N R; Crawford, A M
Facial eczema (FE) is a secondary photosensitization disease arising from liver cirrhosis caused by the mycotoxin sporidesmin. The disease affects sheep, cattle, deer and goats, and costs the New Zealand sheep industry alone an estimated NZ$63M annually. A long-term sustainable solution to this century-old FE problem is to breed for disease-resistant animals by marker-assisted selection. As a step towards finding a diagnostic DNA test for FE sensitivity, we have conducted a genome-scan experiment to screen for quantitative trait loci (QTL) affecting this trait in Romney sheep. Four F1 sires, obtained from reciprocal matings of FE resistant and susceptible selection-line animals, were used to generate four outcross families. The resulting half-sib progeny were artificially challenged with sporidesmin to phenotype their FE traits measured in terms of their serum levels of liver-specific enzymes, namely gamma-glutamyl transferase and glutamate dehydrogenase. In a primary screen using selective genotyping on extreme progeny of each family, a total of 244 DNA markers uniformly distributed over all 26 ovine autosomes (with an autosomal genome coverage of 79-91%) were tested for linkage to the FE traits. Data were analysed using Haley-Knott regression. The primary screen detected one significant and one suggestive QTL on chromosomes 3 and 8 respectively. Both the significant and suggestive QTL were followed up in a secondary screen where all progeny were genotyped and analysed; the QTL on chromosome 3 was significant in this analysis.
Catania, Kenneth C.; Hare, James F.; Campbell, Kevin L.
American water shrews (Sorex palustris) are aggressive predators that feed on a variety of terrestrial and aquatic prey. They often forage at night, diving into streams and ponds in search of food. We investigated how shrews locate submerged prey using high-speed videography, infrared lighting, and stimuli designed to mimic prey. Shrews attacked brief water movements, indicating motion is an important cue used to detect active or escaping prey. They also bit, retrieved, and attempted to eat model fish made of silicone in preference to other silicone objects showing that tactile cues are important in the absence of movement. In addition, water shrews preferentially sniffed model prey fish and crickets underwater by exhaling and reinhaling air through the nostrils, suggesting olfaction plays an important role in aquatic foraging. The possibility of echolocation, sonar, or electroreception was investigated by testing for ultrasonic and audible calls above and below water and by presenting electric fields to foraging shrews. We found no evidence for these abilities. We conclude that water shrews detect motion, shape, and smell to find prey underwater. The short latency of attacks to water movements suggests shrews may use a flush-pursuit strategy to capture some prey. PMID:18184804
Soda, Paolo; Mazzoleni, Stefano; Cavallo, Giuseppe; Guglielmelli, Eugenio; Iannello, Giulio
Recent research has successfully introduced the application of robotics and mechatronics to functional assessment and motor therapy. Measurements of movement initiation in isometric conditions are widely used in clinical rehabilitation, and their importance in functional assessment has been demonstrated for specific parts of the human body. Determining the voluntary movement initiation time, also referred to as onset time, is a challenging issue, since the time window characterizing the movement onset is of particular relevance for understanding recovery mechanisms after neurological damage. Establishing it manually is a troublesome task that may also introduce oversight errors and loss of information. The most commonly used methods for automatic onset time detection compare the raw signal, or extracted measures such as its derivatives (i.e., velocity and acceleration), with a chosen threshold. However, they suffer from high variability and systematic errors because of the weakness of the signal, the abnormality of response profiles, and the variability of movement initiation times among patients. In this paper, we introduce a technique to optimise onset detection for each input signal. It is based on a classification system that establishes which deterministic method provides the most accurate onset time on the basis of information derived directly from the raw signal. The approach was tested on annotated force and torque datasets. Each dataset consists of 768 signals acquired from eight anatomical districts in 96 patients who carried out six tasks related to common daily activities. The results show that the proposed technique improves not only on the performance achieved by each of the deterministic methods, but also on that attained by a group of clinical experts. The paper describes a classification system detecting the voluntary movement initiation time and adaptable to different signals. By
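A minimal example of the kind of deterministic threshold rule the classifier would select among: onset is the first sample exceeding a fraction of the peak. This is an illustrative Python sketch; the 5% fraction is an assumed value, and real force traces add the noise, drift, and abnormal profiles that motivate the paper's adaptive selection:

```python
def onset_by_threshold(signal, frac=0.05):
    """Classic deterministic rule: onset is the first sample whose value
    exceeds a fixed fraction of the signal's peak."""
    peak = max(signal)
    threshold = frac * peak
    for i, value in enumerate(signal):
        if value > threshold:
            return i
    return None  # no sample ever crossed the threshold

# toy isometric force trace: flat baseline, linear rise, then a plateau
force = [0.0] * 100 + [0.1 * k for k in range(1, 51)] + [5.0] * 50
onset = onset_by_threshold(force)
```

Competing rules threshold the velocity or acceleration instead; the paper's contribution is choosing among such rules per signal rather than fixing one globally.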
Erickson, Richard A.; Rees, Christopher B.; Coulter, Alison A.; Merkes, Christopher; McCalla, Sunnie; Touzinsky, Katherine F; Walleser, Liza R.; Goforth, Reuben R.; Amberg, Jon
Bigheaded carps are invasive fishes threatening to invade the Great Lakes basin and establish spawning populations, and they have been monitored using environmental DNA (eDNA). Not only does eDNA hold potential for detecting the presence of species, but it may also allow quantitative comparisons, such as relative abundance of species across time or space. We examined the relationships among bigheaded carp movement, hydrography, spawning and eDNA on the Wabash River, IN, USA. We found positive relationships between eDNA and movement and between eDNA and hydrography. We did not find a relationship between eDNA and spawning activity in the form of drifting eggs. Our first finding demonstrates how eDNA may be used to monitor species abundance, whereas our second finding illustrates the need for additional research into eDNA methodologies. Current applications of eDNA are widespread, but the relatively new technology requires further refinement.
Dobson, Seth D
Body size may be an important factor influencing the evolution of facial expression in anthropoid primates due to allometric constraints on the perception of facial movements. Given this hypothesis, I tested the prediction that observed facial mobility is positively correlated with body size in a comparative sample of nonhuman anthropoids. Facial mobility, or the variety of facial movements a species can produce, was estimated using a novel application of the Facial Action Coding System (FACS). I used FACS to estimate facial mobility in 12 nonhuman anthropoid species, based on video recordings of facial activity in zoo animals. Body mass data were taken from the literature. I used phylogenetic generalized least squares (PGLS) to perform a multiple regression analysis with facial mobility as the dependent variable and two independent variables: log body mass and dummy-coded infraorder. Together, body mass and infraorder explain 92% of the variance in facial mobility. However, the partial effect of body mass is much stronger than for infraorder. The results of my study suggest that allometry is an important constraint on the evolution of facial mobility, which may limit the complexity of facial expression in smaller species. More work is needed to clarify the perceptual bases of this allometric pattern.
Lugade, Vipul; Fortune, Emma; Morrow, Melissa; Kaufman, Kenton
A robust method for identifying movement in the free-living environment is needed to objectively measure physical activity. The purpose of this study was to validate the identification of postural orientation and movement from acceleration data against visual inspection from video recordings. Using tri-axial accelerometers placed on the waist and thigh, static orientations of standing, sitting, and lying down, as well as dynamic movements of walking, jogging and transitions between postures were identified. Additionally, subjects walked and jogged at self-selected slow, comfortable, and fast speeds. Identification of tasks was performed using a combination of the signal magnitude area, continuous wavelet transforms and accelerometer orientations. Twelve healthy adults were studied in the laboratory, with two investigators identifying tasks during each second of video observation. The intraclass correlation coefficients for inter-rater reliability were greater than 0.95 for all activities except for transitions. Results demonstrated high validity, with sensitivity and positive predictive values of greater than 85% for sitting and lying, with walking and jogging identified at greater than 90%. The greatest disagreement in identification accuracy between the algorithm and video occurred when subjects were asked to fidget while standing or sitting. During variable speed tasks, gait was correctly identified for speeds between 0.1 m/s and 4.8 m/s. This study included a range of walking speeds and natural movements such as fidgeting during static postures, demonstrating that accelerometer data can be used to identify orientation and movement among the general population.
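The static/dynamic split described above hinges on the signal magnitude area (SMA) of an accelerometer window. A toy Python sketch follows; the synthetic signals and the 0.1 g decision threshold are assumptions for illustration, and the study additionally combines SMA with continuous wavelet transforms and orientation cues:

```python
import math

def signal_magnitude_area(ax, ay, az):
    """Signal magnitude area of one window: mean summed absolute
    acceleration (gravity assumed already removed). Low SMA suggests a
    static posture; high SMA suggests walking or jogging."""
    n = len(ax)
    return sum(abs(x) + abs(y) + abs(z) for x, y, z in zip(ax, ay, az)) / n

# synthetic 2 s windows at 100 Hz: quiet sitting vs a 2 Hz gait oscillation
t = [i / 100.0 for i in range(200)]
static = ([0.01] * 200, [0.0] * 200, [0.02] * 200)
walking = ([0.5 * math.sin(2 * math.pi * 2 * s) for s in t],
           [0.0] * 200,
           [0.3 * math.cos(2 * math.pi * 2 * s) for s in t])
sma_static = signal_magnitude_area(*static)
sma_walking = signal_magnitude_area(*walking)
```

Once a window is labelled static, the mean accelerometer orientation distinguishes standing, sitting, and lying; once labelled dynamic, frequency content separates walking from jogging.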
Wang, Jingjing; Henry, Amanda; Welsh, Alec W; Redmond, Stephen J
The Modified Myocardial Performance Index (Mod-MPI) is becoming an important index in fetal cardiac function evaluation. However, the current method for Mod-MPI calculation is time-consuming and demonstrates poor inter-operator repeatability. This paper presents an automated method for detecting the opening and closing events of fetal cardiac valves with the aim of automating the Mod-MPI calculation. Fifty-four Doppler ultrasound images, showing blood inflow and outflow for the left ventricle, are analyzed to automatically detect the timings of a total of 905 opening and closing events for both aortic and mitral valves. Timings are found according to the morphological characteristics of the waveforms as well as the intensity information of the images. The proposed method can detect the four valve movement events with high sensitivity (95.60-98.64%) and precision (96.85-100.00%). Results are verified by comparison with manual annotation of the same images by an expert.
Jiang, Xiaotian; Liu, Ming; Zhao, Yuejin
A visible light imaging system to detect human cardiac rate is proposed in this paper. A color camera and several LEDs, acting as the lighting source, were used to avoid interference from ambient light. From a person's forehead, the cardiac rate could be acquired based on photoplethysmography (PPG) theory. A template matching method was applied after the capture of video. The video signal was decomposed into three channels (RGB), and a region of interest was chosen in which to take the average gray value. The green channel provided an excellent pulse waveform owing to blood's strong absorption of green light. Through the fast Fourier transform, the cardiac rate was accurately obtained. The research goal, however, was not just to measure the cardiac rate accurately: with the template matching method, the effects of body movement are reduced to a large extent, so the pulse wave can be detected even while the subject is moving, and the waveform is largely optimized. Several experiments were conducted on volunteers, and the results were compared with those obtained by a finger-clip pulse oximeter; the two methods agreed closely. This method of detecting the cardiac rate and pulse wave largely reduces the effects of body movement and could be widely used in the future.
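The green-channel processing chain (region-of-interest average gray value → Fourier transform → dominant cardiac frequency) can be sketched as follows. This is an illustrative Python sketch using a brute-force DFT restricted to a plausible cardiac band; the frame rate and band limits are assumed values, and the paper's template-matching motion compensation is omitted:

```python
import math

def heart_rate_from_green(green, fps):
    """Estimate cardiac rate (beats/min) as the dominant discrete Fourier
    frequency of the mean green-channel signal within 0.7-3.0 Hz."""
    n = len(green)
    mean = sum(green) / n
    x = [g - mean for g in green]
    best_f, best_power = 0.0, -1.0
    for k in range(1, n // 2):
        f = k * fps / n
        if 0.7 <= f <= 3.0:  # plausible human cardiac band (assumed limits)
            re = sum(x[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
            im = sum(x[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
            power = re * re + im * im
            if power > best_power:
                best_f, best_power = f, power
    return 60.0 * best_f

# synthetic 10 s forehead clip at 30 fps with a 1.2 Hz (72 bpm) pulse
fps = 30.0
green = [100.0 + 2.0 * math.sin(2 * math.pi * 1.2 * i / fps) for i in range(300)]
bpm = heart_rate_from_green(green, fps)
```

In practice the FFT replaces the brute-force DFT, and band restriction serves the same purpose here as it does there: rejecting slow illumination drift and high-frequency sensor noise.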
Xie, Yu-feng; Wang, Jing; Huo, Fu-quan; Jia, Hong; Tang, Jing-shi
Aim: To investigate the validity and sensitivity of an automatic movement detection system developed by our laboratory for the formalin test in rats. Methods: The effects of systemic morphine and the local anesthetic lidocaine on the nociceptive behaviors induced by formalin injected subcutaneously into the hindpaw were examined using the automated movement detection system and manual measuring methods. Results: Formalin injected subcutaneously into the hindpaw produced typical biphasic nociceptive behaviors (agitation). The mean agitation event rate during a 60-min observation period increased linearly with the formalin concentration (0.0%, 0.5%, 1.5%, 2.5%, and 5%, 50 μL). Systemic application of morphine at different doses (1, 2, and 5 mg/kg) 10 min prior to formalin injection depressed the agitation responses induced by formalin injection in a dose-dependent manner, and the antinociceptive effect induced by the largest dose (5 mg/kg) of morphine was significantly antagonized by systemic application of the opioid receptor antagonist naloxone (1.25 mg/kg). The local anesthetic lidocaine (20 mg/kg) injected under the skin of the ipsilateral ankle 5 min prior to formalin completely blocked the agitation response to formalin injection. These results were comparable to those obtained from manual measures of the incidence of flinching or the duration of licking/biting of the injected paw. Conclusion: These data suggest that this automated movement detection system for the formalin test is a simple, validated measure with good pharmacological sensitivity, suitable for discovering novel analgesics or investigating central pain mechanisms.
Lee, Samantha Sze-Yee; Black, Alex A; Wood, Joanne M
The mechanisms underlying the elevated crash rates of older drivers with glaucoma are poorly understood. A key driving skill is timely detection of hazards; however, the hazard detection ability of drivers with glaucoma has been largely unexplored. This study assessed the eye movement patterns and visual predictors of performance on a laboratory-based hazard detection task in older drivers with glaucoma. Participants included 30 older drivers with glaucoma (71±7 years; average better-eye mean deviation (MD) = -3.1±3.2 dB; average worse-eye MD = -11.9±6.2 dB) and 25 age-matched controls (72±7 years). Visual acuity, contrast sensitivity, visual fields, useful field of view (UFoV; processing speeds), and motion sensitivity were assessed. Participants completed a computerised Hazard Perception Test (HPT) while their eye movements were recorded using a desk-mounted Tobii TX300 eye-tracking system. The HPT comprises a series of real-world traffic videos recorded from the driver's perspective; participants responded to road hazards appearing in the videos, and hazard response times were determined. Participants with glaucoma exhibited an average of 0.42 seconds delay in hazard response time (p = 0.001), smaller saccades (p = 0.010), and delayed first fixation on hazards (p<0.001) compared to controls. Importantly, larger saccades were associated with faster hazard responses in the glaucoma group (p = 0.004), but not in the control group (p = 0.19). Across both groups, significant visual predictors of hazard response times included motion sensitivity, UFoV, and worse-eye MD (p<0.05). Older drivers with glaucoma had delayed hazard response times compared to controls, with associated changes in eye movement patterns. The association between larger saccades and faster hazard response time in the glaucoma group may represent a compensatory behaviour to facilitate improved performance.
Chávez, Roberto O; Clevers, Jan G P W; Verbesselt, Jan; Naulin, Paulette I; Herold, Martin
Heliotropic leaf movement, or leaf 'solar tracking', occurs in a wide variety of plants, including many desert species and some crops. It has an important effect on the canopy spectral reflectance as measured from satellites. For this reason, monitoring systems based on spectral vegetation indices, such as the normalized difference vegetation index (NDVI), should account for heliotropic movements when evaluating the health condition of such species. In the hyper-arid Atacama Desert, Northern Chile, we studied seasonal and diurnal variations of MODIS and Landsat NDVI time series of plantation stands of the endemic species Prosopis tamarugo Phil., subject to different levels of groundwater depletion. As solar irradiation increased during the day and also during the summer, the paraheliotropic leaves of Tamarugo moved to an erectophile position (parallel to the sun's rays), causing the NDVI signal to drop. Thus, Tamarugo stands with no water stress showed a positive NDVI difference between morning and midday (ΔNDVI mo-mi) and between winter and summer (ΔNDVI W-S). In this paper, we show that the ΔNDVI mo-mi of Tamarugo stands can be detected using MODIS Terra and Aqua images, and the ΔNDVI W-S using Landsat or MODIS Terra images. Because pulvinar movement is triggered by changes in cell turgor, the effects of water stress caused by groundwater depletion can be assessed and monitored using ΔNDVI mo-mi and ΔNDVI W-S. For an 11-year time series without rainfall events, Landsat ΔNDVI W-S of Tamarugo stands showed a positive linear relationship with cumulative groundwater depletion. We conclude that both ΔNDVI mo-mi and ΔNDVI W-S have potential for detecting early water stress in paraheliotropic vegetation.
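The stress indicator reduces to simple band arithmetic. The following is a hypothetical Python sketch with made-up reflectance values; the actual analysis works on MODIS/Landsat time series of whole stands, not single hand-picked pixels:

```python
def ndvi(nir, red):
    """Normalized difference vegetation index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

def delta_ndvi_mo_mi(nir_mo, red_mo, nir_mi, red_mi):
    """Morning-minus-midday NDVI: positive for unstressed paraheliotropic
    stands whose leaves turn erectophile (sun-parallel) at midday."""
    return ndvi(nir_mo, red_mo) - ndvi(nir_mi, red_mi)

# hypothetical reflectances: erectophile midday leaves expose more soil,
# lowering NIR and raising red relative to morning
d_unstressed = delta_ndvi_mo_mi(0.45, 0.05, 0.38, 0.09)
# a stressed stand with little diurnal leaf movement: no NDVI difference
d_stressed = delta_ndvi_mo_mi(0.40, 0.08, 0.40, 0.08)
```

The winter-summer difference ΔNDVI W-S is computed the same way from seasonal composites, which is why a single satellite with one overpass time (Landsat) suffices for it but not for ΔNDVI mo-mi.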
Marur, Tania; Tuna, Yakup; Demirci, Selman
Dermatologic problems of the face affect both function and aesthetics, which are based on complex anatomical features. Treating dermatologic problems while preserving the aesthetics and functions of the face requires knowledge of normal anatomy. To perform invasive procedures on the face successfully, it is essential to understand its underlying topographic anatomy. This chapter presents the anatomy of the facial musculature and neurovascular structures in a systematic way, with some clinically important aspects. We describe the attachments of the mimetic and masticatory muscles and emphasize their functions and nerve supply. We highlight clinically relevant facial topographic anatomy by explaining the course and location of the sensory and motor nerves of the face and the facial vasculature with their relations. Additionally, this chapter reviews the recent nomenclature of the branching pattern of the facial artery.
Mihalache Sergiu; Stoica Mihaela-Zoica
.... From birth, faces are important in the individual's social interaction. Face perceptions are very complex as the recognition of facial expressions involves extensive and diverse areas in the brain...
Law Smith, Miriam J; Montagne, Barbara; Perrett, David I; Gill, Michael; Gallagher, Louise
Autism Spectrum Disorders (ASD) are characterised by social and communication impairment, yet evidence for deficits in the ability to recognise facial expressions of basic emotions is conflicting. Many studies reporting no deficits have used stimuli that may be too simple (with associated ceiling effects), for example, 100% 'full-blown' expressions. In order to investigate subtle deficits in facial emotion recognition, 21 adolescent males with high-functioning ASD and 16 age- and IQ-matched typically developing control males completed a new sensitive test of facial emotion recognition which uses dynamic stimuli of varying intensities of expressions of the six basic emotions (Emotion Recognition Test; Montagne et al., 2007). Participants with ASD were found to be less accurate at processing the basic emotional expressions of disgust, anger and surprise; disgust recognition was most impaired, at 100% intensity and lower levels, whereas recognition of surprise and anger was intact at 100% but impaired at lower levels of intensity.
Hamm, Jihun; Kohler, Christian G; Gur, Ruben C; Verma, Ragini
Facial expression is widely used to evaluate emotional impairment in neuropsychiatric disorders. Ekman and Friesen's Facial Action Coding System (FACS) encodes movements of individual facial muscles from distinct momentary changes in facial appearance. Unlike facial expression ratings based on categorization of expressions into prototypical emotions (happiness, sadness, anger, fear, disgust, etc.), FACS can encode ambiguous and subtle expressions, and therefore is potentially more suitable for analyzing the small differences in facial affect. However, FACS rating requires extensive training, and is time consuming and subjective thus prone to bias. To overcome these limitations, we developed an automated FACS based on advanced computer science technology. The system automatically tracks faces in a video, extracts geometric and texture features, and produces temporal profiles of each facial muscle movement. These profiles are quantified to compute frequencies of single and combined Action Units (AUs) in videos, and they can facilitate a statistical study of large populations in disorders known to impact facial expression. We derived quantitative measures of flat and inappropriate facial affect automatically from temporal AU profiles. Applicability of the automated FACS was illustrated in a pilot study, by applying it to data of videos from eight schizophrenia patients and controls. We created temporal AU profiles that provided rich information on the dynamics of facial muscle movements for each subject. The quantitative measures of flatness and inappropriateness showed clear differences between patients and the controls, highlighting their potential in automatic and objective quantification of symptom severity. Copyright © 2011 Elsevier B.V. All rights reserved.
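The final quantification step, turning temporal AU profiles into event frequencies, can be sketched as follows; the activation profiles and the 0.5 threshold are illustrative assumptions, not values from the authors' system:

```python
import numpy as np

# Hypothetical per-frame activation profiles (rows = Action Units,
# columns = video frames), standing in for the tracker's output.
profiles = np.array([
    [0.1, 0.7, 0.8, 0.2, 0.1, 0.9, 0.1],   # e.g. AU12 (lip corner puller)
    [0.0, 0.1, 0.1, 0.6, 0.7, 0.1, 0.0],   # e.g. AU4  (brow lowerer)
])
threshold = 0.5                             # illustrative activation cutoff
active = profiles > threshold

# An "event" is a rising edge: an inactive frame followed by an active one.
events = np.sum((~active[:, :-1]) & active[:, 1:], axis=1)
print(events.tolist())   # → [2, 1]  events per AU in this video
```

Dividing such counts by video duration yields per-AU frequencies that can be compared statistically across patient and control groups.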
Kempfner, Jacob; Sørensen, Gertrud Laura; Nikolic, M.
There is a need for quantitative methods to establish objective criteria. This study proposes a semiautomatic algorithm for the early detection of Parkinson's disease. This is achieved by distinguishing between normal REM sleep and REM sleep without atonia by considering muscle activity as an outlier detection problem. METHODS: Sixteen healthy control subjects, 16 subjects with idiopathic REM sleep behavior disorder, and 16 subjects with periodic limb movement disorder were enrolled. Different combinations of five surface electromyographic channels, including the EOG, were tested. A muscle activity score was automatically computed from manually scored REM sleep. This was accomplished by the use of subject-specific features combined with an outlier detector (one-class support vector machine classifier). RESULTS: It was possible to correctly separate idiopathic REM sleep behavior disorder subjects from healthy control subjects...
Großekathöfer, Ulf; Manyakov, Nikolay V.; Mihajlović, Vojkan; Pandina, Gahan; Skalkin, Andrew; Ness, Seth; Bangerter, Abigail; Goodwin, Matthew S.
A number of recent studies using accelerometer features as input to machine learning classifiers show promising results for automatically detecting stereotypical motor movements (SMM) in individuals with Autism Spectrum Disorder (ASD). However, replicating these results across different types of accelerometers and their position on the body still remains a challenge. We introduce a new set of features in this domain based on recurrence plot and quantification analyses that are orientation invariant and able to capture non-linear dynamics of SMM. Applying these features to an existing published data set containing acceleration data, we achieve up to 9% average increase in accuracy compared to current state-of-the-art published results. Furthermore, we provide evidence that a single torso sensor can automatically detect multiple types of SMM in ASD, and that our approach allows recognition of SMM with high accuracy in individuals when using a person-independent classifier. PMID:28261082
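A minimal sketch of the idea behind orientation-invariant recurrence features: the acceleration magnitude is unchanged by sensor rotation, and the recurrence rate of that signal captures how often the trajectory revisits similar states. The synthetic signal, the plain magnitude-distance matrix and the threshold are illustrative choices, not the authors' exact recurrence quantification analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 200)
# Synthetic tri-axial accelerometer trace with a repetitive component.
acc = np.stack([np.sin(t),                        # x axis
                0.3 * np.sin(2 * t),              # y axis
                0.05 * rng.standard_normal(200)]) # z axis (noise)

# The Euclidean magnitude is invariant to how the sensor is oriented.
magnitude = np.linalg.norm(acc, axis=0)

# Recurrence matrix: pairs of time points closer than eps are "recurrent";
# the recurrence rate is the fraction of such pairs.
eps = 0.1
dist = np.abs(magnitude[:, None] - magnitude[None, :])
recurrence_rate = float(np.mean(dist < eps))
print(round(recurrence_rate, 2))
```

Features of this kind (recurrence rate, determinism, laminarity, etc.) can then be fed to a person-independent classifier alongside conventional accelerometer features.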
Hou Wensheng; Jiang Yingtao; Wu Xiaoying; Zheng Xiaolin; Zheng Jun; Ye Yihong
Synergic movement of the finger joints gives the human hand tremendous dexterity, and the detection of kinematic parameters is critical to describe and evaluate the kinesiology of the fingers. The present work investigates how the angular velocity and angular acceleration of the index finger joints vary with respect to time while a motor task is conducted. A high-speed video camera was employed to visually record the movement of the index finger, and miniaturized (5-mm diameter) reflective markers were affixed to the subject's index finger, on the side close to the thumb and on the dorsum, at the different joint landmarks. Captured images were reviewed frame by frame to obtain the coordinate values of each joint, from which the angular displacements, angular velocities and angular accelerations were obtained with trigonometric functions. The experimental results show that this method can detect the kinematic parameters of the index finger joints during movement, and can be a valid route to study the motor function of the index finger.
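The marker-based computation can be sketched as follows: the angle at a joint follows from three marker coordinates via trigonometry (law of cosines), and finite differences then give angular velocity and acceleration. The marker positions and the 500 fps frame rate below are illustrative assumptions, not the paper's data:

```python
import numpy as np

def joint_angle(p_prox, p_joint, p_dist):
    """Angle (radians) at p_joint formed by the two adjacent segments."""
    u = p_prox - p_joint
    v = p_dist - p_joint
    cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cos_a, -1.0, 1.0))

fps = 500.0   # hypothetical high-speed camera rate
# Three frames of (x, y) coordinates for three markers along the finger
# (proximal landmark, joint, distal landmark) while the finger flexes.
frames = [
    (np.array([0.0, 0.0]), np.array([3.0, 0.0]), np.array([6.0, 0.0])),
    (np.array([0.0, 0.0]), np.array([3.0, 0.0]), np.array([5.8, 1.0])),
    (np.array([0.0, 0.0]), np.array([3.0, 0.0]), np.array([5.2, 2.0])),
]
angles = np.array([joint_angle(a, b, c) for a, b, c in frames])

# Finite differences give angular velocity (rad/s) and acceleration (rad/s^2).
ang_vel = np.gradient(angles) * fps
ang_acc = np.gradient(ang_vel) * fps
print(angles.round(2))
```

The first frame is a fully extended joint (angle pi); the angle then decreases as the fingertip marker moves, which is what the negative angular velocity reflects.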
Wang, Qi; Xia, Ji; Liu, Xu; Zhao, Yong
In this paper, a method of using a one-dimensional position-sensitive detector (PSD) by replacing charge-coupled device (CCD) to measure the movement of the interference fringes is presented first, and its feasibility is demonstrated through an experimental setup based on the principle of centroid detection. Firstly, the centroid position of the interference fringes in a fiber Mach-Zehnder (M-Z) interferometer is solved in theory, showing it has a higher resolution and sensitivity. According to the physical characteristics and principles of PSD, a simulation of the interference fringe's phase difference in fiber M-Z interferometers and PSD output is carried out. Comparing the simulation results with the relationship between phase differences and centroid positions in fiber M-Z interferometers, the conclusion that the output of interference fringes by PSD is still the centroid position is obtained. Based on massive measurements, the best resolution of the system is achieved with 5.15, 625 μm. Finally, the detection system is evaluated through setup error analysis and an ultra-narrow-band filter structure. The filter structure is configured with a one-dimensional photonic crystal containing positive and negative refraction material, which can eliminate background light in the PSD detection experiment. This detection system has a simple structure, good stability, high precision and easily performs remote measurements, which makes it potentially useful in material small deformation tests, refractivity measurements of optical media and optical wave front detection.
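The centroid-detection principle can be sketched numerically: a lateral shift of the fringe pattern maps linearly onto the intensity-weighted centroid, which is effectively what a PSD reports. The Gaussian fringe model and detector coordinates below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 1001)   # detector coordinate (mm), hypothetical

def fringe(shift):
    """One bright fringe, modeled as a Gaussian displaced by `shift` mm."""
    return np.exp(-((x - shift) ** 2) / (2 * 0.05 ** 2))

def centroid(intensity):
    """Intensity-weighted centroid, the quantity a 1-D PSD outputs."""
    return np.sum(x * intensity) / np.sum(intensity)

# Moving the fringe by 0.2 mm moves the centroid by the same amount,
# so phase changes in the interferometer read out as centroid shifts.
delta = centroid(fringe(0.2)) - centroid(fringe(0.0))
print(round(delta, 3))   # → 0.2
```

In the actual system the centroid shift is then related back to the phase difference of the fiber M-Z interferometer.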
Handedness of children determines preferential facial and eye movements related to hemispheric specialization
Full Text Available BACKGROUND: Despite repeated demonstrations of asymmetries in several brain functions, the biological bases of such asymmetries have remained obscure. OBJECTIVE: To investigate the development of lateralized facial and eye movements evoked by hemispheric stimulation in right-handed and left-handed children. METHOD: Fifty children were tested according to handedness by means of four tests: I. monosyllabic nonsense words; II. trisyllabic sense words; III. visual field occlusion by a black wall, with geometric objects presented to each hand separately; IV. occlusion of the left eye and the temporal half of the visual field of the right eye with special goggles, after which children were asked to assemble a three-piece puzzle; the same tasks were performed contralaterally. RESULTS: Right-handed children showed a higher percentage of eye movements to the right side when stimulated by trisyllabic words, while left-handed children showed a higher percentage of eye movements to the left side when stimulated by the same type of words. Left-handed children spent more time recognizing nonsense monosyllabic words. Hand laterality correlated with trisyllabic word recognition performance. Age contributed to laterality development in nearly all cases, except in the second test. CONCLUSION: Eye and facial movements were found to be related to left- and right-hand preference and to specialization for language development, as well as to visual and haptic perception and recognition, in an age-dependent fashion within a complex process.
Full Text Available Automatic facial expression recognition has been one of the latest research topics since the 1990s. There have been recent advances in detecting faces, and in facial expression recognition and classification. Multiple methods have been devised for facial feature extraction, which helps in identifying faces and facial expressions. This paper surveys some of the published work from 2003 to date. Various methods are analysed to identify facial expression. The paper also discusses facial parameterization using Facial Action Coding System (FACS) action units and the methods which recognize the action-unit parameters using the facial expression data that are extracted. The various kinds of facial expressions present in the human face can be identified based on their geometric features, appearance features and hybrid features. The two basic concepts of extracting features are based on facial deformation and facial motion. This article also identifies techniques based on the characteristics of expressions and classifies the suitable methods that can be implemented.
Theobald, Barry-John; Matthews, Iain; Mangini, Michael; Spies, Jeffrey R.; Brick, Timothy R.; Cohn, Jeffrey F.; Boker, Steven M.
Nonverbal visual cues accompany speech to supplement the meaning of spoken words, signify emotional state, indicate position in discourse, and provide back-channel feedback. This visual information includes head movements, facial expressions and body gestures. In this article we describe techniques for manipulating both verbal and nonverbal facial…
Lagun, Dmitry; Manzanares, Cecelia; Zola, Stuart M; Buffalo, Elizabeth A; Agichtein, Eugene
The Visual Paired Comparison (VPC) task is a recognition memory test that has shown promise for the detection of memory impairments associated with mild cognitive impairment (MCI). Because patients with MCI often progress to Alzheimer's Disease (AD), the VPC may be useful in predicting the onset of AD. VPC uses noninvasive eye tracking to identify how subjects view novel and repeated visual stimuli. Healthy control subjects demonstrate memory for the repeated stimuli by spending more time looking at the novel images, i.e., novelty preference. Here, we report an application of machine learning methods from computer science to improve the accuracy of detecting MCI by modeling eye movement characteristics such as fixations, saccades, and re-fixations during the VPC task. These characteristics are represented as features provided to automatic classification algorithms such as Support Vector Machines (SVMs). Using the SVM classification algorithm, in tandem with modeling the patterns of fixations, saccade orientation, and regression patterns, our algorithm was able to automatically distinguish age-matched normal control subjects from MCI subjects with 87% accuracy, 97% sensitivity and 77% specificity, compared to the best available classification performance of 67% accuracy, 60% sensitivity, and 73% specificity when using only the novelty preference information. These results demonstrate the effectiveness of applying machine-learning techniques to the detection of MCI, and suggest a promising approach for detection of cognitive impairments associated with other disorders.
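The feature-extraction step that feeds the classifier can be sketched as follows; the synthetic gaze samples, the saccade threshold and the particular feature set are illustrative assumptions, not the study's exact pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical gaze trace: 120 (x, y) samples in degrees of visual angle,
# plus a flag for whether each sample landed on the novel image.
gaze = np.cumsum(rng.normal(0, 1.0, size=(120, 2)), axis=0)
on_novel = rng.random(120) < 0.7

step = np.linalg.norm(np.diff(gaze, axis=0), axis=1)
saccade = step > 2.0   # illustrative amplitude threshold (deg/sample)

# Simple per-trial features of the kind a classifier could consume,
# alongside the classic novelty-preference score.
features = {
    "n_saccades": int(np.sum(np.diff(saccade.astype(int)) == 1)),
    "mean_saccade_amp": float(step[saccade].mean()) if saccade.any() else 0.0,
    "novelty_preference": float(on_novel.mean()),
}
print(sorted(features))
```

Vectors like this, one per trial or subject, are what an SVM (or any other classifier) would be trained on; the study's gain comes from adding the eye-movement features to the novelty-preference score alone.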
Full Text Available As dementia progresses, the cognitive functioning of patients declines, and caregivers and other support staff gradually lose the means to communicate with them. However, some caregivers believe that patients can still recognize their surroundings even when they become akinetic with mutism. In this study, we observed eye movements (preferential looking paradigm) to detect the presence of residual cognitive functions in a patient with severe frontotemporal dementia. The subject was a 76-year-old female. At the time of observation, she had lost all spontaneous activities. Magnetic resonance imaging (MRI) showed dense atrophy in the bilateral frontotemporal lobes, but the parieto-occipital lobe was preserved. A preferential looking paradigm was used in the experiment, whereby two different faces (learned and non-learned) were simultaneously presented to the patient on a TV monitor. We found no significant differences in looking time between the two faces. However, when the saccade timing to the presented faces was examined, a much longer latency was observed for the right than for the left side of the target faces. Even though the patient had lost all capacity for spontaneous activity, we were able to observe partial residual cognitive ability using the eye-movement paradigm.
Sun, Xinyao; Byrns, Simon; Cheng, Irene; Zheng, Bin; Basu, Anup
We introduce a smart sensor-based motion detection technique for objective measurement and assessment of surgical dexterity among users at different experience levels. The goal is to allow trainees to evaluate their performance based on a reference model shared through communication technology, e.g., the Internet, without the physical presence of an evaluating surgeon. While in the current implementation we used a Leap Motion Controller to obtain motion data for analysis, our technique can be applied to motion data captured by other smart sensors, e.g., OptiTrack. To differentiate motions captured from different participants, measurement and assessment in our approach are achieved using two strategies: (1) low level descriptive statistical analysis, and (2) Hidden Markov Model (HMM) classification. Based on our surgical knot tying task experiment, we can conclude that finger motions generated from users with different surgical dexterity, e.g., expert and novice performers, display differences in path length, number of movements and task completion time. In order to validate the discriminatory ability of HMM for classifying different movement patterns, a non-surgical task was included in our analysis. Experimental results demonstrate that our approach had 100 % accuracy in discriminating between expert and novice performances. Our proposed motion analysis technique applied to open surgical procedures is a promising step towards the development of objective computer-assisted assessment and training systems.
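The low-level descriptive statistics named above can be sketched directly from a position stream; the synthetic data, sample rate and velocity threshold separating "movement" from "pause" are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
fps = 60.0   # hypothetical sensor sample rate (Hz)
# Synthetic 3-D fingertip positions in mm, one row per frame.
pos = np.cumsum(rng.normal(0, 0.5, size=(300, 3)), axis=0)

step = np.linalg.norm(np.diff(pos, axis=0), axis=1)
path_length = float(step.sum())              # total distance traveled (mm)

moving = step * fps > 20.0                   # illustrative mm/s threshold
n_movements = int(np.sum(np.diff(moving.astype(int)) == 1))
completion_time = len(pos) / fps             # seconds

print(round(path_length, 1), completion_time)
```

Experts typically show shorter path length, fewer distinct movements and shorter completion time than novices on the same task, which is exactly what such descriptive statistics expose before any HMM modeling.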
贺毅岳; 耿国华; 茹少峰; 贾甲; 贺小伟
A 3D facial surface reconstruction method, SRMFM (surface reconstruction method for facial model), is proposed. It first employs the Marching Cubes algorithm to reconstruct a 3D facial model and builds a facial voxel model after Frankfurt coordinate correction is accomplished, then applies multi-view visibility detection (MVD) based on breadth-first search (BFS) to extract the external surface vertices, from which the facial surface model is finally constructed. The method can automatically and quickly reconstruct, from tomographic images, a 3D facial surface model that preserves surface detail. SRMFM is thus an automatic and efficient 3D facial surface reconstruction method capable of keeping geometric details well preserved.
Motto, A L; Galiana, H L; Brown, K A; Kearney, R E
In previous work we developed a method for the automated estimation of the phase relation between thoracic and abdominal signals measured by noninvasive respiratory inductance plethysmography (RIP). In the present paper, we improve on the phase estimator by including an automated procedure for the detection of periods of gross body movements. We assume that the number of obstructive sleep events during periods of gross body movements is zero in probability. Combining the phase estimator with the gross-body-movement detector should yield improved diagnostic tools for the automated classification of obstructive hypopnea events.
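The thoraco-abdominal phase relation can be sketched with a spectral estimate: compare the phase of each signal at the dominant (breathing) frequency bin. The synthetic signals, the sample rate and the 90-degree lag below are illustrative, not RIP recordings or the authors' estimator:

```python
import numpy as np

fs, f_breath = 50.0, 0.25                     # Hz; hypothetical values
t = np.arange(0, 40, 1 / fs)                  # exactly 10 breathing cycles
thorax = np.sin(2 * np.pi * f_breath * t)
abdomen = np.sin(2 * np.pi * f_breath * t - np.pi / 2)  # 90 deg behind

# Phase difference at the dominant spectral bin of the thoracic signal.
spec_t, spec_a = np.fft.rfft(thorax), np.fft.rfft(abdomen)
k = int(np.argmax(np.abs(spec_t)[1:])) + 1    # skip the DC bin
phase_deg = float(np.degrees(np.angle(spec_t[k] / spec_a[k])))
print(round(phase_deg))   # → 90 (thorax leads by a quarter cycle)
```

Near-zero phase indicates synchronous (normal) breathing, while phase approaching 180 degrees indicates the paradoxical thoraco-abdominal motion associated with obstructive events.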
L. Daniel Jacubovsky, Dr.
Full Text Available Facial aging is a process unique and particular to each individual, governed above all by genetic makeup. The facelift is a complex technique, developed in our specialty since the beginning of the century, to reverse the principal signs of this process. The secondary factors that bear on facial aging are multiple, and the rhytidectomies or cervicofacial lifts described have therefore sought to correct the physiognomic changes of aging by working, as described, through all the tissue planes involved. This surgery consequently demands thorough knowledge of surgical anatomy, skill, and experience to reduce complications, surgical stigmata, and secondary revisions. Facial rhytidectomy has evolved toward a simpler procedure, with shorter incisions and less extensive dissections. Muscular suspensions have varied in their execution, and the vectors of lift and skin resection are crucial to the aesthetic results of cervicofacial surgery. Today these vectors are of more vertical traction. Correction of flaccidity is accompanied by an interest in restoring volume to the surface of the face, especially the middle third. Surgical rejuvenation techniques, especially the facelift, require planning tailored to each patient. Techniques adjunct to the facelift, such as blepharoplasty, mentoplasty, neck liposuction, facial implants, and others, have likewise evolved positively toward reduced risk and better aesthetic success.
Full Text Available Traditional edge-detection algorithms in image processing typically convolute a filter operator and the input image, and then map overlapping input image regions to output signals. Convolution also serves as a basis in biologically inspired (Sobel, Laplace, Canny) algorithms. Recent results in cognitive retinal research have shown that ganglion cell receptive fields cover the mammalian retina in a mosaic arrangement, with insignificant amounts of overlap in the central fovea. This means that the biological relevance of traditional and widely adopted edge-detection algorithms with convolution-based overlapping operator architectures has been disproved. However, using traditional filters with non-overlapping operator architectures leads to considerable losses in contour information. This paper introduces a novel, tremor-based retina model and edge-detection algorithm that reconciles these differences between the physiology of the retina and the overlapping architectures used by today's widely adopted algorithms. The algorithm takes into consideration data convergence, as well as the dynamic properties of the retina, by incorporating a model of involuntary eye tremors and the impulse responses of ganglion cells. Based on the evaluation of the model, two hypotheses are formulated on the highly debated role of involuntary eye tremors: (1) the role of involuntary eye tremors has information-theoretical implications; (2) from an information processing point of view, the functional role of involuntary eye movements extends to more than just the maintenance of action potentials. Involuntary eye movements may be responsible for the compensation of information losses caused by a non-overlapping receptive field architecture. In support of these hypotheses, the article provides a detailed analysis of the model's biological relevance, along with numerical simulations and a hardware implementation.
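The contrast between overlapping and non-overlapping operator architectures can be sketched in one dimension: the same derivative filter localizes a step edge when evaluated at every sample, but can miss it entirely when applied only on disjoint blocks. The step stimulus and 3-tap filter are illustrative, not the paper's retina model:

```python
import numpy as np

signal = np.concatenate([np.zeros(12), np.ones(12)])   # luminance step edge
kernel = np.array([-1.0, 0.0, 1.0])                    # derivative filter

# Overlapping architecture: the filter slides over every sample, so the
# edge shows up as a clear response near the step.
overlapping = np.convolve(signal, kernel[::-1], mode="valid")
edge_overlap = int(np.argmax(np.abs(overlapping)))

# Non-overlapping architecture: one output per disjoint 3-sample block.
# Here the step falls exactly on a block boundary, so every block is
# constant and the edge produces no response at all.
blocks = signal.reshape(-1, 3)
non_overlap = blocks @ kernel

print(edge_overlap, int(np.count_nonzero(non_overlap)))   # → 10 0
```

This loss of contour information in the mosaic (non-overlapping) case is the gap the paper's tremor model is meant to close: small involuntary eye movements shift the stimulus relative to the block boundaries over time.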
Chirivella, Praveen; Singaraju, Gowri Sankar; Mandava, Prasad; Reddy, V Karunakar; Neravati, Jeevan Kumar; George, Suja Ani
Objective: To test the null hypothesis that the esthetic perception of the smiling profile in three different facial types is unaffected by a change in maxillary incisor inclination and position. Materials and Methods: A smiling profile photograph with Class I skeletal and dental pattern and normal profile was taken for each of the three facial types: dolichofacial, mesofacial, and brachyfacial. Based on the original digital image, 15 smiling profiles for each facial type were created using the FACAD software by altering the labiolingual inclination and anteroposterior position of the maxillary incisors. These photographs were rated on a visual analog scale by three panels of examiners, consisting of orthodontists, dentists, and nonprofessionals, with twenty members in each group. The responses were assessed by analysis of variance (ANOVA) followed by post hoc Scheffe tests. Results: Significant differences (P < 0.001) were detected when the ratings of each photograph within each facial type were compared. In the dolichofacial and mesofacial patterns, the position of the maxillary incisor must be limited to 2 mm from the goal anterior limit line (GALL). In the brachyfacial pattern, any movement of the facial axis point of the maxillary incisors away from GALL worsens the facial esthetics. The ANOVA also showed differences among the three groups for certain facial profiles. Conclusion: The hypothesis was rejected. The esthetic perception of the labiolingual inclination and anteroposterior position of the maxillary incisors differs among facial types, and this may affect the formulation of treatment plans for different facial types. PMID:28197396
Aliakbaryhosseinabadi, Susan; Jiang, Ning; Vuckovic, Aleksandra
Detection of motor intention with short latency from scalp electroencephalography (EEG) is essential for the development of brain-computer interface (BCI) systems for neuromodulation. This latency determines the temporal association between motor intention and the triggered afferent neurofeedback...
Rehabilitation takes an important part in the treatment of facial paralysis, especially when these are severe. It aims to lead the recovery of motor activity and prevent or reduce sequelae like synkinesis or spasms. It is preferable that it be proposed early in order to set up a treatment plan based on the results of the assessment, sometimes coupled with an electromyography. In case of surgery, preoperative work is recommended, especially in case of hypoglossofacial anastomosis or lengthening temporalis myoplasty (LTM). Our proposal is to present an original technique to enhance the sensorimotor loop and the cortical control of movement, especially when using botulinum toxin and after surgery.
Darwin did not focus on deception. Only a few sentences in his book mentioned the issue. One of them raised the very interesting question of whether it is difficult to voluntarily inhibit the emotional expressions that are most difficult to voluntarily fabricate. Another suggestion was that it would be possible to unmask a fabricated expression by the absence of the difficult-to-voluntarily-generate facial actions. Still another was that during emotion body movements could be more easily suppressed than facial expression. Research relevant to each of Darwin's suggestions is reviewed, as is other research on deception that Darwin did not foresee.
Imaizumi, Mitsuyoshi; Tani, Akiko; Ogawa, Hiroshi; Omori, Koichi
Parotid lymphangioma is a relatively rare disease that is usually detected in infancy or early childhood, and which has typical features. Clinical reports of facial nerve paralysis caused by lymphangioma, however, are very rare. Usually, facial nerve paralysis in a child suggests malignancy. Here we report a very rare case of parotid lymphangioma associated with facial nerve paralysis. A 7-year-old boy was admitted to hospital with a rapidly enlarging mass in the left parotid region. Left peripheral-type facial nerve paralysis was also noted. Computed tomography and magnetic resonance imaging also revealed multiple cystic lesions. Open biopsy was undertaken in order to investigate the cause of the facial nerve paralysis. The histopathological findings of the excised tumor were consistent with lymphangioma. Prednisone (40 mg/day) was given in a tapering dose schedule. Facial nerve paralysis was completely cured 1 month after treatment. There has been no recurrent facial nerve paralysis for eight years.
Full Text Available This paper focuses on the feasibility of tracking the chest wall movement of a human subject during respiration from the waveforms recorded using an impulse-radio (IR) ultra-wideband radar. The paper describes the signal processing used to detect sleep apnea and estimate the breathing rate. Techniques to solve several problems in these types of measurements, such as clutter suppression, body movement, and body orientation detection, are described. Clutter suppression is achieved using a moving-average filter to dynamically estimate the clutter. The artifacts caused by body movements are removed using a threshold method before analyzing the breathing signal. Motion is detected using the time delay that maximizes the received signal after a clutter-removal algorithm is applied. The periods in which the standard deviation of the time delay exceeds a threshold are considered macro-movements and are neglected. The sleep apnea intervals are detected when the breathing signal is below a threshold. The breathing rate is determined from a robust spectrum estimation based on the Lomb periodogram algorithm. On the other hand, the breathing signal amplitude depends on the body orientation with respect to the antennas, and this could be a problem. In this case, in order to maximize the signal-to-noise ratio, multiple sensors are proposed to ensure that the backscattered signal can be detected by at least one sensor, regardless of the direction the human subject is facing. The feasibility of the system is assessed by comparison with signals recorded by a microphone.
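The clutter-suppression and rate-estimation steps described above can be sketched as follows. The sampling rate, window length, breathing frequency, and search band are illustrative assumptions rather than values from the paper, and `scipy.signal.lombscargle` stands in for the robust Lomb periodogram estimator:

```python
import numpy as np
from scipy.signal import lombscargle

fs = 20.0                               # slow-time sampling rate (assumed)
t = np.arange(0, 60, 1 / fs)
clutter = 0.5                           # static reflections from the room
breathing = 0.2 * np.sin(2 * np.pi * 0.25 * t)  # 0.25 Hz = 15 breaths/min
raw = clutter + breathing

# Clutter suppression: subtract a moving-average estimate of the static part
win = int(5 * fs)
clutter_est = np.convolve(raw, np.ones(win) / win, mode="same")
sig = raw - clutter_est

# Breathing rate from a Lomb periodogram over a plausible breathing band
freqs_hz = np.linspace(0.1, 1.0, 200)
pgram = lombscargle(t, sig - sig.mean(), 2 * np.pi * freqs_hz)
rate_bpm = 60.0 * freqs_hz[np.argmax(pgram)]
```

The Lomb periodogram is chosen here, as in the abstract, because it tolerates the unevenly spaced samples that remain after movement artifacts are excised.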
Korb, Sebastian; Wood, Adrienne; Banks, Caroline A; Agoulnik, Dasha; Hadlock, Tessa A; Niedenthal, Paula M
The ability of patients with unilateral facial paralysis to recognize and appropriately judge facial expressions remains underexplored. To test the effects of unilateral facial paralysis on the recognition of and judgments about facial expressions of emotion and to evaluate the asymmetry of facial mimicry. Patients with left or right unilateral facial paralysis at a university facial plastic surgery unit completed 2 computer tasks involving video facial expression recognition. Side of facial paralysis was used as a between-participant factor. Facial function and symmetry were verified electronically with the eFACE facial function scale. Across the 2 tasks, short videos were shown in which facial expressions of happiness and anger unfolded earlier on one side of the face or morphed into each other. Patients indicated the moment or side of change between facial expressions and judged their authenticity. Type, time, and accuracy of responses on a keyboard were analyzed. A total of 57 participants (36 women and 21 men) aged 20 to 76 years (mean age, 50.2 years) and with mild left or right unilateral facial paralysis were included in the study. Patients with right facial paralysis were faster (by about 150 milliseconds) and more accurate (mean number of errors, 1.9 vs 2.5) in detecting expression onsets on the left side of the stimulus face, suggesting anatomical asymmetry of facial mimicry. Patients with left paralysis, however, showed more anomalous responses, which partly differed by emotion. The findings favor the hypothesis of an anatomical asymmetry of facial mimicry and suggest that patients with a left hemiparalysis could be more at risk of developing a cluster of disabilities and psychological conditions, including emotion-recognition impairments. Level of evidence: 3.
Hamedi, M; Salleh, Sh-Hussain; Tan, T S; Ismail, K; Ali, J; Dee-Uam, C; Pavaganun, C; Yupapin, P P
The authors present a new method of recognizing different human facial gestures through their neural activities and muscle movements, which can be used in machine-interfacing applications. Human-machine interface (HMI) technology utilizes human neural activities as input controllers for the machine. Recently, much work has been done on the specific application of facial electromyography (EMG)-based HMI, which has used limited and fixed numbers of facial gestures. In this work, a multipurpose interface is suggested that can support 2-11 control commands applicable to various HMI systems. The significance of this work is in finding the most accurate facial gestures for any application with a maximum of eleven control commands. Eleven facial gesture EMGs are recorded from ten volunteers. Detected EMGs are passed through a band-pass filter and root-mean-square features are extracted. Various combinations of gestures, with a different number of gestures in each group, are made from the existing facial gestures. Finally, all combinations are trained and classified by a Fuzzy c-means classifier, and the combinations with the highest recognition accuracy in each group are chosen. An average accuracy of over 90% for the chosen combinations demonstrated their suitability as command controllers.
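The front end of such a pipeline, band-pass filtering followed by windowed root-mean-square features, can be sketched as below. The cut-off frequencies, sampling rate, and window length are assumed values for illustration, and the Fuzzy c-means classification stage is omitted:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(x, fs, lo=20.0, hi=450.0, order=4):
    """Zero-phase band-pass filter for surface EMG (cut-offs assumed)."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def rms_features(x, fs, win_s=0.2):
    """Root-mean-square over consecutive non-overlapping windows."""
    n = int(win_s * fs)
    nwin = len(x) // n
    return np.sqrt(np.mean(x[: nwin * n].reshape(nwin, n) ** 2, axis=1))

fs = 2000.0
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)
emg = rng.normal(0, 1, t.size) * (1 + (t > 1))  # activity burst after t = 1 s
features = rms_features(bandpass(emg, fs), fs)  # one RMS value per 200 ms
```

The resulting feature vector (one RMS value per window) is what a classifier such as Fuzzy c-means would consume.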
Wathan, Jen; Burrows, Anne M; Waller, Bridget M; McComb, Karen
Although previous studies of horses have investigated their facial expressions in specific contexts, e.g. pain, until now there has been no methodology available that documents all the possible facial movements of the horse and provides a way to record all potential facial configurations. This is essential for an objective description of horse facial expressions across a range of contexts that reflect different emotional states. Facial Action Coding Systems (FACS) provide a systematic methodology of identifying and coding facial expressions on the basis of underlying facial musculature and muscle movement. FACS are anatomically based and document all possible facial movements rather than a configuration of movements associated with a particular situation. Consequently, FACS can be applied as a tool for a wide range of research questions. We developed FACS for the domestic horse (Equus caballus) through anatomical investigation of the underlying musculature and subsequent analysis of naturally occurring behaviour captured on high quality video. Discrete facial movements were identified and described in terms of the underlying muscle contractions, in correspondence with previous FACS systems. The reliability of others to be able to learn this system (EquiFACS) and consistently code behavioural sequences was high--and this included people with no previous experience of horses. A wide range of facial movements were identified, including many that are also seen in primates and other domestic animals (dogs and cats). EquiFACS provides a method that can now be used to document the facial movements associated with different social contexts and thus to address questions relevant to understanding social cognition and comparative psychology, as well as informing current veterinary and animal welfare practices.
Valstar, M.F.; Pantic, Maja
Past work on automatic analysis of facial expressions has focused mostly on detecting prototypic expressions of basic emotions like happiness and anger. The method proposed here enables the detection of a much larger range of facial behavior by recognizing facial muscle actions (action units, or AUs).
Chouet, Bernard A.; Matoza, Robin S.
With the emergence of portable broadband seismic instrumentation, availability of digital networks with wide dynamic range, and development of new powerful analysis techniques made possible by greatly increased computer capacity, volcano seismology has now reached a mature stage where insights are rapidly being gained on the role played by magmatic and hydrothermal fluids in the generation of seismic waves. Volcanoes produce a wide variety of signals originating in the transport of magma and related hydrothermal fluids and their interaction with solid rock. Typical signals include (1) brittle failure earthquakes that reflect the response of the rock to stress changes induced by magma movement; (2) pressure oscillations accompanying the dynamics of liquids and gases in conduits and cracks; and (3) magma fracturing and fragmentation. Oscillatory behaviors within magmatic and hydrothermal systems are the norm and are the expressions of the complex rheologies of these fluids and nonlinear characteristics of associated processes underlying the release of thermo-chemical and gravitational energy from volcanic fluids along their ascent path. The interpretation of these signals and quantification of their source mechanisms form the core of modern volcano seismology. The accuracy to which the forces operating at the source can be resolved depends on the degree of resolution achieved for the volcanic structure. High-resolution tomography based on iterative inversions of seismic travel-time data can image three-dimensional structures at a scale of a few hundred meters provided adequate local short-period earthquake data are available. Hence, forces in a volcano are potentially resolvable for periods longer than ~ 1 s. In concert with techniques aimed at the interpretation of processes occurring in the fluid, novel seismic methods have emerged that are allowing the detection of stress changes in volcanic structures induced by magma movement. These methods include (1) ambient
Andersson, Anders Tobias
Modern facial feature tracking techniques can automatically extract and accurately track multiple facial landmark points from faces in video streams in real time. Facial landmark points are defined as points distributed on a face according to certain facial features, such as eye corners and the face contour. This opens up the possibility of using facial feature movements as a hands-free human-computer interaction technique. These alternatives to traditional input devices can give a more interesting gaming experi...
Full Text Available The number of channels used for polysomnographic recording frequently causes difficulties for patients because of the many cables connected. It also increases the risk of trouble during the recording process and increases the storage volume. In this study, we aim to detect periodic leg movement (PLM) in sleep using the channels other than leg electromyography (EMG), by analysing polysomnography (PSG) data with digital signal processing (DSP) and machine learning methods. PSG records of 153 patients of different ages and genders with a PLM disorder diagnosis were examined retrospectively. Novel software was developed for the analysis of PSG records, utilizing machine learning algorithms, statistical methods, and DSP methods. To classify PLM, popular machine learning methods (multilayer perceptron, K-nearest neighbour, and random forests) and logistic regression were used. Comparison of the classification results showed that the K-nearest neighbour algorithm had the highest average classification rate (91.87%) and a lower average classification error (RMSE = 0.2850), while the multilayer perceptron algorithm had the lowest average classification rate (83.29%) and the highest average classification error (RMSE = 0.3705). The results showed that PLM can be classified with high accuracy (91.87%) without the leg EMG record being present.
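As an illustration of the best-performing classifier family above, here is a minimal K-nearest-neighbour classifier run on synthetic two-class features; the data, `k`, and feature dimensionality are invented for the sketch and do not reproduce the study's PSG features:

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=5):
    """Plain K-nearest-neighbour majority vote for two classes (0/1)."""
    # pairwise Euclidean distances, shape (n_test, n_train)
    d = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]
    return (y_train[nearest].mean(axis=1) >= 0.5).astype(int)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 4)),    # class 0 feature vectors
               rng.normal(3, 1, (100, 4))])   # class 1 feature vectors
y = np.r_[np.zeros(100, int), np.ones(100, int)]
accuracy = (knn_predict(X, y, X, k=5) == y).mean()
```

In practice the study would evaluate such a classifier with held-out data rather than on the training set as done in this toy demonstration.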
Toups, Melissa A; Pease, James B; Hahn, Matthew W
Most of our knowledge of sex-chromosome evolution comes from male heterogametic (XX/XY) taxa. With the genome sequencing of multiple female heterogametic (ZZ/ZW) taxa, we can now ask whether there are patterns of evolution common to both sex chromosome systems. In all XX/XY systems examined to date, there is an excess of testis-biased retrogenes moving from the X chromosome to the autosomes, which is hypothesized to result from either sexually antagonistic selection or escape from meiotic sex chromosome inactivation (MSCI). We examined RNA-mediated (retrotransposed) and DNA-mediated gene movement in two independently evolved ZZ/ZW systems, birds (chicken and zebra finch) and lepidopterans (silkworm). Even with sexually antagonistic selection likely operating in both taxa and MSCI having been identified in the chicken, we find no evidence for an excess of genes moving from the Z chromosome to the autosomes in either lineage. We detected no excess for either RNA- or DNA-mediated duplicates, across a range of approaches and methods. We offer some potential explanations for this difference between XX/XY and ZZ/ZW sex chromosome systems, but further work is needed to distinguish among these hypotheses. Regardless of the root causes, we have identified an additional, potentially inherent, difference between XX/XY and ZZ/ZW systems.
Lin, Yin-Yan; Wu, Hau-Tieng; Hsu, Chi-An; Huang, Po-Chiun; Huang, Yuan-Hao; Lo, Yu-Lun
Physiologically, the thoracic (THO) and abdominal (ABD) movement signals, captured using wearable piezo-electric bands, provide information about various types of apnea, including central sleep apnea (CSA) and obstructive sleep apnea (OSA). However, the use of piezo-electric wearables in detecting sleep apnea events has seldom been explored in the literature. This study explored the possibility of identifying sleep apnea events, including OSA and CSA, by solely analyzing one or both of the THO and ABD signals. An adaptive non-harmonic model was introduced to model the THO and ABD signals, which allows us to design features for sleep apnea events. To confirm the suitability of the extracted features, a support vector machine was applied to classify three categories: normal and hypopnea, OSA, and CSA. According to a database of 34 subjects, the overall classification accuracies were on average 75.9%±11.7% and 73.8%±4.4%, respectively, based on cross-validation. When the features determined from the THO and ABD signals were combined, the overall classification accuracy became 81.8%±9.4%. These features were applied in designing a state machine for online apnea event detection. Two event-by-event accuracy indices, S and I, were proposed for evaluating the performance of the state machine. For the same database, the S index was 84.01%±9.06%, and the I index was 77.21%±19.01%. The results indicate the considerable potential of applying the proposed algorithm to clinical examinations for both screening and homecare purposes.
Ozçelik, D; Toplu, G; Türkseven, A; Senses, D A; Yiğit, B
Transverse facial cleft is a very rare malformation. The Tessier no. 7 cleft is a lateral facial cleft which emanates from the oral cavity and extends towards the tragus, involving both soft tissue and skeletal components. Here, we present a case with a transverse facial cleft, an accessory mandible bearing teeth, an absent parotid gland, and ipsilateral peripheral facial nerve weakness. After surgical repair of the cleft at 2 months of age, improvement of the facial nerve function was detected at 3 years of age. Resection of the accessory mandible was planned at 5 to 6 years of age.
张丽雯; 杨艳芳; 齐美彬; 蒋建国
A method to detect driving fatigue based on eye features and yawning is proposed. First, the face area is detected and located using a Gaussian skin-colour model in the YCrCb colour space. The facial grey-level image is then binarized, and the eye regions are robustly located in the binary image under geometric constraints derived from prior knowledge of facial structure. Region growing and morphological operations refine the eye contours, from which the degree of eye closure is calculated. The candidate lip area is then located using an optimal colour-space threshold and refined using grey-level features of the face; the degree of mouth opening indicates whether the driver yawns. Finally, driving fatigue is decided from the two facial features. The detection of driving fatigue is improved by combining eye closure with yawning frequency.
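A minimal sketch of the final decision step, assuming a PERCLOS-style eye-closure measure combined with a yawning-frequency threshold; the threshold values and function names are hypothetical, not taken from the paper:

```python
import numpy as np

def perclos(closure, thresh=0.8):
    """Fraction of frames in which eye closure exceeds a threshold."""
    return float((np.asarray(closure) > thresh).mean())

def is_fatigued(closure, yawns_per_min, perclos_limit=0.15, yawn_limit=3):
    """Decide fatigue from the two facial features combined (assumed limits)."""
    return perclos(closure) > perclos_limit or yawns_per_min > yawn_limit

# 12% of frames with closed eyes plus frequent yawning -> flagged as fatigued
closure = np.r_[np.full(12, 0.95), np.full(88, 0.1)]
fatigued = is_fatigued(closure, yawns_per_min=4)
```

Combining the two cues, as in the example above where eye closure alone is below its limit, is what makes the joint decision more robust than either feature on its own.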
Du, Shichuan; Tao, Yong; Martinez, Aleix M
Understanding the different categories of facial expressions of emotion regularly used by us is essential to gain insights into human cognition and affect as well as for the design of computational models and perceptual interfaces. Past research on facial expressions of emotion has focused on the study of six basic categories--happiness, surprise, anger, sadness, fear, and disgust. However, many more facial expressions of emotion exist and are used regularly by humans. This paper describes an important group of expressions, which we call compound emotion categories. Compound emotions are those that can be constructed by combining basic component categories to create new ones. For instance, happily surprised and angrily surprised are two distinct compound emotion categories. The present work defines 21 distinct emotion categories. Sample images of their facial expressions were collected from 230 human subjects. A Facial Action Coding System analysis shows the production of these 21 categories is different but consistent with the subordinate categories they represent (e.g., a happily surprised expression combines muscle movements observed in happiness and surprise). We show that these differences are sufficient to distinguish between the 21 defined categories. We then use a computational model of face perception to demonstrate that most of these categories are also visually discriminable from one another.
... more to fully heal and achieve maximum improved appearance. Facial plastic surgery makes it possible to correct facial flaws that can undermine self-confidence. Changing how your scar looks can help change ...
This review covers universal patterns in facial preferences. Facial attractiveness has fascinated thinkers since antiquity, but has been the subject of intense scientific study for only the last quarter of a century...
Facial reanimation following persistent facial paralysis can be managed with surgical procedures of varying complexity. The choice of technique is mainly determined by the cause of the facial paralysis and the age and wishes of the patient. The techniques most commonly used are nerve grafts (VII-VII, XII-VII, cross-facial graft), dynamic muscle transfers (temporal myoplasty, free muscle transfer) and static suspensions. Intensive rehabilitation through specific exercises after all procedures is essential to achieve good results.
Carranza, Dafnis C; Haley, Jennifer C; Chiu, Melvin
A 34-year-old man from El Salvador was referred to our clinic with a 10-year history of a pruritic erythematous facial eruption. He reported increased pruritus and scaling of lesions when exposed to the sun. He worked as a construction worker and admitted to frequent sun exposure. Physical examination revealed well-circumscribed erythematous to violaceous papules with raised borders and atrophic centers localized to the nose (Figure 1). He did not have lesions on the arms or legs. He did not report a family history of similar lesions. A biopsy specimen was obtained from the edge of a lesion on the right ala. Histologic examination of the biopsy specimen showed acanthosis of the epidermis with focal invagination of the corneal layer and a homogeneous column of parakeratosis in the center of that layer consistent with a cornoid lamella (Figure 2). Furthermore, the granular layer was absent at the cornoid lamella base. The superficial dermis contained a sparse, perivascular lymphocytic infiltrate. No evidence of dysplasia or malignancy was seen. These findings supported a diagnosis of porokeratosis. The patient underwent a trial of cryotherapy with moderate improvement of the facial lesions.
Andersson, Richard; Larsson, Linnea; Holmqvist, Kenneth; Stridh, Martin; Nyström, Marcus
Almost all eye-movement researchers use algorithms to parse raw data and detect distinct types of eye movement events, such as fixations, saccades, and pursuit, and then base their results on these. Surprisingly, these algorithms are rarely evaluated. We evaluated the classifications of ten eye-movement event detection algorithms, on data from an SMI HiSpeed 1250 system, and compared them to manual ratings of two human experts. The evaluation focused on fixations, saccades, and post-saccadic oscillations. The evaluation used both event duration parameters and sample-by-sample comparisons to rank the algorithms. The resulting event durations varied substantially as a function of what algorithm was used. This evaluation differed from previous evaluations by considering a relatively large set of algorithms, multiple events, and data from both static and dynamic stimuli. The main conclusion is that current detectors of only fixations and saccades work reasonably well for static stimuli, but barely better than chance for dynamic stimuli. Differing results across evaluation methods make it difficult to select one winner for fixation detection. For saccade detection, however, the algorithm by Larsson, Nyström and Stridh (IEEE Transactions on Biomedical Engineering, 60(9):2484-2493, 2013) outperforms all algorithms in data from both static and dynamic stimuli. The data also show how improperly selected algorithms applied to dynamic data misestimate fixation and saccade properties.
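For readers unfamiliar with such detectors, a velocity-threshold (I-VT) classifier, one of the simplest algorithms of the kind evaluated here (not the Larsson et al. method), can be sketched as follows; the sampling rate, threshold, and gaze trace are invented for illustration:

```python
import numpy as np

def ivt_classify(x, y, fs, vel_thresh=100.0):
    """Label each sample 'saccade' or 'fixation' by point-to-point velocity.

    x, y are gaze positions in degrees; velocity is in degrees/second.
    """
    v = np.hypot(np.gradient(x), np.gradient(y)) * fs
    return np.where(v > vel_thresh, "saccade", "fixation")

fs = 500.0
# 0.2 s fixation, a ~40 ms 10-degree saccade, then another fixation
x = np.r_[np.full(100, 0.0), np.linspace(0, 10, 20), np.full(100, 10.0)]
labels = ivt_classify(x, np.zeros_like(x), fs)
```

The paper's point is precisely that such simple detectors behave acceptably on static stimuli but break down on dynamic stimuli, where pursuit velocities blur the fixation/saccade boundary.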
Mattei, Eugenio; Censi, Federica; Triventi, Michele; Mancini, Matteo; Napolitano, Antonio; Genovese, Elisabetta; Cannata, Vittorio; Falsaperla, Rosaria; Calcagnini, Giovanni
The static magnetic field generated by MRI systems is highly non-homogeneous and rapidly decreases when moving away from the bore of the scanner. Consequently, movement around the MRI scanner is equivalent to exposure to a time-varying magnetic field at very low frequency (a few Hz). For patients with an implanted cardiac stimulator, such as an implantable cardioverter/defibrillator (ICD), movements inside the MRI environment may thus induce voltages on the loop formed by the leads of the device, with the potential to affect the behavior of the stimulator. In particular, the ICD's detection algorithms may be affected by the induced voltage, causing inappropriate sensing, arrhythmia detections, and eventually inappropriate ICD therapy. We performed in-vitro measurements on a saline-filled human-shaped phantom (male, 170 cm height), equipped with an MR-conditional ICD able to transmit the detected cardiac activity (electrograms) in real time. A biventricular implant was reproduced and the ICD was programmed in standard operating conditions, but with the shock delivery disabled. The electrograms recorded in the atrial, left, and right ventricular channels were monitored during rotational movements along the vertical axis, in close proximity to the bore. The phantom was also equipped with an accelerometer and a magnetic field probe to measure the angular velocity and the magnetic field variation during the experiment. Pacing inhibition and inappropriate detection of tachyarrhythmias and of ventricular fibrillation were observed. Pacing inhibition began at an angular velocity of about 7 rad/s (dB/dt of about 2 T/s). Inappropriate detection of ventricular fibrillation occurred at about 8 rad/s (dB/dt of about 3 T/s). These findings highlight the need for a specific risk assessment of workers with MR-conditional ICDs, one that also takes into account effects that are generally not considered relevant for patients, such as movement around the scanner bore.
Shelley-Jones, D; Beischer, N; de Crespigny, L; Chew, F
Two cases of rhesus isoimmunization are presented in which the fetus was much more severely affected than anticipated and where a sinusoidal pattern found on cardiotocography, performed because of absent fetal movements, resulted in appropriate and successful management.
Sophie L Fayolle
Full Text Available Two experiments were run to examine the effects of dynamic displays of facial expressions of emotions on time judgments. The participants were given a temporal bisection task with emotional facial expressions presented in a dynamic or a static display. Two emotional facial expressions and a neutral expression were tested and compared. Each of the emotional expressions had the same affective valence (unpleasant), but one was high-arousing (expressing anger) and the other low-arousing (expressing sadness). Our results showed that time judgments are highly sensitive to movements in facial expressions and the emotions expressed. Indeed, longer perceived durations were found in response to the dynamic faces and the high-arousing emotional expressions compared to the static faces and low-arousing expressions. In addition, the facial movements amplified the effect of emotions on time perception. Dynamic facial expressions are thus interesting tools for examining variations in temporal judgments in different social contexts.
Ferri, Raffaele; Zucconi, Marco; Manconi, Mauro; Bruni, Oliviero; Miano, Silvia; Plazzi, Giuseppe; Ferini-Strambi, Luigi
To assess the performance of a new method for automatic detection of periodic leg movements during sleep. Leg movements during sleep were visually detected in the tibialis anterior muscle recordings of 15 patients with restless legs syndrome and 15 normal controls. Leg movements were detected automatically by means of a new computer method in which electromyogram signals are first digitally band-pass filtered and then rectified; subsequently, the detection of leg movements is performed using 2 thresholds: one for the starting point and another to detect the end point of each leg movement. Sensitivity and false-positive rate were obtained; the American Sleep Disorders Association parameters were also computed, and the results were analyzed by means of the Kendall W coefficient, the linear correlation coefficient, and Bland-Altman plots. N/A. Fifteen patients with restless legs syndrome and periodic leg movements and 15 controls. High values of the Kendall W coefficient of concordance between automatic and visual analysis were found, with values close to 1, and the linear correlation coefficient for the leg movement index and total leg movement index was > 0.950. The Bland-Altman limits of agreement between visual and computer detection were -9.01 and +9.89 for the periodic leg movement index. None of the normal controls was found to have a periodic leg movement index >5 after automatic analysis. Our method can be applied to the clinical evaluation of periodic leg movements during sleep, with some caution in patients with a low periodic leg movement index. Large-scale research application is possible, and the method can be considered reliable.
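The two-threshold detection described above (one threshold for onset, a lower one for offset) can be sketched on a rectified EMG envelope; the threshold values and burst timings are invented for illustration:

```python
import numpy as np

def detect_movements(emg, fs, on_thr, off_thr):
    """Two-threshold detector on rectified EMG.

    An event starts when the rectified signal rises above on_thr and
    ends when it falls below the lower off_thr; returns (start, end) in s.
    """
    rect = np.abs(emg)
    events, start = [], None
    for i, v in enumerate(rect):
        if start is None and v > on_thr:
            start = i
        elif start is not None and v < off_thr:
            events.append((start / fs, i / fs))
            start = None
    if start is not None:                      # event still open at the end
        events.append((start / fs, len(rect) / fs))
    return events

fs = 100.0
sig = np.zeros(1000)
sig[200:260] = 1.0                             # simulated burst at 2.0-2.6 s
sig[700:750] = 1.0                             # simulated burst at 7.0-7.5 s
events = detect_movements(sig, fs, on_thr=0.5, off_thr=0.2)
```

Using a lower offset threshold (hysteresis) keeps a single event from being split in two when the rectified signal briefly dips near the onset threshold.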
Janet H Bultitude
Full Text Available It has been suggested that incongruence between signals for motor intention and sensory input can cause pain and other sensory abnormalities. This claim is supported by reports that moving in an environment of induced sensorimotor conflict leads to elevated pain and sensory symptoms in those with certain painful conditions. Similar procedures can lead to reports of anomalous sensations in healthy volunteers too. In the present study, we used mirror visual feedback to investigate the effects of sensorimotor incongruence on responses to stimuli that arise from sources external to the body, in particular, touch. Incongruence between the sensory and motor signals for the right arm was manipulated by having the participants make symmetrical or asymmetrical movements while watching a reflection of their left arm in a parasagittal mirror, or the left hand surface of a similarly positioned opaque board. In contrast to our prediction, sensitivity to the presence of gaps in tactile stimulation of the right forearm was not reduced when participants made asymmetrical movements during mirror visual feedback, as compared to when they made symmetrical or asymmetrical movements with no visual feedback. Instead, sensitivity was reduced when participants made symmetrical movements during mirror visual feedback relative to the other three conditions. We suggest that small discrepancies between sensory and motor information, as they occur during mirror visual feedback with symmetrical movements, can impair tactile processing. In contrast, asymmetrical movements with mirror visual feedback may not impact tactile processing because the larger discrepancies between sensory and motor information may prevent the integration of these sources of information. These results contrast with previous reports of anomalous sensations during exposure to both low and high sensorimotor conflict, but are nevertheless in agreement with a forward model interpretation of perceptual
Full Text Available Detecting loci under selection is an important task in evolutionary biology. In conservation genetics, detecting selection is key to investigating adaptation to the spread of infectious disease. Loci under selection can be detected on a spatial scale, accounting for differences in demographic history among populations, or on a temporal scale, tracing changes in allele frequencies over time. Here we use these two approaches to investigate selective responses to the spread of an infectious cancer, devil facial tumor disease (DFTD), that since 1996 has ravaged the Tasmanian devil (Sarcophilus harrisii). Using time-series 'restriction site associated DNA' (RAD) markers from populations pre- and post-DFTD arrival, and from DFTD-free populations, we infer loci under selection due to DFTD and investigate signatures of selection that are incongruent among methods, populations, and times. The lack of congruence among DFTD-influenced populations with respect to inferred loci under selection, and the direction of that selection, fails to implicate a consistent selective role for DFTD. Instead, genetic drift is more likely driving the observed allele frequency changes over time. Our study illustrates the importance of applying methods with different performance optima (e.g., accounting for population structure and background selection) and of assessing the congruence of the results.
Full Text Available During their lifetime, people learn to recognize thousands of faces that they interact with. Face perception refers to an individual's understanding and interpretation of the face, particularly the human face, especially in relation to the associated information processing in the brain. The proportions and expressions of the human face are important for identifying origin, emotional tendencies, health qualities, and some social information. From birth, faces are important in the individual's social interaction. Face perception is very complex, as the recognition of facial expressions involves extensive and diverse areas of the brain. Our main goal is to put emphasis on presenting specialized studies of human faces, and also to highlight the importance of attractiveness in their retention. We will see that there are many factors that influence face recognition.
Rodrigues, João Fabrício Mota; Coelho, Marco Túlio Pacheco
Sampling biodiversity is an essential step for conservation, and understanding the efficiency of sampling methods allows us to estimate the quality of our biodiversity data. Sex ratio is an important population characteristic, but until now, no study has evaluated how efficient the sampling methods commonly used in biodiversity surveys are at estimating the sex ratio of populations. We used a virtual ecologist approach to investigate whether active and passive capture methods are able to accurately sample a population's sex ratio and whether differences in movement pattern and detectability between males and females produce biased estimates of sex ratios when using these methods. Our simulation allowed the recognition of individuals, similar to mark-recapture studies. We found that differences in both movement patterns and detectability between males and females produce biased estimates of sex ratios. However, increasing the sampling effort or the number of sampling days improves the ability of passive or active capture methods to properly sample sex ratio. Thus, prior knowledge regarding movement patterns and detectability for species is important information to guide field studies aiming to understand sex-ratio-related patterns. PMID:27441554
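The virtual-ecologist design described above can be sketched in a few lines: simulate recognizable males and females with sex-specific detection probabilities, accumulate unique detections over survey days (mark-recapture style, no double counting), and inspect the estimated sex ratio. This is a minimal illustrative sketch only; the function name, parameter values, and the omission of movement patterns are our assumptions, not details from the study.

```python
import random

def simulate_survey(n_males=500, n_females=500, p_male=0.3, p_female=0.3,
                    days=5, seed=0):
    """Virtual-ecologist sketch: each day, every individual is detected
    independently with a sex-specific probability; individuals are
    recognizable, so repeat detections are not double-counted."""
    rng = random.Random(seed)
    seen = set()
    for _ in range(days):
        for i in range(n_males):
            if rng.random() < p_male:
                seen.add(("M", i))
        for j in range(n_females):
            if rng.random() < p_female:
                seen.add(("F", j))
    males = sum(1 for sex, _ in seen if sex == "M")
    females = sum(1 for sex, _ in seen if sex == "F")
    return males / max(females, 1)  # estimated sex ratio (M:F)
```

With equal detectability the estimated ratio stays close to the true 1:1, while a sex difference in detectability biases it upward, consistent with the abstract's conclusion; more survey days shrink the bias because detection probabilities saturate.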
Pons, Y; Ukkola-Pons, E; Ballivet de Régloix, S; Champagne, C; Raynal, M; Lepage, P; Kossowski, M
Facial palsy can be defined as a decrease in function of the facial nerve, the primary motor nerve of the facial muscles. When the facial palsy is peripheral, it affects both the superior and inferior areas of the face as opposed to central palsies, which affect only the inferior portion. The main cause of peripheral facial palsies is Bell's palsy, which remains a diagnosis of exclusion. The prognosis is good in most cases. In cases with significant cosmetic sequelae, a variety of surgical procedures are available (such as hypoglossal-facial anastomosis, temporalis myoplasty and Tenzel external canthopexy) to rehabilitate facial aesthetics and function.
Amr M. El-Sayed
Full Text Available This paper presents an approach to identifying prosthetic knee movements through pattern recognition of mechanical responses at the internal socket's wall. A quadrilateral double socket was custom made and instrumented with two force sensing resistors (FSR) attached to specific anterior and posterior sites of the socket's wall. A second setup was established by attaching three piezoelectric sensors at the anterior distal, anterior proximal, and posterior sites. Gait cycle and locomotion movements such as stair ascent and sit to stand were adopted to characterize the validity of the technique. FSR and piezoelectric outputs were measured with reference to the knee angle during each phase. Piezoelectric sensors could identify the movement of midswing and terminal swing, pre-full standing, pull-up at gait, sit to stand, and stair ascent. In contrast, FSR could estimate the gait cycle stance and swing phases and identify the pre-full standing at sit to stand. FSR showed less variation during sit to stand and stair ascent to sensitively represent the different movement states. The study highlighted the capacity of using in-socket sensors for knee movement identification. In addition, it validated the efficacy of the system and warrants further investigation with more amputee subjects and different socket types.
Valstar, M.F.; Mehu, M.; Jiang, Bihan; Pantic, Maja; Scherer, K.
Automatic facial expression recognition has been an active topic in computer science for over two decades, in particular facial action coding system action unit (AU) detection and classification of a number of discrete emotion states from facial expressive imagery. Standardization and comparability
Huang, D.; Shan, C.; Ardebilian, M.; Chen, L.
Facial image analysis, including face detection, face recognition, facial expression analysis, facial demographic classification, and so on, is an important and interesting research topic in the computer vision and image processing area, which has many important applications such as human-computer
Seyyed Basir Hashemi
Full Text Available Introduction: Intraparotid facial nerve schwannoma is a rare tumor. Case report: In this article we present two cases of intraparotid facial nerve schwannoma. In both cases the tumor presented as an asymptomatic parotid mass mimicking pleomorphic adenoma. No preoperative facial nerve dysfunction was detected in either case. Diagnostic results and surgical management are discussed in this paper.
Kaltwang, Sebastian; Todorovic, Sinisa; Pantic, Maja
This paper is about estimating intensity levels of Facial Action Units (FAUs) in videos as an important step toward interpreting facial expressions. As input features, we use locations of facial landmark points detected in video frames. To address uncertainty of input, we formulate a generative late
Chávez Oyanadel, R.O.; Clevers, J.G.P.W.; Verbesselt, J.; Naulin, P.; Herold, M.
Heliotropic leaf movement or leaf ‘solar tracking’ occurs for a wide variety of plants, including many desert species and some crops. This has an important effect on the canopy spectral reflectance as measured from satellites. For this reason, monitoring systems based on spectral vegetation indices,
Lesperance, Andrea; Blain, Stefanie; Chau, Tom
Children with hyperkinetic movement (HKM) often have limited access to traditional augmentative and alternative communication technologies (e.g., mechanical switches). To seek a communication solution for these children, this study explored the possibility that discernable biomechanical patterns, related to preference, exist amid HKM. We deployed a unified approach to analyse a child's movements, fusing caregiver and clinician observations with quantitative data (accelerations of the upper extremities). Two case studies were examined. In both, the accelerometer data identified preference at adjusted accuracies statistically above chance using a linear discriminant classifier. Visually, communicative movement patterns were identified in the first child (κ=0.25-0.27) but not in the second child (κ=0.03-0.11). Implications of this study include possible enhancement in communication and independence for these children.
Mehdizadeh, Omid B; Diels, Jacqueline; White, William Matthew
This article reviews the current literature supporting the use of botulinum toxin in producing symmetric facial features and reducing unwanted, involuntary movements. Methods, protocols, and adverse events are discussed. Additionally, the authors suggest that using botulinum toxin A therapy in postparalytic facial synkinesis can provide long-term results when used in conjunction with other treatment modalities.
Werner, C. A.; Poland, M. P.; Power, J. A.; Sutton, A. J.; Elias, T.; Grapenthin, R.; Thelen, W. A.
Typically in the weeks to days before a volcanic eruption there are indisputable signals of unrest that can be identified in geophysical and geochemical data. Detection of signals of volcanic unrest months to years prior to an eruption, however, relies on our ability to recognize and link more subtle changes. Deep long-period earthquakes, typically 10-45 km beneath volcanoes, are thought to represent magma movement and may indicate near-future unrest. Carbon dioxide (CO2) exsolves from most magmas at similar depths, and increases in CO2 discharge may also provide a months-to-years precursor, as it emits at the surface in advance of the magma from which it exsolved. Without the use of sensitive monitoring equipment and routine measurements, changes in CO2 can easily go undetected. Finally, inflation of the surface, through use of InSAR or GPS stations (especially at sites tens of km from the volcano), can also indicate accumulation of magma in the deep crust. Here we present three recent examples, from Redoubt, Kilauea, and Mammoth Mountain volcanoes, where increases in CO2 emission, deep long-period earthquakes, and surface deformation data indicate either the intrusion of magma into the deep crust in the months to years preceding volcanic eruptions or a change in ongoing volcanic unrest. At Redoubt volcano, Alaska, elevated CO2 emission (~ 1200 t/d, or roughly 20 times the background emission) was measured in October 2008, over 5 months prior to the first magmatic eruption in March 2009. In addition to CO2 release, deep long-period earthquakes were first recorded in December 2008, and a deep deformation signal was detected starting in May 2008, albeit retrospectively. At Kilauea, Hawaii, increases in CO2 emissions from the summit (up to nearly 25 kt/d, over three times the background emission) were measured in mid-2004, roughly coincident with a change in deformation behavior from deflation to inflation. Nearly 3 years later, a change in eruptive activity occurred
Girges, Christine; Wright, Michael J; Spencer, Janine V; O'Brien, Justin M D
While biological motion refers to both face and body movements, little is known about the visual perception of facial motion. We therefore examined alpha wave suppression, as a reduction in power is thought to reflect visual activity, in addition to attentional reorienting and memory processes. Nineteen neurologically healthy adults were tested on their ability to discriminate between successive facial motion captures. These animations exhibited both rigid and non-rigid facial motion, as well as speech expressions. The structural and surface appearance of these facial animations did not differ, so participants' decisions were based solely on differences in facial movements. Upright, orientation-inverted and luminance-inverted facial stimuli were compared. At occipital and parieto-occipital regions, upright facial motion evoked a transient increase in alpha, which was then followed by a significant reduction. This finding is discussed in terms of neural efficiency, gating mechanisms and neural synchronization. Moreover, there was no difference in the amount of alpha suppression evoked by each facial stimulus at occipital regions, suggesting early visual processing remains unaffected by manipulation paradigms. However, upright facial motion evoked greater suppression at parieto-occipital sites, and did so in the shortest latency. Increased activity within this region may reflect greater attentional reorienting to natural facial motion but also involvement of areas associated with the visual control of body effectors.
Rudovic, Ognjen; Pavlovic, Vladimir; Pantic, Maja
Modeling intensity of facial action units from spontaneously displayed facial expressions is challenging mainly because of high variability in subject-specific facial expressiveness, head-movements, illumination changes, etc. These factors make the target problem highly context-sensitive. However, e
Tuominen, Pekko; Tuononen, Minttu
One of the key elements in short-term solar forecasting is the detection of clouds and their movement. This paper discusses a new method for extracting cloud cover and cloud movement information from ground-based camera images using neural networks and the Lucas-Kanade method. Two novel features of the algorithm are that it performs well both inside and outside of the circumsolar region, i.e. the vicinity of the sun, and is capable of determining a threefold sun state. More precisely, the sun state can be detected to be either clear, partly covered by clouds or overcast. This is possible due to the absence of a shadow band in the imaging system. Visual validation showed that the new algorithm performed well in detecting clouds of varying color and contrast in situations referred to as difficult for commonly used thresholding methods. Cloud motion field results were computed from two consecutive sky images by solving the optical flow problem with the fast-to-compute Lucas-Kanade method. A local filtering scheme developed in this study was used to remove noisy motion vectors, and it is shown that this filtering technique results in a motion field with locally nearly uniform directions and smooth global changes in direction trends. Thin, transparent clouds still pose a challenge for detection and leave room for future improvements in the algorithm.
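The core of the Lucas-Kanade step used above is a small least-squares problem per image window: the spatial gradients of one frame and the temporal difference to the next frame jointly constrain the local motion vector. The following is a minimal pure-NumPy sketch of that single-window estimate on a synthetic pair of frames; it is illustrative only and does not reproduce the paper's pyramidal implementation, sky imagery, or motion-field filtering.

```python
import numpy as np

def lucas_kanade_window(prev, curr, cx, cy, half=7):
    """Estimate the (vx, vy) motion of the patch centred at column cx,
    row cy by solving the Lucas-Kanade least-squares system
    Ix*vx + Iy*vy = -It over a (2*half+1)^2 window."""
    ys = slice(cy - half, cy + half + 1)
    xs = slice(cx - half, cx + half + 1)
    # spatial gradients (central differences) and temporal difference
    Iy, Ix = np.gradient(prev.astype(float))
    It = curr.astype(float) - prev.astype(float)
    A = np.stack([Ix[ys, xs].ravel(), Iy[ys, xs].ravel()], axis=1)
    b = -It[ys, xs].ravel()
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v  # (vx, vy)

# synthetic pair: a smooth Gaussian 'cloud' shifted one pixel to the right
y, x = np.mgrid[0:64, 0:64]
blob = lambda x0, y0: np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / 20.0)
frame0 = blob(30, 32)
frame1 = blob(31, 32)
vx, vy = lucas_kanade_window(frame0, frame1, 30, 32)
```

For the one-pixel rightward shift above, the recovered vector is close to (1, 0); in practice, windows with weak or one-dimensional texture yield ill-conditioned systems, which is exactly why the paper follows the flow computation with a local filtering scheme to discard noisy motion vectors.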
Blomgren, Staffan; Hertz, Marcus
Face detection is used in many different areas, and with this thesis we aim to show the difference between Facebook's face detection software and an open-source version from OpenCV. By using the simplest implementation of OpenCV we want to find out if it is viable for use in personal applications and can help others wanting to implement face detection. The dataset was meticulously checked to find the exact number of faces in each image so that the optimal result is given. The c...
Antonielli, Benedetta; Monserrat, Oriol; Bonini, Marco; Righini, Gaia; Sani, Federico; Luzi, Guido; Feyzullayev, Akper; Aliyev, Chingiz
Mud volcanism is a process consisting of the extrusion of mud, fragments or blocks of country rocks, saline waters, and gases, mostly methane. This mechanism is typically linked to in-depth hydrocarbon traps, and it builds up a variety of conical edifices with dimensions and morphology similar to those of magmatic volcanoes. Interferometric Synthetic Aperture Radar (InSAR) techniques have been commonly used to monitor and investigate the ground deformation connected to the eruptive phases of magmatic volcanoes. InSAR techniques have also been employed to explore the ground deformation associated with the LUSI mud volcano in Java (Indonesia). We aim to carry out a study on the paroxysmal activities of the Azerbaijan mud volcanoes, among the largest on Earth, using similar techniques. In particular, the deformations of the mud volcanic systems were analyzed through the technique of satellite differential interferometry (DInSAR), thanks to the acquisition of 16 descending and 4 ascending Envisat images spanning about 4 years (October 2003-November 2007); these data were provided by the European Space Agency. The preliminary analysis of a set of 77 interferograms, and the unwrapping of those selected according to the best coherence values, allowed the detection of significant deformations at the Ayaz-Akhtarma and Khara Zira Island mud volcanoes. This analysis made it possible to identify relevant ground deformations of the volcanic systems in connection with the main eruptive events in 2005 and 2006, respectively, as recorded by the catalogue of Azerbaijan mud volcano eruptions until 2007. The preliminary analysis of the interferograms of the Ayaz-Akhtarma and Khara Zira mud volcanoes shows that the whole volcano edifice, or part of it, is subject to ground displacement before or coincident with an eruption. Assuming that the movement is mainly vertical, we suppose that deformation is due to bulging of the volcanic
Deveze, A; Paris, J
The diagnosis of a permanent facial paralysis can be devastating to a patient because of the cosmetic, functional and psychological disorders involved. Our society places great value on physical appearance, which leads to isolation of patients who are embarrassed by their paralyzed face. The objective of facial rehabilitation is to correct the patient's functional and cosmetic losses. The main functional goals are to protect the eye and reestablish oral competence. The primary cosmetic goals are to create balance and symmetry of the face at rest and to reestablish the coordinated movement of the facial musculature. The surgeon should be familiar with the variety of options available so that an individual plan can be developed based on each patient's clinical picture. The history of the facial paralysis, its etiology and the duration of the paralysis are of particular interest, as they guide the rehabilitation strategy.
Lee, Samantha Sze-Yee; Black, Alex A; Lacherez, Philippe; Wood, Joanne M
To examine the effects of optical blur, auditory distractors, and age on eye movement patterns while performing a driving hazard perception test (HPT). Twenty young (mean age 27.1 ± 4.6 years) and 20 older (73.3 ± 5.7 years) drivers with normal vision completed a HPT in a repeated-measures counterbalanced design while their eye movements were recorded. Testing was performed under two visual (best-corrected vision and with +2.00DS blur) and two distractor (with and without auditory distraction) conditions. Participants were required to respond to road hazards appearing in the HPT videos of real-world driving scenes and their hazard response times were recorded. Blur and distractors each significantly delayed hazard response time by 0.42 and 0.76 s, respectively (p < 0.05). A significant interaction between age and distractors indicated that older drivers were more affected by distractors than young drivers (response with distractors delayed by 0.96 and 0.60 s, respectively). There were no other two- or three-way interaction effects on response time. With blur, for example, both groups fixated significantly longer on hazards before responding compared to best-corrected vision. In the presence of distractors, both groups exhibited delayed first fixation on the hazards and spent less time fixating on the hazards. There were also significant differences in eye movement characteristics between groups, where older drivers exhibited smaller saccades, delayed first fixation on hazards, and shorter fixation duration on hazards compared to the young drivers. Collectively, the findings of delayed hazard response times and alterations in eye movement patterns with blur and distractors provide further evidence that visual impairment and distractors are independently detrimental to driving safety given that delayed hazard response times are linked to increased crash risk.
Basić-Kes, Vanja; Dobrota, Vesna Dermanović; Cesarik, Marijan; Matovina, Lucija Zadro; Madzar, Zrinko; Zavoreo, Iris; Demarin, Vida
Peripheral facial weakness is facial nerve damage that results in muscle weakness on one side of the face. It may be idiopathic (Bell's palsy) or may have a detectable cause. Almost 80% of peripheral facial weakness cases are primary and the rest are secondary. The most frequent causes of secondary peripheral facial weakness are systemic viral infections, trauma, surgery, diabetes, local infections, tumor, immune disorders, drugs, degenerative diseases of the central nervous system, etc. The diagnosis relies upon the presence of typical signs and symptoms, blood chemistry tests, cerebrospinal fluid investigations, nerve conduction studies, and neuroimaging methods (cerebral MRI, x-ray of the skull and mastoid). Treatment of secondary peripheral facial weakness is based on therapy for the underlying disorder, unlike the treatment of Bell's palsy, which is controversial due to the lack of large, randomized, controlled, prospective studies. There are some indications that steroids or antiviral agents are beneficial, but there are also studies that show no beneficial effect. Additional treatments include eye protection, physiotherapy, acupuncture, botulinum toxin, or surgery. Bell's palsy has a benign prognosis, with complete recovery in about 80% of patients; 15% experience some degree of permanent nerve damage and severe consequences remain in 5% of patients.
Seo, Jeong Jin; Kang, Heoung Keun; Kim, Hyun Ju; Kim, Jae Kyu; Jung, Hyun Ung; Moon, Woong Jae [Chonnam University Medical School, Kwangju (Korea, Republic of)]
To evaluate the usefulness of a three-dimensional volume MR imaging technique for demonstrating the facial nerves, to describe MR findings in facial palsy patients, and to evaluate the significance of facial nerve enhancement. We reviewed the MR images of facial nerves obtained with the three-dimensional volume imaging technique before and after intravenous administration of gadopentetate dimeglumine in 13 cases with facial paralysis and 33 cases without facial palsy, and we analyzed the detectability of the anatomical segments of the intratemporal facial nerves and facial nerve enhancement. When the three-dimensional volume MR images of 46 nerves were analyzed subjectively, the courses of 43 (93%) of the 46 nerves were effectively demonstrated. The internal acoustic canal portion and geniculate ganglion of the facial nerve were well visualized on axial images, and the tympanic and mastoid segments were well depicted on oblique sagittal images. Ten of 13 patients (77%) showed visible enhancement along at least one segment of the facial nerve, with swelling or thickening, and the nerves of 8 of the 33 normal cases (24%) were enhanced without thickening or swelling. The MR finding of facial nerve paralysis is asymmetrical thickening of the facial nerve with contrast enhancement. The three-dimensional volume MR imaging technique should be useful for the evaluation of intratemporal facial nerve disease.
Ning XU; Zhang-yi LIANG; Ming XU; Ying-hua GUAN; Qi-hua HE; Qi-de HAN; Xin-sheng ZHAO; You-yi ZHANG
Aim: To investigate the movement of α1A-adrenergic receptors (α1A-AR) stimulated by the agonist phenylephrine (PE), and the dynamics of receptor movement in real time in single living cells with millisecond resolution. Methods: We labeled α1A-AR using a monoclonal anti-FLAG (a kind of tag) antibody and Cy3-conjugated goat anti-mouse IgG, recorded the trajectory of their transport in living HEK293A cells stimulated by the agonist PE, and then analyzed their dynamic properties. Results: Specific detection of α1A-AR on the surface of living HEK293A-α1A cells was achieved. α1A-AR internalized under the stimulation of PE. After the cells were stimulated with PE for 20 min, apparent colocalization was found between α1A-AR and F-actins. After 40 min of PE stimulation, trajectories of approximately linear motion in HEK293A-α1A cells were recorded, and their velocity was calculated. Conclusion: This specific labeling method on the living cell surface provides a convenient means of real-time detection of the behavior of surface receptors. By this method we were able to specifically detect α1A-AR and record the behavior of individual receptor particles with 50 ms exposure time in real time in single living cells.
Cattaneo, Luigi; Saccani, Elena; De Giampaulis, Piero; Crisi, Girolamo; Pavesi, Giovanni
We investigated the pattern of volitional facial motor deficits in acute stroke patients. We assessed the strength of single facial movements and correlated it to the site of infarct classified on computed tomography scans. Exclusion criteria were previous stroke, cerebral hemorrhage, and subcortical stroke. Results showed that weakness in eyelid closure was associated with anterior cerebral artery (ACA) stroke. Weakness in lip opening was associated with middle cerebral artery (MCA) stroke. We suggest that sparing of upper facial movements in MCA stroke is due to the presence of an upper face motor representation in both the MCA and ACA territories.
Hickey, Amanda; John, Dinesh; Sasaki, Jeffer E; Mavilia, Marianna; Freedson, Patty
There is a need to examine step-counting accuracy of activity monitors during different types of movements. The purpose of this study was to compare activity monitor and manually counted steps during treadmill and simulated free-living activities and to compare the activity monitor steps to the StepWatch (SW) in a natural setting. Fifteen participants performed laboratory-based treadmill (2.4, 4.8, 7.2 and 9.7 km/h) and simulated free-living activities (eg, cleaning room) while wearing an activPAL, Omron HJ720-ITC, Yamax Digi-Walker SW-200, 2 ActiGraph GT3Xs (1 in "low-frequency extension" [AGLFE] and 1 in "normal-frequency" mode), an ActiGraph 7164, and a SW. Participants also wore monitors for 1 day in their free-living environment. Linear mixed models identified differences between activity monitor steps and the criterion in the laboratory/free-living settings. Most monitors performed poorly during treadmill walking at 2.4 km/h. Cleaning a room had the largest errors of all simulated free-living activities. The accuracy was highest for forward/rhythmic movements for all monitors. In the free-living environment, the AGLFE had the largest discrepancy with the SW. This study highlights the need to verify step-counting accuracy of activity monitors with activities that include different movement types/directions. This is important to understand the origin of errors in step-counting during free-living conditions.
Face injuries and disorders can cause pain and affect how you look. In severe cases, they can affect sight, ... your nose, cheekbone and jaw, are common facial injuries. Certain diseases also lead to facial disorders. For ...
Full Text Available Arm movement after the CT scan is a common artifact in PET/CT scanning. Motion artifacts may lead to difficulties in interpreting PET/CT images accurately. We report a 66-year-old male patient with gastric cancer who underwent PET/CT for primary staging. He had a previous history of papillary thyroid cancer. On the PET scan, there were striking cold artifacts at the level of the arms. This is a classical sign of accidental arm motion. A second scan was performed with the arms down due to the history of papillary thyroid cancer. The results are discussed.
Aliakbaryhosseinabadi, Susan; Kamavuako, Ernest Nlandu; Farina, Dario
-invasive electroencephalographic (EEG). Participants were asked to perform a series of cue-based ankle dorsiflexions as the primary task (single task level). In some experimental runs, in addition to the primary task they concurrently attended an auditory oddball paradigm consisting of three tones while they were asked to count... the number of sequences of special tones (dual task level). EEG signals were recorded from nine channels centered on Cz. Analysis of event-related potential (ERP) signals from Cz confirmed that the oddball task decreased the attention to the ankle dorsiflexion significantly. Furthermore, movement...
Full Text Available In this paper, an object detector is proposed based on a convolution/subsampling feature map and a two-level cascade classifier. First, a convolution/subsampling operation alleviates illumination, rotation and noise variances. Then, two classifiers are concatenated to check a large number of windows using a coarse-to-fine strategy. Since the sub-sampled feature map with enhanced pixels was fed into the coarse-level classifier, the checked windows were drastically reduced to a quarter of the original image. The few remaining windows showing detailed data were further checked using a fine-level classifier. In addition to improving the detection process, the proposed mechanism also sped up the training process. Some features generated from the prototypes within the small window were selected and trained to obtain the coarse-level classifier. Moreover, a feature ranking algorithm reduced the large feature pool to a small set, thus speeding up the training process without losing detection performance. The contribution of this paper is twofold: first, the coarse-to-fine scheme shortens both the training and detection processes; second, the feature ranking algorithm reduces training time. Finally, experimental results were obtained for evaluation. From the results, the proposed method was shown to outperform the rapidly performing AdaBoost, as well as forward feature selection methods.
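The coarse-to-fine idea behind such a cascade can be sketched as follows: score every window cheaply on the subsampled map, and re-check only the survivors at full resolution. In this toy sketch, plain mean-brightness scoring functions stand in for the trained coarse- and fine-level classifiers, and all function names, window sizes, and thresholds are illustrative, not taken from the paper.

```python
import numpy as np

def subsample(img, k=2):
    """k x k max-pooling: each output pixel summarises one k x k block,
    so the coarse classifier scans a quarter as many pixels for k=2."""
    h, w = img.shape[0] // k * k, img.shape[1] // k * k
    return img[:h, :w].reshape(h // k, k, w // k, k).max(axis=(1, 3))

def cascade_detect(img, coarse_score, fine_score,
                   win=8, t_coarse=0.5, t_fine=0.8):
    """Two-level cascade: cheap scoring on the subsampled map rejects most
    windows early; survivors are re-scored on the full-resolution image."""
    small = subsample(img)
    half = win // 2
    hits = []
    for y in range(small.shape[0] - half + 1):
        for x in range(small.shape[1] - half + 1):
            if coarse_score(small[y:y + half, x:x + half]) < t_coarse:
                continue  # rejected early: most windows stop here
            Y, X = 2 * y, 2 * x  # map back to full-resolution coordinates
            if fine_score(img[Y:Y + win, X:X + win]) >= t_fine:
                hits.append((X, Y))
    return hits
```

On a synthetic 32x32 image containing a single bright 8x8 patch, only the exactly aligned window survives both stages, while the coarse level silently discards the vast majority of candidates, which is the source of the speed-up the paper reports.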
Full Text Available In interpersonal encounters, individuals often exhibit changes in their own facial expressions in response to emotional expressions of another person. Such changes are often called facial mimicry. While this tendency first appeared to be an automatic tendency of the perceiver to show the same emotional expression as the sender, evidence is now accumulating that situation, person, and relationship jointly determine whether and for which emotions such congruent facial behavior is shown. We review the evidence regarding the moderating influence of such factors on facial mimicry with a focus on understanding the meaning of facial responses to emotional expressions in a particular constellation. From this, we derive recommendations for a research agenda with a stronger focus on the most common forms of encounters, actual interactions with known others, and on assessing potential mediators of facial mimicry. We conclude that facial mimicry is modulated by many factors: attention deployment and sensitivity, detection of valence, emotional feelings, and social motivations. We posit that these are the more proximal causes of changes in facial mimicry due to changes in its social setting.
Reddy, Sashank; Redett, Richard
Facial paralysis can have devastating physical and psychosocial consequences. These are particularly severe in children in whom loss of emotional expressiveness can impair social development and integration. The etiologies of facial paralysis, prospects for spontaneous recovery, and functions requiring restoration differ in children as compared with adults. Here we review contemporary management of facial paralysis with a focus on special considerations for pediatric patients.
Marcelo Coelho Goiato
Full Text Available Several factors, including cancer, malformations, and trauma, may cause large facial mutilations. These functional and aesthetic deformities negatively affect the psychological outlook and quality of life of the mutilated patient. Conventional treatments are prone to fail aesthetically and functionally. The recent introduction of composite tissue allotransplantation (CTA), which uses transplanted facial tissues from healthy donors to restore the damaged or missing facial tissue of mutilated patients, has produced better clinical results. Therefore, the present study aims to conduct a literature review on the relevance and effectiveness of facial transplants in mutilated subjects. It was observed that facial transplants restored both the aesthetics and function of these patients and consequently improved their quality of life.
Full Text Available Head movement during brain Computed Tomography Perfusion (CTP) can deteriorate perfusion analysis quality in acute ischemic stroke patients. We developed a method for automatic detection of CTP datasets with excessive head movement, based on 3D image registration of CTP with non-contrast CT, which provides the transformation parameters. For parameter values exceeding predefined thresholds, the dataset was classified as ‘severely moved’. Threshold values were determined by digital CTP phantom experiments. The automated selection was compared to manual screening by two experienced radiologists for 114 brain CTP datasets. Based on receiver operating characteristics, optimal thresholds were found of 1.0°, 2.8° and 6.9° for pitch, roll and yaw, respectively, and 2.8 mm for z-axis translation. The proposed method had a sensitivity of 91.4% and a specificity of 82.3%. This method allows accurate automated detection of brain CTP datasets that are unsuitable for perfusion analysis.
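The decision rule lends itself to a direct sketch. The threshold values below are the optima reported in the abstract; the function name and parameter keys are assumptions for illustration.

```python
# Optimal thresholds reported in the abstract (degrees / mm)
THRESH = {"pitch": 1.0, "roll": 2.8, "yaw": 6.9, "z_shift": 2.8}

def classify_ctp(params):
    """Flag a CTP dataset as 'severely moved' when any registration
    parameter exceeds its threshold; report which parameters exceeded."""
    exceeded = [k for k, limit in THRESH.items() if abs(params[k]) > limit]
    return ("severely moved" if exceeded else "usable", exceeded)

print(classify_ctp({"pitch": 0.4, "roll": 1.1, "yaw": 2.0, "z_shift": 0.9}))
# a dataset with a 3.5-degree roll would be rejected:
print(classify_ctp({"pitch": 0.4, "roll": 3.5, "yaw": 2.0, "z_shift": 0.9}))
```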
Early and reliable detection of herpes simplex virus type 1 and varicella zoster virus DNAs in oral fluid of patients with idiopathic peripheral facial nerve palsy: Decision support regarding antiviral treatment?
Lackner, Andreas; Kessler, Harald H; Walch, Christian; Quasthoff, Stefan; Raggam, Reinhard B
Idiopathic peripheral facial nerve palsy has been associated with the reactivation of herpes simplex virus type 1 (HSV-1) or varicella zoster virus (VZV). In recent studies, detection rates were found to vary strongly, which may be caused by the use of different oral fluid collection devices in combination with molecular assays lacking standardization. In this single-center pilot study, liquid phase-based and absorption-based oral fluid collection were compared. Samples were collected with both systems from 10 patients with acute idiopathic peripheral facial nerve palsy, 10 with herpes labialis or with Ramsay Hunt syndrome, and 10 healthy controls. Commercially available IVD/CE-labeled molecular assays based on fully automated DNA extraction and real-time PCR were employed. With the liquid phase-based oral fluid collection system, three patients with idiopathic peripheral facial nerve palsy tested positive for HSV-1 DNA and another two tested positive for VZV DNA. All patients with herpes labialis tested positive for HSV-1 DNA, and all patients with Ramsay Hunt syndrome tested positive for VZV DNA. With the absorption-based oral fluid collection system, detection rates and viral loads were found to be significantly lower than those obtained with the liquid phase-based collection system. Collection of oral fluid with a liquid phase-based system and the use of automated and standardized molecular methods allow early and reliable detection of HSV-1 and VZV DNAs in patients with acute idiopathic peripheral facial nerve palsy and may provide valuable decision support regarding the start of antiviral treatment at the first clinical visit.
Full Text Available Abstract Background Facial pain syndromes can be very heterogeneous and need individual diagnosis and treatment. This report describes an interesting case of facial pain associated with eczema and an isolated dyskinesia of the lower facial muscles following dental surgery. Different aspects of the pain, spasms and the eczema will be discussed. Case presentation In this patient, persistent intense pain arose in the lower part of her face following a dental operation. The patient also exhibited dyskinesia of her caudal mimic musculature that was triggered by specific movements. Several attempts at therapy had been unsuccessful. We performed local injections of botulinum toxin type A (BTX-A) into the affected region of the patient's face. Pain relief was immediate following each set of botulinum toxin injections. The follow-up period was 62 weeks. Conclusion Botulinum toxin type A (BTX-A) can be a safe and effective therapy for certain forms of facial pain syndromes.
Full Text Available The low average birth rate in developed countries and the increase in life expectancy have led society to face an ageing population for the first time. This situation, combined with the world economic crisis that started in 2008, forces the need to devise better and more efficient ways of providing quality of life for the elderly. In this context, the solution presented in this work proposes to tackle the problem of monitoring the elderly in a way that is not restrictive for the life of the person monitored, avoiding the need for premature nursing home admissions. To this end, the system uses the fusion of sensory data provided by a network of wireless sensors placed on the periphery of the user. Our approach was also designed with low-cost deployment in mind, so that the target group may be as wide as possible. Regarding the detection of long-term problems, the tests conducted showed that the precision of the system in identifying and discerning body postures and body movements allows for valid monitoring and rehabilitation of the user. Moreover, concerning the detection of accidents, while the proposed solution showed near 100% precision at detecting normal falls, the detection of more complex falls (i.e., hampered falls) will require further study.
Full Text Available This article deals with the topic of transport vehicle identification for dynamic and static transport based on video detection. It explains some of the technologies and approaches necessary for processing specific image information (a transport situation). The paper also describes the design of an algorithm for vehicle detection in a parking lot and the consecutive recording of trajectories into a virtual environment. It shows a new approach to moving-object detection (vehicles, people, and handlers) in an enclosed area, with emphasis on secure parking. The created application enables automatic identification of the trajectories of specific objects moving within the parking area. The application was created in the programming language C++ using the open-source library OpenCV.
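The article's detector is built in C++ with OpenCV; as a minimal, library-free sketch of the underlying idea, simple frame differencing flags pixels that changed between consecutive frames and bounds the moving object. All names, sizes, and thresholds here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def detect_motion(prev, curr, thresh=0.2):
    """Frame differencing: pixels whose intensity changed by more than
    `thresh` are marked as moving; return that region's bounding box."""
    moving = np.abs(curr.astype(float) - prev.astype(float)) > thresh
    if not moving.any():
        return None
    ys, xs = np.nonzero(moving)
    return (ys.min(), xs.min(), ys.max(), xs.max())  # (top, left, bottom, right)

# synthetic parking-lot frames: a "vehicle" moves from column 5 to column 12
prev = np.zeros((40, 60)); prev[10:20, 5:15] = 1.0
curr = np.zeros((40, 60)); curr[10:20, 12:22] = 1.0
box = detect_motion(prev, curr)
```

Collecting box centers over successive frames would give the per-object trajectory that the application records.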
Full Text Available M Hamedi,1 Sh-Hussain Salleh,2 TS Tan,2 K Ismail,2 J Ali,3 C Dee-Uam,4 C Pavaganun,4 PP Yupapin5 1Faculty of Biomedical and Health Science Engineering, Department of Biomedical Instrumentation and Signal Processing, University of Technology Malaysia, Skudai; 2Centre for Biomedical Engineering Transportation Research Alliance; 3Institute of Advanced Photonics Science, Nanotechnology Research Alliance, University of Technology Malaysia (UTM), Johor Bahru, Malaysia; 4College of Innovative Management, Valaya Alongkorn Rajabhat University, Pathum Thani; 5Nanoscale Science and Engineering Research Alliance (N'SERA), Advanced Research Center for Photonics, Faculty of Science, King Mongkut's Institute of Technology Ladkrabang, Bangkok, Thailand. Abstract: The authors present a new method of recognizing different human facial gestures through their neural activities and muscle movements, which can be used in machine-interfacing applications. Human–machine interface (HMI) technology utilizes human neural activities as input controllers for the machine. Recently, much work has been done on the specific application of facial electromyography (EMG)-based HMIs, which have used limited and fixed numbers of facial gestures. In this work, a multipurpose interface is suggested that can support 2–11 control commands that can be applied to various HMI systems. The significance of this work is finding the most accurate facial gestures for any application with a maximum of eleven control commands. Eleven facial gesture EMGs were recorded from ten volunteers. Detected EMGs were passed through a band-pass filter, and root mean square features were extracted. Various combinations of gestures, with a different number of gestures in each group, were made from the existing facial gestures. Finally, all combinations were trained and classified by a fuzzy c-means classifier. In conclusion, the combinations with the highest recognition accuracy in each group were chosen. An average accuracy […]
Hernan F. Garcia
Full Text Available This work presents a framework for emotion recognition based on facial expression analysis, using Bayesian Shape Models (BSM) for facial landmark localization. The facial feature tracking is compliant with the Facial Action Coding System (FACS) and based on the Bayesian Shape Model, which estimates the parameters of the model with an implementation of the EM algorithm. We describe the characterization methodology built on the parametric model and evaluate the accuracy of feature detection and of the estimation of the parameters associated with facial expressions, analyzing its robustness to pose and local variations. Then, a methodology for emotion characterization is introduced to perform the recognition. The experimental results show that the proposed model can effectively detect the different facial expressions, outperforming conventional approaches for emotion recognition and achieving high performance in estimating the emotion present in a given subject. The model and characterization methodology proved efficient, detecting the emotion type in 95.6% of the cases.
Full Text Available Humans use facial expressions to convey personal feelings, and these expressions need to be recognized automatically to design control and interactive applications. Accurate feature extraction is one of the key steps in an automatic facial expression recognition system. Current frequency-domain facial expression recognition systems have not fully utilized the facial elements and muscle movements for recognition. In this paper, the stationary wavelet transform is used to extract features for facial expression recognition because of its good localization characteristics in both the spectral and spatial domains. More specifically, a combination of the horizontal and vertical subbands of the stationary wavelet transform is used, as these subbands contain the muscle movement information for the majority of facial expressions. Feature dimensionality is further reduced by applying the discrete cosine transform to these subbands. The selected features are then passed into a feed-forward neural network trained through the back-propagation algorithm. Average recognition rates of 98.83% and 96.61% are achieved for the JAFFE and CK+ datasets, respectively, and an accuracy of 94.28% is achieved for a locally recorded MS-Kinect dataset. The proposed technique is very promising for facial expression recognition when compared with other state-of-the-art techniques.
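A minimal sketch of the described pipeline, assuming a one-level undecimated Haar analysis as the stationary wavelet transform and an explicit orthonormal DCT-II (the abstract does not specify the wavelet, decomposition level, or feature count, so those choices are assumptions):

```python
import numpy as np

def haar_detail_subbands(img):
    """Undecimated (stationary) one-level Haar analysis: horizontal and
    vertical detail subbands, same size as the input (periodic shift)."""
    h_detail = (img - np.roll(img, 1, axis=1)) / 2.0  # horizontal differences
    v_detail = (img - np.roll(img, 1, axis=0)) / 2.0  # vertical differences
    return h_detail, v_detail

def dct2(block):
    """Orthonormal 2-D DCT-II via explicit cosine matrices (no SciPy)."""
    n = block.shape[0]
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    C = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    C[0, :] = np.sqrt(1.0 / n)
    return C @ block @ C.T

def expression_features(img, keep=4):
    """Concatenate the low-frequency DCT corner of each detail subband,
    reducing dimensionality before the classifier stage."""
    feats = [dct2(sub)[:keep, :keep].ravel() for sub in haar_detail_subbands(img)]
    return np.concatenate(feats)

img = np.random.default_rng(0).random((16, 16))
f = expression_features(img)   # compact feature vector for the network
```

The resulting vector would feed the feed-forward network; a flat (expressionless) image yields all-zero detail subbands and hence an all-zero feature vector.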
Flenady, Vicki; MacPhail, Julie; Gardener, Glenn; Chadha, Yogesh; Mahomed, Kassam; Heazell, Alexander; Fretts, Ruth; Frøen, Frederik
Decreased fetal movement (DFM) is associated with increased risk of adverse pregnancy outcome. However, there is limited research to inform practice in the detection and management of DFM. To identify current practices and views of obstetricians in Australia and New Zealand regarding DFM, a postal survey was conducted of Fellows, Members, and obstetric trainees of the Royal Australian and New Zealand College of Obstetricians and Gynaecologists. Of the 1700 surveys distributed, 1066 (63%) were returned; of these, 805 (76% of responders) were currently practising and were included in the analysis. The majority considered that asking women about fetal movement should be a part of routine care. Sixty per cent reported that maternal perception of DFM for 12 h was sufficient evidence of DFM, and 77% DFM for 24 h. Kick charts were used routinely by 39%, increasing to 66% following an episode of DFM. Alarm limits varied; the most commonly reported was […]. Wide variation in practice for DFM is evident. Large-scale randomised controlled trials are required to identify optimal screening and management options. In the interim, high-quality clinical practice guidelines using the best available advice are needed to enhance consistency in practice, including the advice provided to women.
…trustworthy. Experiment 2 used the eye tracker to assess the effects of the dimorphic cues on the evaluation of facial attractiveness. Results showed that the subjects preferred the masculinized male faces and the feminized female faces obtained through sexual dimorphism. Eye movement tracking showed that average pupil dilation and average fixation count on a male face were significantly higher than on a female face. The first fixation time was significantly longer for the masculine faces than for the feminine ones, but significantly shorter for the male faces than for the female ones. The first fixation time and first fixation duration for masculine faces were both significantly longer than for feminine ones. These indicators of eye movement provide some evidence for the effect of sexual dimorphism on facial attractiveness.
Valle-Melón, J. M.
Full Text Available As in other engineering structures, historic buildings are conditioned by atmospheric changes which affect their size and shape. These effects follow a more or less cyclic pattern and do not normally put the stability of such buildings in jeopardy since they are part of their natural dynamics. Nevertheless, the study of these effects provides valuable information to understand the behavior of both the building and the materials it is made of.
This paper arose from a project for the geometric monitoring of a presumably unstable historic building: the church of Santa María la Blanca in Agoncillo (La Rioja, Spain), which is being observed with conventional surveying equipment. The computations of the different epochs show several movements that can be explained as due to seasonal cycles.
Samadani, Uzma; Ritlop, Robert; Reyes, Marleen; Nehrbass, Elena; Li, Meng; Lamm, Elizabeth; Schneider, Julia; Shimunov, David; Sava, Maria; Kolecki, Radek; Burris, Paige; Altomare, Lindsey; Mehmood, Talha; Smith, Theodore; Huang, Jason H; McStay, Christopher; Todd, S Rob; Qian, Meng; Kondziolka, Douglas; Wall, Stephen; Huang, Paul
Disconjugate eye movements have been associated with traumatic brain injury since ancient times. Ocular motility dysfunction may be present in up to 90% of patients with concussion or blast injury. We developed an algorithm for eye tracking in which the Cartesian coordinates of the right and left pupils are tracked over 200 sec and compared to each other as a subject watches a short film clip moving inside an aperture on a computer screen. We prospectively eye tracked 64 normal healthy noninjured control subjects and compared findings to 75 trauma subjects with either a positive head computed tomography (CT) scan (n=13), negative head CT (n=39), or nonhead injury (n=23) to determine whether eye tracking would reveal the disconjugate gaze associated with both structural brain injury and concussion. Tracking metrics were then correlated to the clinical concussion measure Sport Concussion Assessment Tool 3 (SCAT3) in trauma patients. Five out of five measures of horizontal disconjugacy were increased in positive and negative head CT patients relative to noninjured control subjects. Only one of five vertical disconjugacy measures was significantly increased in brain-injured patients relative to controls. Linear regression analysis of all 75 trauma patients demonstrated that three metrics for horizontal disconjugacy negatively correlated with SCAT3 symptom severity score and positively correlated with total Standardized Assessment of Concussion score. Abnormal eye-tracking metrics improved over time toward baseline in brain-injured subjects observed in follow-up. Eye tracking may help quantify the severity of ocular motility disruption associated with concussion and structural brain injury.
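One plausible reading of a horizontal disconjugacy metric is the variability of the left-right pupil position difference over the recording: conjugate gaze keeps the two pupils moving together, so the difference stays nearly constant. The sketch below is an illustrative assumption, not the authors' exact algorithm.

```python
import numpy as np

def horizontal_disconjugacy(left_x, right_x):
    """Variance of the difference between left- and right-pupil horizontal
    positions over the recording; small for conjugate gaze."""
    return float(np.var(np.asarray(left_x) - np.asarray(right_x)))

t = np.linspace(0, 200, 2000)                 # 200 s of tracked samples
stimulus = np.sin(2 * np.pi * t / 10)         # film clip moving in an aperture
# conjugate gaze: both pupils follow the stimulus with a fixed offset
conjugate = horizontal_disconjugacy(stimulus, stimulus + 0.1)
# disconjugate gaze: one pupil deviates erratically from the other
rng = np.random.default_rng(1)
disconjugate = horizontal_disconjugacy(stimulus,
                                       stimulus + rng.normal(0, 0.5, t.size))
```

Metrics of this kind, computed per subject, are what would then be correlated against SCAT3 scores.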
Mehta, Ritvik P
The management of facial paralysis is one of the most complex areas of reconstructive surgery. Given the wide variety of functional and cosmetic deficits in the facial paralysis patient, the reconstructive surgeon requires a thorough understanding of the surgical techniques available to treat this condition. This review article focuses on the surgical management of facial paralysis and the treatment options available for acute facial paralysis (<3 weeks), facial paralysis of intermediate duration (3 weeks to 2 yr), and chronic facial paralysis (>2 yr). For acute facial paralysis, the main surgical therapies are facial nerve decompression and facial nerve repair. For facial paralysis of intermediate duration, nerve transfer procedures are appropriate. For chronic facial paralysis, treatment typically requires regional or free muscle transfer. Static techniques of facial reanimation can be used for acute, intermediate, or chronic facial paralysis, as these techniques are often important adjuncts to the overall management strategy.
Bell, J A; Stigant, M
If sitting postures influence the risk of developing low back pain, then it is important to quantify sedentary work activities and simultaneously measure lumbar postural characteristics. The objective of this study was to develop a system for identifying activities and their associated lumbar postures using fibre optic goniometers (FOGs). Five student subjects wore two FOGs attached to the lumbar spine and hip for 8 min while being recorded with a video camera when sitting, standing and walking. Observer software was used to code the video recording, enabling the sagittal movement characteristics of each FOG to be described for individual activities. Results indicated that each activity produced unique data and could be independently identified from its motion profile by three raters (k = 1). The data will be used to develop algorithms to automate the process of activity detection. This system has the potential to measure behaviour in non-clinical settings.
Fiorentini, Chiara; Schmidt, Susanna; Viviani, Paolo
We asked whether the identification of emotional facial expressions (FEs) involves the simultaneous perception of the facial configuration or the detection of emotion-specific diagnostic cues. We recorded at high speed (500 frames/s) the unfolding of the FE in five actors, each expressing six emotions (anger, surprise, happiness, disgust, fear, sadness). Recordings were coded every 10 frames (20 ms of real time) with the Facial Action Coding System (FACS; Ekman et al 2002, Salt Lake City, UT: Research Nexus eBook) to identify the facial actions contributing to each expression, and their intensity changes over time. Recordings were shown in slow motion (1/20 of recording speed) to one hundred observers in a forced-choice identification task. Participants were asked to identify the emotion during the presentation as soon as they felt confident to do so. Responses were recorded along with the associated response times (RTs). The RT probability density functions for both correct and incorrect responses were correlated with the facial activity during the presentation. There were systematic correlations between facial activities, response probabilities, and RT peaks, and significant differences in RT distributions for correct and incorrect answers. The results show that a reliable response is possible long before the full FE configuration is reached. This suggests that identification is reached by integrating individual diagnostic facial actions over time, and does not require perceiving the full apex configuration.
Zhang, Bo; Yang, Chuan; Wang, Wei; Li, Wei
We present the surgical techniques and results of cross-facial nerve grafting developed for the repair of ocular-oral synkinesis after facial paralysis. Eleven patients with ocular-oral synkinesis after facial paralysis underwent cross-facial nerve grafting with facial nerve transposition at a tertiary academic hospital between 2003 and 2009. Patient selection for the study was based on the degree of disfigurement and on facial function parameter ratings using the Toronto Facial Grading System. The procedure was performed in two stages. All cases were followed up for 2 months to 6 years after the second surgery, and the degree of improvement was evaluated 6 to 7 months after the procedures. Six of the patients were followed up for more than 2 years after the stage-two surgery and demonstrated significant reduction in the ocular-oral synkinetic movements. The Toronto Facial Grading System scores from the postoperative follow-ups increased by an average of 16 points (28%), and the patients achieved symmetrical facial movement. We conclude that cross-facial nerve grafting with facial nerve branch transposition is effective and can be considered an option for the repair of ocular-oral synkinesis after facial paralysis in selected patients.
The project objectives are: (1) determine for the first time the properties limiting the performance of CZT detectors; (2) develop efficient, non-destructive techniques to measure the quality of detector materials; and (3) provide rapid feedback to crystal growers and, in conjunction with suppliers, improve CZT detector performance as measured by device energy resolution, efficiency, stability and cost. The goal is a stable commercial supply of low-cost, high energy resolution (0.5% FWHM at 662 keV) CZT crystals for detecting, characterizing and imaging nuclear and radiological materials in a wide variety of field conditions.
Haque, Mohammad Ahsanul; Irani, Ramin; Nasrollahi, Kamal
Heartbeat Rate (HR) reveals a person’s health condition. This paper presents an effective system for measuring HR from facial videos acquired in a more realistic environment than the testing environments of current systems. The proposed method utilizes a facial feature point tracking method, combining a ‘Good feature to track’ and a ‘Supervised descent method’, in order to overcome the limitations of currently available facial-video-based HR measuring systems. Such limitations include, e.g., unrealistic restriction of the subject’s movement and artificial lighting during data capture. A face […] in realistic scenarios. Experimental results show that the proposed system outperforms existing video-based systems for HR measurement.
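A common final step in facial-video HR estimation, sketched here as an assumption rather than the authors' method, is to take the dominant spectral peak of a tracked motion or colour signal within the physiological heart-rate band:

```python
import numpy as np

def estimate_hr(signal, fs):
    """Estimate heartbeat rate (beats/min) as the dominant frequency of a
    tracked facial signal within the plausible HR band (0.7-3.0 Hz)."""
    signal = signal - np.mean(signal)
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= 0.7) & (freqs <= 3.0)       # 42-180 bpm
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

fs = 30.0                                        # typical webcam frame rate
t = np.arange(0, 20, 1 / fs)                     # 20 s of video
trace = 0.05 * np.sin(2 * np.pi * 1.2 * t)       # 1.2 Hz pulse = 72 bpm
trace += 0.01 * np.random.default_rng(2).normal(size=t.size)  # tracking noise
bpm = estimate_hr(trace, fs)
```

Restricting the search to the physiological band is what makes the estimate robust to slow head drift and high-frequency tracking jitter.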
Bhama, Prabhat K; Hadlock, Tessa A
The facial nerve is the most commonly paralyzed nerve in the human body. Facial paralysis affects aesthetic appearance, and it has a profound effect on function and quality of life. Management of patients with facial paralysis requires a multidisciplinary approach, including otolaryngologists, plastic surgeons, ophthalmologists, and physical therapists. Regardless of etiology, patients with facial paralysis should be evaluated systematically, with initial efforts focused upon establishing proper diagnosis. Management should proceed with attention to facial zones, including the brow and periocular region, the midface and oral commissure, the lower lip and chin, and the neck. To effectively compare contemporary facial reanimation strategies, it is essential to employ objective intake assessment methods, and standard reassessment schemas during the entire management period.
Potgieser, Adriaan R E; van Dijk, J Marc C; Elting, Jan Willem J; de Koning-Tijssen, Marina A J
Facial tics and spasms are socially incapacitating, but effective treatment is often available. The clinical picture is sufficient for distinguishing between the different diseases that cause this affliction. We describe three cases of patients with facial tics or spasms: one case of tics, which are familiar to many physicians; one case of blepharospasm; and one case of hemifacial spasm. We discuss the differential diagnosis and the treatment possibilities for facial tics and spasms. Early diagnosis and treatment are important because of the associated social incapacitation. Botulinum toxin should be considered as a treatment option for facial tics, and a curative neurosurgical intervention should be considered for hemifacial spasms.
Zakrzewska, Joanna M; Jensen, Troels S
Premise Facial pain refers to a heterogeneous group of clinically and etiologically different conditions with the common clinical feature of pain in the facial area. Among these conditions, trigeminal neuralgia (TN), persistent idiopathic facial pain, temporomandibular joint pain, and trigeminal...
Full Text Available An experimental technique has been developed for measuring and visualizing strain distribution on facial skin. A stereovision technique based on digital image correlation is employed for obtaining the displacement distribution on the human face. Time-variation of the movement of the facial skin surface is obtained from consecutive images obtained using a pair of high-speed cameras. The strains on the facial skin surface are then obtained from the measured displacements. The performance of the developed system is demonstrated by applying it to the measurement of the strain on facial skin during the production of sound. Results show that the strains on facial skin can be visualized. Further discussion on the relationship between the creation of wrinkles and strains is possible with the help of the developed system.
Full Text Available In order to reliably detect changes in the surficial morphology of a landslide, measurements performed at the different epochs being compared have to comply with certain characteristics, such as allowing the reconstruction of the surface from the acquired points and a resolution sufficiently high to provide a proper description of details. A Terrestrial Laser Scanning survey makes it possible to acquire large amounts of data and therefore potentially reveals even small details of a landslide. With appropriate additional field measurements, point clouds can be referenced to a common reference system with high accuracy, so that scans effectively share the same system. In this note we present the monitoring of a large landslide by two surveys carried out two years apart. The adopted reference frame consists of a network of GNSS (Global Navigation Satellite Systems) permanent stations, which constitutes a system of controlled stability over time. Knowledge of the shape of the surface comes from the generation of a DEM (Digital Elevation Model). Several algorithms for DEM generation are compared, and the analysis is performed by evaluating statistical parameters using cross-validation. In general, the evaluation of mass displacements that occurred between the two surveys is possible by differencing the corresponding DEMs, but there then arises the need to distinguish the different behaviors of the various landslide bodies that may be present on the slope. Here, landslide body identification has been carried out using geomorphological criteria, also making use of DEM-derived products such as contour maps, slope maps, and aspect maps.
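The DEM differencing described above can be sketched as follows; the noise threshold, cell size, and synthetic depletion/accumulation zones are illustrative assumptions, not values from the study:

```python
import numpy as np

def dem_change(dem_old, dem_new, cell_size, noise=0.05):
    """Difference two co-registered DEMs; mask changes below the survey
    noise level and report the net volume change (m^3, negative = loss)."""
    dz = dem_new - dem_old
    dz = np.where(np.abs(dz) > noise, dz, 0.0)   # suppress measurement noise
    volume = float(dz.sum()) * cell_size ** 2
    return dz, volume

# synthetic 50 m x 50 m slope at 1 m resolution
old = np.full((50, 50), 100.0)
new = old.copy()
new[10:20, 10:20] -= 2.0     # depletion zone loses 2 m of material
new[30:40, 30:40] += 1.5     # accumulation zone gains 1.5 m
dz, vol = dem_change(old, new, cell_size=1.0)
```

Segmenting the dz grid by landslide body (here the two square zones; in the study, geomorphological criteria) then allows reporting displacement per body rather than a single slope-wide figure.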
Poole, Kate; Herget, Regina; Lapatsina, Liudmila; Ngo, Ha-Duong; Lewin, Gary R
In sensory neurons, mechanotransduction is sensitive, fast and requires mechanosensitive ion channels. Here we develop a new method to directly monitor mechanotransduction at defined regions of the cell-substrate interface. We show that molecular-scale (~13 nm) displacements are sufficient to gate mechanosensitive currents in mouse touch receptors. Using neurons from knockout mice, we show that displacement thresholds increase by one order of magnitude in the absence of stomatin-like protein 3 (STOML3). Piezo1 is the founding member of a class of mammalian stretch-activated ion channels, and we show that STOML3, but not other stomatin-domain proteins, brings the activation threshold for Piezo1 and Piezo2 currents down to ~10 nm. Structure-function experiments localize the Piezo modulatory activity of STOML3 to the stomatin domain, and higher-order scaffolds are a prerequisite for function. STOML3 is the first potent modulator of Piezo channels that tunes the sensitivity of mechanically gated channels to detect molecular-scale stimuli relevant for fine touch.
Lena Rachel Quinto
Full Text Available Singing involves vocal production accompanied by a dynamic and meaningful use of facial expressions, which may serve as ancillary gestures that complement, disambiguate, or reinforce the acoustic signal. In this investigation, we examined the use of facial movements to communicate emotion, focusing on movements arising in three epochs: before vocalisation (pre-production), during vocalisation (production), and immediately after vocalisation (post-production). The stimuli were recordings of seven vocalists’ facial movements as they sang short (14-syllable) melodic phrases with the intention of communicating happiness, sadness, irritation, or no emotion. Facial movements were presented as point-light displays to 16 observers who judged the emotion conveyed. Experiment 1 revealed that the accuracy of emotional judgement varied with singer, emotion and epoch. Accuracy was highest in the production epoch; however, happiness was well communicated in the pre-production epoch. In Experiment 2, observers judged point-light displays of exaggerated movements. The ratings suggested that the extent of facial and head movements is largely perceived as a gauge of emotional arousal. In Experiment 3, observers rated point-light displays of scrambled movements. Configural information was removed in these stimuli but velocity and acceleration were retained. Exaggerated scrambled movements were likely to be associated with happiness or irritation, whereas unexaggerated scrambled movements were more likely to be identified as neutral. An analysis of the motions of singers revealed systematic changes in facial movement as a function of the emotional intentions of singers. The findings confirm the central role of facial expressions in vocal emotional communication, and highlight individual differences between singers in the amount and intelligibility of facial movements made before, during, and after vocalisation.
Quinto, Lena R; Thompson, William F; Kroos, Christian; Palmer, Caroline
Singing involves vocal production accompanied by a dynamic and meaningful use of facial expressions, which may serve as ancillary gestures that complement, disambiguate, or reinforce the acoustic signal. In this investigation, we examined the use of facial movements to communicate emotion, focusing on movements arising in three epochs: before vocalization (pre-production), during vocalization (production), and immediately after vocalization (post-production). The stimuli were recordings of seven vocalists' facial movements as they sang short (14 syllable) melodic phrases with the intention of communicating happiness, sadness, irritation, or no emotion. Facial movements were presented as point-light displays to 16 observers who judged the emotion conveyed. Experiment 1 revealed that the accuracy of emotional judgment varied with singer, emotion, and epoch. Accuracy was highest in the production epoch, however, happiness was well communicated in the pre-production epoch. In Experiment 2, observers judged point-light displays of exaggerated movements. The ratings suggested that the extent of facial and head movements was largely perceived as a gauge of emotional arousal. In Experiment 3, observers rated point-light displays of scrambled movements. Configural information was removed in these stimuli but velocity and acceleration were retained. Exaggerated scrambled movements were likely to be associated with happiness or irritation whereas unexaggerated scrambled movements were more likely to be identified as "neutral." An analysis of singers' facial movements revealed systematic changes as a function of the emotional intentions of singers. The findings confirm the central role of facial expressions in vocal emotional communication, and highlight individual differences between singers in the amount and intelligibility of facial movements made before, during, and after vocalization.
Full Text Available The facial nerve can be damaged at a peripheral level by a stroke or, for example, by trauma or infection within the face or the ear. In these cases the facial muscles are paralysed with little or no chance of spontaneous recovery. This research focuses on the potential utilisation of a shape memory alloy (SMA) to replace the function of the facial nerve, which will allow, in conjunction with passive reconstructive methods, a patient to regain limited but active movement of the mouth corner. Paralysis of the mouth corner is very disabling, both functionally and cosmetically: speech and swallowing are hampered and the patient loses saliva, which presents a social problem.
Fritz, Michael; Rolfes, Bryan N
Treatment of advanced parotid or cutaneous malignancies often requires sacrifice of the facial nerve as well as resection of the parotid gland and surrounding structures. In addition to considerations regarding reinnervation and dynamic reanimation, reconstruction in this setting must take into account unique factors such as soft tissue volume deficits and the high likelihood of adjunctive radiation therapy. Furthermore, considerations of patient comorbidities including advanced age and poor long-term prognosis often influence reconstructive modality. The optimal reconstructive technique would provide potential for restoration of facial tone and voluntary movement as well as immediate restoration of facial support and function. Beyond considerations of facial movement and rest position, restoration of lost soft tissue volume is critical to obtain facial symmetry. To control long-term volume in the setting of adjunctive radiation therapy, vascularized tissue is required. In this chapter, we describe a comprehensive approach to the management of radical parotidectomy and similar facial defects that addresses these concerns and also describes management strategies over time. Specific techniques employed include anterolateral thigh free flaps, nerve grafting utilizing motor nerves to the vastus lateralis muscle, and orthodromic temporalis tendon transfer. Further considerations relative to the eye, forehead, and long-term facial refinement are also discussed.
This is a report of two patients with isolated facial talon cusps. One occurred on a permanent mandibular central incisor; the other on a permanent maxillary canine. The locations of these talon cusps suggest that the definition of a talon cusp should be extended to include teeth beyond the incisor group, and to include the facial aspect of teeth.
Jiang, Bihan; Valstar, Michel; Martinez, Brais; Pantic, Maja
Both the configuration and the dynamics of facial expressions are crucial for the interpretation of human facial behavior. Yet to date, the vast majority of reported efforts in the field either do not take the dynamics of facial expressions into account, or focus only on prototypic facial expressions of six basic emotions. Facial dynamics can be explicitly analyzed by detecting the constituent temporal segments in Facial Action Coding System (FACS) Action Units (AUs): onset, apex, and offset. In this paper, we present a novel approach to explicit analysis of temporal dynamics of facial actions using the dynamic appearance descriptor Local Phase Quantization from Three Orthogonal Planes (LPQ-TOP). Temporal segments are detected by combining a discriminative classifier for detecting the temporal segments on a frame-by-frame basis with Markov models that enforce temporal consistency over the whole episode. The system is evaluated in detail over the MMI facial expression database, the UNBC-McMaster pain database, the SAL database, and the GEMEP-FERA dataset in database-dependent experiments, and in cross-database experiments using the Cohn-Kanade and SEMAINE databases. The comparison with other state-of-the-art methods shows that the proposed LPQ-TOP method outperforms the other approaches for the problem of AU temporal segment detection, and that overall AU activation detection benefits from dynamic appearance information.
... don't want them to. If you have a movement disorder, you experience these kinds of impaired movement. Dyskinesia ... movement and is a common symptom of many movement disorders. Tremors are a type of dyskinesia. Nerve diseases ...
Vullings, R; Mischi, M
Reduced fetal movement is an important parameter to assess fetal distress. Currently, no suitable methods are available that can objectively assess fetal movement during pregnancy. Fetal vectorcardiographic (VCG) loop alignment could be such a method. In general, the goal of VCG loop alignment is to correct for motion-induced changes in the VCGs of (multiple) consecutive heartbeats. However, the parameters used for loop alignment also provide information to assess fetal movement. Unfortunately, current methods for VCG loop alignment are not robust against low-quality VCG signals. In this paper, a more robust method for VCG loop alignment is developed that includes a priori information on the loop alignment, yielding a maximum a posteriori loop alignment. Classification, based on movement parameters extracted from the alignment, is subsequently performed using support vector machines, resulting in correct classification of (absence of) fetal movement in about 75% of cases. After additional validation and optimization, this method can possibly be employed for continuous fetal movement monitoring.
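The final classification step described in the abstract above (movement parameters extracted from VCG loop alignment, fed to a support vector machine) might be sketched as follows. This is a hedged illustration only: the three alignment features and the synthetic data are assumptions, not the paper's actual parameters.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Assumed per-heartbeat alignment features: rotation angle, scaling factor,
# and residual alignment error between consecutive VCG loops.
n = 200
moving = rng.normal(loc=[0.4, 1.2, 0.3], scale=0.15, size=(n, 3))
still = rng.normal(loc=[0.1, 1.0, 0.1], scale=0.15, size=(n, 3))

X = np.vstack([moving, still])
y = np.array([1] * n + [0] * n)   # 1 = fetal movement, 0 = no movement

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
accuracy = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated accuracy: {accuracy:.2f}")
```

The point is the pipeline shape (alignment parameters in, per-segment movement label out), not the particular accuracy figure.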
Stoiber, Nicolas; Breton, Gaspard; Seguier, Renaud
Modern modeling and rendering techniques have produced nearly photorealistic face models, but truly expressive digital faces also require natural-looking movements. Virtual characters in today's applications often display unrealistic facial expressions. Indeed, facial animation with traditional schemes such as keyframing and motion capture demands expertise. Moreover, the traditional schemes aren't adapted to interactive applications that require the real-time generation of context-dependent movements. A new animation system produces realistic expressive facial motion at interactive speed. The system relies on a set of motion models controlling facial-expression dynamics. The models are fitted on captured motion data and therefore retain the dynamic signature of human facial expressions. They also contain a nondeterministic component that ensures the variety of the long-term visual behavior. This system can efficiently animate any synthetic face. The video illustrates interactive use of a system that generates facial-animation sequences.
Facial shape transformation described by facial animation parameters (FAPs) involves the dynamic movement or deformation of eyes, brows, mouth, and lips, while detailed facial appearance concerns the facial textures such as creases, wrinkles, etc. Video-based facial animation exhibits not only facial shape transformation but also detailed appearance updates. In this paper, a novel algorithm for effectively extracting FAPs from video is proposed. Our system adopts the ICA-enforced direct appearance model (DAM) to track faces from video sequences; then, FAPs are extracted from every frame of the video based on an extended model of Wincandidate 3.1. Facial appearance details are transformed from each frame by mapping an expression ratio image to the original image. We adopt wavelets to synthesize expressive details by combining the low-frequency signals of the original face and high-frequency signals of the expressive face from each frame of the video. Experimental results show that our proposed algorithm is suitable for reproducing realistic, expressive facial animations.
Sato, Wataru; Yoshikawa, Sakiko
Based on previous neuroscientific evidence indicating activation of the mirror neuron system in response to dynamic facial actions, we hypothesized that facial mimicry would occur while subjects viewed dynamic facial expressions. To test this hypothesis, dynamic/static facial expressions of anger/happiness were presented using computer-morphing…
Ma, Ming-San; van der Hoeven, Johannes H.; Nicolai, Jean-Philippe A.; Meek, Marcel F.
Facial synkinesis (or synkinesia) (FS) occurs frequently after paresis or paralysis of the facial nerve and is in most cases due to aberrant regeneration of (branches of) the facial nerve. Patients suffer from inappropriate and involuntary synchronous facial muscle contractions. Here we describe two
Lund, H.; Juul-Kristensen, B.; Hansen, K.;
The purpose of this study was to clarify whether osteoarthritis (OA) patients have a localized or a generalized reduction in proprioception. Twenty-one women with knee OA (mean age [SD]: 57.1 [12.0] years) and 29 healthy women (mean age [SD]: 55.3 [10.1] years) had their joint position sense (JPS) and threshold to detection of a passive movement (TDPM) measured in both knees and elbows. JPS was measured as the participant's ability to actively reproduce the position of the elbow and knee joints. TDPM was measured as the participant's ability to recognize a passive motion of the elbow and knee joints. In the right elbow, OA patients differed significantly from healthy participants (AE: 2.15° [0.20°] versus 1.45° [0.15°], p = 0.011). No significant difference between healthy women and OA patients was observed in the left elbow for TDPM or JPS. The present age-controlled, cross-sectional study suggests …
McCarty, David E; Kim, Paul Y; Frilot, Clifton; Chesson, Andrew L; Marino, Andrew A
The strong associations of rapid eye movement (REM) sleep with dreaming and memory consolidation imply the existence of REM-specific brain electrical activity, notwithstanding the visual similarity of the electroencephalograms (EEGs) in REM and wake states. Our goal was to detect REM sleep by means of algorithmic analysis of the EEG. We postulated that novel depth and fragmentation variables, defined in relation to temporal changes in the signal (recurrences), could be statistically combined to allow disambiguation of REM epochs. The cohorts studied were consecutive patients with obstructive sleep apnea (OSA) recruited from a sleep medicine clinic, and clinically normal participants selected randomly from a national database (N = 20 in each cohort). Individual discriminant analyses were performed, for each subject based on 4 recurrence biomarkers, and used to classify every 30-second epoch in the subject's overnight polysomnogram as REM or NotREM (wake or any non-REM sleep stage), using standard clinical staging as ground truth. The primary outcome variable was the accuracy of algorithmic REM classification. Average accuracies of 90% and 87% (initial and cross-validation analyses) were achieved in the OSA cohort; corresponding results in the normal cohort were 87% and 85%. Analysis of brain recurrence allowed identification of REM sleep, disambiguated from wake and all other stages, using only a single EEG lead, in subjects with or without OSA.
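The per-subject classification described above (a discriminant analysis over four recurrence biomarkers, labeling each 30-second epoch REM or NotREM) can be sketched roughly as below. The biomarker values are synthetic stand-ins; the paper's exact depth and fragmentation variables are not reproduced here.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(42)

# Four recurrence-based features per 30-second epoch (synthetic stand-ins
# for the depth and fragmentation variables).
n_epochs = 300
rem = rng.normal(loc=[1.0, 0.8, 0.6, 1.2], scale=0.3, size=(n_epochs, 4))
not_rem = rng.normal(loc=[0.4, 0.3, 1.1, 0.7], scale=0.3, size=(n_epochs, 4))

X = np.vstack([rem, not_rem])
y = np.array(["REM"] * n_epochs + ["NotREM"] * n_epochs)

lda = LinearDiscriminantAnalysis().fit(X, y)   # one model per subject
accuracy = lda.score(X, y)
print(f"epoch classification accuracy: {accuracy:.2f}")
```

Fitting one discriminant model per subject, as the abstract describes, sidesteps inter-individual EEG variability at the cost of needing labeled epochs for each new recording.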
Full Text Available The present study explored the ability of expert and novice chess players to rapidly distinguish between regions of a chessboard that were relevant to the best move on the board, and regions of the board that were irrelevant. Accordingly, we monitored the eye movements of expert and novice chess players while they selected White's best move for a variety of chess problems. To manipulate relevancy, we constructed two different versions of each chess problem in the experiment, and we counterbalanced these versions across participants. These two versions of each problem were identical except that a single piece was changed from a bishop to a knight. This subtle change reversed the relevancy map of the board, such that regions that were relevant in one version of the board were now irrelevant (and vice versa). Using this paradigm, we demonstrated that both the experts and novices spent more time fixating the relevant relative to the irrelevant regions of the board. However, the experts were faster at detecting relevant information than the novices, as shown by the finding that experts (but not novices) were able to distinguish between relevant and irrelevant information during the early part of the trial. These findings further demonstrate the domain-related perceptual processing advantage of chess experts, using an experimental paradigm that allowed us to manipulate relevancy under tightly controlled conditions.
Crysdale, W S
Congenital hearing loss occurs in association with cranio-facial anomalies. Lay people, and frequently health professionals as well, regard individuals with cranio-facial anomalies as "stupid" or of lower than normal intelligence because of their odd appearance. Two case reports illustrate that this erroneous assumption can result in the delayed detection of significant hearing loss.
Full Text Available This article describes the oral rehabilitation of an 8-year-old girl with extensively affected primary and permanent dentition. This report is unique in that distinct dental anomalies, including enamel hypoplasia, irregular dentin formation, taurodontism, hypodontia, and dens in dente, accompany unilateral disturbance of the abducens and facial nerves, which control lateral eye movement and facial expression, respectively. Keywords: enamel hypoplasia; irregular dentin formation; taurodontism; hypodontia; dens in dente; abducens and facial nerves
Precise facial feature extraction is essential to high-level face recognition and expression analysis. This paper presents a novel method for real-time geometric facial feature extraction from live video. In this paper, the input image is viewed as a weighted graph. The segmentation of the pixels corresponding to the edges of facial components of the mouth, eyes, brows, and nose is implemented by means of random walks on the weighted graph. The graph has an 8-connected lattice structure and the weight value associated with each edge reflects the likelihood that a random walker will cross that edge. The random walks simulate an anisotropic diffusion process that filters out the noise while preserving the facial expression pixels. The seeds for the segmentation are obtained from a color and motion detector. The segmented facial pixels are represented with linked lists in the original geometric form and grouped into different parts corresponding to facial components. For the convenience of implementing high-level vision, the geometric description of facial component pixels is further decomposed into shape and registration information. Shape is defined as the geometric information that is invariant under the registration transformation, such as translation, rotation, and isotropic scale. Statistical shape analysis is carried out to capture global facial features, where the Procrustes shape distance measure is adopted. A Bayesian approach is used to incorporate high-level prior knowledge of face structure. Experimental results show that the proposed method is capable of real-time extraction of precise geometric facial features from live video. The feature extraction is robust against illumination changes, scale variation, head rotations, and hand interference.
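The Procrustes shape distance mentioned above removes translation, isotropic scale, and rotation before comparing landmark configurations. A minimal numpy sketch, assuming 2-D landmark arrays:

```python
import numpy as np

def procrustes_distance(X, Y):
    """Ordinary Procrustes distance between two (k, 2) landmark arrays."""
    X = X - X.mean(axis=0)               # remove translation
    Y = Y - Y.mean(axis=0)
    X = X / np.linalg.norm(X)            # remove isotropic scale
    Y = Y / np.linalg.norm(Y)
    U, _, Vt = np.linalg.svd(Y.T @ X)    # optimal orthogonal alignment (SVD)
    R = U @ Vt
    return float(np.linalg.norm(X - Y @ R))

# A square compared with a rotated, scaled, shifted copy of itself: the
# shape is identical, so the distance is ~0.
square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
theta = np.pi / 4
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta), np.cos(theta)]])
copy = 2.5 * square @ rot.T + 7.0
d = procrustes_distance(square, copy)
print(f"shape distance: {d:.2e}")
```

Because the alignment is solved in closed form via the SVD, the measure is invariant to similarity transforms, which is exactly the registration information the method factors out.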
Bandini, Andrea; Orlandi, Silvia; Escalante, Hugo Jair; Giovannelli, Fabio; Cincotta, Massimo; Reyes-Garcia, Carlos A; Vanni, Paola; Zaccara, Gaetano; Manfredi, Claudia
The automatic analysis of facial expressions is an evolving field that finds several clinical applications. One of these applications is the study of facial bradykinesia in Parkinson's disease (PD), which is a major motor sign of this neurodegenerative illness. Facial bradykinesia consists in the reduction/loss of facial movements and emotional facial expressions, called hypomimia. In this work we propose an automatic method for studying facial expressions in PD patients relying on video-based analysis. METHODS: 17 Parkinsonian patients and 17 healthy control subjects were asked to show basic facial expressions, upon request of the clinician and after the imitation of a visual cue on a screen. Through an existing face tracker, the Euclidean distance of the facial model from a neutral baseline was computed in order to quantify the changes in facial expressivity during the tasks. Moreover, an automatic facial expression recognition algorithm was trained in order to study how PD expressions differed from the standard expressions. Results show that control subjects reported on average higher distances than PD patients along the tasks. This confirms that control subjects show larger movements during both posed and imitated facial expressions. Moreover, our results demonstrate that anger and disgust are the two most impaired expressions in PD patients. Contactless video-based systems can be important techniques for analyzing facial expressions also in rehabilitation, in particular speech therapy, where patients could get a definite advantage from real-time feedback about the proper facial expressions/movements to perform. Copyright © 2017 Elsevier B.V. All rights reserved.
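The expressivity measure described above (Euclidean distance of the tracked facial model from a neutral baseline) reduces to a few lines; the 68-landmark layout and the synthetic motion below are illustrative assumptions, not the paper's tracker output:

```python
import numpy as np

def expressivity(frames, neutral):
    """Mean landmark displacement from the neutral face, per frame.

    frames:  (n_frames, n_landmarks, 2) tracked landmark positions
    neutral: (n_landmarks, 2) neutral-baseline landmark positions
    """
    return np.linalg.norm(frames - neutral, axis=2).mean(axis=1)

neutral = np.zeros((68, 2))                         # hypothetical 68-point model
smile = np.zeros((5, 68, 2))
smile[:, 48:68, 1] = np.linspace(0, 4, 5)[:, None]  # mouth landmarks drift

curve = expressivity(smile, neutral)
print(curve)  # displacement grows across the 5 frames
```

A flatter curve for one group than another would correspond to the reduced expressivity (hypomimia) the study reports.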
Recently, a new assessment technique by which to evaluate brain function in the fetus and newborn infant has been developed. The method is based on the assessment of the quality of General Movements (GMs). GMs are complex movements involving all parts of the body. They are present throughout fetal life …
El-Hori, Inas H.; El-Momen, Zahraa K.; Ganoun, Ali
This paper explores and compares techniques for automatically recognizing facial actions in sequences of images. A comparative study of Facial Expression Recognition (FER) techniques, namely Principal Component Analysis (PCA) and PCA with Gabor filters (GF), is presented. The objective of this research is to show that PCA with Gabor filters is superior to PCA alone in terms of recognition rate. To test and evaluate their performance, experiments were performed on a real database with both techniques. The five principal emotions to be recognized are Happy, Sad, Disgust, and Angry, along with Neutral. Recognition rates were obtained for all facial expressions.
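An eigenface-style PCA baseline of the kind compared above can be sketched as follows. The data are random stand-ins for real expression images, and the Gabor-filter preprocessing variant is omitted; only the PCA-plus-classifier shape of the pipeline is shown:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)

emotions = ["happy", "sad", "disgust", "angry", "neutral"]
# 40 fake 16x16 "images" per class, each class scattered around its own template
templates = rng.normal(size=(5, 256))
X = np.vstack([t + 0.3 * rng.normal(size=(40, 256)) for t in templates])
y = np.repeat(emotions, 40)

model = make_pipeline(PCA(n_components=20), KNeighborsClassifier(n_neighbors=3))
model.fit(X, y)
rate = model.score(X, y)   # recognition rate on the training images
print(f"recognition rate: {rate:.2f}")
```

The Gabor variant would insert a filter-bank feature extraction step before the PCA projection, which is what the abstract credits for the improved recognition rate.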
Ibáñez, J.; Serrano, J. I.; del Castillo, M. D.; Monge-Pereira, E.; Molina-Rueda, F.; Alguacil-Diego, I.; Pons, J. L.
Objective. Characterizing the intention to move by means of electroencephalographic activity can be used in rehabilitation protocols with patients' cortical activity taking an active role during the intervention. In such applications, the reliability of the intention estimation is critical both in terms of specificity (number of misclassifications) and temporal accuracy. Here, a detector of the onset of voluntary upper-limb reaching movements based on the cortical rhythms and the slow cortical potentials is proposed. The improvement in detections due to the combination of these two cortical patterns is also studied. Approach. Upper-limb movements and cortical activity were recorded in healthy subjects and stroke patients performing self-paced reaching movements. A logistic regression combined the output of two classifiers: (i) a naïve Bayes classifier trained to detect the event-related desynchronization preceding the movement onset and (ii) a matched filter detecting the Bereitschaftspotential. The proposed detector was compared with the detectors using each of these cortical patterns separately. In addition, differences between the patients and healthy subjects were analysed. Main results. On average, 74.5 ± 13.8% and 82.2 ± 10.4% of the movements were detected, with 1.32 ± 0.87 and 1.50 ± 1.09 false detections generated per minute in the healthy subjects and the patients, respectively. A significantly better performance was achieved by the combined detector (as compared to the detectors of the two cortical patterns separately) in terms of true detections (p = 0.099) and false positives (p = 0.0083). Significance. A rationale is provided for combining information from cortical rhythms and slow cortical potentials to detect the onsets of voluntary upper-limb movements. It is demonstrated that the two cortical processes supply complementary information that can be summed up to boost the performance of the detector. Successful results have also been
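The fusion idea above (a logistic regression over the outputs of the ERD classifier and the Bereitschaftspotential matched filter) can be illustrated with simulated detector scores; both score streams below are synthetic, not real EEG-derived outputs:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 500

y = rng.integers(0, 2, size=n)                  # 1 = movement-onset window
erd_score = y + rng.normal(scale=0.8, size=n)   # simulated ERD classifier score
bp_score = y + rng.normal(scale=0.8, size=n)    # simulated matched-filter score

X = np.column_stack([erd_score, bp_score])
fused = LogisticRegression().fit(X, y)

single = ((erd_score > 0.5) == y).mean()        # one detector, fixed threshold
combined = fused.score(X, y)                    # fused detector
print(f"single detector: {single:.2f}, fused: {combined:.2f}")
```

The design point is that two noisy but complementary scores, combined by a simple learned rule, tend to yield fewer errors than either score thresholded alone.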
Mehta, Ritvik P.
The management of facial paralysis is one of the most complex areas of reconstructive surgery. Given the wide variety of functional and cosmetic deficits in the facial paralysis patient, the reconstructive surgeon requires a thorough understanding of the surgical techniques available to treat this condition. This review article will focus on the surgical management of facial paralysis and the treatment options available for acute facial paralysis (<2 yr) and chronic facial paralysis (>2 yr). For acute facial paralysis, the main surgi...
Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja
Facial-expression data often appear in multiple views either due to head-movements or the camera position. Existing methods for multi-view facial expression recognition perform classification of the target expressions either by using classifiers learned separately for each view or by using a single
Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja
Images of facial expressions are often captured from various views as a result of either head movements or variable camera position. Existing methods for multiview and/or view-invariant facial expression recognition typically perform classification of the observed expression using either classifiers
In recent years, facial recognition has drawn much attention due to its wide range of potential applications. As a unique technology in biometric identification, facial recognition represents a significant improvement, since it can operate without the cooperation of the people under detection. Hence, facial recognition can be applied to defense systems, medical detection, human behavior understanding, etc. Several theories and methods have been established to make progress in facial recognition: (1) a novel two-stage facial landmark localization method is proposed, which achieves more accurate localization on specific databases; (2) a statistical face frontalization method is proposed, which outperforms state-of-the-art methods for face landmark localization; (3) a general facial landmark detection algorithm is proposed to handle images with severe occlusion and images with large head poses; (4) three methods are proposed for face alignment, including a shape-augmented regression method, a pose-indexed multi-view method, and a learning-based method that regresses local binary features. The aim of this paper is to analyze previous work on different aspects of facial recognition, focusing on concrete methods and performance on various databases. In addition, some improvement measures and suggestions for potential applications are put forward.
Forssell, Heli; Alstergren, Per; Bakke, Merete
Persistent facial pains, especially temporomandibular disorders (TMD), are common conditions. As dentists are responsible for the treatment of most of these disorders, up-to-date knowledge on the latest advances in the field is essential for successful diagnosis and management. The review covers TMD, and different neuropathic or putative neuropathic facial pains such as persistent idiopathic facial pain and atypical odontalgia, trigeminal neuralgia and painful posttraumatic trigeminal neuropathy. The article presents an overview of TMD pain as a biopsychosocial condition, its prevalence, clinical features, consequences, central and peripheral mechanisms, diagnostic criteria (DC/TMD), and principles of management. For each of the neuropathic facial pain entities, the definitions, prevalence, clinical features, and diagnostics are described. The current understanding of the pathophysiology …
Central nervous system abnormalities in midline facial defects with hypertelorism detected by magnetic resonance imaging and computed tomography
Vera Lúcia Gil-da-Silva-Lopes
Full Text Available The aim of this study was to describe and compare structural central nervous system (CNS) anomalies detected by magnetic resonance imaging (MRI) and computed tomography (CT) in individuals affected by midline facial defects with hypertelorism (MFDH), isolated or associated with multiple congenital anomalies (MCA). The investigation protocol included dysmorphological examination, skull and facial X-rays, and brain CT and/or MRI. We studied 24 individuals, 12 of whom had an isolated form (Group I) and the others MCA of unknown etiology (Group II). There was no significant difference between Groups I and II, and the results are presented together. In addition to the several CNS anomalies previously described, MRI (n=18) was useful for the detection of neuronal migration errors. These data suggest that structural CNS anomalies and MFDH seem to have an intrinsic embryological relationship, which should be taken into account during clinical follow-up.
Chung, Jungman; Chung, Jungmin; Oh, Wonjun; Yoo, Yongkyu; Lee, Won Gu; Bang, Hyunwoo
Here we present a new method for automatic and objective monitoring of ingestive behaviors, in comparison with other facial activities, through load cells embedded in a pair of glasses named GlasSense. Mastication involves a cyclic movement of the temporomandibular joint, activated by subtle contraction and relaxation of the temporalis muscle. However, such muscular signals are, in general, too weak to sense without amplification or electromyographic analysis. To detect these oscillatory facial signals without the use of an obtrusive device, we incorporated a load cell into each hinge, used as a lever mechanism on both sides of the glasses. Thus, the signal measured at the load cells can detect the force amplified mechanically by the hinge. We demonstrated a proof-of-concept validation of the amplification by differentiating the force signals between the hinge and the temple. Pattern recognition was applied to extract statistical features and classify featured behavioral patterns, such as natural head movement, chewing, talking, and winking. The overall results showed that the average F1 score of the classification was about 94.0% and the accuracy above 89%. We believe this approach will be helpful for designing a non-intrusive and unobtrusive eyewear-based ingestive behavior monitoring system.
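The pattern-recognition stage described above (statistical features from windowed load-cell signals, then a classifier) might look roughly like this; the simulated signals and the three features are assumptions, not GlasSense's published feature set:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

def features(window):
    """Simple per-window statistics of a load-cell signal."""
    return [window.mean(), window.std(), np.abs(np.diff(window)).mean()]

def simulate(kind, n_windows=100, length=128):
    t = np.arange(length)
    out = []
    for _ in range(n_windows):
        noise = rng.normal(scale=0.2, size=length)
        if kind == "chewing":        # oscillatory temporalis-driven force
            sig = np.sin(2 * np.pi * t / 16) + noise
        else:                        # head movement: slow drift in load
            sig = np.linspace(0, rng.uniform(0.5, 1.5), length) + noise
        out.append(features(np.asarray(sig)))
    return np.array(out)

X = np.vstack([simulate("chewing"), simulate("head")])
y = np.array([1] * 100 + [0] * 100)
clf = RandomForestClassifier(random_state=0)
accuracy = cross_val_score(clf, X, y, cv=5).mean()
print(f"chewing vs head-movement accuracy: {accuracy:.2f}")
```

On real hinge signals the feature set and classifier would need tuning, but the two-stage shape (window statistics, then supervised classification) matches what the abstract outlines.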
Licht, Peter B; Pilegaard, Hans K
… Side effects are frequent, but most patients are satisfied with the operation. In the short term, the key to success in sympathetic surgery for facial blushing lies in a meticulous and critical patient selection and in ensuring that the patient is thoroughly informed about the high risk of side effects. In the long term, the key to success in sympathetic surgery for facial blushing lies in more quality research comparing surgical, pharmacologic, and psychotherapeutic treatments.
Kang, Jung-A; Chun, Min Ho; Choi, Su Jin; Chang, Min Cheol; Yi, You Gyoung
To investigate the effects of mirror therapy using a tablet PC for post-stroke central facial paresis. A prospective, randomized controlled study was performed. Twenty-one post-stroke patients were enrolled. All patients performed 15 minutes of orofacial exercise twice daily for 14 days. The mirror group (n=10) underwent mirror therapy using a tablet PC while exercising, whereas the control group (n=11) did not. All patients were evaluated using the Regional House-Brackmann Grading Scale (R-HBGS), and the length between the corner of the mouth and the ipsilateral earlobe during rest and smiling before and after therapy were measured bilaterally. We calculated facial movement by subtracting the smile length from resting length. Differences and ratios between bilateral sides of facial movement were evaluated as the final outcome measure. Baseline characteristics were similar for the two groups. There were no differences in the scores for the basal Modified Barthel Index, the Korean version of Mini-Mental State Examination, National Institutes of Health Stroke Scale, R-HBGS, and bilateral differences and ratios of facial movements. The R-HBGS as well as the bilateral differences and ratios of facial movement showed significant improvement after therapy in both groups. The degree of improvement of facial movement was significantly larger in the mirror group than in the control group. Mirror therapy using a tablet PC might be an effective tool for treating central facial paresis after stroke.
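The outcome measure described above can be written out directly: per-side facial movement is the resting mouth-to-earlobe length minus the smiling length, and asymmetry is summarized by the bilateral difference and ratio. The measurements below are illustrative values, not patient data:

```python
def facial_movement(rest_mm, smile_mm):
    """Movement = resting length - smiling length (smiling shortens it)."""
    return rest_mm - smile_mm

# Hypothetical measurements (mm): paretic (affected) vs unaffected side
affected = facial_movement(rest_mm=115.0, smile_mm=112.0)
unaffected = facial_movement(rest_mm=114.0, smile_mm=105.0)

difference = abs(unaffected - affected)   # bilateral difference in movement
ratio = affected / unaffected             # 1.0 would mean perfect symmetry
print(f"movement difference: {difference} mm, ratio: {ratio:.2f}")
```

A therapy effect, in these terms, is a shrinking bilateral difference and a ratio moving toward 1.0.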
Kim, Eun Jeong; Lee, Jun; Lee, Ji Woon; Lee, Jun Hyung; Park, Chol Jin; Kim, Young Dae; Lee, Hyun Jin
Peripheral facial nerve palsy (FNP) is a mononeuropathy that affects the peripheral part of the facial nerve. Primary causes of peripheral FNP remain largely unknown, but detectable causes include systemic infections (viral and others), trauma, ischemia, tumor, and extrinsic compression. Peripheral FNP in relation to extrinsic compression has rarely been described in case reports. Here, we report a case of a 71-year-old man who was diagnosed with peripheral FNP following endoscopic submucosal dissection. This case is the first report of the development of peripheral FNP in a patient undergoing therapeutic endoscopy. We emphasize the fact that physicians should be attentive to the development of peripheral FNP following therapeutic endoscopy.
Valstar, M F; Mehu, M; Bihan Jiang; Pantic, M; Scherer, K
Automatic facial expression recognition has been an active topic in computer science for over two decades, in particular facial action coding system action unit (AU) detection and classification of a number of discrete emotion states from facial expressive imagery. Standardization and comparability have received some attention; for instance, there exist a number of commonly used facial expression databases. However, the lack of a commonly accepted evaluation protocol and, typically, the lack of sufficient detail needed to reproduce reported individual results make it difficult to compare systems. This, in turn, hinders the progress of the field. A periodical challenge in facial expression recognition would allow such a comparison on a level playing field. It would provide insight into how far the field has come and would allow researchers to identify new goals, challenges, and targets. This paper presents a meta-analysis of the first such challenge in automatic recognition of facial expressions, held during the IEEE conference on Face and Gesture Recognition 2011. It details the challenge data, evaluation protocol, and the results attained in two subchallenges: AU detection and classification of facial expression imagery in terms of a number of discrete emotion categories. We also summarize the lessons learned and reflect on the future of the field of facial expression recognition in general and on possible future challenges in particular.
Wang, Yuzhe; Zhu, Jian
Dielectric elastomer actuators have the advantage of mimicking the salient feature of life: movements in response to stimuli. In this paper we explore application of dielectric elastomer actuators to artificial muscles. These artificial muscles can mimic natural masseter to control jaw movements, which are key components in facial expressions especially during talking and singing activities. This paper investigates optimal design of the dielectric elastomer actuator. It is found that the actuator with embedded plastic fibers can avert electromechanical instability and can greatly improve its actuation. Two actuators are then installed in a robotic skull to drive jaw movements, mimicking the masseters in a human jaw. Experiments show that the maximum vertical displacement of the robotic jaw, driven by artificial muscles, is comparable to that of the natural human jaw during speech activities. Theoretical simulations are conducted to analyze the performance of the actuator, which is quantitatively consistent with the experimental observations.
Karpouzis, Kostas; Tsapatsoulis, Nicolas; Kollias, Stefanos D.
Research in facial expression has concluded that at least six emotions, conveyed by human faces, are universally associated with distinct expressions. Sadness, anger, joy, fear, disgust and surprise are categories of expressions that are recognizable across cultures. In this work we form a relation between the description of the universal expressions and the MPEG-4 Facial Definition Parameter Set (FDP). We also investigate the relation between the movement of basic FDPs and the parameters that describe emotion-related words according to some classical psychological studies. In particular Whissel suggested that emotions are points in a space, which seem to occupy two dimensions: activation and evaluation. We show that some of the MPEG-4 Facial Animation Parameters (FAPs), approximated by the motion of the corresponding FDPs, can be combined by means of a fuzzy rule system to estimate the activation parameter. In this way variations of the six archetypal emotions can be achieved. Moreover, Plutchik concluded that emotion terms are unevenly distributed through the space defined by dimensions like Whissel's; instead they tend to form an approximately circular pattern, called 'emotion wheel,' modeled using an angular measure. The 'emotion wheel' can be defined as a reference for creating intermediate expressions from the universal ones, by interpolating the movement of dominant FDP points between neighboring basic expressions. By exploiting the relation between the movement of the basic FDP point and the activation and angular parameters we can model more emotions than the primary ones and achieve efficient recognition in video sequences.
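A minimal sketch of the kind of fuzzy-rule combination described above, assuming hypothetical triangular memberships over a single normalized FAP magnitude and illustrative rule outputs (none of these breakpoints or weights come from the paper):

```python
# Toy fuzzy-rule estimator of the activation dimension from a normalized
# FAP magnitude in [0, 1]. LOW / MEDIUM / HIGH movement are fuzzified with
# triangular memberships and combined by a centroid-style weighted average.
# Breakpoints and rule outputs are illustrative assumptions only.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def activation(fap_magnitude):
    # Rules: LOW movement -> activation 0.2, MEDIUM -> 0.5, HIGH -> 0.9
    rules = [
        (tri(fap_magnitude, -0.5, 0.0, 0.5), 0.2),
        (tri(fap_magnitude, 0.0, 0.5, 1.0), 0.5),
        (tri(fap_magnitude, 0.5, 1.0, 1.5), 0.9),
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0
```

Larger FAP movement yields a smoothly increasing activation estimate, which is the behavior the rule system above is meant to capture.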
Jesus Claudio Gabana-Silveira; Laura Davison Mangilli; Sassi, Fernanda C.; Arnaldo Feitosa Braga; Claudia Regina Furquim de Andrade
OBJECTIVES: This study evaluated the effects of facial stimulation over the superficial muscles of the face in individuals with facial lipoatrophy associated with human immunodeficiency virus (HIV) and with no indication for treatment with polymethyl methacrylate. METHOD: The study sample comprised four adolescents of both genders ranging from 13 to 17 years in age. To participate in the study, the participants had to score six or less points on the Facial Lipoatrophy Index. The facial stim...
Prkachin, Kenneth M.
The experience of pain is often represented by changes in facial expression. Evidence of pain that is available from facial expression has been the subject of considerable scientific investigation. The present paper reviews the history of pain assessment via facial expression in the context of a model of pain expression as a nexus connecting internal experience with social influence. Evidence about the structure of facial expressions of pain across the lifespan is reviewed. Applications of fa...
Petersen, Carl C H
Facial muscles drive whisker movements, which are important for active tactile sensory perception in mice and rats. These whisker muscles are innervated by cholinergic motor neurons located in the lateral facial nucleus. The whisker motor neurons receive synaptic inputs from premotor neurons, which are located within the brain stem, the midbrain, and the neocortex. Complex, distributed neural circuits therefore regulate whisker movement during behavior. This review focuses specifically on cortical whisker motor control. The whisker primary motor cortex (M1) strongly innervates brain stem reticular nuclei containing whisker premotor neurons, which might form a central pattern generator for rhythmic whisker protraction. In a parallel analogous pathway, the whisker primary somatosensory cortex (S1) strongly projects to the brain stem spinal trigeminal interpolaris nucleus, which contains whisker premotor neurons innervating muscles for whisker retraction. These anatomical pathways may play important functional roles, since stimulation of M1 drives exploratory rhythmic whisking, whereas stimulation of S1 drives whisker retraction.
Arnhardt, Christian; Fernández-Steeger, Tomas; Azzam, Rafig
Monitoring systems in landslide areas are important elements of effective early warning structures. Data acquisition and retrieval allow the detection of movement processes and are thus essential for generating warnings in time. Apart from precise measurement, the reliability of data is fundamental, because outliers can trigger false alarms and lead to the loss of acceptance of such systems. For the monitoring of mass movements and their risk it is important to know whether there is movement, how fast it is, and how trustworthy the information is. The joint project "Sensor-based landslide early warning system" (SLEWS) deals with these questions and tries to improve data quality and to reduce false alarm rates through the combination of sensor data (sensor fusion). The project concentrates on the development of a prototypic alarm and early warning system (EWS) for different types of landslides, using various low-cost sensors integrated in a wireless sensor network (WSN). The network consists of numerous connection points (nodes) that transfer data directly or over other nodes (multi-hop) in real time to a data collection point (gateway). From there, all data packages are transmitted to a spatial data infrastructure (SDI) for further processing, analysis, and visualization with respect to end-user specifications. The ad-hoc characteristic of the network allows autonomous crosslinking of the nodes according to existing connections and communication strength. Because the network independently finds new or more stable connections (self-healing), a breakdown of the whole system is avoided. The bidirectional data stream enables receiving data from the network but also allows the transfer of commands and pointed requests into the WSN. For the detection of surface deformations in landslide areas small low-cost Micro-Electro-Mechanical Systems (MEMS) and position sensors from the automobile industry, different industrial applications and from other measurement
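The sensor-fusion idea for suppressing single-sensor false alarms can be sketched as follows; the threshold value and the median-fusion rule are illustrative assumptions, not the SLEWS implementation:

```python
# Illustrative sketch: fuse redundant low-cost MEMS tilt-change readings by
# taking the median, so a single faulty sensor (outlier) cannot trigger a
# false movement alarm on its own. Threshold is a hypothetical value.
from statistics import median

ALARM_THRESHOLD_DEG = 2.0  # assumed tilt-change threshold in degrees

def fused_alarm(tilt_changes_deg):
    """Return True only if the fused (median) tilt change exceeds the threshold."""
    return median(tilt_changes_deg) > ALARM_THRESHOLD_DEG

print(fused_alarm([0.1, 0.2, 9.5]))  # one outlier among stable readings -> no alarm
print(fused_alarm([2.5, 2.8, 2.6]))  # consistent movement across sensors -> alarm
```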
Parke, Frederic I
This comprehensive work provides the fundamentals of computer facial animation and brings into sharper focus techniques that are becoming mainstream in the industry. Over the past decade, since the publication of the first edition, there have been significant developments by academic research groups and in the film and games industries leading to the development of morphable face models, performance driven animation, as well as increasingly detailed lip-synchronization and hair modeling techniques. These topics are described in the context of existing facial animation principles. The second ed
WANG Shuliang; YUAN Hanning; CAO Baohua; WANG Dakui
Expressional face recognition is a challenge in computer vision for complex expressions. Facial data field is proposed to recognize expression. Fundamentals are presented in the methodology of face recognition upon data field and, subsequently, technical algorithms including normalizing faces, generating facial data field, extracting feature points in partitions, assigning weights and recognizing faces. A case is studied with the JAFFE database for its verification. Results indicate that the proposed method is suitable and effective in expressional face recognition, considering the whole average recognition rate is up to 94.3%. In conclusion, data field is considered a valuable alternative for pattern recognition.
Coltro, Pedro Soler; Goldenberg, Dov Charles; Aldunate, Johnny Leandro Conduta Borda; Alessi, Mariana Sisto; Chang, Alexandre Jin Bok Audi; Alonso, Nivaldo; Ferreira, Marcus Castro
A 14-year-old patient had a low-energy facial blunt trauma that evolved to right facial paralysis caused by parotid hematoma with parotid salivary gland lesion. Computed tomography and angiography demonstrated intraparotid collection without pseudoaneurysm and without radiologic signs of fracture in the face. The patient was treated with serial punctures for hematoma deflation, resolving with regression and complete remission of facial paralysis, with no late sequela. The authors discuss the relationship between facial nerve traumatic injuries associated or not with the presence of facial fractures, emphasizing the importance of early recognition and appropriate treatment of such cases.
The perception of facial expressions in people with schizophrenia: eye movements, symptomatology and intelligence
Analyses of the eye movement patterns of schizophrenics show some specific characteristics underlying the illness. The objective of the present study was to evaluate and correlate basic properties of the eye movements with the clinical state and intelligence during the visual scan of faces. Ten outpatient subjects with schizophrenia and 10 controls matched in gender, age and school years were evaluated. The assessment tools were the Positive and Negative Syndrome Scale (PANSS), the Raven's Progressive Matrices Test and the Penn Emotion Acuity Test. The visual scan was registered with the EyeGaze® program. Both groups showed more fixations for stimuli with emotional expression, with a significantly smaller overall number of fixations for the schizophrenic subjects group. Duration of fixations was inversely correlated with score on the Raven Test, PANSS and number of fixations. The properties of the eye movements showed correlation with the clinical condition and intellectual level of the patients with schizophrenia.
Magnenat-Thalmann, N. [Univ. of Geneva, Geneva (Switzerland)
This paper describes high-level tools for specifying, controlling, and synchronizing temporal and spatial characteristics for 3D animation of facial expressions. The proposed approach consists of hierarchical levels of controls. Specification of expressions, phonemes, emotions, sentences, and head movements by means of a high-level language is shown. The various aspects of synchronization are also emphasized. Then, the association of the controls with different interactive devices and media, which allows the animator greater flexibility and freedom, is discussed. Experiments with input accessories such as the keyboard of a music synthesizer and gestures from the DataGlove are illustrated.
Alam, Mohammad Khursheed; Mohd Noor, Nor Farid; Basri, Rehana; Yew, Tan Fo; Wen, Tay Hui
This study aimed to investigate the association of facial proportion and its relation to the golden ratio with the evaluation of facial appearance among the Malaysian population. This was a cross-sectional study with 286 subjects randomly selected from Universiti Sains Malaysia (USM) Health Campus students (150 females and 136 males; 100 Malaysian Chinese, 100 Malaysian Malay and 86 Malaysian Indian), with a mean age of 21.54 ± 1.56 (age range, 18-25). Facial indices obtained from direct facial measurements were used for the classification of facial shape into short, ideal and long. A validated structured questionnaire was used to assess subjects' evaluation of their own facial appearance. The mean facial indices of Malaysian Indian (MI), Malaysian Chinese (MC) and Malaysian Malay (MM) were 1.59 ± 0.19, 1.57 ± 0.25 and 1.54 ± 0.23, respectively. Only MC showed significant sexual dimorphism in facial index (P = 0.047). MC had a higher mean score of 2.18 ± 0.97 for overall impression and 2.15 ± 1.04 for facial parts, compared to MM and MI, with mean scores of 1.80 ± 0.97 and 1.64 ± 0.74, respectively, for overall impression, and 1.75 ± 0.95 and 1.70 ± 0.83, respectively, for facial parts. 1) Only 17.1% of Malaysian facial proportions conformed to the golden ratio, with the majority of the population having a short face (54.5%); 2) Facial index did not depend significantly on race; 3) Significant sexual dimorphism was shown among Malaysian Chinese; 4) All three races are generally satisfied with their own facial appearance; 5) No significant association was found between the golden ratio and facial evaluation score among the Malaysian population.
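The classification into short, ideal and long faces can be sketched as below; the tolerance band around the golden ratio is a hypothetical assumption, since the paper's exact cutoffs are not given here:

```python
# Toy classifier of facial shape from the facial index (face height / width
# ratio), relative to the golden ratio. The tolerance band is an assumption
# for illustration, not the study's actual criterion.
GOLDEN_RATIO = 1.618
TOLERANCE = 0.05  # hypothetical band for an "ideal" face

def classify_face(facial_index):
    if facial_index < GOLDEN_RATIO - TOLERANCE:
        return "short"
    if facial_index > GOLDEN_RATIO + TOLERANCE:
        return "long"
    return "ideal"

print(classify_face(1.54))  # mean Malay index reported above -> "short"
print(classify_face(1.62))  # within the assumed band -> "ideal"
```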
A case is reported of incomplete bilateral facial paralysis associated with left-sided hearing loss following traumatic brain injury, with fractures evidenced radiologically. Some considerations are offered in an attempt to relate these manifestations to fractures of the temporal bone.
Licht, Peter B; Pilegaard, Hans K
an indication for treatment, facial blushing may be treated effectively by thoracoscopic sympathectomy. The type of blushing likely to benefit from sympathectomy is mediated by the sympathetic nerves and is the uncontrollable, rapidly developing blush typically elicited when one receives attention from other...
Razfar, Ali; Lee, Matthew K; Massry, Guy G; Azizzadeh, Babak
Facial nerve paralysis is a devastating condition arising from several causes with severe functional and psychological consequences. Given the complexity of the disease process, management involves a multispecialty, team-oriented approach. This article provides a systematic approach in addressing each specific sequela of this complex problem.
A case of facial diplegia appearing after meningococcal meningitis and herpes simplex infection is presented. After discussing the various conditions in which the phenomenon can present, the author favors a herpetic etiology.
The prevalence of human error in safety-critical occupations remains a major challenge to mission success despite increasing automation in control processes. Although various methods have been proposed to prevent incidences of human error, none of these have been developed to employ the detection and regulation of Operator Functional State (OFS), or the optimal condition of the operator while performing a task, in work environments due to drawbacks such as obtrusiveness and impracticality. A video-based system with the ability to infer an individual's emotional state from facial feature patterning mitigates some of the problems associated with other methods of detecting OFS, like obtrusiveness and impracticality in integration with the mission environment. This paper explores the utility of facial expression recognition as a technology for inferring OFS by first expounding on the intricacies of OFS and the scientific background behind emotion and its relationship with an individual's state. Then, descriptions of the feedback loop and the emotion protocols proposed for the facial recognition program are explained. A basic version of the facial expression recognition program uses Haar classifiers and OpenCV libraries to automatically locate key facial landmarks during a live video stream. Various methods of creating facial expression recognition software are reviewed to guide future extensions of the program. The paper concludes with an examination of the steps necessary in the research of emotion and recommendations for the creation of an automatic facial expression recognition program for use in real-time, safety-critical missions
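The feedback-loop idea can be illustrated with a toy mapping from per-frame emotion scores to an operator functional state; the thresholds, state names, and mitigation notes are assumptions for illustration only, not the paper's protocol:

```python
# Hedged toy illustration of the OFS feedback loop sketched above: emotion
# probabilities from a facial expression recognizer for one video frame are
# mapped to a coarse operator functional state, which could gate a
# mitigation strategy. All thresholds and labels are assumptions.

def infer_ofs(emotion_scores):
    """emotion_scores: dict of emotion name -> probability for one frame."""
    stress = emotion_scores.get("fear", 0.0) + emotion_scores.get("anger", 0.0)
    fatigue = emotion_scores.get("sadness", 0.0)
    if stress > 0.6:
        return "overloaded"    # e.g. trigger task reallocation
    if fatigue > 0.6:
        return "hypovigilant"  # e.g. trigger an alert
    return "optimal"

print(infer_ofs({"fear": 0.5, "anger": 0.3}))  # -> "overloaded"
print(infer_ofs({"happiness": 0.9}))           # -> "optimal"
```

In a real system the per-frame scores would come from the Haar-classifier/OpenCV pipeline the paper describes, smoothed over time before any state change is acted on.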
Facial expressions play an essential role in communication in social interactions with other human beings, delivering rich information about emotions. Facial expression analysis has a wide range of applications in areas such as psychology, animation, interactive games, image retrieval and image understanding. Selecting the relevant features and ignoring the unimportant ones is the key step in a facial expression recognition system. Here, we propose an efficient method for identifying the expressions of students to recognize their comprehension from facial expressions in static images containing the frontal view of the human face. Our goal is to categorize the facial expressions of the students in a given image into two basic emotional expression states: comprehensible and incomprehensible. One of the key action units in the face for exposing expression is the eye. In this paper, facial expressions are identified from the expressions of the eyes. Our method consists of three steps: edge detection, eye extraction and emotion recognition. Edge detection is performed with the Prewitt operator. Extraction of the eyes is performed using an iterative search algorithm on the edge image. All the extracted information is combined to form the feature vector. Finally, the features are given as input to a BPN classifier, and thus the facial expressions are identified. The proposed method is tested on the Yale Face database.
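The first step of the pipeline, Prewitt edge detection, can be sketched directly in NumPy (this is a generic implementation, not the paper's code):

```python
# Prewitt edge detection on a grayscale image. Gradient magnitude is
# computed from horizontal and vertical Prewitt responses.
import numpy as np

PREWITT_X = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=float)
PREWITT_Y = PREWITT_X.T

def convolve2d(img, kernel):
    """Naive 'valid' cross-correlation. Flipping a Prewitt kernel only
    negates it, so the gradient magnitude is unaffected by the distinction
    between correlation and true convolution."""
    h, w = kernel.shape
    out = np.zeros((img.shape[0] - h + 1, img.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + h, j:j + w] * kernel)
    return out

def prewitt_edges(img):
    gx = convolve2d(img, PREWITT_X)
    gy = convolve2d(img, PREWITT_Y)
    return np.hypot(gx, gy)  # gradient magnitude

# A vertical step edge produces a strong response only at the step:
img = np.zeros((5, 6))
img[:, 3:] = 1.0
edges = prewitt_edges(img)
```

The eye-extraction step would then search this edge map iteratively for the eye regions, as the abstract describes.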
Neely, John Gail; Lisker, Paul; Drapekin, Jesse
The objective of this study was to evaluate laterality and upper/lower face dominance of expressiveness during prescribed speech using a unique validated image subtraction system capable of sensitive and reliable measurement of facial surface deformation. Observations and experiments on central control of facial expressions during speech and social utterances in humans and animals suggest that the right side of the mouth moves more than the left during nonemotional speech. However, proficient lip readers seem to attend to the whole face to interpret meaning from expressed facial cues, also implicating a horizontal (upper face-lower face) axis. Design: prospective experimental study. Experimental maneuver: recited speech. Outcome measure: image-subtraction strength-duration curve amplitude. Thirty normal human adults were evaluated during memorized nonemotional recitation of 2 short sentences. Facial movements were assessed using a video-image subtraction system capable of simultaneously measuring upper and lower specific areas of each hemiface. The results demonstrate that both axes influence facial expressiveness in human communication; however, the horizontal axis (upper versus lower face) would appear dominant, especially during what would appear to be spontaneous breakthrough unplanned expressiveness. These data are congruent with the concept that the left cerebral hemisphere has control over nonemotionally stimulated speech; however, the multisynaptic brainstem extrapyramidal pathways may override hemiface laterality and preferentially take control of the upper face. Additionally, these data demonstrate the importance of the often-ignored brow in facial expressiveness. Experimental study; EBM levels not applicable.
Martin, N. (G.H. Pitie-Salpetriere, 75 - Paris (France). Dept. of Neuroradiology); Sterkers, O. (Hospital Beaujon, Clichy (France). Dept. of Otorhinolaryngology); Mompoint, D.; Nahum, H. (Hopital Beaujon, Clichy (France). Dept. of Radiology)
Four cases of facial nerve neuroma were evaluated by computed tomographic (CT) scan and magnetic resonance imaging (MRI). The extension of the tumor in the petrous bone or the parotid gland was well defined by MRI in all cases. CT scan was useful to demonstrate bone erosions and the relation of the tumor to inner ear structures. In cases of progressive facial palsy, CT and MRI should be combined to detect a facial neuroma and to plan the surgical approach for tumor removal and nerve grafting. (orig.).
Hata, Yutaka; Kanazawa, Seigo; Endo, Maki; Tsuchiya, Naoki; Nakajima, Hiroshi
This paper proposes a heart rate monitoring system for detecting autonomic nervous system by the heart rate variability using an air pressure sensor to diagnose mental disease. Moreover, we propose a human behavior monitoring system for detecting the human trajectory in home by an infrared camera. In day and night times, the human behavior monitoring system detects the human movement in home. The heart rate monitoring system detects the heart rate in bed in night time. The air pressure sensor consists of a rubber tube, cushion cover and pressure sensor, and it detects the heart rate by setting it to bed. It unconstraintly detects the RR-intervals; thereby the autonomic nervous system can be assessed. The autonomic nervous system analysis can examine the mental disease. While, the human behavior monitoring system obtains distance distribution image by an infrared camera. It classifies adult, child and the other object from distance distribution obtained by the camera, and records their trajectories. This behavior, i.e., trajectory in home, strongly corresponds to cognitive disorders. Thus, the total system can detect mental disease and cognitive disorders by uncontacted sensors to human body.
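Two standard HRV indices that can be derived from the RR intervals the air-pressure sensor provides are sketched below with synthetic values (the paper's own analysis details are not given here):

```python
# Illustrative computation of two common heart rate variability indices
# from a series of RR intervals in milliseconds. The RR values are synthetic.
from statistics import pstdev

def sdnn(rr_ms):
    """Standard deviation of RR intervals (overall variability)."""
    return pstdev(rr_ms)

def rmssd(rr_ms):
    """Root mean square of successive RR differences, a commonly used
    marker of parasympathetic (vagal) activity."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return (sum(d * d for d in diffs) / len(diffs)) ** 0.5

rr = [812, 790, 830, 805, 820]
print(sdnn(rr), rmssd(rr))
```

Indices like these, tracked night over night, are the kind of autonomic measures the abstract proposes for assessing mental state.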
Hontanilla, Bernardo; Marré, Diego
Masseteric and hypoglossal nerve transfers are reliable alternatives for reanimating short-term facial paralysis. To date, few studies exist in the literature comparing these techniques. This work presents a quantitative comparison of masseter-facial transposition versus hemihypoglossal facial transposition with a nerve graft using the Facial Clima system. Forty-six patients with complete unilateral facial paralysis underwent reanimation with either hemihypoglossal transposition with a nerve graft (group I, n = 25) or direct masseteric-facial coaptation (group II, n = 21). Commissural displacement and commissural contraction velocity were measured using the Facial Clima system. Postoperative intragroup commissural displacement and commissural contraction velocity means of the reanimated versus the normal side were first compared using a paired sample t test. Then, mean percentages of recovery of both parameters were compared between the groups using an independent sample t test. Onset of movement was also compared between the groups. Significant differences of mean commissural displacement and commissural contraction velocity between the reanimated side and the normal side were observed in group I but not in group II. Mean percentage of recovery of both parameters did not differ between the groups. Patients in group II showed a significantly faster onset of movement compared with those in group I (62 ± 4.6 days versus 136 ± 7.4 days, p = 0.013). Reanimation of short-term facial paralysis can be satisfactorily addressed by means of either hemihypoglossal transposition with a nerve graft or direct masseteric-facial coaptation. However, with the latter, better symmetry and a faster onset of movement are observed. In addition, masseteric nerve transfer avoids morbidity from nerve graft harvesting. Therapeutic, III.
Kae Nakajima; Tetsuto Minami; Shigeki Nakauchi
Facial color varies depending on emotional state, and emotions are often described in relation to facial color. In this study, we investigated whether the recognition of facial expressions was affected by facial color and vice versa. In the facial expression task, expression morph continua were employed: fear-anger and sadness-happiness. The morphed faces were presented in three different facial colors (bluish, neutral, and reddish color). Participants identified a facial expression between t...
Mohammadi, Mohammad Reza; Fatemizadeh, Emad; Mahoor, Mohammad H
Automatic measurement of spontaneous facial action units (AUs) defined by the facial action coding system (FACS) is a challenging problem. The recent FACS user manual defines 33 AUs to describe different facial activities and expressions. In spontaneous facial expressions, only a subset of AUs is activated at a time. Given that AUs occur sparsely over time, we propose a novel method to detect the absence and presence of AUs and estimate their intensity levels via sparse representation (SR). We use robust principal component analysis to decompose expression from facial identity and then estimate the intensity of multiple AUs jointly using a regression model formulated based on dictionary learning and SR. Our experiments on the Denver Intensity of Spontaneous Facial Action and UNBC-McMaster Shoulder Pain Expression Archive databases show that our method is a promising approach for the measurement of spontaneous facial AUs.
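As an illustration of the sparse representation step, a toy iterative soft-thresholding (ISTA) solver is sketched below; this is a generic SR solver on a synthetic dictionary, not the authors' dictionary-learning regression model:

```python
# Toy sparse coding by ISTA: solve min_a 0.5*||x - D a||^2 + lam*||a||_1.
# With a sparsity-promoting L1 penalty, most coefficients are driven to
# exactly zero, matching the intuition that only a few AUs are active.
import numpy as np

def ista_sparse_code(D, x, lam=0.1, n_iter=200):
    """D: (m, k) dictionary; x: (m,) signal; returns sparse code a."""
    a = np.zeros(D.shape[1])
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the gradient
    for _ in range(n_iter):
        g = D.T @ (D @ a - x)                 # gradient of the quadratic term
        z = a - g / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

# Check with an orthonormal dictionary: the solution is the soft-thresholded
# signal, so small entries become exactly zero.
D = np.eye(4)
x = np.array([0.9, 0.0, 0.0, -0.7])
a = ista_sparse_code(D, x, lam=0.1)
```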
Jalaliniya, Shahram; Mardanbeigi, Diako
computers. In this paper, we demonstrate the rich capabilities of EyeGrip with two example applications: 1) a mind reading game, and 2) a picture selection system. Our study shows that by selecting an appropriate speed and maximum number of visible images in the screen the proposed method can be used...... the user looks at a sequence of images moving horizontally on the display while the user's eye movements are tracked by an eye tracker. We conducted an experiment that shows the performance of the proposed approach. We also investigated the influence of the speed and maximum number of visible images...
Horlock, Nigel; Sanders, Roy; Harrison, Douglas H
Subperiosteal face lifting has gained wide acceptance in aesthetic surgical practice. It may also have a role to play in patients with partial facial palsy. These patients demonstrate poor static position of the mouth but maintain some degree of facial movement. This study examined the role of subperiosteal facial suspension as an alternative treatment modality in this patient group. In this series, five patients with varying degrees of partial facial palsy underwent subperiosteal face lifting, including sub-orbicularis oculi fat elevation via a temporal, lower lid, and buccal approach, thereby mobilizing, elevating, and suspending the zygomaticus major and levator labii superioris muscles on the facial skeleton. An attempt was made to categorize the patients according to overall House-Brackmann score. It was not possible to classify the patients precisely by this method, although the approximate scores were two patients scoring 3, two patients scoring 4, and one patient scoring 5. To overcome inconsistencies with this method, the degree of static and dynamic asymmetry of the mouth and the excursion of the mouth were graded separately. Four patients with mild to moderate dynamic and static asymmetry (House-Brackmann score of approximately 3 or 4) who maintained excellent or good excursion of the mouth achieved excellent or good results. One patient with poor excursion and severe partial facial palsy (House-Brackmann score of 5) was improved but remained markedly asymmetric (follow-up, 4 months to 1 year). Subperiosteal face lifting is a useful therapeutic modality for management of selected patients with mild partial facial palsy. These patients demonstrate asymmetric static position but maintain some degree of muscle excursion. Patients with severe facial palsies with poor muscle excursion continue to require muscle transfer or sling procedures. The authors hope that long-term follow-up will confirm the sustained effect of midfacial suspension in this
The face aftereffect (FAE; the illusion of faces after adaptation to a face) has been reported to occur without retinal overlap between adaptor and test, but recent studies revealed that the FAE is not constant across all test locations, which suggests that the FAE is also retinotopic. However, it remains unclear whether the characteristic of the retinotopy of the FAE for one facial aspect is the same as that of the FAE for another facial aspect. In the research reported here, an examination of the retinotopy of the FAE for facial expression indicated that the facial expression aftereffect occurs without retinal overlap between adaptor and test, and depends on the retinal distance between them. Furthermore, the results indicate that, although dependence of the FAE on adaptation-test distance is similar between facial expression and facial identity, the FAE for facial identity is larger than that for facial expression when a test face is presented in the opposite hemifield. On the basis of these results, I discuss adaptation mechanisms underlying facial expression processing and facial identity processing for the retinotopy of the FAE.
Irani, Ramin; Nasrollahi, Kamal; Oliu Simon, Marc
facial images for pain detection and pain intensity level recognition. For this purpose, we extract energies released by facial pixels using a spatiotemporal filter. Experiments on a group of 12 elderly people applying the multimodal approach show that the proposed method successfully detects pain...
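The energy-extraction step described above can be sketched roughly as frame-to-frame differencing over facial pixels. This is an illustrative simplification, not the authors' actual spatiotemporal filter, and the detection threshold is an invented placeholder:

```python
# Illustrative sketch only: approximate the "energy released" by facial pixels
# as the squared intensity change between consecutive frames, then flag frames
# whose mean energy exceeds a (hypothetical) threshold.

def temporal_energy(prev_frame, frame):
    """Per-pixel energy as squared temporal difference between two frames."""
    return [[(b - a) ** 2 for a, b in zip(row_p, row_c)]
            for row_p, row_c in zip(prev_frame, frame)]

def detect_pain(frames, threshold=100.0):
    """Return indices of frames whose mean facial energy exceeds threshold."""
    flagged = []
    for i in range(1, len(frames)):
        energy = temporal_energy(frames[i - 1], frames[i])
        mean_e = sum(sum(row) for row in energy) / (len(energy) * len(energy[0]))
        if mean_e > threshold:
            flagged.append(i)
    return flagged
```

In the paper's full pipeline, such per-frame energies would feed a classifier for pain-intensity levels rather than a fixed cut-off.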
Streit, M; Wölwer, W; Gaebel, W
The performance of schizophrenic in-patients in facial expression identification was assessed in an acute phase and in a partly remitted phase of the illness. During visual exploration of the face stimuli, the patients' eye movements were recorded using an infrared corneal-reflection technique. Compared to healthy controls, patients demonstrated a significant deficit in facial-affect recognition. In addition, schizophrenic patients differed from controls in several eye movement parameters, such as length of mean scan path and mean duration of fixation. Both the facial-affect recognition deficit and the eye movement abnormalities remained stable over time. However, performance in facial-affect recognition and eye movement abnormalities were not correlated. Patients with flattened affect showed relatively selective scan pattern characteristics. In contrast, affective flattening was not correlated with performance in facial-affect recognition. Dosage of neuroleptic medication did not affect the results. The main findings of the study suggest that schizophrenia is associated with disturbances in primarily unrelated neurocognitive operations mediating visuomotor processing and facial expression analysis. Given their stability over time, these disturbances might have a trait-like character.
O'Neill, Francis; Nurmikko, Turo; Sommer, Claudia
Premise In this article we review some lesser known cranial neuralgias that are distinct from trigeminal neuralgia, trigeminal autonomic cephalalgias, or trigeminal neuropathies. Included are occipital neuralgia, superior laryngeal neuralgia, auriculotemporal neuralgia, glossopharyngeal and nervus intermedius neuralgia, and pain from acute herpes zoster and postherpetic neuralgia of the trigeminal and intermedius nerves. Problem Facial neuralgias are rare and many physicians do not see such cases in their lifetime, so patients with a suspected diagnosis within this group should be referred to a specialized center where multidisciplinary team diagnosis may be available. Potential solution Each facial neuralgia can be identified on the basis of clinical presentation, allowing for precision diagnosis and planning of treatment. Treatment remains conservative with oral or topical medication recommended for neuropathic pain to be tried before more invasive procedures are undertaken. However, evidence for efficacy of current treatments remains weak.
DeBruine, Lisa M
Organisms are expected to be sensitive to cues of genetic relatedness when making decisions about social behaviour. Relatedness can be assessed in several ways, one of which is phenotype matching: the assessment of similarity between others' traits and either one's own traits or those of known relatives. One candidate cue of relatedness in humans is facial resemblance. Here, I report the effects of an experimental manipulation of facial resemblance in a two-person sequential trust game. Subjects were shown faces of ostensible playing partners manipulated to resemble either themselves or an unknown person. Resemblance to the subject's own face raised the incidence of trusting a partner, but had no effect on the incidence of selfish betrayals of the partner's trust. Control subjects playing with identical pictures failed to show such an effect. In a second experiment, resemblance of the playing partner to a familiar (famous) person had no effect on either trusting or betrayals of trust.
A woman with history of bifrontal headache, vomiting and loss of vision was diagnosed as a case of pseudotumor cerebri based on clinical and MRI findings. Bilateral abducens and facial nerve palsies were detected. Pseudotumor cerebri in this patient was not associated with any other illness or related to drug therapy. Treatment was given to lower the raised intracranial pressure to which the patient responded.
Rai, Manjunath; Hegde, Padmaraj; Devaraju, Umesh M.
Teratomas are neoplasm composed of three germinal layers of the embryo that form tissues not normally found in the organ in which they arise. These are most common in the sacrococcygeal region and are rare in the head and neck, which account for less than 6%. An unusual case of facial teratoma in a new born, managed successfully is described here with postoperative follow up of 2 years without any recurrence.
Mohammad Khursheed Alam
This study aimed to investigate the association of facial proportion and its relation to the golden ratio with the evaluation of facial appearance among the Malaysian population. This was a cross-sectional study of 286 students randomly selected from Universiti Sains Malaysia (USM) Health Campus (150 females and 136 males; 100 Malaysian Chinese, 100 Malaysian Malay and 86 Malaysian Indian), with a mean age of 21.54 ± 1.56 (age range, 18-25). Facial indices obtained from direct facial measurements were used for the classification of facial shape into short, ideal and long. A validated structured questionnaire was used to assess subjects' evaluation of their own facial appearance. The mean facial indices of Malaysian Indian (MI), Malaysian Chinese (MC) and Malaysian Malay (MM) subjects were 1.59 ± 0.19, 1.57 ± 0.25 and 1.54 ± 0.23 respectively. Only MC showed significant sexual dimorphism in facial index (P = 0.047; P < 0.05), but no significant difference was found between races. Of the 286 subjects, 49 (17.1%) had an ideal facial shape, 156 (54.5%) short and 81 (28.3%) long. The facial evaluation questionnaire showed that MC had the lowest satisfaction, with mean scores of 2.18 ± 0.97 for overall impression and 2.15 ± 1.04 for facial parts, compared with MM and MI, whose mean scores were 1.80 ± 0.97 and 1.64 ± 0.74 respectively for overall impression and 1.75 ± 0.95 and 1.70 ± 0.83 respectively for facial parts. In conclusion: (1) only 17.1% of Malaysian facial proportions conformed to the golden ratio, with the majority of the population having a short face (54.5%); (2) facial index did not depend significantly on race; (3) significant sexual dimorphism was shown among Malaysian Chinese; (4) all three races were generally satisfied with their own facial appearance; (5) no significant association was found between the golden ratio and facial evaluation score among the Malaysian population.
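The facial-shape classification used above can be sketched as a comparison of the facial index against the golden ratio. The tolerance band below is an assumed placeholder for illustration; the study's exact class boundaries are not reproduced here:

```python
# Sketch of classifying facial shape relative to the golden ratio.
# The tolerance value is a hypothetical cut-off, not the study's.

PHI = (1 + 5 ** 0.5) / 2  # golden ratio, ~1.618

def facial_index(face_height_mm, face_width_mm):
    """Facial index as the ratio of face height to face width."""
    return face_height_mm / face_width_mm

def classify_shape(index, tolerance=0.05):
    """Classify a face as short, ideal, or long relative to the golden ratio."""
    if abs(index - PHI) <= tolerance:
        return "ideal"
    return "short" if index < PHI else "long"
```

Under this sketch, the reported mean indices (1.54-1.59) all fall in the "short" class, consistent with the majority finding above.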
Williams, David M; Dechen Quinn, Amy; Porter, William F
Advances in animal tracking technologies have reduced but not eliminated positional error. While aware of such inherent error, scientists often proceed with analyses that assume exact locations. The results of such analyses then represent one realization in a distribution of possible outcomes. Evaluating results within the context of that distribution can strengthen or weaken our confidence in conclusions drawn from the analysis in question. We evaluated the habitat-specific positional error of stationary GPS collars placed under a range of vegetation conditions that produced a gradient of canopy cover. We explored how variation of positional error in different vegetation cover types affects a researcher's ability to discern scales of movement in analyses of first-passage time for white-tailed deer (Odocoileus virginianus). We placed 11 GPS collars in 4 different vegetative canopy cover types classified as the proportion of cover above the collar (0-25%, 26-50%, 51-75%, and 76-100%). We simulated the effect of positional error on individual movement paths using cover-specific error distributions at each location. The different cover classes did not introduce any directional bias in positional observations (1 m≤mean≤6.51 m, 0.24≤p≤0.47), but the standard deviation of positional error of fixes increased significantly with increasing canopy cover class for the 0-25%, 26-50%, 51-75% classes (SD = 2.18 m, 3.07 m, and 4.61 m, respectively) and then leveled off in the 76-100% cover class (SD = 4.43 m). We then added cover-specific positional errors to individual deer movement paths and conducted first-passage time analyses on the noisy and original paths. First-passage time analyses were robust to habitat-specific error in a forest-agriculture landscape. For deer in a fragmented forest-agriculture environment, and species that move across similar geographic extents, we suggest that first-passage time analysis is robust with regard to positional errors.
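The error-simulation step can be sketched as follows, using the cover-class standard deviations reported above. The path and cover assignments are hypothetical, and drawing independent Gaussian error per coordinate is a simplification of the authors' cover-specific error distributions:

```python
# Sketch: add cover-specific Gaussian positional error to each fix of a
# movement path. SD values are the cover-class estimates reported above;
# the path itself is hypothetical.

import random

# Standard deviation of positional error (m) by canopy-cover class.
ERROR_SD = {"0-25%": 2.18, "26-50%": 3.07, "51-75%": 4.61, "76-100%": 4.43}

def add_positional_error(path, cover_classes, rng):
    """Return a noisy copy of path; each (x, y) fix receives error drawn
    from the error distribution of the cover class at that location."""
    noisy = []
    for (x, y), cover in zip(path, cover_classes):
        sd = ERROR_SD[cover]
        noisy.append((x + rng.gauss(0, sd), y + rng.gauss(0, sd)))
    return noisy
```

Repeating the first-passage time analysis on many such noisy realizations of a path yields the distribution of outcomes against which the original result can be evaluated.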
CHEN Pei; WANG Peng; CHEN Guangli; GONG Shusheng
To observe the glial reactions surrounding facial motor neurons following facial nerve anastomosis. At 1, 7, 21 and 60 d following facial nerve anastomosis, the recovery process of facial movement was observed; glial fibrillary acidic protein (GFAP) immunoreactivity was analyzed by a combined method of fluorescent retrograde tracing and immunofluorescent histochemical staining; and the ultrastructure of astrocytes was observed under a transmission electron microscope (TEM). Postoperatively, the function of the facial muscles could not return to normal, and was often accompanied by hyperkinetic syndromes such as synkinesis at the late stage. Motor neurons in every facial subnucleus could be retrogradely labeled by Fluoro-Gold (FG) and displayed an evident somatotopic organization. Normally there was a considerable number of GFAP-positive cells in non-nucleus regions but few inside the facial nucleus region. Postoperatively, GFAP immunoreactivity on the anastomotic side increased significantly, then gradually decreased at the late stage. Ultrastructurally, sheet-like astrocytic processes invested and protected the injured facial motor neurons. The present study shows that reactive astrocytes undergo characteristic changes during facial nerve injury and regeneration. The plastic change at the late stage may be involved in the mechanism of synkinesis.
B. Hontanilla Calatayud
The aim of this study is to present our protocol for the surgical treatment of facial paralysis, based on 140 cases treated between 2000 and 2007. The protocol is based on the results obtained with a new 3-D capture system of facial movement called "Facial Clima", which can be considered an objective method for measuring the outcome of facial reanimation surgery. It would thus allow the effectiveness of treatments for patients with facial paralysis to be compared across surgical centres. The results obtained are presented for both smile and eyelid reconstruction.
In recent years, there has been growing enthusiasm that functional MRI could achieve clinical utility for a broad range of neuropsychiatric disorders. However, several barriers remain. For example, the acquisition of large-scale datasets capable of clarifying the marked heterogeneity that exists in psychiatric illnesses will need to be realized. In addition, there continues to be a need for the development of image-processing and analysis methods capable of separating signal from artifact. As a prototypical hyperkinetic disorder, with movement-related artifact being a significant confound in functional imaging studies, ADHD offers a unique challenge. As part of the ADHD-200 Global Competition and this special edition of Frontiers, the ADHD-200 Consortium demonstrates the utility of an aggregate dataset pooled across five institutions in addressing these challenges. The work aimed to (A) examine the impact of emerging techniques for controlling for micro-movements, and (B) provide novel insights into the neural correlates of ADHD subtypes. Using SVM-based MVPA, we show that functional connectivity patterns in individuals are capable of differentiating the two most prominent ADHD subtypes. The application of graph theory revealed that the Combined (ADHD-C) and Inattentive (ADHD-I) subtypes demonstrated some overlapping (particularly in sensorimotor systems) but unique patterns of atypical connectivity. For ADHD-C, atypical connectivity was prominent in midline default network components, as well as insular cortex; in contrast, the ADHD-I group exhibited atypical patterns within the dlPFC regions and cerebellum. Systematic motion-related artifact was noted, highlighting the need for stringent motion correction. Findings reported were robust to the specific motion correction strategy employed. These data suggest that rs-fcMRI data can be used to characterize individual patients with ADHD and to identify neural distinctions underlying the clinical
Abstract The study was undertaken to determine the prevalence of facial pain and the association of facial pain with temporomandibular disorders (TMD) as well as with other factors, in a geographically defined population-based sample consisting of subjects born in 1966 in northern Finland, and in a case-control study including subjects with facial pain and their healthy controls. In addition, the influence of conservative stomatognathic and necessary prosthetic treatme...
Tomita, Koichi; Hosokawa, Ko; Yano, Kenji
In treating reversible facial paralysis, cross-facial nerve grafting offers voluntary and emotional reanimation. In contrast, rapid re-innervation and strong neural stimulation can be obtained with hypoglossal-facial nerve crossover. In this article, we describe the method of a combination of these techniques as a one-stage procedure. A 39-year-old man presented with facial paralysis due to nerve avulsion within the stylomastoid foramen. The sural nerve was harvested and two branches were created at its distal end by intraneural dissection. One branch was anastomosed to the contralateral facial nerve, and the other branch was used for hypoglossal-facial nerve crossover, followed by connecting the proximal stump of the graft to the trunk of the paralysed facial nerve in an end-to-end fashion. At 9 months postoperatively, almost complete facial symmetry and co-ordinated movements of the mimetic muscles were obtained with no obvious tongue atrophy. Since our method can efficiently gather neural inputs from the contralateral facial nerve and the ipsilateral hypoglossal nerve, it may become a good alternative for reanimation of reversible facial paralysis when the ipsilateral facial nerve is not available.
This paper reviews clinical neuropsychological studies that have indicated that the recognition of a person's identity and the recognition of facial expressions are processed by different cortical and subcortical areas of the brain. The fusiform gyrus, especially the right fusiform gyrus, plays an important role in the recognition of identity. The superior temporal sulcus, amygdala, and medial frontal cortex play important roles in facial-expression recognition. Both facial recognition and facial-expression recognition are highly intellectual processes that involve several regions of the brain.
Martin Paul Evison
Facial reconstructions in archaeology allow empathy with people who lived in the past and enjoy considerable popularity with the public. It is a common misconception that facial reconstruction will produce an exact likeness; a resemblance is the best that can be hoped for. Research at Sheffield University is aimed at the development of a computer system for facial reconstruction that will be accurate, rapid, repeatable, accessible and flexible. This research is described and prototypical 3-D facial reconstructions are presented. Interpolation models simulating obesity, ageing and ethnic affiliation are also described. Some strengths and weaknesses in the models, and their potential for application in archaeology are discussed.
Jensen, Troels S
Premise Facial pain refers to a heterogeneous group of clinically and etiologically different conditions with the common clinical feature of pain in the facial area. Among these conditions, trigeminal neuralgia (TN), persistent idiopathic facial pain, temporomandibular joint pain, and trigeminal autonomic cephalalgias (TAC) are the most well described conditions. Conclusion TN has been known for centuries, and is recognised by its characteristic and almost pathognomonic clinical features. The other facial pain conditions are less well defined, and over the years there has been confusion about their classification. PMID:28181442
Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja
Images of facial expressions are often captured from various views as a result of either head movements or variable camera position. Existing methods for multiview and/or view-invariant facial expression recognition typically perform classification of the observed expression using either classifiers learned separately for each view or a single classifier learned for all views. However, these approaches ignore the fact that different views of a facial expression are just different manifestations of the same facial expression. By accounting for this redundancy, we can design more effective classifiers for the target task. To this end, we propose a discriminative shared Gaussian process latent variable model (DS-GPLVM) for multiview and view-invariant classification of facial expressions from multiple views. In this model, we first learn a discriminative manifold shared by multiple views of a facial expression. Subsequently, we perform facial expression classification in the expression manifold. Finally, classification of an observed facial expression is carried out either in the view-invariant manner (using only a single view of the expression) or in the multiview manner (using multiple views of the expression). The proposed model can also be used to perform fusion of different facial features in a principled manner. We validate the proposed DS-GPLVM on both posed and spontaneously displayed facial expressions from three publicly available datasets (MultiPIE, labeled face parts in the wild, and static facial expressions in the wild). We show that this model outperforms the state-of-the-art methods for multiview and view-invariant facial expression classification, and several state-of-the-art methods for multiview learning and feature fusion.
Murali Krishna kanala
Expression detection is useful as a non-invasive method of lie detection and behaviour prediction; however, these facial expressions may be difficult to detect with the untrained eye. In this paper we implement facial expression recognition techniques using a ranking method. The human face plays an important role in our social interaction, conveying people's identity. Using the human face as a key to security, biometric face recognition technology has received significant attention in the past several years. Experiments are performed using a standard database of expressions such as surprise, sadness and happiness. The three universally accepted principal emotions to be recognized are surprise, sadness and happiness, along with neutral.
Valstar, Michel F; Pantic, Maja
Past work on automatic analysis of facial expressions has focused mostly on detecting prototypic expressions of basic emotions like happiness and anger. The method proposed here enables the detection of a much larger range of facial behavior by recognizing facial muscle actions [action units (AUs)] that compound expressions. AUs are agnostic, leaving the inference about conveyed intent to higher order decision making (e.g., emotion recognition). The proposed fully automatic method not only allows the recognition of 22 AUs but also explicitly models their temporal characteristics (i.e., sequences of temporal segments: neutral, onset, apex, and offset). To do so, it uses a facial point detector based on Gabor-feature-based boosted classifiers to automatically localize 20 facial fiducial points. These points are tracked through a sequence of images using a method called particle filtering with factorized likelihoods. To encode AUs and their temporal activation models based on the tracking data, it applies a combination of GentleBoost, support vector machines, and hidden Markov models. We attain an average AU recognition rate of 95.3% when tested on a benchmark set of deliberately displayed facial expressions and 72% when tested on spontaneous expressions.
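The temporal-segment modeling that the method performs with trained HMMs can be illustrated, in much-simplified form, as a Viterbi decode over the four AU phases (neutral, onset, apex, offset). All transition and emission probabilities below are invented for illustration; they are not the paper's learned parameters:

```python
# Toy Viterbi decode over AU temporal phases. A phase either persists or
# advances to the next one (offset wraps back to neutral); the sequence is
# assumed to start in the neutral phase. Probabilities are hypothetical.

import math

STATES = ["neutral", "onset", "apex", "offset"]
TRANS = {
    "neutral": {"neutral": 0.8, "onset": 0.2},
    "onset": {"onset": 0.7, "apex": 0.3},
    "apex": {"apex": 0.7, "offset": 0.3},
    "offset": {"offset": 0.7, "neutral": 0.3},
}

def viterbi(emission_probs):
    """emission_probs: one dict per frame mapping state -> P(obs | state).
    Returns the most likely phase sequence."""
    # Initialize: all probability mass on the neutral phase.
    prev = {s: (math.log(emission_probs[0].get(s, 1e-12))
                if s == "neutral" else float("-inf")) for s in STATES}
    back = []
    for obs in emission_probs[1:]:
        cur, ptr = {}, {}
        for s in STATES:
            score, best_p = max(
                (prev[p] + math.log(TRANS[p].get(s, 1e-12)), p) for p in STATES)
            cur[s] = score + math.log(obs.get(s, 1e-12))
            ptr[s] = best_p
        prev, back = cur, back + [ptr]
    # Trace back the most likely phase sequence.
    path = [max(prev, key=prev.get)]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))
```

In the actual system, the per-frame emission scores would come from the GentleBoost/SVM stage applied to the tracked fiducial points rather than being supplied by hand.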
Stereotypic movements are repetitive patterns of movement with certain peculiar features that make them especially interesting. Their physiopathology and their relationship with the neurobehavioural disorders they are frequently associated with are unknown. In this paper our aim is to offer a simple analysis of their dominant characteristics, their differentiation from other processes and a hypothesis of the properties of stereotypic movements, which could all set the foundations for research work into their physiopathology.
Botulinum Toxin (Botox) for Facial Wrinkles. Sections: How Does Botulinum Toxin (Botox) Work? Written by: Kierstan Boyd.
Seventh cranial nerve palsy due to birth trauma; Facial palsy - birth trauma; Facial palsy - neonate; Facial palsy - infant ... this condition. Some factors that can cause birth trauma (injury) include: Large baby size (may be seen ...
Furl, N; van Rijsbergen, N J; Kiebel, S J; Friston, K J; Treves, A; Dolan, R J
People track facial expression dynamics with ease to accurately perceive distinct emotions. Although the superior temporal sulcus (STS) appears to possess mechanisms for perceiving changeable facial attributes such as expressions, the nature of the underlying neural computations is not known. Motivated by novel theoretical accounts, we hypothesized that visual and motor areas represent expressions as anticipated motion trajectories. Using magnetoencephalography, we show predictable transitions between fearful and neutral expressions (compared with scrambled and static presentations) heighten activity in visual cortex as quickly as 165 ms poststimulus onset and later (237 ms) engage fusiform gyrus, STS and premotor areas. Consistent with proposed models of biological motion representation, we suggest that visual areas predictively represent coherent facial trajectories. We show that such representations bias emotion perception of subsequent static faces, suggesting that facial movements elicit predictions that bias perception. Our findings reveal critical processes evoked in the perception of dynamic stimuli such as facial expressions, which can endow perception with temporal continuity.
Jack, Rachael E; Garrod, Oliver G B; Yu, Hui; Caldara, Roberto; Schyns, Philippe G
Since Darwin's seminal works, the universality of facial expressions of emotion has remained one of the longest standing debates in the biological and social sciences. Briefly stated, the universality hypothesis claims that all humans communicate six basic internal emotional states (happy, surprise, fear, disgust, anger, and sad) using the same facial movements by virtue of their biological and evolutionary origins [Susskind JM, et al. (2008) Nat Neurosci 11:843-850]. Here, we refute this assumed universality. Using a unique computer graphics platform that combines generative grammars [Chomsky N (1965) MIT Press, Cambridge, MA] with visual perception, we accessed the mind's eye of 30 Western and Eastern culture individuals and reconstructed their mental representations of the six basic facial expressions of emotion. Cross-cultural comparisons of the mental representations challenge universality on two separate counts. First, whereas Westerners represent each of the six basic emotions with a distinct set of facial movements common to the group, Easterners do not. Second, Easterners represent emotional intensity with distinctive dynamic eye activity. By refuting the long-standing universality hypothesis, our data highlight the powerful influence of culture on shaping basic behaviors once considered biologically hardwired. Consequently, our data open a unique nature-nurture debate across broad fields from evolutionary psychology and social neuroscience to social networking via digital avatars.
Nook, Erik C; Lindquist, Kristen A; Zaki, Jamil
Decades ago, the "New Look" movement challenged how scientists thought about vision by suggesting that conceptual processes shape visual perceptions. Currently, affective scientists are likewise debating the role of concepts in emotion perception. Here, we utilized a repetition-priming paradigm in conjunction with signal detection and individual difference analyses to examine how providing emotion labels-which correspond to discrete emotion concepts-affects emotion recognition. In Study 1, pairing emotional faces with emotion labels (e.g., "sad") increased individuals' speed and sensitivity in recognizing emotions. Additionally, individuals with alexithymia-who have difficulty labeling their own emotions-struggled to recognize emotions based on visual cues alone, but not when emotion labels were provided. Study 2 replicated these findings and further demonstrated that emotion concepts can shape perceptions of facial expressions. Together, these results suggest that emotion perception involves conceptual processing. We discuss the implications of these findings for affective, social, and clinical psychology.
Daniele Fontes Ferreira Bernardes
PURPOSE: to study the surface electromyographic activity of the frontalis, orbicularis oculi, orbicularis oris and zygomaticus muscles in normal subjects and in patients with peripheral facial paralysis, and the symmetry index between the two sides of the face. METHODS: six volunteers with no history of facial muscle disorders and six patients with peripheral facial paralysis were evaluated with surface electromyography. Maximum-effort muscle activity and the symmetry index were measured for the following voluntary movements: raising the eyebrows, closing the eyes, protruding (puckering) the lips and retracting the lips. RESULTS: in normal subjects the mean electromyographic potentials were similar for the two sides of the face, showing that facial nerve integrity is fundamental to the balance of facial mimics. In patients with facial paralysis the mean electromyographic potentials differed significantly between the two sides of the face, evidencing the lack of neural innervation. CONCLUSION: the electromyographic results showed a statistically significant difference between the two sides of the face in patients with facial paralysis, but not in normal subjects.
Ito, Kyoko; Kurose, Hiroyuki; Takami, Ai; Nishida, Shogo
In this study, a target facial expression selection interface for a facial expression training system and a facial expression training system were both proposed and developed. Twelve female dentists used the facial expression training system, and evaluations and opinions about the facial expression training system were obtained from these participants. In the future, we will attempt to improve both the target facial expression selection interface and the comparison of a current and a target f...
Pavesi, Giovanni; Cattaneo, Luigi; Chierici, Elisabetta; Mancia, Domenico
We investigated trigemino-facial excitatory and inhibitory responses in perioral muscles in hemifacial spasm (HFS). We examined 15 patients affected with idiopathic HFS and 8 healthy controls. Five patients had spasms mostly limited to the periocular region and 10 had spasms also involving the perioral muscles. Responses were recorded from the resting orbicularis oculi (OOc), levator labii superioris (LLS) and orbicularis oris (OOr) muscles, after supraorbital (SO) nerve stimulation and during isolated voluntary contraction of LLS muscle. Eight patients showed complete or partial preservation of the late silent period (SP2) in activated LLS muscle. The remaining 7 patients showed absence of SP2. Early and late excitatory responses were variably present in LLS muscle at rest. Patients with HFS clinically restricted to periocular muscles had at least partial preservation of the SP2. In conclusion, in HFS patients inhibitory trigemino-facial reflexes are impaired and excitatory trigemino-facial responses are elicited in perioral muscles. These two phenomena seem to develop independently; the degree of trigemino-facial reflex impairment parallels the extension of involuntary movements to the lower facial muscles.
Pei CHEN; Jun SONG; Linghui LUO; Shusheng GONG
The remodeling process of synapses and neurotransmitter receptors in the facial nucleus was observed. Models were set up by facial-facial anastomosis in the rat. At post-surgery day (PSD) 0, 7, 21 and 60, synaptophysin (p38), NMDA receptor subunit 2A (NMDAR2A) and AMPA receptor subunit 2 (GluR2) were examined by immunohistochemistry and semi-quantitative RT-PCR, respectively. Meanwhile, the synaptic structure of the facial motoneurons was observed under a transmission electron microscope (TEM). The intensity of p38 immunoreactivity decreased, reaching its lowest value at PSD 7, and then increased slightly at PSD 21. Ultrastructurally, the number of synapses in the nucleus of the operated side decreased, which was consistent with the change in p38 immunoreactivity. NMDAR2A mRNA was down-regulated significantly in the facial nucleus after the operation (P < 0.05). The synaptic innervation and the expression of NMDAR2A and AMPAR2 mRNA in the facial nucleus may be modified to suit the new motor tasks following facial-facial anastomosis, influencing facial nerve regeneration and recovery.
Ma, Fengling; Xu, Fen; Luo, Xianming
This study examined developmental changes in children's abilities to make trustworthiness judgments based on faces, and the relationship between a child's perception of trustworthiness and facial attractiveness. One hundred and one 8-, 10-, and 12-year-olds, along with 37 undergraduates, were asked to judge the trustworthiness of 200 faces and then to judge their facial attractiveness. The results indicated that children made consistent trustworthiness and attractiveness judgments based on facial appearance, but agreement with adults and within-age agreement on facial judgments increased with age. Additionally, the agreement levels of judgments made by girls were higher than those made by boys. Furthermore, the relationship between trustworthiness and attractiveness judgments increased with age, and this relationship was closer for girls than for boys. These findings suggest that face-based trait judgment ability develops throughout childhood and that, like adults, children may use facial attractiveness as a heuristic cue signaling a stranger's trustworthiness.
Full Text Available Most previous studies on facial expression recognition have focused on moderate emotions; to date, few studies have investigated the explicit and implicit processing of peak emotions. In the current study, we used images of the transient, peak-intensity expressions of athletes at the winning or losing point of a competition as materials, and investigated the diagnosability of peak facial expressions at both the implicit and explicit levels. In Experiment 1, participants were instructed to evaluate isolated faces, isolated bodies, and face-body compounds while eye movements were recorded. The results revealed that the isolated bodies and face-body congruent images were better recognized than the isolated faces and face-body incongruent images, indicating that the emotional information conveyed by facial cues was ambiguous and that body cues influenced facial emotion recognition. Furthermore, the eye movement records showed that participants displayed distinct gaze patterns for the congruent and incongruent compounds. In Experiment 2A, a subliminal affective priming task was used, with faces as primes and bodies as targets, to investigate the unconscious perception of peak facial expressions. The results showed that a winning face prime facilitated reactions to a winning body target, whereas a losing face prime inhibited reactions to a winning body target, suggesting that peak facial expressions can be perceived at the implicit level. In Experiment 2B, a revised subliminal affective priming task and a strict awareness test were used to examine the validity of the unconscious perception found in Experiment 2A; reaction times to both winning and losing body targets were influenced by the invisible peak facial expression primes. In general, the results indicate that peak facial expressions cannot be consciously recognized but can be perceived at the unconscious level.
Hontanilla, Bernardo; Vila, Antonio
To compare quantitatively the results obtained after hemihypoglossal nerve transposition and microvascular gracilis transfer associated with a cross facial nerve graft (CFNG) for reanimation of a paralysed face, 66 patients underwent hemihypoglossal transposition (n = 25) or microvascular gracilis transfer and CFNG (n = 41). The commissural displacement (CD) and commissural contraction velocity (CCV) in the two groups were compared using the system known as Facial clima. There was no inter-group variability between the groups (p > 0.10) in either variable. However, intra-group variability was detected between the affected and healthy side in the transposition group (p = 0.036 and p = 0.017, respectively). The transfer group had greater symmetry in displacement of the commissure (CD) and commissural contraction velocity (CCV) than the transposition group and patients were more satisfied. However, the transposition group had correct symmetry at rest but more asymmetry of CCV and CD when smiling.
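Commissural displacement (CD) and commissural contraction velocity (CCV) can be illustrated from a tracked trajectory of the oral commissure. A minimal sketch (Facial clima's actual processing is not described here, and the mean-velocity definition below is an assumption):

```python
import math

def commissure_metrics(positions_mm, fps):
    """Given per-frame (x, y) commissure positions from rest to peak smile,
    return (displacement in mm, mean contraction velocity in mm/s).
    Displacement is taken from the first (rest) to the last (peak) frame."""
    (x0, y0), (x1, y1) = positions_mm[0], positions_mm[-1]
    disp = math.hypot(x1 - x0, y1 - y0)
    duration_s = (len(positions_mm) - 1) / fps
    return disp, disp / duration_s

# Illustrative 4-frame track at 30 fps (positions in mm)
track = [(0.0, 0.0), (1.0, 0.5), (3.0, 1.5), (6.0, 3.0)]
cd, ccv = commissure_metrics(track, fps=30)
print(round(cd, 2), round(ccv, 1))  # → 6.71 67.1
```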
Valstar, Michel F.; Jiang, Bihan; Mehu, Marc; Pantic, Maja; Scherer, Klaus
Automatic Facial Expression Recognition and Analysis, in particular FACS Action Unit (AU) detection and discrete emotion detection, has been an active topic in computer science for over two decades. Standardisation and comparability has come some way; for instance, there exist a number of commonly u
Wang, Ming Yan; Zhu, Wei; Ma, Lin; Ma, Juan Juan; Zhang, Dong En; Tong, Zhi Wei; Chen, Jun
In this paper, we report a facile method to fabricate MnO2 nanoflowers loaded onto 3D RGO@nickel foam, showing enhanced biosensing activity due to the improved structural integration of the different electrode material components. When the as-prepared 3D hybrid electrodes were investigated as a binder-free biosensor, two well-defined and separate differential pulse voltammetric peaks for ractopamine (RAC) and salbutamol (SAL) were observed, indicating that simultaneous selective detection of both β-agonists is possible. The MnO2/RGO@NF sensor also demonstrated a linear relationship over a wide concentration range of 17 nM to 962 nM (R=0.9997) for RAC and 42 nM to 1463 nM (R=0.9996) for SAL, with detection limits of 11.6 nM for RAC and 23.0 nM for SAL. In addition, the developed MnO2/RGO@NF sensor was further used to detect RAC and SAL in pork samples, showing results comparable with those obtained by HPLC.
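The linear ranges quoted above come from a least-squares calibration of peak current against concentration. A minimal sketch with made-up numbers (not the paper's data) shows how an unknown concentration is back-calculated from a measured peak current:

```python
def linear_fit(x, y):
    """Ordinary least-squares slope and intercept for y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

# Illustrative calibration: concentration (nM) vs. DPV peak current (uA)
conc = [50, 200, 400, 800]
peak = [0.55, 2.0, 4.1, 8.0]
a, b = linear_fit(conc, peak)

# Back-calculate an unknown sample's concentration from its peak current
unknown_current = 3.0
est = (unknown_current - b) / a
print(round(est, 1))  # → 296.0 (nM)
```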
Mitrovic, Aleksandra; Goller, Jürgen
We showed that the looking time spent on faces is a valid covariate of beauty by testing the relation between facial attractiveness and gaze behavior. We presented natural scenes which always pictured two people, encompassing a wide range of facial attractiveness. Employing measurements of eye movements in a free viewing paradigm, we found a linear relation between facial attractiveness and gaze behavior: The more attractive the face, the longer and the more often it was looked at. In line with evolutionary approaches, the positive relation was particularly pronounced when participants viewed other sex faces.
Andrej N. Ilanković
Evaluation of efficacy after 6 weeks of specific rehabilitation treatment with the Vilan method showed: very satisfactory results in the correction of involuntary movements of the torso, bradykinesia of the hands, praxia and simple simultaneous movements; satisfactory correction of involuntary movements of the extremities, walking and facial gestures. No correction was achieved in: involuntary movements of the mouth and face, tremor, ideation, ideo‑motor series and complex simultaneous movements.
Strelnikov, Kuzma; Foxton, Jessica; Marx, Mathieu; Barone, Pascal
The visual cues involved in auditory speech processing are not restricted to information from lip movements but also include head or chin gestures and facial expressions such as eyebrow movements. The fact that visual gestures precede the auditory signal implies that visual information may influence the auditory activity. As visual stimuli are very close in time to the auditory information for audiovisual syllables, the cortical response to them usually overlaps with that for the auditory stimulation; the neural dynamics underlying the visual facilitation for continuous speech therefore remain unclear. In this study, we used a three-word phrase to study continuous speech processing. We presented video clips with even (unemphasized) phrases as the frequent stimuli and with one word visually emphasized by the speaker as the non-frequent stimuli. Negativity in the resulting ERPs was detected after the start of the emphasizing articulatory movements but before the auditory stimulus, a finding that was confirmed by the statistical comparisons of the audiovisual and visual stimulation. No such negativity was present in the control visual-only condition. The propagation of this negativity was observed between the visual and fronto-temporal electrodes. Thus, in continuous speech, the visual modality evokes predictive coding for the auditory speech, which is analysed by the cerebral cortex in the context of the phrase even before the arrival of the corresponding auditory signal.
Posamentier, Mette T; Abdi, Hervé
This paper reviews processing of facial identity and expressions. The issue of the independence of these two systems has been addressed from different approaches over the past 25 years. More recently, neuroimaging techniques have provided researchers with new tools to investigate how facial information is processed in the brain. First, findings from "traditional" approaches to identity and expression processing are summarized. The review then covers findings from neuroimaging studies on face perception, recognition, and encoding. Processing of the basic facial expressions is detailed in light of behavioral and neuroimaging data. Whereas data from experimental and neuropsychological studies support the existence of two systems, the neuroimaging literature yields a less clear picture because it shows considerable overlap in activation patterns in response to the different face-processing tasks. Further, activation patterns in response to facial expressions support the notion of distinct neural substrates for processing different facial expressions.
MARKIN Evgeny; PRAKASH Edmond C.
Facial expression recognition consists of determining what kind of emotional content is presented in a human face. The problem is a complex area for exploration, since it encompasses face acquisition, facial feature tracking, and facial expression classification. Facial feature tracking is of most interest here. The Active Appearance Model (AAM) enables accurate tracking of facial features in real time, but does not handle occlusions and self-occlusions. In this paper we propose a solution to improve the accuracy of the fitting technique. The idea is to include occluded images in the AAM training data. We demonstrate the results by running experiments using a gradient descent algorithm for fitting the AAM. Our experiments show that using the fitting algorithm with occluded training data improves the fitting quality of the algorithm.
This report is about facial asymmetry, its connection to emotional expression, and methods of measuring facial asymmetry in videos of faces. The research was motivated by two factors: firstly, there was a real opportunity to develop a novel measure of asymmetry that required minimal human involvement and that improved on earlier measures in the literature; and secondly, the study of the relationship between facial asymmetry and emotional expression is both interesting in its own right, and important because it can inform neuropsychological theory and answer open questions concerning emotional processing in the brain. The two aims of the research were: first, to develop an automatic frame-by-frame measure of facial asymmetry in videos of faces that improved on previous measures; and second, to use the measure to analyse the relationship between facial asymmetry and emotional expression, and connect our findings with previous research on the relationship.
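A frame-by-frame asymmetry measure of the kind described can be illustrated by mirroring one half of a midline-aligned intensity frame onto the other and averaging the absolute difference. This is a generic sketch, not the report's actual algorithm:

```python
def asymmetry_score(frame):
    """Mean absolute left/right intensity difference for one video frame.

    `frame` is a list of pixel rows; each row has even length and the
    facial midline is assumed to sit at the centre column. 0.0 means a
    perfectly mirror-symmetric frame; larger values mean more asymmetry."""
    total, count = 0.0, 0
    for row in frame:
        half = len(row) // 2
        left, right = row[:half], row[half:][::-1]  # mirror the right half
        for l, r in zip(left, right):
            total += abs(l - r)
            count += 1
    return total / count

symmetric = [[10, 20, 20, 10], [5, 8, 8, 5]]
skewed = [[10, 20, 20, 40], [5, 8, 8, 5]]
print(asymmetry_score(symmetric), asymmetry_score(skewed))  # → 0.0 7.5
```

Applied per frame of a video, this yields the automatic asymmetry time series the report aims at, with no human involvement beyond midline alignment.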
Dalla Toffola, Elena; Pavese, Chiara; Cecini, Miriam; Petrucci, Lucia; Ricotti, Susanna; Bejor, Maurizio; Salimbeni, Grazia; Biglioli, Federico; Klersy, Catherine
Our study evaluates the grade and timing of recovery in 30 patients with complete facial paralysis (House-Brackmann grade VI) treated with hypoglossal-facial nerve (XII-VII) anastomosis and a long-term rehabilitation program, consisting of exercises in facial muscle activation mediated by tongue movement and synkinesis control with mirror feedback. Reinnervation after XII-VII anastomosis occurred in 29 patients, on average 5.4 months after surgery. Three years after the anastomosis, 23.3% of patients had grade II, 53.3% grade III, 20% grade IV and 3.3% grade VI ratings on the House-Brackmann scale. Time to reinnervation was associated with the final House-Brackmann grade. Our study demonstrates that patients undergoing XII-VII anastomosis and a long-term rehabilitation program display a significant recovery of facial symmetry and movement. The recovery continues for at least three years after the anastomosis, meaning that prolonged follow-up of these patients is advisable.
Background-The maxillary artery is recognized as the main vascular supply of the facial bones; nonetheless clinical evidence supports a co-dominant role for the facial artery. This study explores the extent of the facial skeleton within a facial allograft that can be harvested based on the facial artery. Methods-Twenty-three cadaver heads were used in this study. In 12 heads, the right facial, superficial temporal and maxillary arteries were injected. In 1 head, facial artery angiography w...
Yordany Boza Mejias
Full Text Available Background: odontogenic facial cellulitis is an acute inflammatory process manifested in very different ways, with clinical presentations ranging from harmless, well-defined processes to diffuse and progressive ones that may develop complications leaving the patient in a critical condition, even risking their life. Objective: to characterize the behavior of odontogenic facial cellulitis. Methods: a descriptive case series study was conducted at the dental clinic of Aguada de Pasajeros, Cienfuegos, from September 2010 to March 2011. It included 56 patients who met the inclusion criteria. The variables analyzed included sex, age, teeth and regions affected, causes of cellulitis and prescribed treatment. Results: no sex predilection was observed; the lower molars and the submandibular anatomical region were the most affected (50% and 30.4% respectively), with tooth decay being the main cause of this condition (51.7%). Opening access was not performed for all patients in the emergency service. Extraction of the causal tooth was not commonly done early, according to the prescribed antibiotic group. Thermotherapy with warm fomentations and saline mouthwashes was the most commonly prescribed treatment, and the most widely used group of antibiotics was the penicillins. Conclusions: dental caries were the major cause of odontogenic cellulitis. There are still difficulties with the implementation of opening access.
José Ricardo Gurgel Testa
Full Text Available Facial paralysis caused by cholesteatoma is uncommon. The portions of the nerve most frequently involved are the horizontal (tympanic) segment and the region of the second genu. When cholesteatomas extend over the anterior epitympanic space, the facial nerve is placed in jeopardy in the region of the geniculate ganglion. The aetiology can be related to compression of the nerve followed by impairment of its vascular supply, as well as to the possible action of neurotoxic substances produced by the tumour matrix or by the bacteria it contains. AIM: To evaluate the incidence, clinical features and treatment of facial paralysis due to cholesteatoma. STUDY DESIGN: Retrospective clinical study. MATERIAL AND METHOD: Retrospective study of ten cases of facial paralysis due to cholesteatoma, selected from 206 facial nerve decompressions of various aetiologies performed at UNIFESP-EPM over the last ten years. RESULTS: The incidence of facial paralysis due to cholesteatoma in this study was 4.85%, with a female predominance (60%). The mean age of the patients was 39 years. The duration and initial grade of the paralysis, together with the extension of the lesion, were important for the functional recovery of the facial nerve. CONCLUSION: Early surgical treatment is fundamental to an adequate functional result. In cases of rupture or intense fibrosis of the nerve tissue, nerve grafting (great auricular/sural) and/or hypoglossal-facial anastomosis may be suggested.
Saatci, I. [Dept. of Radiology, Hacettepe Univ., Hospital Sihhiye, Ankara (Turkey); Sahintuerk, F. [Dept. of Radiology, Hacettepe Univ., Hospital Sihhiye, Ankara (Turkey); Sennaroglu, L. [Dept. of Otolaryngology, Head and Neck Surgery, Hacettepe Univ., Hospital Sihhiye, Ankara (Turkey); Boyvat, F. [Dept. of Radiology, Hacettepe Univ., Hospital Sihhiye, Ankara (Turkey); Guersel, B. [Dept. of Otolaryngology, Head and Neck Surgery, Hacettepe Univ., Hospital Sihhiye, Ankara (Turkey); Besim, A. [Dept. of Radiology, Hacettepe Univ., Hospital Sihhiye, Ankara (Turkey)
The purpose of this prospective study was to define the enhancement pattern of the facial nerve in idiopathic facial paralysis (Bell's palsy) on magnetic resonance (MR) imaging with routine doses of gadolinium-DTPA (0.1 mmol/kg). Using a 0.5 T imager, 24 patients were examined with a mean interval of 13.7 days between the onset of symptoms and the MR examination. Contralateral asymptomatic facial nerves constituted the control group, and five of the normal facial nerves (20.8%) showed enhancement confined to the geniculate ganglion. Hence, contrast enhancement limited to the geniculate ganglion in the abnormal facial nerve (3 of 24) was regarded as equivocal. Enhancement of other segments, alone or associated with geniculate ganglion enhancement, was not encountered in any of the normal facial nerves; it was therefore considered abnormal and was noted in 70.8% of the symptomatic facial nerves. The most frequently enhancing segments were the geniculate ganglion and the distal intracanalicular segment. (orig.)
Bran, Gregor M; Börjesson, Pontus K E; Boahene, Kofi D; Gosepath, Jan; Lohuis, Peter J F M
Delayed recovery after facial palsy results in aberrant nerve regeneration with symptomatic movement disorders, summarized as the postparalytic facial nerve syndrome. The authors present an alternative surgical approach for improvement of periocular movement disorders in patients with postparalytic facial nerve syndrome. The authors proposed that endoscopic brow lift leads to an improvement of periocular movement disorders by reducing pathologically raised levels of afferent input. Eleven patients (seven women and four men) with a mean age of 54 years (range, 33 to 85 years) and with postparalytic facial nerve syndrome underwent endoscopic brow lift under general anesthesia. Patients' preoperative condition was compared with their postoperative condition using a retrospective questionnaire. Subjects were also asked to compare the therapeutic effectiveness of endoscopic brow lift and botulinum toxin type A. Mean follow-up was 52 months (range, 22 to 83 months). No intraoperative or postoperative complications occurred. During follow-up, patients and physicians observed an improvement of periorbital contractures and oculofacial synkinesis. Scores on quality of life improved significantly after endoscopic brow lift. Best results were obtained when botulinum toxin type A was adjoined after the endoscopic brow lift. Patients described a cumulative therapeutic effect. These findings suggest endoscopic brow lift as a promising additional treatment modality for the treatment of periocular postparalytic facial nerve syndrome-related symptoms, leading to an improved quality of life. Even though further prospective investigation is needed, a combination of endoscopic brow lift and postsurgical botulinum toxin type A administration could become a new therapeutic standard.
Jorge Jr, Jose Jarjura; Pialarissi, Paulo Roberto; Borges, Godofredo Campos; Squella, Sara Agueda Fuenzalida; de Gouveia, Maria de Fátima; Saragiotto Jr, Jose Carlos; Gonçalves, Victor Ribeiro
Different methods used to evaluate the movements of the face have varying degrees of subjectivity and reliability. The authors discuss the ease of using these methods in clinical practice and their accuracy in scientific research. The aim was to obtain a standard for normal facial muscle movements using an objective method, the Vicon system. Light-reflective markers were placed at points of interest on the faces of 12 normal subjects. The movements were captured by cameras that sent the images to a computer. The points' displacements were measured between rest and maximum muscle contraction, and the means and standard deviations (SD) were calculated. When smiling, the displacement of the oral commissures varied between 6.45 and 12.11 mm, with a mean of 9.28 mm and SD of 2.83; for lifting the eyebrow, between 6.0 and 13.08 mm, with a mean of 10.57 mm and SD of 2.51; for eyelid movement, between 6.89 and 11.29 mm, with a mean of 9.09 mm and SD of 2.20; for wrinkling the forehead, between 4.16 and 10.85 mm, with a mean of 7.56 mm and SD of 3.29. The authors obtained normal patterns for facial muscle contraction.
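The normal values above are means and standard deviations of marker displacements between rest and maximal contraction. A minimal sketch of that computation (sample SD and illustrative coordinates assumed; not the study's data):

```python
import math

def displacement_stats(rest, contracted):
    """Mean and sample SD of Euclidean marker displacements (mm) between
    rest and maximum contraction, across subjects."""
    d = [math.hypot(x1 - x0, y1 - y0)
         for (x0, y0), (x1, y1) in zip(rest, contracted)]
    mean = sum(d) / len(d)
    var = sum((di - mean) ** 2 for di in d) / (len(d) - 1)
    return mean, math.sqrt(var)

# Illustrative oral-commissure positions (mm) for three subjects
rest = [(0.0, 0.0), (0.0, 0.0), (0.0, 0.0)]
smile = [(6.0, 3.0), (8.0, 4.0), (10.0, 5.0)]
mean, sd = displacement_stats(rest, smile)
print(round(mean, 2), round(sd, 2))  # → 8.94 2.24
```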
Humayra Binte Ali
Full Text Available The face is one of the most important biometric traits for its uniqueness and robustness. For this reason, researchers from many diverse fields, such as security, psychology, image processing, and computer vision, have studied face detection as well as facial expression recognition. Subspace learning methods work very well for recognizing similar facial features. Among subspace learning techniques, PCA, ICA and NMF are the most prominent. In this work, our main focus is on Independent Component Analysis (ICA). Among the several architectures of ICA, we used the FastICA and LS-ICA algorithms. We applied FastICA to whole faces and to different facial parts to analyze the influence of the different parts on basic facial expressions. Our extended algorithms, WAPA-FastICA and OEPA-FastICA, are discussed in the proposed algorithm section. Locally Salient ICA (LS-ICA) is implemented on the whole face using 8x8 windows to find the more prominent facial features for facial expression. The experiments show that our proposed OEPA-FastICA and WAPA-FastICA outperform the existing prevalent Whole-FastICA and LS-ICA methods.
Rafailovich-Sokolov, Sara; Guan, E.; Afriat, Isablle; Rafailovich, Miriam; Sokolov, Jonathan; Clark, Richard
Digital image analysis techniques have been extensively used in facial recognition. To date, most static facial characterization techniques, which are usually based on Fourier transform techniques, are sensitive to lighting, shadows, or modification of appearance by makeup, natural aging or surgery. In this study we have demonstrated that it is possible to uniquely identify faces by analyzing the natural motion of facial features with Digital Image Speckle Correlation (DISC). Human skin has a natural pattern produced by the texture of the skin pores, which is easily visible with conventional digital cameras of resolution greater than 4 megapixels. Hence the application of the DISC method to the analysis of facial motion appears to be very straightforward. Here we demonstrate that the vector diagrams produced by this method for facial images are directly correlated to the underlying muscle structure, which is unique to an individual and is not affected by lighting or makeup. Furthermore, we will show that this method can also be used for medical diagnosis in early detection of facial paralysis and other forms of skin disorders.
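DISC works by finding, for each small subwindow of a reference image, the best-matching position in a later frame and reporting the shift as a displacement vector; the field of vectors traces the skin (and hence muscle) motion. A toy integer-shift sketch using a sum-of-squared-differences match (real DISC uses subpixel normalized cross-correlation, which this deliberately simplifies):

```python
def best_shift(ref, cur, x, y, w, search):
    """Displacement (dx, dy) of the w*w window at (x, y) in `ref` that
    minimises the sum of squared differences against `cur`."""
    def ssd(dx, dy):
        return sum((ref[y + j][x + i] - cur[y + dy + j][x + dx + i]) ** 2
                   for j in range(w) for i in range(w))
    shifts = [(dx, dy) for dy in range(-search, search + 1)
              for dx in range(-search, search + 1)]
    return min(shifts, key=lambda s: ssd(*s))

# Toy "skin texture": a bright speckle that moves one pixel to the right
ref = [[0] * 6 for _ in range(6)]
cur = [[0] * 6 for _ in range(6)]
ref[2][2] = 9
cur[2][3] = 9
print(best_shift(ref, cur, x=1, y=1, w=3, search=1))  # → (1, 0)
```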
Full Text Available Simulation theories propose that observing another’s facial expression activates sensorimotor representations involved in the execution of that expression, facilitating recognition processes. The mirror neuron system (MNS is a potential mechanism underlying simulation of facial expressions, with like neural processes activated both during observation and performance. Research with monkeys and adult humans supports this proposal, but so far there have been no investigations of facial MNS activity early in human development. The current study used electroencephalography (EEG to explore mu rhythm desynchronization, an index of MNS activity, in 30-month-old children as they observed videos of dynamic emotional and non-emotional facial expressions, as well as scrambled versions of the same videos. We found significant mu desynchronization in central regions during observation and execution of both emotional and non-emotional facial expressions, which was right-lateralized for emotional and bilateral for non-emotional expressions during observation. These findings support previous research suggesting movement simulation during observation of facial expressions, and are the first to provide evidence for sensorimotor activation during observation of facial expressions, consistent with a functioning facial MNS at an early stage of human development.
Jäncke, L; Kaufmann, N
Two experiments were undertaken to examine whether facial responses to odors correlate with the hedonic odor evaluation. Experiment 1 examined whether subjects (n = 20) spontaneously generated facial movements associated with odor evaluation when they were tested in private. To measure facial responses, EMG was recorded over six muscle regions (M. corrugator supercilii, M. procerus, M. nasalis, M. levator, M. orbicularis oculi and M. zygomaticus major) using surface electrodes. In experiment 2 the experimental group (n = 10) smelled the odors while being visually inspected by the experimenter sitting in front of the test subjects. The control group (n = 10) performed the same experimental condition as the subjects participating in experiment 1. Facial EMG over four mimetic muscle regions (M. nasalis, M. levator, M. zygomaticus major, M. orbicularis oculi) was measured while subjects smelled different odors. The main findings of this study may be summarized as follows: (i) there was no correlation between valence ratings and facial EMG responses; (ii) pleasant odors did not evoke smiles when subjects smelled the odors in private; (iii) in solitude, highly concentrated malodors evoked facial EMG reactions of those mimetic muscles which are mainly involved in generating a facial display of disgust; (iv) subjects confronted with an audience showed stronger facial reactions over the periocular and cheek region (indicative of a smile) during the smelling of pleasant odors than those who smelled these odors in private; (v) subjects confronted with an audience showed stronger facial reactions over the M. nasalis region (indicative of a display of disgust) during the smelling of malodors than those who smelled the malodors in private. These results were taken as evidence for a more social-communicative function of facial displays and argue strongly against the reflexive-hedonic interpretation of facial displays to odors proposed by Steiner.
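Finding (i), the absence of a correlation between valence ratings and facial EMG responses, rests on a correlation statistic such as Pearson's r. A minimal sketch with illustrative numbers (not the study's data):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

valence = [1, 2, 3, 4, 5]           # hedonic ratings, unpleasant → pleasant
emg_uv = [3.0, 2.8, 3.2, 2.9, 3.1]  # near-flat EMG response (illustrative)
print(round(pearson_r(valence, emg_uv), 2))  # weak correlation
```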
Lindsay, Robin W; Bhama, Prabhat; Hadlock, Tessa A
Facial paralysis can contribute to disfigurement, psychological difficulties, and an inability to convey emotion via facial expression. In patients unable to perform a meaningful smile, free gracilis muscle transfer (FGMT) can often restore smile function. However, little is known about the impact on disease-specific quality of life. To determine quantitatively whether FGMT improves quality of life in patients with facial paralysis. Prospective evaluation of 154 FGMTs performed at a facial nerve center on 148 patients with facial paralysis. The Facial Clinimetric Evaluation (FaCE) survey and Facial Assessment by Computer Evaluation software (FACE-gram) were used to quantify quality-of-life improvement, oral commissure excursion, and symmetry with smile. Free gracilis muscle transfer. Change in FaCE score, oral commissure excursion, and symmetry with smile. There were 127 successful FGMTs on 124 patients and 14 failed procedures on 13 patients. Mean (SD) FaCE score increased significantly after successful FGMT (42.30 [15.9] vs 58.5 [17.60]; paired 2-tailed t test). Free gracilis muscle transfer has become a mainstay in the management armamentarium for patients with severe reduction in oral commissure movement after facial nerve insult and recovery. We found a quantitative improvement in quality of life after FGMT in patients who could not recover a meaningful smile after facial nerve insult. Quality-of-life improvement was not statistically different between donor nerve groups or facial paralysis types.
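The pre/post FaCE comparison is a paired two-tailed t test; its t statistic is the mean within-patient difference divided by its standard error. A minimal sketch (the scores are illustrative, not the study data):

```python
import math

def paired_t(pre, post):
    """t statistic and degrees of freedom for a paired t test on
    within-subject pre/post differences."""
    d = [b - a for a, b in zip(pre, post)]
    n = len(d)
    mean = sum(d) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in d) / (n - 1))
    return mean / (sd / math.sqrt(n)), n - 1

pre = [40, 35, 50, 42, 38]   # illustrative FaCE scores before FGMT
post = [60, 48, 70, 55, 52]  # after FGMT
t, df = paired_t(pre, post)
print(round(t, 2), df)  # → 9.74 4
```

The t statistic is then compared against the t distribution with n−1 degrees of freedom to obtain the two-tailed P value.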
Heppt, Werner J; Vent, Julia
Beauty has been an intriguing issue since the evolution of culture in mankind. Even the Neanderthals are believed to have applied makeup to enhance facial structures and thus underline beauty. The determinants of beauty and aesthetics have been defined by artists and scientists alike. This article gives an overview of the evolution of the beauty concept and the significance of the facial profile. It aims at sharpening the senses of the facial plastic surgeon for analyzing the patient's face, consulting the patient on feasible options, and planning and conducting surgery in the most individualized way.
Full Text Available A case of incomplete traumatic facial diplegia with left partial hearing loss following head injury is reported. X-rays showed fractures of the occipital and left temporal bones. Some considerations are offered in an attempt to relate these manifestations to fractures of the temporal bone, and a review of traumatic facial paralysis is made.
Full Text Available A case of bilateral facial paralysis (facial diplegia) following meningococcal meningitis and herpes simplex infection is reported. After discussing the several diseases and syndromes included in the differential diagnosis of bilateral facial nerve paralysis, the author concludes that the aetiology is herpetic.
Lautenbacher, Stefan; Kunz, Miriam
The analysis of the facial expression of pain promises to be one of the most sensitive tools for the detection of pain in patients with moderate to severe forms of dementia, who can no longer self-report pain. Fine-grain analysis using the Facial Action Coding System (FACS) is possible in research b
Akechi, Hironori; Senju, Atsushi; Kikuchi, Yukiko; Tojo, Yoshikuni; Osanai, Hiroo; Hasegawa, Toshikazu
Two experiments investigated whether children with autism spectrum disorder (ASD) integrate relevant communicative signals, such as gaze direction, when decoding a facial expression. In Experiment 1, typically developing children (9-14 years old; n = 14) were faster at detecting a facial expression accompanying a gaze direction with a congruent…
Christen, H. J.; Bartlau, N.; Hanefeld, F.; Eiffert, H.; Thomssen, R.
27 consecutive cases with acute peripheral facial palsy were studied for Lyme borreliosis. In 16 out of 27 children Lyme borreliosis could be diagnosed by detection of specific IgM antibodies in CSF. CSF findings allow a clear distinction according to etiology. All children with facial palsy due to
Detection of feline herpes virus 1 via polymerase chain reaction and immunohistochemistry in cats with ulcerative facial dermatitis, eosinophilic granuloma complex reaction patterns and mosquito bite hypersensitivity.
Persico, Paola; Roccabianca, Paola; Corona, Antonio; Vercelli, Antonella; Cornegliani, Luisa
Ulcerative dermatitis caused by feline herpes virus 1 (FHV-1) is an uncommon disease characterized by cutaneous ulcers secondary to epidermal, adnexal and dermal necrosis. Differential diagnoses for FHV-1 lesions include, but are not limited to, mosquito bite hypersensitivity and eosinophilic granuloma complex. Histopathological diagnosis of FHV-1 dermatitis is based on the detection of the intranuclear inclusion bodies. In cases where intranuclear inclusions are missing but clinical and histological findings are compatible with FHV-1 dermatitis, immunohistochemistry (IHC) and PCRs have been used. In this retrospective study, we evaluated the presence of FHV-1 by IHC and PCR in skin biopsies and compared the results of the two tests. Sixty-four skin biopsy specimens from cats with compatible lesions were reviewed and tested via PCR and IHC for evidence of FHV-1. Polymerase chain reaction was positive in 12 of 64 biopsies; PCR and IHC were positive only in two of 64 biopsies, and these cases were considered true positive cases. The higher number of PCR-positive cases was possibly attributed to amplification of viral DNA from a live attenuated vaccination, but a previous FHV-1 infection with subsequent amplification of latently inserted FHV-1 could not be excluded. If clinical signs and histopathology suggest FHV-1 infection in the absence of typical inclusion bodies, IHC is the preferred diagnostic test; PCR may be useful for initial screening, but due to false positives is not sufficient for a definitive diagnosis.
Li, Hong; Williams, Trevor
Orofacial clefts are the most frequent craniofacial defects, affecting 1.5 in 1,000 newborns worldwide. Orofacial clefting is caused by abnormal facial development. In human and mouse, initial growth and patterning of the face rely on several small buds of tissue, the facial prominences. The face is derived from six main prominences: the paired frontal nasal processes (FNP), maxillary prominences (MxP), and mandibular prominences (MdP). These prominences consist of swellings of mesenchyme encased in an overlying epithelium. Studies in multiple species have shown that signaling crosstalk between facial ectoderm and mesenchyme is critical for shaping the face. Yet mechanistic details concerning the genes involved in these signaling relays are lacking. One way to gain a comprehensive understanding of gene expression, transcription factor binding, and chromatin marks associated with the developing facial ectoderm and mesenchyme is to isolate and characterize the separated tissue compartments. Here we present a method for separating facial ectoderm and mesenchyme at embryonic day (E) 10.5, a critical developmental stage in mouse facial formation that precedes fusion of the prominences. Our method is adapted from the approach we have previously used for dissecting facial prominences. In this earlier study we had employed inbred C57BL/6 mice, as this strain has become a standard for genetics, genomics, and facial morphology. Here, though, due to the more limited quantities of tissue available, we have utilized the outbred CD-1 strain, which is cheaper to purchase, more robust for husbandry, and tends to produce more embryos (12-18) per litter than any inbred mouse strain. Following embryo isolation, the neutral protease Dispase II was used to treat the whole embryo. Then, the facial prominences were dissected out, and the facial ectoderm was separated from the mesenchyme. This method keeps both the facial ectoderm and mesenchyme intact. The samples obtained using this
Hampton, Anna L; Colby, Lesley A; Bergin, Ingrid L
Simian retrovirus type D (SRVD) is a naturally occurring betaretrovirus in nonhuman primates of the genus Macaca. Infection can lead to a variety of clinical, hematologic, and histopathologic abnormalities. We report an unusual clinical presentation of facial paralysis and histologic lymphocytic neuritis in an SRVD type 2 (SRVD2)-infected rhesus macaque (Macaca mulatta) with a catheter-associated vena caval thrombus, anemia, thrombocytopenia, and multisystemic lymphoid hyperplasia. At initial presentation, a right atrial mass was detected by echocardiography. The macaque was clinically asymptomatic but had persistent anemia, thrombocytopenia, hyperglobulinemia, and later neutropenia. It was seropositive for SRV and PCR-positive for SRVD 2. Approximately 1 mo after initial presentation, the macaque developed right facial paralysis and was euthanized. Histologic lesions included lymphoplasmacytic aggregates affecting multiple organs, consistent with SRV-related lymphoid hyperplasia. The right facial nerve showed lymphoplasmacytic inflammation. The nerve itself was negative immunohistochemically for SRV antigen, but antigen was present infrequently in pericapillary lymphoid cells within the facial nerve and abundantly within lymphoid aggregates in the adjacent parotid salivary gland, bone marrow, and soft tissue. Known neurotropic viruses could not be identified. Given the widespread inflammation in this macaque, particularly in the area surrounding the facial nerve, lymphocytic neuritis and facial paralysis likely were an indirect effect of SRV infection due to local extension of SRV-related inflammation in the surrounding tissue.
Full Text Available Accidental injury to the facial nerve where bony canal defects are present may result in facial nerve dysfunction during otological surgery. Therefore, it is critical to know the incidence and type of facial nerve dehiscences in the presence of normal development of the facial canal. The aim of this study is to review the site and type of such bony defects in 144 patients operated on for facial paralysis, myringoplasty, stapedotomy, middle ear exploration for sudden hearing loss, and so forth, other than chronic suppurative otitis media with or without cholesteatoma, middle ear tumors, and anomaly. Correlation of intraoperative findings with preoperative computerized tomography was also analyzed in 35 patients. In conclusion, one out of every 10 surgical cases may have dehiscence of the facial canal, which always has to be borne in mind during surgical manipulation of the middle ear. Computerized tomography has some limitations in evaluating a dehiscent facial canal due to high false negative and false positive rates.
Full Text Available Facial self-resemblance has been proposed to serve as a kinship cue that facilitates cooperation between kin. In the present study, facial resemblance was manipulated by morphing stimulus faces with the participants' own faces or control faces (resulting in self-resemblant or other-resemblant composite faces). A norming study showed that the perceived degree of kinship was higher for the participants and the self-resemblant composite faces than for actual first-degree relatives. Effects of facial self-resemblance on trust and cooperation were tested in a paradigm that has proven to be sensitive to facial trustworthiness, facial likability, and facial expression. First, participants played a cooperation game in which the composite faces were shown. Then, likability ratings were assessed. In a source memory test, participants were required to identify old and new faces, and were asked to remember whether the faces belonged to cooperators or cheaters in the cooperation game. Old-new recognition was enhanced for self-resemblant faces in comparison to other-resemblant faces. However, facial self-resemblance had no effects on the degree of cooperation in the cooperation game, on the emotional evaluation of the faces as reflected in the likability judgments, or on the expectation that a face belonged to a cooperator rather than to a cheater. Therefore, the present results are clearly inconsistent with the assumption of an evolved kin recognition module built into the human face recognition system.
Case History: Ms. Zheng from Singapore, aged 51 years, paid her first visit on Aug. 30, 2006, with the chief complaint of left facial paralysis accompanied by facial spasm for 5 years. The patient developed left facial paralysis in 2001, which was not completely cured and progressed into facial spasm one year later. Although she had received various treatments, including surgical operation, the disease was not cured. At presentation she had discomfort and a dull sensation in the left facial area, mainly accompanied by twitching of the peripheral nerve of the eye, together with posterior auricular muscle tension and discomfort. She had fairly good sleep and appetite, but a slightly quick temper. Physical examination showed that the patient had a slightly thin body figure, flushed face, and good mental state. The blood pressure was 110/75 mmHg and the heart rate was 85 beats/min. No abnormal signs were found in the heart and lungs. The facial examination showed mild swelling of the left side of the face, incomplete closing of the eyelids, disappearance of wrinkles on the forehead, a shallow nasolabial groove, and obvious muscle tension and tenderness in the left opisthotic region. Careful observation revealed slight facial muscular twitching. The tongue proper was red with little coating, and the pulse thready-wiry.
汪海彬; 卢家楣; 姚本先; 桑青松; 陈宁; 唐晓晨
ERP and eye-tracking techniques were used to examine the influence of pre-service teachers' emotional complexity on the processing of emotional faces. Using a face recognition paradigm, the ERP and eye-movement differences between pre-service teachers high and low in emotional complexity were compared while they processed the four basic emotion categories. Behavioral results showed that, except for happiness, the high group was significantly more accurate and significantly faster than the low group for the other three emotions. ERP results showed that the high group had significantly larger amplitudes than the low group on the P100, N170, and LPP components, and significantly smaller amplitudes on the VPP, P200, and N200 components. Eye-movement results showed that the high group had a larger total number of fixations, higher fixation frequency, and larger pupil diameter than the low group, but shorter saccade durations and smaller saccade amplitudes. These results indicate that emotional complexity affects the processing of emotional faces: high emotional complexity makes individuals more sensitive to emotion-category information and supports higher processing efficiency and a better processing mode. Propositional knowledge of emotional complexity is also called emotional awareness, which has been regarded as the most fundamental skill of emotional intelligence. It refers to the ability to recognize and describe one's own and others' emotions. This ability is important to individual mental health and interpersonal interaction. In the present study, the electrophysiological correlates and eye movements of facial expression processing among pre-service teachers with different levels of emotional awareness were investigated. To screen participants of high or low emotional awareness, 800 pre-service teachers were surveyed with the Chinese version of the Levels of Emotional Awareness Scale (LEAS). As paid volunteers, 40 pre-service teachers were recruited to take part in Study 1 and another 60 pre-service teachers in Study 2. The participants were all right-handed, had normal or corrected-to-normal vision, and had no neurological or psychological disorders. This study was approved by the local ethics committee
Popa, M.C.; Rothkrantz, L.J.M.; Wiggers, P.; Braspenning, R.A.C.; Shan, C.
Many approaches to facial expression recognition focus on assessing the six basic emotions (anger, disgust, happiness, fear, sadness, and surprise). Real-life situations have proven to produce many more subtle facial expressions. A reliable way of analyzing facial behavior is the Facial Action Coding System (FACS).
Facial disfigurements can result from oncologic surgery, trauma and congenital deformities. These disfigurements can be rehabilitated with facial prostheses. Facial prostheses are usually made of silicones. A problem of facial prostheses is that microorganisms can colonize their surface. It is hard
Hofer, Stefan O P; Mureau, Marc A M
Aesthetic facial reconstruction is a challenging art. Improving outcomes in aesthetic facial reconstruction requires a thorough understanding of the basic principles of the functional and aesthetic requirements for facial reconstruction. From there, further refinement and attention to detail can be provided. This paper discusses basic principles of aesthetic facial reconstruction.
Full Text Available Facial melanoses (FM) are a common presentation in Indian patients, causing cosmetic disfigurement with considerable psychological impact. Some of the well-defined causes of FM include melasma, Riehl's melanosis, lichen planus pigmentosus, erythema dyschromicum perstans (EDP), erythrosis, and poikiloderma of Civatte, but there is considerable overlap in features amongst these clinical entities. The etiology in most cases is unknown, but some factors are implicated, such as UV radiation in melasma, exposure to chemicals in EDP, and exposure to allergens in Riehl's melanosis. Diagnosis is generally based on clinical features. The treatment of FM includes removal of aggravating factors, vigorous photoprotection, and some form of active pigment reduction, either with topical agents or physical modes of treatment. Topical agents include hydroquinone (HQ), the most commonly used agent, often in combination with retinoic acid, corticosteroids, azelaic acid, kojic acid, and glycolic acid. Chemical peels are important modalities of physical therapy; other forms include lasers and dermabrasion.
Dost, Michael; Vogel, Dietmar; Winkler, Thomas; Vogel, Juergen; Erb, Rolf; Kieselstein, Eva; Michel, Bernd
Cross-correlation analysis of digitised grey-scale patterns is based on at least two images which are compared to each other. Comparison is performed by means of a two-dimensional cross-correlation algorithm applied to a set of local intensity submatrices taken from the pattern matrices of the reference and comparison images in the surroundings of predefined points of interest. Established as an outstanding NDE tool for 2D and 3D deformation field analysis with a focus on micro- and nanoscale applications (microDAC and nanoDAC), the method exhibits additional potential for far wider applications that could be used for advancing homeland security. Because the cross-correlation algorithm in some ways seems to imitate some of the "smart" properties of human vision, this "field-of-surface-related" method can provide alternative solutions to some object and process recognition problems that are difficult to solve with more classic "object-related" image processing methods. Detecting differences between two or more images using cross-correlation techniques can open new and unusual applications in the identification and detection of hidden objects or objects of unknown origin, in movement or displacement field analysis, and in some aspects of biometric analysis that could be of special interest for homeland security.
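The core idea, matching a local intensity submatrix from the reference image against a search window in the comparison image and taking the correlation peak as the local displacement, can be sketched in a few lines of NumPy. The window size, search range, and normalized-correlation scoring below are illustrative assumptions, not the microDAC/nanoDAC implementation.

```python
import numpy as np

def ncc_shift(ref, cur, point, half=8, search=5):
    """Estimate the (row, col) displacement of the patch centred at `point`
    by maximizing normalized cross-correlation over a small search window."""
    r, c = point
    tpl = ref[r - half:r + half + 1, c - half:c + half + 1].astype(float)
    tpl -= tpl.mean()
    tnorm = np.sqrt((tpl ** 2).sum())
    best, best_dr, best_dc = -np.inf, 0, 0
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            win = cur[r + dr - half:r + dr + half + 1,
                      c + dc - half:c + dc + half + 1].astype(float)
            win -= win.mean()
            denom = tnorm * np.sqrt((win ** 2).sum())
            if denom == 0:
                continue
            score = (tpl * win).sum() / denom  # normalized cross-correlation
            if score > best:
                best, best_dr, best_dc = score, dr, dc
    return best_dr, best_dc

# Synthetic check: shift a random texture by (2, -3) and recover the shift.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, (2, -3), axis=(0, 1))
print(ncc_shift(img, shifted, (32, 32)))  # → (2, -3)
```

Repeating this over a grid of points of interest yields the dense displacement field the method uses for deformation analysis.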
Ahmed Hassan El-Sabbagh
Full Text Available Background: Subjects seeking aesthetic surgery for facial dimples are increasing in number. Literature on dimple creation surgery is sparse. Various techniques have been used, each with its own merits and disadvantages. Materials and Methods: Facial dimples were created in 23 cases. All the subjects were females. Five cases were bilateral and the rest were unilateral. Results: Minor complications such as swelling and hematoma were observed in four cases. Infection occurred in two cases. Most of the subjects were satisfied with the results. Conclusions: The suturing technique is a safe, reliable, and easily reproducible way to create a facial dimple. Level of Evidence: IV: Case series.
Haque, Mohammad Ahsanul; Nasrollahi, Kamal; Moeslund, Thomas B.
Face alignment in video is a primitive step for facial image analysis. The accuracy of the alignment greatly depends on the quality of the face image in the video frames, and low-quality faces are proven to cause erroneous alignment. Thus, this paper proposes a system for quality-aware face alignment by using a Supervised Descent Method (SDM) along with a motion-based forward extrapolation method. The proposed system first extracts faces from video frames. Then, it employs a face quality assessment technique to measure the face quality. If the face quality is high, the proposed system uses SDM for facial landmark detection. If the face quality is low, the proposed system corrects the facial landmarks that are detected by SDM. Depending upon the face velocity in consecutive video frames and the face quality measure, two algorithms are proposed for correction of landmarks in low-quality faces.
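The paper's exact correction algorithms are not given in this abstract, but the general idea of motion-based forward extrapolation blended with the detector output according to a frame-quality score might look like the following minimal sketch; the constant-velocity model, blending rule, and quality threshold are all assumptions.

```python
import numpy as np

def extrapolate_landmarks(prev2, prev1):
    """Predict current-frame landmarks by constant-velocity extrapolation
    from the two previous frames (each an (N, 2) array of x, y positions)."""
    return prev1 + (prev1 - prev2)  # position + per-frame velocity

def correct_landmarks(detected, predicted, quality, threshold=0.5):
    """Keep the detector output on high-quality frames; otherwise blend it
    with the motion prediction, weighting the detection by its quality."""
    if quality >= threshold:
        return detected
    return quality * detected + (1.0 - quality) * predicted

# One landmark drifting right by 1 px per frame across two earlier frames.
prev2 = np.array([[10.0, 20.0]])
prev1 = np.array([[11.0, 20.0]])
pred = extrapolate_landmarks(prev2, prev1)  # predicts [[12., 20.]]
noisy = np.array([[14.0, 20.0]])            # jittery low-quality detection
fixed = correct_landmarks(noisy, pred, quality=0.25)  # blended x lands at 12.5
print(fixed)
```

On a high-quality frame (quality ≥ threshold) the detection passes through untouched, so the correction only intervenes where the detector is expected to fail.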
Anson, Goesel; Kane, Michael A C; Lambros, Val
Wrinkles are just one indicator of facial aging, but an indicator that is of prime importance in our world of facial aesthetics. Wrinkles occur where fault lines develop in aging skin. Those fault lines may be due to skin distortion resulting from facial expression or may be due to skin distortion from mechanical compression during sleep. Expression wrinkles and sleep wrinkles differ in etiology, location, and anatomical pattern. Compression, shear, and stress forces act on the face in lateral or prone sleep positions. We review the literature relating to the development of wrinkles and the biomechanical changes that occur in response to intrinsic and extrinsic influences. We explore the possibility that compression during sleep not only results in wrinkles but may also contribute to facial skin expansion.
Lee, John Y K; Chen, H Isaac; Urban, Christopher; Hojat, Anahita; Church, Ephraim; Xie, Sharon X; Farrar, John T
Outcomes in clinical trials on trigeminal pain therapies require instruments with demonstrated reliability and validity. The authors evaluated the Brief Pain Inventory (BPI) in its existing form plus an additional 7 facial-specific items in patients referred to a single neurosurgeon for a diagnosis of facial pain. The complete 18-item instrument is referred to as the BPI-Facial. This study was a cross-sectional analysis of patients who completed the BPI-Facial. The diagnosis of classic versus atypical trigeminal neuralgia (TN) was made before analyzing the questionnaire results. A hypothesis-driven factor analysis was used to determine the principal components of the questionnaire. Item reliability and questionnaire validity were tested for these specific constructs. Data from 156 patients were analyzed, including 114 patients (73%) with classic and 42 (27%) with atypical TN. Using orthomax rotation factor analysis, 3 factors with an eigenvalue > 1.0 were identified (pain intensity, interference with general activities, and facial-specific pain interference), accounting for 97.6% of the observed item variance. Retention of the 3 factors was confirmed via a Cattell scree plot. Internal reliability was demonstrated by calculating Cronbach's alpha: 0.86 for pain intensity, 0.89 for interference with general activities, 0.95 for facial-specific pain interference, and 0.94 for the entire instrument. Initial validity of the BPI-Facial instrument was supported by the detection of statistically significant differences between patients with classic versus atypical pain. Patients with atypical TN rated their facial pain as more intense (atypical 6.24 vs classic 5.03, p = 0.013) and as having greater interference in general activities (atypical 6.94 vs classic 5.43, p = 0.0033). Both groups expressed high levels of facial-specific pain interference (atypical 6.34 vs classic 5.95, p = 0.527). The BPI-Facial is a rigorous measure of facial pain in patients with TN and appears to
Villarreal, Ithzel Maria; Rodríguez-Valiente, Antonio; Castelló, Jose Ramon; Górriz, Carmen; Montero, Oscar Alvarez; García-Berrocal, Jose Ramon
Introduction: Malignant tumors of the parotid gland account scarcely for 5% of all head and neck tumors. Most of these neoplasms have a high tendency for recurrence, local infiltration, perineural extension, and metastasis. Although uncommon, these malignant tumors require complex surgical treatment sometimes involving a total parotidectomy including a complete facial nerve resection. Severe functional and aesthetic facial defects are the result of a complete sacrifice or injury to isolated branches becoming an uncomfortable distress for patients and a major challenge for reconstructive surgeons. Case Report: A case of a 54-year-old, systemically healthy male patient with a 4 month complaint of pain and swelling on the right side of the face is presented. The patient reported a rapid increase in the size of the lesion over the past 2 months. Imaging tests and histopathological analysis reported an adenoid cystic carcinoma. A complete parotidectomy was carried out with an intraoperative notice of facial nerve infiltration requiring a second intervention for nerve and defect reconstruction. A free ALT flap with vascularized nerve grafts was the surgical choice. A 6 month follow-up showed partial facial movement recovery and the facial defect mended. Conclusion: It is of critical importance to restore function to patients with facial nerve injury. Vascularized nerve grafts, in many clinical and experimental studies, have shown to result in better nerve regeneration than conventional non-vascularized nerve grafts. Nevertheless, there are factors that may affect the degree, speed and regeneration rate regarding the free fasciocutaneous flap. In complex head and neck defects following a total parotidectomy, the extended free fasciocutaneous ALT (anterior-lateral thigh) flap with a vascularized nerve graft is ideally suited for the reconstruction of the injured site. Donor–site morbidity is low and additional surgical time is minimal compared with the time of a single
Sforza, C; Mapelli, A; Galante, D; Moriconi, S; Ibba, T M; Ferraro, L; Ferrario, V F
To assess sex- and age-related characteristics of standardized facial movements, 40 healthy adults (20 men, 20 women; aged 20-50 years) performed seven standardized facial movements (maximum smile; free smile; "surprise" with closed mouth; "surprise" with open mouth; eye closure; right- and left-side eye closures). The three-dimensional coordinates of 21 soft tissue facial landmarks were recorded by a motion analyser, their movements computed, and asymmetry indices calculated. Within each movement, total facial mobility was independent of sex and age (analysis of variance, p>0.05). Asymmetry indices of the eyes and mouth were similar in both sexes (p>0.05). Age significantly influenced eye and mouth asymmetries of the right-side eye closure, and eye asymmetry of the surprise movement. On average, the asymmetry indices of the symmetric movements were always lower than 8%, and most did not deviate from the expected value of 0 (Student's t). Larger asymmetries were found for the asymmetric eye closures (eyes, up to 50%, p …). Age had a limited influence on total facial motion and asymmetry in normal adult men and women. Copyright © 2010 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
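As an illustration of how landmark displacements and a percentage asymmetry index of this kind can be computed: the |L − R| / (L + R) normalization below is a common convention and an assumption here, not necessarily the exact index used in the study.

```python
import numpy as np

def displacement(rest, peak):
    """Total 3D movement of one landmark: Euclidean distance between its
    resting and peak-expression coordinates (in mm)."""
    return float(np.linalg.norm(np.asarray(peak) - np.asarray(rest)))

def asymmetry_index(left_mm, right_mm):
    """Percentage asymmetry between paired left/right landmark displacements.
    0% means a perfectly symmetric movement; the |L - R| / (L + R)
    normalization is an assumed convention, not taken from the paper."""
    total = left_mm + right_mm
    return 100.0 * abs(left_mm - right_mm) / total if total else 0.0

# Symmetric smile: both mouth corners move 8 mm.
print(asymmetry_index(8.0, 8.0))   # → 0.0
# One-sided eye closure: 9 mm on one side vs 3 mm on the other.
print(asymmetry_index(9.0, 3.0))   # → 50.0
```

Under this definition, a strongly asymmetric movement such as a single-side eye closure naturally produces indices far above the sub-8% values reported for the symmetric movements.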
Full Text Available Background: This paper discusses the various methods and materials for the fabrication of active artificial facial muscles. The primary use for these will be the reanimation of paralysed or atrophied muscles in sufferers of non-recoverable unilateral facial paralysis. Method: The prosthetic solution described in this paper is based on sensing muscle motion of the contralateral healthy muscles and replicating that motion across a patient's paralysed side of the face, via solid-state and thin-film actuators. The development of this facial prosthetic device focused on recreating a varying-intensity smile, with emphasis on timing, displacement, and the appearance of the wrinkles and folds that commonly appear around the nose and eyes during the expression. An animatronic face was constructed with actuations being made to a silicone representation of the musculature, using multiple shape-memory alloy cascades. Alongside the artificial muscle physical prototype, a facial expression recognition software system was constructed. This forms the basis of an automated calibration and reconfiguration system for the artificial muscles following implantation, so as to suit the implantee's unique physiognomy. Results: An animatronic model face with silicone musculature was designed and built to evaluate the performance of shape-memory alloy artificial muscles, their power control circuitry, and software control systems. A dual facial motion sensing system was designed to allow real-time control over the model: a piezoresistive flex sensor to measure physical motion, and a computer vision system to evaluate real-to-artificial muscle performance. Analysis of various facial expressions in real subjects was made, which gives useful data upon which to base the system's parameter limits. Conclusion: The system performed well, and the various strengths and shortcomings of the materials and methods are reviewed and considered for the next research phase, when new polymer-based artificial muscles are constructed
Hanson, Mark D; Zuker, Ronald M; Shaul, Randi Zlotnik
INTRODUCTION: Current pediatric burn care has resulted in survival being the expectation for most children. Composite tissue allotransplantation in the form of face or hand transplantation may present opportunities for reconstructive surgery of patients with burns. The present paper addresses the question “Could facial transplantation be of therapeutic benefit in the treatment of pediatric burns associated with facial disfigurement?” METHODS: Therapeutic benefit of facial transplantation was defined in terms of psychiatric adjustment and quality of life (QOL). To ascertain therapeutic benefit, studies of pediatric burn injury and associated psychiatric adjustment and QOL in children, adolescents and adults with pediatric burns, were reviewed. RESULTS: Pediatric burn injury is associated with anxiety disorders, including post-traumatic stress disorder and depressive disorders. Many patients with pediatric burns do not routinely access psychiatric care for these disorders, including those for psychiatric assessment of suicidal risk. A range of QOL outcomes were reported; four were predominantly satisfactory and one was predominantly unsatisfactory. DISCUSSION: Facial transplantation may reduce the risk of depressive and anxiety disorders other than post-traumatic stress disorder. Facial transplantation promises to be the new reconstructive psychosurgery, because it may be a surgical intervention with the potential to reduce the psychiatric suffering associated with pediatric burns. Furthermore, patients with pediatric burns may experience the stigma of disfigurement and psychiatric conditions. The potential for improved appearance with facial transplantation may reduce this ‘dual stigmata’. Studies combining surgical and psychiatric research are warranted. PMID:19949498
Full Text Available Introduction. The functional results of surgery in terms of facial mobility are key elements in the treatment of patients. Little is actually known about changes in facial mobility following surgical treatment with maxillomandibular advancement (MMA). Objectives. The three-dimensional (3D) study of basic facial movements in typical OSAS patients treated with MMA was the topic of the present research. Materials and Methods. Ten patients affected by severe obstructive sleep apnea syndrome (OSAS) were enrolled in the study. Their facial surface data were acquired using a 3D laser scanner one week before (T1) and 12 months after (T2) orthognathic surgery. The facial movements were frowning, grimace, smiling, and lip purse. They were described in terms of surface and landmark displacements (mm). The mean landmark displacement was calculated for the right and left sides of the face, at T1 and at T2. Results. One year after surgery, facial movements were similar to presurgical registrations. No modifications of symmetry were present. Conclusions. Despite the skeletal maxilla-mandible expansion, orthognathic surgical treatment (MMA) of OSAS patients does not seem to modify facial mobility. Only an enhancement of amplitude in smiling and knitting brows was observed. These results could have reliable medical and surgical applications.
Allanson, Judith; Smith, Amanda; Hare, Heather
Nablus mask-like facial syndrome (NMLFS) has many distinctive phenotypic features, particularly tight glistening skin with reduced facial expression, blepharophimosis, telecanthus, bulky nasal tip, abnormal external ear architecture, upswept frontal hairline, and sparse eyebrows. Over the last few … heterozygous deletions significantly overlapping the region associated with NMLFS. Notably, while one mother and child were said to have mild tightening of facial skin, none of these individuals exhibited reduced facial expression or the classical facial phenotype of NMLFS. These findings indicate …
Facial feature tracking and facial action recognition from image sequences have attracted great attention in the computer vision field. Computational facial expression analysis is a challenging research topic in computer vision, required by many applications such as human-computer interaction, computer graphics animation, and automatic facial expression recognition. In recent years, plenty of computer vision techniques have been developed to track or recognize facial activities at three levels …
Aim: To examine facial canal status in patients undergoing chronic otitis media (COM) surgery and to detect the relation between facial canal dehiscence (FCD) and middle ear pathology in these patients. Material and Method: The surgical data of patients who underwent tympanoplasty with or without mastoidectomy, or radical mastoidectomy, due to COM were analyzed retrospectively from January 2006 to December 2012. In addition to demographic data, the status of the facial canal, preoperative diagnoses, type of operation performed, status of the middle ear, number of surgeries, and the presence of cholesteatoma, ossicular chain defect, lateral canal defect, and dura defect were assessed, and their relation to facial canal dehiscence (FCD) was analyzed statistically. Results: Seven hundred ninety-six patients were included in the study. FCD was detected in 10.05% of the patients, most frequently in the tympanic segment. There was a statistically significant relationship of middle ear pathology, cholesteatoma, revision surgery, lateral semicircular canal defect, and ossicular chain defect with FCD. Discussion: Patients diagnosed with COM may have a facial canal defect, depending on their preoperative diagnoses, middle ear pathologies, number of operations, and ossicular chain defects. These patients should undergo more careful surgery and be closely followed up postoperatively.
Licht, Peter Bjørn; Pilegaard, Hans K; Ladegaard, Lars
Background. Facial blushing is one of the most peculiar of human expressions. The pathophysiology is unclear, and the prevalence is unknown. Thoracoscopic sympathectomy may cure the symptom and is increasingly used in patients with isolated facial blushing. The evidence base for the optimal level … of targeting the sympathetic chain is limited to retrospective case studies. We present a randomized clinical trial. Methods. 100 patients were randomized (web-based, single-blinded) to rib-oriented (R2 or R2-R3) sympathicotomy for isolated facial blushing at two university hospitals during a 6-year period … in all social and mental domains in both groups. Overall, 85% of the patients had an excellent or satisfactory result, with no significant difference between the R2 and R2-R3 procedures. Mild recurrence of facial blushing occurred in 30% of patients within the first year. One patient …
Ciorba, Andrea; Corazzi, Virginia; Conz, Veronica; Bianchini, Chiara; Aimoni, Claudia
Facial nerve palsy is a condition with several implications, particularly when occurring in childhood. It represents a serious clinical problem, causing significant concern for doctors because of its etiology, treatment options, and outcome, as well as for young patients and their parents because of the functional and aesthetic consequences. There are several described causes of facial nerve paralysis in children: it can be congenital (due to delivery trauma and genetic or malformative diseases) or acquired (due to infective, inflammatory, neoplastic, traumatic, or iatrogenic causes). Nonetheless, in approximately 40%-75% of cases, the cause of unilateral facial paralysis remains idiopathic. A careful diagnostic workup and differential diagnosis are particularly recommended in cases of pediatric facial nerve palsy in order to establish the most appropriate treatment, as the therapeutic approach differs in relation to the etiology.
Veillon, F. [Service de Radiologie I, Hopital de Hautepierre, 67098 Strasbourg Cedex (France)], E-mail: Francis.Veillon@chru-strasbourg.fr; Ramos-Taboada, L.; Abu-Eid, M. [Service de Radiologie I, Hopital de Hautepierre, 67098 Strasbourg Cedex (France)]; Charpiot, A. [Service d'ORL, Hopital de Hautepierre, 67098 Strasbourg Cedex (France)]; Riehm, S. [Service de Radiologie I, Hopital de Hautepierre, 67098 Strasbourg Cedex (France)]
The facial nerve is responsible for the motor innervation of the face. It has a visceral motor function (lacrimal, submandibular, and sublingual glands and secretion of the nose); it conveys a large part of the taste fibers and participates in the general sensory innervation of the auricle (skin of the concha) and the wall of the external auditory meatus. Facial mimicry, production of tears, nasal flow, and salivation all depend on the facial nerve. To image the facial nerve, it is mandatory to be knowledgeable about its normal anatomy, including the course of its efferent and afferent fibers, and about relevant technical considerations regarding CT and MR, in order to achieve high-resolution images of the nerve.
Boucher, Jerry D.; Ekman, Paul
Provides strong support for the view that there is no one area of the face which best reveals emotion, but that the value of the different facial areas in distinguishing emotions depends upon the emotion being judged. (Author)
Facial neuralgias are produced by a change in neurological structure or function. This type of neuropathic pain affects the mental health as well as the quality of life of patients. Different types of neuralgias affect the oral and maxillofacial region, and these unusual pains are linked to several possible mechanisms. Various diagnostic tests are performed to identify the cause of facial neuralgia, and medical or surgical treatment is chosen accordingly to provide relief to the patient.
Tunali, Gamze Dilek
Ankara : Bilkent Univ., 1996. Thesis (Master's) -- Bilkent University, 1996. Includes bibliographical references, leaves 54-56. The work presented here describes the power of 2D animation with texture mapping controlled by line drawings. The animation is intended primarily for facial animation but is not restricted to the human face. We initially have a sequence of facial images taken from a video sequence of the same face, and an image of another face to be animated …
Singh, Geeta; Mohammad, Shadab; Pal, U. S.; Hariram; Malkunje, Laxman R.; Singh, Nimisha
Background: Facial injuries in children always present a challenge in their diagnosis and management. Since these children are of a growing age, every care should be taken so that the overall growth pattern of the facial skeleton is not later jeopardized. Purpose: To assess the most feasible method for the management of facial injuries in children without hampering facial growth. Materials and Methods: Sixty child patients with facial trauma were selected randomly for this study. On the basis of examination and investigations, a suitable management approach was carried out, involving rest and observation; open or closed reduction and immobilization; trans-osseous (TO) wiring; mini bone plate fixation; splinting and replantation; or elevation and fixation of the zygoma. Results and Conclusion: In our study, falls were the predominant cause of facial injuries in children. There was a 1.09% incidence of facial injuries in children up to 16 years of age among the total patients. The age-wise distribution of fractures among Groups I, II, and III was 26.67%, 51.67%, and 21.67%, respectively. The male to female ratio was 3:1. The majority of facial injuries were seen in Group II patients (6-11 years), i.e., 51.67%. Mandibular fracture was the most common (0.60%), followed by dentoalveolar (0.27%), mandibular + midface (0.07%), and midface (0.02%) fractures. Most mandibular fractures were found in the parasymphysis region, and simple fractures were the commonest in the mandible. Most mandibular and midface fractures in children were amenable to conservative therapies, except a few that required surgical intervention. PMID:22639504
Yang, Manshu; Chow, Sy-Miin
Facial electromyography (EMG) is a useful physiological measure for detecting subtle affective changes in real time. A time series of EMG data contains bursts of electrical activity that increase in magnitude when the pertinent facial muscles are activated. Whereas previous methods for detecting EMG activation are often based on deterministic or…
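The abstract contrasts the proposed approach with deterministic detection methods. As a point of reference, a conventional deterministic detector flags activation wherever the rectified EMG exceeds a threshold derived from a quiet baseline; the sketch below illustrates that baseline technique (all parameter values and the simulated signal are illustrative, not from the paper):

```python
import numpy as np

def detect_emg_bursts(signal, fs, k=3.0, baseline_sec=1.0):
    """Flag samples where rectified EMG exceeds baseline mean + k * SD.

    A simple deterministic threshold detector of the kind the authors
    contrast with; `fs` is the sampling rate in Hz, `k` the SD multiplier,
    and the first `baseline_sec` seconds are assumed activation-free.
    """
    rectified = np.abs(signal - np.mean(signal))   # rectify around the mean
    n_base = int(baseline_sec * fs)
    mu, sd = rectified[:n_base].mean(), rectified[:n_base].std()
    return rectified > mu + k * sd                 # boolean activation mask

# Simulated trace: quiet baseline noise followed by a burst of activity
rng = np.random.default_rng(0)
sig = rng.normal(0, 1, 2000)
sig[1000:1200] += rng.normal(0, 8, 200)            # simulated activation burst
active = detect_emg_bursts(sig, fs=1000)
print(active[:1000].mean(), active[1000:1200].mean())
```

The printed fractions show almost no detections during the baseline segment and a high detection rate inside the burst, which is the behavior such stochastic-modeling papers use as their deterministic baseline.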
Terzis, Julia K; Anesti, Katerina
The purpose of this study is to clarify the confusing nomenclature and pathogenesis of developmental facial paralysis and how it can be differentiated from other causes of facial paralysis present at birth. Differentiating developmental from traumatic facial paralysis noted at birth is important for determining prognosis, but also for medicolegal reasons. Given the dramatic presentation of this condition, accurate and reliable guidelines are necessary in order to facilitate early diagnosis and initiate appropriate therapy, while providing support and counselling to the family. Our center's 30 years' experience in the management of developmental facial paralysis rests upon a thorough understanding of facial nerve embryology, anatomy, and nerve physiology, and an appreciation of well-recognized mishaps during fetal development. It is hoped that a better understanding of this condition will in the future lead to early targeted screening, accurate diagnosis, and prompt treatment in this population of facially disfigured patients, facilitating their emotional and social rehabilitation and reintegration among their peers.
Livingstone, Steven R; Vezer, Esztella; McGarry, Lucy M; Lang, Anthony E; Russo, Frank A
Humans spontaneously mimic the facial expressions of others, facilitating social interaction. This mimicking behavior may be impaired in individuals with Parkinson's disease, for whom the loss of facial movements is a clinical feature. To assess the presence of facial mimicry in patients with Parkinson's disease, twenty-seven non-depressed patients with idiopathic Parkinson's disease and 28 age-matched controls had their facial muscles recorded with electromyography while they observed presentations of calm, happy, sad, angry, and fearful emotions. Patients exhibited reduced amplitude and delayed onset in the zygomaticus major muscle region (smiling response) following happy presentations (patients M = 0.02, 95% confidence interval [CI] -0.15 to 0.18; controls M = 0.26, CI 0.14 to 0.37; ANOVA, effect size [ES] = 0.18, p …). Patients showed reduced mimicry overall, mimicking other people's frowns to some extent but presenting with profoundly weakened and delayed smiles. These findings open a new avenue of inquiry into the "masked face" syndrome of PD.
Schiavenato, Martin; Byers, Jacquie F; Scovanner, Paul; McMahon, James M; Xia, Yinglin; Lu, Naiji; He, Hua
The primal face of pain (PFP) is postulated to be a common and universal facial expression of pain, hardwired and present at birth. We evaluated its presence by applying a computer-based methodology consisting of "point-pair" comparisons captured from video to measure facial movement in the pain expression by way of change across two images: one before and one after a painful stimulus (heel-stick). Similarity of facial expression was analyzed in a sample of 57 neonates representing both sexes and three ethnic backgrounds (African American, Caucasian, and Hispanic/Latino) while controlling for these extraneous and potentially modulating factors: feeding type (bottle, breast, or both), behavioral state (awake or asleep), and use of epidural and/or other perinatal anesthesia. The PFP is consistent with previous reports of the expression of pain in neonates and is characterized by opening of the mouth, drawing in of the brows, and closing of the eyes. Although facial expression was not identical across or among groups, our analyses showed no particular clustering or unique display by sex or ethnicity. The clinical significance of this commonality of pain display, and of the origin of its potential individual variation, calls for further evaluation.
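The "point-pair" methodology reduces to measuring how far each tracked facial landmark moves between the pre- and post-stimulus images. A minimal sketch of that measurement, with purely illustrative landmark names and coordinates (the paper does not publish its landmark set):

```python
import numpy as np

def point_pair_change(before, after):
    """Euclidean displacement of each landmark between two images.

    `before` and `after` are (n_points, 2) arrays of pixel coordinates
    for the same landmarks in the pre- and post-stimulus frames.
    """
    return np.linalg.norm(np.asarray(after, float) - np.asarray(before, float), axis=1)

# Hypothetical landmark coordinates in the two frames (pixels)
before = np.array([[100, 120],   # e.g. a brow point
                   [140, 200],   # e.g. a mouth corner
                   [ 90, 160]])  # e.g. an eyelid point
after  = np.array([[100, 125],
                   [150, 210],
                   [ 90, 160]])
disp = point_pair_change(before, after)
print(disp)  # per-landmark movement in pixels
```

Similarity of expression across groups can then be compared on these displacement vectors rather than on raw images.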
Kércia Melo de Oliveira Fonseca
INTRODUCTION: It has become common to use scales to measure the degree of involvement of facial paralysis in speech-language pathology clinics. OBJECTIVE: To analyze the inter- and intra-rater agreement of scales of degree of facial paralysis and to elicit the appraisers' points of view regarding their use. METHODS: Cross-sectional observational clinical study of the Chevalier and House & Brackmann scales performed by five speech therapists with clinical experience, who analyzed the facial expression of 30 adult subjects with impaired facial movements twice, with a one-week interval between evaluations. Kappa analysis was employed. RESULTS: There was excellent inter-rater agreement for both scales (kappa > 0.80). On the Chevalier scale there was substantial intra-rater agreement in the first assessment (kappa = 0.792) and excellent agreement in the second (kappa = 0.928); the House & Brackmann scale showed excellent agreement at both assessments (kappa = 0.850 and 0.857). As for the appraisers' points of view, one appraiser thought prior training is necessary for the Chevalier scale, and four appraisers felt that training is important for the House & Brackmann scale. CONCLUSION: Both scales have good inter- and intra-rater agreement, and most of the appraisers agree on the ease and relevance of their application.
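The agreement figures reported in this study are kappa coefficients, which discount the agreement two raters would reach by chance. A minimal sketch of Cohen's kappa for two raters (the ratings shown are illustrative, not the study's data):

```python
from collections import Counter

def cohen_kappa(r1, r2):
    """Cohen's kappa for two raters grading the same subjects."""
    assert len(r1) == len(r2)
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n               # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / n**2  # chance agreement
    return (po - pe) / (1 - pe)

# Illustrative gradings on a 4-level scale (not the study's data)
rater_a = ["I", "II", "II", "III", "IV", "II", "I", "III"]
rater_b = ["I", "II", "III", "III", "IV", "II", "I", "II"]
print(round(cohen_kappa(rater_a, rater_b), 3))
```

Values above 0.80 are conventionally labeled "excellent" agreement and 0.61-0.80 "substantial", which is the scale the RESULTS section uses.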
Mixed Movements is a research project engaged in performance-based architectural drawing. Its architectonic implementation questions relations between the human body and a body of architecture through the different ways we handle drawing materials. A drawing may explore architectonic problems at other … levels than those related to building, and this exploration is a special challenge and competence implicit in artistic development work. The project Mixed Movements generates drawing material, not primarily as representation, but as a performance-based medium, making the body's being-in-the-media felt and appear … as possible operational moves …
Recio, Guillermo; Schacht, Annekathrin; Sommer, Werner
Emotional facial expressions usually arise dynamically from a neutral expression, yet most previous research has focused on static images. The present study investigated basic aspects of processing dynamic facial expressions. In two experiments, we presented short videos of facial expressions of six basic emotions and non-emotional facial movements emerging at variable and fixed rise times and attaining different intensity levels. In event-related brain potentials (ERPs), effects of emotion, but also of non-emotional movements, appeared as an early posterior negativity (EPN) between 200 and 350 ms, suggesting an overall facilitation of early visual encoding for all facial movements. These EPN effects were emotion-unspecific. In contrast, relative to happy and neutral expressions, negative emotional expressions elicited larger late positive ERP components (LPCs), indicating more elaborate processing. Both EPN and LPC amplitudes increased with expression intensity. Effects of emotion and intensity were additive, indicating that intensity (understood as the degree of motion) increases the impact of emotional expressions but not their quality. These processes can be driven by all basic emotions, and there is little emotion-specificity even when statistical power is considerable (N = 102 in Experiment 2).
Iwase, Masao; Ouchi, Yasuomi; Okada, Hiroyuki; Yokoyama, Chihiro; Nobezawa, Shuji; Yoshikawa, Etsuji; Tsukada, Hideo; Takeda, Masaki; Yamashita, Ko; Takeda, Masatoshi; Yamaguti, Kouzi; Kuratsune, Hirohiko; Shimizu, Akira; Watanabe, Yasuyoshi
Laughter or smiling is one of the emotional expressions of pleasantness, with characteristic contraction of the facial muscles, whose neural substrate remains to be explored. The study described here is the first to investigate the generation of the human facial expression of pleasant emotion using positron emission tomography and H(2)(15)O. Regional cerebral blood flow (rCBF) during laughter/smiling induced by visual comics and the magnitude of laughter/smiling showed significant correlation in the bilateral supplementary motor area (SMA) and left putamen (P < 0.05, corrected), but no correlation in the primary motor area (M1). In voluntary facial movement, significant correlation between rCBF and the magnitude of EMG was found in the face area of the bilateral M1 and the SMA (P < 0.001, uncorrected). Laughter/smiling, as opposed to voluntary movement, activated the visual association areas, left anterior temporal cortex, left uncus, and orbitofrontal and medial prefrontal cortices (P < 0.05, corrected), whereas voluntary facial movement generated by mimicking a laughing/smiling face activated the face area of the left M1 and bilateral SMA compared with laughter/smiling (P < 0.05, corrected). We demonstrated distinct neural substrates of emotional and volitional facial expression and defined cognitive and experiential processes of a pleasant emotion, laughter/smiling.
Multiview 3D face modeling has attracted increasing attention recently and has become one of the potential avenues for future video systems. We aim at more reliable and robust automatic feature extraction and natural 3D feature construction from 2D features detected on a pair of frontal- and profile-view face images. We propose several heuristic algorithms to minimize possible errors introduced by the prevalent non-perfect orthogonality and non-coherent luminance conditions. In our approach, we first extract the 2D features that are visible to both cameras in both views. Then, we estimate the coordinates of the features in the hidden profile view based on the visible features extracted in the two orthogonal views. Finally, based on the coordinates of the extracted features, we deform a 3D generic model to perform the desired 3D clone modeling. The present study demonstrates the scope of the resulting facial models for practical applications such as face recognition and facial animation.
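Under the ideal orthogonal-camera condition that the abstract notes is rarely perfect in practice, lifting 2D features to 3D is straightforward: the frontal view supplies (x, y), the profile view supplies (z, y), and the shared y coordinate can absorb small calibration error. A sketch under that idealized assumption (coordinates are illustrative, not from the paper):

```python
import numpy as np

def combine_views(frontal_xy, profile_zy):
    """Lift 2D features to 3D under an ideal orthogonal-view assumption.

    The frontal camera observes (x, y) and the profile camera (z, y);
    the shared y coordinate is averaged to absorb small calibration error.
    """
    frontal_xy = np.asarray(frontal_xy, float)
    profile_zy = np.asarray(profile_zy, float)
    y = (frontal_xy[:, 1] + profile_zy[:, 1]) / 2.0   # reconcile the two y estimates
    return np.column_stack([frontal_xy[:, 0], y, profile_zy[:, 0]])

# Two hypothetical features seen in both views
pts = combine_views([[10, 50], [20, 60]], [[5, 52], [8, 58]])
print(pts)  # → [[10. 51.  5.] [20. 59.  8.]]
```

The heuristics the paper proposes are precisely corrections for when this ideal assumption fails, i.e. when the two y estimates disagree systematically.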
Bo-Lin Jian; Chieh-Li Chen; Wen-Lin Chu; Min-Wei Huang
… Thus, this study used non-contact infrared thermal facial images (ITFIs) to analyze facial temperature changes evoked by different emotions in moderately and markedly ill schizophrenia patients …
Kim, Keum Won [Pohang Medical Center, Pohang (Korea, Republic of); Lee, Ho Kyu; Shin, Ji Hoon; Choi, Choong Gon; Suh, Dae Chul [Asan Medical Center, Ulsan Univ. College of Medicine, Seoul (Korea, Republic of); Cheong, Hae Kwan [Dongguk Univ. College of Medicine, Seoul (Korea, Republic of)
To analyze the characteristics of CT and MRI findings of facial nerve schwannoma in ten patients. Ten patients with pathologically confirmed facial nerve schwannoma underwent physical and radiologic examination; the latter involved MRI in all ten and CT scanning in six. We analyzed the location (epicenter), extent, and number of involved segments of the tumors, tumor morphology, and changes in adjacent bony structures. The major symptoms of facial nerve schwannoma were facial nerve paralysis in seven cases and hearing loss in six. Epicenters were detected at the intraparotid portion in five cases, the intracanalicular portion in two, the cisternal portion in one, and the intratemporal portion in two. The segment most frequently involved was the mastoid (n=6), followed by the parotid (n=5), intracanalicular (n=4), cisternal (n=2), labyrinthine/geniculate ganglion (n=2), and tympanic (n=1) segments. Tumors affected two segments of the facial nerve in eight cases, only one segment in one, and four continuous segments in one. Morphologically, tumors were ice-cream-cone shaped in the cisternal segment (1/1), cone shaped in intracanalicular tumors (2/2), oval in geniculate ganglion tumors (1/1), club shaped in intraparotid tumors (5/5), and bead shaped in the diffuse-type tumor (1/1). Changes in adjacent bony structures involved widening of the stylomastoid foramen in intraparotid tumors (5/5), widening of the internal auditory canal in intracanalicular and cisternal tumors (3/3), bony erosion of the geniculate fossa in geniculate ganglion tumors (2/2), and widening of the facial nerve canal in intratemporal and intraparotid tumors (6/6). The characteristic location, shape, and changes in adjacent bony structures revealed by facial schwannomas on CT and MR examination lead to the correct diagnosis.
Jack, Rachael E; Garrod, Oliver G B; Schyns, Philippe G
Designed by biological and social evolutionary pressures, facial expressions of emotion comprise specific facial movements to support a near-optimal system of signaling and decoding. Although highly dynamical, little is known about the form and function of facial expression temporal dynamics. Do facial expressions transmit diagnostic signals simultaneously to optimize categorization of the six classic emotions, or sequentially to support a more complex communication system of successive categorizations over time? Our data support the latter. Using a combination of perceptual expectation modeling, information theory, and Bayesian classifiers, we show that dynamic facial expressions of emotion transmit an evolving hierarchy of "biologically basic to socially specific" information over time. Early in the signaling dynamics, facial expressions systematically transmit few, biologically rooted face signals supporting the categorization of fewer elementary categories (e.g., approach/avoidance). Later transmissions comprise more complex signals that support categorization of a larger number of socially specific categories (i.e., the six classic emotions). Here, we show that dynamic facial expressions of emotion provide a sophisticated signaling system, questioning the widely accepted notion that emotion communication is comprised of six basic (i.e., psychologically irreducible) categories, and instead suggesting four.
Naruse, Susumu; Hashimoto, Toshiaki; Mori, Kenji; Tsuda, Yoshimi; Takahara, Mitsue; Kagami, Shoji
Facial expressions hold abundant information and play a central part in communication. In daily life, we must construct amicable interpersonal relationships by communicating through verbal and nonverbal behaviors. While school age is a period of rapid social growth, few studies have examined developmental changes in facial expression recognition during this age. This study investigated such changes by examining observers' gaze on others' expressions in 87 school-age children from first to sixth grade (41 boys, 46 girls). The Tobii T60 Eye-tracker (Tobii Technologies, Sweden) was used to gauge eye movement during a task of matching pre-instructed emotion words and facial expression images (neutral, angry, happy, surprised, sad, disgusted) presented on a monitor fixed at a distance of 50 cm. In the task of matching the six facial expression images and emotion words, the mid- and higher-grade children answered more accurately than the lower-grade children in matching four expressions, excluding neutral and happy. For fixation time and fixation count, the lower-grade children scored lower than the other grades, gazing at all facial expressions significantly fewer times and for shorter periods. It is inferred that the stage from the lower to the middle grades is a turning point in facial expression recognition.
Liu, Tongran; Xiao, Tong; Li, Xiaoyan
The relationship between human fluid intelligence and social-emotional abilities has been a topic of considerable interest. The current study investigated whether adolescents with different intellectual levels had different automatic neural processing of facial expressions. Two groups of adolescents … -attentive change detection on social-emotional information.
Choi, Jin-Young; Lee, Sang-Hoon; Baek, Seung-Hak
Aesthetic units of the face can be divided into facial content (FC; eyes, nose, lips, and mouth), anterior facial frame (AFF; a contour line from the trichion, the temporal line of the frontal bone, the lateral orbital rim, the most lateral line of the anterior part of the zygomatic body, the anterior border of the masseter muscle, to the inferior border of the chin), and posterior facial frame (PFF; a contour line from the hairline, the zygomatic arch, to the ramus and gonial angle area of the mandible). The size and shape of each FC and the balance and proportion between FCs create a unique appearance for each person. The facial form can be determined through the combination of AFF and PFF. In the Asian population, clinicians frequently encounter problems of FC (eg, acute nasolabial angle, protrusive and everted lips, nonconsonant lip line, or lip canting), AFF (eg, midface hypoplasia, protrusive and asymmetric chin, vertical deficiency/excess of the anterior maxilla and symphysis, or prominent zygoma), and PFF (eg, square mandibular angle). These problems can be efficiently and effectively corrected through the combination of hard tissue surgery such as anterior segmental osteotomy, genioplasty, mandibular angle reduction, malarplasty, and orthognathic surgery. Therefore, the purposes of this article were to introduce the concepts of FC, AFF, and PFF, and to explain the effects of facial hard tissue surgery on facial aesthetics.
Like all music performance, percussion playing requires high control over timing and sound properties. Specific to percussionists, however, is the need to adjust the movement to different instruments with varying physical properties and tactile feedback to the player. Furthermore, the well define...
Chloroplast movement is important for plant survival under high light and for efficient photosynthesis under low light. This review introduces recent knowledge on chloroplast movement and shows how to analyze the responses and the moving mechanisms, potentially inspiring research in this field. Avoidance of strong light is mediated by the blue light receptor phototropin 2 (phot2), plausibly localized on the chloroplast envelope, and accumulation in the weakly irradiated area is mediated by phot1 and phot2 localized on the plasma membrane. Chloroplasts move by chloroplast actin (cp-actin) filaments that must be polymerized by Chloroplast Unusual Positioning1 (CHUP1) at the front side of the moving chloroplast. To understand the signal transduction pathways and the mechanism of chloroplast movement, that is, from light capture to the motive-force-generating mechanism, various methods should be employed, based on the various aspects. Observation of chloroplast distribution patterns under different light conditions by fixed-cell sectioning is a somewhat old-fashioned technique but the most basic and important one. Most importantly, precise chloroplast behavior during and just after the induction of movement by partial cell irradiation, using an irradiator with either a low-light or strong-light microbeam, should be recorded by time-lapse photography under infrared light and analyzed. Recently, various factors involved in chloroplast movement, such as cp-actin filaments and CHUP1, could be traced in Arabidopsis transgenic lines with fluorescent protein tags under a confocal laser scanning microscope (CLSM) and/or a total internal reflection fluorescence microscope (TIRFM). These methods are listed and their advantages and disadvantages evaluated.
Facial deformities can impose a burden on the patient. There are many solutions for facial deformities, such as plastic surgery and facial prosthetics. However, the current fabrication method for facial prosthetics is costly and time-consuming. This study aimed to identify a new method to construct a customized facial prosthesis. A 3D scanner, computer software, and a 3D printer were used in this study. Results showed that the newly developed method can be used to produce customized facial prosthetics. The advantages of the developed method over the conventional process are low cost and reduced material waste and pollution, in line with green manufacturing principles.
Kramer, Robin S S; Ward, Robert
We investigated forms of socially relevant information signalled from static images of the face. We created composite images from women scoring high and low values on personality and health dimensions and measured the accuracy of raters in discriminating high from low trait values. We also looked specifically at the information content within the internal facial features, by presenting the composite images with an occluding mask. Four of the Big Five traits were accurately discriminated on the basis of the internal facial features alone (conscientiousness was the exception), as was physical health. The addition of external features in the full-face images led to improved detection for extraversion and physical health and poorer performance on intellect/imagination (or openness). Visual appearance based on internal facial features alone can therefore accurately predict behavioural biases in the form of personality, as well as levels of physical health.
Ali K. K. Bermani
The topic of automatic recognition of facial expressions drew many researchers in the late last century and has attracted great interest in the past few years. Several techniques have emerged to improve recognition efficiency by addressing problems in face detection and feature extraction for expression recognition. This paper proposes an automatic system for facial expression recognition whose feature extraction phase uses a hybrid approach, combining holistic and analytic methods to extract 307 facial expression features (19 geometric features, 288 appearance features). Expression recognition is performed by a radial basis function (RBF) artificial neural network that recognizes the six basic emotions (anger, fear, disgust, happiness, surprise, sadness) in addition to the neutral expression. The system achieved a recognition rate of 97.08% on a person-dependent database and 93.98% on a person-independent one.
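An RBF network of the kind described places Gaussian units over prototype feature vectors and fits linear output weights on top. A toy sketch of the general technique, not the paper's exact 307-feature system (two features and two classes, with illustrative data):

```python
import numpy as np

class RBFClassifier:
    """Minimal RBF network: Gaussian hidden units around prototype
    centers, linear output weights fit by least squares."""

    def __init__(self, centers, sigma=1.0):
        self.centers = np.asarray(centers, float)
        self.sigma = sigma

    def _phi(self, X):
        # Hidden-layer activations: Gaussian of distance to each center
        d = np.linalg.norm(X[:, None, :] - self.centers[None, :, :], axis=2)
        return np.exp(-(d ** 2) / (2 * self.sigma ** 2))

    def fit(self, X, y, n_classes):
        T = np.eye(n_classes)[y]                              # one-hot targets
        phi = self._phi(np.asarray(X, float))
        self.W, *_ = np.linalg.lstsq(phi, T, rcond=None)      # output weights
        return self

    def predict(self, X):
        return self._phi(np.asarray(X, float)).dot(self.W).argmax(axis=1)

# Toy 2D "feature vectors" for two expression classes
X = np.array([[0, 0], [0.2, 0.1], [1, 1], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
clf = RBFClassifier(centers=[[0, 0], [1, 1]], sigma=0.5).fit(X, y, n_classes=2)
print(clf.predict(X))  # → [0 0 1 1]
```

In a full system the centers would typically be chosen by clustering the training feature vectors, and the input would be the 307-dimensional geometric-plus-appearance vector rather than two toy coordinates.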
Muhammed Tayyib Kadak
Full Text Available Autism is a genetically transmitted neurodevelopmental disorder characterized by severe and persistent deficits in many areas of interpersonal relations, such as communication, social interaction and emotional responsiveness. Patients with autism have deficits in face recognition, eye contact and the recognition of emotional expressions. Both face recognition and the recognition of facial emotion depend on face processing. Structural and functional impairment of the fusiform gyrus, amygdala, superior temporal sulcus and other brain regions leads to deficits in the recognition of faces and facial emotion. Studies therefore suggest that face processing deficits underlie the problems with social interaction and emotion seen in autism. Studies have revealed that children with autism have problems recognizing facial expressions and rely on the mouth region more than the eye region. It has also been shown that autistic patients interpret ambiguous expressions as negative emotions. In autism, deficits at various stages of face processing, such as gaze detection, face identity and recognition of emotional expression, have been identified so far. Social interaction impairments in autism spectrum disorders originate from face processing deficits arising during infancy, childhood and adolescence. The recognition of faces and of facial emotional expressions may be shaped both automatically, by orienting towards faces after birth, and by "learning" processes in developmental periods, such as identity and emotion processing. This article reviews the neurobiological basis of face processing and the recognition of emotional facial expressions during normal development and in autism.
Full Text Available Facial expressions of emotion are thought to convey expressers' behavioral intentions, thus priming observers' approach and avoidance tendencies appropriately. The present study examined whether detecting expressions of behavioral intent influences perceivers' estimation of the expresser's distance from them. Eighteen undergraduates (9 male and 9 female) participated in the study. Six facial expressions were chosen on the basis of degree of threat: anger and hate (threatening expressions), shame and surprise (neutral expressions), and pleasure and joy (safe expressions). Each facial expression was presented on a tablet PC held by an assistant covered by a black drape who stood 1 m, 2 m, or 3 m away from participants. Participants performed a visual matching task to report the perceived distance. Results showed that facial expression influenced distance estimation, with faces exhibiting threatening or safe expressions judged closer than those showing neutral expressions. Females' judgments were more likely to be influenced, but these influences largely disappeared beyond the 2 m distance. These results suggest that facial expressions of emotion (particularly threatening or safe emotions) influence others' (especially females') distance estimations, but only within close proximity.
Schrammel, Franziska; Pannasch, Sebastian; Graupner, Sven-Thomas; Mojzisch, Andreas; Velichkovsky, Boris M
The present study aimed to investigate the impact of facial expression, gaze interaction, and gender on attention allocation, physiological arousal, facial muscle responses, and emotional experience in simulated social interactions. Participants viewed animated virtual characters varying in terms of gender, gaze interaction, and facial expression. We recorded facial EMG, fixation duration, pupil size, and subjective experience. Subjects' rapid facial reactions (RFRs) differentiated more clearly between the characters' happy and angry expressions under mutual eye-to-eye contact. This finding provides evidence for the idea that RFRs are not simply motor responses, but part of an emotional reaction. Eye movement data showed that fixations were longer in response to both angry and neutral faces than to happy faces, thereby suggesting that attention is preferentially allocated to cues indicating potential threat during social interaction.
Blanchin, T; Martin, F; Labbe, D
Peripheral facial paralysis often presents two conditions that are hard to control: labial occlusion and palpebral closure. Today, there are efforts to go beyond the sole use of muscle stimulation techniques, and attention is being given to stimulating cerebral plasticity. This implies using the facial nerve's efferent pathway as the afferent pathway in rehabilitation, a technique that could further help limit the two recalcitrant problems above. We matched two groups of patients who underwent surgery for peripheral facial paralysis by lengthening temporalis myoplasty (LTM). LTM is one of the best settings in which to examine cerebral plasticity. The trigeminal nerve is a mixed nerve, both motor and sensory. After LTM, patients have to use the trigeminal nerve differently, as it now has a direct role in generating the smile. The LTM approach, using the efferent pathway, therefore creates a challenge for the brain. The two groups followed separate therapies, called "classical" and "mirror-effect". The "mirror-effect" method oriented the patient's cerebral plasticity more precisely than classical rehabilitation did. The method develops two axes: the voluntary movements patients need to control their temporal smile, and the spontaneous movements needed for facial expressions. Work on voluntary movements is done before a "digital mirror", using an identical doubled hemiface that provides the patient with a virtual copy of his face and, thus, a "mirror-effect". The work on spontaneous movements is based on what we call the "Therapy of Motor Emotions". The method presented here is used to treat facial paralysis (of the Bell's palsy type), whether requiring surgery or not. Importantly, the facial nerve, like the trigeminal nerve above, is also a mixed nerve and is stimulated through the efferent pathway in the same manner.
Alan W. Gray
Full Text Available The current study addressed whether rated femininity, attractiveness, and health in female faces are associated with numerous indices of self-reported health history (number of colds, number of stomach bugs, and frequency of antibiotic use) in a sample of 105 females. It was predicted that all three rating variables would correlate negatively with bouts of illness (with the exception of rates of stomach infections), on the assumption that aspects of facial appearance signal mate quality. The results showed partial support for this prediction, in that there was a general trend for both facial femininity and attractiveness to correlate negatively with the reported number of colds in the preceding twelve months and with the frequency of antibiotic use in the last three years and the last twelve months. Rated facial femininity (as documented in September) was also associated with days of flu experienced in the period spanning the November-December months. However, rated health did not correlate with any of the health indices (albeit with one marginal result for antibiotic use in the last twelve months). The results lend support to previous findings linking facial femininity to health and suggest that facial femininity may be linked to some aspects of disease resistance but not others.
Müri, René M
The present Review deals with the motor control of facial expressions in humans. Facial expressions are a central part of human communication. Emotional face expressions have a crucial role in human nonverbal behavior, allowing a rapid transfer of information between individuals. Facial expressions can be either voluntarily or emotionally controlled. Recent studies in nonhuman primates and humans have revealed that the motor control of facial expressions has a distributed neural representation. At least five cortical regions on the medial and lateral aspects of each hemisphere are involved: the primary motor cortex, the ventral lateral premotor cortex, the supplementary motor area on the medial wall, and the rostral and caudal cingulate cortex. The results of studies in humans and nonhuman primates suggest that the innervation of the face is bilaterally controlled for the upper part and mainly contralaterally controlled for the lower part. Furthermore, the primary motor cortex, the ventral lateral premotor cortex, and the supplementary motor area are essential for the voluntary control of facial expressions. In contrast, the cingulate cortical areas are important for emotional expression, because they receive input from different structures of the limbic system.
Facial paralysis has been a recognized condition since Antiquity, and was mentioned by Hippocrates. In the 17th century, in 1687, the Dutch physician Stalpart van der Wiel recorded a detailed observation. It was, however, Charles Bell who, in 1821, provided the description that established the role of the facial nerve. Facial nerve surgery began at the end of the 19th century. Three different techniques were used successively: nerve anastomoses (XI-VII, Ballance 1895; XII-VII, Körte 1903), myoplasties (Lexer 1908), and suspensions (Stein 1913). Bunnell accomplished the first successful direct facial nerve repair in the temporal bone in 1927, and in 1932 Ballance and Duel experimented with nerve grafts. Thanks to progress in microsurgical techniques, the first faciofacial anastomosis was performed in 1970 (Smith, Scaramella), and an account of the first microneurovascular muscle transfer was published in 1976 by Harii. Treatment of eyelid paralysis gave rise to numerous operations beginning in the 1960s, including the palpebral spring (Morel-Fatio 1962), the silicone sling (Arion 1972), upper-lid loading with a gold plate (Illig 1968), magnets (Mühlbauer 1973), and cross-facial nerve grafts (Anderl 1973). By the end of the 20th century, surgeons had at their disposal a wide range of valid techniques for facial nerve surgery, including modernized versions of older techniques.
Full Text Available In 1984 Christopher Cordner offered a critical view on theories of graceful movement in sport developed by Ng. G. Wulk, David Best and Joseph Kupfer. In 2001 Paul Davis criticized his view. Cordner responded, rejecting all the criticism. More than a century earlier, Herbert Spencer and Jean-Marie Guyau had a similar controversy over grace. Both exchanges of opinion involve three positions: that grace is the most efficient movement and therefore something quantitative and measurable; that grace is an expression of the wholeness of the person and the world; and that grace is something which neither science nor philosophy can explain. To clarify these conflicting issues, this article proposes to examine the history of the notion, which goes back to the Latin gratia and has roots in the Ancient Greek charis, and to apply the concepts of cultural anchor and thin coherence, following John R. Searle's explanation that we produce epistemically objective accounts of ontologically subjective reality.
Lee, Young Hee; Im, Jaeg Yeong
This book supports the antinuclear movement, introducing many articles on nuclear issues in Asia and the Pacific area. The articles include the crusades of Reagan by Werner Plaha, contention between the superpowers in Europe by Alva Reimer Myrdal, claims of resistance by Daniel Ellsberg, nuclear weapons and the Korean Peninsula by Go, Seung Woo, liberation from belief in nuclear weapons by Lee, Young Hee, and nuclear weapons in Korea by Peter Haze.
Khandait, S P; Khandait, P D
In this paper, an approach to automatic facial feature extraction from a still, frontal, posed image, and to the classification and recognition of facial expression (and hence of a person's emotion and mood), is presented. A feed-forward back-propagation neural network is used as a classifier, assigning the supplied face to one of seven basic categories: surprise, neutral, sad, disgust, fear, happy and angry. For face segmentation and localization, morphological image processing operations are used. Permanent facial features such as the eyebrows, eyes, mouth and nose are extracted using the SUSAN edge detection operator, facial geometry, and edge projection analysis. Experiments carried out on the JAFFE facial expression database give good performance: 100% accuracy on the training set and 95.26% on the test set.
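The SUSAN principle used in the feature extraction stage can be sketched in a few lines: for each pixel, count the neighbours in a small window whose brightness is within a threshold of the centre pixel (the USAN area); a small USAN area indicates an edge. This is an illustrative toy version, not the paper's implementation; the 3x3 window and the parameters t and g are simplifications of the full circular-mask detector.

```python
import numpy as np

def susan_edge_response(img, t=10, g=7):
    """Toy SUSAN-style edge response over a 3x3 window.

    t: brightness-similarity threshold; g: geometric threshold on the
    USAN area (count of similar pixels, centre included).
    """
    img = img.astype(float)
    h, w = img.shape
    resp = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            usan = (np.abs(patch - img[y, x]) <= t).sum()
            resp[y, x] = max(g - usan, 0)  # small USAN -> strong edge response
    return resp

# Toy image: left half dark, right half bright; the response is nonzero
# only along the vertical boundary between the two regions.
img = np.zeros((5, 8))
img[:, 4:] = 100
resp = susan_edge_response(img)
```

In uniform regions the USAN area equals the full window (9 pixels), so the response is zero; on the step boundary only 6 of 9 pixels are similar, giving a positive response.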
LoBue, Vanessa; Thrasher, Cat
Emotional development is one of the largest and most productive areas of psychological research. For decades, researchers have been fascinated by how humans respond to, detect, and interpret emotional facial expressions. Much of the research in this area has relied on controlled stimulus sets of adults posing various facial expressions. Here we introduce a new stimulus set of emotional facial expressions into the domain of research on emotional development: The Child Affective Facial Expression set (CAFE). The CAFE set features photographs of a racially and ethnically diverse group of 2- to 8-year-old children posing for six emotional facial expressions (angry, fearful, sad, happy, surprised, and disgusted) and a neutral face. In the current work, we describe the set and report validity and reliability data on the set from 100 untrained adult participants.
Full Text Available Iatrogenic facial nerve palsy in mastoid surgery is regarded as almost taboo in modern medical practice, but one has to accept that every otologist encounters this entity at some point in his or her career. It is therefore of prime importance to be equipped to detect and manage these cases. The obvious and disfiguring facial deformity it causes makes this a dreaded complication. This article discusses our experience in managing four cases of iatrogenic facial palsy. The etiology in all cases was mastoidectomy for cholesteatoma. Detection of the site of injury and its repair were performed by the same surgeon in all cases. The facial nerve was transected completely in three cases, and in one case there was partial loss (>50% of fibers). Cable nerve grafting was used in three patients. There was grade 4 improvement in the three patients who underwent cable nerve grafting, and one patient had grade 2 recovery after end-to-end anastomosis. Good anatomical knowledge and experience with temporal bone dissection are of great importance in preventing facial nerve injury. If facial nerve injury is detected, it should be managed as early as possible. End-to-end anastomosis provides better final recovery than cable nerve grafting for facial nerve repair.
Melvin, Thuy-Anh N; Limb, Charles J
Facial paralysis represents the end result of a wide array of disorders and heterogeneous etiologies, including congenital, traumatic, infectious, neoplastic, and metabolic causes. Thus, facial palsy has a diverse range of presentations, from transient unilateral paresis to devastating permanent bilateral paralysis. Although not life-threatening, facial paralysis remains relatively common and can have truly severe effects on one's quality of life, with important ramifications in terms of psychological impact and physiologic burden. Prognosis and outcomes for patients with facial paralysis are highly dependent on the etiologic nature of the weakness as well as the treatment offered to the patient. Facial plastic surgeons are often asked to manage the sequelae of long-standing facial paralysis. It is important, however, for any practitioner who assists this population to have a sophisticated understanding of the common etiologies and initial management of facial paralysis. This article reviews the more common causes of facial paralysis and discusses relevant early treatment strategies.