WorldWideScience

Sample records for volubility gestural production

  1. Maternal Label and Gesture Use Affects Acquisition of Specific Object Names

    Science.gov (United States)

    Zammit, Maria; Schafer, Graham

    2011-01-01

    Ten mothers were observed prospectively, interacting with their infants aged 0;10 in two contexts (picture description and noun description). Maternal communicative behaviours were coded for volubility, gestural production and labelling style. Verbal labelling events were categorized into three exclusive categories: label only; label plus…

  2. Observation of static gestures influences speech production.

    Science.gov (United States)

    Jarick, Michelle; Jones, Jeffery A

    2008-08-01

    Research investigating 'mirror neurons' has demonstrated the presence of an observation-execution matching system in humans. One hypothesized role for this system might be to aid in action understanding by encoding the underlying intentions of the actor. To investigate this hypothesis, we asked participants to observe photographs of an actor making orofacial gestures (implying verbal or non-verbal acts), and to produce syllables that were compatible or incompatible with the gesture they observed. We predicted that if mirror neurons encode the intentions of an actor, then the pictures implying verbal gestures would affect speech production, whereas the non-verbal gestures would not. Our results showed that the observation of compatible verbal gestures facilitated verbal responses, while incompatible verbal gestures caused interference. Although this compatibility effect did not reach statistical significance when the photographs implied a non-verbal act, responses were faster on average when the gesture implied the use of similar articulators as those involved with the production of the target syllable. Altogether, these behavioral findings complement previous neuroimaging studies indicating that static pictures portraying gestures activate brain regions associated with an observation-execution matching system.

  3. Does Language Shape the Production and Perception of Gestures?

    NARCIS (Netherlands)

    Gu, Y.; Mol, L.; Hoetjes, M.W.; Swerts, M.G.J.; Bello, P.; Guarini, M.; McShane, M.; Scassellati, B.

    2014-01-01

    Does language influence the production and perception of gestures? The metaphorical use of language in representing time is deeply interlinked with actions in space, such as gestures. In Chinese, speakers can talk and gesture about time as if it were horizontal, sagittal, or vertical. In English,

  4. Gesture comprehension, knowledge and production in Alzheimer's disease.

    Science.gov (United States)

    Rousseaux, M; Rénier, J; Anicet, L; Pasquier, F; Mackowiak-Cordoliani, M A

    2012-07-01

    Although apraxia is a typical consequence of Alzheimer's disease (AD), the profile of apraxic impairments is still subject to debate. Here, we analysed apraxia components in patients with AD with mild-to-moderate or moderately severe dementia. Thirty-one patients were included. We first evaluated simple gestures, that is, the imitation of finger and hand configurations, symbolic gestures (recognition, production on verbal command and imitation), pantomimes (recognition, production on verbal command, imitation and production with the object), general knowledge and complex gestures (tool-object association, function-tool association, production of complex actions and knowledge about action sequences). Tests for dementia (Mini Mental State Examination and the Dementia Rating Scale), language disorders, visual agnosia and executive function were also administered. Compared with controls, patients showed significant difficulties (P ≤ 0.01) in subtests relating to simple gestures (except for the recognition and imitation of symbolic gestures). General knowledge about tools, objects and action sequences was less severely impaired. Performance was frequently correlated with the severity of dementia. Multiple-case analyses revealed that (i) the frequency of apraxia depended on the definition used, (ii) ideomotor apraxia was more frequent than ideational apraxia, (iii) conceptual difficulties were slightly more frequent than production difficulties in the early stage of AD and (iv) difficulties in gesture recognition were frequent (especially for pantomimes). Patients with AD can clearly show gesture apraxia from the mild-moderate stage of dementia onwards. Recognition and imitation disorders are relatively frequent (especially for pantomimes). We did not find conceptual difficulties to be the main problem in early-stage AD. © 2012 The Author(s) European Journal of Neurology © 2012 EFNS.

  5. A common functional neural network for overt production of speech and gesture.

    Science.gov (United States)

    Marstaller, L; Burianová, H

    2015-01-22

    The perception of co-speech gestures, i.e., hand movements that co-occur with speech, has been investigated by several studies. The results show that the perception of co-speech gestures engages a core set of frontal, temporal, and parietal areas. However, no study has yet investigated the neural processes underlying the production of co-speech gestures. Specifically, it remains an open question whether Broca's area is central to the coordination of speech and gestures as has been suggested previously. The objective of this study was to use functional magnetic resonance imaging to (i) investigate the regional activations underlying overt production of speech, gestures, and co-speech gestures, and (ii) examine functional connectivity with Broca's area. We hypothesized that co-speech gesture production would activate frontal, temporal, and parietal regions that are similar to areas previously found during co-speech gesture perception and that both speech and gesture as well as co-speech gesture production would engage a neural network connected to Broca's area. Whole-brain analysis confirmed our hypothesis and showed that co-speech gesturing did engage brain areas that form part of networks known to subserve language and gesture. Functional connectivity analysis further revealed a functional network connected to Broca's area that is common to speech, gesture, and co-speech gesture production. This network consists of brain areas that play essential roles in motor control, suggesting that the coordination of speech and gesture is mediated by a shared motor control network. Our findings thus lend support to the idea that speech can influence co-speech gesture production on a motoric level. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.

  6. Using a social robot to teach gestural recognition and production in children with autism spectrum disorders.

    Science.gov (United States)

    So, Wing-Chee; Wong, Miranda Kit-Yi; Lam, Carrie Ka-Yee; Lam, Wan-Yi; Chui, Anthony Tsz-Fung; Lee, Tsz-Lok; Ng, Hoi-Man; Chan, Chun-Hung; Fok, Daniel Chun-Wing

    2017-07-04

    While it has been argued that children with autism spectrum disorders are responsive to robot-like toys, very little research has examined the impact of robot-based intervention on gesture use. These children have delayed gestural development. We used a social robot in two phases to teach them to recognize and produce eight pantomime gestures that expressed feelings and needs. Compared to the children in the wait-list control group (N = 6), those in the intervention group (N = 7) were more likely to recognize gestures and to gesture accurately in trained and untrained scenarios. They also generalized the acquired recognition (but not production) skills to human-to-human interaction. The benefits and limitations of robot-based intervention for gestural learning were highlighted. Implications for Rehabilitation: Compared to typically-developing children, children with autism spectrum disorders have delayed development of gesture comprehension and production. A robot-based intervention program was developed to teach children with autism spectrum disorders recognition (Phase I) and production (Phase II) of eight pantomime gestures that expressed feelings and needs. Children in the intervention group (but not in the wait-list control group) were able to recognize more gestures in both trained and untrained scenarios and generalize the acquired gestural recognition skills to human-to-human interaction. Similar findings were reported for gestural production, except that there was no strong evidence showing that children in the intervention group could produce gestures accurately in human-to-human interaction.

  7. Phonological similarity affects production of gestures, even in the absence of overt speech

    Directory of Open Access Journals (Sweden)

    Nazbanou Nozari

    2015-09-01

    Are manual gestures affected by inner speech? This study tested the hypothesis that phonological form influences gesture by investigating whether phonological similarity between words that describe motion gestures creates interference for production of those gestures in the absence of overt speech. Participants learned to respond to a picture of a bottle by gesturing to open the bottle’s cap, and to a picture of long hair by gesturing to twirl the hair. In one condition, the gestures were introduced with the phonologically-similar labels twist and twirl (similar condition), while in the other condition, they were introduced with the phonologically-dissimilar labels unscrew and twirl (dissimilar condition). During the actual experiment, labels were not produced and participants only gestured by looking at pictures. In both conditions, participants also gestured to a control pair that was used as a baseline. Participants made significantly more errors on gestures in the similar than in the dissimilar condition after correction for baseline differences. This finding shows the influence of phonology on gesture production in the absence of overt speech and poses new constraints on the locus of the interaction between language and gesture systems.

  8. Voluble: a space-time diagram of the solar system

    Science.gov (United States)

    Aguilera, Julieta C.; SubbaRao, Mark U.

    2008-02-01

    Voluble is a dynamic space-time diagram of the solar system. Voluble is designed to help users understand the relationship between space and time in the motion of the planets around the sun. Voluble is set in virtual reality to relate these movements to our experience of immediate space. Beyond just the visual, understanding dynamic systems is naturally associated with the articulation of our bodies as we perform a number of complex calculations, albeit unconsciously, to deal with simple tasks. Such capabilities encompass spatial perception and memory. Voluble investigates the balance between the visually abstract and the spatially figurative in immersive development to help illuminate phenomena that are beyond the reach of human scale and time. While most diagrams, even computer-based interactive ones, are flat, three-dimensional real-time virtual reality representations are closer to our experience of space. The representation can be seen as if it were "really there," engaging a larger number of cues pertaining to our everyday spatial experience.

  9. The relationship between motor development, gestures and language production in the second year of life: a mediational analysis.

    Science.gov (United States)

    Longobardi, Emiddia; Spataro, Pietro; Rossi-Arnaud, Clelia

    2014-02-01

    This longitudinal study investigated the relationships between motor, gestural and linguistic abilities using two parent report instruments. Motor skills at 12 months significantly correlated with language production at 16, 20 and 23 months, but these associations were mediated by the use of representational gestures.

  10. Co-speech gesture production in an animation-narration task by bilinguals: a near-infrared spectroscopy study.

    Science.gov (United States)

    Oi, Misato; Saito, Hirofumi; Li, Zongfeng; Zhao, Wenjun

    2013-04-01

    To examine the neural mechanism of co-speech gesture production, we measured brain activity of bilinguals during an animation-narration task using near-infrared spectroscopy. The task of the participants was to watch two stories via an animated cartoon, and then narrate the contents in their first language (L1) and second language (L2), respectively. The participants showed significantly more gestures in L2 than in L1. The number of gestures decreased toward the end of the narration in L1, but not in L2. Analyses of concentration changes of oxygenated hemoglobin revealed that activation of the left inferior frontal gyrus (IFG) significantly increased during gesture production, while activation of the left posterior superior temporal sulcus (pSTS) significantly decreased in line with an increase in the left IFG. These brain activation patterns suggest that the left IFG is involved in gesture production, and the left pSTS is modulated by the speech load.

  11. The Development of Vocabulary in Spanish Children with Down Syndrome: Comprehension, Production, and Gestures

    Science.gov (United States)

    Galeote, Miguel; Sebastian, Eugenia; Checa, Elena; Rey, Rocio; Soto, Pilar

    2011-01-01

    Background: Our main purpose was to compare the lexical development of Spanish children with Down syndrome (DS) and children with typical development (TD) to investigate the relationship between cognitive and vocabulary development in comprehension and oral and gestural production. Method: Participants were 186 children with DS and 186 children…

  12. Mainstreaming gesture based interfaces

    Directory of Open Access Journals (Sweden)

    David Procházka

    2013-01-01

    Gestures are a common way of interacting with mobile devices. They emerged especially with the introduction of the iPhone. Gestures in currently used devices are usually based on the original gestures presented by Apple in its iOS (iPhone Operating System). Therefore, there is wide agreement on mobile gesture design. In recent years, experiments with gesture usage have also appeared in other areas of consumer electronics and computers; examples include televisions, large projections, etc. These gestures can be described as spatial or 3D gestures. They are connected with a natural 3D environment rather than with a flat 2D screen. Nevertheless, it is hard to find a comparable design agreement for spatial gestures. Various projects are based on completely different gesture sets. This situation is confusing for users and slows down spatial gesture adoption. This paper is focused on the standardization of spatial gestures. A review of projects on spatial gesture usage is provided in the first part. The main emphasis is placed on the usability point of view. On the basis of our analysis, we argue that usability is the key issue enabling wide adoption. Mobile gestures emerged easily because the iPhone gestures were natural; therefore, it was not necessary to learn them. The design and implementation of our presentation software, which is controlled by gestures, is outlined in the second part of the paper. Furthermore, usability testing results are provided as well. We tested our application on a group of users not instructed in the implemented gesture design. These results were compared with those obtained with our original implementation. The evaluation can be used as a basis for the implementation of similar projects.

  13. Electrophysiological and Kinematic Correlates of Communicative Intent in the Planning and Production of Pointing Gestures and Speech.

    Science.gov (United States)

    Peeters, David; Chu, Mingyuan; Holler, Judith; Hagoort, Peter; Özyürek, Aslı

    2015-12-01

    In everyday human communication, we often express our communicative intentions by manually pointing out referents in the material world around us to an addressee, often in tight synchronization with referential speech. This study investigated whether and how the kinematic form of index finger pointing gestures is shaped by the gesturer's communicative intentions and how this is modulated by the presence of concurrently produced speech. Furthermore, we explored the neural mechanisms underpinning the planning of communicative pointing gestures and speech. Two experiments were carried out in which participants pointed at referents for an addressee while the informativeness of their gestures and speech was varied. Kinematic and electrophysiological data were recorded online. It was found that participants prolonged the duration of the stroke and poststroke hold phase of their gesture to be more communicative, in particular when the gesture was carrying the main informational burden in their multimodal utterance. Frontal and P300 effects in the ERPs suggested the importance of intentional and modality-independent attentional mechanisms during the planning phase of informative pointing gestures. These findings contribute to a better understanding of the complex interplay between action, attention, intention, and language in the production of pointing gestures, a communicative act core to human interaction.

  14. Gesture Production in Language Impairment: It's Quality, Not Quantity, That Matters

    Science.gov (United States)

    Wray, Charlotte; Saunders, Natalie; McGuire, Rosie; Cousins, Georgia; Norbury, Courtenay Frazier

    2017-01-01

    Purpose: The aim of this study was to determine whether children with language impairment (LI) use gesture to compensate for their language difficulties. Method: The present study investigated gesture accuracy and frequency in children with LI (n = 21) across gesture imitation, gesture elicitation, spontaneous narrative, and interactive…

  15. Altered Gesture and Speech Production in ASD Detract from In-Person Communicative Quality.

    Science.gov (United States)

    Morett, Laura M; O'Hearn, Kirsten; Luna, Beatriz; Ghuman, Avniel Singh

    2016-03-01

    This study disentangled the influences of language and social processing on communication in autism spectrum disorder (ASD) by examining whether gesture and speech production differs as a function of social context. The results indicate that, unlike other adolescents, adolescents with ASD did not increase their coherency and engagement in the presence of a visible listener, and that greater coherency and engagement were related to lesser social and communicative impairments. Additionally, the results indicated that adolescents with ASD produced sparser speech and fewer gestures conveying supplementary information, and that both of these effects increased in the presence of a visible listener. Together, these findings suggest that interpersonal communication deficits in ASD are driven more strongly by social processing than language processing.

  16. Comprehensive assessment of gesture production: a new test of upper limb apraxia (TULIA).

    Science.gov (United States)

    Vanbellingen, T; Kersten, B; Van Hemelrijk, B; Van de Winckel, A; Bertschi, M; Müri, R; De Weerdt, W; Bohlhalter, S

    2010-01-01

    Only a few standardized apraxia scales are available, and they do not cover all domains and semantic features of gesture production. Therefore, the objective of the present study was to evaluate the reliability and validity of a newly developed test of upper limb apraxia (TULIA), which is comprehensive yet short to administer. The TULIA consists of 48 items covering the imitation and pantomime domains of non-symbolic (meaningless), intransitive (communicative) and transitive (tool-related) gestures, corresponding to 6 subtests. A 6-point scoring method (0-5) was used (score range 0-240). Performance was assessed by blinded raters based on videos in 133 stroke patients, 84 with left hemisphere damage (LHD) and 49 with right hemisphere damage (RHD), as well as 50 healthy subjects (HS). The clinimetric findings demonstrated mostly good to excellent internal consistency, inter- and intra-rater (test-retest) reliability, both at the level of the six subtests and at the individual item level. Criterion validity was evaluated by confirming hypotheses based on the literature. Construct validity was demonstrated by a high correlation (r = 0.82) with the De Renzi test. These results show that the TULIA is both a reliable and valid test to systematically assess gesture production. The test can be easily applied and is therefore useful for both research purposes and clinical practice.

  17. Patterns of apraxia associated with the production of intransitive limb gestures following left and right hemisphere stroke.

    Science.gov (United States)

    Heath, M; Roy, E A; Westwood, D; Black, S E

    2001-01-01

    The model of apraxia proposed by Roy (1996) states that three patterns of apraxia should be observed across pantomime and imitation conditions. In the present analysis, the frequency and severity of each pattern of apraxia were examined in a consecutive sample of left-hemisphere-damaged (LHD) and right-hemisphere-damaged (RHD) patients during the production of intransitive limb gestures. The results indicated that a significant proportion of LHD and RHD patients were selectively impaired in formulating the ideational component of intransitive limb gestures.

  18. Gesture Facilitates Children's Creative Thinking.

    Science.gov (United States)

    Kirk, Elizabeth; Lewis, Carine

    2017-02-01

    Gestures help people think and can help problem solvers generate new ideas. We conducted two experiments exploring the self-oriented function of gesture in a novel domain: creative thinking. In Experiment 1, we explored the relationship between children's spontaneous gesture production and their ability to generate novel uses for everyday items (alternative-uses task). There was a significant correlation between children's creative fluency and their gesture production, and the majority of children's gestures depicted an action on the target object. Restricting children from gesturing did not significantly reduce their fluency, however. In Experiment 2, we encouraged children to gesture, and this significantly boosted their generation of creative ideas. These findings demonstrate that gestures serve an important self-oriented function and can assist creative thinking.

  19. Increased pain intensity is associated with greater verbal communication difficulty and increased production of speech and co-speech gestures.

    Science.gov (United States)

    Rowbotham, Samantha; Wardy, April J; Lloyd, Donna M; Wearden, Alison; Holler, Judith

    2014-01-01

    Effective pain communication is essential if adequate treatment and support are to be provided. Pain communication is often multimodal, with sufferers utilising speech, nonverbal behaviours (such as facial expressions), and co-speech gestures (bodily movements, primarily of the hands and arms that accompany speech and can convey semantic information) to communicate their experience. Research suggests that the production of nonverbal pain behaviours is positively associated with pain intensity, but it is not known whether this is also the case for speech and co-speech gestures. The present study explored whether increased pain intensity is associated with greater speech and gesture production during face-to-face communication about acute, experimental pain. Participants (N = 26) were exposed to experimentally elicited pressure pain to the fingernail bed at high and low intensities and took part in video-recorded semi-structured interviews. Despite rating more intense pain as more difficult to communicate (t(25) = 2.21, p = .037), participants produced significantly longer verbal pain descriptions and more co-speech gestures in the high intensity pain condition (Words: t(25) = 3.57, p = .001; Gestures: t(25) = 3.66, p = .001). This suggests that spoken and gestural communication about pain is enhanced when pain is more intense. Thus, in addition to conveying detailed semantic information about pain, speech and co-speech gestures may provide a cue to pain intensity, with implications for the treatment and support received by pain sufferers. Future work should consider whether these findings are applicable within the context of clinical interactions about pain.
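
    As a minimal sketch of the paired (within-subjects) comparison reported above, the snippet below runs a paired t-test on made-up word counts; it is not the authors' code, the data are hypothetical illustrative values, and the scipy dependency is an assumption:

        from scipy import stats

        # Hypothetical word counts per participant in the low- and
        # high-intensity pain conditions (paired, within subjects).
        low = [52, 40, 61, 35, 48, 57, 44, 50]
        high = [70, 55, 66, 52, 60, 75, 58, 63]

        # Paired t-test, mirroring the t(25) comparisons in the abstract
        # (here df = 7 because only 8 hypothetical participants are used).
        t, p = stats.ttest_rel(high, low)
        print(f"t({len(low) - 1}) = {t:.2f}, p = {p:.3f}")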

  20. Co-Speech Gesture Production in an Animation-Narration Task by Bilinguals: A Near-Infrared Spectroscopy Study

    Science.gov (United States)

    Oi, Misato; Saito, Hirofumi; Li, Zongfeng; Zhao, Wenjun

    2013-01-01

    To examine the neural mechanism of co-speech gesture production, we measured brain activity of bilinguals during an animation-narration task using near-infrared spectroscopy. The task of the participants was to watch two stories via an animated cartoon, and then narrate the contents in their first language (L1) and second language (L2),…

  2. Gestures and insight in advanced mathematical thinking

    Science.gov (United States)

    Yoon, Caroline; Thomas, Michael O. J.; Dreyfus, Tommy

    2011-10-01

    What role do gestures play in advanced mathematical thinking? We argue that the role of gestures goes beyond merely communicating thought and supporting understanding - in some cases, gestures can help generate new mathematical insights. Gestures feature prominently in a case study of two participants working on a sequence of calculus activities. One participant uses gestures to clarify the relationships between a function, its derivative and its antiderivative. We show how these gestures help create a virtual mathematical construct, which in turn leads to a new problem-solving strategy. These results suggest that gestures are a productive, but potentially undertapped resource for generating new insights in advanced levels of mathematics.

  3. To beg, or not to beg? That is the question: mangabeys modify their production of requesting gestures in response to human's attentional states.

    Directory of Open Access Journals (Sweden)

    Audrey Maille

    BACKGROUND: Although gestural communication is widespread in primates, few studies have focused on the cognitive processes underlying gestures produced by monkeys. METHODOLOGY/PRINCIPAL FINDINGS: The present study asked whether red-capped mangabeys (Cercocebus torquatus) trained to produce visually based requesting gestures modify their gestural behavior in response to a human's attentional state. The experimenter held a food item and displayed five different attentional states that differed on the basis of body, head and gaze orientation; mangabeys had to request food by extending an arm toward the food item (begging gesture). Mangabeys were sensitive, at least to some extent, to the human's attentional state. They reacted to some postural cues of a human recipient: they gestured more and faster when both the body and the head of the experimenter were oriented toward them than when they were oriented away. However, they did not seem to use gaze cues to recognize an attentive human: monkeys begged at similar levels regardless of the state of the experimenter's eyes. CONCLUSIONS/SIGNIFICANCE: These results indicate that mangabeys lowered their production of begging gestures when these could not be perceived by the human who had to respond to them. This finding provides important evidence that the acquired begging gestures of monkeys might be used intentionally.

  4. When does a system become phonological? Handshape production in gesturers, signers, and homesigners.

    Science.gov (United States)

    Brentari, Diane; Coppola, Marie; Mazzoni, Laura; Goldin-Meadow, Susan

    2012-02-01

    Sign languages display remarkable crosslinguistic consistencies in the use of handshapes. In particular, handshapes used in classifier predicates display a consistent pattern in finger complexity: classifier handshapes representing objects display more finger complexity than those representing how objects are handled. Here we explore the conditions under which this morphophonological phenomenon arises. In Study 1, we ask whether hearing individuals in Italy and the United States, asked to communicate using only their hands, show the same pattern of finger complexity found in the classifier handshapes of two sign languages: Italian Sign Language (LIS) and American Sign Language (ASL). We find that they do not: gesturers display more finger complexity in handling handshapes than in object handshapes. The morphophonological pattern found in conventional sign languages is therefore not a codified version of the pattern invented by hearing individuals on the spot. In Study 2, we ask whether continued use of gesture as a primary communication system results in a pattern that is more similar to the morphophonological pattern found in conventional sign languages or to the pattern found in gesturers. Homesigners have not acquired a signed or spoken language and instead use a self-generated gesture system to communicate with their hearing family members and friends. We find that homesigners pattern more like signers than like gesturers: their finger complexity in object handshapes is higher than that of gesturers (indeed as high as signers); and their finger complexity in handling handshapes is lower than that of gesturers (but not quite as low as signers). Generally, our findings indicate two markers of the phonologization of handshape in sign languages: increasing finger complexity in object handshapes, and decreasing finger complexity in handling handshapes. These first indicators of phonology appear to be present in individuals developing a gesture system without benefit

  5. Imposing Cognitive Constraints on Reference Production : The Interplay Between Speech and Gesture During Grounding

    NARCIS (Netherlands)

    Masson-Carro, Ingrid; Goudbeek, Martijn; Krahmer, Emiel

    2016-01-01

    Past research has sought to elucidate how speakers and addressees establish common ground in conversation, yet few studies have focused on how visual cues such as co-speech gestures contribute to this process. Likewise, the effect of cognitive constraints on multimodal grounding remains to be

  6. Mothers' Production of Hand Gestures While Communicating with their Preschool Children Under Various Task Conditions.

    Science.gov (United States)

    Gutmann, Arlyne J.; Turnure, James E.

    This study investigates hand gesturing behavior produced by mothers communicating with their first born 2- to 3-year-old children and their 4- to 5-year-old children. Thirty-two mother-child pairs were assigned to groups balanced equally for age and sex. After it was confirmed that the older children produced longer utterances, the mother-child…

  7. A Tale of Two Hands: Children's Early Gesture Use in Narrative Production Predicts Later Narrative Structure in Speech

    Science.gov (United States)

    Demir, Özlem Ece; Levine, Susan C.; Goldin-Meadow, Susan

    2015-01-01

    Speakers of all ages spontaneously gesture as they talk. These gestures predict children's milestones in vocabulary and sentence structure. We ask whether gesture serves a similar role in the development of narrative skill. Children were asked to retell a story conveyed in a wordless cartoon at age five and then again at six, seven, and eight.…

  8. Gesture Recognition Summarization

    Institute of Scientific and Technical Information of China (English)

    ZHANG Ting-fang; FENG Zhi-quan; SU Yuan-yuan; JIANG Yan

    2014-01-01

    Gesture recognition is an important research area in the field of human-computer interaction. Hand gestures are highly variable and flexible, so gesture recognition has always been an important challenge for researchers. In this paper, we first outline the development of gesture recognition and the different classifications of gestures based on different purposes. We then introduce the common methods used in gesture segmentation, feature extraction and recognition. Finally, we summarize the state of gesture recognition research and give prospects for future study.
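
    The segmentation / feature-extraction / recognition pipeline named in this abstract can be illustrated with a minimal sketch. The motion-threshold segmenter, the resampled-trajectory features and the nearest-neighbour template matcher below are illustrative assumptions, not the specific methods surveyed in the paper:

        import math

        def segment_stream(samples, motion_threshold=0.05):
            """Split a stream of (x, y) hand positions into motion segments:
            a segment starts when frame-to-frame displacement exceeds the
            threshold and ends when the hand comes to rest again."""
            segments, current = [], []
            for prev, curr in zip(samples, samples[1:]):
                if math.dist(prev, curr) > motion_threshold:
                    current.append(curr)
                elif current:
                    segments.append(current)
                    current = []
            if current:
                segments.append(current)
            return segments

        def features(traj, n_points=8):
            """Resample a trajectory to a fixed length, then translate and
            scale it so that matching is position- and size-invariant."""
            step = max(len(traj) - 1, 1) / (n_points - 1)
            pts = [traj[min(round(i * step), len(traj) - 1)] for i in range(n_points)]
            xs, ys = zip(*pts)
            cx, cy = sum(xs) / n_points, sum(ys) / n_points
            scale = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
            return [((x - cx) / scale, (y - cy) / scale) for x, y in pts]

        def classify(traj, templates):
            """Nearest-neighbour matching against labelled template gestures."""
            f = features(traj)
            return min(templates, key=lambda label: sum(
                math.dist(a, b) for a, b in zip(f, templates[label])))

        # Usage: two toy templates and a noisy rightward swipe.
        templates = {
            "swipe_right": features([(0.0, 0.0), (0.5, 0.0), (1.0, 0.0)]),
            "swipe_up": features([(0.0, 0.0), (0.0, 0.5), (0.0, 1.0)]),
        }
        stream = [(0.0, 0.0)] * 3 + [(0.1 * i, 0.01) for i in range(1, 11)] + [(1.0, 0.01)] * 3
        for seg in segment_stream(stream):
            print(classify(seg, templates))  # -> swipe_right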

  9. Gesture Interfaces

    NARCIS (Netherlands)

    Fikkert, F.W.

    2007-01-01

    Take away mouse and keyboard. Now, how do you interact with a computer? Especially one that has a display that is the size of an entire wall. One possibility is through gesture interfaces. Remember Minority Report? Cool stuff, but that was already five years ago. So, what is already possible now an

  10. Effects of age and language on co-speech gesture production: an investigation of French, American, and Italian children's narratives.

    Science.gov (United States)

    Colletta, Jean-Marc; Guidetti, Michèle; Capirci, Olga; Cristilli, Carla; Demir, Ozlem Ece; Kunene-Nicolas, Ramona N; Levine, Susan

    2015-01-01

    The aim of this paper is to compare speech and co-speech gestures observed during a narrative retelling task in five- and ten-year-old children from three different linguistic groups, French, American, and Italian, in order to better understand the role of age and language in the development of multimodal monologue discourse abilities. We asked 98 five- and ten-year-old children to narrate a short, wordless cartoon. Results showed a common developmental trend as well as linguistic and gesture differences between the three language groups. In all three languages, older children were found to give more detailed narratives, to insert more comments, and to gesture more and use different gestures--specifically gestures that contribute to the narrative structure--than their younger counterparts. Taken together, these findings allow a tentative model of multimodal narrative development in which major changes in later language acquisition occur despite language and culture differences.

  11. Iconicity and ape gesture.

    OpenAIRE

    Perlman, M; Clark, N.; Tanner, J

    2014-01-01

    Iconic gestures are hypothesized to be crucial to the evolution of language. Yet the important question of whether apes produce iconic gestures is the subject of considerable debate. This paper presents the current state of research on iconicity in ape gesture. In particular, it describes some of the empirical evidence suggesting that apes produce three different kinds of iconic gestures; it compares the iconicity hypothesis to other major hypotheses of ape gesture; and finally, it offers so...

  12. Action Imitation at 1 ½ Years is Better Than Pointing Gesture in Predicting Late Development of Language Production at 3 Years of Age

    Science.gov (United States)

    Zambrana, Imac M.; Ystrom, Eivind; Schjølberg, Synnve; Pons, Francisco

    2012-01-01

    This study examined whether poor pointing gestures and imitative actions at 18 months of age uniquely predicted late language production at 36 months, beyond the role of poor language at 18 months of age. Data from the Norwegian Mother and Child Cohort Study were utilized. Maternal reports of the children’s nonverbal skills and language were gathered for 42,517 children aged 18 months and for 28,107 of the same children at 36 months. Panel analysis of latent variables revealed that imitative actions, language comprehension, and language production uniquely contributed to predicting late development of language production, while pointing gestures did not. It is suggested that the results can be explained by underlying symbolic representational skills at 18 months. PMID:23033814

  13. Gesturing by speakers with aphasia: How does it compare?

    NARCIS (Netherlands)

    L. Mol (Linda); E. Krahmer (Emiel); W.M.E. van de Sandt-Koenderman (Mieke)

    2013-01-01

    Purpose: To study the independence of gesture and verbal language production. The authors assessed whether gesture can be semantically compensatory in cases of verbal language impairment and whether speakers with aphasia and control participants use similar depiction techniques in gesture

  14. When Gesture Becomes Analogy.

    Science.gov (United States)

    Cooperrider, Kensy; Goldin-Meadow, Susan

    2017-07-01

    Analogy researchers do not often examine gesture, and gesture researchers do not often borrow ideas from the study of analogy. One borrowable idea from the world of analogy is the importance of distinguishing between attributes and relations. Gentner observed that some metaphors highlight attributes and others highlight relations, and called the latter analogies. Mirroring this logic, we observe that some metaphoric gestures represent attributes and others represent relations, and propose to call the latter analogical gestures. We provide examples of such analogical gestures and show how they relate to the categories of iconic and metaphoric gestures described previously. Analogical gestures represent different types of relations and different degrees of relational complexity, and sometimes cohere into larger analogical models. Treating analogical gestures as a distinct phenomenon prompts new questions and predictions, and illustrates one way that the study of gesture and the study of analogy can be mutually informative. Copyright © 2017 Cognitive Science Society, Inc.

  15. Gesture discrimination in primary progressive aphasia: the intersection between gesture and language processing pathways.

    Science.gov (United States)

    Nelissen, Natalie; Pazzaglia, Mariella; Vandenbulcke, Mathieu; Sunaert, Stefan; Fannes, Katrien; Dupont, Patrick; Aglioti, Salvatore M; Vandenberghe, Rik

    2010-05-05

    The issue of the relationship between language and gesture processing and the partial overlap of their neural representations is of fundamental importance to neurology, psychology, and social sciences. Patients suffering from primary progressive aphasia, a clinical syndrome characterized by comparatively isolated language deficits, may provide direct evidence for anatomical and functional association between specific language deficits and gesture discrimination deficits. A consecutive series of 16 patients with primary progressive aphasia and 16 matched control subjects participated. Our nonverbal gesture discrimination task consisted of 19 trials. In each trial, participants observed three video clips showing the same gesture performed correctly in one clip and incorrectly in the other two. Subjects had to indicate which of the three versions was correct. Language and gesture production were evaluated by means of conventional tasks. All participants underwent high-resolution structural and diffusion tensor magnetic resonance imaging. Ten of the primary progressive aphasia patients showed a significant deficit on the nonverbal gesture discrimination task. A factor analysis revealed that this deficit clustered with gesture imitation, word and pseudoword repetition, and writing-to-dictation. Individual scores on this cluster correlated with volume in the left anterior inferior parietal cortex extending into the posterior superior temporal gyrus. Probabilistic tractography indicated this region comprised the cortical relay station of the indirect pathway connecting the inferior frontal gyrus and the superior temporal cortex. Thus, the left perisylvian temporoparietal area may underpin verbal imitative behavior, gesture imitation, and gesture discrimination indicative of a partly shared neural substrate for language and gesture resonance.

  16. Yield and reaction to Colletotrichum lindemuthianum in climbing bean (Phaseolus vulgaris L.) cultivars.

    OpenAIRE

    Gallego G, Carolina; Ligarreto Moreno, Gustavo Adolfo; Garzón Gutiérrez, Luz Nayibe; Oliveros Garay, Óscar Arturo; Rincón Rivera, Linda Jeimmy

    2011-01-01

    Under the conditions of the Bogotá savanna (Colombia), 32 climbing bean cultivars were evaluated for yield components and for their reaction to a mixture of Colletotrichum lindemuthianum isolates from Boyacá and Cundinamarca. The genotypes that showed good yield performance and a field resistance reaction to the disease were D. Moreno and 3198. Those that expressed a resistance reaction to anthracnose were 3180, 3182, 3177 and G-2333....

  17. Some Reasons for Studying Gesture and Second Language Acquisition (Hommage a Adam Kendon)

    Science.gov (United States)

    Gullberg, Marianne

    2006-01-01

    This paper outlines some reasons for why gestures are relevant to the study of SLA. First, given cross-cultural and cross-linguistic gestural repertoires, gestures can be treated as part of what learners can acquire in a target language. Gestures can therefore be studied as a developing system in their own right both in L2 production and…

  18. From Gesture to Speech

    Directory of Open Access Journals (Sweden)

    Maurizio Gentilucci

    2012-11-01

    One of the major problems concerning the evolution of human language is to understand how sounds became associated with meaningful gestures. It has been proposed that the circuit controlling gestures and speech evolved from a circuit involved in the control of arm and mouth movements related to ingestion. This circuit contributed to the evolution of spoken language, moving from a system of communication based on arm gestures. The discovery of mirror neurons has provided strong support for the gestural theory of speech origin because they offer a natural substrate for the embodiment of language and create a direct link between sender and receiver of a message. Behavioural studies indicate that manual gestures are linked to mouth movements used for syllable emission. Grasping with the hand selectively affected movement of inner or outer parts of the mouth according to syllable pronunciation, and hand postures, in addition to hand actions, influenced the control of mouth grasp and vocalization. Gestures and words are also related to each other. It was found that when producing communicative gestures (emblems), the intention to interact directly with a conspecific was transferred from gestures to words, inducing modification in voice parameters. Transfer effects of the meaning of representational gestures were found on both vocalizations and meaningful words. It has been concluded that the results of our studies suggest the existence of a system relating gesture to vocalization which was the precursor of a more general system reciprocally relating gesture to word.

  19. Gesturing by speakers with aphasia: how does it compare?

    Science.gov (United States)

    Mol, Lisette; Krahmer, Emiel; van de Sandt-Koenderman, Mieke

    2013-08-01

    To study the independence of gesture and verbal language production. The authors assessed whether gesture can be semantically compensatory in cases of verbal language impairment and whether speakers with aphasia and control participants use similar depiction techniques in gesture. The informativeness of gesture was assessed in 3 forced-choice studies, in which raters assessed the topic of the speaker's message in video clips of 13 speakers with moderate aphasia and 12 speakers with severe aphasia, who were performing a communication test (the Scenario Test). Both groups were compared and contrasted with 17 control participants, who either were or were not allowed to communicate verbally. In addition, the representation techniques used in gesture were analyzed. Gestures produced by speakers with more severe aphasia were less informative than those by speakers with moderate aphasia, yet they were not necessarily uninformative. Speakers with more severe aphasia also tended to use fewer representation techniques (mostly relying on outlining gestures) in co-speech gesture than control participants, who were asked to use gesture instead of speech. It is important to note that limb apraxia may be a mediating factor here. These results suggest that in aphasia, gesture tends to degrade with verbal language. This may imply that the processes underlying verbal language and co-speech gesture production, although partly separate, are closely linked.

  20. Co-verbal gestures among speakers with aphasia: Influence of aphasia severity, linguistic and semantic skills, and hemiplegia on gesture employment in oral discourse

    Science.gov (United States)

    Kong, Anthony Pak-Hin; Law, Sam-Po; Wat, Watson Ka-Chun; Lai, Christy

    2015-01-01

    The use of co-verbal gestures is common in human communication and has been reported to assist word retrieval and to facilitate verbal interactions. This study systematically investigated the impact of aphasia severity, integrity of semantic processing, and hemiplegia on the use of co-verbal gestures, with reference to gesture forms and functions, by 131 normal speakers, 48 individuals with aphasia and their controls. All participants were native Cantonese speakers. It was found that the severity of aphasia and verbal-semantic impairment was associated with significantly more co-verbal gestures. However, there was no relationship between right-sided hemiplegia and gesture employment. Moreover, significantly more gestures were employed by the speakers with aphasia, but about 10% of them did not gesture. Among those who used gestures, content-carrying gestures, including iconic, metaphoric, deictic gestures, and emblems, served the function of enhancing language content and providing information additional to the language content. As for the non-content carrying gestures, beats were used primarily for reinforcing speech prosody or guiding speech flow, while non-identifiable gestures were associated with assisting lexical retrieval or with no specific functions. The above findings would enhance our understanding of the use of various forms of co-verbal gestures in aphasic discourse production and their functions. Speech-language pathologists may also refer to the current annotation system and the results to guide clinical evaluation and remediation of gestures in aphasia. PMID:26186256

  2. Gestural development and its relation to a child's early vocabulary.

    Science.gov (United States)

    Kraljević, Jelena Kuvač; Cepanec, Maja; Simleša, Sanja

    2014-05-01

    Gesture and language are tightly connected during the development of a child's communication skills. Gestures mostly precede and shape the course of language development, although influence in the opposite direction has also been found. A few recent studies have focused on the relationship between specific gestures and specific word categories, emphasising that the onset of one gesture type predicts the onset of certain word categories or of the earliest word combinations. The aim of this study was to analyse the predictive roles of different gesture types on the onset of the first word categories in a child's early expressive vocabulary. Our data show that different types of gestures predict different types of word production. Object gestures predict open-class words from the age of 13 months, and gestural routines predict closed-class words and social terms from 8 months. Receptive vocabulary has a strong mediating role for all linguistically defined categories (open- and closed-class words) but not for social terms, which are the largest word category in a child's early expressive vocabulary. Accordingly, the main contribution of this study is to define the impact of different gesture types on early expressive vocabulary and to determine the role of receptive vocabulary in the gesture-expressive vocabulary relation in the Croatian language. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. Mnemonic Effect of Iconic Gesture and Beat Gesture in Adults and Children: Is Meaning in Gesture Important for Memory Recall?

    Science.gov (United States)

    So, Wing Chee; Chen-Hui, Colin Sim; Wei-Shan, Julie Low

    2012-01-01

    Abundant research has shown that encoding meaningful gesture, such as an iconic gesture, enhances memory. This paper asked whether gesture needs to carry meaning to improve memory recall by comparing the mnemonic effect of meaningful (i.e., iconic gestures) and nonmeaningful gestures (i.e., beat gestures). Beat gestures involve simple motoric…

  4. A Comparison of Coverbal Gesture Use in Oral Discourse Among Speakers With Fluent and Nonfluent Aphasia.

    Science.gov (United States)

    Kong, Anthony Pak-Hin; Law, Sam-Po; Chak, Gigi Wan-Chi

    2017-07-12

    Coverbal gesture use, which is affected by the presence and degree of aphasia, can be culturally specific. The purpose of this study was to compare gesture use among Cantonese-speaking individuals: 23 neurologically healthy speakers, 23 speakers with fluent aphasia, and 21 speakers with nonfluent aphasia. Multimedia data of discourse samples from these speakers were extracted from the Cantonese AphasiaBank. Gestures were independently annotated on their forms and functions to determine how gesturing rate and distribution of gestures differed across speaker groups. A multiple regression was conducted to determine the most predictive variable(s) for gesture-to-word ratio. Although speakers with nonfluent aphasia gestured most frequently, the rate of gesture use in counterparts with fluent aphasia did not differ significantly from controls. Different patterns of gesture functions in the 3 speaker groups revealed that gesture plays a minor role in lexical retrieval whereas its role in enhancing communication dominates among the speakers with aphasia. The percentages of complete sentences and dysfluency strongly predicted the gesturing rate in aphasia. The current results supported the sketch model of language-gesture association. The relationship between gesture production and linguistic abilities and clinical implications for gesture-based language intervention for speakers with aphasia are also discussed.
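
    As an illustration of the kind of analysis reported above (not the authors' code; the per-speaker values and variable names below are hypothetical, and the numpy dependency is an assumption), a multiple regression predicting gesture-to-word ratio from two discourse measures can be set up as follows:

        import numpy as np

        # Hypothetical per-speaker discourse measures: percentage of
        # complete sentences, dysfluency rate, and gesture-to-word ratio.
        pct_complete = np.array([80.0, 65.0, 40.0, 25.0, 10.0])
        dysfluency = np.array([2.0, 5.0, 9.0, 14.0, 20.0])
        ratio = np.array([0.05, 0.09, 0.15, 0.22, 0.30])

        # Least-squares fit of ratio ~ b0 + b1 * pct_complete + b2 * dysfluency.
        X = np.column_stack([np.ones_like(ratio), pct_complete, dysfluency])
        coefs, *_ = np.linalg.lstsq(X, ratio, rcond=None)
        print(dict(zip(["intercept", "pct_complete", "dysfluency"], np.round(coefs, 4))))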

  5. Visual speech gestures modulate efferent auditory system.

    Science.gov (United States)

    Namasivayam, Aravind Kumar; Wong, Wing Yiu Stephanie; Sharma, Dinaay; van Lieshout, Pascal

    2015-03-01

    Visual and auditory systems interact at both cortical and subcortical levels. Studies suggest a highly context-specific cross-modal modulation of the auditory system by the visual system. The present study builds on this work by sampling data from 17 young healthy adults to test whether visual speech stimuli evoke different responses in the auditory efferent system compared to visual non-speech stimuli. The descending cortical influences on medial olivocochlear (MOC) activity were indirectly assessed by examining the effects of contralateral suppression of transient-evoked otoacoustic emissions (TEOAEs) at 1, 2, 3 and 4 kHz under three conditions: (a) in the absence of any contralateral noise (Baseline), (b) contralateral noise + observing facial speech gestures related to productions of vowels /a/ and /u/ and (c) contralateral noise + observing facial non-speech gestures related to smiling and frowning. The results are based on 7 individuals whose data met strict recording criteria and indicated a significant difference in TEOAE suppression between the speech-gesture and non-speech-gesture conditions, but only at the 1 kHz frequency. These results suggest that observing a speech gesture compared to a non-speech gesture may trigger a difference in MOC activity, possibly to enhance peripheral neural encoding. If such findings can be reproduced in future research, sensory perception models and theories positing the downstream convergence of unisensory streams of information in the cortex may need to be revised.

  6. A labial gesture for /l/

    Science.gov (United States)

    Campbell, Fiona; Gick, Bryan

    2003-04-01

    Both in language change and in substitutions during language acquisition and disordered speech, /l/ has often been observed to alternate with labial sounds such as [w] or rounded vowels, particularly in postvocalic position. While there are many possible explanations for this alternation, including acoustic enhancement and articulator coupling, one possibility that has not been tested is whether normal adult speakers of English actually produce lip rounding for /l/. A study was conducted to test for the presence of a labial gesture in normal productions of /l/. Front and side video data of lip positions were collected from three adult English speakers during productions of /l/ and /d/. Significant differences were found for all subjects in lip protrusion (upper and lower) and/or lip aperture (horizontal and vertical) in post-vocalic allophones, as well as between the pre- and post-vocalic allophones of /l/. No significant differences were observed in comparisons of pre-vocalic /l/ and /d/. Results suggest that there is in fact a labial gesture in the post-vocalic allophone of /l/, but not in the pre-vocalic allophone. These findings are consistent with a notion of gestural simplification as a possible explanation for substitutions and for language change. [Research supported by NSERC.]

  7. Single gaze gestures

    DEFF Research Database (Denmark)

    Møllenbach, Emilie; Lilholm, Martin; Gail, Alastair

    2010-01-01

    This paper examines gaze gestures and their applicability as a generic selection method for gaze-only controlled interfaces. The method explored here is the Single Gaze Gesture (SGG), i.e. gestures consisting of a single point-to-point eye movement. Horizontal and vertical, long and short SGGs were evaluated on two eye tracking devices (Tobii/QuickGlance (QG)). The main findings show that there is a significant difference in selection times between long and short SGGs, between vertical and horizontal selections, as well as between the different tracking systems.
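
    As a rough illustration of the SGG idea (not the method evaluated in the paper), the sketch below classifies a single point-to-point gaze movement into the four gesture types studied; the 200-pixel short/long boundary is an arbitrary assumption:

        def classify_sgg(start, end, short_max=200.0):
            """Classify one point-to-point gaze movement (screen pixels) as a
            short/long, horizontal/vertical Single Gaze Gesture."""
            dx, dy = end[0] - start[0], end[1] - start[1]
            axis = "horizontal" if abs(dx) >= abs(dy) else "vertical"
            length = "short" if max(abs(dx), abs(dy)) <= short_max else "long"
            return f"{length}-{axis}"

        print(classify_sgg((100, 300), (700, 320)))  # -> long-horizontal
        print(classify_sgg((400, 500), (420, 380)))  # -> short-vertical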

  8. Teaching moral reasoning through gesture.

    Science.gov (United States)

    Beaudoin-Ryan, Leanne; Goldin-Meadow, Susan

    2014-11-01

    Stem-cell research. Euthanasia. Personhood. Marriage equality. School shootings. Gun control. Death penalty. Ethical dilemmas regularly spark fierce debate about the underlying moral fabric of societies. How do we prepare today's children to be fully informed and thoughtful citizens, capable of moral and ethical decisions? Current approaches to moral education are controversial, requiring adults to serve as either direct ('top-down') or indirect ('bottom-up') conduits of information about morality. A common thread weaving throughout these two educational initiatives is the ability to take multiple perspectives - increases in perspective taking ability have been found to precede advances in moral reasoning. We propose gesture as a behavior uniquely situated to augment perspective taking ability. Requiring gesture during spatial tasks has been shown to catalyze the production of more sophisticated problem-solving strategies, allowing children to profit from instruction. Our data demonstrate that requiring gesture during moral reasoning tasks has similar effects, resulting in increased perspective taking ability subsequent to instruction. A video abstract of this article can be viewed at http://www.youtube.com/watch?v/gAcRIClU_GY. © 2014 John Wiley & Sons Ltd.

  9. Baby Sign but Not Spontaneous Gesture Predicts Later Vocabulary in Children with Down Syndrome

    Science.gov (United States)

    Özçaliskan, Seyda; Adamson, Lauren B.; Dimitrova, Nevena; Bailey, Jhonelle; Schmuck, Lauren

    2016-01-01

    Early spontaneous gesture, specifically deictic gesture, predicts subsequent vocabulary development in typically developing (TD) children. Here, we ask whether deictic gesture plays a similar role in predicting later vocabulary size in children with Down Syndrome (DS), who have been shown to have difficulties in speech production, but strengths in…

  10. Gestural Communication in Children with Autism Spectrum Disorders during Mother-Child Interaction

    Science.gov (United States)

    Mastrogiuseppe, Marilina; Capirci, Olga; Cuva, Simone; Venuti, Paola

    2015-01-01

    Children with autism spectrum disorders display atypical development of gesture production, and gesture impairment is one of the determining factors of autism spectrum disorder diagnosis. Despite the obvious importance of this issue for children with autism spectrum disorder, the literature on gestures in autism is scarce and contradictory. The…

  11. Early Gesture Predicts Language Delay in Children with Pre- Or Perinatal Brain Lesions

    Science.gov (United States)

    Sauer, Eve; Levine, Susan C.; Goldin-Meadow, Susan

    2010-01-01

    Does early gesture use predict later productive and receptive vocabulary in children with pre- or perinatal unilateral brain lesions (PL)? Eleven children with PL were categorized into 2 groups based on whether their gesture at 18 months was within or below the range of typically developing (TD) children. Children with PL whose gesture was within…

  12. [Verbal and gestural communication in interpersonal interaction with Alzheimer's disease patients].

    Science.gov (United States)

    Schiaratura, Loris Tamara; Di Pastena, Angela; Askevis-Leherpeux, Françoise; Clément, Sylvain

    2015-03-01

    Communication can be defined as a verbal and non-verbal exchange of thoughts and emotions. While the verbal communication deficit in Alzheimer's disease is well documented, very little is known about gestural communication, especially in interpersonal situations. This study examines the production of gestures and its relations with the verbal aspects of communication. Three patients suffering from moderately severe Alzheimer's disease were compared to three healthy adults. Each was given a series of pictures and asked to explain which one she preferred and why. The interpersonal interaction was video recorded. Analyses concerned verbal production (quantity and quality) and gestures. Gestures were either non-representational (i.e., gestures of small amplitude punctuating speech or accentuating some parts of the utterance) or representational (i.e., referring to the object of the speech). Representational gestures were coded as iconic (depicting concrete aspects), metaphoric (depicting abstract meaning) or deictic (pointing toward an object). In comparison with healthy participants, patients showed a decrease in the quantity and quality of speech. Nevertheless, their production of gestures was always present. This pattern is in line with the conception that gestures and speech depend on different communication systems and is inconsistent with the assumption of a parallel dissolution of gesture and speech. Moreover, analysis of the articulation between the verbal and gestural dimensions suggests that representational gestures may compensate for speech deficits. This underlines the important role of gestures in maintaining interpersonal communication.

  13. Towards a natural gesture interface: LDA-based gesture separability

    CERN Document Server

    Romaszewski, Michał; Głomb, Przemysław

    2011-01-01

    The goal of this paper is to analyse a method of validating a subset of gestures to be used as elements of an HCI interface. We investigate the applicability of LDA for dimensionality reduction of gesture data. A mutual separability analysis of a diverse dataset of 22 natural gestures, captured with two motion-capture devices, is provided. The Fisher criterion is used to produce measures of class separability and class overlap.
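
    As a rough illustration of this kind of analysis (a sketch under stated assumptions, not the authors' code), the snippet below projects gesture feature vectors with LDA and scores pairwise class separability with the Fisher criterion; the dataset here is a random placeholder.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def fisher_criterion(a, b):
        """Fisher criterion for two 1-D projected classes:
        (difference of means)^2 / (sum of variances)."""
        return (a.mean() - b.mean()) ** 2 / (a.var() + b.var())

    # Placeholder data: 200 samples of 60-D gesture features, 4 gesture classes.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 60))
    y = rng.integers(0, 4, size=200)

    # LDA reduces dimensionality to at most (n_classes - 1) components.
    lda = LinearDiscriminantAnalysis(n_components=3)
    Z = lda.fit_transform(X, y)

    # Pairwise separability of gesture classes along the first LDA axis.
    for i in range(4):
        for j in range(i + 1, 4):
            score = fisher_criterion(Z[y == i, 0], Z[y == j, 0])
            print(f"classes {i} vs {j}: Fisher score = {score:.3f}")
    ```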

  14. A unified framework for gesture recognition and spatiotemporal gesture segmentation.

    Science.gov (United States)

    Alon, Jonathan; Athitsos, Vassilis; Yuan, Quan; Sclaroff, Stan

    2009-09-01

    Within the context of hand gesture recognition, spatiotemporal gesture segmentation is the task of determining, in a video sequence, where the gesturing hand is located and when the gesture starts and ends. Existing gesture recognition methods typically assume either known spatial segmentation or known temporal segmentation, or both. This paper introduces a unified framework for simultaneously performing spatial segmentation, temporal segmentation, and recognition. In the proposed framework, information flows both bottom-up and top-down. A gesture can be recognized even when the hand location is highly ambiguous and when information about when the gesture begins and ends is unavailable. Thus, the method can be applied to continuous image streams where gestures are performed in front of moving, cluttered backgrounds. The proposed method consists of three novel contributions: a spatiotemporal matching algorithm that can accommodate multiple candidate hand detections in every frame, a classifier-based pruning framework that enables accurate and early rejection of poor matches to gesture models, and a subgesture reasoning algorithm that learns which gesture models can falsely match parts of other longer gestures. The performance of the approach is evaluated on two challenging applications: recognition of hand-signed digits gestured by users wearing short-sleeved shirts, in front of a cluttered background, and retrieval of occurrences of signs of interest in a video database containing continuous, unsegmented signing in American Sign Language (ASL).
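
    The paper's spatiotemporal matching algorithm is considerably more elaborate, but the toy sketch below conveys its central idea: a dynamic-programming alignment that keeps several candidate hand detections per frame and lets the lowest-cost chain of candidates emerge, so spatial and temporal ambiguity are resolved jointly with recognition. All names, costs, and data here are illustrative.

    ```python
    import numpy as np

    def match_with_candidates(model, candidates):
        """Toy spatiotemporal alignment.
        model: (M, 2) array of model hand positions.
        candidates: list of (K_t, 2) arrays, K_t candidate detections per frame.
        Returns the best cost of aligning the model against the sequence."""
        T, M = len(candidates), len(model)
        # D[t][k, m]: best cost ending at frame t, candidate k, model state m.
        D = [np.full((len(c), M), np.inf) for c in candidates]
        for k, c in enumerate(candidates[0]):
            D[0][k, 0] = np.linalg.norm(c - model[0])
        for t in range(1, T):
            prev = D[t - 1].min(axis=0)  # best over last frame's candidates
            for k, c in enumerate(candidates[t]):
                local = np.linalg.norm(c - model, axis=1)  # cost vs each state
                for m in range(M):
                    # stay on state m or advance from m-1 (simplified DTW moves)
                    best_prev = prev[m] if m == 0 else min(prev[m], prev[m - 1])
                    D[t][k, m] = local[m] + best_prev
        return D[T - 1][:, M - 1].min()

    # Example: a 5-state model matched against 8 frames with 3 candidates each.
    rng = np.random.default_rng(1)
    model = rng.uniform(0, 100, size=(5, 2))
    video = [rng.uniform(0, 100, size=(3, 2)) for _ in range(8)]
    print("alignment cost:", match_with_candidates(model, video))
    ```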

  15. Gesture Modelling for Linguistic Purposes

    CSIR Research Space (South Africa)

    Olivrin, GJ

    2007-05-01

    The study of sign languages attempts to create a coherent model that binds the expressive nature of signs conveyed in gestures to a linguistic framework. Gesture modelling offers an alternative that provides device independence, scalability...

  16. Bimanual Gesture Imitation in Alzheimer's Disease.

    Science.gov (United States)

    Sanin, Günter; Benke, Thomas

    2017-01-01

    Unimanual gesture production or imitation has often been studied in Alzheimer's disease (AD) during apraxia testing. In the present study, it was hypothesized that bimanual motor tasks may be a sensitive method to detect impairments of motor cognition in AD due to increased demands on the cognitive system. We investigated bimanual, meaningless gesture imitation in 45 AD outpatients, 38 subjects with mild cognitive impairment (MCI), and 50 normal controls (NC) attending a memory clinic. Participants performed neuropsychological background testing and three tasks: the Interlocking Finger Test (ILF), Imitation of Alternating Hand Movements (AHM), and Bimanual Rhythm Tapping (BRT). The tasks were short and easy to administer. Inter-rater reliability was high across all three tests. AD patients performed significantly poorer than NC and MCI participants; a deficit in imitating bimanual gestures was rarely found in MCI and NC participants. Sensitivity to detect AD ranged from 0.5 to 0.7, with specificity beyond 0.9. ROC analyses revealed good diagnostic accuracy (0.77 to 0.92). Impairment in imitating bimanual gestures was mainly predicted by diagnosis and disease severity. Our findings suggest that an impairment in imitating bimanual, meaningless gestures is a valid disease marker of mild to moderate AD and can easily be assessed in memory clinic settings. Based on our preliminary findings, it appears to be a separate impairment which can be distinguished from other cognitive deficits.

  18. Gestural Communication and Mating Tactics in Wild Chimpanzees.

    Science.gov (United States)

    Roberts, Anna Ilona; Roberts, Sam George Bradley

    2015-01-01

    The extent to which primates can flexibly adjust the production of gestural communication according to the presence and visual attention of the audience provides key insights into the social cognition underpinning gestural communication, such as an understanding of third party relationships. Gestures given in a mating context provide an ideal area for examining this flexibility, as frequently the interests of a male signaller, a female recipient and a rival male bystander conflict. Dominant chimpanzee males seek to monopolize matings, but subordinate males may use gestural communication flexibly to achieve matings despite their low rank. Here we show that the production of mating gestures in wild male East African chimpanzees (Pan troglodytes schweinfurthii) was influenced by a conflict of interest with females, which in turn was influenced by the presence and visual attention of rival males. When the conflict of interest was low (the rival male was present and looking away), chimpanzees used visual/tactile gestures over auditory gestures. However, when the conflict of interest was high (the rival male was absent, or was present and looking at the signaller), chimpanzees used auditory gestures over visual/tactile gestures. Further, the production of mating gestures was more common when the number of oestrous and non-oestrous females in the party increased, when the female was visually perceptive and when there was no wind. Females played an active role in mating behaviour, approaching for copulations more often when the number of oestrous females in the party increased and when the rival male was absent, or was present and looking away. Examining how social and ecological factors affect mating tactics in primates may thus contribute to understanding the previously unexplained reproductive success of subordinate male chimpanzees.

  19. Eye-based head gestures

    DEFF Research Database (Denmark)

    Mardanbegi, Diako; Witzner Hansen, Dan; Pederson, Thomas

    2012-01-01

    A novel method for video-based head gesture recognition using eye information by an eye tracker has been proposed. The method uses a combination of gaze and eye movement to infer head gestures. Compared to other gesture-based methods a major advantage of the method is that the user keeps the gaze...

  20. Semantic Processing of Mathematical Gestures

    Science.gov (United States)

    Lim, Vanessa K.; Wilson, Anna J.; Hamm, Jeff P.; Phillips, Nicola; Iwabuchi, Sarina J.; Corballis, Michael C.; Arzarello, Ferdinando; Thomas, Michael O. J.

    2009-01-01

    Objective: To examine whether or not university mathematics students semantically process gestures depicting mathematical functions (mathematical gestures) similarly to the way they process action gestures and sentences. Semantic processing was indexed by the N400 effect. Results: The N400 effect elicited by words primed with mathematical gestures…

  1. Early Vocabulary and Gestures in Estonian Children

    Science.gov (United States)

    Schults, Astra; Tulviste, Tiia; Konstabel, Kenn

    2012-01-01

    Parents of 592 children between the age of 0 ; 8 and 1 ; 4 completed the Estonian adaptation of the MacArthur-Bates Communicative Development Inventory (ECDI Infant Form). The relationships between comprehension and production of different categories of words and gestures were examined. According to the results of regression modelling the…

  2. Nonverbal social communication and gesture control in schizophrenia.

    Science.gov (United States)

    Walther, Sebastian; Stegmayer, Katharina; Sulzbacher, Jeanne; Vanbellingen, Tim; Müri, René; Strik, Werner; Bohlhalter, Stephan

    2015-03-01

    Schizophrenia patients are severely impaired in nonverbal communication, including social perception and gesture production. However, the impact of nonverbal social perception on gestural behavior remains unknown, as is the contribution of negative symptoms, working memory, and abnormal motor behavior. Thus, the study tested whether poor nonverbal social perception was related to impaired gesture performance, gestural knowledge, or motor abnormalities. Forty-six patients with schizophrenia (80%), schizophreniform (15%), or schizoaffective disorder (5%) and 44 healthy controls matched for age, gender, and education were included. Participants completed 4 tasks on nonverbal communication including nonverbal social perception, gesture performance, gesture recognition, and tool use. In addition, they underwent comprehensive clinical and motor assessments. Patients presented impaired nonverbal communication in all tasks compared with controls. Furthermore, in contrast to controls, performance in patients was highly correlated between tasks, not explained by supramodal cognitive deficits such as working memory. Schizophrenia patients with impaired gesture performance also demonstrated poor nonverbal social perception, gestural knowledge, and tool use. Importantly, motor/frontal abnormalities negatively mediated the strong association between nonverbal social perception and gesture performance. The factors negative symptoms and antipsychotic dosage were unrelated to the nonverbal tasks. The study confirmed a generalized nonverbal communication deficit in schizophrenia. Specifically, the findings suggested that nonverbal social perception in schizophrenia has a relevant impact on gestural impairment beyond the negative influence of motor/frontal abnormalities. © The Author 2015. Published by Oxford University Press on behalf of the Maryland Psychiatric Research Center. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  3. Gestures Enhance Foreign Language Learning

    Directory of Open Access Journals (Sweden)

    Manuela Macedonia

    2012-11-01

    Language and gesture are highly interdependent systems that reciprocally influence each other. For example, performing a gesture when learning a word or a phrase enhances its retrieval compared to pure verbal learning. Although the enhancing effects of co-speech gestures on memory are known to be robust, the underlying neural mechanisms are still unclear. Here, we summarize the results of behavioral and neuroscientific studies. They indicate that the neural representation of words consists of complex multimodal networks connecting perception and motor acts that occur during learning. In this context, gestures can reinforce the sensorimotor representation of a word or a phrase, making it resistant to decay. Gestures can also favor the embodiment of abstract words by creating a sensorimotor representation from scratch. Thus, we propose the use of gesture as a facilitating educational tool that integrates body and mind.

  4. Pantomimic gestures for human-robot interaction

    CSIR Research Space (South Africa)

    Burke, Michael G

    2015-10-01

    This work introduces a pantomimic gesture interface, which classifies human hand gestures using...

  5. Method for gesture based modeling

    DEFF Research Database (Denmark)

    2006-01-01

    A computer program based method is described for creating models using gestures. On an input device, such as an electronic whiteboard, a user draws a gesture which is recognized by a computer program and interpreted relative to a predetermined meta-model. Based on the interpretation, an algorithm is assigned to the gesture drawn by the user. The executed algorithm may, for example, consist in creating a new model element, modifying an existing model element, or deleting an existing model element.

  6. Operational Gesture Segmentation and Recognition

    Institute of Scientific and Technical Information of China (English)

    马赓宇; 林学訚

    2003-01-01

    Gesture analysis by computer is an important part of the human computer interface (HCI), and a gesture analysis method was developed using a skin-color-based method to extract the area representing the hand in a single image, with a distribution feature measurement designed to describe the hand shape in the images. A hidden Markov model (HMM) based method was used to analyze the temporal variation and segmentation of continuous operational gestures. Furthermore, a transition HMM was used to represent the period between gestures, so the method could segment continuous gestures and eliminate non-standard gestures. The system can analyze 2 frames per second, which is sufficient for real time analysis.
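
    As an illustration of this segmentation scheme (gesture states chained through a transition/rest state), here is a minimal Viterbi decoder over a hand-built state graph; the probabilities and per-frame likelihoods are invented for the example and are not the paper's model.

    ```python
    import numpy as np

    def viterbi(log_A, log_B, log_pi):
        """Most likely state path. log_A: (S, S) transition log-probs,
        log_B: (T, S) per-frame state log-likelihoods, log_pi: (S,) priors."""
        T, S = log_B.shape
        delta = log_pi + log_B[0]
        back = np.zeros((T, S), dtype=int)
        for t in range(1, T):
            scores = delta[:, None] + log_A      # (from-state, to-state)
            back[t] = scores.argmax(axis=0)
            delta = scores.max(axis=0) + log_B[t]
        path = [int(delta.argmax())]
        for t in range(T - 1, 0, -1):            # backtrack
            path.append(int(back[t, path[-1]]))
        return path[::-1]

    # Invented 3-state graph: 0 = transition/rest, 1 = gesture A, 2 = gesture B.
    A = np.array([[0.8, 0.1, 0.1],
                  [0.2, 0.8, 0.0],
                  [0.2, 0.0, 0.8]])
    pi = np.array([1.0, 0.0, 0.0])
    # Fake per-frame likelihoods: rest, then gesture A, then rest again.
    B = np.array([[0.9, 0.05, 0.05]] * 3 + [[0.1, 0.8, 0.1]] * 4
                 + [[0.9, 0.05, 0.05]] * 3)
    log_A = np.log(np.maximum(A, 1e-12))         # guard against log(0)
    log_pi = np.log(np.maximum(pi, 1e-12))
    print(viterbi(log_A, np.log(B), log_pi))     # frames labelled by segment
    ```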

  7. The development of co-speech gesture and its semantic integration with speech in 6- to 12-year-old children with autism spectrum disorders.

    Science.gov (United States)

    So, Wing-Chee; Wong, Miranda Kit-Yi; Lui, Ming; Yip, Virginia

    2015-11-01

    Previous work leaves open the question of whether children with autism spectrum disorders aged 6-12 years have delay in producing gestures compared to their typically developing peers. This study examined gestural production among school-aged children in a naturalistic context and how their gestures are semantically related to the accompanying speech. Delay in gestural production was found in children with autism spectrum disorders through their middle to late childhood. Compared to their typically developing counterparts, children with autism spectrum disorders gestured less often and used fewer types of gestures, in particular markers, which carry culture-specific meaning. Typically developing children's gestural production was related to language and cognitive skills, but among children with autism spectrum disorders, gestural production was more strongly related to the severity of socio-communicative impairment. Gesture impairment also included the failure to integrate speech with gesture: in particular, supplementary gestures are absent in children with autism spectrum disorders. The findings extend our understanding of gestural production in school-aged children with autism spectrum disorders during spontaneous interaction. The results can help guide new therapies for gestural production for children with autism spectrum disorders in middle and late childhood. © The Author(s) 2014.

  8. Effects of Age and Language on Co-Speech Gesture Production: An Investigation of French, American, and Italian Children's Narratives

    Science.gov (United States)

    Colletta, Jean-Marc; Guidetti, Michele; Capirci, Olga; Cristilli, Carla; Demir, Ozlem Ece; Kunene-Nicolas, Ramona N.; Levine, Susan

    2015-01-01

    The aim of this paper is to compare speech and co-speech gestures observed during a narrative retelling task in five- and ten-year-old children from three different linguistic groups, French, American, and Italian, in order to better understand the role of age and language in the development of multimodal monologue discourse abilities. We asked 98…

  9. Quantifying the Use of Gestures in Autism Spectrum Disorder

    DEFF Research Database (Denmark)

    Lambrechts, Anna; Yarrow, K.; Maras, Katie

    Background: Autism Spectrum Disorder (ASD) is characterized by difficulties in communication and social interaction. In the absence of a biomarker, a diagnosis of ASD is reached in settings such as the ADOS (Lord et al., 2000) by observing disturbances of social interaction such as abnormalities in the use of gestures or flow of conversation. These observations rely exclusively on clinical judgement and are thus prone to error and inconsistency across contexts and clinicians. While studies in children show that co-speech gestures are fewer (e.g. Wetherby et al., 1998) […], it has been suggested that abnormal temporal processes contribute to impaired social skills in ASD (Allman, 2011). Objectives: (1) quantify the production of gestures in ASD in naturally occurring language; (2) characterise the temporal dynamics of speech and gesture coordination in ASD using two acoustic indices: pitch and volume […]

  10. Methodological reflections on gesture analysis in second language acquisition and bilingualism research

    OpenAIRE

    Gullberg, M

    2010-01-01

    Gestures, the symbolic movements speakers perform while they speak, form a closely inter-connected system with speech where gestures serve both addressee-directed ('communicative') and speaker-directed ('internal') functions. This paper aims (1) to show that a combined analysis of gesture and speech offers new ways to address theoretical issues in SLA and bilingualism studies, probing SLA and bilingualism as product and process; and (2) to outline some methodological concerns and desiderata…

  11. Gesturing Makes Memories that Last

    Science.gov (United States)

    Cook, Susan Wagner; Yip, Terina KuangYi; Goldin-Meadow, Susan

    2010-01-01

    When people are asked to perform actions, they remember those actions better than if they are asked to talk about the same actions. But when people talk, they often gesture with their hands, thus adding an action component to talking. The question we asked in this study was whether producing gesture along with speech makes the information encoded…

  12. Designing Gestural Interfaces Touchscreens and Interactive Devices

    CERN Document Server

    Saffer, Dan

    2008-01-01

    If you want to get started in the new era of interaction design, this is the reference you need. Packed with informative illustrations and photos, Designing Gestural Interfaces provides you with essential information about kinesiology, sensors, ergonomics, physical computing, touchscreen technology, and new interface patterns -- information you need to augment your existing skills in "traditional" websites, software, or product development. This book will help you enter this new world of possibilities.

  13. Early deictic but not other gestures predict later vocabulary in both typical development and autism.

    Science.gov (United States)

    Özçalışkan, Şeyda; Adamson, Lauren B; Dimitrova, Nevena

    2016-08-01

    Research with typically developing children suggests a strong positive relation between early gesture use and subsequent vocabulary development. In this study, we ask whether gesture production plays a similar role for children with autism spectrum disorder. We observed 23 18-month-old typically developing children and 23 30-month-old children with autism spectrum disorder interact with their caregivers (Communication Play Protocol) and coded types of gestures children produced (deictic, give, conventional, and iconic) in two communicative contexts (commenting and requesting). One year later, we assessed children's expressive vocabulary, using the Expressive Vocabulary Test. Children with autism spectrum disorder showed significant deficits in gesture production, particularly in deictic gestures (i.e. gestures that indicate objects by pointing at them or by holding them up). Importantly, deictic gestures-but not other gestures-predicted children's vocabulary 1 year later regardless of communicative context, a pattern also found in typical development. We conclude that the production of deictic gestures serves as a stepping-stone for vocabulary development.

  14. Towards the creation of a Gesture Library

    Directory of Open Access Journals (Sweden)

    Bruno Galveia

    2015-06-01

    The evolution of technology has given rise to new possibilities in the so-called Natural User Interfaces research area. Among other initiatives, several researchers are working with existing sensors to improve support for gesture languages. This article tackles the recognition of gestures, using the Kinect sensor, in order to create a gesture library and to support subsequent gesture recognition processes.

  15. Gestures and Insight in Advanced Mathematical Thinking

    Science.gov (United States)

    Yoon, Caroline; Thomas, Michael O. J.; Dreyfus, Tommy

    2011-01-01

    What role do gestures play in advanced mathematical thinking? We argue that the role of gestures goes beyond merely communicating thought and supporting understanding--in some cases, gestures can help generate new mathematical insights. Gestures feature prominently in a case study of two participants working on a sequence of calculus activities.…

  17. Aphasia in a user of British Sign Language: Dissociation between sign and gesture.

    Science.gov (United States)

    Marshall, Jane; Atkinson, Jo; Smulovitch, Elaine; Thacker, Alice; Woll, Bencie

    2004-07-01

    This paper reports a single case investigation of "Charles", a Deaf man with sign language aphasia following a left CVA. Anomia, or a deficit in sign retrieval, was a prominent feature of his aphasia, and this showed many of the well-documented characteristics of speech anomia. For example, sign retrieval was sensitive to familiarity, it could be cued, and there were both semantic and phonological errors. Like a previous case in the literature (Corina, Poizner, Bellugi, Feinberg, Dowd, & O'Grady-Batch, 1992), Charles demonstrated a striking dissociation between sign and gesture, since his gesture production was relatively intact. This dissociation was impervious to the iconicity of signs. So, Charles' sign production showed no effect of iconicity, and gesture production was superior to sign production even when the forms of the signs and gestures were similar. The implications of these findings for models of sign and gesture production are discussed.

  18. Hand Gesture Recognition: A Literature Review

    OpenAIRE

    Rafiqul Zaman Khan; Noor Adnan Ibraheem

    2012-01-01

    Hand gesture recognition systems have received great attention in recent years because of their manifold applications and the ability to interact with machines efficiently through human-computer interaction. In this paper a survey of recent hand gesture recognition systems is presented. Key issues of hand gesture recognition systems are presented, along with the challenges such systems face. Methods of recent posture and gesture recognition systems are reviewed as well. Summary of res…

  19. Corner Detection of Hand Gesture

    Directory of Open Access Journals (Sweden)

    Lili Zhang

    2012-12-01

    This paper studies methods of corner detection for hand gestures, and mainly introduces the orthogonal three-direction chain code (3OT) and its use in corner detection of hand gestures. The study covers four aspects: the techniques used in corner detection, the Freeman chain code, the main idea of 3OT, and the process of corner detection with 3OT; experiments on corner detectors applied to hand gesture images of the 26 letters of American Sign Language are also described in detail. Experimental results show that 3OT performs well, with an exact corner detection rate and the fewest false corners.
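
    3OT itself encodes direction changes relative to reference directions; as a simplified stand-in (not the paper's algorithm), the sketch below computes a Freeman chain code for a closed contour and flags points where the coded direction turns sharply, which is the basic mechanism this family of corner detectors shares. The contour and the turn threshold are illustrative.

    ```python
    import numpy as np

    # Freeman 8-direction offsets, indexed 0..7 counter-clockwise.
    DIRS = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]

    def chain_code(contour):
        """Freeman chain code of a closed contour given as (x, y) integer points."""
        codes = []
        for p, q in zip(contour, np.roll(contour, -1, axis=0)):
            step = (int(q[0] - p[0]), int(q[1] - p[1]))
            codes.append(DIRS.index(step))
        return codes

    def corners(contour, min_turn=2):
        """Indices where the chain-code direction changes by at least min_turn
        out of 8 directions, i.e. a turn of 90 degrees or more."""
        codes = chain_code(contour)
        found = []
        for i in range(len(codes)):
            turn = abs(codes[i] - codes[i - 1])
            turn = min(turn, 8 - turn)  # wrap-around distance between directions
            if turn >= min_turn:
                found.append(i)
        return found

    # A small square: its four 90-degree turns should be reported as corners.
    square = np.array([(0, 0), (1, 0), (2, 0), (2, 1), (2, 2),
                       (1, 2), (0, 2), (0, 1)])
    print(corners(square))  # -> [0, 2, 4, 6]
    ```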

  20. A cross-species study of gesture and its role in symbolic development: Implications for the gestural theory of language evolution

    Directory of Open Access Journals (Sweden)

    Kristen Gillespie-Lynch

    2013-06-01

    Using a naturalistic video database, we examined whether gestures scaffolded the symbolic development of a language-enculturated chimpanzee, a language-enculturated bonobo, and a human child during the second year of life. These three species constitute a complete clade: species possessing a common immediate ancestor. A basic finding was the functional and formal similarity of many gestures between chimpanzee, bonobo, and human child. The child's symbols were spoken words; the apes' symbols were lexigrams, noniconic visual signifiers. A developmental pattern in which gestural representation of a referent preceded symbolic representation of the same referent appeared in all three species (but was statistically significant only for the child). Nonetheless, across species, the ratio of symbol to gesture increased significantly with age. But even though their symbol production increased, the apes continued to communicate more frequently by gesture than by symbol. In contrast, by 15-18 months of age, the child used symbols more frequently than gestures. This ontogenetic sequence from gesture to symbol, present across the clade but more pronounced in child than ape, provides support for the role of gesture in language evolution. In all three species, the overwhelming majority of gestures were communicative (paired with eye-contact, vocalization, and/or persistence). However, vocalization was rare for the apes, but accompanied the majority of the child's communicative gestures. This finding suggests the co-evolution of speech and gesture after the evolutionary divergence of the hominid line. Multimodal expressions of communicative intent (e.g., vocalization plus persistence) were normative for the child, but less common for the apes. This finding suggests that multimodal expression of communicative intent was also strengthened after hominids diverged from apes.

  1. Effects of hand gestures on auditory learning of second-language vowel length contrasts.

    Science.gov (United States)

    Hirata, Yukari; Kelly, Spencer D; Huang, Jessica; Manansala, Michael

    2014-12-01

    Research has shown that hand gestures affect comprehension and production of speech at semantic, syntactic, and pragmatic levels for both native language and second language (L2). This study investigated a relatively less explored question: Do hand gestures influence auditory learning of an L2 at the segmental phonology level? To examine auditory learning of phonemic vowel length contrasts in Japanese, 88 native English-speaking participants took an auditory test before and after one of the following 4 types of training in which they (a) observed an instructor in a video speaking Japanese words while she made a syllabic-rhythm hand gesture, (b) produced this gesture with the instructor, (c) observed the instructor speaking those words while she made a moraic-rhythm hand gesture, or (d) produced the moraic-rhythm gesture with the instructor. All of the training types yielded similar auditory improvement in identifying vowel length contrast. However, observing the syllabic-rhythm hand gesture yielded the most balanced improvement between word-initial and word-final vowels and between slow and fast speaking rates. The overall effect of hand gesture on learning of segmental phonology is limited. Implications for theories of hand gesture are discussed in terms of the role it plays at different linguistic levels.

  3. The Changing Role of Gesture in Linguistic Development: A Developmental Trajectory and a Cross-Cultural Comparison between British and Finnish Children

    Science.gov (United States)

    Huttunen, K. H.; Pine, K. J.; Thurnham, A. J.; Khan, C.

    2013-01-01

    We studied how gesture use changes with culture, age and increased spoken language competence. A picture-naming task was presented to British (N = 80) and Finnish (N = 41) typically developing children aged 2-5 years. British children were found to gesture more than Finnish children and, in both cultures, gesture production decreased after the age…

  4. Paying Attention to Gesture When Students Talk Chemistry: Interactional Resources for Responsive Teaching

    Science.gov (United States)

    Flood, Virginia J.; Amar, Francois G.; Nemirovsky, Ricardo; Harrer, Benedikt W.; Bruce, Mitchell R. M.; Wittmann, Michael C.

    2015-01-01

    When students share and explore chemistry ideas with others, they use gestures and their bodies to perform their understanding. As a publicly visible, spatio-dynamic medium of expression, gestures and the body provide productive resources for imagining the submicroscopic, three-dimensional, and dynamic phenomena of chemistry together. In this…

  5. Gesture and Symbolic Representation in Italian and English-Speaking Canadian 2-Year-Olds

    Science.gov (United States)

    Marentette, Paula; Pettenati, Paola; Bello, Arianna; Volterra, Virginia

    2016-01-01

    Analyses of elicited pantomime, primarily of English-speaking children, show that preschool-aged children are more likely to symbolically represent an object with gestures depicting an object's form rather than its function. In contrast, anecdotal reports of spontaneous gesture production in younger children suggest that children use multiple…

  6. What We Say and How We Do: Action, Gesture, and Language in Proving

    Science.gov (United States)

    Williams-Pierce, Caroline; Pier, Elizabeth L.; Walkington, Candace; Boncoddo, Rebecca; Clinton, Virginia; Alibali, Martha W.; Nathan, Mitchell J.

    2017-01-01

    In this Brief Report, we share the main findings from our line of research into embodied cognition and proof activities. First, attending to students' gestures during proving activities can reveal aspects of mathematics thinking not apparent in their speech, and analyzing gestures after proof production can contribute significantly to our…

  7. Gesture recognition on smart cameras

    Science.gov (United States)

    Dziri, Aziz; Chevobbe, Stephane; Darouich, Mehdi

    2013-02-01

    Gesture recognition is a feature in human-machine interaction that allows more natural interaction without the use of complex devices. For this reason, several methods of gesture recognition have been developed in recent years. However, most real-time methods are designed to operate on a Personal Computer with high computing resources and memory. In this paper, we analyze relevant methods found in the literature in order to investigate the ability of smart cameras to execute gesture recognition algorithms. We elaborate two hand gesture recognition pipelines. The first method is based on invariant moments extraction and the second on fingertip detection. The hand detection method used for both pipelines is based on skin color segmentation. The results obtained show that the un-optimized versions of the invariant moments method and the fingertip detection method can reach 10 fps on an embedded processor and use about 200 kB of memory.
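
    The following is not the paper's implementation, but a compact OpenCV sketch of the first pipeline's ingredients: skin-color segmentation, largest-contour extraction, and Hu invariant moments as a shape descriptor. The YCrCb thresholds are common rule-of-thumb values, not the authors' settings.

    ```python
    import cv2
    import numpy as np

    def hand_hu_moments(bgr):
        """Segment skin pixels in YCrCb space, keep the largest blob,
        and return its 7 Hu invariant moments (log-scaled), or None."""
        ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
        # Rule-of-thumb skin range in Cr/Cb; tune per camera and lighting.
        mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        hand = max(contours, key=cv2.contourArea)
        hu = cv2.HuMoments(cv2.moments(hand)).flatten()
        # Log scaling makes the seven moments comparable in magnitude.
        return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

    # A gesture could then be classified by nearest neighbour over stored
    # Hu-moment templates: np.argmin([np.linalg.norm(f - t) for t in templates]).
    frame = cv2.imread("frame.png")  # hypothetical input frame
    if frame is not None:
        print(hand_hu_moments(frame))
    ```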

  8. Gesture in the developing brain.

    Science.gov (United States)

    Dick, Anthony Steven; Goldin-Meadow, Susan; Solodkin, Ana; Small, Steven L

    2012-03-01

    Speakers convey meaning not only through words, but also through gestures. Although children are exposed to co-speech gestures from birth, we do not know how the developing brain comes to connect meaning conveyed in gesture with speech. We used functional magnetic resonance imaging (fMRI) to address this question and scanned 8- to 11-year-old children and adults listening to stories accompanied by hand movements, either meaningful co-speech gestures or meaningless self-adaptors. When listening to stories accompanied by both types of hand movement, both children and adults recruited inferior frontal, inferior parietal, and posterior temporal brain regions known to be involved in processing language not accompanied by hand movements. There were, however, age-related differences in activity in posterior superior temporal sulcus (STSp), inferior frontal gyrus, pars triangularis (IFGTr), and posterior middle temporal gyrus (MTGp) regions previously implicated in processing gesture. Both children and adults showed sensitivity to the meaning of hand movements in IFGTr and MTGp, but in different ways. Finally, we found that hand movement meaning modulates interactions between STSp and other posterior temporal and inferior parietal regions for adults, but not for children. These results shed light on the developing neural substrate for understanding meaning contributed by co-speech gesture.

  9. Kazakh Traditional Dance Gesture Recognition

    Science.gov (United States)

    Nussipbekov, A. K.; Amirgaliyev, E. N.; Hahn, Minsoo

    2014-04-01

    Full body gesture recognition is an important and interdisciplinary research field which is widely used in many application spheres, including dance gesture recognition. The rapid growth of technology in recent years has brought a lot of contributions to this domain. However, it is still a challenging task. In this paper we implement Kazakh traditional dance gesture recognition. We use a Microsoft Kinect camera to obtain human skeleton and depth information. Then we apply a tree-structured Bayesian network and an Expectation Maximization algorithm with K-means clustering to calculate conditional linear Gaussians for classifying poses. Finally, we use a Hidden Markov Model to detect dance gestures. Our main contribution is that we extend the Kinect skeleton by adding headwear as a new skeleton joint which is calculated from the depth image. This novelty allows us to significantly improve the accuracy of head gesture recognition of a dancer, which in turn plays a considerable role in whole body gesture recognition. Experimental results show the efficiency of the proposed method and that its performance is comparable to that of state-of-the-art systems.

  10. Crossover learning of gestures in two ideomotor apraxia patients: A single case experimental design study.

    Science.gov (United States)

    Shimizu, Daisuke; Tanemura, Rumi

    2017-06-01

    Crossover learning may aid rehabilitation in patients with neurological disorders. Ideomotor apraxia (IMA) is a common sequela of left-brain damage that comprises a deficit in the ability to perform gestures to verbal commands or by imitation. This study elucidated whether crossover learning occurred in two post-stroke IMA patients without motor paralysis after gesture training approximately 2 months after stroke onset. We quantitatively analysed the therapeutic intervention history and investigated whether revised action occurred during gesture production. The therapeutic intervention examined how to improve and generalise the ability to produce gestures. This study used an alternating-treatments single-subject design, and the intervention method was errorless learning. Results indicated crossover learning in both patients. Qualitative analysis indicated that revised action occurred during the gesture-production process in one patient and that there were two types of post-revision gestures: correct and incorrect. We also discovered that even when a comparably short time had elapsed since stroke onset, generalisation was difficult. Information transfer between the left and right hemispheres of the brain via commissural fibres is important in crossover learning. In conclusion, improvements in gesture-production skill should be made with reference to the left cerebral hemisphere disconnection hypothesis.

  11. Spontaneous gesture and spatial language: Evidence from focal brain injury.

    Science.gov (United States)

    Göksun, Tilbe; Lehet, Matthew; Malykhina, Katsiaryna; Chatterjee, Anjan

    2015-11-01

    People often use spontaneous gestures when communicating spatial information. We investigated focal brain-injured individuals to test the hypotheses that (1) naming motion event components of manner and path (represented by verbs and prepositions in English) is impaired selectively, and (2) gestures compensate for impaired naming. Patients with left or right hemisphere damage (LHD or RHD) and elderly control participants were asked to describe motion events (e.g., running across) depicted in brief videos. Damage to the left posterior middle frontal gyrus, left inferior frontal gyrus, and left anterior superior temporal gyrus (aSTG) produced impairments in naming paths of motion; lesions to the left caudate and adjacent white matter produced impairments in naming manners of motion. While the frequency of spontaneous gestures was low, lesions to the left aSTG significantly correlated with greater production of path gestures. These results suggest that the production of prepositions and verbs can be separately impaired and that gesture production compensates for naming impairments when damage involves the left aSTG.

  12. The effects of learning American Sign Language on co-speech gesture

    Science.gov (United States)

    Casey, Shannon; Emmorey, Karen; Larrabee, Heather

    2013-01-01

    Given that the linguistic articulators for sign language are also used to produce co-speech gesture, we examined whether one year of academic instruction in American Sign Language (ASL) impacts the rate and nature of gestures produced when speaking English. A survey study revealed that 75% of ASL learners (N = 95), but only 14% of Romance language learners (N = 203), felt that they gestured more after one year of language instruction. A longitudinal study confirmed this perception. Twenty-one ASL learners and 20 Romance language learners (French, Italian, Spanish) were filmed re-telling a cartoon story before and after one academic year of language instruction. Only the ASL learners exhibited an increase in gesture rate, an increase in the production of iconic gestures, and an increase in the number of handshape types exploited in co-speech gesture. Five ASL students also produced at least one ASL sign when re-telling the cartoon. We suggest that learning ASL may (i) lower the neural threshold for co-speech gesture production, (ii) pose a unique challenge for language control, and (iii) have the potential to improve cognitive processes that are linked to gesture. PMID:23335853

  13. Gestural stability in vowels

    Science.gov (United States)

    Purnell, Thomas

    2004-05-01

    In accordance with proper perception of linguistic sound units, past research has demonstrated some degree of acoustic and physiological stability. In contrast, articulatory stability has been thought to be inconsistent because articulations may vary so long as the vocal tract area function results in appropriate formant structure [Atal et al., J. Acoust. Soc. Am. 63, 1535-1555 (1978)]. However, if the area function for the constriction and its anterior region can maintain acoustic stability, articulatory stability should be observed in the relational behavior of four tongue pellets used in x-ray microbeam data. Previous work examined normalized pellet data in order to arrive at an average posture for each vowel [Hashi et al., J. Acoust. Soc. Am. 104, 2426-2437 (1998)]. But by assuming static (average) gestures, the research fell short of a correct postural characterization. This study of tongue pellet speed and normalized pellet displacement of front vowels spoken by ten microbeam database subjects reports that the tongue tip pellet speed maxima identify vowel edges (end of vowel onset, beginning of offset) while displacement of the three anterior pellets identify changes in formant structure (e.g., two stable regions in the Northern Cities English front low vowel).

  14. Gestures: Their Role in Teaching and Learning.

    Science.gov (United States)

    Roth, Wolff-Michael

    2001-01-01

    Reviews existing literature on gestures and teaching in anthropology, linguistics, psychology, and education and, in the context of several concrete analyses of gesture use, articulates some focal questions relevant to educational research on knowing, learning, and teaching. (SLD)

  15. Gesture Recognition Technology: A Review

    Directory of Open Access Journals (Sweden)

    Pallavi Halarnkar

    2012-11-01

    Gesture recognition technology has evolved greatly over the years. The past saw contemporary human-computer interface techniques and their drawbacks, which limit the speed and naturalness of the human brain and body. As a result, gesture recognition technology has developed since the early 1900s with a view to achieving ease of use and lessening the dependence on devices like keyboards, mice and touchscreens. Attempts have been made to combine natural gestures with the technology around us, enabling us to make optimum use of our body gestures and making our work faster and more human-friendly. The present has seen huge development in this field, ranging from devices like virtual keyboards and video game controllers to advanced security systems which work on face, hand and body recognition techniques. The goal is to make full use of the movements of the body, and of every angle made by the parts of the body, so that technology becomes human-friendly and understands natural human behavior and gestures. The future of this technology is very bright, with prototypes of amazing devices in research and development to equip the world with digital information at hand whenever and wherever required.

  16. Sensorimotor Control of Sound-Producing Gestures, Musical Gestures - Sound, Movement, and Meaning

    OpenAIRE

    Gibet, Sylvie

    2009-01-01

    In this chapter, we focus on sensorimotor models of sound-producing gestures. These models are studied from two different viewpoints, namely theories for motor control, and computer synthesis of avatars that produce human gesture. The theories aim to understand gesture on the basis of the underlying biomechanics, whereas the computer synthesis aims to understand entire gestures on the basis of sensorimotor control models. The emphasis of this chapter is on hand-arm gestures, from simple control...

  17. Does brain injury impair speech and gesture differently?

    Directory of Open Access Journals (Sweden)

    Tilbe Göksun

    2016-09-01

    People often use spontaneous gestures when talking about space, such as when giving directions. In a recent study from our lab, we examined whether focal brain-injured individuals' naming of motion event components of manner and path (represented in English by verbs and prepositions, respectively) is impaired selectively, and whether gestures compensate for impairment in speech. Left or right hemisphere damaged patients and elderly control participants were asked to describe motion events (e.g., walking around) depicted in brief videos. Results suggest that producing verbs and prepositions can be separately impaired in the left hemisphere and that gesture production compensates for naming impairments when damage involves specific areas in the left temporal cortex.

  18. Temporal Dynamics of Speech and Gesture in Autism Spectrum Disorder

    DEFF Research Database (Denmark)

    Lambrechts, Anna; Gaigg, Sebastian; Yarrow, Kielan

    2015-01-01

    Autism Spectrum Disorder (ASD) is characterized by difficulties in communication and social interaction. Abnormalities in the use of gestures or flow of conversation are frequently reported in clinical observations and contribute to a diagnosis of the disorder, but the mechanisms underlying these communication difficulties remain unclear. In the present study, we examine the hypothesis that the temporal dynamics of speech and gesture production is atypical in ASD and affects the overall quality of communication. The context of a previously published study of memory in ASD (Maras et al., 2013) provided the opportunity to examine video recordings of 17 ASD and 17 TD adults attempting to recall details of a standardized event they had participated in (a first aid scenario). Results indicated no group difference in the use and coordination of speech and gesture: both groups produced the same quantity of movement…

  19. Iconic and multi-stroke gesture recognition

    NARCIS (Netherlands)

    Willems, D.J.M.; Niels, R.M.J.; Gerven, M.A.J. van; Vuurpijl, L.G.

    2009-01-01

    Many handwritten gestures, characters, and symbols comprise multiple pendown strokes separated by penup strokes. In this paper, a large number of features known from the literature are explored for the recognition of such multi-stroke gestures. Features are computed from a global gesture shape…

  20. A 3D Hand-drawn Gesture Input Device Using Fuzzy ARTMAP-based Recognizer

    Directory of Open Access Journals (Sweden)

    Jing Yang

    2006-06-01

    In this paper, a novel input device based on 3D dynamic hand-drawn gestures is presented. It makes use of an inertial sensor and pattern recognition techniques. A Fuzzy ARTMAP-based recognizer is adopted to perform gesture recognition using 3-axis acceleration signals directly, instead of reproduced trajectories of gestures. The proposed method relaxes motion constraints when inputting a gesture, which is more convenient for the user. A prototype of the input device has been implemented in a remote controller to operate TVs. The recognition rate for 20 gestures is higher than 97%, which clearly shows the effectiveness and feasibility of the proposed input device. As a result, it is a powerful, flexible interface for modern electronic products.
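
    Fuzzy ARTMAP in full is fairly involved; the sketch below implements a heavily simplified fuzzy-ARTMAP-style classifier (complement coding, choice function, vigilance test, match tracking) to give a flavor of the recognizer the paper adopts. The acceleration features, class structure, and all parameters are placeholders.

    ```python
    import numpy as np

    class SimpleFuzzyARTMAP:
        """Minimal simplified-fuzzy-ARTMAP classifier (illustrative only)."""
        def __init__(self, rho=0.7, alpha=0.001, beta=1.0):
            self.rho, self.alpha, self.beta = rho, alpha, beta
            self.w, self.labels = [], []

        @staticmethod
        def _code(x):
            return np.concatenate([x, 1.0 - x])  # complement coding

        def train(self, x, label):
            I = self._code(x)
            rho = self.rho
            # Search categories in order of the fuzzy choice function.
            order = sorted(range(len(self.w)),
                           key=lambda j: -np.minimum(I, self.w[j]).sum()
                                         / (self.alpha + self.w[j].sum()))
            for j in order:
                match = np.minimum(I, self.w[j]).sum() / I.sum()
                if match < rho:
                    continue                      # fails vigilance test
                if self.labels[j] == label:       # resonance: update weights
                    self.w[j] = (self.beta * np.minimum(I, self.w[j])
                                 + (1 - self.beta) * self.w[j])
                    return
                rho = match + 1e-6                # match tracking: raise vigilance
            self.w.append(I.copy())               # no fit: create a new category
            self.labels.append(label)

        def predict(self, x):
            I = self._code(x)
            scores = [np.minimum(I, w).sum() / (self.alpha + w.sum())
                      for w in self.w]
            return self.labels[int(np.argmax(scores))]

    # Placeholder: four fake gesture classes of normalized acceleration features.
    rng = np.random.default_rng(2)
    net = SimpleFuzzyARTMAP()
    for _ in range(3):
        for g in range(4):                        # class centres 0.1/0.3/0.5/0.7
            net.train(np.clip(rng.normal(0.2 * g + 0.1, 0.02, 3), 0, 1), g)
    print(net.predict(np.full(3, 0.32)))          # likely class 1 (centre 0.3)
    ```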

  2. Gesture & Speech Based Appliance Control

    Directory of Open Access Journals (Sweden)

    Saylee Gharge

    2014-01-01

    This document explores the use of speech and gestures to control home appliances, aiming at the world's aging population and relieving them of their dependencies. The two approaches used are the MFCC approach for speech processing and the Identification of Characteristic Point Algorithm for gesture recognition. A barrier preventing wide adoption is that this audience can find controlling assistive technology difficult, as they are less dexterous and computer literate. The results aim to provide a more natural and intuitive interface to help bridge the gap between technology and elderly users.
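
    The MFCC front end is standard; assuming the librosa library (an assumption, not anything specified in the paper), a command word might be summarized and matched by nearest template along these lines. Averaging frames is a crude simplification; real systems align frames, e.g. with DTW.

    ```python
    import numpy as np
    import librosa

    def mfcc_features(path, sr=16000, n_mfcc=13):
        """Load an utterance and return its mean MFCC vector as a crude
        fixed-length summary of the command word."""
        y, _ = librosa.load(path, sr=sr)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
        return mfcc.mean(axis=1)

    def recognize(path, templates):
        """templates: dict mapping command word -> stored MFCC summary."""
        f = mfcc_features(path)
        return min(templates, key=lambda w: np.linalg.norm(f - templates[w]))

    # Hypothetical usage with pre-recorded command templates:
    # templates = {w: mfcc_features(f"cmds/{w}.wav")
    #              for w in ("lights_on", "lights_off")}
    # print(recognize("input.wav", templates))
    ```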

  3. Language abilities and gestural communication in a girl with bilateral perisylvian syndrome: a clinical and rehabilitative follow-up.

    Science.gov (United States)

    Molteni, Bruna; Sarti, Daniela; Airaghi, Gloria; Falcone, Chiara; Mantegazza, Giulia; Baranello, Giovanni; Riva, Federica; Saletti, Veronica; Paruta, Nicoletta; Riva, Daria

    2010-08-01

    We present the neuropsychological and linguistic follow-up of a girl with bilateral perisylvian polymicrogyria during 4 years of gestural and verbal speech therapy. Some researchers have suggested that children with bilateral perisylvian polymicrogyria mentally fail to reach the syntactic phase and do not acquire a productive morphology. This patient achieved a mean length of utterance in signs/gestures of 3.4, a syntactic phase of completion of the nuclear sentence and the use of morphological modifications. We discuss the link between gesture and language and formulate hypotheses on the role of gestural input on the reorganization of compensatory synaptic circuits.

  4. Hemisphere asymmetries for imitation of novel gestures.

    Science.gov (United States)

    Goldenberg, Georg; Strauss, Stefan

    2002-09-24

    Disorders of imitation are traditionally considered a symptom of apraxia, but defective imitation of gestures can contrast with intact performance of gestures to verbal command and vice versa. It thus seems worthwhile to explore the neural basis of imitation of gestures independently of other manifestations of apraxia. The aim was to assess the body-part specificity of disturbances of imitation for meaningless gestures of the fingers, hand, and foot. Imitation of meaningless gestures involving the fingers (internal hand configuration), hand (external hand position), or foot was examined in 30 patients with left brain damage (LBD), 20 patients with right brain damage (RBD), and 20 normal control subjects. LBD affected imitation of hand and foot gestures more than imitation of finger gestures, whereas RBD had the strongest effect on finger gestures and affected foot gestures more than hand gestures. These results can be accounted for by the assumption that body-part coding of gestures depends on left hemisphere function and that additional right hemisphere contributions are afforded when demands on perceptual discrimination rise.

  5. Hand gesture recognition based on surface electromyography.

    Science.gov (United States)

    Samadani, Ali-Akbar; Kulic, Dana

    2014-01-01

    Human hands are the most dexterous of human limbs and hand gestures play an important role in non-verbal communication. The underlying electromyograms associated with hand gestures provide a wealth of information on the basis of which varying hand gestures can be recognized. This paper develops an inter-individual hand gesture recognition model based on Hidden Markov models that receives surface electromyography (sEMG) signals as inputs and predicts a corresponding hand gesture. The developed recognition model was tested on a dataset of 10 different hand gestures performed by 25 subjects in leave-one-subject-out cross-validation, and an inter-individual recognition rate of 79% was achieved. The promising recognition rate demonstrates the efficacy of the proposed approach for discriminating between gesture-specific sEMG signals and could inform the design of sEMG-controlled prostheses and assistive devices.
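
    A quick sketch of the leave-one-subject-out protocol described here, using scikit-learn's LeaveOneGroupOut with a placeholder classifier standing in for the paper's HMM-based model; the data arrays are fabricated, so the printed accuracy is meaningless except as a demonstration of the protocol.

    ```python
    import numpy as np
    from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
    from sklearn.svm import SVC

    # Fabricated sEMG feature matrix: 25 subjects x 10 gestures x 4 repetitions.
    rng = np.random.default_rng(3)
    n_subj, n_gest, reps, n_feat = 25, 10, 4, 16
    X = rng.normal(size=(n_subj * n_gest * reps, n_feat))
    y = np.tile(np.repeat(np.arange(n_gest), reps), n_subj)  # gesture labels
    groups = np.repeat(np.arange(n_subj), n_gest * reps)     # subject ids

    # Each fold trains on 24 subjects and tests on the held-out one,
    # so the reported accuracy is inter-individual by construction.
    clf = SVC()  # stand-in for the paper's HMM-based recognizer
    scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
    print(f"mean LOSO accuracy: {scores.mean():.2f} over {len(scores)} subjects")
    ```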

  6. Silent gestures speak in aphasia

    NARCIS (Netherlands)

    van Nispen, Karin; van de Sandt-Koenderman, M.; Krahmer, Emiel

    2017-01-01

    Background & Aim As the result of brain damage, people with aphasia (PWA) have language difficulties (Goodglass, 1993). Consequently, their communication can be greatly affected. This raises the question of whether gestures could convey information missing in their speech. Although it is known that

  7. Gestural coupling and social cognition

    DEFF Research Database (Denmark)

    Michael, John; Krueger, Joel William

    2012-01-01

    of congenital bilateral facial paralysis-can be a fruitful source of insight for research exploring the relation between high-level cognition and low-level coupling. Lacking a capacity for facial expression, individuals with MS are deprived of a primary channel for gestural coupling. According to SI, they lack...

  8. Gesture Activated Mobile Edutainment (GAME)

    DEFF Research Database (Denmark)

    Rehm, Matthias; Leichtenstern, Karin; Plomer, Joerg

    2010-01-01

    An approach to intercultural training of nonverbal behavior is presented that draws from research on role-plays with virtual agents and ideas from situated learning. To this end, a mobile serious game is realized where the user acquires knowledge about German emblematic gestures and tries them out in role-plays with virtual agents. Gesture performance is evaluated making use of built-in acceleration sensors of smart phones. After an account of the theoretical background covering diverse areas like virtual agents, situated learning and intercultural training, the paper presents the GAME approach along with details on the gesture recognition and content authoring. By its experience-based role-plays with virtual characters, GAME brings together ideas from situated learning and intercultural training in an integrated approach and paves the way for new m-learning concepts.

  9. Gesture Interaction at a Distance

    NARCIS (Netherlands)

    Fikkert, F.W.

    2010-01-01

    The aim of this work is to explore, from a perspective of human behavior, which gestures are suited to control large display surfaces from a short distance away; why that is so; and, equally important, how such an interface can be made a reality. A well-known example of the type of interface that is

  10. Gestural Control of Wavefield Synthesis

    DEFF Research Database (Denmark)

    Grani, Francesco; Di Carlo, Diego; Portillo, Jorge Madrid

    2016-01-01

    WiiMote game controller to “throw” sounding objects towards them. The aim of this project was to create a gestural interface for a game based on auditory cues only, and to investigate how convolution reverberation can affect people’s perception of distance in a wavefield synthesis setup environment....

  11. RENDIMIENTO Y REACCIÓN A COLLETOTRICHUM LINDEMUATHIANUM EN CULTIVARES DE FRÍJOL VOLUBLE (PHASEOLUS VULGARIS L.) YIELD AND REACTION TO COLLETOTRICHUM LINDEMUATHIANUM IN CULTIVARS OF CLIMBING BEANS (PHASEOLUS VULGARIS L.)

    OpenAIRE

    Carolina Gallego G.; Gustavo Adolfo Ligarreto Moreno; Luz Nayibe Garzón Gutiérrez; Óscar Arturo Oliveros Garay; Linda Jeimmy Rincón Rivera

    2010-01-01

    Under the conditions of the Bogotá savanna (Colombia), 32 climbing bean cultivars were evaluated for yield components and for their reaction to a mixture of Colletotrichum lindemuthianum isolates from Boyacá and Cundinamarca. The genotypes that showed good yield performance and a field resistance reaction to the disease were D. Moreno and 3198. Those that expressed a resistance reaction to anthracnose were 3180, 3182, 3177 and G-2333....

  12. Intransitive limb gestures and apraxia following unilateral stroke.

    Science.gov (United States)

    Heath, M; Roy, E A; Black, S E; Westwood, D A

    2001-10-01

    Apraxia is the loss of the ability to perform learned, skilled movements correctly, and is frequently attributed to left hemisphere damage (Heilman & Rothi, 1985). Recent work (Dumont, Ska, & Schiavetto, 1999) has shown a dissociation between transitive (tool-based; e.g., hammering a nail) and intransitive (expressive/communicative; e.g., waving goodbye) actions; however, few group studies have specifically addressed apraxia for intransitive gestures. The present investigation examined the frequency and severity of praxis errors related to the production of intransitive gestures in left hemisphere (LHD) and right hemisphere (RHD) stroke patients in the context of Roy's (1996) model of limb praxis. A total of 119 consecutive stroke patients (LHD = 57, RHD = 62) and 20 healthy age-matched controls performed eight intransitive gestures to pantomime and imitation. Performance was quantified via a multi-dimensional error notation system, providing detail about specific elements of performance (e.g., location), and a composite score reflecting overall gestural accuracy. Analyses of pantomime and imitation performance revealed an equal percentage of apraxic patients in each stroke group, and the severity of apraxia in these groups was also equivalent. Further, analyses of the patterns of apraxia specified by Roy (1996) revealed that patients in each stroke group demonstrated selective impairments in pantomime (LHD = 38%, RHD = 42%) or imitation (LHD = 9%, RHD = 5%) conditions, whereas others demonstrated concurrent impairments (LHD = 30%, RHD = 22%), indicating that stroke to either hemisphere can selectively impair each stage in the production of an intransitive action.

  13. Automatic gesture analysis using constant affine velocity.

    Science.gov (United States)

    Cifuentes, Jenny; Boulanger, Pierre; Pham, Minh Tu; Moreau, Richard; Prieto, Flavio

    2014-01-01

    Human hand gesture recognition has been an important research topic widely studied around the world, as this field offers the ability to identify, recognize, and analyze human gestures in order to control devices or to interact with computer interfaces. In particular, in medical training, this approach is an important tool that can be used to obtain an objective evaluation of procedure performance. In this paper, obstetrical gestures acquired with a forceps were studied under the hypothesis that, like scribbling and drawing movements, they obey the one-sixth power law, an empirical relationship connecting path curvature, torsion, and Euclidean velocity. Our results show that obstetrical gestures have a constant affine velocity, which differs for each type of gesture; on this basis, affine velocity is proposed as an appropriate classification feature for hand gesture recognition.
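
    For reference, the one-sixth power law invoked above is commonly written as follows (this is the standard motor-control formulation; the notation is supplied here, not quoted from the paper):

    \[ v(t) = \gamma \, \kappa(t)^{-1/3} \, |\tau(t)|^{-1/6} \]

    where \(v\) is the Euclidean velocity along the path, \(\kappa\) the curvature, \(\tau\) the torsion, and \(\gamma\) a constant. Equivalently, the equi-affine velocity \(v_a(t) = v(t)\,\kappa(t)^{1/3}\,|\tau(t)|^{1/6} = \gamma\) stays constant, which is the property proposed as a classification feature.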

  14. Gestures modulate speech processing early in utterances.

    Science.gov (United States)

    Wu, Ying Choon; Coulson, Seana

    2010-05-12

    Electroencephalogram was recorded as healthy adults viewed short videos of spontaneous discourse in which a speaker used depictive gestures to complement information expressed through speech. Event-related potentials were computed time-locked to content words in the speech stream and to subsequent related and unrelated picture probes. Gestures modulated event-related potentials to content words co-timed with the first gesture in a discourse segment, relative to the same words presented with static freeze frames of the speaker. Effects were observed 200-550 ms after speech onset, a time interval associated with semantic processing. Gestures also increased sensitivity to picture probe relatedness. Effects of gestures on picture probe and spoken word analysis were inversely correlated, suggesting that gestures differentially impact verbal and image-based processes.

  15. Initial experiments with Multiple Musical Gestures

    DEFF Research Database (Denmark)

    Jensen, Kristoffer; Graugaard, Lars

    2005-01-01

    The classic orchestra has a diminishing role in society, while hard-disc recorded music plays a predominant role today. A simple-to-use pointer interface in 2D for producing music is presented as a means for playing in a social situation. The sounds of the music are produced by a low-level synthesizer, and the music is produced by simple gestures that are repeated easily. The gestures include left-to-right and right-to-left motion shapes for the spectral envelope and temporal envelope of the sounds, with optional backwards motion for the addition of noise; downward motion for note onset; and several other manipulation gestures. The initial position controls which parameter is being affected, the notes' intensity is controlled by the downward gesture speed, and a sequence is finalized instantly with one upward gesture. The synthesis employs a novel interface structure, the multiple musical gesture...

  16. Aspects of the Multiple Musical Gestures

    DEFF Research Database (Denmark)

    Jensen, Karl Kristoffer

    2006-01-01

    A simple-to-use pointer interface in 2D for producing music is presented as a means for real-time playing and sound generation. The music is produced by simple gestures that are repeated easily. The gestures include left-to-right and right-to-left motion shapes for the spectral envelope and temporal envelope of the sounds, with optional backwards motion for the addition of noise; downward motion for note onset; and several other manipulation gestures. The initial position controls which parameter is being affected, the notes' intensity is controlled by the downward gesture speed, and a sequence is finalized instantly with one upward gesture. Several synthesis methods are presented and the control mechanisms are mapped into the multiple musical gesture interface. This enables a number of performers to interact on the same interface, either by each playing the same musical instruments simultaneously...

  17. No metaphorical timeline in gesture and cognition among yucatec mayas.

    Science.gov (United States)

    Le Guen, Olivier; Balam, Lorena Ildefonsa Pool

    2012-01-01

    In numerous languages, space provides a productive domain for the expression of time. This paper examines how time-to-space mapping is realized in Yucatec Maya. At the linguistic level, Yucatec Maya has numerous resources to express deictic time, whereas the expression of sequential time is highly constrained. Specifically, in gesture, we do not find any metaphorically oriented timeline, but only an opposition between "current time" (mapped onto the "here" space) and "remote time" (mapped onto the "remote/distant" space). Additionally, past and future are not contrasted. Sequential or deictic time in language and gesture is not conceived as unfolding along a metaphorically oriented line (e.g., left-right or front-back) but as a succession of completed events not spatially organized. Interestingly, although Yucatec Maya speakers preferentially use a geocentric spatial frame of reference (FoR), especially visible in their use of gesture, time is not mapped onto a geocentric axis (e.g., east-west). We argue that, instead of providing a source for time mapping, the use of a spatial geocentric FoR in Yucatec Maya seems to inhibit it. The Yucatec Maya expression of time in language and gesture fits the more general cultural conception of time as cyclic. Experimental results confirmed, to some extent, this non-linear, non-directional conception of time in Yucatec Maya.

  19. Gesture Recognition Based Mouse Events

    Directory of Open Access Journals (Sweden)

    Rachit Puri

    2013-12-01

    Full Text Available This paper presents the manoeuvring of the mouse pointer and performs various mouse operations such as left click, right click, double click and drag using a gesture recognition technique. Recognizing gestures is a complex task which involves many aspects such as motion modeling, motion analysis, pattern recognition and machine learning. Keeping all the essential factors in mind, a system has been created which recognizes the movement of fingers and the various patterns formed by them. Color caps are worn on the fingers to distinguish them from background colors such as skin color. By recognizing these gestures, the various mouse events are performed. The application has been created in the MATLAB environment on the Windows 7 operating system.
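
    A hedged sketch of the same idea in Python (the paper's implementation is in MATLAB): segment a coloured finger cap, track its centroid, and move the cursor accordingly. The HSV threshold values, webcam index and cap colour are illustrative assumptions, not the paper's calibration.

    ```python
    # Sketch: track a coloured finger cap with OpenCV and drive the mouse pointer.
    import cv2
    import numpy as np
    import pyautogui

    LOWER = np.array([100, 120, 70])     # assumed HSV range for a blue finger cap
    UPPER = np.array([130, 255, 255])
    screen_w, screen_h = pyautogui.size()

    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, LOWER, UPPER)              # isolate the cap colour
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            m = cv2.moments(max(contours, key=cv2.contourArea))
            if m["m00"] > 0:                               # centroid of largest blob
                cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
                h, w = frame.shape[:2]
                pyautogui.moveTo(cx / w * screen_w, cy / h * screen_h)
        cv2.imshow("mask", mask)
        if cv2.waitKey(1) == 27:                           # Esc quits
            break
    cap.release()
    cv2.destroyAllWindows()
    ```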

  20. Gestures Towards the Digital Maypole

    Directory of Open Access Journals (Sweden)

    Christian McRea

    2005-01-01

    Full Text Available To paraphrase Blanchot: We are not learned; we are not ignorant. We have known joys. That is saying too little: We are alive, and this life gives us the greatest pleasure. The intensities afforded by mobile communication can be thought of as an extension of the styles and gestures already materialised by multiple maypole cultures, pre-digital community forms and the very clustered natures of speech and being. In his Critique of Judgment, Kant argues that the information selection process at the heart of communication is one of the fundamental activities of any aesthetically produced knowledge form. From this radial point, "Gestures Towards The Digital Maypole" begins the process of reorganising conceptions of modalities of communication around the absent centre and the affective realms that form through the movement of information-energy, like sugar in a hurricane.

  1. Nonsymbolic Gestural Interaction for Ambient Intelligence

    DEFF Research Database (Denmark)

    Rehm, Matthias

    2010-01-01

    the addressee with subtle clues about personality or cultural background. Gestures are an extremely rich source of communication-specific and contextual information for interactions in ambient intelligence environments. This chapter reviews the semantic layers of gestural interaction, focusing on the layer beyond communicative intent, and presents interface techniques to capture and analyze gestural input, taking into account nonstandard approaches such as acceleration analysis and the use of physiological sensors.

  2. Gesture analysis for physics education researchers

    Directory of Open Access Journals (Sweden)

    Rachel E. Scherr

    2008-01-01

    Full Text Available Systematic observations of student gestures can not only fill in gaps in students’ verbal expressions, but can also offer valuable information about student ideas, including their source, their novelty to the speaker, and their construction in real time. This paper provides a review of the research in gesture analysis that is most relevant to physics education researchers and illustrates gesture analysis for the purpose of better understanding student thinking about physics.

  3. Grounded Blends and Mathematical Gesture Spaces: Developing Mathematical Understandings via Gestures

    Science.gov (United States)

    Yoon, Caroline; Thomas, Michael O. J.; Dreyfus, Tommy

    2011-01-01

    This paper examines how a person's gesture space can become endowed with mathematical meaning associated with mathematical spaces and how the resulting mathematical gesture space can be used to communicate and interpret mathematical features of gestures. We use the theory of grounded blends to analyse a case study of two teachers who used gestures…

  4. Hand gestures mouse cursor control

    Directory of Open Access Journals (Sweden)

    Marian-Avram Vincze

    2014-05-01

    Full Text Available The paper describes the implementation of a human-computer interface for controlling the mouse cursor. The tests reveal that a low-cost web camera and some processing algorithms are quite enough to control the mouse cursor on a computer. Even though the system is influenced by the illuminance level on the plane of the hand, the current study may represent a starting point for further studies in the hand tracking and gesture recognition field.

  5. Smart system for gesture identification

    OpenAIRE

    Pérez Obiols, Eduard

    2014-01-01

    A new interface system for control and input of data for automotive applications will be developed. The technology will be based on capacitive sensors. This thesis project is centered on developing and integrating a contactless system based on automatic recognition of gestures to allow interaction between the car driver/passenger and selected car functions in the automotive environment.

  6. Dynamic Hand Gesture Recognition Using the Skeleton of the Hand

    OpenAIRE

    Vasile Buzuloiu; Patrick Lambert; Didier Coquin; Bogdan Ionescu

    2005-01-01

    This paper discusses the use of the computer vision in the interpretation of human gestures. Hand gestures would be an intuitive and ideal way of exchanging information with other people in a virtual space, guiding some robots to perform certain tasks in a hostile environment, or interacting with computers. Hand gestures can be divided into two main categories: static gestures and dynamic gestures. In this paper, a novel dynamic hand gesture recognition technique is proposed. It is based on ...

  7. Comparative Analysis of Hand Gesture Recognition Techniques

    Directory of Open Access Journals (Sweden)

    Arpana K. Patel

    2015-03-01

    Full Text Available During the past few years, human hand gestures for interaction with computing devices have continued to be an active area of research. In this paper, a survey of hand gesture recognition is provided. Hand gesture recognition comprises three stages: pre-processing, feature extraction or matching, and classification or recognition. Each stage can draw on different methods and techniques. This paper gives a short description of the different methods used for hand gesture recognition in existing systems, together with a comparative analysis of all methods including their benefits and drawbacks.
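
    The three stages listed above can be made concrete with a small scikit-image/scikit-learn pipeline. The choice of HOG features and an SVM classifier below is one illustrative combination among the many the survey compares, not a recommendation from the paper.

    ```python
    # Sketch: pre-processing -> feature extraction -> classification for hand images.
    import numpy as np
    from skimage.transform import resize
    from skimage.feature import hog
    from sklearn.svm import SVC

    def preprocess(img):
        """Stage 1: normalise size and intensity of a grayscale hand image."""
        img = resize(img, (64, 64))
        return (img - img.mean()) / (img.std() + 1e-8)

    def features(img):
        """Stage 2: histogram-of-oriented-gradients descriptor."""
        return hog(img, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2))

    def train(images, labels):
        """Stage 3: fit a support-vector classifier on the extracted features."""
        X = np.array([features(preprocess(im)) for im in images])
        return SVC(kernel="rbf").fit(X, labels)
    ```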

  8. Gesture facilitates the syntactic analysis of speech

    Directory of Open Access Journals (Sweden)

    Henning Holle

    2012-03-01

    Full Text Available Recent research suggests that the brain routinely binds together information from gesture and speech. However, most of this research focused on the integration of representational gestures with the semantic content of speech. Much less is known about how other aspects of gesture, such as emphasis, influence the interpretation of the syntactic relations in a spoken message. Here, we investigated whether beat gestures alter which syntactic structure is assigned to ambiguous spoken German sentences. The P600 component of the Event Related Brain Potential indicated that the more complex syntactic structure is easier to process when the speaker emphasizes the subject of a sentence with a beat. Thus, a simple flick of the hand can change our interpretation of who has been doing what to whom in a spoken sentence. We conclude that gestures and speech are an integrated system. Unlike previous studies, which have shown that the brain effortlessly integrates semantic information from gesture and speech, our study is the first to demonstrate that this integration also occurs for syntactic information. Moreover, the effect appears to be gesture-specific and was not found for other stimuli that draw attention to certain parts of speech, including prosodic emphasis, or a moving visual stimulus with the same trajectory as the gesture. This suggests that only visual emphasis produced with a communicative intention in mind (that is, beat gestures influences language comprehension, but not a simple visual movement lacking such an intention.

  10. A third-person perspective on co-speech action gestures in Parkinson's disease.

    Science.gov (United States)

    Humphries, Stacey; Holler, Judith; Crawford, Trevor J; Herrera, Elena; Poliakoff, Ellen

    2016-05-01

    A combination of impaired motor and cognitive function in Parkinson's disease (PD) can impact on language and communication, with patients exhibiting a particular difficulty processing action verbs. Co-speech gestures embody a link between action and language and contribute significantly to communication in healthy people. Here, we investigated how co-speech gestures depicting actions are affected in PD, in particular with respect to the visual perspective, or viewpoint, they depict. Gestures are closely related to mental imagery and motor simulations, but people with PD may be impaired in the way they simulate actions from a first-person perspective and may compensate for this by relying more on third-person visual features. We analysed the action-depicting gestures produced by mild-moderate PD patients and age-matched controls on an action description task and examined the relationship between gesture viewpoint, action naming, and performance on an action observation task (weight judgement). Healthy controls produced the majority of their action gestures from a first-person perspective, whereas PD patients produced a greater proportion of gestures from a third-person perspective. We propose that this reflects a compensatory reliance on third-person visual features in the simulation of actions in PD. Performance was also impaired in action naming and weight judgement, although this was unrelated to gesture viewpoint. Our findings provide a more comprehensive understanding of how action-language impairments in PD impact on action communication and on the cognitive underpinnings of this impairment, as well as elucidating the role of action simulation in gesture production.

  11. Gesture, sign and language: The coming of age of sign language and gesture studies.

    Science.gov (United States)

    Goldin-Meadow, Susan; Brentari, Diane

    2015-10-05

    How does sign language compare to gesture, on the one hand, and to spoken language on the other? At one time, sign was viewed as nothing more than a system of pictorial gestures with no linguistic structure. More recently, researchers have argued that sign is no different from spoken language with all of the same linguistic structures. The pendulum is currently swinging back toward the view that sign is gestural, or at least has gestural components. The goal of this review is to elucidate the relationships among sign language, gesture, and spoken language. We do so by taking a close look not only at how sign has been studied over the last 50 years, but also at how the spontaneous gestures that accompany speech have been studied. We come to the conclusion that signers gesture just as speakers do. Both produce imagistic gestures along with more categorical signs or words. Because, at the moment, it is difficult to tell where sign stops and where gesture begins, we suggest that sign should not be compared to speech alone, but should be compared to speech-plus-gesture. Although it might be easier (and, in some cases, preferable) to blur the distinction between sign and gesture, we argue that making a distinction between sign (or speech) and gesture is essential to predict certain types of learning, and allows us to understand the conditions under which gesture takes on properties of sign, and speech takes on properties of gesture. We end by calling for new technology that may help us better calibrate the borders between sign and gesture.

  12. Is the coupled control of hand and mouth postures precursor of reciprocal relations between gestures and words?

    Science.gov (United States)

    Gentilucci, Maurizio; Campione, Giovanna Cristina; De Stefani, Elisa; Innocenti, Alessandro

    2012-07-15

    We tested whether a system coupling hand postures related to gestures to the control of internal mouth articulators during the production of vowels exists, and whether it could be a precursor of a system relating hand/arm gestures to words. Participants produced unimanual and bimanual representational gestures expressing the meaning of LARGE or SMALL. Once the gesture was produced, in experiment 1 they pronounced the vowels "A" or "I", in experiment 2 the word "GRÀNDE" (large) or "PÌCCOLO" (small), and in experiment 3 the pseudo-words "SCRÀNTA" or "SBÌCCARA". Mouth and hand kinematics and voice spectra were recorded and analyzed. Unimanual gestures affected the voice spectra of the two vowels pronounced alone (experiment 1). Bimanual gestures, and both unimanual and bimanual gestures, affected the voice spectra of /a/ and /i/ included in the words (experiment 2) and pseudo-words (experiment 3), respectively. The results support the hypothesis that a system coupling hand gestures to vowel production exists. Moreover, they suggest the existence of a more general system relating gestures to words.
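
    As a hedged aside, one standard way to quantify vowel spectra of the kind analysed above is to estimate formant frequencies by linear predictive coding (LPC). The sketch below is a generic method, not the authors' analysis pipeline; the sound file and model order are assumptions.

    ```python
    # Sketch: rough formant estimation for a vowel via LPC root-finding.
    import numpy as np
    import librosa

    def formants(wav_path: str, order: int = 12):
        y, sr = librosa.load(wav_path, sr=10000)   # low rate keeps the formant band
        a = librosa.lpc(y, order=order)            # LPC polynomial coefficients
        roots = [r for r in np.roots(a) if np.imag(r) > 0]
        freqs = sorted(np.angle(roots) * sr / (2 * np.pi))
        return freqs[:3]                           # approximate F1-F3 in Hz

    # Example: an /a/ should show a higher F1 and lower F2 than an /i/.
    # print(formants("vowel_a.wav"))               # hypothetical recording
    ```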

  13. Nature and Specificity of Gestural Disorder in Children with Developmental Coordination Disorder: A Multiple Case Study

    Directory of Open Access Journals (Sweden)

    Orianne Costini

    2017-07-01

    Full Text Available Aim: Praxis assessment in children with developmental coordination disorder (DCD) is usually based on tests of adult apraxia, comparing across types of gestures and input modalities. However, the cognitive models of adult praxis processing are rarely used in a comprehensive and critical interpretation. These models generally involve two systems: a conceptual system and a production system. Heterogeneity of deficits is consistently reported in DCD, involving other cognitive skills such as executive or visual-perceptual and visuospatial functions. Surprisingly, little research has examined the impact of these functions on gestural production. Our study aimed at discussing the nature and specificity of the gestural deficit in DCD using a multiple case study approach. Method: Tasks were selected and adapted from protocols proposed in adult apraxia, in order to enable a comprehensive assessment of gestures. This included conceptual tasks (knowledge about tool functions and actions; recognition of gestures), representational gestures (transitive and intransitive), and non-representational gestures (imitation of meaningless postures). We carried out an additional assessment of constructional abilities and other cognitive domains (executive functions, visual-perceptual and visuospatial functions). Data from 27 patients diagnosed with DCD were collected. Neuropsychological profiles were classified using an inferential clinical analysis based on the modified t-test, by comparison with 100 typically developing children divided into five age groups (from 7 to 13 years old). Results: Among the 27 DCD patients, we first classified profiles characterized by impairment in tasks assessing visual-perceptual or visuospatial skills (n = 8). Patients with a weakness in executive functions (n = 6) were then identified, followed by those with impaired performance in conceptual knowledge tasks (n = 4). Among the nine remaining patients, six could be classified as having a visual

  14. Hand Gesture Recognition Using Ultrasonic Waves

    KAUST Repository

    AlSharif, Mohammed Hussain

    2016-04-01

    Gesturing is a natural way of communication between people and is used in our everyday conversations. Hand gesture recognition systems are used in many applications in a wide variety of fields, such as mobile phone applications, smart TVs, and video gaming. With the advances in human-computer interaction technology, gesture recognition is becoming an active research area. There are two types of devices for detecting gestures: contact-based devices and contactless devices. Using ultrasonic waves to determine gestures is one of the approaches employed in contactless devices, and hand gesture recognition utilizing ultrasonic waves is the focus of this thesis work. This thesis presents a new method for detecting and classifying a predefined set of hand gestures using a single ultrasonic transmitter and a single ultrasonic receiver. The method uses a linear frequency modulated ultrasonic signal, designed to meet the project requirements, such as the update rate and the range of detection, and to overcome hardware limitations such as the limited output power and the transmitter and receiver bandwidth. The method can be adapted to other hardware setups. Gestures are identified based on two main features: range estimates of the moving hand and received signal strength (RSS). These two features are estimated using two simple methods: the channel impulse response (CIR) and the cross correlation (CC) of the ultrasonic signal reflected from the gesturing hand. A customized, simple hardware setup was used to classify a set of hand gestures with high accuracy. Detection and classification were done using methods of low computational cost, which gives the proposed method great potential for implementation in many devices, including laptops and mobile phones. The predefined set of gestures can be used for many control applications.
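
    A hedged sketch of the cross-correlation range estimate described above: correlate the received echo with the transmitted linear chirp and convert the peak lag into a hand distance. The sampling rate, chirp band and synchronized capture are illustrative assumptions, not the thesis's hardware values.

    ```python
    # Sketch: range estimation by cross-correlating a received echo with an LFM chirp.
    import numpy as np
    from scipy.signal import chirp, correlate

    FS = 192_000                   # assumed sampling rate (Hz)
    C = 343.0                      # speed of sound in air (m/s)
    T = 0.01                       # chirp duration (s)

    t = np.arange(0, T, 1 / FS)
    tx = chirp(t, f0=20_000, t1=T, f1=40_000)   # assumed 20-40 kHz linear chirp

    def hand_range(rx: np.ndarray) -> float:
        """Distance (m) to the strongest reflector; rx is assumed to be captured
        starting at the moment of transmission."""
        cc = correlate(rx, tx, mode="valid")    # matched-filter output
        lag = np.argmax(np.abs(cc))             # sample delay of strongest echo
        return (lag / FS) * C / 2               # two-way travel time -> range
    ```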

  15. Neural interaction of speech and gesture: differential activations of metaphoric co-verbal gestures.

    Science.gov (United States)

    Kircher, Tilo; Straube, Benjamin; Leube, Dirk; Weis, Susanne; Sachs, Olga; Willmes, Klaus; Konrad, Kerstin; Green, Antonia

    2009-01-01

    Gestures are an important part of human communication. However, little is known about the neural correlates of gestures accompanying speech comprehension. The goal of this study is to investigate the neural basis of speech-gesture interaction as reflected in activation increase and decrease during observation of natural communication. Fourteen German participants watched video clips of 5 s duration depicting an actor who performed metaphoric gestures to illustrate the abstract content of spoken sentences. Furthermore, video clips of isolated gestures (without speech), isolated spoken sentences (without gestures) and gestures in the context of an unknown language (Russian) were additionally presented while functional magnetic resonance imaging (fMRI) data were acquired. Bimodal speech and gesture processing led to left hemispheric activation increases of the posterior middle temporal gyrus, the premotor cortex, the inferior frontal gyrus, and the right superior temporal sulcus. Activation reductions during the bimodal condition were located in the left superior temporal gyrus and the left posterior insula. Gesture related activation increases and decreases were dependent on language semantics and were not found in the unknown-language condition. Our results suggest that semantic integration processes for bimodal speech plus gesture comprehension are reflected in activation increases in the classical left hemispheric language areas. Speech related gestures seem to enhance language comprehension during the face-to-face communication.

  16. Gesture in a Kindergarten Mathematics Classroom

    Science.gov (United States)

    Elia, Iliada; Evangelou, Kyriacoulla

    2014-01-01

    Recent studies have advocated that mathematical meaning is mediated by gestures. This case study explores the gestures kindergarten children produce when learning spatial concepts in a mathematics classroom setting. Based on a video study of a mathematical lesson in a kindergarten class, we concentrated on the verbal and non-verbal behavior of one…

  17. Enhancing Gesture Quality in Young Singers

    Science.gov (United States)

    Liao, Mei-Ying; Davidson, Jane W.

    2016-01-01

    Studies have shown positive results for the use of gesture as a successful technique in aiding children's singing. The main purpose of this study was to examine the effects of movement training for children with regard to enhancing gesture quality. Thirty-six fifth-grade students participated in the empirical investigation. They were randomly…

  18. Gestures in an Intelligent User Interface

    NARCIS (Netherlands)

    Fikkert, F.W.; van der Vet, P.E.; Nijholt, Antinus; Shao, Ling; Shan, Caifeng; Luo, Jiebo; Etoh, Minoru

    2010-01-01

    In this chapter we investigated which hand gestures are intuitive to control a large display multimedia interface from a user’s perspective. Over the course of two sequential user evaluations we defined a simple gesture set that allows users to fully control a large display multimedia interface,

  20. Gestures: Silent Scaffolding within Small Groups

    Science.gov (United States)

    Carter, Glenda; Wiebe, Eric N.; Reid-Griffin, Angela

    2006-01-01

    This paper describes how gestures are used to enhance scaffolding that occurs in small group settings. Sixth and eighth grade students participated in an elective science course focused on earth science concepts with a substantial spatial visualization component. Gestures that students used in small group discussions were analyzed and four…

  2. Talking Hands: Tongue Motor Excitability During Observation of Hand Gestures Associated with Words.

    Directory of Open Access Journals (Sweden)

    Naeem Komeilipoor

    2014-09-01

    Full Text Available Perception of speech and gestures engages common brain areas. Neural regions involved in speech perception overlap with those involved in speech production in an articulator-specific manner. Yet, it is unclear whether motor cortex also has a role in processing communicative actions like gesture and sign language. We asked whether the mere observation of hand gestures, paired and not paired with words, results in changes in the excitability of the hand and tongue areas of motor cortex. Using single-pulse transcranial magnetic stimulation, we measured motor excitability in the tongue and hand areas of left primary motor cortex while participants viewed video sequences of bimanual hand movements associated or not associated with nouns. We found higher motor excitability in the tongue area during the presentation of meaningful (noun-associated) gestures as opposed to meaningless ones, while the excitability of the hand motor area was not differentially affected by gesture observation. Our results suggest that the observation of gestures associated with a word results in activation of the articulatory motor network accompanying speech production.

  3. Naming and gesturing spatial relations: evidence from focal brain-injured individuals.

    Science.gov (United States)

    Göksun, Tilbe; Lehet, Matthew; Malykhina, Katsiaryna; Chatterjee, Anjan

    2013-07-01

    Spatial language helps us to encode relations between objects and organize our thinking. Little is known about the neural instantiations of spatial language. Using voxel-lesion symptom mapping (VLSM), we tested the hypothesis that focal brain injured patients who had damage to left frontal-parietal peri-Sylvian regions would have difficulty in naming spatial relations between objects. We also investigated the relationship between impaired verbalization of spatial relations and spontaneous gesture production. Patients with left or right hemisphere damage and elderly control participants were asked to name static (e.g., an apple on a book) and dynamic (e.g., a pen moves over a box) locative relations depicted in brief video clips. The correct use of prepositions in each task and gestures that represent the spatial relations were coded. Damage to the left posterior middle frontal gyrus, the left inferior frontal gyrus, and the left anterior superior temporal gyrus were related to impairment in naming spatial relations. Production of spatial gestures negatively correlated with naming accuracy, suggesting that gestures might help or compensate for difficulty with lexical access. Additional analyses suggested that left hemisphere patients who had damage to the left posterior middle frontal gyrus and the left inferior frontal gyrus gestured less than expected, if gestures are used to compensate for impairments in retrieving prepositions.

  4. Telerobotic Pointing Gestures Shape Human Spatial Cognition

    CERN Document Server

    Cabibihan, John-John; Saj, Sujin; Zhang, Zhengchen

    2012-01-01

    This paper aimed to explore whether human beings can understand gestures produced by telepresence robots. If so, they could derive meaning conveyed in telerobotic gestures when processing spatial information. We conducted two experiments over Skype in the present study. Participants were presented with a robotic interface that had arms, which were teleoperated by an experimenter. The robot could point to virtual locations that represented certain entities. In Experiment 1, the experimenter described the spatial locations of fictitious objects sequentially in two conditions: a speech-only condition (SO, in which verbal descriptions clearly indicated the spatial layout) and a speech-and-gesture condition (SR, in which verbal descriptions were ambiguous but accompanied by robotic pointing gestures). Participants were then asked to recall the objects' spatial locations. We found that the number of spatial locations recalled in the SR condition was on par with that in the SO condition, suggesting that telerobotic pointing gestures c...

  5. Fusion of hand and arm gestures

    Science.gov (United States)

    Coquin, D.; Benoit, E.; Sawada, H.; Ionescu, B.

    2005-12-01

    In order to improve the link between an operator and a machine, some human-oriented communication systems now use natural languages like speech or gesture. The goal of this paper is to present a gesture recognition system based on the fusion of measurements issued from different kinds of sources. It is necessary to have sensors that are able to capture at least the position and orientation of the hand, such as a Dataglove and a video camera. The Dataglove gives a measure of the hand posture, and the video camera gives a measure of the general arm gesture, which represents the physical and spatial properties of the gesture, based on a 2D skeleton representation of the arm. The measurements used are partially complementary and partially redundant. The application is distributed over intelligent cooperating sensors. The paper presents the measurement of hand and arm gestures, the fusion processes, and the implementation solution.
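
    As a hedged illustration of fusing partially redundant, partially complementary sources, the sketch below combines per-class scores from a glove-posture classifier and a camera-based arm classifier by weighted late fusion. The weights, class set and fusion rule are assumptions for the example, not the paper's fusion operators.

    ```python
    # Sketch: weighted late fusion of glove (hand posture) and camera (arm) scores.
    import numpy as np

    GESTURES = ["point", "grasp", "wave"]        # hypothetical vocabulary

    def fuse(p_glove: np.ndarray, p_camera: np.ndarray, w: float = 0.6) -> str:
        """Each input is a probability vector over GESTURES; w weights the glove."""
        p = w * p_glove + (1 - w) * p_camera     # convex combination of the sources
        return GESTURES[int(np.argmax(p))]

    # Example: glove is confident in "grasp", camera slightly prefers "point".
    print(fuse(np.array([0.1, 0.8, 0.1]), np.array([0.5, 0.4, 0.1])))  # -> "grasp"
    ```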

  6. Gesturing in the early universities.

    Science.gov (United States)

    O'Boyle, C

    2000-01-01

    Research into the oral and literary traditions of scholastic education usually emphasizes the significance of the word in late medieval pedagogy. This paper suggests that coded hand signals provided early university scholars with an important non-verbal means of communication too. Using illustrations of classroom scenes from early university manuscripts, this paper analyzes the artistic conventions for representing gestures that these images embody. By building up a typology of these gesticulations, it demonstrates that the producers of these images and their audience shared a perception of scholastic education that embraced a sophisticated understanding of the activities associated with university education.

  7. Modelling Gesture Based Ubiquitous Applications

    CERN Document Server

    Zacharia, Kurien; Varghese, Surekha Mariam

    2011-01-01

    A cost-effective, gesture-based modelling technique called Virtual Interactive Prototyping (VIP) is described in this paper. Prototyping is implemented by projecting a virtual model of the equipment to be prototyped. Users can interact with the virtual model like the original working equipment. Image and sound processing techniques are used for capturing and tracking the user's interactions with the model. VIP is a flexible and interactive prototyping method that has many applications in ubiquitous computing environments. Various commercial and socio-economic applications of VIP, as well as its extension to interactive advertising, are also discussed.

  8. Metodología para evaluar progenies F5 a partir de selecciones individuales F4 de fríjol voluble en el sistema de relevo con maíz Evaluating F5 progenies from individual F4 selections of climbing bean relay-intercropped with maize

    Directory of Open Access Journals (Sweden)

    Roman V. Alberto

    1988-12-01

    Full Text Available Eight trials were planted between 1986 and 1987 at ICA's "La Selva" Regional Research Center, located in the municipality of Rionegro, Antioquia, Colombia, at 2,100 m above sea level, in order to establish a new methodology for evaluating F5 progenies of climbing bean (Phaseolus vulgaris L.) in the relay system with maize (Zea mays L.). One-hill plots (0.84 m2) were not useful for detecting significant differences among advanced lines, but for F5 materials the small-plot system with four replications served to evaluate and screen large quantities of material for yield, 100-seed weight, days to physiological maturity and days to flowering. This represents a saving of 77.8% of the planted area compared with sowing in six-hill plots with three replications, thereby lowering research costs.

  9. Evaluación por rendimiento de 12 genotipos promisorios de fríjol voluble (Phaseolus vulgaris L.) tipo Bola roja y Reventón para las zonas frías de Colombia Yield evaluation of 12 promising climbing bean (Phaseolus vulgaris L.) genotypes of the Bola roja and Reventón types for the cold zones of Colombia

    Directory of Open Access Journals (Sweden)

    López Jesús Edgardo

    2006-12-01

    Full Text Available The common bean (Phaseolus vulgaris L.) is a staple food in the Andean region, being a rich and low-cost source of protein. Research aimed at increasing yields in this legume is one option for improving its competitiveness in the world market. The main objective of this work was to evaluate, by means of path analysis, the yield of promising climbing bean genotypes of the Bola roja and Reventón types for the cold zones of Colombia. A randomized complete block design with three replicates was used to evaluate 10 promising climbing bean genotypes. The path analysis for yield per plant, together with the correlations between yield and its components, showed that the number of pods per plant is the most important trait in determining yield, compared with 100-seed weight and number of seeds per pod, in both the Bola roja and the Reventón climbing bean genotypes.

  10. Co-Speech Gesture as Input in Verb Learning

    Science.gov (United States)

    Goodrich, Whitney; Hudson Kam, Carla L.

    2009-01-01

    People gesture a great deal when speaking, and research has shown that listeners can interpret the information contained in gesture. The current research examines whether learners can also use co-speech gesture to inform language learning. Specifically, we examine whether listeners can use information contained in an iconic gesture to assign…

  11. Inducing Variability in Communicative Gestures Used by Severely Retarded Individuals.

    Science.gov (United States)

    Duker, Pieter C.; van Lent, Chretienne

    1991-01-01

    Consequences were withheld for high-rate gesture requests of 6 mentally handicapped individuals (ages 12-40), to increase the proportion of gestures used spontaneously. Results suggest that the teacher's nonresponding to high-rate spontaneous gesture requests increased individuals' use of previously taught but unused gesture requests. (Author/JDD)

  12. Cascading neural networks for upper-body gesture recognition

    CSIR Research Space (South Africa)

    Mangera, R

    2014-01-01

    Full Text Available and right gesture classification. The first neural network determines which hand is being used for gesture performance and the second neural network then recognises the gesture. The performance of the system is tested using the VisApp2013 gesture dataset...
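
    A hedged sketch of the cascade described above, using two small multilayer perceptrons from scikit-learn in place of the paper's networks: the first predicts which hand performs the gesture, the second (selected by that prediction) recognises the gesture. The features and labels are random placeholders so the example runs end to end.

    ```python
    # Sketch: two-stage cascade - stage 1 picks the performing hand, stage 2 the gesture.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 10))                      # placeholder pose features
    hands = rng.choice(["left", "right"], size=40)     # stage-1 labels
    gestures = rng.choice(["wave", "point"], size=40)  # stage-2 labels

    hand_net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, hands)
    gesture_nets = {h: MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
                        .fit(X[hands == h], gestures[hands == h])
                    for h in ("left", "right")}

    def predict(x):
        """Route one feature vector through the cascade."""
        hand = hand_net.predict(x.reshape(1, -1))[0]
        return hand, gesture_nets[hand].predict(x.reshape(1, -1))[0]

    print(predict(X[0]))
    ```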

  13. Convolutional neural network-based automatic classification of midsagittal tongue gestural targets using B-mode ultrasound images.

    Science.gov (United States)

    Xu, Kele; Roussel, Pierre; Csapó, Tamás Gábor; Denby, Bruce

    2017-06-01

    Tongue gestural target classification is of great interest to researchers in the speech production field. Recently, deep convolutional neural networks (CNN) have shown superiority to standard feature extraction techniques in a variety of domains. In this letter, both CNN-based speaker-dependent and speaker-independent tongue gestural target classification experiments are conducted to classify tongue gestures during natural speech production. The CNN-based method achieves state-of-the-art performance, even though no pre-training of the CNN (with the exception of a data augmentation preprocessing) was carried out.
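
    A hedged sketch of a small CNN of the kind described, for classifying midsagittal tongue-ultrasound frames. The layer sizes, input resolution and class count are illustrative assumptions, not the letter's architecture.

    ```python
    # Sketch: minimal CNN for B-mode ultrasound tongue-gesture classification (PyTorch).
    import torch
    import torch.nn as nn

    class TongueCNN(nn.Module):
        def __init__(self, n_classes: int = 4):   # assumed number of gestural targets
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 16 * 16, n_classes)  # for 64x64 inputs

        def forward(self, x):                      # x: (batch, 1, 64, 64) grayscale
            x = self.features(x)
            return self.classifier(x.flatten(1))

    logits = TongueCNN()(torch.randn(8, 1, 64, 64))   # -> (8, 4) class scores
    ```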

  14. ARTIFICIAL NEURAL NETWORK APPROACH FOR HAND GESTURE RECOGNITION

    OpenAIRE

    MISS. SHWETA K. YEWALE,; MR. PANKAJ K. BHARNE

    2011-01-01

    Gesture recognition is important for developing alternative human-computer interaction modalities. It enables humans to interface with machines in a more natural way. Several algorithms are available for recognizing gestures, and there are several approaches to gesture recognition using MATLAB. Artificial neural networks are flexible in a changing environment. This research paper gives an overview of ANNs for gesture recognition. It also describes the process of gesture recognitio...

  15. Non Audio-Video gesture recognition system

    DEFF Research Database (Denmark)

    Craciunescu, Razvan; Mihovska, Albena Dimitrova; Kyriazakos, Sofoklis

    2016-01-01

    Gesture recognition is a topic in computer science and language technology with the goal of interpreting human gestures via mathematical algorithms. Gestures can originate from any bodily motion or state but commonly originate from the face or hand. Current research focus includes the emotion... that can be connected to any computer on the market. The paper proposes an equation that relates distance and voltage for the Sharp GP2Y0A21 and GP2D120 sensors in the situation where a hand is used as the reflective object. In the end, the presented system is compared with other audio/video systems...
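
    The paper's own distance-voltage equation is not reproduced in the abstract. For orientation only: Sharp triangulation rangers of this family are commonly calibrated with a power-law fit of the form d = a * V^b. The constants below are illustrative values often quoted for the GP2Y0A21 and should be re-fitted for any real sensor.

    ```python
    # Sketch: power-law distance-from-voltage conversion for a Sharp IR ranger.
    # d = a * V**b with illustrative constants (re-fit a and b on your own sensor).

    def ir_distance_cm(voltage: float, a: float = 27.7, b: float = -1.2) -> float:
        """Convert sensor output voltage (V) to an approximate distance in cm."""
        return a * voltage ** b

    print(ir_distance_cm(1.0))   # ~27.7 cm at 1 V under the assumed calibration
    ```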

  16. CT scan correlates of gesture recognition.

    Science.gov (United States)

    Ferro, J M; Martins, I P; Mariano, G; Caldas, A C

    1983-10-01

    The ability to recognise gestures was studied in 65 left-hemispheric stroke patients whose lesions were located by CT scan. In the acute stage (first month) frontal lobe and basal ganglia were frequently involved in patients showing inability to recognise gestures. In the later (third to fourth month) and chronic stages (greater than 6 months) parietal lobe involvement was important; lesions causing gesture recognition impairment were larger, had more extensive and frequent parietal involvement and produced less temporal lobe damage than those causing aural comprehension defects. These findings are discussed in the light of recent models of cerebral localisation of complex functions.

  17. Blind Speakers Show Language-Specific Patterns in Co-Speech Gesture but Not Silent Gesture.

    Science.gov (United States)

    Özçalışkan, Şeyda; Lucero, Ché; Goldin-Meadow, Susan

    2017-05-08

    Sighted speakers of different languages vary systematically in how they package and order components of a motion event in speech. These differences influence how semantic elements are organized in gesture, but only when those gestures are produced with speech (co-speech gesture), not without speech (silent gesture). We ask whether the cross-linguistic similarity in silent gesture is driven by the visuospatial structure of the event. We compared 40 congenitally blind adult native speakers of English or Turkish (20/language) to 80 sighted adult speakers (40/language; half with, half without blindfolds) as they described three-dimensional motion scenes. We found an effect of language on co-speech gesture, but not on silent gesture: blind speakers of both languages organized their silent gestures as sighted speakers do. Humans may have a natural semantic organization that they impose on events when conveying them in gesture without language, an organization that relies on neither visuospatial cues nor language structure. © 2017 Cognitive Science Society, Inc.

  18. Research on Gesture Definition and Electrode Placement in Pattern Recognition of Hand Gesture Action SEMG

    Science.gov (United States)

    Zhang, Xu; Chen, Xiang; Zhao, Zhang-Yan; Tu, You-Qiang; Yang, Ji-Hai; Lantz, Vuokko; Wang, Kong-Qiao

    The goal of this study is to explore the effects of electrode placement on hand gesture pattern recognition performance. We have conducted experiments with surface EMG sensors using two detecting electrode channels. In total, 25 different hand gestures and 10 different electrode positions for measuring muscle activities have been evaluated. Based on the experimental results, dependencies between surface EMG signal detection positions and hand gesture recognition performance have been analyzed and summarized as suggestions on how to define hand gestures and select suitable electrode positions for a myoelectric control system. This work provides useful insight for the development of a medical rehabilitation system based on the EMG technique.

  19. Multimodal interfaces with voice and gesture input

    Energy Technology Data Exchange (ETDEWEB)

    Milota, A.D.; Blattner, M.M.

    1995-07-20

    The modalities of speech and gesture have different strengths and weaknesses, but combined they create a synergy where each modality corrects the weaknesses of the other. We believe that a multimodal system such as one intertwining speech and gesture must start from a different foundation than systems based solely on pen input. In order to provide a basis for the design of a speech and gesture system, we have examined the research in other disciplines such as anthropology and linguistics. The result of this investigation was a taxonomy that gave us material for the incorporation of gestures whose meanings are largely transparent to the users. This study describes the taxonomy and gives examples of applications to pen input systems.

  20. MGRA: Motion Gesture Recognition via Accelerometer

    Directory of Open Access Journals (Sweden)

    Feng Hong

    2016-04-01

    Full Text Available Accelerometers have been widely embedded in most current mobile devices, enabling easy and intuitive operations. This paper proposes a Motion Gesture Recognition system (MGRA based on accelerometer data only, which is entirely implemented on mobile devices and can provide users with real-time interactions. A robust and unique feature set is enumerated through the time domain, the frequency domain and singular value decomposition analysis using our motion gesture set containing 11,110 traces. The best feature vector for classification is selected, taking both static and mobile scenarios into consideration. MGRA exploits support vector machine as the classifier with the best feature vector. Evaluations confirm that MGRA can accommodate a broad set of gesture variations within each class, including execution time, amplitude and non-gestural movement. Extensive evaluations confirm that MGRA achieves higher accuracy under both static and mobile scenarios and costs less computation time and energy on an LG Nexus 5 than previous methods.
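
    A hedged sketch of an MGRA-style pipeline: time-domain, frequency-domain and SVD features computed from a 3-axis accelerometer trace, fed to a support vector machine. The specific features here are a simplified illustration of the feature families named above, not the paper's exact vector.

    ```python
    # Sketch: accelerometer gesture features (time, frequency, SVD) + SVM classifier.
    import numpy as np
    from sklearn.svm import SVC

    def features(trace: np.ndarray) -> np.ndarray:
        """trace: (T x 3) accelerometer samples for one gesture (T >= 16 assumed)."""
        time_feats = np.concatenate([trace.mean(0), trace.std(0)])      # time domain
        spectrum = np.abs(np.fft.rfft(trace, axis=0))
        freq_feats = spectrum[:8].ravel()                               # low-band energy
        sv = np.linalg.svd(trace - trace.mean(0), compute_uv=False)     # SVD shape cues
        return np.concatenate([time_feats, freq_feats, sv])

    def train(traces, labels):
        X = np.array([features(t) for t in traces])
        return SVC(kernel="rbf").fit(X, labels)
    ```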

  2. Gesture recognition for an exergame prototype

    NARCIS (Netherlands)

    Gacem, B.; Vergouw, R.; Verbiest, H.; Cicek, E.; van Oosterhout, T.; Krose, B.; Bakkes, S.

    2011-01-01

    We will demonstrate a prototype exergame aimed at the serious domain of elderly fitness. The exergame incorporates straightforward means of gesture recognition, and utilises a Kinect camera to obtain 2.5D sensory data of the human user.

  3. Hand Gesture Recognition Based on Improved FRNN

    Institute of Scientific and Technical Information of China (English)

    TENG Xiao-long; WANG Xiang-yang; LIU Chong-qing

    2005-01-01

    A trained Gaussian mixture model is used to perform skin-colour segmentation on the input image sequences. The hand gesture region is extracted, and normalized images are obtained by interpolation. To solve the hand gesture recognition problem, the fuzzy-rough nearest neighbour (FRNN) algorithm is applied for classification. To avoid costly computation, an improved nearest neighbour classification algorithm based on fuzzy-rough set theory (FRNNC) is proposed. The algorithm employs representative cluster points instead of the whole training set, and takes the fuzziness and roughness of the hand gesture data into account, so computational cost is decreased and the recognition rate increased. The 30 gestures of the Chinese sign language alphabet are used to verify the effectiveness of the proposed algorithm. The recognition rate is 94.96%, which is better than that of KNN (K nearest neighbour) and Fuzzy-KNN (Fuzzy K nearest neighbour).
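
    The skin-colour segmentation step can be sketched as follows, assuming a set of labelled skin pixel samples is available for training; the component count and likelihood threshold are illustrative, not values from the paper:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_skin_model(skin_pixels, n_components=4):
    """Train a skin-colour GMM from labelled skin pixel samples of shape (n, 3)."""
    return GaussianMixture(n_components=n_components).fit(skin_pixels)

def skin_mask(gmm, image, threshold=-12.0):
    """Mark pixels whose log-likelihood under the skin GMM exceeds a threshold."""
    h, w, _ = image.shape
    scores = gmm.score_samples(image.reshape(-1, 3).astype(float))
    return (scores > threshold).reshape(h, w)
```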

  4. Active Appearance Model Based Hand Gesture Recognition

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    This paper addresses the application of hand gesture recognition in monocular image sequences using the Active Appearance Model (AAM). The proposed algorithm is composed of constructing AAMs and fitting the models to the region of interest. In the training stage, the AAM is constructed from manually labelled feature points and the corresponding average feature is obtained. In the recognition stage, the hand gesture region is first segmented using skin and movement cues. Secondly, the models are fitted to the image that includes the hand gesture, and the relevant features are extracted. Thirdly, classification is done by comparing the extracted features with the average features. 30 different gestures of Chinese sign language are used to test the effectiveness of the method. Experimental results indicate good performance of the algorithm.

  5. Gesture Recognition for an Exergame Prototype

    NARCIS (Netherlands)

    Gacem, Brahim; Vergouw, Robert; Verbiest, Harm; Cicek, Emrullah; Van Oosterhout, Tim; Bakkes, Sander; Kröse, Ben

    2011-01-01

    We will demonstrate a prototype exergame aimed at the serious domain of elderly fitness. The exergame incorporates straightforward means of gesture recognition, and utilises a Kinect camera to obtain 2.5D sensory data of the human user.

  6. The language-gesture connection: Evidence from aphasia.

    Science.gov (United States)

    Dipper, Lucy; Pritchard, Madeleine; Morgan, Gary; Cocks, Naomi

    2015-01-01

    A significant body of evidence from cross-linguistic and developmental studies converges to suggest that co-speech iconic gesture mirrors language. This paper aims to identify whether gesture reflects impaired spoken language in a similar way. Twenty-nine people with aphasia (PWA) and 29 neurologically healthy control participants (NHPs) produced a narrative discourse, retelling the story of a cartoon video. Gesture and language were analysed in terms of semantic content and structure for two key motion events. The aphasic data showed an influence on gesture from lexical choices but no corresponding clausal influence. Both groups produced gestures that matched the semantics of the spoken language and gestures that did not, although there was one particular gesture-language mismatch (semantically "light" verbs paired with semantically richer gesture) that typified the PWA narratives. These results indicate that gesture is both closely related to spoken language impairment and compensatory.

  7. Dynamic Hand Gesture Recognition Using the Skeleton of the Hand

    Directory of Open Access Journals (Sweden)

    Coquin Didier

    2005-01-01

    This paper discusses the use of computer vision in the interpretation of human gestures. Hand gestures would be an intuitive and ideal way of exchanging information with other people in a virtual space, guiding robots to perform certain tasks in a hostile environment, or interacting with computers. Hand gestures can be divided into two main categories: static gestures and dynamic gestures. In this paper, a novel dynamic hand gesture recognition technique is proposed, based on the 2D skeleton representation of the hand. For each gesture, the hand skeletons of each posture are superposed, providing a single image which is the dynamic signature of the gesture. Recognition is performed by comparing this signature with the ones from a gesture alphabet, using Baddeley's distance as a measure of dissimilarity between model parameters.
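
    The "dynamic signature" idea, superposing the 2D skeletons of successive postures into one image, can be sketched as below; mask extraction and the Baddeley-distance comparison against the gesture alphabet are omitted (a Hausdorff-type distance could stand in for illustration):

```python
import numpy as np
from skimage.morphology import skeletonize

def dynamic_signature(binary_hand_masks):
    """Superpose the 2D skeletons of all postures of one gesture into a
    single boolean image, the gesture's dynamic signature."""
    signature = np.zeros_like(binary_hand_masks[0], dtype=bool)
    for mask in binary_hand_masks:
        signature |= skeletonize(mask)
    return signature
```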

  8. Dynamic Hand Gesture Recognition Using the Skeleton of the Hand

    Science.gov (United States)

    Ionescu, Bogdan; Coquin, Didier; Lambert, Patrick; Buzuloiu, Vasile

    2005-12-01

    This paper discusses the use of computer vision in the interpretation of human gestures. Hand gestures would be an intuitive and ideal way of exchanging information with other people in a virtual space, guiding robots to perform certain tasks in a hostile environment, or interacting with computers. Hand gestures can be divided into two main categories: static gestures and dynamic gestures. In this paper, a novel dynamic hand gesture recognition technique is proposed, based on the 2D skeleton representation of the hand. For each gesture, the hand skeletons of each posture are superposed, providing a single image which is the dynamic signature of the gesture. Recognition is performed by comparing this signature with the ones from a gesture alphabet, using Baddeley's distance as a measure of dissimilarity between model parameters.

  9. The neural substrate of gesture recognition.

    Science.gov (United States)

    Villarreal, Mirta; Fridman, Esteban A; Amengual, Alejandra; Falasco, German; Gerscovich, Eliana Roldan; Ulloa, Erlinda R; Leiguarda, Ramon C

    2008-01-01

    Previous studies have linked action recognition with a particular pool of neurons located in the ventral premotor cortex, the posterior parietal cortex and the superior temporal sulcus (the mirror neuron system). However, it is still unclear whether transitive and intransitive gestures share the same neural substrates during action-recognition processes. In the present study, we used event-related functional magnetic resonance imaging (fMRI) to assess the cortical areas active during recognition of pantomimed transitive actions, intransitive gestures, and meaningless control actions. Perception of all types of gestures engaged the right pre-supplementary motor area (pre-SMA) and, bilaterally, the posterior superior temporal cortex, the posterior parietal cortex, occipitotemporal regions and visual cortices. Activation of the posterior superior temporal sulcus/superior temporal gyrus region was found in both hemispheres during recognition of transitive and intransitive gestures, and in the right hemisphere during the control condition; the middle temporal gyrus showed activation in the left hemisphere when subjects recognized transitive and intransitive gestures; activation of the inferior parietal lobe and intraparietal sulcus (IPS) was mainly observed in the left hemisphere during recognition of all three conditions. The most striking finding was the greater activation of the left inferior frontal gyrus (IFG) during recognition of intransitive actions. Results show that a similar neural substrate, albeit with distinct engagement, underlies the cognitive processing of transitive and intransitive gesture recognition. These findings suggest that selective disruptions in these circuits may lead to distinct clinical deficits.

  10. Gesture recognition by instantaneous surface EMG images.

    Science.gov (United States)

    Geng, Weidong; Du, Yu; Jin, Wenguang; Wei, Wentao; Hu, Yu; Li, Jiajun

    2016-11-15

    Gesture recognition in non-intrusive muscle-computer interfaces is usually based on windowed descriptive and discriminatory surface electromyography (sEMG) features, because the recorded amplitude of a myoelectric signal may rapidly fluctuate between voltages above and below zero. Here, we show that the patterns inside the instantaneous values of high-density sEMG enable gesture recognition to be performed merely with sEMG signals at a specific instant. We introduce the concept of an sEMG image spatially composed from high-density sEMG and verify our findings from a computational perspective with experiments on gesture recognition based on sEMG images, using a deep convolutional network as the classification scheme. Without any windowed features, the resultant recognition accuracy of an 8-gesture within-subject test reached 89.3% on a single frame of sEMG image and reached 99.0% using simple majority voting over 40 frames with a 1,000 Hz sampling rate. Experiments on the recognition of 52 gestures of the NinaPro database and 27 gestures of the CSL-HDEMG database also validated that our approach outperforms state-of-the-art methods. Our findings are a starting point for the development of more fluid and natural muscle-computer interfaces with very little observational latency. For example, active prostheses and exoskeletons based on high-density electrodes could be controlled with instantaneous responses.
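
    The majority-voting step that lifts single-frame accuracy (89.3%) to 99.0% over 40 frames is straightforward; a minimal sketch, assuming per-frame labels have already been predicted by the convolutional network:

```python
import numpy as np

def majority_vote(frame_labels):
    """Return the most frequent per-frame gesture label, e.g. over 40 frames."""
    labels, counts = np.unique(np.asarray(frame_labels), return_counts=True)
    return labels[np.argmax(counts)]
```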

  11. Gesture recognition by instantaneous surface EMG images

    Science.gov (United States)

    Geng, Weidong; Du, Yu; Jin, Wenguang; Wei, Wentao; Hu, Yu; Li, Jiajun

    2016-01-01

    Gesture recognition in non-intrusive muscle-computer interfaces is usually based on windowed descriptive and discriminatory surface electromyography (sEMG) features, because the recorded amplitude of a myoelectric signal may rapidly fluctuate between voltages above and below zero. Here, we show that the patterns inside the instantaneous values of high-density sEMG enable gesture recognition to be performed merely with sEMG signals at a specific instant. We introduce the concept of an sEMG image spatially composed from high-density sEMG and verify our findings from a computational perspective with experiments on gesture recognition based on sEMG images, using a deep convolutional network as the classification scheme. Without any windowed features, the resultant recognition accuracy of an 8-gesture within-subject test reached 89.3% on a single frame of sEMG image and reached 99.0% using simple majority voting over 40 frames with a 1,000 Hz sampling rate. Experiments on the recognition of 52 gestures of the NinaPro database and 27 gestures of the CSL-HDEMG database also validated that our approach outperforms state-of-the-art methods. Our findings are a starting point for the development of more fluid and natural muscle-computer interfaces with very little observational latency. For example, active prostheses and exoskeletons based on high-density electrodes could be controlled with instantaneous responses. PMID:27845347

  12. Combined Hand Gesture — Speech Model for Human Action Recognition

    Directory of Open Access Journals (Sweden)

    Sheng-Tzong Cheng

    2013-12-01

    This study proposes a dynamic hand gesture detection technology to effectively detect dynamic hand gesture areas, and a hand gesture recognition technology to improve the dynamic hand gesture recognition rate. Meanwhile, the corresponding relationship between state sequences in the hand gesture and speech models is considered by integrating speech recognition technology with a multimodal model, thus improving the accuracy of human behavior recognition. The experimental results showed that the proposed method can effectively improve human behavior recognition accuracy and demonstrated the feasibility of system applications. Experimental results also verified that the multimodal gesture-speech model provided superior accuracy when compared to the single-modal versions.

  13. Combined hand gesture--speech model for human action recognition.

    Science.gov (United States)

    Cheng, Sheng-Tzong; Hsu, Chih-Wei; Li, Jian-Pan

    2013-12-12

    This study proposes a dynamic hand gesture detection technology to effectively detect dynamic hand gesture areas, and a hand gesture recognition technology to improve the dynamic hand gesture recognition rate. Meanwhile, the corresponding relationship between state sequences in the hand gesture and speech models is considered by integrating speech recognition technology with a multimodal model, thus improving the accuracy of human behavior recognition. The experimental results showed that the proposed method can effectively improve human behavior recognition accuracy and demonstrated the feasibility of system applications. Experimental results also verified that the multimodal gesture-speech model provided superior accuracy when compared to the single-modal versions.

  14. Different visual exploration of tool-related gestures in left hemisphere brain damaged patients is associated with poor gestural imitation.

    Science.gov (United States)

    Vanbellingen, Tim; Schumacher, Rahel; Eggenberger, Noëmi; Hopfner, Simone; Cazzoli, Dario; Preisig, Basil C; Bertschi, Manuel; Nyffeler, Thomas; Gutbrod, Klemens; Bassetti, Claudio L; Bohlhalter, Stephan; Müri, René M

    2015-05-01

    According to the direct matching hypothesis, perceived movements automatically activate existing motor components through matching of the perceived gesture and its execution. The aim of the present study was to test the direct matching hypothesis by assessing whether visual exploration behavior correlates with deficits in gestural imitation in left hemisphere damaged (LHD) patients. Eighteen LHD patients and twenty healthy control subjects took part in the study. Gesture imitation performance was measured by the test for upper limb apraxia (TULIA). Visual exploration behavior was measured by an infrared eye-tracking system. Short videos including forty gestures (20 meaningless and 20 communicative gestures) were presented. Cumulative fixation duration was measured in different regions of interest (ROIs), namely the face, the gesturing hand, the body, and the surrounding environment. Compared to healthy subjects, patients fixated significantly less on the ROIs comprising the face and the gesturing hand during the exploration of emblematic and tool-related gestures. Moreover, visual exploration of tool-related gestures significantly correlated with tool-related imitation as measured by TULIA in LHD patients. Patients and controls did not differ in the visual exploration of meaningless gestures, and no significant relationships were found between visual exploration behavior and the imitation of emblematic and meaningless gestures in TULIA. The present study thus suggests that altered visual exploration may lead to disturbed imitation of tool-related gestures, but not of emblematic and meaningless gestures. Consequently, our findings partially support the direct matching hypothesis. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Visual Interpretation Of Hand Gestures For Human Computer Interaction

    Directory of Open Access Journals (Sweden)

    M.S.Sahane

    2014-01-01

    The use of hand gestures provides an attractive alternative to cumbersome interface devices for human-computer interaction (HCI). In particular, visual interpretation of hand gestures can help in achieving the ease and naturalness desired for HCI. This discussion is organized on the basis of the methods used for modeling, analyzing, and recognizing gestures. We propose pointing-gesture-based large display interaction using a depth camera. A user interacts with applications on a large display by using pointing gestures with the bare hand. The calibration between the large display and the depth camera can be performed automatically using an RGB-D camera. We also discuss implemented gestural systems as well as other potential applications of vision-based gesture recognition, and directions of future research in gesture recognition, including its integration with other natural modes of human-computer interaction.

  16. Narrative processing in typically developing children and children with early unilateral brain injury: seeing gesture matters.

    Science.gov (United States)

    Demir, Özlem Ece; Fisher, Joan A; Goldin-Meadow, Susan; Levine, Susan C

    2014-03-01

    Narrative skill in kindergarteners has been shown to be a reliable predictor of later reading comprehension and school achievement. However, we know little about how to scaffold children's narrative skill. Here we examine whether the quality of kindergarten children's narrative retellings depends on the kind of narrative elicitation they are given. We asked this question with respect to typically developing (TD) kindergarten children and children with pre- or perinatal unilateral brain injury (PL), a group that has been shown to have difficulty with narrative production. We compared children's skill in retelling stories originally presented to them in 4 different elicitation formats: (a) wordless cartoons, (b) stories told by a narrator through the auditory modality, (c) stories told by a narrator through the audiovisual modality without co-speech gestures, and (d) stories told by a narrator in the audiovisual modality with co-speech gestures. We found that children told better structured narratives in response to the audiovisual + gesture elicitation format than in response to the other 3 elicitation formats, consistent with findings that co-speech gestures can scaffold other aspects of language and memory. The audiovisual + gesture elicitation format was particularly beneficial for children who had the most difficulty telling a well-structured narrative, a group that included children with larger lesions associated with cerebrovascular infarcts.

  17. Multi-touch pinch gestures: performance and ergonomics

    OpenAIRE

    Hoggan, Eve; Nacenta, Miguel; Kristensson, Per Ola; Williamson, John; Oulasvirta, Antti; Lehtiö, Anu

    2013-01-01

    Multi-touch gestures are prevalent interaction techniques for many different types of devices and applications. One of the most common gestures is the pinch gesture, which involves the expansion or contraction of a finger spread. There are multiple uses for this gesture—zooming and scaling being the most common—but little is known about the factors affecting performance and ergonomics of the gesture motion itself. In this note, we present the results from a study where we manipulated angle, d...

  18. Animation Stimuli System for Research on Instructor Gestures in Education.

    Science.gov (United States)

    Cui, Jian; Popescu, Voicu; Adamo-Villani, Nicoletta; Wagner Cook, Susan; Duggan, Katherine A; Friedman, Howard S

    2017-01-01

    Education research has shown that instructor gestures can help capture, maintain, and direct the student's attention during a lecture as well as enhance learning and retention. Traditional education research on instructor gestures relies on video stimuli, which are time consuming to produce, especially when gesture precision and consistency across conditions are strictly enforced. The proposed system allows users to efficiently create accurate and effective stimuli for complex studies on gesture, without the need for computer animation expertise or artist talent.

  19. Research on Virtual Object Tele-operation Based on Gesture

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    A tele-operation method for a virtual environment based on gesture is presented. Firstly, the design block diagram and the information flow of the virtual environment tele-operation simulation system are given. Secondly, the coordinate transformation between the virtual gesture and the tele-operated aircraft is presented. Finally, a tele-operation simulation system based on gesture is developed. The simulation results demonstrate good consistency between the virtual gesture and the moving object.

  20. Gestures and Language: Fair and Foul in Other Cultures.

    Science.gov (United States)

    Wilcox, Joanne

    1994-01-01

    Discusses social gaffes that North Americans can make when using inappropriate gestures and body language in other cultures, focusing on the meaning of common gestures in Asia, the Middle East, and Europe. Includes a whimsical 10-question gesture and body language quiz. (MDM)

  1. Gesture Controlled Robot using Image Processing

    Directory of Open Access Journals (Sweden)

    Harish Kumar Kaura

    2013-05-01

    Service robots directly interact with people, so finding a more natural and easy user interface is of fundamental importance. While earlier works have focused primarily on issues such as manipulation and navigation in the environment, few robotic systems provide user-friendly interfaces that allow the robot to be controlled by natural means. To provide a feasible solution to this requirement, we have implemented a system through which the user can give commands to a wireless robot using gestures. Through this method, the user can control or navigate the robot by using gestures of his/her palm, thereby interacting with the robotic system. The command signals are generated from these gestures using image processing. These signals are then passed to the robot to navigate it in the specified directions.

  2. Pitch Gestures in Generative Modeling of Music

    DEFF Research Database (Denmark)

    Jensen, Kristoffer

    2011-01-01

    Generative models of music are in need of performance and gesture additions, i.e. inclusions of subtle temporal and dynamic alterations, and gestures, so as to render the music musical. While much of the research regarding music generation is based on music theory, the work presented here is based on temporal perception, which is divided into three parts: the immediate (subchunk), the short-term memory (chunk), and the superchunk. By review of the relevant temporal perception literature, the necessary performance elements to add to the metrical generative model, related to the chunk memory, are obtained. In particular, the pitch gestures are modeled as rising, falling, or as arches with positive or negative peaks.

  3. Gesture Based Control and EMG Decomposition

    Science.gov (United States)

    Wheeler, Kevin R.; Chang, Mindy H.; Knuth, Kevin H.

    2005-01-01

    This paper presents two probabilistic developments for use with electromyograms (EMG). First described is a new neuroelectric interface for virtual device control based on gesture recognition. The second development is a Bayesian method for decomposing EMG into individual motor unit action potentials. This more complex technique will then allow for higher resolution in separating muscle groups for gesture recognition. All examples presented rely upon sampling EMG data from a subject's forearm. The gesture-based recognition uses pattern recognition software that has been trained to identify gestures from among a given set of gestures. The pattern recognition software consists of hidden Markov models which are used to recognize the gestures as they are being performed in real time from moving averages of EMG. Two experiments were conducted to examine the feasibility of this interface technology. The first replicated a virtual joystick interface, and the second replicated a keyboard. Moving averages of EMG do not provide easy distinction between fine muscle groups. To better distinguish between different fine motor skill muscle groups, we present a Bayesian algorithm to separate surface EMG into representative motor unit action potentials. The algorithm is based upon differential variable component analysis (dVCA) [1], [2], which was originally developed for electroencephalograms. The algorithm uses a simple forward model representing a mixture of motor unit action potentials as seen across multiple channels. The parameters of this model are iteratively optimized for each component. Results are presented on both synthetic and experimental EMG data. The synthetic case has additive white noise and is compared with known components. The experimental EMG data were obtained using a custom linear electrode array designed for this study.
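
    The moving-average EMG feature stream fed to the hidden Markov models can be sketched for a single channel as follows (the window length is illustrative, not taken from the paper):

```python
import numpy as np

def emg_moving_average(emg, window=64):
    """Moving average of rectified single-channel EMG, the kind of feature
    stream the gesture HMMs consume in real time."""
    rectified = np.abs(np.asarray(emg, dtype=float))
    kernel = np.ones(window) / window
    return np.convolve(rectified, kernel, mode="valid")
```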

  4. Communicating Epistemic Stance: How Speech and Gesture Patterns Reflect Epistemicity and Evidentiality

    Science.gov (United States)

    Roseano, Paolo; González, Montserrat; Borràs-Comes, Joan; Prieto, Pilar

    2016-01-01

    This study investigates how epistemic stance is encoded and perceived in face-to-face communication when language is regarded as comprised by speech and gesture. Two studies were conducted with this goal in mind. The first study consisted of a production task in which participants performed opinion reports. Results showed that speakers communicate…

  5. Dynamic Gesture Recognition Based on Depth Information

    Directory of Open Access Journals (Sweden)

    GU, D.

    2015-08-01

    Human-machine interaction through body language has recently become popular. With the help of a 3D camera, a video stream with depth information provides more detailed data to describe a movement. This paper proposes an algorithm to recognize dynamic gestures. Data preparation is needed first to eliminate distractions. Then the start and the end of a possible meaningful gesture are identified. Finally, Dynamic Time Warping (DTW) is employed to calculate the similarity between a sample stream and the template. The test results show that the algorithm works well.
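
    The DTW scoring step at the heart of the method can be sketched directly; this is the classic dynamic-programming recurrence over two sequences of feature vectors, not the authors' exact implementation:

```python
import numpy as np

def dtw_distance(sample, template):
    """Dynamic time warping cost between a sample stream and a gesture template."""
    n, m = len(sample), len(template)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(sample[i - 1] - template[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

    The template with the lowest warping cost is then taken as the recognized gesture.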

  6. An Intelligent Multilingual Mouse Gesture Recognition System

    Directory of Open Access Journals (Sweden)

    Nidal F. Shilbayeh

    2005-01-01

    A comprehensive mouse gesture system is designed and tested successfully. The system is based on the UNIPEN algorithm in terms of mouse movements and applies its geometrical principles such as angles and transposition steps. The system incorporates neural networks as its learning and recognition engine. The designed algorithm is capable of translating not only discrete gesture moves, but also continuous sentences and complete paragraphs. A Hopfield network is also used for initial learning to add a feature of language independence to the system.

  7. Gliding and Saccadic Gaze Gesture Recognition in Real Time

    DEFF Research Database (Denmark)

    Rozado, David; San Agustin, Javier; Rodriguez, Francisco

    2012-01-01

    ...paradigm in the context of human-machine interaction as low-cost gaze trackers become more ubiquitous. The viability of gaze gestures as an innovative way to control a computer rests on how easily they can be assimilated by potential users and also on the ability of machine learning algorithms to discriminate intentional gaze gestures from typical gaze activity performed during standard interaction with electronic devices. In this work, through a set of experiments and user studies, we evaluate the performance of two different gaze gesture modalities, gliding gaze gestures and saccadic gaze gestures...

  8. Generating Culture-Specific Gestures for Virtual Agent Dialogs

    DEFF Research Database (Denmark)

    Endrass, Birgit; Damian, Ionut; Huber, Peter

    2010-01-01

    Integrating culture into the behavioral model of virtual agents has come into focus lately. When investigating verbal aspects of behavior, nonverbal behaviors are desirably added automatically, driven by the speech-act. In this paper, we present a corpus-driven approach of generating gestures in a culture-specific way to accompany agent dialogs. The frequency of gestures and gesture types, the correlation of gesture types and speech-acts, as well as the expressivity of gestures have been analyzed in the two cultures of Germany and Japan and integrated into a demonstrator.

  9. A Robot Control System Based on Gesture Recognition Using Kinect

    Directory of Open Access Journals (Sweden)

    Biao MA

    2013-05-01

    The Kinect camera is widely used for capturing human body images and human motion recognition in video game playing, and there are already some research works on gesture recognition. However, to achieve anti-interference performance, current recognition algorithms are often complex and slow, and most applications are based on incomplete gesture libraries, so not all hand gestures can be recognized. This paper explores a new method and algorithm which can describe all five fingertips of each hand at any time for hand gesture recognition with the Kinect system. The hand images are processed to build hand models which are then compared with the gesture library for gesture recognition. After hand gestures are recognized with high accuracy and low computational cost, control commands corresponding to the hand gestures are sent wirelessly from the gesture recognition system to a hexagon robot controller; the hexagon robot can then be controlled wirelessly and change its shape according to the hand gesture command. Thus the robot can interact with humans promptly through the gesture recognition system.
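
    The abstract does not spell out the fingertip-description algorithm; a common stand-in for this step is counting convexity defects of the hand contour with OpenCV, sketched here (the depth threshold is illustrative):

```python
import cv2

def count_fingers(hand_mask):
    """Rough finger count from a binary hand mask via convexity defects:
    each sufficiently deep valley between fingers implies one extra finger."""
    contours, _ = cv2.findContours(hand_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    # defect depth is stored as a fixed-point value scaled by 256
    deep = sum(1 for d in defects[:, 0] if d[3] / 256.0 > 20)
    return deep + 1 if deep else 0
```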

  10. Grammar of Dance Gesture from Bali Traditional Dance

    Directory of Open Access Journals (Sweden)

    Yaya Heryadi

    2012-11-01

    Automatic recognition of dance gestures is an important research area in computer vision with many potential applications. Bali traditional dance comprises many dance gestures that have remained relatively unchanged over the years. Although previous studies have reported various methods for recognizing gestures, to the best of our knowledge a method to model and classify the dance gestures of Bali traditional dance has not yet been reported in the literature. The aim of this paper is to build a robust, linguistically motivated recognizer for the dance gestures of Bali traditional dance choreography. The empirical results showed that probabilistic grammar-based classifiers, induced using the Alergia algorithm with the Symbolic Aggregate Approximation (SAX) discretization method, achieved 92% average precision in recognizing a predefined set of dance gestures. The study also showed that the most discriminative features for representing Bali traditional dance gestures are the skeleton joint features of the left/right foot and left/right elbow.
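
    The SAX discretization used before grammar induction turns a joint trajectory into a symbol string; a minimal sketch for a four-letter alphabet (segment count and alphabet size are illustrative):

```python
import numpy as np

def sax(series, n_segments=16, alphabet="abcd"):
    """Symbolic Aggregate approXimation: z-normalise, reduce by piecewise
    aggregate approximation, then map segment means to symbols using the
    N(0,1) quartile breakpoints for a 4-letter alphabet."""
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / (x.std() + 1e-9)
    x = x[: len(x) // n_segments * n_segments]       # truncate to a multiple
    paa = x.reshape(n_segments, -1).mean(axis=1)
    breakpoints = [-0.67, 0.0, 0.67]
    return "".join(alphabet[np.searchsorted(breakpoints, v)] for v in paa)
```

    Symbol strings of this kind, one per skeleton joint trajectory, would then feed the Alergia grammar-induction step.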

  11. Conditional random field-based gesture recognition with depth information

    Science.gov (United States)

    Chung, Hyunsook; Yang, Hee-Deok

    2013-01-01

    Gesture recognition is useful for human-computer interaction. The difficulty of gesture recognition is that instances of gestures vary both in motion and shape in three-dimensional (3-D) space. We use depth information generated using Microsoft's Kinect in order to detect 3-D human body components and apply a threshold model with a conditional random field in order to recognize meaningful gestures using continuous motion information. Body gesture recognition is achieved through a framework consisting of two steps. First, a human subject is described by a set of features, encoding the angular relationship between body components in 3-D space. Second, a feature vector is recognized using a threshold model with a conditional random field. In order to show the performance of the proposed method, we use a public data set, the Microsoft Research Cambridge-12 Kinect gesture database. The experimental results demonstrate that the proposed method can efficiently and effectively recognize body gestures automatically.
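
    The angular features encoding the relationship between body components can be computed directly from Kinect joint positions; a minimal sketch (joint names and pairings are illustrative):

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle (degrees) at joint b formed by 3-D points a-b-c, e.g. the elbow
    angle from shoulder, elbow and wrist positions."""
    v1, v2 = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cosang = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
```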

  12. The Function of Gesture in an Architectural Design Meeting

    CERN Document Server

    Visser, Willemien

    2009-01-01

    This text presents a cognitive-psychology analysis of spontaneous, co-speech gestures in a face-to-face architectural design meeting (A1 in DTRS7). The long-term objective is to formulate specifications for remote collaborative-design systems, especially for supporting the use of different semiotic modalities (multi-modal interaction). According to their function for design, interaction, and collaboration, we distinguish different gesture families: representational (entity designating or specifying), organisational (management of discourse, interaction, or functional design actions), focalising, discourse and interaction modulating, and disambiguating gestures. Discussion and conclusion concern the following points. It is impossible to attribute fixed functions to particular gesture forms. "Designating" gestures may also have a design function. The gestures identified in A1 possess a certain generic character. The gestures identified are neither systematically irreplaceable, nor optional accessories to speech...

  13. Hand movements with a phase structure and gestures that depict action stem from a left hemispheric system of conceptualization.

    Science.gov (United States)

    Helmich, I; Lausberg, H

    2014-10-01

    The present study addresses the previously discussed controversy on the contribution of the right and left cerebral hemispheres to the production and conceptualization of spontaneous hand movements and gestures. Although it has been shown that each hemisphere contains the ability to produce hand movements, findings of left-hemispherically lateralized motor functions challenge the view of a contralateral hand movement production system. To examine hemispheric specialization in hand movement and gesture production, ten right-handed participants were tachistoscopically presented with pictures of everyday life actions. The participants were asked to demonstrate with their hands, but without speaking, what they had seen in the drawing. Two independent blind raters evaluated the videotaped hand movements and gestures employing the Neuropsychological Gesture Coding System. The results showed that the overall frequency of right- and left-hand movements is equal, independent of stimulus lateralization. When hand movements were analyzed considering their structure, presentation of the action stimuli to the left hemisphere resulted in more hand movements with a phase structure than presentation to the right hemisphere. Furthermore, presentation to the left hemisphere resulted in more right- and left-hand movements with a phase structure, whereas presentation to the right hemisphere only increased contralateral left-hand movements with a phase structure as compared to hand movements without a phase structure. Gestures that depict action were primarily displayed in response to stimuli presented in the right visual field rather than the left one. The present study shows that both hemispheres possess the faculty to produce hand movements in response to action stimuli. However, the left hemisphere dominates the production of hand movements with a phase structure and gestures that depict action. We therefore conclude that hand movements with a phase structure and gestures that depict action stem from a left hemispheric system of conceptualization.

  14. The effect of static and dynamic visual gestures on stuttering inhibition.

    Science.gov (United States)

    Guntupalli, Vijaya K; Nanjundeswaran, Chayadevie; Kalinowski, Joseph; Dayalu, Vikram N

    2011-03-29

    The aim of the study was to evaluate the role of steady-state and dynamic visual gestures of vowels in stuttering inhibition. Eight adults who stuttered recited sentences from memory while watching video presentations of the following visual speech gestures: (a) a steady-state /u/, (b) dynamic production of /a-i-u/, (c) steady-state /u/ with an accompanying audible 1 kHz pure tone, and (d) dynamic production of /a-i-u/ with an accompanying audible 1 kHz pure tone. A 1 kHz pure tone and a no-external-signal condition served as control conditions. Results revealed a significant main effect of auditory condition on stuttering frequency. Relative to the no-external-signal condition, the combined visual plus pure tone conditions resulted in a statistically significant reduction in stuttering frequency. In addition, a significant difference in stuttering frequency was also observed when the visual plus pure tone conditions were compared to the visual-only conditions. However, no significant differences were observed between the no-external-signal condition and the visual-only conditions, or between the no-external-signal condition and the pure tone condition. These findings contrast with previous findings in which similar vowel gestures presented via the auditory modality resulted in high levels of stuttering inhibition. The differential role of sensory modalities in speech perception and production, as well as their individual capacities to transfer gestural information for the purposes of stuttering inhibition, is discussed.

  15. Password Based Hand Gesture Controlled Robot

    Directory of Open Access Journals (Sweden)

    Shanmukha Rao

    2016-04-01

    Gestures are among the most natural ways of communication between humans and computers in real systems, and hand gestures are an important method of non-verbal communication for humans. MATLAB-based colour image processing is used to recognize the hand gestures, and wireless communication makes it easier to interact with the robot. The objective of this project is to build a password-protected, wireless, gesture-controlled robot using an Arduino and RF transmitter and receiver modules. The continuous images are processed and command signals are sent to the Arduino Uno microcontroller; according to the number of fingers shown, commands pass through the RF transmitter, are received at the receiver end, and drive the motors in a particular direction. The robot moves forward, backward, right and left when we show one, two, three or four fingers (fingers with a red colour band or tape), respectively. As soon as the hand moves out of the frame, the robot stops immediately. This can be used by physically disabled people who cannot use their hands to move a wheelchair, and in various military applications involving radioactive substances that cannot be touched by the human hand.
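
    The finger-count-to-command mapping described above reduces to a small lookup; a sketch in which the command names are hypothetical:

```python
COMMANDS = {1: "FORWARD", 2: "BACKWARD", 3: "RIGHT", 4: "LEFT"}

def command_for(finger_count):
    """Map a detected finger count to a drive command; anything else,
    including the hand leaving the frame, stops the robot."""
    return COMMANDS.get(finger_count, "STOP")
```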

  16. Gesture recognition for interactive exercise programs.

    Science.gov (United States)

    Perkins, Jedediah; Pavel, Misha; Jimison, Holly B; Scott, Susan

    2008-01-01

    This paper describes a gesture recognition system which can recognize seated exercises that will be incorporated into an in-home automated interactive exercise program. Hidden Markov Models (HMMs) are used as a motion classifier, with motion features extracted from the grayscale images and the location of the subject's head estimated at initialization. An overall recognition rate of 94.1% is achieved.
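
    A per-class HMM classifier of this kind can be sketched with the hmmlearn library (an assumed stand-in; the paper does not name its implementation), training one Gaussian HMM per seated exercise and picking the best-scoring model at test time:

```python
import numpy as np
from hmmlearn import hmm

def train_models(sequences_by_exercise, n_states=5):
    """Fit one Gaussian HMM per exercise from lists of (n_frames, n_features)
    motion-feature sequences."""
    models = {}
    for name, seqs in sequences_by_exercise.items():
        X, lengths = np.vstack(seqs), [len(s) for s in seqs]
        models[name] = hmm.GaussianHMM(n_components=n_states,
                                       covariance_type="diag").fit(X, lengths)
    return models

def classify(models, sequence):
    """Label a new sequence with the exercise whose HMM scores it highest."""
    return max(models, key=lambda name: models[name].score(sequence))
```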

  17. Recognition of Deictic Gestures for Wearable Computing

    DEFF Research Database (Denmark)

    Moeslund, Thomas B.; Nørgaard, Lau

    2006-01-01

    ...non-invasive hand gesture recognition system aimed at deictic gestures. Our system is based on the powerful Sequential Monte Carlo framework, which is enhanced with respect to increased robustness. This is achieved by using ratios in the likelihood function together with two image cues: edges and skin color. The system...

  18. A Prelinguistic Gestural Universal of Human Communication

    Science.gov (United States)

    Liszkowski, Ulf; Brown, Penny; Callaghan, Tara; Takada, Akira; de Vos, Conny

    2012-01-01

    Several cognitive accounts of human communication argue for a language-independent, prelinguistic basis of human communication and language. The current study provides evidence for the universality of a prelinguistic gestural basis for human communication. We used a standardized, semi-natural elicitation procedure in seven very different cultures…

  19. Humanoid Upper Torso Complexity for Displaying Gestures

    Directory of Open Access Journals (Sweden)

    Robert Richardson

    2012-05-01

    Body language is an important part of human-to-human communication; therefore body language in humanoid robots is very important for successful communication and social interaction with humans. The number of degrees of freedom (d.o.f.) necessary to achieve realistic body language in robots has been investigated. Using animation, three robots were simulated performing body language gestures; the complex model was given 25 d.o.f., the simplified model 18 d.o.f. and the basic model 10 d.o.f. A subjective survey was created online using these animations, to obtain people's opinions on the realism of the gestures and to see if they could recognize the emotions portrayed. It was concluded that the basic system was the least realistic, the complex system the most realistic, and the simplified system only slightly less realistic than the human. Modular robotic joints were then fabricated so that the gestures could be implemented experimentally. The experimental results demonstrate that, through simplification of the required degrees of freedom, the gestures can be experimentally reproduced.

  20. A Prelinguistic Gestural Universal of Human Communication

    Science.gov (United States)

    Liszkowski, Ulf; Brown, Penny; Callaghan, Tara; Takada, Akira; de Vos, Conny

    2012-01-01

    Several cognitive accounts of human communication argue for a language-independent, prelinguistic basis of human communication and language. The current study provides evidence for the universality of a prelinguistic gestural basis for human communication. We used a standardized, semi-natural elicitation procedure in seven very different cultures…

  1. Humanoid Upper Torso Complexity for Displaying Gestures

    Directory of Open Access Journals (Sweden)

    Robert Richardson

    2008-11-01

    Body language is an important part of human-to-human communication; therefore body language in humanoid robots is very important for successful communication and social interaction with humans. The number of degrees of freedom (d.o.f.) necessary to achieve realistic body language in robots has been investigated. Using animation, three robots were simulated performing body language gestures; the complex model was given 25 d.o.f., the simplified model 18 d.o.f. and the basic model 10 d.o.f. A subjective survey was created online using these animations, to obtain people's opinions on the realism of the gestures and to see if they could recognise the emotions portrayed. It was concluded that the basic system was the least realistic, the complex system the most realistic, and the simplified system only slightly less realistic than the human. Modular robotic joints were then fabricated so that the gestures could be implemented experimentally. The experimental results demonstrate that, through simplification of the required degrees of freedom, the gestures can be experimentally reproduced.

  2. Do French-English Bilingual Children Gesture More than Monolingual Children?

    Science.gov (United States)

    Nicoladis, Elena; Pika, Simone; Marentette, Paula

    2009-01-01

    Previous studies have shown that bilingual adults use more gestures than English monolinguals. Because no study has compared the gestures of bilinguals and monolinguals in both languages, the high gesture rate could be due to transfer from a high gesture language or could result from the use of gesture to aid in linguistic access. In this study we…

  3. Single and Multiple Hand Gesture Recognition Systems: A Comparative Analysis

    Directory of Open Access Journals (Sweden)

    Siddharth Rautaray

    2014-10-01

    With the evolution of higher computing speeds, efficient communication technologies, and advanced display techniques, legacy HCI techniques have become obsolete and are no longer helpful for the accurate and fast flow of information in present-day computing devices. Hence, user-friendly human-machine interfaces for real-time human-computer interaction have to be designed and developed to make man-machine interaction more intuitive and user friendly. Vision-based hand gesture recognition affords users the ability to interact with computers in more natural and intuitive ways. These gesture recognition systems generally consist of three main modules, namely hand segmentation, hand tracking and gesture recognition from hand features, designed using different image processing techniques and then integrated with different applications. New interfaces based on hand gesture recognition are increasingly being designed for interaction with computing devices. This paper is an effort to provide a comparative analysis between real-time vision-based hand gesture recognition systems based on interaction using single and multiple hand gestures. Single hand gesture recognition systems (SHGRS) are less complex to implement, but constrain the number of distinct gestures, whereas the permutations and combinations possible with multiple hands give multiple hand gesture recognition systems (MHGRS) a much larger gesture vocabulary. The thorough comparative analysis has been done on various other vital parameters for the recognition systems.

  4. Gesture's role in speaking, learning, and creating language.

    Science.gov (United States)

    Goldin-Meadow, Susan; Alibali, Martha Wagner

    2013-01-01

    When speakers talk, they gesture. The goal of this review is to investigate the contribution that these gestures make to how we communicate and think. Gesture can play a role in communication and thought at many timespans. We explore, in turn, gesture's contribution to how language is produced and understood in the moment; its contribution to how we learn language and other cognitive skills; and its contribution to how language is created over generations, over childhood, and on the spot. We find that the gestures speakers produce when they talk are integral to communication and can be harnessed in a number of ways. (a) Gesture reflects speakers' thoughts, often their unspoken thoughts, and thus can serve as a window onto cognition. Encouraging speakers to gesture can thus provide another route for teachers, clinicians, interviewers, etc., to better understand their communication partners. (b) Gesture can change speakers' thoughts. Encouraging gesture thus has the potential to change how students, patients, witnesses, etc., think about a problem and, as a result, alter the course of learning, therapy, or an interchange. (c) Gesture provides building blocks that can be used to construct a language. By watching how children and adults who do not already have a language put those blocks together, we can observe the process of language creation. Our hands are with us at all times and thus provide researchers and learners with an ever-present tool for understanding how we talk and think.

  5. Musical Shaping Gestures: Considerations about Terminology and Methodology

    Directory of Open Access Journals (Sweden)

    Elaine King

    2013-12-01

    Fulford and Ginsborg's investigation into non-verbal communication during music rehearsal-talk between performers with and without hearing impairments extends existing research in the field of gesture studies by contributing significantly to our understanding of musicians' physical gestures as well as opening up discussion about the relationship between speech, sign and gesture in discourse about music. Importantly, the authors weigh up the possibility of an emerging sign language about music. This commentary focuses on three key considerations in response to their paper: first, use of terminology in the study of gesture, specifically about 'musical shaping gestures' (MSGs); second, methodological issues about capturing physical gestures; and third, evaluation of the application of gesture research beyond the rehearsal context. While the difficulties of categorizing gestures in observational research are acknowledged, I indicate that the consistent application of terminology from outside and within the study is paramount. I also suggest that the classification of MSGs might be based upon a set of observed physical characteristics within a single gesture, including size, duration, speed, plane and handedness, leading towards an alternative taxonomy for interpreting these data. Finally, evaluation of the application of gesture research in education and performance arenas is provided.

  6. Meaningful gesture in monkeys? Investigating whether mandrills create social culture.

    Science.gov (United States)

    Laidre, Mark E

    2011-02-02

    Human societies exhibit a rich array of gestures with cultural origins. Often these gestures are found exclusively in local populations, where their meaning has been crafted by a community into a shared convention. In nonhuman primates like African monkeys, little evidence exists for such culturally-conventionalized gestures. Here I report a striking gesture unique to a single community of mandrills (Mandrillus sphinx) among nineteen studied across North America, Africa, and Europe. The gesture was found within a community of 23 mandrills where individuals old and young, female and male covered their eyes with their hands for periods which could exceed 30 min, often while simultaneously raising their elbow prominently into the air. This 'Eye covering' gesture has been performed within the community for a decade, enduring deaths, removals, and births, and it persists into the present. Differential responses to Eye covering versus controls suggested that the gesture might have a locally-respected meaning, potentially functioning over a distance to inhibit interruptions as a 'do not disturb' sign operates. The creation of this gesture by monkeys suggests that the ability to cultivate shared meanings using novel manual acts may be distributed more broadly beyond the human species. Although logistically difficult with primates, the translocation of gesturers between communities remains critical to experimentally establishing the possible cultural origin and transmission of nonhuman gestures.

  7. 3D Hand Gesture Analysis Through a Real-time Gesture Search Engine

    Directory of Open Access Journals (Sweden)

    Shahrouz Yousefi

    2015-06-01

    3D gesture recognition and tracking are highly desired features of interaction design in future mobile and smart environments. Specifically, in virtual/augmented reality applications, intuitive interaction with the physical space seems unavoidable, and 3D gestural interaction might be the most effective alternative to current input facilities such as touchscreens. In this paper, we introduce a novel solution for real-time 3D gesture-based interaction by finding the best match from an extremely large gesture database. This database includes images of various articulated hand gestures with annotated 3D position/orientation parameters of the hand joints. Our unique matching algorithm is based on hierarchical scoring of low-level edge-orientation features between the query frames and the database, retrieving the best match. Once the best match is found in the database at each moment, the pre-recorded 3D motion parameters can instantly be used for natural interaction. The proposed bare-hand interaction technology performs in real time with high accuracy using an ordinary camera.
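
    The retrieval idea (match a query frame's edge-orientation description against a large annotated database, then reuse the stored 3D joint parameters) can be sketched as below; the paper scores features hierarchically, which this flat histogram and nearest-neighbour search only approximate:

```python
import numpy as np

def edge_orientation_hist(gray, bins=9):
    """Gradient-magnitude-weighted histogram of edge orientations in [0, pi)."""
    gy, gx = np.gradient(gray.astype(float))
    ang = np.arctan2(gy, gx) % np.pi
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi),
                           weights=np.hypot(gx, gy))
    return hist / (hist.sum() + 1e-9)

def best_match(query_hist, database_hists):
    """Index of the nearest database entry; its annotated 3D hand-joint
    parameters can then drive the interaction directly."""
    return int(np.argmin(np.linalg.norm(database_hists - query_hist, axis=1)))
```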

  8. 3D Hand Gesture Analysis through a Real-Time Gesture Search Engine

    Directory of Open Access Journals (Sweden)

    Shahrouz Yousefi

    2015-06-01

    3D gesture recognition and tracking are highly desired features of interaction design in future mobile and smart environments. Specifically, in virtual/augmented reality applications, intuitive interaction with the physical space seems unavoidable, and 3D gestural interaction might be the most effective alternative to current input facilities such as touchscreens. In this paper, we introduce a novel solution for real-time 3D gesture-based interaction by finding the best match from an extremely large gesture database. This database includes images of various articulated hand gestures with annotated 3D position/orientation parameters of the hand joints. Our unique matching algorithm is based on hierarchical scoring of low-level edge-orientation features between the query frames and the database, retrieving the best match. Once the best match is found in the database at each moment, the pre-recorded 3D motion parameters can instantly be used for natural interaction. The proposed bare-hand interaction technology performs in real time with high accuracy using an ordinary camera.

  9. The origins of non-human primates' manual gestures.

    Science.gov (United States)

    Liebal, Katja; Call, Josep

    2012-01-12

    The increasing body of research into human and non-human primates' gestural communication reflects the interest in a comparative approach to human communication, particularly possible scenarios of language evolution. One of the central challenges of this field of research is to identify appropriate criteria to differentiate a gesture from other non-communicative actions. After an introduction to the criteria currently used to define non-human primates' gestures and an overview of ongoing research, we discuss different pathways of how manual actions are transformed into manual gestures in both phylogeny and ontogeny. Currently, the relationship between actions and gestures is not only investigated on a behavioural, but also on a neural level. Here, we focus on recent evidence concerning the differential laterality of manual actions and gestures in apes in the framework of a functional asymmetry of the brain for both hand use and language.

  10. Towards successful user interaction with systems: focusing on user-derived gestures for smart home systems.

    Science.gov (United States)

    Choi, Eunjung; Kwon, Sunghyuk; Lee, Donghun; Lee, Hogin; Chung, Min K

    2014-07-01

    Various studies that derived gesture commands from users have used the frequency ratio to select popular gestures among the users. However, the users select only one gesture from a limited number of gestures that they could imagine during an experiment, and thus, the selected gesture may not always be the best gesture. Therefore, two experiments including the same participants were conducted to identify whether the participants maintain their own gestures after observing other gestures. As a result, 66% of the top gestures were different between the two experiments. Thus, to verify the changed gestures between the two experiments, a third experiment including another set of participants was conducted, which showed that the selected gestures were similar to those from the second experiment. This finding implies that the method of using the frequency in the first step does not necessarily guarantee the popularity of the gestures.

  11. Contrasting effects of errorless naming treatment and gestural facilitation for word retrieval in aphasia.

    Science.gov (United States)

    Raymer, Anastasia M; McHose, Beth; Smith, Kimberly G; Iman, Lisa; Ambrose, Alexis; Casselton, Colleen

    2012-01-01

    We compared the effects of two treatments for aphasic word retrieval impairments, errorless naming treatment (ENT) and gestural facilitation of naming (GES), within the same individuals, anticipating that the use of gesture would enhance the effect of treatment over errorless treatment alone. In addition to picture naming, we evaluated results for other outcome measures that were largely untested in earlier ENT studies. In a single participant crossover treatment design, we examined the effects of ENT and GES in eight individuals with stroke-induced aphasia and word retrieval impairments (three semantic anomia, five phonological anomia) in counterbalanced phases across participants. We evaluated effects of the two treatments for a daily picture naming/gesture production probe measure and in standardised aphasia tests and communication rating scales administered across phases of the experiment. Both treatments led to improvements in naming of trained words (small-to-large effect sizes) in individuals with semantic and phonological anomia. Small generalised naming improvements were noted for three individuals with phonological anomia. GES improved use of corresponding gestures for trained words (large effect sizes). Results were largely maintained at one month post-treatment completion. Increases in scores on standardised aphasia testing also occurred for both ENT and GES training. Both ENT and GES led to improvements in naming measures, with no clear difference between treatments. Increased use of gestures following GES provided a potential compensatory means of communication for those who did not improve verbal skills. Both treatments are considered to be effective methods to promote recovery of word retrieval and verbal production skills in individuals with aphasia.

  12. Human computer interaction using hand gestures

    CERN Document Server

    Premaratne, Prashan

    2014-01-01

    Human computer interaction (HCI) plays a vital role in bridging the 'Digital Divide', bringing people closer to consumer electronics control in the 'lounge'. Keyboards, mice and remote controls alienate old and new generations alike from control interfaces. Hand gesture recognition systems bring hope of connecting people with machines in a natural way. This will lead to consumers being able to use their hands naturally to communicate with any electronic equipment in their 'lounge'. This monograph covers state-of-the-art hand gesture recognition approaches and how they have evolved since their inception. The author also details his research in this area over the past 8 years and considers how HCI may develop in the future. This monograph will serve as a valuable guide for researchers venturing into the world of HCI.

  13. Recognition of Gestures using Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Marcel MORE

    2013-12-01

    Full Text Available Sensors for motion measurements are now becoming more widespread. Thanks to their parameters and affordability they are already used not only in the professional sector, but also in devices intended for daily use or entertainment. One of their applications is in the control of devices by gestures. Systems that can determine the type of gesture from measured motion have many uses. Some are, for example, in medical practice, but they are still more often used in devices such as cell phones, where they serve as a non-standard form of input. Today there are already several approaches for solving this problem, but building a sufficiently reliable system is still a challenging task. In our project we are developing a solution based on an artificial neural network. Unlike other solutions, this one does not require building a model for each measuring system, and thus it can be used in combination with various sensors with only minimal changes to its structure.

  14. Static hand gesture recognition from a video

    Science.gov (United States)

    Rokade, Rajeshree S.; Doye, Dharmpal

    2011-10-01

    A sign language (also signed language) is a language which, instead of acoustically conveyed sound patterns, uses visually transmitted sign patterns to convey meaning - "simultaneously combining hand shapes, orientation and movement of the hands". Sign languages commonly develop in deaf communities, which can include interpreters, friends and families of deaf people as well as people who are deaf or hard of hearing themselves. In this paper, we propose a novel system for recognition of static hand gestures from a video, based on a Kohonen neural network. We propose an algorithm to separate out key frames, which contain correct gestures, from a video sequence. We segment hand images from complex and non-uniform backgrounds. Features are extracted by applying the Kohonen network to key frames, and recognition is performed.
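    The record gives no implementation detail, but the core of a Kohonen self-organizing map is compact. Below is an illustrative NumPy training loop; the grid size, learning-rate schedule and neighbourhood decay are assumptions for the sketch, not values from the paper:

```python
import numpy as np

def train_som(features, grid=(8, 8), epochs=20, lr0=0.5, sigma0=2.0):
    """Train a Kohonen self-organizing map on gesture feature vectors.

    features: (n_samples, n_dims) array, e.g. flattened hand images.
    Returns the trained weight grid of shape (grid[0], grid[1], n_dims).
    """
    rng = np.random.default_rng(0)
    h, w = grid
    weights = rng.random((h, w, features.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    n_steps = epochs * len(features)
    step = 0
    for _ in range(epochs):
        for x in features:
            # Decaying learning rate and neighbourhood radius.
            frac = step / n_steps
            lr = lr0 * (1 - frac)
            sigma = sigma0 * (1 - frac) + 1e-3
            # Best-matching unit: grid cell whose weights are closest to x.
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(dists), dists.shape)
            # Pull the BMU's neighbourhood towards the sample.
            grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
            influence = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))
            weights += lr * influence[..., None] * (x - weights)
            step += 1
    return weights
```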

  15. Gesture Control of a Mobile Robot using Kinect Sensor

    OpenAIRE

    Cekova, Katerina; Koceska, Natasa; Koceski, Saso

    2016-01-01

    This paper describes a methodology for gesture control of a custom developed mobile robot, using body gestures and the Microsoft Kinect sensor. The Microsoft Kinect sensor's ability to track joint positions has been used to develop a software application for gesture recognition and for mapping recognized gestures into control commands. The proposed methodology has been experimentally evaluated. The results of the experimental evaluation, presented in the paper, showed that the proposed methodology is accurate.

  16. Engineering gestures for multimodal user interfaces

    OpenAIRE

    Echtler, Florian; Kammer, Dietrich; Vanacken, Davy; Hoste, Lode; Signer, Beat

    2014-01-01

    Despite the increased presence of gestural and multimodal user interfaces in research as well as daily life, the development of such systems still mostly relies on programming concepts which have emerged from classic WIMP user interfaces. This workshop proposes to explore the gap between attempts to formalize and structure development for multimodal interfaces in the research community on the one hand and the lack of adoption of these formal languages and frameworks by practitioners and other researchers on the other.

  17. Distinguishing the communicative functions of gestures

    DEFF Research Database (Denmark)

    Jokinen, Kristiina; Navarretta, Costanza; Paggio, Patrizia

    2008-01-01

    This paper deals with the results of a machine learning experiment conducted on annotated gesture data from two case studies (Danish and Estonian). The data mainly concern facial displays, which are annotated with attributes relating to shape and dynamics, as well as communicative function. The results of the experiments show that the granularity of the attributes used seems appropriate for the task of distinguishing the desired communicative functions. This is a promising result in view of a future automation of the annotation task.

  18. Virtual sculpting with advanced gestural interface

    OpenAIRE

    Kılıboz, Nurettin Çağrı

    2013-01-01

    Ankara : The Department of Computer Engineering and the Graduate School of Engineering and Science of Bilkent University, 2013. Thesis (Master's) -- Bilkent University, 2013. Includes bibliographical references leaves 54-58. In this study, we propose a virtual reality application that can be utilized to design preliminary/conceptual models similar to real world clay sculpting. The proposed system makes use of the innovative gestural interface that enhances the experience of...

  19. What makes a movement a gesture?

    OpenAIRE

    Novack, Miriam A.; Wakefield, Elizabeth M.; Goldin-Meadow, Susan

    2015-01-01

    Theories of how adults interpret the actions of others have focused on the goals and intentions of actors engaged in object-directed actions. Recent research has challenged this assumption, and shown that movements are often interpreted as being for their own sake (Schachner & Carey, 2013). Here we postulate a third interpretation of movement: movement that represents action, but does not literally act on objects in the world. These movements are gestures. In this paper, we describe a framework for predicting when movements are likely to be seen as representations.

  20. Gesture en route to words

    DEFF Research Database (Denmark)

    Jensen de López, Kristine M.

    2010-01-01

    This study explores the communicative production of the gestural and vocal modalities by 8 normally developing children in two different cultures (Danish and Zapotec, a Mexican indigenous culture), aged 16 to 20 months. We analyzed spontaneous production of gestures and words in the children's transition to the two...

  1. Gesture recognition in patients with aphasia.

    Science.gov (United States)

    Daniloff, J K; Noll, J D; Fristoe, M; Lloyd, L L

    1982-02-01

    This study focuses on the controversial issue of the integrity of gestural communication abilities in subjects with aphasia. To define the ability of subjects to interpret symbolic gestures, an Amer-Ind Recognition Test (ART) was developed which required no verbal response from the examiner or the subject. The relationships between impairment of Amer-Ind signal recognition and (a) severity of aphasia, (b) listening and talking abilities and (c) the type of response picture used were investigated. Whether subjects more often chose related foils than unrelated foils in a forced-choice format was also examined. Two training tests and the ART are described. Results from administration to 15 aphasic subjects indicated that: (a) all subjects performed equally well, regardless of their aphasia severity classification; (b) action picture recognition was related to listening ability; (c) action pictures were easier to identify than object pictures; and (d) on error responses, subjects overwhelmingly chose related over unrelated foils. The possibility that gestural abilities were relatively well preserved among the subjects tested, in the presence of a wide range of listening and talking deficits, is also discussed.

  2. Illumination-invariant hand gesture recognition

    Science.gov (United States)

    Mendoza-Morales, América I.; Miramontes-Jaramillo, Daniel; Kober, Vitaly

    2015-09-01

    In recent years, human-computer interaction (HCI) has received a lot of interest in industry and science because it provides new ways to interact with modern devices through voice, body, and facial/hand gestures. Applications of HCI range from simple control of home appliances to entertainment. Hand gesture recognition is a particularly interesting problem because the shape and movement of hands are complex and flexible enough to encode many different signs. In this work we propose a three-step algorithm: first, detection of hands in the current frame is carried out; second, hand tracking across the video sequence is performed; finally, robust recognition of gestures across subsequent frames is made. The recognition rate highly depends on non-uniform illumination of the scene and occlusion of hands. In order to overcome these issues we use two Microsoft Kinect devices, utilizing combined information from RGB and infrared sensors. The algorithm performance is tested in terms of recognition rate and processing time.

  3. Real-Time Hand Gesture Recognition Using Finger Segmentation

    Directory of Open Access Journals (Sweden)

    Zhi-hua Chen

    2014-01-01

    Full Text Available Hand gesture recognition is very significant for human-computer interaction. In this work, we present a novel real-time method for hand gesture recognition. In our framework, the hand region is extracted from the background with the background subtraction method. Then, the palm and fingers are segmented so as to detect and recognize the fingers. Finally, a rule classifier is applied to predict the labels of hand gestures. The experiments on the data set of 1300 images show that our method performs well and is highly efficient. Moreover, our method shows better performance than a state-of-the-art method on another data set of hand gestures.

  4. Real-time hand gesture recognition using finger segmentation.

    Science.gov (United States)

    Chen, Zhi-hua; Kim, Jung-Tae; Liang, Jianning; Zhang, Jing; Yuan, Yu-Bo

    2014-01-01

    Hand gesture recognition is very significant for human-computer interaction. In this work, we present a novel real-time method for hand gesture recognition. In our framework, the hand region is extracted from the background with the background subtraction method. Then, the palm and fingers are segmented so as to detect and recognize the fingers. Finally, a rule classifier is applied to predict the labels of hand gestures. The experiments on the data set of 1300 images show that our method performs well and is highly efficient. Moreover, our method shows better performance than a state-of-the-art method on another data set of hand gestures.
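    A pipeline of this shape (background subtraction, hand segmentation, rule-based labelling) can be approximated with standard OpenCV primitives. The sketch below is a generic illustration, not the authors' implementation; the defect-depth threshold and the rule table are invented:

```python
import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

def count_fingers(frame):
    """Rough finger count: background subtraction, largest contour,
    then convexity defects between fingers."""
    mask = subtractor.apply(frame)
    mask = cv2.medianBlur(mask, 5)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    # Deep convexity defects correspond to the valleys between fingers
    # (depth is in fixed-point units; 10000 is an assumed threshold).
    deep = sum(1 for i in range(defects.shape[0]) if defects[i, 0, 3] > 10000)
    return min(deep + 1, 5)

# Hypothetical rule classifier: map finger counts to gesture labels.
RULES = {1: "point", 2: "peace", 5: "open_palm"}
# label = RULES.get(count_fingers(frame), "unknown")
```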

  5. A Hierarchical Model for Continuous Gesture Recognition Using Kinect

    DEFF Research Database (Denmark)

    Jensen, Søren Kejser; Moesgaard, Christoffer; Nielsen, Christoffer Samuel

    2013-01-01

    Human gesture recognition is an area which has been studied thoroughly in recent years, and close to 100% recognition rates in restricted environments have been achieved, often either with single separated gestures in the input stream, or with computationally intensive systems. The results are unfortunately not as striking when it comes to a continuous stream of gestures. In this paper we introduce a hierarchical system for gesture recognition for use in a gaming setting, with a continuous stream of data. Layer 1 is based on Nearest Neighbor Search and layer 2 uses Hidden Markov Models. The system...
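    A two-layer scheme like the one described, frame-wise nearest-neighbour labelling smoothed by a hidden Markov model, can be sketched as follows. This is a generic illustration under assumed data shapes, not the authors' system; the sticky transition matrix and the commented-out training data are assumptions:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def viterbi(obs_logprob, trans_logprob):
    """Most likely state sequence given per-frame log-likelihoods
    (T, n_states) and a transition log-probability matrix."""
    T, n = obs_logprob.shape
    score = obs_logprob[0].copy()
    back = np.zeros((T, n), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + trans_logprob        # (from, to)
        back[t] = np.argmax(cand, axis=0)
        score = cand[back[t], np.arange(n)] + obs_logprob[t]
    path = [int(np.argmax(score))]
    for t in range(T - 1, 0, -1):
        path.append(back[t, path[-1]])
    return path[::-1]

# Layer 1: 1-NN posture labelling of each frame (training data assumed).
knn = KNeighborsClassifier(n_neighbors=1)
# knn.fit(train_frames, train_labels)   # (n, n_joints * 3) skeleton vectors

# Layer 2: sticky transitions favour staying in the same gesture state.
n_states = 4
stay = 0.9
trans = np.full((n_states, n_states), (1 - stay) / (n_states - 1))
np.fill_diagonal(trans, stay)
# frame_logprob = np.log(knn.predict_proba(test_frames) + 1e-9)
# smoothed = viterbi(frame_logprob, np.log(trans))
```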

  6. Human-Computer Interface using Gestures based on Neural Network

    Directory of Open Access Journals (Sweden)

    Aarti Malik

    2014-10-01

    Full Text Available Gestures are powerful tools for non-verbal communication. Human computer interface (HCI) is a growing field which reduces the complexity of interaction between human and machine, in which gestures are used for conveying information or controlling the machine. In the present paper, static hand gestures are utilized for this purpose. The paper presents a novel technique for recognizing hand gestures, i.e. A-Z alphabets, 0-9 numbers and 6 additional control signals (for keyboard and mouse control), by extracting various features of the hand, creating a feature vector table and training a neural network. The proposed work has a recognition rate of 99%.

  7. Gesture Based Educational Software for Children with Acquired Brain Injuries

    Directory of Open Access Journals (Sweden)

    Er. Zainab Pirani

    2010-05-01

    Full Text Available " GESBI” is gesture based audio visual teaching tool designed to help children with acquired brain injuries, providing hours of entertainment in a play-and-learn environment while introducing the foundation skills in basic arithmetic, spelling, reading and solving puzzles. These children communicate with the computer via gestures based on my previous research paper “KONCERN- Hand Gesture Recognition for Physically Impaired” in which gestures are captured by camera and processed without the need of wearing any sensor based gloves etc.

  8. Gesturing more diminishes recall of abstract words when gesture is allowed and concrete words when it is taboo.

    Science.gov (United States)

    Matthews-Saugstad, Krista M; Raymakers, Erik P; Kelty-Stephen, Damian G

    2017-07-01

    Gesture during speech can promote or diminish recall for conversation content. We explored effects of cognitive load on this relationship, manipulating it at two scales: individual-word abstractness and social constraints to prohibit gestures. Prohibited gestures can diminish recall but more so for abstract-word recall. Insofar as movement planning adds to cognitive load, movement amplitude may moderate gesture effects on memory, with greater permitted- and prohibited-gesture movements reducing abstract-word recall and concrete-word recall, respectively. We tested these effects in a dyadic game in which 39 adult participants described words to confederates without naming the word or five related words. Results supported our expectations and indicated that memory effects of gesturing depend on social, cognitive, and motoric aspects of discourse.

  9. Cortical correlates of gesture processing: clues to the cerebral mechanisms underlying apraxia during the imitation of meaningless gestures.

    Science.gov (United States)

    Hermsdörfer, J; Goldenberg, G; Wachsmuth, C; Conrad, B; Ceballos-Baumann, A O; Bartenstein, P; Schwaiger, M; Boecker, H

    2001-07-01

    The clinical test of imitation of meaningless gestures is highly sensitive in revealing limb apraxia after dominant left brain damage. To relate lesion locations in apraxic patients to functional brain activation and to reveal the neuronal network subserving gesture representation, repeated H2(15O)-PET measurements were made in seven healthy subjects during a gesture discrimination task. Observing paired images of either meaningless hand or meaningless finger gestures, subjects had to indicate whether they were identical or different. As a control condition subjects simply had to indicate whether two portrayed persons were identical or not. Brain activity during the discrimination of hand gestures was strongly lateralized to the left hemisphere, a prominent peak activation being localized within the inferior parietal cortex (BA40). The discrimination of finger gestures induced a more symmetrical activation and rCBF peaks in the right intraparietal sulcus and in medial visual association areas (BA18/19). Two additional foci of prominent rCBF increase were found. One focus was located at the left lateral occipitotemporal junction (BA 19/37) and was related to both tasks; the other in the pre-SMA was particularly related to hand gestures. The pattern of task-dependent activation corresponds closely to the predictions made from the clinical findings, and underlines the left brain dominance for meaningless hand gestures and the critical involvement of the parietal cortex. The lateral visual association areas appear to support first stages of gesture representation, and the parietal cortex is part of the dorsal action stream. Finger gestures may require in addition precise visual analysis and spatial attention enabled by occipital and right intraparietal activity. Pre-SMA activity during the perception of hand gestures may reflect engagement of a network that is intimately related to gesture execution.

  10. Archetypal Gesture and Everyday Gesture: a fundamental binomial in Delsartean theory

    Directory of Open Access Journals (Sweden)

    Elena Randi

    2012-11-01

    Full Text Available This text presents François Delsarte’s system from a historical-exploratory viewpoint, focusing on some particular aspects of the work of the French master and the interpretation of his work by some of his main disciples. The article describes the status of the body and its importance in the Delsarte system, taking the notions of archetypal gesture and everyday gesture as the bases of this system. Indeed, the text highlights both historical facts obtained from the Delsarte archive, and arguments questioning the authorship of exercises attributed to Delsarte, which, according to the text, may have been created by his students.

  11. Gesturing with an Injured Brain: How Gesture Helps Children with Early Brain Injury Learn Linguistic Constructions

    Science.gov (United States)

    Ozcaliskan, Seyda; Levine, Susan C.; Goldin-Meadow, Susan

    2013-01-01

    Children with pre/perinatal unilateral brain lesions (PL) show remarkable plasticity for language development. Is this plasticity characterized by the same developmental trajectory that characterizes typically developing (TD) children, with gesture leading the way into speech? We explored this question, comparing eleven children with PL -- matched…

  12. Do Gestural Interfaces Promote Thinking? Embodied Interaction: Congruent Gestures and Direct Touch Promote Performance in Math

    Science.gov (United States)

    Segal, Ayelet

    2011-01-01

    Can action support cognition? Can direct touch support performance? Embodied interaction involving digital devices is based on the theory of grounded cognition. Embodied interaction with gestural interfaces involves more of our senses than traditional (mouse-based) interfaces, and in particular includes direct touch and physical movement, which…

  13. The Effects of Prohibiting Gestures on Children's Lexical Retrieval Ability

    Science.gov (United States)

    Pine, Karen J.; Bird, Hannah; Kirk, Elizabeth

    2007-01-01

    Two alternative accounts have been proposed to explain the role of gestures in thinking and speaking. The Information Packaging Hypothesis (Kita, 2000) claims that gestures are important for the conceptual packaging of information before it is coded into a linguistic form for speech. The Lexical Retrieval Hypothesis (Rauscher, Krauss & Chen, 1996)…

  14. Differential Diagnosis of Severe Speech Disorders Using Speech Gestures

    Science.gov (United States)

    Bahr, Ruth Huntley

    2005-01-01

    The differentiation of childhood apraxia of speech from severe phonological disorder is a common clinical problem. This article reports on an attempt to describe speech errors in children with childhood apraxia of speech on the basis of gesture use and acoustic analyses of articulatory gestures. The focus was on the movement of articulators and…

  15. Associations among Play, Gesture and Early Spoken Language Acquisition

    Science.gov (United States)

    Hall, Suzanne; Rumney, Lisa; Holler, Judith; Kidd, Evan

    2013-01-01

    The present study investigated the developmental interrelationships between play, gesture use and spoken language development in children aged 18-31 months. The children completed two tasks: (i) a structured measure of pretend (or "symbolic") play and (ii) a measure of vocabulary knowledge in which children have been shown to gesture.…

  16. Eye-based head gestures for interaction in the car

    DEFF Research Database (Denmark)

    Mardanbeigi, Diako; Witzner Hansen, Dan

    2013-01-01

    In this paper we suggest using a new method for head gesture recognition in the automotive context. This method involves using only the eye tracker for measuring the head movements through the eye movements when the gaze point is fixed. It allows for identifying a wide range of head gestures...

  17. Supporting One-Time Point Annotations for Gesture Recognition.

    Science.gov (United States)

    Nguyen-Dinh, Long-Van; Calatroni, Alberto; Troester, Gerhard

    2016-12-08

    This paper investigates a new annotation technique that significantly reduces the time needed to annotate training data for gesture recognition. Conventionally, the annotations comprise the start and end times, and the corresponding labels, of gestures in sensor recordings. In this work, we propose a one-time point annotation in which labelers do not have to select the start and end times carefully, but just mark a single time point within the time a gesture is happening. The technique gives labelers more freedom and significantly reduces their burden. To make one-time point annotations applicable, we propose a novel BoundarySearch algorithm to automatically find the correct temporal boundaries of gestures by discovering data patterns around their given one-time point annotations. The corrected annotations are then used to train gesture models. We evaluate the method on three applications from wearable gesture recognition with various gesture classes (10-17 classes) recorded with different sensor modalities. The results show that training on the corrected annotations can achieve performance close to fully supervised training on clean annotations (lower by just up to 5% F1-score on average). Furthermore, the BoundarySearch algorithm is also evaluated on the ChaLearn 2014 multi-modal gesture recognition challenge, recorded with Kinect sensors from computer vision, and achieves similar results.
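    The published BoundarySearch algorithm discovers boundaries from data patterns around the click. As a rough illustration of the idea only (the energy measure and thresholds here are assumptions, not the paper's method), one can expand outwards from the point annotation until the signal returns to a resting level:

```python
import numpy as np

def expand_boundaries(signal, point_idx, win=25, rest_factor=1.5):
    """Grow a segment outwards from a one-time point annotation until
    the local signal energy falls back to the resting level.

    signal: 1D array, e.g. accelerometer magnitude over time.
    point_idx: frame index of the labeler's single click.
    """
    # Smoothed deviation from the mean as a crude activity measure.
    energy = np.convolve(np.abs(signal - signal.mean()),
                         np.ones(win) / win, mode="same")
    rest = np.median(energy) * rest_factor  # assumed resting threshold
    start = point_idx
    while start > 0 and energy[start] > rest:
        start -= 1
    end = point_idx
    while end < len(signal) - 1 and energy[end] > rest:
        end += 1
    return start, end
```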

  18. Hand Gesture and Mathematics Learning: Lessons from an Avatar

    Science.gov (United States)

    Cook, Susan Wagner; Friedman, Howard S.; Duggan, Katherine A.; Cui, Jian; Popescu, Voicu

    2017-01-01

    A beneficial effect of gesture on learning has been demonstrated in multiple domains, including mathematics, science, and foreign language vocabulary. However, because gesture is known to co-vary with other non-verbal behaviors, including eye gaze and prosody along with face, lip, and body movements, it is possible the beneficial effect of gesture…

  19. Seeing Signs: On the appearance of manual movements in gestures

    NARCIS (Netherlands)

    Arendsen, J.

    2009-01-01

    This dissertation presents the results of a series of studies on the appearance of manual movements in gestures. The main goal of this research is to increase our understanding of how humans perceive signs and other gestures. Generated insights from human perception may aid the development of technology...

  1. Gestures as Semiotic Resources in the Mathematics Classroom

    Science.gov (United States)

    Arzarello, Ferdinando; Paola, Domingo; Robutti, Ornella; Sabena, Cristina

    2009-01-01

    In this paper, we consider gestures as part of the resources activated in the mathematics classroom: speech, inscriptions, artifacts, etc. As such, gestures are seen as one of the semiotic tools used by students and teacher in mathematics teaching-learning. To analyze them, we introduce a suitable model, the "semiotic bundle." It allows focusing…

  2. Gestural Imitation and Limb Apraxia in Corticobasal Degeneration

    Science.gov (United States)

    Salter, Jennifer E.; Roy, Eric A.; Black, Sandra E.; Joshi, Anish; Almeida, Quincy

    2004-01-01

    Limb apraxia is a common symptom of corticobasal degeneration (CBD). While previous research has shown that individuals with CBD have difficulty imitating transitive (tool-use actions) and intransitive non-representational gestures (nonsense actions), intransitive representational gestures (actions without a tool) have not been examined. In the…

  3. Communicative Effectiveness of Pantomime Gesture in People with Aphasia

    Science.gov (United States)

    Rose, Miranda L.; Mok, Zaneta; Sekine, Kazuki

    2017-01-01

    Background: Human communication occurs through both verbal and visual/motoric modalities. Simultaneous conversational speech and gesture occurs across all cultures and age groups. When verbal communication is compromised, more of the communicative load can be transferred to the gesture modality. Although people with aphasia produce meaning-laden…

  4. How different iconic gestures add to the communication of PWA

    NARCIS (Netherlands)

    van Nispen, Karin

    2016-01-01

    Introduction Gestures can convey information in addition to speech (Beattie et al., 1999). In the absence of conventions on their meaning (McNeill, 2000), people probably rely on iconicity, the mapping between form and meaning, to construct and derive meaning from gesture (Perniss et al., 2010).

  5. Linking Gestures: Cross-Cultural Variation during Instructional Analogies

    Science.gov (United States)

    Richland, Lindsey Engle

    2015-01-01

    Deictic linking gestures, hand and arm motions that physically embody links being communicated between two or more objects in the shared communicative environment, are explored in a cross-cultural sample of mathematics instruction. Linking gestures are specifically examined here when they occur in the context of communicative analogies designed to…

  6. Gestural Introduction of Ground Reference in L2 Narrative Discourse

    Science.gov (United States)

    Yoshioka, Keiko; Kellerman, Eric

    2006-01-01

    In the field of second language acquisition (SLA) and use, learners' gestures have mainly been regarded as a type of communication strategy produced to replace missing words. However, the results of the analyses conducted here on the way in which Dutch learners of Japanese introduce Ground reference in speech and gesture in narrative show that the…

  7. Beat Gestures Modulate Auditory Integration in Speech Perception

    Science.gov (United States)

    Biau, Emmanuel; Soto-Faraco, Salvador

    2013-01-01

    Spontaneous beat gestures are an integral part of the paralinguistic context during face-to-face conversations. Here we investigated the time course of beat-speech integration in speech perception by measuring ERPs evoked by words pronounced with or without an accompanying beat gesture, while participants watched a spoken discourse. Words…

  8. Dynamic gesture recognition based on multiple sensors fusion technology.

    Science.gov (United States)

    Wenhui, Wang; Xiang, Chen; Kongqiao, Wang; Xu, Zhang; Jihai, Yang

    2009-01-01

    This paper investigates the roles of a three-axis accelerometer, surface electromyography (sEMG) sensors and a webcam for dynamic gesture recognition. A decision-level multiple sensor fusion method based on action elements is proposed to distinguish a set of 20 kinds of dynamic hand gestures. Experiments are designed and conducted to collect three kinds of sensor data streams simultaneously during gesture implementation and to compare the performance of different sensor subsets in gesture recognition. Experimental results from three subjects show that the combination of the three kinds of sensors achieves recognition accuracies of 87.5%-91.8%, substantially higher than those of the single-sensor conditions. This study is valuable for realizing continuous and dynamic gesture recognition based on multiple sensor fusion technology for multi-modal interaction.
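    Decision-level fusion of per-sensor classifier outputs can be as simple as a weighted vote. Below is a minimal sketch of that general idea, with invented sensor names and weights rather than the paper's action-element method:

```python
from collections import Counter

def fuse_decisions(decisions, weights=None):
    """Decision-level fusion by weighted majority vote.

    decisions: dict mapping sensor name to its predicted gesture label,
    e.g. {"accelerometer": "wave", "semg": "wave", "camera": "point"}.
    weights: optional dict of per-sensor vote weights (default: equal).
    """
    weights = weights or {sensor: 1.0 for sensor in decisions}
    votes = Counter()
    for sensor, label in decisions.items():
        votes[label] += weights[sensor]
    return votes.most_common(1)[0][0]

print(fuse_decisions({"accelerometer": "wave", "semg": "wave", "camera": "point"}))
# -> 'wave'
```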

  9. Gesture Recognition Based on the Probability Distribution of Arm Trajectories

    Science.gov (United States)

    Wan, Khairunizam; Sawada, Hideyuki

    The use of human motions for interaction between humans and computers is becoming an attractive alternative to verbal media, especially through the visual interpretation of human body motion. In particular, hand gestures serve as a non-verbal medium through which humans can communicate with machines. This paper introduces a 3D motion measurement of the human upper body for the purpose of gesture recognition, based on the probability distribution of arm trajectories. In this study, by examining the characteristics of the arm trajectories given by a signer, motion features are selected and classified using a fuzzy technique. Experimental results show that the features extracted from arm trajectories work effectively for the recognition of dynamic human gestures, and give good performance in classifying various gesture patterns.

  10. Gestures, vocalizations and memory in language origins.

    Directory of Open Access Journals (Sweden)

    Francisco Aboitiz

    2012-02-01

    Full Text Available This article discusses the possible homologies between the human language networks and comparable auditory projection systems in the macaque brain, in an attempt to reconcile two existing views on language evolution: one that emphasizes hand control and gestures, and another that emphasizes auditory-vocal mechanisms. The capacity for language is based on relatively well defined neural substrates whose rudiments have been traced into the non-human primate brain. At its core, this circuit constitutes an auditory-vocal sensorimotor circuit with two main components: a ventral pathway connecting anterior auditory regions with anterior ventrolateral prefrontal areas, and a dorsal pathway connecting auditory areas with parietal areas and with posterior ventrolateral prefrontal areas via the arcuate fasciculus and the superior longitudinal fasciculus. In humans, the dorsal circuit is especially important for phonological processing and phonological working memory, capacities that are critical for language acquisition and for complex syntax processing. In the macaque, the homologue of the dorsal circuit overlaps with an inferior parietal-ventrolateral prefrontal network for hand and gestural action selection that is under voluntary control, while vocalizations are largely fixed and involuntary. The recruitment of this dorsal component for vocalization behavior in the human lineage, together with direct cortical control of the subcortical vocalizing system, is proposed to have marked a fundamental innovation in human evolution, generating an inflection point that permitted the explosion of language and human communication. In this context, vocal communication and gesturing have a common history in primate communication.

  11. ECOPHYSIOLOGICAL ANALYSIS OF CORN (Zea mays L.) - CLIMBING COMMON BEAN (Phaseolus vulgaris L.) INTERCROPPING

    Directory of Open Access Journals (Sweden)

    León Darío Vélez Vargas

    2007-12-01

    Full Text Available This paper reviews the state of knowledge on maize - climbing bean intercropping from an ecophysiological perspective. It starts from a review of the variables evaluated in the consulted investigations and their classification into three categories: descriptive, explanatory and conditioning variables. This classification shows that most research has been predominantly descriptive, concerned with the effects of competition for soil resources and light on the associated species, mainly on yield. Only a few studies address morphological aspects, yield components, and the dynamics of growth and development of both species, aspects that could help identify and explain the causes of the effects of the association on the behaviour of both species, as in the case of yield. Research has concentrated on conditioning variables such as genotypes and population density. In the association, the bean is the more affected species; however, its high morpho-physiological plasticity gives the association advantages over maize and bean monocultures. Understanding how the association functions will make it possible to improve maize and bean production in intercropping and to manage competition relations in agroecosystems.

  12. YIELD AND REACTION TO COLLETOTRICHUM LINDEMUTHIANUM IN CULTIVARS OF CLIMBING BEANS (PHASEOLUS VULGARIS L.)

    Directory of Open Access Journals (Sweden)

    Carolina Gallego G.

    2010-12-01

    Full Text Available Under conditions of the Bogotá plateau (Colombia), 32 cultivars of climbing bean were evaluated for yield components and for their reaction to a mixture of isolates of Colletotrichum lindemuthianum from Boyacá and Cundinamarca. The genotypes that showed both good yield performance and a field resistance reaction to the disease were D. Moreno and 3198. Those that expressed a resistant reaction to anthracnose were 3180, 3182, 3177 and G-2333, and those that performed well in yield components were 3164, 3159, 3176 and Radical. These genotypes could be used as candidate parents in the bean breeding program. Two SCAR molecular markers linked to the anthracnose resistance genes Co-4 and Co-5 were also analysed; none of the evaluated materials amplified the SCAR markers associated with these resistance genes, except for the resistant control G-2333.

  13. Longitudinal Trajectories of Gestural and Linguistic Abilities in Very Preterm Infants in the Second Year of Life

    Science.gov (United States)

    Sansavini, Alessandra; Guarini, Annalisa; Savini, Silvia; Broccoli, Serena; Justice, Laura; Alessandroni, Rosina; Faldella, Giacomo

    2011-01-01

    The present study involved a systematic longitudinal analysis, with three points of assessment in the second year of life, of gestures/actions, word comprehension, and word production in a sample of very preterm infants compared to a sample of full-term infants. The relationships among these competencies as well as their predictive value on…

  14. Gesture Frequency Linked Primarily to Story Length in 4-10-Year Old Children's Stories

    Science.gov (United States)

    Nicoladis, Elena; Marentette, Paula; Navarro, Samuel

    2016-01-01

    Previous studies have shown that older children gesture more while telling a story than younger children. This increase in gesture use has been attributed to increased story complexity. In adults, both narrative complexity and imagery predict gesture frequency. In this study, we tested the strength of three predictors of children's gesture use in…

  15. Depth Camera-Based 3D Hand Gesture Controls with Immersive Tactile Feedback for Natural Mid-Air Gesture Interactions

    Directory of Open Access Journals (Sweden)

    Kwangtaek Kim

    2015-01-01

    Full Text Available Vision-based hand gesture interactions are natural and intuitive when interacting with computers, since we naturally exploit gestures to communicate with other people. However, it is agreed that users suffer from discomfort and fatigue when using gesture-controlled interfaces, due to the lack of physical feedback. To solve the problem, we propose a novel complete solution of a hand gesture control system employing immersive tactile feedback to the user’s hand. For this goal, we first developed a fast and accurate hand-tracking algorithm with a Kinect sensor using the proposed MLBP (modified local binary pattern) that can efficiently analyze 3D shapes in depth images. The superiority of our tracking method was verified in terms of tracking accuracy and speed by comparing with existing methods, Natural Interaction Technology for End-user (NITE), 3D Hand Tracker and CamShift. As the second step, a new tactile feedback technology with a piezoelectric actuator has been developed and integrated into the developed hand tracking algorithm, including the DTW (dynamic time warping) gesture recognition algorithm for a complete solution of an immersive gesture control system. The quantitative and qualitative evaluations of the integrated system were conducted with human subjects, and the results demonstrate that our gesture control with tactile feedback is a promising technology compared to a vision-based gesture control system that has typically no feedback for the user’s gesture inputs. Our study provides researchers and designers with informative guidelines to develop more natural gesture control systems or immersive user interfaces with haptic feedback.

  16. Depth camera-based 3D hand gesture controls with immersive tactile feedback for natural mid-air gesture interactions.

    Science.gov (United States)

    Kim, Kwangtaek; Kim, Joongrock; Choi, Jaesung; Kim, Junghyun; Lee, Sangyoun

    2015-01-08

    Vision-based hand gesture interactions are natural and intuitive when interacting with computers, since we naturally exploit gestures to communicate with other people. However, it is agreed that users suffer from discomfort and fatigue when using gesture-controlled interfaces, due to the lack of physical feedback. To solve the problem, we propose a novel complete solution of a hand gesture control system employing immersive tactile feedback to the user's hand. For this goal, we first developed a fast and accurate hand-tracking algorithm with a Kinect sensor using the proposed MLBP (modified local binary pattern) that can efficiently analyze 3D shapes in depth images. The superiority of our tracking method was verified in terms of tracking accuracy and speed by comparing with existing methods, Natural Interaction Technology for End-user (NITE), 3D Hand Tracker and CamShift. As the second step, a new tactile feedback technology with a piezoelectric actuator has been developed and integrated into the developed hand tracking algorithm, including the DTW (dynamic time warping) gesture recognition algorithm for a complete solution of an immersive gesture control system. The quantitative and qualitative evaluations of the integrated system were conducted with human subjects, and the results demonstrate that our gesture control with tactile feedback is a promising technology compared to a vision-based gesture control system that has typically no feedback for the user's gesture inputs. Our study provides researchers and designers with informative guidelines to develop more natural gesture control systems or immersive user interfaces with haptic feedback.
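    Dynamic time warping, used here for the recognition stage, aligns two variable-length trajectories by minimizing cumulative distance. A textbook implementation with 1-NN template matching (an illustration, not the authors' code) looks like this:

```python
import numpy as np

def dtw_distance(a, b):
    """DTW distance between two trajectories a (n, d) and b (m, d)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

def classify(query, templates):
    """1-NN gesture classification against labelled template trajectories.

    templates: list of (label, trajectory) pairs, e.g.
    [("swipe", np.array(...)), ("circle", np.array(...))].
    """
    return min(templates, key=lambda t: dtw_distance(query, t[1]))[0]
```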

  17. Real-time affine invariant gesture recognition for LED smart lighting control

    Science.gov (United States)

    Chen, Xu; Liao, Miao; Feng, Xiao-Fan

    2015-03-01

    Gesture recognition has attracted extensive research interest in the field of human computer interaction. Real-time affine invariant gesture recognition is an important and challenging problem. This paper presents a robust affine view invariant gesture recognition system for real-time LED smart light control. As far as we know, this is the first time that gesture recognition has been applied to control LED smart lighting in real time. Employing skin detection, hand blobs captured from a top-view camera are first localized and aligned. Subsequently, SVM classifiers trained on HOG features and robust shape features are utilized for gesture recognition. By accurately recognizing two types of gestures ("gesture 8" and a "5 finger gesture"), a user can toggle lighting on/off efficiently and control light intensity on a continuous scale. In each case, gesture recognition is rotation- and translation-invariant. Extensive evaluations in an office setting demonstrate the effectiveness and robustness of the proposed gesture recognition algorithm.
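    The HOG-plus-SVM recognition stage described here follows a standard pattern. A minimal sketch with scikit-image and scikit-learn; the patch size, kernel choice and the commented-out training data are placeholders, not the paper's settings:

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def hog_features(patches):
    """Compute HOG descriptors for aligned 64x64 grayscale hand patches."""
    return np.array([
        hog(p, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
        for p in patches
    ])

clf = SVC(kernel="rbf", probability=True)
# clf.fit(hog_features(train_patches), train_labels)  # e.g. {"gesture_8", "five_fingers"}
# label = clf.predict(hog_features([aligned_hand]))[0]
```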

  18. Optical gesture sensing and depth mapping technologies for head-mounted displays: an overview

    Science.gov (United States)

    Kress, Bernard; Lee, Johnny

    2013-05-01

    Head Mounted Displays (HMDs), and especially see-through HMDs, have recently gained renewed interest, for the first time outside the traditional military and defense realm, as several high-profile consumer electronics companies have presented products set to hit the market. Consumer electronics HMDs have quite different requirements and constraints than their military counterparts. Voice commands are the de-facto interface for such devices, but when voice recognition does not work (no connection to the cloud, for example), trackpad and gesture sensing technologies have to be used to communicate information to the device. In this paper we review the various technologies developed today for integrating optical gesture sensing in a small footprint, as well as the related 3D depth mapping sensors.

  19. From gesture to sign language: conventionalization of classifier constructions by adult hearing learners of British Sign Language.

    Science.gov (United States)

    Marshall, Chloë R; Morgan, Gary

    2015-01-01

    There has long been interest in why languages are shaped the way they are, and in the relationship between sign language and gesture. In sign languages, entity classifiers are handshapes that encode how objects move, how they are located relative to one another, and how multiple objects of the same type are distributed in space. Previous studies have shown that hearing adults who are asked to use only manual gestures to describe how objects move in space will use gestures that bear some similarities to classifiers. We investigated how accurately hearing adults, who had been learning British Sign Language (BSL) for 1-3 years, produce and comprehend classifiers in (static) locative and distributive constructions. In a production task, learners of BSL knew that they could use their hands to represent objects, but they had difficulty choosing the same, conventionalized, handshapes as native signers. They were, however, highly accurate at encoding location and orientation information. Learners therefore show the same pattern found in sign-naïve gesturers. In contrast, handshape, orientation, and location were comprehended with equal (high) accuracy, and testing a group of sign-naïve adults showed that they too were able to understand classifiers with higher than chance accuracy. We conclude that adult learners of BSL bring their visuo-spatial knowledge and gestural abilities to the tasks of understanding and producing constructions that contain entity classifiers. We speculate that investigating the time course of adult sign language acquisition might shed light on how gesture became (and, indeed, becomes) conventionalized during the genesis of sign languages.

  20. An investigation of the use of co-verbal gestures in oral discourse among Chinese speakers with fluent versus non-fluent aphasia and healthy adults

    Directory of Open Access Journals (Sweden)

    Anthony Pak Hin Kong

    2015-04-01

    Full Text Available Introduction: Co-verbal gestures can facilitate word production among persons with aphasia (PWA) (Rose, Douglas, & Matyas, 2002) and play a communicative role in helping PWA convey ideas (Sekine & Rose, 2013). Kong, Law, Kwan, Lai, and Lam (2015) recently reported a systematic approach to independently analyze gesture forms and functions in spontaneous oral discourse. When this annotation framework was used to compare speech-accompanying gestures used by PWA and unimpaired speakers, Kong, Law, Wat, and Lai (2013) found a significantly higher gesture-to-word ratio among PWA. Speakers who were more severe in aphasia or produced a lower percentage of complete or simple sentences in their narratives tended to use more gestures. Moreover, verbal-semantic processing impairment, but not the degree of hemiplegia, was found to affect PWA's employment of gestures. The current study aims to (1) investigate whether the frequency of gestural employment varied across speakers with non-fluent aphasia, fluent aphasia, and their controls, (2) examine how the distribution of gesture forms and functions differed across the three speaker groups, and (3) determine how well factors of complexity of linguistic output, aphasia severity, semantic processing integrity, and hemiplegia predict the frequency of gesture use among PWA. Method: The participants included 23 Cantonese-speaking individuals with fluent aphasia, 21 with non-fluent aphasia, and 23 age- and education-matched controls. Three sets of language samples and video files were collected through the narrative tasks of recounting a personally important event, sequential description, and story-telling, using the Cantonese AphasiaBank protocol (Kong, Law, & Lee, 2009). While the language samples were linguistically quantified to reflect word- and sentential-level performance as well as discourse-level characteristics, the videos were annotated for the form and function of each gesture. All PWAs were...

  1. A Gesture Recognition System for Detecting Behavioral Patterns of ADHD.

    Science.gov (United States)

    Bautista, Miguel Ángel; Hernández-Vela, Antonio; Escalera, Sergio; Igual, Laura; Pujol, Oriol; Moya, Josep; Violant, Verónica; Anguera, María T

    2016-01-01

    We present an application of gesture recognition using an extension of dynamic time warping (DTW) to recognize behavioral patterns of attention deficit hyperactivity disorder (ADHD). We propose an extension of DTW using one-class classifiers in order to be able to encode the variability of a gesture category, and thus perform an alignment between a gesture sample and a gesture class. We model the set of gesture samples of a certain gesture category using either Gaussian mixture models or an approximation of convex hulls. Thus, we add a theoretical contribution to the classical warping path in DTW by including local modeling of intraclass gesture variability. This methodology is applied in a clinical context, detecting a group of ADHD behavioral patterns defined by experts in psychology/psychiatry, to provide support to clinicians in the diagnosis procedure. The proposed methodology is tested on a novel multimodal dataset (RGB plus depth) of recordings of children with ADHD showing behavioral patterns. We obtain satisfying results when compared to standard state-of-the-art approaches in the DTW context.
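    Modelling a gesture category with a Gaussian mixture, one of the two class models the authors consider, can be sketched with scikit-learn. The quantile-based acceptance threshold below is an illustrative one-class rule, not the paper's formulation:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

class OneClassGestureModel:
    """One-class model of a gesture category via a Gaussian mixture."""

    def __init__(self, n_components=3):
        self.gmm = GaussianMixture(n_components=n_components,
                                   covariance_type="diag")

    def fit(self, frames, quantile=0.05):
        # frames: (n, d) pose/feature vectors from one gesture class.
        self.gmm.fit(frames)
        scores = self.gmm.score_samples(frames)
        # Assumed acceptance rule: keep the threshold at the 5th
        # percentile of in-class log-likelihoods.
        self.threshold = np.quantile(scores, quantile)
        return self

    def accepts(self, frame):
        """True if the frame is plausible under this gesture class."""
        return self.gmm.score_samples(frame[None, :])[0] >= self.threshold
```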

  2. A Novel Music Player Controlling Design Based on Gesture Recognition

    Directory of Open Access Journals (Sweden)

    Yi Liu

    2014-01-01

    Full Text Available This study proposes a novel music player controlling method based on gesture recognition, which translates the gesture interaction signal into a control instruction for the music player. First, the laptop's webcam is used to capture image information, which is then processed with image processing methods. Skin color detection is used to obtain gesture candidate information, and background subtraction is introduced to eliminate distracting information. Moreover, to ensure rapid and effective implementation of the proposed music player, the barycenter of a gesture is calculated as one item of Recognized Reference Information (RRI); the ratio between the gesture's width and height is selected as the other; and the comparison of these two RRI values is used to obtain a pattern signal of the gesture corresponding to a control instruction for the music player. Finally, a music player was programmed, and the pattern signal generated by gesture recognition was used to control it, realizing four basic functions: "play", "pause", "previous" and "next". A series of tests of our gesture-recognition-based music player was conducted under conditions with different kinds of complex backgrounds, and the results showed satisfactory performance of our interactive design.
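    The two RRI values, barycenter and width-to-height ratio, are straightforward to compute from a binary hand mask. A small illustration follows; the thresholds and the command mapping are invented, not the study's calibration:

```python
import numpy as np

def rri_from_mask(mask):
    """Barycenter and width/height ratio of a binary hand mask."""
    ys, xs = np.nonzero(mask)
    barycenter = (xs.mean(), ys.mean())
    ratio = (xs.max() - xs.min() + 1) / (ys.max() - ys.min() + 1)
    return barycenter, ratio

def command_from_rri(barycenter, ratio, frame_width):
    """Map RRI values to player commands (thresholds are hypothetical)."""
    if ratio > 1.4:           # wide gesture, e.g. open horizontal palm
        return "pause"
    if ratio < 0.7:           # tall gesture, e.g. raised hand
        return "play"
    # Otherwise use the barycenter's horizontal position.
    return "previous" if barycenter[0] < frame_width / 2 else "next"
```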

  3. Recognizing and interpreting gestures on a mobile robot

    Energy Technology Data Exchange (ETDEWEB)

    Kortenkamp, D.; Huber, E.; Bonasso, R.P. [Metrica, Inc., NASA Johnson Space Center, Houston, TX (United States)]

    1996-12-31

    Gesture recognition is an important skill for robots that work closely with humans. Gestures help to clarify spoken commands and are a compact means of relaying geometric information. We have developed a real-time, three-dimensional gesture recognition system that resides on-board a mobile robot. Using a coarse three-dimensional model of a human to guide stereo measurements of body parts, the system is capable of recognizing six distinct gestures made by an unadorned human in an unaltered environment. An active vision approach focuses the vision system's attention on small, moving areas of space to allow for frame rate processing even when the person and/or the robot are moving. This paper describes the gesture recognition system, including the coarse model and the active vision approach. This paper also describes how the gesture recognition system is integrated with an intelligent control architecture to allow for complex gesture interpretation and complex robot action. Results from experiments with an actual mobile robot are given.

  4. Iconic Gestures for Robot Avatars, Recognition and Integration with Speech.

    Science.gov (United States)

    Bremner, Paul; Leonards, Ute

    2016-01-01

    Co-verbal gestures are an important part of human communication, improving its efficiency and efficacy for information conveyance. One possible means by which such multi-modal communication might be realized remotely is through the use of a tele-operated humanoid robot avatar. Such avatars have been previously shown to enhance social presence and operator salience. We present a motion tracking based tele-operation system for the NAO robot platform that allows direct transmission of speech and gestures produced by the operator. To assess the capabilities of this system for transmitting multi-modal communication, we have conducted a user study that investigated if robot-produced iconic gestures are comprehensible, and are integrated with speech. Robot performed gesture outcomes were compared directly to those for gestures produced by a human actor, using a within participant experimental design. We show that iconic gestures produced by a tele-operated robot are understood by participants when presented alone, almost as well as when produced by a human. More importantly, we show that gestures are integrated with speech when presented as part of a multi-modal communication equally well for human and robot performances.

  5. Iconic Gestures for Robot Avatars, Recognition and Integration with Speech

    Science.gov (United States)

    Bremner, Paul; Leonards, Ute

    2016-01-01

    Co-verbal gestures are an important part of human communication, improving its efficiency and efficacy for information conveyance. One possible means by which such multi-modal communication might be realized remotely is through the use of a tele-operated humanoid robot avatar. Such avatars have been previously shown to enhance social presence and operator salience. We present a motion tracking based tele-operation system for the NAO robot platform that allows direct transmission of speech and gestures produced by the operator. To assess the capabilities of this system for transmitting multi-modal communication, we have conducted a user study that investigated if robot-produced iconic gestures are comprehensible, and are integrated with speech. Robot performed gesture outcomes were compared directly to those for gestures produced by a human actor, using a within participant experimental design. We show that iconic gestures produced by a tele-operated robot are understood by participants when presented alone, almost as well as when produced by a human. More importantly, we show that gestures are integrated with speech when presented as part of a multi-modal communication equally well for human and robot performances. PMID:26925010

  6. Iconic Gestures for Robot Avatars, Recognition and Integration with Speech

    Directory of Open Access Journals (Sweden)

    Paul Adam Bremner

    2016-02-01

    Full Text Available Co-verbal gestures are an important part of human communication, improving its efficiency and efficacy for information conveyance. One possible means by which such multi-modal communication might be realised remotely is through the use of a tele-operated humanoid robot avatar. Such avatars have been previously shown to enhance social presence and operator salience. We present a motion tracking based tele-operation system for the NAO robot platform that allows direct transmission of speech and gestures produced by the operator. To assess the capabilities of this system for transmitting multi-modal communication, we have conducted a user study that investigated if robot-produced iconic gestures are comprehensible, and are integrated with speech. Robot performed gesture outcomes were compared directly to those for gestures produced by a human actor, using a within participant experimental design. We show that iconic gestures produced by a tele-operated robot are understood by participants when presented alone, almost as well as when produced by a human. More importantly, we show that gestures are integrated with speech when presented as part of a multi-modal communication equally well for human and robot performances.

  7. Dynamic gesture classification using skeleton model on RGB-D data

    Science.gov (United States)

    Tamura, Y.; Umetani, T.; Kashima, N.; Nakamura, H.

    2014-03-01

    This study aims to subjectively detect and classify similar gestures using a red-green-blue-depth camera. Human gesture recognition is one of the crucial components for realizing natural user interfaces (NUIs) using computers and machines. The quality of the NUI highly depends on the robustness of the achieved gesture recognition. We, therefore, propose a gesture classification method using singular spectrum transformation. Using this method, we can robustly classify gestures and behavior.

  8. 3D Hand Gesture Recognition using the Hough Transform

    Directory of Open Access Journals (Sweden)

    OPRISESCU, S.

    2013-08-01

    Full Text Available This paper presents an automatic 3D dynamic hand gesture recognition algorithm relying on both intensity and depth information provided by a Kinect camera. Gesture classification consists of a decision tree constructed on six parameters delivered by the Hough transform of projected 3D points. The Hough transform is applied, for the first time, to the projected gesture trajectories to obtain a reliable decision. The experimental data obtained from 300 video sequences with different subjects validate the proposed recognition method.

  9. 3D Imaging for hand gesture recognition: Exploring the software-hardware interaction of current technologies

    Science.gov (United States)

    Periverzov, Frol; Ilieş, Horea T.

    2012-09-01

    Interaction with 3D information is one of the fundamental and most familiar tasks in virtually all areas of engineering and science. Several recent technological advances pave the way for developing hand gesture recognition capabilities available to all, which will lead to more intuitive and efficient 3D user interfaces (3DUI). These developments can unlock new levels of expression and productivity in all activities concerned with the creation and manipulation of virtual 3D shapes and, specifically, in engineering design. Building fully automated systems for tracking and interpreting hand gestures requires robust and efficient 3D imaging techniques as well as potent shape classifiers. We survey and explore current and emerging 3D imaging technologies, and focus, in particular, on those that can be used to build interfaces between the users' hands and the machine. The purpose of this paper is to categorize and highlight the relevant differences between these existing 3D imaging approaches in terms of the nature of the information provided, output data format, as well as the specific conditions under which these approaches yield reliable data. Furthermore we explore the impact of each of these approaches on the computational cost and reliability of the required image processing algorithms. Finally we highlight the main challenges and opportunities in developing natural user interfaces based on hand gestures, and conclude with some promising directions for future research.

  10. What makes a movement a gesture?

    Science.gov (United States)

    Novack, Miriam A; Wakefield, Elizabeth M; Goldin-Meadow, Susan

    2016-01-01

    Theories of how adults interpret the actions of others have focused on the goals and intentions of actors engaged in object-directed actions. Recent research has challenged this assumption, and shown that movements are often interpreted as being for their own sake (Schachner & Carey, 2013). Here we postulate a third interpretation of movement-movement that represents action, but does not literally act on objects in the world. These movements are gestures. In this paper, we describe a framework for predicting when movements are likely to be seen as representations. In Study 1, adults described one of three scenes: (1) an actor moving objects, (2) an actor moving her hands in the presence of objects (but not touching them) or (3) an actor moving her hands in the absence of objects. Participants systematically described the movements as depicting an object-directed action when the actor moved objects, and favored describing the movements as depicting movement for its own sake when the actor produced the same movements in the absence of objects. However, participants favored describing the movements as representations when the actor produced the movements near, but not on, the objects. Study 2 explored two additional features-the form of an actor's hands and the presence of speech-like sounds-to test the effect of context on observers' classification of movement as representational. When movements are seen as representations, they have the power to influence communication, learning, and cognition in ways that movement for its own sake does not. By incorporating representational gesture into our framework for movement analysis, we take an important step towards developing a more cohesive understanding of action-interpretation.

  11. When do speakers use gestures to specify who does what to whom? The role of language proficiency and type of gestures in narratives.

    Science.gov (United States)

    So, Wing Chee; Kita, Sotaro; Goldin-Meadow, Susan

    2013-12-01

    Previous research has found that iconic gestures (i.e., gestures that depict the actions, motions or shapes of entities) identify referents that are also lexically specified in the co-occurring speech produced by proficient speakers. This study examines whether concrete deictic gestures (i.e., gestures that point to physical entities) bear a different kind of relation to speech, and whether this relation is influenced by the language proficiency of the speakers. Two groups of speakers who had different levels of English proficiency were asked to retell a story in English. Their speech and gestures were transcribed and coded. Our findings showed that proficient speakers produced concrete deictic gestures for referents that were not specified in speech, and iconic gestures for referents that were specified in speech, suggesting that these two types of gestures bear different kinds of semantic relations with speech. In contrast, less proficient speakers produced concrete deictic gestures and iconic gestures whether or not referents were lexically specified in speech. Thus, both type of gesture and proficiency of speaker need to be considered when accounting for how gesture and speech are used in a narrative context.

  12. When do speakers use gesture to specify who does what to whom? The role of language proficiency and type of gesture in narratives

    Science.gov (United States)

    So, Wing Chee; Kita, Sotaro; Goldin-Meadow, Susan

    2014-01-01

    Previous research has found that iconic gestures (i.e., gestures that depict the actions, motions or shapes of entities) identify referents that are also lexically specified in the co-occurring speech produced by proficient speakers. This study examines whether concrete deictic gestures (i.e., gestures that point to physical entities) bear a different kind of relation to speech, and whether this relation is influenced by the language proficiency of the speakers. Two groups of speakers who had different levels of English proficiency were asked to retell a story in English. Their speech and gestures were transcribed and coded. Our findings showed that proficient speakers produced concrete deictic gestures for referents that were not specified in speech, and iconic gestures for referents that were specified in speech, suggesting that these two types of gestures bear different kinds of semantic relations with speech. In contrast, less proficient speakers produced concrete deictic gestures and iconic gestures whether or not referents were lexically specified in speech. Thus, both type of gesture and proficiency of speaker need to be considered when accounting for how gesture and speech are used in a narrative context. PMID:23337950

  13. An Interactive Astronaut-Robot System with Gesture Control.

    Science.gov (United States)

    Liu, Jinguo; Luo, Yifan; Ju, Zhaojie

    2016-01-01

    Human-robot interaction (HRI) plays an important role in future planetary exploration missions, where astronauts performing extravehicular activities (EVA) have to communicate with robot assistants by speech-type or gesture-type user interfaces embedded in their space suits. This paper presents an interactive astronaut-robot system integrating a data-glove with a space suit for the astronaut to use hand gestures to control a snake-like robot. A support vector machine (SVM) is employed to recognize hand gestures, and a particle swarm optimization (PSO) algorithm is used to optimize the parameters of the SVM to further improve its recognition accuracy. Various hand gestures from American Sign Language (ASL) have been selected and used to test and validate the performance of the proposed system.
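
    As a hedged sketch of the SVM-plus-PSO recipe described above, the snippet below lets a small particle swarm search the (C, gamma) space of an RBF-kernel SVM by cross-validated accuracy. The synthetic data, swarm size and all PSO coefficients are illustrative assumptions rather than the paper's settings.

    ```python
    # Tiny PSO over (log10 C, log10 gamma) for an RBF-kernel SVM.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X, y = make_classification(n_samples=300, n_features=10, n_classes=4,
                               n_informative=6, random_state=0)

    def fitness(log_c, log_g):
        clf = SVC(C=10.0 ** log_c, gamma=10.0 ** log_g)
        return cross_val_score(clf, X, y, cv=3).mean()

    n, dims = 12, 2                      # particles, search dimensions
    pos = rng.uniform(-3, 3, (n, dims))  # positions in log-space
    vel = np.zeros((n, dims))
    pbest, pbest_fit = pos.copy(), np.array([fitness(*p) for p in pos])
    gbest = pbest[pbest_fit.argmax()].copy()

    for _ in range(15):                  # PSO main loop
        r1, r2 = rng.random((n, dims)), rng.random((n, dims))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, -3, 3)
        fit = np.array([fitness(*p) for p in pos])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[pbest_fit.argmax()].copy()

    print("best CV accuracy %.3f at C=%.3g, gamma=%.3g"
          % (pbest_fit.max(), 10.0 ** gbest[0], 10.0 ** gbest[1]))
    ```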

  14. An Interactive Image Segmentation Method in Hand Gesture Recognition.

    Science.gov (United States)

    Chen, Disi; Li, Gongfa; Sun, Ying; Kong, Jianyi; Jiang, Guozhang; Tang, Heng; Ju, Zhaojie; Yu, Hui; Liu, Honghai

    2017-01-27

    In order to improve the recognition rate of hand gestures, a new interactive image segmentation method for hand gesture recognition is presented, and popular methods, e.g., graph cut, random walker, and interactive image segmentation using geodesic star convexity, are studied in this article. A Gaussian Mixture Model is employed for image modelling, and iterations of the Expectation-Maximization algorithm learn the parameters of the Gaussian Mixture Model. We apply a Gibbs random field to the image segmentation and minimize the Gibbs energy using the min-cut theorem to find the optimal segmentation. The segmentation result of our method is tested on an image dataset and compared with other methods by estimating the region accuracy and boundary accuracy. Finally, five kinds of hand gestures in different backgrounds are tested on our experimental platform, and a sparse representation algorithm is used, showing that the segmentation of hand gesture images helps to improve the recognition accuracy.
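
    The GMM-plus-Gibbs-energy-min-cut pipeline described above closely matches what OpenCV's GrabCut implements, so a minimal sketch can lean on that routine; this is an approximation of the authors' interactive method, not their code. The image path and initialization rectangle are placeholder assumptions.

    ```python
    # GMM modelling plus graph-cut energy minimization via OpenCV's GrabCut.
    import cv2
    import numpy as np

    img = cv2.imread("hand.jpg")               # hypothetical input image
    mask = np.zeros(img.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)  # GMM parameters for background
    fgd_model = np.zeros((1, 65), np.float64)  # GMM parameters for foreground
    rect = (50, 50, 200, 200)                  # user-supplied box around the hand

    # Five iterations of: fit the GMMs (EM step), then solve the min-cut
    cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

    # Pixels labelled definite/probable foreground form the segmented hand
    fg = ((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)).astype(np.uint8)
    cv2.imwrite("hand_segmented.jpg", img * fg[:, :, None])
    ```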

  15. Power independent EMG based gesture recognition for robotics.

    Science.gov (United States)

    Li, Ling; Looney, David; Park, Cheolsoo; Rehman, Naveed U; Mandic, Danilo P

    2011-01-01

    A novel method for detecting muscle contraction is presented. This method is further developed for identifying four different gestures to facilitate a hand gesture controlled robot system. It is achieved based on surface Electromyograph (EMG) measurements of groups of arm muscles. The cross-information is preserved through a simultaneous processing of EMG channels using a recent multivariate extension of Empirical Mode Decomposition (EMD). Next, phase synchrony measures are employed to make the system robust to different power levels due to electrode placements and impedances. The multiple pairwise muscle synchronies are used as features of a discrete gesture space comprising four gestures (flexion, extension, pronation, supination). Simulations on real-time robot control illustrate the enhanced accuracy and robustness of the proposed methodology.
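
    The power independence claimed above comes from working with phase rather than amplitude. As a hedged illustration, the sketch below computes one pairwise synchrony measure, the phase-locking value obtained via the Hilbert transform; the multivariate EMD front end from the paper is omitted, and the surrogate EMG signals are illustrative assumptions.

    ```python
    # Phase-locking value (PLV): amplitude drops out, so gain differences
    # between electrodes do not affect the measure.
    import numpy as np
    from scipy.signal import hilbert

    def phase_locking_value(x, y):
        """|mean(exp(i*(phase_x - phase_y)))| in [0, 1]."""
        dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
        return np.abs(np.mean(np.exp(1j * dphi)))

    rng = np.random.default_rng(1)
    t = np.linspace(0, 1, 2000)
    common = np.sin(2 * np.pi * 40 * t)                       # shared muscle drive
    emg1 = 1.0 * common + 0.3 * rng.standard_normal(t.size)
    emg2 = 5.0 * common + 0.3 * rng.standard_normal(t.size)   # same drive, 5x gain

    # High PLV despite the amplitude mismatch; a gesture feature vector would
    # stack such pairwise values across all channel pairs.
    print("PLV:", round(phase_locking_value(emg1, emg2), 3))
    ```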

  16. An Interactive Astronaut-Robot System with Gesture Control

    Directory of Open Access Journals (Sweden)

    Jinguo Liu

    2016-01-01

    Full Text Available Human-robot interaction (HRI) plays an important role in future planetary exploration missions, where astronauts performing extravehicular activities (EVA) have to communicate with robot assistants by speech-type or gesture-type user interfaces embedded in their space suits. This paper presents an interactive astronaut-robot system integrating a data-glove with a space suit for the astronaut to use hand gestures to control a snake-like robot. A support vector machine (SVM) is employed to recognize hand gestures and a particle swarm optimization (PSO) algorithm is used to optimize the parameters of the SVM to further improve its recognition accuracy. Various hand gestures from American Sign Language (ASL) have been selected and used to test and validate the performance of the proposed system.

  17. Gesture Commanding of a Robot with EVA Gloves Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Gesture commanding can be applied and evaluated with NASA robot systems. Application of this input modality can improve the way crewmembers interact with robots...

  18. Improved ASL based Gesture Recognition using HMM for System Application

    Directory of Open Access Journals (Sweden)

    Shalini Anand

    2014-03-01

    Full Text Available Gesture recognition is a growing field of research, and among the various forms of human-computer interaction, hand gesture recognition is very popular for interaction between humans and machines. It is a non-verbal way of communication, and this research area is full of innovative approaches. This project aims at recognizing 34 basic static hand gestures based on American Sign Language (ASL), including alphabets as well as numbers (0 to 9). We have not considered the two alphabets J and Z, as our project aims at recognizing static hand gestures, whereas according to ASL these two are dynamic. The main features used are optimization of the database using a neural network and a Hidden Markov Model (HMM). That is, the algorithm is based on shape-based features, keeping in mind that the shape of the human hand is the same for all human beings except in some situations.
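
    As a hedged illustration of the HMM stage named above, the sketch below trains one Gaussian HMM per gesture class with the hmmlearn library and labels a test sequence by maximum log-likelihood. The random stand-in feature sequences, class labels and model sizes are assumptions for demonstration only.

    ```python
    # One GaussianHMM per gesture class; the most likely model wins.
    import numpy as np
    from hmmlearn import hmm

    rng = np.random.default_rng(0)

    def make_sequences(offset, n_seq=20, seq_len=15, dim=8):
        """Fake shape-feature sequences standing in for one gesture class."""
        return [offset + rng.standard_normal((seq_len, dim)) for _ in range(n_seq)]

    classes = {"A": make_sequences(0.0), "B": make_sequences(2.0)}
    models = {}
    for label, seqs in classes.items():
        X = np.vstack(seqs)                 # concatenated observations
        lengths = [len(s) for s in seqs]    # sequence boundaries for fitting
        m = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=20)
        m.fit(X, lengths)
        models[label] = m

    test = 2.0 + rng.standard_normal((15, 8))   # should resemble class "B"
    print(max(models, key=lambda k: models[k].score(test)))
    ```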

  19. Hand preferences in preschool children: Reaching, pointing and symbolic gestures.

    Science.gov (United States)

    Cochet, Hélène; Centelles, Laurie; Jover, Marianne; Plachta, Suzy; Vauclair, Jacques

    2015-01-01

    Manual asymmetries emerge very early in development and several researchers have reported a significant right-hand bias in toddlers although this bias fluctuates depending on the nature of the activity being performed. However, little is known about the further development of asymmetries in preschoolers. In this study, patterns of hand preference were assessed in 50 children aged 3-5 years for different activities, including reaching movements, pointing gestures and symbolic gestures. Contrary to what has been reported in children before 3 years of age, we did not observe any difference in the mean handedness indices obtained in each task. Moreover, the asymmetry of reaching was found to correlate with that of pointing gestures, but not with that of symbolic gestures. In relation to the results reported in infants and adults, this study may help deciphering the mechanisms controlling the development of handedness by providing measures of manual asymmetries in an age range that has been so far rather neglected.

  20. Sound Synthesis Affected by Physical Gestures in Real-Time

    DEFF Research Database (Denmark)

    Graugaard, Lars

    2006-01-01

    Motivation and strategies for affecting electronic music through physical gestures are presented and discussed. Two implementations are presented and experience with their use in performance is reported. A concept of sound shaping and sound colouring that connects an instrumental performer...

  1. Human gesture recognition using three-dimensional integral imaging.

    Science.gov (United States)

    Javier Traver, V; Latorre-Carmona, Pedro; Salvador-Balaguer, Eva; Pla, Filiberto; Javidi, Bahram

    2014-10-01

    Three-dimensional (3D) integral imaging allows one to reconstruct a 3D scene, including range information, and provides sectional refocused imaging of 3D objects at different ranges. This paper explores the potential use of 3D passive sensing integral imaging for human gesture recognition tasks from sequences of reconstructed 3D video scenes. As a preliminary testbed, the 3D integral imaging sensing is implemented using an array of cameras with the appropriate algorithms for 3D scene reconstruction. Recognition experiments are performed by acquiring 3D video scenes of multiple hand gestures performed by ten people. We analyze the capability and performance of gesture recognition using 3D integral imaging representations at given distances and compare its performance with the use of standard two-dimensional (2D) single-camera videos. To the best of our knowledge, this is the first report on using 3D integral imaging for human gesture recognition.

  2. Development of a Hand Gestures SDK for NUI-Based Applications

    Directory of Open Access Journals (Sweden)

    Seongjo Lee

    2015-01-01

    Full Text Available Concomitant with the advent of the ubiquitous era, research into better human computer interaction (HCI) for human-focused interfaces has intensified. Natural user interface (NUI), in particular, is being actively investigated with the objective of more intuitive and simpler interaction between humans and computers. However, developing NUI-based applications without special NUI-related knowledge is difficult. This paper proposes a NUI-specific SDK, called “Gesture SDK,” for development of NUI-based applications. Gesture SDK provides a gesture generator with which developers can directly define gestures. Further, a “Gesture Recognition Component” is provided that enables defined gestures to be recognized by applications. We generated gestures using the proposed SDK and developed “Smart Interior,” a NUI-based application, using the Gesture Recognition Component. The results of the experiments conducted indicate that the recognition rate of the generated gestures was 96% on average.

  3. Designing Motion Gesture Interfaces in Mobile Phones for Blind People

    Institute of Scientific and Technical Information of China (English)

    任向实

    2014-01-01

    Despite the existence of advanced functions in smartphones, most blind people are still using old-fashioned phones with familiar layouts and dependence on tactile buttons. Smartphones support accessibility features including vibration, speech and sound feedback, and screen readers. However, these features are only intended to provide feedback to user commands or input. It is still a challenge for blind people to discover functions on the screen and to input the commands. Although voice commands are supported in smartphones, these commands are difficult for a system to recognize in noisy environments. At the same time, smartphones are integrated with sophisticated motion sensors, and motion gestures with device tilt have been gaining attention for eyes-free input. We believe that these motion gesture interactions offer more efficient access to smartphone functions for blind people. However, most blind people are not smartphone users and they are aware of neither the affordances available in smartphones nor the potential for interaction through motion gestures. To investigate the most usable gestures for blind people, we conducted a user-defined study with 13 blind participants. Using the gesture set and design heuristics from the user study, we implemented motion gesture based interfaces with speech and vibration feedback for browsing phone books and making a call. We then conducted a second study to investigate the usability of the motion gesture interface and user experiences using the system. The findings indicated that motion gesture interfaces are more efficient than traditional button interfaces. Through the study results, we provided implications for designing smartphone interfaces.

  4. Deep Dynamic Neural Networks for Multimodal Gesture Segmentation and Recognition

    OpenAIRE

    Wu, Di; Pigou, Lionel; Kindermans, Pieter-Jan; Le, Nam Do-Hoang; Shao, Ling; Dambre, Joni; Odobez, Jean-Marc

    2016-01-01

    This paper describes a novel method called Deep Dynamic Neural Networks (DDNN) for multimodal gesture recognition. A semi-supervised hierarchical dynamic framework based on a Hidden Markov Model (HMM) is proposed for simultaneous gesture segmentation and recognition where skeleton joint information, depth and RGB images, are the multimodal input observations. Unlike most traditional approaches that rely on the construction of complex handcrafted features, our approach learns high-level spatio...

  5. Authentication based on gestures with smartphone in hand

    Science.gov (United States)

    Varga, Juraj; Švanda, Dominik; Varchola, Marek; Zajac, Pavol

    2017-08-01

    We propose a new method of authentication for smartphones and similar devices based on gestures made by user with the device itself. The main advantage of our method is that it combines subtle biometric properties of the gesture (something you are) with a secret information that can be freely chosen by the user (something you know). Our prototype implementation shows that the scheme is feasible in practice. Further development, testing and fine tuning of parameters is required for deployment in the real world.

  6. An Empirical Analysis of Functions of Gestures in L2 Public Speaking

    Institute of Scientific and Technical Information of China (English)

    孟艳丽; 郭建

    2014-01-01

    In recent years, gesture study, with a focus on gesture-speech relations, has become a relatively new research trend in western linguistics. Although gesture is widely acknowledged as an integral part of the delivery of public speech, there has been little theoretical or empirical analysis of its use. This paper analyzes the gestures in videos of a Chinese college students’ English speech contest, with the purpose of exploring the functions of gesture in L2 public speaking. Four functions of gesture in English public speaking are identified: 1) visualizing the key, abstract information in speech through iconic gestures; 2) indicating changes in narrative level and shifts in (sub)topics; 3) helping adjust prosodic features in English oral production through beat gestures; and 4) working as cohesive devices to help construct the coherence of the discourse. The results indicate that gestures have multi-dimensional functions involving the semantic, prosodic, pragmatic and textual levels of discourse. The integration of gestures and speech at multiple levels helps enrich L2 speakers’ expressive resources, facilitate the audience’s comprehension of L2 speeches, and enhance the communicative effect of L2 speech.

  7. Children's use of gesture to resolve lexical ambiguity.

    Science.gov (United States)

    Kidd, Evan; Holler, Judith

    2009-11-01

    We report on a study investigating 3-5-year-old children's use of gesture to resolve lexical ambiguity. Children were told three short stories that contained two homonym senses; for example, bat (flying mammal) and bat (sports equipment). They were then asked to re-tell these stories to a second experimenter. The data were coded for the means that children used during attempts at disambiguation: speech, gesture, or a combination of the two. The results indicated that the 3-year-old children rarely disambiguated the two senses, mainly using deictic pointing gestures during attempts at disambiguation. In contrast, the 4-year-old children attempted to disambiguate the two senses more often, using a larger proportion of iconic gestures than the other children. The 5-year-old children used fewer iconic gestures than the 4-year-olds but, unlike the 3-year-olds, were able to disambiguate the senses through the verbal channel. The results highlight the value of gesture to the development of children's language and communication skills.

  8. Unsupervised Trajectory Segmentation for Surgical Gesture Recognition in Robotic Training.

    Science.gov (United States)

    Despinoy, Fabien; Bouget, David; Forestier, Germain; Penet, Cedric; Zemiti, Nabil; Poignet, Philippe; Jannin, Pierre

    2016-06-01

    Dexterity and procedural knowledge are two critical skills that surgeons need to master to perform accurate and safe surgical interventions. However, current training systems do not allow us to provide an in-depth analysis of surgical gestures to precisely assess these skills. Our objective is to develop a method for the automatic and quantitative assessment of surgical gestures. To reach this goal, we propose a new unsupervised algorithm that can automatically segment kinematic data from robotic training sessions. Without relying on any prior information or model, this algorithm detects critical points in the kinematic data that define relevant spatio-temporal segments. Based on the association of these segments, we obtain an accurate recognition of the gestures involved in the surgical training task. We then perform an advanced analysis and assess our algorithm using datasets recorded during real expert training sessions. After comparing our approach with the manual annotations of the surgical gestures, we observe 97.4% accuracy for the learning purpose and an average matching score of 81.9% for the fully automated gesture recognition process. Our results show that trainees' workflow can be followed and surgical gestures can be automatically evaluated against an expert database. This approach should improve training efficiency by minimizing the learning curve.

  9. Real time gesture based control: A prototype development

    Science.gov (United States)

    Bhargava, Deepshikha; Solanki, L.; Rai, Satish Kumar

    2016-03-01

    The computer industry is advancing rapidly, and robots are increasingly replacing humans, improving the efficiency, accessibility and accuracy of systems while creating new forms of man-machine interaction. Robots, however, still need to be controlled by humans. This paper presents an approach to controlling a motor, such as a robot actuator, with hand gestures rather than with buttons or other physical devices. Gesture features are used to detect and track the hand in real time, and a principal component analysis (PCA) algorithm, implemented with the OpenCV image processing library, identifies the hand gesture. Contours, convex hull and convexity defects serve as the gesture features; PCA is a statistical technique that reduces the number of variables while retaining the most relevant information in the hand images. Once the hand is detected and recognized, it is used as an input device (much like a mouse or keyboard) to control a servo motor, reducing human effort.
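
    A minimal sketch of the feature chain named above, assuming OpenCV 4 and scikit-learn: contour, convex hull and convexity defects are extracted per frame, and PCA compresses the resulting feature vectors. The binary-mask input, the toy "hand" blobs and the three-value feature layout are illustrative assumptions.

    ```python
    # Contour / convex-hull / convexity-defect features, reduced with PCA.
    import cv2
    import numpy as np
    from sklearn.decomposition import PCA

    def hand_features(mask):
        """mask: binary image with the segmented hand in white."""
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        cnt = max(contours, key=cv2.contourArea)       # largest blob = hand
        hull_idx = cv2.convexHull(cnt, returnPoints=False)
        defects = cv2.convexityDefects(cnt, hull_idx)  # finger valleys show up here
        n_defects = 0 if defects is None else len(defects)
        return [cv2.contourArea(cnt), cv2.arcLength(cnt, True), n_defects]

    # Toy frames: circles of varying radius stand in for segmented hand masks
    frames = []
    for i in range(10):
        f = np.zeros((240, 320), np.uint8)
        cv2.circle(f, (160, 120), 30 + 2 * i, 255, -1)
        frames.append(f)

    feats = np.array([hand_features(f) for f in frames], dtype=float)
    reduced = PCA(n_components=2).fit_transform(feats)  # fewer variables kept
    print(reduced.shape)                                # (10, 2)
    ```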

  10. Pointing and tracing gestures may enhance anatomy and physiology learning.

    Science.gov (United States)

    Macken, Lucy; Ginns, Paul

    2014-07-01

    Currently, instructional effects generated by Cognitive load theory (CLT) are limited to visual and auditory cognitive processing. In contrast, "embodied cognition" perspectives suggest a range of gestures, including pointing, may act to support communication and learning, but there is relatively little research showing benefits of such "embodied learning" in the health sciences. This study investigated whether explicit instructions to gesture enhance learning through its cognitive effects. Forty-two university-educated adults were randomly assigned to conditions in which they were instructed to gesture, or not gesture, as they learnt from novel, paper-based materials about the structure and function of the human heart. Subjective ratings were used to measure levels of intrinsic, extraneous and germane cognitive load. Participants who were instructed to gesture performed better on a knowledge test of terminology and a test of comprehension; however, instructions to gesture had no effect on subjective ratings of cognitive load. This very simple instructional re-design has the potential to markedly enhance student learning of typical topics and materials in the health sciences and medicine.

  11. GestuRe and ACtion Exemplar (GRACE) video database: stimuli for research on manners of human locomotion and iconic gestures.

    Science.gov (United States)

    Aussems, Suzanne; Kwok, Natasha; Kita, Sotaro

    2017-09-15

    Human locomotion is a fundamental class of events, and manners of locomotion (e.g., how the limbs are used to achieve a change of location) are commonly encoded in language and gesture. To our knowledge, there is no openly accessible database containing normed human locomotion stimuli. Therefore, we introduce the GestuRe and ACtion Exemplar (GRACE) video database, which contains 676 videos of actors performing novel manners of human locomotion (i.e., moving from one location to another in an unusual manner) and videos of a female actor producing iconic gestures that represent these actions. The usefulness of the database was demonstrated across four norming experiments. First, our database contains clear matches and mismatches between iconic gesture videos and action videos. Second, the male actors and female actors whose action videos matched the gestures in the best possible way, perform the same actions in very similar manners and different actions in highly distinct manners. Third, all the actions in the database are distinct from each other. Fourth, adult native English speakers were unable to describe the 26 different actions concisely, indicating that the actions are unusual. This normed stimuli set is useful for experimental psychologists working in the language, gesture, visual perception, categorization, memory, and other related domains.

  12. The role of synchrony and ambiguity in speech-gesture integration during comprehension.

    Science.gov (United States)

    Habets, Boukje; Kita, Sotaro; Shao, Zeshu; Ozyurek, Asli; Hagoort, Peter

    2011-08-01

    During face-to-face communication, one does not only hear speech but also see a speaker's communicative hand movements. It has been shown that such hand gestures play an important role in communication, where the two modalities influence each other's interpretation. A gesture typically overlaps temporally with coexpressive speech, but the gesture is often initiated before (but not after) the coexpressive speech. The present ERP study investigated which degree of asynchrony between speech and gesture onsets is optimal for semantic integration of the concurrent gesture and speech. Videos of a person gesturing were combined with speech segments that were either semantically congruent or incongruent with the gesture. Although gesture and speech always overlapped in time, they were presented with three different degrees of asynchrony. In the SOA 0 condition, the gesture onset and the speech onset were simultaneous. In the SOA 160 and 360 conditions, speech was delayed by 160 and 360 msec, respectively. ERPs time-locked to speech onset showed a significant difference between semantically congruent and incongruent gesture-speech combinations on the N400 for the SOA 0 and 160 conditions. No significant difference was found for the SOA 360 condition. These results imply that speech and gesture are integrated most efficiently when the differences in onsets do not exceed a certain time span, because iconic gestures need speech to be disambiguated in a way relevant to the speech context.

  13. Training industrial robots with gesture recognition techniques

    Science.gov (United States)

    Piane, Jennifer; Raicu, Daniela; Furst, Jacob

    2013-01-01

    In this paper we propose to use gesture recognition approaches to track a human hand in 3D space and, without the use of special clothing or markers, accurately generate code for training an industrial robot to perform the same motion. The proposed hand tracking component includes three methods to detect the human hand: a color-thresholding model, naïve Bayes analysis and a Support Vector Machine (SVM). Next, it performs stereo matching on the region where the hand was detected to find relative 3D coordinates. The list of coordinates returned is expectedly noisy due to the way the human hand can alter its apparent shape while moving, the inconsistencies in human motion and detection failures in the cluttered environment. Therefore, the system analyzes the list of coordinates to determine a path for the robot to move along, smoothing the data to reduce noise and looking for significant points that determine the path the robot will ultimately take. The proposed system was applied to pairs of videos recording the motion of a human hand in a “real” environment, to move the end-effector of a SCARA robot along the same path as the hand of the person in the video. The correctness of the robot motion was determined by observers indicating that the motion of the robot appeared to match the motion of the hand in the video.

  14. THE CONTRIBUTION OF GESTURES TO PERSONAL BRANDING

    Directory of Open Access Journals (Sweden)

    Brînduşa-Mariana Amălăncei

    2015-07-01

    Full Text Available A form of (self-)promotion but also an authentic strategic choice, the personal brand has become a topical preoccupation of marketing specialists. Personal branding or self-marketing represents an innovative concept that associates the efficiency of personal development with the effectiveness of communication and marketing techniques adapted to the individual, and that comprises the entire collection of techniques allowing the identification and promotion of the self/individual. The main objective is clear communication with regard to personal identity, no matter the method, so that it confers uniqueness and offers a competitive advantage. Although online promotion is increasingly gaining ground in the creation of a personal brand, an individual’s verbal and nonverbal behaviour represents a very important differentiating element. Starting from the premise that gestures often complement, anticipate, substitute or contradict the verbal, we will endeavour to highlight a number of significations that can be attributed to the various body movements and that can successfully contribute to the creation of a powerful personal brand.

  15. Hegel’s Gesture Towards Radical Cosmopolitanism

    Directory of Open Access Journals (Sweden)

    Shannon Brincat

    2009-09-01

    Full Text Available This is a preliminary argument of a much larger research project inquiring into the relation between Hegel’s philosophical system and the project of emancipation in Critical International Relations Theory. Specifically, the paper examines how Hegel’s theory of recognition gestures towards a form of radical cosmopolitanism in world politics to ensure the conditions of rational freedom for all humankind. Much of the paper is a ground-clearing exercise defining what is ‘living’ in Hegel’s thought for emancipatory approaches in world politics, to borrow from Croce’s now famous question. It focuses on Hegel’s unique concept of freedom, which places recognition as central in the formation of self-consciousness and therefore as a key determinant in the conditions necessary for human freedom to emerge in political community. While further research is needed to ascertain the precise relationship between Hegel’s recognition theoretic, emancipation and cosmopolitanism, it is contended that the intersubjective basis of Hegel’s concept of freedom through recognition necessitates some form of radical cosmopolitanism that ensures successful processes of recognition between all peoples, the precise institutional form of which remains unspecified.

  16. Emotion and the processing of symbolic gestures: an event-related brain potential study

    Science.gov (United States)

    Flaisch, Tobias; Häcker, Frank; Renner, Britta

    2011-01-01

    The present study used event-related brain potentials to examine the hypothesis that emotional gestures draw attentional resources at the level of distinct processing stages. Twenty healthy volunteers viewed pictures of hand gestures with negative (insult) and positive (approval) emotional meaning as well as neutral control gestures (pointing) while dense sensor event-related potentials (ERPs) were recorded. Emotion effects were reflected in distinct ERP modulations in early and later time windows. Insult gestures elicited increased P1, early posterior negativity (EPN) and late positive potential (LPP) components as compared to neutral control gestures. Processing of approval gestures was associated with an increased P1 wave and enlarged EPN amplitudes during an early time window, while the LPP amplitude was not significantly modulated. Accordingly, negative insult gestures appear more potent than positive approval gestures in inducing a heightened state of attention during processing stages implicated in stimulus recognition and focused attention. PMID:20212003

  17. Recognizing Bharatnatyam Mudra Using Principles of Gesture Recognition

    Directory of Open Access Journals (Sweden)

    Shweta Mozarkar

    2013-08-01

    Full Text Available A primary goal of gesture recognition research is to create a system which can identify specific human gestures and use them to convey information for device control. Gesture recognition means interpreting human gestures via mathematical algorithms. Indian classical dance uses expressive gestures called mudras as a supporting visual mode of communication with the audience. These mudras are expressive, meaningful (static or dynamic) positions of body parts. This project attempts to recognize the mudra sequence using image-processing and pattern recognition techniques, and to link the result to the corresponding expressions of Indian classical dance via interpretation of a few static Bharatnatyam mudras. Here, a novel approach for computer-aided recognition of Bharatnatyam mudras is proposed using a saliency technique based on the hypercomplex (i.e., quaternion) Fourier transform of the image, which highlights the object against the background in order to obtain the salient features of the static double-hand mudra image. The K Nearest Neighbor algorithm is used for classification. The entry giving the minimum difference for all the mudra features is the match for the given input image. Finally, an emotional description of the recognized mudra image is displayed.

  18. [Alterations in the imitation of gestures (conduction apraxia)].

    Science.gov (United States)

    Politis, D G

    The aim of this presentation is to report the performance pattern of a patient who suffered ideomotor apraxia with a disorder pattern of the conduction apraxia (CA) type. This clinical picture was originally reported by Ochipa et al. in 1994 as an alteration in the pathway that joins the two lexicons; later, in 2000, Cubelli et al. claimed that there is no evidence for the existence of such a pathway and suggested that the symptoms were due to an alteration affecting the mechanisms governing visuomotor conversion. A 51-year-old patient, following a traumatic head injury, presented aphasia and apraxia, with 40% errors in the imitation of familiar gestures test, 50% errors in the imitation of non-familiar gestures (NFG), 0% errors in the visual recognition of objects test and 0% in the tool usage test. The differences between the performance in the imitation tests and in the other tests are statistically significant. Although the patient displayed slight alterations in the gesture decision test (20% errors), alterations to the action input lexicon would not account for the patient's performance, since there is a significant difference between his performance in the imitation of NFG test and the gesture decision test. Moreover, he did not present alterations in the discrimination of gestures. From the above, it can be said that the patient seems to present CA due to alterations in the non-semantic interlexical pathway and in the perilexical pathway, as originally postulated by Ochipa et al.

  19. Perceived gesture dynamics in nonverbal expression of emotion.

    Science.gov (United States)

    Dael, Nele; Goudbeek, Martijn; Scherer, K R

    2013-01-01

    Recent judgment studies have shown that people are able to fairly correctly attribute emotional states to others' bodily expressions. It is, however, not clear which movement qualities are salient, and how this applies to emotional gesture during speech-based interaction. In this study we investigated how the expression of emotions that vary on three major emotion dimensions (arousal, valence, and potency) affects the perception of dynamic arm gestures. Ten professional actors enacted 12 emotions in a scenario-based social interaction setting. Participants (N = 43) rated all emotional expressions, with muted sound and blurred faces, on six spatiotemporal characteristics of gestural arm movement that were found to be related to emotion in previous research (amount of movement, movement speed, force, fluency, size, and height/vertical position). Arousal and potency were found to be strong determinants of the perception of gestural dynamics, whereas the differences between positive and negative emotions were less pronounced. These results confirm the importance of arm movement in communicating major emotion dimensions and show that gesture forms an integrated part of multimodal nonverbal emotion communication.

  20. Gesture Recognition Using Character Recognition Techniques on Two-dimensional Eigenspace

    OpenAIRE

    大野, 宏; 山本, 正信; Ohno, Hiroshi; Yamamoto, Masanobu

    1999-01-01

    This paper describes a novel method for gesture recognition using character recognition techniques on two-dimensional eigenspace. An image-based approach can capture human body poses in 3D motion from multiple image sequences. The sequence of poses can be reduced into a trajectory on the two-dimensional eigenspace with preserving the main features in gesture, so that the gesture recognition equals the character recognition. Experiments for the gesture recognition using some character recognit...

  1. Gestural communication in orangutans (Pongo pygmaeus and Pongo abelii) : a cognitive approach

    OpenAIRE

    Cartmill, Erica A.

    2009-01-01

    While most human language is expressed verbally, the gestures produced concurrent to speech provide additional information, help listeners interpret meaning, and provide insight into the cognitive processes of the speaker. Several theories have suggested that gesture played an important, possibly central, role in the evolution of language. Great apes have been shown to use gestures flexibly in different situations and to modify their gestures in response to changing contexts. However, it has...

  2. Selection of suitable hand gestures for reliable myoelectric human computer interface

    OpenAIRE

    2015-01-01

    Background: Myoelectric-controlled prosthetic hands require machine-based identification of hand gestures using the surface electromyogram (sEMG) recorded from the forearm muscles. This study observed that a subset of the hand gestures has to be selected for accurate automated hand gesture recognition, and reports a method to select these gestures to maximize sensitivity and specificity. Methods: Experiments were conducted in which sEMG was recorded from the muscles of the forearm while s...

  3. Brave NUI World Designing Natural User Interfaces for Touch and Gesture

    CERN Document Server

    Wigdor, Daniel

    2011-01-01

    Touch and gestural devices have been hailed as the next evolutionary step in human-computer interaction. As software companies struggle to catch up with one another in developing the next great touch-based interface, designers are charged with the daunting task of keeping up with the advances in new technology and with this new aspect of user experience design. Product and interaction designers, developers and managers are already well versed in UI design, but touch-based interfaces have added a new level of complexity.

  4. Gesture analysis of students' majoring mathematics education in micro teaching process

    Science.gov (United States)

    Maldini, Agnesya; Usodo, Budi; Subanti, Sri

    2017-08-01

    In learning, and especially in mathematics learning, the interaction between teachers and students is certainly noteworthy. In these interactions, gestures and other body movements appear spontaneously. Gesture is an important source of information, because it supports oral communication, reduces the ambiguity of understanding the concept or meaning of the material, and improves posture. This research uses an exploratory design, which is particularly suitable for providing an initial illustration of the phenomenon. The goal of the research in this article is to describe the gestures of S1 and S2 students of mathematics education during the micro teaching process. To analyze the subjects' gestures, the researchers used McNeill's classification. The result is that the two subjects used 238 gestures in the micro teaching process as a means of conveying ideas and concepts in mathematics learning. During the micro teaching process, the subjects used four types of gestures, namely iconic gestures, deictic gestures, regulator gestures and adapter gestures, as a means to facilitate the delivery of the material being taught and communication with the listener. The gestures that appear vary across subjects because each subject uses different gesture patterns to communicate their own mathematical ideas, so the intensity of the gestures also differs.

  5. Gesture in Multiparty Interaction: A Study of Embodied Discourse in Spoken English and American Sign Language

    Science.gov (United States)

    Shaw, Emily P.

    2013-01-01

    This dissertation is an examination of gesture in two game nights: one in spoken English between four hearing friends and another in American Sign Language between four Deaf friends. Analyses of gesture have shown there exists a complex integration of manual gestures with speech. Analyses of sign language have implicated the body as a medium…

  6. Traveller: An Interactive Cultural Training System Controlled by User-Defined Body Gestures

    NARCIS (Netherlands)

    Kistler, F.; André, E.; Mascarenhas, S.; Silva, A.; Paiva, A.; Degens, D.M.; Hofstede, G.J.; Krumhuber, E.; Kappas, A.; Aylett, R.

    2013-01-01

    In this paper, we describe a cultural training system based on an interactive storytelling approach and a culturally-adaptive agent architecture, for which a user-defined gesture set was created. 251 full body gestures by 22 users were analyzed to find intuitive gestures for the in-game actions in

  7. Methodological Reflections on Gesture Analysis in Second Language Acquisition and Bilingualism Research

    Science.gov (United States)

    Gullberg, Marianne

    2010-01-01

    Gestures, i.e. the symbolic movements that speakers perform while they speak, form a closely interconnected system with speech, where gestures serve both addressee-directed ("communicative") and speaker-directed ("internal") functions. This article aims (1) to show that a combined analysis of gesture and speech offers new ways to address…

  8. Prosodic Structure Shapes the Temporal Realization of Intonation and Manual Gesture Movements

    Science.gov (United States)

    Esteve-Gibert, Nuria; Prieto, Pilar

    2013-01-01

    Purpose: Previous work on the temporal coordination between gesture and speech found that the prominence in gesture coordinates with speech prominence. In this study, the authors investigated the anchoring regions in speech and pointing gesture that align with each other. The authors hypothesized that (a) in contrastive focus conditions, the…

  9. What is the best strategy for retaining gestures in working memory?

    Science.gov (United States)

    Gimenes, Guillaume; Pennequin, Valérie; Mercer, Tom

    2016-07-01

    This study aimed to determine whether the recall of gestures in working memory could be enhanced by verbal or gestural strategies. We also attempted to examine whether these strategies could help resist verbal or gestural interference. Fifty-four participants were divided into three groups according to the content of the training session. This included a control group, a verbal strategy group (where gestures were associated with labels) and a gestural strategy group (where participants repeated gestures and were told to imagine reproducing the movements). During the experiment, the participants had to reproduce a series of gestures under three conditions: "no interference", gestural interference (gestural suppression) and verbal interference (articulatory suppression). The results showed that task performance was enhanced in the verbal strategy group, but there was no significant difference between the gestural strategy and control groups. Moreover, compared to the "no interference" condition, performance decreased in the presence of gestural interference, except within the verbal strategy group. Finally, verbal interference hindered performance in all groups. The discussion focuses on the use of labels to recall gestures and differentiates the induced strategies from self-initiated strategies.

  10. A Coding System with Independent Annotations of Gesture Forms and Functions during Verbal Communication: Development of a Database of Speech and GEsture (DoSaGE).

    Science.gov (United States)

    Kong, Anthony Pak-Hin; Law, Sam-Po; Kwan, Connie Ching-Yin; Lai, Christy; Lam, Vivian

    2015-03-01

    Gestures are commonly used together with spoken language in human communication. One major limitation of gesture investigations in the existing literature lies in the fact that the coding of forms and functions of gestures has not been clearly differentiated. This paper first described a recently developed Database of Speech and GEsture (DoSaGE) based on independent annotation of gesture forms and functions among 119 neurologically unimpaired right-handed native speakers of Cantonese (divided into three age and two education levels), and presented findings of an investigation examining how gesture use was related to age and linguistic performance. Consideration of these two factors, for which normative data are currently very limited or lacking in the literature, is relevant and necessary when one evaluates gesture employment among individuals with and without language impairment. Three speech tasks, including monologue of a personally important event, sequential description, and story-telling, were used for elicitation. The EUDICO Linguistic ANnotator (ELAN) software was used to independently annotate each participant's linguistic information of the transcript, forms of gestures used, and the function for each gesture. About one-third of the subjects did not use any co-verbal gestures. While the majority of gestures were non-content-carrying, which functioned mainly for reinforcing speech intonation or controlling speech flow, the content-carrying ones were used to enhance speech content. Furthermore, individuals who are younger or linguistically more proficient tended to use fewer gestures, suggesting that normal speakers gesture differently as a function of age and linguistic performance.

  11. Recognition of Hand Gestures Observed by Depth Cameras

    Directory of Open Access Journals (Sweden)

    Tomasz Kapuscinski

    2015-04-01

    Full Text Available We focus on gesture recognition based on 3D information in the form of a point cloud of the observed scene. A descriptor of the scene is built on the basis of a Viewpoint Feature Histogram (VFH). To increase the distinctiveness of the descriptor, the scene is divided into smaller 3D cells and a VFH is calculated for each of them. A verification of the method on publicly available Polish and American sign language datasets containing dynamic gestures as well as hand postures acquired by a time-of-flight (ToF) camera or Kinect is presented. Results of a cross-validation test are given. Hand postures are recognized using a nearest neighbour classifier with city-block distance. For dynamic gestures two types of classifiers are applied: (i) the nearest neighbour technique with dynamic time warping and (ii) hidden Markov models. The results confirm the usefulness of our approach.
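
    A hedged sketch of the posture-classification stage described above: a nearest-neighbour classifier with city-block (L1) distance over concatenated per-cell descriptors. The random vectors stand in for real VFHs (308 bins each in the standard PCL implementation); the cell count and gesture labels are illustrative assumptions.

    ```python
    # Nearest-neighbour classification with city-block (manhattan) distance.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)
    n_cells, vfh_len = 8, 308            # 308 bins is the standard VFH size

    def fake_descriptor(shift):
        """Concatenate one pseudo-VFH histogram per 3D cell into one vector."""
        return np.concatenate([rng.random(vfh_len) + shift for _ in range(n_cells)])

    X = np.array([fake_descriptor(s) for s in [0, 0, 1, 1, 2, 2]])
    y = np.array(["hello", "hello", "thanks", "thanks", "stop", "stop"])

    clf = KNeighborsClassifier(n_neighbors=1, metric="manhattan")  # city-block
    clf.fit(X, y)
    print(clf.predict([fake_descriptor(1)]))  # -> ['thanks']
    ```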

  12. Autonomous Multiple Gesture Recognition System for Disabled People

    Directory of Open Access Journals (Sweden)

    Amarjot Singh

    2014-01-01

    Full Text Available The paper presents an intelligent multi-gesture spotting system that can be used by disabled people to communicate easily with machines, making day-to-day tasks easier. The system makes use of pose estimation for 10 signs used by hearing-impaired people to communicate. Pose is extracted on the basis of silhouettes using timed motion history images (tMHI), followed by gesture recognition with Hu moments. Signs involving motion are recognized with the help of optical flow. Based on the recognized gestures, particular instructions are sent to the robot connected to the system, resulting in an appropriate action or movement by the robot. The system is unique in that it can act as an assistive device and can communicate over local as well as wide areas to assist the disabled person.

  13. Dynamic Gesture Recognition Using Hidden Markov Model in Static Background

    Directory of Open Access Journals (Sweden)

    Malvika Bansal

    2011-11-01

    Full Text Available Human-computer interaction is a challenging endeavor. Being able to communicate with your computer (or robot) just as we humans interact with one another has been the prime objective of HCI research for the last two decades. A number of devices have been invented, each bringing with it a new aspect of interaction. Much work has gone into speech and gesture recognition to develop an approach that would allow users to interact with their system simply by using their voice or simple intuitive gestures, instead of sitting in front of the computer and using a mouse or keyboard. Natural interaction must be fast, convenient and reliable. In our project, we intend to develop one such natural interaction interface, one that can recognize hand gesture movements in real time using an HMM, but by using computer vision instead of sensory gloves.

  14. Hand Gesture Recognition Using Modified 1$ and Background Subtraction Algorithms

    Directory of Open Access Journals (Sweden)

    Hazem Khaled

    2015-01-01

    Full Text Available Computers and computerized machines have tremendously penetrated all aspects of our lives. This raises the importance of the Human-Computer Interface (HCI). Common HCI techniques still rely on simple devices such as keyboards, mice, and joysticks, which are not enough to keep pace with the latest technology. Hand gesture has become one of the most important and attractive alternatives to existing traditional HCI techniques. This paper proposes a new hand gesture detection system for human-computer interaction using real-time video streaming. This is achieved by removing the background using an average background algorithm and using the 1$ algorithm for the hand's template matching. Every hand gesture is then translated into commands that can be used to control robot movements. The simulation results show that the proposed algorithm can achieve a high detection rate and a small recognition time under different light changes, scales, rotations, and backgrounds.
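
    As a hedged sketch of the average-background step named above, the loop below maintains a running-average background with OpenCV's accumulateWeighted and thresholds the frame difference to obtain a moving-hand mask. The camera index, learning rate and threshold are illustrative assumptions, and the 1$ template-matching stage is omitted.

    ```python
    # Running-average background model plus frame differencing.
    import cv2

    cap = cv2.VideoCapture(0)                      # hypothetical webcam stream
    avg = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
        if avg is None:
            avg = gray.astype("float")             # initialize the background
            continue
        cv2.accumulateWeighted(gray, avg, 0.05)    # slowly update the background
        diff = cv2.absdiff(gray, cv2.convertScaleAbs(avg))
        _, fg = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)  # moving-hand mask
        cv2.imshow("foreground", fg)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
    ```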

  15. An Adaptive Feature Extractor for Gesture SEMG Recognition

    Science.gov (United States)

    Zhang, Xu; Chen, Xiang; Zhao, Zhang-Yan; Li, Qiang; Yang, Ji-Hai; Lantz, Vuokko; Wang, Kong-Qiao

    This paper proposes an adaptive feature extraction method for pattern recognition of hand gesture sEMG to enhance the reusability of myoelectric control. The feature extractor is based on the wavelet packet transform and the Local Discriminant Basis (LDB) algorithm, which select several optimized decomposition subspaces of the original sEMG waveforms caused by hand gesture motions. The square roots of the mean energy of the signal in those subspaces are then calculated to form the feature vector. In data acquisition experiments, five healthy subjects performed six kinds of hand motions every day for a week. The recognition results for hand gestures on the basis of the measured sEMG signals from different use sessions demonstrate that the feature extractor is effective. Our work is valuable for the realization of myoelectric control systems in rehabilitation and other medical applications.
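
    A minimal sketch of the feature vector described above, assuming the PyWavelets library: a wavelet packet decomposition of an sEMG window followed by the square root of the mean energy in each terminal subband. The LDB subspace selection is omitted, and the wavelet choice, depth and random test signal are illustrative assumptions.

    ```python
    # Wavelet packet decomposition; sqrt of mean subband energy as features.
    import numpy as np
    import pywt

    rng = np.random.default_rng(0)
    semg = rng.standard_normal(1024)                 # stand-in sEMG window

    wp = pywt.WaveletPacket(data=semg, wavelet="db4", mode="symmetric", maxlevel=3)
    features = np.array([np.sqrt(np.mean(node.data ** 2))
                         for node in wp.get_level(3, order="natural")])
    print(features.shape)                            # (8,) -- one value per subband
    ```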

  16. Smart Remote for the Setup Box Using Gesture Control

    Directory of Open Access Journals (Sweden)

    Surepally Uday Kumar

    2016-04-01

    Full Text Available The basic purpose of this project is to provide a means to control a set-top box capable of infrared communication (in this case, Hathway) using hand gestures. Thus, this system will act like a remote control for operating the set-top box, but this will be achieved through hand gestures instead of pushing buttons. To send and receive remote control signals, this project uses an infrared LED as a transmitter. Using an infrared receiver, an Arduino can detect the bits being sent by a remote control. To play back a remote control signal, the Arduino can flash an infrared LED at 38 kHz. With this project we can build a gesture-controlled remote using a glove fixed to the hand; we can send signals of any length at any relevant frequency, and thus design a universal remote.

  17. Character-based Recognition of Simple Word Gesture

    Directory of Open Access Journals (Sweden)

    Paulus Insap Santosa

    2013-11-01

    Full Text Available People with normal senses use spoken language to communicate with others. This method cannot be used by those with hearing and speech impairments, and the two groups have difficulty communicating with each other. Sign language is not easy to learn: there are various sign languages, and not many tutors are available. This research focused on recognizing simple word gestures based on the characters that form the word to be recognized. The method used for character recognition was the nearest-neighbour method, which identified different fingers by the different markers attached to each finger. Recognition of a simple word gesture was tested by providing the series of characters that make up the intended word; its accuracy depended upon the accuracy of recognition of each character.
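
    A nearest-neighbour character classifier of the kind described reduces to finding the training example closest to the query feature vector, with words recognized character by character. A minimal NumPy sketch, assuming each character is already represented as a fixed-length feature vector (e.g., marker positions):

    ```python
    # Minimal 1-nearest-neighbour classifier over character feature vectors.
    import numpy as np

    def nearest_neighbour(train_X, train_y, query):
        """train_X: (N, D) features; train_y: N string labels; query: (D,)."""
        dists = np.linalg.norm(train_X - query, axis=1)  # Euclidean distances
        return train_y[int(np.argmin(dists))]

    def recognize_word(train_X, train_y, char_queries):
        """A word is recognized character by character, as in the paper."""
        return "".join(nearest_neighbour(train_X, train_y, q) for q in char_queries)
    ```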

  18. Spatial and temporal segmented dense trajectories for gesture recognition

    Science.gov (United States)

    Yamada, Kaho; Yoshida, Takeshi; Sumi, Kazuhiko; Habe, Hitoshi; Mitsugami, Ikuhisa

    2017-03-01

    Recently, dense trajectories [1] have been shown to be a successful video representation for action recognition, and have demonstrated state-of-the-art results with a variety of datasets. However, if we apply these trajectories to gesture recognition, recognizing similar and fine-grained motions is problematic. In this paper, we propose a new method in which dense trajectories are calculated in segmented regions around detected human body parts. Spatial segmentation is achieved by body part detection [2]. Temporal segmentation is performed for a fixed number of video frames. The proposed method removes background video noise and can recognize similar and fine-grained motions. Only a few video datasets are available for gesture classification; therefore, we have constructed a new gesture dataset and evaluated the proposed method using this dataset. The experimental results show that the proposed method outperforms the original dense trajectories.

  19. An Efficient Solution for Hand Gesture Recognition from Video Sequence

    Directory of Open Access Journals (Sweden)

    PRODAN, R.-C.

    2012-08-01

    Full Text Available The paper describes a system of hand gesture recognition by image processing for human-robot interaction. The recognition and interpretation of hand postures acquired through a video camera allow control of the robotic arm's activity: motion (translation and rotation in 3D) and tightening/releasing the clamp. A gesture dictionary was defined, and heuristic algorithms for recognition were developed and tested. The system can be used for academic and industrial purposes, especially for activities where the movements of the robotic arm are not scheduled in advance, making it easier to train the robot than with a remote control. Besides the gesture dictionary, the novelty of the paper consists in a new technique for detecting the relative positions of the fingers in order to recognize the various hand postures, and in the achievement of a robust system for controlling robots by hand postures.

  20. Interrogating the Founding Gestures of the New Materialism

    Directory of Open Access Journals (Sweden)

    Dennis Bruining

    2016-11-01

    Full Text Available In this article, I aim to further thinking in the broadly ‘new materialist’ field by insisting it attends to some ubiquitous assumptions. More specifically, I critically interrogate what Sara Ahmed has termed ‘the founding gestures of the “new materialism”’. These founding rhetorical gestures revolve around a perceived neglect of the matter of materiality in ‘postmodernism’ and ‘poststructuralism’ and are meant to pave the way for new materialism’s own conception of matter-in/of-the-world. I argue in this article that an engagement with the postmodern critique of language as constitutive, as well as the poststructuralist critique of pure self-presence, does not warrant these founding gestures to be so uncritically rehearsed. Moreover, I demonstrate that texts which rely on these gestures, or at least the ones I discuss in this article, are not only founded on a misrepresentation of postmodern and poststructuralist thought, but are also guilty of repeating the perceived mistakes of which they are critical, such as upholding the language/matter dichotomy. I discuss a small selection of texts that make use of those popular rhetorical gestures to juxtapose the past that is invoked with a more nuanced reading of that past. My contention is that if ‘the founding gestures of the “new materialism”’ are not addressed, the complexity of the postmodern and poststructuralist positions continues to be obscured, with damaging consequences for the further development of the emerging field of new materialism, as well as our understanding of cultural theory’s past.

  1. Scientific Visualization of Radio Astronomy Data using Gesture Interaction

    Science.gov (United States)

    Mulumba, P.; Gain, J.; Marais, P.; Woudt, P.

    2015-09-01

    MeerKAT in South Africa (Meer = More Karoo Array Telescope) will require software to help visualize, interpret and interact with multidimensional data. While visualization of multi-dimensional data is a well explored topic, little work has been published on the design of intuitive interfaces to such systems. More specifically, the use of non-traditional interfaces (such as motion tracking and multi-touch) has not been widely investigated within the context of visualizing astronomy data. We hypothesize that a natural user interface would allow for easier data exploration which would in turn lead to certain kinds of visualizations (volumetric, multidimensional). To this end, we have developed a multi-platform scientific visualization system for FITS spectral data cubes using VTK (Visualization Toolkit) and a natural user interface to explore the interaction between a gesture input device and multidimensional data space. Our system supports visual transformations (translation, rotation and scaling) as well as sub-volume extraction and arbitrary slicing of 3D volumetric data. These tasks were implemented across three prototypes aimed at exploring different interaction strategies: standard (mouse/keyboard) interaction, volumetric gesture tracking (Leap Motion controller) and multi-touch interaction (multi-touch monitor). A Heuristic Evaluation revealed that the volumetric gesture tracking prototype shows great promise for interfacing with the depth component (z-axis) of 3D volumetric space across multiple transformations. However, this is limited by users needing to remember the required gestures. In comparison, the touch-based gesture navigation is typically more familiar to users as these gestures were engineered from standard multi-touch actions. Future work will address a complete usability test to evaluate and compare the different interaction modalities against the different visualization tasks.

  2. A Versatile Embedded Platform for EMG Acquisition and Gesture Recognition.

    Science.gov (United States)

    Benatti, Simone; Casamassima, Filippo; Milosevic, Bojan; Farella, Elisabetta; Schönle, Philipp; Fateh, Schekeb; Burger, Thomas; Huang, Qiuting; Benini, Luca

    2015-10-01

    Wearable devices offer interesting features, such as low cost and user friendliness, but their use for medical applications is an open research topic, given the limited hardware resources they provide. In this paper, we present an embedded solution for real-time EMG-based hand gesture recognition. The work focuses on the multi-level design of the system, integrating the hardware and software components to develop a wearable device capable of acquiring and processing EMG signals for real-time gesture recognition. The system combines the accuracy of a custom analog front end with the flexibility of a low-power, high-performance microcontroller for on-board processing. Our system achieves the same accuracy as high-end and more expensive active EMG sensors used in applications with strict requirements on signal quality, and thanks to its flexible configuration it compares favorably with the few wearable platforms for EMG gesture recognition available on the market. We demonstrate that we reach similar or better performance while embedding the gesture recognition on board, with the benefit of cost reduction. To validate this approach, we collected a dataset of 7 gestures from 4 users, which was used to evaluate the impact of the number of EMG channels, the number of recognized gestures and the data rate on the recognition accuracy and on the computational demand of the classifier. As a result, we implemented an SVM recognition algorithm capable of real-time performance on the proposed wearable platform, achieving a classification rate of 90%, in line with state-of-the-art off-line results, at a power consumption of 29.7 mW, guaranteeing 44 hours of continuous operation with a 400 mAh battery.
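
    The classification stage described here - an SVM over features computed from windowed multi-channel EMG - can be sketched with scikit-learn. The choice of root-mean-square features per channel and all hyperparameters are illustrative assumptions; the record does not give the paper's exact feature set.

    ```python
    # Sketch: SVM gesture classification from windowed multi-channel EMG.
    # Assumed: RMS-per-channel features and an RBF kernel with default C/gamma.
    import numpy as np
    from sklearn.svm import SVC

    def rms_features(window):
        """window: (n_samples, n_channels) raw EMG -> one RMS value per channel."""
        return np.sqrt(np.mean(window ** 2, axis=0))

    # windows/labels would come from the acquisition front end; stand-ins here.
    windows = [np.random.randn(200, 8) for _ in range(140)]   # 8-channel windows
    labels = [i % 7 for i in range(140)]                      # 7 gesture classes
    X = np.array([rms_features(w) for w in windows])
    y = np.array(labels)

    clf = SVC(kernel="rbf").fit(X[:100], y[:100])
    print("held-out accuracy:", clf.score(X[100:], y[100:]))
    ```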

  3. EyeScreen: A Vision-Based Gesture Interaction System

    Institute of Scientific and Technical Information of China (English)

    LI Shan-qing; XU Yi-hua; JIA Yun-de

    2007-01-01

    EyeScreen is a vision-based interaction system which provides a natural gesture interface for human-computer interaction (HCI) by tracking human fingers and recognizing gestures. Multi-view video images are captured by two cameras facing a computer screen, which can be used to detect clicking actions of a fingertip and improve the recognition rate. The system enables users to directly interact with rendered objects on the screen. Robustness of the system has been verified by extensive experiments with different user scenarios. EyeScreen can be used in many applications such as intelligent interaction and digital entertainment.

  4. Social brain hypothesis, vocal and gesture networks of wild chimpanzees

    Directory of Open Access Journals (Sweden)

    Anna Ilona Roberts

    2016-11-01

    Full Text Available A key driver of brain evolution in primates and humans is the cognitive demands arising from managing social relationships. In primates, grooming plays a key role in maintaining these relationships, but the time that can be devoted to grooming is inherently limited. Communication may act as an additional, more time-efficient bonding mechanism alongside grooming, but how patterns of communication are related to patterns of sociality is still poorly understood. We used social network analysis to examine the associations between close proximity (duration of time spent within 10 m per hour spent in the same party), grooming, vocal communication and gestural communication (duration of time and frequency of behaviour per hour spent within 10 m) in wild chimpanzees. The results were not corrected for multiple testing. Chimpanzees had differentiated social relationships, with focal chimpanzees maintaining some level of proximity to almost all group members, but directing gestures at and grooming with a smaller number of preferred social partners. Pairs of chimpanzees that had high levels of close proximity had higher rates of grooming. Importantly, higher rates of gestural communication were also positively associated with levels of proximity, and specifically gestures associated with affiliation (greeting, gesture to mutually groom) were related to proximity. Synchronized low-intensity pant-hoots were also positively related to proximity in pairs of chimpanzees. Further, there were differences in the size of individual chimpanzees' proximity networks - the number of social relationships they maintained with others. Focal chimpanzees with larger proximity networks had a higher rate of both synchronized low-intensity pant-hoots and synchronized high-intensity pant-hoots. These results suggest that in addition to grooming, both gestures and synchronized vocalisations may play key roles in allowing chimpanzees to manage a large and differentiated set of social relationships.

  5. Static gesture recognition using features extracted from skeletal data

    CSIR Research Space (South Africa)

    Mangera, R

    2013-12-01

    Full Text Available (only fragments of the full text are indexed). One fragment is a figure caption: the depth image on the left depicts the actual pose of the user ("Sleep") while the tracked skeleton model is shown on the right, and the skeleton tracking for this pose is inaccurate. A data-collection fragment lists the ten gesture classes used: "star", "cross", "flow", "my", "sleep", "victory", "hands-up", "left arm extended", "right arm extended", and "both arms extended", and notes that the dataset was collected using an Asus… (the sensor name is truncated in the record).

  6. View Invariant Gesture Recognition using 3D Motion Primitives

    DEFF Research Database (Denmark)

    Holte, Michael Boelstoft; Moeslund, Thomas B.

    2008-01-01

    This paper presents a method for automatic recognition of human gestures. The method works with 3D image data from a range camera to achieve invariance to viewpoint. The recognition is based solely on motion from characteristic instances of the gestures. These instances are denoted 3D motion primitives. The method extracts 3D motion from range images and represents the motion from each input frame in a view-invariant manner using the harmonic shape context. The harmonic shape context is classified as a 3D motion primitive. A sequence of input frames results in a set of primitives that are classified…

  7. Gesture Recognition for Educational Games: Magic Touch Math

    Science.gov (United States)

    Kye, Neo Wen; Mustapha, Aida; Azah Samsudin, Noor

    2017-08-01

    Children nowadays have problems learning and understanding basic mathematical operations because they are not interested in studying or learning mathematics. This project proposes an educational game called Magic Touch Math that focuses on basic mathematical operations, targeted at children between three and five years old, using gesture recognition to interact with the game. Magic Touch Math was developed in accordance with the Game Development Life Cycle (GDLC) methodology. The prototype has helped children learn basic mathematical operations via intuitive gestures. It is hoped that the application can get children motivated and interested in mathematics.

  8. Finger recognition and gesture imitation in Gerstmann's syndrome.

    Science.gov (United States)

    Moro, V; Pernigo, S; Urgesi, C; Zapparoli, P; Aglioti, S M

    2008-01-01

    We report the association between finger agnosia and gesture imitation deficits in a right-handed, right-hemisphere damaged patient with Gerstmann's syndrome (GS), a neuropsychological syndrome characterized by finger and toe agnosia, left-right disorientation and dyscalculia. No language deficits were found. The patient showed a gestural imitation deficit that specifically involved finger movements and postures. The association between finger recognition and imitation deficits suggests that both static and dynamic aspects of finger representations are impaired in GS. We suggest that GS is a disorder of body representation that involves hands and fingers, that is, the non-facial body parts most involved in social interactions.

  9. Timing of Gestures: Gestures Anticipating or Simultaneous With Speech as Indexes of Text Comprehension in Children and Adults.

    Science.gov (United States)

    Ianì, Francesco; Cutica, Ilaria; Bucciarelli, Monica

    2016-06-08

    The deep comprehension of a text is tantamount to the construction of an articulated mental model of that text. The number of correct recollections is an index of a learner's mental model of a text. We assume that another index of comprehension is the timing of the gestures produced during text recall; gestures are simultaneous with speech when the learner has built an articulated mental model of the text, whereas they anticipate the speech when the learner has built a less articulated mental model. The results of four experiments confirm the predictions deriving from our assumptions for both children and adults. Provided that the recollections are correct, the timing of gestures can differ and can be considered a further measure of the quality of the mental model, beyond the number of correct recollections.

  10. Effects of Conducting-Gesture Instruction on Seventh-Grade Band Students' Performance Response to Conducting Emblems.

    Science.gov (United States)

    Cofer, R. Shayne

    1998-01-01

    Investigates effects of short-term conducting gesture instruction on seventh-grade band students' recognition of and performance response to musical conducting gestures. Indicates that short-term conducting-gesture instruction has a positive, statistically significant impact on recognition of and performance response to conducting gestures.…

  11. Non-formal Therapy and Learning Potentials through Human Gesture Synchronised to Robotic Gesture

    DEFF Research Database (Denmark)

    Petersson, Eva; Brooks, Tony

    2007-01-01

    Children with severe physical disabilities have limited possibilities for joyful experiences and interactive play. Physical training and therapy to improve such opportunities for these children is often enduring, tedious and boring through repetition - and this is often the case for both patient and the facilitator or therapist. The aim of the study reported in this paper was to explore how children with a severe physical disability could use an easily accessible robotic device that enabled control of projected images towards achieving joyful experiences and interactive play, so as to give opportunities for use as a supplement to traditional rehabilitation therapy sessions. The process involves the capturing of gesture data through an intuitive, non-intrusive interface. The interface is invisible to the naked eye and offers a direct and immediate association between the child's physical feed…

  12. Asymmetric dynamic attunement of speech and gestures in the construction of children’s understanding

    Directory of Open Access Journals (Sweden)

    Lisette eDe Jonge-Hoekstra

    2016-03-01

    Full Text Available As children learn, they use their speech to express words and their hands to gesture. This study investigates the interplay between real-time gestures and speech as children construct cognitive understanding during a hands-on science task. 12 children (M = 6, F = 6) from Kindergarten (n = 5) and first grade (n = 7) participated in this study. Each verbal utterance and gesture during the task was coded on a complexity scale derived from dynamic skill theory. To explore the interplay between speech and gestures, we applied a cross recurrence quantification analysis (CRQA) to the two coupled time series of the skill levels of verbalizations and gestures. The analysis focused on (1) the temporal relation between gestures and speech, (2) the relative strength and direction of the interaction between gestures and speech, (3) the relative strength and direction between gestures and speech for different levels of understanding, and (4) relations between CRQA measures and other child characteristics. The results show that older and younger children differ in the (temporal) asymmetry of the gesture-speech interaction. For younger children, the balance leans more towards gestures leading speech in time, while for older children it leans more towards speech leading gestures. Secondly, at the group level, speech attracts gestures in a more dynamically stable fashion than vice versa, and this asymmetry between gestures and speech extends to lower and higher understanding levels. Yet, for older children, the mutual coupling between gestures and speech is more dynamically stable at the higher understanding levels. Gestures and speech are more synchronized in time as children get older. A higher score on schools' language tests is related to speech attracting gestures more rigidly and to more asymmetry between gestures and speech, but only for the less difficult understanding levels. A higher score on math or past science tasks is related to less asymmetry between gestures and speech.
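
    Cross recurrence quantification starts from a cross-recurrence matrix between the two coded time series; lead-lag asymmetry can then be read off the diagonal recurrence profile. A minimal NumPy sketch of these two steps, assuming both series are already coded as numeric skill levels and using simple equality as the recurrence criterion (an assumption, not the study's exact setup):

    ```python
    # Minimal cross-recurrence sketch for two coded time series (e.g., per-utterance
    # skill levels of speech and gesture). Equality as recurrence is an assumption.
    import numpy as np

    def cross_recurrence_matrix(x, y):
        """R[i, j] = 1 where speech level x[i] matches gesture level y[j]."""
        x, y = np.asarray(x), np.asarray(y)
        return (x[:, None] == y[None, :]).astype(int)

    def diagonal_profile(R, max_lag=5):
        """Recurrence rate per diagonal; asymmetry around lag 0 shows who leads."""
        return {lag: float(np.mean(np.diagonal(R, offset=lag)))
                for lag in range(-max_lag, max_lag + 1)}

    speech = [1, 1, 2, 2, 3, 3, 3, 4]
    gesture = [1, 2, 2, 3, 3, 3, 4, 4]   # here gesture reaches levels earlier
    profile = diagonal_profile(cross_recurrence_matrix(speech, gesture), max_lag=3)
    print(profile)
    ```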

  13. Prototyping with your hands: the many roles of gesture in the communication of design concepts

    DEFF Research Database (Denmark)

    Cash, Philip; Maier, Anja

    2016-01-01

    There is an ongoing focus on exploring the use of gesture in design situations; however, there are still significant questions as to how gesture relates to the understanding and communication of design concepts. This work explores the use of gesture by observing and video-coding four teams of engineering graduates during an ideation session. This was used to detail the relationship between function-behaviour-structure elements and individual gestures, as well as to identify archetypal gesture sequences - compound reflective, compound directed one-way, mirroring, and modification. Gesture…

  14. Combining point context and dynamic time warping for online gesture recognition

    Science.gov (United States)

    Mao, Xia; Li, Chen

    2017-05-01

    Previous gesture recognition methods usually focused on recognizing gestures after the entire gesture sequence had been obtained. However, in many practical applications, a system has to identify gestures before they end in order to give instant feedback. We present an online gesture recognition approach that can realize early recognition of unfinished gestures with low latency. First, a curvature buffer-based point context (CBPC) descriptor is proposed to extract the shape feature of a gesture trajectory. The CBPC descriptor is a complete descriptor with a simple computation, and is thus well suited to online scenarios. Then, we introduce an online windowed dynamic time warping algorithm to realize online matching between the ongoing gesture and the template gestures. In the algorithm, computational complexity is effectively decreased by adding a sliding window to the accumulative distance matrix. Lastly, experiments are conducted on the Australian sign language data set and the Kinect hand gesture (KHG) data set. Results show that the proposed method outperforms other state-of-the-art methods, especially when gesture information is incomplete.
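
    The core idea - dynamic time warping restricted to a band around the diagonal of the accumulated distance matrix, so that per-frame cost stays bounded for online matching - can be sketched as follows. This is the standard banded DTW (a Sakoe-Chiba-style constraint), not the authors' exact windowed variant; the band width is an assumed parameter.

    ```python
    # Banded dynamic time warping: only cells within `window` of the diagonal are
    # filled, which caps the cost of matching for online use. A generic sketch,
    # not the paper's exact windowed-DTW formulation.
    import numpy as np

    def dtw_banded(a, b, window=10):
        """a, b: (T, D) trajectories. Returns the DTW alignment cost."""
        n, m = len(a), len(b)
        w = max(window, abs(n - m))            # band must cover the length gap
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(max(1, i - w), min(m, i + w) + 1):
                cost = np.linalg.norm(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    template = np.cumsum(np.random.randn(50, 2), axis=0)
    query = template[:30] + 0.1 * np.random.randn(30, 2)   # unfinished gesture
    print(dtw_banded(query, template[:35], window=8))
    ```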

  15. Effects of learning with gesture on children's understanding of a new language concept.

    Science.gov (United States)

    Wakefield, Elizabeth M; James, Karin H

    2015-08-01

    Asking children to gesture while being taught a concept facilitates their learning. Here, we investigated whether children benefitted equally from producing gestures that reflected speech (speech-gesture matches) versus gestures that complemented speech (speech-gesture mismatches), when learning the concept of palindromes. As in previous studies, we compared the utility of each gesture strategy to a speech alone strategy. Because our task was heavily based on language ability, we also considered children's phonological competency as a predictor of success at posttest. Across conditions, children who had low phonological competence were equally likely to perform well at posttest. However, gesture use was predictive of learning for children with high phonological competence: Those who produced either gesture strategy during training were more likely to learn than children who used a speech alone strategy. These results suggest that educators should be encouraged to use either speech-gesture match or mismatch strategies to aid learners, but that gesture may be especially beneficial to children who possess basic skills related to the new concept, in this case, phonological competency. Results also suggest that there are differences between the cognitive effects of naturally produced speech-gesture matches and mismatches, and those that are scripted and taught to children.

  16. Localization and Recognition of Dynamic Hand Gestures Based on Hierarchy of Manifold Classifiers

    Science.gov (United States)

    Favorskaya, M.; Nosov, A.; Popov, A.

    2015-05-01

    Generally, dynamic hand gestures are captured in continuous video sequences, and a gesture recognition system ought to extract robust features automatically. This task involves the highly challenging spatio-temporal variations of dynamic hand gestures. The proposed method is based on two-level manifold classifiers, including trajectory classifiers at all time instants and posture classifiers of sub-gestures at selected time instants. The trajectory classifiers contain a skin detector, a normalized skeleton representation of one or two hands, and a motion history represented by motion vectors normalized over predetermined directions (8 and 16 in our case). Each dynamic gesture is separated into a set of sub-gestures in order to predict a trajectory and remove those gesture samples which do not fit the current trajectory. The posture classifiers involve the normalized skeleton representation of palm and fingers and relative finger positions using fingertips. The min-max criterion is used for trajectory recognition, and the decision tree technique is applied for posture recognition of sub-gestures. For the experiments, the dataset "Multi-modal Gesture Recognition Challenge 2013: Dataset and Results", including 393 dynamic hand gestures, was chosen. The proposed method yielded 84-91% recognition accuracy, on average, for a restricted set of dynamic gestures.

  17. LOCALIZATION AND RECOGNITION OF DYNAMIC HAND GESTURES BASED ON HIERARCHY OF MANIFOLD CLASSIFIERS

    Directory of Open Access Journals (Sweden)

    M. Favorskaya

    2015-05-01

    Full Text Available Generally, dynamic hand gestures are captured in continuous video sequences, and a gesture recognition system ought to extract robust features automatically. This task involves the highly challenging spatio-temporal variations of dynamic hand gestures. The proposed method is based on two-level manifold classifiers, including trajectory classifiers at all time instants and posture classifiers of sub-gestures at selected time instants. The trajectory classifiers contain a skin detector, a normalized skeleton representation of one or two hands, and a motion history represented by motion vectors normalized over predetermined directions (8 and 16 in our case). Each dynamic gesture is separated into a set of sub-gestures in order to predict a trajectory and remove those gesture samples which do not fit the current trajectory. The posture classifiers involve the normalized skeleton representation of palm and fingers and relative finger positions using fingertips. The min-max criterion is used for trajectory recognition, and the decision tree technique is applied for posture recognition of sub-gestures. For the experiments, the dataset “Multi-modal Gesture Recognition Challenge 2013: Dataset and Results”, including 393 dynamic hand gestures, was chosen. The proposed method yielded 84–91% recognition accuracy, on average, for a restricted set of dynamic gestures.

  18. Early communicative gestures and play as predictors of language development in children born with and without family risk for dyslexia.

    Science.gov (United States)

    Unhjem, Astrid; Eklund, Kenneth; Nergård-Nilssen, Trude

    2014-08-01

    The present study investigated early communicative gestures, play, and language skills in children born with family risk for dyslexia (FR) and a control group of children without this inheritable risk at ages 12, 15, 18, and 24 months. Participants were drawn from the Tromsø Longitudinal study of Dyslexia (TLD) which follows children's cognitive and language development from age 12 months through Grade 2 in order to identify early markers of developmental dyslexia. Results showed that symbolic play and parent reported play at age 12 months and communicative gestures at age 15 months explained 61% of the variance in productive language at 24 months in the FR group. These early nonlinguistic measures seem to be potentially interesting markers of later language development in children born at risk for dyslexia.

  19. Extraction of Spatial-Temporal Features for Vision-Based Gesture Recognition

    Institute of Scientific and Technical Information of China (English)

    HUANG YU; XU Guangyou; ZHU Yuanxin

    2000-01-01

    One of the key problems in a vision-based gesture recognition system is the extraction of the spatial-temporal features of gesturing. In this paper, an approach of motion-based segmentation is proposed to realize this task. A direct method, combined with a robust M-estimator, is used to estimate the affine parameters of the gesturing motion, and based on the dominant motion model the gesturing region - i.e., the dominant object - is extracted. The spatial-temporal features of gestures can thus be extracted. Finally, the dynamic time warping (DTW) method is used directly to perform matching of 12 control gestures (6 for "translation" orders, 6 for "rotation" orders). A small demonstration system has been set up to verify the method, in which a panorama image viewer (built by mosaicing a sequence of standard "Garden" images) can be controlled with recognized gestures instead of a 3-D mouse tool.
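
    The affine motion model in this kind of segmentation maps each point (x, y) to (a1 + a2·x + a3·y, a4 + a5·x + a6·y). As a simplified illustration of estimating the six parameters - here from point correspondences by ordinary least squares, rather than the paper's direct, M-estimator-weighted method - consider:

    ```python
    # Least-squares fit of a 6-parameter affine motion model from point
    # correspondences. Simplified stand-in for the paper's direct method with a
    # robust M-estimator: plain least squares, no robust reweighting.
    import numpy as np

    def fit_affine(src, dst):
        """src, dst: (N, 2) matched points. Returns (A, t) with dst ~ src @ A.T + t."""
        n = len(src)
        M = np.zeros((2 * n, 6))
        M[0::2, 0:2] = src; M[0::2, 2] = 1.0   # rows for x' = a11*x + a12*y + tx
        M[1::2, 3:5] = src; M[1::2, 5] = 1.0   # rows for y' = a21*x + a22*y + ty
        p, *_ = np.linalg.lstsq(M, dst.reshape(-1), rcond=None)
        A = np.array([[p[0], p[1]], [p[3], p[4]]])
        t = np.array([p[2], p[5]])
        return A, t

    src = np.random.rand(20, 2)
    A_true, t_true = np.array([[1.0, 0.1], [-0.1, 1.0]]), np.array([0.5, -0.2])
    dst = src @ A_true.T + t_true
    A, t = fit_affine(src, dst)
    print(np.allclose(A, A_true), np.allclose(t, t_true))
    ```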

  20. Most probable longest common subsequence for recognition of gesture character input.

    Science.gov (United States)

    Frolova, Darya; Stern, Helman; Berman, Sigal

    2013-06-01

    This paper presents a technique for trajectory classification with applications to dynamic free-air hand gesture recognition. Such gestures are unencumbered and drawn in free air. Our approach is an extension to the longest common subsequence (LCS) classification algorithm. A learning preprocessing stage is performed to create a probabilistic 2-D template for each gesture, which allows different trajectory distortions to be taken into account with different probabilities. The modified LCS, termed the most probable LCS (MPLCS), is developed to measure the similarity between the probabilistic template and the hand gesture sample. The final decision is based on the length and probability of the extracted subsequence. Validation tests using a cohort of gesture digits from video-based capture show that the approach is promising, with a recognition rate of more than 98% for pre-isolated digits from the video stream. The MPLCS algorithm can be integrated into a gesture recognition interface to facilitate gesture character input, which can greatly enhance the usability of such interfaces.
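
    The MPLCS builds on the classic longest-common-subsequence dynamic program, which scores how much of one symbol sequence can be matched, in order, inside another. For reference, here is the standard LCS recurrence the extension starts from; the probabilistic template matching itself is not reproduced.

    ```python
    # Classic LCS dynamic program over two symbol sequences (e.g., quantized
    # trajectory directions). The probabilistic MPLCS extension is not shown.
    def lcs_length(a, b):
        n, m = len(a), len(b)
        # dp[i][j] = LCS length of a[:i] and b[:j]
        dp = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                if a[i - 1] == b[j - 1]:
                    dp[i][j] = dp[i - 1][j - 1] + 1
                else:
                    dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
        return dp[n][m]

    print(lcs_length("NNEESSW", "NESESW"))  # match quantized stroke directions
    ```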

  1. Words and gestures: infants' interpretations of different forms of symbolic reference.

    Science.gov (United States)

    Namy, L L; Waxman, S R

    1998-04-01

    In 3 experiments, we examine the relation between language acquisition and other symbolic abilities in the early stages of language acquisition. We introduce 18- and 26-month-olds to object categories (e.g., fruit, vehicles) using a novel word or a novel symbolic gesture to name the objects. We compare the influence of these two symbolic forms on infants' object categorization. Children at both ages interpreted novel words as names for object categories. However, infants' interpretations of gestures changed over development. At 18 months, infants spontaneously interpreted gestures, like words, as names for object categories; at 26 months, infants spontaneously interpreted words but not gestures as names. The older infants succeeded in interpreting novel gestures as names only when given additional practice with the gestural medium. This clear developmental pattern supports the prediction that an initial general ability to learn symbols (both words and gestures) develops into a more focused tendency to use words as the predominant symbolic form.

  2. Increased androgenic sensitivity in the hind limb muscular system marks the evolution of a derived gestural display

    Science.gov (United States)

    Mangiamele, Lisa A.; Fuxjager, Matthew J.; Schuppe, Eric R.; Taylor, Rebecca S.; Hödl, Walter; Preininger, Doris

    2016-01-01

    Physical gestures are prominent features of many species’ multimodal displays, yet how evolution incorporates body and leg movements into animal signaling repertoires is unclear. Androgenic hormones modulate the production of reproductive signals and sexual motor skills in many vertebrates; therefore, one possibility is that selection for physical signals drives the evolution of androgenic sensitivity in select neuromotor pathways. We examined this issue in the Bornean rock frog (Staurois parvus, family: Ranidae). Males court females and compete with rivals by performing both vocalizations and hind limb gestural signals, called “foot flags.” Foot flagging is a derived display that emerged in the ranids after vocal signaling. Here, we show that administration of testosterone (T) increases foot flagging behavior under seminatural conditions. Moreover, using quantitative PCR, we also find that adult male S. parvus maintain a unique androgenic phenotype, in which androgen receptor (AR) in the hind limb musculature is expressed at levels ∼10× greater than in two other anuran species, which do not produce foot flags (Rana pipiens and Xenopus laevis). Finally, because males of all three of these species solicit mates with calls, we accordingly detect no differences in AR expression in the vocal apparatus (larynx) among taxa. The results show that foot flagging is an androgen-dependent gestural signal, and its emergence is associated with increased androgenic sensitivity within the hind limb musculature. Selection for this novel gestural signal may therefore drive the evolution of increased AR expression in key muscles that control signal production to support adaptive motor performance. PMID:27143723

  3. Creating gesture controlled games for robot-assisted stroke rehabilitation

    NARCIS (Netherlands)

    Basteris, A.; Johansson, E.; Klein, P.; Nasr, N.; Nijenhuis, S.; Sale, P.; Schätzlein, F.; Stienen, A.H.A.

    2014-01-01

    Regular training exercises are fundamental to regaining functional use of arm and hand control after a stroke. With the SCRIPT system, the patient can practice hand exercises independently at home by playing gesture-controlled games using a robotic glove (orthosis). The system could offer prolonged rehabilitation…

  4. Dissociating linguistic and nonlinguistic gestural communication in the brain.

    Science.gov (United States)

    MacSweeney, Mairéad; Campbell, Ruth; Woll, Bencie; Giampietro, Vincent; David, Anthony S; McGuire, Philip K; Calvert, Gemma A; Brammer, Michael J

    2004-08-01

    Gestures of the face, arms, and hands are components of signed languages used by Deaf people. Signaling codes, such as the racecourse betting code known as Tic Tac, are also made up of such gestures. Tic Tac lacks the phonological structure of British Sign Language (BSL) but is similar in terms of its visual and articulatory components. Using fMRI, we compared the neural correlates of viewing a gestural language (BSL) and a manual-brachial code (Tic Tac) relative to a low-level baseline task. We compared three groups: Deaf native signers, hearing native signers, and hearing nonsigners. None of the participants had any knowledge of Tic Tac. All three groups activated an extensive frontal-posterior network in response to both types of stimuli. Superior temporal cortex, including the planum temporale, was activated bilaterally in response to both types of gesture in all groups, irrespective of hearing status. The engagement of these traditionally auditory processing regions was greater in Deaf than hearing participants. These data suggest that the planum temporale may be responsive to visual movement in both deaf and hearing people, yet when hearing is absent early in development, the visual processing role of this region is enhanced. Greater activation for BSL than Tic Tac was observed in signers, but not in nonsigners, in the left posterior superior temporal sulcus and gyrus, extending into the supramarginal gyrus. This suggests that the left posterior perisylvian cortex is of fundamental importance to language processing, regardless of the modality in which it is conveyed.

  5. Mothers Respond Differently to Infants' Gestural versus Nongestural Communicative Bids

    Science.gov (United States)

    Olson, Janet; Masur, Elise Frank

    2013-01-01

    Thirty infants at 1;1 and their mothers were videotaped while playing for 18 minutes. Experimental stimuli were presented in three communicative intent contexts--proto-declarative, proto-imperative, and ambiguous--to elicit infant communicative bids that did and did not contain gestures. Mothers' responses were analyzed, and their verbal responses…

  6. Merging of phonological and gestural circuits in early language evolution.

    Science.gov (United States)

    Aboitiz, Francisco; García, Ricardo

    2009-01-01

    In the monkey, cortical auditory projections subdivide into a dorsal stream mostly involved in spatiotemporal processing, that projects mainly to dorsal frontal areas; and a ventral stream involved in stimulus identification, connected to the ventrolateral prefrontal cortex (VLPFC). We propose that in the human lineage, part of the dorsal auditory pathway has specialized in vocalization processing, enhancing vocal repetition and short-term memory capacities that are crucial for linguistic development. In the human, the vocalization-related dorsal auditory component tends to converge in the VLPFC with the ventral auditory stream and with projections involved in gestural control; and consists of a direct connection between the auditory cortex and the VLPFC via the arcuate fasciculus, and an indirect pathway via the supramarginal gyrus. Additionally, intraparietal and inferior parietal afferents to the VLPFC are associated with communicative hand gestures, with manipulation skills and with early tool-making. Although in general terms compatible with the mirror-neuron gestural hypothesis for language origins, this proposal underlines the participation of the dorsal auditory pathway in voice processing as a key event that marked the beginning of human phonology and the subsequent evolution of language. Instead, the mirror neuron system for gestures and the primitive vocalization network (ventral pathway) contributed to provide a communicative scaffolding that facilitated the emergence of human-like phonology. Furthermore, we emphasize the phylogenetic continuity (homology) between non-human and human vocalization and their neural substrates, something that is not usually stressed in the mirror neuron perspective.

  7. Behand: augmented virtuality gestural interaction for mobile phones

    DEFF Research Database (Denmark)

    Caballero, Luz; Chang, Ting-Ray; Menendez Blanco, Maria

    2010-01-01

    This paper introduces Behand. Behand is a new way of interaction that allows a mobile phone user to manipulate virtual three-dimensional objects inside the phone by gesturing with his hand. Behand provides a straightforward 3D interface, something current mobile phones do not offer, and extends t...

  8. Deep Dynamic Neural Networks for Multimodal Gesture Segmentation and Recognition.

    Science.gov (United States)

    Wu, Di; Pigou, Lionel; Kindermans, Pieter-Jan; Le, Nam Do-Hoang; Shao, Ling; Dambre, Joni; Odobez, Jean-Marc

    2016-08-01

    This paper describes a novel method called Deep Dynamic Neural Networks (DDNN) for multimodal gesture recognition. A semi-supervised hierarchical dynamic framework based on a Hidden Markov Model (HMM) is proposed for simultaneous gesture segmentation and recognition where skeleton joint information, depth and RGB images, are the multimodal input observations. Unlike most traditional approaches that rely on the construction of complex handcrafted features, our approach learns high-level spatio-temporal representations using deep neural networks suited to the input modality: a Gaussian-Bernoulli Deep Belief Network (DBN) to handle skeletal dynamics, and a 3D Convolutional Neural Network (3DCNN) to manage and fuse batches of depth and RGB images. This is achieved through the modeling and learning of the emission probabilities of the HMM required to infer the gesture sequence. This purely data-driven approach achieves a Jaccard index score of 0.81 in the ChaLearn LAP gesture spotting challenge. The performance is on par with a variety of state-of-the-art hand-tuned feature-based approaches and other learning-based methods, therefore opening the door to the use of deep learning techniques in order to further explore multimodal time series data.
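
    In hybrid models of this kind, the neural networks supply per-frame emission scores and the HMM layer infers the most likely state sequence; that inference step is the standard Viterbi recursion. A generic sketch of the decoding step follows (the deep networks themselves and the paper's exact state topology are omitted):

    ```python
    # Generic Viterbi decoding over log-domain emission scores, as used when an
    # HMM infers a state sequence from per-frame neural-network outputs.
    import numpy as np

    def viterbi(log_emissions, log_trans, log_prior):
        """log_emissions: (T, S); log_trans: (S, S); log_prior: (S,)."""
        T, S = log_emissions.shape
        delta = log_prior + log_emissions[0]
        backptr = np.zeros((T, S), dtype=int)
        for t in range(1, T):
            scores = delta[:, None] + log_trans          # (from_state, to_state)
            backptr[t] = np.argmax(scores, axis=0)
            delta = scores[backptr[t], np.arange(S)] + log_emissions[t]
        path = [int(np.argmax(delta))]
        for t in range(T - 1, 0, -1):
            path.append(int(backptr[t][path[-1]]))
        return path[::-1]

    T, S = 10, 4
    log_em = np.log(np.random.dirichlet(np.ones(S), size=T))
    log_tr = np.log(np.full((S, S), 1.0 / S))            # uniform stand-ins
    print(viterbi(log_em, log_tr, np.log(np.full(S, 1.0 / S))))
    ```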

  9. Effects of a robotic storyteller's moody gestures on storytelling perception

    NARCIS (Netherlands)

    Xu, J.; Broekens, J.; Hindriks, K.; Neerincx, M.A.

    2015-01-01

    A parameterized behavior model was developed for robots to show mood during task execution. In this study, we applied the model to the coverbal gestures of a robotic storyteller. This study investigated whether parameterized mood expression can 1) show mood that is changing over time; 2) reinforce…

  10. Gestural Abilities of Children with Specific Language Impairment

    Science.gov (United States)

    Wray, Charlotte; Norbury, Courtenay Frazier; Alcock, Katie

    2016-01-01

    Background: Specific language impairment (SLI) is diagnosed when language is significantly below chronological age expectations in the absence of other developmental disorders, sensory impairments or global developmental delays. It has been suggested that gesture may enhance communication in children with SLI by providing an alternative means to…

  11. Hand Gesture and Neural Network Based Human Computer Interface

    Directory of Open Access Journals (Sweden)

    Aekta Patel

    2014-06-01

    Full Text Available Computers are used by everyone, whether at work or at home. Our aim is to make computers understand human language and to develop user-friendly human-computer interfaces (HCI). Human gestures are perceived by vision, and this research is about recognizing human gestures to create an HCI; coding these gestures into machine language demands a complex programming algorithm. In this project, we first detect, recognize and pre-process the hand gestures using a general method of recognition. We then compute the recognized image's properties and use them to control mouse movement, clicking, and the VLC media player. After that, we implemented the same functions using a neural network technique and compared it with the general recognition method, from which we conclude that the neural network technique is better. We show results based on the neural network technique and a comparison between the neural network and general methods.

  12. Generating Control Commands From Gestures Sensed by EMG

    Science.gov (United States)

    Wheeler, Kevin R.; Jorgensen, Charles

    2006-01-01

    An effort is under way to develop noninvasive neuro-electric interfaces through which human operators could control systems as diverse as simple mechanical devices, computers, aircraft, and even spacecraft. The basic idea is to use electrodes on the surface of the skin to acquire electromyographic (EMG) signals associated with gestures, digitize and process the EMG signals to recognize the gestures, and generate digital commands to perform the actions signified by the gestures. In an experimental prototype of such an interface, the EMG signals associated with hand gestures are acquired by use of several pairs of electrodes mounted in sleeves on a subject's forearm (see figure). The EMG signals are sampled and digitized. The resulting time-series data are fed as input to pattern-recognition software that has been trained to distinguish gestures from a given gesture set. The software implements, among other things, hidden Markov models, which are used to recognize the gestures as they are being performed in real time. Thus far, two experiments have been performed on the prototype interface to demonstrate feasibility: an experiment in synthesizing the output of a joystick and an experiment in synthesizing the output of a computer or typewriter keyboard. In the joystick experiment, the EMG signals were processed into joystick commands for a realistic flight simulator for an airplane. The acting pilot reached out into the air, grabbed an imaginary joystick, and pretended to manipulate the joystick to achieve left and right banks and up and down pitches of the simulated airplane. In the keyboard experiment, the subject pretended to type on a numerical keypad, and the EMG signals were processed into keystrokes. The results of the experiments demonstrate the basic feasibility of this method while indicating the need for further research to reduce the incidence of errors (including confusion among gestures). Topics that must be addressed include the numbers and arrangements of electrodes.

  13. Perception of initial obstruent voicing is influenced by gestural organization

    Science.gov (United States)

    Best, Catherine T.; Hallé, Pierre A.

    2009-01-01

    Cross-language differences in phonetic settings for phonological contrasts of stop voicing have posed a challenge for attempts to relate specific phonological features to specific phonetic details. We probe the phonetic-phonological relationship for voicing contrasts more broadly, analyzing in particular their relevance to nonnative speech perception, from two theoretical perspectives: feature geometry and articulatory phonology. Because these perspectives differ in assumptions about temporal/phasing relationships among features/gestures within syllable onsets, we undertook a cross-language investigation on perception of obstruent (stop, fricative) voicing contrasts in three nonnative onsets that use a common set of features/gestures but with differing time-coupling. Listeners of English and French, which differ in their phonetic settings for word-initial stop voicing distinctions, were tested on perception of three onset types, all nonnative to both English and French, that differ in how initial obstruent voicing is coordinated with a lateral feature/gesture and additional obstruent features/gestures. The targets, listed from least complex to most complex onsets, were: a lateral fricative voicing distinction (Zulu /ɬ/-/ɮ/), a laterally-released affricate voicing distinction (Tlingit /tɬ/-/dɮ/), and a coronal stop voicing distinction in stop+/l/ clusters (Hebrew /tl/-/dl/). English and French listeners' performance reflected the differences in their native languages' stop voicing distinctions, compatible with prior perceptual studies on singleton consonant onsets. However, both groups' abilities to perceive voicing as a separable parameter also varied systematically with the structure of the target onsets, supporting the notion that the gestural organization of syllable onsets systematically affects perception of initial voicing distinctions. PMID:20228878

  14. Perception of initial obstruent voicing is influenced by gestural organization.

    Science.gov (United States)

    Best, Catherine T; Hallé, Pierre A

    2010-01-01

    Cross-language differences in phonetic settings for phonological contrasts of stop voicing have posed a challenge for attempts to relate specific phonological features to specific phonetic details. We probe the phonetic-phonological relationship for voicing contrasts more broadly, analyzing in particular their relevance to nonnative speech perception, from two theoretical perspectives: feature geometry and articulatory phonology. Because these perspectives differ in assumptions about temporal/phasing relationships among features/gestures within syllable onsets, we undertook a cross-language investigation on perception of obstruent (stop, fricative) voicing contrasts in three nonnative onsets that use a common set of features/gestures but with differing time-coupling. Listeners of English and French, which differ in their phonetic settings for word-initial stop voicing distinctions, were tested on perception of three onset types, all nonnative to both English and French, that differ in how initial obstruent voicing is coordinated with a lateral feature/gesture and additional obstruent features/gestures. The targets, listed from least complex to most complex onsets, were: a lateral fricative voicing distinction (Zulu /ɬ/-/ɮ/), a laterally-released affricate voicing distinction (Tlingit /tɬ/-/dɮ/), and a coronal stop voicing distinction in stop+/l/ clusters (Hebrew /tl/-/dl/). English and French listeners' performance reflected the differences in their native languages' stop voicing distinctions, compatible with prior perceptual studies on singleton consonant onsets. However, both groups' abilities to perceive voicing as a separable parameter also varied systematically with the structure of the target onsets, supporting the notion that the gestural organization of syllable onsets systematically affects perception of initial voicing distinctions.

  15. Predicting an Individual’s Gestures from the Interlocutor’s Co-occurring Gestures and Related Speech

    DEFF Research Database (Denmark)

    Navarretta, Costanza

    2016-01-01

    … features of head movements and facial expressions contributes to the identification of the presence and shape of head movements and facial expressions, respectively. Speech only contributes to prediction in the case of facial expressions. The obtained results show that the gestures of the interlocutors…

  16. Handling agents and patients: representational cospeech gestures help children comprehend complex syntactic constructions.

    Science.gov (United States)

    Theakston, Anna L; Coates, Anna; Holler, Judith

    2014-07-01

    Gesture is an important precursor of children's early language development, for example, in the transition to multiword speech and as a predictor of later language abilities. However, it is unclear whether gestural input can influence children's comprehension of complex grammatical constructions. In Study 1, 3- (M = 3 years 5 months) and 4-year-old (M = 4 years 6 months) children witnessed 2-participant actions described using the infrequent object-cleft-construction (OCC; It was the dog that the cat chased). Half saw an experimenter accompanying her descriptions with gestures representing the 2 participants and indicating the direction of action; the remaining children did not witness gesture. Children who witnessed gestures showed better comprehension of the OCC than those who did not witness gestures, both in and beyond the immediate physical context, but this benefit was restricted to the oldest 4-year-olds. In Study 2, a further group of older 4-year-old children (M = 4 years 7 months) witnessed the same 2-participant actions described by an experimenter and accompanied by gestures, but the gesture represented only the 2 participants and not the direction of the action. Again, a benefit of gesture was observed on subsequent comprehension of the OCC. We interpret these findings as demonstrating that representational cospeech gestures can help children comprehend complex linguistic structures by highlighting the roles played by the participants in the event. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  17. Coronary Heart Disease Preoperative Gesture Interactive Diagnostic System Based on Augmented Reality.

    Science.gov (United States)

    Zou, Yi-Bo; Chen, Yi-Min; Gao, Ming-Ke; Liu, Quan; Jiang, Si-Yu; Lu, Jia-Hui; Huang, Chen; Li, Ze-Yu; Zhang, Dian-Hua

    2017-08-01

    Coronary heart disease preoperative diagnosis plays an important role in the treatment of vascular interventional surgery. In practice, most doctors diagnose the position of a vascular stenosis and then empirically estimate its degree from selective coronary angiography images, rather than using a mouse, keyboard and computer during preoperative diagnosis. This invasive diagnostic modality lacks intuitive and natural interaction, and the results are not accurate enough. Aiming at the above problems, a coronary heart disease preoperative gesture-interactive diagnostic system based on Augmented Reality is proposed. The system uses a Leap Motion Controller to capture hand gesture video sequences and extract features, namely the position and orientation vectors of the gesture motion trajectory and the changes of hand shape. The training planet is determined by the K-means algorithm, and the effect of gesture training is then improved by using multiple features and multiple observation sequences for training. The reusability of gestures is improved by establishing a state transition model. The algorithm's efficiency is improved by gesture prejudgment, in which a threshold discriminates candidates before recognition. The integrity of the trajectory is preserved, and the gesture motion space is extended, by employing a space rotation transformation of the gesture manipulation plane. Ultimately, gesture recognition based on SRT-HMM is realized. The diagnosis and measurement of vascular stenosis are intuitively and naturally realized by operating and measuring the coronary artery model with augmented reality and gesture interaction techniques. The gesture recognition experiments show the discrimination and generalization ability of the algorithm, and the gesture interaction experiments prove the availability and reliability of the system.
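
    As a generic illustration of the K-means step named in the abstract - clustering gesture feature vectors so each frame can be mapped to a discrete symbol for HMM training - here is a scikit-learn sketch; the feature layout and the number of clusters are assumptions, not the paper's values.

    ```python
    # Sketch: K-means quantization of per-frame gesture features into discrete
    # symbols suitable as HMM observations. Cluster count is an assumed value.
    import numpy as np
    from sklearn.cluster import KMeans

    frames = np.random.randn(500, 6)   # stand-in: position + orientation per frame
    kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(frames)

    def to_symbols(seq):
        """Map a (T, 6) feature sequence to a length-T discrete symbol sequence."""
        return kmeans.predict(seq)

    print(to_symbols(frames[:10]))
    ```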

  18. Feature Extraction of Gesture Recognition Based on Image Analysis for Different Environmental Conditions

    Directory of Open Access Journals (Sweden)

    Rahul A. Dedakiya

    2015-05-01

    Full Text Available Gesture recognition systems have received great attention in recent years because of their manifold applications and the ability to interact with machines efficiently through human-computer interaction. Gesture is one of the human body languages popularly used in daily life: a communication system consisting of hand movements and facial expressions, i.e., communication by actions and sights. This research mainly focuses on gesture extraction and finger segmentation for gesture recognition. In this paper, we have used image analysis technologies to create an application coded as a MATLAB program, and we use this application to segment and extract the fingers from one specific gesture. The paper aims to perform gesture recognition under different natural conditions - dark and glare conditions, different distances, and similar-object conditions - and to collect the results to calculate the rate of successful extraction.
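
    The record gives no algorithmic detail, and the authors worked in MATLAB. As a rough Python/OpenCV stand-in for the segmentation step, a common approach is skin-color thresholding in the YCrCb space followed by contour extraction; the threshold bounds below are typical textbook values, not the paper's.

    ```python
    # Rough stand-in for finger/hand segmentation (the paper used MATLAB; this is
    # a Python/OpenCV sketch). YCrCb skin bounds are common textbook values.
    import cv2
    import numpy as np

    def segment_hand(bgr):
        ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
        lower = np.array([0, 133, 77], dtype=np.uint8)    # assumed skin range
        upper = np.array([255, 173, 127], dtype=np.uint8)
        mask = cv2.inRange(ycrcb, lower, upper)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return mask, None
        hand = max(contours, key=cv2.contourArea)         # largest blob = hand
        return mask, hand

    frame = cv2.imread("gesture.png")                     # hypothetical input file
    if frame is not None:
        mask, hand = segment_hand(frame)
    ```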

  19. Intelligent RF-Based Gesture Input Devices Implemented Using e-Textiles

    Directory of Open Access Journals (Sweden)

    Dana Hughes

    2017-01-01

    Full Text Available We present a radio-frequency (RF)-based approach to gesture detection and recognition, using e-textile versions of common transmission lines used in microwave circuits. This approach allows for easy fabrication of input swatches that can detect a continuum of finger positions, and similarly basic gestures, using a single measurement line. We demonstrate that the swatches can perform gesture detection when under thin layers of cloth or when weatherproofed, providing a high level of versatility not present with other types of approaches. Additionally, using small convolutional neural networks, low-level gestures can be identified with a high level of accuracy using a small, inexpensive microcontroller, allowing for an intelligent fabric that reports only gestures of interest, rather than a simple sensor requiring constant surveillance from an external computing device. The resulting e-textile smart composite has applications in controlling wearable devices by providing a simple, eyes-free mechanism to input simple gestures.
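
    For the classification stage, the record mentions small convolutional neural networks running on a microcontroller. As a sketch of the kind of compact 1-D CNN that could classify windows of samples from a single measurement line - the architecture, window length and class count are assumptions, and this is ordinary PyTorch rather than microcontroller code:

    ```python
    # Sketch of a compact 1-D CNN for classifying fixed-length windows from a
    # single RF measurement line. Architecture and sizes are assumptions; a real
    # deployment would quantize/export this for a microcontroller.
    import torch
    import torch.nn as nn

    class TinyGestureCNN(nn.Module):
        def __init__(self, n_classes=4, window=128):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(1, 8, kernel_size=5, padding=2), nn.ReLU(),
                nn.MaxPool1d(4),
                nn.Conv1d(8, 16, kernel_size=5, padding=2), nn.ReLU(),
                nn.MaxPool1d(4),
            )
            self.classifier = nn.Linear(16 * (window // 16), n_classes)

        def forward(self, x):                 # x: (batch, 1, window)
            f = self.features(x)
            return self.classifier(f.flatten(1))

    model = TinyGestureCNN()
    logits = model(torch.randn(2, 1, 128))    # two dummy windows
    print(logits.shape)                       # -> torch.Size([2, 4])
    ```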

  20. Intelligent RF-Based Gesture Input Devices Implemented Using e-Textiles †

    Science.gov (United States)

    Hughes, Dana; Profita, Halley; Radzihovsky, Sarah; Correll, Nikolaus

    2017-01-01

    We present a radio-frequency (RF)-based approach to gesture detection and recognition, using e-textile versions of common transmission lines used in microwave circuits. This approach allows for easy fabrication of input swatches that can detect a continuum of finger positions, and similarly basic gestures, using a single measurement line. We demonstrate that the swatches can perform gesture detection when under thin layers of cloth or when weatherproofed, providing a high level of versatility not present with other types of approaches. Additionally, using small convolutional neural networks, low-level gestures can be identified with a high level of accuracy using a small, inexpensive microcontroller, allowing for an intelligent fabric that reports only gestures of interest, rather than a simple sensor requiring constant surveillance from an external computing device. The resulting e-textile smart composite has applications in controlling wearable devices by providing a simple, eyes-free mechanism to input simple gestures. PMID:28125010

  1. A basic gesture and motion format for virtual reality multisensory applications

    CERN Document Server

    Luciani, Annie; Couroussé, Damien; Castagné, Nicolas; Cadoz, Claude; Florens, Jean-Loup

    2010-01-01

    The question of encoding movements such as those produced by human gestures may become central in the coming years, given the growing importance of movement data exchanges between heterogeneous systems and applications (musical applications, 3D motion control, virtual reality interaction, etc.). For the past 20 years, various formats have been proposed for encoding movement, especially gestures. However, these formats were, to varying degrees, designed in the context of quite specific applications (character animation, motion capture, musical gesture, biomechanical concerns...). The article introduces a new file format, called GMS (for 'Gesture and Motion Signal'), which aims to be more low-level and generic by defining the minimal features a format carrying movement/gesture information needs, rather than by gathering all the information generally given by the existing formats. The article argues that, given its growing presence in virtual reality situations, the "gesture signal" itself must be encoded,...
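
    The record does not reproduce the GMS layout itself, so the Python sketch below is purely hypothetical: it only illustrates the "minimal features" idea of a low-level signal format, a small header (channel count, spatial dimension, sample rate) followed by raw frames. It is not the actual GMS specification.

        # Hypothetical minimal gesture/motion signal container; NOT the real
        # GMS format, whose specification is not given in this record.
        import struct

        def write_signal(path, frames, rate_hz):
            # frames: list of frames; each frame is a list of channels;
            # each channel is a tuple of floats (e.g. x, y, z).
            channels, dims = len(frames[0]), len(frames[0][0])
            with open(path, "wb") as f:
                f.write(struct.pack("<III", channels, dims, rate_hz))
                for frame in frames:
                    for channel in frame:
                        f.write(struct.pack("<%df" % dims, *channel))

        # Two 3-D markers sampled at 100 Hz:
        write_signal("wave.sig", [[(0.0, 0.1, 0.2), (1.0, 1.1, 1.2)]], 100)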

  2. Tapping-In-Place: Increasing the Naturalness of Immersive Walking-In-Place Locomotion Through Novel Gestural Input

    DEFF Research Database (Denmark)

    Nilsson, Niels Christian; Serafin, Stefania; Laursen, Morten Havmøller

    2013-01-01

    to a less natural walking experience. In this paper we present two novel forms of gestural input for WIP locomotion and describe a within subjects study comparing these to the traditional stepping gesture. The two gestures proposed are: a wiping gesture where the user alternately bends each knee, moving one...... lower leg backwards, and a tapping gesture where the user in turn lifts each heel without breaking contact with the ground. Visual feedback was delivered through a head-mounted display and auditory feedback was provided by means of a 24-channel surround sound system. The gestures were evaluated in terms...

  3. Electronic Hand Glove Through Gestures For Verbally Challenged Persons

    Directory of Open Access Journals (Sweden)

    Mukesh P. Mahajan

    2016-04-01

    Full Text Available This paper presents the design of an electronic hand glove that facilitates easier and better communication, through synthesized speech, for verbally challenged people. Typically, a speechless person communicates through sign language, which is not understood by the majority of people. The proposed system is designed to solve this problem: the finger gestures of the person wearing the glove are converted into synthesized speech to convey an audible message to others. Speech is typically accompanied by manual gestures. Many earlier systems were designed to let deaf and mute users interact with ordinary people, but those systems had many drawbacks and interruptions. We are designing a system through which even deaf, mute and blind users can communicate with each other without the help of ordinary people, helping them to interact with the outside world.

  4. Gesture recognition for smart home applications using portable radar sensors.

    Science.gov (United States)

    Wan, Qian; Li, Yiran; Li, Changzhi; Pal, Ranadip

    2014-01-01

    In this article, we consider the design of a human gesture recognition system based on pattern recognition of signatures from a portable smart radar sensor. Powered by AAA batteries, the smart radar sensor operates in the 2.4 GHz industrial, scientific and medical (ISM) band. We analyzed the feature space using principal components and application-specific time- and frequency-domain features extracted from radar signals for two different sets of gestures. We illustrate that a nearest-neighbor-based classifier can achieve greater than 95% accuracy for multi-class classification using 10-fold cross-validation when features are extracted based on magnitude differences and Doppler shifts, as compared to features extracted through orthogonal transformations. The reported results illustrate the potential of intelligent radars integrated with a pattern recognition system for high-accuracy smart home and health monitoring purposes.
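
    A minimal scikit-learn sketch of the evaluation protocol named above: a 1-nearest-neighbour classifier scored with 10-fold cross-validation. The random feature matrix is a stand-in for the paper's magnitude-difference and Doppler features, which are not reproduced here.

        # Nearest-neighbour classification with 10-fold cross-validation;
        # X and y are placeholders for the radar feature vectors and labels.
        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(0)
        X = rng.normal(size=(120, 6))      # stand-in feature vectors
        y = rng.integers(0, 4, size=120)   # stand-in gesture labels

        scores = cross_val_score(KNeighborsClassifier(n_neighbors=1), X, y, cv=10)
        print("10-fold accuracy: %.2f" % scores.mean())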

  5. Facial Gesture Recognition Using Correlation And Mahalanobis Distance

    CERN Document Server

    Kapoor, Supriya; Bhatia, Rahul

    2010-01-01

    Augmenting human-computer interaction with automated analysis and synthesis of facial expressions is a goal towards which much research effort has recently been devoted. Facial gesture recognition is one of the important components of natural human-machine interfaces; it may also be used in behavioural science, security systems and clinical practice. Although humans recognise facial expressions virtually without effort or delay, reliable expression recognition by machine is still a challenge. The facial expression recognition problem is challenging because different individuals display the same expression differently. This paper presents an overview of gesture recognition in real time using the concepts of correlation and Mahalanobis distance. We consider the six universal emotional categories, namely joy, anger, fear, disgust, sadness and surprise.
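
    As a sketch of the distance measure named in the title, the snippet below assigns a face-feature vector to the nearest of the six emotion classes under the Mahalanobis distance. The per-class means and covariances are assumed to have been estimated from training images beforehand, which is not shown.

        # Mahalanobis-distance classification over the six emotion classes.
        import numpy as np

        def mahalanobis(x, mean, cov_inv):
            d = x - mean
            return float(np.sqrt(d @ cov_inv @ d))

        def classify(x, class_stats):
            # class_stats: {"joy": (mean, inverse covariance), ...} for the
            # six universal emotions, estimated beforehand from training data.
            return min(class_stats, key=lambda c: mahalanobis(x, *class_stats[c]))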

  6. An improved HMM/SVM dynamic hand gesture recognition algorithm

    Science.gov (United States)

    Zhang, Yi; Yao, Yuanyuan; Luo, Yuan

    2015-10-01

    In order to improve the recognition rate and stability of dynamic hand gesture recognition, and to address the low accuracy of the classical HMM algorithm in training the B (observation probability) parameter, this paper proposes an improved HMM/SVM dynamic gesture recognition algorithm. In calculating the B parameter of the HMM model, the SVM algorithm, with its strong classification ability, is introduced: a sigmoid function converts the state output of the SVM into a probability, and this probability is treated as the observation probability of the HMM model. This optimizes the B parameter and improves the recognition rate of the system, while also enhancing the accuracy and real-time performance of the human-computer interaction. Experiments show that the algorithm is robust under complex background environments and varying illumination. The average recognition rate increased from 86.4% to 97.55%.
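
    A sketch of the hybrid step described above: the SVM's decision scores are squashed through a sigmoid and normalised into an observation-probability column for the HMM. The state-to-class pairing and the plain logistic squashing are assumptions; the paper's exact calibration is not reproduced.

        # SVM scores -> sigmoid -> normalised observation probabilities (B).
        import numpy as np
        from sklearn.svm import SVC

        def emission_probs(svm, frame):
            # One decision score per class/state (one-vs-rest SVC assumed).
            scores = svm.decision_function(frame.reshape(1, -1)).ravel()
            probs = 1.0 / (1.0 + np.exp(-scores))   # sigmoid per state
            return probs / probs.sum()              # one column of B

        # emission_probs() would replace the usual B-matrix lookup inside a
        # standard Viterbi or forward-backward routine.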

  7. Perspectives on gesture from music informatics, performance and aesthetics

    DEFF Research Database (Denmark)

    Jensen, Kristoffer; Frimodt-Møller, Søren; Grund, Cynthia

    2014-01-01

    This article chronicles the research of the Nordic Network of Music Informatics, Performance and Aesthetics (NNIMIPA), and shows how the milieux bridge the gap between the disciplines involved. As examples, three projects within NNIMIPA involving performance interaction examine the role of audio and gestures in emotional musical expression using motion capture, the visual and auditive cues musicians provide each other in an ensemble when rehearsing, and the decision processes involved when a musician coordinates with other musicians. These projects seek to combine and compare intuitions derived from low-tech instructional music workshops that rely heavily on the use of whole-body gestures with the insights provided by high-tech studies and formal logic models of the performing musician, not only with respect to the sound, but also with regard to the movements of the performer and the mechanisms...

  8. Feasibility of interactive gesture control of a robotic microscope

    Directory of Open Access Journals (Sweden)

    Antoni Sven-Thomas

    2015-09-01

    Full Text Available Robotic devices are becoming increasingly available in clinics. One example is the motorized surgical microscope. While there are different scenarios for using such devices for autonomous tasks, simple and reliable interaction with the device is key to acceptance by surgeons. We study how gesture tracking can be integrated within the setup of a robotic microscope. In our setup, a Leap Motion Controller is used to track hand motion and adjust the field of view accordingly. We demonstrate with a survey that moving the field of view over a specified course is possible even for untrained subjects. Our results indicate that touch-less interaction with robots carrying small, near-field gesture sensors is feasible and can be of use in clinical scenarios where robotic devices are used in direct proximity of patients and physicians.

  9. [George Herbert Mead. Thought as the conversation of interior gestures].

    Science.gov (United States)

    Quéré, Louis

    2010-01-01

    For George Herbert Mead, thinking amounts to holding an "inner conversation of gestures". Such a conception does not seem especially original at first glance. What makes it truly original is the "social-behavioral" approach of which it is a part, and particularly two ideas. The first is that the conversation in question is a conversation of gestures or attitudes; the second, that thought and reflexive intelligence arise from the internalization of an external process supported by the social mechanism of communication: that of conduct organization. It is important, then, to understand what distinguishes these ideas from those of the founder of behavioral psychology, John B. Watson, for whom thinking amounts to nothing other than subvocal speech.

  10. Primate vocalization, gesture, and the evolution of human language.

    Science.gov (United States)

    Arbib, Michael A; Liebal, Katja; Pika, Simone

    2008-12-01

    The performance of language is multimodal, not confined to speech. Review of monkey and ape communication demonstrates greater flexibility in the use of hands and body than for vocalization. Nonetheless, the gestural repertoire of any group of nonhuman primates is small compared with the vocabulary of any human language and thus, presumably, of the transitional form called protolanguage. We argue that it was the coupling of gestural communication with enhanced capacities for imitation that made possible the emergence of protosign to provide essential scaffolding for protospeech in the evolution of protolanguage. Similarly, we argue against a direct evolutionary path from nonhuman primate vocalization to human speech. The analysis refines aspects of the mirror system hypothesis on the role of the primate brain's mirror system for manual action in evolution of the human language-ready brain.

  11. Gesture Interaction Browser-Based 3D Molecular Viewer.

    Science.gov (United States)

    Virag, Ioan; Stoicu-Tivadar, Lăcrămioara; Crişan-Vida, Mihaela

    2016-01-01

    The paper presents an open source system that allows the user to interact with a 3D molecular viewer using associated hand gestures for rotating, scaling and panning the rendered model. The novelty of this approach is that the entire application is browser-based and doesn't require the installation of third-party plug-ins or additional software components in order to visualize the supported chemical file formats. This kind of solution is suitable for instructing users in less IT-oriented environments, like medicine or chemistry. For rendering various molecular geometries, our team used GLmol (a molecular viewer written in JavaScript). The interaction with the 3D models is performed with a Leap Motion controller that allows real-time tracking of the user's hand gestures. The first results confirmed that the resulting application leads to a better understanding of various translational bioinformatics problems in both biomedical research and education.

  12. Multi-Touch Screen Interfaces And Gesture Analysis: A Study

    Directory of Open Access Journals (Sweden)

    Mrudula Nimbarte

    2011-12-01

    Full Text Available The way we handle computers today will soon change. Future technology will allow us to interact with the computer on a different level from the current technology we are used to. The tools we now need to communicate with the computer, such as the mouse and the keyboard, will slowly disappear and be replaced with tools that are more comfortable and more natural for humans to use. That future is already here. The rate at which touch-screen hardware and applications are used is growing rapidly and will break new records in the near future. This new technology requires different ways of detecting inputs from the user: inputs made out of on-screen gestures rather than the clicking of buttons or scrolling of mouse wheels. In this paper we study the gestures defined for multi-touch screen interfaces, the methods used to detect them, and how they are passed on to other applications.

  13. Finger tips detection for two handed gesture recognition

    Science.gov (United States)

    Bhuyan, M. K.; Kar, Mithun Kumar; Neog, Debanga Raj

    2011-10-01

    In this paper, a novel algorithm is proposed for fingertip detection in view of two-handed static hand pose recognition. In our method, the fingertips of both hands are detected after detecting the hand regions by skin-color-based segmentation. First, the face is removed from the image using a Haar classifier, and subsequently the regions corresponding to the gesturing hands are isolated by a region-labeling technique. Next, the key geometric features characterizing the gesturing hands are extracted for both hands. Finally, for all possible/allowable finger movements, a probabilistic model is developed for pose recognition. The proposed method can be employed in a variety of applications such as sign language recognition and human-robot interaction.
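
    A sketch of the pre-processing chain described above, assuming standard OpenCV components: a Haar cascade erases the face so that skin-colour segmentation leaves only the gesturing hands, and the surviving blobs are region-labelled. The colour thresholds are illustrative.

        # Face removal + skin segmentation + region labelling (OpenCV).
        import cv2

        face_cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

        def hand_regions(bgr):
            gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
            for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
                bgr[y:y + h, x:x + w] = 0              # erase the face region
            ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
            skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
            n, labels = cv2.connectedComponents(skin)  # label remaining blobs
            return n - 1, labels                       # candidate hand regions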

  14. Detecting Key Inter-Joint Distances and Anthropometry Effects for Static Gesture Development using Microsoft Kinect

    Science.gov (United States)

    2013-09-01

    ...viewed as a subfield of gesture recognition [10]. By using a discriminant analysis, we were able to identify 9 out of 400 possible key inter-joint... INTRODUCTION: The development of such gesture recognition devices as Nintendo's Wii, Sony's PlayStation Move, and Microsoft's Kinect has given a new... the device according to users' needs. Many programs have been created within the realm of human body tracking, hand detection, gesture recognition, and...

  15. Gesture-Directed Sensor-Information Fusion for Communication in Hazardous Environments

    Science.gov (United States)

    2010-06-01

    ...sensors for gesture recognition [1], [2]. An important future step to enhance the effectiveness of the war fighter is to integrate CBRN and other... in addition to the standard eGlove magnetic and motion gesture recognition sensors. War fighters progressing through a battlespace are now providing... a camera for gesture recognition is absolutely not an option for a CBRN war fighter in a battlefield scenario. Multi-sensor fusion is commonly...

  16. Training experience in gestures affects the display of social gaze in baboons' communication with a human.

    Science.gov (United States)

    Bourjade, Marie; Canteloup, Charlotte; Meguerditchian, Adrien; Vauclair, Jacques; Gaunet, Florence

    2015-01-01

    Gaze behaviour, notably the alternation of gaze between distal objects and social partners that accompanies primates' gestural communication, is considered a standard indicator of intentionality. However, the developmental precursors of gaze behaviour in primates' communication are not well understood. Here, we capitalized on the training in gestures dispensed to olive baboons (Papio anubis) as a way of manipulating individual communicative experience with humans. We aimed to delineate the effects of such a training experience on the gaze behaviour displayed by the monkeys in relation to gestural requests. Using a food-requesting paradigm, we compared subjects trained in requesting gestures (i.e. trained subjects) to naïve subjects (i.e. control subjects) for their occurrences of (1) gaze behaviour, (2) requesting gestures and (3) temporal combination of gaze alternation with gestures. We found that training did not affect the frequencies of looking at the human's face, looking at food or alternating gaze. Hence, social gaze behaviour occurs independently of the amount of communicative experience with humans. However, trained baboons, which gestured more than control subjects, exhibited most gaze alternation combined with gestures, whereas control baboons did not. By reinforcing the display of gaze alternation along with gestures, we suggest that training may have served to enhance the communicative function of hand gestures. Finally, this study provides the first quantitative report of monkeys producing requesting gestures without explicit training by humans (controls). These results may open a window on the developmental mechanisms (i.e. incidental learning vs. training) underpinning gestural intentional communication in primates.

  17. A Kinect-based Gesture Recognition Approach for a Natural Human Robot Interface

    OpenAIRE

    Grazia Cicirelli; Carmela Attolico; Cataldo Guaragnella; Tiziana D'Orazio

    2015-01-01

    In this paper, we present a gesture recognition system for the development of a human-robot interaction (HRI) interface. Kinect cameras and the OpenNI framework are used to obtain real-time tracking of a human skeleton. Ten different gestures, performed by different persons, are defined. Quaternions of joint angles are first used as robust and significant features. Next, neural network (NN) classifiers are trained to recognize the different gestures. This work deals with different challenging...

  18. Spectral Collaborative Representation based Classification for Hand Gestures recognition on Electromyography Signals

    OpenAIRE

    Boyali, Ali

    2015-01-01

    In this study, we introduce a novel variant and application of Collaborative Representation based Classification in the spectral domain for recognition of hand gestures using raw surface electromyography signals. The intuitive use of spectral features is explained via circulant matrices. The proposed Spectral Collaborative Representation based Classification (SCRC) is able to recognize gestures with higher levels of accuracy for a fairly rich gesture set. The worst recognition result...

  19. The gestures ASL signers use tell us when they are ready to learn math

    OpenAIRE

    Goldin-Meadow, Susan; Shield, Aaron; Lenzen, Daniel; Herzig, Melissa; Padden, Carol

    2012-01-01

    The manual gestures that hearing children produce when explaining their answers to math problems predict whether they will profit from instruction in those problems. We ask here whether gesture plays a similar role in deaf children, whose primary communication system is in the manual modality. Forty ASL-signing deaf children explained their solutions to math problems and were then given instruction in those problems. Children who produced many gestures conveying different information from the...

  20. Effects of observing and producing deictic gestures on memory and learning in different age groups

    OpenAIRE

    Ouwehand, Kim

    2016-01-01

    The studies presented in this dissertation aimed to investigate whether observing or producing deictic gestures (i.e., pointing and tracing gestures to index a referent in space or a movement pathway) could facilitate memory and learning in children, young adults, and older adults. More specifically, regarding memory it was investigated whether the use of deictic gestures would improve performance on tasks targeting cognitive functions that are found to change with age (worki...

  1. Multi-touch rotation gestures : performance and ergonomics

    OpenAIRE

    Hoggan, Eve; Williamson, John; Nacenta, Miguel; Kristensson, Per Ola; Lehtiö, Anu

    2013-01-01

    This work was supported by the Engineering and Physical Sciences Research Council (EP/H027408/1), the Scottish Informatics and Computer Science Alliance, Max Planck Center for Visual Computing and Communications, Academy of Finland, Emil Aaltonen Foundation, and the Department of Computer Science, University of Helsinki. Rotations performed with the index finger and thumb involve some of the most complex motor actions among common multi-touch gestures, yet little is known about the factors...

  2. Children’s Interaction Ability Towards Multi-Touch Gestures

    Directory of Open Access Journals (Sweden)

    Nor Hidayah Hussain

    2016-12-01

    Full Text Available Modern, powerful multi-touch technology has gained attention among younger users. The devices are not limited to entertainment purposes but are also increasingly introduced for learning purposes at kindergartens and preschools. However, the number of studies that address kindergarten children's interaction with multi-touch gestures is still limited, even though such interactions hold great learning potential for children's developmental skills. This paper focuses on prioritizing children's interaction abilities with multi-touch gestures such as rotation, zoom-in and zoom-out. The study involved ten children aged four to six years at a kindergarten located in Kajang, Selangor, and used a direct observation technique. The findings show that three items from the aspects of motor and cognitive skills (touch input unable to reach screen sensitivity, unintentional touches, and fingers touching the object inaccurately) are the interaction abilities that should be prioritized. Thus, this study suggests that the development of an adaptive multi-touch gesture application should be adapted to children's motor and cognitive skills, besides the other aspects.

  3. A Natural Interaction Interface for UAVs Using Intuitive Gesture Recognition

    Science.gov (United States)

    Chandarana, Meghan; Trujillo, Anna; Shimada, Kenji; Allen, Danette

    2016-01-01

    The popularity of unmanned aerial vehicles (UAVs) is increasing as technological advancements boost their favorability for a broad range of applications. One application is science data collection. In fields like Earth and atmospheric science, researchers are seeking to use UAVs to augment their current portfolio of platforms and increase their access to geographic areas of interest. By increasing the number of data collection platforms, UAVs will significantly improve system robustness and allow for more sophisticated studies. Scientists would like to be able to deploy an available fleet of UAVs to fly a desired flight path and collect sensor data without needing to understand the complex low-level controls required to describe and coordinate such a mission. A natural interaction interface for a Ground Control System (GCS) using gesture recognition is developed to allow non-expert users (e.g., scientists) to define a complex flight path for a UAV using intuitive hand gesture inputs from a constructed gesture library. The GCS calculates the combined trajectory on-line, verifies the trajectory with the user, and sends it to the UAV controller to be flown.
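
    A toy sketch of the trajectory-assembly idea described above: each recognized gesture from the library maps to a flight-path segment, and the ground control system chains the segments into one path. The gesture names and segment shapes here are invented for illustration.

        # Chain per-gesture path segments into a single flight trajectory.
        SEGMENTS = {                       # illustrative gesture library
            "forward": [(10, 0, 0)],
            "climb":   [(5, 0, 5)],
            "turn":    [(5, 5, 0)],
        }

        def combine(gestures):
            path, end = [], (0.0, 0.0, 0.0)
            for g in gestures:
                for step in SEGMENTS[g]:   # shift segment to current endpoint
                    end = tuple(e + s for e, s in zip(end, step))
                    path.append(end)
            return path

        print(combine(["forward", "climb", "turn"]))  # -> three waypoints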

  4. HULL GESTURE AND RESISTANCE PREDICTION OF HIGH-SPEED VESSELS*

    Institute of Scientific and Technical Information of China (English)

    NI Chong-ben; ZHU Ren-chuan; MIAO Guo-ping; FAN Ju

    2011-01-01

    Since trim and sinkage are significant while vessels advance at high speed, the resistance predicted from restrained-model theory or experiment may not be the real resistance of a vessel during voyage. It is therefore necessary to take the influence of hull gesture (running attitude) into account when predicting the resistance of high-speed ships. In the present work, the resistance problem of high-speed ships is treated with viscous flow theory, and the dynamic mesh technique is adopted to follow the variation of the hull gesture of a high-speed vessel on voyage. Simulations of an S60 ship model and a trimaran moving at high speed in a towing tank were conducted using the above theory and technique. The corresponding numerical results are in good agreement with the experimental data, indicating that resistance prediction for high-speed vessels should take hull gesture into consideration and that the dynamic mesh method proposed here is effective in calculating the resistance of high-speed vessels.

  5. Kinesthetic Elementary Mathematics - Creating Flow with Gesture Modality

    Directory of Open Access Journals (Sweden)

    Jussi Okkonen

    2016-06-01

    Full Text Available Educational games for young children have boomed with the growing abundance of easy-to-use interfaces, especially on smartphones and tablets. In addition, most major gaming consoles boast multimodal interaction, including the more novel and immersive gesture-based or bodily interaction; a concept proved by masses of consumers, including young children. In this paper, we examine an elementary mathematics learning application that aims to promote a state of flow for children aged between 6 and 8 years. The application runs on a PC and uses the Microsoft Kinect sensor for motion tracking. It provides gamified approaches to teaching the number system from 0 to 20. Our underlying hypothesis is that kinesthetic learning methods supported by bodily interaction provide leverage to different types of learners. The paper describes the results of two sets (n1=23, n2=44) of pilot tests of the exercise application for PC and Kinect. The tools utilized include a short and simplified survey for the children, and another survey and an open-ended questionnaire for the teachers. Our key findings relate to the user experience of gesture-based interaction and show how the gesture modality promotes flow. Furthermore, we discuss our preliminary assessment of several learning-related themes.

  6. An arc-length warping algorithm for gesture recognition using quaternion representation.

    Science.gov (United States)

    Cifuentes, Jenny; Pham, Minh Tu; Moreau, Richard; Prieto, Flavio; Boulanger, Pierre

    2013-01-01

    This paper presents a new algorithm, called the Dynamic Arc-Length Warping (DALW) algorithm, for hand gesture recognition based on orientation data. In this algorithm, after calculating a quaternion for each orientation measurement, the DALW algorithm is used to obtain a similarity measure between different trajectories. We present the benefits of using quaternions alongside the implementation of Dynamic Arc-Length Warping to provide an optimized tool for gesture recognition. We show the advantages of this approach compared with other techniques. This tool can be used to distinguish similar and different gestures. An experimental validation is carried out to classify a series of simple human gestures.
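
    The DALW algorithm itself is not given in the record; the sketch below shows the two ingredients it builds on, a geodesic distance between unit quaternions and a dynamic-warping similarity between two orientation trajectories. Plain time-indexed DTW is used here; DALW's arc-length parameterization is not reproduced.

        # Quaternion distance + dynamic warping over orientation trajectories.
        import numpy as np

        def quat_dist(q1, q2):
            # Geodesic angle between unit quaternions (sign-invariant).
            return 2.0 * np.arccos(np.clip(abs(np.dot(q1, q2)), 0.0, 1.0))

        def warp_distance(seq_a, seq_b):
            n, m = len(seq_a), len(seq_b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    c = quat_dist(seq_a[i - 1], seq_b[j - 1])
                    D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]        # smaller = more similar gestures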

  7. Full-body gestures and movements recognition: user descriptive and unsupervised learning approaches in GDL classifier

    Science.gov (United States)

    Hachaj, Tomasz; Ogiela, Marek R.

    2014-09-01

    Gesture Description Language (GDL) is a classifier that enables syntactic description and real-time recognition of full-body gestures and movements. Gestures are described in a dedicated computer language named Gesture Description Language script (GDLs). In this paper we introduce new GDLs formalisms that enable recognition of selected classes of movement trajectories. The second novelty is a new unsupervised learning method with which it is possible to automatically generate GDLs descriptions. We have initially evaluated both proposed extensions of GDL and obtained very promising results. Both the novel methodology and the evaluation results are described in this paper.

  8. Dactyl Alphabet Gesture Recognition in a Video Sequence Using Microsoft Kinect

    Science.gov (United States)

    Artyukhin, S. G.; Mestetskiy, L. M.

    2015-05-01

    This paper presents an efficient framework for solving the problem of static gesture recognition based on data obtained from web cameras and the Kinect depth sensor (RGB-D data). Each gesture is given by a pair of images: a color image and a depth map. The database stores gestures by their feature descriptions, generated frame by frame for each gesture of the alphabet. The recognition algorithm takes as input a video sequence (a sequence of frames) for marking, puts each frame in correspondence with a gesture from the database, or decides that there is no suitable gesture in the database. First, classification of each frame of the video sequence is done separately, without interframe information. Then, a sequence of successive frames marked as the same gesture is grouped into a single static gesture. We propose a method for combined segmentation of the frame by depth map and RGB image. The primary segmentation is based on the depth map: it gives information about the position and a rough border of the hand. Then, based on the color image, the border is refined and an analysis of the hand shape is performed. The continuous skeleton method is used to generate features. We propose a method based on skeleton terminal branches, which makes it possible to determine the positions of the fingers and wrist. The classification feature for a gesture is a description of the positions of the fingers relative to the wrist. Experiments were carried out with the developed algorithm on the example of American Sign Language. An American Sign Language gesture has several components, including the shape of the hand, its orientation in space and the type of movement. The accuracy of the proposed method is evaluated on the collected gesture base consisting of 2700 frames.

  9. Real-Time and Robust Method for Hand Gesture Recognition System Based on Cross-Correlation Coefficient

    OpenAIRE

    Azad, Reza; Azad, Babak; Kazerooni, Iman Tavakoli

    2014-01-01

    Hand gesture recognition possesses extensive applications in virtual reality, sign language recognition, and computer games. The direct interface of hand gestures provides us a new way for communicating with the virtual environment. In this paper a novel and real-time approach for hand gesture recognition system is presented. In the suggested method, first, the hand gesture is extracted from the main image by the image segmentation and morphological operation and then is sent to feature extra...

  10. Real-Time and Robust Method for Hand Gesture Recognition System Based on Cross-Correlation Coefficient

    OpenAIRE

    Reza Azad; Babak Azad; Iman tavakoli kazerooni

    2013-01-01

    Hand gesture recognition possesses extensive applications in virtual reality, sign language recognition, and computer games. The direct interface of hand gestures provides us a new way for communicating with the virtual environment. In this paper a novel and real-time approach for hand gesture recognition system is presented. In the suggested method, first, the hand gesture is extracted from the main image by the image segmentation and morphological operation and then is sent to feature extra...
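
    A sketch of the similarity measure named in the title of the two records above: a normalised cross-correlation coefficient between a segmented hand mask and per-gesture templates. The templates and the upstream segmentation/morphology steps are stand-ins for the paper's full pipeline.

        # Template matching by normalised cross-correlation coefficient.
        import numpy as np

        def ncc(a, b):
            a = (a - a.mean()) / (a.std() + 1e-9)
            b = (b - b.mean()) / (b.std() + 1e-9)
            return float((a * b).mean())

        def classify(hand_mask, templates):
            # templates: {gesture name: reference mask of the same shape}
            return max(templates, key=lambda g: ncc(hand_mask, templates[g]))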

  11. Real-Time Human Pose Estimation and Gesture Recognition from Depth Images Using Superpixels and SVM Classifier

    Directory of Open Access Journals (Sweden)

    Hanguen Kim

    2015-05-01

    Full Text Available In this paper, we present human pose estimation and gesture recognition algorithms that use only depth information. The proposed methods are designed to be operated with only a CPU (central processing unit), so that the algorithm can be operated on a low-cost platform, such as an embedded board. The human pose estimation method is based on an SVM (support vector machine) and superpixels without prior knowledge of a human body model. In the gesture recognition method, gestures are recognized from the pose information of a human body. To recognize gestures regardless of motion speed, the proposed method utilizes the keyframe extraction method. Gesture recognition is performed by comparing input keyframes with keyframes in registered gestures. The gesture yielding the smallest comparison error is chosen as a recognized gesture. To prevent recognition of gestures when a person performs a gesture that is not registered, we derive the maximum allowable comparison errors by comparing each registered gesture with the other gestures. We evaluated our method using a dataset that we generated. The experiment results show that our method performs fairly well and is applicable in real environments.
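
    A sketch of the keyframe-matching rule the record describes: input keyframes are compared against every registered gesture, the smallest comparison error wins, and a per-gesture maximum allowable error (derived offline from inter-gesture comparisons) rejects unregistered movements. The pose distance here is a simple stand-in.

        # Keyframe comparison with rejection of unregistered gestures.
        import numpy as np

        def compare(keys_a, keys_b):
            # Mean pose distance over aligned keyframes (poses as flat arrays).
            return float(np.mean([np.linalg.norm(a - b)
                                  for a, b in zip(keys_a, keys_b)]))

        def recognize(input_keys, registered, max_error):
            best = min(registered, key=lambda g: compare(input_keys, registered[g]))
            err = compare(input_keys, registered[best])
            return best if err <= max_error[best] else None   # None: unregistered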

  12. Real-time human pose estimation and gesture recognition from depth images using superpixels and SVM classifier.

    Science.gov (United States)

    Kim, Hanguen; Lee, Sangwon; Lee, Dongsung; Choi, Soonmin; Ju, Jinsun; Myung, Hyun

    2015-05-26

    In this paper, we present human pose estimation and gesture recognition algorithms that use only depth information. The proposed methods are designed to be operated with only a CPU (central processing unit), so that the algorithm can be operated on a low-cost platform, such as an embedded board. The human pose estimation method is based on an SVM (support vector machine) and superpixels without prior knowledge of a human body model. In the gesture recognition method, gestures are recognized from the pose information of a human body. To recognize gestures regardless of motion speed, the proposed method utilizes the keyframe extraction method. Gesture recognition is performed by comparing input keyframes with keyframes in registered gestures. The gesture yielding the smallest comparison error is chosen as a recognized gesture. To prevent recognition of gestures when a person performs a gesture that is not registered, we derive the maximum allowable comparison errors by comparing each registered gesture with the other gestures. We evaluated our method using a dataset that we generated. The experiment results show that our method performs fairly well and is applicable in real environments.

  13. Gesturing during mental problem solving reduces eye movements, especially for individuals with lower visual working memory capacity

    NARCIS (Netherlands)

    Pouw, Wim T J L; Mavilidi, Myrto Foteini; van Gog, Tamara; Paas, Fred

    2016-01-01

    Non-communicative hand gestures have been found to benefit problem-solving performance. These gestures seem to compensate for limited internal cognitive capacities, such as visual working memory capacity. Yet, it is not clear how gestures might perform this cognitive function. One hypothesis is that

  14. Gesturing during mental problem solving reduces eye movements, especially for individuals with lower visual working memory capacity

    NARCIS (Netherlands)

    W.T.J.L. Pouw (Wim); M.-F. Mavilidi (Myrto-Foteini); T.A.J.M. van Gog (Tamara); G.W.C. Paas (Fred)

    2016-01-01

    textabstractNon-communicative hand gestures have been found to benefit problem-solving performance. These gestures seem to compensate for limited internal cognitive capacities, such as visual working memory capacity. Yet, it is not clear how gestures might perform this cognitive function. One hypoth

  15. Observation of Depictive Versus Tracing Gestures Selectively Aids Verbal Versus Visual-Spatial Learning in Primary School Children

    NARCIS (Netherlands)

    van Wermeskerken, Margot; Fijan, Nathalie; Eielts, Charly; Pouw, Wim T. J. L.

    2016-01-01

    Previous research has established that gesture observation aids learning in children. The current study examined whether observation of gestures (i.e. depictive and tracing gestures) differentially affected verbal and visual-spatial retention when learning a route and its street names. Specifically,

  16. Phonetic Effects on the Timing of Gestural Coordination in Modern Greek Consonant Clusters

    Science.gov (United States)

    Yip, Jonathan Chung-Kay

    2013-01-01

    Theoretical approaches to the principles governing the coordination of speech gestures differ in their assessment of the contributions of biomechanical and perceptual pressures on this coordination. Perceptually-oriented accounts postulate that, for consonant-consonant (C1-C2) sequences, gestural timing patterns arise from speakers' sensitivity to…

  17. When gesture-speech combinations do and do not index linguistic change.

    Science.gov (United States)

    Ozçalışkan, Seyda; Goldin-Meadow, Susan

    2009-02-01

    At the one-word stage children use gesture to supplement their speech ('eat'+point at cookie), and the onset of such supplementary gesture-speech combinations predicts the onset of two-word speech ('eat cookie'). Gesture thus signals a child's readiness to produce two-word constructions. The question we ask here is what happens when the child begins to flesh out these early skeletal two-word constructions with additional arguments. One possibility is that gesture continues to be a forerunner of linguistic change as children flesh out their skeletal constructions by adding arguments. Alternatively, after serving as an opening wedge into language, gesture could cease its role as a forerunner of linguistic change. Our analysis of 40 children--from 14 to 34 months--showed that children relied on gesture to produce the first instance of a variety of constructions. However, once each construction was established in their repertoire, the children did not use gesture to flesh out the construction. Gesture thus acts as a harbinger of linguistic steps only when those steps involve new constructions, not when the steps merely flesh out existing constructions.

  18. L2 Vocabulary Teaching with Student- and Teacher-Generated Gestures: A Classroom Perspective

    Science.gov (United States)

    Clark, Jordan; Trofimovich, Pavel

    2016-01-01

    This action research project explored the use of gestures for teaching and learning French vocabulary in an upper-beginner adult classroom with 21 students from various language backgrounds. Over the course of 4 weeks, the teacher developed and used 4 sets of themed activities using both teacher- and student-generated gestures to introduce new…

  19. Effects of the Instructor's Pointing Gestures on Learning Performance in Video Lectures

    Science.gov (United States)

    Pi, Zhongling; Hong, Jianzhong; Yang, Jiumin

    2017-01-01

    Recent research on video lectures has indicated that the instructor's pointing gestures facilitate learning performance. This study examined whether the instructor's pointing gestures were superior to nonhuman cues in enhancing video lectures learning, and second, if there was a positive effect, what the underlying mechanisms of the effect might…

  20. Hospitable Gestures in the University Lecture: Analysing Derrida's Pedagogy

    Science.gov (United States)

    Ruitenberg, Claudia

    2014-01-01

    Based on archival research, this article analyses the pedagogical gestures in Derrida's (largely unpublished) lectures on hospitality (1995/96), with particular attention to the enactment of hospitality in these gestures. The motivation for this analysis is twofold. First, since the large-group university lecture has been widely critiqued as…

  1. Gesture-based control of in-car devices; Gestenbasierte Interaktion mit Geraeten im Automobil

    Energy Technology Data Exchange (ETDEWEB)

    Zobl, M.; Geiger, M.; Morguet, P.; Nieschulz, R.; Lang, M. [Technische Univ. Muenchen (Germany)

    2002-07-01

    Extensive usability tests were conducted in our driving simulator to determine the use of gestures for the operation of in-car devices. In this paper the results are discussed and the requirements for an in-car gesture recognition system are described. (orig.)

  2. HAGR-D: A Novel Approach for Gesture Recognition with Depth Maps.

    Science.gov (United States)

    Santos, Diego G; Fernandes, Bruno J T; Bezerra, Byron L D

    2015-11-12

    The hand is an important part of the body used to express information through gestures, and its movements can be used in dynamic gesture recognition systems based on computer vision, with practical applications in areas such as medicine, games and sign language. Although depth sensors have led to great progress in gesture recognition, hand gesture recognition is still an open problem because of its complexity, which is due to the large number of small articulations in a hand. This paper proposes a novel approach for hand gesture recognition with depth maps generated by the Microsoft Kinect Sensor (Microsoft, Redmond, WA, USA) using a variation of the CIPBR (convex invariant position based on RANSAC) algorithm and a hybrid classifier composed of dynamic time warping (DTW) and hidden Markov models (HMM), called the hybrid approach for gesture recognition with depth maps (HAGR-D). The experiments show that the proposed model outperforms other algorithms presented in the literature on hand gesture recognition tasks, achieving a classification rate of 97.49% on the MSRGesture3D dataset and 98.43% on the RPPDI dynamic gesture dataset.

  3. Research on gesture recognition of augmented reality maintenance guiding system based on improved SVM

    Science.gov (United States)

    Zhao, Shouwei; Zhang, Yong; Zhou, Bin; Ma, Dongxi

    2014-09-01

    Interaction is one of the key techniques of augmented reality (AR) maintenance guiding systems. Because of the complexity of the maintenance guiding system's image background and the high dimensionality of gesture characteristics, the whole process of gesture recognition can be divided into three stages: gesture segmentation, gesture characteristic feature modeling, and recognition. In the segmentation stage, to overcome the misrecognition of skin-like regions, a segmentation algorithm combining background modeling and skin color is adopted to exclude such regions. In the feature-modeling stage, a wealth of characteristic features is analyzed and acquired, such as structure characteristics, Hu invariant moments and Fourier descriptors. In the recognition stage, a classifier based on the Support Vector Machine (SVM) is introduced into the augmented reality maintenance guiding process. The SVM is a learning method based on statistical learning theory, with a solid theoretical foundation and excellent learning ability; it has been widely applied in machine learning and offers particular advantages in dealing with small samples and non-linear, high-dimensional pattern recognition. Gesture recognition for the augmented reality maintenance guiding system is realized by the SVM after granulation of all the characteristic features. Experimental results on number gesture recognition and its application in an augmented reality maintenance guiding system show that the real-time performance and robustness of gesture recognition can be greatly enhanced by the improved SVM.
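
    A sketch of one slice of the feature-modeling stage described above: Hu invariant moments of the segmented hand mask feeding an SVM classifier. The Fourier descriptors, structure features and the paper's segmentation are omitted; the log-scaling is a common convention, not taken from the paper.

        # Hu invariant moments as SVM features (OpenCV + scikit-learn).
        import cv2
        import numpy as np
        from sklearn.svm import SVC

        def hu_features(mask):
            m = cv2.moments(mask, binaryImage=True)
            hu = cv2.HuMoments(m).ravel()
            # Log-scale: raw Hu moments span many orders of magnitude.
            return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

        # X = np.array([hu_features(m) for m in train_masks]); y = labels
        # clf = SVC(kernel="rbf").fit(X, y)
        # clf.predict([hu_features(test_mask)])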

  4. Gesture, Meaning-Making, and Embodiment: Second Language Learning in an Elementary Classroom

    Science.gov (United States)

    Rosborough, Alessandro

    2014-01-01

    The purpose of the present study was to investigate the mediational role of gesture and body movement/positioning between a teacher and an English language learner in a second-grade classroom. Responding to Thibault's (2011) call for understanding language through whole-body sense making, aspects of gesture and body positioning were analyzed for…

  5. Effects of gestures on older adults' learning from video-based models

    NARCIS (Netherlands)

    Ouwehand, Kim; van Gog, Tamara; Paas, Fred

    2015-01-01

    This study investigated whether the positive effects of gestures on learning by decreasing working memory load, found in children and young adults, also apply to older adults, who might especially benefit from gestures given memory deficits associated with aging. Participants learned a

  6. Cross-Cultural Transfer in Gesture Frequency in Chinese-English Bilinguals

    Science.gov (United States)

    So, Wing Chee

    2010-01-01

    The purpose of this paper is to examine cross-cultural differences in gesture frequency and the extent to which exposure to two cultures would affect the gesture frequency of bilinguals when speaking in both languages. The Chinese-speaking monolinguals from China, English-speaking monolinguals from America, and Chinese-English bilinguals from…

  7. Gesture and speech during shared book reading with preschoolers with specific language impairment.

    Science.gov (United States)

    Lavelli, Manuela; Barachetti, Chiara; Florit, Elena

    2015-11-01

    This study examined (a) the relationship between gesture and speech produced by children with specific language impairment (SLI) and typically developing (TD) children, and their mothers, during shared book-reading, and (b) the potential effectiveness of gestures accompanying maternal speech on the conversational responsiveness of children. Fifteen preschoolers with expressive SLI were compared with fifteen age-matched and fifteen language-matched TD children. Child and maternal utterances were coded for modality, gesture type, gesture-speech informational relationship, and communicative function. Relative to TD peers, children with SLI used more bimodal utterances and gestures adding unique information to co-occurring speech. Some differences were mirrored in maternal communication. Sequential analysis revealed that only in the SLI group maternal reading accompanied by gestures was significantly followed by child's initiatives, and when maternal non-informative repairs were accompanied by gestures, they were more likely to elicit adequate answers from children. These findings support the 'gesture advantage' hypothesis in children with SLI, and have implications for educational and clinical practice.

  8. Beat that Word: How Listeners Integrate Beat Gesture and Focus in Multimodal Speech Discourse.

    Science.gov (United States)

    Dimitrova, Diana; Chu, Mingyuan; Wang, Lin; Özyürek, Asli; Hagoort, Peter

    2016-09-01

    Communication is facilitated when listeners allocate their attention to important information (focus) in the message, a process called "information structure." Linguistic cues like the preceding context and pitch accent help listeners to identify focused information. In multimodal communication, relevant information can be emphasized by nonverbal cues like beat gestures, which represent rhythmic nonmeaningful hand movements. Recent studies have found that linguistic and nonverbal attention cues are integrated independently in single sentences. However, it is possible that these two cues interact when information is embedded in context, because context allows listeners to predict what information is important. In an ERP study, we tested this hypothesis and asked listeners to view videos capturing a dialogue. In the critical sentence, focused and nonfocused words were accompanied by beat gestures, grooming hand movements, or no gestures. ERP results showed that focused words are processed more attentively than nonfocused words as reflected in an N1 and P300 component. Hand movements also captured attention and elicited a P300 component. Importantly, beat gesture and focus interacted in a late time window of 600-900 msec relative to target word onset, giving rise to a late positivity when nonfocused words were accompanied by beat gestures. Our results show that listeners integrate beat gesture with the focus of the message and that integration costs arise when beat gesture falls on nonfocused information. This suggests that beat gestures fulfill a unique focusing function in multimodal discourse processing and that they have to be integrated with the information structure of the message.

  9. Interaction Between Words and Symbolic Gestures as Revealed By N400.

    Science.gov (United States)

    Fabbri-Destro, Maddalena; Avanzini, Pietro; De Stefani, Elisa; Innocenti, Alessandro; Campi, Cristina; Gentilucci, Maurizio

    2015-07-01

    What happens if you see a person pronouncing the word "go" after having gestured "stop"? Unlike iconic gestures, which must necessarily be accompanied by verbal language in order to be unambiguously understood, symbolic gestures are so conventionalized that they can be effortlessly understood in the absence of speech. Previous studies proposed that gesture and speech belong to a unique communication system. From an electrophysiological perspective, the N400 modulation was considered the main variable indexing the interplay between two stimuli. However, while many studies tested this effect between iconic gestures and speech, little is known about the capability of an emblem to modulate the neural response to subsequently presented words. Using high-density EEG, the present study aimed at evaluating the presence of an N400 effect and its spatiotemporal dynamics, in terms of cortical activations, when emblems primed the observation of words. Participants were presented with symbolic gestures followed by a semantically congruent or incongruent verb. An N400 modulation was detected, showing larger negativity when gesture and word were incongruent. Source localization during the N400 time window revealed activation of different portions of the temporal cortex according to gesture-word congruence. Our data provide further evidence of how the observation of an emblem influences verbal language perception, and of how this interplay is mainly instantiated by different portions of the temporal cortex.

  10. Hospitable Gestures in the University Lecture: Analysing Derrida's Pedagogy

    Science.gov (United States)

    Ruitenberg, Claudia

    2014-01-01

    Based on archival research, this article analyses the pedagogical gestures in Derrida's (largely unpublished) lectures on hospitality (1995/96), with particular attention to the enactment of hospitality in these gestures. The motivation for this analysis is twofold. First, since the large-group university lecture has been widely critiqued as…

  11. Segments, Letters and Gestures: Thoughts on Doing and Teaching Phonetics and Transcription

    Science.gov (United States)

    Muller, Nicole; Papakyritsis, Ioannis

    2011-01-01

    This brief article reflects on some pitfalls inherent in the learning and teaching of segmental phonetic transcription. We suggest that a gestural interpretation to disordered speech data, in conjunction with segmental phonetic transcription, can add valuable insight into patterns of disordered speech, and that a gestural orientation should form…

  12. Segments, Letters and Gestures: Thoughts on Doing and Teaching Phonetics and Transcription

    Science.gov (United States)

    Muller, Nicole; Papakyritsis, Ioannis

    2011-01-01

    This brief article reflects on some pitfalls inherent in the learning and teaching of segmental phonetic transcription. We suggest that a gestural interpretation to disordered speech data, in conjunction with segmental phonetic transcription, can add valuable insight into patterns of disordered speech, and that a gestural orientation should form…

  13. Hand Leading and Hand Taking Gestures in Autism and Typically Developing Children

    Science.gov (United States)

    Gómez, Juan-Carlos

    2015-01-01

    Children with autism use hand taking and hand leading gestures to interact with others. This is traditionally considered to be an example of atypical behaviour illustrating the lack of intersubjective understanding in autism. However the assumption that these gestures are atypical is based upon scarce empirical evidence. In this paper I present…

  14. Gesture, Meaning-Making, and Embodiment: Second Language Learning in an Elementary Classroom

    Science.gov (United States)

    Rosborough, Alessandro

    2014-01-01

    The purpose of the present study was to investigate the mediational role of gesture and body movement/positioning between a teacher and an English language learner in a second-grade classroom. Responding to Thibault's (2011) call for understanding language through whole-body sense making, aspects of gesture and body positioning were analyzed for…

  15. Gesture, Meaning-Making, and Embodiment: Second Language Learning in an Elementary Classroom

    Science.gov (United States)

    Rosborough, Alessandro

    2014-01-01

    The purpose of the present study was to investigate the mediational role of gesture and body movement/positioning between a teacher and an English language learner in a second-grade classroom. Responding to Thibault's (2011) call for understanding language through whole-body sense making, aspects of gesture and body positioning were analyzed…

  16. Assessing Optimal Relationships Among Multi-Touch Gestures and Functions in Computer Applications

    Science.gov (United States)

    2013-07-01

    ...transfer among devices military personnel use for operations. The authors describe the identification of common gestures using mockups of large... provided to software developers to promote a Navy standard, increasing efficiency and commonality among Navy computing systems. 2.0 RESEARCH PURPOSE...

  17. Asymmetric Dynamic Attunement of Speech and Gestures in the Construction of Children’s Understanding

    NARCIS (Netherlands)

    De Jonge-Hoekstra, Lisette; Van der Steen, Steffie; Van Geert, Paul; Cox, Ralf F.A.

    2016-01-01

    As children learn they use their speech to express words and their hands to gesture. This study investigates the interplay between real-time gestures and speech as children construct cognitive understanding during a hands-on science task. 12 children (M = 6, F = 6) from Kindergarten (n = 5) and firs

  18. Touch-less interaction with medical images using hand & foot gestures

    DEFF Research Database (Denmark)

    Jalaliniya, Shahram; Smith, Jeremiah; Sousa, Miguel

    2013-01-01

    control. In this paper, we present a system for gesture-based interaction with medical images based on a single wristband sensor and capacitive floor sensors, allowing for hand and foot gesture input. The first limited evaluation of the system showed an acceptable level of accuracy for 12 different hand...

  19. The Gestures ASL Signers Use Tell Us when They Are Ready to Learn Math

    Science.gov (United States)

    Goldin-Meadow, Susan; Shield, Aaron; Lenzen, Daniel; Herzig, Melissa; Padden, Carol

    2012-01-01

    The manual gestures that hearing children produce when explaining their answers to math problems predict whether they will profit from instruction in those problems. We ask here whether gesture plays a similar role in deaf children, whose primary communication system is in the manual modality. Forty ASL-signing deaf children explained their…

  20. Monolingual and Bilingual Preschoolers' Use of Gestures to Interpret Ambiguous Pronouns

    Science.gov (United States)

    Yow, W. Quin

    2015-01-01

    Young children typically do not use order-of-mention to resolve ambiguous pronouns, but may do so if given additional cues, such as gestures. Additionally, this ability to utilize gestures may be enhanced in bilingual children, who may be more sensitive to such cues due to their unique language experience. We asked monolingual and bilingual…

  1. A neuropsychological approach to the study of gesture and pantomime in aphasia

    Directory of Open Access Journals (Sweden)

    Jocelyn Kadish

    1978-08-01

    Full Text Available The impairment of gesture and pantomime in aphasia was examined from a neuropsychological perspective. The Boston Diagnostic Test of Aphasia, Luria's Neuropsychological Investigation, Pickett's Tests for gesture and pantomime, and the Performance Scale of the Wechsler Adult Intelligence Scale were administered to six aphasic subjects with varying etiology and severity. Results indicated that severity of aphasia was positively related to severity of gestural disturbance; gestural ability was associated with verbal and non-linguistic aspects of ability, within receptive and expressive levels respectively; performance on gestural tasks was superior to that on verbal tasks irrespective of severity of aphasia; damage to Luria's second and third functional brain units was positively related to deficits in receptive and expressive gesture respectively; and no relationship was found between severity of general intellectual impairment and gestural deficit. It was concluded that the gestural impairment may best be understood as a breakdown in complex sequential manual motor activity. Theoretical and therapeutic implications are discussed.

  2. Effects of observing and producing deictic gestures on memory and learning in different age groups

    NARCIS (Netherlands)

    K.H.R. Ouwehand (Kim)

    2016-01-01

    The studies presented in this dissertation aimed to investigate whether observing or producing deictic gestures (i.e., pointing and tracing gestures to index a referent in space or a movement pathway) could facilitate memory and learning in children, young adults, and older adults.

  3. The Role of Gestures in a Teacher-Student-Discourse about Atoms

    Science.gov (United States)

    Abels, Simone

    2016-01-01

    Recent educational research emphasises the importance of analysing talk and gestures to come to an understanding about students' conceptual learning. Gestures are perceived as complex hand movements being equivalent to other language modes. They can convey experienceable as well as abstract concepts. As well as technical language, gestures…

  4. Exploring the Relationship between Gestural Recognition and Imitation: Evidence of Dyspraxia in Autism Spectrum Disorders

    Science.gov (United States)

    Ham, Heidi Stieglitz; Bartolo, Angela; Corley, Martin; Rajendran, Gnanathusharan; Szabo, Aniko; Swanson, Sara

    2011-01-01

    In this study, the relationship between gesture recognition and imitation was explored. Nineteen individuals with Autism Spectrum Disorder (ASD) were compared to a control group of 23 typically developing children on their ability to imitate and recognize three gesture types (transitive, intransitive, and pantomimes). The ASD group performed more…

  5. The Role of Gestures and Facial Cues in Second Language Listening Comprehension

    Science.gov (United States)

    Sueyoshi, Ayano; Hardison, Debra M.

    2005-01-01

    This study investigated the contribution of gestures and facial cues to second-language learners' listening comprehension of a videotaped lecture by a native speaker of English. A total of 42 low-intermediate and advanced learners of English as a second language were randomly assigned to 3 stimulus conditions: AV-gesture-face audiovisual including…

  6. A Show of Hands: Relations between Young Children's Gesturing and Executive Function

    Science.gov (United States)

    O'Neill, Gina; Miller, Patricia H.

    2013-01-01

    This study brought together 2 literatures--gesturing and executive function--in order to examine the possible role of gesture in children's executive function. Children (N = 41) aged 2½-6 years performed a sorting-shift executive function task (Dimensional Change Card Sort). Responses of interest included correct sorting, response latency,…

  7. Hand Leading and Hand Taking Gestures in Autism and Typically Developing Children

    Science.gov (United States)

    Gómez, Juan-Carlos

    2015-01-01

    Children with autism use hand taking and hand leading gestures to interact with others. This is traditionally considered to be an example of atypical behaviour illustrating the lack of intersubjective understanding in autism. However the assumption that these gestures are atypical is based upon scarce empirical evidence. In this paper I present…

  8. Peculiarities in the Gestural Repertoire: An Early Marker for Rett Syndrome?

    Science.gov (United States)

    Marschik, Peter B.; Sigafoos, Jeff; Kaufmann, Walter E.; Wolin, Thomas; Talisa, Victor B.; Bartl-Pokorny, Katrin D.; Budimirovic, Dejan B.; Vollmann, Ralf; Einspieler, Christa

    2012-01-01

    We studied the gestures used by children with classic Rett syndrome (RTT) to provide evidence as to how this essential aspect of communicative functions develops. Seven participants with RTT were longitudinally observed between 9 and 18 months of life. The gestures used by these participants were transcribed and coded from a retrospective analysis…

  9. What Our Hands Say: Exploring Gesture Use in Subgroups of Children with Language Delay

    Science.gov (United States)

    O'Neill, Hilary; Chiat, Shula

    2015-01-01

    Purpose: The aim of this study was to investigate whether children with receptive-expressive language delay (R/ELD) and expressive-only language delay (ELD) differ in their use of gesture; to examine relationships between their use of gesture, symbolic comprehension, and language; to consider implications for assessment and for the nature of…

  10. Touch-less interaction with medical images using hand & foot gestures

    DEFF Research Database (Denmark)

    Jalaliniya, Shahram; Smith, Jeremiah; Sousa, Miguel

    2013-01-01

    control. In this paper, we present a system for gesture-based interaction with medical images based on a single wristband sensor and capacitive floor sensors, allowing for hand and foot gesture input. The first limited evaluation of the system showed an acceptable level of accuracy for 12 different hand...

  11. Gesture as a Resource for Intersubjectivity in Second-Language Learning Situations

    Science.gov (United States)

    Belhiah, Hassan

    2013-01-01

    This study documents the role of hand gestures in achieving mutual understanding in second-language learning situations. The study tracks the way gesture is coordinated with talk in tutorials between two Korean students and their American teachers. The study adopts an interactional approach to the study of participants' talk and gestural…

  12. View invariant gesture recognition using the CSEM SwissRanger SR-2 camera

    DEFF Research Database (Denmark)

    Holte, Michael Boelstoft; Moeslund, Thomas B.; Fihl, Preben

    2008-01-01

    This paper introduces the use of range information acquired by a CSEM SwissRanger SR-2 camera for view invariant recognition of one- and two-arm gestures. The range data enable motion detection and 3D representation of gestures. Motion is detected by double difference range images and filtered...

  13. Maternal Gesture Use and Language Development in Infant Siblings of Children with Autism Spectrum Disorder

    Science.gov (United States)

    Talbott, Meagan R.; Nelson, Charles A.; Tager-Flusberg, Helen

    2015-01-01

    Impairments in language and communication are an early-appearing feature of autism spectrum disorders (ASD), with delays in language and gesture evident as early as the first year of life. Research with typically developing populations highlights the importance of both infant and maternal gesture use in infants' early language development.…

  14. Technological evaluation of gesture and speech interfaces for enabling dismounted soldier-robot dialogue

    Science.gov (United States)

    Kattoju, Ravi Kiran; Barber, Daniel J.; Abich, Julian; Harris, Jonathan

    2016-05-01

    With increasing necessity for intuitive Soldier-robot communication in military operations and advancements in interactive technologies, autonomous robots have transitioned from assistance tools to functional and operational teammates able to service an array of military operations. Despite improvements in gesture and speech recognition technologies, their effectiveness in supporting Soldier-robot communication is still uncertain. The purpose of the present study was to evaluate the performance of gesture and speech interface technologies to facilitate Soldier-robot communication during a spatial-navigation task with an autonomous robot. Semantically based gesture and speech spatial-navigation commands leveraged existing lexicons for visual and verbal communication from the U.S. Army field manual for visual signaling and a previously established Squad Level Vocabulary (SLV). Speech commands were recorded by a lapel microphone and Microsoft Kinect, and classified by commercial off-the-shelf automatic speech recognition (ASR) software. Visual signals were captured and classified using a custom wireless gesture glove and software. Participants in the experiment commanded a robot to complete a simulated ISR mission in a scaled-down urban scenario by delivering a sequence of gesture and speech commands, both individually and simultaneously, to the robot. Performance and reliability of gesture and speech hardware interfaces and recognition tools were analyzed and reported. Analysis of experimental results demonstrated that the employed gesture technology has significant potential for enabling bidirectional Soldier-robot team dialogue, based on the high classification accuracy and minimal training required to perform gesture commands.

  15. Wild chimpanzees' use of single and combined vocal and gestural signals.

    Science.gov (United States)

    Hobaiter, C; Byrne, R W; Zuberbühler, K

    2017-01-01

    We describe the individual and combined use of vocalizations and gestures in wild chimpanzees. The rate of gesturing peaked in infancy and, with the exception of the alpha male, decreased again in older age groups, while vocal signals showed the opposite pattern. Although gesture-vocal combinations were relatively rare, they were consistently found in all age groups, especially during affiliative and agonistic interactions. Within behavioural contexts, rank (excluding alpha-rank) had no effect on the rate of male chimpanzees' use of vocal or gestural signals and only a small effect on their use of combination signals. The alpha male was an outlier, however, both as a prolific user of gestures and recipient of high levels of vocal and gesture-vocal signals. Persistence in signal use varied with signal type: chimpanzees persisted in the use of gestures and gesture-vocal combinations after failure, but where their vocal signals failed they tended to add gestural signals to produce gesture-vocal combinations. Overall, chimpanzees employed signals with a sensitivity to the public/private nature of information, by adjusting their use of signal types according to social context and by taking into account potential out-of-sight audiences. We discuss these findings in relation to the various socio-ecological challenges that chimpanzees are exposed to in their natural forest habitats and the current discussion of multimodal communication in great apes. All animal communication combines different types of signals, including vocalizations, facial expressions, and gestures. However, the study of primate communication has typically focused on the use of signal types in isolation. As a result, we know little about how primates use the full repertoire of signals available to them. Here we present a systematic study on the individual and combined use of gestures and vocalizations in wild chimpanzees. We find that gesturing peaks in infancy and decreases in older age, while vocal signals...

  16. Static hand gesture recognition based on finger root-center-angle and length weighted Mahalanobis distance

    Science.gov (United States)

    Chen, Xinghao; Shi, Chenbo; Liu, Bo

    2016-04-01

    Static hand gesture recognition (HGR) has drawn increasing attention in computer vision and human-computer interaction (HCI) recently because of its great potential. However, HGR is a challenging problem due to the variations of gestures. In this paper, we present a new framework for static hand gesture recognition. Firstly, the key joints of the hand, including the palm center, the fingertips and finger roots, are located. Secondly, we propose novel and discriminative features called root-center-angles to alleviate the influence of the variations of gestures. Thirdly, we design a distance metric called finger length weighted Mahalanobis distance (FLWMD) to measure the dissimilarity of the hand gestures. Experiments demonstrate the accuracy, efficiency and robustness of our proposed HGR framework.
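
    The abstract names the root-center-angle features and the FLWMD metric without giving formulas; as a rough illustration, a length-weighted Mahalanobis distance might weight each per-finger feature before the usual quadratic form. The sketch below is a minimal reading of that idea; the weighting scheme, feature layout, and values are all assumptions for illustration.

```python
import numpy as np

def flwmd(x, y, cov, weights):
    """Length-weighted Mahalanobis distance between two gesture feature
    vectors (a sketch; the paper's exact weighting is an assumption here).

    x, y    : root-center-angle feature vectors, one entry per finger
    cov     : covariance matrix estimated from training gestures
    weights : per-feature weights, e.g. normalized finger lengths
    """
    d = (x - y) * np.sqrt(weights)   # emphasize features of longer fingers
    vi = np.linalg.inv(cov)          # inverse covariance
    return float(np.sqrt(d @ vi @ d))

# Toy usage: 5 angle features, identity covariance, hypothetical lengths.
rng = np.random.default_rng(0)
x, y = rng.normal(size=5), rng.normal(size=5)
print(flwmd(x, y, np.eye(5), np.array([0.9, 1.0, 1.1, 1.0, 0.8])))
```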

  17. Intrinsic mode entropy: an enhanced classification means for automated Greek Sign Language gesture recognition.

    Science.gov (United States)

    Kosmidou, Vasiliki E; Hadjileontiadis, Leontios J

    2008-01-01

    Sign language forms a communication channel among the deaf; however, automated gesture recognition could further expand their communication with hearing people. In this work, data from a three-dimensional accelerometer and a five-channel surface electromyogram of the user's dominant forearm are analyzed using intrinsic mode entropy (IMEn) for the automated recognition of Greek Sign Language (GSL) gestures. IMEn was estimated for various window lengths and evaluated by the Mahalanobis distance criterion. Discriminant analysis was used to identify the effective scales of the intrinsic mode functions and the window length for the calculation of the IMEn that contributes to the correct classification of the GSL gestures. Experimental results from the IMEn analysis of GSL gestures corresponding to ten words have shown 100% classification accuracy using IMEn as the only classification feature. This provides a promising test-bed towards automated GSL gesture recognition.
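
    Intrinsic mode entropy is, roughly, sample entropy evaluated over cumulative sums of the intrinsic mode functions (IMFs) obtained from empirical mode decomposition. The sketch below assumes the IMFs are already available from an EMD implementation; the embedding dimension and tolerance defaults are conventional, not the paper's settings.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy of a 1-D signal (minimal sketch)."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()            # conventional tolerance
    def pair_count(mm):
        emb = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        # Chebyshev distance between all pairs of embedded vectors
        d = np.max(np.abs(emb[:, None] - emb[None, :]), axis=2)
        return ((d <= r).sum() - len(emb)) / 2   # exclude self-matches
    b, a = pair_count(m), pair_count(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def intrinsic_mode_entropy(imfs, scales, m=2):
    """IMEn over the first k IMFs, for each k in `scales`; `imfs` is an
    (n_imfs x n_samples) array from any EMD implementation."""
    cum = np.cumsum(imfs, axis=0)    # partial sums of IMFs per scale
    return [sample_entropy(cum[k - 1], m) for k in scales]
```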

  18. NUI framework based on real-time head pose estimation and hand gesture recognition

    Directory of Open Access Journals (Sweden)

    Kim Hyunduk

    2016-01-01

    The natural user interface (NUI) supports natural motion interaction without devices or tools such as mice, keyboards, pens and markers. In this paper, we develop a natural user interface framework based on two recognition modules. The first module is a real-time head pose estimation module using random forests, and the second is a hand gesture recognition module, named the Hand gesture Key Emulation Toolkit (HandGKET). Using the head pose estimation module, we can know where the user is looking and what the user's focus of attention is. Moreover, using the hand gesture recognition module, we can control the computer with the user's hand gestures, without a mouse or keyboard. In the proposed framework, the user's head direction and hand gestures are mapped onto mouse and keyboard events, respectively.
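
    The mapping of recognized head poses and hand gestures onto mouse and keyboard events can be pictured as a plain dispatch table. The gesture names and bindings below are hypothetical placeholders, not the paper's vocabulary; the print call stands in for actual OS event injection.

```python
# (modality, gesture) -> emitted input event; all names are hypothetical.
EVENT_MAP = {
    ("head", "look_left"):  ("mouse_move", (-10, 0)),
    ("head", "look_right"): ("mouse_move", (10, 0)),
    ("hand", "swipe_up"):   ("key", "PAGE_UP"),
    ("hand", "fist"):       ("mouse_click", "left"),
}

def dispatch(modality, gesture):
    event = EVENT_MAP.get((modality, gesture))
    if event is None:
        return None                  # unrecognized gesture: emit nothing
    kind, arg = event
    print(f"emit {kind} event with {arg}")   # stand-in for OS injection
    return event

dispatch("hand", "swipe_up")         # -> emit key event with PAGE_UP
```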

  19. Eye-hand Hybrid Gesture Recognition System for Human Machine Interface

    Directory of Open Access Journals (Sweden)

    N. R. Raajan

    2013-04-01

    Gesture recognition has become a way for computers to recognise and understand human body language. It bridges the gap between machines and human beings and makes primitive interfaces like keyboards and mice redundant. This paper suggests a hybrid gesture recognition system for computer interfacing and wireless robot control. The real-time eye-hand gesture recognition system can be used for computer drawing, navigating cursors and simulating mouse clicks, playing games, controlling a wireless robot with commands, and more. The robot illustrated in this paper is controlled by an RF module. Playing a PING-PONG game has also been demonstrated using the gestures. Haar cascade classifiers and template matching are used to detect eye gestures, and a convex hull is used for finding the defects and counting the number of fingers in the given region.
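
    The convexity-defect finger count mentioned here is a common OpenCV recipe: find the hand contour, compute its convex hull, and count the deep, sharp valleys between fingers. The sketch below assumes a binary hand mask as input; the depth and angle cutoffs are conventional heuristics, not values from the paper.

```python
import cv2
import numpy as np

def count_fingers(mask):
    """Estimate raised fingers from a binary hand mask via convexity defects."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    cnt = max(contours, key=cv2.contourArea)       # largest blob = hand
    hull = cv2.convexHull(cnt, returnPoints=False)
    defects = cv2.convexityDefects(cnt, hull)
    if defects is None:
        return 0
    fingers = 0
    for s, e, f, depth in defects[:, 0]:
        a = np.linalg.norm(cnt[e][0] - cnt[s][0])  # gap between fingertips
        b = np.linalg.norm(cnt[f][0] - cnt[s][0])
        c = np.linalg.norm(cnt[e][0] - cnt[f][0])
        cos_angle = np.clip((b**2 + c**2 - a**2) / (2 * b * c + 1e-9), -1, 1)
        if depth / 256.0 > 20 and np.arccos(cos_angle) < np.pi / 2:
            fingers += 1                           # deep, sharp valley
    return fingers + 1 if fingers else 0
```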

  20. A Review of Temporal Aspects of Hand Gesture Analysis Applied to Discourse Analysis and Natural Conversation

    Directory of Open Access Journals (Sweden)

    Renata C. B. Madeo

    2013-08-01

    Lately, there has been an increasing interest in hand gesture analysis systems. Recent works have employed pattern recognition techniques and have focused on the development of systems with more natural user interfaces. These systems may use gestures to control interfaces or recognize sign language gestures, which can provide systems with multimodal interaction; or consist of multimodal tools to help psycholinguists understand new aspects of discourse analysis and to automate laborious tasks. Gestures are characterized by several aspects, mainly by movements and sequences of postures. Since data referring to movements or sequences carry temporal information, this paper presents a literature review about temporal aspects of hand gesture analysis, focusing on applications related to natural conversation and psycholinguistic analysis, using the Systematic Literature Review methodology. In our results, we organized works according to type of analysis, methods, highlighting the use of Machine Learning techniques, and applications.

  1. Move, Hold and Touch: A Framework for Tangible Gesture Interactive Systems

    Directory of Open Access Journals (Sweden)

    Leonardo Angelini

    2015-08-01

    Technology is spreading in our everyday world, and digital interaction beyond the screen, with real objects, allows taking advantage of our natural manipulative and communicative skills. Tangible gesture interaction takes advantage of these skills by bridging two popular domains in Human-Computer Interaction: tangible interaction and gestural interaction. In this paper, we present the Tangible Gesture Interaction Framework (TGIF) for classifying and guiding works in this field. We propose a classification of gestures according to three relationships with objects: move, hold and touch. Following this classification, we analyzed previous work in the literature to obtain guidelines and common practices for designing and building new tangible gesture interactive systems. We describe four interactive systems as application examples of the TGIF guidelines and we discuss the descriptive, evaluative and generative power of TGIF.

  2. Barack Obama’s pauses and gestures in humorous speeches

    DEFF Research Database (Denmark)

    Navarretta, Costanza

    2017-01-01

    and they emphasise the speech segment which they follow or precede. We also found a highly significant correlation between Obama's speech pauses and audience response. Obama produces numerous head movements, facial expressions and hand gestures, and their functions are related to both discourse content and structure... Characteristic of these speeches is that Obama points to individuals in the audience and often smiles and laughs. Audience response is equally frequent in the two events, and there are no significant changes in speech rate or in the frequency of head movements and facial expressions in the two speeches while Obama...

  3. SSC: Gesture-based game for initial dementia examination

    Institute of Scientific and Technical Information of China (English)

    LIU Jun-fa; CHEN Yi-qiang; XIE Chen; GAO Wen

    2006-01-01

    This paper presents a novel system assisting medical dementia examination in a joyful way: the subject just needs to play a popular game, SSC, against the computer during the examination. The SSC game's target is to detect the player's reacting capability, which is closely related to dementia. Our system reaches this target with some advantages: there are no temporal or spatial constraints at all, there is no cost, and it can even improve people's mental status. Hand talk technology and an EHMM gesture recognition approach are employed to realize the human-computer interface. Experiments showed that this system can evaluate people's reacting capability effectively and is helpful for initial dementia examination.

  4. A Component-Based Vocabulary-Extensible Sign Language Gesture Recognition Framework

    Directory of Open Access Journals (Sweden)

    Shengjing Wei

    2016-04-01

    Sign language recognition (SLR) can provide a helpful tool for the communication between the deaf and the external world. This paper proposed a component-based vocabulary-extensible SLR framework using data from surface electromyographic (sEMG) sensors, accelerometers (ACC), and gyroscopes (GYRO). In this framework, a sign word was considered to be a combination of five common sign components, including hand shape, axis, orientation, rotation, and trajectory, and sign classification was implemented based on the recognition of the five components. Specifically, the proposed SLR framework consisted of two major parts. The first part was to obtain the component-based form of sign gestures and establish the code table of the target sign gesture set using data from a reference subject. In the second part, which was designed for new users, component classifiers were trained using a training set suggested by the reference subject and the classification of unknown gestures was performed with a code matching method. Five subjects participated in this study and recognition experiments under different sizes of training sets were implemented on a target gesture set consisting of 110 frequently-used Chinese Sign Language (CSL) sign words. The experimental results demonstrated that the proposed framework can realize large-scale gesture set recognition with a small-scale training set. With the smallest training sets (containing about one-third of the gestures of the target gesture set) suggested by two reference subjects, (82.6 ± 13.2)% and (79.7 ± 13.4)% average recognition accuracy were obtained for 110 words respectively, and the average recognition accuracy climbed up to (88 ± 13.7)% and (86.3 ± 13.7)% when the training set included 50~60 gestures (about half of the target gesture set). The proposed framework can significantly reduce the user's training burden in large-scale gesture recognition, which will facilitate the implementation of a practical SLR system.
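
    The code-matching step in the second part can be illustrated with a toy table: each sign word is stored as a five-component code, and an unknown gesture is assigned to the entry whose components it matches best. The component labels and words below are invented placeholders, not entries from the paper's 110-word CSL set.

```python
# word -> (hand shape, axis, orientation, rotation, trajectory); all
# component labels are hypothetical placeholders.
CODE_TABLE = {
    "thanks":  ("flat", "x", "palm_up",  "none", "arc"),
    "goodbye": ("open", "y", "palm_out", "none", "wave"),
}

def classify(components):
    """Return the word whose code shares the most components (0..5)."""
    def score(code):
        return sum(a == b for a, b in zip(components, code))
    word, code = max(CODE_TABLE.items(), key=lambda kv: score(kv[1]))
    return word, score(code)

print(classify(("flat", "x", "palm_up", "none", "line")))  # ('thanks', 4)
```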

  5. Communication for coordination: gesture kinematics and conventionality affect synchronization success in piano duos.

    Science.gov (United States)

    Bishop, Laura; Goebl, Werner

    2017-07-21

    Ensemble musicians often exchange visual cues in the form of body gestures (e.g., rhythmic head nods) to help coordinate piece entrances. These cues must communicate beats clearly, especially if the piece requires interperformer synchronization of the first chord. This study aimed to (1) replicate prior findings suggesting that points of peak acceleration in head gestures communicate beat position and (2) identify the kinematic features of head gestures that encourage successful synchronization. It was expected that increased precision of the alignment between leaders' head gestures and first note onsets, increased gesture smoothness, magnitude, and prototypicality, and increased leader ensemble/conducting experience would improve gesture synchronizability. Audio/MIDI and motion capture recordings were made of piano duos performing short musical passages under assigned leader/follower conditions. The leader of each trial listened to a particular tempo over headphones, then cued their partner in at the given tempo, without speaking. A subset of motion capture recordings were then presented as point-light videos with corresponding audio to a sample of musicians who tapped in synchrony with the beat. Musicians were found to align their first taps with the period of deceleration following acceleration peaks in leaders' head gestures, suggesting that acceleration patterns communicate beat position. Musicians' synchronization with leaders' first onsets improved as cueing gesture smoothness and magnitude increased and prototypicality decreased. Synchronization was also more successful with more experienced leaders' gestures. These results might be applied to interactive systems using gesture recognition or reproduction for music-making tasks (e.g., intelligent accompaniment systems).
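
    Since beats were found to align with the deceleration phase after acceleration peaks, candidate beat positions can be extracted from a head-marker trajectory by double differentiation and peak picking. The sketch assumes a uniformly sampled 1-D coordinate and an illustrative peak-height threshold.

```python
import numpy as np
from scipy.signal import find_peaks

def acceleration_peak_times(position, fs):
    """Times of salient acceleration peaks in a head-marker trajectory.

    position : 1-D marker coordinate samples (e.g., vertical axis)
    fs       : motion-capture sampling rate in Hz
    """
    velocity = np.gradient(position) * fs        # first derivative
    accel = np.gradient(velocity) * fs           # second derivative
    peaks, _ = find_peaks(accel, height=accel.std())  # threshold assumed
    return peaks / fs                            # peak times in seconds

# Beat positions would then fall just after each returned time, in the
# deceleration phase that follows the acceleration peak.
```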

  6. A Component-Based Vocabulary-Extensible Sign Language Gesture Recognition Framework.

    Science.gov (United States)

    Wei, Shengjing; Chen, Xiang; Yang, Xidong; Cao, Shuai; Zhang, Xu

    2016-04-19

    Sign language recognition (SLR) can provide a helpful tool for the communication between the deaf and the external world. This paper proposed a component-based vocabulary-extensible SLR framework using data from surface electromyographic (sEMG) sensors, accelerometers (ACC), and gyroscopes (GYRO). In this framework, a sign word was considered to be a combination of five common sign components, including hand shape, axis, orientation, rotation, and trajectory, and sign classification was implemented based on the recognition of the five components. Specifically, the proposed SLR framework consisted of two major parts. The first part was to obtain the component-based form of sign gestures and establish the code table of the target sign gesture set using data from a reference subject. In the second part, which was designed for new users, component classifiers were trained using a training set suggested by the reference subject and the classification of unknown gestures was performed with a code matching method. Five subjects participated in this study and recognition experiments under different sizes of training sets were implemented on a target gesture set consisting of 110 frequently-used Chinese Sign Language (CSL) sign words. The experimental results demonstrated that the proposed framework can realize large-scale gesture set recognition with a small-scale training set. With the smallest training sets (containing about one-third of the gestures of the target gesture set) suggested by two reference subjects, (82.6 ± 13.2)% and (79.7 ± 13.4)% average recognition accuracy were obtained for 110 words respectively, and the average recognition accuracy climbed up to (88 ± 13.7)% and (86.3 ± 13.7)% when the training set included 50~60 gestures (about half of the target gesture set). The proposed framework can significantly reduce the user's training burden in large-scale gesture recognition, which will facilitate the implementation of a practical SLR system.

  7. Rising tones and rustling noises: Metaphors in gestural depictions of sounds.

    Science.gov (United States)

    Lemaitre, Guillaume; Scurto, Hugo; Françoise, Jules; Bevilacqua, Frédéric; Houix, Olivier; Susini, Patrick

    2017-01-01

    Communicating an auditory experience with words is a difficult task and, in consequence, people often rely on imitative non-verbal vocalizations and gestures. This work explored the combination of such vocalizations and gestures to communicate auditory sensations and representations elicited by non-vocal everyday sounds. Whereas our previous studies have analyzed vocal imitations, the present research focused on gestural depictions of sounds. To this end, two studies investigated the combination of gestures and non-verbal vocalizations. A first, observational study examined a set of vocal and gestural imitations of recordings of sounds representative of a typical everyday environment (ecological sounds) with manual annotations. A second, experimental study used non-ecological sounds whose parameters had been specifically designed to elicit the behaviors highlighted in the observational study, and used quantitative measures and inferential statistics. The results showed that these depicting gestures are based on systematic analogies between a referent sound, as interpreted by a receiver, and the visual aspects of the gestures: auditory-visual metaphors. The results also suggested a different role for vocalizations and gestures. Whereas the vocalizations reproduce all features of the referent sounds as faithfully as vocally possible, the gestures focus on one salient feature with metaphors based on auditory-visual correspondences. Both studies highlighted two metaphors consistently shared across participants: the spatial metaphor of pitch (mapping different pitches to different positions on the vertical dimension), and the rustling metaphor of random fluctuations (rapidly shaking of hands and fingers). We interpret these metaphors as the result of two kinds of representations elicited by sounds: auditory sensations (pitch and loudness) mapped to spatial position, and causal representations of the sound sources (e.g. rain drops, rustling leaves) pantomimed and

  8. Relating Gestures and Speech: An analysis of students' conceptions about geological sedimentary processes

    Science.gov (United States)

    Herrera, Juan Sebastian; Riggs, Eric M.

    2013-08-01

    Advances in cognitive science and educational research indicate that a significant part of spatial cognition is facilitated by gesture (e.g. giving directions, or describing objects or landscape features). We aligned the analysis of gestures with conceptual metaphor theory to probe the use of mental image schemas as a source of concept representations for students' learning of sedimentary processes. A hermeneutical approach enabled us to access student meaning-making from students' verbal reports and gestures about four core geological ideas that involve sea-level change and sediment deposition. The study included 25 students from three US universities. Participants were enrolled in upper-level undergraduate courses on sedimentology and stratigraphy. We used semi-structured interviews for data collection. Our gesture coding focused on three types of gestures: deictic, iconic, and metaphoric. From analysis of video recorded interviews, we interpreted image schemas in gestures and verbal reports. Results suggested that students attempted to make more iconic and metaphoric gestures when dealing with abstract concepts, such as relative sea level, base level, and unconformities. Based on the analysis of gestures that recreated certain patterns including time, strata, and sea-level fluctuations, we reasoned that proper representational gestures may indicate completeness in conceptual understanding. We concluded that students rely on image schemas to develop ideas about complex sedimentary systems. Our research also supports the hypothesis that gestures provide an independent and non-linguistic indicator of image schemas that shape conceptual development, and also play a role in the construction and communication of complex spatial and temporal concepts in the geosciences.

  9. Role of sex in externally motivated self-touching gestures.

    Science.gov (United States)

    Heaven, Laura; McBrayer, Dan; Prince, Bob

    2002-08-01

    Self-touching gestures can be externally induced by the verbal presentation of anxiety-inducing stimuli and the active discussion of a passage. The frequency of these self-touching gestures appears to be affected by the individual interacting with the topic, the type of discourse (listening or discussing), the type of stimulus (canaries or leeches), and the interaction between the types of discourse and stimulus. This study assessed these variables as well as the sex of the participant and the order of presentation of stimulus type, neither of which were statistically significant. Participants were read two passages, one about a topic (leeches) expected to produce anxiety and the other about a topic (canaries) not expected to do so, and asked to answer questions about the passages. The number of self-touches was counted by an observer in another room. Each participant had both types of discourse (listening and discussing) and both types of stimulus (canaries and leeches). There was no significant difference between the number of self-touches by participants with either the male or female reader. Discussion as a method of discourse was associated with a significantly greater number of self-touches than listening. The interaction between discourse type and stimulus type was also significant. The combination of the anxiety-producing stimulus and the active discourse (discussion) produced the highest average number of self-touches.

  10. Dexiosis: a meaningful gesture of the Classical antiquity

    Directory of Open Access Journals (Sweden)

    Mgr. PhD. Lucia Nováková

    2016-07-01

    Dexiosis is a modern term referring to the handshaking motif appearing in ancient Greek art, which had specific meaning and symbolism. Though it was a characteristic iconographic element of Classical antiquity, its roots can be traced back to the Archaic period. Dexiosis was not merely a compositional element connecting two people, but carried a deeper meaning. Most often, the motif was associated with funerary art of Classical Athens. On funerary monuments the deceased were depicted in the circle of their families, which reflected the ideals of contemporary society. Particularly notable is the contrast between the public character of the funerary monument and the private nature of the depiction. Its meaning should be perceived in terms of both the intimate gesture expressing emotions and the formal presentation of the family. Dexiosis emphasized a permanent bond as the fundamental element of the family in particular, and society in general. At the same time, it was associated with the theme of farewell. The gesture was performed by two people in a dialogical composition, which clearly showed their mutual relationship, and the figures were depicted in various compositions regardless of their gender or age. The motif was also used in Hellenistic and Roman art.

  11. The Apparatus of Belief: Prayer, Technology, and Ritual Gesture

    Directory of Open Access Journals (Sweden)

    Anderson Blanton

    2016-06-01

    Through a focus on the early history of a mass mediated ritual practice, this essay describes the “apparatus of belief,” or the specific ways in which individual religious belief has become intimately related to tele-technologies such as the radio. More specifically, this paper examines prayers that were performed during the immensely popular Healing Waters Broadcast by Oral Roberts, a famous charismatic faith healer. An analysis of these healing prayers reveals the ways in which the old charismatic Christian gesture of manual imposition, or laying on of hands, took on new somatic registers and sensorial attunements when mediated, or transduced, through technologies such as the radio loudspeaker. Emerging from these mid-twentieth century radio broadcasts, this technique of healing prayer popularized by Roberts has now become a key ritual practice and theological motif within the global charismatic Christian healing movement. Critiquing established conceptions of prayer in the disciplines of anthropology and religious studies, this essay describes “belief” as a particular structure of intimacy between sensory capacity, media technology, and pious gesture.

  12. Gesturing during mental problem solving reduces eye movements, especially for individuals with lower visual working memory capacity.

    Science.gov (United States)

    Pouw, Wim T J L; Mavilidi, Myrto-Foteini; van Gog, Tamara; Paas, Fred

    2016-08-01

    Non-communicative hand gestures have been found to benefit problem-solving performance. These gestures seem to compensate for limited internal cognitive capacities, such as visual working memory capacity. Yet, it is not clear how gestures might perform this cognitive function. One hypothesis is that gesturing is a means to spatially index mental simulations, thereby reducing the need for visually projecting the mental simulation onto the visual presentation of the task. If that hypothesis is correct, less eye movements should be made when participants gesture during problem solving than when they do not gesture. We therefore used mobile eye tracking to investigate the effect of co-thought gesturing and visual working memory capacity on eye movements during mental solving of the Tower of Hanoi problem. Results revealed that gesturing indeed reduced the number of eye movements (lower saccade counts), especially for participants with a relatively lower visual working memory capacity. Subsequent problem-solving performance was not affected by having (not) gestured during the mental solving phase. The current findings suggest that our understanding of gestures in problem solving could be improved by taking into account eye movements during gesturing.

  13. Development of cross-linguistic variation in speech and gesture: motion events in English and Turkish.

    Science.gov (United States)

    Ozyürek, Asli; Kita, Sotaro; Allen, Shanley; Brown, Amanda; Furman, Reyhan; Ishizuka, Tomoka

    2008-07-01

    The way adults express manner and path components of a motion event varies across typologically different languages both in speech and cospeech gestures, showing that language specificity in event encoding influences gesture. The authors tracked when and how this multimodal cross-linguistic variation develops in children learning Turkish and English, 2 typologically distinct languages. They found that children learn to speak in language-specific ways from age 3 onward (i.e., English speakers used 1 clause and Turkish speakers used 2 clauses to express manner and path). In contrast, English- and Turkish-speaking children's gestures looked similar at ages 3 and 5 (i.e., separate gestures for manner and path), differing from each other only at age 9 and in adulthood (i.e., English speakers used 1 gesture, but Turkish speakers used separate gestures for manner and path). The authors argue that this pattern of the development of cospeech gestures reflects a gradual shift to language-specific representations during speaking and shows that looking at speech alone may not be sufficient to understand the full process of language acquisition.

  14. The Different Patterns of Gesture between Genders in Mathematical Problem Solving of Geometry

    Science.gov (United States)

    Harisman, Y.; Noto, M. S.; Bakar, M. T.; Amam, A.

    2017-02-01

    This article discusses students' gestures across genders when answering problems in geometry. Gesture analysis aims to check students' understanding where it is not evident from their writing. This study is qualitative research: seven questions were given to two eighth-grade junior high school students of equal ability. The data were collected from a mathematical problem-solving test, video recordings of students' presentations, and interviews in which students were asked questions to check their understanding of the geometry problems while the researchers observed their gestures. The results revealed patterns of gesture in students' conversation and prosodic cues, such as tone, intonation, speech rate and pauses. Female students tended to give indecisive gestures, for instance bowing, hesitating, appearing embarrassed, nodding many times when shifting cognitive comprehension, leaning their bodies forward, and asking the interviewer questions when they found the questions tough. Male students showed gestures such as playing with their fingers, focusing on the questions, taking longer to answer hard questions, and staying calm when shifting cognitive comprehension. We suggest observing a larger sample and focusing on the consistency of students' gestures in showing their understanding of the given problems.

  15. The development of co-speech gesture in the communication of children with autism spectrum disorders.

    Science.gov (United States)

    Sowden, Hannah; Clegg, Judy; Perkins, Michael

    2013-12-01

    Co-speech gestures have a close semantic relationship to speech in adult conversation. In typically developing children co-speech gestures which give additional information to speech facilitate the emergence of multi-word speech. A difficulty with integrating audio-visual information is known to exist for individuals with Autism Spectrum Disorder (ASD), which may affect development of the speech-gesture system. A longitudinal observational study was conducted with four children with ASD, aged 2;4 to 3;5 years. Participants were video-recorded for 20 min every 2 weeks during their attendance on an intervention programme. Recording continued for up to 8 months, thus affording a rich analysis of gestural practices from pre-verbal to multi-word speech across the group. All participants combined gesture with either speech or vocalisations. Co-speech gestures providing additional information to speech were observed to be either absent or rare. Findings suggest that children with ASD do not make use of the facilitating communicative effects of gesture in the same way as typically developing children.

  16. Method for user interface of large displays using arm pointing and finger counting gesture recognition.

    Science.gov (United States)

    Kim, Hansol; Kim, Yoonkyung; Lee, Eui Chul

    2014-01-01

    Although many three-dimensional pointing gesture recognition methods have been proposed, the problem of self-occlusion has not been considered. Furthermore, because almost all pointing gesture recognition methods use a wide-angle camera, additional sensors or cameras are required to concurrently perform finger gesture recognition. In this paper, we propose a method for performing both pointing gesture and finger gesture recognition for large display environments, using a single Kinect device and a skeleton tracking model. By considering self-occlusion, a compensation technique can be performed on the user's detected shoulder position when a hand occludes the shoulder. In addition, we propose a technique to facilitate finger counting gesture recognition, based on the depth image of the hand position. In this technique, the depth image is extracted from the end of the pointing vector. By using exception handling for self-occlusions, experimental results indicate that the pointing accuracy of a specific reference position was significantly improved. The average root mean square error was approximately 13 pixels for a 1920 × 1080 pixels screen resolution. Moreover, the finger counting gesture recognition accuracy was 98.3%.
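
    The arm-pointing step amounts to extending the shoulder-to-hand ray until it hits the display plane. A minimal ray-plane intersection sketch, assuming skeleton joints in a frame where the display lies in the plane z = 0 (the coordinate convention and values are illustrative):

```python
import numpy as np

def screen_point(shoulder, hand, screen_z=0.0):
    """Intersect the shoulder-to-hand pointing ray with the display plane."""
    direction = hand - shoulder              # pointing vector
    if abs(direction[2]) < 1e-9:
        return None                          # ray parallel to the screen
    t = (screen_z - shoulder[2]) / direction[2]
    if t <= 0:
        return None                          # pointing away from the screen
    hit = shoulder + t * direction
    return hit[:2]                           # (x, y) on the display plane

shoulder = np.array([0.2, 1.4, 2.0])         # hypothetical joint positions (m)
hand = np.array([0.4, 1.3, 1.5])
print(screen_point(shoulder, hand))          # [1. 1.]
```

    The compensated shoulder position described in the abstract would simply replace `shoulder` here whenever the hand occludes it.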

  17. Human facial neural activities and gesture recognition for machine-interfacing applications.

    Science.gov (United States)

    Hamedi, M; Salleh, Sh-Hussain; Tan, T S; Ismail, K; Ali, J; Dee-Uam, C; Pavaganun, C; Yupapin, P P

    2011-01-01

    The authors present a new method of recognizing different human facial gestures through their neural activities and muscle movements, which can be used in machine-interfacing applications. Human-machine interface (HMI) technology utilizes human neural activities as input controllers for the machine. Recently, much work has been done on the specific application of facial electromyography (EMG)-based HMI, which has used limited and fixed numbers of facial gestures. In this work, a multipurpose interface is suggested that can support 2-11 control commands applicable to various HMI systems. The significance of this work is finding the most accurate facial gestures for any application with a maximum of eleven control commands. Eleven facial gesture EMGs were recorded from ten volunteers. Detected EMGs were passed through a band-pass filter and root mean square features were extracted. Various combinations of gestures, with a different number of gestures in each group, were made from the existing facial gestures. Finally, all combinations were trained and classified by a fuzzy c-means classifier. In conclusion, the combinations with the highest recognition accuracy in each group were chosen. An average accuracy above 90% for the chosen combinations demonstrated their suitability as command controllers.
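
    The band-pass-then-RMS pipeline is standard for surface EMG; a sketch with scipy, where the 20-450 Hz band and 200 ms window are typical defaults rather than the paper's settings (the band also assumes a sampling rate comfortably above 900 Hz):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def emg_rms_features(emg, fs, band=(20.0, 450.0), win=0.2):
    """Windowed RMS features from one raw EMG channel.

    emg : raw samples for a single facial EMG channel
    fs  : sampling rate in Hz (assumed > 2 * band[1])
    """
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, emg)           # zero-phase band-pass
    n = int(win * fs)                        # samples per window
    windows = filtered[: len(filtered) // n * n].reshape(-1, n)
    return np.sqrt((windows ** 2).mean(axis=1))   # one RMS per window
```

    Feature vectors built this way would then go to the fuzzy c-means classifier described in the abstract.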

  18. Appearance-based human gesture recognition using multimodal features for human computer interaction

    Science.gov (United States)

    Luo, Dan; Gao, Hua; Ekenel, Hazim Kemal; Ohya, Jun

    2011-03-01

    The use of gesture as a natural interface plays a vitally important role in achieving intelligent Human Computer Interaction (HCI). Human gestures include different components of visual actions, such as motion of the hands, facial expression, and torso, to convey meaning. So far, in the field of gesture recognition, most previous works have focused on the manual component of gestures. In this paper, we present an appearance-based multimodal gesture recognition framework, which combines different groups of features, such as facial expression features and hand motion features, extracted from image frames captured by a single web camera. We consider 12 classes of human gestures with facial expressions, covering neutral, negative and positive meanings, from American Sign Language (ASL). We combine the features at two levels by employing two fusion strategies. At the feature level, an early feature combination is performed by concatenating and weighting different feature groups, and LDA is used to choose the most discriminative elements by projecting the features onto a discriminative expression space. The second strategy is applied at the decision level: weighted decisions from single modalities are fused at a later stage. A condensation-based algorithm is adopted for classification. We collected a data set with three to seven recording sessions and conducted experiments with the combination techniques. Experimental results showed that facial analysis improves hand gesture recognition, and that decision-level fusion performs better than feature-level fusion.
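
    The early (feature-level) fusion strategy, concatenating weighted feature groups and projecting with LDA, can be sketched with scikit-learn on synthetic data; the dimensions, weights, and labels are illustrative assumptions:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
n = 120
face = rng.normal(size=(n, 30))          # facial-expression features
hand = rng.normal(size=(n, 40))          # hand-motion features
labels = rng.integers(0, 12, size=n)     # 12 gesture classes, as in the paper

w_face, w_hand = 0.6, 0.4                # hypothetical group weights
fused = np.hstack([w_face * face, w_hand * hand])

lda = LinearDiscriminantAnalysis()
projected = lda.fit_transform(fused, labels)
print(projected.shape)                   # (n, n_classes - 1) at most
```

    Decision-level fusion would instead classify each modality separately and combine the weighted class scores afterwards.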

  19. Evaluation of the safety and usability of touch gestures in operating in-vehicle information systems with visual occlusion.

    Science.gov (United States)

    Kim, Huhn; Song, Haewon

    2014-05-01

    Nowadays, many automobile manufacturers are interested in applying the touch gestures that are used in smart phones to operate their in-vehicle information systems (IVISs). In this study, an experiment was performed to verify the applicability of touch gestures in the operation of IVISs from the viewpoints of both driving safety and usability. In the experiment, two devices were used: one was the Apple iPad, with which various touch gestures such as flicking, panning, and pinching were enabled; the other was the SK EnNavi, which only allowed tapping touch gestures. The participants performed the touch operations using the two devices under visually occluded situations, a well-known technique for estimating the load of visual attention while driving. In scrolling through a list, the flicking gestures required more time than the tapping gestures. Interestingly, both the flicking and simple tapping gestures required slightly higher visual attention. In moving a map, the average time taken per operation and the visual attention load required for the panning gestures did not differ from those of the simple tapping gestures that are used in existing car navigation systems. In zooming in/out of a map, the average time taken per pinching gesture was similar to that of the tapping gesture but required higher visual attention. Moreover, pinching gestures at a display angle of 75° required that the participants severely bend their wrists. Because the display angles of many car navigation systems tend to be more than 75°, pinching gestures can cause severe fatigue on users' wrists. Furthermore, contrary to participants' evaluation of other gestures, several participants answered that the pinching gesture was not necessary when operating IVISs. It was found that the panning gesture is the only touch gesture that can be used without negative consequences when operating IVISs while driving. The flicking gesture is likely to be used if the screen moving speed is slower or...

  20. Good and bad in the hands of politicians: spontaneous gestures during positive and negative speech.

    Directory of Open Access Journals (Sweden)

    Daniel Casasanto

    BACKGROUND: According to the body-specificity hypothesis, people with different bodily characteristics should form correspondingly different mental representations, even in highly abstract conceptual domains. In a previous test of this proposal, right- and left-handers were found to associate positive ideas like intelligence, attractiveness, and honesty with their dominant side and negative ideas with their non-dominant side. The goal of the present study was to determine whether 'body-specific' associations of space and valence can be observed beyond the laboratory in spontaneous behavior, and whether these implicit associations have visible consequences. METHODOLOGY AND PRINCIPAL FINDINGS: We analyzed speech and gesture (3012 spoken clauses, 1747 gestures) from the final debates of the 2004 and 2008 US presidential elections, which involved two right-handers (Kerry, Bush) and two left-handers (Obama, McCain). Blind, independent coding of speech and gesture allowed objective hypothesis testing. Right- and left-handed candidates showed contrasting associations between gesture and speech. In both of the left-handed candidates, left-hand gestures were associated more strongly with positive-valence clauses and right-hand gestures with negative-valence clauses; the opposite pattern was found in both right-handed candidates. CONCLUSIONS: Speakers associate positive messages more strongly with dominant-hand gestures and negative messages with non-dominant-hand gestures, revealing a hidden link between action and emotion. This pattern cannot be explained by conventions in language or culture, which associate 'good' with 'right' but not with 'left'; rather, the results support and extend the body-specificity hypothesis. Furthermore, the results suggest that the hand speakers use to gesture may have unexpected (and probably unintended) communicative value, providing the listener with a subtle index of how the speaker feels about the content of the co...

  1. Imaging a cognitive model of apraxia: the neural substrate of gesture-specific cognitive processes.

    Science.gov (United States)

    Peigneux, Philippe; Van der Linden, Martial; Garraux, Gaetan; Laureys, Steven; Degueldre, Christian; Aerts, Joel; Del Fiore, Guy; Moonen, Gustave; Luxen, Andre; Salmon, Eric

    2004-03-01

    The present study aimed to ascertain the neuroanatomical basis of an influential neuropsychological model for upper limb apraxia [Rothi LJ, et al. The Neuropsychology of Action. 1997. Hove, UK: Psychology Press]. Regional cerebral blood flow was measured in healthy volunteers using H2(15)O PET during performance of four tasks commonly used for testing upper limb apraxia, i.e., pantomime of familiar gestures on verbal command, imitation of familiar gestures, imitation of novel gestures, and an action-semantic task that consisted of matching objects for functional use. We also re-analysed data from a previous PET study in which we investigated the neural basis of the visual analysis of gestures. First, we found that two sets of discrete brain areas are predominantly engaged in the imitation of familiar and novel gestures, respectively. Segregated brain activation for novel gesture imitation concurs with neuropsychological reports in supporting the hypothesis that knowledge about the organization of the human body mediates the transition from visual perception to motor execution when imitating novel gestures [Goldenberg Neuropsychologia 1995;33:63-72]. Second, conjunction analyses revealed distinctive neural bases for most of the gesture-specific cognitive processes proposed in this cognitive model of upper limb apraxia. However, a functional analysis of brain imaging data suggested that one single memory store may be used for "to-be-perceived" and "to-be-produced" gestural representations, departing from Rothi et al.'s proposal. Based on the above considerations, we suggest and discuss a revised model for upper limb apraxia that might best account for both brain imaging findings and neuropsychological dissociations reported in the apraxia literature. Copyright 2004 Wiley-Liss, Inc.

  2. Perceived communicative intent in gesture and language modulates the superior temporal sulcus.

    Science.gov (United States)

    Redcay, Elizabeth; Velnoskey, Kayla R; Rowe, Meredith L

    2016-10-01

    Behavioral evidence and theory suggest gesture and language processing may be part of a shared cognitive system for communication. While much research demonstrates that both gesture and language recruit regions along perisylvian cortex, relatively less work has tested functional segregation within these regions on an individual level. Additionally, while most work has focused on a shared semantic network, less has examined shared regions for processing communicative intent. To address these questions, functional and structural MRI data were collected from 24 adult participants while they viewed videos of an experimenter producing communicative Participant-Directed Gestures (PDG) (e.g., "Hello, come here"), noncommunicative Self-adaptor Gestures (SG) (e.g., smoothing hair), and three written text conditions: (1) Participant-Directed Sentences (PDS), matched in content to PDG, (2) Third-person Sentences (3PS), describing a character's actions from a third-person perspective, and (3) meaningless sentences, Jabberwocky (JW). Surface-based conjunction and individual functional region of interest analyses identified shared neural activation between gesture (PDG vs. SG) and language processing using two different language contrasts. Conjunction analyses of gesture (PDG vs. SG) and Third-person Sentences versus Jabberwocky revealed overlap within left anterior and posterior superior temporal sulcus (STS). Conjunction analyses of gesture and Participant-Directed Sentences versus Third-person Sentences revealed regions sensitive to communicative intent, including the left middle and posterior STS and left inferior frontal gyrus. Further, parametric modulation using participants' ratings of stimuli revealed sensitivity of left posterior STS to individual perceptions of communicative intent in gesture. These data highlight an important role of the STS in processing participant-directed communicative intent through gesture and language. Hum Brain Mapp 37:3444-3461, 2016. © 2016 Wiley

  3. Key Frame Selection for One-Two Hand Gesture Recognition with HMM

    Directory of Open Access Journals (Sweden)

    Ketki P. Kshirsagar

    2015-06-01

    Sign language recognition is a popular research area involving computer vision, pattern recognition and image processing, and it enhances the communication capabilities of mute persons. In this paper, I present an object-based key frame selection method, with the forward algorithm used to measure shape similarity for one- and two-handed gesture recognition, both with and without trajectory features, using the HMM method. I propose using a hidden Markov model with a key frame selection facility and gesture trajectory features for one- and two-hand gesture recognition. Experimental results demonstrate the effectiveness of the proposed scheme for recognizing one-handed American Sign Language and two-handed British Sign Language.
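
    The forward algorithm relied on here scores an observation sequence against an HMM; a minimal scaled implementation (the parameters below are toy values, not trained sign models):

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of an observation sequence under an HMM.

    obs : observation indices over time (e.g., quantized key frames)
    pi  : initial state distribution, shape (S,)
    A   : state transition matrix, shape (S, S)
    B   : emission matrix, shape (S, O)
    """
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()                 # scale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.5, 0.5], [0.1, 0.9]])
print(forward_loglik([0, 1, 1], pi, A, B))
```

    A key-frame sequence would be classified by evaluating it under one trained model per sign and picking the highest-scoring model.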

  4. Wild chimpanzees modify modality of gestures according to the strength of social bonds and personal network size

    Science.gov (United States)

    Roberts, Anna Ilona; Roberts, Sam George Bradley

    2016-01-01

    Primates form strong and enduring social bonds with others and these bonds have important fitness consequences. However, how different types of communication are associated with different types of social bonds is poorly understood. Wild chimpanzees have a large repertoire of gestures, from visual gestures to tactile and auditory gestures. We used social network analysis to examine the association between proximity bonds (time spent in close proximity) and rates of gestural communication in pairs of chimpanzees when the intended recipient was within 10 m of the signaller. Pairs of chimpanzees with strong proximity bonds had higher rates of visual gestures, but lower rates of auditory long-range and tactile gestures. However, individual chimpanzees that had a larger number of proximity bonds had higher rates of auditory and tactile gestures and lower rates of visual gestures. These results suggest that visual gestures may be an efficient way to communicate with a small number of regular interaction partners, but that tactile and auditory gestures may be more effective at communicating with larger numbers of weaker bonds. Increasing flexibility of communication may have played an important role in managing differentiated social relationships in groups of increasing size and complexity in both primate and human evolution. PMID:27649626

  5. Human hand descriptions and gesture recognition for object manipulation.

    Science.gov (United States)

    Cobos, Salvador; Ferre, Manuel; Sánchez-Urán, M Ángel; Ortego, Javier; Aracil, Rafael

    2010-06-01

    This work focuses on obtaining realistic human hand models that are suitable for manipulation tasks. A 24 degrees of freedom (DoF) kinematic model of the human hand is defined. The model reasonably satisfies realism requirements in simulation and movement. To achieve realism, intra- and inter-finger constraints are obtained. The design of the hand model with 24 DoF is based upon a morphological, physiological and anatomical study of the human hand. The model is used to develop a gesture recognition procedure that uses principal components analysis (PCA) and discriminant functions. Two simplified hand descriptions (nine and six DoF) have been developed in accordance with the constraints obtained previously. The accuracy of the simplified models is almost 5% for the nine DoF hand description and 10% for the six DoF hand description. Finally, some criteria are defined by which to select the hand description best suited to the features of the manipulation task.
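
    The recognition procedure here is built on PCA, while the reduced nine- and six-DoF descriptions come from anatomical coupling constraints; purely as an analogy, projecting 24-DoF joint-angle vectors onto a handful of principal components looks like this on synthetic data:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
poses = rng.normal(size=(500, 24))       # synthetic 24-DoF joint vectors

pca = PCA(n_components=9).fit(poses)     # mirror the 9-DoF description
reduced = pca.transform(poses)
print(reduced.shape)                            # (500, 9)
print(pca.explained_variance_ratio_.sum())      # variance retained
```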

  6. Body language: The interplay between positional behavior and gestural signaling in the genus Pan and its implications for language evolution.

    Science.gov (United States)

    Smith, Lindsey W; Delgado, Roberto A

    2015-08-01

    The gestural repertoires of bonobos and chimpanzees are well documented, but the relationship between gestural signaling and positional behavior (i.e., body postures and locomotion) has yet to be explored. Given that one theory for language evolution attributes the emergence of increased gestural communication to habitual bipedality, this relationship is important to investigate. In this study, we examined the interplay between gestures, body postures, and locomotion in four captive groups of bonobos and chimpanzees using ad libitum and focal video data. We recorded 43 distinct manual (involving upper limbs and/or hands) and bodily (involving postures, locomotion, head, lower limbs, or feet) gestures. In both species, actors used manual and bodily gestures significantly more when recipients were attentive to them, suggesting these movements are intentionally communicative. Adults of both species spent less than 1.0% of their observation time in bipedal postures or locomotion, yet 14.0% of all bonobo gestures and 14.7% of all chimpanzee gestures were produced when subjects were engaged in bipedal postures or locomotion. Among both bonobo groups and one chimpanzee group, these were mainly manual gestures produced by infants and juvenile females. Among the other chimpanzee group, however, these were mainly bodily gestures produced by adult males in which bipedal posture and locomotion were incorporated into communicative displays. Overall, our findings reveal that bipedality did not prompt an increase in manual gesturing in these study groups. Rather, body postures and locomotion are intimately tied to many gestures and certain modes of locomotion can be used as gestures themselves. © 2015 Wiley Periodicals, Inc.

  7. Impact of Different e-Cigarette Generation and Models on Cognitive Performances, Craving and Gesture: A Randomized Cross-Over Trial (CogEcig)

    Science.gov (United States)

    Caponnetto, Pasquale; Maglia, Marilena; Cannella, Maria Concetta; Inguscio, Lucio; Buonocore, Mariachiara; Scoglio, Claudio; Polosa, Riccardo; Vinci, Valeria

    2017-01-01

    Introduction: Most electronic cigarettes (e-cigarettes) are designed to look like traditional cigarettes and simulate the visual, sensory, and behavioral aspects of smoking them. This research aimed to explore whether different e-cigarette models, compared with smokers' usual classic cigarettes, impact cognitive performance, craving, and gesture. Methods: The study is a randomized cross-over trial designed to compare cognitive performance, craving, and gesture in subjects who used first-generation electronic cigarettes or second-generation electronic cigarettes, alongside their usual cigarettes (Trial registration: ClinicalTrials.gov number NCT01735487). Results: Cognitive performance was not affected by group condition. Within-group repeated-measures analyses showed a significant time effect, indicating an increase in participants' current craving in the group smoking their usual classic cigarettes (group C), the group using a disposable cigalike electronic cigarette loaded with 24 mg nicotine cartridges (group H), and the group using a second-generation electronic cigarette (personal vaporizer model Ego C) loaded with 24 mg nicotine liquid (group E). Measures of gesture did not differ over the course of the experiment for any of the products under investigation. Conclusion: None of the cognitive measures (attention, executive function, and working memory) was influenced by e-cigarette type or gender, suggesting that electronic cigarettes could become a strong support, also from a cognitive point of view, for those who decide to quit smoking. Not only craving and other smoking withdrawal symptoms but also cognitive performance appear not to be linked solely to the presence of nicotine; this suggests that the reasons behind the dependence and the related difficulty of quitting smoking must also be sought in other factors, such as the gesture. Clinical Trial Registration: www.ClinicalTrials.gov, identifier NCT01735487. PMID:28337155

  8. User-independent accelerometer-based gesture recognition for mobile devices

    Directory of Open Access Journals (Sweden)

    Xian WANG

    2012-12-01

    Full Text Available Many mobile devices nowadays embed inertial sensors. This enables new forms of human-computer interaction through the use of gestures (movements performed with the mobile device) as a way of communication. This paper presents an accelerometer-based gesture recognition system for mobile devices which is able to recognize a collection of 10 different hand gestures. The system was conceived to be light and to operate in a user-independent manner in real time. The recognition system was implemented in a smartphone and evaluated through a collection of user tests, which showed a recognition accuracy similar to other state-of-the-art techniques and a lower computational complexity. The system was also used to build a human-robot interface that enables controlling a wheeled robot with gestures made with the mobile phone.
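
    One common building block of user-independent accelerometer gesture recognizers is normalizing away the speed and amplitude differences between users before classification. The sketch below shows that step under stated assumptions: it resamples a variable-length 3-axis trace to a fixed length, normalizes it, and classifies by nearest template. It illustrates the general approach, not the paper's exact algorithm.

    ```python
    import numpy as np

    def to_feature(trace, n=32):
        """trace: (T, 3) accelerometer samples -> flat, normalized (n*3,) vector."""
        t_old = np.linspace(0.0, 1.0, len(trace))
        t_new = np.linspace(0.0, 1.0, n)
        res = np.column_stack([np.interp(t_new, t_old, trace[:, k]) for k in range(3)])
        res -= res.mean(axis=0)              # remove per-axis offset (gravity bias)
        norm = np.linalg.norm(res)
        return (res / norm).ravel() if norm else res.ravel()

    def classify(trace, templates):
        """templates: dict label -> feature vector; nearest-template decision."""
        f = to_feature(trace)
        return min(templates, key=lambda lbl: np.linalg.norm(f - templates[lbl]))
    ```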

  9. Interface Everywhere: Further Development of a Gesture and Voice Commanding Interface Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Natural User Interface (NUI) is a term used to describe a number of technologies such as speech recognition, multi-touch, and kinetic interfaces. Gesture and voice...

  11. Real-Time Hand Gesture Recognition based on Modified Contour Chain Code Feature Set

    Directory of Open Access Journals (Sweden)

    Reza Azad

    2014-07-01

    Full Text Available Hand gesture recognition and pattern recognition are growing fields of research. Gestures are motions of the body or physical actions formed by the user in order to convey meaningful information. In this paper we propose a robust and efficient method for a real-time hand gesture recognition system. In the suggested method, the hand gesture is first extracted from the main image by edge detection and morphological operations and then sent to the feature extraction stage, where a modified contour chain code feature set is extracted. Finally, in the classification stage, we employ a multiclass support vector machine (SVM) as the classifier. The proposed approach is applied to an American Sign Language (ASL) database and obtains an accuracy rate of 99.40%; using five-fold cross-validation on the ASL database, we obtain 99.80% accuracy.
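
    As an illustration of chain-code features, the sketch below walks an extracted hand contour, emits 8-direction Freeman codes, and uses their normalized histogram as a fixed-length input to a multiclass SVM. The contour is assumed to be an (N, 2) array of adjacent boundary points; the paper's "modified" chain code adds refinements that are not reproduced here.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    # 8-direction Freeman codes for unit steps between neighbouring contour points.
    DIRS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
            (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

    def chain_code_histogram(contour):
        steps = np.sign(np.diff(contour, axis=0)).astype(int)
        codes = [DIRS[tuple(s)] for s in steps if tuple(s) in DIRS]
        hist = np.bincount(np.asarray(codes, dtype=int), minlength=8).astype(float)
        return hist / max(hist.sum(), 1.0)

    # Toy contour: a square traced point by point.
    square = np.array([[0, 0], [1, 0], [2, 0], [2, 1], [2, 2],
                       [1, 2], [0, 2], [0, 1], [0, 0]])
    print(chain_code_histogram(square))
    # clf = SVC(kernel="rbf").fit(np.vstack(histograms), labels)  # training sketch
    ```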

  12. Low-Complexity Hand Gesture Recognition System for Continuous Streams of Digits and Letters.

    Science.gov (United States)

    Poularakis, Stergios; Katsavounidis, Ioannis

    2016-09-01

    In this paper, we propose a complete gesture recognition framework based on maximum cosine similarity and fast nearest neighbor (NN) techniques, which offers high recognition accuracy and great computational advantages for three fundamental problems of gesture recognition: 1) isolated recognition; 2) gesture verification; and 3) gesture spotting on continuous data streams. To support our arguments, we provide a thorough evaluation on three large publicly available databases, examining various scenarios, such as noisy environments, a limited number of training examples, and time delay in the system's response. Our experimental results suggest that this simple NN-based approach is quite accurate for trajectory classification of digits and letters and could become a promising approach for implementations on low-power embedded systems.
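
    The core of such a framework can be stated in a few lines. Below is a minimal sketch of maximum-cosine-similarity nearest-neighbour classification, assuming each trajectory has already been resampled to a fixed-length vector; the fast NN search structures of the paper are omitted. Thresholding the best similarity score extends the same machinery to gesture verification and spotting.

    ```python
    import numpy as np

    def cosine_nn(query, train_X, train_y):
        """train_X: (m, d) rows normalized to unit length; query: (d,) vector.
        Returns the label of the training example with maximum cosine similarity."""
        q = query / np.linalg.norm(query)
        sims = train_X @ q                 # cosine similarity against all examples
        return train_y[np.argmax(sims)]
    ```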

  13. Bringing back the body into the mind: gestures enhance word learning in foreign language.

    Science.gov (United States)

    Macedonia, Manuela

    2014-01-01

    Foreign language education in the twenty-first century still teaches vocabulary mainly through reading and listening activities. This is due to the link between teaching practice and traditional philosophy of language, where language is considered to be an abstract phenomenon of the mind. However, a number of studies have shown that accompanying words or phrases of a foreign language with gestures leads to better memory results. In this paper, I review behavioral research on the positive effects of gestures on memory. Then I move to the factors that have been addressed as contributing to the effect, and I embed the reviewed evidence in the theoretical framework of embodiment. Finally, I argue that gestures accompanying foreign language vocabulary learning create embodied representations of those words. I conclude by advocating the use of gestures in future language education as a learning tool that enhances the mind.

  14. Evaluation of surface EMG features for the recognition of American Sign Language gestures.

    Science.gov (United States)

    Kosmidou, Vasiliki E; Hadjileontiadis, Leontios J; Panas, Stavros M

    2006-01-01

    In this work, analysis of the surface electromyogram (sEMG) signal is proposed for the recognition of American Sign Language (ASL) gestures. For this purpose, sixteen features are extracted from the sEMG signal acquired from the user's forearm and evaluated by the Mahalanobis distance criterion. Discriminant analysis is used to reduce the number of features used in the classification of the signed ASL gestures. The proposed features are tested against noise, resulting in a further reduced set of features, which are evaluated for their discriminant ability. The classification results reveal that 97.7% of the inspected ASL gestures were correctly recognized using sEMG-based features, providing a promising solution to the automatic ASL gesture recognition problem.
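
    For readers unfamiliar with the feature set, the sketch below computes a few classic time-domain sEMG features from one analysis window, and the Mahalanobis distance used to judge class separability. The specific window and feature choices here are illustrative, not the paper's full sixteen-feature set.

    ```python
    import numpy as np

    def semg_features(window):
        """window: (T,) samples from one sEMG channel -> classic features."""
        mav = np.mean(np.abs(window))                        # mean absolute value
        rms = np.sqrt(np.mean(window ** 2))                  # root mean square
        zc = np.count_nonzero(np.diff(np.signbit(window)))   # zero crossings
        wl = np.sum(np.abs(np.diff(window)))                 # waveform length
        return np.array([mav, rms, zc, wl])

    def mahalanobis(x, mean, cov):
        """Distance of feature vector x from a class with given mean/covariance."""
        d = x - mean
        return float(np.sqrt(d @ np.linalg.inv(cov) @ d))
    ```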

  15. Automated Gesturing for Virtual Characters: Speech-driven and Text-driven Approaches

    Directory of Open Access Journals (Sweden)

    Goranka Zoric

    2006-04-01

    Full Text Available We present two methods for automatic facial gesturing of graphically embodied animated agents. In the first, a conversational agent is driven by speech in an automatic lip-sync process: by analyzing the speech input, lip movements are determined from the speech signal. The other method provides a virtual speaker capable of reading plain English text and rendering it in the form of speech accompanied by appropriate facial gestures. The proposed statistical model for generating the virtual speaker's facial gestures can also be applied as an addition to the lip-synchronization process in order to obtain speech-driven facial gesturing; in this case the statistical model is triggered by the prosody of the input speech instead of a lexical analysis of the input text.

  16. Hand Gesture Based Wheelchair Movement Control for Disabled Person Using MEMS.

    Directory of Open Access Journals (Sweden)

    Prof. Vishal V. Pande,

    2014-04-01

    Full Text Available This paper develops a wheelchair control that is useful to physically disabled persons, driven by hand movement or hand-gesture recognition using acceleration technology. Tremendous leaps have been made in the field of wheelchair technology; however, even these significant advances have not been able to help quadriplegics navigate a wheelchair unassisted. The proposed wheelchair can be controlled by simple hand gestures: a sensor captures the hand gestures made by the user, interprets the motion the user intends, and moves accordingly. When the user changes direction, the values registered by the acceleration sensor change and are passed to a microcontroller. Depending on the direction of the acceleration, the microcontroller steers the wheelchair LEFT, RIGHT, FRONT, or BACK. The aim of this paper is to implement wheelchair direction control with hand-gesture recognition.
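
    The control logic amounts to thresholding the tilt read from the acceleration sensor. The sketch below is an assumed, illustrative mapping (the thresholds, axis conventions and STOP neutral zone are not from the paper); a real controller would also debounce the readings.

    ```python
    def command_from_tilt(ax, ay, threshold=0.4):
        """ax, ay: gravity components (in g) along the hand's x/y axes."""
        if ay > threshold:
            return "FRONT"
        if ay < -threshold:
            return "BACK"
        if ax > threshold:
            return "RIGHT"
        if ax < -threshold:
            return "LEFT"
        return "STOP"   # hand held level: no movement command
    ```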

  17. Music as a Mnemonic to Learn Gesture Sequences in Normal Aging and Alzheimer’s Disease

    OpenAIRE

    Aline eMoussard; Emmanuel eBigand; Isabelle ePeretz; Sylvie eBelleville

    2014-01-01

    Strong links between music and motor functions suggest that music could represent an interesting aid for motor learning. The present study aims, for the first time, to test the potential of music to assist in the learning of sequences of gestures in normal and pathological aging. Participants with mild Alzheimer's disease (AD) and healthy older adults (controls) learned sequences of meaningless gestures that were accompanied by either music or a metronome. We also manipulated the learning procedure...

  18. Gesture-Directed Sensor-Information Fusion (GDSIF) for Protection and Communication in Hazardous Environments

    Science.gov (United States)

    2009-11-20

    G. Rogers, R. Luna, and J. Ellen, “Wireless Communication Glove Apparatus for Motion Tracking, Gesture Recognition, Data Transmission, and Reception…” …and easier to deploy in a variety of ways (see, for example, [1] and [3]). The current eGloves have magnetic and motion sensors for gesture recognition [5], [6]. An important future step to enhance the effectiveness of the war fighter is to integrate CBRN and other sensors into the eGloves.

  19. Gestural Turing Test. A Motion-Capture Experiment for Exploring Believability In Artificial Nonverbal Communication.

    OpenAIRE

    Ventrella, Jeffrey; Seif El-Nasr, Magy; Aghabeigi, Bardia; Overington, Richard

    2010-01-01

    One of the open problems in creating believable characters in computer games and collaborative virtual environments is simulating adaptive human-like motion. Classical artificial intelligence (AI) research places an emphasis on verbal language. In response to the limitations of classical AI, many researchers have turned their attention to embodied communication and situated intelligence. Inspired by Gestural Theory, which claims that speech emerged from visual, bodily gestures in primates, we...

  20. Real-time hand gesture recognition exploiting multiple 2D and 3D cues

    OpenAIRE

    Dominio, Fabio

    2015-01-01

    The recent introduction of several 3D applications and stereoscopic display technologies has created the necessity of novel human-machine interfaces. The traditional input devices, such as keyboard and mouse, are not able to fully exploit the potential of these interfaces and do not offer a natural interaction. Hand gestures provide, instead, a more natural and sometimes safer way of interacting with computers and other machines without touching them. The use cases for gesture-based interface...

  1. Inconsistent use of gesture space during abstract pointing impairs language comprehension

    Directory of Open Access Journals (Sweden)

    Thomas C Gunter

    2015-02-01

    Full Text Available Pointing towards concrete objects is a well-known and efficient communicative strategy. Much less is known about the communicative effectiveness of abstract pointing, where the pointing gestures are directed to empty space. McNeill's (2003) observations suggest that abstract pointing can be used to establish referents in gesture space without the referents being physically present. Recently, however, it has been shown that abstract pointing typically provides information redundant to the uttered speech, suggesting a very limited communicative value (So et al., 2009). In a first approach to tackle this issue we were interested to know whether perceivers are sensitive at all to this gesture cue or whether it is completely discarded as irrelevant add-on information. Sensitivity to, for instance, a gesture-speech mismatch would suggest a potential communicative function of abstract pointing. We therefore devised a mismatch paradigm in which participants watched a video where a female was interviewed on various topics. During her responses, she established two concepts in space using abstract pointing (e.g., pointing to the left when saying Donald, and pointing to the right when saying Mickey). In the last response to each topic, the pointing gesture accompanying a target word (e.g., Donald) was either consistent or inconsistent with the previously established location. Event-related brain potentials showed an increased N400 and P600 when gesture and speech referred to different referents, indicating that inconsistent use of gesture space impairs language comprehension. Abstract pointing was found to influence comprehension even though gesture was not crucial to understanding the sentences or conducting the experimental task. These data suggest that a referent was retrieved via abstract pointing and that abstract pointing can potentially be used for referent indication in a discourse. We conclude that abstract pointing has a potential communicative function.

  2. Tracking and Classification of In-Air Hand Gesture Based on Thermal Guided Joint Filter

    OpenAIRE

    Kim, Seongwan; Ban, Yuseok; Lee, Sangyoun

    2017-01-01

    Research on hand gestures has attracted many image processing-related studies, as the hand gesture intuitively conveys a human's intention and motional meaning. Various sensors have been used to exploit the advantages of different modalities for the extraction of important information conveyed by the hand gesture of a user. Although many works have focused on learning the benefits of thermal information from thermal cameras, most have focused on face recognition or human body detection...

  3. Dynamic Hand Gesture Recognition for Wearable Devices with Low Complexity Recurrent Neural Networks

    OpenAIRE

    Shin, Sungho; Sung, Wonyong

    2016-01-01

    Gesture recognition is an essential technology for many wearable devices. While previous algorithms are mostly based on statistical methods, including the hidden Markov model, we develop two dynamic hand gesture recognition techniques using low-complexity recurrent neural network (RNN) algorithms. One is based on the video signal and employs a combined structure of a convolutional neural network (CNN) and an RNN. The other uses accelerometer data and requires only an RNN. Fixed-point optimization...

  4. Understanding Human Hand Gestures for Learning Robot Pick-and-Place Tasks

    Directory of Open Access Journals (Sweden)

    Hsien-I Lin

    2015-05-01

    Full Text Available Programming robots by human demonstration is an intuitive approach, especially by gestures. Because robot pick-and-place tasks are widely used in industrial factories, this paper proposes a framework to learn robot pick-and-place tasks by understanding human hand gestures. The proposed framework is composed of a gesture recognition module and a robot behaviour control module. For gesture recognition, transport empty (TE), transport loaded (TL), grasp (G), and release (RL) from Gilbreth's therbligs are the hand gestures to be recognized. A convolutional neural network (CNN) is adopted to recognize these gestures from a camera image. To achieve robust performance, a skin model based on a Gaussian mixture model (GMM) is used to filter out non-skin colours of an image, and a calibration of position and orientation is applied to obtain a neutral hand pose before training and testing the CNN. For robot behaviour control, robot motion primitives corresponding to TE, TL, G, and RL are implemented in the robot. To manage the primitives in the robot system, a behaviour-based programming platform based on the Extensible Agent Behavior Specification Language (XABSL) is adopted. Because XABSL provides flexibility and re-usability of the robot primitives, the hand motion sequence from the gesture recognition module can easily be used in the XABSL programming platform to implement robot pick-and-place tasks. An experimental evaluation with seven subjects performing seven hand gestures showed an average recognition rate of 95.96%. Moreover, an experiment with the XABSL programming platform showed that the cube-stacking task was easily programmed by human demonstration.
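
    The GMM skin-filtering step can be sketched compactly with scikit-learn: fit a Gaussian mixture to known skin-colour samples, then keep only pixels whose likelihood under the model exceeds a threshold. The number of components, colour space and threshold below are assumptions, and the CNN stage is not shown.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def fit_skin_model(skin_pixels, n_components=3):
        """skin_pixels: (N, 3) colour samples known to be skin (e.g. in YCrCb)."""
        return GaussianMixture(n_components=n_components).fit(skin_pixels)

    def skin_mask(image, model, log_thresh=-10.0):
        """image: (H, W, 3) -> boolean mask of likely-skin pixels."""
        scores = model.score_samples(image.reshape(-1, 3).astype(float))
        return (scores > log_thresh).reshape(image.shape[:2])
    ```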

  5. Hand Gesture Data Collection Procedure Using a Myo Armband for Machine Learning

    Science.gov (United States)

    2015-09-01

    Hand Gesture Data Collection Procedure Using a Myo Armband for Machine Learning, by Michael Lee and Nikhil Rao, Computational and Information Sciences Directorate. The Battlefield Information Processing Branch investigated using machine learning (ML) to identify military hand gestures. A Naïve Bayes model was

  6. Hand gesture recognition system based in computer vision and machine learning

    OpenAIRE

    Trigueiros, Paulo; Ribeiro, António Fernando; Reis, L.P.

    2015-01-01

    "Lecture notes in computational vision and biomechanics series, ISSN 2212-9391, vol. 19" Hand gesture recognition is a natural way of human computer interaction and an area of very active research in computer vision and machine learning. This is an area with many different possible applications, giving users a simpler and more natural way to communicate with robots/systems interfaces, without the need for extra devices. So, the primary goal of gesture recognition research applied to Hum...

  7. Combined Dynamic Time Warping with Multiple Sensors for 3D Gesture Recognition.

    Science.gov (United States)

    Choi, Hyo-Rim; Kim, TaeYong

    2017-08-17

    Cyber-physical systems, which closely integrate physical systems and humans, can be applied to a wider range of applications through user movement analysis. In three-dimensional (3D) gesture recognition, multiple sensors are required to recognize various natural gestures. Several studies have been undertaken in the field of gesture recognition; however, gesture recognition was conducted based on data captured from various independent sensors, which rendered the capture and combination of real-time data complicated. In this study, a 3D gesture recognition method using combined information obtained from multiple sensors is proposed. The proposed method can robustly perform gesture recognition regardless of a user's location and movement directions by providing viewpoint-weighted values and/or motion-weighted values. In the proposed method, the viewpoint-weighted dynamic time warping with multiple sensors has enhanced performance by preventing joint measurement errors and noise due to sensor measurement tolerance, which has resulted in the enhancement of recognition performance by comparing multiple joint sequences effectively.
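
    A weighted variant of dynamic time warping is easy to state: per-joint weights scale the frame-to-frame distance inside the classic DTW recursion. In the sketch below the weight vector is a stand-in for the paper's viewpoint- or motion-derived weights.

    ```python
    import numpy as np

    def weighted_dtw(a, b, w):
        """a: (n, J), b: (m, J) joint sequences; w: (J,) per-joint weights."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = np.sqrt(np.sum(w * (a[i - 1] - b[j - 1]) ** 2))
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]   # smaller = more similar under the weighted alignment
    ```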

  8. Gesture recognition based on computer vision and glove sensor for remote working environments

    Energy Technology Data Exchange (ETDEWEB)

    Chien, Sung Il; Kim, In Chul; Baek, Yung Mok; Kim, Dong Su; Jeong, Jee Won; Shin, Kug [Kyungpook National University, Taegu (Korea)

    1998-04-01

    In this research, we defined a gesture set needed for remote monitoring and control of an unmanned system in nuclear power station environments. Here, we define a command as the loci of a gesture. We aim at the development of an algorithm using a vision sensor and glove sensors in order to implement the gesture recognition system. The gesture recognition system based on computer vision tracks a hand by using cross-correlation of the PDOE image. To recognize the gesture word, the 8-direction code is employed as the input symbol for a discrete HMM. Another gesture recognition approach, based on glove sensors, introduces a Pinch glove and a Polhemus sensor as input devices; features extracted through preprocessing act as the input signal of the recognizer, and for recognizing the 3D loci of the Polhemus sensor a discrete HMM is also adopted. An alternative to the two foregoing recognition systems uses the vision and glove sensors together: the extracted mesh feature and the 8-direction code from locus tracking are introduced to further enhance recognition performance. An MLP trained by backpropagation is also introduced, and its performance is compared to that of the discrete HMM. (author). 32 refs., 44 figs., 21 tabs.

  9. Hand region extraction and gesture recognition from video stream with complex background through entropy analysis.

    Science.gov (United States)

    Lee, JongShill; Lee, YoungJoo; Lee, EungHyuk; Hong, SeungHong

    2004-01-01

    Hand gesture recognition utilizing image processing relies upon recognition through markers or hand extraction by colour, and is therefore heavily restricted by the colours of clothes or skin. We propose a method to recognize hand gestures extracted from images with a complex background, for a more natural interface in HCI (human-computer interaction). The proposed method obtains an image by subtracting one sequential image from another, measures the entropy, separates the hand region from the image, tracks the hand region, and recognizes hand gestures. Through entropy measurement, regions with large values carry colour information distributed near skin tone, and the hand region is extracted from the input images. The hand region can be extracted adaptively under variable lighting and individual differences, because entropy offers colour information and motion information at the same time. The contour of the detected hand region is extracted using chain coding, and a slightly improved centroidal profile method is presented to recognize the hand gesture. In experiments on 6 kinds of hand gesture, the method shows a recognition rate of more than 95% per person and 90-100% per gesture at 5 frames/sec.
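
    The entropy step can be sketched as follows: difference consecutive frames, then measure the intensity entropy of local blocks; blocks covering a moving, skin-coloured hand score high, so thresholding the entropy map localizes candidate hand regions. The block size, bin count and 8-bit intensity range are illustrative assumptions.

    ```python
    import numpy as np

    def block_entropy(diff, block=16, bins=32):
        """diff: (H, W) absolute frame difference -> per-block entropy map."""
        H, W = diff.shape
        ent = np.zeros((H // block, W // block))
        for i in range(ent.shape[0]):
            for j in range(ent.shape[1]):
                patch = diff[i * block:(i + 1) * block, j * block:(j + 1) * block]
                p, _ = np.histogram(patch, bins=bins, range=(0, 255))
                p = p / p.sum()
                nz = p[p > 0]
                ent[i, j] = -np.sum(nz * np.log2(nz))   # Shannon entropy of block
        return ent   # threshold this map to localize the moving hand
    ```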

  10. Motion-sensor fusion-based gesture recognition and its VLSI architecture design for mobile devices

    Science.gov (United States)

    Zhu, Wenping; Liu, Leibo; Yin, Shouyi; Hu, Siqi; Tang, Eugene Y.; Wei, Shaojun

    2014-05-01

    With the rapid proliferation of smartphones and tablets, various embedded sensors are incorporated into these platforms to enable multimodal human-computer interfaces. Gesture recognition, as an intuitive interaction approach, has been extensively explored in the mobile computing community. However, most gesture recognition implementations to date are user-dependent and rely only on the accelerometer: in order to achieve competitive accuracy, users are required to hold the device in a predefined manner during operation. In this paper, a high-accuracy human gesture recognition system is proposed based on the fusion of multiple motion sensors. Furthermore, to reduce the energy overhead resulting from frequent sensor sampling and data processing, a highly energy-efficient VLSI architecture implemented on a Xilinx Virtex-5 FPGA board is also proposed. Compared with the pure software implementation, an approximately 45-times speed-up is achieved while operating at 20 MHz. The experiments show that the average accuracy over 10 gestures reaches 93.98% in the user-independent case and 96.14% in the user-dependent case when subjects hold the device arbitrarily while completing the specified gestures. Although a few percent lower than the best conventional result, this still provides competitive accuracy acceptable for practical usage. Most importantly, the proposed system allows users to hold the device arbitrarily while performing the predefined gestures, which substantially enhances the user experience.

  11. [Research on finger key-press gesture recognition based on surface electromyographic signals].

    Science.gov (United States)

    Cheng, Juan; Chen, Xiang; Lu, Zhiyuan; Zhang, Xu; Zhao, Zhangyan

    2011-04-01

    This article reports research on the pattern recognition of finger key-press gestures based on surface electromyographic (SEMG) signals. All gestures were defined with reference to the standard PC keyboard, and a total of 16 key-press gestures of the right hand were defined. The SEMG signals were collected from the forearms of the subjects by 4 sensors, and two kinds of pattern recognition experiments were designed and implemented to explore the feasibility and repeatability of key-press gesture recognition based on SEMG signals. The results from 6 subjects showed that, using same-day templates, the average classification rate over the 16 defined key-press gestures reached above 75.8%. Moreover, when the training samples accumulated over 5 days, the recognition accuracies approached those obtained with same-day templates. The experimental results confirm the feasibility and repeatability of SEMG-based key-press gesture classification, which is meaningful for the implementation of myoelectric-control-based virtual keyboard interaction.

  12. Two-stage Hidden Markov Model in Gesture Recognition for Human Robot Interaction

    Directory of Open Access Journals (Sweden)

    Nhan Nguyen-Duc-Thanh

    2012-07-01

    Full Text Available The Hidden Markov Model (HMM) is very rich in mathematical structure and hence can form the theoretical basis for a wide range of applications, including gesture representation. Most research in this field, however, uses HMMs only for recognizing simple gestures, while HMMs can definitely be applied to whole-gesture meaning recognition. This is very effectively applicable in Human-Robot Interaction (HRI). In this paper, we introduce an approach to HRI in which not only can the human naturally control the robot by hand gesture, but the robot can also recognize what kind of task it is executing. The main idea behind this method is a two-stage Hidden Markov Model. The first HMM recognizes the prime, command-like gestures. Based on the sequence of prime gestures recognized in the first stage, which represents the whole action, the second HMM plays a role in task recognition. Another contribution of this paper is the use of a mixed Gaussian output distribution in the HMM to improve the recognition rate. In the experiments, we also compare different numbers of hidden states and mixture components to obtain the optimal configuration, and compare against other methods to evaluate performance.
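
    Structurally, the two-stage idea composes as follows: stage one turns raw motion windows into primitive gesture labels, and stage two scores that label sequence under one discrete HMM per task, picking the most likely task. In this sketch the stage-one recognizer is a stub supplied by the caller, and the unscaled forward pass is adequate only for the short primitive sequences that make up one task; the paper's mixed-Gaussian emissions are not reproduced.

    ```python
    import numpy as np

    def seq_loglik(labels, start, trans, emit):
        """Unscaled forward pass over a discrete label sequence."""
        alpha = start * emit[:, labels[0]]
        for o in labels[1:]:
            alpha = (alpha @ trans) * emit[:, o]
        return np.log(alpha.sum())

    def recognize_task(windows, stage1, task_models):
        """stage1: motion window -> primitive gesture label (the 1st-stage HMM);
        task_models: task name -> (start, trans, emit) of a 2nd-stage HMM."""
        labels = [stage1(w) for w in windows]            # stage 1: prime gestures
        return max(task_models,                          # stage 2: task decision
                   key=lambda t: seq_loglik(labels, *task_models[t]))
    ```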

  13. The impact of iconic gestures on foreign language word learning and its neural substrate.

    Science.gov (United States)

    Macedonia, Manuela; Müller, Karsten; Friederici, Angela D

    2011-06-01

    Vocabulary acquisition represents a major challenge in foreign language learning. Research has demonstrated that gestures accompanying speech have an impact on memory for verbal information in the speakers' mother tongue and, as recently shown, also in foreign language learning. However, the neural basis of this effect remains unclear. In a within-subjects design, we compared learning of novel words coupled with iconic and meaningless gestures. Iconic gestures helped learners to significantly better retain the verbal material over time. After the training, participants' brain activity was registered by means of fMRI while performing a word recognition task. Brain activations to words learned with iconic and with meaningless gestures were contrasted. We found activity in the premotor cortices for words encoded with iconic gestures. In contrast, words encoded with meaningless gestures elicited a network associated with cognitive control. These findings suggest that memory performance for newly learned words is not driven by the motor component as such, but by the motor image that matches an underlying representation of the word's semantics. Copyright © 2010 Wiley-Liss, Inc.

  14. Domestic dogs use contextual information and tone of voice when following a human pointing gesture.

    Directory of Open Access Journals (Sweden)

    Linda Scheider

    Full Text Available Domestic dogs are skillful at using the human pointing gesture. In this study we investigated whether dogs take contextual information into account when following pointing gestures, specifically, whether they follow human pointing gestures more readily in the context in which food has been found previously. Also varied was the human's tone of voice as either imperative or informative. Dogs were more sustained in their searching behavior in the 'context' condition as opposed to the 'no context' condition, suggesting that they do not simply follow a pointing gesture blindly but use previously acquired contextual information to inform their interpretation of that pointing gesture. Dogs also showed more sustained searching behavior when there was pointing than when there was not, suggesting that they expect to find a referent when they see a human point. Finally, dogs searched more in high-pitched informative trials as opposed to the low-pitched imperative trials, whereas in the latter dogs seemed more inclined to respond by sitting. These findings suggest that a dog's response to a pointing gesture is flexible and depends on the context as well as the human's tone of voice.

  16. RGBD Video Based Human Hand Trajectory Tracking and Gesture Recognition System

    Directory of Open Access Journals (Sweden)

    Weihua Liu

    2015-01-01

    Full Text Available The task of human hand trajectory tracking and gesture trajectory recognition based on synchronized color and depth video is considered. Toward this end, for hand tracking, a joint observation model combining the hand cues of skin saliency, motion, and depth is integrated into a particle filter in order to move particles to the local peak of the likelihood. The proposed hand tracking method, namely the salient skin, motion, and depth based particle filter (SSMD-PF), is capable of improving the tracking accuracy considerably when the signer performs the gesture toward the camera device and in front of moving, cluttered backgrounds. For gesture recognition, a shape-order context descriptor based on shape context is introduced, which can describe the gesture in the spatiotemporal domain. The efficient shape-order context descriptor can reveal the shape relationship and embed gesture sequence order information into the descriptor; moreover, the shape-order context leads to a robust, invariant matching score for gestures. Our approach is complemented with experimental results on the challenging hand-signed digits datasets and an American Sign Language dataset, which corroborate the performance of the novel techniques.
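
    The cue-fusion idea inside the particle filter can be sketched as follows: each particle (a candidate hand position) is weighted by the product of separate skin, motion and depth likelihoods, and systematic resampling then concentrates particles near the fused likelihood peak. The cue functions, motion noise and interfaces below are illustrative placeholders, not the SSMD-PF implementation.

    ```python
    import numpy as np

    def particle_filter_step(particles, weights, cue_fns, frame, rng):
        """particles: (N, 2) positions; cue_fns: list of f(p, frame) -> score."""
        # Random-walk motion model diffuses the particles.
        particles = particles + rng.normal(0.0, 3.0, particles.shape)
        # Fused likelihood: product over the independent cues (skin, motion, depth).
        lik = np.ones(len(particles))
        for f in cue_fns:
            lik *= np.array([f(p, frame) for p in particles])
        weights = weights * lik
        weights /= weights.sum()
        # Systematic resampling keeps particles near the likelihood peak.
        u = (np.arange(len(weights)) + rng.random()) / len(weights)
        idx = np.searchsorted(np.cumsum(weights), u)
        return particles[idx], np.full(len(weights), 1.0 / len(weights))
    ```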

  17. Baboons' hand preference resists to spatial factors for a communicative gesture but not for a simple manipulative action.

    Science.gov (United States)

    Bourjade, Marie; Meunier, Hélène; Blois-Heulin, Catherine; Vauclair, Jacques

    2013-09-01

    Olive baboons (Papio anubis) do acquire and use intentional requesting gestures in experimental contexts. Individual's hand preference for these gestures is consistent with that observed for typical communicative gestures, but not for manipulative actions. Here, we examine whether the strength of hand preference may also be a good marker of hemispheric specialization for communicative gestures, hence differing from the strength of hand preference for manipulative actions. We compared the consistency of individuals' hand preference with regard to the variation in space of either (i) a communicative partner or (ii) a food item to grasp using a controlled set-up. We report more consistent hand preference for communicative gestures than for grasping actions. Established hand preference in the midline was stronger for gesturing than for grasping and allowed to predict the consistency of hand preference across positions. We found no significant relation between the direction of hand preference and the task.

  18. Turn-taking: A case study of early gesture and word use in answering WHERE and WHICH questions

    Directory of Open Access Journals (Sweden)

    Eve Vivienne Clark

    2015-07-01

    Full Text Available When young children answer questions, they do so more slowly than adults and appear to have difficulty finding the appropriate words. Because children leave gaps before they respond, it is possible that they could answer faster with gestures than with words. In this case study of one child from age 1;4 to 3;5, we compare gestural and verbal responses to adult Where and Which questions, which can be answered with gestures and/or words. After extracting all adult Where and Which questions and child answers from longitudinal videotaped sessions, we examined the timing from the end of each question to the start of the response, and compared the timing for gestures and words. Child responses could take the form of a gesture or word(s); the latter could be words repeated from the adult question or new words retrieved by the child. Responses could also be complex: a gesture + word repeat, a gesture + new word, or a word repeat + new word. Gestures were the fastest overall, followed successively by word-repeats and then new-word responses. This ordering, with gestures ahead of words, suggests that the child knows what to answer but needs more time to retrieve any relevant words. In short, word retrieval and articulation appear to be bottlenecks in the timing of responses: both add to the planning required in answering a question.

  19. Learning gestures and ethical issues in oncology and nuclear medicine

    Directory of Open Access Journals (Sweden)

    Aboubakr Matrane

    2014-01-01

    Full Text Available Purpose: The purpose of this study is to show the importance of learning gestures in three medical procedures (chemotherapy, brachytherapy, and bone scan). It allows us to assess the complications, lack of benefit, and ethical questions with which resident physicians are confronted in their training. Materials and Methods: The study is based on a two-part questionnaire distributed to 70 resident physicians and 90 patients. Sixty radiation-oncology residents and 10 nuclear-medicine residents completed the first part (24 items), which concerned the learning of medical procedures. The second part (18 items) was completed by 90 patients (30 in the chemotherapy unit, 30 in the brachytherapy unit, and 30 in the nuclear medicine department); it concerned the information given to patients prior to performance of the gesture. Results: The training of resident physicians took place mainly during the first year, on conscious and well-informed patients, with the exception of brachytherapy, which is taught later, in the second year. Training was preceded by theoretical education in 56.7%, 43.3%, and 100% of cases, respectively, for chemotherapy, brachytherapy, and the bone scan unit, but prior observation by a senior had been missed in 16.7% of cases for chemotherapy and 36.7% for brachytherapy. Despite the almost constant presence of a senior, four incidents were associated with first acts in the chemotherapy and brachytherapy units and one incident in the bone scan unit. These incidents had been generated, respectively, by 23.4%, 26.7%, and 20% of the resident physicians surveyed (in chemotherapy, brachytherapy, and bone scan) and resulted in a loss of opportunity for the patient in 20%, 13.3%, and 40% of cases, respectively. Most patients were informed before the medical procedure was performed; lapses in this information raise ethical problems. Alternative ways of learning were known by most of the resident physicians in training

  20. Human facial neural activities and gesture recognition for machine-interfacing applications

    Directory of Open Access Journals (Sweden)

    Hamedi M

    2011-12-01

    Full Text Available M Hamedi, Sh-Hussain Salleh, TS Tan, K Ismail, J Ali, C Dee-Uam, C Pavaganun, PP Yupapin (University of Technology Malaysia, Johor Bahru, Malaysia; Valaya Alongkorn Rajabhat University, Pathum Thani, Thailand; King Mongkut's Institute of Technology Ladkrabang, Bangkok, Thailand). Abstract: The authors present a new method of recognizing different human facial gestures through their neural activities and muscle movements, which can be used in machine-interfacing applications. Human-machine interface (HMI) technology utilizes human neural activities as input controllers for the machine. Recently, much work has been done on the specific application of facial electromyography (EMG)-based HMI, which has used limited and fixed numbers of facial gestures. In this work, a multipurpose interface is suggested that can support 2-11 control commands applicable to various HMI systems. The significance of this work is finding the most accurate facial gestures for any application with a maximum of eleven control commands. Eleven facial-gesture EMGs were recorded from ten volunteers. The detected EMGs were passed through a band-pass filter and root mean square features were extracted. Various combinations of gestures, with a different number of gestures in each group, were made from the existing facial gestures. Finally, all combinations were trained and classified by a fuzzy c-means classifier. In conclusion, the combinations with the highest recognition accuracy in each group were chosen. An average accuracy
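
    To make the classification stage concrete, here is a small pure-NumPy fuzzy c-means sketch that clusters root-mean-square EMG feature vectors. The fuzzifier m = 2 and the iteration count are conventional defaults; this is a generic FCM, not the authors' trained classifier.

    ```python
    import numpy as np

    def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
        """X: (n, d) feature vectors -> (centers (c, d), memberships (n, c))."""
        rng = np.random.default_rng(seed)
        U = rng.random((len(X), c))
        U /= U.sum(axis=1, keepdims=True)
        for _ in range(iters):
            W = U ** m
            centers = (W.T @ X) / W.sum(axis=0)[:, None]      # weighted means
            d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
            U = 1.0 / d ** (2.0 / (m - 1.0))                  # inverse-distance
            U /= U.sum(axis=1, keepdims=True)                 # row-normalize
        return centers, U
    ```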