WorldWideScience

Sample records for view invariant gesture

  1. View Invariant Gesture Recognition using 3D Motion Primitives

    DEFF Research Database (Denmark)

    Holte, Michael Boelstoft; Moeslund, Thomas B.

    2008-01-01

    This paper presents a method for automatic recognition of human gestures. The method works with 3D image data from a range camera to achieve invariance to viewpoint. The recognition is based solely on motion from characteristic instances of the gestures. These instances are denoted 3D motion primitives. … as a gesture using a probabilistic edit distance method. The system has been trained on frontal images (0° camera rotation) and tested on 240 video sequences from 0° and 45°. An overall recognition rate of 82.9% is achieved. The recognition rate is independent of the viewpoint, which shows that the method…
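
The record above classifies a string of detected 3D motion primitives by edit distance to per-gesture templates. A minimal sketch of that idea, using a plain (non-probabilistic) Levenshtein distance; the primitive labels and templates below are invented for illustration, not taken from the paper:

```python
def edit_distance(a, b):
    """Levenshtein distance between two primitive-label sequences."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[m][n]

def classify(observed, templates):
    """Return the gesture whose primitive string is closest to the observation."""
    return min(templates, key=lambda g: edit_distance(observed, templates[g]))

# Hypothetical primitive strings: each letter stands for one motion primitive.
TEMPLATES = {"wave": "ABAB", "point": "CD", "raise": "EFG"}
print(classify("ABB", TEMPLATES))  # wave
```

A probabilistic variant would replace the unit insertion/deletion/substitution costs with costs derived from primitive confusion probabilities.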

  2. View invariant gesture recognition using the CSEMSwissRanger SR-2 camera

    DEFF Research Database (Denmark)

    Holte, Michael Boelstoft; Moeslund, Thomas B.; Fihl, Preben

    2008-01-01

    This paper introduces the use of range information acquired by a CSEM SwissRanger SR-2 camera for view-invariant recognition of one- and two-arm gestures. The range data enable motion detection and 3D representation of gestures. Motion is detected by double-difference range images and filtered…

  3. Fusion of Range and Intensity Information for View Invariant Gesture Recognition

    DEFF Research Database (Denmark)

    Holte, Michael Boelstoft; Moeslund, Thomas B.; Fihl, Preben

    2008-01-01

    This paper presents a system for view-invariant gesture recognition. The approach is based on 3D data from a CSEM SwissRanger SR-2 camera, which produces both a depth map and an intensity image of a scene. Since the two information types are aligned, we can use the intensity image … to define a region of interest for the relevant 3D data. This data fusion improves the quality of the range data and hence results in better recognition. The gesture recognition is based on finding motion primitives in the 3D data. The primitives are represented compactly and view-invariantly using harmonic shape context. A probabilistic edit distance classifier is applied to identify which gesture best describes a string of primitives. The approach is trained on data from one viewpoint and tested on data from a different viewpoint. The recognition rate is 92.9%, which is similar to the recognition rate…
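
The fusion step described above uses the aligned intensity image to restrict which range samples are kept. A toy sketch of such a region-of-interest mask, assuming a simple brightness threshold (the paper's actual segmentation is more involved):

```python
def fuse_roi(depth, intensity, threshold):
    """Keep depth samples only where the aligned intensity image is bright
    enough; a crude region-of-interest mask (threshold is an assumption)."""
    return [[d if i >= threshold else None
             for d, i in zip(drow, irow)]
            for drow, irow in zip(depth, intensity)]

depth = [[1.2, 3.4], [2.0, 0.9]]          # hypothetical range values (m)
intensity = [[200, 10], [180, 220]]       # hypothetical aligned intensities
print(fuse_roi(depth, intensity, 100))    # [[1.2, None], [2.0, 0.9]]
```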

  4. Moment Invariant Features Extraction for Hand Gesture Recognition of Sign Language based on SIBI

    Directory of Open Access Journals (Sweden)

    Angga Rahagiyanto

    2017-07-01

    Full Text Available The Myo Armband has become an immersive technology to help deaf people communicate with each other. The problem with the Myo sensor is its unstable clock rate, which causes data of different lengths for the same period, even for the same gesture. This research proposes a moment invariant method to extract features from the Myo sensor data. The method reduces the amount of data and produces data of equal length. The research is user-dependent, in accordance with the characteristics of the Myo Armband. The testing process was performed using the alphabet A to Z in SIBI, the Indonesian Sign Language, with static and dynamic finger movements. There are 26 alphabet classes with 10 variants in each class. We use min-max normalization to guarantee the range of the data and the K-Nearest Neighbor method to classify the dataset. Performance analysis with leave-one-out validation produced an accuracy of 82.31%. A more advanced classification method is required to improve the detection results.
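
The normalization and classification steps named in the abstract (min-max scaling followed by a K-Nearest Neighbor vote) can be sketched as follows; the feature vectors here are hypothetical stand-ins for the Myo moment-invariant features:

```python
from collections import Counter
import math

def min_max_normalize(xs):
    """Scale a feature vector to [0, 1]; constant vectors map to all zeros."""
    lo, hi = min(xs), max(xs)
    if hi == lo:
        return [0.0] * len(xs)
    return [(x - lo) / (hi - lo) for x in xs]

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label) pairs; Euclidean k-NN vote."""
    nearest = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical 2D features for two gesture classes.
train = [([0, 0], "A"), ([0, 1], "A"), ([5, 5], "B"), ([5, 6], "B")]
print(min_max_normalize([2, 4, 6]))    # [0.0, 0.5, 1.0]
print(knn_predict(train, [0.2, 0.2]))  # A
```

Leave-one-out validation, as used in the paper, would repeatedly hold out one sample and call `knn_predict` on the remainder.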

  5. Discriminative shared Gaussian processes for multi-view and view-invariant facial expression recognition

    NARCIS (Netherlands)

    Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja

    Images of facial expressions are often captured from various views as a result of either head movements or variable camera position. Existing methods for multi-view and/or view-invariant facial expression recognition typically perform classification of the observed expression using either classifiers…

  6. Invariants

    Indian Academy of Sciences (India)

    removed two cells of the same color. Whenever you place a 2 × 1 rectangle you cover one black and one white cell. So the total number of white cells you have covered minus the total number of black cells you have covered after placing some 2 × 1 rectangles is always zero. So this difference is an invariant! You…

  7. Mirror Neurons System Engagement in Late Adolescents and Adults While Viewing Emotional Gestures.

    Directory of Open Access Journals (Sweden)

    Emilie Salvia

    2016-07-01

    Full Text Available Observing others’ actions enhances muscle-specific cortico-spinal excitability, reflecting putative mirror neuron activity. Exposure to emotional stimuli also modulates cortico-spinal excitability. We investigated how these two phenomena might interact when combined, i.e., while observing a gesture performed with an emotion, and whether they change during the transition between adolescence and adulthood, a period of social and brain maturation. We delivered single-pulse transcranial magnetic stimulation (TMS) over the hand area of the left primary motor cortex of 27 healthy adults and adolescents and recorded their right first dorsal interosseus (FDI) muscle activity (i.e., the motor evoked potential, MEP) while they viewed videos of neutral or angry hand actions and facial expressions, or neutral objects as a control condition. We reproduced the motor resonance and emotion effects: hand actions and emotional stimuli induced greater cortico-spinal excitability than the faces/control condition and neutral videos, respectively. Moreover, the influence of emotion was present for faces but not for hand actions, indicating that motor resonance and the emotion effect might be non-additive. While motor resonance was observed in both groups, the emotion effect was present only in adults, not in adolescents. We discuss the possible neural bases of these findings.

  8. A view invariant gait cycle segmentation for ambient monitoring.

    Science.gov (United States)

    Wu, Hao; Xu, Beilei; Madhu, Himanshu; Zhou, Jing

    2016-08-01

    Gait analysis has many clinical applications in disease detection and treatment evaluation. Gait cycle segmentation is a critical component of gait analysis for timing the gait phases when evaluating many movement disorders. Computer vision techniques have been widely used in surveillance for security monitoring; they are nonintrusive and do not require cooperation from subjects. In this paper, we propose to leverage the videos from existing surveillance monitoring systems to provide long-term and ambient assessments of gait patterns from subjects' daily activity, without requiring them to wear a device. Our proposed method is a novel view-independent method for gait cycle segmentation. We use the temporal duration of spatial features to achieve fast, robust and accurate gait cycle segmentation. The method takes video from a single non-calibrated camera and is not limited to specific viewing angles of the subject.
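
The abstract does not detail the segmentation algorithm, but the core idea of timing a repeating gait phase from the temporal behavior of a feature can be illustrated with a simple autocorrelation-based period estimate; the 1D feature signal below is synthetic, not from the paper:

```python
def estimate_period(signal, min_lag=2):
    """Return the lag that maximizes the autocorrelation of the zero-mean
    signal: a crude stand-in for timing a repeating gait phase."""
    n = len(signal)
    mean = sum(signal) / n
    x = [s - mean for s in signal]
    best_lag, best_corr = min_lag, float("-inf")
    for lag in range(min_lag, n // 2):
        corr = sum(x[i] * x[i + lag] for i in range(n - lag))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return best_lag

# Synthetic periodic feature: one gait cycle every 8 frames.
sig = [1, 2, 3, 4, 3, 2, 1, 0] * 5
print(estimate_period(sig))  # 8
```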

  9. Mainstreaming gesture based interfaces

    Directory of Open Access Journals (Sweden)

    David Procházka

    2013-01-01

    Full Text Available Gestures are a common way of interacting with mobile devices. They emerged especially with the introduction of the iPhone. Gestures in currently used devices are usually based on the original gestures presented by Apple in its iOS (iPhone Operating System); therefore, there is wide agreement on mobile gesture design. In recent years, experiments with gesture usage have also appeared in other areas of consumer electronics and computers. Examples include televisions, large projections, etc. These gestures can be described as spatial or 3D gestures: they are connected with a natural 3D environment rather than with a flat 2D screen. Nevertheless, it is hard to find a comparable design agreement for spatial gestures. Various projects are based on completely different gesture sets. This situation is confusing for their users and slows down spatial gesture adoption. This paper is focused on the standardization of spatial gestures. A review of projects focused on spatial gesture usage is provided in the first part. The main emphasis is placed on the usability point of view. On the basis of our analysis, we argue that usability is the key issue enabling wide adoption. The emergence of mobile gestures was easy because the iPhone gestures were natural; therefore, it was not necessary to learn them. The design and implementation of our presentation software, which is controlled by gestures, is outlined in the second part of the paper. Furthermore, usability testing results are provided as well. We tested our application on a group of users not instructed in the implemented gesture design. These results were compared with others obtained with our original implementation. The evaluation can be used as the basis for the implementation of similar projects.

  10. Evidence for view-invariant face recognition units in unfamiliar face learning.

    Science.gov (United States)

    Etchells, David B; Brooks, Joseph L; Johnston, Robert A

    2017-05-01

    Many models of face recognition incorporate the idea of a face recognition unit (FRU), an abstracted representation formed from each experience of a face which aids recognition under novel viewing conditions. Some previous studies have failed to find evidence of this FRU representation. Here, we report three experiments which investigated this theoretical construct by modifying the face learning procedure from that in previous work. During learning, one or two views of previously unfamiliar faces were shown to participants in a serial matching task. Later, participants attempted to recognize both seen and novel views of the learned faces (recognition phase). Experiment 1 tested participants' recognition of a novel view, a day after learning. Experiment 2 was identical, but tested participants on the same day as learning. Experiment 3 repeated Experiment 1, but tested participants on a novel view that was outside the rotation of those views learned. Results revealed a significant advantage, across all experiments, for recognizing a novel view when two views had been learned compared to single view learning. The observed view invariance supports the notion that an FRU representation is established during multi-view face learning under particular learning conditions.

  11. Measurement Invariance for Latent Constructs in Multiple Populations: A Critical View and Refocus

    Science.gov (United States)

    Raykov, Tenko; Marcoulides, George A.; Li, Cheng-Hsien

    2012-01-01

    Popular measurement invariance testing procedures for latent constructs evaluated by multiple indicators in distinct populations are revisited and discussed. A frequently used test of factor loading invariance is shown to possess serious limitations that in general preclude it from accomplishing its goal of ascertaining this invariance. A process…

  12. Gesture for Linguists: A Handy Primer

    Science.gov (United States)

    Abner, Natasha; Cooperrider, Kensy; Goldin-Meadow, Susan

    2016-01-01

    Humans communicate using language, but they also communicate using gesture – spontaneous movements of the hands and body that universally accompany speech. Gestures can be distinguished from other movements, segmented, and assigned meaning based on their forms and functions. Moreover, gestures systematically integrate with language at all levels of linguistic structure, as evidenced in both production and perception. Viewed typologically, gesture is universal, but nevertheless exhibits constrained variation across language communities (as does language itself). Finally, gesture has rich cognitive dimensions in addition to its communicative dimensions. In overviewing these and other topics, we show that the study of language is incomplete without the study of its communicative partner, gesture. PMID:26807141

  13. Gesture for Linguists: A Handy Primer.

    Science.gov (United States)

    Abner, Natasha; Cooperrider, Kensy; Goldin-Meadow, Susan

    2015-11-01

    Humans communicate using language, but they also communicate using gesture - spontaneous movements of the hands and body that universally accompany speech. Gestures can be distinguished from other movements, segmented, and assigned meaning based on their forms and functions. Moreover, gestures systematically integrate with language at all levels of linguistic structure, as evidenced in both production and perception. Viewed typologically, gesture is universal, but nevertheless exhibits constrained variation across language communities (as does language itself). Finally, gesture has rich cognitive dimensions in addition to its communicative dimensions. In overviewing these and other topics, we show that the study of language is incomplete without the study of its communicative partner, gesture.

  14. Gesture, sign, and language: The coming of age of sign language and gesture studies.

    Science.gov (United States)

    Goldin-Meadow, Susan; Brentari, Diane

    2017-01-01

    How does sign language compare with gesture, on the one hand, and spoken language on the other? Sign was once viewed as nothing more than a system of pictorial gestures without linguistic structure. More recently, researchers have argued that sign is no different from spoken language, with all of the same linguistic structures. The pendulum is currently swinging back toward the view that sign is gestural, or at least has gestural components. The goal of this review is to elucidate the relationships among sign language, gesture, and spoken language. We do so by taking a close look not only at how sign has been studied over the past 50 years, but also at how the spontaneous gestures that accompany speech have been studied. We conclude that signers gesture just as speakers do. Both produce imagistic gestures along with more categorical signs or words. Because at present it is difficult to tell where sign stops and gesture begins, we suggest that sign should not be compared with speech alone but should be compared with speech-plus-gesture. Although it might be easier (and, in some cases, preferable) to blur the distinction between sign and gesture, we argue that distinguishing between sign (or speech) and gesture is essential to predict certain types of learning and allows us to understand the conditions under which gesture takes on properties of sign, and speech takes on properties of gesture. We end by calling for new technology that may help us better calibrate the borders between sign and gesture.

  15. Gesture for Linguists: A Handy Primer

    OpenAIRE

    Abner, Natasha; Cooperrider, Kensy; Goldin-Meadow, Susan

    2015-01-01

    Humans communicate using language, but they also communicate using gesture – spontaneous movements of the hands and body that universally accompany speech. Gestures can be distinguished from other movements, segmented, and assigned meaning based on their forms and functions. Moreover, gestures systematically integrate with language at all levels of linguistic structure, as evidenced in both production and perception. Viewed typologically, gesture is universal, but nevertheless exhibits constr...

  16. Dynamic Arm Gesture Recognition Using Spherical Angle Features and Hidden Markov Models

    Directory of Open Access Journals (Sweden)

    Hyesuk Kim

    2015-01-01

    Full Text Available We introduce a vision-based arm gesture recognition (AGR system using Kinect. The AGR system learns the discrete Hidden Markov Model (HMM, an effective probabilistic graph model for gesture recognition, from the dynamic pose of the arm joints provided by the Kinect API. Because Kinect’s viewpoint and the subject’s arm length can substantially affect the estimated 3D pose of each joint, it is difficult to recognize gestures reliably with these features. The proposed system performs the feature transformation that changes the 3D Cartesian coordinates of each joint into the 2D spherical angles of the corresponding arm part to obtain view-invariant and more discriminative features. We confirmed high recognition performance of the proposed AGR system through experiments with two different datasets.
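
The feature transformation described above, from 3D Cartesian joint positions to 2D spherical angles per arm part, can be sketched as follows; the joint names and axis conventions are assumptions, not taken from the paper:

```python
import math

def spherical_angles(parent, child):
    """Map the 3D offset from one joint to the next (e.g. shoulder to elbow;
    names assumed) to two spherical angles, discarding the segment length so
    the feature is invariant to the subject's arm length."""
    dx, dy, dz = (c - p for c, p in zip(child, parent))
    r = math.sqrt(dx * dx + dy * dy + dz * dz)
    theta = math.acos(dz / r)   # polar angle measured from the +z axis
    phi = math.atan2(dy, dx)    # azimuth in the x-y plane
    return theta, phi

theta, phi = spherical_angles((0.0, 0.0, 0.0), (1.0, 0.0, 0.0))
print(round(math.degrees(theta)), round(math.degrees(phi)))  # 90 0
```

One (theta, phi) pair per arm segment per frame would then form the observation sequence fed to the discrete HMM.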

  17. Gesture, sign and language: The coming of age of sign language and gesture studies

    Science.gov (United States)

    Goldin-Meadow, Susan; Brentari, Diane

    2016-01-01

    How does sign language compare to gesture, on the one hand, and to spoken language on the other? At one time, sign was viewed as nothing more than a system of pictorial gestures with no linguistic structure. More recently, researchers have argued that sign is no different from spoken language with all of the same linguistic structures. The pendulum is currently swinging back toward the view that sign is gestural, or at least has gestural components. The goal of this review is to elucidate the relationships among sign language, gesture, and spoken language. We do so by taking a close look not only at how sign has been studied over the last 50 years, but also at how the spontaneous gestures that accompany speech have been studied. We come to the conclusion that signers gesture just as speakers do. Both produce imagistic gestures along with more categorical signs or words. Because, at the moment, it is difficult to tell where sign stops and where gesture begins, we suggest that sign should not be compared to speech alone, but should be compared to speech-plus-gesture. Although it might be easier (and, in some cases, preferable) to blur the distinction between sign and gesture, we argue that making a distinction between sign (or speech) and gesture is essential to predict certain types of learning, and allows us to understand the conditions under which gesture takes on properties of sign, and speech takes on properties of gesture. We end by calling for new technology that may help us better calibrate the borders between sign and gesture. PMID:26434499

  18. An automatic and robust point cloud registration framework based on view-invariant local feature descriptors and transformation consistency verification

    Science.gov (United States)

    Cheng, Xu; Li, Zhongwei; Zhong, Kai; Shi, Yusheng

    2017-11-01

    This paper presents an automatic and robust framework for simultaneously registering pairwise point clouds and identifying the correctness of registration results. Given two partially overlapping point clouds with arbitrary initial positions, a view-invariant local feature descriptor is utilized to build sparse correspondence. A geometry constraint sample consensus (GC-SAC) algorithm is proposed to prune correspondence outliers and obtain an optimal 3D transformation hypothesis. Furthermore, by measuring the similarity between the estimated local and global transformations, a transformation consistency verification method is presented to efficiently detect potential registration failures. Our method provides reliable registration correctness verification even when two point clouds are only roughly registered. Experimental results demonstrate that our framework exhibits high levels of effectiveness and robustness for automatic registration.
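
The transformation consistency verification idea, comparing an estimated local transformation against a global one, can be sketched by thresholding the relative rotation angle and translation gap between two rigid transforms; the thresholds here are illustrative, not the paper's:

```python
import math

def transforms_consistent(R1, t1, R2, t2, max_angle_deg=5.0, max_trans=0.05):
    """Declare two rigid transforms (3x3 rotation as nested lists, plus a
    translation vector) consistent when their relative rotation angle and
    translation gap fall below thresholds."""
    # trace(R1^T @ R2), computed element-wise.
    trace = sum(R1[k][i] * R2[k][i] for i in range(3) for k in range(3))
    # Rotation angle from the trace formula, clamped for numerical safety.
    angle = math.degrees(math.acos(max(-1.0, min(1.0, (trace - 1.0) / 2.0))))
    return angle <= max_angle_deg and math.dist(t1, t2) <= max_trans

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
Rz90 = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]  # 90-degree rotation about z
print(transforms_consistent(I3, (0, 0, 0), I3, (0, 0, 0)))    # True
print(transforms_consistent(I3, (0, 0, 0), Rz90, (0, 0, 0)))  # False
```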

  19. An Adaptive Superpixel Based Hand Gesture Tracking and Recognition System

    Directory of Open Access Journals (Sweden)

    Hong-Min Zhu

    2014-01-01

    Full Text Available We propose an adaptive and robust superpixel-based hand gesture tracking system, in which hand gestures drawn in free air are recognized from their motion trajectories. First, we employ motion detection of superpixels and unsupervised image segmentation to detect the moving target hand using the first few frames of the input video sequence. Then the hand appearance model is constructed from its surrounding superpixels. By incorporating failure recovery and template matching in the tracking process, the target hand is tracked by an adaptive superpixel-based tracking algorithm, in which the problems of hand deformation, view-dependent appearance change, fast motion, and background confusion can be handled to extract the correct hand motion trajectory. Finally, the hand gesture is recognized from the extracted motion trajectory with a trained SVM classifier. Experimental results show that our proposed system achieves better performance than existing state-of-the-art methods, with recognition accuracies of 99.17% for the easy set and 98.57% for the hard set.

  20. A Broader View of Relativity General Implications of Lorentz and Poincaré Invariance

    CERN Document Server

    Hsu, Jong-Ping

    2006-01-01

    A Broader View of Relativity shows that there is still new life in old physics. The book examines the historical context and theoretical underpinnings of Einstein's theory of special relativity and describes Broad Relativity, a generalized theory of coordinate transformations between inertial reference frames that includes Einstein's special relativity as a special case. It shows how the principle of relativity is compatible with multiple concepts of physical time and how these different procedures for clock synchronization can be useful for thinking about different physical problems, including…

  1. View-Invariant Visuomotor Processing in Computational Mirror Neuron System for Humanoid.

    Directory of Open Access Journals (Sweden)

    Farhan Dawood

    Full Text Available Mirror neurons are visuo-motor neurons found in primates and thought to be significant for imitation learning. The proposition that mirror neurons result from associative learning while the neonate observes his own actions has received noteworthy empirical support. Self-exploration is regarded as a procedure by which infants become perceptually observant of their own body and engage in perceptual communication with themselves. We assume that a crude sense of self is the prerequisite for social interaction. However, the contribution of mirror neurons to encoding the perspective from which the motor acts of others are seen has not been addressed in relation to humanoid robots. In this paper we present a computational model for the development of a mirror neuron system (MNS) for a humanoid, based on the hypothesis that infants acquire an MNS by sensorimotor associative learning through self-exploration, capable of sustaining early imitation skills. The purpose of our proposed model is to take into account the view-dependency of neurons as a probable outcome of the associative connectivity between motor and visual information. In our experiment, a humanoid robot stands in front of a mirror (represented through a self-image captured by a camera) in order to obtain the associative relationship between its own motor-generated actions and its own visual body image. In the learning process the network first forms a mapping from each motor representation onto a visual representation from the self-exploratory perspective. Afterwards, the representation of the motor commands is learned to be associated with all possible visual perspectives. The complete architecture was evaluated by simulation experiments performed on a DARwIn-OP humanoid robot.

  2. Altered integration of speech and gesture in children with autism spectrum disorders

    OpenAIRE

    Hubbard, Amy L; MCNEALY, KRISTIN; Scott-Van Zeeland, Ashley A; Callan, Daniel E; Bookheimer, Susan Y.; Dapretto, Mirella

    2012-01-01

    The presence of gesture during speech has been shown to impact perception, comprehension, learning, and memory in normal adults and typically developing children. In neurotypical individuals, the impact of viewing co-speech gestures representing an object and/or action (i.e., iconic gesture) or speech rhythm (i.e., beat gesture) has also been observed at the neural level. Yet, despite growing evidence of delayed gesture development in children with autism spectrum disorders (ASD), few studies...

  3. Giving Speech a Hand: Gesture Modulates Activity in Auditory Cortex During Speech Perception

    OpenAIRE

    Hubbard, Amy L; Wilson, Stephen M.; Callan, Daniel E; Dapretto, Mirella

    2009-01-01

    Viewing hand gestures during face-to-face communication affects speech perception and comprehension. Despite the visible role played by gesture in social interactions, relatively little is known about how the brain integrates hand gestures with co-occurring speech. Here we used functional magnetic resonance imaging (fMRI) and an ecologically valid paradigm to investigate how beat gesture – a fundamental type of hand gesture that marks speech prosody – might impact speech perception at the neu...

  4. Synchronization of speech and gesture: evidence for interaction in action.

    Science.gov (United States)

    Chu, Mingyuan; Hagoort, Peter

    2014-08-01

    Language and action systems are highly interlinked. A critical piece of evidence is that speech and its accompanying gestures are tightly synchronized. Five experiments were conducted to test 2 hypotheses about the synchronization of speech and gesture. According to the interactive view, there is continuous information exchange between the gesture and speech systems, during both their planning and execution phases. According to the ballistic view, information exchange occurs only during the planning phases of gesture and speech, but the 2 systems become independent once their execution has been initiated. In all experiments, participants were required to point to and/or name a light that had just lit up. Virtual reality and motion tracking technologies were used to disrupt their gesture or speech execution. Participants delayed their speech onset when their gesture was disrupted. They did so even when their gesture was disrupted at its late phase and even when they received only the kinesthetic feedback of their gesture. Also, participants prolonged their gestures when their speech was disrupted. These findings support the interactive view and add new constraints on models of speech and gesture production. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  5. Fall Detection for Elderly from Partially Observed Depth-Map Video Sequences Based on View-Invariant Human Activity Representation

    Directory of Open Access Journals (Sweden)

    Rami Alazrai

    2017-03-01

    Full Text Available This paper presents a new approach for fall detection from partially-observed depth-map video sequences. The proposed approach utilizes the 3D skeletal joint positions obtained from the Microsoft Kinect sensor to build a view-invariant descriptor for human activity representation, called the motion-pose geometric descriptor (MPGD). Furthermore, we have developed a histogram-based representation (HBR) based on the MPGD to construct a length-independent representation of the observed video subsequences. Using the constructed HBR, we formulate the fall detection problem as a posterior-maximization problem in which the posterior probability for each observed video subsequence is estimated using a multi-class SVM (support vector machine) classifier. Then, we combine the computed posterior probabilities from all of the observed subsequences to obtain an overall class posterior probability for the entire partially-observed depth-map video sequence. To evaluate the performance of the proposed approach, we used the Kinect sensor to record a dataset of depth-map video sequences that simulates four fall-related activities of elderly people: walking, sitting, falling from standing and falling from sitting. Using the collected dataset, we developed three evaluation scenarios based on the number of unobserved video subsequences in the testing videos: a fully-observed video sequence scenario, a scenario with a single unobserved video subsequence of random length, and a scenario with two unobserved video subsequences of random lengths. Experimental results show that the proposed approach achieved average recognition accuracies of 93.6%, 77.6% and 65.1% in the first, second and third evaluation scenarios, respectively. These results demonstrate the feasibility of the proposed approach for detecting falls from partially-observed videos.
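
Combining per-subsequence class posteriors into a single video-level decision, as described above, can be sketched with a log-probability sum; this assumes independence across subsequences, which may differ from the paper's exact fusion rule:

```python
import math

def combine_posteriors(subseq_posteriors):
    """Combine per-subsequence class posteriors into one video-level score
    by summing log-probabilities, then return the best-scoring class."""
    classes = subseq_posteriors[0].keys()
    scores = {c: sum(math.log(p[c]) for p in subseq_posteriors)
              for c in classes}
    return max(scores, key=scores.get)

# Hypothetical SVM posteriors from three observed subsequences.
subs = [{"walk": 0.7, "fall": 0.3},
        {"walk": 0.4, "fall": 0.6},
        {"walk": 0.8, "fall": 0.2}]
print(combine_posteriors(subs))  # walk
```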

  6. Single gaze gestures

    DEFF Research Database (Denmark)

    Møllenbach, Emilie; Lilholm, Martin; Gail, Alastair

    2010-01-01

    This paper examines gaze gestures and their applicability as a generic selection method for gaze-only controlled interfaces. The method explored here is the Single Gaze Gesture (SGG), i.e. gestures consisting of a single point-to-point eye movement. Horizontal and vertical, long and short SGGs were...

  7. Priming Gestures with Sounds.

    Science.gov (United States)

    Lemaitre, Guillaume; Heller, Laurie M; Navolio, Nicole; Zúñiga-Peñaranda, Nicolas

    2015-01-01

    We report a series of experiments about a little-studied type of compatibility effect between a stimulus and a response: the priming of manual gestures via sounds associated with these gestures. The goal was to investigate the plasticity of the gesture-sound associations mediating this type of priming. Five experiments used a primed choice-reaction task. Participants were cued by a stimulus to perform response gestures that produced response sounds; those sounds were also used as primes before the response cues. We compared arbitrary associations between gestures and sounds (key lifts and pure tones) created during the experiment (i.e. no pre-existing knowledge) with ecological associations corresponding to the structure of the world (tapping gestures and sounds, scraping gestures and sounds) learned through the entire life of the participant (thus existing prior to the experiment). Two results were found. First, the priming effect exists for ecological as well as arbitrary associations between gestures and sounds. Second, the priming effect is greatly reduced for ecologically existing associations and is eliminated for arbitrary associations when the response gesture stops producing the associated sounds. These results provide evidence that auditory-motor priming is mainly created by rapid learning of the association between sounds and the gestures that produce them. Auditory-motor priming is therefore mediated by short-term associations between gestures and sounds that can be readily reconfigured regardless of prior knowledge.

  8. From Gesture to Speech

    Directory of Open Access Journals (Sweden)

    Maurizio Gentilucci

    2012-11-01

    Full Text Available One of the major problems concerning the evolution of human language is to understand how sounds became associated with meaningful gestures. It has been proposed that the circuit controlling gestures and speech evolved from a circuit involved in the control of arm and mouth movements related to ingestion. This circuit contributed to the evolution of spoken language, moving from a system of communication based on arm gestures. The discovery of mirror neurons has provided strong support for the gestural theory of speech origin because they offer a natural substrate for the embodiment of language and create a direct link between sender and receiver of a message. Behavioural studies indicate that manual gestures are linked to mouth movements used for syllable emission. Grasping with the hand selectively affected movements of inner or outer parts of the mouth according to syllable pronunciation, and hand postures, in addition to hand actions, influenced the control of mouth grasp and vocalization. Gestures and words are also related to each other. It was found that when producing communicative gestures (emblems), the intention to interact directly with a conspecific was transferred from gestures to words, inducing modifications in voice parameters. Transfer effects of the meaning of representational gestures were found on both vocalizations and meaningful words. We conclude that the results of our studies suggest the existence of a system relating gesture to vocalization which was the precursor of a more general system reciprocally relating gesture to word.

  9. Gesture-speech integration in children with specific language impairment.

    Science.gov (United States)

    Mainela-Arnold, Elina; Alibali, Martha W; Hostetter, Autumn B; Evans, Julia L

    2014-11-01

    Previous research suggests that speakers are especially likely to produce manual communicative gestures when they have relative ease in thinking about the spatial elements of what they are describing, paired with relative difficulty organizing those elements into appropriate spoken language. Children with specific language impairment (SLI) exhibit poor expressive language abilities together with within-normal-range nonverbal IQs. This study investigated whether weak spoken language abilities in children with SLI influence their reliance on gestures to express information. We hypothesized that these children would rely on communicative gestures to express information more often than their age-matched typically developing (TD) peers, and that they would sometimes express information in gestures that they do not express in the accompanying speech. Participants were 15 children with SLI (aged 5;6-10;0) and 18 age-matched TD controls. Children viewed a wordless cartoon and retold the story to a listener unfamiliar with the story. Children's gestures were identified and coded for meaning using a previously established system. Speech-gesture combinations were coded as redundant if the information conveyed in speech and gesture was the same, and non-redundant if the information conveyed in speech was different from the information conveyed in gesture. Children with SLI produced more gestures than children in the TD group; however, the likelihood that speech-gesture combinations were non-redundant did not differ significantly across the SLI and TD groups. In both groups, younger children were significantly more likely to produce non-redundant speech-gesture combinations than older children. The gesture-speech integration system functions similarly in children with SLI and TD, but children with SLI rely more on gesture to help formulate, conceptualize or express the messages they want to convey. This provides motivation for future research examining whether interventions

  10. Synchronization of speech and gesture: Evidence for interaction in action

    NARCIS (Netherlands)

    Chu, M.; Hagoort, P.

    2014-01-01

    Language and action systems are highly interlinked. A critical piece of evidence is that speech and its accompanying gestures are tightly synchronized. Five experiments were conducted to test 2 hypotheses about the synchronization of speech and gesture. According to the interactive view, there is

  11. Are Depictive Gestures like Pictures? Commonalities and Differences in Semantic Processing

    Science.gov (United States)

    Wu, Ying Choon; Coulson, Seana

    2011-01-01

    Conversation is multi-modal, involving both talk and gesture. Does understanding depictive gestures engage processes similar to those recruited in the comprehension of drawings or photographs? Event-related brain potentials (ERPs) were recorded from neurotypical adults as they viewed spontaneously produced depictive gestures preceded by congruent…

  12. Invariant subspaces

    CERN Document Server

    Radjavi, Heydar

    2003-01-01

    This broad survey spans a wealth of studies on invariant subspaces, focusing on operators on separable Hilbert space. Largely self-contained, it requires only a working knowledge of measure theory, complex analysis, and elementary functional analysis. Subjects include normal operators, analytic functions of operators, shift operators, examples of invariant subspace lattices, compact operators, and the existence of invariant and hyperinvariant subspaces. Additional chapters cover certain results on von Neumann algebras, transitive operator algebras, algebras associated with invariant subspaces,

  13. Gesture Types for Functions

    Science.gov (United States)

    Herbert, Sandra

    2012-01-01

    This paper reports on the different gesture types employed by twenty-three Year 10 students as they endeavoured to explain their understanding of rate of change associated with the functions resulting from two different computer simulations. These gestures also have application to revealing students' understanding of functions. However,…

  14. Mnemonic Effect of Iconic Gesture and Beat Gesture in Adults and Children: Is Meaning in Gesture Important for Memory Recall?

    Science.gov (United States)

    So, Wing Chee; Chen-Hui, Colin Sim; Wei-Shan, Julie Low

    2012-01-01

    Abundant research has shown that encoding meaningful gesture, such as an iconic gesture, enhances memory. This paper asked whether gesture needs to carry meaning to improve memory recall by comparing the mnemonic effect of meaningful (i.e., iconic gestures) and nonmeaningful gestures (i.e., beat gestures). Beat gestures involve simple motoric…

  15. Verbal working memory predicts co-speech gesture: evidence from individual differences.

    Science.gov (United States)

    Gillespie, Maureen; James, Ariel N; Federmeier, Kara D; Watson, Duane G

    2014-08-01

    Gesture facilitates language production, but there is debate surrounding its exact role. It has been argued that gestures lighten the load on verbal working memory (VWM; Goldin-Meadow, Nusbaum, Kelly, & Wagner, 2001), but gestures have also been argued to aid in lexical retrieval (Krauss, 1998). In the current study, 50 speakers completed an individual differences battery that included measures of VWM and lexical retrieval. To elicit gesture, each speaker described short cartoon clips immediately after viewing. Measures of lexical retrieval did not predict spontaneous gesture rates, but lower VWM was associated with higher gesture rates, suggesting that gestures can facilitate language production by supporting VWM when resources are taxed. These data also suggest that individual variability in the propensity to gesture is partly linked to cognitive capacities. Copyright © 2014 Elsevier B.V. All rights reserved.

  16. Natural gesture interfaces

    Science.gov (United States)

    Starodubtsev, Illya

    2017-09-01

    The paper describes the implementation of the system of interaction with virtual objects based on gestures. The paper describes the common problems of interaction with virtual objects, specific requirements for the interfaces for virtual and augmented reality.

  17. Gesture and Power

    OpenAIRE

    Covington-Ward, Yolanda

    2016-01-01

    In Gesture and Power Yolanda Covington-Ward examines the everyday embodied practices and performances of the BisiKongo people of the lower Congo to show how their gestures, dances, and spirituality are critical in mobilizing social and political action. Conceiving of the body as the center of analysis, a catalyst for social action, and as a conduit for the social construction of reality, Covington-Ward focuses on specific flashpoints in the last ninety years of Congo's troubled history, when ...

  18. Language as gesture.

    Science.gov (United States)

    Corballis, Michael C

    2009-10-01

Language can be understood as an embodied system, expressible as gestures. Perception of these gestures depends on the "mirror system," first discovered in monkeys, in which the same neural elements respond both when the animal makes a movement and when it perceives the same movement made by others. This system allows gestures to be understood in terms of how they are produced, as in the so-called motor theory of speech perception. I argue that human speech evolved from manual gestures, with vocal gestures being gradually incorporated into the mirror system in the course of hominin evolution. Speech may have become the dominant mode only with the emergence of Homo sapiens some 170,000 years ago, although language as a relatively complex syntactic system probably emerged over the past 2 million years, initially as a predominantly manual system. Despite the present-day dominance of speech, manual gestures accompany speech, and visuomanual forms of language persist in signed languages of the deaf, in handwriting, and even in such forms as texting.

  19. Hand Gesture and Mathematics Learning: Lessons From an Avatar.

    Science.gov (United States)

    Cook, Susan Wagner; Friedman, Howard S; Duggan, Katherine A; Cui, Jian; Popescu, Voicu

    2017-03-01

    A beneficial effect of gesture on learning has been demonstrated in multiple domains, including mathematics, science, and foreign language vocabulary. However, because gesture is known to co-vary with other non-verbal behaviors, including eye gaze and prosody along with face, lip, and body movements, it is possible the beneficial effect of gesture is instead attributable to these other behaviors. We used a computer-generated animated pedagogical agent to control both verbal and non-verbal behavior. Children viewed lessons on mathematical equivalence in which an avatar either gestured or did not gesture, while eye gaze, head position, and lip movements remained identical across gesture conditions. Children who observed the gesturing avatar learned more, and they solved problems more quickly. Moreover, those children who learned were more likely to transfer and generalize their knowledge. These findings provide converging evidence that gesture facilitates math learning, and they reveal the potential for using technology to study non-verbal behavior in controlled experiments. Copyright © 2016 Cognitive Science Society, Inc.

  20. Giving speech a hand: gesture modulates activity in auditory cortex during speech perception.

    Science.gov (United States)

    Hubbard, Amy L; Wilson, Stephen M; Callan, Daniel E; Dapretto, Mirella

    2009-03-01

    Viewing hand gestures during face-to-face communication affects speech perception and comprehension. Despite the visible role played by gesture in social interactions, relatively little is known about how the brain integrates hand gestures with co-occurring speech. Here we used functional magnetic resonance imaging (fMRI) and an ecologically valid paradigm to investigate how beat gesture-a fundamental type of hand gesture that marks speech prosody-might impact speech perception at the neural level. Subjects underwent fMRI while listening to spontaneously-produced speech accompanied by beat gesture, nonsense hand movement, or a still body; as additional control conditions, subjects also viewed beat gesture, nonsense hand movement, or a still body all presented without speech. Validating behavioral evidence that gesture affects speech perception, bilateral nonprimary auditory cortex showed greater activity when speech was accompanied by beat gesture than when speech was presented alone. Further, the left superior temporal gyrus/sulcus showed stronger activity when speech was accompanied by beat gesture than when speech was accompanied by nonsense hand movement. Finally, the right planum temporale was identified as a putative multisensory integration site for beat gesture and speech (i.e., here activity in response to speech accompanied by beat gesture was greater than the summed responses to speech alone and beat gesture alone), indicating that this area may be pivotally involved in synthesizing the rhythmic aspects of both speech and gesture. Taken together, these findings suggest a common neural substrate for processing speech and gesture, likely reflecting their joint communicative role in social interactions.

  1. Chronometric Invariants

    CERN Document Server

    Zelmanov, Abraham

    2004-01-01

This book introduces the mathematical apparatus of chronometric invariants (physically observable quantities) in the General Theory of Relativity, together with the numerous results this apparatus has yielded in relativistic cosmology (236 pages, 1 photo).

  2. Gesture & Aphasia: Iconic gestures convey part of the message

    NARCIS (Netherlands)

    van Nispen, Karin; van de Sandt-Koenderman, M.; Sekine, Kazuki; Krahmer, Emiel; Rose, Miranda

    2017-01-01

    Introduction: Gesture, particularly iconic gestures, can convey information absent in speech. Iconic gestures share a direct relation to the concept depicted, and thus could be beneficial during communication for people with aphasia (PWA). The present study aimed to investigate how PWA use iconic

  3. Gesture Facilitates Children's Creative Thinking.

    Science.gov (United States)

    Kirk, Elizabeth; Lewis, Carine

    2017-02-01

    Gestures help people think and can help problem solvers generate new ideas. We conducted two experiments exploring the self-oriented function of gesture in a novel domain: creative thinking. In Experiment 1, we explored the relationship between children's spontaneous gesture production and their ability to generate novel uses for everyday items (alternative-uses task). There was a significant correlation between children's creative fluency and their gesture production, and the majority of children's gestures depicted an action on the target object. Restricting children from gesturing did not significantly reduce their fluency, however. In Experiment 2, we encouraged children to gesture, and this significantly boosted their generation of creative ideas. These findings demonstrate that gestures serve an important self-oriented function and can assist creative thinking.

  4. Gesture Modelling for Linguistic Purposes

    CSIR Research Space (South Africa)

    Olivrin, GJ

    2007-05-01

    Full Text Available The study of sign languages attempts to create a coherent model that binds the expressive nature of signs conveyed in gestures to a linguistic framework. Gesture modelling offers an alternative that provides device independence, scalability...

  5. Does language shape silent gesture?

    Science.gov (United States)

    Özçalışkan, Şeyda; Lucero, Ché; Goldin-Meadow, Susan

    2016-03-01

Languages differ in how they organize events, particularly in the types of semantic elements they express and the arrangement of those elements within a sentence. Here we ask whether these cross-linguistic differences have an impact on how events are represented nonverbally; more specifically, on how events are represented in gestures produced without speech (silent gesture), compared to gestures produced with speech (co-speech gesture). We observed speech and gesture in 40 adult native speakers of English and Turkish (n = 20 per language) asked to describe physical motion events (e.g., running down a path), a domain known to elicit distinct patterns of speech and co-speech gesture in English and Turkish speakers. Replicating previous work (Kita & Özyürek, 2003), we found an effect of language on gesture when it was produced with speech: co-speech gestures produced by English speakers differed from co-speech gestures produced by Turkish speakers. However, we found no effect of language on gesture when it was produced on its own: silent gestures produced by English speakers were identical, in how motion elements were packaged and ordered, to silent gestures produced by Turkish speakers. The findings provide evidence for a natural semantic organization that humans impose on motion events when they convey those events without language. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Gesture in the Developing Brain

    Science.gov (United States)

    Dick, Anthony Steven; Goldin-Meadow, Susan; Solodkin, Ana; Small, Steven L.

    2012-01-01

    Speakers convey meaning not only through words, but also through gestures. Although children are exposed to co-speech gestures from birth, we do not know how the developing brain comes to connect meaning conveyed in gesture with speech. We used functional magnetic resonance imaging (fMRI) to address this question and scanned 8- to 11-year-old…

  7. Invariant vacuum

    Science.gov (United States)

    Robles-Pérez, Salvador

    2017-11-01

We apply the Lewis-Riesenfeld invariant method for the harmonic oscillator with time-dependent mass and frequency to the modes of a charged scalar field that propagates in a curved, homogeneous and isotropic spacetime. We recover the Bunch-Davies vacuum in the case of a flat de Sitter spacetime, its equivalent in the case of a closed de Sitter spacetime, and the invariant vacuum in a curved spacetime that evolves adiabatically. In all three cases, we compute the thermodynamic magnitudes of entanglement between the modes of the particles and antiparticles of the invariant vacuum, as well as the modification of the Friedmann equation caused by the energy density of entanglement. The amplitude of the vacuum fluctuations is also computed.

  8. Gesture en route to words

    DEFF Research Database (Denmark)

    Jensen de López, Kristine M.

    2010-01-01

-word stage as well as interaction between children and their respective caretakers' use of gestural communication. Consistent with previous studies, the results showed that all children used the gestural modality extensively across the two cultures. Two subgroups of children were identified regarding whether...... the children showed an early preference for the gestural or vocal modality. Through analyses of two-element combinations of words and/or gestures, we observed a relative increase in cross-modal (gesture-word and two-word) combinations. The results are discussed in terms of understanding gestures as a transition...... period and in relation to the degree to which gestures can be understood as a universal communicative device applied by children....

  9. Is Seeing Gesture Necessary to Gesture Like a Native Speaker?

    Science.gov (United States)

    Özçalışkan, Şeyda; Lucero, Ché; Goldin-Meadow, Susan

    2016-05-01

    Speakers of all languages gesture, but there are differences in the gestures that they produce. Do speakers learn language-specific gestures by watching others gesture or by learning to speak a particular language? We examined this question by studying the speech and gestures produced by 40 congenitally blind adult native speakers of English and Turkish (n = 20/language), and comparing them with the speech and gestures of 40 sighted adult speakers in each language (20 wearing blindfolds, 20 not wearing blindfolds). We focused on speakers' descriptions of physical motion, which display strong cross-linguistic differences in patterns of speech and gesture use. Congenitally blind speakers of English and Turkish produced speech that resembled the speech produced by sighted speakers of their native language. More important, blind speakers of each language used gestures that resembled the gestures of sighted speakers of that language. Our results suggest that hearing a particular language is sufficient to gesture like a native speaker of that language. © The Author(s) 2016.

  10. Three +1 Faces of Invariance

    CERN Document Server

    Fayngold, Moses

    2010-01-01

A careful look at an allegedly well-known century-old concept reveals interesting aspects that have generally escaped recognition in the literature. There are four different kinds of physical observables known or proclaimed as relativistic invariants under space-time rotations. Only observables in the first three categories are authentic invariants, whereas the single "invariant" in the fourth category, proper length, is actually not an invariant. Proper length has little if anything to do with proper distance, which is a true invariant. On the other hand, proper distance, proper time, and rest mass have more in common than is usually recognized; in particular, the mass-time analogy opens another view of the twin paradox.

  11. HAGR-D: A Novel Approach for Gesture Recognition with Depth Maps.

    Science.gov (United States)

    Santos, Diego G; Fernandes, Bruno J T; Bezerra, Byron L D

    2015-11-12

The hand is an important part of the body used to express information through gestures, and its movements can be used in dynamic gesture recognition systems based on computer vision, with practical applications in areas such as medicine, games and sign language. Although depth sensors have led to great progress in gesture recognition, hand gesture recognition is still an open problem because of its complexity, which is due to the large number of small articulations in a hand. This paper proposes a novel approach for hand gesture recognition with depth maps generated by the Microsoft Kinect Sensor (Microsoft, Redmond, WA, USA) using a variation of the CIPBR (convex invariant position based on RANSAC) algorithm and a hybrid classifier composed of dynamic time warping (DTW) and hidden Markov models (HMM), called the hybrid approach for gesture recognition with depth maps (HAGR-D). The experiments show that the proposed model outperforms other algorithms presented in the literature on hand gesture recognition tasks, achieving a classification rate of 97.49% on the MSRGesture3D dataset and 98.43% on the RPPDI dynamic gesture dataset.
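The DTW half of such a hybrid classifier can be sketched in a few lines. This is an illustrative, pure-Python sketch of generic template matching by dynamic time warping, not the paper's HAGR-D implementation; the CIPBR descriptor and HMM stage are omitted, and the feature sequences and labels below are invented for the example:

```python
def dtw_distance(a, b):
    """Classic dynamic-time-warping cost between two 1-D feature sequences."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best of insertion, deletion, or match from the neighboring cells.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def classify(query, templates):
    """Label of the gesture template with the smallest DTW cost to the query."""
    return min(templates, key=lambda item: dtw_distance(query, item[1]))[0]

templates = [("wave", [0, 1, 2, 1, 0]), ("push", [0, 2, 4, 4, 4])]
# A slightly time-warped "wave" still matches the "wave" template.
print(classify([0, 1, 1, 2, 1, 0], templates))  # prints: wave
```

Because warping absorbs differences in execution speed, the stretched query aligns to the "wave" template at zero cost.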

  12. Research on gesture recognition of augmented reality maintenance guiding system based on improved SVM

    Science.gov (United States)

    Zhao, Shouwei; Zhang, Yong; Zhou, Bin; Ma, Dongxi

    2014-09-01

Interaction is one of the key techniques of an augmented reality (AR) maintenance guiding system. Because of the complexity of the maintenance guiding system's image background and the high dimensionality of gesture characteristics, the whole gesture recognition process can be divided into three stages: gesture segmentation, gesture feature modeling, and gesture recognition. In the segmentation stage, to solve the misrecognition of skin-like regions, a segmentation algorithm combining a background model with skin color is adopted to exclude those regions. In the feature modeling stage, numerous characteristic features of the image are analyzed and acquired, such as structural characteristics, Hu invariant moments, and Fourier descriptors. In the recognition stage, a classifier based on the Support Vector Machine (SVM) is introduced into the AR maintenance guiding process. The SVM is a learning method grounded in statistical learning theory, with a solid theoretical foundation and excellent generalization ability; it is widely applied in machine learning and has particular advantages in dealing with small samples and with non-linear, high-dimensional pattern recognition. Gesture recognition for the AR maintenance guiding system is realized by the SVM after granulation of all the characteristic features. Experimental results on simulated number-gesture recognition and on its application in an AR maintenance guiding system show that the real-time performance and robustness of the system's gesture recognition can be greatly enhanced by the improved SVM.
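The Hu invariant moments mentioned as features can be sketched in pure Python. This is an illustrative sketch, not the paper's code: it computes only the first Hu moment of a binary hand mask and demonstrates its translation invariance; a full pipeline would compute all seven moments (e.g. OpenCV's `cv2.HuMoments`) and feed them to an SVM, which is omitted here:

```python
def raw_moment(img, p, q):
    """Raw image moment M_pq of a binary image given as nested lists of 0/1."""
    return sum((x ** p) * (y ** q) * v
               for y, row in enumerate(img) for x, v in enumerate(row))

def hu_first(img):
    """First Hu moment (eta20 + eta02): invariant to translation and scale."""
    m00 = raw_moment(img, 0, 0)            # area of the shape
    cx = raw_moment(img, 1, 0) / m00       # centroid x
    cy = raw_moment(img, 0, 1) / m00       # centroid y
    def mu(p, q):                          # central moment about the centroid
        return sum(((x - cx) ** p) * ((y - cy) ** q) * v
                   for y, row in enumerate(img) for x, v in enumerate(row))
    def eta(p, q):                         # scale-normalized central moment
        return mu(p, q) / m00 ** (1 + (p + q) / 2)
    return eta(2, 0) + eta(0, 2)

hand = [[0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 1, 0],
        [0, 0, 0, 0]]
# The same shape shifted one pixel right and down: the feature is unchanged.
shifted = [[0, 0, 0, 0],
           [0, 0, 1, 1],
           [0, 0, 1, 1],
           [0, 0, 0, 1]]
assert abs(hu_first(hand) - hu_first(shifted)) < 1e-12
```

Because central moments are taken about the shape's own centroid, translating the mask leaves the feature value exactly unchanged, which is what makes such moments useful as gesture descriptors.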

  13. Perception of Communicative and Non-Communicative Motion-Defined Gestures in Parkinson’s Disease

    Science.gov (United States)

    Jaywant, Abhishek; Wasserman, Victor; Kemppainen, Maaria; Neargarder, Sandy; Cronin-Golomb, Alice

    2017-01-01

Objective: Parkinson's disease (PD) is associated with deficits in social cognition and visual perception, but little is known about how the disease affects perception of socially complex biological motion, specifically motion-defined communicative and non-communicative gestures. We predicted that individuals with PD would perform more poorly than normal control (NC) participants in discriminating between communicative and non-communicative gestures, and in describing communicative gestures. We related the results to the participants' gender, as there are gender differences in social cognition in PD. Method: The study included 23 individuals with PD (10 men) and 24 NC participants (10 men) matched for age and education level. Participants viewed point-light human figures that conveyed communicative and non-communicative gestures and were asked to describe each gesture while discriminating between the two gesture types. Results: The PD group was less accurate than the NC group in describing non-communicative but not communicative gestures. Men with PD were impaired in describing and discriminating between communicative as well as non-communicative gestures. Conclusion: The present study demonstrated PD-related impairments in perceiving and inferring the meaning of biological motion gestures. Men with PD may have particular difficulty in understanding the communicative gestures of others in interpersonal exchanges. PMID:27055646

  14. Real-Time Multiview Recognition of Human Gestures by Distributed Image Processing

    Directory of Open Access Journals (Sweden)

    Sato Kosuke

    2010-01-01

Since a gesture involves a dynamic and complex motion, multiview observation and recognition are desirable. For the better representation of gestures, one needs to know, in the first place, from which views a gesture should be observed. Furthermore, it becomes increasingly important how the recognition results are integrated when larger numbers of camera views are considered. To investigate these problems, we propose a framework under which multiview recognition is carried out, and an integration scheme by which the recognition results are integrated online and in real time. For performance evaluation, we use the ViHASi (Virtual Human Action Silhouette) public image database as a benchmark together with our Japanese sign language (JSL) image database that contains 18 kinds of hand signs. By examining the recognition rates of each gesture for each view, we found gestures that exhibit view dependency and gestures that do not. Also, we found that the view dependency itself could vary depending on the target gesture sets. By integrating the recognition results of different views, our swarm-based integration provides more robust and better recognition performance than individual fixed-view recognition agents.
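The simplest form of the view-integration step can be sketched as a confidence-weighted vote over per-view predictions. This is an illustrative sketch only; the paper's swarm-based integration scheme is more elaborate, and the labels and confidence values below are invented for the example:

```python
from collections import Counter

def fuse_views(view_predictions):
    """Confidence-weighted vote over per-view (label, confidence) predictions."""
    scores = Counter()
    for label, confidence in view_predictions:
        scores[label] += confidence
    best_label, _ = max(scores.items(), key=lambda kv: kv[1])
    return best_label

# Three camera views disagree; the weighted vote settles on "sign_A"
# because its summed confidence (0.9 + 0.4) beats "sign_B" (0.6).
print(fuse_views([("sign_A", 0.9), ("sign_B", 0.6), ("sign_A", 0.4)]))  # prints: sign_A
```

Weighting by confidence lets a view that sees a gesture clearly outvote views for which the gesture is ambiguous, which is one reason multiview fusion beats any single fixed-view recognizer.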

  15. Eye-based head gestures

    DEFF Research Database (Denmark)

    Mardanbegi, Diako; Witzner Hansen, Dan; Pederson, Thomas

    2012-01-01

    A novel method for video-based head gesture recognition using eye information by an eye tracker has been proposed. The method uses a combination of gaze and eye movement to infer head gestures. Compared to other gesture-based methods a major advantage of the method is that the user keeps the gaze...... on the interaction object while interacting. This method has been implemented on a head-mounted eye tracker for detecting a set of predefined head gestures. The accuracy of the gesture classifier is evaluated and verified for gaze-based interaction in applications intended for both large public displays and small...... mobile phone screens. The user study shows that the method detects a set of defined gestures reliably....

  16. Gestures Enhance Foreign Language Learning

    Directory of Open Access Journals (Sweden)

    Manuela Macedonia

    2012-11-01

Language and gesture are highly interdependent systems that reciprocally influence each other. For example, performing a gesture when learning a word or a phrase enhances its retrieval compared to pure verbal learning. Although the enhancing effects of co-speech gestures on memory are known to be robust, the underlying neural mechanisms are still unclear. Here, we summarize the results of behavioral and neuroscientific studies. They indicate that the neural representation of words consists of complex multimodal networks connecting perception and motor acts that occur during learning. In this context, gestures can reinforce the sensorimotor representation of a word or a phrase, making it resistant to decay. Gestures can also favor the embodiment of abstract words by creating such a representation from scratch. Thus, we propose the use of gesture as a facilitating educational tool that integrates body and mind.

  17. Two sides of the same coin: Speech and gesture mutually interact to enhance comprehension

    NARCIS (Netherlands)

    Kelly, S.D.; Özyürek, A.; Maris, E.G.G.

    2010-01-01

    Gesture and speech are assumed to form an integrated system during language production. Based on this view, we propose the integrated-systems hypothesis, which explains two ways in which gesture and speech are integrated-through mutual and obligatory interactions-in language comprehension.

  18. Method for gesture based modeling

    DEFF Research Database (Denmark)

    2006-01-01

    A computer program based method is described for creating models using gestures. On an input device, such as an electronic whiteboard, a user draws a gesture which is recognized by a computer program and interpreted relative to a predetermined meta-model. Based on the interpretation, an algorithm...... is assigned to the gesture drawn by the user. The executed algorithm may, for example, consist in creating a new model element, modifying an existing model element, or deleting an existing model element....

  19. Hybrid gesture recognition system for short-range use

    Science.gov (United States)

    Minagawa, Akihiro; Fan, Wei; Katsuyama, Yutaka; Takebe, Hiroaki; Ozawa, Noriaki; Hotta, Yoshinobu; Sun, Jun

    2012-03-01

    In recent years, various gesture recognition systems have been studied for use in television and video games[1]. In such systems, motion areas ranging from 1 to 3 meters deep have been evaluated[2]. However, with the burgeoning popularity of small mobile displays, gesture recognition systems capable of operating at much shorter ranges have become necessary. The problems related to such systems are exacerbated by the fact that the camera's field of view is unknown to the user during operation, which imposes several restrictions on his/her actions. To overcome the restrictions generated from such mobile camera devices, and to create a more flexible gesture recognition interface, we propose a hybrid hand gesture system, in which two types of gesture recognition modules are prepared and with which the most appropriate recognition module is selected by a dedicated switching module. The two recognition modules of this system are shape analysis using a boosting approach (detection-based approach)[3] and motion analysis using image frame differences (motion-based approach)(for example, see[4]). We evaluated this system using sample users and classified the resulting errors into three categories: errors that depend on the recognition module, errors caused by incorrect module identification, and errors resulting from user actions. In this paper, we show the results of our investigations and explain the problems related to short-range gesture recognition systems.
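The motion-based module described above rests on image frame differences. A minimal grayscale sketch of that idea follows; the frame format (nested lists of intensities) and threshold value are assumptions for illustration, not details from the paper:

```python
def frame_difference(prev, curr, threshold=10):
    """Binary motion mask: 1 where two grayscale frames differ by more than threshold."""
    return [[1 if abs(c - p) > threshold else 0
             for p, c in zip(prev_row, curr_row)]
            for prev_row, curr_row in zip(prev, curr)]

# A bright object moves through the middle column between two frames.
prev = [[10, 10, 10],
        [10, 10, 10]]
curr = [[10, 80, 10],
        [10, 90, 10]]
print(frame_difference(prev, curr))  # prints: [[0, 1, 0], [0, 1, 0]]
```

A motion-based recognizer would then track how such masks evolve over time (e.g. the centroid of the moving region), which is robust at short range where the hand may partially leave the camera's field of view.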

  20. Iconic Gestures Facilitate Discourse Comprehension in Individuals With Superior Immediate Memory for Body Configurations.

    Science.gov (United States)

    Wu, Ying Choon; Coulson, Seana

    2015-11-01

    To understand a speaker's gestures, people may draw on kinesthetic working memory (KWM)-a system for temporarily remembering body movements. The present study explored whether sensitivity to gesture meaning was related to differences in KWM capacity. KWM was evaluated through sequences of novel movements that participants viewed and reproduced with their own bodies. Gesture sensitivity was assessed through a priming paradigm. Participants judged whether multimodal utterances containing congruent, incongruent, or no gestures were related to subsequent picture probes depicting the referents of those utterances. Individuals with low KWM were primarily inhibited by incongruent speech-gesture primes, whereas those with high KWM showed facilitation-that is, they were able to identify picture probes more quickly when preceded by congruent speech and gestures than by speech alone. Group differences were most apparent for discourse with weakly congruent speech and gestures. Overall, speech-gesture congruency effects were positively correlated with KWM abilities, which may help listeners match spatial properties of gestures to concepts evoked by speech. © The Author(s) 2015.

  1. Towards the creation of a Gesture Library

    Directory of Open Access Journals (Sweden)

    Bruno Galveia

    2015-06-01

    Full Text Available The evolution of technology has given rise to new possibilities in the so-called Natural User Interfaces research area. Among distinct initiatives, several researchers are working with existing sensors to improve support for gesture languages. This article tackles the recognition of gestures, using the Kinect sensor, in order to create a gesture library and subsequently support gesture recognition processes.

  2. Initial experiments with Multiple Musical Gestures

    DEFF Research Database (Denmark)

    Jensen, Kristoffer; Graugaard, Lars

    2005-01-01

    other manipulation gestures. The initial position controls which parameter is being affected, the notes intensity is controlled by the downward gesture speed, and a sequence is finalized instantly with one upward gesture. The synthesis employs a novel interface structure, the multiple musical gesture...

  3. Altered integration of speech and gesture in children with autism spectrum disorders.

    Science.gov (United States)

    Hubbard, Amy L; McNealy, Kristin; Scott-Van Zeeland, Ashley A; Callan, Daniel E; Bookheimer, Susan Y; Dapretto, Mirella

    2012-09-01

    The presence of gesture during speech has been shown to impact perception, comprehension, learning, and memory in normal adults and typically developing children. In neurotypical individuals, the impact of viewing co-speech gestures representing an object and/or action (i.e., iconic gesture) or speech rhythm (i.e., beat gesture) has also been observed at the neural level. Yet, despite growing evidence of delayed gesture development in children with autism spectrum disorders (ASD), few studies have examined how the brain processes multimodal communicative cues occurring during everyday communication in individuals with ASD. Here, we used a previously validated functional magnetic resonance imaging (fMRI) paradigm to examine the neural processing of co-speech beat gesture in children with ASD and matched controls. Consistent with prior observations in adults, typically developing children showed increased responses in right superior temporal gyrus and sulcus while listening to speech accompanied by beat gesture. Children with ASD, however, exhibited no significant modulatory effects in secondary auditory cortices for the presence of co-speech beat gesture. Rather, relative to their typically developing counterparts, children with ASD showed significantly greater activity in visual cortex while listening to speech accompanied by beat gesture. Importantly, the severity of their socio-communicative impairments correlated with activity in this region, such that the more impaired children demonstrated the greatest activity in visual areas while viewing co-speech beat gesture. These findings suggest that although the typically developing brain recognizes beat gesture as communicative and successfully integrates it with co-occurring speech, information from multiple sensory modalities is not effectively integrated during social communication in the autistic brain.

  4. Ape gestures and language evolution

    Science.gov (United States)

    Pollick, Amy S.; de Waal, Frans B. M.

    2007-01-01

    The natural communication of apes may hold clues about language origins, especially because apes frequently gesture with limbs and hands, a mode of communication thought to have been the starting point of human language evolution. The present study aimed to contrast brachiomanual gestures with orofacial movements and vocalizations in the natural communication of our closest primate relatives, bonobos (Pan paniscus) and chimpanzees (Pan troglodytes). We tested whether gesture is the more flexible form of communication by measuring the strength of association between signals and specific behavioral contexts, comparing groups of both the same and different ape species. Subjects were two captive bonobo groups, a total of 13 individuals, and two captive chimpanzee groups, a total of 34 individuals. The study distinguished 31 manual gestures and 18 facial/vocal signals. It was found that homologous facial/vocal displays were used very similarly by both ape species, yet the same did not apply to gestures. Both within and between species gesture usage varied enormously. Moreover, bonobos showed greater flexibility in this regard than chimpanzees and were also the only species in which multimodal communication (i.e., combinations of gestures and facial/vocal signals) added to behavioral impact on the recipient. PMID:17470779

  5. Gesture in an exoteric context

    Directory of Open Access Journals (Sweden)

    Žikić Bojan P.

    2004-01-01

    Full Text Available The anthropological approach to gesture studies is based on cultural communication as well as on defining the context of occurrence within the general cultural context, where the latter provides the frame of reference for the communication issued. The particular gesture is reviewed according to the relation of the morphology of physical movement to the semantic contents achieved and transmitted within the contexts of relevance. The gestures discussed are those performed by the players of the Italian football clubs "Torino" and "Juventus" after scoring goals at games between those clubs in some past seasons. The explanatory procedure links this discussion to gesture studies in Serbian ethnology and anthropology by structural-homologic consideration of some further contexts of relevance.

  6. Non Audio-Video gesture recognition system

    DEFF Research Database (Denmark)

    Craciunescu, Razvan; Mihovska, Albena Dimitrova; Kyriazakos, Sofoklis

    2016-01-01

    Gesture recognition is a topic in computer science and language technology with the goal of interpreting human gestures via mathematical algorithms. Gestures can originate from any bodily motion or state but commonly originate from the face or hand. Current research focus includes emotion...... recognition from the face and hand gesture recognition. Gesture recognition enables humans to communicate with the machine and interact naturally without any mechanical devices. This paper investigates the possibility of using non-audio/video sensors to design a low-cost gesture recognition device...

  7. Visible embodiment: gestures as simulated action.

    Science.gov (United States)

    Hostetter, Autumn B; Alibali, Martha W

    2008-06-01

    Spontaneous gestures that accompany speech are related to both verbal and spatial processes. We argue that gestures emerge from perceptual and motor simulations that underlie embodied language and mental imagery. We first review current thinking about embodied cognition, embodied language, and embodied mental imagery. We then provide evidence that gestures stem from spatial representations and mental images. We then propose the gestures-as-simulated-action framework to explain how gestures might arise from an embodied cognitive system. Finally, we compare this framework with other current models of gesture production, and we briefly outline predictions that derive from the framework.

  8. RGBD Video Based Human Hand Trajectory Tracking and Gesture Recognition System

    Directory of Open Access Journals (Sweden)

    Weihua Liu

    2015-01-01

    Full Text Available The task of human hand trajectory tracking and gesture trajectory recognition based on synchronized color and depth video is considered. Toward this end, in the facet of hand tracking, a joint observation model with the hand cues of skin saliency, motion, and depth is integrated into a particle filter in order to move particles to local peaks in the likelihood. The proposed hand tracking method, namely, salient skin, motion, and depth based particle filter (SSMD-PF, is capable of improving the tracking accuracy considerably, in the context of the signer performing the gesture toward the camera device and in front of moving, cluttered backgrounds. In the facet of gesture recognition, a shape-order context descriptor on the basis of shape context is introduced, which can describe the gesture in the spatiotemporal domain. The efficient shape-order context descriptor can reveal the shape relationship and embed gesture sequence order information into the descriptor. Moreover, the shape-order context leads to a robust score for gesture invariance. Our approach is complemented with experimental results on the settings of the challenging hand-signed digits datasets and American Sign Language dataset, which corroborate the performance of the novel techniques.
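    A hypothetical sketch of the particle-filter weighting idea behind the record above: each particle (a candidate hand position) is scored by a joint likelihood that multiplies independent skin, motion, and depth cues, and particles are then resampled in proportion to weight. The cue maps, function names, and parameters are illustrative assumptions, not the authors' implementation.

```python
import random

def joint_likelihood(particle, skin_map, motion_map, depth_map):
    """Score a particle (x, y) by multiplying per-cue likelihoods,
    so a particle must agree with all three cues to score highly."""
    x, y = particle
    return skin_map[y][x] * motion_map[y][x] * depth_map[y][x]

def resample(particles, weights):
    """Draw a new particle set with probability proportional to weight."""
    total = sum(weights)
    probs = [w / total for w in weights]
    return random.choices(particles, weights=probs, k=len(particles))
```

    In a full tracker, each resampling step would be followed by propagating particles with a motion model before re-weighting them against the next frame's cue maps.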

  9. So What's Your Affiliation With Gesture?

    OpenAIRE

    Kirchhof, Carolin; Malisz, Zofia; Wagner, Petra

    2011-01-01

    Following De Ruiter (2000), I propose that there is no such thing as a lexical affiliate for every gesture. I suggest interpreting gestures by their conceptual affiliates. The existence of conceptual rather than lexical gesture affiliates is supported by empirical data from a perception study with German native speakers. They linked gestures in video clips without sound to their accompanying speech that was in a separate audio clip. The manifold lexical connections people made could be united...

  10. Password Based Hand Gesture Controlled Robot

    OpenAIRE

    Shanmukha Rao; CH, Rajasekhar

    2016-01-01

    Gesture is one of the most natural ways of communication between humans and computers in real systems. Hand gestures are one of the important methods of non-verbal communication for humans. A simulation tool, MATLAB-based colour image processing, is used to recognize hand gestures. With the help of wireless communication, it is easier to interact with the robot. The objective of this project is to build a password-protected wireless gesture-controlled robot using an Arduino, RF transmitter and receiver...

  11. Introduction: Towards an Ethics of Gesture

    OpenAIRE

    Ruprecht, Lucia

    2017-01-01

    The introduction to this special section of Performance Philosophy takes Giorgio Agamben’s remarks about the mediality and potentiality of gesture as a starting point to rethink gesture’s nexus with ethics. Shifting the emphasis from philosophical reflection to corporeal practice, it defines gestural ethics as an acting-otherwise which comes into being in the particularities of singular gestural practice, its forms, kinetic qualities, temporal displacements and calls for response. Gestural ac...

  12. Gesture Supports Spatial Thinking in STEM

    Science.gov (United States)

    Stieff, Mike; Lira, Matthew E.; Scopelitis, Stephanie A.

    2016-01-01

    The present article describes two studies that examine the impact of teaching students to use gesture to support spatial thinking in the Science, Technology, Engineering, and Mathematics (STEM) discipline of chemistry. In Study 1 we compared the effectiveness of instruction that involved either watching gesture, reproducing gesture, or reading…

  13. Does language shape silent gesture?☆

    Science.gov (United States)

    Özçalışkan, Şeyda; Lucero, Ché; Goldin-Meadow, Susan

    2016-01-01

    Languages differ in how they organize events, particularly in the types of semantic elements they express and the arrangement of those elements within a sentence. Here we ask whether these cross-linguistic differences have an impact on how events are represented nonverbally; more specifically, on how events are represented in gestures produced without speech (silent gesture), compared to gestures produced with speech (co-speech gesture). We observed speech and gesture in 40 adult native speakers of English and Turkish (N = 20/per language) asked to describe physical motion events (e.g., running down a path)—a domain known to elicit distinct patterns of speech and co-speech gesture in English- and Turkish-speakers. Replicating previous work (Kita & Özyürek, 2003), we found an effect of language on gesture when it was produced with speech—co-speech gestures produced by English-speakers differed from co-speech gestures produced by Turkish-speakers. However, we found no effect of language on gesture when it was produced on its own—silent gestures produced by English-speakers were identical in how motion elements were packaged and ordered to silent gestures produced by Turkish-speakers. The findings provide evidence for a natural semantic organization that humans impose on motion events when they convey those events without language. PMID:26707427

  14. Gesture Activated Mobile Edutainment (GAME)

    DEFF Research Database (Denmark)

    Rehm, Matthias; Leichtenstern, Karin; Plomer, Joerg

    2010-01-01

    An approach to intercultural training of nonverbal behavior is presented that draws from research on role-plays with virtual agents and ideas from situated learning. To this end, a mobile serious game is realized where the user acquires knowledge about German emblematic gestures and tries them out...... in role-plays with virtual agents. Gesture performance is evaluated making use of built-in acceleration sensors of smartphones. After an account of the theoretical background covering diverse areas like virtual agents, situated learning and intercultural training, the paper presents the GAME approach...... along with details on the gesture recognition and content authoring. By its experience-based role plays with virtual characters, GAME brings together ideas from situated learning and intercultural training in an integrated approach and paves the way for new m-learning concepts....

  15. Conductor gestures influence evaluations of ensemble performance.

    Science.gov (United States)

    Morrison, Steven J; Price, Harry E; Smedley, Eric M; Meals, Cory D

    2014-01-01

    Previous research has found that listener evaluations of ensemble performances vary depending on the expressivity of the conductor's gestures, even when performances are otherwise identical. It was the purpose of the present study to test whether this effect of visual information was evident in the evaluation of specific aspects of ensemble performance: articulation and dynamics. We constructed a set of 32 music performances that combined auditory and visual information and were designed to feature a high degree of contrast along one of two target characteristics: articulation and dynamics. We paired each of four music excerpts recorded by a chamber ensemble in both a high- and low-contrast condition with video of four conductors demonstrating high- and low-contrast gesture specifically appropriate to either articulation or dynamics. Using one of two equivalent test forms, college music majors and non-majors (N = 285) viewed sixteen 30 s performances and evaluated the quality of the ensemble's articulation, dynamics, technique, and tempo along with overall expressivity. Results showed significantly higher evaluations for performances featuring high rather than low conducting expressivity regardless of the ensemble's performance quality. Evaluations for both articulation and dynamics were strongly and positively correlated with evaluations of overall ensemble expressivity.

  16. Feasibility of interactive gesture control of a robotic microscope

    Directory of Open Access Journals (Sweden)

    Antoni Sven-Thomas

    2015-09-01

    Full Text Available Robotic devices are becoming increasingly available in clinics. One example is motorized surgical microscopes. While there are different scenarios for using such devices for autonomous tasks, simple and reliable interaction with the device is key to acceptance by surgeons. We study how gesture tracking can be integrated into the setup of a robotic microscope. In our setup, a Leap Motion Controller is used to track hand motion and adjust the field of view accordingly. We demonstrate with a survey that moving the field of view over a specified course is possible even for untrained subjects. Our results indicate that touch-less interaction with robots carrying small, near-field gesture sensors is feasible and can be of use in clinical scenarios where robotic devices are used in direct proximity to patients and physicians.
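    A hedged sketch of the interaction idea in this record: tracked hand displacement is mapped to field-of-view motion through a gain, with a dead zone so that hand tremor does not move the microscope. The gain, dead-zone radius, and function name are illustrative assumptions, not details from the paper.

```python
import math

def hand_to_fov(dx, dy, gain=0.5, dead_zone=2.0):
    """Map a tracked hand displacement (dx, dy) to a field-of-view
    translation. Displacements smaller than `dead_zone` are ignored
    so small involuntary hand movements do not move the view."""
    if math.hypot(dx, dy) < dead_zone:
        return (0.0, 0.0)
    return (gain * dx, gain * dy)
```

    A real controller would also clamp the output to the microscope's workspace limits and smooth the tracked displacement over several frames.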

  17. Gestures convey content: an exploration of the semantic functions of physicians' gestures.

    Science.gov (United States)

    Gerwing, Jennifer; Dalby, Anne Marie Landmark

    2014-09-01

    Gestures' semiotic role in clinical interactions is unexplored. Using theoretical underpinnings from basic research on gesture, our objective was to investigate the semantic contributions of physicians' gestures during interactions with patients with a different native language. We analyzed gestures-speech composites in eight videotaped interactions between physicians and patients during treatment plan discussions. Using microanalysis of face-to-face dialogue and conversation analysis, we identified physicians' gestures, decided whether they served semantic functions, and explored their relationship with the accompanying speech. Using the operational definitions developed here resulted in high reliability. Physicians gestured at a mean rate of 6.5 gestures per 100 words. Approximately half of the gestures served semantic functions, with referents that were concrete (e.g., actions, body parts) and abstract (e.g., regularity, timelines). Gestures conveyed topic information, but speech conveyed information about that topic and context for interpreting gestures' meaning. Analyzing the semantic functions of gestures in clinical interactions is feasible. Physicians' gestures and speech formed integrated messages; the two modalities conveyed mutually dependent meanings. Physicians could become aware of the semiotic potential of gestures. However, conversational gestures lack conventional meanings and rely on the accompanying speech to provide necessary context for interpreting their meaning. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  18. How do gestures influence thinking and speaking? The gesture-for-conceptualization hypothesis.

    Science.gov (United States)

    Kita, Sotaro; Alibali, Martha W; Chu, Mingyuan

    2017-04-01

    People spontaneously produce gestures during speaking and thinking. The authors focus here on gestures that depict or indicate information related to the contents of concurrent speech or thought (i.e., representational gestures). Previous research indicates that such gestures have not only communicative functions, but also self-oriented cognitive functions. In this article, the authors propose a new theoretical framework, the gesture-for-conceptualization hypothesis, which explains the self-oriented functions of representational gestures. According to this framework, representational gestures affect cognitive processes in 4 main ways: gestures activate, manipulate, package, and explore spatio-motoric information for speaking and thinking. These four functions are shaped by gesture's ability to schematize information, that is, to focus on a small subset of available information that is potentially relevant to the task at hand. The framework is based on the assumption that gestures are generated from the same system that generates practical actions, such as object manipulation; however, gestures are distinct from practical actions in that they represent information. The framework provides a novel, parsimonious, and comprehensive account of the self-oriented functions of gestures. The authors discuss how the framework accounts for gestures that depict abstract or metaphoric content, and they consider implications for the relations between self-oriented and communicative functions of gestures. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  19. Silent gestures speak in aphasia

    NARCIS (Netherlands)

    van Nispen, Karin; van de Sandt-Koenderman, M.; Krahmer, Emiel

    2017-01-01

    Background & Aim As the result of brain damage, people with aphasia (PWA) have language difficulties (Goodglass, 1993). Consequently, their communication can be greatly affected. This raises the question of whether gestures could convey information missing in their speech. Although it is known that

  20. Computational invariant theory

    CERN Document Server

    Derksen, Harm

    2015-01-01

    This book is about the computational aspects of invariant theory. Of central interest is the question how the invariant ring of a given group action can be calculated. Algorithms for this purpose form the main pillars around which the book is built. There are two introductory chapters, one on Gröbner basis methods and one on the basic concepts of invariant theory, which prepare the ground for the algorithms. Then algorithms for computing invariants of finite and reductive groups are discussed. Particular emphasis lies on interrelations between structural properties of invariant rings and computational methods. Finally, the book contains a chapter on applications of invariant theory, covering fields as disparate as graph theory, coding theory, dynamical systems, and computer vision. The book is intended for postgraduate students as well as researchers in geometry, computer algebra, and, of course, invariant theory. The text is enriched with numerous explicit examples which illustrate the theory and should be ...

  1. Perceived communicative intent in gesture and language modulates the superior temporal sulcus.

    Science.gov (United States)

    Redcay, Elizabeth; Velnoskey, Kayla R; Rowe, Meredith L

    2016-10-01

    Behavioral evidence and theory suggest gesture and language processing may be part of a shared cognitive system for communication. While much research demonstrates both gesture and language recruit regions along perisylvian cortex, relatively less work has tested functional segregation within these regions on an individual level. Additionally, while most work has focused on a shared semantic network, less has examined shared regions for processing communicative intent. To address these questions, functional and structural MRI data were collected from 24 adult participants while viewing videos of an experimenter producing communicative, Participant-Directed Gestures (PDG) (e.g., "Hello, come here"), noncommunicative Self-adaptor Gestures (SG) (e.g., smoothing hair), and three written text conditions: (1) Participant-Directed Sentences (PDS), matched in content to PDG, (2) Third-person Sentences (3PS), describing a character's actions from a third-person perspective, and (3) meaningless sentences, Jabberwocky (JW). Surface-based conjunction and individual functional region of interest analyses identified shared neural activation between gesture (PDGvsSG) and language processing using two different language contrasts. Conjunction analyses of gesture (PDGvsSG) and Third-person Sentences versus Jabberwocky revealed overlap within left anterior and posterior superior temporal sulcus (STS). Conjunction analyses of gesture and Participant-Directed Sentences to Third-person Sentences revealed regions sensitive to communicative intent, including the left middle and posterior STS and left inferior frontal gyrus. Further, parametric modulation using participants' ratings of stimuli revealed sensitivity of left posterior STS to individual perceptions of communicative intent in gesture. These data highlight an important role of the STS in processing participant-directed communicative intent through gesture and language. Hum Brain Mapp 37:3444-3461, 2016. © 2016 Wiley

  2. Hand Matters: Left-Hand Gestures Enhance Metaphor Explanation

    Science.gov (United States)

    Argyriou, Paraskevi; Mohr, Christine; Kita, Sotaro

    2017-01-01

    Research suggests that speech-accompanying gestures influence cognitive processes, but it is not clear whether the gestural benefit is specific to the gesturing hand. Two experiments tested the "(right/left) hand-specificity" hypothesis for self-oriented functions of gestures: gestures with a particular hand enhance cognitive processes…

  3. Learning from Gesture: How Our Hands Change Our Minds

    Science.gov (United States)

    Novack, Miriam; Goldin-Meadow, Susan

    2015-01-01

    When people talk, they gesture, and those gestures often reveal information that cannot be found in speech. Learners are no exception. A learner's gestures can index moments of conceptual instability, and teachers can make use of those gestures to gain access into a student's thinking. Learners can also discover novel ideas from the gestures they…

  4. How early do children understand gesture-speech combinations with iconic gestures?

    Science.gov (United States)

    Stanfield, Carmen; Williamson, Rebecca; Ozçalişkan, Seyda

    2014-03-01

    Children understand gesture+speech combinations in which a deictic gesture adds new information to the accompanying speech by age 1;6 (Morford & Goldin-Meadow, 1992; 'push'+point at ball). This study explores how early children understand gesture+speech combinations in which an iconic gesture conveys additional information not found in the accompanying speech (e.g., 'read'+BOOK gesture). Our analysis of two- to four-year-old children's responses in a gesture+speech comprehension task showed that children grasp the meaning of iconic co-speech gestures by age three and continue to improve their understanding with age. Overall, our study highlights the important role gesture plays in language comprehension as children learn to unpack increasingly complex communications addressed to them at the early ages.

  5. Aspects of the Multiple Musical Gestures

    DEFF Research Database (Denmark)

    Jensen, Karl Kristoffer

    2006-01-01

    A simple to use pointer interface in 2D for producing music is presented as a means for real-time playing and sound generation. The music is produced by simple gestures that are repeated easily. The gestures include left-to-right and right-to-left motion shapes for spectral envelope and temporal...... envelope of the sounds, with optional backwards motion for the addition of noise; downward motion for note onset and several other manipulation gestures. The initial position controls which parameter is being affected, the notes intensity is controlled by the downward gesture speed, and a sequence...... is finalized instantly with one upward gesture. Several synthesis methods are presented and the control mechanisms are mapped into the multiple musical gesture interface. This enables a number of performers to interact on the same interface, either by each playing the same musical instruments simultaneously...

  6. Gestures in an Intelligent User Interface

    Science.gov (United States)

    Fikkert, Wim; van der Vet, Paul; Nijholt, Anton

    In this chapter we investigated which hand gestures are intuitive for controlling a large display multimedia interface from a user's perspective. Over the course of two sequential user evaluations, we defined a simple gesture set that allows users to fully and intuitively control a large display multimedia interface. First, we evaluated numerous gesture possibilities for a set of commands that can be issued to the interface. These gestures were selected from literature, science fiction movies, and a previous exploratory study. Second, we implemented a working prototype with which the users could interact with both hands and the preferred hand gestures with 2D and 3D visualizations of biochemical structures. We found that the gestures are influenced to a significant extent by the fast-paced developments in multimedia interfaces such as the Apple iPhone and the Nintendo Wii, and to no lesser degree by decades of experience with the more traditional WIMP-based interfaces.

  7. Gesture analysis for physics education researchers

    Directory of Open Access Journals (Sweden)

    Rachel E. Scherr

    2008-01-01

    Full Text Available Systematic observations of student gestures can not only fill in gaps in students’ verbal expressions, but can also offer valuable information about student ideas, including their source, their novelty to the speaker, and their construction in real time. This paper provides a review of the research in gesture analysis that is most relevant to physics education researchers and illustrates gesture analysis for the purpose of better understanding student thinking about physics.

  8. Hand gestures mouse cursor control

    Directory of Open Access Journals (Sweden)

    Marian-Avram Vincze

    2014-05-01

    Full Text Available The paper describes the implementation of a human-computer interface for controlling the mouse cursor. The tests reveal that a low-cost web camera and some processing algorithms are quite enough to control the mouse cursor on a computer. Even though the system is influenced by the illuminance level on the plane of the hand, the current study may represent a starting point for further studies in the hand tracking and gesture recognition field.

  9. A labial gesture for /l/

    Science.gov (United States)

    Campbell, Fiona; Gick, Bryan

    2003-04-01

    Both in language change and in substitutions during language acquisition and disordered speech, /l/ has often been observed to alternate with labial sounds such as [w] or rounded vowels, particularly in postvocalic position. While there are many possible explanations for this alternation, including acoustic enhancement and articulator coupling, one possibility that has not been tested is whether normal adult speakers of English actually produce lip rounding for /l/. A study was conducted to test for the presence of a labial gesture in normal productions of /l/. Front and side video data of lip positions were collected from three adult English speakers during productions of /l/ and /d/. Significant differences were found for all subjects in lip protrusion (upper and lower) and/or lip aperture (horizontal and vertical) in post-vocalic allophones, as well as between the pre- and post-vocalic allophones of /l/. No significant differences were observed in comparisons of pre-vocalic /l/ and /d/. Results suggest that there is in fact a labial gesture in the post-vocalic allophone of /l/, but not in the pre-vocalic allophone. These findings are consistent with a notion of gestural simplification as a possible explanation for substitutions and in language change. [Research supported by NSERC.]

  10. Gesture and Speech in Interaction - 4th edition (GESPIN 4)

    OpenAIRE

    Ferré, Gaëlle; Mark, Tutton

    2015-01-01

    International audience; The fourth edition of Gesture and Speech in Interaction (GESPIN) was held in Nantes, France. With more than 40 papers, these proceedings show just what a flourishing field of enquiry gesture studies continues to be. The keynote speeches of the conference addressed three different aspects of multimodal interaction: gesture and grammar, gesture acquisition, and gesture and social interaction. In a talk entitled Qualities of event construal in speech and gesture: Aspect and...

  11. Research on Interaction-oriented Gesture Recognition

    Directory of Open Access Journals (Sweden)

    Lu Huang

    2014-01-01

    Full Text Available This thesis designs a series of gesture interactions with the features of natural human-machine interaction, and utilizes 3D acceleration sensors as interactive input. It then builds a Discrete Hidden Markov Model for gesture recognition, introducing a collection proposal for gesture interaction based on the acceleration sensors and pre-processing the gesture acceleration signals obtained in the collection. Finally, the thesis shows through experiments that the proposed design is workable and effective.
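    An illustrative sketch of classifying an accelerometer gesture with discrete HMMs, the technique named in the record above: acceleration samples are first quantized into discrete symbols, then each candidate gesture is scored with the forward algorithm and the highest-likelihood model wins. The toy models, state counts, and parameter values below are assumptions for illustration only.

```python
def forward_likelihood(obs, start, trans, emit):
    """P(obs | model) via the forward algorithm for a discrete HMM.
    `obs` is a sequence of quantized symbols; `start`, `trans`, and
    `emit` are the initial, transition, and emission probabilities."""
    n = len(start)
    alpha = [start[s] * emit[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [
            sum(alpha[i] * trans[i][j] for i in range(n)) * emit[j][o]
            for j in range(n)
        ]
    return sum(alpha)

def classify(obs, models):
    """Return the name of the gesture model with the highest likelihood."""
    return max(models, key=lambda name: forward_likelihood(obs, *models[name]))
```

    In practice the likelihoods would be computed in log space (or with per-step scaling) to avoid underflow on long observation sequences.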

  12. Gesture facilitates the syntactic analysis of speech

    Directory of Open Access Journals (Sweden)

    Henning eHolle

    2012-03-01

    Full Text Available Recent research suggests that the brain routinely binds together information from gesture and speech. However, most of this research focused on the integration of representational gestures with the semantic content of speech. Much less is known about how other aspects of gesture, such as emphasis, influence the interpretation of the syntactic relations in a spoken message. Here, we investigated whether beat gestures alter which syntactic structure is assigned to ambiguous spoken German sentences. The P600 component of the Event Related Brain Potential indicated that the more complex syntactic structure is easier to process when the speaker emphasizes the subject of a sentence with a beat. Thus, a simple flick of the hand can change our interpretation of who has been doing what to whom in a spoken sentence. We conclude that gestures and speech are an integrated system. Unlike previous studies, which have shown that the brain effortlessly integrates semantic information from gesture and speech, our study is the first to demonstrate that this integration also occurs for syntactic information. Moreover, the effect appears to be gesture-specific and was not found for other stimuli that draw attention to certain parts of speech, including prosodic emphasis, or a moving visual stimulus with the same trajectory as the gesture. This suggests that only visual emphasis produced with a communicative intention in mind (that is, beat gestures) influences language comprehension, but not a simple visual movement lacking such an intention.

  13. Co-speech iconic gestures and visuo-spatial working memory.

    Science.gov (United States)

    Wu, Ying Choon; Coulson, Seana

    2014-11-01

    Three experiments tested the role of verbal versus visuo-spatial working memory in the comprehension of co-speech iconic gestures. In Experiment 1, participants viewed congruent discourse primes in which the speaker's gestures matched the information conveyed by his speech, and incongruent ones in which the semantic content of the speaker's gestures diverged from that in his speech. Discourse primes were followed by picture probes that participants judged as being either related or unrelated to the preceding clip. Performance on this picture probe classification task was faster and more accurate after congruent than incongruent discourse primes. The effect of discourse congruency on response times was linearly related to measures of visuo-spatial, but not verbal, working memory capacity, as participants with greater visuo-spatial WM capacity benefited more from congruent gestures. In Experiments 2 and 3, participants performed the same picture probe classification task under conditions of high and low loads on concurrent visuo-spatial (Experiment 2) and verbal (Experiment 3) memory tasks. Effects of discourse congruency and verbal WM load were additive, while effects of discourse congruency and visuo-spatial WM load were interactive. Results suggest that congruent co-speech gestures facilitate multi-modal language comprehension, and indicate an important role for visuo-spatial WM in these speech-gesture integration processes. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. Talking Hands: Tongue Motor Excitability During Observation of Hand Gestures Associated with Words.

    Directory of Open Access Journals (Sweden)

    Naeem Komeilipoor

    2014-09-01

    Full Text Available Perception of speech and gestures engage common brain areas. Neural regions involved in speech perception overlap with those involved in speech production in an articulator-specific manner. Yet, it is unclear whether motor cortex also has a role in processing communicative actions like gesture and sign language. We asked whether the mere observation of hand gestures, paired and not paired with words, may result in changes in the excitability of the hand and tongue areas of motor cortex. Using single-pulse transcranial magnetic stimulation, we measured the motor excitability in tongue and hand areas of left primary motor cortex, while participants viewed video sequences of bimanual hand movements associated or not-associated with nouns. We found higher motor excitability in the tongue area during the presentation of meaningful gestures (noun-associated) as opposed to meaningless ones, while the excitability of hand motor area was not differentially affected by gesture observation. Our results let us argue that the observation of gestures associated with a word results in activation of articulatory motor network accompanying speech production.

  15. Talking hands: tongue motor excitability during observation of hand gestures associated with words.

    Science.gov (United States)

    Komeilipoor, Naeem; Vicario, Carmelo Mario; Daffertshofer, Andreas; Cesari, Paola

    2014-01-01

    Perception of speech and gestures engage common brain areas. Neural regions involved in speech perception overlap with those involved in speech production in an articulator-specific manner. Yet, it is unclear whether motor cortex also has a role in processing communicative actions like gesture and sign language. We asked whether the mere observation of hand gestures, paired and not paired with words, may result in changes in the excitability of the hand and tongue areas of motor cortex. Using single-pulse transcranial magnetic stimulation (TMS), we measured the motor excitability in tongue and hand areas of left primary motor cortex, while participants viewed video sequences of bimanual hand movements associated or not-associated with nouns. We found higher motor excitability in the tongue area during the presentation of meaningful gestures (noun-associated) as opposed to meaningless ones, while the excitability of hand motor area was not differentially affected by gesture observation. Our results let us argue that the observation of gestures associated with a word results in activation of articulatory motor network accompanying speech production.

  16. Hearing and seeing meaning in speech and gesture: insights from brain and behaviour

    Science.gov (United States)

    Özyürek, Aslı

    2014-01-01

    As we speak, we use not only the arbitrary form–meaning mappings of the speech channel but also motivated form–meaning correspondences, i.e. iconic gestures that accompany speech (e.g. inverted V-shaped hand wiggling across gesture space to demonstrate walking). This article reviews what we know about processing of semantic information from speech and iconic gestures in spoken languages during comprehension of such composite utterances. Several studies have shown that comprehension of iconic gestures involves brain activations known to be involved in semantic processing of speech: i.e. modulation of the electrophysiological recording component N400, which is sensitive to the ease of semantic integration of a word to previous context, and recruitment of the left-lateralized frontal–posterior temporal network (left inferior frontal gyrus (IFG), middle temporal gyrus (MTG) and superior temporal gyrus/sulcus (STG/S)). Furthermore, integrating the information from both channels recruits brain areas such as left IFG, posterior superior temporal sulcus (STS)/MTG and even motor cortex. Finally, this integration is flexible: the temporal synchrony between the iconic gesture and the speech segment, as well as the perceived communicative intent of the speaker, modulate the integration process. Whether these findings are special to gestures or are shared with actions or other visual accompaniments to speech (e.g. lips) or other visual symbols such as pictures is discussed, as well as the implications for a multimodal view of language. PMID:25092664

  17. A speaker’s gesture style can affect language comprehension: ERP evidence from gesture-speech integration

    OpenAIRE

    Obermeier, Christian; Kelly, Spencer D.; Gunter, Thomas C.

    2015-01-01

    In face-to-face communication, speech is typically enriched by gestures. Clearly, not all people gesture in the same way, and the present study explores whether such individual differences in gesture style are taken into account during the perception of gestures that accompany speech. Participants were presented with one speaker that gestured in a straightforward way and another that also produced self-touch movements. Adding trials with such grooming movements makes the gesture information a...

  18. Hand Gesture Recognition Using Ultrasonic Waves

    KAUST Repository

    AlSharif, Mohammed Hussain

    2016-04-01

    Gesturing is a natural way of communication between people and is used in our everyday conversations. Hand gesture recognition systems are used in many applications across a wide variety of fields, such as mobile phone applications, smart TVs, and video gaming. With the advances in human-computer interaction technology, gesture recognition is becoming an active research area. There are two types of devices for detecting gestures: contact-based devices and contactless devices. Using ultrasonic waves to detect gestures is one approach employed in contactless devices, and hand gesture recognition utilizing ultrasonic waves is the focus of this thesis work. This thesis presents a new method for detecting and classifying a predefined set of hand gestures using a single ultrasonic transmitter and a single ultrasonic receiver. The method uses a linear frequency-modulated ultrasonic signal designed to meet the project requirements, such as the update rate and the range of detection, while working within hardware limitations such as limited output power and limited transmitter and receiver bandwidth. The method can be adapted to other hardware setups. Gestures are identified based on two main features: the range estimate of the moving hand and the received signal strength (RSS). These two features are estimated using two simple methods: the channel impulse response (CIR) and the cross-correlation (CC) of the ultrasonic signal reflected from the gesturing hand. A customized, simple hardware setup was used to classify a set of hand gestures with high accuracy. Detection and classification were done using methods of low computational cost, which gives the proposed method great potential for implementation in many devices, including laptops and mobile phones. The predefined set of gestures can be used for many control applications.
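
    The range-estimation step (cross-correlate the echo with the transmitted linear-FM chirp and convert the peak lag to a distance) can be sketched as follows. The chirp parameters and sampling rate are illustrative assumptions, not the hardware values used in the thesis:

```python
import math


def lfm_chirp(f0, f1, duration, fs):
    """Linear frequency-modulated chirp sweeping f0 -> f1 Hz over `duration` s."""
    n = int(duration * fs)
    k = (f1 - f0) / duration  # sweep rate in Hz/s
    return [math.sin(2 * math.pi * (f0 * t + 0.5 * k * t * t))
            for t in (i / fs for i in range(n))]


def cross_correlate(rx, template):
    """Cross-correlation of the received signal with the transmitted template
    (valid lags only, i.e. the template lies fully inside rx)."""
    n, m = len(rx), len(template)
    return [sum(rx[lag + j] * template[j] for j in range(m))
            for lag in range(n - m + 1)]


def estimate_range(rx, template, fs, speed_of_sound=343.0):
    """Distance (m) to the reflector, from the strongest correlation peak."""
    corr = cross_correlate(rx, template)
    lag = max(range(len(corr)), key=corr.__getitem__)
    delay = lag / fs                      # round-trip time of flight, seconds
    return delay * speed_of_sound / 2.0   # one-way distance
```

    An RSS-like feature can be read off the height of the same correlation peak, so both features come from a single pass over the echo.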

  19. Type of gesture, valence, and gaze modulate the influence of gestures on observer's behaviors.

    Science.gov (United States)

    De Stefani, Elisa; Innocenti, Alessandro; Secchi, Claudio; Papa, Veronica; Gentilucci, Maurizio

    2013-01-01

    The present kinematic study aimed at determining whether the observation of arm/hand gestures performed by conspecifics affected an action apparently unrelated to the gesture (i.e., reaching-grasping). In 3 experiments we examined the influence of different gestures on action kinematics. We also analyzed the effects of words corresponding in meaning to the gestures on the same action. In Experiment 1, the type of gesture, valence, and actor's gaze were the investigated variables. Participants executed the action of reaching-grasping after discriminating whether the gestures produced by a conspecific were meaningful or not. The meaningful gestures were request or symbolic and their valence was positive or negative. They were presented by the conspecific either blindfolded or not. In the control Experiment 2 we searched for effects of gaze alone and, in Experiment 3, for effects of the same characteristics of words corresponding in meaning to the gestures and visually presented by the conspecific. Type of gesture, valence, and gaze influenced the actual action kinematics; these effects were similar, but not the same as those induced by words. We proposed that the signal activated a response which made the actual action faster for gestures of negative valence, whereas for request signals and available gaze the response interfered with the actual action more than for symbolic signals and unavailable gaze. Finally, we proposed the existence of a common circuit involved in the comprehension of gestures and words and in the activation of consequent responses to them.

  20. Neural interaction of speech and gesture: differential activations of metaphoric co-verbal gestures.

    Science.gov (United States)

    Kircher, Tilo; Straube, Benjamin; Leube, Dirk; Weis, Susanne; Sachs, Olga; Willmes, Klaus; Konrad, Kerstin; Green, Antonia

    2009-01-01

    Gestures are an important part of human communication. However, little is known about the neural correlates of gestures accompanying speech comprehension. The goal of this study is to investigate the neural basis of speech-gesture interaction as reflected in activation increase and decrease during observation of natural communication. Fourteen German participants watched video clips of 5 s duration depicting an actor who performed metaphoric gestures to illustrate the abstract content of spoken sentences. Furthermore, video clips of isolated gestures (without speech), isolated spoken sentences (without gestures) and gestures in the context of an unknown language (Russian) were additionally presented while functional magnetic resonance imaging (fMRI) data were acquired. Bimodal speech and gesture processing led to left hemispheric activation increases of the posterior middle temporal gyrus, the premotor cortex, the inferior frontal gyrus, and the right superior temporal sulcus. Activation reductions during the bimodal condition were located in the left superior temporal gyrus and the left posterior insula. Gesture related activation increases and decreases were dependent on language semantics and were not found in the unknown-language condition. Our results suggest that semantic integration processes for bimodal speech plus gesture comprehension are reflected in activation increases in the classical left hemispheric language areas. Speech related gestures seem to enhance language comprehension during the face-to-face communication.

  1. Geometry and Gesture-Based Features from Saccadic Eye-Movement as a Biometric in Radiology

    Energy Technology Data Exchange (ETDEWEB)

    Hammond, Tracy [Texas A&M University, College Station]; Tourassi, Georgia [ORNL]; Yoon, Hong-Jun [ORNL]; Alamudun, Folami T. [ORNL]

    2017-07-01

    In this study, we present a novel application of sketch gesture recognition on eye-movement for biometric identification and estimating task expertise. The study was performed for the task of mammographic screening with simultaneous viewing of four coordinated breast views, as typically done in clinical practice. Eye-tracking data and diagnostic decisions collected for 100 mammographic cases (25 normal, 25 benign, 50 malignant) and 10 readers (three board-certified radiologists and seven radiology residents) formed the corpus for this study. Sketch gesture recognition techniques were employed to extract geometric and gesture-based features from saccadic eye-movements. Our results show that saccadic eye-movement, characterized using sketch-based features, results in more accurate models for predicting individual identity and level of expertise than more traditional eye-tracking features.
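
    As a rough illustration of what geometric features from saccadic eye-movement might look like, the sketch below derives an amplitude and a direction for each saccade from successive fixation coordinates. The study's actual sketch-based feature set is not specified here, so this is a hypothetical example:

```python
import math


def saccade_features(gaze_points):
    """Amplitude and direction (degrees) of each saccade in a gaze path.

    `gaze_points` is a list of (x, y) fixation coordinates in screen units;
    each consecutive pair of fixations is treated as one saccade.
    """
    features = []
    for (x0, y0), (x1, y1) in zip(gaze_points, gaze_points[1:]):
        amplitude = math.hypot(x1 - x0, y1 - y0)                # saccade length
        direction = math.degrees(math.atan2(y1 - y0, x1 - x0))  # saccade angle
        features.append((amplitude, direction))
    return features
```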

  2. Enhancing Communication through Gesture and Naming Therapy

    Science.gov (United States)

    Caute, Anna; Pring, Tim; Cocks, Naomi; Cruice, Madeline; Best, Wendy; Marshall, Jane

    2013-01-01

    Purpose: In this study, the authors investigated whether gesture, naming, and strategic treatment improved the communication skills of 14 people with severe aphasia. Method: All participants received 15 hr of gesture and naming treatment (reported in a companion article [Marshall et al., 2012]). Half the group received a further 15 hr of strategic…

  3. Observation of static gestures influences speech production.

    Science.gov (United States)

    Jarick, Michelle; Jones, Jeffery A

    2008-08-01

    Research investigating 'mirror neurons' has demonstrated the presence of an observation-execution matching system in humans. One hypothesized role for this system might be to aid in action understanding by encoding the underlying intentions of the actor. To investigate this hypothesis, we asked participants to observe photographs of an actor making orofacial gestures (implying verbal or non-verbal acts), and to produce syllables that were compatible or incompatible with the gesture they observed. We predicted that if mirror neurons encode the intentions of an actor, then the pictures implying verbal gestures would affect speech production, whereas the non-verbal gestures would not. Our results showed that the observation of compatible verbal gestures facilitated verbal responses, while incompatible verbal gestures caused interference. Although this compatibility effect did not reach statistical significance when the photographs implied a non-verbal act, responses were faster on average when the gesture implied the use of similar articulators as those involved with the production of the target syllable. Altogether, these behavioral findings complement previous neuroimaging studies indicating that static pictures portraying gestures activate brain regions associated with an observation-execution matching system.

  4. Enhancing Gesture Quality in Young Singers

    Science.gov (United States)

    Liao, Mei-Ying; Davidson, Jane W.

    2016-01-01

    Studies have shown positive results for the use of gesture as a successful technique in aiding children's singing. The main purpose of this study was to examine the effects of movement training for children with regard to enhancing gesture quality. Thirty-six fifth-grade students participated in the empirical investigation. They were randomly…

  5. Gesture in a Kindergarten Mathematics Classroom

    Science.gov (United States)

    Elia, Iliada; Evangelou, Kyriacoulla

    2014-01-01

    Recent studies have advocated that mathematical meaning is mediated by gestures. This case study explores the gestures kindergarten children produce when learning spatial concepts in a mathematics classroom setting. Based on a video study of a mathematical lesson in a kindergarten class, we concentrated on the verbal and non-verbal behavior of one…

  6. Gestures in an Intelligent User Interface

    NARCIS (Netherlands)

    Fikkert, F.W.; van der Vet, P.E.; Nijholt, Antinus; Shao, Ling; Shan, Caifeng; Luo, Jiebo; Etoh, Minoru

    2010-01-01

    In this chapter we investigated which hand gestures are intuitive to control a large display multimedia interface from a user’s perspective. Over the course of two sequential user evaluations we defined a simple gesture set that allows users to fully control a large display multimedia interface,

  7. Nonsymbolic Gestural Interaction for Ambient Intelligence

    DEFF Research Database (Denmark)

    Rehm, Matthias

    2010-01-01

    the addressee with subtle clues about personality or cultural background. Gestures are an extremely rich source of communication-specific and contextual information for interactions in ambient intelligence environments. This chapter reviews the semantic layers of gestural interaction, focusing on the layer

  8. Integration of speech and gesture in aphasia.

    Science.gov (United States)

    Cocks, Naomi; Byrne, Suzanne; Pritchard, Madeleine; Morgan, Gary; Dipper, Lucy

    2018-02-07

    Information from speech and gesture is often integrated to comprehend a message. This integration process requires the appropriate allocation of cognitive resources to both the gesture and speech modalities. People with aphasia are likely to find integration of gesture and speech difficult. This is due to a reduction in cognitive resources, a difficulty with resource allocation or a combination of the two. Despite it being likely that people who have aphasia will have difficulty with integration, empirical evidence describing this difficulty is limited. Such a difficulty was found in a single case study by Cocks et al. in 2009, and is replicated here with a greater number of participants. To determine whether individuals with aphasia have difficulties understanding messages in which they have to integrate speech and gesture. Thirty-one participants with aphasia (PWA) and 30 control participants watched videos of an actor communicating a message in three different conditions: verbal only, gesture only, and verbal and gesture message combined. The message related to an action in which the name of the action (e.g., 'eat') was provided verbally and the manner of the action (e.g., hands in a position as though eating a burger) was provided gesturally. Participants then selected a picture that 'best matched' the message conveyed from a choice of four pictures which represented a gesture match only (G match), a verbal match only (V match), an integrated verbal-gesture match (Target) and an unrelated foil (UR). To determine the gain that participants obtained from integrating gesture and speech, a measure of multimodal gain (MMG) was calculated. The PWA were less able to integrate gesture and speech than the control participants and had significantly lower MMG scores. When the PWA had difficulty integrating, they more frequently selected the verbal match. The findings suggest that people with aphasia can have difficulty integrating speech and gesture in order to obtain

  9. Comparing invariants of SK1

    OpenAIRE

    Wouters, Tim

    2010-01-01

    In this text, we compare several invariants of the reduced Whitehead group SK1 of a central simple algebra. For biquaternion algebras, we compare a generalised invariant of Suslin as constructed by the author in a previous article to an invariant introduced by Knus-Merkurjev-Rost-Tignol. Using explicit computations, we prove these invariants are essentially the same. We also prove the non-triviality of an invariant introduced by Kahn. To obtain this result, we compare Kahn's invariant to an i...

  10. Invariant sets for Windows

    CERN Document Server

    Morozov, Albert D; Dragunov, Timothy N; Malysheva, Olga V

    1999-01-01

    This book deals with the visualization and exploration of invariant sets (fractals, strange attractors, resonance structures, patterns etc.) for various kinds of nonlinear dynamical systems. The authors have created a special Windows 95 application called WInSet, which allows one to visualize the invariant sets. A WInSet installation disk is enclosed with the book.The book consists of two parts. Part I contains a description of WInSet and a list of the built-in invariant sets which can be plotted using the program. This part is intended for a wide audience with interests ranging from dynamical

  11. [Assessment of gestures and their psychiatric relevance].

    Science.gov (United States)

    Bulucz, Judit; Simon, Lajos

    2008-01-01

    Analyzing and investigating non-verbal behavior and gestures has been receiving much attention since the last century. Thanks to the pioneering work of Ekman and Friesen, we have a number of descriptive-analytic, categorizing, and semantic-content-related scales and scoring systems. The generation of gestures, their integration with speech, and inter-cultural differences are the focus of interest. Furthermore, analysis of the gestural changes caused by lesions of distinct neurological areas points toward the formation of new diagnostic approaches. The more widespread application of computerized methods has resulted in an increasing number of experiments studying gesture generation and reproduction in mechanical and virtual reality. Increasing efforts are directed towards the understanding of human and computerized recognition of human gestures. In this review we describe the results, emphasizing their relations with psychiatric and neuropsychiatric disorders, specifically schizophrenia and the affective spectrum.

  12. Gesture-Based Robot Control with Variable Autonomy from the JPL Biosleeve

    Science.gov (United States)

    Wolf, Michael T.; Assad, Christopher; Vernacchia, Matthew T.; Fromm, Joshua; Jethani, Henna L.

    2013-01-01

    This paper presents a new gesture-based human interface for natural robot control. Detailed activity of the user's hand and arm is acquired via a novel device, called the BioSleeve, which packages dry-contact surface electromyography (EMG) and an inertial measurement unit (IMU) into a sleeve worn on the forearm. The BioSleeve's accompanying algorithms can reliably decode as many as sixteen discrete hand gestures and estimate the continuous orientation of the forearm. These gestures and positions are mapped to robot commands that, to varying degrees, integrate with the robot's perception of its environment and its ability to complete tasks autonomously. This flexible approach enables, for example, supervisory point-to-goal commands, virtual joystick for guarded teleoperation, and high degree of freedom mimicked manipulation, all from a single device. The BioSleeve is meant for portable field use; unlike other gesture recognition systems, use of the BioSleeve for robot control is invariant to lighting conditions, occlusions, and the human-robot spatial relationship and does not encumber the user's hands. The BioSleeve control approach has been implemented on three robot types, and we present proof-of-principle demonstrations with mobile ground robots, manipulation robots, and prosthetic hands.

  13. Algorithms in invariant theory

    CERN Document Server

    Sturmfels, Bernd

    2008-01-01

    J. Kung and G.-C. Rota, in their 1984 paper, write: "Like the Arabian phoenix rising out of its ashes, the theory of invariants, pronounced dead at the turn of the century, is once again at the forefront of mathematics". The book of Sturmfels is both an easy-to-read textbook for invariant theory and a challenging research monograph that introduces a new approach to the algorithmic side of invariant theory. The Groebner bases method is the main tool by which the central problems in invariant theory become amenable to algorithmic solutions. Students will find the book an easy introduction to this "classical and new" area of mathematics. Researchers in mathematics, symbolic computation, and computer science will get access to a wealth of research ideas, hints for applications, outlines and details of algorithms, worked out examples, and research problems.

  14. Relativistic gauge invariant potentials

    Energy Technology Data Exchange (ETDEWEB)

    Gonzalez, J.J. (Valladolid Univ. (Spain). Dept. de Fisica Teorica); Negro, J. (Valladolid Univ. (Spain). Dept. de Fisica Teorica); Olmo, M.A. del (Valladolid Univ. (Spain). Dept. de Fisica Teorica)

    1995-01-01

    A global method characterizing the invariant connections on an abelian principal bundle under a group of transformations is applied in order to obtain gauge invariant electromagnetic (elm.) potentials in a systematic way. We have thus classified all the elm. gauge invariant potentials under the Poincare subgroups of dimensions 4, 5, and 6, up to conjugation. Particular attention is paid to the situation where these subgroups do not act transitively on the space-time manifold. We have used the same procedure for some Galilean subgroups to obtain nonrelativistic potentials and study the way they are related to their relativistic partners by means of contractions. Some conformal gauge invariant potentials have also been derived and considered as a consequence of an enlargement of the Poincare symmetries. (orig.)

  15. The Different Benefits from Different Gestures in Understanding a Concept

    Science.gov (United States)

    Kang, Seokmin; Hallman, Gregory L.; Son, Lisa K.; Black, John B.

    2013-01-01

    Explanations are typically accompanied by hand gestures. While research has shown that gestures can help learners understand a particular concept, different learning effects in different types of gesture have been less understood. To address the issues above, the current study focused on whether different types of gestures lead to different levels…

  16. Gesturing by Speakers with Aphasia: How Does It Compare?

    Science.gov (United States)

    Mol, Lisette; Krahmer, Emiel; van de Sandt-Koenderman, Mieke

    2013-01-01

    Purpose: To study the independence of gesture and verbal language production. The authors assessed whether gesture can be semantically compensatory in cases of verbal language impairment and whether speakers with aphasia and control participants use similar depiction techniques in gesture. Method: The informativeness of gesture was assessed in 3…

  17. Gliding and Saccadic Gaze Gesture Recognition in Real Time

    DEFF Research Database (Denmark)

    Rozado, David; San Agustin, Javier; Rodriguez, Francisco

    2012-01-01

    to discriminate intentional gaze gestures from typical gaze activity performed during standard interaction with electronic devices. In this work, through a set of experiments and user studies, we evaluate the performance of two different gaze gesture modalities, gliding gaze gestures and saccadic gaze gestures

  18. HSV Brightness Factor Matching for Gesture Recognition System

    OpenAIRE

    Mokhtar M. Hasan; Pramod K. Mishra

    2010-01-01

    The main goal of gesture recognition research is to establish a system which can identify specific human gestures and use these identified gestures to convey commands to the machine. In this paper, we introduce a new method for gesture recognition that is based on computing the local brightness of each block of the gesture image: the gesture image is divided into 25x25 blocks, each of 5x5 block size, and we calculate the local brightness of each block, so each gesture produces 25x25 features va
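
    The block-brightness feature extraction described above (25x25 blocks of 5x5 pixels, which implies a 125x125 gesture image) can be sketched as follows, assuming the brightness plane (e.g. the V channel of HSV) is given as a 2-D list:

```python
def block_brightness_features(image, blocks=25, block_size=5):
    """Mean brightness of each block of a square grayscale image.

    With the defaults this expects a 125x125 image (25x25 blocks of 5x5
    pixels) and returns the 625 block means in row-major order.
    """
    side = blocks * block_size
    assert len(image) == side and all(len(row) == side for row in image)
    features = []
    for by in range(blocks):
        for bx in range(blocks):
            total = 0.0
            for y in range(by * block_size, (by + 1) * block_size):
                for x in range(bx * block_size, (bx + 1) * block_size):
                    total += image[y][x]
            features.append(total / (block_size * block_size))
    return features
```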

  19. Comprehensibility and neural substrate of communicative gestures in severe aphasia.

    Science.gov (United States)

    Hogrefe, Katharina; Ziegler, Wolfram; Weidinger, Nicole; Goldenberg, Georg

    2017-08-01

    Communicative gestures can compensate incomprehensibility of oral speech in severe aphasia, but the brain damage that causes aphasia may also have an impact on the production of gestures. We compared the comprehensibility of gestural communication of persons with severe aphasia and non-aphasic persons and used voxel based lesion symptom mapping (VLSM) to determine lesion sites that are responsible for poor gestural expression in aphasia. On group level, persons with aphasia conveyed more information via gestures than controls indicating a compensatory use of gestures in persons with severe aphasia. However, individual analysis showed a broad range of gestural comprehensibility. VLSM suggested that poor gestural expression was associated with lesions in anterior temporal and inferior frontal regions. We hypothesize that likely functional correlates of these localizations are selection of and flexible changes between communication channels as well as between different types of gestures and between features of actions and objects that are expressed by gestures. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. Spatiotemporal dynamics of early cortical gesture processing.

    Science.gov (United States)

    Möhring, Nicole; Shen, Christina; Neuhaus, Andres H

    2014-10-01

    Gesture processing has been consistently shown to be associated with activation of the inferior parietal lobe (IPL); however, little is known about the integration of IPL activation into the temporal dynamics of early sensory areas. Using a temporally graded repetition suppression paradigm, we examined the activation and time course of brain areas involved in hand gesture processing. We recorded event-related potentials in response to stimulus pairs of static hand images forming gestures of the popular rock-paper-scissors game and estimated their neuronal generators. We identified two main components associated with adaptive patterns related to stimulus repetition. The N190 component elicited at temporo-parietal sites adapted to repetitions of the same gesture and was associated with right-hemispheric extrastriate body area activation. A later component at parieto-occipital sites demonstrated temporally graded adaptation effects for all gestures with a left-hemispheric dominance. Source localization revealed concurrent activations of the right extrastriate body area, fusiform gyri bilaterally, and the left IPL at about 250 ms. The adaptation pattern derived from the graded repetition suppression paradigm demonstrates the functional sensitivity of these sources to gesture processing. Given the literature on IPL contribution to imitation, action recognition, and action execution, IPL activation at about 250 ms may represent the access into specific cognitive routes for gesture processing and may thus be involved in integrating sensory information from cortical body areas into subsequent visuo-motor transformation processes. Copyright © 2014 Elsevier Inc. All rights reserved.

  1. Binary optical filters for scale invariant pattern recognition

    Science.gov (United States)

    Reid, Max B.; Downie, John D.; Hine, Butler P.

    1992-01-01

    Binary synthetic discriminant function (BSDF) optical filters which are invariant to scale changes in the target object of more than 50 percent are demonstrated in simulation and experiment. Efficient databases of scale invariant BSDF filters can be designed which discriminate between two very similar objects at any view scaled over a factor of 2 or more. The BSDF technique has considerable advantages over other methods for achieving scale invariant object recognition, as it also allows determination of the object's scale. In addition to scale, the technique can be used to design recognition systems invariant to other geometric distortions.

  2. Introduction to gesture and SLA: Toward an integrated approach

    OpenAIRE

    Gullberg, Marianne; McCafferty, Stephen

    2008-01-01

    The title of this special issue, Gesture and SLA: Toward an Integrated Approach, stems in large part from the idea known as integrationism, principally set forth by Harris (2003, 2005), which posits that it is time to “demythologize” linguistics, moving away from the “orthodox exponents” that have idealized the notion of language. The integrationist approach intends a view that focuses on communication—that is, language in use, language as a “fact of life” (Harris, 2003, p. 50). Although not ...

  3. Device Control Using Gestures Sensed from EMG

    Science.gov (United States)

    Wheeler, Kevin R.

    2003-01-01

In this paper we present neuro-electric interfaces for virtual device control. The examples presented rely upon sampling electromyogram data from a participant's forearm. This data is then fed into pattern recognition software that has been trained to distinguish gestures from a given gesture set. The pattern recognition software consists of hidden Markov models which are used to recognize the gestures as they are being performed in real-time. Two experiments were conducted to examine the feasibility of this interface technology. The first replicated a virtual joystick interface, and the second replicated a keyboard.
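The recognition scheme described (one hidden Markov model per gesture, classification by maximum likelihood) can be sketched as follows, assuming the EMG stream has been vector-quantized into discrete symbols; the model parameters and gesture names are made-up illustrations, not the authors' trained models.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the forward algorithm and per-step scaling.
    pi: initial state probs, A: state transitions, B: emission probs."""
    alpha = pi * B[:, obs[0]]
    ll = 0.0
    for t in range(1, len(obs)):
        c = alpha.sum()
        ll += np.log(c)
        alpha = (alpha / c) @ A * B[:, obs[t]]   # predict, then weight by emission
    return ll + np.log(alpha.sum())

def classify(obs, models):
    """Pick the gesture whose HMM assigns the sequence the highest likelihood."""
    return max(models, key=lambda g: forward_loglik(obs, *models[g]))
```

In use, one model per gesture would be trained (e.g., by Baum-Welch) on labeled EMG sequences, and incoming frames scored against every model in real time.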

  4. Neural integration of speech and gesture in schizophrenia: evidence for differential processing of metaphoric gestures.

    Science.gov (United States)

    Straube, Benjamin; Green, Antonia; Sass, Katharina; Kirner-Veselinovic, André; Kircher, Tilo

    2013-07-01

    Gestures are an important component of interpersonal communication. Especially, complex multimodal communication is assumed to be disrupted in patients with schizophrenia. In healthy subjects, differential neural integration processes for gestures in the context of concrete [iconic (IC) gestures] and abstract sentence contents [metaphoric (MP) gestures] had been demonstrated. With this study we wanted to investigate neural integration processes for both gesture types in patients with schizophrenia. During functional magnetic resonance imaging-data acquisition, 16 patients with schizophrenia (P) and a healthy control group (C) were shown videos of an actor performing IC and MP gestures and associated sentences. An isolated gesture (G) and isolated sentence condition (S) were included to separate unimodal from bimodal effects at the neural level. During IC conditions (IC > G ∩ IC > S) we found increased activity in the left posterior middle temporal gyrus (pMTG) in both groups. Whereas in the control group the left pMTG and the inferior frontal gyrus (IFG) were activated for the MP conditions (MP > G ∩ MP > S), no significant activation was found for the identical contrast in patients. The interaction of group (P/C) and gesture condition (MP/IC) revealed activation in the bilateral hippocampus, the left middle/superior temporal and IFG. Activation of the pMTG for the IC condition in both groups indicates intact neural integration of IC gestures in schizophrenia. However, failure to activate the left pMTG and IFG for MP co-verbal gestures suggests a disturbed integration of gestures embedded in an abstract sentence context. This study provides new insight into the neural integration of co-verbal gestures in patients with schizophrenia. Copyright © 2012 Wiley Periodicals, Inc.

  5. Doing gesture promotes learning a mental transformation task better than seeing gesture

    OpenAIRE

    Goldin-Meadow, Susan; Levine, Susan C.; Zinchenko, Elena; Yip, Terina KuangYi; Hemani, Naureen; Factor, Laiah

    2012-01-01

    Performing action has been found to have a greater impact on learning than observing action. Here we ask whether a particular type of action—the gestures that accompany talk—affect learning in a comparable way. We gave 158 6-year-old children instruction in a mental transformation task. Half the children were asked to produce a Move gesture relevant to the task; half were asked to produce a Point gesture. The children also observed the experimenter producing either a Move or Point gesture. Ch...

  6. A speaker's gesture style can affect language comprehension: ERP evidence from gesture-speech integration.

    Science.gov (United States)

    Obermeier, Christian; Kelly, Spencer D; Gunter, Thomas C

    2015-09-01

In face-to-face communication, speech is typically enriched by gestures. Clearly, not all people gesture in the same way, and the present study explores whether such individual differences in gesture style are taken into account during the perception of gestures that accompany speech. Participants were presented with one speaker that gestured in a straightforward way and another that also produced self-touch movements. Adding trials with such grooming movements makes the gesture information a much weaker cue compared with the gestures of the non-grooming speaker. The Electroencephalogram was recorded as participants watched videos of the individual speakers. Event-related potentials elicited by the speech signal revealed that adding grooming movements attenuated the impact of gesture for this particular speaker. Thus, these data suggest that there is sensitivity to the personal communication style of a speaker and that this sensitivity affects the extent to which gesture and speech are integrated during language comprehension. © The Author (2015). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  7. Blind Speakers Show Language-Specific Patterns in Co-Speech Gesture but Not Silent Gesture.

    Science.gov (United States)

    Özçalışkan, Şeyda; Lucero, Ché; Goldin-Meadow, Susan

    2017-05-08

Sighted speakers of different languages vary systematically in how they package and order components of a motion event in speech. These differences influence how semantic elements are organized in gesture, but only when those gestures are produced with speech (co-speech gesture), not without speech (silent gesture). We ask whether the cross-linguistic similarity in silent gesture is driven by the visuospatial structure of the event. We compared 40 congenitally blind adult native speakers of English or Turkish (20/language) to 80 sighted adult speakers (40/language; half with, half without blindfolds) as they described three-dimensional motion scenes. We found an effect of language on co-speech gesture, but not on silent gesture: blind speakers of both languages organized their silent gestures as sighted speakers do. Humans may have a natural semantic organization that they impose on events when conveying them in gesture without language, an organization that relies on neither visuospatial cues nor language structure. © 2017 Cognitive Science Society, Inc.

  8. Don Quijote, humour and gestures

    Directory of Open Access Journals (Sweden)

    Guillemette Bolens

    2016-11-01

The present article is about a method of analysis of gestures and movements in literature called kinesic analysis. Its purpose is to develop ways of accounting for a type of humour elicited by narrated corporeal dynamics in interaction, as perceived in the cognitive act of reading literature. When centuries and multiple cultural variations separate corporeal experiences, how does literature enable us to grasp the dynamic sensations narrated by authors to the point of making us laugh? Beyond the undeniable historical divide that calls for the greatest caution, are there shared aspects that allow us to understand the type of embodied experience that an author such as Cervantes conveys through the movements of his writing? In a perspective that is both «narratological» and cognitive, this article provides an approach that takes into account the cognitive process called perceptual simulation, paying particular attention to the sensorimotor aspect of tonicity.

  9. Gestural Control Of Wavefield synthesis

    DEFF Research Database (Denmark)

    Grani, Francesco; Di Carlo, Diego; Portillo, Jorge Madrid

    2016-01-01

We present a report covering our preliminary research on the control of spatial sound sources in wavefield synthesis through gesture-based interfaces. After a short general introduction on spatial sound and a few basic concepts of wavefield synthesis, we present a graphical application called spAAce which lets users control real-time movements of sound sources by drawing trajectories on a screen. The first prototype of this application has been developed bound to WFSCollider, an open-source software based on SuperCollider which lets users control wavefield synthesis. The spAAce application has been implemented using Processing, a programming language for sketches and prototypes within the context of visual arts, and communicates with WFSCollider through the Open Sound Control protocol. This application aims to create a new way of interaction for live performance of spatial composition.

  10. A supramodal neural network for speech and gesture semantics: an fMRI study.

    Directory of Open Access Journals (Sweden)

    Benjamin Straube

In a natural setting, speech is often accompanied by gestures. Like language, speech-accompanying iconic gestures convey semantic information to some extent. However, whether comprehension of the information contained in the auditory and visual modalities depends on the same or different brain networks is largely unknown. In this fMRI study, we aimed at identifying the cortical areas engaged in supramodal processing of semantic information. BOLD changes were recorded in 18 healthy right-handed male subjects watching video clips showing an actor who either performed speech (S, acoustic) or gestures (G, visual) in more (+) or less (−) meaningful varieties. In the experimental conditions familiar speech or isolated iconic gestures were presented; during the visual control condition the volunteers watched meaningless gestures (G−), while during the acoustic control condition a foreign language was presented (S−). The conjunction of the visual and acoustic semantic processing revealed activations extending from the left inferior frontal gyrus to the precentral gyrus, and included bilateral posterior temporal regions. We conclude that proclaiming this frontotemporal network the brain's core language system is to take too narrow a view. Our results rather indicate that these regions constitute a supramodal semantic processing network.

  11. A Supramodal Neural Network for Speech and Gesture Semantics: An fMRI Study

    Science.gov (United States)

    Weis, Susanne; Kircher, Tilo

    2012-01-01

In a natural setting, speech is often accompanied by gestures. Like language, speech-accompanying iconic gestures convey semantic information to some extent. However, whether comprehension of the information contained in the auditory and visual modalities depends on the same or different brain networks is largely unknown. In this fMRI study, we aimed at identifying the cortical areas engaged in supramodal processing of semantic information. BOLD changes were recorded in 18 healthy right-handed male subjects watching video clips showing an actor who either performed speech (S, acoustic) or gestures (G, visual) in more (+) or less (−) meaningful varieties. In the experimental conditions familiar speech or isolated iconic gestures were presented; during the visual control condition the volunteers watched meaningless gestures (G−), while during the acoustic control condition a foreign language was presented (S−). The conjunction of the visual and acoustic semantic processing revealed activations extending from the left inferior frontal gyrus to the precentral gyrus, and included bilateral posterior temporal regions. We conclude that proclaiming this frontotemporal network the brain's core language system is to take too narrow a view. Our results rather indicate that these regions constitute a supramodal semantic processing network. PMID:23226488

  12. Automatic imitation of pro- and antisocial gestures: Is implicit social behavior censored?

    Science.gov (United States)

    Cracco, Emiel; Genschow, Oliver; Radkova, Ina; Brass, Marcel

    2018-01-01

    According to social reward theories, automatic imitation can be understood as a means to obtain positive social consequences. In line with this view, it has been shown that automatic imitation is modulated by contextual variables that constrain the positive outcomes of imitation. However, this work has largely neglected that many gestures have an inherent pro- or antisocial meaning. As a result of their meaning, antisocial gestures are considered taboo and should not be used in public. In three experiments, we show that automatic imitation of symbolic gestures is modulated by the social intent of these gestures. Experiment 1 (N=37) revealed reduced automatic imitation of antisocial compared with prosocial gestures. Experiment 2 (N=118) and Experiment 3 (N=118) used a social priming procedure to show that this effect was stronger in a prosocial context than in an antisocial context. These findings were supported in a within-study meta-analysis using both frequentist and Bayesian statistics. Together, our results indicate that automatic imitation is regulated by internalized social norms that act as a stop signal when inappropriate actions are triggered. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Robust Affine Invariant Descriptors

    Directory of Open Access Journals (Sweden)

    Jianwei Yang

    2011-01-01

An approach is developed for the extraction of affine invariant descriptors by cutting the object into slices. Gray values associated with every pixel in each slice are summed up to construct affine invariant descriptors. As a result, these descriptors are very robust to additive noise. In order to establish correspondence between the slices of an object and those of its affine transformed version, a general contour (GC) of the object is constructed by performing projection along lines with different polar angles. Consequently, affine invariant division curves are derived. A slice is formed by the points that fall in the region enclosed by two adjacent division curves. To test and evaluate the proposed method, several experiments have been conducted. Experimental results show that the proposed method is very robust to noise.

  14. MGRA: Motion Gesture Recognition via Accelerometer

    Directory of Open Access Journals (Sweden)

    Feng Hong

    2016-04-01

Accelerometers have been widely embedded in most current mobile devices, enabling easy and intuitive operations. This paper proposes a Motion Gesture Recognition system (MGRA) based on accelerometer data only, which is entirely implemented on mobile devices and can provide users with real-time interactions. A robust and unique feature set is enumerated through the time domain, the frequency domain and singular value decomposition analysis using our motion gesture set containing 11,110 traces. The best feature vector for classification is selected, taking both static and mobile scenarios into consideration. MGRA exploits support vector machine as the classifier with the best feature vector. Evaluations confirm that MGRA can accommodate a broad set of gesture variations within each class, including execution time, amplitude and non-gestural movement. Extensive evaluations confirm that MGRA achieves higher accuracy under both static and mobile scenarios and costs less computation time and energy on an LG Nexus 5 than previous methods.
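A rough sketch of the feature pipeline described (time domain, frequency domain, and singular value decomposition) is given below; for brevity a nearest-centroid rule stands in for MGRA's support vector machine, and the shapes and names are illustrative assumptions.

```python
import numpy as np

def features(trace):
    """Build a feature vector for a 3-axis accelerometer trace of shape (T, 3):
    per-axis mean and std (time domain), per-axis mean spectral magnitude
    (frequency domain), and the singular values of the centred trace."""
    t = np.asarray(trace, dtype=float)
    feats = [t.mean(axis=0), t.std(axis=0)]            # time domain
    spec = np.abs(np.fft.rfft(t, axis=0))
    feats.append(spec.mean(axis=0))                    # frequency domain
    sv = np.linalg.svd(t - t.mean(axis=0), compute_uv=False)
    feats.append(sv[:3])                               # SVD analysis
    return np.concatenate(feats)                       # length 12

def nearest_centroid(x, centroids):
    """Stand-in for the paper's SVM: label of the closest class centroid."""
    return min(centroids, key=lambda k: np.linalg.norm(x - centroids[k]))
```

In a real system the centroids would be replaced by an SVM trained on feature vectors from the labeled gesture traces.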

  15. Evolutionary Sound Synthesis Controlled by Gestural Data

    Directory of Open Access Journals (Sweden)

    Jose Fornari

    2011-05-01

This article focuses on the interdisciplinary research involving Computer Music and Generative Visual Art. We describe the implementation of two interactive artistic systems based on principles of Gestural Data retrieval (WILSON, 2002) and self-organization (MORONI, 2003), used to control an Evolutionary Sound Synthesis method (ESSynth). The first implementation uses, as gestural data, image mapping of handmade drawings. The second one uses gestural data from dynamic body movements of dance. The resulting computer output is generated by an interactive system implemented in Pure Data (PD). This system uses principles of Evolutionary Computation (EC), which yields the generation of a synthetic adaptive population of sound objects. Considering that music could be seen as “organized sound”, the contribution of our study is to develop a system that aims to generate "self-organized sound" – a method that uses evolutionary computation to bridge between gesture, sound and music.

  16. Modular Invariant Theory

    CERN Document Server

    Campbell, HEA

    2011-01-01

This book covers the modular invariant theory of finite groups, the case when the characteristic of the field divides the order of the group, a theory that is more complicated than the study of the classical non-modular case. Largely self-contained, the book develops the theory from its origins up to modern results. It explores many examples, illustrating the theory and its contrast with the better understood non-modular setting. It details techniques for the computation of invariants for many modular representations of finite groups, especially the case of the cyclic group of prime order.

  17. Anisotropic Weyl invariance

    Energy Technology Data Exchange (ETDEWEB)

    Perez-Nadal, Guillem [Universidad de Buenos Aires, Buenos Aires (Argentina)

    2017-07-15

We consider a non-relativistic free scalar field theory with a type of anisotropic scale invariance in which the number of coordinates "scaling like time" is generically greater than one. We propose the Cartesian product of two curved spaces, the metric of each space being parameterized by the other space, as a notion of curved background to which the theory can be extended. We study this type of geometry, and find a family of extensions of the theory to curved backgrounds in which the anisotropic scale invariance is promoted to a local, Weyl-type symmetry. (orig.)

  18. Embodied Head Gesture and Distance Education

    OpenAIRE

    Khan, Muhammad Sikandar Lal; Réhman, Shafiq ur

    2015-01-01

Traditional distance education settings are usually based on video teleconferencing scenarios where human emotions and social presence are expressed only by facial and vocal expressions, which are not enough for complete presence; our bodily gestures and actions play a vital role in understanding the exact meaning of communication patterns, especially in teaching-learning scenarios. The bodily gestures, especially head movements, offer cues to understand contextual knowledge during conversationa...

  19. The language-gesture connection: Evidence from aphasia.

    Science.gov (United States)

    Dipper, Lucy; Pritchard, Madeleine; Morgan, Gary; Cocks, Naomi

    2015-01-01

    A significant body of evidence from cross-linguistic and developmental studies converges to suggest that co-speech iconic gesture mirrors language. This paper aims to identify whether gesture reflects impaired spoken language in a similar way. Twenty-nine people with aphasia (PWA) and 29 neurologically healthy control participants (NHPs) produced a narrative discourse, retelling the story of a cartoon video. Gesture and language were analysed in terms of semantic content and structure for two key motion events. The aphasic data showed an influence on gesture from lexical choices but no corresponding clausal influence. Both the groups produced gesture that matched the semantics of the spoken language and gesture that did not, although there was one particular gesture-language mismatch (semantically "light" verbs paired with semantically richer gesture) that typified the PWA narratives. These results indicate that gesture is both closely related to spoken language impairment and compensatory.

  20. Age-invariant face recognition.

    Science.gov (United States)

    Park, Unsang; Tong, Yiying; Jain, Anil K

    2010-05-01

One of the challenges in automatic face recognition is to achieve temporal invariance. In other words, the goal is to come up with a representation and matching scheme that is robust to changes due to facial aging. Facial aging is a complex process that affects both the 3D shape of the face and its texture (e.g., wrinkles). These shape and texture changes degrade the performance of automatic face recognition systems. However, facial aging has not received substantial attention compared to other facial variations due to pose, lighting, and expression. We propose a 3D aging modeling technique and show how it can be used to compensate for the age variations to improve the face recognition performance. The aging modeling technique adapts view-invariant 3D face models to the given 2D face aging database. The proposed approach is evaluated on three different databases (i.e., FG-NET, MORPH, and BROWNS) using FaceVACS, a state-of-the-art commercial face recognition engine.

  1. Gesture-Controlled Interfaces for Self-Service Machines

    Science.gov (United States)

    Cohen, Charles J.; Beach, Glenn

    2006-01-01

Gesture-controlled interfaces are software-driven systems that facilitate device control by translating visual hand and body signals into commands. Such interfaces could be especially attractive for controlling self-service machines (SSMs), for example, public information kiosks, ticket dispensers, gasoline pumps, and automated teller machines (see figure). A gesture-controlled interface would include a vision subsystem comprising one or more charge-coupled-device video cameras (at least two would be needed to acquire three-dimensional images of gestures). The output of the vision system would be processed by a pure software gesture-recognition subsystem. Then a translator subsystem would convert a sequence of recognized gestures into commands for the SSM to be controlled; these could include, for example, a command to display requested information, change control settings, or actuate a ticket- or cash-dispensing mechanism. Depending on the design and operational requirements of the SSM to be controlled, the gesture-controlled interface could be designed to respond to specific static gestures, dynamic gestures, or both. Static and dynamic gestures can include stationary or moving hand signals, arm poses or motions, and/or whole-body postures or motions. Static gestures would be recognized on the basis of their shapes; dynamic gestures would be recognized on the basis of both their shapes and their motions. Because dynamic gestures include temporal as well as spatial content, a gesture-controlled interface can extract more information from dynamic gestures than it can from static gestures.
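The translator subsystem described, which maps a stream of recognized gestures to SSM commands, might look like the following sketch; the gesture names and command table are hypothetical, not from the article.

```python
# Hypothetical command table: gesture names and SSM commands are
# illustrative placeholders, not taken from the article.
COMMANDS = {
    ("point", "select"): "DISPLAY_INFO",
    ("swipe_left",): "PREV_SCREEN",
    ("swipe_right",): "NEXT_SCREEN",
    ("thumbs_up", "thumbs_up"): "CONFIRM_DISPENSE",
}

def translate(gesture_seq, table=COMMANDS):
    """Greedy longest-match translation of a recognized-gesture stream
    into SSM commands, as a translator subsystem might do."""
    out, i = [], 0
    while i < len(gesture_seq):
        for n in (2, 1):  # prefer two-gesture phrases over single gestures
            key = tuple(gesture_seq[i:i + n])
            if key in table:
                out.append(table[key])
                i += n
                break
        else:
            i += 1  # unrecognized gesture: skip it
    return out
```

Preferring longer matches lets compound gestures (e.g., a confirmation pair) take precedence over their single-gesture prefixes.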

  2. Visual speech gestures modulate efferent auditory system.

    Science.gov (United States)

    Namasivayam, Aravind Kumar; Wong, Wing Yiu Stephanie; Sharma, Dinaay; van Lieshout, Pascal

    2015-03-01

    Visual and auditory systems interact at both cortical and subcortical levels. Studies suggest a highly context-specific cross-modal modulation of the auditory system by the visual system. The present study builds on this work by sampling data from 17 young healthy adults to test whether visual speech stimuli evoke different responses in the auditory efferent system compared to visual non-speech stimuli. The descending cortical influences on medial olivocochlear (MOC) activity were indirectly assessed by examining the effects of contralateral suppression of transient-evoked otoacoustic emissions (TEOAEs) at 1, 2, 3 and 4 kHz under three conditions: (a) in the absence of any contralateral noise (Baseline), (b) contralateral noise + observing facial speech gestures related to productions of vowels /a/ and /u/ and (c) contralateral noise + observing facial non-speech gestures related to smiling and frowning. The results are based on 7 individuals whose data met strict recording criteria and indicated a significant difference in TEOAE suppression between observing speech gestures relative to the non-speech gestures, but only at the 1 kHz frequency. These results suggest that observing a speech gesture compared to a non-speech gesture may trigger a difference in MOC activity, possibly to enhance peripheral neural encoding. If such findings can be reproduced in future research, sensory perception models and theories positing the downstream convergence of unisensory streams of information in the cortex may need to be revised.

  3. Robust Real-Time and Rotation-Invariant American Sign Language Alphabet Recognition Using Range Camera

    Science.gov (United States)

    Lahamy, H.; Lichti, D.

    2012-07-01

The automatic interpretation of human gestures can be used for natural interaction with computers without the use of mechanical devices such as keyboards and mice. The recognition of hand postures has been studied for many years. However, most of the literature in this area has considered 2D images, which cannot provide a full description of hand gestures. In addition, rotation-invariant identification remains an unsolved problem even with the use of 2D images. The objective of the current study is to design a rotation-invariant recognition process while using a 3D signature for classifying hand postures. A heuristic and voxel-based signature has been designed and implemented. The tracking of the hand motion is achieved with the Kalman filter. A unique training image per posture is used in the supervised classification. The designed recognition process and the tracking procedure have been successfully evaluated. This study has demonstrated the efficiency of the proposed rotation-invariant 3D hand posture signature, which leads to a 98.24% recognition rate after testing 12,723 samples of 12 gestures taken from the alphabet of the American Sign Language.

  4. ROBUST REAL-TIME AND ROTATION-INVARIANT AMERICAN SIGN LANGUAGE ALPHABET RECOGNITION USING RANGE CAMERA

    Directory of Open Access Journals (Sweden)

    H. Lahamy

    2012-07-01

The automatic interpretation of human gestures can be used for natural interaction with computers without the use of mechanical devices such as keyboards and mice. The recognition of hand postures has been studied for many years. However, most of the literature in this area has considered 2D images, which cannot provide a full description of hand gestures. In addition, rotation-invariant identification remains an unsolved problem even with the use of 2D images. The objective of the current study is to design a rotation-invariant recognition process while using a 3D signature for classifying hand postures. A heuristic and voxel-based signature has been designed and implemented. The tracking of the hand motion is achieved with the Kalman filter. A unique training image per posture is used in the supervised classification. The designed recognition process and the tracking procedure have been successfully evaluated. This study has demonstrated the efficiency of the proposed rotation-invariant 3D hand posture signature, which leads to a 98.24% recognition rate after testing 12,723 samples of 12 gestures taken from the alphabet of the American Sign Language.
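One simple way to obtain a rotation-invariant signature from 3D data, much simpler than but in the spirit of the voxel-based signature above, is a histogram of point distances from the cloud's centroid: rigid rotations preserve all distances, so the signature is unchanged. This is an illustrative sketch, not the authors' method.

```python
import numpy as np

def radial_signature(points, bins=16):
    """Normalized histogram of point distances from the centroid of a
    3D point cloud. Rotating the cloud leaves every distance, and hence
    the signature, unchanged."""
    p = np.asarray(points, dtype=float)
    r = np.linalg.norm(p - p.mean(axis=0), axis=1)
    hist, _ = np.histogram(r, bins=bins, range=(0, r.max() + 1e-9))
    return hist / hist.sum()

def rotation_z(theta):
    """Rotation matrix about the z axis, used here to test invariance."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
```

Signatures like this can then be compared with any histogram distance to classify postures, with rotation invariance guaranteed by construction rather than learned from data.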

  5. Modular invariant gaugino condensation

    Energy Technology Data Exchange (ETDEWEB)

    Gaillard, M.K.

    1991-05-09

The construction of effective supergravity Lagrangians for gaugino condensation is reviewed and recent results are presented that are consistent with modular invariance and yield a positive definite potential of the no-scale type. Possible implications for phenomenology are briefly discussed. 29 refs.

  6. Invariant differential operators

    CERN Document Server

    Dobrev, Vladimir K

    2016-01-01

    With applications in quantum field theory, elementary particle physics and general relativity, this two-volume work studies invariance of differential operators under Lie algebras, quantum groups, superalgebras including infinite-dimensional cases, Schrödinger algebras, applications to holography. This first volume covers the general aspects of Lie algebras and group theory.

  7. A speaker’s gesture style can affect language comprehension: ERP evidence from gesture-speech integration

    Science.gov (United States)

    Obermeier, Christian; Kelly, Spencer D.

    2015-01-01

In face-to-face communication, speech is typically enriched by gestures. Clearly, not all people gesture in the same way, and the present study explores whether such individual differences in gesture style are taken into account during the perception of gestures that accompany speech. Participants were presented with one speaker that gestured in a straightforward way and another that also produced self-touch movements. Adding trials with such grooming movements makes the gesture information a much weaker cue compared with the gestures of the non-grooming speaker. The Electroencephalogram was recorded as participants watched videos of the individual speakers. Event-related potentials elicited by the speech signal revealed that adding grooming movements attenuated the impact of gesture for this particular speaker. Thus, these data suggest that there is sensitivity to the personal communication style of a speaker and that this sensitivity affects the extent to which gesture and speech are integrated during language comprehension. PMID:25688095

  8. Modular invariance and entanglement entropy

    Energy Technology Data Exchange (ETDEWEB)

    Lokhande, Sagar Fakirchand; Mukhi, Sunil [Indian Institute of Science Education and Research,Homi Bhabha Rd, Pashan, Pune 411 008 (India)

    2015-06-17

We study the Rényi and entanglement entropies for free 2d CFTs at finite temperature and finite size, with emphasis on their properties under modular transformations of the torus. We address the issue of summing over fermion spin structures in the replica trick, and show that the relation between entanglement and thermal entropy determines two different ways to perform this sum in the limits of small and large interval. Both answers are modular covariant, rather than invariant. Our results are compared with those for a free boson at unit radius in the two limits and complete agreement is found, supporting the view that entanglement respects Bose-Fermi duality. We extend our computations to multiple free Dirac fermions having correlated spin structures, dual to free bosons on the Spin(2d) weight lattice.

  9. Iconic gestures prime words: comparison of priming effects when gestures are presented alone and when they are accompanying speech.

    Science.gov (United States)

    So, Wing-Chee; Yi-Feng, Alvan Low; Yap, De-Fu; Kheng, Eugene; Yap, Ju-Min Melvin

    2013-01-01

Previous studies have shown that iconic gestures presented in an isolated manner prime visually presented semantically related words. Since gestures and speech are almost always produced together, this study examined whether iconic gestures accompanying speech would prime words and compared the priming effect of iconic gestures with speech to that of iconic gestures presented alone. Adult participants (N = 180) were randomly assigned to one of three conditions in a lexical decision task: Gestures-Only (the primes were iconic gestures presented alone); Speech-Only (the primes were auditory tokens conveying the same meaning as the iconic gestures); Gestures-Accompanying-Speech (the primes were the simultaneous coupling of iconic gestures and their corresponding auditory tokens). Our findings revealed significant priming effects in all three conditions. However, the priming effect in the Gestures-Accompanying-Speech condition was comparable to that in the Speech-Only condition and was significantly weaker than that in the Gestures-Only condition, suggesting that the facilitatory effect of iconic gestures accompanying speech may be constrained by the level of language processing required in the lexical decision task, where linguistic processing of word forms is more dominant than semantic processing. Hence, the priming effect afforded by the co-speech iconic gestures was weakened.

  10. The shrink point: audiovisual integration of speech-gesture synchrony

    OpenAIRE

    Kirchhof, Carolin

    2017-01-01

    The focus in gesture research has long been on the production of speech-accompanying gestures and on how speech-gesture utterances contribute to communication. An issue that has mostly been neglected is the extent to which listeners even perceive the gesture part of a multimodal utterance. For instance, there has been a major focus on the lexico-semiotic connection between spontaneously coproduced gestures and speech in gesture research (e.g., de Ruiter, 2007; Kita & Özyürek, 20...

  11. Combined Hand Gesture — Speech Model for Human Action Recognition

    Science.gov (United States)

    Cheng, Sheng-Tzong; Hsu, Chih-Wei; Li, Jian-Pan

    2013-01-01

    This study proposes a dynamic hand gesture detection technology to effectively detect dynamic hand gesture areas, and a hand gesture recognition technology to improve the dynamic hand gesture recognition rate. Meanwhile, the corresponding relationship between state sequences in hand gesture and speech models is considered by integrating speech recognition technology with a multimodal model, thus improving the accuracy of human behavior recognition. The experimental results proved that the proposed method can effectively improve human behavior recognition accuracy and the feasibility of system applications. Experimental results verified that the multimodal gesture-speech model provided superior accuracy when compared to the single modal versions. PMID:24351628
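
The multimodal combination described above can be illustrated, in heavily simplified form, by late fusion of per-class log-likelihood scores from separate gesture and speech recognizers. All names and scores below are hypothetical; the paper's actual model couples the state sequences of the two modalities rather than merely averaging final scores:

```python
def fuse_log_likelihoods(gesture_scores, speech_scores, w=0.5):
    """Late fusion of per-class log-likelihoods from two modalities.

    gesture_scores, speech_scores: dicts mapping class label -> log-likelihood.
    w: weight on the gesture modality (1 - w goes to speech).
    Returns the label with the highest weighted combined score.
    """
    labels = gesture_scores.keys() & speech_scores.keys()
    return max(labels, key=lambda c: w * gesture_scores[c] + (1 - w) * speech_scores[c])

# Hypothetical scores: speech strongly favors "wave", gestures mildly favor "point".
gesture = {"wave": -4.0, "point": -3.5}
speech = {"wave": -1.0, "point": -6.0}
print(fuse_log_likelihoods(gesture, speech))  # -> wave
```

With equal weights the confident speech model dominates; setting w = 1.0 recovers the gesture-only decision, which is one way such a fusion can be tuned per application.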

  12. Learning from gesture: How early does it happen?

    Science.gov (United States)

    Novack, Miriam A; Goldin-Meadow, Susan; Woodward, Amanda L

    2015-09-01

    Iconic gesture is a rich source of information for conveying ideas to learners. However, in order to learn from iconic gesture, a learner must be able to interpret its iconic form-a nontrivial task for young children. Our study explores how young children interpret iconic gesture and whether they can use it to infer a previously unknown action. In Study 1, 2- and 3-year-old children were shown iconic gestures that illustrated how to operate a novel toy to achieve a target action. Children in both age groups successfully figured out the target action more often after seeing an iconic gesture demonstration than after seeing no demonstration. However, the 2-year-olds (but not the 3-year-olds) figured out fewer target actions after seeing an iconic gesture demonstration than after seeing a demonstration of an incomplete-action and, in this sense, were not yet experts at interpreting gesture. Nevertheless, both age groups seemed to understand that gesture could convey information that can be used to guide their own actions, and that gesture is thus not movement for its own sake. That is, the children in both groups produced the action displayed in gesture on the object itself, rather than producing the action in the air (in other words, they rarely imitated the experimenter's gesture as it was performed). Study 2 compared 2-year-olds' performance following iconic vs. point gesture demonstrations. Iconic gestures led children to discover more target actions than point gestures, suggesting that iconic gesture does more than just focus a learner's attention, it conveys substantive information about how to solve the problem, information that is accessible to children as young as 2. The ability to learn from iconic gesture is thus in place by toddlerhood and, although still fragile, allows children to process gesture, not as meaningless movement, but as an intentional communicative representation. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. Hand Matters: Left-Hand Gestures Enhance Metaphor Explanation

    Science.gov (United States)

    2017-01-01

    Research suggests that speech-accompanying gestures influence cognitive processes, but it is not clear whether the gestural benefit is specific to the gesturing hand. Two experiments tested the “(right/left) hand-specificity” hypothesis for self-oriented functions of gestures: gestures with a particular hand enhance cognitive processes involving the hemisphere contralateral to the gesturing hand. Specifically, we tested whether left-hand gestures enhance metaphor explanation, which involves right-hemispheric processing. In Experiment 1, right-handers explained metaphorical phrases (e.g., “to spill the beans,” beans represent pieces of information). Participants kept the one hand (right, left) still while they were allowed to spontaneously gesture (or not) with their other free hand (left, right). Metaphor explanations were better when participants chose to gesture when their left hand was free than when they did not. An analogous effect of gesturing was not found when their right hand was free. In Experiment 2, different right-handers performed the same metaphor explanation task but, unlike Experiment 1, they were encouraged to gesture with their left or right hand or to not gesture at all. Metaphor explanations were better when participants gestured with their left hand than when they did not gesture, but the right hand gesture condition did not significantly differ from the no-gesture condition. Furthermore, we measured participants’ mouth asymmetry during additional verbal tasks to determine individual differences in the degree of right-hemispheric involvement in speech production. The left-over-right-side mouth dominance, indicating stronger right-hemispheric involvement, positively correlated with the left-over-right-hand gestural benefit on metaphor explanation. These converging findings supported the “hand-specificity” hypothesis. PMID:28080121

  14. TOT phenomena: Gesture production in younger and older adults.

    Science.gov (United States)

    Theocharopoulou, Foteini; Cocks, Naomi; Pring, Timothy; Dipper, Lucy T

    2015-06-01

    This study explored age-related changes in gesture to better understand the relationship between gesture and word retrieval from memory. The frequency of gestures during tip-of-the-tongue (TOT) states highlights this relationship. There is a lack of evidence describing the form and content of iconic gestures arising spontaneously in such TOT states and a parallel gap addressing age-related variations. In this study, TOT states were induced in 45 participants from 2 age groups (older and younger adults) using a pseudoword paradigm. The type and frequency of gestures produced was recorded during 2 experimental conditions (single-word retrieval and narrative task). We found that both groups experienced a high number of TOT states, during which they gestured. Iconic co-TOT gestures were more common than noniconic gestures. Although there was no age effect on the type of gestures produced, there was a significant, task-specific age difference in the amount of gesturing. That is, younger adults gestured more in the narrative task, whereas older adults generated more gestures in the single-word-retrieval task. Task-specific age differences suggest that there are age-related differences in terms of the cognitive operations involved in TOT gesture production. (c) 2015 APA, all rights reserved.

  15. Make Gestures to Learn: Reproducing Gestures Improves the Learning of Anatomical Knowledge More than Just Seeing Gestures

    Directory of Open Access Journals (Sweden)

    Mélaine Cherdieu

    2017-10-01

    Manual gestures can facilitate problem solving but also language or conceptual learning. Both seeing and making gestures during learning seem to be beneficial. However, the stronger activation of the motor system in the second case should provide supplementary cues to consolidate and re-enact the mental traces created during learning. We tested this hypothesis in the context of anatomy learning by naïve adult participants. Anatomy is a challenging topic to learn and is of specific interest for research on embodied learning, as the learning content can be directly linked to the learner's body. Two groups of participants were asked to watch a video lecture on forearm anatomy. The video included a model making gestures related to the content of the lecture. Both groups saw the gestures, but only one also imitated the model. Tests of knowledge were run just after learning and a few days later. The results revealed that imitating gestures improves the recall of structure names and their localization on a diagram. This effect was, however, significant only in the long-term assessments. This suggests that: (1) the integration of motor actions and knowledge may require sleep; (2) a specific activation of the motor system during learning may improve the consolidation and/or the retrieval of memories.

  16. Continuous Integrated Invariant Inference Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed project will develop a new technique for invariant inference and embed this and other current invariant inference and checking techniques in an...

  17. Conformal invariance of curvature perturbation

    CERN Document Server

    Gong, Jinn-Ouk; Park, Wan Il; Sasaki, Misao; Song, Yong-Seon

    2011-01-01

    We show that in the single component situation all perturbation variables in the comoving gauge are conformally invariant to all perturbation orders. Generally we identify a special time slicing, the uniform-conformal transformation slicing, where all perturbations are again conformally invariant to all perturbation orders. We apply this result to the delta N formalism, and show its conformal invariance.

  18. Reducing Lookups for Invariant Checking

    DEFF Research Database (Denmark)

    Thomsen, Jakob Grauenkjær; Clausen, Christian; Andersen, Kristoffer Just

    2013-01-01

    This paper helps reduce the cost of invariant checking in cases where access to data is expensive. Assume that a set of variables satisfy a given invariant and a request is received to update a subset of them. We reduce the set of variables to inspect, in order to verify that the invariant is still...
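
One concrete instance of the idea, sketched here for the simplest possible invariant (the sum of all variables equals a constant): if the invariant held before the update, it still holds exactly when the updated subset's sum is unchanged, so only the updated variables need to be read. This toy example is mine, not the paper's general algorithm:

```python
def sum_invariant_still_holds(old_subset, new_subset):
    """For the invariant 'sum of all variables == C': assuming it held
    before the update, it still holds iff the updated subset's sum is
    unchanged. Only the updated variables are read -- no lookups of the
    (possibly expensive-to-access) remaining variables are needed."""
    return sum(old_subset.values()) == sum(new_subset.values())

# A transfer between two accounts preserves the global sum:
print(sum_invariant_still_holds({"a": 10, "b": 5}, {"a": 7, "b": 8}))   # -> True
# Losing one unit breaks it:
print(sum_invariant_still_holds({"a": 10, "b": 5}, {"a": 7, "b": 7}))   # -> False
```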

  19. Conformal invariance of curvature perturbation

    Energy Technology Data Exchange (ETDEWEB)

    Gong, Jinn-Ouk [Theory Division, CERN, CH-1211 Genève 23 (Switzerland); Hwang, Jai-chan [Department of Astronomy and Atmospheric Sciences, Kyungpook National University, Daegu 702-701 (Korea, Republic of); Park, Wan Il; Sasaki, Misao; Song, Yong-Seon, E-mail: jinn-ouk.gong@cern.ch, E-mail: jchan@knu.ac.kr, E-mail: wipark@kias.re.kr, E-mail: misao@yukawa.kyoto-u.ac.jp, E-mail: ysong@kias.re.kr [Korea Institute for Advanced Study, Seoul 130-722 (Korea, Republic of)

    2011-09-01

    We show that in the single component situation all perturbation variables in the comoving gauge are conformally invariant to all perturbation orders. Generally we identify a special time slicing, the uniform-conformal transformation slicing, where all perturbations are again conformally invariant to all perturbation orders. We apply this result to the δN formalism, and show its conformal invariance.
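
The claimed invariance can be illustrated at linear order for a conformal factor that is homogeneous at the background level (a sketch, not the paper's all-orders proof): absorbing Ω into the scale factor leaves the comoving curvature perturbation untouched.

```latex
% Sketch at linear order, for Omega = Omega(t) depending only on
% background time; the paper treats the general case to all orders.
\tilde{g}_{\mu\nu} = \Omega^2\, g_{\mu\nu},
\qquad
g_{ij} = a^2 e^{2\mathcal{R}}\,\delta_{ij}
\;\;\Longrightarrow\;\;
\tilde{a} = \Omega\, a , \qquad \tilde{\mathcal{R}} = \mathcal{R} .
```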

  20. A procedure for estimating gestural scores from speech acoustics.

    Science.gov (United States)

    Nam, Hosung; Mitra, Vikramjit; Tiede, Mark; Hasegawa-Johnson, Mark; Espy-Wilson, Carol; Saltzman, Elliot; Goldstein, Louis

    2012-12-01

    Speech can be represented as a constellation of constricting vocal tract actions called gestures, whose temporal patterning with respect to one another is expressed in a gestural score. Current speech datasets do not come with gestural annotation and no formal gestural annotation procedure exists at present. This paper describes an iterative analysis-by-synthesis landmark-based time-warping architecture to perform gestural annotation of natural speech. For a given utterance, the Haskins Laboratories Task Dynamics and Application (TADA) model is employed to generate a corresponding prototype gestural score. The gestural score is temporally optimized through an iterative timing-warping process such that the acoustic distance between the original and TADA-synthesized speech is minimized. This paper demonstrates that the proposed iterative approach is superior to conventional acoustically-referenced dynamic timing-warping procedures and provides reliable gestural annotation for speech datasets.
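
The core operation in such an analysis-by-synthesis loop is an alignment that measures the acoustic distance between the original and synthesized utterances. A minimal sketch of classic dynamic time warping, which the paper's landmark-based procedure refines, might look like this (illustrative only; the actual TADA architecture is considerably more elaborate):

```python
def dtw_distance(x, y):
    """Classic dynamic time warping distance between two 1-D feature
    sequences, using absolute difference as the local cost."""
    n, m = len(x), len(y)
    INF = float("inf")
    # D[i][j] = minimal cumulative cost of aligning x[:i] with y[:j].
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

# Identical sequences align at zero cost; a time-stretched copy stays at zero
# too, which is exactly the timing tolerance the warping step exploits.
print(dtw_distance([0, 1, 2, 3], [0, 1, 2, 3]))     # -> 0.0
print(dtw_distance([0, 1, 2, 3], [0, 1, 1, 2, 3]))  # -> 0.0
```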

  1. Presentation and production: the role of gesture in spatial communication.

    Science.gov (United States)

    Austin, Elizabeth E; Sweller, Naomi

    2014-06-01

    During social interaction, verbal language as well as nonverbal behavior is exchanged between speakers and listeners. One social task that often involves nonverbal behavior is the relaying of spatial direction information. The questions addressed in this study were whether presenting gesture during encoding (a) enhanced corresponding spatial task performance and (b) elicited gesture production at recall for adults and children. Children (3-4years) and adults were presented with verbal route directions through a small-scale spatial array and, depending on the assigned condition (i.e., no gestures, beat gestures, or representational gestures), the accompanying gestures. Children, but not adults, benefited from the presence of gesture during encoding of the spatial route direction task, as measured by recall at test. Results suggest that the presence of gesture during encoding plays an integral part of effectively communicating spatial route direction information, particularly for children. Copyright © 2014 Elsevier Inc. All rights reserved.

  2. The language–gesture connection: Evidence from aphasia

    Science.gov (United States)

    Dipper, Lucy; Pritchard, Madeleine; Morgan, Gary; Cocks, Naomi

    2015-01-01

    Abstract A significant body of evidence from cross-linguistic and developmental studies converges to suggest that co-speech iconic gesture mirrors language. This paper aims to identify whether gesture reflects impaired spoken language in a similar way. Twenty-nine people with aphasia (PWA) and 29 neurologically healthy control participants (NHPs) produced a narrative discourse, retelling the story of a cartoon video. Gesture and language were analysed in terms of semantic content and structure for two key motion events. The aphasic data showed an influence on gesture from lexical choices but no corresponding clausal influence. Both the groups produced gesture that matched the semantics of the spoken language and gesture that did not, although there was one particular gesture–language mismatch (semantically “light” verbs paired with semantically richer gesture) that typified the PWA narratives. These results indicate that gesture is both closely related to spoken language impairment and compensatory. PMID:26169504

  3. Gesturing by speakers with aphasia: How does it compare?

    NARCIS (Netherlands)

    L. Mol (Linda); E. Krahmer (Emiel); W.M.E. van de Sandt-Koenderman (Mieke)

    2013-01-01

    Purpose: To study the independence of gesture and verbal language production. The authors assessed whether gesture can be semantically compensatory in cases of verbal language impairment and whether speakers with aphasia and control participants use similar depiction techniques in

  4. Convex Graph Invariants

    Science.gov (United States)

    2010-12-02

    evaluating the function Θ_P(A) for any fixed A, P is equivalent to solving the so-called Quadratic Assignment Problem (QAP), and thus we can employ various... tractable linear programming, spectral, and SDP relaxations of QAP [40, 11, 33]. In particular we discuss recent work [14] on exploiting group... symmetry in SDP relaxations of QAP, which is useful for approximately computing elementary convex graph invariants in many interesting cases. Finally in

  5. Galilei invariant molecular dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Hoermann, G. [Vienna Univ. (Austria). Mathematisches Inst.; Jaekel, C.D. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik

    1994-04-01

    We construct a C*-dynamical model for a chemical reaction. Galilei invariance of our nonrelativistic model is demonstrated by defining it directly on a Galilean space-time fibre bundle with C*-algebra valued fibre, i.e. without reference to any coordinate system. The existence of equilibrium states in this model is established and some of their properties are discussed. (orig.)

  6. Grammar of Dance Gesture from Bali Traditional Dance

    OpenAIRE

    Yaya Heryadi; Mohamad Ivan Fanany; Aniati Murni Arymurthy

    2012-01-01

    Automatic recognition of dance gestures is one important research area in computer vision with many potential applications. Bali traditional dance comprises many dance gestures that have remained relatively unchanged over the years. Although previous studies have reported various methods for recognizing gestures, to the best of our knowledge, a method to model and classify the dance gestures of Bali traditional dance is still absent from the literature. The aim of this paper is to build a robust recognizer based on...

  7. Tactile Feedback for Above-Device Gesture Interfaces

    OpenAIRE

    Freeman, Euan; Brewster, Stephen; Lantz, Vuokko

    2014-01-01

    Above-device gesture interfaces let people interact in the space above mobile devices using hand and finger movements. For example, users could gesture over a mobile phone or wearable without having to use the touchscreen. We look at how above-device interfaces can also give feedback in the space over the device. Recent haptic and wearable technologies give new ways to provide tactile feedback while gesturing, letting touchless gesture interfaces give touch feedback. In this paper we take a f...

  8. Adaptation in Gesture: Converging Hands or Converging Minds?

    Science.gov (United States)

    Mol, Lisette; Krahmer, Emiel; Maes, Alfons; Swerts, Marc

    2012-01-01

    Interlocutors sometimes repeat each other's co-speech hand gestures. In three experiments, we investigate to what extent the copying of such gestures' form is tied to their meaning in the linguistic context, as well as to interlocutors' representations of this meaning at the conceptual level. We found that gestures were repeated only if they could…

  9. Preschoolers' Interpretations of Gesture: Label or Action Associate?

    Science.gov (United States)

    Marentette, Paula; Nicoladis, Elena

    2011-01-01

    This study explores a common assumption made in the cognitive development literature that children will treat gestures as labels for objects. Without doubt, researchers in these experiments intend to use gestures symbolically as labels. The present studies examine whether children interpret these gestures as labels. In Study 1 two-, three-, and…

  10. Gestures in Guidance and Counselling and Their Pedagogical ...

    African Journals Online (AJOL)

    The purpose of this paper was to x-ray the implications of gestures, their usage and relevance in guidance and counselling. The historical background of guidance and counselling was traced and concepts like guidance, counselling and gesture were clarified. Gestures as non-verbal cues in counselling do not stand on its ...

  11. Recognizing Stress Using Semantics and Modulation of Speech and Gestures

    NARCIS (Netherlands)

    Lefter, I.; Burghouts, G.J.; Rothkrantz, L.J.M.

    2016-01-01

    This paper investigates how speech and gestures convey stress, and how they can be used for automatic stress recognition. As a first step, we look into how humans use speech and gestures to convey stress. In particular, for both speech and gestures, we distinguish between stress conveyed by the

  12. Gesture Based Control and EMG Decomposition

    Science.gov (United States)

    Wheeler, Kevin R.; Chang, Mindy H.; Knuth, Kevin H.

    2005-01-01

    This paper presents two probabilistic developments for use with Electromyograms (EMG). First described is a new electric interface for virtual device control based on gesture recognition. The second development is a Bayesian method for decomposing EMG into individual motor unit action potentials. This more complex technique will then allow for higher resolution in separating muscle groups for gesture recognition. All examples presented rely upon sampling EMG data from a subject's forearm. The gesture-based recognition uses pattern recognition software that has been trained to identify gestures from among a given set of gestures. The pattern recognition software consists of hidden Markov models which are used to recognize the gestures as they are being performed in real-time from moving averages of EMG. Two experiments were conducted to examine the feasibility of this interface technology. The first replicated a virtual joystick interface, and the second replicated a keyboard. Moving averages of EMG do not provide easy distinction between fine muscle groups. To better distinguish between different fine motor skill muscle groups we present a Bayesian algorithm to separate surface EMG into representative motor unit action potentials. The algorithm is based upon differential Variable Component Analysis (dVCA) [1], [2], which was originally developed for electroencephalograms. The algorithm uses a simple forward model representing a mixture of motor unit action potentials as seen across multiple channels. The parameters of this model are iteratively optimized for each component. Results are presented on both synthetic and experimental EMG data. The synthetic case has additive white noise and is compared with known components. The experimental EMG data was obtained using a custom linear electrode array designed for this study.
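
As a rough illustration of the "moving averages of EMG" used as features for the gesture HMMs, one can rectify the signal and smooth it with a sliding mean. The function and data below are illustrative, not the authors' implementation:

```python
def moving_average_emg(samples, window):
    """Rectify an EMG sample stream (take absolute values) and smooth it
    with a sliding mean -- the kind of coarse envelope feature that can
    be fed to a gesture classifier."""
    rect = [abs(s) for s in samples]
    out = []
    for i in range(len(rect) - window + 1):
        out.append(sum(rect[i:i + window]) / window)
    return out

# Six raw bipolar samples -> four smoothed envelope values.
emg = [0.1, -0.4, 0.3, -0.2, 0.5, -0.1]
print(moving_average_emg(emg, 3))
```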

  13. Generating Culture-Specific Gestures for Virtual Agent Dialogs

    DEFF Research Database (Denmark)

    Endrass, Birgit; Damian, Ionut; Huber, Peter

    2010-01-01

    Integrating culture into the behavioral model of virtual agents has come into focus lately. When investigating verbal aspects of behavior, nonverbal behaviors are desirably added automatically, driven by the speech-act. In this paper, we present a corpus driven approach of generating gestures...... in a culture-specific way that accompany agent dialogs. The frequency of gestures and gesture-types, the correlation of gesture-types and speech-acts as well as the expressivity of gestures have been analyzed in the two cultures of Germany and Japan and integrated into a demonstrator....

  14. Exploring the Use of Discrete Gestures for Authentication

    Science.gov (United States)

    Chong, Ming Ki; Marsden, Gary

    Research in user authentication has been a growing field in HCI. Previous studies have shown that people's graphical memory can be used to increase password memorability. On the other hand, with the increasing number of devices with built-in motion sensors, kinesthetic memory (or muscle memory) can also be exploited for authentication. This paper presents a novel knowledge-based authentication scheme, called gesture password, which uses discrete gestures as password elements. The research presents a study of multiple password retention using PINs and gesture passwords. The study reports that although participants could use kinesthetic memory to remember gesture passwords, retention of PINs is far superior to retention of gesture passwords.

  15. Gesture imitation in autism. II. Symbolic gestures and pantomimed object use.

    Science.gov (United States)

    Smith, Isabel M; Bryson, Susan E

    2007-10-01

    We report an experimental study of imitation of two types of meaningful gestures: (a) social-communicative gestures, and (b) pantomimed actions with objects (including counterfunctional object use) by children and adolescents with autism. Controls were (a) children with nonautistic developmental delays, matched for chronological age and receptive language age, and (b) typically developing children matched for receptive language. Children in both comparison groups imitated actions more accurately than did children with autism, who nonetheless demonstrated understanding of the meaning of the gestures. However, the autistic group tended to have difficulty naming gestures and also was less able than controls to produce actions on verbal request. Children with lower levels of language ability, including those with autism, had difficulty imitating unconventional use of objects, instead using the object for their conventional functions. The discussion addresses the implications of these results and our own previous related findings for representational and executive accounts of praxic deficits in autistic spectrum disorders.

  16. Dogs account for body orientation but not visual barriers when responding to pointing gestures

    Science.gov (United States)

    MacLean, Evan L.; Krupenye, Christopher; Hare, Brian

    2014-01-01

    In a series of 4 experiments we investigated whether dogs use information about a human’s visual perspective when responding to pointing gestures. While there is evidence that dogs may know what humans can and cannot see, and that they flexibly use human communicative gestures, it is unknown if they can integrate these two skills. In Experiment 1 we first determined that dogs were capable of using basic information about a human’s body orientation (indicative of her visual perspective) in a point following context. Subjects were familiarized with experimenters who either faced the dog and accurately indicated the location of hidden food, or faced away from the dog and (falsely) indicated the un-baited container. In test trials these cues were pitted against one another and dogs tended to follow the gesture from the individual who faced them while pointing. In Experiments 2–4 the experimenter pointed ambiguously toward two possible locations where food could be hidden. On test trials a visual barrier occluded the pointer’s view of one container, while dogs could always see both containers. We predicted that if dogs could take the pointer’s visual perspective they should search in the only container visible to the pointer. This hypothesis was supported only in Experiment 2. We conclude that while dogs are skilled both at following human gestures, and exploiting information about others’ visual perspectives, they may not integrate these skills in the manner characteristic of human children. PMID:24611643

  17. From action to abstraction: Gesture as a mechanism of change.

    Science.gov (United States)

    Goldin-Meadow, Susan

    2015-12-01

    Piaget was a master at observing the routine behaviors children produce as they go from knowing less to knowing more about a task, and making inferences not only about how the children understood the task at each point, but also about how they progressed from one point to the next. In this paper, I examine a routine behavior that Piaget overlooked-the spontaneous gestures speakers produce as they explain their solutions to a problem. These gestures are not mere hand waving. They reflect ideas that the speaker has about the problem, often ideas that are not found in that speaker's talk. But gesture can do more than reflect ideas-it can also change them. In this sense, gesture behaves like any other action; both gesture and action on objects facilitate learning problems on which training was given. However, only gesture promotes transferring the knowledge gained to problems that require generalization. Gesture is, in fact, a special kind of action in that it represents the world rather than directly manipulating the world (gesture does not move objects around). The mechanisms by which gesture and action promote learning may therefore differ-gesture is able to highlight components of an action that promote abstract learning while leaving out details that could tie learning to a specific context. Because it is both an action and a representation, gesture can serve as a bridge between the two and thus be a powerful tool for learning abstract ideas.

  18. Hippocampal declarative memory supports gesture production: Evidence from amnesia.

    Science.gov (United States)

    Hilverman, Caitlin; Cook, Susan Wagner; Duff, Melissa C

    2016-12-01

    Spontaneous co-speech hand gestures provide a visuospatial representation of what is being communicated in spoken language. Although it is clear that gestures emerge from representations in memory for what is being communicated (De Ruiter, 1998; Wesp, Hesse, Keutmann, & Wheaton, 2001), the mechanism supporting the relationship between gesture and memory is unknown. Current theories of gesture production posit that action - supported by motor areas of the brain - is key in determining whether gestures are produced. We propose that when and how gestures are produced is determined in part by hippocampally-mediated declarative memory. We examined the speech and gesture of healthy older adults and of memory-impaired patients with hippocampal amnesia during four discourse tasks that required accessing episodes and information from the remote past. Consistent with previous reports of impoverished spoken language in patients with hippocampal amnesia, we predicted that these patients, who have difficulty generating multifaceted declarative memory representations, may in turn have impoverished gesture production. We found that patients gestured less overall relative to healthy comparison participants, and that this was particularly evident in tasks that may rely more heavily on declarative memory. Thus, gestures do not just emerge from the motor representation activated for speaking, but are also sensitive to the representation available in hippocampal declarative memory, suggesting a direct link between memory and gesture production. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. From action to abstraction: Gesture as a mechanism of change

    Science.gov (United States)

    Goldin-Meadow, Susan

    2015-01-01

    Piaget was a master at observing the routine behaviors children produce as they go from knowing less to knowing more about a task, and making inferences not only about how the children understood the task at each point, but also about how they progressed from one point to the next. In this paper, I examine a routine behavior that Piaget overlooked—the spontaneous gestures speakers produce as they explain their solutions to a problem. These gestures are not mere hand waving. They reflect ideas that the speaker has about the problem, often ideas that are not found in that speaker’s talk. But gesture can do more than reflect ideas—it can also change them. In this sense, gesture behaves like any other action; both gesture and action on objects facilitate learning problems on which training was given. However, only gesture promotes transferring the knowledge gained to problems that require generalization. Gesture is, in fact, a special kind of action in that it represents the world rather than directly manipulating the world (gesture does not move objects around). The mechanisms by which gesture and action promote learning may therefore differ—gesture is able to highlight components of an action that promote abstract learning while leaving out details that could tie learning to a specific context. Because it is both an action and a representation, gesture can serve as a bridge between the two and thus be a powerful tool for learning abstract ideas. PMID:26692629

  20. Gesturing by speakers with aphasia: how does it compare?

    Science.gov (United States)

    Mol, Lisette; Krahmer, Emiel; van de Sandt-Koenderman, Mieke

    2013-08-01

    To study the independence of gesture and verbal language production. The authors assessed whether gesture can be semantically compensatory in cases of verbal language impairment and whether speakers with aphasia and control participants use similar depiction techniques in gesture. The informativeness of gesture was assessed in 3 forced-choice studies, in which raters assessed the topic of the speaker's message in video clips of 13 speakers with moderate aphasia and 12 speakers with severe aphasia, who were performing a communication test (the Scenario Test). Both groups were compared and contrasted with 17 control participants, who either were or were not allowed to communicate verbally. In addition, the representation techniques used in gesture were analyzed. Gestures produced by speakers with more severe aphasia were less informative than those by speakers with moderate aphasia, yet they were not necessarily uninformative. Speakers with more severe aphasia also tended to use fewer representation techniques (mostly relying on outlining gestures) in co-speech gesture than control participants, who were asked to use gesture instead of speech. It is important to note that limb apraxia may be a mediating factor here. These results suggest that in aphasia, gesture tends to degrade with verbal language. This may imply that the processes underlying verbal language and co-speech gesture production, although partly separate, are closely linked.

  1. Viability, invariance and applications

    CERN Document Server

    Carja, Ovidiu; Vrabie, Ioan I

    2007-01-01

    The book is an almost self-contained presentation of the most important concepts and results in viability and invariance. The viability of a set K with respect to a given function (or multi-function) F, defined on it, describes the property that, for each initial datum in K, the differential equation (or inclusion) driven by that function (or multi-function) has at least one solution. The invariance of a set K with respect to a function (or multi-function) F, defined on a larger set D, is the property that each solution of the differential equation (or inclusion) driven by F and issuing in K remains in K, at least for a short time. The book includes the most important necessary and sufficient conditions for viability, starting with Nagumo's Viability Theorem for ordinary differential equations with continuous right-hand sides and continuing with the corresponding extensions either to differential inclusions or to semilinear or even fully nonlinear evolution equations, systems and inclusions. In th...

  2. Permutationally invariant state reconstruction

    DEFF Research Database (Denmark)

    Moroder, Tobias; Hyllus, Philipp; Tóth, Géza

    2012-01-01

    Feasible tomography schemes for large particle numbers must possess, besides an appropriate data acquisition protocol, an efficient way to reconstruct the density operator from the observed finite data set. Since state reconstruction typically requires the solution of a nonlinear large-scale optimization problem, this is a major challenge in the design of scalable tomography schemes. Here we present an efficient state reconstruction scheme for permutationally invariant quantum state tomography. It works for all common state-of-the-art reconstruction principles, including, in particular, maximum likelihood and least squares methods, which are the preferred choices in today's experiments. This high efficiency is achieved by greatly reducing the dimensionality of the problem, employing a particular representation of permutationally invariant states known from spin coupling combined with convex optimization, which has clear advantages regarding speed, control and accuracy in comparison to commonly employed numerical routines. First prototype implementations easily allow reconstruction of a state of 20 qubits in a few minutes on a standard computer.

  3. Invariant Characteristics of Carcinogenesis.

    Directory of Open Access Journals (Sweden)

    Simon Sherman

    Full Text Available Carcinogenic modeling is aimed at mathematical descriptions of cancer development in aging. In this work, we assumed that a small fraction of individuals in the population is susceptible to cancer, while the rest of the population is resistant to cancer. For individuals susceptible to cancer we adopted methods of conditional survival analyses. We performed computational experiments using data on pancreatic, stomach, gallbladder, colon and rectum, liver, and esophagus cancers from the gastrointestinal system collected for men and women in the SEER registries during 1975-2009. In these experiments, we estimated the time period effects, the birth cohort effects, the age effects and the population (unconditional) cancer hazard rates. We also estimated the individual cancer presentation rates and the individual cancer resistance rates, which are, correspondingly, the hazard and survival rates conditioned on the susceptibility to cancer. The performed experiments showed that for men and women, patterns of the age effects, the individual cancer presentation rates and the individual cancer resistance rates are: (i) intrinsic for each cancer subtype, (ii) invariant to the place of living of the individuals diagnosed with cancer, and (iii) well adjusted for the modifiable variables averaged at a given time period. Such specificity and invariability of the age effects, the individual cancer presentation rates and the individual cancer resistance rates suggest that these carcinogenic characteristics can be useful for predictive carcinogenic studies by methods of inferential statistics and for the development of novel strategies for cancer prevention.

  4. Exploring the contribution of prosody and gesture to the perception of focus using an animated agent

    OpenAIRE

    Prieto Vives, Pilar, 1965-; Puglesi, Cecilia; Borràs Comes, Joan Manel, 1984-; Arroyo, Ernesto; Blat, Josep

    2015-01-01

    Speech prosody has traditionally been analyzed in terms of acoustic features. Although visual features have been shown to enhance linguistic processing, the conventional view is that facial and body gesture information in oral (non-signed) languages tends to be redundant and has the role of helping the hearer recover the meaning of an utterance. Though prosodic information in face-to-face communication is produced with concurrent visual information, little is known about their audiovisual mul...

  5. Distinguishing the communicative functions of gestures

    DEFF Research Database (Denmark)

    Jokinen, Kristiina; Navarretta, Costanza; Paggio, Patrizia

    2008-01-01

    This paper deals with the results of a machine learning experiment conducted on annotated gesture data from two case studies (Danish and Estonian). The data concern mainly facial displays, that are annotated with attributes relating to shape and dynamics, as well as communicative function...

  6. Making Pronunciation Visible: Gesture in Teaching Pronunciation

    Science.gov (United States)

    Smotrova, Tetyana

    2017-01-01

    The study examines the teacher and student gesture employed in teaching and learning suprasegmental features of second language (L2) pronunciation such as syllabification, word stress, and rhythm. It presents microanalysis of video-recorded classroom interactions occurring in a beginner-level reading class in an intensive English program at a U.S.…

  7. Talk and Gesture as Process Data

    Science.gov (United States)

    Maddox, Bryan

    2017-01-01

    This article discusses talk and gesture as neglected sources of process data (Maddox, 2015, Maddox and Zumbo, 2017). The significance of the article is the growing use of various sources of process data in computer-based testing (Ercikan and Pellegrino, (Eds.) 2017; Zumbo and Hubley, (Eds.) 2017). The use of process data on talk and gesture…

  8. The Gestural Theory of Language Origins

    Science.gov (United States)

    Armstrong, David F.

    2008-01-01

    The idea that iconic visible gesture had something to do with the origin of language, particularly speech, is a frequent element in speculation about this phenomenon and appears early in its history. Socrates hypothesizes about the origins of Greek words in Plato's satirical dialogue, "Cratylus", and his speculation includes a possible…

  9. A Prelinguistic Gestural Universal of Human Communication

    Science.gov (United States)

    Liszkowski, Ulf; Brown, Penny; Callaghan, Tara; Takada, Akira; de Vos, Conny

    2012-01-01

    Several cognitive accounts of human communication argue for a language-independent, prelinguistic basis of human communication and language. The current study provides evidence for the universality of a prelinguistic gestural basis for human communication. We used a standardized, semi-natural elicitation procedure in seven very different cultures…

  10. Aesthetic Challenges of Sonified Video Gestures

    Directory of Open Access Journals (Sweden)

    Michael Filimowicz

    2013-12-01

    Full Text Available This commentary responds to Jensenius and Godøy's paper, "Sonifying the shape of human body motion using motiongrams." Here I outline key aesthetic challenges to be addressed in future research in the sonification of video and gesture.

  11. Invariant and Absolute Invariant Means of Double Sequences

    Directory of Open Access Journals (Sweden)

    Abdullah Alotaibi

    2012-01-01

    Full Text Available We examine some properties of the invariant mean, define the concepts of strong σ-convergence and absolute σ-convergence for double sequences, and determine the associated sublinear functionals. We also define the absolute invariant mean through which the space of absolutely σ-convergent double sequences is characterized.

  12. Gestures make memories, but what kind? Patients with impaired procedural memory display disruptions in gesture production and comprehension.

    Science.gov (United States)

    Klooster, Nathaniel B; Cook, Susan W; Uc, Ergun Y; Duff, Melissa C

    2014-01-01

    Hand gesture, a ubiquitous feature of human interaction, facilitates communication. Gesture also facilitates new learning, benefiting speakers and listeners alike. Thus, gestures must impact cognition beyond simply supporting the expression of already-formed ideas. However, the cognitive and neural mechanisms supporting the effects of gesture on learning and memory are largely unknown. We hypothesized that gesture's ability to drive new learning is supported by procedural memory and that procedural memory deficits will disrupt gesture production and comprehension. We tested this proposal in patients with intact declarative memory, but impaired procedural memory as a consequence of Parkinson's disease (PD), and healthy comparison participants with intact declarative and procedural memory. In separate experiments, we manipulated the gestures participants saw and produced in a Tower of Hanoi (TOH) paradigm. In the first experiment, participants solved the task either on a physical board, requiring high arching movements to manipulate the discs from peg to peg, or on a computer, requiring only flat, sideways movements of the mouse. When explaining the task, healthy participants with intact procedural memory displayed evidence of their previous experience in their gestures, producing higher, more arching hand gestures after solving on a physical board, and smaller, flatter gestures after solving on a computer. In the second experiment, healthy participants who saw high arching hand gestures in an explanation prior to solving the task subsequently moved the mouse with significantly higher curvature than those who saw smaller, flatter gestures prior to solving the task. These patterns were absent in both gesture production and comprehension experiments in patients with procedural memory impairment. These findings suggest that the procedural memory system supports the ability of gesture to drive new learning.

  13. Mobius invariant QK spaces

    CERN Document Server

    Wulan, Hasi

    2017-01-01

    This monograph summarizes the recent major achievements in Möbius invariant QK spaces. First introduced by Hasi Wulan and his collaborators, the theory of QK spaces has developed immensely in the last two decades, and the topics covered in this book will be helpful to graduate students and new researchers interested in the field. Featuring a wide range of subjects, including an overview of QK spaces, QK-Teichmüller spaces, K-Carleson measures and analysis of weight functions, this book serves as an important resource for analysts interested in this area of complex analysis. Notes, numerous exercises, and a comprehensive up-to-date bibliography provide an accessible entry to anyone with a standard graduate background in real and complex analysis.

  14. Do Verbal Children with Autism Comprehend Gesture as Readily as Typically Developing Children?

    Science.gov (United States)

    Dimitrova, Nevena; Özçaliskan, Seyda; Adamson, Lauren B.

    2017-01-01

    Gesture comprehension remains understudied, particularly in children with autism spectrum disorder (ASD) who have difficulties in gesture production. Using a novel gesture comprehension task, Study 1 examined how 2- to 4-year-old typically-developing (TD) children comprehend types of gestures and gesture-speech combinations, and showed better…

  15. Effects of Learning with Gesture on Children's Understanding of a New Language Concept

    Science.gov (United States)

    Wakefield, Elizabeth M.; James, Karin H.

    2015-01-01

    Asking children to gesture while being taught a concept facilitates their learning. Here, we investigated whether children benefitted equally from producing gestures that reflected speech (speech-gesture matches) versus gestures that complemented speech (speech-gesture mismatches), when learning the concept of palindromes. As in previous studies,…

  16. To Beat or Not to Beat: Beat Gestures in Direction Giving

    NARCIS (Netherlands)

    Theune, Mariet; Brandhorst, Chris J.; Kopp, S.; Wachsmuth, I.

    2010-01-01

    Research on gesture generation for embodied conversational agents (ECA’s) mostly focuses on gesture types such as pointing and iconic gestures, while ignoring another gesture type frequently used by human speakers: beat gestures. Analysis of a corpus of route descriptions showed that although

  17. Gestural development and its relation to a child's early vocabulary.

    Science.gov (United States)

    Kraljević, Jelena Kuvač; Cepanec, Maja; Simleša, Sanja

    2014-05-01

    Gesture and language are tightly connected during the development of a child's communication skills. Gestures mostly precede and shape the course of language development, though influence in the opposite direction has also been found. A few recent studies have focused on the relationship between specific gestures and specific word categories, emphasising that the onset of one gesture type predicts the onset of certain word categories or of the earliest word combinations. The aim of this study was to analyse the predictive roles of different gesture types on the onset of first word categories in a child's early expressive vocabulary. Our data show that different types of gestures predict different types of word production. Object gestures predict open-class words from the age of 13 months, and gestural routines predict closed-class words and social terms from 8 months. Receptive vocabulary has a strong mediating role for all linguistically defined categories (open- and closed-class words) but not for social terms, which are the largest word category in a child's early expressive vocabulary. Accordingly, the main contribution of this study is to define the impact of different gesture types on early expressive vocabulary and to determine the role of receptive vocabulary in the gesture-expressive vocabulary relation in the Croatian language. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. Gesture's role in speaking, learning, and creating language.

    Science.gov (United States)

    Goldin-Meadow, Susan; Alibali, Martha Wagner

    2013-01-01

    When speakers talk, they gesture. The goal of this review is to investigate the contribution that these gestures make to how we communicate and think. Gesture can play a role in communication and thought at many timespans. We explore, in turn, gesture's contribution to how language is produced and understood in the moment; its contribution to how we learn language and other cognitive skills; and its contribution to how language is created over generations, over childhood, and on the spot. We find that the gestures speakers produce when they talk are integral to communication and can be harnessed in a number of ways. (a) Gesture reflects speakers' thoughts, often their unspoken thoughts, and thus can serve as a window onto cognition. Encouraging speakers to gesture can thus provide another route for teachers, clinicians, interviewers, etc., to better understand their communication partners. (b) Gesture can change speakers' thoughts. Encouraging gesture thus has the potential to change how students, patients, witnesses, etc., think about a problem and, as a result, alter the course of learning, therapy, or an interchange. (c) Gesture provides building blocks that can be used to construct a language. By watching how children and adults who do not already have a language put those blocks together, we can observe the process of language creation. Our hands are with us at all times and thus provide researchers and learners with an ever-present tool for understanding how we talk and think.

  19. Musical Shaping Gestures: Considerations about Terminology and Methodology

    Directory of Open Access Journals (Sweden)

    Elaine King

    2013-12-01

    Full Text Available Fulford and Ginsborg's investigation into non-verbal communication during music rehearsal-talk between performers with and without hearing impairments extends existing research in the field of gesture studies by contributing significantly to our understanding of musicians' physical gestures as well as opening up discussion about the relationship between speech, sign and gesture in discourse about music. Importantly, the authors weigh up the possibility of an emerging sign language about music. This commentary focuses on three key considerations in response to their paper: first, use of terminology in the study of gesture, specifically about 'musical shaping gestures' (MSGs; second, methodological issues about capturing physical gestures; and third, evaluation of the application of gesture research beyond the rehearsal context. While the difficulties of categorizing gestures in observational research are acknowledged, I indicate that the consistent application of terminology from outside and within the study is paramount. I also suggest that the classification of MSGs might be based upon a set of observed physical characteristics within a single gesture, including size, duration, speed, plane and handedness, leading towards an alternative taxonomy for interpreting these data. Finally, evaluation of the application of gesture research in education and performance arenas is provided.

  20. Meaningful gesture in monkeys? Investigating whether mandrills create social culture.

    Directory of Open Access Journals (Sweden)

    Mark E Laidre

    Full Text Available BACKGROUND: Human societies exhibit a rich array of gestures with cultural origins. Often these gestures are found exclusively in local populations, where their meaning has been crafted by a community into a shared convention. In nonhuman primates like African monkeys, little evidence exists for such culturally-conventionalized gestures. METHODOLOGY/PRINCIPAL FINDINGS: Here I report a striking gesture unique to a single community of mandrills (Mandrillus sphinx) among nineteen studied across North America, Africa, and Europe. The gesture was found within a community of 23 mandrills where individuals old and young, female and male covered their eyes with their hands for periods which could exceed 30 min, often while simultaneously raising their elbow prominently into the air. This 'Eye covering' gesture has been performed within the community for a decade, enduring deaths, removals, and births, and it persists into the present. Differential responses to Eye covering versus controls suggested that the gesture might have a locally-respected meaning, potentially functioning over a distance to inhibit interruptions as a 'do not disturb' sign operates. CONCLUSIONS/SIGNIFICANCE: The creation of this gesture by monkeys suggests that the ability to cultivate shared meanings using novel manual acts may be distributed more broadly beyond the human species. Although logistically difficult with primates, the translocation of gesturers between communities remains critical to experimentally establishing the possible cultural origin and transmission of nonhuman gestures.

  1. Speech and gesture share the same communication system.

    Science.gov (United States)

    Bernardis, Paolo; Gentilucci, Maurizio

    2006-01-01

    Humans speak and produce symbolic gestures. Do these two forms of communication interact, and how? First, we tested whether the two communication signals influenced each other when emitted simultaneously. Participants either pronounced words, or executed symbolic gestures, or emitted the two communication signals simultaneously. Relative to the unimodal conditions, multimodal voice spectra were enhanced by gestures, whereas multimodal gesture parameters were reduced by words. In other words, gesture reinforced word, whereas word inhibited gesture. In contrast, aimless arm movements and pseudo-words had no comparable effects. Next, we tested whether observing word pronunciation during gesture execution affected verbal responses in the same way as emitting the two signals. Participants responded verbally to either spoken words, or to gestures, or to the simultaneous presentation of the two signals. We observed the same reinforcement in the voice spectra as during simultaneous emission. These results suggest that spoken word and symbolic gesture are coded as single signal by a unique communication system. This signal represents the intention to engage a closer interaction with a hypothetical interlocutor and it may have a meaning different from when word and gesture are encoded singly.

  2. Impaired gesture performance in schizophrenia: particular vulnerability of meaningless pantomimes.

    Science.gov (United States)

    Walther, Sebastian; Vanbellingen, Tim; Müri, René; Strik, Werner; Bohlhalter, Stephan

    2013-11-01

    Schizophrenia patients frequently present with subtle motor impairments, including higher order motor function such as hand gesture performance. Using cut-off scores from a standardized gesture test, we previously reported gesture deficits in 40% of schizophrenia patients irrespective of the gesture content. However, these findings were based on normative data from an older control group. Hence, we now aimed at determining cut-off scores in an age- and gender-matched control group. Furthermore, we wanted to explore whether gesture categories are differentially affected in schizophrenia. Gesture performance data of 30 schizophrenia patients and data from 30 matched controls were compared. Categories included meaningless, intransitive (communicative) and transitive (object related) hand gestures, which were either imitated or pantomimed, i.e. produced on verbal command. Cut-off scores of the age-matched control group were higher than the previous cut-off scores in an older control group. An ANOVA tested effects of group, domain (imitation or pantomime), and semantic category (meaningless, transitive or intransitive), as well as their interaction. According to the new cut-off scores, 67% of the schizophrenia patients demonstrated gestural deficits. Patients performed worse in all gesture categories; however, meaningless gestures on verbal command were particularly impaired (p = 0.008). This category correlated with poor frontal lobe function. Thus, gestural deficits in schizophrenia are even more frequent than previously reported. Gesture categories that pose higher demands on planning and selection, such as pantomime of meaningless gestures, are predominantly affected and associated with the well-known frontal lobe dysfunction. Copyright © 2013 Elsevier Ltd. All rights reserved.

  3. Finite type invariants and fatgraphs

    DEFF Research Database (Denmark)

    Andersen, Jørgen Ellegaard; Bene, Alex; Meilhan, Jean-Baptiste Odet Thierry

    2010-01-01

    –Murakami–Ohtsuki of the link invariant of Andersen–Mattes–Reshetikhin computed relative to choices determined by the fatgraph G; this provides a basic connection between 2d geometry and 3d quantum topology. For each fixed G, this invariant is shown to be universal for homology cylinders, i.e., G establishes an isomorphism...

  4. Action-dependent perceptual invariants: from ecological to sensorimotor approaches.

    Science.gov (United States)

    Mossio, Matteo; Taraborelli, Dario

    2008-12-01

    Ecological and sensorimotor theories of perception build on the notion of action-dependent invariants as the basic structures underlying perceptual capacities. In this paper we contrast the assumptions these theories make on the nature of perceptual information modulated by action. By focusing on the question, how movement specifies perceptual information, we show that ecological and sensorimotor theories endorse substantially different views about the role of action in perception. In particular we argue that ecological invariants are characterized with reference to transformations produced in the sensory array by movement: such invariants are transformation-specific but do not imply motor-specificity. In contrast, sensorimotor theories assume that perceptual invariants are intrinsically tied to specific movements. We show that this difference leads to different empirical predictions and we submit that the distinction between motor equivalence and motor-specificity needs further clarification in order to provide a more constrained account of action/perception relations.

  5. 3D Hand Gesture Analysis through a Real-Time Gesture Search Engine

    Directory of Open Access Journals (Sweden)

    Shahrouz Yousefi

    2015-06-01

    Full Text Available 3D gesture recognition and tracking are highly desired features of interaction design in future mobile and smart environments. Specifically, in virtual/augmented reality applications, intuitive interaction with the physical space seems unavoidable and 3D gestural interaction might be the most effective alternative for the current input facilities such as touchscreens. In this paper, we introduce a novel solution for real-time 3D gesture-based interaction by finding the best match from an extremely large gesture database. This database includes images of various articulated hand gestures with the annotated 3D position/orientation parameters of the hand joints. Our unique matching algorithm is based on the hierarchical scoring of the low-level edge-orientation features between the query frames and database and retrieving the best match. Once the best match is found from the database in each moment, the pre-recorded 3D motion parameters can instantly be used for natural interaction. The proposed bare-hand interaction technology performs in real time with high accuracy using an ordinary camera.
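    The core retrieval idea in this record — scoring low-level edge-orientation features between a query frame and database images, then returning the best match — can be sketched compactly. The sketch below is a toy illustration, not the authors' engine: it reduces each image to a coarse gradient-orientation histogram and retrieves the database entry with the smallest L1 histogram distance, whereas the real system scores hierarchically over a very large annotated database.

```python
import numpy as np

def edge_orientation_hist(img, bins=8, thresh=0.1):
    """Coarse histogram of gradient orientations (toy edge-orientation feature)."""
    gy, gx = np.gradient(img.astype(float))   # axis 0 = rows (y), axis 1 = cols (x)
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi          # orientation (mod pi), not direction
    mask = mag > thresh * (mag.max() + 1e-9)  # keep strong edges only
    hist, _ = np.histogram(ang[mask], bins=bins, range=(0.0, np.pi),
                           weights=mag[mask])
    return hist / (hist.sum() + 1e-9)

def best_match(query, database):
    """Index of the database image whose feature is closest to the query (L1)."""
    q = edge_orientation_hist(query)
    scores = [np.abs(q - edge_orientation_hist(d)).sum() for d in database]
    return int(np.argmin(scores))
```

    Once the best match is found, the 3D pose annotation stored with that database entry would be looked up and used directly, which is what makes the retrieval formulation fast at interaction time.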

  6. Physical Invariants of Intelligence

    Science.gov (United States)

    Zak, Michail

    2010-01-01

    A program of research is dedicated to development of a mathematical formalism that could provide, among other things, means by which living systems could be distinguished from non-living ones. A major issue that arises in this research is the following question: What invariants of mathematical models of the physics of systems are (1) characteristic of the behaviors of intelligent living systems and (2) do not depend on specific features of material compositions heretofore considered to be characteristic of life? This research at earlier stages has been reported, albeit from different perspectives, in numerous previous NASA Tech Briefs articles. To recapitulate: One of the main underlying ideas is to extend the application of physical first principles to the behaviors of living systems. Mathematical models of motor dynamics are used to simulate the observable physical behaviors of systems or objects of interest, and models of mental dynamics are used to represent the evolution of the corresponding knowledge bases. For a given system, the knowledge base is modeled in the form of probability distributions and the mental dynamics is represented by models of the evolution of the probability densities or, equivalently, models of flows of information. At the time of reporting the information for this article, the focus of this research was upon the following aspects of the formalism: Intelligence is considered to be a means by which a living system preserves itself and improves its ability to survive and is further considered to manifest itself in feedback from the mental dynamics to the motor dynamics. Because of the feedback from the mental dynamics, the motor dynamics attains quantum-like properties: The trajectory of the physical aspect of the system in the space of dynamical variables splits into a family of different trajectories, and each of those trajectories can be chosen with a probability prescribed by the mental dynamics. From a slightly different perspective...

  7. Eye movement-invariant representations in the human visual system.

    Science.gov (United States)

    Nishimoto, Shinji; Huth, Alexander G; Bilenko, Natalia Y; Gallant, Jack L

    2017-01-01

    During natural vision, humans make frequent eye movements but perceive a stable visual world. It is therefore likely that the human visual system contains representations of the visual world that are invariant to eye movements. Here we present an experiment designed to identify visual areas that might contain eye-movement-invariant representations. We used functional MRI to record brain activity from four human subjects who watched natural movies. In one condition subjects were required to fixate steadily, and in the other they were allowed to freely make voluntary eye movements. The movies used in each condition were identical. We reasoned that the brain activity recorded in a visual area that is invariant to eye movement should be similar under fixation and free viewing conditions. In contrast, activity in a visual area that is sensitive to eye movement should differ between fixation and free viewing. We therefore measured the similarity of brain activity across repeated presentations of the same movie within the fixation condition, and separately between the fixation and free viewing conditions. The ratio of these measures was used to determine which brain areas are most likely to contain eye movement-invariant representations. We found that voxels located in early visual areas are strongly affected by eye movements, while voxels in ventral temporal areas are only weakly affected by eye movements. These results suggest that the ventral temporal visual areas contain a stable representation of the visual world that is invariant to eye movements made during natural vision.
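    The comparison logic described above — within-condition repeat similarity versus fixation/free-viewing similarity, per voxel — can be sketched with simple time-course correlations. This is an illustrative reimplementation under stated assumptions, not the paper's exact pipeline: the function name `invariance_index` and the use of Pearson correlation over voxel time courses are my own choices for the sketch.

```python
import numpy as np

def invariance_index(fix_a, fix_b, free):
    """Per-voxel ratio of cross-condition to within-condition similarity.

    fix_a, fix_b : (time, voxels) responses to the same movie, fixation condition
    free         : (time, voxels) responses to the same movie, free viewing

    A ratio near 1 suggests an eye-movement-invariant representation;
    a ratio near 0 suggests sensitivity to eye movements.
    """
    def corr(x, y):  # column-wise Pearson correlation
        xz = (x - x.mean(0)) / (x.std(0) + 1e-9)
        yz = (y - y.mean(0)) / (y.std(0) + 1e-9)
        return (xz * yz).mean(0)

    within = corr(fix_a, fix_b)                # repeat reliability under fixation
    across = corr((fix_a + fix_b) / 2, free)   # fixation vs. free viewing
    return across / (within + 1e-9)
```

    On this logic, early visual voxels (whose responses are retinotopic and hence perturbed by gaze shifts) would score low, while ventral temporal voxels would score high, matching the pattern the authors report.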

  8. A Hierarchical Model for Continuous Gesture Recognition Using Kinect

    DEFF Research Database (Denmark)

    Jensen, Søren Kejser; Moesgaard, Christoffer; Nielsen, Christoffer Samuel

    2013-01-01

    Human gesture recognition is an area which has been studied thoroughly in recent years, and close to 100% recognition rates in restricted environments have been achieved, often either with single separated gestures in the input stream or with computationally intensive systems. The results are unfortunately not as striking when it comes to a continuous stream of gestures. In this paper we introduce a hierarchical system for gesture recognition for use in a gaming setting, with a continuous stream of data. Layer 1 is based on Nearest Neighbor Search and layer 2 uses Hidden Markov Models. The system also proposes a way of attributing recognised gestures with a force attribute, for use in gaming. The recognition rate in layer 1 is 68.2%, with an even higher rate for simple gestures. Layer 2 reduces the noise and has an average recognition rate of 85.1%. When some simple constraints are added we reach...
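    The two-layer architecture in this record — a nearest-neighbour layer that labels incoming data, followed by a Hidden Markov Model layer that cleans up the noisy label stream — can be sketched as follows. This is a minimal sketch, not the paper's implementation: the templates, label names, and the sticky-transition/noisy-channel HMM parameters are illustrative assumptions.

```python
import numpy as np

def layer1_labels(frames, templates, labels):
    """Layer 1: label each frame by its nearest template (Euclidean NN)."""
    out = []
    for f in frames:
        d = [np.linalg.norm(f - t) for t in templates]
        out.append(labels[int(np.argmin(d))])
    return out

def layer2_viterbi(obs, n_states, p_stay=0.9, p_correct=0.8):
    """Layer 2: smooth a noisy label stream with a simple HMM (Viterbi).

    Sticky transitions (p_stay) plus a noisy-channel emission model
    (p_correct) remove isolated misclassifications from layer 1.
    """
    p_switch = (1 - p_stay) / (n_states - 1)
    p_wrong = (1 - p_correct) / (n_states - 1)
    logA = np.log(np.full((n_states, n_states), p_switch)
                  + np.eye(n_states) * (p_stay - p_switch))
    logB = np.log(np.full((n_states, n_states), p_wrong)
                  + np.eye(n_states) * (p_correct - p_wrong))
    v = np.log(np.full(n_states, 1.0 / n_states)) + logB[:, obs[0]]
    back = []
    for o in obs[1:]:
        scores = v[:, None] + logA        # scores[i, j]: best path ending i -> j
        back.append(scores.argmax(0))
        v = scores.max(0) + logB[:, o]
    path = [int(v.argmax())]
    for b in reversed(back):
        path.append(int(b[path[-1]]))
    return path[::-1]
```

    With this division of labour, a single misclassified frame inside a long gesture is cheaper to explain as an emission error than as two state switches, which is why the second layer raises the recognition rate over layer 1 alone.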

  9. Recognition of Gestures using Artifical Neural Network

    Directory of Open Access Journals (Sweden)

    Marcel MORE

    2013-12-01

    Full Text Available Sensors for motion measurement are now becoming more widespread. Thanks to their parameters and affordability, they are already used not only in the professional sector but also in devices intended for daily use or entertainment. One of their applications is the control of devices by gestures. Systems that can determine the type of gesture from measured motion have many uses, for example in medical practice, but they are more often found in devices such as cell phones, where they serve as a non-standard form of input. Several approaches to this problem already exist, but building a sufficiently reliable system is still a challenging task. In our project we are developing a solution based on an artificial neural network. Unlike other solutions, this one does not require building a model for each measuring system, and thus it can be used in combination with various sensors with only minimal changes to its structure.
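    The record's key claim — that a network learns the mapping from motion features to gesture classes directly from data, with no per-sensor model — can be illustrated with a tiny feed-forward classifier. Everything below is a self-contained sketch, not the authors' network: the one-hidden-layer architecture, the synthetic 2-D "motion feature" clusters, and all hyperparameters are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_mlp(X, y, n_classes, hidden=16, lr=0.5, epochs=600):
    """Tiny one-hidden-layer network trained with softmax cross-entropy."""
    n, d = X.shape
    W1 = rng.normal(0, 0.5, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, n_classes)); b2 = np.zeros(n_classes)
    Y = np.eye(n_classes)[y]                      # one-hot targets
    for _ in range(epochs):
        H = np.tanh(X @ W1 + b1)                  # hidden activations
        Z = H @ W2 + b2
        P = np.exp(Z - Z.max(1, keepdims=True))
        P /= P.sum(1, keepdims=True)              # softmax probabilities
        dZ = (P - Y) / n                          # cross-entropy gradient
        dH = (dZ @ W2.T) * (1 - H ** 2)           # backprop through tanh
        W2 -= lr * H.T @ dZ; b2 -= lr * dZ.sum(0)
        W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(0)
    return lambda x: int(np.argmax(np.tanh(x @ W1 + b1) @ W2 + b2))
```

    The same training loop works unchanged whatever sensor produced the feature vectors, which is the portability argument the abstract makes for the neural-network approach.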

  10. Human computer interaction using hand gestures

    CERN Document Server

    Premaratne, Prashan

    2014-01-01

    Human computer interaction (HCI) plays a vital role in bridging the 'Digital Divide', bringing people closer to consumer electronics control in the 'lounge'. Keyboards, mice, and remotes alienate old and new generations alike from control interfaces. Hand gesture recognition systems bring hope of connecting people with machines in a natural way. This will lead to consumers being able to use their hands naturally to communicate with any electronic equipment in their 'lounge'. This monograph covers state-of-the-art hand gesture recognition approaches and how they have evolved since their inception. The author also details his research in this area over the past 8 years and how the future of HCI might unfold. This monograph will serve as a valuable guide for researchers venturing into the world of HCI.

  11. Working memory for meaningless manual gestures.

    Science.gov (United States)

    Rudner, Mary

    2015-03-01

    Effects on working memory performance relating to item similarity have been linked to prior categorisation of representations in long-term memory. However, there is evidence from gesture processing that this link may not be obligatory. The present study investigated whether working memory for incidentally generated meaningless manual gestures is influenced by formational similarity and whether this effect is modulated by working-memory load. Results showed that formational similarity did lower performance, demonstrating that similarity effects are not dependent on prior categorisation. However, this effect was only found when working-memory load was low, supporting a flexible resource allocation model according to which it is the quality rather than quantity of working memory representations that determines performance. This interpretation is in line with proposals suggesting language modality specific allocation of resources in working memory. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  12. From facial expressions to bodily gestures

    Science.gov (United States)

    2016-01-01

    This article aims to determine to what extent photographic practices in psychology, psychiatry and physiology contributed to the definition of the external bodily signs of passions and emotions in the second half of the 19th century in France. Bridging the gap between recent research in the history of emotions and photographic history, the following analyses focus on the photographic production of scientists and photographers who made significant contributions to the study of expressions and gestures, namely Duchenne de Boulogne, Charles Darwin, Paul Richer and Albert Londe. This article argues that photography became a key technology in their work because the exposure times of different cameras matched the duration of the bodily manifestations to be recorded, and that these uses constituted facial expressions and bodily gestures as particular objects of scientific study. PMID:26900264

  13. Conceptual Motorics - Generation and Evaluation of Communicative Robot Gesture

    OpenAIRE

    Salem, Maha

    2012-01-01

    How is communicative gesture behavior in robots perceived by humans? Although gesture is a crucial feature of social interaction, this research question is still largely unexplored in the field of social robotics. The present work thus sets out to investigate how robot gesture can be used to design and realize more natural and human-like communication capabilities for social robots. The adopted approach is twofold. Firstly, the technical challenges encountered when implementing a speech-g...

  14. Musical Shaping Gestures: Considerations about Terminology and Methodology

    OpenAIRE

    Elaine King

    2013-01-01

    Fulford and Ginsborg's investigation into non-verbal communication during music rehearsal-talk between performers with and without hearing impairments extends existing research in the field of gesture studies by contributing significantly to our understanding of musicians' physical gestures as well as opening up discussion about the relationship between speech, sign and gesture in discourse about music. Importantly, the authors weigh up the possibility of an emerging sign language about music...

  15. Adult Gesture in Collaborative Mathematics Reasoning in Different Ages

    Science.gov (United States)

    Noto, M. S.; Harisman, Y.; Harun, L.; Amam, A.; Maarif, S.

    2017-09-01

    This article describes a case study of postgraduate students using a descriptive method. A problem was designed to facilitate reasoning on the topic of the Chi-Square test. The problem was given to two male students of different ages to investigate their gesture patterns and relate them to their reasoning process. The indicators in the reasoning problem are drawing conclusions by analogy and generalization, and arranging conjectures. This study addresses the question of whether gesture is unique to every individual, and aims to identify the patterns of gesture used by students of different ages. A reasoning problem was employed to collect the data. The two students were asked to collaborate in reasoning about the problem. The discussion process was recorded on video to observe the gestures, and the recordings are described in detail in this article. Prosodic cues such as timing, the conversation text, and the gestures that appear may help in understanding the gestures. The purpose of this study is to investigate whether a difference in age influences maturity in collaboration as observed from a gesture perspective. The findings show that age is not a primary factor influencing gesture in the reasoning process. In this case, adult gesture, i.e. gesture performed by the older student, does not show that he achieves, maintains, or focuses on the problem earlier. Adult gesture also does not strengthen or expand the meaning if the older student's words or the language used in reasoning is not familiar to the younger student. Nor does adult gesture affect cognitive uncertainty in mathematical reasoning. Future research is suggested to take more samples to test the consistency of these statements.

  16. Interaction with images using hand gestures

    OpenAIRE

    Basnet, Suman

    2016-01-01

    The main objective of this Final Year Project (FYP) is to develop a prototype of an embedded system with which any user can control the flow of images in the Graphical User Interface (GUI). This report starts with the working mechanism of the sensors, where the gesture recognition pattern of the sensors is discussed. Then, the hardware and software requirements are listed, with descriptions of their features, under the system specifications heading. Eventually, stepwise elaboration of two different phases...

  17. Designing Gestural Interfaces Touchscreens and Interactive Devices

    CERN Document Server

    Saffer, Dan

    2008-01-01

    If you want to get started in the new era of interaction design, this is the reference you need. Packed with informative illustrations and photos, Designing Gestural Interfaces provides you with essential information about kinesiology, sensors, ergonomics, physical computing, touchscreen technology, and new interface patterns -- information you need to augment your existing skills in traditional websites, software, or product development. This book will help you enter this new world of possibilities.

  18. Integration Head Mounted Display Device and Hand Motion Gesture Device for Virtual Reality Laboratory

    Science.gov (United States)

    Rengganis, Y. A.; Safrodin, M.; Sukaridhoto, S.

    2018-01-01

    Virtual Reality Laboratory (VR Lab) is an innovation over conventional learning media which shows the whole learning process in a laboratory. Many tools and materials are needed by the user for practical work in it, so the user can experience a new learning atmosphere through this innovation. Nowadays, technologies are more sophisticated than before, and applying them in education can make it more effective and efficient. The supporting technologies needed to build the VR Lab are a head-mounted display device and a hand motion gesture device, and their integration forms the basis of this research. The head-mounted display device is used for viewing the 3D environment of the virtual reality laboratory. The hand motion gesture device captures the user's real hand, which is then visualized in the virtual reality laboratory. Virtual reality shows that using the newest technologies in the learning process can make it more interesting and easier to understand.

  19. Hand gesture recognition based on convolutional neural networks

    Science.gov (United States)

    Hu, Yu-lu; Wang, Lian-ming

    2017-11-01

    Hand gestures have been considered a natural, intuitive and less intrusive way for Human-Computer Interaction (HCI). Although many algorithms for hand gesture recognition have been proposed in the literature, robust algorithms are still being pursued. A recognition algorithm based on convolutional neural networks is proposed to recognize ten kinds of hand gestures, which include rotation and turnover samples acquired from different persons. When 6000 hand gesture images were used as training samples and 1100 as testing samples, a 98% recognition rate was achieved with the convolutional neural networks, which is higher than that of some other frequently-used recognition algorithms.
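    The building blocks of such a network can be sketched directly. The following toy example (not the paper's architecture) implements the core convolution, ReLU, and max-pooling operations in NumPy on a stand-in 6x6 "gesture image"; the image and kernel are illustrative assumptions:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation, the core CNN operation."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def max_pool(x, size=2):
    """Downsample a feature map by taking the max over size x size tiles."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h*size, :w*size].reshape(h, size, w, size).max(axis=(1, 3))

img = np.arange(36, dtype=float).reshape(6, 6)   # stand-in gesture image
edge = np.array([[-1.0, 1.0]])                   # horizontal-gradient kernel
feat = np.maximum(conv2d(img, edge), 0)          # ReLU feature map
pooled = max_pool(feat)
print(pooled.shape)                              # (3, 2)
```

    A real recognizer such as the one in this record stacks many such convolution/pooling layers, learns the kernels from the training images, and ends in a fully connected softmax over the ten gesture classes.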

  20. Spatial (mis-)interpretation of pointing gestures to distal referents.

    Science.gov (United States)

    Herbort, Oliver; Kunde, Wilfried

    2016-01-01

    Pointing gestures are a vital aspect of human communication. Nevertheless, observers consistently fail to determine the exact location to which another person points when that location lies in the distance. Here we explore the reasons for this misunderstanding. Humans usually point by extending the arm and finger. We show that observers interpret these gestures by nonlinear extrapolation of the pointer's arm-finger line. The nonlinearity can be adequately described as the Bayesian-optimal integration of a linear extrapolation of the arm-finger line and observers' prior assumptions about likely referent positions. Surprisingly, the spatial rule describing the interpretation of pointing gestures differed from the rules describing the production of these gestures. In the latter case, the eye, index finger, and referent were aligned. We show that the differences in the production and interpretation of pointing gestures account for the systematic spatial misunderstanding of pointing gestures to distant referents. No evidence was found for the hypothesis that action-related processes are involved in the perception of pointing gestures. How participants interpreted pointing gestures was independent of how they produced these gestures and of whether they had practiced pointing movements before. By contrast, both production and interpretation seem to be primarily determined by salient visual cues. (c) 2015 APA, all rights reserved.
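    Bayesian-optimal integration of two cues has a standard closed form when both are modelled as Gaussians: the fused estimate is a precision-weighted average. The sketch below illustrates that general rule with made-up numbers, not the paper's model or data:

```python
def integrate_gaussian_cues(mu_ext, var_ext, mu_prior, var_prior):
    """Precision-weighted fusion of a linear-extrapolation estimate
    and a Gaussian prior over likely referent positions."""
    w = (1 / var_ext) / (1 / var_ext + 1 / var_prior)
    mu = w * mu_ext + (1 - w) * mu_prior
    var = 1 / (1 / var_ext + 1 / var_prior)
    return mu, var

# Hypothetical numbers: the extrapolated arm-finger line suggests 10 m,
# while the prior over referent positions is centred at 6 m but vaguer.
mu, var = integrate_gaussian_cues(10.0, 1.0, 6.0, 4.0)
print(round(mu, 2))  # -> 9.2, pulled from 10 toward the prior
```

    Because the less certain cue gets the smaller weight, the fused estimate stays close to the extrapolated line but is biased toward the prior, producing the kind of systematic nonlinear deviation the abstract describes.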

  1. Gesturing more diminishes recall of abstract words when gesture is allowed and concrete words when it is taboo.

    Science.gov (United States)

    Matthews-Saugstad, Krista M; Raymakers, Erik P; Kelty-Stephen, Damian G

    2017-07-01

    Gesture during speech can promote or diminish recall for conversation content. We explored effects of cognitive load on this relationship, manipulating it at two scales: individual-word abstractness and social constraints to prohibit gestures. Prohibited gestures can diminish recall but more so for abstract-word recall. Insofar as movement planning adds to cognitive load, movement amplitude may moderate gesture effects on memory, with greater permitted- and prohibited-gesture movements reducing abstract-word recall and concrete-word recall, respectively. We tested these effects in a dyadic game in which 39 adult participants described words to confederates without naming the word or five related words. Results supported our expectations and indicated that memory effects of gesturing depend on social, cognitive, and motoric aspects of discourse.

  2. On density of the Vassiliev invariants

    DEFF Research Database (Denmark)

    Røgen, Peter

    1999-01-01

    The main result is that the Vassiliev invariants are dense in the set of numeric knot invariants if and only if they separate knots. Keywords: Knots, Vassiliev invariants, separation, density, torus knots.

  3. Invariant and semi-invariant probabilistic normed spaces

    Energy Technology Data Exchange (ETDEWEB)

    Ghaemi, M.B. [School of Mathematics Iran, University of Science and Technology, Narmak, Tehran (Iran, Islamic Republic of)], E-mail: mghaemi@iust.ac.ir; Lafuerza-Guillen, B. [Departamento de Estadistica y Matematica Aplicada, Universidad de Almeria, Almeria E-04120 (Spain)], E-mail: blafuerz@ual.es; Saiedinezhad, S. [School of Mathematics Iran, University of Science and Technology, Narmak, Tehran (Iran, Islamic Republic of)], E-mail: ssaiedinezhad@yahoo.com

    2009-10-15

    Probabilistic metric spaces were introduced by Karl Menger. Alsina, Schweizer and Sklar gave a general definition of probabilistic normed space based on the definition of Menger. We introduce the concept of semi-invariance among the PN spaces. In this paper we find a sufficient condition for some PN spaces to be semi-invariant. We show that PN spaces are normal spaces, and Urysohn's lemma and the Tietze extension theorem for them are proved.

  4. Non-formal Therapy and Learning Potentials through Human Gesture Synchronised to Robotic Gesture

    DEFF Research Database (Denmark)

    Petersson, Eva; Brooks, Tony

    2007-01-01

    for use as a supplement to traditional rehabilitation therapy sessions. The process involves the capturing of gesture data through an intuitive non-intrusive interface. The interface is invisible to the naked eye and offers a direct and immediate association between the child's physical feed-forward gesture and the physical reaction (feedback) of the robotic device. Results from multiple sessions with four children with severe physical disability suggest that the potential of non-intrusive interaction with a multimedia robotic device that is capable of giving synchronized physical response offers...

  5. Invariant measures in brain dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Boyarsky, Abraham [Department of Mathematics and Statistics, Concordia University, 7141 Sherbrooke Street West, Montreal, Quebec H4B 1R6 (Canada)]. E-mail: boyar@alcor.concordia.ca; Gora, Pawel [Department of Mathematics and Statistics, Concordia University, 1455 de Maisonneuve Blvd. West, Montreal, Quebec H3G 1M8 (Canada)]. E-mail: pgora@vax2.concordia.ca

    2006-10-02

    This note concerns brain activity at the level of neural ensembles and uses ideas from ergodic dynamical systems to model and characterize chaotic patterns among these ensembles during conscious mental activity. Central to our model is the definition of a space of neural ensembles and the assumption of discrete time ensemble dynamics. We argue that continuous invariant measures draw the attention of deeper brain processes, engendering emergent properties such as consciousness. Invariant measures supported on a finite set of ensembles reflect periodic behavior, whereas the existence of continuous invariant measures reflects the dynamics of nonrepeating ensemble patterns that elicit the interest of deeper mental processes. We shall consider two different ways to achieve continuous invariant measures on the space of neural ensembles: (1) via quantum jitters, and (2) via sensory input accompanied by inner thought processes which engender a 'folding' property on the space of ensembles.

  6. The invariant theory of matrices

    CERN Document Server

    Concini, Corrado De

    2017-01-01

    This book gives a unified, complete, and self-contained exposition of the main algebraic theorems of invariant theory for matrices in a characteristic free approach. More precisely, it contains the description of polynomial functions in several variables on the set of m\\times m matrices with coefficients in an infinite field or even the ring of integers, invariant under simultaneous conjugation. Following Hermann Weyl's classical approach, the ring of invariants is described by formulating and proving the first fundamental theorem that describes a set of generators in the ring of invariants, and the second fundamental theorem that describes relations between these generators. The authors study both the case of matrices over a field of characteristic 0 and the case of matrices over a field of positive characteristic. While the case of characteristic 0 can be treated following a classical approach, the case of positive characteristic (developed by Donkin and Zubkov) is much harder. A presentation of this case...

  7. Invariant description of solutions of hydrodynamic type systems in hodograph space: hydrodynamic surfaces

    OpenAIRE

    Ferapontov, E. V.

    2001-01-01

    Hydrodynamic surfaces are solutions of hydrodynamic type systems viewed as non-parametrized submanifolds of the hodograph space. We propose an invariant differential-geometric characterization of hydrodynamic surfaces by expressing the curvature form of the characteristic web in terms of the reciprocal invariants.

  8. Hidden scale invariance of metals

    DEFF Research Database (Denmark)

    Hummel, Felix; Kresse, Georg; Dyre, Jeppe C.

    2015-01-01

    Density functional theory (DFT) calculations of 58 liquid elements at their triple point show that most metals exhibit near proportionality between the thermal fluctuations of the virial and the potential energy in the isochoric ensemble. This demonstrates a general “hidden” scale invariance...... of iron and phosphorus are shown to increase at elevated pressures. Finally, we discuss how scale invariance explains the Grüneisen equation of state and a number of well-known empirical melting and freezing rules...

  9. Classification of simple current invariants

    CERN Document Server

    Gato-Rivera, Beatriz

    1992-01-01

    We summarize recent work on the classification of modular invariant partition functions that can be obtained with simple currents in theories with a center (Z_p)^k with p prime. New empirical results for other centers are also presented. Our observation that the total number of invariants is monodromy-independent for (Z_p)^k appears to be true in general as well. (Talk presented in the parallel session on string theory of the Lepton-Photon/EPS Conference, Geneva, 1991.)

  10. Beyond words: evidence for automatic language-gesture integration of symbolic gestures but not dynamic landscapes.

    Science.gov (United States)

    Vainiger, Dana; Labruna, Ludovica; Ivry, Richard B; Lavidor, Michal

    2014-01-01

    Understanding actions based on either language or action observation is presumed to involve the motor system, reflecting the engagement of an embodied conceptual network. We examined how linguistic and gestural information were integrated in a series of cross-domain priming studies. We varied the task demands across three experiments in which symbolic gestures served as primes for verbal targets. Primes were clips of symbolic gestures taken from a rich set of emblems. Participants responded by making a lexical decision to the target (Experiment 1), naming the target (Experiment 2), or performing a semantic relatedness judgment (Experiment 3). The magnitude of semantic priming was larger in the relatedness judgment and lexical decision tasks compared to the naming task. Priming was also observed in a control task in which the primes were pictures of landscapes with conceptually related verbal targets. However, for these stimuli, the amount of priming was similar across the three tasks. We propose that action observation triggers an automatic, pre-lexical spread of activation, consistent with the idea that language-gesture integration occurs in an obligatory and automatic fashion.

  11. Doing Gesture Promotes Learning a Mental Transformation Task Better than Seeing Gesture

    Science.gov (United States)

    Goldin-Meadow, Susan; Levine, Susan C.; Zinchenko, Elena; Yip, Terina KuangYi; Hemani, Naureen; Factor, Laiah

    2012-01-01

    Performing action has been found to have a greater impact on learning than observing action. Here we ask whether a particular type of action--the gestures that accompany talk--affect learning in a comparable way. We gave 158 6-year-old children instruction in a mental transformation task. Half the children were asked to produce a "Move"…

  12. Gesturing with an injured brain: How gesture helps children with early brain injury learn linguistic constructions

    Science.gov (United States)

    Özçalışkan, Şeyda; Levine, Susan C.; Goldin-Meadow, Susan

    2013-01-01

    Children with pre/perinatal unilateral brain lesions (PL) show remarkable plasticity for language development. Is this plasticity characterized by the same developmental trajectory that characterizes typically developing (TD) children, with gesture leading the way into speech? We explored this question, comparing 11 children with PL—matched to 30 TD children on expressive vocabulary—in the second year of life. Children with PL showed similarities to TD children for simple but not complex sentence types. Children with PL produced simple sentences across gesture and speech several months before producing them entirely in speech, exhibiting parallel delays in both gesture+speech and speech-alone. However, unlike TD children, children with PL produced complex sentence types first in speech-alone. Overall, the gesture-speech system appears to be a robust feature of language-learning for simple—but not complex—sentence constructions, acting as a harbinger of change in language development even when that language is developing in an injured brain. PMID:23217292

  13. Gesturing with an Injured Brain: How Gesture Helps Children with Early Brain Injury Learn Linguistic Constructions

    Science.gov (United States)

    Ozcaliskan, Seyda; Levine, Susan C.; Goldin-Meadow, Susan

    2013-01-01

    Children with pre/perinatal unilateral brain lesions (PL) show remarkable plasticity for language development. Is this plasticity characterized by the same developmental trajectory that characterizes typically developing (TD) children, with gesture leading the way into speech? We explored this question, comparing eleven children with PL -- matched…

  14. Neural correlates of gesture processing across human development.

    Science.gov (United States)

    Wakefield, Elizabeth M; James, Thomas W; James, Karin H

    2013-01-01

    Co-speech gesture facilitates learning to a greater degree in children than in adults, suggesting that the mechanisms underlying the processing of co-speech gesture differ as a function of development. We suggest that this may be partially due to children's lack of experience producing gesture, leading to differences in the recruitment of sensorimotor networks when comparing adults to children. Here, we investigated the neural substrates of gesture processing in a cross-sectional sample of 5-, 7.5-, and 10-year-old children and adults and focused on relative recruitment of a sensorimotor system that included the precentral gyrus (PCG) and the posterior middle temporal gyrus (pMTG). Children and adults were presented with videos in which communication occurred through different combinations of speech and gesture during a functional magnetic resonance imaging (fMRI) session. Results demonstrated that the PCG and pMTG were recruited to different extents in the two populations. We interpret these novel findings as supporting the idea that gesture perception (pMTG) is affected by a history of gesture production (PCG), revealing the importance of considering gesture processing as a sensorimotor process.

  15. Differential Diagnosis of Severe Speech Disorders Using Speech Gestures

    Science.gov (United States)

    Bahr, Ruth Huntley

    2005-01-01

    The differentiation of childhood apraxia of speech from severe phonological disorder is a common clinical problem. This article reports on an attempt to describe speech errors in children with childhood apraxia of speech on the basis of gesture use and acoustic analyses of articulatory gestures. The focus was on the movement of articulators and…

  16. Seeing Signs : On the appearance of manual movements in gestures

    NARCIS (Netherlands)

    Arendsen, J.

    2009-01-01

    This dissertation presents the results of a series of studies on the appearance of manual movements in gestures. The main goal of this research is to increase our understanding of how humans perceive signs and other gestures. Generated insights from human perception may aid the development of

  17. Communicative Effectiveness of Pantomime Gesture in People with Aphasia

    Science.gov (United States)

    Rose, Miranda L.; Mok, Zaneta; Sekine, Kazuki

    2017-01-01

    Background: Human communication occurs through both verbal and visual/motoric modalities. Simultaneous conversational speech and gesture occurs across all cultures and age groups. When verbal communication is compromised, more of the communicative load can be transferred to the gesture modality. Although people with aphasia produce meaning-laden…

  18. The Effect of Intentional, Preplanned Movement on Novice Conductors' Gesture

    Science.gov (United States)

    Bodnar, Erin N.

    2017-01-01

    Preplanning movement may be one way to broaden novice conductors' vocabulary of gesture and promote motor awareness. To test the difference between guided score study and guided score study with preplanned, intentional movement on the conducting gestures of novice conductors, undergraduate music education students (N = 20) were assigned to one of…

  19. Children's Use of Gesture in Ambiguous Pronoun Interpretation

    Science.gov (United States)

    Goodrich Smith, Whitney; Hudson Kam, Carla L.

    2015-01-01

    This study explores whether children can use gesture to inform their interpretation of ambiguous pronouns. Specifically, we ask whether four- to eight-year-old English-speaking children are sensitive to information contained in co-referential localizing gestures in video narrations. The data show that the older (7-8 years of age) but not younger…

  20. Reduction in gesture during the production of repeated references

    NARCIS (Netherlands)

    Hoetjes, M.W.; Koolen, R.M.F.; Goudbeek, M.B.; Krahmer, E.J.; Swerts, M.G.J.

    2015-01-01

    In dialogue, repeated references contain fewer words (which are also acoustically reduced) and fewer gestures than initial ones. In this paper, we describe three experiments studying to what extent gesture reduction is comparable to other forms of linguistic reduction. Since previous studies showed

  1. Function and processing of gesture in the context of language

    NARCIS (Netherlands)

    Özyürek, A.; Church, R.B.; Alibali, M.W.; Kelly, S.D.

    2017-01-01

    Most research focuses on the function of gesture independent of its link to the speech it accompanies and the coexpressive functions it has together with speech. This chapter instead approaches gesture in relation to its communicative function with speech, and demonstrates how it is shaped by the

  2. Towards a Gesture Repertoire for Cooperative Interaction with Large Displays

    NARCIS (Netherlands)

    Fikkert, F.W.; van der Vet, P.E.

    2007-01-01

    Manual gesture input is a key component in multimodal human-computer interaction. In this position paper, we describe the design of a series of experiments aimed at the construction of a bimanual gesture repertoire for natural, cooperative, co-located interaction with large displays. The experiments

  3. Gesticulating Science: Emergent Bilingual Students' Use of Gestures

    Science.gov (United States)

    Ünsal, Zeynep; Jakobson, Britt; Wickman, Per-Olof; Molander, Bengt-Olov

    2018-01-01

    This article examines how emergent bilingual students used gestures in science class, and the consequences of students' gestures when their language repertoire limited their possibilities to express themselves. The study derived from observations in two science classes in Sweden. In the first class, 3rd grade students (9-10 years old) were…

  4. Hand Gesture and Mathematics Learning: Lessons from an Avatar

    Science.gov (United States)

    Cook, Susan Wagner; Friedman, Howard S.; Duggan, Katherine A.; Cui, Jian; Popescu, Voicu

    2017-01-01

    A beneficial effect of gesture on learning has been demonstrated in multiple domains, including mathematics, science, and foreign language vocabulary. However, because gesture is known to co-vary with other non-verbal behaviors, including eye gaze and prosody along with face, lip, and body movements, it is possible the beneficial effect of gesture…

  5. The processing of speech, gesture, and action during language comprehension.

    Science.gov (United States)

    Kelly, Spencer; Healey, Meghan; Özyürek, Asli; Holler, Judith

    2015-04-01

    Hand gestures and speech form a single integrated system of meaning during language comprehension, but is gesture processed with speech in a unique fashion? We had subjects watch multimodal videos that presented auditory (words) and visual (gestures and actions on objects) information. Half of the subjects related the audio information to a written prime presented before the video, and the other half related the visual information to the written prime. For half of the multimodal video stimuli, the audio and visual information contents were congruent, and for the other half, they were incongruent. For all subjects, stimuli in which the gestures and actions were incongruent with the speech produced more errors and longer response times than did stimuli that were congruent, but this effect was less prominent for speech-action stimuli than for speech-gesture stimuli. However, subjects focusing on visual targets were more accurate when processing actions than gestures. These results suggest that although actions may be easier to process than gestures, gestures may be more tightly tied to the processing of accompanying speech.

  6. Associations among Play, Gesture and Early Spoken Language Acquisition

    Science.gov (United States)

    Hall, Suzanne; Rumney, Lisa; Holler, Judith; Kidd, Evan

    2013-01-01

    The present study investigated the developmental interrelationships between play, gesture use and spoken language development in children aged 18-31 months. The children completed two tasks: (i) a structured measure of pretend (or "symbolic") play and (ii) a measure of vocabulary knowledge in which children have been shown to gesture.…

  7. Does gesture add to the comprehensibility of people with aphasia?

    NARCIS (Netherlands)

    van Nispen, Karin; Sekine, Kazuki; Rose, Miranda; Ferré, Gaëlle; Tutton, Mark

    2015-01-01

    Gesture can convey information co-occurring with and in the absence of speech. As such, it seems a useful strategy for people with aphasia (PWA) to compensate for their impaired speech. To find out whether gestures used by PWA add to the comprehensibility of their communication we looked at the

  8. Consolidation and Transfer of Learning after Observing Hand Gesture

    Science.gov (United States)

    Cook, Susan Wagner; Duffy, Ryan G.; Fenn, Kimberly M.

    2013-01-01

    Children who observe gesture while learning mathematics perform better than children who do not, when tested immediately after training. How does observing gesture influence learning over time? Children (n = 184, ages = 7-10) were instructed with a videotaped lesson on mathematical equivalence and tested immediately after training and 24 hr later.…

  9. How Iconic Gestures Enhance Communication: An ERP Study

    Science.gov (United States)

    Wu, Ying Choon; Coulson, Seana

    2007-01-01

    EEG was recorded as adults watched short segments of spontaneous discourse in which the speaker's gestures and utterances contained complementary information. Videos were followed by one of four types of picture probes: cross-modal related probes were congruent with both speech and gestures; speech-only related probes were congruent with…

  10. Gestures as Semiotic Resources in the Mathematics Classroom

    Science.gov (United States)

    Arzarello, Ferdinando; Paola, Domingo; Robutti, Ornella; Sabena, Cristina

    2009-01-01

    In this paper, we consider gestures as part of the resources activated in the mathematics classroom: speech, inscriptions, artifacts, etc. As such, gestures are seen as one of the semiotic tools used by students and teacher in mathematics teaching-learning. To analyze them, we introduce a suitable model, the "semiotic bundle." It allows focusing…

  11. How different iconic gestures add to the communication of PWA

    NARCIS (Netherlands)

    van Nispen, Karin

    2016-01-01

    Introduction Gestures can convey information in addition to speech (Beattie et al., 1999). In the absence of conventions on their meaning (McNeill, 2000), people probably rely on iconicity, the mapping between form and meaning, to construct and derive meaning from gesture (Perniss et al., 2010).

  12. Beat Gestures Modulate Auditory Integration in Speech Perception

    Science.gov (United States)

    Biau, Emmanuel; Soto-Faraco, Salvador

    2013-01-01

    Spontaneous beat gestures are an integral part of the paralinguistic context during face-to-face conversations. Here we investigated the time course of beat-speech integration in speech perception by measuring ERPs evoked by words pronounced with or without an accompanying beat gesture, while participants watched a spoken discourse. Words…

  13. Gesture, Landscape and Embrace: A Phenomenological Analysis of ...

    African Journals Online (AJOL)

    The 'radical reflection' on the 'flesh of the world' to which this analysis aspires in turn bears upon the general field of gestural reciprocities and connections, providing the insight that intimate gestures of the flesh, such as the embrace, are primordial attunements, motions of rhythm and reciprocity, that emanate from the world ...

  14. On invariant submanifolds of (LCS)_n-manifolds

    Directory of Open Access Journals (Sweden)

    Absos Ali Shaikh

    2016-04-01

    Full Text Available The object of the present paper is to study the invariant submanifolds of (LCS)_n-manifolds. We study semiparallel and 2-semiparallel invariant submanifolds of (LCS)_n-manifolds. Among others we study 3-dimensional invariant submanifolds of (LCS)_n-manifolds. It is shown that every 3-dimensional invariant submanifold of a (LCS)_n-manifold is totally geodesic.

  15. Invariant Matsumoto metrics on homogeneous spaces

    OpenAIRE

    Salimi Moghaddam, H.R.

    2014-01-01

    In this paper we consider invariant Matsumoto metrics which are induced by invariant Riemannian metrics and invariant vector fields on homogeneous spaces, and then we give the flag curvature formula of them. Also we study the special cases of naturally reductive spaces and bi-invariant metrics. We end the article by giving some examples of geodesically complete Matsumoto spaces.

  16. A Multimodal User Authentication System Using Faces and Gestures

    Directory of Open Access Journals (Sweden)

    Hyunsoek Choi

    2015-01-01

    Full Text Available As a novel approach to perform user authentication, we propose a multimodal biometric system that uses faces and gestures obtained from a single vision sensor. Unlike typical multimodal biometric systems using physical information, the proposed system utilizes gesture video signals combined with facial images. Whereas physical information such as face, fingerprints, and iris is fixed and not changeable, behavioral information such as gestures and signatures can be freely changed by the user, similar to a password. Therefore, it can be a countermeasure when the physical information is exposed. We aim to investigate the potential possibility of using gestures as a signal for biometric system and the robustness of the proposed multimodal user authentication system. Through computational experiments on a public database, we confirm that gesture information can help to improve the authentication performance.

  17. Gliding and Saccadic Gaze Gesture Recognition in Real Time

    DEFF Research Database (Denmark)

    Rozado, David; San Agustin, Javier; Rodriguez, Francisco

    2012-01-01

    paradigm in the context of human-machine interaction as low-cost gaze trackers become more ubiquitous. The viability of gaze gestures as an innovative way to control a computer rests on how easily they can be assimilated by potential users and also on the ability of machine learning algorithms......, and their corresponding real-time recognition algorithms, Hierarchical Temporal Memory networks and the Needleman-Wunsch algorithm for sequence alignment. Our results show how a specific combination of gaze gesture modality, namely saccadic gaze gestures, and recognition algorithm, Needleman-Wunsch, allows for reliable...... usage of intentional gaze gestures to interact with a computer with accuracy rates of up to 98% and acceptable completion speed. Furthermore, the gesture recognition engine does not interfere with otherwise standard human-machine gaze interaction generating therefore, very low false positive rates...

  18. Neural correlates of conflict between gestures and words: A domain-specific role for a temporal-parietal complex.

    Directory of Open Access Journals (Sweden)

    J Adam Noah

    Full Text Available The interpretation of social cues is a fundamental function of human social behavior, and resolution of inconsistencies between spoken and gestural cues plays an important role in successful interactions. To gain insight into these underlying neural processes, we compared neural responses in a traditional color/word conflict task and in a gesture/word conflict task to test hypotheses of domain-general and domain-specific conflict resolution. In the gesture task, recorded spoken words ("yes" and "no") were presented simultaneously with video recordings of actors performing one of the following affirmative or negative gestures: thumbs up, thumbs down, head nodding (up and down), or head shaking (side-to-side), thereby generating congruent and incongruent communication stimuli between gesture and words. Participants identified the communicative intent of the gestures as either positive or negative. In the color task, participants were presented the words "red" and "green" in either red or green font and were asked to identify the color of the letters. We observed a classic "Stroop" behavioral interference effect, with participants showing increased response time for incongruent trials relative to congruent ones for both the gesture and color tasks. Hemodynamic signals acquired using functional near-infrared spectroscopy (fNIRS) were increased in the right dorsolateral prefrontal cortex (DLPFC) for incongruent trials relative to congruent trials for both tasks, consistent with a common, domain-general mechanism for detecting conflict. However, activity in the left DLPFC and frontal eye fields and the right temporal-parietal junction (TPJ), superior temporal gyrus (STG), supramarginal gyrus (SMG), and primary and auditory association cortices was greater for the gesture task than the color task. Thus, in addition to domain-general conflict processing mechanisms, as suggested by common engagement of right DLPFC, socially specialized neural modules localized to…

  19. Co-verbal gestures among speakers with aphasia: Influence of aphasia severity, linguistic and semantic skills, and hemiplegia on gesture employment in oral discourse.

    Science.gov (United States)

    Kong, Anthony Pak-Hin; Law, Sam-Po; Wat, Watson Ka-Chun; Lai, Christy

    2015-01-01

    The use of co-verbal gestures is common in human communication and has been reported to assist word retrieval and to facilitate verbal interactions. This study systematically investigated the impact of aphasia severity, integrity of semantic processing, and hemiplegia on the use of co-verbal gestures, with reference to gesture forms and functions, by 131 normal speakers, 48 individuals with aphasia and their controls. All participants were native Cantonese speakers. It was found that the severity of aphasia and verbal-semantic impairment was associated with significantly more co-verbal gestures. However, there was no relationship between right-sided hemiplegia and gesture employment. Moreover, significantly more gestures were employed by the speakers with aphasia, but about 10% of them did not gesture. Among those who used gestures, content-carrying gestures, including iconic, metaphoric, deictic gestures, and emblems, served the function of enhancing language content and providing information additional to the language content. As for the non-content carrying gestures, beats were used primarily for reinforcing speech prosody or guiding speech flow, while non-identifiable gestures were associated with assisting lexical retrieval or with no specific functions. The above findings would enhance our understanding of the use of various forms of co-verbal gestures in aphasic discourse production and their functions. Speech-language pathologists may also refer to the current annotation system and the results to guide clinical evaluation and remediation of gestures in aphasia. None. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. Co-verbal gestures among speakers with aphasia: Influence of aphasia severity, linguistic and semantic skills, and hemiplegia on gesture employment in oral discourse

    Science.gov (United States)

    Kong, Anthony Pak-Hin; Law, Sam-Po; Wat, Watson Ka-Chun; Lai, Christy

    2015-01-01

    The use of co-verbal gestures is common in human communication and has been reported to assist word retrieval and to facilitate verbal interactions. This study systematically investigated the impact of aphasia severity, integrity of semantic processing, and hemiplegia on the use of co-verbal gestures, with reference to gesture forms and functions, by 131 normal speakers, 48 individuals with aphasia and their controls. All participants were native Cantonese speakers. It was found that the severity of aphasia and verbal-semantic impairment was associated with significantly more co-verbal gestures. However, there was no relationship between right-sided hemiplegia and gesture employment. Moreover, significantly more gestures were employed by the speakers with aphasia, but about 10% of them did not gesture. Among those who used gestures, content-carrying gestures, including iconic, metaphoric, deictic gestures, and emblems, served the function of enhancing language content and providing information additional to the language content. As for the non-content carrying gestures, beats were used primarily for reinforcing speech prosody or guiding speech flow, while non-identifiable gestures were associated with assisting lexical retrieval or with no specific functions. The above findings would enhance our understanding of the use of various forms of co-verbal gestures in aphasic discourse production and their functions. Speech-language pathologists may also refer to the current annotation system and the results to guide clinical evaluation and remediation of gestures in aphasia. PMID:26186256

  1. Gestural Communication and Mating Tactics in Wild Chimpanzees.

    Directory of Open Access Journals (Sweden)

    Anna Ilona Roberts

    Full Text Available The extent to which primates can flexibly adjust the production of gestural communication according to the presence and visual attention of the audience provides key insights into the social cognition underpinning gestural communication, such as an understanding of third party relationships. Gestures given in a mating context provide an ideal area for examining this flexibility, as frequently the interests of a male signaller, a female recipient and a rival male bystander conflict. Dominant chimpanzee males seek to monopolize matings, but subordinate males may use gestural communication flexibly to achieve matings despite their low rank. Here we show that the production of mating gestures in wild male East African chimpanzees (Pan troglodytes schweinfurthii) was influenced by a conflict of interest with females, which in turn was influenced by the presence and visual attention of rival males. When the conflict of interest was low (the rival male was present and looking away), chimpanzees used visual/tactile gestures over auditory gestures. However, when the conflict of interest was high (the rival male was absent, or was present and looking at the signaller), chimpanzees used auditory gestures over visual/tactile gestures. Further, the production of mating gestures was more common when the number of oestrous and non-oestrous females in the party increased, when the female was visually perceptive and when there was no wind. Females played an active role in mating behaviour, approaching for copulations more often when the number of oestrous females in the party increased and when the rival male was absent, or was present and looking away. Examining how social and ecological factors affect mating tactics in primates may thus contribute to understanding the previously unexplained reproductive success of subordinate male chimpanzees.

  2. Gesture as a window onto children's number knowledge.

    Science.gov (United States)

    Gunderson, Elizabeth A; Spaepen, Elizabet; Gibson, Dominic; Goldin-Meadow, Susan; Levine, Susan C

    2015-11-01

    Before learning the cardinal principle (knowing that the last word reached when counting a set represents the size of the whole set), children do not use number words accurately to label most set sizes. However, it remains unclear whether this difficulty reflects a general inability to conceptualize and communicate about number, or a specific problem with number words. We hypothesized that children's gestures might reflect knowledge of number concepts that they cannot yet express in speech, particularly for numbers they do not use accurately in speech (numbers above their knower-level). Number gestures are iconic in the sense that they are item-based (i.e., each finger maps onto one item in a set) and therefore may be easier to map onto sets of objects than number words, whose forms do not map transparently onto the number of items in a set and, in this sense, are arbitrary. In addition, learners in transition with respect to a concept often produce gestures that convey different information than the accompanying speech. We examined the number words and gestures 3- to 5-year-olds used to label small set sizes exactly (1-4) and larger set sizes approximately (5-10). Children who had not yet learned the cardinal principle were more than twice as accurate when labeling sets of 2 and 3 items with gestures than with words, particularly if the values were above their knower-level. They were also better at approximating set sizes 5-10 with gestures than with words. Further, gesture was more accurate when it differed from the accompanying speech (i.e., a gesture-speech mismatch). These results show that children convey numerical information in gesture that they cannot yet convey in speech, and raise the possibility that number gestures play a functional role in children's development of number concepts. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. Coming of age in gesture : A comparative study of gesturing and pantomiming in older children and adults

    NARCIS (Netherlands)

    Masson Carro, Ingrid; Goudbeek, Martijn; Krahmer, Emiel

    2015-01-01

    Research on the co-development of gestures and speech mainly focuses on children in early phases of language acquisition. This study investigates how children in later development use gestures to communicate, and whether the strategies they use are similar to adults’. Using a referential paradigm,

  4. Predicting an Individual’s Gestures from the Interlocutor’s Co-occurring Gestures and Related Speech

    DEFF Research Database (Denmark)

    Navarretta, Costanza

    2016-01-01

    to the prediction of gestures of the same type of the other subject. In this work, we also want to determine whether the speech segments to which these gestures are related to contribute to the prediction. The results of our pilot experiments show that a Naive Bayes classifier trained on the duration and shape...

  5. Timing of Gestures: Gestures Anticipating or Simultaneous with Speech as Indexes of Text Comprehension in Children and Adults

    Science.gov (United States)

    Ianì, Francesco; Cutica, Ilaria; Bucciarelli, Monica

    2017-01-01

    The deep comprehension of a text is tantamount to the construction of an articulated mental model of that text. The number of correct recollections is an index of a learner's mental model of a text. We assume that another index of comprehension is the timing of the gestures produced during text recall; gestures are simultaneous with speech when…

  6. Application of Template Matching Algorithm for Dynamic Gesture Recognition of American Sign Language Finger Spelling and Hand Gesture

    Directory of Open Access Journals (Sweden)

    KARL CAEZAR P. CARRERA

    2014-08-01

    Full Text Available In this study the researchers developed a human computer interface system in which the dynamic gestures of American Sign Language can be recognized. This offers another way of communicating between people who do and do not understand American Sign Language. They proposed the application of a template matching algorithm for the recognition of dynamic gestures, based on a number of templates per gesture which must be captured by the user, then trained and saved in the system. To be able to recognize the dynamic gestures, three things must be considered: the number of templates required for the algorithm to recognize the gestures, the factors in handling the different hand orientations of other users, and the reliability of the system in terms of communication
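    As a sketch of the template-matching idea in the record above (not the authors' implementation), a recognizer can store several feature-vector templates per gesture, captured by the user, and label a new sample with the gesture of its nearest template. The feature values and gesture names below are hypothetical.

```python
import math

def euclidean(a, b):
    """Distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(sample, templates):
    """templates maps a gesture name to the list of feature vectors the user
    captured for it; the sample gets the label of the nearest template."""
    best_name, best_dist = None, float("inf")
    for name, vectors in templates.items():
        for template in vectors:
            d = euclidean(sample, template)
            if d < best_dist:
                best_name, best_dist = name, d
    return best_name

# Hypothetical 3-value hand descriptors for two finger-spelled letters,
# two captured templates per gesture.
templates = {
    "A": [(0.1, 0.9, 0.2), (0.2, 0.8, 0.1)],
    "B": [(0.9, 0.1, 0.7), (0.8, 0.2, 0.8)],
}
label = classify((0.15, 0.85, 0.15), templates)
```

    Storing more templates per gesture is how such a system handles the varying hand orientations the abstract mentions: each orientation contributes its own template, at the cost of more comparisons per query.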

  7. Gesture and Naming Therapy for People with Severe Aphasia: A Group Study

    Science.gov (United States)

    Marshall, Jane; Best, Wendy; Cocks, Naomi; Cruice, Madeline; Pring, Tim; Bulcock, Gemma; Creek, Gemma; Eales, Nancy; Mummery, Alice Lockhart; Matthews, Niina; Caute, Anna

    2012-01-01

    Purpose: In this study, the authors (a) investigated whether a group of people with severe aphasia could learn a vocabulary of pantomime gestures through therapy and (b) compared their learning of gestures with their learning of words. The authors also examined whether gesture therapy cued word production and whether naming therapy cued gestures.…

  8. Give Me a Hand: Differential Effects of Gesture Type in Guiding Young Children's Problem-Solving

    Science.gov (United States)

    Vallotton, Claire; Fusaro, Maria; Hayden, Julia; Decker, Kalli; Gutowski, Elizabeth

    2015-01-01

    Adults' gestures support children's learning in problem-solving tasks, but gestures may be differentially useful to children of different ages, and different features of gestures may make them more or less useful to children. The current study investigated parents' use of gestures to support their young children (1.5-6 years) in a block puzzle…

  9. Gesture Frequency Linked Primarily to Story Length in 4-10-Year Old Children's Stories

    Science.gov (United States)

    Nicoladis, Elena; Marentette, Paula; Navarro, Samuel

    2016-01-01

    Previous studies have shown that older children gesture more while telling a story than younger children. This increase in gesture use has been attributed to increased story complexity. In adults, both narrative complexity and imagery predict gesture frequency. In this study, we tested the strength of three predictors of children's gesture use in…

  10. Gesture Production in Language Impairment: It's Quality, Not Quantity, That Matters

    Science.gov (United States)

    Wray, Charlotte; Saunders, Natalie; McGuire, Rosie; Cousins, Georgia; Norbury, Courtenay Frazier

    2017-01-01

    Purpose: The aim of this study was to determine whether children with language impairment (LI) use gesture to compensate for their language difficulties. Method: The present study investigated gesture accuracy and frequency in children with LI (n = 21) across gesture imitation, gesture elicitation, spontaneous narrative, and interactive…

  11. Gesture Production in Language Impairment: It's Quality, Not Quantity, That Matters.

    Science.gov (United States)

    Wray, Charlotte; Saunders, Natalie; McGuire, Rosie; Cousins, Georgia; Norbury, Courtenay Frazier

    2017-04-14

    The aim of this study was to determine whether children with language impairment (LI) use gesture to compensate for their language difficulties. The present study investigated gesture accuracy and frequency in children with LI (n = 21) across gesture imitation, gesture elicitation, spontaneous narrative, and interactive problem-solving tasks, relative to typically developing (TD) peers (n = 18) and peers with low language and educational concerns (n = 21). Children with LI showed weaknesses in gesture accuracy (imitation and gesture elicitation) in comparison to TD peers, but no differences in gesture rate. Children with low language only showed weaknesses in gesture imitation and used significantly more gestures than TD peers during parent-child interaction. Across the whole sample, motor abilities were significantly related to gesture accuracy but not gesture rate. In addition, children with LI produced proportionately more extending gestures, suggesting that they may use gesture to replace words that they are unable to articulate verbally. The results support the notion that gesture and language form a tightly linked communication system in which gesture deficits are seen alongside difficulties with spoken communication. Furthermore, it is the quality, not quantity of gestures that distinguish children with LI from typical peers.

  12. Development of Communicative Gestures in Normally Developing Children between 8 and 18 Months: An Exploratory Study

    Science.gov (United States)

    Veena, Kadiyali D; Bellur, Rajashekhar

    2015-01-01

    Children who have not developed speech tend to use gestures to communicate. Since gestures are not encouraged and suppressed in the Indian traditional context while speaking, this study focused on profiling the developing gestures in children to explore whether they use the gestures before development of speech. Eight normally developing…

  13. Spatial and Temporal Properties of Gestures in North American English /r/

    Science.gov (United States)

    Campbell, Fiona; Gick, Bryan; Wilson, Ian; Vatikiotis-Bateson, Eric

    2010-01-01

    Systematic syllable-based variation has been observed in the relative spatial and temporal properties of supralaryngeal gestures in a number of complex segments. Generally, more anterior gestures tend to appear at syllable peripheries while less anterior gestures occur closer to syllable peaks. Because previous studies compared only two gestures,…

  14. Developing and testing a human-based gesture vocabulary for tabletop systems.

    Science.gov (United States)

    Urakami, Jacqueline

    2012-08-01

    The goal was to study the natural and intuitive use of surface gestures for the development of a tabletop system. Furthermore, the effect of expertise on the choice of gestures was examined. It is still not well understood what kinds of gestures novice users choose when they interact with gesture recognition systems. First, novices' and experts' choice of gestures for a tabletop system was compared in a quasi-experimental design. Second, the memorability of the novices' and experts' gesture sets derived from the first study was compared in an experimental study. Third, memorization of hand shape and motion path was examined in a further experiment. Data revealed user preferences for specific hand shapes and motion paths. Choice of gestures was affected by the size of the manipulated object, expertise, and the nature of the command (direct manipulation of objects vs. assessment of abstract functions). Follow-up experiments revealed that the novices' gesture set was better memorized than the experts' gesture set. Furthermore, the motion path of a gesture is better memorized than its specific hand shape. Expertise affects the choice of gesture to a certain degree. It is therefore essential to involve novice users in the development of gesture vocabularies. Gestures for technical systems should be simple and should involve distinctive motion patterns instead of focusing on specific hand shapes or numbers of fingers. Abstract or symbolic gestures should be avoided. Results of the study can be applied to the development of surface gestures for tabletop systems.

  15. Depth Camera-Based 3D Hand Gesture Controls with Immersive Tactile Feedback for Natural Mid-Air Gesture Interactions

    Directory of Open Access Journals (Sweden)

    Kwangtaek Kim

    2015-01-01

    Full Text Available Vision-based hand gesture interactions are natural and intuitive when interacting with computers, since we naturally exploit gestures to communicate with other people. However, it is agreed that users suffer from discomfort and fatigue when using gesture-controlled interfaces, due to the lack of physical feedback. To solve the problem, we propose a novel complete solution of a hand gesture control system employing immersive tactile feedback to the user’s hand. For this goal, we first developed a fast and accurate hand-tracking algorithm with a Kinect sensor using the proposed MLBP (modified local binary pattern) that can efficiently analyze 3D shapes in depth images. The superiority of our tracking method was verified in terms of tracking accuracy and speed by comparing with existing methods, Natural Interaction Technology for End-user (NITE), 3D Hand Tracker and CamShift. As the second step, a new tactile feedback technology with a piezoelectric actuator has been developed and integrated into the developed hand tracking algorithm, including the DTW (dynamic time warping) gesture recognition algorithm for a complete solution of an immersive gesture control system. The quantitative and qualitative evaluations of the integrated system were conducted with human subjects, and the results demonstrate that our gesture control with tactile feedback is a promising technology compared to a vision-based gesture control system that has typically no feedback for the user’s gesture inputs. Our study provides researchers and designers with informative guidelines to develop more natural gesture control systems or immersive user interfaces with haptic feedback.

  16. Depth camera-based 3D hand gesture controls with immersive tactile feedback for natural mid-air gesture interactions.

    Science.gov (United States)

    Kim, Kwangtaek; Kim, Joongrock; Choi, Jaesung; Kim, Junghyun; Lee, Sangyoun

    2015-01-08

    Vision-based hand gesture interactions are natural and intuitive when interacting with computers, since we naturally exploit gestures to communicate with other people. However, it is agreed that users suffer from discomfort and fatigue when using gesture-controlled interfaces, due to the lack of physical feedback. To solve the problem, we propose a novel complete solution of a hand gesture control system employing immersive tactile feedback to the user's hand. For this goal, we first developed a fast and accurate hand-tracking algorithm with a Kinect sensor using the proposed MLBP (modified local binary pattern) that can efficiently analyze 3D shapes in depth images. The superiority of our tracking method was verified in terms of tracking accuracy and speed by comparing with existing methods, Natural Interaction Technology for End-user (NITE), 3D Hand Tracker and CamShift. As the second step, a new tactile feedback technology with a piezoelectric actuator has been developed and integrated into the developed hand tracking algorithm, including the DTW (dynamic time warping) gesture recognition algorithm for a complete solution of an immersive gesture control system. The quantitative and qualitative evaluations of the integrated system were conducted with human subjects, and the results demonstrate that our gesture control with tactile feedback is a promising technology compared to a vision-based gesture control system that has typically no feedback for the user's gesture inputs. Our study provides researchers and designers with informative guidelines to develop more natural gesture control systems or immersive user interfaces with haptic feedback.
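    Records 15 and 16 above both rely on DTW (dynamic time warping) for gesture recognition. The following is a minimal sketch of the DTW distance for two 1-D motion traces; the cited system operates on richer depth-image features, so this illustrates only the core recurrence.

```python
def dtw(a, b, dist=lambda x, y: abs(x - y)):
    """Dynamic time warping distance: cost of the cheapest monotone
    alignment between sequences a and b."""
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            step = dist(a[i - 1], b[j - 1])
            cost[i][j] = step + min(cost[i - 1][j],      # stretch a
                                    cost[i][j - 1],      # stretch b
                                    cost[i - 1][j - 1])  # advance both
    return cost[n][m]

# Two traces of the same gesture performed at different speeds align at
# zero cost, which is why DTW suits variable-rate hand motion.
same_gesture = dtw([1, 2, 3, 2, 1], [1, 2, 2, 3, 2, 1, 1])
```

    A recognizer built on this would compute the DTW distance from an incoming trace to one reference trace per gesture and report the nearest one.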

  17. Human computer interaction using hand gesture.

    Science.gov (United States)

    Wan, Silas; Nguyen, Hung T

    2008-01-01

    Hand gesture is a very natural form of human interaction and can be used effectively in human computer interaction (HCI). This project involves the design and implementation of a HCI using a small hand-worn wireless module with a 3-axis accelerometer as the motion sensor. The small stand-alone unit contains an accelerometer and a wireless Zigbee transceiver with microcontroller. To minimize intrusiveness to the user, the module is designed to be small (3 cm by 4 cm). A time-delay neural network algorithm is developed to analyze the time series data from the 3-axis accelerometer. Power consumption of the wireless module is reduced by the non-continuous transmission of data and the use of low-power components, an efficient algorithm, and sleep mode between sampling. A home control interface is designed so that the user can control home appliances by moving through menus. The results demonstrate the feasibility of controlling home appliances using hand gestures and would present an opportunity for a section of the aging population and disabled people to lead a more independent life.
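    A time-delay neural network, as in the record above, consumes fixed-width windows of the accelerometer stream rather than single samples. A sketch of just that windowing step follows; the window width, step, and sample values are illustrative assumptions, not the project's actual settings.

```python
def sliding_windows(samples, width, step):
    """Frame a stream of (x, y, z) accelerometer samples into overlapping
    fixed-width windows, the input format a time-delay network consumes."""
    return [samples[i:i + width]
            for i in range(0, len(samples) - width + 1, step)]

def flatten(window):
    """Concatenate a window's axis triples into one flat feature vector."""
    return [value for triple in window for value in triple]

# Four hypothetical samples from a 3-axis accelerometer (units: m/s^2).
stream = [(0.0, 0.1, 9.8), (0.2, 0.1, 9.7), (0.3, 0.0, 9.6), (0.1, 0.2, 9.8)]
windows = sliding_windows(stream, width=3, step=1)
features = [flatten(w) for w in windows]  # one 9-value vector per window
```

    The overlap between consecutive windows is what lets the network see the same motion at several time offsets, which is the "time-delay" part of the architecture.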

  18. In vivo measurement of surgical gestures.

    Science.gov (United States)

    Dubois, Patrick; Thommen, Quentin; Jambon, Anne Claire

    2002-01-01

    Virtual reality techniques are now more and more widely used in the field of surgical training. However, the realism of the simulation devices requires a good knowledge of the mechanical behavior of the living organs. To provide perioperative measurement of laparoscopic surgical operations, we equipped a conventional operating grasper with a force sensor and a position sensor. The entire apparatus was connected to a PC that controlled the real-time data acquisition. After calibrating the sensors, we conducted three series of in vivo measurements on animals under video control. A standardized protocol was set up to perform various surgical gestures in a reproducible manner. Under these conditions, we can assess an original tool for a quantitative approach of surgical gestures' mechanics. The preliminary results will be extended by measurements during other operations and with other surgical instruments. The in vivo quantification of the mechanical interactions between operating instruments and anatomical structures is of great interest for the introduction of the force feedback in virtual surgery, for the modeling of the mechanical behavior of living organs, and for the design of new surgical instruments. This quantification of manipulations opens new prospects in the evaluation of surgical practices.

  19. Numeric invariants from multidimensional persistence

    Energy Technology Data Exchange (ETDEWEB)

    Skryzalin, Jacek [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Carlsson, Gunnar [Stanford Univ., Stanford, CA (United States)

    2017-05-19

    In this paper, we analyze the space of multidimensional persistence modules from the perspective of algebraic geometry. We first build a moduli space of a certain subclass of easily analyzed multidimensional persistence modules, constructed specifically to capture much of the information that multidimensional persistence can provide over one-dimensional persistence. We argue that the global sections of this space provide interesting numeric invariants when evaluated against our subclass of multidimensional persistence modules. Lastly, we extend these global sections to the space of all multidimensional persistence modules and discuss how the resulting numeric invariants might be used to study data.

  20. A generalization of gauge invariance

    Science.gov (United States)

    Grigore, Dan-Radu

    2017-08-01

    We consider perturbative quantum field theory in the causal framework. Gauge invariance is, in this framework, an identity involving chronological products of the interaction Lagrangian; it expresses the fact that the scattering matrix must leave invariant the sub-space of physical states. We are interested in generalizations of this identity involving Wick sub-monomials of the interaction Lagrangian. The analysis can be performed by direct computation in the lower orders of perturbation theory; guided by these computations, we conjecture a generalization for arbitrary orders.

  1. Dark Coupling and Gauge Invariance

    CERN Document Server

    Gavela, M B; Mena, O; Rigolin, S

    2010-01-01

    We study a coupled dark energy-dark matter model in which the energy-momentum exchange is proportional to the Hubble expansion rate. The inclusion of its perturbation is required by gauge invariance. We derive the linear perturbation equations for the gauge invariant energy density contrast and velocity of the coupled fluids, and we determine the initial conditions. The latter turn out to be adiabatic for dark energy, when assuming adiabatic initial conditions for all the standard fluids. We perform a full Monte Carlo Markov Chain likelihood analysis of the model, using WMAP 7-year data.

  2. Test of charge conjugation invariance.

    Science.gov (United States)

    Nefkens, B M K; Prakhov, S; Gårdestig, A; Allgower, C E; Bekrenev, V; Briscoe, W J; Clajus, M; Comfort, J R; Craig, K; Grosnick, D; Isenhower, D; Knecht, N; Koetke, D; Koulbardis, A; Kozlenko, N; Kruglov, S; Lolos, G; Lopatin, I; Manley, D M; Manweiler, R; Marusić, A; McDonald, S; Olmsted, J; Papandreou, Z; Peaslee, D; Phaisangittisakul, N; Price, J W; Ramirez, A F; Sadler, M; Shafi, A; Spinka, H; Stanislaus, T D S; Starostin, A; Staudenmaier, H M; Supek, I; Tippens, W B

    2005-02-04

    We report on the first determination of upper limits on the branching ratio (BR) of eta decay to pi0pi0gamma and to pi0pi0pi0gamma. Both decay modes are strictly forbidden by charge conjugation (C) invariance. Using the Crystal Ball multiphoton detector, we obtained upper limits for both modes, including BR(eta-->pi0pi0pi0gamma) < 6 x 10(-5) at the 90% confidence level, in support of C invariance of isovector electromagnetic interactions.

  3. Gesture Performance in Schizophrenia Predicts Functional Outcome After 6 Months.

    Science.gov (United States)

    Walther, Sebastian; Eisenhardt, Sarah; Bohlhalter, Stephan; Vanbellingen, Tim; Müri, René; Strik, Werner; Stegmayer, Katharina

    2016-11-01

    The functional outcome of schizophrenia is heterogeneous and markers of its course are missing. Functional outcome is associated with social cognition and negative symptoms. Gesture performance and nonverbal social perception are critically impaired in schizophrenia. Here, we tested whether gesture performance or nonverbal social perception could predict functional outcome and the ability to adequately perform relevant skills of everyday function (functional capacity) after 6 months. In a naturalistic longitudinal study, 28 patients with schizophrenia completed tests of nonverbal communication at baseline and follow-up. In addition, functional outcome, social and occupational functioning, as well as functional capacity at follow-up were assessed. Gesture performance and nonverbal social perception at baseline predicted negative symptoms, functional outcome, and functional capacity at 6-month follow-up. Gesture performance predicted functional outcome beyond the baseline measure of functioning. Patients with gesture deficits at baseline had stable negative symptoms and experienced a decline in social functioning, whereas in patients without gesture deficits, negative symptom severity decreased and social functioning remained stable. Thus, a simple test of hand gesture performance at baseline may indicate favorable outcomes in short-term follow-up. The results further support the importance of nonverbal communication skills in subjects with schizophrenia.

  4. Gesture as representational action: A paper about function.

    Science.gov (United States)

    Novack, Miriam A; Goldin-Meadow, Susan

    2017-06-01

    A great deal of attention has recently been paid to gesture and its effects on thinking and learning. It is well established that the hand movements that accompany speech are an integral part of communication, ubiquitous across cultures, and a unique feature of human behavior. In an attempt to understand this intriguing phenomenon, researchers have focused on pinpointing the mechanisms that underlie gesture production. One proposal--that gesture arises from simulated action (Hostetter & Alibali Psychonomic Bulletin & Review, 15, 495-514, 2008)--has opened up discussions about action, gesture, and the relation between the two. However, there is another side to understanding a phenomenon, and that is to understand its function. A phenomenon's function is its purpose rather than its precipitating cause--the why rather than the how. This paper sets forth a theoretical framework for exploring why gesture serves the functions that it does, and reviews where the current literature fits, and fails to fit, this proposal. Our framework proposes that, whether or not gesture is simulated action in terms of its mechanism, it is clearly not reducible to action in terms of its function. Most notably, because gestures are abstracted representations and are not actions tied to particular events and objects, they can play a powerful role in thinking and learning beyond the particular, specifically, in supporting generalization and transfer of knowledge.

  5. Iconic Gestures for Robot Avatars, Recognition and Integration with Speech.

    Science.gov (United States)

    Bremner, Paul; Leonards, Ute

    2016-01-01

    Co-verbal gestures are an important part of human communication, improving its efficiency and efficacy for information conveyance. One possible means by which such multi-modal communication might be realized remotely is through the use of a tele-operated humanoid robot avatar. Such avatars have been previously shown to enhance social presence and operator salience. We present a motion tracking based tele-operation system for the NAO robot platform that allows direct transmission of speech and gestures produced by the operator. To assess the capabilities of this system for transmitting multi-modal communication, we have conducted a user study that investigated if robot-produced iconic gestures are comprehensible, and are integrated with speech. Robot performed gesture outcomes were compared directly to those for gestures produced by a human actor, using a within participant experimental design. We show that iconic gestures produced by a tele-operated robot are understood by participants when presented alone, almost as well as when produced by a human. More importantly, we show that gestures are integrated with speech when presented as part of a multi-modal communication equally well for human and robot performances.

  6. What can iconic gestures tell us about the language system? A case of conduction aphasia.

    Science.gov (United States)

    Cocks, Naomi; Dipper, Lucy; Middleton, Ruth; Morgan, Gary

    2011-01-01

    Speech and language therapists rarely analyse iconic gesture when assessing a client with aphasia, despite a growing body of research suggesting that language and gesture are part of either the same system or two highly integrated systems. This may be because there has been limited research that has systematically analysed iconic gesture production by people with aphasia. The aim was to determine whether the gesture production of a participant with conduction aphasia could provide information about her language system. The iconic gestures produced by a participant with conduction aphasia (LT) and five control participants during the retelling of a cartoon were analysed. In particular, the iconic gestures produced during lexical retrieval difficulties (co-tip-of-the-tongue (co-TOT) gestures) were compared with the iconic gestures produced during fluent speech (co-speech gestures). It was found that LT produced 57 co-speech gestures that were similar in form to the co-speech gestures produced by the control participants (mean = 34.2, standard deviation (SD) = 22.2). LT also produced an additional eleven co-TOT gestures that were unlike her co-speech gestures and unlike the co-speech gestures produced by the control participants. While the co-speech gestures depicted events, the co-TOT gestures depicted 'things' (for example, objects and animals). Furthermore, all but one of the co-TOT gestures produced by LT were classified as shape-outline gestures, whereas co-speech gestures were rarely classified as shape-outline gestures. LT also produced a new type of gesture that has not previously been described in the literature: a homophone gesture. This co-TOT homophone gesture depicted the homophone of the target word. The iconic gestures produced by LT suggest that she had an intact semantic system but had difficulties with phonological encoding, consistent with a diagnosis of conduction aphasia. This raises the possibility that iconic gesture production

  7. Gesturing beyond the Frame: Transnational Trauma and US War Fiction

    Directory of Open Access Journals (Sweden)

    Ruth A. H. Lahti

    2012-12-01

    Full Text Available The convergent boundary between the fields of trauma theory and US war fiction has resulted in a narrow focus on the subjectivity of the American soldier in war fiction, which partly conditions American war fiction's privileging of the soldier-author. However, this focus on American soldiers does not adequately account for the essentially interactive nature of war trauma, and it elides the experiences of nurses and noncombatants on all sides of the battle while also obscuring women's distinctive war experiences, even when the fiction itself sometimes includes these dimensions. In this essay, Lahti argues that a transnational method can counter these imbalances in trauma theory and in studies of US war fiction. She engages Tim O'Brien's highly influential The Things They Carried from a transnational perspective by interrogating the text's figuring of the survivor author and focusing on critically neglected scenes of interaction between the American soldiers and Vietnamese civilians. In order to discern the way these scenes reveal the text's own struggle with its national US frame, she elaborates a methodology of close reading characters' bodily gestures to foreground the way that fiction offers a glimpse into war as a relational event, always involving two or more participants. In the case of The Things They Carried, this approach brings into view a heretofore unnoticed pattern of mimicry between the American characters and Vietnamese characters that reshapes our scholarly understanding of the text's representation of war trauma.

  8. The effect of musical practice on gesture/sound pairing.

    Science.gov (United States)

    Proverbio, Alice M; Attardo, Lapo; Cozzi, Matteo; Zani, Alberto

    2015-01-01

    Learning to play a musical instrument is a demanding process requiring years of intense practice. Dramatic changes in brain connectivity, volume, and functionality have been shown in skilled musicians. It is thought that music learning involves the formation of novel audio visuomotor associations, but not much is known about the gradual acquisition of this ability. In the present study, we investigated whether formal music training enhances audiovisual multisensory processing. To this end, pupils at different stages of education were examined based on the hypothesis that the strength of audio/visuomotor associations would be augmented as a function of the number of years of conservatory study (expertise). The study participants were violin and clarinet students of pre-academic and academic levels and of different chronological ages, ages of acquisition, and academic levels. A violinist and a clarinetist each played the same score, and each participant viewed the video corresponding to his or her instrument. Pitch, intensity, rhythm, and sound duration were matched across instruments. In half of the trials, the soundtrack did not match (in pitch) the corresponding musical gestures. Data analysis indicated a correlation between the number of years of formal training (expertise) and the ability to detect an audiomotor incongruence in music performance (relative to the musical instrument practiced), thus suggesting a direct correlation between knowing how to play and perceptual sensitivity.

  9. The effect of musical practice on gesture/sound pairing

    Directory of Open Access Journals (Sweden)

    Alice Mado Proverbio

    2015-04-01

    Full Text Available Learning to play a musical instrument is a demanding process requiring years of intense practice. Dramatic changes in brain connectivity, volume and functionality have been shown in skilled musicians. It is thought that music learning involves the formation of novel audio visuomotor associations, but not much is known about the gradual acquisition of this ability. In the present study, we investigated whether formal music training enhances audiovisual multisensory processing. To this end, pupils at different stages of education were examined based on the hypothesis that the strength of audio/visuomotor associations would be augmented as a function of the number of years of conservatory study (expertise). The study participants were violin and clarinet students of pre-academic and academic levels and of different chronological ages, ages of acquisition and academic levels. A violinist and a clarinetist each played the same score, and each participant viewed the video corresponding to his or her instrument. Pitch, intensity, rhythm and sound duration were matched across instruments. In half of the trials, the soundtrack did not match (in pitch) the corresponding musical gestures. Data analysis indicated a correlation between the number of years of formal training (expertise) and the ability to detect an audiomotor incongruence in music performance (relative to the musical instrument practiced), thus suggesting a direct correlation between knowing how to play and perceptual sensitivity.

  10. Gesture and word analysis: the same or different processes?

    Science.gov (United States)

    De Marco, Doriana; De Stefani, Elisa; Gentilucci, Maurizio

    2015-08-15

    The present study aimed at determining whether elaboration of communicative signals (symbolic gestures and words) is always accompanied by their integration with each other and, if such integration is present, whether it supports the existence of a shared control mechanism. Experiment 1 aimed at determining whether and how gesture is integrated with word. Participants were administered a semantic priming paradigm with a lexical decision task and pronounced a target word, which was preceded by a meaningful or meaningless prime gesture. When meaningful, the gesture could be either congruent or incongruent with word meaning. Duration of prime presentation (100, 250, 400 ms) varied randomly. Voice spectra, lip kinematics, and time to response were recorded and analyzed. Formant 1 of the voice spectra and mean velocity of the lip kinematics increased when the prime was meaningful and congruent with the word, as compared to a meaningless gesture. In other words, parameters of voice and movement were magnified by congruence, but this occurred only when prime duration was 250 ms. Time to response to a meaningful gesture was shorter in the condition of congruence compared to incongruence. Experiment 2 aimed at determining whether the mechanism of integration of a prime word with a target word is similar to that of a prime gesture with a target word. Formant 1 of the target word increased when the word prime was meaningful and congruent, as compared to a meaningless congruent prime. The increase was, however, present for any prime word duration. Experiment 3 aimed at determining whether symbolic prime gesture comprehension makes use of motor simulation. Transcranial Magnetic Stimulation was delivered to the left primary motor cortex 100, 250, or 500 ms after prime gesture presentation. The Motor Evoked Potential of the First Dorsal Interosseus increased when stimulation occurred 100 ms post-stimulus. Thus, gesture was understood within 100 ms and integrated with the target word within 250 ms.

  11. Gestures in Passion Cycles in Central European Mural Painting

    Directory of Open Access Journals (Sweden)

    Zdzisław Kliś

    2007-12-01

    Full Text Available In medieval Passion cycles represented in Czech, Slovak (former Hungary), and Polish murals dating from the fourteenth to the fifteenth centuries, one may observe a number of gestures which appear in the respective scenes, starting from the Entry into Jerusalem and ending with the Entombment (laying in the sepulchre). The most significant gesture in the entry scene is the outstretched hand of Christ riding a donkey. It is the language of gesture used since antiquity, transmitted through Byzantine and Italian art (including Giotto’s Entry into Jerusalem in his Arena Chapel frescoes), and transferred into art north of the Alps.

  12. A Modified Tactile Brush Algorithm for Complex Touch Gestures

    Energy Technology Data Exchange (ETDEWEB)

    Ragan, Eric [Texas A&M University]

    2015-01-01

    Several researchers have investigated phantom tactile sensation (i.e., the perception of a nonexistent actuator between two real actuators) and apparent tactile motion (i.e., the perception of a moving actuator due to time delays between onsets of multiple actuations). Prior work has focused primarily on determining appropriate Durations of Stimulation (DOS) and Stimulus Onset Asynchronies (SOA) for simple touch gestures, such as a single finger stroke. To expand upon this knowledge, we investigated complex touch gestures involving multiple, simultaneous points of contact, such as a whole hand touching the arm. To implement complex touch gestures, we modified the Tactile Brush algorithm to support rectangular areas of tactile stimulation.
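
The DOS/SOA relationship this abstract builds on can be illustrated with the linear timing rule commonly cited from the original Tactile Brush study, SOA ≈ 0.32 × duration + 47.3 (both in milliseconds); the rectangular multi-contact extension the abstract describes is not reproduced here, and the function name and signature are illustrative:

```python
# Timing sketch for apparent tactile motion along a line of actuators.
# Uses the linear rule commonly cited from the original Tactile Brush
# study: SOA (ms) ~= 0.32 * duration (ms) + 47.3. This is a sketch of
# the single-stroke case only, not the modified algorithm above.

def onset_schedule(n_actuators, duration_ms):
    """Return one (onset_ms, duration_ms) pair per actuator for a single stroke."""
    soa = 0.32 * duration_ms + 47.3  # stimulus onset asynchrony between neighbors
    return [(i * soa, duration_ms) for i in range(n_actuators)]
```

For a 100 ms stimulus, neighboring actuators start 79.3 ms apart, so the stimulation overlaps in time, which is what produces the illusion of continuous motion.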

  13. Transitive gesture production in apraxia: visual and nonvisual sensory contributions.

    Science.gov (United States)

    Westwood, D A; Schweizer, T A; Heath, M D; Roy, E A; Dixon, M J; Black, S E

    2001-01-01

    The production of transitive limb gestures is optimized when the appropriate tool can be physically manipulated. Little research has addressed the independent contributions of visual and nonvisual sources of sensory information to this phenomenon. In this study, 12 control, 37 LHD, and 50 RHD stroke patients performed transitive limb gestures to pantomime (to verbal command with the object visible) and object manipulation. Performance was more accurate in the object manipulation condition, suggesting that haptic and kinesthetic cues are important for transitive gesture production. Various patterns of performance were observed in the stroke groups, indicating that selective damage to the haptic/kinesthetic processing system is possible and common following unilateral stroke.

  14. Invariant Classification of Gait Types

    DEFF Research Database (Denmark)

    Fihl, Preben; Moeslund, Thomas B.

    2008-01-01

    This paper presents a method of classifying human gait in an invariant manner based on silhouette comparison. A database of artificially generated silhouettes is created representing the three main types of gait, i.e. walking, jogging, and running. Silhouettes generated from different camera angles...

  15. Lie groups and invariant theory

    CERN Document Server

    Vinberg, Ernest

    2005-01-01

    This volume, devoted to the 70th birthday of A. L. Onishchik, contains a collection of articles by participants in the Moscow Seminar on Lie Groups and Invariant Theory headed by E. B. Vinberg and A. L. Onishchik. The book is suitable for graduate students and researchers interested in Lie groups and related topics.

  16. Multivariate dice recognition using invariant features

    Science.gov (United States)

    Hsu, Gee-Sern; Peng, Hsiao-Chia; Yeh, Shang-Min; Lin, Chyi-Yeu

    2013-04-01

    A system is proposed for automatic reading of the number of dots on dice in general table game settings. Unlike previous dice recognition systems, which recognize dice of a specific color using a single top-view camera in an enclosure with controlled settings, the proposed one uses multiple cameras to recognize dice of various colors under uncontrolled conditions. It is composed of three modules. Module-1 locates the dice using the proposed gradient-conditioned color segmentation to segment dice of arbitrary colors from the background. Module-2 exploits local invariant features suitable for building homographies, giving a solution for segmenting the top faces of the dice. To identify the dots on the segmented top faces, a maximally stable extremal region detector is embedded in Module-3 for its consistency in locating the dot region. Experiments show that the proposed system performs satisfactorily in various test conditions.

  17. The Cinematic Bergson: From Virtual Image to Actual Gesture

    Directory of Open Access Journals (Sweden)

    John Ó Maoilearca

    2016-12-01

    Full Text Available Deleuze’s film-philosophy makes much of the notion of virtual images in Bergson’s Matter and Memory, but in doing so he transforms a psycho-metaphysical thesis into a (very unBergsonian) ontological one. In this essay, we will offer a corrective by exploring Bergson’s own explanation of the image as an “attitude of the body”—something that projects an actual, corporeal, and postural approach, not only to cinema, but also to philosophy. Indeed, just as Renoir famously said that “a director makes only one movie in his life. Then he breaks it into pieces and makes it again,” so Bergson wrote that each philosopher only makes one “single point” throughout his or her whole career. And this one point, he then declares, is like a “vanishing image,” one best understood as an attitude of the body. It is this embodied image that underlies an alternative Bergsonian cinema of the actual and the body—one that we will examine through what Bergson has to say about “attitude” as well as “gesture” and “mime.” We will also look at it through a gestural concept enacted by a film, to be precise, the five remakes that comprise Lars von Trier’s and Jørgen Leth’s The Five Obstructions (2003). This will bring us back to the idea of what it is that is being remade, both by directors and philosophers, in Renoir’s “one film” and Bergson’s singular “vanishing image” respectively. Is the “one” being remade an image understood as a representation, or is it a gesture, understood as a bodily movement? It is the latter stance that provides a wholly new and alternative view of Bergson’s philosophy of cinema.

  18. Part of the message comes in gesture : how people with aphasia convey information in different gesture types as compared with information in their speech

    NARCIS (Netherlands)

    van Nispen, Karin; van de Sandt-Koenderman, Mieke; Sekine, Kazuki; Krahmer, Emiel; Rose, Miranda L.

    2017-01-01

    Background: Studies have shown that the gestures produced by people with aphasia (PWA) can convey information useful for their communication. However, the exact significance of the contribution to message communication via gesture remains unclear. Furthermore, it remains unclear how different

  19. Towards real-time and rotation-invariant American Sign Language alphabet recognition using a range camera.

    Science.gov (United States)

    Lahamy, Hervé; Lichti, Derek D

    2012-10-29

    The automatic interpretation of human gestures can be used for a natural interaction with computers while getting rid of mechanical devices such as keyboards and mice. In order to achieve this objective, the recognition of hand postures has been studied for many years. However, most of the literature in this area has considered 2D images which cannot provide a full description of the hand gestures. In addition, a rotation-invariant identification remains an unsolved problem, even with the use of 2D images. The objective of the current study was to design a rotation-invariant recognition process while using a 3D signature for classifying hand postures. A heuristic and voxel-based signature has been designed and implemented. The tracking of the hand motion is achieved with the Kalman filter. A unique training image per posture is used in the supervised classification. The designed recognition process, the tracking procedure and the segmentation algorithm have been successfully evaluated. This study has demonstrated the efficiency of the proposed rotation invariant 3D hand posture signature which leads to 93.88% recognition rate after testing 14,732 samples of 12 postures taken from the alphabet of the American Sign Language.
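
The hand tracking step in this abstract uses a Kalman filter. A minimal constant-velocity filter for one coordinate of the hand position, as a sketch only (the noise parameters, state model, and function name are illustrative assumptions, not the authors' settings):

```python
# Minimal constant-velocity Kalman filter for one coordinate of the
# tracked hand; state = (position, velocity), measurement = position.
# Process noise q and measurement noise r are illustrative values.

def kalman_track(measurements, dt=1.0, q=1e-3, r=0.25):
    """Return filtered position estimates for a list of noisy positions."""
    x, v = measurements[0], 0.0          # state estimate
    P = [[1.0, 0.0], [0.0, 1.0]]         # state covariance
    out = []
    for z in measurements:
        # predict with the constant-velocity model F = [[1, dt], [0, 1]]
        x, v = x + dt * v, v
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1], P[1][1] + q]]
        # update with the position measurement (H = [1, 0])
        s = P[0][0] + r
        kx, kv = P[0][0] / s, P[1][0] / s
        y = z - x
        x, v = x + kx * y, v + kv * y
        P = [[(1 - kx) * P[0][0], (1 - kx) * P[0][1]],
             [P[1][0] - kv * P[0][0], P[1][1] - kv * P[0][1]]]
        out.append(x)
    return out
```

Fed a linear trajectory, the estimate locks onto the motion after a few samples; in the full system one such filter per coordinate (or a single 3D-state filter) would smooth the segmented hand position between frames.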

  20. Towards Real-Time and Rotation-Invariant American Sign Language Alphabet Recognition Using a Range Camera

    Directory of Open Access Journals (Sweden)

    Derek D. Lichti

    2012-10-01

    Full Text Available The automatic interpretation of human gestures can be used for a natural interaction with computers while getting rid of mechanical devices such as keyboards and mice. In order to achieve this objective, the recognition of hand postures has been studied for many years. However, most of the literature in this area has considered 2D images which cannot provide a full description of the hand gestures. In addition, a rotation-invariant identification remains an unsolved problem, even with the use of 2D images. The objective of the current study was to design a rotation-invariant recognition process while using a 3D signature for classifying hand postures. A heuristic and voxel-based signature has been designed and implemented. The tracking of the hand motion is achieved with the Kalman filter. A unique training image per posture is used in the supervised classification. The designed recognition process, the tracking procedure and the segmentation algorithm have been successfully evaluated. This study has demonstrated the efficiency of the proposed rotation invariant 3D hand posture signature which leads to 93.88% recognition rate after testing 14,732 samples of 12 postures taken from the alphabet of the American Sign Language.

  1. Towards Real-Time and Rotation-Invariant American Sign Language Alphabet Recognition Using a Range Camera

    Science.gov (United States)

    Lahamy, Hervé; Lichti, Derek D.

    2012-01-01

    The automatic interpretation of human gestures can be used for a natural interaction with computers while getting rid of mechanical devices such as keyboards and mice. In order to achieve this objective, the recognition of hand postures has been studied for many years. However, most of the literature in this area has considered 2D images which cannot provide a full description of the hand gestures. In addition, a rotation-invariant identification remains an unsolved problem, even with the use of 2D images. The objective of the current study was to design a rotation-invariant recognition process while using a 3D signature for classifying hand postures. A heuristic and voxel-based signature has been designed and implemented. The tracking of the hand motion is achieved with the Kalman filter. A unique training image per posture is used in the supervised classification. The designed recognition process, the tracking procedure and the segmentation algorithm have been successfully evaluated. This study has demonstrated the efficiency of the proposed rotation invariant 3D hand posture signature which leads to 93.88% recognition rate after testing 14,732 samples of 12 postures taken from the alphabet of the American Sign Language. PMID:23202168

  2. A Smart phone Identification Method Based on Gesture

    Directory of Open Access Journals (Sweden)

    Xu Di

    2016-01-01

    Full Text Available In order to promote the practicality of gesture-based smartphone identification, we introduce weighted morphological characteristics and early termination of authentication into the dynamic time warping (DTW) algorithm, operating on motion data captured by the built-in accelerometer and gyroscope, and put forward an effective identification algorithm (ME-DTW) for smartphones. The algorithm uses smartphone morphological characteristics to control the contribution made by the difference in each dimension to the total Euclidean distance. It also introduces a restricted-area touch-trigger gesture acquisition scheme and an authentication gesture length selection scheme based on the normal distribution, which effectively improve identification accuracy and efficiency. Experimental results show that when others imitate the gesture, the false acceptance rate tends to 0% while the personal identification false rejection rate remains at 3.29%, which can meet most practical security needs.
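
The two modifications this abstract describes, per-dimension weighting and early termination, slot naturally into the standard DTW recurrence. A sketch under the assumption that each sample is a sensor vector and that a row whose best accumulated cost already exceeds a rejection threshold can abandon the comparison (the weights, threshold, and function name are illustrative, not the ME-DTW specification):

```python
# Sketch of DTW with per-dimension weights and early termination.
# `weights` scales each dimension's contribution to the local distance;
# once the cheapest cost in a row exceeds `reject_above`, no warping
# path can come in under the threshold, so the comparison is abandoned.

def weighted_dtw(a, b, weights, reject_above=float("inf")):
    """Weighted DTW cost between two sequences of vectors; inf if rejected."""
    inf = float("inf")
    prev = [0.0] + [inf] * len(b)
    for xa in a:
        cur = [inf]
        for j, xb in enumerate(b):
            d = sum(w * (p - q) ** 2 for w, p, q in zip(weights, xa, xb))
            cur.append(d + min(prev[j], prev[j + 1], cur[j]))
        if min(cur[1:]) > reject_above:   # early termination: cannot pass
            return inf
        prev = cur
    return prev[-1]
```

Identical sequences cost 0, and a clearly different gesture is rejected after the first row rather than after the full alignment.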

  3. Language, gesture, skill: the co-evolutionary foundations of language

    Science.gov (United States)

    Sterelny, Kim

    2012-01-01

    This paper defends a gestural origins hypothesis about the evolution of enhanced communication and language in the hominin lineage. The paper shows that we can develop an incremental model of language evolution on that hypothesis, but not if we suppose that language originated in an expansion of great ape vocalization. On the basis of the gestural origins hypothesis, the paper then advances solutions to four classic problems about the evolution of language: (i) why did language evolve only in the hominin lineage? (ii) why is language use an evolutionarily stable form of informational cooperation, despite the fact that hominins have diverging evolutionary interests? (iii) how did stimulus independent symbols emerge? (iv) what were the origins of complex, syntactically organized symbols? The paper concludes by confronting two challenges: those of testability and of explaining the gesture-to-speech transition; crucial issues for any gestural origins hypothesis. PMID:22734057

  4. Listeners' Bodies in Music Analysis: Gestures, Motor Intentionality, and Models

    National Research Council Canada - National Science Library

    Mariusz Kozak

    2015-01-01

    .... Specifically, I draw on the phenomenology of Maurice Merleau-Ponty to argue that situated, active listeners project their motor intentional gestures inside music, where they reconstitute the very...

  5. An Interactive Astronaut-Robot System with Gesture Control

    Directory of Open Access Journals (Sweden)

    Jinguo Liu

    2016-01-01

    Full Text Available Human-robot interaction (HRI) plays an important role in future planetary exploration missions, where astronauts with extravehicular activities (EVA) have to communicate with robot assistants through speech-type or gesture-type user interfaces embedded in their space suits. This paper presents an interactive astronaut-robot system that integrates a data glove with a space suit, allowing the astronaut to use hand gestures to control a snake-like robot. A support vector machine (SVM) is employed to recognize hand gestures, and a particle swarm optimization (PSO) algorithm is used to optimize the parameters of the SVM to further improve its recognition accuracy. Various hand gestures from American Sign Language (ASL) have been selected and used to test and validate the performance of the proposed system.
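The PSO step can be sketched independently of the SVM. Below is a minimal, generic PSO loop in NumPy; in the system described above the objective `f` would be the SVM cross-validation error as a function of (C, gamma), but here `f` is any callable on a box-constrained vector, and the function name and coefficient values are assumptions, not taken from the paper.

```python
import numpy as np

def pso_minimize(f, lo, hi, n_particles=20, n_iter=60, seed=0):
    # Minimal particle swarm optimization over the box [lo, hi]^d.
    # For SVM tuning, f(p) would evaluate cross-validation error at
    # hyperparameters p = (C, gamma); any objective works here.
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    d = len(lo)
    x = rng.uniform(lo, hi, size=(n_particles, d))   # positions
    v = np.zeros_like(x)                             # velocities
    pbest = x.copy()                                 # per-particle bests
    pbest_val = np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()           # global best
    w, c1, c2 = 0.7, 1.5, 1.5                        # common default coefficients
    for _ in range(n_iter):
        r1 = rng.random((n_particles, d))
        r2 = rng.random((n_particles, d))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()
```

Each particle is pulled toward its own best position and the swarm's best, so the search balances exploration and exploitation without needing gradients of the objective.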

  6. Language, gesture, skill: the co-evolutionary foundations of language.

    Science.gov (United States)

    Sterelny, Kim

    2012-08-05

    This paper defends a gestural origins hypothesis about the evolution of enhanced communication and language in the hominin lineage. The paper shows that we can develop an incremental model of language evolution on that hypothesis, but not if we suppose that language originated in an expansion of great ape vocalization. On the basis of the gestural origins hypothesis, the paper then advances solutions to four classic problems about the evolution of language: (i) why did language evolve only in the hominin lineage? (ii) why is language use an evolutionarily stable form of informational cooperation, despite the fact that hominins have diverging evolutionary interests? (iii) how did stimulus independent symbols emerge? (iv) what were the origins of complex, syntactically organized symbols? The paper concludes by confronting two challenges: those of testability and of explaining the gesture-to-speech transition; crucial issues for any gestural origins hypothesis.

  7. Gesture-Based Control of Spaces and Objects in Augmented

    National Research Council Canada - National Science Library

    Yacoob, Yaser

    2002-01-01

    .... Our research focuses on detection, tracking, recognition and visual feedback of the hand and finger movements in a cooperative user environment and the integration of gesture and speech recognition...

  8. Sound Synthesis Affected by Physical Gestures in Real-Time

    DEFF Research Database (Denmark)

    Graugaard, Lars

    2006-01-01

    Motivation and strategies for affecting electronic music through physical gestures are presented and discussed. Two implementations are presented and experience with their use in performance is reported. A concept of sound shaping and sound colouring that connects an instrumental performer...

  9. Hands in space: gesture interaction with augmented-reality interfaces.

    Science.gov (United States)

    Billinghurst, Mark; Piumsomboon, Tham; Huidong Bai

    2014-01-01

    Researchers at the Human Interface Technology Laboratory New Zealand (HIT Lab NZ) are investigating free-hand gestures for natural interaction with augmented-reality interfaces. They've applied the results to systems for desktop computers and mobile devices.

  10. The hands of Donald Trump: Entertainment, gesture, spectacle

    National Research Council Canada - National Science Library

    Kira Hall; Donna Meryl Goldstein; Matthew Bruce Ingram

    2016-01-01

    ... as comedic entertainment. We examine the ways that Trump's unconventional political style, particularly his use of gesture to critique the political system and caricature his opponents, brought momentum to his campaign by creating spectacle...

  11. Gesture Commanding of a Robot with EVA Gloves Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Gesture commanding can be applied and evaluated with NASA robot systems. Application of this input modality can improve the way crewmembers interact with robots...

  12. Invariant manifolds near hyperbolic fixed points

    NARCIS (Netherlands)

    Homburg, A.J.

    2006-01-01

    Abstract: In these notes, we discuss obstructions to the existence of local invariant manifolds of some smoothness class, near hyperbolic fixed points of diffeomorphisms. We present an elementary construction for continuously differentiable invariant manifolds that are not necessarily normally

  13. When do speakers use gestures to specify who does what to whom? The role of language proficiency and type of gestures in narratives.

    Science.gov (United States)

    So, Wing Chee; Kita, Sotaro; Goldin-Meadow, Susan

    2013-12-01

    Previous research has found that iconic gestures (i.e., gestures that depict the actions, motions or shapes of entities) identify referents that are also lexically specified in the co-occurring speech produced by proficient speakers. This study examines whether concrete deictic gestures (i.e., gestures that point to physical entities) bear a different kind of relation to speech, and whether this relation is influenced by the language proficiency of the speakers. Two groups of speakers who had different levels of English proficiency were asked to retell a story in English. Their speech and gestures were transcribed and coded. Our findings showed that proficient speakers produced concrete deictic gestures for referents that were not specified in speech, and iconic gestures for referents that were specified in speech, suggesting that these two types of gestures bear different kinds of semantic relations with speech. In contrast, less proficient speakers produced concrete deictic gestures and iconic gestures whether or not referents were lexically specified in speech. Thus, both type of gesture and proficiency of speaker need to be considered when accounting for how gesture and speech are used in a narrative context.

  14. When do speakers use gesture to specify who does what to whom? The role of language proficiency and type of gesture in narratives

    Science.gov (United States)

    So, Wing Chee; Kita, Sotaro; Goldin-Meadow, Susan

    2014-01-01

    Previous research has found that iconic gestures (i.e., gestures that depict the actions, motions or shapes of entities) identify referents that are also lexically specified in the co-occurring speech produced by proficient speakers. This study examines whether concrete deictic gestures (i.e., gestures that point to physical entities) bear a different kind of relation to speech, and whether this relation is influenced by the language proficiency of the speakers. Two groups of speakers who had different levels of English proficiency were asked to retell a story in English. Their speech and gestures were transcribed and coded. Our findings showed that proficient speakers produced concrete deictic gestures for referents that were not specified in speech, and iconic gestures for referents that were specified in speech, suggesting that these two types of gestures bear different kinds of semantic relations with speech. In contrast, less proficient speakers produced concrete deictic gestures and iconic gestures whether or not referents were lexically specified in speech. Thus, both type of gesture and proficiency of speaker need to be considered when accounting for how gesture and speech are used in a narrative context. PMID:23337950

  15. Dynamic Monitoring Reveals Motor Task Characteristics in Prehistoric Technical Gestures

    OpenAIRE

    Radu Iovita; Jonas Buchli; Johannes Pfleging

    2015-01-01

    Reconstructing ancient technical gestures associated with simple tool actions is crucial for understanding the co-evolution of the human forelimb and its associated control-related cognitive functions on the one hand, and of the human technological arsenal on the other hand. Although the topic of gesture is an old one in Paleolithic archaeology and in anthropology in general, very few studies have taken advantage of the new technologies from the science of kinematics in order to improve repli...

  16. Authentication based on gestures with smartphone in hand

    Science.gov (United States)

    Varga, Juraj; Švanda, Dominik; Varchola, Marek; Zajac, Pavol

    2017-08-01

    We propose a new method of authentication for smartphones and similar devices based on gestures made by the user with the device itself. The main advantage of our method is that it combines subtle biometric properties of the gesture (something you are) with secret information that can be freely chosen by the user (something you know). Our prototype implementation shows that the scheme is feasible in practice. Further development, testing, and fine-tuning of parameters are required for deployment in the real world.

  17. A system for teaching sign language using live gesture feedback

    OpenAIRE

    Kelly, Daniel; McDonald, John; Markham, Charles

    2008-01-01

    This paper presents a computer vision based virtual learning environment for teaching communicative hand gestures used in Sign Language. A virtual learning environment was developed to demonstrate signs to the user. The system then gives real time feedback to the user on their performance of the demonstrated sign. Gesture features are extracted from a standard web-cam video stream and shape and trajectory matching techniques are applied to these features to determine the feedback given to the...

  18. Development of a Hand Gestures SDK for NUI-Based Applications

    Directory of Open Access Journals (Sweden)

    Seongjo Lee

    2015-01-01

    Full Text Available Concomitant with the advent of the ubiquitous era, research into better human-computer interaction (HCI) for human-focused interfaces has intensified. Natural user interface (NUI), in particular, is being actively investigated with the objective of more intuitive and simpler interaction between humans and computers. However, developing NUI-based applications without special NUI-related knowledge is difficult. This paper proposes a NUI-specific SDK, called “Gesture SDK,” for the development of NUI-based applications. Gesture SDK provides a gesture generator with which developers can directly define gestures. Further, a “Gesture Recognition Component” is provided that enables defined gestures to be recognized by applications. We generated gestures using the proposed SDK and developed “Smart Interior,” a NUI-based application using the Gesture Recognition Component. The results of the experiments conducted indicate that the recognition rate of the generated gestures was 96% on average.

  19. Getting to the elephants: Gesture and preschoolers' comprehension of route direction information.

    Science.gov (United States)

    Austin, Elizabeth E; Sweller, Naomi

    2017-11-01

    During early childhood, children find spatial tasks such as following novel route directions challenging. Spatial tasks place demands on multiple cognitive processes, including language comprehension and memory, at a time in development when resources are limited. As such, gestures accompanying route directions may aid comprehension and facilitate task performance by scaffolding cognitive processes, including language and memory processing. This study examined the effect of presenting gesture during encoding on spatial task performance during early childhood. Three- to five-year-olds were presented with verbal route directions through a zoo-themed spatial array and, depending on assigned condition (no gesture, beat gesture, or iconic/deictic gesture), accompanying gestures. Children presented with verbal route directions accompanied by a combination of iconic (pantomime) and deictic (pointing) gestures verbally recalled more than children presented with beat gestures (rhythmic hand movements) or no gestures accompanying the route directions. The presence of gesture accompanying route directions similarly influenced physical route navigation, such that children presented with gesture (beat, pantomime, and pointing) navigated the route more accurately than children presented with no gestures. Across all gesture conditions, location information (e.g., the penguin pond) was recalled more than movement information (e.g., go around) and descriptive information (e.g., bright red). These findings suggest that speakers' gestures accompanying spatial task information influence listeners' recall and task performance. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. A Comparison of Coverbal Gesture Use in Oral Discourse Among Speakers With Fluent and Nonfluent Aphasia.

    Science.gov (United States)

    Kong, Anthony Pak-Hin; Law, Sam-Po; Chak, Gigi Wan-Chi

    2017-07-12

    Coverbal gesture use, which is affected by the presence and degree of aphasia, can be culturally specific. The purpose of this study was to compare gesture use among Cantonese-speaking individuals: 23 neurologically healthy speakers, 23 speakers with fluent aphasia, and 21 speakers with nonfluent aphasia. Multimedia data of discourse samples from these speakers were extracted from the Cantonese AphasiaBank. Gestures were independently annotated on their forms and functions to determine how gesturing rate and distribution of gestures differed across speaker groups. A multiple regression was conducted to determine the most predictive variable(s) for gesture-to-word ratio. Although speakers with nonfluent aphasia gestured most frequently, the rate of gesture use in counterparts with fluent aphasia did not differ significantly from controls. Different patterns of gesture functions in the 3 speaker groups revealed that gesture plays a minor role in lexical retrieval whereas its role in enhancing communication dominates among the speakers with aphasia. The percentages of complete sentences and dysfluency strongly predicted the gesturing rate in aphasia. The current results supported the sketch model of language-gesture association. The relationship between gesture production and linguistic abilities and clinical implications for gesture-based language intervention for speakers with aphasia are also discussed.

  1. Unsupervised Trajectory Segmentation for Surgical Gesture Recognition in Robotic Training.

    Science.gov (United States)

    Despinoy, Fabien; Bouget, David; Forestier, Germain; Penet, Cedric; Zemiti, Nabil; Poignet, Philippe; Jannin, Pierre

    2016-06-01

    Dexterity and procedural knowledge are two critical skills that surgeons need to master to perform accurate and safe surgical interventions. However, current training systems do not allow us to provide an in-depth analysis of surgical gestures to precisely assess these skills. Our objective is to develop a method for the automatic and quantitative assessment of surgical gestures. To reach this goal, we propose a new unsupervised algorithm that can automatically segment kinematic data from robotic training sessions. Without relying on any prior information or model, this algorithm detects critical points in the kinematic data that define relevant spatio-temporal segments. Based on the association of these segments, we obtain an accurate recognition of the gestures involved in the surgical training task. We then perform an advanced analysis and assess our algorithm using datasets recorded during real expert training sessions. After comparing our approach with the manual annotations of the surgical gestures, we observe 97.4% accuracy for the learning purpose and an average matching score of 81.9% for the fully automated gesture recognition process. Our results show that trainees' workflows can be followed and surgical gestures may be automatically evaluated according to an expert database. This approach tends toward improving training efficiency by minimizing the learning curve.
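One common way to detect segment boundaries in kinematic data is to cut the trajectory at local minima of the speed profile, where the instrument pauses between gestures. The sketch below uses that heuristic as a stand-in for the paper's critical-point detector (the function name and the strict-minimum criterion are assumptions, not the published algorithm).

```python
import numpy as np

def segment_by_speed_minima(positions, dt=1.0, eps=1e-9):
    # Segment a trajectory (n x d array of positions sampled at
    # interval dt) at strict local minima of the speed profile --
    # a simple heuristic for gesture boundaries in kinematic data.
    pos = np.asarray(positions, float)
    vel = np.gradient(pos, dt, axis=0)        # finite-difference velocity
    speed = np.linalg.norm(vel, axis=1)
    cuts = [i for i in range(1, len(speed) - 1)
            if speed[i] < speed[i - 1] - eps and speed[i] < speed[i + 1] - eps]
    bounds = [0] + cuts + [len(pos) - 1]
    return [(bounds[k], bounds[k + 1]) for k in range(len(bounds) - 1)]
```

Each returned pair of indices delimits one candidate gesture; an unsupervised recogniser would then cluster or match these segments against an expert database.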

  2. Real time gesture based control: A prototype development

    Science.gov (United States)

    Bhargava, Deepshikha; Solanki, L.; Rai, Satish Kumar

    2016-03-01

    The computer industry is advancing rapidly: in a short span of years, it has grown with ever more sophisticated techniques. Robots have been replacing humans, increasing the efficiency, accessibility, and accuracy of systems and creating man-machine interaction. The robotics industry is developing many new trends; however, robots still need to be controlled by humans. This paper presents an approach to controlling a motor, such as one driving a robot, with hand gestures rather than with buttons or other physical devices. Controlling robots with hand gestures is very popular nowadays. At this level, gesture features are applied for detecting and tracking the hand in real time. A principal component analysis (PCA) algorithm is used for identification of a hand gesture using the OpenCV image processing library. Contours, the convex hull, and convexity defects are the gesture features. PCA is a statistical approach used for reducing the number of variables in hand recognition while extracting the most relevant information (features) contained in the images. After the hand is detected and recognized, a servo motor is controlled, using the hand gesture as an input device (like a mouse or keyboard) and reducing human effort.
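The PCA step described above reduces high-dimensional gesture feature vectors to a few components that carry most of the variance. A minimal SVD-based sketch (pure NumPy rather than the OpenCV pipeline, and with an assumed function name):

```python
import numpy as np

def pca_reduce(X, k):
    # Project feature vectors (rows of X) onto the top-k principal
    # components via SVD of the centered data matrix. This is the
    # standard dimensionality-reduction step; the contour/convexity
    # features feeding it come from a separate extraction stage.
    X = np.asarray(X, float)
    Xc = X - X.mean(axis=0)                       # center the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                          # scores in reduced space
```

The reduced scores can then be matched against stored gesture templates with a simple nearest-neighbour rule before driving the servo.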

  3. Pointing and tracing gestures may enhance anatomy and physiology learning.

    Science.gov (United States)

    Macken, Lucy; Ginns, Paul

    2014-07-01

    Currently, instructional effects generated by Cognitive load theory (CLT) are limited to visual and auditory cognitive processing. In contrast, "embodied cognition" perspectives suggest a range of gestures, including pointing, may act to support communication and learning, but there is relatively little research showing benefits of such "embodied learning" in the health sciences. This study investigated whether explicit instructions to gesture enhance learning through its cognitive effects. Forty-two university-educated adults were randomly assigned to conditions in which they were instructed to gesture, or not gesture, as they learnt from novel, paper-based materials about the structure and function of the human heart. Subjective ratings were used to measure levels of intrinsic, extraneous and germane cognitive load. Participants who were instructed to gesture performed better on a knowledge test of terminology and a test of comprehension; however, instructions to gesture had no effect on subjective ratings of cognitive load. This very simple instructional re-design has the potential to markedly enhance student learning of typical topics and materials in the health sciences and medicine.

  4. Constructing Invariant Fairness Measures for Surfaces

    DEFF Research Database (Denmark)

    Gravesen, Jens; Ungstrup, Michael

    1998-01-01

    of the size of this vector field is used as the fairness measure on the family.Six basic 3rd order invariants satisfying two quadratic equations are defined. They form a complete set in the sense that any invariant 3rd order function can be written as a function of the six basic invariants together...

  5. Generic R-transform for invariant pattern representation

    OpenAIRE

    Hoang, Thai V; Tabbone, Salvatore

    2011-01-01

    ISBN: 978-1-61284-432-9; International audience; The beneficial properties of the Radon transform make it a useful intermediate representation for the extraction of invariant features from pattern images for the purpose of indexing/matching. This paper revisits the problem with a generic view on a popular Radon-based pattern descriptor, the R-signature, bringing in a class of descriptors spatially describing patterns at all directions and at different levels. The domain of this class and...

  6. Invariance for Single Curved Manifold

    KAUST Repository

    Castro, Pedro Machado Manhaes de

    2012-08-01

    Recently, it has been shown that, for the Lambert illumination model, only scenes composed of developable objects with a very particular albedo distribution produce a (2D) image with isolines that are (almost) invariant to changes in light direction. In this work, we provide and investigate a more general framework, and we show that, in general, the requirement for such invariances is quite strong and is related to the differential geometry of the objects. More precisely, it is proved that single curved manifolds, i.e., manifolds such that at each point there is at most one principal curvature direction, produce invariant isosurfaces for a certain relevant family of energy functions. In the three-dimensional case, the associated energy function corresponds to the classical Lambert illumination model with albedo. This result is also extended to finite-dimensional scenes composed of single curved objects. © 2012 IEEE.

  7. Homotopy invariants of Gauss words

    OpenAIRE

    Gibson, Andrew

    2009-01-01

    By defining combinatorial moves, we can define an equivalence relation on Gauss words called homotopy. In this paper we define a homotopy invariant of Gauss words. We use this to show that there exist Gauss words that are not homotopically equivalent to the empty Gauss word, disproving a conjecture by Turaev. In fact, we show that there are an infinite number of equivalence classes of Gauss words under homotopy.

  8. A Many Particle Adiabatic Invariant

    DEFF Research Database (Denmark)

    Hjorth, Poul G.

    1999-01-01

    For a system of N charged particles moving in a homogeneous, sufficiently strong magnetic field, a many-particle adiabatic invariant constrains the collisional exchange of energy between the degrees of freedom perpendicular to and parallel to the magnetic field. A description of the phenomenon in terms of Hamiltonian dynamics is given. The relation to the Equipartition Theorem of statistical mechanics is briefly discussed.
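For context, the familiar single-particle analogue of such an invariant is the magnetic moment of a gyrating charge, conserved when the field varies slowly compared with the gyration; this is a standard guiding-centre result, not a formula quoted from the paper:

```latex
\mu = \frac{m v_\perp^{2}}{2B}
```

The many-particle invariant plays the same role collectively, limiting how collisions can transfer energy between perpendicular and parallel degrees of freedom.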

  9. Scale invariance implies conformal invariance for the three-dimensional Ising model.

    Science.gov (United States)

    Delamotte, Bertrand; Tissier, Matthieu; Wschebor, Nicolás

    2016-01-01

    Using the Wilson renormalization group, we show that if no integrated vector operator of scaling dimension -1 exists, then scale invariance implies conformal invariance. By using the Lebowitz inequalities, we prove that this necessary condition is fulfilled in all dimensions for the Ising universality class. This shows, in particular, that scale invariance implies conformal invariance for the three-dimensional Ising model.

  10. Disformal invariance of curvature perturbation

    Energy Technology Data Exchange (ETDEWEB)

    Motohashi, Hayato [Kavli Institute for Cosmological Physics, The University of Chicago, 5640 South Ellis Avenue, Chicago, Illinois, 60637 (United States); White, Jonathan, E-mail: motohashi@kicp.uchicago.edu, E-mail: jwhite@post.kek.jp [Research Center for the Early Universe (RESCEU), The University of Tokyo, Hongo 7-3-1, Tokyo, 113-0033 Japan (Japan)

    2016-02-01

    We show that under a general disformal transformation the linear comoving curvature perturbation is not identically invariant, but is invariant on superhorizon scales for any theory that is disformally related to Horndeski's theory. The difference between disformally related curvature perturbations is found to be given in terms of the comoving density perturbation associated with a single canonical scalar field. In General Relativity it is well-known that this quantity vanishes on superhorizon scales through the Poisson equation that is obtained on combining the Hamiltonian and momentum constraints, and we confirm that a similar result holds for any theory that is disformally related to Horndeski's scalar-tensor theory so long as the invertibility condition for the disformal transformation is satisfied. We also consider the curvature perturbation at full nonlinear order in the unitary gauge, and find that it is invariant under a general disformal transformation if we assume that an attractor regime has been reached. Finally, we also discuss the counting of degrees of freedom in theories disformally related to Horndeski's.
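For reference, the general disformal transformation discussed here takes the standard (Bekenstein) form, relating two metrics through the scalar field and its kinetic term; the notation below is the conventional one and is supplied here for orientation, not quoted from the paper:

```latex
\tilde{g}_{\mu\nu} = A(\phi, X)\, g_{\mu\nu} + B(\phi, X)\, \partial_\mu \phi\, \partial_\nu \phi,
\qquad X = -\tfrac{1}{2}\, g^{\mu\nu}\, \partial_\mu \phi\, \partial_\nu \phi .
```

The invertibility condition mentioned in the abstract is the requirement that this map between $g_{\mu\nu}$ and $\tilde{g}_{\mu\nu}$ can be undone, so that the two frames describe the same physical degrees of freedom.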

  11. Learning Invariant Color Features for Person Re-Identification.

    Science.gov (United States)

    Rama Varior, Rahul; Wang, Gang; Lu, Jiwen; Liu, Ting

    2016-02-18

    Matching people across multiple camera views, known as person re-identification, is a challenging problem due to changes in visual appearance caused by varying lighting conditions. The perceived color of the subject appears to be different under different illuminations. Previous works use color as it is or address these challenges by designing color spaces focusing on a specific cue. In this paper, we propose an approach for learning color patterns from pixels sampled from images across two camera views. The intuition behind this work is that, even though varying lighting conditions across views affect the pixel values of the same color, the final representation of a particular color should be stable and invariant to these variations, i.e., it should be encoded with the same values. We model color feature generation as a learning problem by jointly learning a linear transformation and a dictionary to encode pixel values. We also analyze different photometric invariant color spaces as well as a popular color constancy algorithm for person re-identification. Using color as the only cue, we compare our approach with all the photometric invariant color spaces and show superior performance over all of them. Combining with other learned low-level and high-level features, we obtain promising results on the VIPeR, Person Re-ID 2011, and CAVIAR4REID datasets.

  12. GestuRe and ACtion Exemplar (GRACE) video database: stimuli for research on manners of human locomotion and iconic gestures.

    Science.gov (United States)

    Aussems, Suzanne; Kwok, Natasha; Kita, Sotaro

    2017-09-15

    Human locomotion is a fundamental class of events, and manners of locomotion (e.g., how the limbs are used to achieve a change of location) are commonly encoded in language and gesture. To our knowledge, there is no openly accessible database containing normed human locomotion stimuli. Therefore, we introduce the GestuRe and ACtion Exemplar (GRACE) video database, which contains 676 videos of actors performing novel manners of human locomotion (i.e., moving from one location to another in an unusual manner) and videos of a female actor producing iconic gestures that represent these actions. The usefulness of the database was demonstrated across four norming experiments. First, our database contains clear matches and mismatches between iconic gesture videos and action videos. Second, the male and female actors whose action videos best matched the gestures perform the same actions in very similar manners and different actions in highly distinct manners. Third, all the actions in the database are distinct from each other. Fourth, adult native English speakers were unable to describe the 26 different actions concisely, indicating that the actions are unusual. This normed stimuli set is useful for experimental psychologists working in the language, gesture, visual perception, categorization, memory, and other related domains.

  13. Chronometric Invariance and String Theory

    Science.gov (United States)

    Pollock, M. D.

    The Einstein-Hilbert Lagrangian R is expressed in terms of the chronometrically invariant quantities introduced by Zel'manov for an arbitrary four-dimensional metric g_{ij}. The chronometrically invariant three-space is the physical space γ_{αβ} = -g_{αβ} + e^{2φ} γ_α γ_β, where e^{2φ} = g_{00} and γ_α = g_{0α}/g_{00}, and whose determinant is h. The momentum canonically conjugate to γ_{αβ} is π^{αβ} = -√h (K^{αβ} - γ^{αβ} K), where K^{αβ} = ½ ∂_t γ^{αβ} and ∂_t ≡ e^{-φ} ∂_0 is the chronometrically invariant derivative with respect to time. The Wheeler-DeWitt equation for the wave function Ψ is derived. For a stationary space-time, such as the Kerr metric, π^{αβ} vanishes, implying that there is then no dynamics. The most symmetric, chronometrically invariant space, obtained after setting φ = γ_α = 0, is R^α_β = -λ(t) δ^α_β, where δ^α_β is the (constant) Kronecker delta and the space has curvature k. From the Friedmann and Raychaudhuri equations, we find that λ is constant only if k = 1 and the source is a perfect fluid of energy density ρ and pressure p = (γ-1)ρ, with adiabatic index γ = 2/3, which is the value for a random ensemble of strings, thus yielding a three-dimensional de Sitter space embedded in four-dimensional space-time. Furthermore, Ψ is only invariant under the time-reversal operator T if γ = 2/(2n-1), where n is a positive integer, the first two values n = 1, 2 defining the high-temperature and low-temperature limits ρ ∝ T^{±2}, respectively, of the heterotic superstring theory, which are thus dual to one another in the sense T ↔ 1/(2π²α′T).

  14. Position-invariant, rotation-invariant, and scale-invariant process for binary image recognition.

    Science.gov (United States)

    Levkovitz, J; Oron, E; Tur, M

    1997-05-10

    A novel recognition process is presented that is invariant under position, rotation, and scale changes. The recognition process is based on the Fang-Häusler transform [Appl. Opt. 29, 704 (1990)] and is applied to the autoconvolved image, rather than to the image itself. This makes the recognition process sensitive not only to the image histogram but also to its detailed pattern, resulting in a more reliable process that is also applicable to binary images. The proposed recognition process is demonstrated, by use of a fast algorithm, on several types of binary images with a real transform kernel, which contains amplitude, as well as phase, information. Good recognition is achieved for both synthetic and scanned images. In addition, it is shown that the Fang-Häusler transform is also invariant under a general affine transformation of the spatial coordinates.
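The Fang-Häusler-based process itself is not reproduced here, but the same position/rotation/scale invariance for binary images is classically achieved with moment invariants. The sketch below computes the first Hu invariant from normalized central moments; it is a well-known alternative technique, not the method of this paper, and the function name is an assumption.

```python
import numpy as np

def hu1(img):
    # First Hu moment invariant of a binary image: eta20 + eta02,
    # built from central moments (translation-invariant) normalized
    # by m00 (scale-invariant); the sum is also rotation-invariant.
    img = np.asarray(img, float)
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m00 = img.sum()
    cy = (ys * img).sum() / m00               # centroid row
    cx = (xs * img).sum() / m00               # centroid column
    mu20 = ((xs - cx) ** 2 * img).sum()       # central moments
    mu02 = ((ys - cy) ** 2 * img).sum()
    eta20, eta02 = mu20 / m00 ** 2, mu02 / m00 ** 2  # normalized
    return eta20 + eta02
```

For discrete images the translation and 90-degree-rotation invariances are exact, while general rotations and rescalings hold up to sampling error.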

  15. Lorentz and Poincaré invariance 100 years of relativity

    CERN Document Server

    Hsu Jong Ping

    2001-01-01

    This collection of papers provides a broad view of the development of Lorentz and Poincaré invariance and spacetime symmetry throughout the past 100 years. The issues explored in these papers include: (1) formulations of relativity theories in which the speed of light is not a universal constant but which are consistent with the four-dimensional symmetry of the Lorentz and Poincaré groups and with experimental results, (2) analyses and discussions by Reichenbach concerning the concepts of simultaneity and physical time from a philosophical point of view, and (3) results achieved by the union o

  16. Scale invariance and universality of economic fluctuations

    Science.gov (United States)

    Stanley, H. E.; Amaral, L. A. N.; Gopikrishnan, P.; Plerou, V.

    2000-08-01

    In recent years, physicists have begun to apply concepts and methods of statistical physics to study economic problems, and the neologism “econophysics” is increasingly used to refer to this work. Much recent work is focused on understanding the statistical properties of time series. One reason for this interest is that economic systems are examples of complex interacting systems for which a huge amount of data exist, and it is possible that economic time series viewed from a different perspective might yield new results. This manuscript is a brief summary of a talk that was designed to address the question of whether two of the pillars of the field of phase transitions and critical phenomena - scale invariance and universality - can be useful in guiding research on economics. We shall see that while scale invariance has been tested for many years, universality is relatively less frequently discussed. This article reviews the results of two recent studies - (i) The probability distribution of stock price fluctuations: Stock price fluctuations occur in all magnitudes, in analogy to earthquakes - from tiny fluctuations to drastic events, such as market crashes. The distribution of price fluctuations decays with a power-law tail well outside the Lévy stable regime and describes fluctuations that differ in size by as much as eight orders of magnitude. (ii) Quantifying business firm fluctuations: We analyze the Compustat database comprising all publicly traded United States manufacturing companies within the years 1974-1993. We find that the distributions of growth rates are different for different bins of firm size, with a width that varies inversely with a power of firm size. Similar variation is found for other complex organizations, including country size, university research budget size, and size of species of bird populations.
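A standard tool for quantifying such power-law tails is the Hill estimator, which infers the tail exponent alpha in P(X > x) ~ x^(-alpha) from the k largest order statistics. The sketch below is a textbook implementation offered for illustration; it is not the estimation procedure used in the studies summarized above.

```python
import numpy as np

def hill_alpha(x, k):
    # Hill estimator of the tail exponent alpha for data whose
    # survival function decays like x^(-alpha), using the k largest
    # observations relative to the (k+1)-th largest.
    xs = np.sort(np.asarray(x, float))[::-1]   # descending order
    logs = np.log(xs[:k] / xs[k])              # log-spacings in the tail
    return 1.0 / logs.mean()
```

The choice of k trades bias against variance: too small and the estimate is noisy, too large and observations outside the power-law regime contaminate it.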

  17. Great ape gestures: intentional communication with a rich set of innate signals.

    Science.gov (United States)

    Byrne, R W; Cartmill, E; Genty, E; Graham, K E; Hobaiter, C; Tanner, J

    2017-07-01

    Great apes give gestures deliberately and voluntarily, in order to influence particular target audiences, whose direction of attention they take into account when choosing which type of gesture to use. These facts make the study of ape gesture directly relevant to understanding the evolutionary precursors of human language; here we present an assessment of ape gesture from that perspective, focusing on the work of the "St Andrews Group" of researchers. Intended meanings of ape gestures are relatively few and simple. As with human words, ape gestures often have several distinct meanings, which are effectively disambiguated by behavioural context. Compared to the signalling of most other animals, great ape gestural repertoires are large. Because of this, and the relatively small number of intended meanings they achieve, ape gestures are redundant, with extensive overlaps in meaning. The great majority of gestures are innate, in the sense that the species' biological inheritance includes the potential to develop each gestural form and use it for a specific range of purposes. Moreover, the phylogenetic origin of many gestures is relatively old, since gestures are extensively shared between different genera in the great ape family. Acquisition of an adult repertoire is a process of first exploring the innate species potential for many gestures and then gradual restriction to a final (active) repertoire that is much smaller. No evidence of syntactic structure has yet been detected.

  18. Gesture comprehension, knowledge and production in Alzheimer's disease.

    Science.gov (United States)

    Rousseaux, M; Rénier, J; Anicet, L; Pasquier, F; Mackowiak-Cordoliani, M A

    2012-07-01

    Although apraxia is a typical consequence of Alzheimer's disease (AD), the profile of apraxic impairments is still subject to debate. Here, we analysed apraxia components in patients with AD with mild-to-moderate or moderately severe dementia. Thirty-one patients were included. We first evaluated simple gestures, that is, the imitation of finger and hand configurations, symbolic gestures (recognition, production on verbal command and imitation), pantomimes (recognition, production on verbal command, imitation and production with the object), general knowledge and complex gestures (tool-object association, function-tool association, production of complex actions and knowledge about action sequences). Tests for dementia (Mini Mental State Examination and the Dementia Rating Scale), language disorders, visual agnosia and executive function were also administered. Compared with controls, patients showed significant difficulties (P ≤ 0.01) in subtests relating to simple gestures (except for the recognition and imitation of symbolic gestures). General knowledge about tools, objects and action sequences was less severely impaired. Performance was frequently correlated with the severity of dementia. Multiple-case analyses revealed that (i) the frequency of apraxia depended on the definition used, (ii) ideomotor apraxia was more frequent than ideational apraxia, (iii) conceptual difficulties were slightly more frequent than production difficulties in the early stage of AD and (iv) difficulties in gesture recognition were frequent (especially for pantomimes). Patients with AD can clearly show gesture apraxia from the mild-moderate stage of dementia onwards. Recognition and imitation disorders are relatively frequent (especially for pantomimes). We did not find conceptual difficulties to be the main problem in early-stage AD. © 2012 The Author(s) European Journal of Neurology © 2012 EFNS.

  19. Spectral-Spatial Scale Invariant Feature Transform for Hyperspectral Images.

    Science.gov (United States)

    Al-Khafaji, Suhad Lateef; Zhou, Jun; Zia, Ali; Liew, Alan Wee-Chung

    2017-09-04

    Spectral-spatial feature extraction is an important task in hyperspectral image processing. In this paper we propose a novel method to extract distinctive invariant features from hyperspectral images for registration of hyperspectral images with different spectral conditions. Spectral condition means images are captured with different incident lights, viewing angles, or using different hyperspectral cameras. In addition, spectral condition includes images of objects with the same shape but different materials. This method, which is named Spectral-Spatial Scale Invariant Feature Transform (SS-SIFT), explores both spectral and spatial dimensions simultaneously to extract spectral and geometric transformation invariant features. Similar to the classic SIFT algorithm, SS-SIFT consists of keypoint detection and descriptor construction steps. Keypoints are extracted from spectral-spatial scale space and are detected from extrema after 3D difference of Gaussian is applied to the data cube. Two descriptors are proposed for each keypoint by exploring the distribution of spectral-spatial gradient magnitude in its local 3D neighborhood. The effectiveness of the SS-SIFT approach is validated on images collected in different light conditions, different geometric projections, and using two hyperspectral cameras with different spectral wavelength ranges and resolutions. The experimental results show that our method generates robust invariant features for spectral-spatial image matching.
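The keypoint-detection step described above (scale space, 3D difference of Gaussians, extrema detection in the data cube) can be sketched as follows. This is a toy illustration, not the authors' SS-SIFT implementation: the sigma ladder, threshold, and 3x3x3 extrema neighborhood are simplifying assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter, minimum_filter

def dog_keypoints_3d(cube, sigmas=(1.0, 1.6, 2.56), thresh=0.02):
    """Toy SS-SIFT-style keypoint detection on a hyperspectral cube
    (rows, cols, bands): build a 3D Gaussian scale space, take
    differences of Gaussians, and keep voxels that are local extrema
    of the DoG response within a 3x3x3 spectral-spatial neighborhood."""
    blurred = [gaussian_filter(cube.astype(float), s) for s in sigmas]
    keypoints = []
    for d in (blurred[1] - blurred[0], blurred[2] - blurred[1]):
        is_max = d == maximum_filter(d, size=3)
        is_min = d == minimum_filter(d, size=3)
        mask = (is_max | is_min) & (np.abs(d) > thresh)
        keypoints.extend(zip(*np.nonzero(mask)))
    return keypoints

# Synthetic cube with a single bright spectral-spatial blob at (16, 16, 8)
cube = np.zeros((32, 32, 16))
cube[16, 16, 8] = 1.0
kps = dog_keypoints_3d(cube)
print(len(kps))
```

A full implementation would add multi-octave scale spaces, sub-voxel refinement, and the spectral-spatial gradient descriptors the paper proposes.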

  20. THE CONTRIBUTION OF GESTURES TO PERSONAL BRANDING

    Directory of Open Access Journals (Sweden)

    Brînduşa-Mariana Amălăncei

    2015-07-01

    A form of (self-)promotion but also an authentic strategic choice, the personal brand has become a topical preoccupation of marketing specialists. Personal branding or self-marketing represents an innovative concept that associates the efficiency of personal development with the effectiveness of communication and marketing techniques adapted to the individual, and that comprises the entire collection of techniques allowing the identification and promotion of the self. The main objective is clear communication with regard to personal identity, regardless of the method used, so that it gives uniqueness and offers a competitive advantage. Although online promotion is increasingly gaining ground in the creation of a personal brand, an individual’s verbal and nonverbal behaviour represent very important differentiating elements. Starting from the premise that gestures often complement, anticipate, substitute or contradict the verbal, we endeavour to highlight a number of meanings that can be attributed to the various body movements and that can successfully contribute to the creation of a powerful personal brand.

  1. Hegel’s Gesture Towards Radical Cosmopolitanism

    Directory of Open Access Journals (Sweden)

    Shannon Brincat

    2009-09-01

    This is a preliminary argument of a much larger research project inquiring into the relation between Hegel’s philosophical system and the project of emancipation in Critical International Relations Theory. Specifically, the paper examines how Hegel’s theory of recognition gestures towards a form of radical cosmopolitanism in world politics to ensure the conditions of rational freedom for all humankind. Much of the paper is a ground-clearing exercise defining what is ‘living’ in Hegel’s thought for emancipatory approaches in world politics, to borrow from Croce’s now famous question. It focuses on Hegel’s unique concept of freedom, which places recognition as central in the formation of self-consciousness and therefore as a key determinant in the conditions necessary for human freedom to emerge in political community. While further research is needed to ascertain the precise relationship between Hegel’s recognition theoretic, emancipation and cosmopolitanism, it is contended that the intersubjective basis of Hegel’s concept of freedom through recognition necessitates some form of radical cosmopolitanism that ensures successful processes of recognition between all peoples, the precise institutional form of which remains unspecified.

  2. Quantum Weyl invariance and cosmology

    Energy Technology Data Exchange (ETDEWEB)

    Dabholkar, Atish, E-mail: atish@ictp.it [International Centre for Theoretical Physics, ICTP-UNESCO, Strada Costiera 11, Trieste 34151 (Italy); Sorbonne Universités, UPMC Univ Paris 06, CNRS UMR 7589, LPTHE, F-75005, Paris (France)

    2016-09-10

    Equations for cosmological evolution are formulated in a Weyl invariant formalism to take into account possible Weyl anomalies. Near two dimensions, the renormalized cosmological term leads to a nonlocal energy-momentum tensor and a slowly decaying vacuum energy. A natural generalization to four dimensions implies a quantum modification of Einstein field equations at long distances. It offers a new perspective on time-dependence of couplings and naturalness with potentially far-reaching consequences for the cosmological constant problem, inflation, and dark energy.

  3. Gesture is more effective than spatial language in encoding spatial information.

    Science.gov (United States)

    So, Wing-Chee; Shum, Priscilla Lok-Chee; Wong, Miranda Kit-Yi

    2015-01-01

    The present research investigates whether producing gestures with and without speech facilitates route learning at different levels of route complexity and in learners with different levels of spatial skills. It also examines whether the facilitation effect of gesture is stronger than that of spatial language. Adults studied routes with 10, 13, and 16 steps and reconstructed them with sticks, either without rehearsal or after rehearsal by producing gestures with speech, gestures alone, or speech only. For all levels of route complexity and spatial skills, participants who were encouraged to gesture (with or without speech) during rehearsal had the best recall. Additionally, we found that number of steps rehearsed in gesture, but not that rehearsed in speech, predicted the recall accuracy. Thus, gesture is more effective than spatial language in encoding spatial information, and thereby enhancing spatial recall. These results further corroborate the beneficial nature of gesture in processing spatial information.

  4. MODIFICATIONS AND FREQUENCY OCCURRENCE OF GESTURES IN NS - NS AND NNS - NS DYADS

    Directory of Open Access Journals (Sweden)

    Juliana Wijaya

    1999-01-01

    In this study, I investigate cross-linguistic differences and similarities in speech-associated gesture in NS (Native Speaker) - NS and NNS (Nonnative Speaker) - NS dyads when they are telling a narrative. The gesture production of Indonesian native speakers when communicating in Indonesian (L1) and in English (L2) was coded and assessed based on McNeill's model of overall gesture units. The Indonesian speakers' gesture modification when interacting in English was measured by the size of the gestures. The results indicate that Indonesian native speakers gesture more when they communicate in English and modify their gestures by making them bigger and therefore more noticeable to their interlocutors. They use gestures as a communication strategy to help interlocutors comprehend their ideas.

  5. Preserved Imitation of Known Gestures in Children with High-Functioning Autism

    Science.gov (United States)

    Carmo, Joana C.; Rumiati, Raffaella I.; Siugzdaite, Roma; Brambilla, Paolo

    2013-01-01

    It has been suggested that children with autism are particularly deficient at imitating novel gestures or gestures without goals. In the present study, we asked high-functioning autistic children and age-matched typically developing children to imitate several types of gestures that could be either already known or novel to them. Known gestures either conveyed a communicative meaning (i.e., intransitive) or involved the use of objects (i.e., transitive). We observed a significant interaction between gesture type and group of participants, with children with autism performing known gestures better than novel gestures. However, imitation of intransitive and transitive gestures did not differ across groups. These findings are discussed in light of a dual-route model for action imitation. PMID:24062956

  6. Combined Dynamic Time Warping with Multiple Sensors for 3D Gesture Recognition

    National Research Council Canada - National Science Library

    Hyo-Rim Choi; TaeYong Kim

    2017-01-01

    .... Several studies have been undertaken in the field of gesture recognition; however, gesture recognition was conducted based on data captured from various independent sensors, which rendered the capture and combination of real-time data complicated...
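The core of a template-based gesture recognizer like the one this record describes is the dynamic time warping distance, which aligns two sequences that differ in speed. A minimal sketch on 1-D sequences (the classic textbook recurrence, not this paper's multi-sensor variant):

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D sequences.
    D[i, j] is the minimal cumulative cost of aligning a[:i] with b[:j]."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # step pattern: insertion, deletion, or match
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A gesture template matches a time-stretched copy of itself better
# than an unrelated movement:
template = [0, 1, 2, 3, 2, 1, 0]
stretched = [0, 0, 1, 1, 2, 2, 3, 3, 2, 2, 1, 1, 0, 0]
other = [3, 3, 3, 0, 0, 0, 3]
print(dtw_distance(template, stretched))  # prints 0.0
print(dtw_distance(template, stretched) < dtw_distance(template, other))  # True
```

For multi-sensor data the per-step cost would be a distance between feature vectors rather than a scalar difference.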

  7. Perceived gesture dynamics in nonverbal expression of emotion.

    Science.gov (United States)

    Dael, Nele; Goudbeek, Martijn; Scherer, K R

    2013-01-01

    Recent judgment studies have shown that people are able to fairly correctly attribute emotional states to others' bodily expressions. It is, however, not clear which movement qualities are salient, and how this applies to emotional gesture during speech-based interaction. In this study we investigated how the expression of emotions that vary on three major emotion dimensions-that is, arousal, valence, and potency-affects the perception of dynamic arm gestures. Ten professional actors enacted 12 emotions in a scenario-based social interaction setting. Participants (N = 43) rated all emotional expressions with muted sound and blurred faces on six spatiotemporal characteristics of gestural arm movement that were found to be related to emotion in previous research (amount of movement, movement speed, force, fluency, size, and height/vertical position). Arousal and potency were found to be strong determinants of the perception of gestural dynamics, whereas the differences between positive or negative emotions were less pronounced. These results confirm the importance of arm movement in communicating major emotion dimensions and show that gesture forms an integrated part of multimodal nonverbal emotion communication.

  8. Gesture development in toddlers with an older sibling with autism

    Science.gov (United States)

    LeBarton, Eve Sauer; Iverson, Jana M.

    2016-01-01

    Background Nonverbal communication deficits are characteristic of autism spectrum disorder (ASD) and have been reported in some later-born siblings of children with ASD (heightened-risk (HR) children). However, little work has investigated gesture as a function of language ability, which varies greatly in this population. Aims This longitudinal study characterizes gesture in HR children and examines differences related to diagnostic outcome (ASD, language delay, no diagnosis) and age. Methods & Procedures We coded communicative gesture use for 29 HR children at ages 2 and 3 years during interactions with a caregiver at home. Outcomes & Results Children in the ASD group produced fewer gestures than their HR peers at 2 years, though large individual differences were observed within each subgroup at both ages. In addition, reliance on particular types of gestures varied with age and outcome. Both ASD and language delay children exhibited a pattern of reduced pointing relative to their no diagnosis peers. Conclusions & Implications Similarities and differences exist between communication in HR infants with language delay and their HR peers, reinforcing our understanding of links between verbal and nonverbal communication in populations at risk for language delay. PMID:26343932

  9. Spontaneous gesture and spatial language: Evidence from focal brain injury

    Science.gov (United States)

    Göksun, Tilbe; Lehet, Matthew; Malykhina, Katsiaryna; Chatterjee, Anjan

    2015-01-01

    People often use spontaneous gestures when communicating spatial information. We investigated focal brain-injured individuals to test the hypotheses that (1) naming of the motion event components manner and path (represented by verbs and prepositions, respectively, in English) can be selectively impaired, and (2) gestures compensate for impaired naming. Patients with left or right hemisphere damage (LHD or RHD) and elderly control participants were asked to describe motion events (e.g., running across) depicted in brief videos. Damage to the left posterior middle frontal gyrus, left inferior frontal gyrus, and left anterior superior temporal gyrus (aSTG) produced impairments in naming paths of motion; lesions to the left caudate and adjacent white matter produced impairments in naming manners of motion. While the frequency of spontaneous gestures was low, lesions to the left aSTG significantly correlated with greater production of path gestures. These findings suggest that naming of prepositions and verbs can be impaired separately, and that gesture production compensates for naming impairments when damage involves the left aSTG. PMID:26283001

  10. Gesturing Meaning: Non-action Words Activate the Motor System.

    Science.gov (United States)

    Bach, Patric; Griffiths, Debra; Weigelt, Matthias; Tipper, Steven P

    2010-01-01

    Across cultures, speakers produce iconic gestures, which add - through the movement of the speakers' hands - a pictorial dimension to the speakers' message. These gestures capture not only the motor content but also the visuospatial content of the message. Here, we provide the first evidence for a direct link between the representation of perceptual information and the motor system that can account for these observations. Across four experiments, participants' hand movements captured both shapes that were directly perceived and shapes that were only implicitly activated by unrelated semantic judgments of object words. These results were obtained even though the objects were not associated with any motor behaviors that would match the gestures the participants had to produce. Moreover, implied shape affected not only gesture selection processes but also their actual execution - as measured by the shape of hand motion through space - revealing intimate links between implied shape representation and motor output. The results are discussed in terms of ideomotor theories of action and perception, and provide one avenue for explaining the ubiquitous phenomenon of iconic gestures.

  11. Web-based interactive drone control using hand gesture

    Science.gov (United States)

    Zhao, Zhenfei; Luo, Hao; Song, Guang-Hua; Chen, Zhou; Lu, Zhe-Ming; Wu, Xiaofeng

    2018-01-01

    This paper develops a drone control prototype based on web technology with the aid of hand gestures. The uplink control commands and downlink data (e.g., video) are transmitted over WiFi, and all information exchange is realized on the web. Control commands are translated from various predetermined hand gestures. Specifically, the hardware of this friendly interactive control system is composed of a quadrotor drone, a computer vision-based hand gesture sensor, and a cost-effective computer. The software is simplified as a web-based user interface program. Aided by natural hand gestures, this system significantly reduces the complexity of traditional human-computer interaction, making remote drone operation more intuitive. Meanwhile, a web-based automatic control mode is provided in addition to the hand gesture control mode. For both operation modes, no extra application program needs to be installed on the computer. Experimental results demonstrate the effectiveness and efficiency of the proposed system, including control accuracy, operation latency, etc. This system can be used in many applications, such as controlling a drone in a global positioning system (GPS)-denied environment or by handlers without professional drone control knowledge, since it is easy to get started.

  12. Proposing a speech to gesture translation architecture for Spanish deaf people.

    OpenAIRE

    San Segundo Hernández, Rubén; Montero Martínez, Juan Manuel; Macías Guarasa, Javier; Córdoba Herralde, Ricardo de; Ferreiros López, Javier; PARDO MUÑOZ, JOSÉ MANUEL

    2008-01-01

    This article describes an architecture for translating speech into Spanish Sign Language (SSL). The architecture proposed is made up of four modules: speech recognizer, semantic analysis, gesture sequence generation and gesture playing. For the speech recognizer and the semantic analysis modules, we use software developed by IBM and CSLR (Center for Spoken Language Research at University of Colorado), respectively. Gesture sequence generation and gesture animation are the modules on which we ...

  13. Lexical learning in mild aphasia: gesture benefit depends on patholinguistic profile and lesion pattern.

    Science.gov (United States)

    Kroenke, Klaus-Martin; Kraft, Indra; Regenbrecht, Frank; Obrig, Hellmuth

    2013-01-01

    Gestures accompany speech and enrich human communication. When aphasia interferes with verbal abilities, gestures become even more relevant, compensating for and/or facilitating verbal communication. However, small-scale clinical studies yielded diverging results with regard to a therapeutic gesture benefit for lexical retrieval. Based on recent functional neuroimaging results, delineating a speech-gesture integration network for lexical learning in healthy adults, we hypothesized that the commonly observed variability may stem from differential patholinguistic profiles in turn depending on lesion pattern. Therefore we used a controlled novel word learning paradigm to probe the impact of gestures on lexical learning, in the lesioned language network. Fourteen patients with chronic left hemispheric lesions and mild residual aphasia learned 30 novel words for manipulable objects over four days. Half of the words were trained with gestures while the other half were trained purely verbally. For the gesture condition, root words were visually presented (e.g., Klavier [piano]), followed by videos of the corresponding gestures and the auditory presentation of the novel words (e.g., /krulo/). Participants had to repeat pseudowords and simultaneously reproduce gestures. In the verbal condition no gesture video was shown and participants only repeated pseudowords orally. Correlational analyses confirmed that gesture benefit depends on the patholinguistic profile: lesser lexico-semantic impairment correlated with better gesture-enhanced learning. Conversely, largely preserved segmental-phonological capabilities correlated with better purely verbal learning. Moreover, structural MRI analysis disclosed differential lesion patterns, most interestingly suggesting that integrity of the left anterior temporal pole predicted gesture benefit. Thus largely preserved semantic capabilities and relative integrity of a semantic integration network are prerequisites for successful use of

  14. A common functional neural network for overt production of speech and gesture.

    Science.gov (United States)

    Marstaller, L; Burianová, H

    2015-01-22

    The perception of co-speech gestures, i.e., hand movements that co-occur with speech, has been investigated by several studies. The results show that the perception of co-speech gestures engages a core set of frontal, temporal, and parietal areas. However, no study has yet investigated the neural processes underlying the production of co-speech gestures. Specifically, it remains an open question whether Broca's area is central to the coordination of speech and gestures as has been suggested previously. The objective of this study was to use functional magnetic resonance imaging to (i) investigate the regional activations underlying overt production of speech, gestures, and co-speech gestures, and (ii) examine functional connectivity with Broca's area. We hypothesized that co-speech gesture production would activate frontal, temporal, and parietal regions that are similar to areas previously found during co-speech gesture perception and that both speech and gesture as well as co-speech gesture production would engage a neural network connected to Broca's area. Whole-brain analysis confirmed our hypothesis and showed that co-speech gesturing did engage brain areas that form part of networks known to subserve language and gesture. Functional connectivity analysis further revealed a functional network connected to Broca's area that is common to speech, gesture, and co-speech gesture production. This network consists of brain areas that play essential roles in motor control, suggesting that the coordination of speech and gesture is mediated by a shared motor control network. Our findings thus lend support to the idea that speech can influence co-speech gesture production on a motoric level. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.

  15. A Local Galilean Invariant Thermostat.

    Science.gov (United States)

    Groot, Robert D

    2006-05-01

    The thermostat introduced recently by Stoyanov and Groot (J. Chem. Phys. 2005, 122, 114112) is analyzed for inhomogeneous systems. This thermostat has one global feature, because the mean temperature used to drive the system toward equilibrium is a global average. The consequence is that the thermostat locally conserves energy rather than temperature. Thus, local temperature variations can be long-lived, although they do average out by thermal diffusion. To obtain a faster local temperature equilibration, a truly local thermostat must be introduced. To conserve momentum and, hence, to simulate hydrodynamic interactions, the thermostat must be Galilean invariant. Such a local Galilean invariant thermostat is studied here. It is shown that, by defining a local temperature on each particle, the ensemble is locally isothermal. The local temperature is obtained from a local square velocity average around each particle. Simulations on the ideal gas show that this local Nosé-Hoover algorithm has a similar artifact as dissipative particle dynamics: the ideal gas pair correlation function is slightly distorted. This is attributed to the fact that the thermostat compensates fluctuations that are natural within a small cluster of particles. When the cutoff range rc for the square velocity average is increased, systematic errors decrease proportionally to rc^(-3/2); hence, the systematic error can be made arbitrarily small.
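The local, Galilean invariant temperature definition underlying this thermostat can be sketched as below: for each particle, average the squared peculiar velocities (velocities relative to the local mean flow) of neighbours within a cutoff rc. This is only an illustration of the temperature measurement, not the Stoyanov-Groot integration algorithm; particle numbers, box size, and rc are arbitrary assumptions (with m = kB = 1).

```python
import numpy as np

def local_temperatures(pos, vel, rc, box):
    """Local kinetic temperature per particle from the square velocity
    average within cutoff rc, taken relative to the local mean velocity.
    Subtracting the local mean makes the estimate invariant under a
    uniform Galilean boost of the whole system."""
    n, dim = pos.shape
    temps = np.empty(n)
    for i in range(n):
        d = pos - pos[i]
        d -= box * np.round(d / box)            # minimum-image convention
        mask = (d ** 2).sum(axis=1) < rc ** 2   # neighbourhood (incl. self)
        v_loc = vel[mask]
        v_mean = v_loc.mean(axis=0)             # local streaming velocity
        # equipartition: <(v - v_mean)^2> summed over components = dim * kB*T
        temps[i] = ((v_loc - v_mean) ** 2).sum(axis=1).mean() / dim
    return temps

rng = np.random.default_rng(1)
pos = rng.uniform(0, 10, (500, 3))
vel = rng.normal(0, 1, (500, 3))        # kB*T = 1 per component
boosted = vel + np.array([5.0, 0, 0])   # uniform Galilean boost
t0 = local_temperatures(pos, vel, rc=2.0, box=10.0)
t1 = local_temperatures(pos, boosted, rc=2.0, box=10.0)
print(np.allclose(t0, t1))  # True: the boost leaves local temperatures unchanged
```

A thermostat built on this estimate would then rescale or drag each particle's peculiar velocity toward the target temperature, conserving local momentum.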

  16. A Coding System with Independent Annotations of Gesture Forms and Functions during Verbal Communication: Development of a Database of Speech and GEsture (DoSaGE)

    Science.gov (United States)

    Kong, Anthony Pak-Hin; Law, Sam-Po; Kwan, Connie Ching-Yin; Lai, Christy; Lam, Vivian

    2014-01-01

    Gestures are commonly used together with spoken language in human communication. One major limitation of gesture investigations in the existing literature lies in the fact that the coding of forms and functions of gestures has not been clearly differentiated. This paper first described a recently developed Database of Speech and GEsture (DoSaGE) based on independent annotation of gesture forms and functions among 119 neurologically unimpaired right-handed native speakers of Cantonese (divided into three age and two education levels), and presented findings of an investigation examining how gesture use was related to age and linguistic performance. Consideration of these two factors, for which normative data are currently very limited or lacking in the literature, is relevant and necessary when one evaluates gesture employment among individuals with and without language impairment. Three speech tasks, including monologue of a personally important event, sequential description, and story-telling, were used for elicitation. The EUDICO Linguistic ANnotator (ELAN) software was used to independently annotate each participant’s linguistic information of the transcript, forms of gestures used, and the function for each gesture. About one-third of the subjects did not use any co-verbal gestures. While the majority of gestures were non-content-carrying, which functioned mainly for reinforcing speech intonation or controlling speech flow, the content-carrying ones were used to enhance speech content. Furthermore, individuals who are younger or linguistically more proficient tended to use fewer gestures, suggesting that normal speakers gesture differently as a function of age and linguistic performance. PMID:25667563

  17. A Coding System with Independent Annotations of Gesture Forms and Functions during Verbal Communication: Development of a Database of Speech and GEsture (DoSaGE).

    Science.gov (United States)

    Kong, Anthony Pak-Hin; Law, Sam-Po; Kwan, Connie Ching-Yin; Lai, Christy; Lam, Vivian

    2015-03-01

    Gestures are commonly used together with spoken language in human communication. One major limitation of gesture investigations in the existing literature lies in the fact that the coding of forms and functions of gestures has not been clearly differentiated. This paper first described a recently developed Database of Speech and GEsture (DoSaGE) based on independent annotation of gesture forms and functions among 119 neurologically unimpaired right-handed native speakers of Cantonese (divided into three age and two education levels), and presented findings of an investigation examining how gesture use was related to age and linguistic performance. Consideration of these two factors, for which normative data are currently very limited or lacking in the literature, is relevant and necessary when one evaluates gesture employment among individuals with and without language impairment. Three speech tasks, including monologue of a personally important event, sequential description, and story-telling, were used for elicitation. The EUDICO Linguistic ANnotator (ELAN) software was used to independently annotate each participant's linguistic information of the transcript, forms of gestures used, and the function for each gesture. About one-third of the subjects did not use any co-verbal gestures. While the majority of gestures were non-content-carrying, which functioned mainly for reinforcing speech intonation or controlling speech flow, the content-carrying ones were used to enhance speech content. Furthermore, individuals who are younger or linguistically more proficient tended to use fewer gestures, suggesting that normal speakers gesture differently as a function of age and linguistic performance.

  18. Gesture in Multiparty Interaction: A Study of Embodied Discourse in Spoken English and American Sign Language

    Science.gov (United States)

    Shaw, Emily P.

    2013-01-01

    This dissertation is an examination of gesture in two game nights: one in spoken English between four hearing friends and another in American Sign Language between four Deaf friends. Analyses of gesture have shown there exists a complex integration of manual gestures with speech. Analyses of sign language have implicated the body as a medium…

  19. Review of Techniques, Technologies, Methodologies, and Implementations of Vision-Based Hand Gesture Recognition

    OpenAIRE

    Sunyoto, Andi; Harjoko, Agus

    2014-01-01

    As computer technology and its use in society continue to develop, conventional interaction models (mouse and keyboard) become an obstacle to the flow of information between humans and computers. Vision-based gesture recognition offers a natural tool to support efficient and intuitive human-computer interaction. Gesture recognition aims to recognize the meaning of human movement expressions, including those of the hands, arms, face, head, and/or ...

  20. Methodological Reflections on Gesture Analysis in Second Language Acquisition and Bilingualism Research

    Science.gov (United States)

    Gullberg, Marianne

    2010-01-01

    Gestures, i.e. the symbolic movements that speakers perform while they speak, form a closely interconnected system with speech, where gestures serve both addressee-directed ("communicative") and speaker-directed ("internal") functions. This article aims (1) to show that a combined analysis of gesture and speech offers new ways to address…

  1. [Verbal and gestural communication in interpersonal interaction with Alzheimer's disease patients].

    Science.gov (United States)

    Schiaratura, Loris Tamara; Di Pastena, Angela; Askevis-Leherpeux, Françoise; Clément, Sylvain

    2015-03-01

    Communication can be defined as a verbal and non-verbal exchange of thoughts and emotions. While the verbal communication deficit in Alzheimer's disease is well documented, very little is known about gestural communication, especially in interpersonal situations. This study examines the production of gestures and its relation to verbal aspects of communication. Three patients suffering from moderately severe Alzheimer's disease were compared to three healthy adults. Each one was given a series of pictures and asked to explain which one she preferred and why. The interpersonal interaction was video recorded. Analyses concerned verbal production (quantity and quality) and gestures. Gestures were either non-representational (i.e., gestures of small amplitude punctuating speech or accentuating some parts of the utterance) or representational (i.e., referring to the object of the speech). Representational gestures were coded as iconic (depicting concrete aspects), metaphoric (depicting abstract meaning) or deictic (pointing toward an object). In comparison with healthy participants, patients showed a decrease in the quantity and quality of speech. Nevertheless, their production of gestures was always present. This pattern is in line with the conception that gestures and speech depend on different communication systems and seems inconsistent with the assumption of a parallel dissolution of gesture and speech. Moreover, analyzing the articulation between the verbal and gestural dimensions suggests that representational gestures may compensate for speech deficits. This underlines the importance of gestures in maintaining interpersonal communication.

  2. The Understanding and Use of Interpersonal Gestures by Autistic and Down's Syndrome Children.

    Science.gov (United States)

    Attwood, Anthony; And Others

    1988-01-01

    Regardless of diagnosis, 22 autistic adolescents, 21 Down's syndrome adolescents, and a sample of clinically normal preschoolers were all able to respond correctly to simple instrumental gestures (e.g., be quiet, come here). However, the ability to initiate such gestures varied, and no autistic adolescent ever used expressive gestures. (JW)

  3. Asymmetric dynamic attunement of speech and gestures as children learn: competing synergies?

    NARCIS (Netherlands)

    de Jonge-Hoekstra, Lisette; van der Steen, Steffie; van Geert, Paul; Cox, Ralf

    2016-01-01

    Gestures and speech are mostly well-aligned. However, during difficult tasks, gesture-speech mismatches occur. This study investigates the dynamic interplay between gestures and speech as 12 children perform a hands-on air pressure task. We applied Cross Recurrence Quantification Analysis on the

  4. Domestic Dogs Use Contextual Information and Tone of Voice when following a Human Pointing Gesture

    NARCIS (Netherlands)

    Scheider, Linda; Grassmann, Susanne; Kaminski, Juliane; Tomasello, Michael

    2011-01-01

    Domestic dogs are skillful at using the human pointing gesture. In this study we investigated whether dogs take contextual information into account when following pointing gestures, specifically, whether they follow human pointing gestures more readily in the context in which food has been found

  5. Who Did What to Whom? Children Track Story Referents First in Gesture

    Science.gov (United States)

    Stites, Lauren J.; Özçaliskan, Seyda

    2017-01-01

    Children achieve increasingly complex language milestones initially in gesture or in gesture+speech combinations before they do so in speech, from first words to first sentences. In this study, we ask whether gesture continues to be part of the language-learning process as children begin to develop more complex language skills, namely narratives.…

  6. Gesture analysis of students' majoring mathematics education in micro teaching process

    Science.gov (United States)

    Maldini, Agnesya; Usodo, Budi; Subanti, Sri

    2017-08-01

    In the learning process, especially in mathematics learning, the interaction between teachers and students is certainly noteworthy. Gestures and other spontaneous body movements appear in these interactions. Gesture is an important source of information because it supports oral communication, reduces ambiguity in understanding the concept/meaning of the material, and improves posture. This research uses an exploratory design to provide an initial illustration of the phenomenon. The goal of the research in this article is to describe the gestures of S1 and S2 students of mathematics education during the micro teaching process. To analyze the subjects' gestures, the researchers used McNeill's classification. The result is that the two subjects used 238 gestures in the micro teaching process as a means of conveying ideas and concepts in mathematics learning. During micro teaching, the subjects used four types of gestures, namely iconic gestures, deictic gestures, regulator gestures, and adapter gestures, to facilitate the delivery of the material being taught and communication with the listener. The variance in the gestures that appeared was due to the subjects using different gesture patterns to communicate their own mathematical ideas, so that the intensity of the gestures that appeared also differed.

  7. Parent-Child Gesture Use during Problem Solving in Autistic Spectrum Disorder

    Science.gov (United States)

    Medeiros, Kristen; Winsler, Adam

    2014-01-01

    This study examined the relationship between child language skills and parent and child gestures of 58 youths with and without an autism spectrum disorder (ASD) diagnosis. Frequencies and rates of total gesture use as well as five categories of gestures (deictic, conventional, beat, iconic, and metaphoric) were reliably coded during the…

  8. Modelling Gesture Use and Early Language Development in Autism Spectrum Disorder

    Science.gov (United States)

    Manwaring, Stacy S.; Mead, Danielle L.; Swineford, Lauren; Thurm, Audrey

    2017-01-01

    Background: Nonverbal communication abilities, including gesture use, are impaired in autism spectrum disorder (ASD). However, little is known about how common gestures may influence or be influenced by other areas of development. Aims: To examine the relationships between gesture, fine motor and language in young children with ASD compared with a…

  9. A User-Developed 3-D Hand Gesture Set for Human-Computer Interaction.

    Science.gov (United States)

    Pereira, Anna; Wachs, Juan P; Park, Kunwoo; Rempel, David

    2015-06-01

    The purpose of this study was to develop a lexicon for 3-D hand gestures for common human-computer interaction (HCI) tasks by considering usability and effort ratings. Recent technologies create an opportunity for developing a free-form 3-D hand gesture lexicon for HCI. Subjects (N = 30) with prior experience using 2-D gestures on touch screens performed 3-D gestures of their choice for 34 common HCI tasks and rated their gestures on preference, match, ease, and effort. Videos of the 1,300 generated gestures were analyzed for gesture popularity, order, and response times. Gesture hand postures were rated by the authors on biomechanical risk and fatigue. A final task gesture set is proposed based primarily on subjective ratings and hand posture risk. The different dimensions used for evaluating task gestures were not highly correlated and, therefore, measured different properties of the task-gesture match. A method is proposed for generating a user-developed 3-D gesture lexicon for common HCIs that involves subjective ratings and a posture risk rating for minimizing arm and hand fatigue. © 2014, Human Factors and Ergonomics Society.

  10. Baby Sign but Not Spontaneous Gesture Predicts Later Vocabulary in Children with Down Syndrome

    Science.gov (United States)

    Özçaliskan, Seyda; Adamson, Lauren B.; Dimitrova, Nevena; Bailey, Jhonelle; Schmuck, Lauren

    2016-01-01

    Early spontaneous gesture, specifically deictic gesture, predicts subsequent vocabulary development in typically developing (TD) children. Here, we ask whether deictic gesture plays a similar role in predicting later vocabulary size in children with Down Syndrome (DS), who have been shown to have difficulties in speech production, but strengths in…

  11. Prosodic Structure Shapes the Temporal Realization of Intonation and Manual Gesture Movements

    Science.gov (United States)

    Esteve-Gibert, Nuria; Prieto, Pilar

    2013-01-01

    Purpose: Previous work on the temporal coordination between gesture and speech found that the prominence in gesture coordinates with speech prominence. In this study, the authors investigated the anchoring regions in speech and pointing gesture that align with each other. The authors hypothesized that (a) in contrastive focus conditions, the…

  12. Using a social robot to teach gestural recognition and production in children with autism spectrum disorders.

    Science.gov (United States)

    So, Wing-Chee; Wong, Miranda Kit-Yi; Lam, Carrie Ka-Yee; Lam, Wan-Yi; Chui, Anthony Tsz-Fung; Lee, Tsz-Lok; Ng, Hoi-Man; Chan, Chun-Hung; Fok, Daniel Chun-Wing

    2017-07-04

    While it has been argued that children with autism spectrum disorders are responsive to robot-like toys, very little research has examined the impact of robot-based intervention on gesture use. These children have delayed gestural development. We used a social robot in two phases to teach them to recognize and produce eight pantomime gestures that expressed feelings and needs. Compared to the children in the wait-list control group (N = 6), those in the intervention group (N = 7) were more likely to recognize gestures and to gesture accurately in trained and untrained scenarios. They also generalized the acquired recognition (but not production) skills to human-to-human interaction. The benefits and limitations of robot-based intervention for gestural learning were highlighted. Implications for Rehabilitation: Compared to typically-developing children, children with autism spectrum disorders have delayed development of gesture comprehension and production. A robot-based intervention program was developed to teach children with autism spectrum disorders recognition (Phase I) and production (Phase II) of eight pantomime gestures that expressed feelings and needs. Children in the intervention group (but not in the wait-list control group) were able to recognize more gestures in both trained and untrained scenarios and generalize the acquired gestural recognition skills to human-to-human interaction. Similar findings were reported for gestural production, except that there was no strong evidence showing that children in the intervention group could produce gestures accurately in human-to-human interaction.

  13. Observing Iconic Gestures Enhances Word Learning in Typically Developing Children and Children with Specific Language Impairment

    Science.gov (United States)

    Vogt, Susanne; Kauschke, Christina

    2017-01-01

    Research has shown that observing iconic gestures helps typically developing children (TD) and children with specific language impairment (SLI) learn new words. So far, studies mostly compared word learning with and without gestures. The present study investigated word learning under two gesture conditions in children with and without language…

  14. Gestural Communication in Children with Autism Spectrum Disorders during Mother-Child Interaction

    Science.gov (United States)

    Mastrogiuseppe, Marilina; Capirci, Olga; Cuva, Simone; Venuti, Paola

    2015-01-01

    Children with autism spectrum disorders display atypical development of gesture production, and gesture impairment is one of the determining factors of autism spectrum disorder diagnosis. Despite the obvious importance of this issue for children with autism spectrum disorder, the literature on gestures in autism is scarce and contradictory. The…

  15. The Co-Development of Speech and Gesture in Children with Autism

    Science.gov (United States)

    Sowden, Hannah; Perkins, Mick; Clegg, Judy

    2008-01-01

    Recent interest in gesture has led to an understanding of the development of gesture and speech in typically developing young children. Research suggests that initially gesture and speech form two independent systems which combine together temporally and semantically before children enter the two-word period of language development. However,…

  16. Invariance of visual operations at the level of receptive fields.

    Directory of Open Access Journals (Sweden)

    Tony Lindeberg

    Full Text Available The brain is able to maintain a stable perception although the visual stimuli vary substantially on the retina due to geometric transformations and lighting variations in the environment. This paper presents a theory for achieving basic invariance properties already at the level of receptive fields. Specifically, the presented framework comprises (i) local scaling transformations caused by objects of different size and at different distances to the observer, (ii) locally linearized image deformations caused by variations in the viewing direction in relation to the object, (iii) locally linearized relative motions between the object and the observer and (iv) local multiplicative intensity transformations caused by illumination variations. The receptive field model can be derived by necessity from symmetry properties of the environment and leads to predictions about receptive field profiles in good agreement with receptive field profiles measured by cell recordings in mammalian vision. Indeed, the receptive field profiles in the retina, LGN and V1 are close to ideal to what is motivated by the idealized requirements. By complementing receptive field measurements with selection mechanisms over the parameters in the receptive field families, it is shown how true invariance of receptive field responses can be obtained under scaling transformations, affine transformations and Galilean transformations. Thereby, the framework provides a mathematically well-founded and biologically plausible model for how basic invariance properties can be achieved already at the level of receptive fields and support invariant recognition of objects and events under variations in viewpoint, retinal size, object motion and illumination. The theory can explain the different shapes of receptive field profiles found in biological vision, which are tuned to different sizes and orientations in the image domain as well as to different image velocities in space-time, from a
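The selection mechanism described in this abstract (choosing, among a family of receptive fields, the scale that maximises a normalised response) can be illustrated with a minimal 1-D sketch. The signals, scale range, and the simple t·L_xx normalisation below are simplifying assumptions for illustration, not Lindeberg's full framework:

```python
import numpy as np

def gaussian_blur(signal, t):
    """Smooth a 1-D signal by convolution with a Gaussian of variance t."""
    x = np.arange(-50, 51)
    g = np.exp(-x**2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)
    return np.convolve(signal, g, mode="same")

def selected_scale(signal, scales):
    """Scale selection: pick the scale maximising the magnitude of the
    scale-normalised second derivative t * L_xx at the signal centre."""
    centre = len(signal) // 2
    responses = []
    for t in scales:
        L = gaussian_blur(signal, t)
        L_xx = np.gradient(np.gradient(L))       # discrete second derivative
        responses.append(t * abs(L_xx[centre]))
    return scales[int(np.argmax(responses))]

x = np.arange(-100, 101, dtype=float)
scales = np.arange(1.0, 60.0, 1.0)
narrow = np.exp(-x**2 / (2 * 4.0))    # toy blob of variance 4
wide = np.exp(-x**2 / (2 * 16.0))     # toy blob of variance 16
# The selected scale grows with the size of the image structure,
# which is the essence of invariance via selection over scales:
assert selected_scale(narrow, scales) < selected_scale(wide, scales)
```

The same idea extends to orientation and velocity selection over the corresponding receptive field families.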

  17. Multispectral and hyperspectral images invariant to illumination

    OpenAIRE

    Yazdani Salekdeh, Amin

    2011-01-01

    In this thesis a novel method is proposed that makes use of multispectral and hyperspectral image data to generate a novel photometric-invariant spectral image. For RGB colour image, an illuminant-invariant image was constructed independent of the illuminant and shading. To generate this image either a set of calibration images was required, or entropy information from a single image was used. For spectral images we show that photometric-invariant image formation is in essence greatly simplif...

  18. Invariant texture segmentation via circular gabor filter

    OpenAIRE

    ZHANG, Jianguo; Tan, Tieniu

    2002-01-01

    In this paper, we focus on invariant texture segmentation, and propose a new method using circular Gabor filters (CGF) for rotation invariant texture segmentation. The traditional Gabor function is modified into a circular symmetric version. The rotation invariant texture features are achieved via the channel output of the CGF. A new scheme for the selection of Gabor parameters is also proposed for texture segmentation. Experiments show the efficacy of this method.
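The key property of a circularly symmetric Gabor kernel is that it depends only on the radial distance, so its response is unchanged under image rotation. A minimal NumPy sketch of such a kernel (the functional form and all parameter values are assumptions for illustration, not the authors' implementation):

```python
import numpy as np

def circular_gabor(size, sigma, F):
    """Circularly symmetric Gabor kernel: a Gaussian envelope modulated by
    a complex sinusoid of the radial distance r = sqrt(x^2 + y^2).
    Because the kernel depends only on r, it is rotation invariant.
    (Illustrative form; sigma and F are free parameters.)"""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    r = np.sqrt(x**2 + y**2)
    envelope = np.exp(-r**2 / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)
    return envelope * np.exp(2j * np.pi * F * r)

kernel = circular_gabor(size=31, sigma=4.0, F=0.1)
# Rotating the kernel by 90 degrees leaves it unchanged:
assert np.allclose(kernel, np.rot90(kernel))
```

Filtering a texture with such a kernel therefore yields features that do not change when the texture is rotated.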

  19. Character-based Recognition of Simple Word Gesture

    Directory of Open Access Journals (Sweden)

    Paulus Insap Santosa

    2013-11-01

    Full Text Available People with normal senses use spoken language to communicate with others. This method cannot be used by those with hearing and speech impairments. These two groups of people will have difficulty when they try to communicate with each other using their own languages. Sign language is not easy to learn, as there are various sign languages, and not many tutors are available. This research focused on simple word gesture recognition, based on the characters that form the word to be recognized. The method used for character recognition was the nearest neighbour method. This method identified different fingers using the different markers attached to each finger. Testing of simple word gesture recognition was done by providing a series of characters that make up the intended simple word. The accuracy of simple word gesture recognition depended upon the accuracy of recognition of each character.
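The character-by-character nearest-neighbour scheme described above can be sketched as follows. The feature vectors (standing in for finger-marker measurements) and the template characters are invented for illustration:

```python
import numpy as np

# Hypothetical training templates: each character gesture is summarised by a
# feature vector, e.g. derived from the markers attached to each finger.
templates = {
    "c": np.array([0.1, 0.9, 0.2]),
    "a": np.array([0.8, 0.1, 0.3]),
    "t": np.array([0.5, 0.5, 0.9]),
}

def nearest_neighbour(feature):
    """Return the character whose template is closest in Euclidean distance."""
    return min(templates, key=lambda ch: np.linalg.norm(templates[ch] - feature))

def recognise_word(features):
    """A word gesture is recognised character by character, so word accuracy
    depends on the accuracy of each character classification."""
    return "".join(nearest_neighbour(f) for f in features)

observed = [np.array([0.12, 0.88, 0.18]),
            np.array([0.79, 0.12, 0.31]),
            np.array([0.52, 0.48, 0.88])]
print(recognise_word(observed))  # -> cat
```

A single misclassified character corrupts the whole word, which is why the abstract ties word-level accuracy to per-character accuracy.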

  20. Hand Gesture Recognition Using Modified 1$ and Background Subtraction Algorithms

    Directory of Open Access Journals (Sweden)

    Hazem Khaled

    2015-01-01

    Full Text Available Computers and computerized machines have tremendously penetrated all aspects of our lives. This raises the importance of the Human-Computer Interface (HCI). Common HCI techniques still rely on simple devices such as keyboards, mice, and joysticks, which are not enough to convey the latest technology. Hand gestures have become one of the most important and attractive alternatives to existing traditional HCI techniques. This paper proposes a new hand gesture detection system for Human-Computer Interaction using real-time video streaming. This is achieved by removing the background using an average background algorithm and the 1$ algorithm for hand template matching. Every hand gesture is then translated into commands that can be used to control robot movements. The simulation results show that the proposed algorithm can achieve a high detection rate and a small recognition time under different light changes, scales, rotations, and backgrounds.
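An average-background step of the kind mentioned above is conventionally implemented as a running average with thresholded differencing. A minimal sketch (alpha, the threshold, and the toy frames are illustrative assumptions, not the paper's parameters):

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Running-average background model: blend the new frame into the
    current background estimate; alpha controls adaptation speed."""
    return (1 - alpha) * background + alpha * frame

def foreground_mask(background, frame, threshold=25):
    """Pixels differing from the background by more than the threshold are
    marked as foreground (e.g. a moving hand)."""
    return np.abs(frame.astype(float) - background) > threshold

# Toy example: a static scene of value 100; a 2x2 'hand' of value 200 appears.
bg = np.full((4, 4), 100.0)
frame = bg.copy()
frame[1:3, 1:3] = 200.0           # hand region
mask = foreground_mask(bg, frame)
print(mask.sum())                  # -> 4 foreground pixels
bg = update_background(bg, frame)  # background slowly absorbs slow changes
```

The slow update is what gives robustness to gradual light changes, while fast-moving hands remain in the foreground mask.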

  1. Spatial and temporal segmented dense trajectories for gesture recognition

    Science.gov (United States)

    Yamada, Kaho; Yoshida, Takeshi; Sumi, Kazuhiko; Habe, Hitoshi; Mitsugami, Ikuhisa

    2017-03-01

    Recently, dense trajectories [1] have been shown to be a successful video representation for action recognition, and have demonstrated state-of-the-art results with a variety of datasets. However, if we apply these trajectories to gesture recognition, recognizing similar and fine-grained motions is problematic. In this paper, we propose a new method in which dense trajectories are calculated in segmented regions around detected human body parts. Spatial segmentation is achieved by body part detection [2]. Temporal segmentation is performed for a fixed number of video frames. The proposed method removes background video noise and can recognize similar and fine-grained motions. Only a few video datasets are available for gesture classification; therefore, we have constructed a new gesture dataset and evaluated the proposed method using this dataset. The experimental results show that the proposed method outperforms the original dense trajectories.

  2. An Efficient Solution for Hand Gesture Recognition from Video Sequence

    Directory of Open Access Journals (Sweden)

    PRODAN, R.-C.

    2012-08-01

    Full Text Available The paper describes a system of hand gesture recognition by image processing for human-robot interaction. The recognition and interpretation of the hand postures acquired through a video camera allow the control of the robotic arm activity: motion - translation and rotation in 3D - and tightening/releasing the clamp. A gesture dictionary was defined and heuristic algorithms for recognition were developed and tested. The system can be used for academic and industrial purposes, especially for those activities where the movements of the robotic arm were not previously scheduled, as it makes training the robot easier than using a remote control. Besides the gesture dictionary, the novelty of the paper consists in a new technique for detecting the relative positions of the fingers in order to recognize the various hand postures, and in the achievement of a robust system for controlling robots by postures of the hands.

  3. Does brain injury impair speech and gesture differently?

    Directory of Open Access Journals (Sweden)

    Tilbe Göksun

    2016-09-01

    Full Text Available People often use spontaneous gestures when talking about space, such as when giving directions. In a recent study from our lab, we examined whether focal brain-injured individuals’ naming motion event components of manner and path (represented in English by verbs and prepositions, respectively) are impaired selectively, and whether gestures compensate for impairment in speech. Left or right hemisphere damaged patients and elderly control participants were asked to describe motion events (e.g., walking around) depicted in brief videos. Results suggest that producing verbs and prepositions can be separately impaired in the left hemisphere and gesture production compensates for naming impairments when damage involves specific areas in the left temporal cortex.

  4. Recognition of Hand Gestures Observed by Depth Cameras

    Directory of Open Access Journals (Sweden)

    Tomasz Kapuscinski

    2015-04-01

    Full Text Available We focus on gesture recognition based on 3D information in the form of a point cloud of the observed scene. A descriptor of the scene is built on the basis of a Viewpoint Feature Histogram (VFH). To increase the distinctiveness of the descriptor the scene is divided into smaller 3D cells and VFH is calculated for each of them. A verification of the method on publicly available Polish and American sign language datasets containing dynamic gestures as well as hand postures acquired by a time-of-flight (ToF) camera or Kinect is presented. Results of a cross-validation test are given. Hand postures are recognized using a nearest neighbour classifier with city-block distance. For dynamic gestures two types of classifiers are applied: (i) the nearest neighbour technique with dynamic time warping and (ii) hidden Markov models. The results confirm the usefulness of our approach.
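The nearest-neighbour-with-dynamic-time-warping classifier mentioned above can be sketched as follows. The gesture sequences and labels are invented for illustration, and the raw 1-D trajectories here stand in for the paper's per-frame VFH descriptors:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences: the minimum
    accumulated local cost over all monotone alignments of the two sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])       # city-block local cost
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(query, labelled_templates):
    """1-NN under DTW: return the label of the closest template."""
    return min(labelled_templates, key=lambda lt: dtw_distance(query, lt[1]))[0]

# Hypothetical dynamic gestures, e.g. hand height over time.
templates = [("wave", [0, 1, 0, 1, 0]), ("raise", [0, 1, 2, 3, 4])]
print(classify([0, 0, 1, 0, 1, 0], templates))  # -> wave
```

DTW absorbs differences in execution speed, which is why it suits dynamic gestures better than a frame-by-frame distance.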

  5. Hamiltonian approach to second order gauge invariant cosmological perturbations

    Science.gov (United States)

    Domènech, Guillem; Sasaki, Misao

    2018-01-01

    In view of growing interest in tensor modes and their possible detection, we clarify the definition of tensor modes up to 2nd order in perturbation theory within the Hamiltonian formalism. Like in gauge theory, in cosmology the Hamiltonian is a suitable and consistent approach to reduce the gauge degrees of freedom. In this paper we employ the Faddeev-Jackiw method of Hamiltonian reduction. An appropriate set of gauge invariant variables that describe the dynamical degrees of freedom may be obtained by suitable canonical transformations in the phase space. We derive a set of gauge invariant variables up to 2nd order in perturbation expansion and for the first time we reduce the 3rd order action without adding gauge fixing terms. In particular, we are able to show the relation between the uniform-ϕ and Newtonian slicings, and study the difference in the definition of tensor modes in these two slicings.

  6. Wilson loop invariants from WN conformal blocks

    Directory of Open Access Journals (Sweden)

    Oleg Alekseev

    2015-12-01

    Full Text Available Knot and link polynomials are topological invariants calculated from the expectation value of loop operators in topological field theories. In 3D Chern–Simons theory, these invariants can be found from crossing and braiding matrices of four-point conformal blocks of the boundary 2D CFT. We calculate crossing and braiding matrices for WN conformal blocks with one component in the fundamental representation and another component in a rectangular representation of SU(N), which can be used to obtain HOMFLY knot and link invariants for these cases. We also discuss how our approach can be generalized to invariants in higher representations of the WN algebra.

  7. Interrogating the Founding Gestures of the New Materialism

    Directory of Open Access Journals (Sweden)

    Dennis Bruining

    2016-11-01

    Full Text Available In this article, I aim to further thinking in the broadly ‘new materialist’ field by insisting it attends to some ubiquitous assumptions. More specifically, I critically interrogate what Sara Ahmed has termed ‘the founding gestures of the “new materialism”’. These founding rhetorical gestures revolve around a perceived neglect of the matter of materiality in ‘postmodernism’ and ‘poststructuralism’ and are meant to pave the way for new materialism’s own conception of matter-in/of-the-world. I argue in this article that an engagement with the postmodern critique of language as constitutive, as well as the poststructuralist critique of pure self-presence, does not warrant these founding gestures to be so uncritically rehearsed. Moreover, I demonstrate that texts which rely on these gestures, or at least the ones I discuss in this article, are not only founded on a misrepresentation of postmodern and poststructuralist thought, but are also guilty of repeating the perceived mistakes of which they are critical, such as upholding the language/matter dichotomy. I discuss a small selection of texts that make use of those popular rhetorical gestures to juxtapose the past that is invoked with a more nuanced reading of that past. My contention is that if ‘the founding gestures of the “new materialism”’ are not addressed, the complexity of the postmodern and poststructuralist positions continues to be obscured, with damaging consequences for the further development of the emerging field of new materialism, as well as our understanding of cultural theory’s past.

  8. Scientific Visualization of Radio Astronomy Data using Gesture Interaction

    Science.gov (United States)

    Mulumba, P.; Gain, J.; Marais, P.; Woudt, P.

    2015-09-01

    MeerKAT in South Africa (Meer = More Karoo Array Telescope) will require software to help visualize, interpret and interact with multidimensional data. While visualization of multi-dimensional data is a well explored topic, little work has been published on the design of intuitive interfaces to such systems. More specifically, the use of non-traditional interfaces (such as motion tracking and multi-touch) has not been widely investigated within the context of visualizing astronomy data. We hypothesize that a natural user interface would allow for easier data exploration which would in turn lead to certain kinds of visualizations (volumetric, multidimensional). To this end, we have developed a multi-platform scientific visualization system for FITS spectral data cubes using VTK (Visualization Toolkit) and a natural user interface to explore the interaction between a gesture input device and multidimensional data space. Our system supports visual transformations (translation, rotation and scaling) as well as sub-volume extraction and arbitrary slicing of 3D volumetric data. These tasks were implemented across three prototypes aimed at exploring different interaction strategies: standard (mouse/keyboard) interaction, volumetric gesture tracking (Leap Motion controller) and multi-touch interaction (multi-touch monitor). A Heuristic Evaluation revealed that the volumetric gesture tracking prototype shows great promise for interfacing with the depth component (z-axis) of 3D volumetric space across multiple transformations. However, this is limited by users needing to remember the required gestures. In comparison, the touch-based gesture navigation is typically more familiar to users as these gestures were engineered from standard multi-touch actions. Future work will address a complete usability test to evaluate and compare the different interaction modalities against the different visualization tasks.
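The visual transformations listed above (translation, rotation and scaling of 3D volumetric data) are conventionally composed as 4x4 homogeneous matrices, which is also how toolkits such as VTK represent them internally. A minimal sketch with illustrative values:

```python
import numpy as np

def translation(tx, ty, tz):
    """4x4 homogeneous translation matrix."""
    T = np.eye(4)
    T[:3, 3] = [tx, ty, tz]
    return T

def scaling(s):
    """Uniform scaling about the origin."""
    return np.diag([s, s, s, 1.0])

def rotation_z(theta):
    """Rotation about the z axis by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.eye(4)
    R[:2, :2] = [[c, -s], [s, c]]
    return R

# Compose right-to-left: rotate 90 degrees, then scale by 2, then translate.
M = translation(1, 0, 0) @ scaling(2.0) @ rotation_z(np.pi / 2)
p = np.array([1.0, 0.0, 0.0, 1.0])     # homogeneous point
print(np.round(M @ p, 6))               # -> [1. 2. 0. 1.]
```

A gesture input (e.g. a pinch or drag) is then mapped to one of these matrices and applied to the volume's transform each frame.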

  9. Dynamic Monitoring Reveals Motor Task Characteristics in Prehistoric Technical Gestures.

    Directory of Open Access Journals (Sweden)

    Johannes Pfleging

    Full Text Available Reconstructing ancient technical gestures associated with simple tool actions is crucial for understanding the co-evolution of the human forelimb and its associated control-related cognitive functions on the one hand, and of the human technological arsenal on the other hand. Although the topic of gesture is an old one in Paleolithic archaeology and in anthropology in general, very few studies have taken advantage of the new technologies from the science of kinematics in order to improve replicative experimental protocols. Recent work in paleoanthropology has shown the potential of monitored replicative experiments to reconstruct tool-use-related motions through the study of fossil bones, but so far comparatively little has been done to examine the dynamics of the tool itself. In this paper, we demonstrate that we can statistically differentiate gestures used in a simple scraping task through dynamic monitoring. Dynamics combines kinematics (position, orientation, and speed) with contact mechanical parameters (force and torque). Taken together, these parameters are important because they play a role in the formation of a visible archaeological signature, use-wear. We present our new affordable, yet precise methodology for measuring the dynamics of a simple hide-scraping task, carried out using a pull-to (PT) and a push-away (PA) gesture. A strain gage force sensor combined with a visual tag tracking system records force, torque, as well as position and orientation of hafted flint stone tools. The set-up allows switching between two tool configurations, one with distal and the other one with perpendicular hafting of the scrapers, to allow for ethnographically plausible reconstructions. The data show statistically significant differences between the two gestures: scraping away from the body (PA) generates higher shearing forces, but requires greater hand torque. Moreover, most benchmarks associated with the PA gesture are more highly variable than in

  10. Gesture Recognition for Educational Games: Magic Touch Math

    Science.gov (United States)

    Kye, Neo Wen; Mustapha, Aida; Azah Samsudin, Noor

    2017-08-01

    Children nowadays are having problems learning and understanding basic mathematical operations because they are not interested in studying or learning mathematics. This project proposes an educational game called Magic Touch Math that focuses on basic mathematical operations, targeted at children between three and five years old, using gesture recognition to interact with the game. Magic Touch Math was developed in accordance with the Game Development Life Cycle (GDLC) methodology. The prototype developed has helped children to learn basic mathematical operations via intuitive gestures. It is hoped that the application is able to get children motivated and interested in mathematics.

  11. Invariants of DNA genomic signals

    Science.gov (United States)

    Cristea, Paul Dan A.

    2005-02-01

    For large scale analysis purposes, the conversion of genomic sequences into digital signals opens the possibility to use powerful signal processing methods for handling genomic information. The study of complex genomic signals reveals large scale features, maintained over the scale of whole chromosomes, that would be difficult to find by using only the symbolic representation. Based on genomic signal methods and on statistical techniques, the paper defines parameters of DNA sequences which are invariant to transformations induced by SNPs, splicing or crossover. By re-orienting concatenated coding regions in the same direction, regularities shared by the genomic material in all exons are revealed, pointing towards the hypothesis of a regular ancestral structure from which the current chromosome structures have evolved. This property is not found in non-nuclear genomic material, e.g., plasmids.
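The sequence-to-signal conversion mentioned above can be illustrated with a minimal sketch. The ±1 purine/pyrimidine cumulative walk below is a common textbook mapping for genomic signals, not necessarily the specific representation used in the cited work:

```python
# Toy illustration of converting a DNA sequence into a numeric "genomic
# signal". The mapping (purines A/G -> +1, pyrimidines C/T -> -1, then a
# cumulative sum) is a standard choice in the genomic-signal literature.

def dna_to_signal(seq):
    """Map a DNA string to a cumulative purine/pyrimidine walk."""
    step = {'A': 1, 'G': 1, 'C': -1, 'T': -1}
    signal, total = [], 0
    for base in seq.upper():
        total += step.get(base, 0)  # ignore ambiguous bases such as 'N'
        signal.append(total)
    return signal

print(dna_to_signal("ATGCGT"))  # [1, 0, 1, 0, 1, 0]
```

Large-scale trends in such a walk (drift, slope changes) expose chromosome-scale regularities that are hard to see in the raw symbolic sequence.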

  12. Scale invariance in road networks.

    Science.gov (United States)

    Kalapala, Vamsi; Sanwalani, Vishal; Clauset, Aaron; Moore, Cristopher

    2006-02-01

    We study the topological and geographic structure of the national road networks of the United States, England, and Denmark. By transforming these networks into their dual representation, where roads are vertices and an edge connects two vertices if the corresponding roads ever intersect, we show that they exhibit both topological and geographic scale invariance. That is, we show that for sufficiently large geographic areas, the dual degree distribution follows a power law with exponent 2.2 ≤ α ≤ 2.4, and that journeys, regardless of their length, have a largely identical structure. To explain these properties, we introduce and analyze a simple fractal model of road placement that reproduces the observed structure, and suggests a testable connection between the scaling exponent and the fractal dimensions governing the placement of roads and intersections.
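An exponent range like 2.2 ≤ α ≤ 2.4 is typically obtained by fitting the tail of the degree distribution. The sketch below uses the standard continuous maximum-likelihood estimator for power-law tails (Clauset-Shalizi-Newman), as an illustration of the technique rather than the authors' exact fitting procedure:

```python
import math

def power_law_alpha(values, xmin):
    """Continuous maximum-likelihood estimate of a power-law exponent:
    alpha = 1 + n / sum(ln(x / xmin)) over the tail x >= xmin."""
    tail = [x for x in values if x >= xmin]
    n = len(tail)
    return 1.0 + n / sum(math.log(x / xmin) for x in tail)

# Toy sample of dual-graph degrees (invented data, for illustration only):
sample = [1, 1, 1, 2, 2, 3, 5, 8, 13]
print(round(power_law_alpha(sample, xmin=1), 2))  # → 2.03
```

In practice `xmin` itself is chosen by minimizing the Kolmogorov-Smirnov distance between the empirical tail and the fitted model.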

  13. Negation switching invariant signed graphs

    Directory of Open Access Journals (Sweden)

    Deepa Sinha

    2014-04-01

    Full Text Available A signed graph (or $sigraph$, in short) is a graph G in which each edge x carries a value $\sigma(x) \in \{-, +\}$ called its sign. Given a sigraph S, the negation $\eta(S)$ of the sigraph S is a sigraph obtained from S by reversing the sign of every edge of S. Two sigraphs $S_{1}$ and $S_{2}$ on the same underlying graph are switching equivalent if it is possible to assign signs `+' (`plus') or `-' (`minus') to vertices of $S_{1}$ such that by reversing the sign of each of its edges that has received opposite signs at its ends, one obtains $S_{2}$. In this paper, we characterize sigraphs which are negation switching invariant and also see for which sigraphs S and $\eta(S)$ are signed isomorphic.
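The negation and switching operations have a direct computational reading. The sketch below uses an illustrative dict-of-edges representation (not from the paper) and reads "negation switching invariant" as: S is switching equivalent to η(S). The brute-force check is only feasible for small graphs:

```python
from itertools import combinations

# A sigraph is represented as a dict mapping edges (u, v) to signs +1/-1.

def negation(sigraph):
    """eta(S): reverse the sign of every edge of S."""
    return {e: -s for e, s in sigraph.items()}

def switch(sigraph, minus_set):
    """Switch at the vertices in `minus_set`: an edge changes sign iff
    its endpoints received opposite signs (one inside the set, one outside)."""
    return {(u, v): -s if (u in minus_set) != (v in minus_set) else s
            for (u, v), s in sigraph.items()}

def switching_equivalent(s1, s2):
    """Brute-force check over all vertex subsets (small graphs only)."""
    vertices = sorted({v for edge in s1 for v in edge})
    return any(switch(s1, set(subset)) == s2
               for r in range(len(vertices) + 1)
               for subset in combinations(vertices, r))

# A single positive edge is negation switching invariant ...
edge = {('a', 'b'): 1}
print(switching_equivalent(edge, negation(edge)))          # True

# ... but a triangle with one negative edge is not: switching preserves
# the product of signs around any cycle, and negation flips it here.
triangle = {('a', 'b'): -1, ('b', 'c'): 1, ('a', 'c'): 1}
print(switching_equivalent(triangle, negation(triangle)))  # False
```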

  14. Fishing or a Z?: investigating the effects of error on mimetic and alphabet device-based gesture interaction

    NARCIS (Netherlands)

    el Ali, A.; Kildal, J.; Lantz, V.

    2012-01-01

    While gesture taxonomies provide a classification of device-based gestures in terms of communicative intent, little work has addressed the usability differences in manually performing these gestures. In this primarily qualitative study, we investigate how two sets of iconic gestures that vary in

  15. The Impact of American Sign Language Fluency on Co-Speech Gesture Production of Hearing English/ASL Bilinguals

    Science.gov (United States)

    Faust, Katrina Danielle

    2012-01-01

    This dissertation describes the features of co-speech gestures of English/ASL bilinguals and addresses three main questions: 1) How do English/ASL bilinguals gesture differently than non-signers? 2) How do native ASL/English bilinguals gesture differently than non-native English/ASL bilinguals? 3) Do English/ASL bilinguals gesture differently to…

  16. Early Gesture Provides a Helping Hand to Spoken Vocabulary Development for Children with Autism, Down Syndrome, and Typical Development

    Science.gov (United States)

    Özçaliskan, Seyda; Adamson, Lauren B.; Dimitrova, Nevena; Baumann, Stephanie

    2017-01-01

    Typically developing (TD) children refer to objects uniquely in gesture (e.g., point at a cat) before they produce verbal labels for these objects ("cat"). The onset of such gestures predicts the onset of similar spoken words, showing a strong positive relation between early gestures and early words. We asked whether gesture plays the…

  17. The role of gesture in cross-cultural and cross-linguistic learning contexts : the effect of gesture on the learning of mathematics

    OpenAIRE

    2013-01-01

    M.A. (Anthropology) This study explores the role of four teachers' communicative styles in a multilingual and multicultural classroom, focusing on the role of gesture when teaching. To compare their gestural behaviour under similar conditions, I filmed four grade one teachers (two Setswana mother tongue and two Afrikaans mother tongue) teaching the mathematical concept of halving. I classified the gestures and their semantic relation to speech in ELAN using an adapted version of Colletta et ...

  18. Co-verbal gestures among speakers with aphasia: Influence of aphasia severity, linguistic and semantic skills, and hemiplegia on gesture employment in oral discourse

    OpenAIRE

    LAI, CT; Kong, APH; Law, SP; Wat, WKC

    2015-01-01

    The use of co-verbal gestures is common in human communication and has been reported to assist word retrieval and to facilitate verbal interactions. This study systematically investigated the impact of aphasia severity, integrity of semantic processing, and hemiplegia on the use of co-verbal gestures, with reference to gesture forms and functions, by 131 normal speakers, 48 individuals with aphasia and their controls. All participants were native Cantonese speakers. It was found that the seve...

  19. The relationship of aphasia type and gesture production in people with aphasia.

    Science.gov (United States)

    Sekine, Kazuki; Rose, Miranda L

    2013-11-01

    For many individuals with aphasia, gestures form a vital component of message transfer and are the target of speech-language pathology intervention. What remains unclear are the participant variables that predict successful outcomes from gesture treatments. The authors examined the gesture production of a large number of individuals with aphasia - in a consistent discourse sampling condition and with a detailed gesture coding system - to determine patterns of gesture production associated with specific types of aphasia. The authors analyzed story retell samples from AphasiaBank (TalkBank, n.d.), gathered from 98 individuals with aphasia resulting from stroke and 64 typical controls. Twelve gesture types were coded. Descriptive statistics were used to describe the patterns of gesture production. Possible significant differences in production patterns according to aphasia type were examined using a series of chi-square, Fisher exact, and logistic regression statistics. A significantly higher proportion of individuals with aphasia gestured as compared to typical controls, and for many individuals with aphasia, this gesture was iconic and capable of carrying communicative load. Aphasia type impacted significantly on gesture type in specific identified patterns, detailed here. These type-specific patterns suggest the opportunity for gestures as targets of aphasia therapy.

  20. Combining point context and dynamic time warping for online gesture recognition

    Science.gov (United States)

    Mao, Xia; Li, Chen

    2017-05-01

    Previous gesture recognition methods usually focused on recognizing gestures after the entire gesture sequences were obtained. However, in many practical applications, a system has to identify gestures before they end to give instant feedback. We present an online gesture recognition approach that can realize early recognition of unfinished gestures with low latency. First, a curvature buffer-based point context (CBPC) descriptor is proposed to extract the shape feature of a gesture trajectory. The CBPC descriptor is a complete descriptor with a simple computation, and is thus well suited to online scenarios. Then, we introduce an online windowed dynamic time warping algorithm to realize online matching between the ongoing gesture and the template gestures. In the algorithm, computational complexity is effectively decreased by adding a sliding window to the accumulative distance matrix. Lastly, the experiments are conducted on the Australian sign language data set and the Kinect hand gesture (KHG) data set. Results show that the proposed method outperforms other state-of-the-art methods, especially when gesture information is incomplete.
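The windowed online-matching idea can be sketched as follows, assuming a one-dimensional feature stream and a Sakoe-Chiba-style band on the accumulative distance matrix; this illustrates the general technique, not the paper's exact algorithm:

```python
def online_dtw(template, stream, window=3):
    """Incremental DTW between a growing input `stream` and a fixed
    `template`, restricted to a band of half-width `window` around the
    diagonal. Returns the best length-normalized cost of aligning the
    stream against any template prefix, enabling early recognition.
    Assumes len(stream) <= len(template) + window."""
    inf = float('inf')
    n = len(template)
    prev = [0.0] + [inf] * n          # row for the empty stream
    best = inf
    for i, x in enumerate(stream, start=1):
        curr = [inf] * (n + 1)
        lo, hi = max(1, i - window), min(n, i + window)
        for j in range(lo, hi + 1):
            cost = abs(x - template[j - 1])
            curr[j] = cost + min(prev[j], curr[j - 1], prev[j - 1])
        prev = curr
        best = min(best, min(curr[lo:hi + 1]) / i)  # normalize by length
    return best

# Classify a gesture that is still in progress (toy templates):
templates = {'up': [0, 1, 2, 3, 4], 'down': [4, 3, 2, 1, 0]}
partial = [0, 1, 2]
scores = {g: online_dtw(t, partial) for g, t in templates.items()}
print(min(scores, key=scores.get))  # 'up' wins before the gesture ends
```

The sliding window keeps each update O(window) instead of O(n), which is what makes the matching cheap enough for per-frame use.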

  1. Borromean surgery formula for the Casson invariant

    DEFF Research Database (Denmark)

    Meilhan, Jean-Baptiste Odet Thierry

    2008-01-01

    It is known that every oriented integral homology 3-sphere can be obtained from S3 by a finite sequence of Borromean surgeries. We give an explicit formula for the variation of the Casson invariant under such a surgery move. The formula involves simple classical invariants, namely the framing, li...

  2. Numerical Approximation of Normally Hyperbolic Invariant Manifolds

    NARCIS (Netherlands)

    Broer, Henk; Hagen, Aaron; Vegter, Gert

    2003-01-01

    This paper deals with the numerical continuation of invariant manifolds, regardless of the restricted dynamics. Typically, invariant manifolds make up the skeleton of the dynamics of phase space. Examples include limit sets, co-dimension 1 manifolds separating basins of attraction (separatrices),

  3. Invariant Ordering of Item-Total Regressions

    Science.gov (United States)

    Tijmstra, Jesper; Hessen, David J.; van der Heijden, Peter G. M.; Sijtsma, Klaas

    2011-01-01

    A new observable consequence of the property of invariant item ordering is presented, which holds under Mokken's double monotonicity model for dichotomous data. The observable consequence is an invariant ordering of the item-total regressions. Kendall's measure of concordance "W" and a weighted version of this measure are proposed as measures for…

  4. Scale invariant Volkov–Akulov supergravity

    Directory of Open Access Journals (Sweden)

    S. Ferrara

    2015-10-01

    Full Text Available A scale invariant goldstino theory coupled to supergravity is obtained as a standard supergravity dual of a rigidly scale-invariant higher-curvature supergravity with a nilpotent chiral scalar curvature. The bosonic part of this theory describes a massless scalaron and a massive axion in a de Sitter Universe.

  5. The invariator principle in convex geometry

    DEFF Research Database (Denmark)

    Thórisdóttir, Ólöf; Kiderlen, Markus

    The invariator principle is a measure decomposition that was rediscovered in local stereology in 2005 and has since been used widely in the stereological literature. We give an exposition of invariator related results where existing formulae are generalized and new ones proposed. In particular, w...

  6. Recognition of sign language gestures using neural networks

    Directory of Open Access Journals (Sweden)

    Simon Vamplew

    2007-04-01

    Full Text Available This paper describes the structure and performance of the SLARTI sign language recognition system developed at the University of Tasmania. SLARTI uses a modular architecture consisting of multiple feature-recognition neural networks and a nearest-neighbour classifier to recognise Australian sign language (Auslan) hand gestures.

  7. Onomatopoeia, Gesture, and Synaesthesia in the Perception of Poetic Meaning.

    Science.gov (United States)

    Salper, Donald R.

    The author states that phonetic symbolism is not a generalizable phenomenon but maintains that those interested in the status of a poem as a speech event need not totally discount or discredit such perceptions. In his discussion of the theories which ascribe meaning to vocal utterance--the two imitative theories, the onomatopoeic and the gestural,…

  8. Perspectives on gesture from music informatics, performance and aesthetics

    DEFF Research Database (Denmark)

    Jensen, Kristoffer; Frimodt-Møller, Søren; Grund, Cynthia

    2014-01-01

    and gestures in emotional musical expression using motion capture, the visual and auditive cues musicians provide each other in an ensemble when rehearsing, and the decision processes involved when a musician coordinates with other musicians. These projects seek to combine and compare intuitions derived from...

  9. Cascading neural networks for upper-body gesture recognition

    CSIR Research Space (South Africa)

    Mangera, R

    2014-01-01

    Full Text Available Gesture recognition has many applications ranging from health care to entertainment. However for it to be a feasible method of human-computer interaction it is essential that only intentional movements are interpreted and that the system can work...

  10. Temporal Dynamics of Speech and Gesture in Autism Spectrum Disorder

    DEFF Research Database (Denmark)

    Lambrechts, Anna; Gaigg, Sebastian; Yarrow, Kielan

    2015-01-01

    Autism Spectrum Disorder (ASD) is characterized by difficulties in communication and social interaction. Abnormalities in the use of gestures or flow of conversation are frequently reported in clinical observations and contribute to a diagnosis of the disorder but the mechanisms underlying...

  11. How Symbolic Gestures and Words Interact with Each Other

    Science.gov (United States)

    Barbieri, Filippo; Buonocore, Antimo; Volta, Riccardo Dalla; Gentilucci, Maurizio

    2009-01-01

    Previous repetitive Transcranial Magnetic Stimulation and neuroimaging studies showed that Broca's area is involved in the interaction between gestures and words. However, in these studies the nature of this interaction was not fully investigated; consequently, we addressed this issue in three behavioral experiments. When compared to the…

  12. Gesture-Speech Integration in Children with Specific Language Impairment

    Science.gov (United States)

    Mainela-Arnold, Elina; Alibali, Martha W.; Hostetter, Autumn B.; Evans, Julia L.

    2014-01-01

    Background: Previous research suggests that speakers are especially likely to produce manual communicative gestures when they have relative ease in thinking about the spatial elements of what they are describing, paired with relative difficulty organizing those elements into appropriate spoken language. Children with specific language impairment…

  13. Gestural Abilities of Children with Specific Language Impairment

    Science.gov (United States)

    Wray, Charlotte; Norbury, Courtenay Frazier; Alcock, Katie

    2016-01-01

    Background: Specific language impairment (SLI) is diagnosed when language is significantly below chronological age expectations in the absence of other developmental disorders, sensory impairments or global developmental delays. It has been suggested that gesture may enhance communication in children with SLI by providing an alternative means to…

  14. On what happens in gesture when communication is unsuccessful

    NARCIS (Netherlands)

    Hoetjes, Marieke; Krahmer, Emiel; Swerts, Marc

    2015-01-01

    Previous studies found that repeated references in successful communication are often reduced, not only at the acoustic level, but also in terms of words and manual co-speech gestures. In the present study, we investigated whether repeated references are still reduced in a situation when reduction

  15. Origins of the Human Pointing Gesture: A Training Study

    Science.gov (United States)

    Matthews, Danielle; Behne, Tanya; Lieven, Elena; Tomasello, Michael

    2012-01-01

    Despite its importance in the development of children's skills of social cognition and communication, very little is known about the ontogenetic origins of the pointing gesture. We report a training study in which mothers gave children one month of extra daily experience with pointing as compared with a control group who had extra experience with…

  16. The Perception of Sound Movements as Expressive Gestures

    DEFF Research Database (Denmark)

    Götzen, Amalia De; Sikström, Erik; Korsgaard, Dannie

    2014-01-01

    This paper is a preliminary attempt to investigate the perception of sound movements as expressive gestures. The idea is that if sound movement is used as a musical parameter, a listener (or a subject) should be able to distinguish among different movements and she/he should be able to group them...

  17. Behand: augmented virtuality gestural interaction for mobile phones

    DEFF Research Database (Denmark)

    Caballero, Luz; Chang, Ting-Ray; Menendez Blanco, Maria

    2010-01-01

    This paper introduces Behand. Behand is a new way of interaction that allows a mobile phone user to manipulate virtual three-dimensional objects inside the phone by gesturing with his hand. Behand provides a straightforward 3D interface, something current mobile phones do not offer, and extends t...

  18. Motion-Based Gesture Recognition Algorithms for Robot Manipulation

    Directory of Open Access Journals (Sweden)

    Tea Marasović

    2015-05-01

    Full Text Available The prevailing trend of integrating inertial sensors in consumer electronics devices has inspired research on new forms of human-computer interaction utilizing hand gestures, which may be set up on mobile devices themselves. At present, motion gesture recognition is intensely studied, with various recognition techniques being employed and tested. This paper provides an in-depth, unbiased comparison of different algorithms used to recognize gestures based primarily on single 3D accelerometer recordings. The study takes two of the most popular and arguably the best recognition methods currently in use - dynamic time warping and hidden Markov models - and sets them against a relatively novel approach founded on distance metric learning. The three selected algorithms are evaluated in terms of their overall performance, accuracy, training time, execution time and storage efficiency. The optimal algorithm is further implemented in a prototype user application, aimed to serve as an interface for controlling the motion of a toy robot via gestures made with a smartphone.
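As a point of reference for the dynamic time warping baseline in such comparisons, a minimal 1-nearest-neighbour classifier over 3-axis accelerometer traces might look as follows (the template names and data are invented for illustration):

```python
import math

def dtw(a, b):
    """Full DTW cost between two sequences of (x, y, z) samples,
    with Euclidean distance as the per-sample cost."""
    inf = float('inf')
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(a[i - 1], b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def classify(sample, templates):
    """1-NN: pick the template gesture with the lowest DTW cost."""
    return min(templates, key=lambda name: dtw(sample, templates[name]))

templates = {
    'shake': [(1, 0, 0), (-1, 0, 0), (1, 0, 0), (-1, 0, 0)],
    'lift':  [(0, 0, 1), (0, 0, 2), (0, 0, 3), (0, 0, 4)],
}
print(classify([(0, 0, 1), (0, 0, 2), (0, 0, 4)], templates))  # 'lift'
```

DTW's warping tolerance to speed variation is exactly what makes it competitive with model-based approaches on small template sets.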

  19. Generating Control Commands From Gestures Sensed by EMG

    Science.gov (United States)

    Wheeler, Kevin R.; Jorgensen, Charles

    2006-01-01

    An effort is under way to develop noninvasive neuro-electric interfaces through which human operators could control systems as diverse as simple mechanical devices, computers, aircraft, and even spacecraft. The basic idea is to use electrodes on the surface of the skin to acquire electromyographic (EMG) signals associated with gestures, digitize and process the EMG signals to recognize the gestures, and generate digital commands to perform the actions signified by the gestures. In an experimental prototype of such an interface, the EMG signals associated with hand gestures are acquired by use of several pairs of electrodes mounted in sleeves on a subject's forearm (see figure). The EMG signals are sampled and digitized. The resulting time-series data are fed as input to pattern-recognition software that has been trained to distinguish gestures from a given gesture set. The software implements, among other things, hidden Markov models, which are used to recognize the gestures as they are being performed in real time. Thus far, two experiments have been performed on the prototype interface to demonstrate feasibility: an experiment in synthesizing the output of a joystick and an experiment in synthesizing the output of a computer or typewriter keyboard. In the joystick experiment, the EMG signals were processed into joystick commands for a realistic flight simulator for an airplane. The acting pilot reached out into the air, grabbed an imaginary joystick, and pretended to manipulate the joystick to achieve left and right banks and up and down pitches of the simulated airplane. In the keyboard experiment, the subject pretended to type on a numerical keypad, and the EMG signals were processed into keystrokes. The results of the experiments demonstrate the basic feasibility of this method while indicating the need for further research to reduce the incidence of errors (including confusion among gestures).
Topics that must be addressed include the numbers and arrangements

  20. Negative electric susceptibility and magnetism from translational invariance and rotational invariance

    Energy Technology Data Exchange (ETDEWEB)

    Koo, Je Huan, E-mail: koo@kw.ac.kr

    2015-02-01

    In this work we investigate magnetic effects in terms of the translational and rotational invariances of magnetisation. Whilst Landau-type diamagnetism originates from translational invariance, a new diamagnetism could result from rotational invariance. Translational invariance results in only conventional Landau-type diamagnetism, whereas rotational invariance can induce a paramagnetic susceptibility for localised electrons and also a new kind of diamagnetism that is specific to conducting electrons. In solids, the moving electron shows a paramagnetic susceptibility but the surrounding screening of electrons may produce a new diamagnetic response by Lenz's law, resulting in a total susceptibility that tends to zero. For electricity, similar behaviours are obtained. We also derive the DC-type negative electric susceptibility via two methods in analogy with Landau diamagnetism. - Highlights: • The translational invariance of magnetisation. • The rotational invariance of magnetisation. • An electron attached to an electric vortex. • A kind of Landau paramagnetism. • A kind of Pauli diamagnetism.

  1. A cross-species study of gesture and its role in symbolic development: Implications for the gestural theory of language evolution

    Directory of Open Access Journals (Sweden)

    Kristen Gillespie-Lynch

    2013-06-01

    Full Text Available Using a naturalistic video database, we examined whether gestures scaffolded the symbolic development of a language-enculturated chimpanzee, a language-enculturated bonobo, and a human child during the second year of life. These three species constitute a complete clade: species possessing a common immediate ancestor. A basic finding was the functional and formal similarity of many gestures between chimpanzee, bonobo, and human child. The child's symbols were spoken words; the apes' symbols were lexigrams, noniconic visual signifiers. A developmental pattern in which gestural representation of a referent preceded symbolic representation of the same referent appeared in all three species (but was statistically significant only for the child). Nonetheless, across species, the ratio of symbol to gesture increased significantly with age. But even though their symbol production increased, the apes continued to communicate more frequently by gesture than by symbol. In contrast, by 15-18 months of age, the child used symbols more frequently than gestures. This ontogenetic sequence from gesture to symbol, present across the clade but more pronounced in child than ape, provides support for the role of gesture in language evolution. In all three species, the overwhelming majority of gestures were communicative (paired with eye-contact, vocalization, and/or persistence). However, vocalization was rare for the apes, but accompanied the majority of the child's communicative gestures. This finding suggests the co-evolution of speech and gesture after the evolutionary divergence of the hominid line. Multimodal expressions of communicative intent (e.g., vocalization plus persistence) were normative for the child, but less common for the apes. This finding suggests that multimodal expression of communicative intent was also strengthened after hominids diverged from apes.

  2. A cross-species study of gesture and its role in symbolic development: implications for the gestural theory of language evolution.

    Science.gov (United States)

    Gillespie-Lynch, K; Greenfield, P M; Feng, Y; Savage-Rumbaugh, S; Lyn, H

    2013-01-01

    Using a naturalistic video database, we examined whether gestures scaffold the symbolic development of a language-enculturated chimpanzee, a language-enculturated bonobo, and a human child during the second year of life. These three species constitute a complete clade: species possessing a common immediate ancestor. A basic finding was the functional and formal similarity of many gestures between chimpanzee, bonobo, and human child. The child's symbols were spoken words; the apes' symbols were lexigrams - non-iconic visual signifiers. A developmental pattern in which gestural representation of a referent preceded symbolic representation of the same referent appeared in all three species (but was statistically significant only for the child). Nonetheless, across species, the ratio of symbol to gesture increased significantly with age. But even though their symbol production increased, the apes continued to communicate more frequently by gesture than by symbol. In contrast, by 15-18 months of age, the child used symbols more frequently than gestures. This ontogenetic sequence from gesture to symbol, present across the clade but more pronounced in child than ape, provides support for the role of gesture in language evolution. In all three species, the overwhelming majority of gestures were communicative (i.e., paired with eye contact, vocalization, and/or persistence). However, vocalization was rare for the apes, but accompanied the majority of the child's communicative gestures. This species difference suggests the co-evolution of speech and gesture after the evolutionary divergence of the hominid line. Multimodal expressions of communicative intent (e.g., vocalization plus persistence) were normative for the child, but less common for the apes. This species difference suggests that multimodal expression of communicative intent was also strengthened after hominids diverged from apes.

  3. A scale invariance criterion for LES parametrizations

    Directory of Open Access Journals (Sweden)

    Urs Schaefer-Rolffs

    2015-01-01

    Full Text Available Turbulent kinetic energy cascades in fluid dynamical systems are usually characterized by scale invariance. However, representations of subgrid scales in large eddy simulations do not necessarily fulfill this constraint. So far, scale invariance has been considered in the context of isotropic, incompressible, and three-dimensional turbulence. In the present paper, the theory is extended to compressible flows that obey the hydrostatic approximation, as well as to corresponding subgrid-scale parametrizations. A criterion is presented to check if the symmetries of the governing equations are correctly translated into the equations used in numerical models. By applying scaling transformations to the model equations, relations between the scaling factors are obtained by demanding that the mathematical structure of the equations does not change. The criterion is validated by recovering the breakdown of scale invariance in the classical Smagorinsky model and confirming scale invariance for the Dynamic Smagorinsky Model. The criterion also shows that the compressible continuity equation is intrinsically scale-invariant, and it proves that a scale-invariant turbulent kinetic energy equation or a scale-invariant equation of motion for a passive tracer is obtained only with a dynamic mixing length. For large-scale atmospheric flows governed by the hydrostatic balance the energy cascade is due to horizontal advection and the vertical length scale exhibits a scaling behaviour that is different from that derived for horizontal length scales.

  4. Feedback-Driven Dynamic Invariant Discovery

    Science.gov (United States)

    Zhang, Lingming; Yang, Guowei; Rungta, Neha S.; Person, Suzette; Khurshid, Sarfraz

    2014-01-01

    Program invariants can help software developers identify program properties that must be preserved as the software evolves; however, formulating correct invariants can be challenging. In this work, we introduce iDiscovery, a technique which leverages symbolic execution to improve the quality of dynamically discovered invariants computed by Daikon. Candidate invariants generated by Daikon are synthesized into assertions and instrumented onto the program. The instrumented code is executed symbolically to generate new test cases that are fed back to Daikon to help further refine the set of candidate invariants. This feedback loop is executed until a fixpoint is reached. To mitigate the cost of symbolic execution, we present optimizations to prune the symbolic state space and to reduce the complexity of the generated path conditions. We also leverage recent advances in constraint solution reuse techniques to avoid computing results for the same constraints across iterations. Experimental results show that iDiscovery converges to a set of higher quality invariants compared to the initial set of candidate invariants in a small number of iterations.
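The feedback loop can be caricatured in a few lines. In the sketch below, `infer` stands in for Daikon (learning a range invariant from observed runs) and `search` stands in for the symbolic executor (hunting for an input that violates the candidate); both are toy stand-ins invented for illustration, not the actual tools:

```python
def f(x):
    return abs(x) + 1            # toy program under analysis

def infer(tests):
    """Daikon stand-in: candidate invariant lo <= f(x) <= hi from runs."""
    outs = [f(t) for t in tests]
    return min(outs), max(outs)

def search(invariant, domain):
    """Symbolic-execution stand-in: find an input violating the candidate."""
    lo, hi = invariant
    for x in domain:
        if not (lo <= f(x) <= hi):
            return x               # counterexample becomes a new test case
    return None

tests = [0, 1]                     # initial test suite
domain = range(-5, 6)
while True:                        # iterate until a fixpoint is reached
    cand = infer(tests)
    cex = search(cand, domain)
    if cex is None:
        break
    tests.append(cex)              # feed the counterexample back to inference

print(cand)  # refined invariant over the explored domain: (1, 6)
```

Each iteration either confirms the candidate set or strengthens the test suite, which is why the loop terminates at a fixpoint rather than oscillating.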

  5. The impact of impaired semantic knowledge on spontaneous iconic gesture production.

    Science.gov (United States)

    Cocks, Naomi; Dipper, Lucy; Pritchard, Madeleine; Morgan, Gary

    2013-09-01

    Previous research has found that people with aphasia produce more spontaneous iconic gesture than control participants, especially during word-finding difficulties. There is some evidence that impaired semantic knowledge impacts on the diversity of gestural handshapes, as well as the frequency of gesture production. However, no previous research has explored how impaired semantic knowledge impacts on the frequency and type of iconic gestures produced during fluent speech compared with those produced during word-finding difficulties. To explore the impact of impaired semantic knowledge on the frequency and type of iconic gestures produced during fluent speech and those produced during word-finding difficulties. A group of 29 participants with aphasia and 29 control participants were video recorded describing a cartoon they had just watched. All iconic gestures were tagged and coded as either "manner," "path only," "shape outline" or "other". These gestures were then separated into either those occurring during fluent speech or those occurring during a word-finding difficulty. The relationships between semantic knowledge and gesture frequency and form were then investigated in the two different conditions. As expected, the participants with aphasia produced a higher frequency of iconic gestures than the control participants, but when the iconic gestures produced during word-finding difficulties were removed from the analysis, the frequency of iconic gesture was not significantly different between the groups. While there was not a significant relationship between the frequency of iconic gestures produced during fluent speech and semantic knowledge, there was a significant positive correlation between semantic knowledge and the proportion of word-finding difficulties that contained gesture. There was also a significant positive correlation between the speakers' semantic knowledge and the proportion of gestures that were produced during fluent speech that were

  6. Comment on "Pairing interaction and Galilei invariance"

    Science.gov (United States)

    Arias, J. M.; Gallardo, M.; Gómez-Camacho, J.

    1999-05-01

    A recent article by Dussel, Sofia, and Tonina studies the relation between Galilei invariance and dipole energy weighted sum rule (EWSR). The authors find that the pairing interaction, which is neither Galilei nor Lorentz invariant, produces big changes in the EWSR and in effective masses of the nucleons. They argue that these effects of the pairing force could be realistic. In this Comment we stress the validity of Galilei invariance to a very good approximation in this context of low-energy nuclear physics and show that the effective masses and the observed change in the EWSR for the electric dipole operator relative to its classical value are compatible with this symmetry.

  7. Iconic gesture in normal language and word searching conditions: a case of conduction aphasia.

    Science.gov (United States)

    Pritchard, Madeleine; Cocks, Naomi; Dipper, Lucy

    2013-10-01

    Although there is a substantive body of research about the language used by individuals with aphasia, relatively little is known about their spontaneous iconic gesture. A single case study of LT, an individual with conduction aphasia, indicated qualitative differences between the spontaneous iconic gestures produced alongside fluent speech and those produced in tip-of-the-tongue states. The current study examined the iconic gestures produced by another individual with conduction aphasia, WT, and a group of 11 control participants. Comparisons were made between iconic gestures produced alongside normal language and those produced alongside word-searching behaviour. Participants recounted the Tweety and Sylvester cartoon Canary Row. All gesture produced was analysed qualitatively and quantitatively. WT produced more iconic gestures than controls accompanying word-searching behaviour, whereas he produced a similar frequency of iconic gestures to control participants alongside normal language. The iconic gestures produced in the two language contexts also differed qualitatively. Frequency of iconic gesture production was not affected by limb apraxia. This study suggests that there are differences between iconic gestures produced alongside normal language and those produced alongside word-searching behaviour. Theoretical and clinical implications of these findings are discussed.

  8. Multisensory integration: the case of a time window of gesture-speech integration.

    Science.gov (United States)

    Obermeier, Christian; Gunter, Thomas C

    2015-02-01

    This experiment investigates the integration of gesture and speech from a multisensory perspective. In a disambiguation paradigm, participants were presented with short videos of an actress uttering sentences like "She was impressed by the BALL, because the GAME/DANCE...." The ambiguous noun (BALL) was accompanied by an iconic gesture fragment containing information to disambiguate the noun toward its dominant or subordinate meaning. We used four different temporal alignments between noun and gesture fragment: the identification point (IP) of the noun was either prior to (+120 msec), synchronous with (0 msec), or lagging behind the end of the gesture fragment (-200 and -600 msec). ERPs triggered to the IP of the noun showed significant differences for the integration of dominant and subordinate gesture fragments in the -200, 0, and +120 msec conditions. The outcome of this integration was revealed at the target words. These data suggest a time window for direct semantic gesture-speech integration ranging from at least -200 up to +120 msec. Although the -600 msec condition did not show any signs of direct integration at the homonym, significant disambiguation was found at the target word. An explorative analysis suggested that gesture information was directly integrated at the verb, indicating that there are multiple positions in a sentence where direct gesture-speech integration takes place. Ultimately, this implies that in natural communication, where a gesture lasts for some time, several aspects of that gesture will have their specific and possibly distinct impact on different positions in an utterance.

  9. Selection of suitable hand gestures for reliable myoelectric human computer interface.

    Science.gov (United States)

    Castro, Maria Claudia F; Arjunan, Sridhar P; Kumar, Dinesh K

    2015-04-09

    A myoelectrically controlled prosthetic hand requires machine-based identification of hand gestures using the surface electromyogram (sEMG) recorded from the forearm muscles. This study observed that a subset of hand gestures has to be selected for accurate automated hand gesture recognition, and reports a method to select these gestures to maximize sensitivity and specificity. Experiments were conducted in which sEMG was recorded from the muscles of the forearm while subjects performed hand gestures, and was then classified off-line. The performances of ten gestures were ranked using the proposed Positive-Negative Performance Measurement Index (PNM), generated by a series of confusion matrices. When using all ten gestures, the sensitivity and specificity were 80.0% and 97.8%. After ranking the gestures using the PNM, six gestures were selected that gave sensitivity and specificity greater than 95% (96.5% and 99.3%): Hand open, Hand close, Little finger flexion, Ring finger flexion, Middle finger flexion and Thumb flexion. This work has shown that reliable myoelectric-based human-computer interface systems require careful selection of the gestures to be recognized; without such selection, the reliability is poor.
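    The abstract describes the selection idea but not the PNM formula itself. As a hedged illustration only, the sketch below computes per-gesture sensitivity and specificity from a confusion matrix (hypothetical counts, not the paper's data) and ranks gestures by a simple sensitivity-times-specificity score standing in for the PNM:

    ```python
    import numpy as np

    def per_class_sensitivity_specificity(cm):
        """Per-class sensitivity (recall) and specificity from a confusion matrix.

        cm[i, j] = number of samples of true class i predicted as class j.
        """
        cm = np.asarray(cm, dtype=float)
        total = cm.sum()
        tp = np.diag(cm)
        fn = cm.sum(axis=1) - tp   # true class i, predicted as something else
        fp = cm.sum(axis=0) - tp   # predicted as i, but true class differs
        tn = total - tp - fn - fp
        return tp / (tp + fn), tn / (tn + fp)

    # Hypothetical 4-gesture confusion matrix (rows = true, cols = predicted).
    cm = [[18, 1, 1, 0],
          [2, 15, 2, 1],
          [0, 1, 19, 0],
          [3, 2, 0, 15]]

    sens, spec = per_class_sensitivity_specificity(cm)
    # Rank gestures by a combined score (a stand-in for the paper's PNM);
    # low-ranked gestures would be the candidates for removal.
    score = sens * spec
    ranking = np.argsort(score)[::-1]
    print("gesture ranking, best first:", ranking)
    ```

    With real data, one would drop the lowest-ranked gestures and re-train the classifier on the remaining subset, as the study does.
    
    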

  10. An investigation of co-speech gesture production during action description in Parkinson's disease.

    Science.gov (United States)

    Cleary, Rebecca A; Poliakoff, Ellen; Galpin, Adam; Dick, Jeremy P R; Holler, Judith

    2011-12-01

    Parkinson's disease (PD) can impact enormously on speech communication. One aspect of non-verbal behaviour closely tied to speech is co-speech gesture production. In healthy people, co-speech gestures can add significant meaning and emphasis to speech. There is, however, little research into how this important channel of communication is affected in PD. The present study provides a systematic analysis of co-speech gestures which spontaneously accompany the description of actions in a group of PD patients (N = 23, Hoehn and Yahr Stage III or less) and age-matched healthy controls (N = 22). The analysis considers different co-speech gesture types, using established classification schemes from the field of gesture research. The analysis focuses on the rate of these gestures as well as on their qualitative nature. In doing so, the analysis attempts to overcome several methodological shortcomings of research in this area. Contrary to expectation, gesture rate was not significantly affected in our patient group, with relatively mild PD. This indicates that co-speech gestures could compensate for speech problems. However, while gesture rate seems unaffected, the qualitative precision of gestures representing actions was significantly reduced. This study demonstrates the feasibility of carrying out fine-grained, detailed analyses of gestures in PD and offers insights into an as yet neglected facet of communication in patients with PD. Based on the present findings, an important next step is the closer investigation of the qualitative changes in gesture (including different communicative situations) and an analysis of the heterogeneity in co-speech gesture production in PD. Copyright © 2011 Elsevier Ltd. All rights reserved.

  11. Coronary Heart Disease Preoperative Gesture Interactive Diagnostic System Based on Augmented Reality.

    Science.gov (United States)

    Zou, Yi-Bo; Chen, Yi-Min; Gao, Ming-Ke; Liu, Quan; Jiang, Si-Yu; Lu, Jia-Hui; Huang, Chen; Li, Ze-Yu; Zhang, Dian-Hua

    2017-08-01

    Preoperative diagnosis of coronary heart disease plays an important role in the treatment of vascular interventional surgery. In practice, most doctors diagnose the position of a vascular stenosis and then empirically estimate its severity from selective coronary angiography images, rather than using a mouse, keyboard and computer during preoperative diagnosis. This invasive diagnostic modality lacks intuitive and natural interaction, and its results are not accurate enough. To address these problems, a coronary heart disease preoperative gesture-interactive diagnostic system based on Augmented Reality is proposed. The system uses a Leap Motion Controller to capture hand-gesture video sequences and extracts features, namely the position and orientation vectors of the gesture motion trajectory and the changes in hand shape. The training clusters are determined by the K-means algorithm, and the effect of gesture training is improved by using multiple features and multiple observation sequences. The reusability of gestures is improved by establishing a state-transition model, and algorithm efficiency is improved by gesture prejudgment, i.e., threshold discrimination before recognition. The integrity of the trajectory is preserved, and the gesture motion space is extended, by applying a spatial rotation transformation of the gesture manipulation plane. Ultimately, gesture recognition based on SRT-HMM is realized. The diagnosis and measurement of vascular stenosis are realized intuitively and naturally by operating and measuring the coronary artery model with augmented-reality and gesture-interaction techniques. Gesture recognition experiments show the discriminative and generalization ability of the algorithm, and gesture interaction experiments demonstrate the availability and reliability of the system.

  12. Part of the message comes in gesture: how people with aphasia convey information in different gesture types as compared with information in their speech

    NARCIS (Netherlands)

    K. van Nispen (Karin); W.M.E. van de Sandt-Koenderman (Mieke); K. Sekine (Kazuki); E. Krahmer (Emiel); M.L. Rose (Miranda L.)

    2017-01-01

    Background: Studies have shown that the gestures produced by people with aphasia (PWA) can convey information useful for their communication. However, the exact significance of the contribution to message communication via gesture remains unclear. Furthermore, it remains unclear how

  13. Modified dispersion relations, inflation, and scale invariance

    Science.gov (United States)

    Bianco, Stefano; Friedhoff, Victor Nicolai; Wilson-Ewing, Edward

    2018-02-01

    For a certain type of modified dispersion relations, the vacuum quantum state for very short wavelength cosmological perturbations is scale-invariant and it has been suggested that this may be the source of the scale-invariance observed in the temperature anisotropies in the cosmic microwave background. We point out that for this scenario to be possible, it is necessary to redshift these short wavelength modes to cosmological scales in such a way that the scale-invariance is not lost. This requires nontrivial background dynamics before the onset of standard radiation-dominated cosmology; we demonstrate that one possible solution is inflation with a sufficiently large Hubble rate, for this slow roll is not necessary. In addition, we also show that if the slow-roll condition is added to inflation with a large Hubble rate, then for any power law modified dispersion relation quantum vacuum fluctuations become nearly scale-invariant when they exit the Hubble radius.

  14. Invariant Measures of Genetic Recombination Processes

    Science.gov (United States)

    Akopyan, Arseniy V.; Pirogov, Sergey A.; Rybko, Aleksandr N.

    2015-07-01

    We construct a non-linear Markov process connected with a biological model of a bacterial genome recombination. The description of invariant measures of this process gives us the solution of one problem in elementary probability theory.

  15. Testing Lorentz invariance of dark matter

    CERN Document Server

    Blas, Diego; Sibiryakov, Sergey

    2012-01-01

    We study the possibility to constrain deviations from Lorentz invariance in dark matter (DM) with cosmological observations. Breaking of Lorentz invariance generically introduces new light gravitational degrees of freedom, which we represent through a dynamical timelike vector field. If DM does not obey Lorentz invariance, it couples to this vector field. We find that this coupling affects the inertial mass of small DM halos, which no longer satisfy the equivalence principle. For large enough lumps of DM we identify a (chameleon) mechanism that restores the inertial mass to its standard value. As a consequence, the dynamics of gravitational clustering are modified. Two prominent effects are a scale-dependent enhancement in the growth of large-scale structure and a scale-dependent bias between DM and baryon density perturbations. Comparison with the measured linear matter power spectrum in principle allows one to bound the departure from Lorentz invariance of DM at the per cent level.

  16. Testing Lorentz invariance of dark matter

    Energy Technology Data Exchange (ETDEWEB)

    Blas, Diego [Theory Group, Physics Department, CERN, CH-1211 Geneva 23 (Switzerland); Ivanov, Mikhail M.; Sibiryakov, Sergey, E-mail: diego.blas@cern.ch, E-mail: mm.ivanov@physics.msu.ru, E-mail: sibir@inr.ac.ru [Faculty of Physics, Moscow State University, Vorobjevy Gory, 119991 Moscow (Russian Federation)

    2012-10-01

    We study the possibility to constrain deviations from Lorentz invariance in dark matter (DM) with cosmological observations. Breaking of Lorentz invariance generically introduces new light gravitational degrees of freedom, which we represent through a dynamical timelike vector field. If DM does not obey Lorentz invariance, it couples to this vector field. We find that this coupling affects the inertial mass of small DM halos, which no longer satisfy the equivalence principle. For large enough lumps of DM we identify a (chameleon) mechanism that restores the inertial mass to its standard value. As a consequence, the dynamics of gravitational clustering are modified. Two prominent effects are a scale-dependent enhancement in the growth of large-scale structure and a scale-dependent bias between DM and baryon density perturbations. Comparison with the measured linear matter power spectrum in principle allows one to bound the departure from Lorentz invariance of DM at the per cent level.

  17. Ermakov–Lewis invariants and Reid systems

    Energy Technology Data Exchange (ETDEWEB)

    Mancas, Stefan C., E-mail: stefan.mancas@erau.edu [Department of Mathematics, Embry-Riddle Aeronautical University, Daytona Beach, FL 32114-3900 (United States); Rosu, Haret C., E-mail: hcr@ipicyt.edu.mx [IPICyT, Instituto Potosino de Investigacion Cientifica y Tecnologica, Camino a la presa San José 2055, Col. Lomas 4a Sección, 78216 San Luis Potosí, S.L.P. (Mexico)

    2014-06-13

    Reid's mth-order generalized Ermakov systems of nonlinear coupling constant α are equivalent to an integrable Emden–Fowler equation. The standard Ermakov–Lewis invariant is discussed from this perspective, and a closed formula for the invariant is obtained for the higher-order Reid systems (m≥3). We also discuss the parametric solutions of these systems of equations through the integration of the Emden–Fowler equation and present an example of a dynamical system for which the invariant is equivalent to the total energy. - Highlights: • Reid systems of order m are connected to Emden–Fowler equations. • General expressions for the Ermakov–Lewis invariants both for m=2 and m≥3 are obtained. • Parametric solutions of the Emden–Fowler equations related to Reid systems are obtained.
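    For reference (standard background, not taken from the record): the m = 2 Ermakov-Lewis invariant mentioned above, for the time-dependent oscillator \ddot{x} + \omega^2(t)\,x = 0 with auxiliary Pinney equation \ddot{\rho} + \omega^2(t)\,\rho = \rho^{-3}, is

    ```latex
    I \;=\; \frac{1}{2}\left[\left(\frac{x}{\rho}\right)^{2} + \left(\rho\dot{x} - \dot{\rho}\,x\right)^{2}\right],
    ```

    which is conserved along solutions of the coupled pair; the record's closed formula generalizes this invariant to the higher-order Reid systems with m ≥ 3.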

  18. Numerical considerations in computing invariant subspaces

    Energy Technology Data Exchange (ETDEWEB)

    Dongarra, J.J. (Tennessee Univ., Knoxville, TN (USA). Dept. of Computer Science Oak Ridge National Lab., TN (USA)); Hammarling, S. (Numerical Algorithms Group Ltd., Oxford (UK)); Wilkinson, J.H. (Oak Ridge National Lab., TN (USA))

    1990-11-01

    This paper describes two methods for computing the invariant subspace of a matrix. The first involves using transformations to interchange the eigenvalues; the second involves direct computation of the vectors. 10 refs.
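    The paper's two methods are not spelled out in the abstract. Purely as a minimal illustration of what an invariant subspace is (using NumPy and a hypothetical symmetric matrix, not the paper's algorithms), the sketch below builds one from selected eigenvectors and verifies the defining property A·range(Q) ⊆ range(Q):

    ```python
    import numpy as np

    # Hypothetical symmetric test matrix (illustration only).
    A = np.array([[4.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])

    # For a symmetric matrix, eigh returns orthonormal eigenvectors
    # with eigenvalues in ascending order.
    w, V = np.linalg.eigh(A)

    # The span of the eigenvectors for the two largest eigenvalues
    # is an invariant subspace of A.
    Q = V[:, -2:]

    # Defining property: A maps range(Q) into itself, i.e. A Q = Q (Q^T A Q).
    residual = np.linalg.norm(A @ Q - Q @ (Q.T @ A @ Q))
    print(f"invariance residual: {residual:.2e}")
    ```

    The numerical methods in the paper (eigenvalue reordering in a Schur-like form, and direct computation of the basis vectors) aim to produce such a Q stably without computing a full eigendecomposition.
    
    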

  19. Gauge invariance for a whole Abelian model

    Science.gov (United States)

    Chauca, J.; Doria, R.; Soares, W.

    2012-10-01

    Light invariance is a fundamental principle for doing physics. It generates the Maxwell equations, relativity and the Lorentz group. However, there is still room for a fourth picture to be developed, which includes fields with the same Lorentz nature. This opens new room for field theory: it says that light invariance works not only to connect space and time, but also to associate different fields of the same nature. Thus, for the (1/2,1/2) representation there is a family of fields {AμI} to be studied. Given such an association of fields, one should derive its corresponding gauge theory. That is the effort of this work: to show that there is a whole gauge theory covering these field relationships. Considering the abelian case, we prove its gauge invariance. It yields the kinetic, massive, trilinear and quadrilinear gauge-invariant terms.

  20. Gauge invariance for a whole Abelian model

    Energy Technology Data Exchange (ETDEWEB)

    Chauca, J.; Doria, R.; Soares, W. [CBPF, Rio de Janeiro (Brazil); Aprendanet, Petropolis, 25600 (Brazil)

    2012-09-24

    Light invariance is a fundamental principle for doing physics. It generates the Maxwell equations, relativity and the Lorentz group. However, there is still room for a fourth picture to be developed, which includes fields with the same Lorentz nature. This opens new room for field theory: it says that light invariance works not only to connect space and time, but also to associate different fields of the same nature. Thus, for the (1/2,1/2) representation there is a family of fields {AμI} to be studied. Given such an association of fields, one should derive its corresponding gauge theory. That is the effort of this work: to show that there is a whole gauge theory covering these field relationships. Considering the abelian case, we prove its gauge invariance. It yields the kinetic, massive, trilinear and quadrilinear gauge-invariant terms.

  1. On invariant measures of nonlinear Markov processes

    Directory of Open Access Journals (Sweden)

    N. U. Ahmed

    1993-01-01

    Full Text Available We consider a nonlinear (in the sense of McKean) Markov process described by a stochastic differential equation in Rd. We prove the existence and uniqueness of invariant measures of such a process.

  2. Galilean invariance in 2+1 dimensions

    OpenAIRE

    Brihaye, Y.; Gonera, C.; Giller, S; Kosinski, P.

    1995-01-01

    Galilean invariance in three-dimensional space-time is considered. It appears that the Galilei group in 2+1 dimensions possesses a three-parameter family of projective representations. Their physical interpretation is discussed in some detail.

  3. Fibred knots and twisted Alexander invariants

    OpenAIRE

    Cha, Jae Choon

    2001-01-01

    We introduce a new algebraic topological technique to detect non-fibred knots in the three sphere using the twisted Alexander invariants. As an application, we show that for any Seifert matrix of a knot with a nontrivial Alexander polynomial, there exist infinitely many non-fibered knots with the given Seifert matrix. We illustrate examples of knots that have trivial Alexander polynomials but do not have twisted Alexander invariants of fibred knots.

  4. On the way to language: event segmentation in homesign and gesture*

    Science.gov (United States)

    ÖZYÜREK, ASLI; FURMAN, REYHAN; GOLDIN-MEADOW, SUSAN

    2014-01-01

    Languages typically express semantic components of motion events such as manner (roll) and path (down) in separate lexical items. We explore how these combinatorial possibilities of language arise by focusing on (i) gestures produced by deaf children who lack access to input from a conventional language (homesign); (ii) gestures produced by hearing adults and children while speaking; and (iii) gestures used by hearing adults without speech when asked to do so in elicited descriptions of motion events with simultaneous manner and path. Homesigners tended to conflate manner and path in one gesture, but also used a mixed form, adding a manner and/or path gesture to the conflated form sequentially. Hearing speakers, with or without speech, used the conflated form, gestured manner, or path, but rarely used the mixed form. Mixed form may serve as an intermediate structure on the way to the discrete and sequenced forms found in natural languages. PMID:24650738

  5. Brief Report: Gestures in Children at Risk for Autism Spectrum Disorders.

    Science.gov (United States)

    Gordon, Rupa Gupta; Watson, Linda R

    2015-07-01

    Retrospective video analyses indicate that disruptions in gesture use occur as early as 9-12 months of age in infants later diagnosed with autism spectrum disorders (ASD). We report a prospective study of gesture use in 42 children identified as at-risk for ASD using a general population screening. At age 13-15 months, gestures were more disrupted in infants who, at 20-24 months, met cutoffs for "autism" on the ADOS than in those who met cutoffs for "autism spectrum" or those who did not meet cutoffs for either, whereas these latter two groups displayed similar patterns of gesture use. Total gestures predicted later receptive and expressive language outcomes; therefore, gesture use may help identify infants who can benefit from early communication interventions.

  6. Intelligent RF-Based Gesture Input Devices Implemented Using e-Textiles

    Directory of Open Access Journals (Sweden)

    Dana Hughes

    2017-01-01

    Full Text Available We present a radio-frequency (RF)-based approach to gesture detection and recognition, using e-textile versions of common transmission lines used in microwave circuits. This approach allows for easy fabrication of input swatches that can detect a continuum of finger positions and, similarly, basic gestures, using a single measurement line. We demonstrate that the swatches can perform gesture detection when under thin layers of cloth or when weatherproofed, providing a high level of versatility not present with other types of approaches. Additionally, using small convolutional neural networks, low-level gestures can be identified with a high level of accuracy using a small, inexpensive microcontroller, allowing for an intelligent fabric that reports only gestures of interest, rather than a simple sensor requiring constant surveillance from an external computing device. The resulting e-textile smart composite has applications in controlling wearable devices by providing a simple, eyes-free mechanism to input simple gestures.

  7. Low-level contrast statistics are diagnostic of invariance of natural textures.

    Science.gov (United States)

    Groen, Iris I A; Ghebreab, Sennay; Lamme, Victor A F; Scholte, H Steven

    2012-01-01

    Texture may provide important clues for real world object and scene perception. To be reliable, these clues should ideally be invariant to common viewing variations such as changes in illumination and orientation. In a large image database of natural materials, we found textures with low-level contrast statistics that varied substantially under viewing variations, as well as textures that remained relatively constant. This led us to ask whether textures with constant contrast statistics give rise to more invariant representations compared to other textures. To test this, we selected natural texture images with either high (HV) or low (LV) variance in contrast statistics and presented these to human observers. In two distinct behavioral categorization paradigms, participants more often judged HV textures as "different" compared to LV textures, showing that textures with constant contrast statistics are perceived as being more invariant. In a separate electroencephalogram (EEG) experiment, evoked responses to single texture images (single-image ERPs) were collected. The results show that differences in contrast statistics correlated with both early and late differences in occipital ERP amplitude between individual images. Importantly, ERP differences between images of HV textures were mainly driven by illumination angle, which was not the case for LV images: there, differences were completely driven by texture membership. These converging neural and behavioral results imply that some natural textures are surprisingly invariant to illumination changes and that low-level contrast statistics are diagnostic of the extent of this invariance.

  8. Barack Obama’s pauses and gestures in humorous speeches

    DEFF Research Database (Denmark)

    Navarretta, Costanza

    2017-01-01

    The main aim of this paper is to investigate speech pauses and gestures as means to engage the audience and present the humorous message in an effective way. The data consist of two speeches by the USA president Barack Obama at the 2011 and 2016 Annual White House Correspondents’ Association Dinner...... and they emphasise the speech segment which they follow or precede. We also found a highly significant correlation between Obama’s speech pauses and audience response. Obama produces numerous head movements, facial expressions and hand gestures and their functions are related to both discourse content and structure....... Characteristic of these speeches is that Obama points to individuals in the audience and often smiles and laughs. Audience response is equally frequent in the two events, and there are no significant changes in speech rate and frequency of head movements and facial expressions in the two speeches while Obama...

  9. Gesture recognition for smart home applications using portable radar sensors.

    Science.gov (United States)

    Wan, Qian; Li, Yiran; Li, Changzhi; Pal, Ranadip

    2014-01-01

    In this article, we consider the design of a human gesture recognition system based on pattern recognition of signatures from a portable smart radar sensor. Powered by AAA batteries, the smart radar sensor operates in the 2.4 GHz industrial, scientific and medical (ISM) band. We analyzed the feature space using principal components and application-specific time- and frequency-domain features extracted from radar signals for two different sets of gestures. We illustrate that a nearest-neighbor-based classifier can achieve greater than 95% accuracy for multi-class classification using 10-fold cross-validation when features are extracted based on magnitude differences and Doppler shifts, as compared to features extracted through orthogonal transformations. The reported results illustrate the potential of intelligent radars integrated with a pattern recognition system for high-accuracy smart home and health monitoring purposes.
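    The evaluation pipeline described in the abstract (feature vectors, a nearest-neighbour classifier, 10-fold cross-validation) can be sketched as follows. The Gaussian feature clusters are purely synthetic stand-ins for the radar-derived features, not the paper's data:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-in for radar gesture features: two gesture classes,
    # each a Gaussian cluster in a 4-D feature space (e.g. magnitude
    # differences and Doppler-shift statistics).
    n_per_class = 50
    X = np.vstack([
        rng.normal(0.0, 1.0, size=(n_per_class, 4)),
        rng.normal(3.0, 1.0, size=(n_per_class, 4)),
    ])
    y = np.repeat([0, 1], n_per_class)

    def nn_predict(X_train, y_train, X_test):
        """1-nearest-neighbour prediction with Euclidean distance."""
        d = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
        return y_train[np.argmin(d, axis=1)]

    def k_fold_accuracy(X, y, k=10, seed=0):
        """Mean accuracy over k random folds."""
        idx = np.random.default_rng(seed).permutation(len(y))
        folds = np.array_split(idx, k)
        accs = []
        for i in range(k):
            test = folds[i]
            train = np.concatenate([folds[j] for j in range(k) if j != i])
            pred = nn_predict(X[train], y[train], X[test])
            accs.append(np.mean(pred == y[test]))
        return float(np.mean(accs))

    acc = k_fold_accuracy(X, y, k=10)
    print(f"10-fold CV accuracy: {acc:.3f}")
    ```

    On well-separated clusters like these, the nearest-neighbour classifier attains high accuracy; the study's point is that this holds for real radar features only when magnitude-difference and Doppler-shift features are used.
    
    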

  10. Constraints on the Use of Language, Gesture and Speech for Multimodal Dialogues

    OpenAIRE

    Gaiffe, Bertrand; Romary, Laurent

    1997-01-01

    We try to show how language, gesture and perception can be seen in a uniform way from the perspective of referential analysis, by looking at the specific constraints which underlie the speaker's choice of a given expression. To this end, we first situate the relative importance of speech and gesture in man-machine communication. Then, we concentrate upon the specific effects resulting from the combination of verbal, gestural and perceptual information, showing that on t...

  11. Development of a hand-gesture recognition system for human-computer interaction

    OpenAIRE

    Maqueda Nieto, Ana I.

    2014-01-01

    The aim of this Master Thesis is the analysis, design and development of a robust and reliable Human-Computer Interaction interface, based on visual hand-gesture recognition. The implementation of the required functions is oriented to the simulation of a classical hardware interaction device: the mouse, by recognizing a specific hand-gesture vocabulary in color video sequences. For this purpose, a prototype of a hand-gesture recognition system has been designed and implemented, which is c...

  12. Assessing Optimal Relationships Among Multi-Touch Gestures and Functions in Computer Applications

    Science.gov (United States)

    2013-07-01

    of gestures, mice, or video game controllers. Assuming a similar expectation existed for all functions before participants used a touchscreen, the...results: pairs of pie charts detail the gesture motions and draw patterns used for each of the 42 gestures. ...indicating that even the worst fits were reported as being relatively good. As depicted in the graph, little difference was noted in difficulty among

  13. Domestic dogs use contextual information and tone of voice when following a human pointing gesture

    OpenAIRE

    Linda Scheider; Susanne Grassmann; Juliane Kaminski; Michael Tomasello

    2011-01-01

    Domestic dogs are skillful at using the human pointing gesture. In this study we investigated whether dogs take contextual information into account when following pointing gestures, specifically, whether they follow human pointing gestures more readily in the context in which food has been found previously. Also varied was the human's tone of voice as either imperative or informative. Dogs were more sustained in their searching behavior in the ‘context’ condition as opposed to the ‘no context...

  14. Language and iconic gesture use in procedural discourse by speakers with aphasia.

    Science.gov (United States)

    Pritchard, Madeleine; Dipper, Lucy; Morgan, Gary; Cocks, Naomi

    2015-07-03

    Background: Conveying instructions is an everyday use of language, and gestures are likely to be a key feature of this. Although co-speech iconic gestures are tightly integrated with language, and people with aphasia (PWA) produce procedural discourses impaired at a linguistic level, no previous studies have investigated how PWA use co-speech iconic gestures in these contexts. Aims: This study investigated how PWA communicated meaning using gesture and language in procedural discourses, compared with neurologically healthy people (NHP). We aimed to identify the relative relationship of gesture and speech, in the context of impaired language, both overall and in individual events. Methods & Procedures: Twenty-nine PWA and 29 NHP produced two procedural discourses. The structure and semantic content of language of the whole discourses were analysed through predicate argument structure and spatial motor terms, and gestures were analysed for frequency and semantic form. Gesture and language were analysed in two key events, to determine the relative information presented in each modality. Outcomes & Results: PWA and NHP used similar frequencies and forms of gestures, although PWA used syntactically simpler language and fewer spatial words. This meant, overall, relatively more information was present in PWA gesture. This finding was also reflected in the key events, where PWA used gestures conveying rich semantic information alongside semantically impoverished language more often than NHP. Conclusions: PWA gestures, containing semantic information omitted from the concurrent speech, may help listeners with meaning when language is impaired. This finding indicates gesture should be included in clinical assessments of meaning-making.

  15. Training experience in gestures affects the display of social gaze in baboons' communication with a human.

    Science.gov (United States)

    Bourjade, Marie; Canteloup, Charlotte; Meguerditchian, Adrien; Vauclair, Jacques; Gaunet, Florence

    2015-01-01

    Gaze behaviour, notably the alternation of gaze between distal objects and social partners that accompanies primates' gestural communication, is considered a standard indicator of intentionality. However, the developmental precursors of gaze behaviour in primates' communication are not well understood. Here, we capitalized on the training in gestures dispensed to olive baboons (Papio anubis) as a way of manipulating individual communicative experience with humans. We aimed to delineate the effects of such a training experience on gaze behaviour displayed by the monkeys in relation with gestural requests. Using a food-requesting paradigm, we compared subjects trained in requesting gestures (i.e. trained subjects) to naïve subjects (i.e. control subjects) for their occurrences of (1) gaze behaviour, (2) requesting gestures and (3) temporal combination of gaze alternation with gestures. We found that training did not affect the frequencies of looking at the human's face, looking at food or alternating gaze. Hence, social gaze behaviour occurs independently from the amount of communicative experience with humans. However, trained baboons, who gestured more than control subjects, exhibited most gaze alternation combined with gestures, whereas control baboons did not. By reinforcing the display of gaze alternation along with gestures, we suggest that training may have served to enhance the communicative function of hand gestures. Finally, this study brings the first quantitative report of monkeys producing requesting gestures without explicit training by humans (controls). These results may open a window on the developmental mechanisms (i.e. incidental learning vs. training) underpinning gestural intentional communication in primates.

  16. From manual gesture to speech: a gradual transition.

    Science.gov (United States)

    Gentilucci, Maurizio; Corballis, Michael C

    2006-01-01

There are a number of reasons to suppose that language evolved from manual gestures. We review evidence that the transition from primarily manual to primarily vocal language was a gradual process, and is best understood if speech is itself regarded as a gestural rather than an acoustic system, an idea captured by the motor theory of speech perception and by articulatory phonology. Studies of primate premotor cortex, and in particular of the so-called "mirror system", suggest a double hand/mouth command system that may have evolved initially in the context of ingestion and later formed a platform for combined manual and vocal communication. In humans, speech is typically accompanied by manual gesture, speech production itself is influenced by executing or observing hand movements, and manual actions also play an important role in the development of speech, from the babbling stage onwards. The final stage, at which speech became relatively autonomous, may have occurred late in hominid evolution, perhaps with a mutation of the FOXP2 gene around 100,000 years ago.

  17. Kinesthetic Elementary Mathematics - Creating Flow with Gesture Modality

    Directory of Open Access Journals (Sweden)

    Jussi Okkonen

    2016-06-01

Full Text Available Educational games for young children have boomed with the growing abundance of easy-to-use interfaces, especially on smartphones and tablets. In addition, most major gaming consoles offer multimodal interaction, including the more novel and immersive gesture-based or bodily interaction, a concept proven by masses of consumers, including young children. In this paper, we examine an elementary mathematics learning application that aims to promote a state of flow for children aged 6 to 8 years. The application runs on a PC and uses the Microsoft Kinect sensor for motion tracking. It provides gamified approaches to teaching the number system from 0 to 20. Our underlying hypothesis is that kinesthetic learning methods supported by bodily interaction give leverage to different types of learners. The paper describes the results of two sets (n1=23, n2=44) of pilot tests of the exercise application for PC and Kinect. The tools utilized include a short, simplified survey for the children, and another survey and an open-ended questionnaire for the teachers. Our key findings relate to the user experience of gesture-based interaction and show how the gesture modality promotes flow. Furthermore, we discuss our preliminary assessment of several learning-related themes.

  18. Gesture and form in the Neolithic graphic expression

    Directory of Open Access Journals (Sweden)

    Philippe HAMEAU

    2011-10-01

Full Text Available The painted parietal sign preserves the memory of the gesture that produced it; this is one of the distinctive features of this particular artefact. However, that gesture cannot be reconstructed unless the sign is placed in context, that is, unless we account for the many physical and cultural parameters that governed its production. For the schematic paintings of the Neolithic, a combination of criteria must be taken into account: the topography of the wall and of the site, the cultural constraints that determine the location of the figures, and the ritual practices from which the graphic expression originally arose. The painter perceives, adapts and behaves according to this spatial and social environment. We refer here to several strategies: attention to the microtopography of the wall in relation to the signs to be drawn; respect for the criteria that guide the choice of site, such as the hygrophily of the place and the rubefaction of the rock walls; the need to paint at the limits of accessibility of site and wall; and the use of drawing tools to extend the reach of the body. The efficiency of the gesture lies in producing a sign that carries meaning because it is in harmony with the features of its support.

  19. A Natural Interaction Interface for UAVs Using Intuitive Gesture Recognition

    Science.gov (United States)

    Chandarana, Meghan; Trujillo, Anna; Shimada, Kenji; Allen, Danette

    2016-01-01

The popularity of unmanned aerial vehicles (UAVs) is increasing as technological advancements boost their favorability for a broad range of applications. One application is science data collection. In fields like Earth and atmospheric science, researchers are seeking to use UAVs to augment their current portfolio of platforms and increase their access to geographic areas of interest. By increasing the number of data collection platforms, UAVs will significantly improve system robustness and allow for more sophisticated studies. Scientists would like to be able to deploy an available fleet of UAVs to fly a desired flight path and collect sensor data without needing to understand the complex low-level controls required to describe and coordinate such a mission. A natural interaction interface for a Ground Control System (GCS) using gesture recognition is developed to allow non-expert users (e.g., scientists) to define a complex flight path for a UAV using intuitive hand gesture inputs from a constructed gesture library. The GCS calculates the combined trajectory on-line, verifies the trajectory with the user, and sends it to the UAV controller to be flown.

  20. Links between Gestures and Multisensory Processing: Individual Differences Suggest a Compensation Mechanism

    Directory of Open Access Journals (Sweden)

    Simon B. Schmalenbach

    2017-10-01

Full Text Available Speech-associated gestures represent an important communication modality. However, individual differences in the production and perception of gestures are not yet well understood. We hypothesized that the perception of multisensory action consequences might play a crucial role. Verbal communication involves continuous calibration of audio–visual information produced by the speakers. The effective production and perception of gestures supporting this process could depend on the given capacities to perceive multisensory information accurately. We explored the association between the production and perception of gestures and the monitoring of multisensory action consequences in a sample of 31 participants. We applied a recently introduced gesture scale to assess self-reported gesture production and perception in everyday life situations. In the perceptual experiment, we presented unimodal (visual) and bimodal (visual and auditory) sensory outcomes with various delays after a self-initiated (active) or externally generated (passive) button press. Participants had to report whether they detected a delay between the button press and the visual stimulus. We derived psychometric functions for each condition and determined points of subjective equality, reflecting detection thresholds for delays. Results support a robust link between gesture scores and detection thresholds. Individuals with higher detection thresholds (lower performance) reported more frequent gesture production and perception and furthermore profited more from multisensory information in the experimental task. We propose that our findings indicate a compensational function of multisensory processing as a basis for individual differences in both action outcome monitoring and gesture production and perception in everyday life situations.

  1. Links between Gestures and Multisensory Processing: Individual Differences Suggest a Compensation Mechanism.

    Science.gov (United States)

    Schmalenbach, Simon B; Billino, Jutta; Kircher, Tilo; van Kemenade, Bianca M; Straube, Benjamin

    2017-01-01

Speech-associated gestures represent an important communication modality. However, individual differences in the production and perception of gestures are not yet well understood. We hypothesized that the perception of multisensory action consequences might play a crucial role. Verbal communication involves continuous calibration of audio-visual information produced by the speakers. The effective production and perception of gestures supporting this process could depend on the given capacities to perceive multisensory information accurately. We explored the association between the production and perception of gestures and the monitoring of multisensory action consequences in a sample of 31 participants. We applied a recently introduced gesture scale to assess self-reported gesture production and perception in everyday life situations. In the perceptual experiment, we presented unimodal (visual) and bimodal (visual and auditory) sensory outcomes with various delays after a self-initiated (active) or externally generated (passive) button press. Participants had to report whether they detected a delay between the button press and the visual stimulus. We derived psychometric functions for each condition and determined points of subjective equality, reflecting detection thresholds for delays. Results support a robust link between gesture scores and detection thresholds. Individuals with higher detection thresholds (lower performance) reported more frequent gesture production and perception and furthermore profited more from multisensory information in the experimental task. We propose that our findings indicate a compensational function of multisensory processing as a basis for individual differences in both action outcome monitoring and gesture production and perception in everyday life situations.
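
    The detection thresholds described here are points of subjective equality read off fitted psychometric functions. A minimal sketch of that step, assuming a cumulative-Gaussian model fitted by grid search to invented trial data (the delays and detection rates below are made up, not taken from the study):

```python
import numpy as np
from math import erf

# Hypothetical trial data: delays (ms) between a button press and the
# visual outcome, and the fraction of trials on which a delay was
# reported. These numbers are illustrative only.
delays = np.array([0.0, 83.0, 167.0, 250.0, 333.0, 417.0])
p_detect = np.array([0.05, 0.10, 0.35, 0.60, 0.85, 0.95])

verf = np.vectorize(erf)  # math.erf applied elementwise

def cum_gauss(x, mu, sigma):
    """Cumulative-Gaussian psychometric function."""
    return 0.5 * (1.0 + verf((np.asarray(x) - mu) / (sigma * np.sqrt(2.0))))

# Fit (mu, sigma) by least-squares grid search; the point of
# subjective equality (50% detection) is mu itself.
best_mu, best_sigma, best_err = None, None, np.inf
for mu in np.arange(0.0, 400.0, 4.0):
    for sigma in np.arange(20.0, 300.0, 10.0):
        err = np.sum((cum_gauss(delays, mu, sigma) - p_detect) ** 2)
        if err < best_err:
            best_mu, best_sigma, best_err = mu, sigma, err

print(f"detection threshold (PSE) ~ {best_mu:.0f} ms")
```

    In practice a maximum-likelihood fit would replace the grid search, but the derived quantity, the delay at which detection probability crosses 50%, is the same.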

  2. The effect of the visual context in the recognition of symbolic gestures.

    Science.gov (United States)

    Villarreal, Mirta F; Fridman, Esteban A; Leiguarda, Ramón C

    2012-01-01

To investigate, by means of fMRI, the influence of the visual environment in the process of symbolic gesture recognition. Emblems are semiotic gestures that use movements or hand postures to symbolically encode and communicate meaning, independently of language. They often require contextual information to be correctly understood. Until now, observation of symbolic gestures had been studied against a blank background, where the meaning and intentionality of the gesture were not fulfilled. Normal subjects were scanned while observing short videos of an individual performing symbolic gestures with or without the corresponding visual context, and the context scenes without gestures. The comparison between gestures regardless of the context demonstrated increased activity in the inferior frontal gyrus, the superior parietal cortex and the temporoparietal junction in the right hemisphere and the precuneus and posterior cingulate bilaterally, while the comparison between context and gestures alone did not recruit any of these regions. These areas seem to be crucial for the inference of intentions in symbolic gestures observed in their natural context and represent an interrelated network formed by components of the putative human mirror neuron system as well as the mentalizing system.

  3. Early deictic but not other gestures predict later vocabulary in both typical development and autism.

    Science.gov (United States)

    Özçalışkan, Şeyda; Adamson, Lauren B; Dimitrova, Nevena

    2016-08-01

Research with typically developing children suggests a strong positive relation between early gesture use and subsequent vocabulary development. In this study, we ask whether gesture production plays a similar role for children with autism spectrum disorder. We observed 23 18-month-old typically developing children and 23 30-month-old children with autism spectrum disorder interact with their caregivers (Communication Play Protocol) and coded the types of gestures children produced (deictic, give, conventional, and iconic) in two communicative contexts (commenting and requesting). One year later, we assessed children's expressive vocabulary using the Expressive Vocabulary Test. Children with autism spectrum disorder showed significant deficits in gesture production, particularly in deictic gestures (i.e. gestures that indicate objects by pointing at them or by holding them up). Importantly, deictic gestures, but not other gestures, predicted children's vocabulary 1 year later regardless of communicative context, a pattern also found in typical development. We conclude that the production of deictic gestures serves as a stepping-stone for vocabulary development. © The Author(s) 2015.

  4. Feeling addressed! The role of body orientation and co-speech gesture in social communication.

    Science.gov (United States)

    Nagels, Arne; Kircher, Tilo; Steines, Miriam; Straube, Benjamin

    2015-05-01

During face-to-face communication, body orientation and coverbal gestures influence how information is conveyed. The neural pathways underpinning the comprehension of such nonverbal social cues in everyday interaction are still in part unknown. During fMRI data acquisition, 37 participants were presented with video clips showing an actor speaking short sentences. The actor produced speech-associated iconic gestures (IC) or no gestures (NG) while he was visible either from an egocentric (ego) or from an allocentric (allo) position. Participants were asked to indicate via button press whether they felt addressed or not. We found a significant interaction of body orientation and gesture in addressment evaluations, indicating that participants evaluated IC-ego conditions as most addressing. The anterior cingulate cortex (ACC) and left fusiform gyrus were more strongly activated for the egocentric versus the allocentric actor position in the gesture context. Activation increase in the ACC for IC-ego>IC-allo further correlated positively with increased addressment ratings in the egocentric gesture condition. Gesture-related activation increase in the supplementary motor area, left inferior frontal gyrus and right insula correlated positively with gesture-related increase of addressment evaluations in the egocentric context. Results indicate that gesture use and body orientation contribute to the feeling of being addressed and together influence neural processing in brain regions involved in motor simulation, empathy and mentalizing. © 2015 Wiley Periodicals, Inc.

  5. Gesture-controlled interfaces for self-service machines and other applications

    Science.gov (United States)

    Cohen, Charles J. (Inventor); Beach, Glenn (Inventor); Cavell, Brook (Inventor); Foulk, Gene (Inventor); Jacobus, Charles J. (Inventor); Obermark, Jay (Inventor); Paul, George (Inventor)

    2004-01-01

A gesture recognition interface for use in controlling self-service machines and other devices is disclosed. A gesture is defined as motions and kinematic poses generated by humans, animals, or machines. Specific body features are tracked, and static and motion gestures are interpreted. Motion gestures are defined as a family of parametrically delimited oscillatory motions, modeled as a linear-in-parameters dynamic system with added geometric constraints to allow for real-time recognition using a small amount of memory and processing time. A linear least squares method is preferably used to determine the parameters which represent each gesture. Feature position measurements are used in conjunction with a bank of predictor bins seeded with the gesture parameters, and the system determines which bin best fits the observed motion. Recognizing static pose gestures is preferably performed by localizing the body/object from the rest of the image, describing that object, and identifying that description. The disclosure details methods for gesture recognition, as well as the overall architecture for using gesture recognition to control devices, including self-service machines.
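
    The linear-in-parameters modeling and predictor-bin matching can be illustrated with a toy one-dimensional example. The oscillator model x''(t) = a·x(t) + b·x'(t), the gesture names and the bin parameters below are invented for illustration, not taken from the patent:

```python
import numpy as np

def fit_oscillator(x, dt):
    """Estimate (a, b) in x'' = a*x + b*x' from a sampled trajectory
    by linear least squares on finite-difference derivatives."""
    v = np.gradient(x, dt)        # approximate x'
    acc = np.gradient(v, dt)      # approximate x''
    A = np.column_stack([x, v])   # linear-in-parameters regressors
    (a, b), *_ = np.linalg.lstsq(A, acc, rcond=None)
    return a, b

# Simulate a gesture: an undamped oscillation at 2 rad/s, for which
# the true parameters are a = -4, b = 0.
dt = 0.01
t = np.arange(0.0, 4.0, dt)
x = np.cos(2.0 * t)
a, b = fit_oscillator(x, dt)

# Predictor "bins" seeded with known gesture parameters; the observed
# motion is assigned to the bin whose parameters it best matches.
bins = {"slow_wave": (-1.0, 0.0), "fast_wave": (-4.0, 0.0), "damped": (-4.0, -0.5)}
best = min(bins, key=lambda k: (bins[k][0] - a) ** 2 + (bins[k][1] - b) ** 2)
print(best, round(a, 2), round(b, 2))
```

    The patent's predictor bins run such models forward and score prediction error rather than comparing parameters directly, but the identification step is the same least-squares idea.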

  6. An arc-length warping algorithm for gesture recognition using quaternion representation.

    Science.gov (United States)

    Cifuentes, Jenny; Pham, Minh Tu; Moreau, Richard; Prieto, Flavio; Boulanger, Pierre

    2013-01-01

This paper presents a new algorithm, called the Dynamic Arc-Length Warping (DALW) algorithm, for hand gesture recognition based on orientation data. After computing a quaternion for each orientation measurement, we use the DALW algorithm to obtain a similarity measure between different trajectories. We present the benefits of using quaternions alongside Dynamic Arc-Length Warping as an optimized tool for gesture recognition, and we show the advantages of this approach compared with other techniques. This tool can be used to distinguish similar and dissimilar gestures. An experimental validation is carried out to classify a series of simple human gestures.
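
    The core ingredients, a quaternion geodesic distance and a warping-based trajectory comparison, can be sketched with standard dynamic time warping as a stand-in (the paper's DALW warps by arc length rather than time, and the trajectories below are invented):

```python
import numpy as np

def quat_dist(q1, q2):
    """Geodesic distance between unit quaternions; the abs() makes it
    invariant to the q/-q sign ambiguity."""
    d = min(1.0, abs(float(np.dot(q1, q2))))
    return 2.0 * np.arccos(d)

def dtw(seq_a, seq_b):
    """Dynamic-warping distance between two quaternion trajectories."""
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = quat_dist(seq_a[i - 1], seq_b[j - 1])
            D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def rot_z(angle):
    """Unit quaternion (w, x, y, z) for a rotation about the z axis."""
    return np.array([np.cos(angle / 2.0), 0.0, 0.0, np.sin(angle / 2.0)])

# The same quarter-turn gesture at two speeds vs. a half-turn gesture.
slow = [rot_z(a) for a in np.linspace(0.0, np.pi / 2, 20)]
fast = [rot_z(a) for a in np.linspace(0.0, np.pi / 2, 10)]
other = [rot_z(a) for a in np.linspace(0.0, np.pi, 10)]

same_cost = dtw(slow, fast)
diff_cost = dtw(slow, other)
print(same_cost < diff_cost)   # warping absorbs the speed difference
```

    Warping over arc length instead of time, as DALW does, additionally removes sensitivity to how the execution speed varies within a single gesture.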

  7. Gesture and motor skill in relation to language in children with language impairment.

    Science.gov (United States)

    Iverson, Jana M; Braddock, Barbara A

    2011-02-01

    To examine gesture and motor abilities in relation to language in children with language impairment (LI). Eleven children with LI (aged 2;7 to 6;1 [years;months]) and 16 typically developing (TD) children of similar chronological ages completed 2 picture narration tasks, and their language (rate of verbal utterances, mean length of utterance, and number of different words) and gestures (coded for type, co-occurrence with language, and informational relationship to language) were examined. Fine and gross motor items from the Battelle Developmental Screening Inventory (J. Newborg, J. R. Stock, L. Wneck, J. Guidubaldi, & J. Suinick, 1994) and the Child Development Inventory (H. R. Ireton, 1992) were administered. Relative to TD peers, children with LI used gestures at a higher rate and produced greater proportions of gesture-only communications, conventional gestures, and gestures that added unique information to co-occurring language. However, they performed more poorly on measures of fine and gross motor abilities. Regression analyses indicated that within the LI but not the TD group, poorer expressive language was related to more frequent gesture production. When language is impaired, difficulties are also apparent in motor abilities, but gesture assumes a compensatory role. These findings underscore the utility of including spontaneous gesture and motor abilities in clinical assessment of and intervention for preschool children with language concerns.

  8. The effect of the visual context in the recognition of symbolic gestures.

    Directory of Open Access Journals (Sweden)

    Mirta F Villarreal

Full Text Available BACKGROUND: To investigate, by means of fMRI, the influence of the visual environment in the process of symbolic gesture recognition. Emblems are semiotic gestures that use movements or hand postures to symbolically encode and communicate meaning, independently of language. They often require contextual information to be correctly understood. Until now, observation of symbolic gestures had been studied against a blank background, where the meaning and intentionality of the gesture were not fulfilled. METHODOLOGY/PRINCIPAL FINDINGS: Normal subjects were scanned while observing short videos of an individual performing symbolic gestures with or without the corresponding visual context, and the context scenes without gestures. The comparison between gestures regardless of the context demonstrated increased activity in the inferior frontal gyrus, the superior parietal cortex and the temporoparietal junction in the right hemisphere and the precuneus and posterior cingulate bilaterally, while the comparison between context and gestures alone did not recruit any of these regions. CONCLUSIONS/SIGNIFICANCE: These areas seem to be crucial for the inference of intentions in symbolic gestures observed in their natural context and represent an interrelated network formed by components of the putative human mirror neuron system as well as the mentalizing system.

  9. Effects of hand gestures on auditory learning of second-language vowel length contrasts.

    Science.gov (United States)

    Hirata, Yukari; Kelly, Spencer D; Huang, Jessica; Manansala, Michael

    2014-12-01

Research has shown that hand gestures affect comprehension and production of speech at semantic, syntactic, and pragmatic levels for both native language and second language (L2). This study investigated a relatively less explored question: Do hand gestures influence auditory learning of an L2 at the segmental phonology level? To examine auditory learning of phonemic vowel length contrasts in Japanese, 88 native English-speaking participants took an auditory test before and after one of the following 4 types of training, in which they (a) observed an instructor in a video speaking Japanese words while she made a syllabic-rhythm hand gesture, (b) produced this gesture with the instructor, (c) observed the instructor speaking those words while she made a moraic-rhythm hand gesture, or (d) produced the moraic-rhythm gesture with the instructor. All of the training types yielded similar auditory improvement in identifying vowel length contrasts. However, observing the syllabic-rhythm hand gesture yielded the most balanced improvement between word-initial and word-final vowels and between slow and fast speaking rates. The overall effect of hand gesture on learning of segmental phonology is limited. Implications for theories of hand gesture are discussed in terms of the role it plays at different linguistic levels.

  10. DDF and Pohlmeyer invariants of (super)string

    OpenAIRE

    Schreiber, Urs

    2004-01-01

    We show how the Pohlmeyer invariants of the bosonic string are expressible in terms of DDF invariants. Quantization of the DDF observables in the usual way yields a consistent quantization of the algebra of Pohlmeyer invariants. Furthermore it becomes straightforward to generalize the Pohlmeyer invariants to the superstring as well as to all backgrounds which allow a free field realization of the worldsheet theory.

  11. Invariant geodynamical information in geometric geodetic measurements

    Science.gov (United States)

    Xu, Peiliang; Shimada, Seiichi; Fujii, Yoichiro; Tanaka, Torao

    2000-08-01

Repeated geodetic measurements have been used to extract geodynamical quantities such as displacements, velocities of movement and crustal strains. Historical geodetic networks, especially those established before the space geodetic era, were, and still are, very important in providing a unique insight into the (local or regional) historical deformation state of the Earth. For the geodetic network without a tie to an external reference frame, free network adjustment methods have been widely applied to derive geodynamical quantities. Currently, it is commonly accepted that absolute displacements cannot be uniquely determined from triangulation/trilateration measurements, but relative displacements can be found uniquely if the geodetic network is geometrically overdetermined (see e.g. Segall & Matthews 1988). Strain tensors were derived using the coordinate method and were reported to be uniquely determined. We have carried out a theoretical analysis of invariant geodynamical information in geometric geodetic observations and concluded: (1) that relative displacements are not invariant quantities and thus cannot be uniquely determined from the geodetic network without a tie to an external reference frame; and (2) that the components of the strain tensors are not all invariant and thus cannot individually be determined uniquely from the network. However, certain combinations of strain components are indeed invariant and can be uniquely determined from geometric geodetic measurements. The theory of invariant information is then applied to the analysis of the Tokai first-order triangulation/trilateration network spanning an interval of more than 100 yr. The results show that the normal and principal strains are significantly affected by the unknown scaling biases and orientation differences; thus any attempt at geophysical interpretation of these quantities must be exercised with great care. If the scaling bias and the orientation difference are small, the shear strain is

  12. The neural basis of hand gesture comprehension: A meta-analysis of functional magnetic resonance imaging studies.

    Science.gov (United States)

    Yang, Jie; Andric, Michael; Mathew, Mili M

    2015-10-01

    Gestures play an important role in face-to-face communication and have been increasingly studied via functional magnetic resonance imaging. Although a large amount of data has been provided to describe the neural substrates of gesture comprehension, these findings have never been quantitatively summarized and the conclusion is still unclear. This activation likelihood estimation meta-analysis investigated the brain networks underpinning gesture comprehension while considering the impact of gesture type (co-speech gestures vs. speech-independent gestures) and task demand (implicit vs. explicit) on the brain activation of gesture comprehension. The meta-analysis of 31 papers showed that as hand actions, gestures involve a perceptual-motor network important for action recognition. As meaningful symbols, gestures involve a semantic network for conceptual processing. Finally, during face-to-face interactions, gestures involve a network for social emotive processes. Our finding also indicated that gesture type and task demand influence the involvement of the brain networks during gesture comprehension. The results highlight the complexity of gesture comprehension, and suggest that future research is necessary to clarify the dynamic interactions among these networks. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. How children make language out of gesture: morphological structure in gesture systems developed by American and Chinese deaf children.

    Science.gov (United States)

    Goldin-Meadow, Susan; Mylander, Carolyn; Franklin, Amy

    2007-09-01

    When children learn language, they apply their language-learning skills to the linguistic input they receive. But what happens if children are not exposed to input from a conventional language? Do they engage their language-learning skills nonetheless, applying them to whatever unconventional input they have? We address this question by examining gesture systems created by four American and four Chinese deaf children. The children's profound hearing losses prevented them from learning spoken language, and their hearing parents had not exposed them to sign language. Nevertheless, the children in both cultures invented gesture systems that were structured at the morphological/word level. Interestingly, the differences between the children's systems were no bigger across cultures than within cultures. The children's morphemes could not be traced to their hearing mothers' gestures; however, they were built out of forms and meanings shared with their mothers. The findings suggest that children construct morphological structure out of the input that is handed to them, even if that input is not linguistic in form.

  14. Dactyl Alphabet Gesture Recognition in a Video Sequence Using Microsoft Kinect

    Science.gov (United States)

    Artyukhin, S. G.; Mestetskiy, L. M.

    2015-05-01

This paper presents an efficient framework for static gesture recognition based on data obtained from web cameras and the Kinect depth sensor (RGB-D data). Each gesture is given by a pair of images: a colour image and a depth map. The database stores each gesture of the alphabet as a feature description generated per frame. The recognition algorithm takes a video sequence (a sequence of frames) as input and either puts each frame in correspondence with a gesture from the database or decides that no suitable gesture is present. First, each frame of the video sequence is classified separately, without interframe information. Then, a run of consecutive frames labelled with the same gesture is grouped into a single static gesture. We propose a combined segmentation of each frame using the depth map and the RGB image. The primary segmentation is based on the depth map; it gives positional information and yields a rough hand boundary. The boundary is then refined using the colour image, and the shape of the hand is analysed. A continuous-skeleton method is used to generate features, and we propose an analysis of the skeleton's terminal branches that determines the positions of the fingers and the wrist. The classification features for a gesture describe the positions of the fingers relative to the wrist. Experiments with the developed algorithm were carried out on the example of American Sign Language. An American Sign Language gesture has several components, including the shape of the hand, its orientation in space and the type of movement. The accuracy of the proposed method is evaluated on a collected gesture dataset consisting of 2700 frames.
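
    The step that merges per-frame labels into static gestures can be sketched as a run-length grouping with a noise filter. The labels, the None-for-unrecognized convention and the minimum run length below are illustrative choices, not taken from the paper:

```python
from itertools import groupby

def group_frames(labels, min_run=3):
    """Collapse a per-frame label sequence into (label, length) static
    gestures; runs shorter than min_run are discarded as noise, and
    None marks frames with no suitable gesture in the database."""
    gestures = []
    for label, run in groupby(labels):
        length = len(list(run))
        if label is not None and length >= min_run:
            gestures.append((label, length))
    return gestures

# Two stable gestures separated by unrecognized and spurious frames.
frames = ["A", "A", "A", "A", None, "B", "A", "B", "B", "B", "B"]
print(group_frames(frames))   # [('A', 4), ('B', 4)]
```

    The single-frame "B" and "A" in the middle are rejected by the minimum-run filter, which is what makes the per-frame classifier's occasional errors harmless at the gesture level.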

  15. Communicative gestures and vocabulary development in 36-month-old children with Down's syndrome.

    Science.gov (United States)

    Zampini, Laura; D'Odorico, Laura

    2009-01-01

Children with Down's syndrome seem to show a preference for the use of gestural rather than vocal productions during the first stages of language development. This 'gestural advantage' could actually be a developmental strategy used to compensate for the difficulties in verbal production that are typical of language development in this population. The principal aim of the present study is to analyse the relationships between gesture production and vocabulary development in children with Down's syndrome, and to verify whether there are similarities with the processes that characterize typical development. Twenty 36-month-old Italian children with Down's syndrome participated in the study. Each child's spontaneous gestural and vocal production was assessed during a mother-and-child play session. Their psychomotor development was evaluated using the Brunet-Lézine Scale of Infant Development, and their vocabulary size, with regard to both production and comprehension, was assessed using the Italian version of the MacArthur Communicative Development Inventory (CDI). Following a longitudinal perspective, parents were requested to complete the inventory again 6 months after the first evaluation (i.e., at 42 months). Data analyses focused both on the concurrent relations between gesture production, vocabulary size and psychomotor development, and on the longitudinal relations between gesture production and subsequent vocabulary development. As regards concurrent relationships, gesture production appeared to be related both to psychomotor development and to word comprehension, but not to word production. Nevertheless, as regards longitudinal relationships, gesture production at 36 months was significantly correlated with subsequent vocabulary production, assessed at 42 months using the MacArthur Communicative Development Inventory (CDI), though this relationship appeared to be mediated by word comprehension. Two processes in the gesture production of children with Down

  16. Hand-Based Gesture Recognition for Vehicular Applications Using IR-UWB Radar.

    Science.gov (United States)

    Khan, Faheem; Leem, Seong Kyu; Cho, Sung Ho

    2017-04-11

Modern cars continue to offer more and more functionalities and therefore require a growing number of commands. As the driver tries to monitor the road and the graphic user interface simultaneously, his/her overall efficiency is reduced. In order to reduce the visual attention necessary for monitoring, a gesture-based user interface is very important. In this paper, gesture recognition for a vehicle through impulse radio ultra-wideband (IR-UWB) radar is discussed. The gestures, based on human hand and finger motion, can be used to control different electronic devices inside a vehicle. We have implemented a real-time version using only one radar sensor. Gesture recognition using IR-UWB radar has rarely been studied; existing approaches either rely on simple measures such as the magnitude of the reflected signal, or deteriorate markedly when the distance or direction changes. In this study, we propose a new hand-based gesture recognition algorithm that works robustly against changes in distance or direction while responding only to defined gestures and ignoring meaningless motions. We used three independent features, i.e., the variance of the probability density function (pdf) of the magnitude histogram, the time-of-arrival (TOA) variation and the frequency of the reflected signal, to classify the gestures. A data fitting method is included to differentiate between gesture signals and unintended hand or body motions. We used a clustering technique for the classification of the gestures. Moreover, the distance information is used as an additional input parameter to the clustering algorithm, so that the recognition technique is not vulnerable to distance change. The hand-based gesture recognition proposed in this paper would be a key technology of future automobile user interfaces.
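
    The classification stage described above, a feature vector per gesture plus the distance as an extra input, assigned by clustering, can be sketched as a nearest-centroid classifier with a rejection threshold for unintended motions. All feature values, gesture names and thresholds below are made up for illustration:

```python
import numpy as np

# Hypothetical cluster centroids learned from training gestures; the
# four components stand for (pdf variance of the magnitude histogram,
# TOA variation, reflected-signal frequency, measured distance).
centroids = {
    "swipe":  np.array([0.8, 2.1, 4.0, 0.5]),
    "push":   np.array([0.3, 3.5, 1.2, 0.5]),
    "circle": np.array([1.5, 1.0, 2.5, 0.5]),
}

def classify(features, reject_thresh=1.0):
    """Assign a feature vector to the nearest centroid, or reject it
    as an unintended motion if it is far from every centroid."""
    name, dist = min(((k, np.linalg.norm(features - c))
                      for k, c in centroids.items()),
                     key=lambda kv: kv[1])
    return name if dist < reject_thresh else None

print(classify(np.array([0.75, 2.0, 3.9, 0.55])))   # near the "swipe" centroid
print(classify(np.array([5.0, 9.0, 9.0, 2.0])))     # far from all -> rejected
```

    Including the measured distance as a feature dimension is one simple way to make the decision boundaries distance-aware, echoing the paper's use of distance as an extra clustering input.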

  17. Gauge-Invariant Formulation of Circular Dichroism.

    Science.gov (United States)

    Raimbault, Nathaniel; de Boeij, Paul L; Romaniello, Pina; Berger, J A

    2016-07-12

    Standard formulations of magnetic response properties, such as circular dichroism spectra, are plagued by gauge dependencies, which can lead to unphysical results. In this work, we present a general gauge-invariant and numerically efficient approach for the calculation of circular dichroism spectra from the current density. First we show that in this formulation the optical rotation tensor, the response function from which circular dichroism spectra can be obtained, is independent of the origin of the coordinate system. We then demonstrate that its trace is independent of the gauge origin of the vector potential. We also show how gauge invariance can be retained in practical calculations with finite basis sets. As an example, we explain how our method can be applied to time-dependent current-density-functional theory. Finally, we report gauge-invariant circular dichroism spectra obtained using the adiabatic local-density approximation. The circular dichroism spectra we thus obtain are in good agreement with experiment.

  18. Blurred image recognition by Legendre moment invariants.

    Science.gov (United States)

    Zhang, Hui; Shu, Huazhong; Han, Guoniu N; Coatrieux, Gouenou; Luo, Limin; Coatrieux, Jean Louis

    2010-03-01

Processing blurred images is a key problem in many image applications. Existing methods to obtain blur invariants that are invariant with respect to centrally symmetric blur are based on geometric moments or complex moments. In this paper, we propose a new method to construct a set of blur invariants using the orthogonal Legendre moments. Some important properties of Legendre moments for the blurred image are presented and proved. The performance of the proposed descriptors is evaluated with various point-spread functions and different levels of image noise. A comparison of the present approach with previous methods in terms of pattern recognition accuracy is also provided. The experimental results show that the proposed descriptors are more robust to noise and have better discriminative power than methods based on geometric or complex moments.
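A minimal discretization of the orthogonal Legendre moments underlying these invariants can look as follows. This is a generic sketch of the moment computation only, not of the paper's blur-invariant construction; the normalization follows the common (2p+1)(2q+1)/4 convention, which may differ from the authors'.

```python
import numpy as np
from numpy.polynomial.legendre import legval

def legendre_moment(img, p, q):
    """Orthogonal Legendre moment L_pq of a grayscale image, with pixel
    coordinates mapped onto [-1, 1] x [-1, 1] (direct Riemann-sum
    discretization of the defining double integral)."""
    n, m = img.shape
    y = np.linspace(-1, 1, n)
    x = np.linspace(-1, 1, m)
    cp = np.zeros(p + 1); cp[p] = 1.0        # coefficient vector selecting P_p
    cq = np.zeros(q + 1); cq[q] = 1.0        # coefficient vector selecting P_q
    Pp, Pq = legval(x, cp), legval(y, cq)    # P_p(x_j), P_q(y_i)
    norm = (2 * p + 1) * (2 * q + 1) / 4.0
    dx, dy = 2.0 / (m - 1), 2.0 / (n - 1)
    # sum_i sum_j P_q(y_i) * img[i, j] * P_p(x_j), scaled by the cell area
    return norm * (Pq @ img @ Pp) * dx * dy
```

As a sanity check of the normalization, L_00 of a constant unit image approaches 1 as the grid is refined, and odd moments of a symmetric image vanish.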

  19. Spontaneous breaking of continuous translational invariance

    Science.gov (United States)

    Watanabe, Haruki; Brauner, Tomáš

    2012-04-01

    Unbroken continuous translational invariance is often taken as a basic assumption in discussions of spontaneous symmetry breaking (SSB), which singles out SSB of translational invariance itself as an exceptional case. We present a framework that allows us to treat translational invariance on the same footing as other symmetries. It is shown that existing theorems on SSB can be straightforwardly extended to this general case. As a concrete application, we analyze the Nambu-Goldstone modes in a (ferromagnetic) supersolid. We prove on the ground of the general theorems that the Bogoliubov mode stemming from a spontaneously broken internal U(1) symmetry and the longitudinal phonon due to a crystalline order are distinct physical modes.

  20. Invariant death [version 1; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Steven A. Frank

    2016-08-01

    Full Text Available In nematodes, environmental or physiological perturbations alter death’s scaling of time. In human cancer, genetic perturbations alter death’s curvature of time. Those changes in scale and curvature follow the constraining contours of death’s invariant geometry. I show that the constraints arise from a fundamental extension to the theories of randomness, invariance and scale. A generalized Gompertz law follows. The constraints imposed by the invariant Gompertz geometry explain the tendency of perturbations to stretch or bend death’s scaling of time. Variability in death rate arises from a combination of constraining universal laws and particular biological processes.
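For context, the classical Gompertz law that the abstract generalizes is the standard textbook form below (this is background, not the paper's generalized version):

```latex
% Classical Gompertz law of mortality: the death rate (hazard) h(t) grows
% exponentially with age; \alpha sets the baseline rate, \beta the timescale.
h(t) = \alpha\, e^{\beta t},
\qquad
S(t) = \exp\!\left[-\frac{\alpha}{\beta}\left(e^{\beta t} - 1\right)\right]
```

Here S(t) is the survival function obtained by integrating the hazard; "scaling" and "curvature" of time in the abstract refer to deformations of this exponential dependence.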

  1. Differential invariants in nonclassical models of hydrodynamics

    Science.gov (United States)

    Bublik, Vasily V.

    2017-10-01

In this paper, differential invariants are used to construct solutions for equations of the dynamics of a viscous heat-conducting gas and the dynamics of a viscous incompressible fluid modified by nanopowder inoculators. To describe the dynamics of a viscous heat-conducting gas, we use the complete system of Navier-Stokes equations with allowance for heat fluxes. The mathematical description of the dynamics of liquid metals under high-energy external influences (laser radiation or plasma flow) includes, in addition to the Navier-Stokes system for an incompressible viscous fluid, heat fluxes and processes of nonequilibrium crystallization of a deformable fluid. Differentially invariant solutions are a generalization of partially invariant solutions, and their active study for various models of continuum mechanics is just beginning. Differentially invariant solutions can also be considered as solutions with differential constraints; therefore, in constructing them, the approaches and methods developed by the science schools of academicians N. N. Yanenko and A. F. Sidorov will be actively used. In the construction of partially invariant and differentially invariant solutions, overdetermined systems of differential equations arise that require a compatibility analysis. Algorithms for reducing such systems to involution in a finite number of steps are described by Cartan, Finikov, Kuranishi, and other authors. However, the hard-to-foresee volume of intermediate calculations complicates their practical application. Therefore, methods of computer algebra are actively used here, which largely helps in solving this difficult problem. It is proposed to use the constructed exact solutions as tests for formulas, algorithms and their software implementations when developing and creating numerical methods and computational program complexes. This combination of effective numerical methods, capable of solving a wide class of problems, with
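One standard form of the compressible Navier-Stokes system with heat conduction that the abstract refers to is sketched below (the record does not reproduce the paper's exact system, so this is the generic textbook formulation):

```latex
% Mass, momentum and energy balance for a viscous heat-conducting gas;
% \tau is the viscous stress tensor, \kappa the thermal conductivity.
\partial_t \rho + \nabla\!\cdot(\rho \mathbf{u}) = 0,
\qquad
\rho\bigl(\partial_t \mathbf{u} + (\mathbf{u}\!\cdot\!\nabla)\mathbf{u}\bigr)
  = -\nabla p + \nabla\!\cdot\boldsymbol{\tau},
\qquad
\rho c_v\bigl(\partial_t T + \mathbf{u}\!\cdot\!\nabla T\bigr)
  = -p\,\nabla\!\cdot\mathbf{u} + \nabla\!\cdot(\kappa\nabla T)
    + \boldsymbol{\tau}\!:\!\nabla\mathbf{u}
```

Symmetry analysis of this system yields the differential invariants from which the solutions discussed in the abstract are constructed.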

  2. Emotional Interaction with a Robot Using Facial Expressions, Face Pose and Hand Gestures

    Directory of Open Access Journals (Sweden)

    Myung-Ho Ju

    2012-09-01

Full Text Available Facial expression is one of the major cues for emotional communication between humans and robots. In this paper, we present emotional human-robot interaction techniques using facial expressions combined with other useful cues, such as face pose and hand gesture. For the efficient recognition of facial expressions, it is important to understand the positions of facial feature points. To do this, our technique estimates the 3D position of each feature point by constructing 3D face models fitted to the user. To construct the 3D face models, we first construct an Active Appearance Model (AAM) for variations of the facial expression. Next, we estimate depth information at each feature point from frontal- and side-view images. By combining the estimated depth information with the AAM, the 3D face model is fitted to the user according to the various 3D transformations of each feature point. Self-occlusions due to 3D pose variation are also processed by a region weighting function on the normalized face at each frame. The recognized facial expressions - such as happiness, sadness, fear and anger - are used to change the colours of foreground and background objects in the robot displays, as well as other robot responses. The proposed method displays desirable results in viewing comics with the entertainment robots in our experiments.

  3. The Uniqueness of -Matrix Graph Invariants

    Science.gov (United States)

    Dehmer, Matthias; Shi, Yongtang

    2014-01-01

In this paper, we examine the uniqueness (discrimination power) of a newly proposed graph invariant based on the matrix defined by Randić et al. In order to do so, we use exhaustively generated graphs instead of special graph classes such as trees only. Using these graph classes allows us to generalize the findings towards complex networks, as they usually do not possess any structural constraints. We find that the uniqueness of this newly proposed graph invariant is approximately as low as the uniqueness of the Balaban index on exhaustively generated (general) graphs. PMID:24392099

  4. Galilean invariant resummation schemes of cosmological perturbations

    Science.gov (United States)

    Peloso, Marco; Pietroni, Massimo

    2017-01-01

    Many of the methods proposed so far to go beyond Standard Perturbation Theory break invariance under time-dependent boosts (denoted here as extended Galilean Invariance, or GI). This gives rise to spurious large scale effects which spoil the small scale predictions of these approximation schemes. By using consistency relations we derive fully non-perturbative constraints that GI imposes on correlation functions. We then introduce a method to quantify the amount of GI breaking of a given scheme, and to correct it by properly tailored counterterms. Finally, we formulate resummation schemes which are manifestly GI, discuss their general features, and implement them in the so called Time-Flow, or TRG, equations.

  5. Application of invariant embedding to reactor physics

    CERN Document Server

    Shimizu, Akinao; Parsegian, V L

    1972-01-01

    Application of Invariant Embedding to Reactor Physics describes the application of the method of invariant embedding to radiation shielding and to criticality calculations of atomic reactors. The authors intend to show how this method has been applied to realistic problems, together with the results of applications which will be useful to shielding design. The book is organized into two parts. Part A deals with the reflection and transmission of gamma rays by slabs. The chapters in this section cover topics such as the reflection and transmission problem of gamma rays; formulation of the probl

  6. Symmetric form-invariant dual Pearcey beams.

    Science.gov (United States)

    Ren, Zhijun; Fan, Changjiang; Shi, Yile; Chen, Bo

    2016-08-01

We introduce a new type of Pearcey beam, namely, dual Pearcey (DP) beams, based on the Pearcey function of catastrophe theory. DP beams are experimentally generated by applying Fresnel diffraction to bright elliptic rings. Form-invariant Bessel-distribution beams can be regarded as a special case of DP beams. Subsequently, the basic propagation characteristics of DP beams are identified. DP beams are the result of the interference of two half DP beams instead of two classical Pearcey beams. Moreover, we also verify that half DP beams (including parabolic-like beams as a special case) generated by half elliptical rings (circular rings) are a new member of the family of form-invariant beams.
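The Pearcey function mentioned here is the canonical catastrophe-theory diffraction integral Pe(X, Y) = ∫ exp[i(s⁴ + X s² + Y s)] ds over the real line. A quick numerical sketch is below; it uses the standard π/8 contour rotation to make the integrand decay, and is a generic evaluation of the function, not the authors' beam-generation method.

```python
import numpy as np
from scipy.integrate import quad

def pearcey(X, Y):
    """Pearcey integral Pe(X, Y). Rotating the contour s = u * exp(i*pi/8)
    turns exp(i*s^4) into exp(-u^4), so the integrand decays rapidly and a
    standard quadrature converges for all real X, Y."""
    r = np.exp(1j * np.pi / 8)                # contour rotation factor
    def f(u):
        s = r * u
        return r * np.exp(1j * (s**4 + X * s**2 + Y * s))
    re, _ = quad(lambda u: f(u).real, -np.inf, np.inf)
    im, _ = quad(lambda u: f(u).imag, -np.inf, np.inf)
    return re + 1j * im
```

A convenient check: Pe(0, 0) reduces to 2 Γ(5/4) e^{iπ/8}, since the rotated integrand at the origin is simply exp(-u⁴).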

  7. Difference spaces and invariant linear forms

    CERN Document Server

    Nillsen, Rodney

    1994-01-01

    Difference spaces arise by taking sums of finite or fractional differences. Linear forms which vanish identically on such a space are invariant in a corresponding sense. The difference spaces of L2 (Rn) are Hilbert spaces whose functions are characterized by the behaviour of their Fourier transforms near, e.g., the origin. One aim is to establish connections between these spaces and differential operators, singular integral operators and wavelets. Another aim is to discuss aspects of these ideas which emphasise invariant linear forms on locally compact groups. The work primarily presents new results, but does so from a clear, accessible and unified viewpoint, which emphasises connections with related work.

  8. Conformal invariants topics in geometric function theory

    CERN Document Server

    Ahlfors, Lars V

    2010-01-01

    Most conformal invariants can be described in terms of extremal properties. Conformal invariants and extremal problems are therefore intimately linked and form together the central theme of this classic book which is primarily intended for students with approximately a year's background in complex variable theory. The book emphasizes the geometric approach as well as classical and semi-classical results which Lars Ahlfors felt every student of complex analysis should know before embarking on independent research. At the time of the book's original appearance, much of this material had never ap

  9. Invariant distances and metrics in complex analysis

    CERN Document Server

    Jarnicki, Marek

    2013-01-01

As there was and is continuous progress in the field of "Invariant Distances and Metrics in Complex Analysis", this is the second, extended edition of the corresponding monograph. This comprehensive book is about the study of invariant pseudodistances (non-negative functions on pairs of points) and pseudometrics (non-negative functions on the tangent bundle) in several complex variables. It is an overview of a highly active research area at the borderline between complex analysis, functional analysis and differential geometry. New chapters are covering the Wu, Bergman and several other met

  10. The decomposition of global conformal invariants

    CERN Document Server

    Alexakis, Spyros

    2012-01-01

    This book addresses a basic question in differential geometry that was first considered by physicists Stanley Deser and Adam Schwimmer in 1993 in their study of conformal anomalies. The question concerns conformally invariant functionals on the space of Riemannian metrics over a given manifold. These functionals act on a metric by first constructing a Riemannian scalar out of it, and then integrating this scalar over the manifold. Suppose this integral remains invariant under conformal re-scalings of the underlying metric. What information can one then deduce about the Riemannian scalar? Dese

  11. Dihadron fragmentation functions for large invariant mass.

    Science.gov (United States)

    Zhou, J; Metz, A

    2011-04-29

Using perturbative quantum chromodynamics, we compute dihadron fragmentation functions for a large invariant mass of the dihadron pair. The main focus is on the interference fragmentation function H_1^∢, which plays an important role in spin physics of the nucleon. Our calculation also reveals that H_1^∢ and the Collins fragmentation function have closely related underlying dynamics. By considering semi-inclusive deep-inelastic scattering, we further show that collinear factorization in terms of dihadron fragmentation functions and collinear factorization in terms of single-hadron fragmentation functions provide the same result in the region of intermediate invariant mass.

  12. Invariant measures on multimode quantum Gaussian states

    Energy Technology Data Exchange (ETDEWEB)

    Lupo, C. [School of Science and Technology, Universita di Camerino, I-62032 Camerino (Italy); Mancini, S. [School of Science and Technology, Universita di Camerino, I-62032 Camerino (Italy); Istituto Nazionale di Fisica Nucleare, Sezione di Perugia, I-06123 Perugia (Italy); De Pasquale, A. [NEST, Scuola Normale Superiore and Istituto Nanoscienze-CNR, I-56126 Pisa (Italy); Facchi, P. [Dipartimento di Matematica and MECENAS, Universita di Bari, I-70125 Bari (Italy); Istituto Nazionale di Fisica Nucleare, Sezione di Bari, I-70126 Bari (Italy); Florio, G. [Istituto Nazionale di Fisica Nucleare, Sezione di Bari, I-70126 Bari (Italy); Museo Storico della Fisica e Centro Studi e Ricerche Enrico Fermi, Piazza del Viminale 1, I-00184 Roma (Italy); Dipartimento di Fisica and MECENAS, Universita di Bari, I-70126 Bari (Italy); Pascazio, S. [Istituto Nazionale di Fisica Nucleare, Sezione di Bari, I-70126 Bari (Italy); Dipartimento di Fisica and MECENAS, Universita di Bari, I-70126 Bari (Italy)

    2012-12-15

We derive the invariant measure on the manifold of multimode quantum Gaussian states, induced by the Haar measure on the group of Gaussian unitary transformations. To this end, by introducing a bipartition of the system in two disjoint subsystems, we use a parameterization highlighting the role of nonlocal degrees of freedom, the symplectic eigenvalues, which characterize quantum entanglement across the given bipartition. A finite measure is then obtained by imposing a physically motivated energy constraint. By averaging over the local degrees of freedom we finally derive the invariant distribution of the symplectic eigenvalues in some cases of particular interest for applications in quantum optics and quantum information.

  13. L2 Vocabulary Teaching with Student- and Teacher-Generated Gestures: A Classroom Perspective

    Science.gov (United States)

    Clark, Jordan; Trofimovich, Pavel

    2016-01-01

    This action research project explored the use of gestures for teaching and learning French vocabulary in an upper-beginner adult classroom with 21 students from various language backgrounds. Over the course of 4 weeks, the teacher developed and used 4 sets of themed activities using both teacher- and student-generated gestures to introduce new…

  14. Monolingual and Bilingual Preschoolers' Use of Gestures to Interpret Ambiguous Pronouns

    Science.gov (United States)

    Yow, W. Quin

    2015-01-01

    Young children typically do not use order-of-mention to resolve ambiguous pronouns, but may do so if given additional cues, such as gestures. Additionally, this ability to utilize gestures may be enhanced in bilingual children, who may be more sensitive to such cues due to their unique language experience. We asked monolingual and bilingual…

  15. Using Robot Animation to Promote Gestural Skills in Children with Autism Spectrum Disorders

    Science.gov (United States)

    So, W.-C.; Wong, M. K.-Y.; Cabibihan, J.-J.; Lam, C. K.-Y.; Chan, R. Y.-Y.; Qian, H.-H.

    2016-01-01

    School-aged children with autism spectrum disorders (ASDs) have delayed gestural development, in comparison with age-matched typically developing children. In this study, an intervention program taught children with low-functioning ASD gestural comprehension and production using video modelling (VM) by a computer-generated robot animation. Six to…

  16. Interaction Between Words and Symbolic Gestures as Revealed By N400.

    Science.gov (United States)

    Fabbri-Destro, Maddalena; Avanzini, Pietro; De Stefani, Elisa; Innocenti, Alessandro; Campi, Cristina; Gentilucci, Maurizio

    2015-07-01

What happens if you see a person pronouncing the word "go" after having gestured "stop"? Unlike iconic gestures, which must be accompanied by verbal language in order to be unambiguously understood, symbolic gestures are so conventionalized that they can be effortlessly understood in the absence of speech. Previous studies proposed that gesture and speech belong to a unique communication system. From an electrophysiological perspective, the N400 modulation was considered the main variable indexing the interplay between two stimuli. However, while many studies tested this effect between iconic gestures and speech, little is known about the capability of an emblem to modulate the neural response to subsequently presented words. Using high-density EEG, the present study aimed at evaluating the presence of an N400 effect and its spatiotemporal dynamics, in terms of cortical activations, when emblems primed the observation of words. Participants were presented with symbolic gestures followed by a semantically congruent or incongruent verb. An N400 modulation was detected, showing larger negativity when gesture and word were incongruent. Source localization during the N400 time window revealed the activation of different portions of the temporal cortex according to gesture-word congruence. Our data provide further evidence of how the observation of an emblem influences verbal language perception, and of how this interplay is mainly instantiated by different portions of the temporal cortex.

  17. Nine-month-old infants are sensitive to the temporal alignment of prosodic and gesture prominences.

    Science.gov (United States)

    Esteve-Gibert, Núria; Prieto, Pilar; Pons, Ferran

    2015-02-01

This study investigated the sensitivity of 9-month-old infants to the alignment between prosodic and gesture prominences in pointing-speech combinations. Results revealed that the perception of prominence is multimodal and that infants are aware of the timing of gesture-speech combinations well before they can produce them.

  18. The effects of learning American Sign Language on co-speech gesture.

    Science.gov (United States)

    Casey, Shannon; Emmorey, Karen; Larrabee, Heather

    2012-10-01

    Given that the linguistic articulators for sign language are also used to produce co-speech gesture, we examined whether one year of academic instruction in American Sign Language (ASL) impacts the rate and nature of gestures produced when speaking English. A survey study revealed that 75% of ASL learners (N = 95), but only 14% of Romance language learners (N = 203), felt that they gestured more after one year of language instruction. A longitudinal study confirmed this perception. Twenty-one ASL learners and 20 Romance language learners (French, Italian, Spanish) were filmed re-telling a cartoon story before and after one academic year of language instruction. Only the ASL learners exhibited an increase in gesture rate, an increase in the production of iconic gestures, and an increase in the number of handshape types exploited in co-speech gesture. Five ASL students also produced at least one ASL sign when re-telling the cartoon. We suggest that learning ASL may (i) lower the neural threshold for co-speech gesture production, (ii) pose a unique challenge for language control, and (iii) have the potential to improve cognitive processes that are linked to gesture.

  19. Hand Leading and Hand Taking Gestures in Autism and Typically Developing Children

    Science.gov (United States)

    Gómez, Juan-Carlos

    2015-01-01

    Children with autism use hand taking and hand leading gestures to interact with others. This is traditionally considered to be an example of atypical behaviour illustrating the lack of intersubjective understanding in autism. However the assumption that these gestures are atypical is based upon scarce empirical evidence. In this paper I present…

  20. Conceptual and lexical effects on gestures : the case of vertical spatial metaphors for time in Chinese

    NARCIS (Netherlands)

    Gu, Yan; Mol, Elisabeth; Hoetjes, Marieke; Swerts, Marc

    2017-01-01

    The linguistic metaphors of time appear to influence how people gesture about time. This study finds that Chinese-English bilinguals produce more vertical gestures when talking about Chinese time references with vertical spatial metaphors than (1) when talking about time conceptions in the English